Note to journals: “methodologically sound” applies to figures too

PeerJ, like PLoS ONE, aims to publish work on the basis of “soundness” (scientific and methodological) as opposed to subjective notions of impact, interest or significance. I’d argue that effective, appropriate data visualisation is a good measure of methodology. I’d also argue that on that basis, Evolution of a research field – a micro (RNA) example fails the soundness test.

Figure 1: miRNA publications per year

Let’s start with Figure 1. The divisions on the x-axis are equally spaced, but the years they represent are not – 1993, 1996, 1997, for example. Even worse is the attempt to illustrate the rapid increase after 2004 using broken bars and a second y-axis. It’s confusing and messy.
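
For the record, the fix is simple with modern tools. Here’s a minimal sketch in R with ggplot2 (the kind of tools mentioned in the comments below), using invented counts rather than the paper’s data: put the years on a continuous axis so the spacing is honest, and use a single log-scaled y-axis instead of broken bars and a second axis.

```r
# Minimal sketch only: these counts are invented, not the paper's data.
library(ggplot2)

pubs <- data.frame(
  year = c(1993, 1996, 1997, 2000, 2002, 2004, 2006, 2008, 2010),
  n    = c(1, 2, 3, 10, 40, 200, 900, 2500, 5000)
)

# A continuous x-axis spaces unequal years correctly; a log-scaled
# y-axis shows the post-2004 explosion without broken bars.
ggplot(pubs, aes(x = year, y = n)) +
  geom_line() +
  geom_point() +
  scale_y_log10() +
  labs(x = "Year", y = "miRNA publications (log scale)")
```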

Figure 2: language of publication

Some of these crimes are repeated in Figure 2, which also introduces an ugly shading scheme to distinguish languages. When you look at it, do you think “black…aha, black = English”? No, you do not. There’s no need for different shading or colour here (it isn’t even visible for two of the bars); the bars are already distinguished by their x-axis labels.
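
Again, the fix is trivial: one fill colour, and let the axis labels do the work. Another hedged sketch with invented counts:

```r
# Invented counts for illustration only.
library(ggplot2)

langs <- data.frame(
  language = c("English", "Chinese", "German", "French", "Spanish"),
  n        = c(9500, 300, 60, 25, 15)
)

# One fill colour; the x-axis labels distinguish the categories.
ggplot(langs, aes(x = reorder(language, -n), y = n)) +
  geom_col() +
  labs(x = "Language of publication", y = "Publications")
```

If one category dwarfs the rest, as English presumably does here, a log scale or a plain table may serve better than bars you can barely see.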

Figure 3 repeats the shading crime, and Table 1 is somewhat superfluous, as it contains much of the same data. Several more tables follow, containing data that might be better presented as charts.


Figure 4: all the previous horrors

Figure 4 combines all the previous horrors into three panels. We could go on, but let’s not. You can see the rest for yourself; the paper is open access.

Publication on the basis of “soundness” need not mean sacrificing quality. Ideally, someone at some stage in the process – a mentor before submission, a reviewer, an editor – should notice when figures are not produced to an appropriate standard and suggest improvements. I see a lot of failures like this one in the literature, and the causes run right through the scientific career timeline. It starts with poor student training and ends with reviewers and editors who don’t know how to assess the quality of data analysis/visualisation.

It’s easy to blame “peer review lite”, but there are deeper, systemic issues of grave concern here.

3 thoughts on “Note to journals: “methodologically sound” applies to figures too”

  1. fridaymeetssunday

    You make a very good point: those plots are indeed horrible!

    I don’t think you were having a dig at PLoS ONE, but I want to stress that “peer review lite” is not, IMO, generally applicable to that journal. I submitted a paper to three different journals before it was accepted, and whilst PLoS ONE rejected it (the second submission), they provided the most thorough (and helpful) review of the lot, leading to many improvements to the work. Others have reported similar experiences. This is of course anecdotal, and one would probably see very different approaches to review at PLoS ONE depending, for instance, on the field of research.

    And this leads nicely to where I wholeheartedly agree with you: “It starts with poor student training and ends with reviewers and editors who don’t know how to assess the quality of data analysis/visualisation”. Having been part of different fields and having interacted with people in a few more (from pharmacology to biophysics), I have seen very, very different standards and tools used for data visualisation. Looking back, I am ashamed of some of the plots I produced as a student, but those were the ones that everybody knew and that were taught to me. Plato’s Cave comes to mind.

    A solution is to “educate” researchers on this. And whilst in the field of genomics this is relatively easy – the need to create new visualisation methods means that the field is populated by people sensitive to good visualisation practices and people who know the right tools for the job (e.g. R, ggplot2) – it might not be so in other fields, where Excel* and GraphPad Prism* default plots are the norm.

    I’m rambling.

    *both useful when used properly.

  2. aeon

    @fridaymeetssunday, I think your point is taken – PLoS ONE is just the biggest heap, so you are bound to find rotten straw if you search in it. There are plenty of other heaps that look nice and dry from the outside but are rotten within, and I wouldn’t want to sleep on them.

    The point is: the chain is only as good as its weakest link. That can be a single reviewer, but I have also seen EiCs snap – sometimes pushing papers through, sometimes rejecting them after a thorough review, based on their own concerns.

    I think that’s enough metaphors for a Thursday.
    Time for Friday already, isn’t it?
