PeerJ, like PLoS ONE, aims to publish work on the basis of “soundness” (scientific and methodological) rather than subjective notions of impact, interest or significance. I’d argue that effective, appropriate data visualisation is a good measure of methodological soundness. I’d also argue that, on that basis, “Evolution of a research field – a micro (RNA) example” fails the soundness test.
Figure 4 combines all of the previous horrors into three panels. We could go on, but let’s not; you can see the rest for yourself, since the article is open access.
Publication on the basis of “soundness” need not mean sacrificing quality. Ideally, someone at some stage in the process – a mentor before submission, a reviewer, an editor – should notice when figures are not produced to an appropriate standard and suggest improvements. I see a lot of failures like this one in the literature, and the causes run right through the scientific career: they start with poor student training and end with reviewers and editors who don’t know how to assess the quality of data analysis and visualisation.
It’s easy to blame “peer review lite”, but there are deeper, systemic issues of grave concern here.