Measuring quality is hard

Four articles.

Exhibit A: the infamous “(insert statistical method here)”. Exhibit B: “just make up an elemental analysis”. Exhibit C: a methods paper in which a significant proportion of the text was copied verbatim from a previous article. Finally, Exhibit D, which shall be forever known as the “crappy Gabor” paper.

Exhibit A

Exhibit B

Exhibit C

Exhibit D

Notice anything?
I think that altmetrics are a great initiative. So long as we’re clear that what’s being measured is attention, not quality.

5 thoughts on “Measuring quality is hard”

  1. The same thing happens with actual citations as well. The infamous 1989 “Cold Fusion” paper of Fleischmann and Pons has been cited over 1200 times according to Google Scholar; the 2011 Wolfe-Simon “Arsenic Life” paper, over 300. And yet our definition of a “good journal” is one with a high impact factor (average number of citations per paper). So a journal that *just* publishes flashy but flawed research would have a *great* impact factor. Maybe that’s why the Glam Mags are having so much trouble lately with faked stem cell research and the like: they *want* the controversy to beef up their IF.

    • Brain dump… I suppose it might be argued that (1) these controversial examples are outliers; (2) the vast majority of articles get far fewer citations/views/downloads/whatever; (3) so perhaps attention/citation is a better proxy for “quality” in the “non-controversial case”… but how, or whether, all of that could be disentangled into meaningful measures is not apparent to me right now.

  2. I doubt there is a solution. Too much of research and academia has become a game, and success involves various degrees of gaming the system. Now and then the players make mistakes, and the most egregious “plays” are revealed. The system currently penalises integrity, care and attention to detail in favour of flair and marketing.
