Monitoring PubMed retractions: updates

[Chart: PubMed cumulative retractions, 1977-present]

There’s been a recent flurry of interest in retractions. See, for example: Scientific Retractions: A Growth Industry?, also summarised by GenomeWeb in Take That Back; articles in the WSJ and on the Pharmalot blog; and academic articles in the Journal of Medical Ethics and Infection & Immunity.

Several of these sources cite data from my humble web application, PMRetract. So now seems like a good time to mention that:

  • The application is still going strong and is updated regularly
  • I’ve added a few enhancements to the UI; you can follow development at GitHub
  • I’ve also added a long-overdue about page with some extra information, including the fact that I wrote it :)

Now I just need to fix up my Git repositories. Currently there’s one that pushes to GitHub and a second, containing a copy of the Sinatra code, that pushes to Heroku, which isn’t the smartest setup.

6 thoughts on “Monitoring PubMed retractions: updates”

    • Well…two comments. First, you could certainly use my code to retrieve data about retracted articles and the journals in which they were published (there’s a rough sketch of the kind of query involved at the end of this comment).

      Second – I think there’s an error in the reasoning in your blog post. IF is calculated from citations on a per-journal basis: roughly, a journal’s IF for a given year is the number of citations that year to its articles from the previous two years, divided by the number of citable articles it published in those two years. So it’s a kind of per-journal average of citations received, and you would not expect the citations of individual researchers to correlate well with IF. Clearly citations per journal have to be a good predictor of IF, since they are used to calculate it.

      Similarly, retraction rates are on a per-journal basis. So you are not really comparing like with like in the figures that you present; the top one shows individuals, the bottom one shows journals.

      Now, if you’re asking whether an author can predict the likelihood of their work being cited versus retracted based on the journal – that’s a different and interesting question.
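
      For the record, PMRetract itself is written in Ruby (Sinatra), but if you just want to pull the raw numbers yourself, a minimal sketch using Biopython’s Entrez module would look something like the following. This is not the PMRetract code, and the email address and batch size are placeholders:

      ```python
      # A rough sketch (not the PMRetract code): count retracted articles in PubMed
      # and tally the journals they appeared in, using NCBI E-utilities via Biopython.
      # Entrez.email and retmax are placeholders; change them before running.
      from collections import Counter

      from Bio import Entrez

      Entrez.email = "you@example.com"  # NCBI asks for a contact address

      # Articles tagged with the "Retracted Publication" publication type
      search = Entrez.read(
          Entrez.esearch(
              db="pubmed",
              term='"Retracted Publication"[Publication Type]',
              retmax=100,  # first 100 only; page through with retstart for the full set
          )
      )
      print("Retracted articles indexed in PubMed:", search["Count"])

      # Fetch document summaries for this batch and count journal names
      summaries = Entrez.read(Entrez.esummary(db="pubmed", id=",".join(search["IdList"])))
      journals = Counter(s["FullJournalName"] for s in summaries)

      for journal, n in journals.most_common(10):
          print(n, journal)
      ```

      The “Retracted Publication”[Publication Type] filter is the standard PubMed flag for retracted articles; increase retmax (or page through with retstart) to collect the whole set, and swap esummary for efetch if you want full records rather than summaries.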

  1. Well, my idea is close to what you articulate in your last sentence. Despite everyone knowing that you can’t use IFs for individuals, it is nevertheless done, all the time. For instance, I receive research funds on the basis of the IF of the journals I publish in, silly as that is. Thus, IF is being used as a proxy for ‘quality’ for individual researchers. By comparing the correlations of citations and retractions with IF, I’m trying to add another nail in the coffin of the IF (as if we needed one! :-)): I want to know whether the IF is a better predictor of a paper in your journal of choice being retracted (journal choice being the often-used indicator of ‘quality’!) than of any of your papers being cited (an actual indicator of some measure of ‘quality’).

    I think this reasoning would make at least some people think (and it would make a nice slide in my IF presentations :-))

  2. I’m aware, of course, that one would need access to ResearcherID (for instance) to compute some sort of average correlation between actual citations and IF for the average researcher – this would be very hard to do with PubMed, which lacks unique author identifiers…

    • Yes, PubMed is quite an imperfect source of data in many ways. There are numerous articles, for example, lacking quite basic pieces of information such as year of publication. And that’s before you get to the unique author issue. Maybe the upcoming changes to Google Scholar will be of use here.
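
      As a rough way to check the missing-year point, here’s a sketch (Biopython again, with a placeholder email and sample size) that pulls a batch of MEDLINE records and looks for ones whose DP (Date of Publication) field lacks a four-digit year:

      ```python
      # Sketch: look for PubMed records without a usable publication year,
      # by inspecting the DP (Date of Publication) field of their MEDLINE records.
      # Entrez.email and the sample size are placeholders.
      import re

      from Bio import Entrez, Medline

      Entrez.email = "you@example.com"

      # Take a sample of retracted articles, as before
      ids = Entrez.read(
          Entrez.esearch(db="pubmed",
                         term='"Retracted Publication"[Publication Type]',
                         retmax=100)
      )["IdList"]

      # Fetch the records in MEDLINE format and parse them
      handle = Entrez.efetch(db="pubmed", id=",".join(ids), rettype="medline", retmode="text")
      records = list(Medline.parse(handle))

      no_year = [r.get("PMID") for r in records if not re.match(r"\d{4}", r.get("DP", ""))]
      print(len(no_year), "of", len(records), "records lack a four-digit year in DP")
      ```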

  3. Pingback: The Top Journals in Science (for retractions) : eloquentscience.com
