Four articles. Click on the images for larger versions.
Exhibit A: the infamous “(insert statistical method here)”. Exhibit B: “just make up an elemental analysis”. Exhibit C: a methods paper in which a significant proportion of the text was copied verbatim from a previous article. Finally, exhibit D, which shall be forever known as the “crappy Gabor” paper.
I think that altmetrics are a great initiative. So long as we’re clear that what’s being measured is attention, not quality.
Before we start: yes, we’ve been here before. There was the Biostars question “Calculating Time From Submission To Publication / Degree Of Burden In Submitting A Paper.” That gave rise to Pierre’s excellent blog post and code + data on Figshare.
So why are we here again?

1. It’s been a couple of years.
2. This is the R (+ Ruby) version.
3. It’s always worth highlighting how the poor state of publicly-available data prevents us from doing what we’d like to do. In this case, the interesting question “which bioinformatics journal should I submit to for rapid publication?” becomes “here’s an incomplete analysis using questionable data regarding publication dates.”
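The calculation at the heart of this kind of analysis is simple enough: for each article, subtract the “received” date from the “accepted” date. Here’s a minimal Ruby sketch; the records and field names are hypothetical stand-ins for what you might extract from PubMed history dates (and, as noted above, many real records simply lack these dates):

```ruby
require "date"

# Hypothetical records; real values would come from the history
# dates in PubMed records, where present
articles = [
  { received: "2013-01-15", accepted: "2013-06-02" },
  { received: "2013-03-01", accepted: "2013-03-25" },
]

# Days from submission (received) to acceptance for each article
delays = articles.map do |a|
  (Date.parse(a[:accepted]) - Date.parse(a[:received])).to_i
end

# Crude median (upper median for even-length lists)
median = delays.sort[delays.size / 2]
puts "delays (days): #{delays.inspect}; median: #{median}"
```

From there it’s a short step to grouping delays by journal, which is where the questionable data really starts to bite.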
Let’s get it out of the way then.
A DOI, this morning
When I arrive at work, the first task for the day is “check feeds”. If I’m lucky, in the “journal TOCs” category, there will be an abstract that looks interesting, like this one on the left (click for larger version).
Sometimes, the title is a direct link to the article at the journal website. Often though, the link is a Digital Object Identifier or DOI. Frequently, when the article is labelled as “advance access” or “early”, clicking on the DOI link leads to a page like the one below on the right.
In the grand scheme of things I suppose this rates as “minor annoyance”; it means that I have to visit the journal website and search for the article in question. The question is: why does this happen? I’m not familiar with the practical details of setting up a DOI, but I assume that the journal submits article URLs to the DOI system for processing. So who do I blame – journals, for making URLs public before the DOI is ready, or the DOI system, for not processing new URLs quickly enough?
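The mechanics, as far as I can tell, are straightforward: a DOI link is just the identifier prefixed with the doi.org handle resolver, which redirects to whatever URL the publisher registered. A “DOI Not Found” page means that registration hasn’t happened, or propagated, yet. A small sketch; the example DOI is made up for illustration:

```ruby
# A DOI resolves via the doi.org handle resolver, which redirects
# to the URL the publisher registered for that handle
RESOLVER = "https://doi.org/"

# Loose sanity check for DOI syntax: "10.<registrant>/<suffix>"
DOI_PATTERN = %r{\A10\.\d{4,9}/\S+\z}

def doi_url(doi)
  raise ArgumentError, "not a DOI: #{doi}" unless doi =~ DOI_PATTERN
  RESOLVER + doi
end

# "10.1234/example" is a made-up DOI, for illustration only
puts doi_url("10.1234/example")
```

So a well-formed DOI that still fails to resolve points at a gap between the journal publishing the URL and the handle being registered with the resolver.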
There’s also the issue of whether terms like “advance access” have any meaning in the era of instant, online publishing but that’s for another day.
May as well begin 2014 where we left off: complaining about the attitude of scientific publishers regarding reproducible computational research.
This bioinformatician, at least. Hate is a strong word. Perhaps “dislike” is better.
Short answer: because you can’t get data out of them easily, if at all. Longer answer:
Floating by in the Twitter stream, this from @leonidkruglyak. It leads to a light-hearted opinion(ated) piece by Sydney Brenner in Current Biology, 1996.
In 1996, you may recall, the Web was just a few years old. Amusingly (sadly?), it seems that Brenner predicted many of the topics in science publishing that we’re still discussing in 2013. It’s just that he thought they would be implemented in no time at all.
For example, open refereeing:
It is incidents such as this that have led me to question whether the anonymity of referees needs to be guarded so closely
Self-publishing/archiving and post-publication peer review:
The electronic pre-print with open discussion (not refereeing) will soon become commonplace; in fact, labs could go into the publication business by themselves
Demise of the journal impact factor, publishing economics and altmetrics:
We will need something to substitute for the present ratings given to papers appearing in ‘superior, peer-reviewed publications’ (and commercial publishers will find ways of making people pay for this)
Perhaps we should have a readership index; it should not be beyond the wit of man to devise a way of recording whenever a paper is read, hard-copied or cited
As Ethan said:
We can debate the economics, complexities, details, implementation… of open access publishing for as long as we like. However, the basic principle, that publicly-funded research should be publicly accessible, seems to me at least very obviously correct and “the right thing to do”.
So this, from April 2012, was very depressing.
Open access not as simple as it sounds: outgoing ARC boss
For those outside Australia, the ARC is the Australian Research Council. Much debate ensued in which one contributor to the comment thread wrote:
…it is particularly galling that Sheil is projecting her own simplistic understanding of open access onto its advocates. Hopefully she will be replaced at the Australian Research Council by someone who understands and supports open access.
The ARC has introduced a new open access policy for ARC funded research which takes effect from 1 January 2013. According to this new policy the ARC requires that any publications arising from an ARC supported research project must be deposited into an open access institutional repository within a twelve (12) month period from the date of publication.
I did giggle at the assumption that the author’s version of their article is by default a Word document, but then I guess that’s true for > 90% of authors.
Outcomes like this come dangerously close to restoring hope.
A couple of years ago, I noted that some journals were not making the process of commenting on articles especially easy. My latest experience suggests that little has changed.
Academic journals. Frankly, I’m not a big fan of any of them. There are too many. They cost too much. Much of what they publish is inconsequential, read by practically no-one or just downright incorrect. Much of the rest is badly-written and boring. The people who publish them have an over-inflated sense of their own importance. They’re hidden behind paywalls. And governed by ludicrous metrics. The system by which articles are accepted or rejected is arcane and ridiculous. I mean, I could go on…
No, what really troubles me about journals is that they only tell a very small part of the story – the flashy, attention-grabbing part called “results”. We learn from high school onwards that a methods section should be sufficient for anyone to reproduce the results. This is one of the great lies of science. Go read any journal in your field and give it a try. It’s even the case in computation, an area you might think less prone to the reproducibility problems of wet-lab science (“the Milli-Q must have been off”).
We have this wonderful thing called the Web now. The Web doesn’t have a page limit, so you can describe things in as much detail as you wish. Better still, you can just post your methods and data there in full, for all to see, download and reproduce to their hearts’ content. You’d like some credit for doing that though, right?
So if you do research – any kind of research – that involves computation, your code is open-source, reusable, well-documented and robust (think: tests) and you want to share it with the world, head over to a new journal called BMC Open Research Computation, which is now open for submissions. Your friendly team of enlightened editors awaits.
More information at Science in the Open and Saaien Tist. Full disclosure: I’m on the editorial board of this journal and was invited to write a launch post.