Tag Archives: research

I give up

It’s what – 10 years or more? – since we began to wonder when web technologies such as RSS, wikis and social bookmarking sites would be widely adopted by most working scientists, to further their productivity.

The email I received today, which began “I’ve read 3 interesting papers” and included one .doc, three .docx and four .pdf files as attachments, is indicative of the answer to this question: “not any time soon.”

I’ve given up trying to educate colleagues in best practices. Clearly, I’m the one with the problem, since this is completely normal, acceptable behaviour for practically everyone that I’ve ever worked with. Instead, I’m just waiting for them to retire (or die). I reckon most senior scientists (and they’re the ones running the show) are currently aged 45-55. So it’s going to be 10-20 years before things improve.

Until then, I’ll just have to keep deleting your emails. Sorry.

Nature on reproducible research

I imagine that most people, when asked “do you think that independent confirmation of research findings is important?” would answer “yes”. I also imagine, when told that in most cases this is not possible, that those people might be concerned or perhaps incredulous. However, this really is the case, which is why I spend much of my working life in a state of concern and incredulity.

Over the years, many articles have been written on how to improve this state of affairs by adopting best practices, collectively termed reproducible research. One of the latest is an editorial in Nature. I’ve pulled out a few quotes for discussion.
Read the rest…

Conservative (with a small “c”) research

This is really interesting. I’m reading it at work so I can’t tell you if it’s behind the paywall, but I sincerely hope not; it deserves to be read widely:

Edwards, A.M. et al. (2011)
Too many roads not taken.
Nature 470: 163–165

Most protein research focuses on those known before the human genome was mapped. Work on the slew discovered since, urge Aled M. Edwards and his colleagues.

The article includes some nicely-done bibliometric analysis. I’ve lifted a few quotes that illustrate some of the key points.

  • More than 75% of protein research still focuses on the 10% of proteins that were known before the genome was mapped
  • Around 65% of the 20,000 kinase papers published in 2009 focused on the 50 proteins that were the ‘hottest’ in the early 1990s
  • Similarly, 75% of the research activity on nuclear hormone receptors in 2009 focused on the 6 (of 48) receptors that were most studied in the mid 1990s
  • A common assumption is that previous research efforts have preferentially identified the most important proteins – the evidence doesn’t support this
  • Why the reluctance to work on the unknown? [...] scientists are wont to “fondle their problems”
  • Funding and peer-review systems are risk-averse
  • The availability of chemical probes for a given receptor dictates the level of research interest in it; the development of these tools is not driven by the importance of the protein

I love the phrase “fondle their problems.”

I’ve long felt that academic research has increasingly little to do with “advancing knowledge” and is more concerned with churning out “more of the same” to consolidate individual careers. However, that’s just me being opinionated and anecdotal. What do you think?

Rewards, output and academia

Academia takes a very narrow view of what constitutes “output”. Rewards (such as funding or tenure) are given out on the basis of: (1) publications, preferably first-author, preferably in so-called high-impact journals; (2) citations, preferably in the same journals; and (3) previous rewards – “demonstrated ability in securing funding”. I always find that last catch-22 clause particularly amusing.

I started to think about this when I read What is principal component analysis? [DOI 10.1038/nbt0308-303], in the current issue of Nature Biotechnology (subscription only). Now, I’m not criticising the article or its publication: it’s well-written, educational and a good basic overview of PCA for biologists who have not previously encountered the method. However, my first reaction was to recall a number of excellent blog posts I’ve read recently on the same topic.
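For readers who haven’t met PCA before, here’s a minimal sketch in Python (my own illustration, not taken from the Nature Biotechnology article): centre the data, then use a singular value decomposition to find the directions of greatest variance.

```python
import numpy as np

def pca(X, n_components=2):
    """Toy PCA: project rows of X onto the top principal components.

    X is an (n_samples, n_features) array. Returns the projected scores,
    the component vectors and the fraction of variance each explains.
    Function and variable names are my own, for illustration only.
    """
    Xc = X - X.mean(axis=0)                       # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S**2 / (X.shape[0] - 1)                 # variance along each PC
    ratio = var / var.sum()
    return Xc @ Vt[:n_components].T, Vt[:n_components], ratio[:n_components]

# Toy data: points spread mainly along one direction, plus a little noise,
# so the first principal component should capture nearly all the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1)) @ np.array([[3.0, 1.0]]) \
    + rng.normal(scale=0.1, size=(100, 2))
scores, components, ratio = pca(X, n_components=2)
print(ratio[0])
```

Running this on the toy data, the first component accounts for almost all of the variance, which is exactly the dimensionality-reduction idea the article explains for biologists.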

The Nature Biotechnology article is recognised by academia and qualifies for academic rewards. The blog posts – which are longer, more detailed, written by enthusiastic communicators and in theory, accessible to a much wider audience (as opposed to people with a subscription to Nature Biotechnology) – are not.

It doesn’t seem right to me. How does your institution evaluate and reward “non-traditional” output?