The latest issue of the journal Structure looks at structural genomics.
While the number of structures solved, and their impact, has been substantial, the cost of the PSI-2 initiative is large. In the US alone, the NIH spends approximately $65 million each year on this effort. As a result, legitimate questions arise as to whether the money spent on the PSI is well spent, especially at a time when funding for independent investigator-driven research appears scarce. To facilitate this debate, we will publish commentaries from both supporters and opponents of the structural genomics effort in the next few issues of Structure; we invite any additional comments from readers to be e-mailed to the Editors (firstname.lastname@example.org). We believe this debate is especially timely because of the ongoing need to shape PSI-3, which may or may not begin in 2010.
The stated aim of the Protein Structure Initiative is “to make the three-dimensional atomic-level structures of most proteins easily obtainable from knowledge of their corresponding DNA sequences”. Not everyone thinks that this is worthwhile:
Moore, P.B. (2007)
Let’s Call the Whole Thing Off: Some Thoughts on the Protein Structure Initiative.
Structure 15(11): 1350-1352
Full text (subscribers)
Incidentally, Structure is not a journal riding the Web 2.0 wave – the free summary page is worthless, and the article contains incorrect URLs and DOIs that don’t resolve. Well, it’s Elsevier; what can you expect? The Opinion section of the same issue features more articles from both proponents and critics of SG programs.
Personally, I’m a fan of SG initiatives. I submit that their purpose is to provide data first, biological insight second. It always amuses me that critics of SG just can’t resist slipping sentences of the form “when I was an undergraduate in the 60s” into their articles. That’s hardly going to endear you to the current generation of omics-aware, web-savvy Googling biologists now, is it? And as for the Gershwin tune from which the article takes its title (just to reinforce that generation gap) – don’t they call off the calling-off towards the end?
I’d also take issue with some of the criticisms of computational structural biology:
How do you estimate the accuracy of a protein structure arrived at by computation? How do you validate a computed model for a protein’s structure short of determining its structure experimentally, and if you are prepared to do that, why bother computing its structure?
In a word – statistics. If your modelling method returns a good structure for 900 of 1000 benchmark sequences of known structure, you can be roughly 90% confident in the structure it predicts for an unknown sequence.
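To make that benchmarking argument concrete, here is a minimal sketch of how you might put error bars on such a success rate. The 900/1000 figure comes from the hypothetical above; the choice of a Wilson score interval (rather than the cruder normal approximation) is my own, and the function name is illustrative, not from any particular library.

```python
import math


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial success rate.

    z = 1.96 gives an approximate 95% interval.
    """
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half


# Hypothetical benchmark from the text: a good structure for 900 of 1000
# sequences of known structure.
lo, hi = wilson_interval(900, 1000)
print(f"success rate 0.900, 95% CI ({lo:.3f}, {hi:.3f})")
```

For 900/1000 the interval is fairly tight, around 0.88–0.92, which is the sense in which a blind benchmark lets you quote a confidence for a prediction on an unknown sequence.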
In my book, only models that are useful and reliable deserve to be called “structures,” and I have yet to be shown a nontrivial protein model computed from sequence alone that qualifies.
I suggest a good starting point is:
Baker, D. et al. (2007)
High-resolution structure prediction and the crystallographic phase problem.
Nature 450(7167): 259-264
Finally, we show that all-atom refinement can produce de novo protein structure predictions that reach the high accuracy required for molecular replacement without any experimental phase information and in the absence of templates suitable for molecular replacement from the Protein Data Bank. These results suggest that the combination of high-resolution structure prediction with state-of-the-art phasing tools may be unexpectedly powerful in phasing crystallographic data for which molecular replacement is hindered by the absence of sufficiently accurate previous models.
Nobody is suggesting that computational structural modelling is ready to supplant experimental structure determination. They are complementary, not opposing approaches – as with every field where experimental biology and computation intersect. I wonder though, how advanced will structure prediction be by the time I’m saying “when I was an undergraduate in the 80s”?