DokuWiki, PubMed and Ruby

I recently built a wiki for a research group using DokuWiki, one of my favourite wiki packages. As with many other wikis, developers have extended its functionality by writing plugins. Some of these are excellent, allowing users to generate lots of content with a minimum of syntax. For example, using the PubMed plugin, you type this:

  {{pubmed>long:<PMID>}}
and the plugin renders the full, formatted citation for that article.

This got me thinking. Assuming that you've searched PubMed and retrieved a bunch of references in XML format, how might you generate text in DokuWiki syntax, to paste into your wiki? Here's the small parser that I wrote in Ruby:

require 'rubygems'
require 'hpricot'

h = {}
d = Hpricot.XML(open('pubmed_result.xml'))

(d/:PubmedArticle).each do |a|
  (h["=== #{a.at('DateCreated/Year').inner_html} ==="] ||= []) << "{{pubmed>long:#{a.at('PMID').inner_html}}}"
end

puts h.sort {|a,b| b<=>a}

Nine lines – how cool is that? It uses Hpricot to parse the XML and builds a hash of arrays. The hash key is the year, formatted as a level 4 headline in DokuWiki; the hash value is an array of PMIDs, formatted with PubMed plugin syntax. At the end we just print it all out, sorted by year from newest to oldest.
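For anyone who wants to play with the idiom without a PubMed XML file to hand, here's a standalone sketch of the same hash-of-arrays trick, using made-up years and dummy PMIDs in place of the parsed XML:

```ruby
# Dummy (year, PMID) pairs standing in for the values parsed out of the XML
records = [['2008', '111111'], ['2007', '222222'], ['2008', '333333']]

h = {}
records.each do |year, pmid|
  # (h[key] ||= []) << value: create the array the first time a year is seen,
  # then append subsequent PMIDs to it
  (h["=== #{year} ==="] ||= []) << "{{pubmed>long:#{pmid}}}"
end

# Hash#sort yields [key, value] pairs; b <=> a gives newest year first
h.sort { |a, b| b <=> a }.each do |heading, pmids|
  puts heading
  puts pmids
end
```

The `||= []` pattern is the whole trick: it lets one line both initialise the year's array and append to it, which is why the real parser stays so short.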

As Pierre would say – that’s it.

On parsing

Parsing – the act of ripping through a file, pulling out the relevant parts and doing something useful with them – is an integral part of bioinformatics. It can be a dull procedure. It can also be challenging, requiring creativity and imagination. Frequently, as a bioinformatician, you will generate output from an unfamiliar program, or a colleague will bring you a file format that you haven't encountered. Your task is to figure out how the file is structured, which regular expressions are required to parse it, what kind of output to produce and, most importantly, how to handle those rogue files which don't obey the rules.
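To make that last point concrete, here's a tiny hypothetical example (the gene names and scores are invented): a parser for tab-delimited "identifier TAB score" lines that quietly skips any record which doesn't match the expected pattern, rather than crashing on it:

```ruby
# Made-up input: two well-formed records and two rogue ones
lines = ["BRCA1\t0.92", "TP53\t0.87", "malformed line", "EGFR\tnot_a_number"]

results = {}
lines.each do |line|
  # Expect exactly: identifier, a tab, then a numeric score – skip anything else
  next unless line =~ /\A(\S+)\t(\d+(?:\.\d+)?)\z/
  results[$1] = $2.to_f
end

puts results.inspect
```

Whether you skip, warn or die on a rogue line depends on the job, but the key habit is the same: anchor your regular expression to the whole line, so a file that breaks the rules can't silently feed you garbage.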

Here are my top ten (language-agnostic) parsing tips, focusing only on non-XML text files.
Read the rest…