XML parsing made easy: is that podcast getting longer?

Sometime in 2009, I began listening to a science podcast titled This Week in Virology, or TWiV for short. I thought it was pretty good and listened regularly up until sometime in 2016, when it seemed that most episodes were approaching two hours in duration. I listen to several podcasts while commuting to and from work, which takes up about 10 hours of my week, so it became hard to justify devoting two of those hours to a single podcast, no matter how good.

Were the episodes really getting longer over time? Let’s find out using R.
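For a taste of the approach: podcast feeds are just RSS (XML), so we can read the feed with the xml2 package and pull out the date and duration of each episode. A minimal sketch, assuming a placeholder feed URL (not the real TWiV address):

library(xml2)

# placeholder URL; substitute the real TWiV RSS feed
feed  <- read_xml("http://example.com/twiv/feed.xml")
items <- xml_find_all(feed, "//item")

# local-name() sidesteps the itunes: namespace prefix on duration
episodes <- data.frame(
  pubdate  = xml_text(xml_find_first(items, "pubDate")),
  duration = xml_text(xml_find_first(items, "*[local-name() = 'duration']")),
  stringsAsFactors = FALSE
)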

Taking steps (in XML)

So the votes are in:

I thank you, kind readers. So here’s the plan: (1) keep blogging here as frequently as possible (perhaps monthly), (2) on more general “how to do cool stuff with data and R” topics, (3) which may still include biology from time to time. Sounds OK? Good.

So: let’s use R to analyse data from the iOS Health app.
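By way of preview: the Health app can export all of its data as a zip archive containing a file named export.xml, in which each measurement is a Record element with its value and dates stored as attributes. A minimal sketch for step counts, assuming the export has been unzipped into the working directory:

library(xml2)

health <- read_xml("export.xml")

# each measurement is a Record node; keep only the step counts
steps <- xml_find_all(health, "//Record[@type='HKQuantityTypeIdentifierStepCount']")

step_data <- data.frame(
  start = xml_attr(steps, "startDate"),
  value = as.numeric(xml_attr(steps, "value")),
  stringsAsFactors = FALSE
)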


PubMed Publication Date: what is it, exactly?

File this one under “has troubled me (and others) for some years now, let’s try to resolve it.”

Let’s use the excellent R/rentrez package to search PubMed for articles that were retracted in 2013.

library(rentrez)

# use_history = TRUE stores the result set on the NCBI history server,
# so we can fetch the full records later without passing a list of IDs
es <- entrez_search(db = "pubmed",
                    term = "\"Retracted Publication\"[PTYP] 2013[PDAT]",
                    use_history = TRUE)
es$count
# [1] 117

117 articles. Now let’s fetch the records in XML format.

xml <- entrez_fetch(db = "pubmed", web_history = es$web_history,
                    rettype = "xml", retmax = es$count)

Next question: which XML element specifies the “Date of publication” (PDAT)?
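One way to start digging, sketched here with the xml2 package: parse the fetched records and look at the date-like elements in the first article, since more than one of them (PubDate, ArticleDate, DateCreated) is a plausible candidate:

library(xml2)

doc <- read_xml(xml)

# candidate date elements in the first record
xml_find_all(doc, "//PubmedArticle[1]//PubDate")
xml_find_all(doc, "//PubmedArticle[1]//ArticleDate")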

Good software, data and your brain

I recently asked the FriendFeed community about wiki usage and was struck by a comment from Allyson:

I think we’re on our third incarnation of various bits of wiki software, and we’ve finally hit on the right software for both our wet lab and bioinformaticians

By “the right software”, she means software that makes sense to the people who use it. When faced with several software alternatives, we often find there is one which, for some reason, “makes sense” – it meshes naturally with the way our brains work. When you find a program that you like, it’s not only a joy to use but can also give you an understanding of data and processes that previously eluded you. In other words, good software doesn’t shield you from the fundamentals – it illuminates them.

Here are three examples of software that made me say: “Oh right! Now I get it.” These are not recommendations, and the opinions expressed here are highly subjective: the point is, I like them because they work for me.

DokuWiki, PubMed and Ruby

I recently built a wiki for a research group using DokuWiki, one of my favourite wiki packages. As with many other wikis, developers have extended its functionality by writing plugins. Some of these are excellent, allowing users to generate lots of content with a minimum of syntax. For example, using the PubMed plugin, you type this:

{{pubmed>long:15595725}}

and the result is a formatted citation for that article, rendered in the wiki page.

Which got me thinking. Assuming that you’ve searched PubMed and retrieved a bunch of references in XML format, how might you generate text in DokuWiki syntax to paste into your wiki? Here’s the small parser that I wrote in Ruby:

#!/usr/bin/ruby
require 'rubygems'
require 'hpricot'

h = {}
d = Hpricot.XML(open('pubmed_result.xml'))

(d/:PubmedArticle).each do |a|
  (h["=== #{a.at('DateCreated/Year').inner_html} ==="] ||= []) << "{{pubmed>long:#{a.at('PMID').inner_html}}}"
end

puts h.sort {|a,b| b<=>a}

Nine lines – how cool is that? It uses Hpricot to parse the XML and creates a hash of arrays: the hash key is the year, formatted as a DokuWiki level 4 headline; the hash value is an array of PMIDs, each formatted with the PubMed plugin syntax. At the end, we print it all out, sorted by year from newest to oldest.
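For comparison, here’s a rough equivalent in R using the xml2 package – a sketch, assuming the same pubmed_result.xml file and the same DateCreated element that the Ruby script relies on:

library(xml2)

doc      <- read_xml("pubmed_result.xml")
articles <- xml_find_all(doc, "//PubmedArticle")
years    <- xml_text(xml_find_first(articles, ".//DateCreated/Year"))
pmids    <- xml_text(xml_find_first(articles, ".//PMID"))

# one level 4 headline per year, newest first, then the plugin links
for(y in sort(unique(years), decreasing = TRUE)) {
  cat("=== ", y, " ===\n", sep = "")
  cat(sprintf("{{pubmed>long:%s}}", pmids[years == y]), sep = "\n")
}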

As Pierre would say – that’s it.