APIs have let me down part 2/2: FriendFeed

In part 1, I described some frustrations arising from a work project that uses the ArrayExpress API. I find that one way to deal mentally with these situations is to spend some time on a fun project, using similar programming techniques. A potential downside of this approach is that if your fun project also goes bad, you’re doubly frustrated. That’s when it’s time to abandon the digital world, go outside and enjoy nature.

Here, then, is why I decided to build another small project around FriendFeed, how its failure has led me to question the value of FriendFeed for the first time, and why my time as a FriendFeed user might be up.

At the beginning of each year, I experience a period of angst about my social networking and other web activity. Specifically, I ask: what am I getting out of this? Aside from enjoyable conversations with smart people, are there any practical or career benefits?

I had an idea. FriendFeed aggregates most of my online activity. However, the items themselves are not important: they mostly exist elsewhere and would still exist were FriendFeed to disappear. What’s interesting are the responses to those items: the likes and comments, both from others on my items and from me on the items of others. In fact, you might argue that:

On FriendFeed, the only important items are those which provoke discussion

Brilliant. All I need to do is:

  1. Retrieve the items that I liked/commented on
  2. Retrieve my items that others liked/commented on
  3. Do some statistical analysis
  4. Decide which themes are important and whether any of them led to practical outcomes, outside of FriendFeed

As ever, I employed a variation of this script to fetch a feed as JSON from the FriendFeed API and store it in a MongoDB database. Except that when I used this URL to fetch the 6 634 items that I’ve liked:

http://friendfeed-api.com/v2/feed/neilfws/likes

I saw this in the log:

Processed entries 900 - 999
neilfws/likes contains 999 documents.
Processed entries 1000 - 1099
neilfws/likes contains 1099 documents.
Processed entries 1100 - 1199
neilfws/likes contains 1099 documents.
Processed entries 1200 - 1299
neilfws/likes contains 1099 documents.
Processed entries 1300 - 1399
neilfws/likes contains 1099 documents.

That’s right: the API stopped returning new items after 1099. The same problem occurs with my comments (neilfws/comments) and with entries that my friends liked (neilfws/friendlikes). The problem is not seen with my own feed (neilfws), but (a) I’d then have to filter for entries with comments/likes and (b) the API seems to struggle when the number of items exceeds 10 000, and I’m quite sure that my feed contains at least that many items.
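For the curious, the fetch-and-store loop is essentially the following. This is a simplified sketch, not the actual script; it assumes the open-uri, json and mongo gems, and the start/num paging parameters discussed in the comments below:

require "open-uri"
require "json"
require "mongo"

feed = "neilfws/likes"
col  = Mongo::Connection.new.db("friendfeed").collection("likes")
num  = 100

(0..6634).step(num) do |start|
  url     = "http://friendfeed-api.com/v2/feed/#{feed}?start=#{start}&num=#{num}"
  entries = JSON.parse(open(url).read)["entries"] || []
  entries.each do |entry|
    entry["_id"] = entry["id"]  # use the FriendFeed entry id as the Mongo _id
    col.save(entry)             # save() upserts, so a repeated page adds nothing
  end
  puts "Processed entries #{start} - #{start + num - 1}"
  puts "#{feed} contains #{col.count} documents."
end

Because each entry is upserted on its FriendFeed id, a repeated page adds no new documents, which is exactly why the count in the log stalls at 1099 while the loop keeps “processing” entries.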

Since FriendFeed is no longer supported and has essentially been left to die, the usual course of action – contact support – does not apply. What this means, to state it bluntly, is that:

FriendFeed, as a permanent and complete archive of my online activity, is worthless

Which leads me to ask:

Is it sensible to keep generating information in FriendFeed, when that information cannot come out again?

Frankly, my current feeling is that the answer is: no, it is not sensible. Now I need a few days to consider whether to ignore that feeling, or begin the “phased withdrawal”.

Thoughts on “APIs have let me down part 2/2: FriendFeed”

  1. Greg Tyrelle

    “FriendFeed, as a permanent and complete archive of my online activity, is worthless”: FriendFeed, maybe, but some kind of permanent online archive (with an API) is needed. The limitations of Twitter and other social/sharing sites are starting to annoy me. For example, why doesn’t Twitter have groups that I can selectively post to (no, not lists)? Search is terrible and there is no integration with other data sources. There are no technical limitations here.

    My suggestion: why not just recreate FriendFeed for your personal archive? Aggregating content, posting to Twitter and so on shouldn’t be that difficult to implement on an individual basis (i.e. no scaling) in Ruby (libraries are available). One of the issues that developers face with sites like FriendFeed is making them scale for all users, but if it is just for yourself, that doesn’t matter (the implementation can be sloppy). I’m very keen on the idea of personal data stores connected in a decentralized social network: http://onesocialweb.org/.

  2. Chris Lasher

    Have you tried fiddling with the “start” and “num” query parameters? You may be able to fetch all your data through multiple iterations. Years ago when I worked with the v1 API, the default number of items returned was 30 unless the request specified to return more. The Facebook API behaves similarly (no surprise, since the FF buyout probably led to them working on the FB API).

  3. Chris Lasher

    I just peeked at your code and saw you did indeed try using those parameters. My other suggestion is to make sure you’re not running up against the rate limits. I didn’t spot a sleep cycle in your code (although I don’t speak Ruby). The API should send a JSON “limit-exceeded” error if you do exceed them.

      1. nsaunders Post author

        I think there have always been limits to the maximum number of items that can be returned. Currently, items seem to max out at either 11 000 (for /feed/userid) or 1 100 (for /feed/userid/*). After that, the API just keeps returning the last set of 100 items (or whatever num is set to).

        Which is fine for feeds with fewer than that number of items, but no use for larger feeds (e.g. mine, or The Life Scientists).
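
        If you just want whatever the API will surrender, one workaround is to stop as soon as a page repeats. A rough sketch (hypothetical, using the same gems as the script in the post, plus Chris’s suggested sleep):

        require "open-uri"
        require "json"
        require "mongo"

        col  = Mongo::Connection.new.db("friendfeed").collection("likes")
        base = "http://friendfeed-api.com/v2/feed/neilfws/likes"
        last_ids = []

        (0..11000).step(100) do |start|
          entries = JSON.parse(open("#{base}?start=#{start}&num=100").read)["entries"] || []
          ids     = entries.map { |e| e["id"] }
          break if ids.empty? || ids == last_ids  # the API has started repeating itself
          last_ids = ids
          entries.each { |e| e["_id"] = e["id"]; col.save(e) }
          sleep 1                                 # stay clear of any rate limit
        end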
