3 thoughts on “Communication failure”

  1. Greg Tyrelle

    I don’t know about this. While I understand the frustration of working with people who avoid statistical thinking, I get the feeling this kind of thing creates an us-vs-them mentality, which is just unhelpful.

  2. Jonathan Badger

    Well, true. But it isn’t just the “ha, ha, experimentalists don’t get statistics” part — it is also how often they really don’t understand that real work is involved in analysis and think that ridiculously small fractions of an FTE are sufficient to do the analysis (as mentioned at the end of the video), which is often seen as just an afterthought to actually generating the data. Yes, there is a semi-unhelpful two-cultures thing going on, but I think people on the analysis side understand that data is important — much more so than the inverse.

  3. Boyd Steere

    Yes, it’s funny because it’s true and it’s unfortunate because it’s true. You can look at it either from the funny side or the unfortunate side, but you can’t deny that this is a creative and effective illustration of the communication problems between these two, utterly interdependent disciplines.

    There are plenty of examples in this skit where the experimentalist isn’t grasping what the biostatistician needs to make their analysis: the relentless focus on the ‘3 patients’ and the hilarious suggestion to put the statistician on the grant for ‘half of a percent of an FTE’. The biostatistician is absolutely right that it would have been better to involve them at an earlier phase of the experimental design.

    But at the same time, the biostatistician missed a critical opportunity to communicate exactly what the experimentalist needed to provide: raw data from the validation of his apoptosis assay for use in calculating the variance, and an estimate of the effect size between ‘Treatments A and B’, which could be obtained from previous work. The ambiguous request, “I’m going to need more information,” of course opened the floodgates to the wrong kind of information (also hilarious).

    The lesson for experimentalists is to keep good records on assay validation handy to provide biostatisticians with the information they need, and to bring them in early to educate them about the expected effect sizes for a treatment/control comparison. The lesson for biostatisticians is to be crystal clear on exactly what data sets they require to make their power calculations, and to be as reassuring as they can about their role and intentions (e.g. ‘We don’t always recommend more patients or a 0.05 p-value – it depends on the question you want to ask and how you use the results! Look at my mouth; see, no fangs!’ :) )
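
    The power calculation the commenters keep circling back to is simple enough to sketch. Here is a minimal, stdlib-only Python illustration, assuming a two-sample comparison under a normal approximation; the effect size, alpha, and power values are illustrative choices, not anything stated in the skit:

    ```python
    import math
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate sample size per arm for a two-sample comparison.
        effect_size is Cohen's d; uses the normal-approximation formula
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    # With a medium effect (d = 0.5), this gives roughly 63 patients per arm,
    # which is why '3 patients' makes the biostatistician wince.
    print(n_per_group(0.5))
    ```

    This is exactly where the assay-validation data matters: the variance from those records is what turns a guessed effect size into a defensible one before the grant goes in.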

Comments are closed.