Saturday 6 June 2020

Frogs or termites? Gunshot or cumulative science?


"Tell us again about Monet, Grandpa."

The tl;dr version of this post is that we're all so obsessed with doing new studies that we disregard prior literature. This is largely due to a scientific culture that gives disproportionate value to novel work. This, I argue, weakens our science.

This post has been brewing in my mind ever since I took part in a reading group about systematic reviews. We were discussing the new NIRO guidelines for systematic reviews outside the clinical trials context that are under development by Marta Topor and Jade Pickering. I'd been recommending systematic review as a useful research contribution that could be undertaken when other activities had stalled because of the pandemic. But the enthusiasm of those in the reading group seemed to wane as the session progressed. Yes, everyone agreed, the guidelines were excellent: clear and comprehensive. But it was evident that doing a proper review would not be a "quick win"; the amount of work would of course depend on the number of papers on a topic, but even for a circumscribed subject it was likely to be substantial and involve close reading of a lot of material. Was it a good use of time, people asked. I defended the importance of looking at past literature: it's concerning if we don't read scientific papers because we are all so busy writing them. To my mind, being a serious scholar means being very familiar with past work in a subject area. However, our reward system doesn't value that kind of scholarship, which makes early-career researchers nervous about investing time in it.

The thing that prompted me to put my thoughts into words was a tweet I saw this morning by Mike Johansen (@mikejohansenmd). It seems at first to be on an unrelated topic, but I think it is another symptom of the same issue: a disregard for prior literature. Mike wrote:
Manuscripts should look like:
Question:
Methods:
Results:
Limitations:
Figures/Tables:
Things that don't matter: introduction, discussion.
Who does these things?
I replied that he seemed to be recommending that we disregard the prior literature, which I think is a bad idea. I argued: "One study is never enough to answer a question - important to consider how this study fits in - or if it doesn't, why."

Noah Haber (@noahhaber) jumped in at this point to say: 
I'm sympathetic (~45% convinced) to the argument that literature reviews in introductions do more harm than good. In practice, they are rarely more than cursory and uncritical, and make us beholden to ideas that have long outlived their usefulness. Space better used in methods.
But I don't think that's a good argument. I'm the first to agree that literature reviews are usually terrible: people only cite the work that confirms their position, and often do that inaccurately. You can see slides from a talk I gave on 'Why your literature review should be systematic' here. But I worry when the response to current unscholarly and biased approaches to the literature is to say that we can simply disregard it. If you assume that the study you are doing is so important that you don't have time to read other people's studies, that is on the one hand illogical (if we all did that, who would read your studies?), on the other hand disrespectful to fellow scientists, and on the most important third hand (yes, assume a mutant for now) bad for science.

Why is it bad for science? Because science seldom advances by a single study. Solid progress is made when work is cumulative. We have far more confidence in a theory that is supported by a series of experiments than by a single study, however large the effect. Indeed, we know that studies heralding a novel result often overestimate the size of the effect – the "winner's curse". So to interpret your study, I want to know how far it is consistent with prior work, and if it isn't, whether there might be a good reason for that.
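
To see why the "winner's curse" matters for cumulative science, here is a minimal simulation sketch in Python (the true effect size, group size and significance threshold are my own illustrative assumptions, not values from the post): if we only attend to the studies that happen to cross the significance threshold, the average reported effect is noticeably larger than the true one.

    # Illustrative sketch of the "winner's curse" under assumed values:
    # many small studies of the same modest true effect, with only the
    # "significant" ones treated as noteworthy.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_d = 0.3       # assumed true standardized effect size
    n = 30             # assumed sample size per group
    n_studies = 10000  # number of simulated studies

    all_d, significant_d = [], []
    for _ in range(n_studies):
        treat = rng.normal(true_d, 1.0, n)    # "treatment" group scores
        control = rng.normal(0.0, 1.0, n)     # "control" group scores
        t, p = stats.ttest_ind(treat, control)
        pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
        d_hat = (treat.mean() - control.mean()) / pooled_sd
        all_d.append(d_hat)
        if p < 0.05 and d_hat > 0:            # study "hits" and gets noticed
            significant_d.append(d_hat)

    print(f"true effect:                     {true_d:.2f}")
    print(f"mean estimate, all studies:      {np.mean(all_d):.2f}")
    print(f"mean estimate, significant only: {np.mean(significant_d):.2f}")

With these assumed values, the mean estimate across all studies sits close to the true effect, while the "significant" subset typically overestimates it substantially, even though every individual study was conducted honestly.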

Alas, this approach to science is discouraged by many funders and institutions: calls for research proposals are peppered with words such as "groundbreaking", "transformational", and "novel". There is a horror of doing work that is merely "cumulative". As a consequence, many researchers hop around like frogs in a lily pond, trying to land on a lily pad that is hiding buried treasure. It may sound dull, but I think we should model ourselves more on termites – we can only build an impressive edifice if we collaborate, each doing our part and building on what has gone before.

Of course, the termite mound approach is a disaster if the work we try to build on is biased, poorly conducted and over-hyped. Unfortunately that is often the case, as noted by Noah. We come rather full circle here, because I think a motivation for Mike and Noah's tweets is recognition of the importance of reporting work in a way that will make it a solid foundation for a cumulative science of the future. I'm in full agreement with that. Where I disagree, though, is in how we integrate what we are doing now with what has gone before. We do need to see what we are doing as part of a cumulative, collaborative process in taking ideas forward, rather than a series of single-shot studies.

1 comment:

  1. "We do need to see what we are doing as part of a cumulative, collaborative process in taking ideas forward, rather than a series of single-shot studies."

    Exactly. And the headlong rush for "novelty" at the expense of the slow grind of rigour and putting our work in context is damaging science in many fields.

    Or, as the chemist Frank Westheimer put it more pithily many moons ago, a month in the lab can save half an hour in the library...
