Wednesday, 26 October 2011

Accentuate the negative

Suppose you run a study to compare two groups of children: say a dyslexic group and a control group. Your favourite theory predicts a difference in auditory perception, but you find no difference between the groups. What to do? You may feel a further study is needed: perhaps there were floor or ceiling effects that masked true differences. Maybe you need more participants to detect a small effect. But what if you can’t find flaws in the study and decide to publish the result? You’re likely to hit problems. Quite simply, null results are much harder to publish than positive findings. In effect, you are telling the world “Here’s an interesting theory that could explain dyslexia, but it’s wrong.” It’s not exactly an inspirational message, unless the theory is so prominent and well-accepted that the null finding is surprising. And if that is the case, then it’s unlikely that your single study is going to be convincing enough to topple the status quo. It has been recognised for years that this “file drawer problem” leads to distortion of the research literature, creating an impression that positive results are far more robust than they really are (Rosenthal, 1979).
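Rosenthal's (1979) paper even offers a way to quantify the problem: the "fail-safe N", the number of null studies that would have to be sitting in file drawers to cancel out a set of published positive results. Here's a minimal sketch of the calculation in Python (the Z-scores are invented for illustration, not drawn from any real literature):

```python
# A minimal sketch of Rosenthal's (1979) fail-safe N: how many unpublished
# null studies (with average Z = 0) would be needed to drag a set of
# published positive results below significance?
import math

published_z = [2.1, 2.5, 1.9, 2.8]   # hypothetical Z-scores from published studies
k = len(published_z)
z_crit = 1.645                       # one-tailed .05 criterion

# Stouffer's combined Z across k studies is sum(Z) / sqrt(k).
# Fail-safe N solves sum(Z) / sqrt(k + N) = z_crit for N:
n_failsafe = sum(published_z) ** 2 / z_crit ** 2 - k

print(f"Combined Z for the published studies: {sum(published_z) / math.sqrt(k):.2f}")
print(f"Null studies needed to nullify the effect: {n_failsafe:.0f}")
```

If only a handful of hidden null results would be enough to wipe out a published effect, the literature is resting on shaky foundations.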
The medical profession has become aware of the issue and it’s now becoming common practice for clinical trials to be registered before a study commences, and for journals to undertake to publish the results of methodologically strong studies regardless of outcome. In the past couple of years, two early-intervention studies with null results have been published, on autism (Green et al., 2010) and late talkers (Wake et al., 2011). Neither study creates a feel-good sensation: it’s disappointing that so much effort and good intention failed to make a difference. But it’s important to know that, to avoid raising false hopes and wasting scarce resources on things that aren’t effective. Yet it’s unlikely that either study would have found space in a high-impact journal in the days before trial registration.
Registration can also exert an important influence in cases where conflict of interest or other factors make researchers reluctant to publish null results. For instance, in 2007, Cyhlarova et al. published a study relating membrane fatty acid levels to dyslexia in adults. This research group has a particular interest in fatty acids and neurodevelopmental disabilities, and the senior author has written a book on the topic. The researchers argued that the balance of omega-3 and omega-6 fatty acids differed between dyslexics and non-dyslexics, and concluded: “To gain a more precise understanding of the effects of omega-3 HUFA treatment, the results of this study need to be confirmed by blood biochemical analysis before and after supplementation”. They further stated that a randomised controlled trial was underway. Yet four years later, no results have been published, and requests for information about the findings are met with silence. If the trial had been registered, the authors would have been required to report the results, or to explain why they could not do so.
Advance registration of research is not a feasible option for most areas of psychology, so what steps can we take to reduce publication bias? Many years ago a wise journal editor told me that publication decisions should be based on evaluation of just the Introduction and Methods sections of a paper: if an interesting hypothesis had been identified, and the methods were appropriate to test it, then the paper should be published, regardless of the results.
People often respond to this idea by saying that it would just mean the literature would be full of boring stuff. But remember, I'm not suggesting that any old rubbish should get published: a good case for doing the study has to be made in the Introduction, and the Methods have to be strong. Also, some kinds of boring results are important: minimally, publication of a null result may save some hapless graduate student from spending three years trying to demonstrate an effect that’s not there. Estimates of effect sizes in meta-analyses are compromised if only positive findings get reported. More seriously, if we are talking about research with clinical implications, then over-estimation of effects can lead to inappropriate interventions being adopted.
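That point about meta-analysis is easy to demonstrate. Here's a quick simulation (my own illustration, not from any of the studies discussed here): generate lots of small two-group studies with a modest true effect, then average the effect size only across the ones that reach significance, as a biased literature would.

```python
# Illustrative simulation of how publication bias inflates effect sizes:
# simulate many small two-group studies with true Cohen's d = 0.2, then
# compare the mean observed d across all studies with the mean across
# only the "significant" (publishable) ones. All numbers are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2011)
true_d, n_per_group, n_studies = 0.2, 30, 10_000

observed_d = np.empty(n_studies)
significant = np.empty(n_studies, dtype=bool)
for i in range(n_studies):
    a = rng.normal(true_d, 1.0, n_per_group)   # e.g. dyslexic group
    b = rng.normal(0.0, 1.0, n_per_group)      # control group
    _, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    observed_d[i] = (a.mean() - b.mean()) / pooled_sd
    significant[i] = p < 0.05

print(f"Mean d across all studies:        {observed_d.mean():.2f}")              # close to 0.20
print(f"Mean d across 'published' only:   {observed_d[significant].mean():.2f}") # much larger
print(f"Proportion reaching significance: {significant.mean():.2f}")
```

With 30 children per group, only around one study in ten comes out significant, and those that do overestimate the true effect by a factor of nearly three: exactly the inflated picture a meta-analysis of the published record would inherit.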
Things are slowly changing and it’s getting easier to publish null results. The advent of electronic journals has made a big difference because there is no longer such pressure on page space. The electronic journal PLOS One adopts a publication policy that is pretty close to that proposed by the wise editor: they state they will publish all papers that are technically sound. So my advice to those of you who have null data from well-designed experiments languishing in that file drawer: get your findings out there in the public domain.

References

Cyhlarova, E., Bell, J., Dick, J., MacKinlay, E., Stein, J., & Richardson, A. (2007). Membrane fatty acids, reading and spelling in dyslexic and non-dyslexic adults. European Neuropsychopharmacology, 17(2), 116-121. doi:10.1016/j.euroneuro.2006.07.003

Green, J., Charman, T., McConachie, H., Aldred, C., Slonims, V., Howlin, P., Le Couteur, A., Leadbitter, K., Hudry, K., Byford, S., Barrett, B., Temple, K., Macdonald, W., & Pickles, A. (2010). Parent-mediated communication-focused treatment in children with autism (PACT): a randomised controlled trial. The Lancet, 375(9732), 2152-2160. doi:10.1016/S0140-6736(10)60587-9

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641. doi:10.1037/0033-2909.86.3.638

Wake, M., Tobin, S., Girolametto, L., Ukoumunne, O. C., Gold, L., Levickis, P., Sheehan, J., Goldfeld, S., & Reilly, S. (2011). Outcomes of population based language promotion for slow to talk toddlers at ages 2 and 3 years: Let's Learn Language cluster randomised controlled trial. BMJ, 343. PMID: 21852344

7 comments:

  1. Good post. I previously proposed that papers ought to be submitted to journals consisting of the Introduction and the Methods, before the data is gathered. The Introduction and Methods then get peer reviewed & if it's accepted, the paper would be published whatever the results. And also the Intro and Methods would be fixed from that point on (or at least, if you want to do post-hoc analyses after getting your data, they would go into their own "Secondary Methods" section).

    Can't see it happening, because it would be neither in the interests of journals nor of scientists, but I think it would be feasible if there was a will to do it.

  2. Thanks, Neuroskeptic! I recommend your post to all, though your even more radical approach would, I suspect, be totally unacceptable to most people. But you are right: there's no need to ban secondary analyses - it's just useful to know what was truly a priori.
    I'm now curious to see if anyone knows the origins of the idea of looking only at Intro/Methods. My memory is sadly rusty, but I'm pretty sure I first heard it from Eric Taylor when we were joint editors of the Journal of Child Psychology & Psychiatry in the early 1990s. But I don't think it was his idea - he mentioned another source. I will have to see if he remembers. It's clear that in the past two decades the idea has been largely ignored, but let's hope that by promoting it in the blogosphere it will start getting attention.

  3. Eric did remember this, and said: "As far as I remember the idea was in the context of The Lancet's indicating that if the purpose and methods of a trial were accepted by them then they would accept the paper regardless of how the results came out."
    So it sounds like this was some kind of precursor of the idea of clinical trials registration.

  4. The problem is that the interesting findings often aren't the ones you were expecting. So you have to ignore them, sit on them for years while you try to replicate them, or pretend that they were what you were looking for all along.

    A random thought. Perhaps it would help if researchers could identify a paper as an Experiment or as an Observation when submitting.

    An Experiment, as you both suggest, would be judged purely on its methodological rigour and a decision on acceptance would be made before the reviewers saw the results.

    Publication of an Observation would factor in its interestingness and novelty (as well as basic methodological standards) but would be signposted as a post hoc analysis. Observations could provide motivation for future Experiments - and conclusions would be considered preliminary by the scientific community until the results had been replicated in a number of Experiments.

  5. Jon: That would be a good system, but my worry is that no-one would play ball if it were voluntary. Everyone would say their papers were Experiments, because it would look better on their CV and get more citations... this is also why I'm skeptical of the benefits of Journals of Null Results and so forth - they're great in theory, but will people use them? Some people do - they're better than nothing - but I don't think they get to the root of the problem, which is that scientific publishing is systematically biased in favour of positive results.

  6. I've had some thoughts on this subject in my review of the proposed Hypothes.is system: http://researchity.net/2011/11/01/peer-review-should-be-more-like-hypothes-is-than-hypothes-is-should-be-like-peer-review. There is probably a need for a more fundamental rebuilding of the review system that would make it more transparent.

  7. Hello,
    Can you clarify why advance registration is not feasible in most areas of psychology?
    Thanks.
