Sunday, 12 January 2014

Why does so much research go unpublished?



As described in my last blogpost, I attended an excellent symposium on waste in research this week. A recurring theme was research that never got published. Rosalind Smyth described her experience of sitting on the funding panel of a medium-sized charity. The panel went to great pains to select the most promising projects, and would end a meeting with a sense of excitement about the great work that they were able to fund. A few years down the line, though, they'd find that many of the funds had been squandered. The work had either not been done, or had been completed but not published.

In order to tackle this problem, we need to understand the underlying causes. Sometimes, as Robert Burns noted, the best-laid schemes go wrong. Until you've tried to run a few research projects, it's hard to imagine the myriad different ways in which life can conspire to mess up your plans. The eight laws of psychological research formulated by Hodgson and Rollnick are as true today as they were 25 years ago.

But much research remains unpublished despite being completed. The reasons are many, and the strategies needed to overcome them are varied, but here is my list of the top three problems and potential solutions.

Inconclusive results


Probably the commonest reason for inconclusive results is lack of statistical power. A study is undertaken in the fond hope that a difference will be found between condition X and condition Y, and if the difference is found, there is great rejoicing and a rush to publish. A negative result should also be of interest, provided the study was well-designed and adequately motivated. But if the sample is small, then we can't be sure whether our failure to observe the effect means it is truly absent: a real but small effect could simply be swamped by noise.
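
To make the point concrete, here is a minimal simulation sketch (my own illustration, not from the original post; the effect size of d = 0.3 and the sample of 20 per condition are assumed values chosen for the example). It shows how often a perfectly real effect would be detected in a small study:

```python
# Minimal illustrative sketch: simulate many small two-group studies in which
# a real but modest effect exists (Cohen's d = 0.3, an assumed value), and count
# how often a standard t-test detects it at p < 0.05 with 20 participants per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2014)
true_effect = 0.3        # assumed standardised difference between conditions X and Y
n_per_group = 20         # assumed (small) sample size per condition
n_simulations = 10_000

detected = 0
for _ in range(n_simulations):
    x = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    y = rng.normal(loc=true_effect, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(x, y)
    if p_value < 0.05:
        detected += 1

print(f"Proportion of simulated studies detecting the real effect: {detected / n_simulations:.0%}")
# Typically around 15%: the other ~85% of simulated studies look 'inconclusive'
# even though the effect is genuinely there.
```

In other words, under these assumptions the large majority of such studies would fail to find an effect that really exists, which is exactly the kind of result that ends up unpublished.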

I think the solution to this problem lies in the hands of funding panels and researchers: quite simply, they need to take statistical power very seriously indeed and to consider carefully whether anything will be learned from a study if the anticipated effects are not obtained. If not, then the research needs to be rethought. In the fields of genetics and clinical trials, it is now recognised that multicentre collaborations are the way forward to ensure that studies are conducted with sufficient power to obtain a conclusive result.

Rejection of completed work by journals


Even well-conducted and adequately powered studies may be rejected by journals if the results are not deemed to be exciting. To solve this problem, we must look to journals. We need recognition that - provided a study is methodologically strong and well-motivated - negative results can be as informative as positive ones. Otherwise we are doomed to waste time and money pursuing false leads.  As Paul Glasziou has emphasised, failure is part of the research process. It is important to tell people about what doesn't work if we are not to repeat our mistakes.

We do now have some journals that will publish negative results, and there is a growing move toward pre-registration of studies, with guaranteed publication if the methods meet quality criteria. But there is still a lot to be done, and we need a radical change of mindset about what kinds of research results are valuable.

Lack of time


Here, I lay the blame squarely on the incentive structures that operate in universities. To get a job, or to get promoted, you need to demonstrate that you can pull in research income. In many UK institutions this is quite explicit, and promotion criteria may specify a figure to aim for of X thousand pounds of research income per annum. There are few UK universities whose strategic plan does not include a statement about increasing research funding. This has changed the culture dramatically; as Fergus Millar put it: "in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding".

Of course, for research to thrive, our universities need people who can compete for funding to support their work. But the acquisition of funding has become an end in itself, rather than a means to an end. This has the pernicious effect of driving people to apply for grant after grant, without adequately budgeting for the time it takes to analyse and write up research, or indeed to think carefully about what they are doing. As I argued previously, even junior researchers these days have an 'academic backlog' of unwritten papers.

At the Lancet meeting there were some useful suggestions for how we might change incentive structures to avoid such waste. Malcolm Macleod argued that researchers should be evaluated not by research income and high-impact publications, but by the quality of their methods, the extent to which their research was fully reported, and the reproducibility of their findings. An-Wen Chan echoed this, arguing for performance metrics that recognise full dissemination of research and the use of research datasets by other groups. However, we may ask whether such proposals have any chance of being adopted when university funding is directly linked to grant income, and universities increasingly view themselves as businesses.

I suspect we would need revised incentives to be reflected at the level of those allocating central funding before vice-chancellors took them seriously.  It would, however, be feasible for behaviour to be shaped at the supply end, if funders adopted new guidelines. For a start, they could look more carefully at the time commitments of those to whom grants are given: in my experience this is never taken into consideration, and one can see successful 'fat cats' accumulating grant after grant, as success builds on success. Funders could also monitor more closely the outcomes of grants: Chan noted that NIHR withholds 10% of research funds until a paper based on the research has been submitted for publication. Moves like this could help us change the climate so that an award of a grant would confer responsibility on the recipient to carry through the work to completion, rather than acting solely to embellish the researcher's curriculum vitae.

References

Chan, A.-W., Song, F., Vickers, A., Jefferson, T., Dickersin, K., Gotzsche, P., Krumholz, H. M., Ghersi, D., & van der Worp, H. B. (2014). Increasing value and reducing waste: addressing inaccessible research. Lancet (published online 8 Jan). doi: 10.1016/S0140-6736(13)62296-5

Macleod, M. R., Michie, S., Roberts, I., Dirnagl, U., Chalmers, I., Ioannidis, J. P. A., . . . Glasziou, P. (2014). Biomedical research: increasing value, reducing waste. Lancet, 383(9912), 101-104.

3 comments:

  1. The issue about properly powered clinical trials is near and dear to my heart since I have spent my career trying unsuccessfully to get sufficient funding to - for once - run a trial that would have enough power to actually compare the effectiveness of different treatments for speech sound disorders. I run up against a brick wall every time because the funders simply will not give large amounts of money to study this topic (you know this better than I do, I love your 2010 paper in PLoS ONE). Recently in Canada, the Social Sciences and Humanities Research Council, which used to underfund such trials, announced that they would no longer fund any “health care” research, which includes anything to do with speech sound disorders, even intervention trials in the schools; and the Canadian Institutes of Health Research, which doesn’t mind spending rather large amounts on genetics studies or imaging studies, still won’t fund a behavioral intervention of any size on a topic of such meager importance as kids who talk funny. That is one reason I won’t jump on the bandwagon when my Twitter friends get going about the sample size in some of these studies – people are doing the best that they can with funding from these small charities and foundations. One day maybe we can glue the results together after the fact, although probably only in the area of autism, not speech sound disorders, because there still won’t be enough trials, even small ones. @ProfRvach

  2. Thanks Susan. That's a very sobering thought. Speech sound disorders have an even worse image problem than specific language impairment, I fear.
    One of the points made in the paper by Iain Chalmers is that it is important to engage with patients to find out what they want from researchers. I wonder whether you could obtain more leverage if you could get hard evidence to show potential funders that there are a lot of families clamouring for effective interventions for their children.
    Lobbying by affected families has played a large role in raising the profile of autism so perhaps we need to consider whether it is possible to harness similar parent power to make the case.

  3. I am in this boat. My problem has been (as you are aware) that I don't have the skills, or know anyone who is willing to assist with co-authorship, to get the word count down.
