Wednesday, 1 January 2020
Research funders need to embrace slow science
Uta Frith courted controversy earlier this year when she published an opinion piece in which she advocated for Slow Science, including the radical suggestion that researchers should be limited in the number of papers they publish each year. This idea has been mooted before, but has never taken root: the famous Chaos in the Brickyard paper by Bernard Forscher dates back to 1963, and David Colquhoun has suggested restricting the number of publications by scientists as a solution more than once on his blog (here and here).
Over the past couple of weeks I've been thinking further about this, because I've been doing some bibliometric searches. This was in part prompted by the need to correct and clarify an analysis I had written up in 2010 about the amount of research on different neurodevelopmental disorders. I had previously noted the remarkable amount of research on autism and ADHD compared to other behaviourally-defined conditions. A check of recent databases shows no slowing in the rate of research. A search for publications with autism or autistic in the title yielded 2251 papers published in 2010; for 2019 the figure had risen to 6562. We can roughly halve this number if we restrict attention to the Web of Science Core database and search only for articles (not reviews, editorials, conference proceedings, etc.). This gives 1135 articles published in 2010 and 3075 published in 2019. That's around 8 papers every day for 2019.
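(If you want to reproduce this kind of count yourself, Web of Science has no free public interface, but PubMed does. Below is a minimal sketch in Python using Biopython's Entrez module as a stand-in; the PubMed counts won't match the Web of Science figures above, though the trend is similar, and the email address is a placeholder that NCBI asks you to replace with your own.)

from Bio import Entrez  # Biopython: pip install biopython

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

def count_autism_titles(year):
    # Count PubMed records with 'autism' or 'autistic' in the title for a given year
    query = f"(autism[Title] OR autistic[Title]) AND {year}[PDAT]"
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for year in (2010, 2019):
    n = count_autism_titles(year)
    print(f"{year}: {n} papers (~{n / 365:.1f} per day)")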
We're spending huge amounts to generate these outputs. I looked at NIH RePORTER, which provides a simple interface where you can enter search terms to identify grants funded by the National Institutes of Health. For the fiscal year 2018-2019 there were 1709 projects with the keyword 'autism or autistic', with a total spend of $890 million. And of course, NIH is not the only source of research funding.
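(This search can also be scripted against the RePORTER web API at api.reporter.nih.gov. The sketch below reflects my reading of the v2 interface; the criteria field names are assumptions that should be checked against the current API documentation.)

import requests  # pip install requests

URL = "https://api.reporter.nih.gov/v2/projects/search"

# Field names inside 'criteria' are my best guess from the RePORTER docs;
# verify them against the live API documentation before relying on this.
payload = {
    "criteria": {
        "fiscal_years": [2019],
        "advanced_text_search": {
            "operator": "or",
            "search_field": "terms",
            "search_text": "autism autistic",
        },
    },
    "offset": 0,
    "limit": 1,  # we only need the total count from the response metadata
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
print("matching projects:", resp.json()["meta"]["total"])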
Within the field of developmental neuropsychology, autism is the most extreme example of research expansion, but if we look at adult disorders, this level of research activity is by no means unique. My searches found that in 2019 there were 6 papers published every day on schizophrenia, 15 per day on depression, and 11 per day on Alzheimer's disease.
These are serious and common conditions and it is right that we fund research into them – if we could improve understanding and reduce their negative impacts, it would make a dramatic difference to many lives. The problem is information overload. Nobody, however nerdy and committed, could possibly keep abreast of the literature. And we're not just getting more information; the information is also more complex. I reckon I could probably understand the majority of papers on autism that were published when I started out in research years ago. That proportion has gone down and down with time, as methods get ever more complex. So we're spending increasing amounts of money to produce more and more research that is less and less comprehensible. Something has to give, and I like the proposal that we should all slow down.
But is it possible? If you want to get your hands on research funding, you need to demonstrate that you're likely to make good use of it. Publication track record provides objective evidence that you can do credible research, so researchers are focused on publishing papers. And they typically have a short time-frame in which to demonstrate productivity.
A key message from Uta's piece is that we need to stop confusing quantity with quality. When this topic has been discussed on social media, I've noted that many ECRs take the view that when you come to apply for grants or jobs, a large number of publications is seen as a good thing, and therefore Slow Science would damage the prospects of ECRs. That is not my experience. It's possible that there are differences in practice between different countries and subject areas, but in the UK the emphasis is much more on quality than quantity of publications, so a strategy of focusing on quality rather than quantity would be advantageous. Indeed, most of our major funders use proposal forms that ask applicants to list their top N publications, rather than a complete CV. This will disadvantage anyone who has sliced their oeuvre into lots of little papers, rather than writing a few substantial pieces. Similarly, in the UK Research Excellence Framework, researchers from an institution are required to submit their outputs, but only a limited number can be submitted – a restriction that was introduced many years ago to incentivise a focus on quality rather than quantity.
The same people who are outraged at reducing the number of publications often rail against the stress of working in the current system – and rightly so. After all, at some point in the research cycle, at least one person has to devote serious thought to the design, analysis and write-up. Each of these stages inevitably takes far longer than we anticipate – and then there is time needed to respond to reviewers. The quality and impact of research can be enhanced by pre-registration and making scripts and data open, but extra time needs to be budgeted for this. Indeed, lack of time is a common reason cited for not doing open science. Researchers who feel that to succeed they have to write numerous papers every year are bound to cut corners, and then burn out from stress. It makes far more sense to work slowly and carefully to produce a realistic number of strong papers that have been carefully planned, implemented and written up.
It's clear that science and scientists would benefit if we took things more slowly, but the major barrier is a lack of confidence among researchers that slower, more careful work will be rewarded. Those who allocate funds to research have a vested interest in ensuring they get the best return from their investment – not a tsunami of papers that overwhelm us, but a smaller number of reports of high-quality, replicable and credible findings. If things are to change, we need funders to do more to communicate to researchers that they will be evaluated on quality rather than quantity of outputs.
I like Peter Higgs' comment after winning the Nobel Prize for Physics: "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964."
When working in areas of "science" I have seldom been the "researcher", mostly someone doing secondary research to feed into practical or policy decisions.
The proliferation of papers in MPUs (Minimally Publishable Units) can be a nightmare. It can take days to realize that a pile of papers is from the same lab or network of researchers, so that one may need to worry about common method variance or, in one meta-analysis I worked on, a common data set that was added to as it went from university to university.
I would be all for fewer, more comprehensive (and hopefully better) publications.
Thanks for this thought-provoking post to start the new year, Dorothy. (And, of course, Happy New Year!)
"It's possible that there are differences in practice between different countries and subject areas, but in the UK the emphasis is much more on quality than quantity of publications, so a strategy of focusing on quality rather than quantity would be advantageous."
Although I agree that a focus on quality is much preferred, a key problem -- at least in my research area of condensed matter physics/nanoscience -- is that too often the quality assessment of a paper is reduced to simplistic brand-name recognition: one paper in Science or Nature is worth much, much more than, say, a Physical Review B or J. Phys. Chem. publication.
If ECRs aren't fortunate enough to be publishing in the highest-profile journals, I get the feeling that sometimes they think they can at least partially compensate for this by producing more papers. And to a certain extent this is true.
I would also argue that a more comprehensive paper is not necessarily a better paper. A pithy and punchy description of a key result -- with all of the supporting data freely available -- may well be much more useful and influential than a lengthy, in-depth analysis where there's the potential for a "can't see the forest for the trees" effect.
Two studies back in 2015 came to entirely different conclusions as to whether a more comprehensive "wordy" abstract leads to a greater level of citation (which we can take as a proxy for influence, if not quality): https://www.enago.com/academy/does-the-length-of-abstract-really-affect-citation-paper/
"If things are to change, we need funders to do more to communicate to researchers that they will be evaluated on quality rather than quantity of outputs."
But just how do we evaluate quality and decouple it from measures of influence/impact such as citations? I certainly struggle: https://muircheartblog.wordpress.com/2019/09/21/guilty-confessions-of-a-referee/
Moreover, and perhaps most importantly, what practical and pragmatic advice can we give ECRs about publishing high quality papers? And is targeting quality the best strategy for an ambitious researcher in any case? Given the academic reward/career progression system as it stands, any ECR knows that to enhance their prospects what matters are high impact -- not necessarily high quality -- papers.
In my department most researchers resent REF and the obsession with * ratings. There is a lot wrong with REF, but it does at least provide a rare example of financial incentives pulling science in a healthy direction - i.e. towards quality over quantity. People with tenure often prefer the slow science approach, and if pushed, they can justify it by appealing to REF. Managers like it if you say 'I'm trying to produce a small number of high quality 4* papers, rather than churning out a large number of low quality 2* papers'. This is not an option for early career researchers. They don't have the opportunity to play the long game. For instance, PhD students will struggle to get post doc positions without any publications on their C.V. So they have to throw out a few hasty ones just to get on the ladder before the 3 years is up. Same for post docs on temporary contracts. Responsible supervisors encourage this. I suspect this is a major force driving the paper overproduction problem.
In the USA the real problem is the RTP process, where the number of publications (the number! As in count 'em) in the evaluation year is an important metric. This is where trashy publications come from and why pay-to-publish journals are flourishing. My proposed solution is this: each year the candidate will submit 3 published papers for evaluation. The sum of the three evaluations, not the number of publications, will then serve as the measure of scholarly activity and research.