Wednesday 27 March 2024

Some thoughts on eLife's New Model: One year on


I've just received an email from eLife, pointing me to links to a report called "eLife's New Model: One year on" and a report by the editors "Scientific Publishing: The first year of a new era". To remind readers who may have missed it, the big change introduced by eLife in 2023 was to drop the step where an editor decides whether to accept or reject a manuscript after reviewer comments are received. Instead, the author submits a preprint, and the editors then decide whether it should be reviewed. If the answer is yes, then the paper will be published, with reviewer comments. 

Given the controversy surrounding this new publishing model, it seems timely to have a retrospective look at how it's gone, and these pieces by the journal are broadly encouraging in showing that the publishing world has not fallen apart as a consequence of the changes. We are told that the proportion of submissions published has gone down slightly from 31.4% to 27.7% and the demographic characteristics of authors and reviewers are largely unchanged. The ratings of quality of submissions are similar to those from the legacy model. The most striking change has been in processing time: median time from submission to publication of the first version with reviews is 91 days, which is much faster than previously. 

As someone who has been pushing for changes to the model of scientific publishing for years (see blogposts below), I'm generally in favour of any attempt to disrupt the conventional model. I particularly like the fact that the peer reviews are available with the published articles in eLife - I hope that will become standard for other journals in future. However, there are two things that rather rankled about the latest communication from the journal. 

First, the report describes an 'author survey' which received 325 responses, but very little detail is given as to who was surveyed, what the response rate was, and what the overall outcome was. This reads more like a marketing report than a serious scientific appraisal. Two glowing endorsements were reported from authors who had good experiences. I wondered though about authors whose work had not been selected to go forward to peer review - were they just as enthusiastic? Quite a few tables of facts and figures about the impact of the new policy accompanied the report, but if eLife really does want to present itself as embracing open and transparent policies, I think they should bite the bullet and provide more information - including fuller details of their survey methods and results, and negative as well as positive appraisals. 

Second, I continue to think there is a fatal flaw in the new model, which is that it still relies on editors to decide which papers go forward to review, using a method that will do nothing to reduce the tendency to hype and the consequent publication bias that ensues. I blogged about this a year ago, and suggested a simple solution, which is for the editors to adopt 'results-blind' review when triaging papers. This is an idea that has been around since at least 1976 (Mahoney, 1976), and it has had a resurgence in popularity in recent years, with growing awareness of the dangers of publication bias (Locascio, 2017). The idea is that editorial decisions should be made based on whether the authors had identified an interesting question and whether their methods were adequate to give a definitive answer to that question. The problem with the current system is that people get swayed by exciting results, and will typically overlook weak methods when there is a dramatic finding. If you don't know the results, then you are forced to focus on the methods. The eLife report states:

 "It is important to note that we don’t ascribe value to the decision to review. Our aim is to produce high-quality reviews that will be of significant value but we are not able to review everything that is submitted." 

That is hard to believe: if you really were just ignoring quality considerations, then you should decide on which papers to review by lottery. I think this claim is not only disingenuous but also wrong-headed. If you have a limited resource - reviewer capacity - then you should be focusing it on the highest quality work. But that judgement should be made on the basis of research question and design, and not on results. 


Locascio, J. J. (2017). Results blind science publishing. Basic and Applied Social Psychology, 39(5), 239–246. 

Mahoney, M. J. (1976). Scientist as Subject: The Psychological Imperative. Ballinger Publishing Company. 

Previous blogposts

Academic publishing: why isn't psychology like physics? 

Time for academics to withdraw free labour.

High impact journals: where newsworthiness trumps methodology

Will traditional science journals disappear?

Publishing replication failures

Sunday 24 March 2024

Just make it stop! When will we say that further research isn't needed?


I have a lifelong interest in laterality, which is a passion that few people share. Accordingly, I am grateful to René Westerhausen who runs the Oslo Virtual Laterality Colloquium, with monthly presentations on topics as diverse as chiral variation in snails and laterality of gesture production. 

On Friday we had a great presentation from Lottie Anstee who told us about her Masters project on handedness and musicality. There have been various studies on this topic over the years, some claiming that left-handers have superior musical skills, but samples have been small and results have been mixed. Lottie described a study with an impressive sample size (nearly 3000 children aged 10-18 years) whose musical abilities were evaluated on a detailed music assessment battery that included self-report and perceptual evaluations. The result was convincingly null, with no handedness effect on musicality. 

What happened next was what always happens in my experience when someone reports a null result. The audience made helpful suggestions for reasons why the result had not been positive and suggested modifications of the sampling, measures or analysis that might be worth trying. The measure of handedness was, as Lottie was the first to admit, very simple - perhaps a more nuanced measure would reveal an association? Should the focus be on skilled musicians rather than schoolchildren? Maybe it would be worth looking at nonlinear rather than linear associations? And even though the music assessment was pretty comprehensive, maybe it missed some key factor - amount of music instruction, or experience of specific instruments. 

After a bit of to and fro, I asked the question that always bothers me. What evidence would we need to convince us that there is really no association between musicality and handedness? The earliest study that Lottie reviewed was from 1922, so we've had over 100 years to study this topic. Shouldn't there be some kind of stop rule? This led to an interesting discussion about the impossibility of proving a negative, whether we should be using Bayes Factors, and what the smallest effect size of interest would be. 
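For readers who like to see the Bayes Factor point made concrete, here is a rough sketch in Python. The data are entirely invented (they are not Lottie's data, and the group sizes and scores are my assumptions), and I use the well-known BIC approximation to the Bayes factor rather than a full Bayesian analysis. The point is simply that, unlike a p-value, this kind of analysis can express positive evidence *for* the null: when there is no true handedness effect and the sample is large, the approximation will typically favour the model with a single mean over the model with separate means for left- and right-handers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: ~3000 children, no true handedness effect on musicality
n = 3000
left = rng.random(n) < 0.10            # roughly 10% left-handers (assumed rate)
musicality = rng.normal(50, 10, n)     # scores generated independently of handedness

# Null model: one grand mean.  Alternative model: separate group means.
rss_null = np.sum((musicality - musicality.mean()) ** 2)
resid = musicality.copy()
for grp in (left, ~left):
    resid[grp] -= musicality[grp].mean()
rss_alt = np.sum(resid ** 2)

# BIC for each model (k = number of fitted mean parameters)
bic_null = n * np.log(rss_null / n) + 1 * np.log(n)
bic_alt = n * np.log(rss_alt / n) + 2 * np.log(n)

# BIC approximation to the Bayes factor in favour of the null (BF01)
bf01 = np.exp((bic_alt - bic_null) / 2)
print(f"approximate BF01 (evidence for the null): {bf01:.1f}")
```

A BF01 well above 1 says the data are more probable under the null than under the handedness-effect model - which is exactly the kind of statement a stop rule needs, and one that significance testing cannot provide.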

My own view is that further investigation of this association would prove fruitless. In part, this is because I think the old literature (and to some extent the current literature!) on factors associated with handedness is at particular risk of bias, so even the messy results from a meta-analysis are likely to be over-optimistic. More than 30 years ago, I pointed out that laterality research is particularly susceptible to what we now call p-hacking - post hoc selection of cut-offs and criteria for forming subgroups, which dramatically increase the chances of finding something significant. In addition, I noted that measurement of handedness by questionnaire is simple enough to be included in a study as a "bonus factor", just in case something interesting emerges. This increases the likelihood that the literature will be affected by publication bias - the handedness data will be reported if a significant result is obtained, but otherwise can be disregarded at little cost. So I suspect that most of the exciting ideas about associations between handedness and cognitive or personality traits are built on shaky foundations, and would not replicate if tested in well-powered, preregistered studies.  But somehow, the idea that there is some kind of association remains alive, even if we have a well-designed study that gives a null result.  
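The cut-off problem is easy to demonstrate with a minimal simulation (Python; all numbers are invented for illustration, not taken from any real study). Handedness and musicality are generated independently, so any "effect" is a false positive - yet shopping among a handful of plausible cut-offs for splitting a continuous handedness score, and keeping the best one, noticeably inflates the false positive rate relative to a single pre-specified split.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_sims = 2000      # simulated "studies", all under the null
n = 100            # participants per study
cutoffs = [-1.0, -0.5, 0.0, 0.5, 1.0]   # candidate splits of the handedness score

naive_hits = 0     # significant with one pre-specified cutoff
hacked_hits = 0    # significant with the best of several post hoc cutoffs

for _ in range(n_sims):
    handedness = rng.normal(size=n)   # continuous laterality score
    musicality = rng.normal(size=n)   # independent by construction: true effect is zero

    # Pre-specified analysis: a single split at zero
    g1 = musicality[handedness < 0.0]
    g2 = musicality[handedness >= 0.0]
    naive_hits += stats.ttest_ind(g1, g2).pvalue < 0.05

    # P-hacked analysis: try every cutoff, keep the smallest p-value
    pvals = []
    for c in cutoffs:
        a = musicality[handedness < c]
        b = musicality[handedness >= c]
        if len(a) > 1 and len(b) > 1:
            pvals.append(stats.ttest_ind(a, b).pvalue)
    hacked_hits += min(pvals) < 0.05

rate_fixed = naive_hits / n_sims
rate_hacked = hacked_hits / n_sims
print(f"false positive rate, fixed cutoff: {rate_fixed:.3f}")
print(f"false positive rate, best cutoff:  {rate_hacked:.3f}")
```

The fixed-cutoff analysis hovers around the nominal 5%; picking the best of five cutoffs pushes it well above that - and this is before adding the further boost from publication bias, where only the "significant" studies get written up.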

Laterality is not the only area where there is no apparent stop rule. I've complained of similar trends in studies of association between genetic variants and psychological traits, for instance, where instead of abandoning an idea after a null study, researchers slightly change the methods and try again. In 2019, Lisa Feldman Barrett wrote amusingly about zombie ideas in psychology, noting that some theories are so attractive that they seem impossible to kill. I hope that as preregistration becomes more normative, we may see more null results getting published, and learn to appreciate their value. But I wonder just what it takes to get people to conclude that a research seam has been mined to the point of exhaustion.