Wednesday, 27 March 2024

Some thoughts on eLife's New Model: One year on


I've just been sent an email from eLife, pointing me to a report called "eLife's New Model: One year on" and a piece by the editors, "Scientific Publishing: The first year of a new era". To remind readers who may have missed it, the big change introduced by eLife in 2023 was to drop the step where an editor decides whether to accept or reject a manuscript after reviewer comments are received. Instead, the author submits a preprint, and the editors decide whether it should be reviewed. If the answer is yes, then the paper will be published, together with the reviewer comments. 

Given the controversy surrounding this new publishing model, it seems timely to have a retrospective look at how it's gone, and these pieces by the journal are broadly encouraging in showing that the publishing world has not fallen apart as a consequence of the changes. We are told that the proportion of submissions published has gone down slightly, from 31.4% to 27.7%, and that the demographic characteristics of authors and reviewers are largely unchanged. Ratings of the quality of submissions are similar to those under the legacy model. The most striking change has been in processing time: the median time from submission to publication of the first version with reviews is 91 days, which is much faster than previously. 

As someone who has been pushing for changes to the model of scientific publishing for years (see blogposts below), I'm generally in favour of any attempt to disrupt the conventional model. I particularly like the fact that the peer reviews are available with the published articles in eLife - I hope that will become standard for other journals in future. However, two things about the latest communication from the journal rather rankled. 

First, the report describes an 'author survey' which received 325 responses, but very little detail is given as to who was surveyed, what the response rate was, and what the overall outcome was. This reads more like a marketing report than a serious scientific appraisal. Two glowing endorsements were reported from authors who had good experiences. I wondered, though, about authors whose work had not been selected to go forward to peer review - were they just as enthusiastic? Quite a few tables of facts and figures about the impact of the new policy were presented with the report, but if eLife really does want to present itself as embracing open and transparent policies, I think they should bite the bullet and provide more information - including fuller details of their survey methods and results, and negative as well as positive appraisals. 

Second, I continue to think there is a fatal flaw in the new model, which is that it still relies on editors to decide which papers go forward to review, using a method that will do nothing to reduce the tendency to hype and the publication bias that ensues. I blogged about this a year ago, and suggested a simple solution, which is for the editors to adopt 'results-blind' review when triaging papers. This idea has been around at least since 1976 (Mahoney, 1976) and has had a resurgence in popularity in recent years, with growing awareness of the dangers of publication bias (Locascio, 2017). The idea is that editorial decisions should be made on the basis of whether the authors have identified an interesting question and whether their methods are adequate to give a definitive answer to that question. The problem with the current system is that people get swayed by exciting results, and will typically overlook weak methods when there is a dramatic finding. If you don't know the results, then you are forced to focus on the methods. The eLife report states:

 "It is important to note that we don’t ascribe value to the decision to review. Our aim is to produce high-quality reviews that will be of significant value but we are not able to review everything that is submitted." 

That is hard to believe: if you really were ignoring quality considerations, then you should decide which papers to review by lottery. I think this claim is not only disingenuous but also wrong-headed. If you have a limited resource - reviewer capacity - then you should be focusing it on the highest quality work. But that judgement should be made on the basis of research question and design, not on results. 

Bibliography 

Locascio, J. J. (2017). Results blind science publishing. Basic and Applied Social Psychology, 39(5), 239–246. https://doi.org/10.1080/01973533.2017.1336093 

Mahoney, M. J. (1976). Scientist as Subject: The Psychological Imperative. Ballinger Publishing Company. 

Previous blogposts

Academic publishing: why isn't psychology like physics? 

Time for academics to withdraw free labour.

High impact journals: where newsworthiness trumps methodology

Will traditional science journals disappear?

Publishing replication failures

