The Royal Society has been celebrating the 350th anniversary of Philosophical Transactions, the world's first scientific journal, by holding a series of meetings on the future of scholarly scientific publishing. I followed the whole event on social media, and was able to attend in person for one day. One of the sessions followed a Dragon's Den format, with speakers having 100 seconds to convince three dragons – Onora O'Neill, Ben Goldacre and Anita de Waard – of the fund-worthiness of a new idea for science communication. Most were light-hearted, and there was a general mood of merriment, but the session got me thinking about what kind of future I would like to see. What I came up with was radically different from our current publishing model.
Most of the components of my dream system are not new, but
I've combined them into a format that I think could work. The overall idea had its origins in a blogpost I wrote in 2011, and has
points in common with David Colquhoun's submission to the dragons, in that
it would adopt a web-based platform run by scientists themselves. This is what
already happens with arXiv for the physical sciences and bioRxiv for the biological sciences.
However, my 'consensual communication' model has some important differences.
Here are the steps I envisage an author going through:
1. An initial protocol
is uploaded before a study is done, consisting only of an introduction, a
detailed methods section and an analysis plan, with the authors anonymised. An
editor then assigns reviewers to evaluate it. This aspect of the model draws on
features of registered
reports, as implemented in the neuroscience journal Cortex. There are two key scientific advantages to
this approach: first, reviewers are able to improve the research design, rather
than criticise studies after they have been done. Second, there is a record of
what the research plan was, which can then be compared to what was actually
done. This does not confine the researcher to the plan, but it does make
transparent the difference between planned and exploratory analyses.
2. The authors get a chance to revise the protocol in
response to the reviews, and the editor judges whether the study is of an adequate
standard, and if necessary solicits another round of review. When there is
agreement that the study is as good as it can get, the protocol is posted as a
preprint on the web, together with the non-anonymised peer reviews. At this
point the identity of authors is revealed.
3. There are then two optional extra stages that could be
incorporated:
a) The researcher can solicit collaborators for the study.
This addresses two issues raised at the Royal Society meeting. First, many
studies are underpowered; duplicating a study across several centres could help
in cases where there are logistic problems in getting adequate sample sizes to
give a clear answer to a research question. Second, collaborative working
generally enhances reproducibility of findings.
b) It would make
sense for funding, if required, to be solicited at this point – in contrast to
the current system where funders evaluate proposals that are often only
sketchily described. Although funders currently review grant proposals, there
is seldom any opportunity to incorporate their feedback – indeed, very often a
single critical comment can kill a proposal.
4. The study is then completed, written up in full, and reviewed
by the editor. Provided the authors have followed the protocol, no further
review is required. The final version is deposited with the original preprint,
together with the data, materials and analysis scripts.
5. Post-publication discussion of the study is then encouraged
by enabling comments.
What might a panel of dragons make of this? I anticipate
several questions.
Who would pay for it?
Well, if arXiv is anything to go by, costs of this kind of operation are modest
compared with conventional publishing. They would consist of maintaining the web-based
platform, and covering the costs of editors. The open access journal PeerJ has developed an efficient e-publishing
operation and charges $99 per author per submission. I anticipate a similar
charge to authors would be sufficient to cover costs.
Wouldn't this give an
incentive to researchers to submit poorly thought-through studies? There
are two answers to that. First, half of the publication charge to authors would
be required at the point of initial submission. Although this would not be
large (e.g. £50) it should be high enough to deter frivolous or careless
submissions. Second, because the complete trail of a submission, from pre-print
to final report, would be public, there would be an incentive to preserve a
reputation for competence by not submitting sloppy work.
Who would agree to be
a reviewer under such a model? Why would anyone want to put their skills into
improving someone else's work for no reward? I propose there could be
several incentives for reviewers. First, it would be more rewarding to provide
comments that improve the science, rather than just criticising what has
already been done. Second, as a more concrete reward, reviewers could have
submission fees waived for their own papers. Third, reviews would be public and
non-anonymised, and so the reviewer's contribution to a study would be
apparent. Finally, and most radically, where the editor judges that a reviewer
had made a substantial intellectual contribution to a study, then they could
have the option of having this recognised in authorship.
Why would anyone who
wasn't a troll want to comment post-publication? We can get some insights
into how to optimise comments from the model of the NIH-funded platform PubMed Commons. It does
not allow anonymous comments, and requires that commenters have themselves
authored a paper that is listed on PubMed.
Commenters could also be offered incentives such as a reduction of
submission costs to the platform. To
this one could add ideas from commercial platforms such as eBay, where sellers
are rated by customers, so you can evaluate their reputation. It should be
possible to devise some kind of star rating – both for the paper being
commented on, and for the person making the comment. This could provide
motivation for good commenters and make it easier to identify the high quality
papers and comments.
I'm sure that any dragon from the publishing world would
swallow me up in flames for these suggestions, as I am in effect suggesting a
model that would take commercial publishers out of the loop. However, it seems
worth serious consideration, given the enormous
sums that could be saved by universities and funders by going it
alone. But the benefits would not just
be financial; I think we could greatly improve science by changing the point in
the research process when reviewer input occurs, and by fostering a more open
and collaborative style of publishing.
This article was first published on the Guardian Science Headquarters blog on 12 May 2015.