Saturday, 27 March 2021

Open data: We know what's needed - now let's make it happen

©Cartoonstock.com

 

The week after the end of University term and before the Easter break is chock-a-block with conferences and meetings. A great thing about them all being virtual is that you can go to far more. My brain is zinging (and my to-do list sadly neglected) after 3 days of the Computational Research Integrity Conference (CRI-Conf) hosted at Syracuse University, and then a 1.5 hr webinar organised by the Center for Biomedical Research Transparency (CBMRT) on Accelerating Science in the Age of COVID-19. Synthesising all the discussion in my head gives one clear message: we need to get a grip on the failure of authors to make their data and analysis scripts open. 

For a great overview of CRI-Conf, see this blog by one of the presenters, Debora Weber-Wulff. The meeting focused mainly on fraud, and the development of computational methods to detect it, with most emphasis on doctored images. There were presentations by those who detect manipulated data (notably Elisabeth Bik and Jennifer Byrne), by technology experts developing means of automating image analysis, by publishers and research integrity officers who attempt to deal with the problem, and by those who have created less conventional means to counteract the tide of dodgy work (Boris Barbour from PubPeer, and Ivan Oransky from Retraction Watch). The shocking thing was the discovery that fabricated data in the literature is not down to a few bad apples: there are "paper mills" that generate papers for sale, which are readily snapped up by those who need them for professional advancement.

CRI-Conf brought together people who viewed the problem from very different perspectives, with a definite tension between those representing "the system" - publishers, institutions and funders - and those on the outside - the data sleuths, PubPeer, Retraction Watch. The latter are impatient at the failure of the system to act promptly to remove fraudulent papers from the literature; the former respond that they are doing a lot already, but the problem is immense and due process must be followed. There was, however, one point of agreement. Life would be easier for all of us if data were routinely published with papers. Research integrity officers in particular noted that a great deal of time in investigations is spent tracking down data.

The CBMRT webinar yesterday was particularly focused on the immense amount of research that has been generated by the COVID-19 pandemic. Ida Sim from Vivli noted that only 70 of 924 authors of COVID-19 clinical trials agreed to share their data within 6 months. John Inglis, co-founder of bioRxiv and medRxiv, cited Marty Makary's summary of preprints: "a great disruptor of a clunky, slow system never designed for a pandemic". Deborah Dixon from Oxford University Press noted how open science assumed particular importance in the pandemic: open data not only make it possible to check a study's findings, but also can be used fruitfully as secondary data for new analyses. Finally, a clinician's perspective was provided by Sandra Petty. Her view is that there are too many small underpowered studies: we need large collaborative trials. My favourite quote: "Noise from smaller studies rapidly released to the public domain can create a public health issue in itself".

Everyone agreed that open data could be a game-changer, but clearly it was still the exception rather than the rule. I asked whether it should be made mandatory, not just for journal articles but also for preprints. The replies were not encouraging. Ida Sim, who understands the issues around data-sharing better than most, noted that there were numerous barriers - there may be legal hurdles to overcome, and those doing trials may not have the expertise, let alone the time, to get their data into an appropriate format. John Inglis noted it would be difficult for moderators of preprint servers to implement a data-sharing requirement for preprints, and that many authors would find it challenging.

I am not, however, convinced. It's clear that there is a tsunami of research on COVID-19, much of it of very poor quality. This is creating problems for journals, reviewers, and for readers, who have to sift through a mountain of papers to try and extract a signal from the noise. Setting the bar for publication - or indeed pre-prints - higher, so that the literature only contains papers that can be (a) checked and (b) used for meta-analysis and secondary studies, would reduce the mountain to a hillock and allow us to focus on the serious stuff. 

The pandemic has indeed accelerated the pace of research, but things done in a hurry are more likely to contain errors, so it is more important than ever to be able to check findings, rather than just trusting authors to get it right. I'm thinking of the recent example where an apparent excess of COVID-19 in toddlers was found to be due to restricting age to a 2-digit number, so someone aged 102 would be listed as a 2-year-old. We may be aghast, but I feel "there but for the grace of God go I". Unintentional errors are everywhere, and when the stakes are as high as they are now, we need to be able to check and double-check the findings of studies that are going to be translated into clinical practice. That means sharing analysis code as well as data. As Philip Stark memorably said, "Science should be 'show me', not 'trust me'".
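To make the point concrete, here is a minimal, hypothetical sketch of how an error like the toddler one can arise, and how easily it is caught if the analysis code is shared for inspection. The function names and data are invented for illustration; this is not the code from the report in question.

```python
# Hypothetical illustration of a 2-digit age restriction (not the actual pipeline
# behind the COVID-19 toddler anomaly described above).

def parse_age_two_digits(raw: str) -> int:
    """Buggy parser: keeps only the last two characters of the age field."""
    return int(raw[-2:])  # "102" -> "02" -> 2: a 102-year-old becomes a toddler


def parse_age(raw: str) -> int:
    """Corrected parser: no arbitrary width restriction, plus a sanity check."""
    age = int(raw)
    if not 0 <= age <= 120:
        raise ValueError(f"Implausible age: {age}")
    return age


records = ["34", "7", "102", "88"]
print([parse_age_two_digits(r) for r in records])  # [34, 7, 2, 88] -- wrong
print([parse_age(r) for r in records])             # [34, 7, 102, 88]
```

A one-line slip of this kind is invisible in a published table of results, but trivial to spot when the script that produced the table is open.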

All in all, my sense is that we still have a long way to go before we shift the balance in publishing from a focus on the needs of authors (to get papers out rapidly) to an emphasis on users of research, for whom the integrity of the science is paramount. As Besançon et al put it: Open Science saves lives.

Saturday, 13 March 2021

Time for publishers to consider the rights of readers as well as authors

 

© cartoonstock.com
I've just been reading this piece entitled: "Publication ethics: Barriers in resolving figure corrections" by Lataisia Jones, on the website of the American Society for Microbiology, which publishes several journals. Microbiology is a long way from my expertise and interests, but I have been following the work of Elisabeth Bik, data sleuth extraordinaire, for some time - see here. As Bik points out, the responses (or more often lack of response) she gets when she raises concerns about papers are similar to those seen in other fields where whistleblowers try to flag up errors (e.g. this Australian example). 

It's clear that there are barriers to correcting the scientific record when errors are identified, and so I was pleased to see a piece tackling this head-on, which attempts to explain why responses by journals and publishers often appear to be so slow and unsatisfactory. However, I felt the post missed some key points that need to be taken seriously by publishers and editors. 

The post starts by saying that: "Most figure concerns are created out of error and may present themselves in the form of image duplication, splicing and various figure enhancements." I think we need to have that "most" clarified in the form of a percentage. Yes, of course, we all make mistakes, but many of the issues flagged up by Bik are not the kinds of error made by someone going "oops" as they prepare their figures. I felt that on the one hand it is crucial to be aware that many papers are flawed because they contain honest errors, but that fact should not lead us to conclude that most cases of problematic images are of this kind. At least, not until there is hard evidence on that point. 

The post goes on to document the stages that are gone through when an error has been flagged up, noting in particular these guidelines produced by the Committee on Publication Ethics (COPE). First, the author is contacted. "Since human error is a common reason behind published figure concerns, ASM remains mindful and vigilant while investigating to prevent unnecessarily tarnishing a researcher’s reputation. Oftentimes, the concern does not proceed past the authors, who tend to be extremely responsive." So here again, Jones emphasises human error as a "common reason" for mistakes in figures, and also describes authors as "extremely responsive". And here again, I suggest some statistics on both points would be of considerable interest. 

Jones explains that this preliminary step may take a long time when several years have elapsed between publication and the flagging up of concerns. The authors may be hard to contact, and the data may no longer be available. Assuming the authors give a satisfactory response, what happens next depends on whether the error can be corrected without changing the basic results or conclusions. If so, then a Correction is published. Interestingly, Jones says nothing about what happens if an honest error does change the basic results or conclusions. I think many readers would agree that in that case there should be a retraction, but I sense a reluctance to accept that, perhaps because Jones appears to identify retraction with malpractice. 

She describes the procedure followed by ASM if the authors do not have a satisfactory response: the problem is passed on to the authors' institution for investigation. As Jones points out, this can be an extended process, as it may require identification of old data, and turn into an inquiry into possible malpractice. Such enquiries often move slowly because the committee members responsible for this work are doing their investigations on top of their regular job. And, as Jones notes: "Additionally, multiple figure concerns and multiple papers take longer to address and recovering the original data files could take months alone." So, the institution feeds back its conclusions (typically after months or possibly years), which may return us to the point where it is decided a Correction is appropriate. But, "If the figure concerns are determined to have been made intentionally or through knowingly manipulating the data, the funding agencies are notified." And yet another investigation starts up, adding a few more months or years to the process. 

So my reading of this is that if the decision to make a Correction is not reached, the publisher and journal at this point hand all responsibility over to other agencies - the institution and the funders. The post by Jones at no point mentions the conditions that need to be met for the paper to actually be retracted (in which case it remains in the public domain but with a retraction notice) or withdrawn (in which case it is removed). Indeed, the word 'retract' does not appear at all in her piece. 

What else is missing from all of this? Any sense of responsibility to other researchers and the general public. A peer-reviewed published article is widely regarded as a credible piece of work. It may be built on by other researchers, who assume they can trust the findings. Its results may be used to inform treatment of patients or, in other fields, public policy. Leaving an erroneous piece of work in a peer-reviewed journal without any indication that concerns have been raised is rather like leaving a plate of cookies out for public consumption, when you know they may be contaminated. 

Ethical judgements by publishers need to consider their readers, as well as their authors. I would suggest they should give particularly high priority to published articles where concerns have not been adequately addressed by authors, and which also have been cited by others. The more citations, the greater the urgency to act, as citations spawn citations, with the work achieving canonical status in some cases. In addition, if there are multiple papers by the same author with concerns, surely this should be treated as a smoking gun, rather than an excuse for why it takes years to act.

It should not be necessary to wait until institutions and funders have completed investigations into possible malpractice. Malpractice is actually a separate issue here: the key point for readers of the journal is whether the published record is accurate. If it is inaccurate - either due to honest error or malpractice - the work should be retracted, and there is plenty of precedent for retraction notices to specify the reason for retraction. This also applies to the situation where there is a realistic concern about the work (such as manipulated figures or internally inconsistent data) and the author cannot produce the raw data that would allow for the error to be identified and corrected. In short, it should be up to the author to ensure that the work is transparent and reproducible. Retaining erroneous work in a journal is not a neutral act. It pollutes the scientific literature and ignores the rights of readers not to be misled or misinformed.

Wednesday, 3 March 2021

University staff cuts under the cover of a pandemic: the cases of Liverpool and Leicester

I had a depressing sense of déjà vu last week on learning that two UK Universities, University of Liverpool and University of Leicester, had plans for mass staff redundancies, affecting many academic psychologists among others. I blogged about a similar situation affecting King's College London 7 years ago.

I initially wondered whether these new actions were related to the adverse effect of the pandemic on university finances, but it's clear that both institutions have been trying to bring in staff cuts for some years. The pandemic seems not so much the reason for the redundancies as a smokescreen behind which university administration can smuggle in unpopular measures.  

Nothing, of course, is for ever, and universities have to change with the times. But in both these cases, the way in which cuts are being made seems both heartless and stupid, and has understandably attracted widespread condemnation (see public letters below). There are differences in the criteria used to select people for redundancy at the two institutions, but the consequences are similar.  

The University of Liverpool has used a metrics-based approach, singling out people from the Faculty of Health and Life Sciences who don't meet cutoffs in terms of citations and grant income. Elizabeth Gadd (@LizzieGadd) noted on Twitter that SciVal's Field-Weighted Citation Impact, which was being used to evaluate staff, is unstable and unreliable with small sample sizes. Meanwhile, and apparently in a different universe, the University has recently advertised for a "Responsible Metrics Implementation Officer", funded by the Wellcome Trust, whose job is to "lead the implementation of a project to embed the principles of the Declaration on Research Assessment (DORA) in the university's practice". Perhaps their first job will be to fire the people who have badly damaged the University's reputation by their irresponsible use of metrics (see also this critique in Times Higher Education).  
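To see why a field-weighted citation score is so noisy for individuals with only a handful of papers, here is a minimal simulation sketch. It is not SciVal's actual FWCI calculation; the lognormal citation distribution and all parameter values are assumptions, chosen only to mimic the heavy-tailed nature of citation counts.

```python
# Sketch: how a "citations / field average" score behaves for researchers with
# few vs many papers, under an assumed heavy-tailed citation distribution.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical field-wide citation counts (heavy-tailed by assumption)
field = rng.lognormal(mean=1.5, sigma=1.2, size=100_000)
field_mean = field.mean()


def fwci_like(n_papers, n_researchers=10_000):
    """Mean citations per paper divided by the field mean, for simulated researchers."""
    samples = rng.lognormal(mean=1.5, sigma=1.2, size=(n_researchers, n_papers))
    return samples.mean(axis=1) / field_mean


for n in (3, 10, 100):
    scores = fwci_like(n)
    lo, hi = np.percentile(scores, [5, 95])
    print(f"{n:>3} papers: 5th-95th percentile of score = {lo:.2f} to {hi:.2f}")
```

With only a few papers, identical "average" researchers in this toy model can score anywhere from well below to well above 1, which is exactly why using such a number as a redundancy cutoff is so troubling.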

It is worth noting too that the advertisement proudly boasts that the University is a recipient of an Athena SWAN silver award, when among those targeted for redundancy are some who have worked tirelessly on Athena SWAN applications for institutes across the faculty where redundancies are planned. Assembling one of these applications is widely recognised as taking up as much time as the preparation of a major grant proposal. There won't be another Athena SWAN exercise for a few years and so it seems the institution is happy to dispose of the services of those who worked to achieve this accolade.  

The University of Leicester has adopted a different strategy, compiling a hit list that is based not on metrics, but on subject area. They are discussing proposals for redundancies* at five academic departments and three professional services units in order to "secure the future of the university". Staff who have worked at the university for decades have described the surreal experience of receiving notifications of the planned job losses alongside warm-hearted messages emphasising how much the university wants to support their mental health and wellbeing.  

I lived through a purge some 20 years ago when I was based at the Medical Research Council's Applied Psychology Unit in Cambridge. MRC used the interregnum between one director retiring and a new one being appointed to attempt to relocate staff who were deemed to be underperforming. As with all these exercises, management appeared to envisage an institution that would remain exactly as before with a few people subtracted. But of course it doesn't work that way. Ours was a small and highly collegial research unit, and living through the process of having everyone undergo a high-stakes evaluation affected all of us. I can't remember how long the process went on for, but it felt like years. A colleague who, like me, was not under threat, captured it well when he said that he had previously felt a warm glow when he received mail with the MRC letterhead. Now he felt a cold chill in anticipation of some fresh hell. I've had similar conversations at other universities with senior academics who can recall times when those managing the institution were regarded as benevolent, albeit in a (desirably) distant manner. Perhaps it helped that in those days vice chancellors often came from the ranks of the institution's senior academics, and saw their primary goal as ensuring a strong reputation for teaching and research.  

As the sector has been increasingly squeezed over the years, this cosy scenario has been overturned, with a new cadre of managers appearing, with an eye on the bottom line. The focus has been on attracting student consumers with glitzy buildings and facilities, setting up overseas campuses to boost revenues, and recruiting research superstars who will embellish the REF portfolio. Such strategies seldom succeeded, with many universities left poorer by the same vice-chancellors who were appointed because of their apparent business acumen.  

There has been a big shift from the traditional meaning of "university" as a community of teachers and scholars. Academic staff are seen as dispensable, increasingly employed on short-term contracts. Whereas in the past there might be a cadre of academics who felt loyal to their institution and pride in being part of a group dedicated to the furtherance of knowledge, we now have a precariat who live in fear of what might happen if targets are not met. And this is all happening at a time when funders are realising that effective research is done by teams of people, rather than lone geniuses (see e.g. this report). Such teams can take years to build, but can be destroyed overnight, by those who measure academic worth by criteria such as grant income, or whether the Vice Chancellor understands the subject matter. I wonder too what plans there are for graduate students whose supervisors are unexpectedly dismissed - if interdependence of the academic community is ignored, there will be impacts that go beyond those in the immediate firing line.  

Those overseeing redundancies think they can cut out a pound of flesh from the university body, but unlike Shylock, who knew full well what he was doing, they seem to believe they can do so without weakening the whole institution. They will find to their cost that they are doing immense damage, not just to their reputation, but to the satisfaction and productivity of all who work and study there.  

Public letters of concern 

University of Leicester, Departments of Neuroscience, Psychology and Behaviour https://tinyurl.com/SaveNPB-UoL  

University of Liverpool, use of inappropriate metrics https://docs.google.com/document/d/1OJ28MT78MCMNkUtFXkLw3xROUxK7Mfr8yN8b-E2KDUg/edit#  

Equality and Diversity implications, University of Liverpool https://forms.gle/2AyJ2nHKiEchA3dP9

 

*correction made 5th March 2021