Monday, 20 March 2023

A suggestion for eLife

According to a piece today in Nature, there’s uproar at eLife, where a new publishing model has been introduced by Editor-in-Chief Michael Eisen. The idea is that authors first submit their paper as a preprint, and editors then decide whether it is sent out for review – at a cost of $2000 to the author. Papers that are reviewed are published, with the reviews and an editorial comment, regardless of any criticisms in the reviews. Authors have an opportunity to update the article to take reviewer comments into account if they wish, but once a paper has been reviewed, it cannot be rejected.

 

Of course, this does not really remove a “quality control” filter by the journal – it just moves it to the stage where a decision is made on whether or not to send the paper out for review.

The guidance given to editors in making that judgement is “can you generate high-quality and broadly useful public reviews of this paper?”  Concerns have been expressed over whether this would disadvantage less well-known authors, if editors preferred to play it safe and only send papers for review if the authors had a strong track record.  But the main concern is that there will be a drop in quality of papers in eLife, which will lose its reputation as a high-prestige outlet.

 

I have a simple suggestion for how to counteract such a concern, and that is that the journal should adopt a different criterion for deciding which papers to review – this should be done solely on the basis of the introduction and methods, without any knowledge of the results. Editors could also be kept unaware of the identity of authors.

 

If eLife wants to achieve a distinctive reputation for quality, it could do so by only taking forward to review those articles that have identified an interesting question and tackled it with robust methodology. It’s well-known that editors and reviewers tend to be strongly swayed by novel and unexpected results, and will disregard methodological weaknesses if the findings look exciting. If authors had to submit a results-blind version of the manuscript in the first instance, then I predict that the initial triage by editors would look rather different.  The question for the editor would no longer be one about the kind of review the paper would generate, but would focus rather on whether this was a well-conducted study that made the editor curious to know what the results would look like.  The papers that subsequently appeared in eLife would look different to those in its high-profile competitors, such as Nature and Science, but in a good way.  Those ultra-exciting but ultimately implausible papers would get filtered out, leaving behind only those that could survive being triaged solely on rationale and methods.

 

 

 

Wednesday, 22 February 2023

Open letter to CNRS

Need for transparent and robust response when research misconduct is found

(French translation available in Appendix 3 of this document)

This Open Letter is prompted by an article in Le Monde describing an investigation into alleged malpractice at a chemistry lab in CNRS-Université Sorbonne Paris Nord and the subsequent report into the case by CNRS. The signatories are individuals from different institutions who have been involved in investigations of research misconduct in different disciplines, all concerned that the same story is repeated over and over when someone identifies unambiguous evidence of data manipulation.  Quite simply, the response by institutions, publishers and funders is typically slow, opaque and inadequate, and is biased in favour of the accused, paying scant attention to the impact on those who use research, and placing whistleblowers in a difficult position.

 

The facts in this case are clear. More than 20 scientific articles from the lab of one principal investigator have been shown to contain recycled and doctored graphs and electron microscopy images. That is, experiments that should have produced distinct results are illustrated by identical figures, with the axis legends altered by copying and pasting numbers on top of the previous ones.

 

Everyone is fallible, and no scientist should be accused of malpractice when honest errors are committed. We need also to be aware of the possibility of accusations made in bad faith by those with an axe to grind. However, there comes a point when there is a repeated pattern of errors for a prolonged period for which there is no innocent explanation. This point is surely reached here: the problematic data are well-documented in a number of PubPeer comments on the articles (see links in Appendix 1 of this document).

 

The response by CNRS to this case, as explained in their report (see Appendix 2 of this document), was to request correction rather than retraction of what were described as “shortcomings and errors”, and to accept the scientist’s account that there was no intentionality, despite clear evidence of extensive manipulation and reuse of figures. A disciplinary sanction of exclusion from duties was imposed for just one month.

 

So what should happen when fraud is suspected?  We propose that there should be a prompt investigation, with all results transparently reported. Where there are serious errors in the scientific record, then the research articles should immediately be retracted, any research funding used for fraudulent research should be returned to the funder, and the person responsible for the fraud should not be allowed to run a research lab or supervise students. The whistleblower should be protected from repercussions.

 

In practice, this seldom happens. Instead, we typically see, as in this case, prolonged and secret investigations by institutions, journals and/or funders. There is a strong bias to minimize the severity of malpractice, and to recommend that published work be “corrected” rather than retracted.

 

One can see why this happens. First, all of those concerned are reluctant to believe that researchers are dishonest, and are more willing to assume that the concerns have been exaggerated. It is easy to dismiss whistleblowers as deluded, overzealous or jealous of another’s success. Second, there are concerns about reputational risk to an institution if accounts of fraudulent research are publicised. And third, there is a genuine risk of litigation from those who are accused of data manipulation. So in practice, research misconduct tends to be played down.

 

However, this failure to act effectively has serious consequences:

1.   It gives credibility to fictitious results, slowing down the progress of science by encouraging others to pursue false leads. This can be particularly damaging for junior researchers who may waste years trying to build on invented findings. And in the age of big data, where results in fields such as genetics and pharmaceuticals are harvested to contribute to databases of knowledge, erroneous data pollutes the databases on which we depend.

2.   Where the research has potential for clinical or commercial application, there can be direct damage to patients or businesses.

3.   It allows those who are prepared to cheat to compete with other scientists to gain positions of influence, and so perpetuate further misconduct, while damaging the prospects of honest scientists who obtain less striking results.

4.   It is particularly destructive when data manipulation involves the Principal Investigator of a lab. This creates challenges for honest early-career scientists based in the lab where malpractice occurs – they usually have the stark options of damaging their career prospects by whistleblowing, or leaving science. Those with integrity are thus removed from the pool of active researchers. Those who remain are those who are prepared to overlook integrity in return for career security.  CNRS has a mission to support research training: it is hard to see how this can be achieved if trainees are placed in a lab where misconduct occurs.

5.   It wastes public money from research grants.

6.   It damages public trust in science and trust between scientists.

7.   It damages the reputation of the institutions, funders, journals and publishers associated with the fraudulent work.

8.   Whistleblowers, who should be praised by their institution for doing the right thing, are often made to feel that they are somehow letting the side down by drawing attention to something unpleasant. They are placed at high risk of career damage and stress, and without adequate protection by their institution, may be at risk of litigation. Some institutions have codes of conduct where failure to report an incident that gives reasonable suspicion of research misconduct is itself regarded as misconduct, yet the motivation to adhere to that code will be low if the institution is known to brush such reports under the carpet.

 

The point of this letter is not to revisit the rights and wrongs of this specific case or to promote a campaign against the scientist involved. Rather, we use this case to illustrate what we see as an institutional malaise that is widespread in scientific organisations.  We write to CNRS to express our frustration at their inadequate response to this case, and to ask that they review their disciplinary processes and consider adopting a more robust, timely and transparent process that treats data manipulation with the seriousness it deserves, and serves the needs not just of their researchers, but also of other scientists, and of the public who ultimately provide the research funding.

 

Signed by:

 

Dorothy Bishop, FRS, FBA, FMedSci, Professor of Developmental Neuropsychology (Emeritus), University of Oxford, UK.

 

Patricia Murray, Professor of Stem Cell Biology and Regenerative Medicine, University of Liverpool, UK.

 

Elisabeth Bik, PhD, Science Integrity Consultant

 

Florian Naudet, Professor of Therapeutics, Université de Rennes and Institut Universitaire de France, Paris

 

David Vaux, AO FAA, FAHMS, Honorary Fellow WEHI, & Emeritus Professor University of Melbourne, Australia

 

David A. Sanders, Department of Biological Sciences, Purdue University, USA.

 

Ben W. Mol, Professor of Obstetrics and Gynecology, Melbourne, Australia

 

Timothy D. Clark, PhD, School of Life & Environmental Sciences, Deakin University, Geelong, Australia

 

David Robert Grimes, PhD, School of Medicine, Trinity College Dublin, Ireland

 

Fredrik Jutfelt, Professor of Animal Physiology, Norwegian University of Science and Technology, Trondheim, Norway

 

Nicholas J. L. Brown, PhD, Linnaeus University, Sweden

 

Dominique Roche, Marie Skłodowska-Curie Global Fellow, Institut de biologie, Université de Neuchâtel, Switzerland

 

Lex M. Bouter, Professor Emeritus of Methodology and Integrity, Amsterdam University Medical Center and Vrije Universiteit, Amsterdam, The Netherlands

 

Josefin Sundin, PhD, Department of Aquatic Resources, Swedish University of Agricultural Sciences, Sweden

 

Nick Wise, PhD, Engineering Department, University of Cambridge, UK

 

Guillaume Cabanac, Professor of Computer Science, Université Toulouse 3 – Paul Sabatier and Institut Universitaire de France

 

Iain Chalmers, DSc, MD, FRCPE, Centre for Evidence-Based Medicine, University of Oxford.

 

Response from CNRS, received 28th Feb 2023. 

French version below. Version en français plus bas.

========================================

Dear Colleagues, I have read the open letter you sent me by email on February 22, entitled "Need for transparent and robust response when research misconduct is found". 

I am very surprised that you did not think it necessary to contact the CNRS before publishing this open letter. You are obviously not familiar, or at least very unfamiliar, with CNRS policy and procedures regarding scientific integrity. 

The CNRS deals with these essential issues without any complacency, but tries to be fair and to ensure that the sanctions are proportional to the misconduct committed, while respecting the rules of the French civil service. 

Your letter mixes generalities about the so-called actions of scientific institutions with paragraphs that apply, perhaps, to the CNRS. If you wish to know how scientific misconduct is handled at the CNRS, I invite you to contact our scientific integrity officer, Rémy Mosseri.

Kind regards, 

Antoine Petit

Professor Antoine Petit, CNRS CEO

======================================== 

Chers et chères collègues, J’ai pris connaissance de la lettre ouverte que vous m’avez adressée par courriel le 22 février dernier dont le titre est « Nécessité d'une réponse transparente et robuste en cas de découverte de manquements à l’intégrité scientifique ». 

Je suis très étonné que vous n’ayez pas jugé utile de prendre contact avec le CNRS avant de publier cette lettre ouverte. Vous ne connaissez visiblement pas, ou au minimum très mal, la politique et les procédures du CNRS en ce qui concerne l’intégrité scientifique. 

 Le CNRS traite ces questions essentielles sans aucune complaisance mais en essayant d’être justes et que les sanctions soient proportionnelles aux fautes commises, tout en respectant les règles de la fonction publique française. 

Votre lettre mélange des généralités sur les soi-disant agissement des institutions scientifiques et des paragraphes qui s’appliquent, peut-être, au CNRS. Si vous souhaitez savoir comment les méconduites scientifiques sont traitées au CNRS, je vous invite à prendre contact avec notre référent intégrité scientifique, Rémy Mosseri 

Bien à vous,

================

Antoine Petit CNRS Président - Directeur général

Saturday, 31 December 2022

New Year's Eve Quiz 2022

Dodgy journals special

With so much happening in the world this year, it’s easy to miss some recent developments in the world of academic publishing.  Test your knowledge here, to see how alert you are to news from the dark underbelly of research communication.

 

1. Which of these is part of a paper mill¹?

 


 


2. How many of these tortured phrases² can you decode?

 

a) In context of chemistry experiment: “watery arrangements”

b) in context of pharmaceuticals: “medication conveyance”

c) in context of statistics: “irregular esteem”

d) in context of medicine: “bosom peril”

e) in context of optical sensors: “wellspring of blunder”

 

 

3. Which journal published a paper beginning with the sentence:

Persistent harassment is a major source of inefficiency and your growth will likely increase over the next several years.

 

and ending with:

The method outlined here can be used to easily illuminate clinical beginnings about confinement in appropriate treatment, sensitivity and the number of treatment sessions, and provides an incentive to investigate the brain regions of two mice and humans

 

a) Proceedings of the National Academy of Sciences

b) Acta Scientifica

c) Neurosciences and Brain Imaging

d) Serbian Journal of Management

 

 

4. What have these authors got in common?  

 

Georges Chastellain, Jean Bodel, Suzanne Lilar, Henri Michaux, and Pierre Mertens

 

a) They are all eminent French literary figures

b) They all had a cat called Fifi

c) They are authors of papers in the Research Journal of Oncology, vol 6, issue 5

d) They were born in November

 

5. What kind of statistical test would be appropriate for these data? 

a) t-test

b) no-way analysis of variance

c) subterranean insect optimisation

d) flag to commotion ratio

 

6. Many eminent authors have published in one of these Prime Scholars journals:

i) Polymer Sciences

ii) Journal of Autacoids

iii) Journal of HIV and Retrovirus

iv) British Journal of Research

 

Can you match the author to the journal?

a) Jane Austen

b) Kurt Vonnegut

c) Walt Whitman

d) Herman Hesse

e) Tennessee Williams

f) Ayn Rand

 

 

7. Some poor authors have their names badly mangled by those who appropriate them while attempting to evade plagiarism checks. Can you reconstruct the correct versions of these two names (and the affiliation for author 1)?

 

---------------------------------------------------------------------------------------



 

 

Final thoughts

 

While the absurdity of dodgy journals can make us laugh, there is, of course, a dark side to all of this that cannot be ignored. The huge demand for places to publish has not only created obviously predatory publishers, who will publish anything for money, but has also allowed fraud to infiltrate supposedly reputable publishers. Papermills are seen as a growing problem, and all kinds of fraud abound, even in some of the upper echelons of academia. As I argued in my last blogpost, it’s far too easy to get away with academic misconduct, and the incentives for researchers to fake data and publications are growing all the time. My New Year’s wish is that funders, academic societies and universities start to grapple with this problem more urgently, so that there won’t be material for such a quiz in 2023.

 

 

References

 

1 COPE & STM. (2022). Paper mills: Research report from COPE & STM. Committee on Publication Ethics and STM. https://doi.org/10.24318/jtbG8IHL

 

2 Cabanac, G., Labbé, C., & Magazinov, A. (2021). Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals (arXiv:2107.06751). arXiv. https://doi.org/10.48550/arXiv.2107.06751

 

 

ANSWERS

 

1. B is a solicitation for an academic paper mill. A is a flour mill and C is a paper mill of the more regular kind. B was discussed here.

 

2. Who knows? Best guesses are:

a) aqueous solutions

b) drug delivery

c) random value

d) breast cancer

e) source of error

 

If you enjoy this sort of word game, you can help by typing "tortured phrases" into PubPeer and checking out the papers that have been detected by the Problematic Paper Screener.  
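For the curious, the core idea behind this kind of screening – matching text against a curated list of known tortured phrases – can be sketched in a few lines of Python. This is a toy illustration only: the phrase list below is just the five examples from the quiz, and the real Problematic Paper Screener uses a far larger list and additional checks.

```python
# Toy sketch of tortured-phrase detection (illustrative only).
# The phrase list is the handful of examples from this quiz, mapped
# to their presumed originals; a real screener uses thousands of entries.

TORTURED_PHRASES = {
    "watery arrangements": "aqueous solutions",
    "medication conveyance": "drug delivery",
    "irregular esteem": "random value",
    "bosom peril": "breast cancer",
    "wellspring of blunder": "source of error",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original) pairs found in text."""
    lowered = text.lower()
    return [(phrase, original)
            for phrase, original in TORTURED_PHRASES.items()
            if phrase in lowered]

hits = flag_tortured_phrases(
    "The sensor's wellspring of blunder was traced to the watery arrangements."
)
```

A naive substring match like this produces false positives and misses paraphrases, which is why the real screener combines phrase matching with other signals before a human checks each flagged paper.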

 

3. c) https://www.primescholars.com/articles/a-short-note-on-mechanism-of-brain-in-animals-and-humans.pdf 

 

4. c) see https://www.primescholars.com/archive/iprjo-volume-6-issue-5-year-2022.html 

If you answered (a) you were misled by the Poirot fallacy – all of them except Bodel are Belgian.

 

5. Fortunately the paper has been retracted and so no answer is required. For further details see here.

c) is the tortured-phrase version of “ant colony optimisation” (which is a real thing!) and d) of “signal-to-noise ratio”.

 

 

6.        

Jane Austen  (ii) and (iv)

Kurt Vonnegut (ii) and (iii)

Walt Whitman (i) (ii) and (iv)

Herman Hesse (iv)

Tennessee Williams (ii)

Ayn Rand (iii)

 

See: https://www.primescholars.com/archive/jac-volume-3-issue-2-year-2022.html 

https://www.primescholars.com/archive/ipbjr-volume-9-issue-7-year-2022.html 

https://www.primescholars.com/archive/ipps-volume-7-issue-4-year-2022.html 

https://www.primescholars.com/archive/ipps-volume-7-issue-2-year-2022.html 

https://www.primescholars.com/archive/ipbjr-volume-9-issue-9-year-2022.html 

https://www.primescholars.com/archive/ipbjr-volume-9-issue-7-year-2022.html 

 

7.

This article is available here: https://www.primescholars.com/archive/jac-volume-2-issue-3-year-2020.html.  A genuine email has been added to the paper and is the clue to the person whose identity was used for this paper: Williams, GM with address at New York Medical College, Valhalla campus. Given the mangling of his name, I suspect he is no more aware of his involvement in the paper than Jane Austen or Kurt Vonnegut.

For the second example, see https://pubpeer.com/publications/B7E65FDF7565448A0507B32123E4D8




Friday, 16 December 2022

When there are no consequences for misconduct: Parallels between politics and science

 

Gustave Doré: Illustration for Paradise Lost

(Updated 17 Dec 2022) 

As children, we grow up with stories of the battle between good and evil, in which good ultimately triumphs. In adulthood, we know things can be more complicated: bad people can get into positions of power and make everyone suffer. And yet, we tell ourselves, we have a strong legal framework, there are checks and balances, and a political system that aspires to be free and fair.

 

During the last decade, I started for the first time to have serious doubts about those assumptions. In both the UK and the US, the same pattern is seen repeatedly: the media report on a scandal involving the government or a public figure, there is a brief period of public outrage, but then things continue as before.

 

In the UK we have become accustomed to politicians lying to Parliament and failing to correct the record, to bullying by senior politicians, and to safety regulations being ignored.  The current scandal is a case of disaster capitalism where government cronies made vast fortunes from the Covid pandemic by gaining contracts for personal protective equipment – which was not only provided at inflated prices, but then could not be used as it was substandard.

 

These are all shocking stories, but even more shocking is the lack of any serious consequences for those who are guilty. In the past, politicians would have resigned for minor peccadilloes, with pressure from the Prime Minister if need be. During Boris Johnson’s premiership, however, the Prime Minister was part of the problem. 

 

During the Trump presidency in the US, Sarah Kendzior wrote about “saviour syndrome” – the belief people had that someone would come along and put things right. As she noted: “Mr. Trump has openly committed crimes and even confessed to crimes: What is at stake is whether anyone would hold him accountable.” And, sadly, the answer has been no.

 

No consequences for scientific fraud

So what has this got to do with science?  Well, I get the same sinking feeling that there is a major problem, everyone can see there's a problem, but nobody is going to rescue us. Researchers who engage in obvious malpractice repeatedly get away with no consequences.  This has been a recurring theme from those who have exposed academic papermills (Byrne et al., 2021) and/or reported manipulation of figures in journal articles (Bik et al., 2016).  For instance, when Bik was interviewed by Nature, she noted that 60-70% of the 800 papers she had reported to journals had not been dealt with within 5 years. That matches my more limited experience; if one points out academic malpractice to publishers or institutions, there is often no reply. Those who do reply typically say they will investigate, but then you hear no more.

 

At a recent symposium on Research Integrity at Liverpool Medical Institution*, David Sanders (Purdue University) told of repeated experiences of being given the brush-off by journals and institutions when reporting suspect papers. For instance, he reported an article that had simply recycled a table from a previous paper on a different topic. The response was “We will look into it”. “What”, said David incredulously, “is there to look into?”. This is the concern – that there can be blatant evidence of malpractice within a paper, yet the complainant is ignored. In this case, nothing happened. There are honorable exceptions, but it seems shocking that serious and obvious errors in work are not dealt with in a prompt and professional manner.

 

At the same seminar, there was a searing presentation by Peter Wilmshurst, whose experiences of exposing medical fraud by powerful individuals and organisations have led him to be the subject of numerous libel complaints.  Here are a few details of two of the cases he presented:

 

Paolo Macchiarini: Convicted in 2022 of causing bodily harm with an experimental transplant of a synthetic windpipe that he performed between 2011 and 2012. Wilmshurst noted that the descriptions of the experimental surgery in journals were incorrect. For a summary see this BMJ article. A 2008 paper by Macchiarini and colleagues is still published in the Lancet, despite demands for it to be retracted.

 

Don Poldermans: An eminent cardiologist who conducted a series of studies on perioperative beta-blockers, leading to their recommendation in guidelines from the European Society of Cardiology, whose task force he chaired. A meta-analysis challenged that conclusion, showing that mortality was increased; an investigation found that work by Poldermans had serious integrity problems, and he was fired. Nevertheless, the papers have not been retracted. Wilmshurst estimated that thousands of deaths would have resulted from physicians following the guidelines recommending beta-blockers.

 

The week before the Liverpool meeting, there was a session on Correcting the Record at AIMOS2022.  The four speakers, John Loadsman (anaesthesiology), Ben Mol (Obstetrics and Gynecology), Lisa Parker (Oncology) and Jana Christopher (image integrity) covered the topic from a range of different angles, but in every single talk, the message came through loud and clear: it’s not enough to flag up cases of fraud – you have to then get someone to act on them, and that is far more difficult than it should be.

 

And then on the same day as the Liverpool meeting, Le Monde ran a piece about a researcher whose body of work contained numerous problems: the same graphs were used across different articles that purported to show different experiments, and other figures had signs of manipulation.  There was an investigation by the institution and by the funder, Centre National de la Recherche Scientifique (CNRS), which concluded that there had been several breaches of scientific integrity. However, it seems that the recommendation was simply that the papers should be “corrected”.

 

Why is scientific fraud not taken seriously?

There are several factors that conspire to get scientific fraud brushed under the carpet.

1.     Accusations of fraud may be unfounded. In science, as in politics, there may be individuals or organisations who target people unfairly – either for personal reasons, or because they don’t like their message. Furthermore, everyone makes mistakes and it would be dangerous to vilify researchers for honest errors. So it is vital to do due diligence and establish the facts. In practice, however, this typically means giving the accused the benefit of the doubt, even when the evidence of misconduct is strong.  While it is not always easy to demonstrate intent, there are many cases, such as those noted above, where a pattern of repeated transgressions is evident in published papers – and yet nothing is done.  

2.     Conflict of interest. Institutions may be reluctant to accept that someone is fraudulent if that person occupies a high-ranking role in the organisation, especially if they bring in grant income. Worries about reputational risk also create conflict of interest. The Printeger project is a set of case studies of individual research misconduct cases, which illustrates just how inconsistently these are handled in different countries, especially with regard to transparency vs confidentiality of process. It concluded: “The reflex of research organisations to immediately contain and preferably minimise misconduct cases is remarkable”.

3.     Passing the buck. Publishers may be reluctant to retract papers unless there is an institutional finding of misconduct, even if there is clear evidence that the published work is wrong. I discussed this here.  My view is that leaving flawed research in the public record is analogous to a store selling poisoned cookies to customers – you have a responsibility to correct the record as soon as possible when the evidence is clear to avoid harm to consumers. Funders might be expected to also play a role in correcting the record when research they have funded is shown to be flawed. Where public money is concerned, funders surely have a moral responsibility to ensure it is not wasted on fraudulent or sloppy research. Yet in her introduction to the Liverpool seminar, Patricia Murray noted that the new UK Committee on Research Integrity (CORI) does not regard investigation of research misconduct as within its purview.  

4.     Concerns about litigation. Organisations often have concerns that they will be sued if they make investigations of misconduct public, even if they are confident that misconduct occurred. These concerns are justified, as can be seen from the lawsuits that most of the sleuths who spoke at AIMOS and Liverpool have been subjected to.  My impression is that, provided there is clear evidence of misconduct, the fraudsters typically lose libel actions, but I’d be interested in more information on that point.

 

 

Consequences when misconduct goes unpunished

 

The lack of consequences for misconduct has many corrosive impacts on society. 

 

1.     Political and scientific institutions can only operate properly if there is trust. If lack of integrity is seen to be rewarded, this erodes public confidence. 

 

2.     People depend on us getting things right. We are confronting major challenges to health and to our environment. If we can’t trust researchers to be honest, then we all suffer as scientific progress stalls.  Over-hyped findings that make it into the literature can lead subsequent generations of researchers to waste time pursuing false leads.  Ultimately, people are harmed if we don’t fix fraud.

 

3.     Misconduct leads to waste of resources. It is depressing to think of all the research that could have been supported by the funds that have been spent on fraudulent studies.

 

4.     People engage in misconduct because in a competitive system, it brings them personal benefits, in terms of prestige, tenure, power and salary. If the fraudsters are not tackled, they end up in positions of power, where they will perpetuate a corrupt system; it is not in their interests to promote those who might challenge them.

 

5.     The new generation entering the profession will become cynical if they see that one needs to behave corruptly in order to succeed. They are left with the stark choice of joining in the corruption or leaving the field.

 

 

What can be done?

 

There’s no single solution, but I think there are several actions that are needed to help clean up the mess.

 

1.     Appreciate the scale of the problem.

When fraud is talked about in scientific circles, you typically get the response that “fraud is rare” and “science is self-correcting”. A hole has been blown in the first assumption by the emergence of industrial-scale fraud in the form of academic paper mills. The large publishers are now worried enough about this to be taking concerted action to detect papermill activity, and some of them have engaged in mass retractions of fraudulent work (see, e.g., the case of IEEE retractions here). Yet I have documented on PubPeer numerous new papermill articles in Hindawi special issues appearing since September of this year, when the publisher announced it would be retracting 500 papers. It’s as if the publisher is trying to clean up with a mop while a fire-hose is spewing out fraudulent content. This kind of fraud is different from that reported by Wilmshurst, but it illustrates just how slow the business of correcting the scientific record can be – even when the evidence for fraud is unambiguous.

Publishers trying to mop up papermill outputs
 

Yes, self-correction will ultimately happen in science, when people find they cannot replicate the flawed research on which they try to build. But the time-scale for such self-correction is often far longer than it needs to be.  We have to understand just how much waste of time and money is caused by reliance on a passive, natural evolution of self-correction, rather than a more proactive system to root out fraud.  

 

2.     Full transparency

There’s been a fair bit of debate about open data, and now it is recognised that we also need open code (scripts to generate figures etc.) to properly evaluate results. I would go further, though, and say we also need open peer review. This need not mean that the peer reviewer is identified, but just that their report is available for others to read. I have found open peer reviews very useful in identifying papermill products.

 

3.     Develop shared standards

Organisations such as the Committee on Publication Ethics (COPE) give recommendations for editors about how to respond when an accusation of misconduct occurs.  Although this looks like a start in specifying standards to which reputable journals should adhere, several speakers at the AIMOS meeting suggested that COPE guidelines were not suited for dealing with papermills and could actually delay and obfuscate investigations. Furthermore, COPE has no regulatory power and publishers are under no obligation to follow the guidelines (even if they state they will do so).

 

4.     National bodies for promoting scientific integrity

The Printeger project (cited above) noted that “A typical reaction of a research organisation facing unfamiliar research misconduct without appropriate procedures is to set up ad hoc investigative committees, usually consisting of in-house senior researchers…. Generally, this does not go well.”

In response to some high-profile cases that did not go well, some countries have set up national bodies for promoting scientific integrity. These are growing in number, but those who report cases to them often complain that they are not much help when fraud is discovered – sometimes this is because they lack the funding to defend a legal challenge. But, as with shared standards, this is at least a start, and they may help gather data on the scale and nature of the problem.  

 

5.     Transparent discussion of breaches of research integrity

Perhaps the most effective way of persuading institutions, publishers and funders to act is by publicising when they have failed to respond adequately to complaints.  David Sanders described a case where journals and institutions took no action despite multiple examples of image manipulation and plagiarism from one lab.  He only got a response when the case was featured in the New York Times.

Nevertheless, as the Printeger project noted, relying on the media to highlight fraud is far from ideal – there can be a tendency to sensationalise and simplify the story, with potential for disproportionate damage to both the accused and whistleblowers. If we had trustworthy and official channels for reporting suspected research misconduct, then whistleblowers would be less likely to seek publicity through other means.

 

6.     Protect whistleblowers

In her introduction to the Liverpool Research Integrity seminar, Patricia Murray noted the lack of consistency in institutional guidelines on research integrity. In some cases, the approach to whistleblowers seemed hostile, with the guidelines emphasising that they would be guilty of misconduct if they were found to have made frivolous, vexatious and/or malicious allegations. This, of course, is fair enough, but it needs to be countered by recommendations that allow for whistleblowers who are none of these things, who are doing the institution a service by casting light on serious problems. Indeed, Prof Murray noted that in her institution, failure to report an incident that gives reasonable suspicion of research misconduct is itself regarded as misconduct.  At present, whistleblowers are often treated as nuisances or cranks who need to be shut down. As was evident from the cases of both Sanders and Wilmshurst, they are at risk of litigation, and careers may be put in jeopardy if they challenge senior figures.

 

7.     Changing the incentive structure in science

It’s well-appreciated that if you really want to stop a problem, you should understand what causes it and stop it at source. People do fraudulent research because the potential benefits are large and the costs seem negligible.  We can change that balance by, on the one hand, having serious and public sanctions for those who commit fraud, and on the other, rewarding scientists who emphasise integrity, transparency and accuracy in their work, rather than those who get flashy, eye-catching results.

 

I'm developing my ideas on this topic and I welcome thoughts on these suggestions. Comments are moderated and so do not appear immediately, but I will post any that are on topic and constructive.  



Update 17th December 2022  


Jennifer Byrne suggested one further recommendation, as follows:

To change the incentive structure in scientific publishing. Journals are presently rewarded for publishing, as publishing drives both income (through subscriptions and/or open access charges) and the journal impact factor. In contrast, journals and publishers do not earn income and are not otherwise rewarded for correcting the literature that they publish. This means that the (seemingly rare) journals that work hard to correct, flag and retract erroneous papers are rewarded identically to journals that appear to do very little. Proactive journals appear to represent a minority, but while there are no incentives for journals to take a proactive approach to published errors and misinformation, it should not be surprising that few journals join their efforts. Until publication and correction are recognized as two sides of the same coin, and valued as such, it seems inevitable that we will see a continued drive towards publishing more and correcting very little, or continuing to value publication quantity over quality.

 

Bibliography 

I'll also add here additional resources. I'm certainly not the first to have made the points in this post, and it may be useful to have other articles gathered together in one place.  

 

Besançon, L., Bik, E., Heathers, J., & Meyerowitz-Katz, G. (2022). Correction of scientific literature: Too little, too late! PLOS Biology, 20(3), e3001572. https://doi.org/10.1371/journal.pbio.3001572   

 

Byrne, J. A., Park, Y., Richardson, R. A. K., Pathmendra, P., Sun, M., & Stoeger, T. (2022). Protection of the human gene research literature from contract cheating organizations known as research paper mills. Nucleic Acids Research, gkac1139. https://doi.org/10.1093/nar/gkac1139 

 

Christian, K., Larkins, J., & Doran, M. R. (2022). The Australian academic STEMM workplace post-COVID: a picture of disarray. BioRxiv. https://doi.org/10.1101/2022.12.06.519378 

 

Lévy, R. (2022, December 15). Is it somebody else’s problem to correct the scientific literature? Rapha-z-Lab. https://raphazlab.wordpress.com/2022/12/15/is-it-somebody-elses-problem-to-correct-the-scientific-literature/ 

 

Research misconduct: Theory & Pratico – For Better Science. (n.d.). Retrieved 17 December 2022, from https://forbetterscience.com/2022/08/31/research-misconduct-theory-pratico/   


Star marine ecologist committed misconduct, university says. (n.d.). Retrieved 17 December 2022, from https://www.science.org/content/article/star-marine-ecologist-committed-misconduct-university-says  


Additions on 18th December: Yet more relevant stuff coming to my attention! 

 

Naudet, Florian (2022) Lecture: Busting two zombie trials in a post-COVID world.   

 

Wilmshurst, Peter (2022) Blog: Has COPE membership become a way for unprincipled journals to buy a fake badge of integrity?


Addition on 20th December

Liverpool Medical Institution seminar on Research Integrity: The introduction by Patricia Murray, talk by Peter Wilmshurst, and Q&A are now available on YouTube.


And finally.... 

A couple of sobering thoughts:

 

Alexander Trevelyan on Twitter noted a great quote from the anonymous @mumumouse (author of the Research misconduct blogpost above): “To imagine what it’s like to be a whistleblower in the science community, imagine you are trying to report a Ponzi scheme, but instead of receiving help you are told, nonchalantly, to call Bernie Madoff, if you wish." 

 

Peter Wilmshurst started his talk by relaying a conversation with Patricia Murray in the run-up to his talk. He said he planned to talk about the three Fs: fabrication, falsification and honesty.

 To which Patricia replied, “There is no F in honesty”. 

(This may take a few moments to appreciate).