Wednesday 16 October 2024

An open letter regarding Scientific Reports

16th October 2024 

to: Mr Chris Graf
Research Integrity Director, Springer Nature and Chair Elect of the World Conference on Research Integrity Foundation Governing Board.

 

Dear Mr Graf,

We are a group of sleuths and forensic meta-scientists who are concerned that Springer Nature is failing in its duty to protect the scientific literature from fraudulent and low quality work. We are aware that, as noted in the 2023 Annual Report, you are committed to maintaining research integrity. We agree with the statement: “To solve the world’s biggest challenges, we all need research that’s reliable, trustworthy and can be built on by scientists and innovators. As a leading global research publisher, we have a pivotal role to play.” It is encouraging to hear that the Springer Nature research integrity group doubled in size in 2023. Nevertheless, we have a growing sense that all is not well concerning the mega journal Scientific Reports.

Some of the work that has been published is so seriously flawed that it is not credible that it underwent any meaningful form of peer review. In other cases, when we have reported flawed papers to the editor or integrity team, the response has been inadequate. A striking example cropped up last week when a “corrected” version of an article was published in Scientific Reports. This article had been flagged up by Guillaume Cabanac as containing numerous “tortured phrases” that are indicative of fraudulent authors attempting to bypass plagiarism checks; the authors were allowed to “correct” the article by merely removing some (not all) of the tortured phrases. This led some of us to look more closely at the article. As is evident from comments on PubPeer, it turned out to be a kind of case study of all the red flags for fraud that we look for. As well as (still uncorrected) tortured phrases, it contained irrelevant content, irrelevant citations, meaningless gibberish, a nonsensical figure, and material recycled from other publications.

This is perhaps the most flagrant example, but we argue that it indicates problems with your editorial processes that are not going to be fixed by AI. The only ways an article like this could have been published are through editorial negligence or outright malpractice. Negligence would require a remarkable degree of professional incompetence from a handling editor. Malpractice would mean there is a corrupt handling editor who bypasses the peer review process entirely or willingly appoints corrupt peer reviewers to approve the manuscript. We appreciate that some papers that we and others have reported have been retracted, but in other cases blatantly fraudulent papers can take years to be retracted or to receive any appropriate editorial action.

We have some specific suggestions for actions that Springer Nature could take to address these issues.

  1. Employ a task force of people with the necessary expertise to carry out an urgent audit of all editors of Scientific Reports. We have looked at the editors on your website, and it is clear that this is an enormous task, given that there are over 13,000 of them, and they are not listed with disambiguating information such as ORCID iDs. Even so, in a few hours, by cross-checking this list against PubPeer, it was possible to identify the 28 cases listed below, covering a range of disciplines, and all, in our view, with pretty clear-cut evidence of problems. Four are members of the Editorial Board. We stress that this is just the low-hanging fruit, which was fairly easy to detect.
  2. The list of problematic articles appended below or tabulated on the Problematic Paper Screener might provide an alternative route to identify editors who should never have been given a gatekeeping role in academic publishing. As well as checking the papers we list below, we recommend that all other articles accepted by the same editors should be scrutinised.
  3. Detection of problematic articles and editors could be helped by requiring open peer review for all journals, and by ensuring that the name and ORCID iD of the handling editor are included in the published metadata for all articles.
We hope these suggestions will be helpful in ensuring that research published in Scientific Reports is reliable and trustworthy.

Yours sincerely

Dorothy Bishop
Guillaume Cabanac
François-Xavier Coudert
René Aquarius
Nick Wise
Lonni Besançon
Simon A.J. Kimber
Anna Abalkina
Rickard Carlsson
Samuel J Westwood
Patricia Murray
Nicholas J. L. Brown
Smut Clyde
Leonid Schneider
Ian Hussey
Tu Duong
Gustav Nilsonne
Jamie Cummins
Alexander Magazinov
Elisabeth Bik
Mu Yang
Corrado Viotti
Sholto David


 

Appendices

1. Some examples of editors with concerning PubPeer entries

Editorial board Ghulam Md Ashraf
Editorial board Eun Bo Shim
Editorial board Ajay Goel
Editorial board Rasoul Kowsar

AGEING Vittorio Calabrese
AGRICULTURE Sudip Mitra
ANALYTICAL CHEMISTRY Syed Ghulam Musharraf
CELL BIOLOGY Gabriella Dobrowolny
CHEMICAL ENGINEERING Enas Taha Sayed
CIVIL ENGINEERING Manoj Khandelwal
CLINICAL ONCOLOGY Marcello Maggiolini
COMPUTATIONAL SCIENCE Praveen Kumar Reddy Maddikunta
DRUG DISCOVERY Salvatore Cuzzocrea
ENDOCRINOLOGY Sihem Boudina
ENVIRONMENTAL ENGINEERING Rama Rao Karri
ENVIRONMENTAL SCIENCE Mayeen Uddin Khandaker
GASTROENTEROLOGY AND HEPATOLOGY Sharon DeMorrow
IMMUNOLOGY Marcin Wysoczynski
INFECTIOUS DISEASES Fatah Kashanchi
MATHEMATICAL PHYSICS Ilyas Khan
MICROBIOLOGY Massimiliano Galdiero
NETWORKS AND COMPLEX SYSTEMS Achyut Shankar
NEUROLOGY Yvan Torrente
RESPIRATORY MEDICINE Soni Savai Pullamsetti
STRUCTURAL AND MOLECULAR BIOLOGY Stefania Galdiero
2. Some examples of problematic articles

https://pubpeer.com/publications/42901FD2901EC917E3EE54B8DBD749#4 (authors claim a correction is underway, but none published for 2 years)
https://pubpeer.com/publications/01FE09F1127DF0598985987677A101 (part of a list of many flagged papers from this author group. Corrected rather than retracted)
https://pubpeer.com/publications/69EDBAECD50F31B051ECECCD1DF346 (notified on 31-3-2023 about this paper, no action so far)
https://pubpeer.com/publications/F8A1AD2B165888A06C18B28C860E7B (Editor-in-Chief contacted in November 2022 with authorship concerns and responded that he would investigate; no action taken so far)
https://pubpeer.com/publications/286F83F9553D29F82CD4281309A1E4 (has had an Expression of Concern for authorship irregularities since July 2022; no further action taken since)
https://pubpeer.com/publications/5BEDDDA9CF92B9CDDD2AB1AA796271 (blatantly nonsensical paper reported to publisher in June 2024; no action as yet)
https://pubpeer.com/publications/37B87CAC48DE4BC98AD40E00330143 (various corrections since 2022, and in Feb 2023 readers were told “conclusions of this article are being considered by the Editors. A further editorial response will follow the resolution of these issues”. 19 months later, we are still waiting.)


3. Some examples of journal-level reports posted on PubPeer

Scientific Reports

other Springer Nature journals:

Chemosphere

Tuesday 24 September 2024

Using PubPeer to screen editors

 

2023 was the year when academic publishers started to take seriously the threat that paper mills posed to their business. Their research integrity experts have penned various articles about the scale of the problem and the need to come up with solutions (e.g., here and here).  Interested parties have joined forces in an initiative called United2Act. And yet, to outsiders, it looks as though some effective actions are being overlooked. It's hard to tell whether this is the result of timidity, poor understanding, or deliberate footdragging from those who have a strong financial conflict of interest.

As I have emphasised before, the gatekeepers to journals are editors. Therefore it is crucial that they are people of the utmost integrity and competence. The growth of mega-journals with hundreds of editors has diluted scrutiny of who gets to be an editor. This has been made worse by the bloating of journals with hundreds of special issues, each handled by "guest editors". We know that paper millers will try to bribe existing editors, to place their own operatives as editors or guest editors, to use fake reviewers, and to stuff articles with irrelevant citations. Stemming this tide of corruption would be one effective way to reduce the contamination of the research literature. Here are two measures I suggest publishers should take if they seriously want to clean up their journals.

1. Three strikes and you are out. Any editor who has accepted three or more paper-milled papers should be debarred from acting as an editor, and all papers that they have been responsible for accepting should be regarded as suspect. This means retrospectively cleaning up the field by scrutinising the suspect papers and retracting any from authors associated with paper mills, or which are characterised by features suggestive of paper mills, such as tortured phrases, citation stacking, gobbledegook content, fake reviews from reviewers suggested by authors, invalid author email domains, or co-authors who are known to be part of a paper mill ring. All of these are things that any competent editor should be able to detect. I anticipate this would lead to a large number of retractions, particularly from journals with many Special Issues. As well as these simple indicators, we are told that publishers are working hard to develop AI-based checks. They should use these not only to screen new submissions and retract published papers, but also to identify editors who are allowing this to happen on their watch. It also goes without saying that nobody who has co-authored a paper-milled paper should act as an editor.

2. All candidates for roles as Editor or Guest Editor at a journal should be checked against the post-publication peer review website PubPeer, and rejected if this reveals papers that have attracted credible criticisms suggestive of data fabrication or falsification. This is a far from perfect indicator: only a tiny fraction of authors receive PubPeer comments, and the comments may concern trivial or innocent aspects of a paper. But, as I shall demonstrate, using such a criterion can reveal cases of editorial misconduct.

I will illustrate how this might work in practice, using the example of the MDPI journal Electronics. This journal came to my attention because it has indicators that all is not well with its Special Issues programme. 

First, in common with nearly all MDPI journals, Electronics has regularly broken the rule that specifies that no more than 25% of articles should be authored by a Guest Editor. As mentioned in a previous post, this is a rule that has come and gone in the MDPI guidelines, but which is clearly stated as a requirement for inclusion in the Directory of Open Access Journals (DOAJ). 13% of Special Issues in Electronics completed in 2023-4 broke this rule**. DOAJ have withdrawn some MDPI journals from their directory for this reason, and I live in hope that they will continue to implement this policy rigorously - which would entail delisting from their Directory the majority of MDPI journals. Otherwise, there is nothing to stop publishers claiming to be adhering to rigorous standards while failing to implement them, making listing in DOAJ an irrelevance.  

Even more intriguing, for around 11% of the 517 Special Issues of Electronics published in 2023-4, the Guest Editor doesn't seem to have done any editing. We can tell this because Special Issues are supposed to list who has acted as Academic Editor for each paper. MDPI journals vary in how rigorously they implement that rule - some journals have no record of who was the Academic Editor. But most do, and in most Special Issues, as you might expect, the Guest Editor is the Academic Editor, except for any papers where there is a conflict of interest (e.g. if authors are Guest Editors or are from the same institution as the Guest Editor). Where the Guest Editor cannot act as Academic Editor, the MDPI guidelines state that this role will be taken by a member of the Editorial Board. But, guess what? Sometimes that doesn't happen. To someone with a suspicious frame of mind, and a jaundiced view of how paper mills operate, this is a potential red flag.
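As a rough illustration (this is not the code actually used for the analysis), the check can be scripted once the article records for each Special Issue have been scraped into a table; the file name and column names below are hypothetical:

```python
import csv
from collections import defaultdict

# Hypothetical input: one row per article, with columns
# special_issue, guest_editors (";"-separated) and academic_editor.
issues = defaultdict(lambda: {"guest": set(), "academic": set()})

with open("electronics_special_issue_articles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        si = issues[row["special_issue"]]
        si["guest"].update(g.strip() for g in row["guest_editors"].split(";"))
        si["academic"].add(row["academic_editor"].strip())

# Flag Special Issues where no listed Guest Editor ever appears as Academic Editor.
for name, si in issues.items():
    if not si["guest"] & si["academic"]:
        print(f"Guest Editor did no visible editing: {name}")
```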

Accordingly, I decided to check PubPeer comments for individuals in three editorial roles at Electronics for the years 2023-4:

  • Those listed as being in a formal editorial role on the journal website. 
  • Those acting as Guest Editors 
  • Those acting as Academic Editors, despite not being in the other two categories.

For Editors, a PubPeer search by name revealed that 213 of the 931 had one or more comments. That sounds alarming, but cannot be taken at face value, because there are many innocent reasons for this result. The main one is namesakes: this is particularly common with Chinese names, which tend to be less distinctive than Western names. It is therefore important to match PubPeer comments on affiliations as well as names. Using this approach, it was depressingly easy to find instances of Editors who appeared to be associated with paper mills. I will mention just three, to illustrate the kind of evidence that PubPeer provides, but remember, there are many others deserving of scrutiny. 
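That matching step can be partly automated. Below is a minimal sketch of the idea, assuming the editor list and the PubPeer records have already been exported to CSV files (the file names and columns are hypothetical, and every hit still needs manual checking):

```python
import csv

def load(path):
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def norm(s):
    return " ".join(s.lower().split())

# Hypothetical inputs: the journal's editor list and exported PubPeer records,
# each with "name" and "affiliation" columns (PubPeer rows also carry "article").
editors = load("electronics_editors.csv")
pubpeer = load("pubpeer_records.csv")

for ed in editors:
    for rec in pubpeer:
        # Require both a name match and an affiliation match,
        # to reduce false positives from namesakes.
        if norm(ed["name"]) == norm(rec["name"]) and norm(ed["affiliation"]) in norm(rec["affiliation"]):
            print(f"Check manually: {ed['name']} -> {rec.get('article', '')}")
```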

  • As well as being a section board member of Electronics, Danda B Rawat (Department of Electrical Engineering and Computer Science, Howard University, Washington, DC 20059, USA) is Editor-in-Chief of Journal of Cybersecurity and Privacy, and a section board member of two further MDPI journals: Future Internet and Sensors. A PubPeer search reveals him to be co-author of one paper with tortured phrases, and another where equations make no sense. He is listed as Editor of three MDPI Special Issues: Multimodal Technologies and Interaction: Human Computer Communications and Internet of Things; Sensors: Frontiers in Mobile Multimedia Communications; and Journal of Cybersecurity and Privacy: Applied Cryptography.
  • Aniello Castiglione  (Department of Management & Innovation Systems, University of Salerno, Italy) is Section Board Member of three journals: Electronics, Future Internet, and Journal of Cybersecurity and Privacy, and an Editorial Board member of Sustainability. PubPeer reveals he has co-authored one paper that was recently retracted because of compromised editorial processing, and that his papers are heavily cited in several other articles that appear to be used as vehicles for citation stacking. 
  •  Natalia Kryvinska (Department of Information Systems, Faculty of Management, Comenius University in Bratislava, Slovakia) is a Section Board Member of Electronics. She has co-authored several articles with tortured phrases.

Turning to the 1326 Guest Editors of Special Issues, there were 500 with at least one PubPeer comment, but as before, note that in many cases name disambiguation is difficult, so this will overestimate the problem. Once again, while it may seem invidious to single out specific individuals, it seems important to show the kinds of issues that can be found among those who are put in this important gatekeeping role. 

Finally, let's look at the category of Academic Editors who aren't listed as journal Editors. It's unclear how they are selected and who approves their selection. Again, among those with PubPeer comments, there's a lot to choose from. I'll focus here on three who have been exceptionally busy doing editorial work on several special issues. 

  • Gwanggil Jeon (Incheon National University, Korea) has acted as Academic Editor for 18 Special Issues in Electronics. He is not on the Editorial Board of the journal, but he has been Guest Editor for two special issues in Remote Sensing, and one in Sensors. PubPeer comments note recycled figures and irrelevant references in papers that he has co-authored, as well as a problematic Special Issue that he co-edited for Springer Nature, which led to several retractions.
  • Hamid Reza Karimi (Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy) has acted as Academic Editor for 12 Special Issues in Electronics. He was previously Guest Editor for two Special Issues of Electronics, one of Sensors, one of Micromachines, and one of Machines.  In 2022, he was specifically called out by the IEEE for acting "in violation of the IEEE Principles of Ethical Publishing by artificially inflating the number of citations" for several articles. 
  • Finally, Juan M. Corchado (University of Salamanca, Spain) has acted as Academic Editor for 29 Special Issues. He was picked up by my search as he is not currently listed as being an Editor for Electronics, but that seems to be a relatively recent change: when searching for information, I found this interview from 2023. Thus his role as Academic Editor seems legitimate. Furthermore, as far as PubPeer is concerned, I found only one old comment, concerned with duplicate publication. However, he is notorious for boosting citations to his work by unorthodox means, as described in this article.* I guess we could regard his quiet disappearance from the Editorial Board as a sign that MDPI are genuinely concerned about editors who try to game the system. If so, we can only hope that they employ some experts who can do the kinds of cross-checking that I have described here at scale. If I can find nine dubious editors of one journal in a couple of hours searching, then surely the publisher, with all its financial resources, could uncover many more if they really tried.

Note that many of the editors featured here have quite substantial portfolios of publications. This makes me dubious about MDPI's latest strategy for improving integrity - to use an AI tool to select potential reviewers "from our internal databases with extensive publication records". That seems like an excellent way to keep paper millers in control of the system. 

Although the analysis presented here just scratches the surface of the problem, it would not have been possible without the help of sleuths who made it straightforward to extract the information I needed from the internet. My particular thanks to Pablo Gómez Barreiro, Huanzi and Sholto David.

I want to finish by thanking the sleuths who attempt to decontaminate the literature by posting comments to PubPeer. Without their efforts it would be much harder to keep track of paper millers. The problem is large and growing. Publishers are going to need to invest seriously in employing those with the expertise to tackle this issue. 

 *As I was finalising this piece, this damning update from El Pais appeared. It seems that many retractions of Corchado papers are imminent.  

 I can't keep up.... here's today's news. 


** P.S. 25th Sept 2024. DOAJ inform me that Electronics was removed from their directory in June of this year. 

*** P.P.S. 26th Sept 2024.  Guillaume Cabanac pointed me to this journal-level report on PubPeer, where he noted a high rate of Electronics papers picked up by the Problematic Paper Screener.

Saturday 14 September 2024

Prodding the behemoth with a stick

 

Like many academics, I was interested to see an announcement on social media that a US legal firm had filed a federal antitrust lawsuit against six commercial publishers of academic journals: (1) Elsevier B.V.; (2) Wolters Kluwer N.V.; (3) John Wiley & Sons, Inc.; (4) Sage Publications, Inc.; (5) Taylor and Francis Group, Ltd.; and (6) Springer Nature AG & Co, on the grounds that "In violation of Section 1 of the Sherman Act, the Publisher Defendants conspired to unlawfully appropriate billions of dollars that would have otherwise funded scientific research".   

 

So far, so good.  I've been writing about the avaricious practices of academic publishers for over 12 years, and there are plenty of grounds for a challenge. 

 

However, when I saw the case being put forward, I was puzzled.  From my perspective, the arguments just don't stack up.  In particular, three points are emphasised in the summary (quoted verbatim here from the website): 

 

  • First, an agreement to fix the price of peer review services at zero that includes an agreement to coerce scholars into providing their labor for nothing by expressly linking their unpaid labor with their ability to get their manuscripts published in the defendants’ preeminent journals.

 

But it's not true that there is an express link between peer review and publishing papers in the pre-eminent journals.  In fact, many journal editors complain that some of the most prolific authors never do any peer review - gaining an advantage by not adopting the "good citizen" behaviour of a peer reviewer.  I think this point can be rapidly thrown out.

 

  • Second, the publisher defendants agreed not to compete with each other for manuscripts by requiring scholars to submit their manuscripts to only one journal at a time, which substantially reduces competition by removing incentives to review manuscripts promptly and publish meritorious research quickly. 

 

This implies that the rationale for not allowing multiple submissions is to reduce competition between publishers.  But if there were no limit on how many journals you could simultaneously submit to, then the number of submissions to each journal would grow massively, increasing the workload for editors and peer reviewers - and much of their time would be wasted. This seems like a rational requirement, not a sinister one.

 

  • Third, the publisher defendants agreed to prohibit scholars from freely sharing the scientific advancements described in submitted manuscripts while those manuscripts are under peer review, a process that often takes over a year. As the complaint notes, “From the moment scholars submit manuscripts for publication, the Publisher Defendants behave as though the scientific advancements set forth in the manuscripts are their property, to be shared only if the Publisher Defendant grants permission. Moreover, when the Publisher Defendants select manuscripts for publication, the Publisher Defendants will often require scholars to sign away all intellectual property rights, in exchange for nothing. The manuscripts then become the actual property of the Publisher Defendants, and the Publisher Defendants charge the maximum the market will bear for access to that scientific knowledge.” 

Again, I would question the accuracy of this account.  For a start, in most science fields, peer review is a matter of weeks or months, not "over a year".  But also, most journals these days allow authors to post their articles as preprints, prior to, or at the point of submission. In fact, this is encouraged by many institutions, as it means that a Green Open Access version of the publication is available, even if the work is subsequently published in a pay-to-read version.  

 

In all, I am rather dismayed by this case, especially when there are very good grounds on which academic publishers can be challenged.  For instance:

 

1. Academic publishers claim to ensure quality control of what gets published, but some of them fail to do the necessary due diligence in selecting editors and reviewers, with the result that the academic literature is flooded with weak and fraudulent material, making it difficult to distinguish valuable from pointless work, and creating an outlet for damaging material, such as pseudoscience.  This has become a growing problem with the advent of paper mills.

 

2. Many publishers are notoriously slow at responding to credible evidence of serious problems in published papers. It can take years to get studies retracted, even when they have important real world consequences.

 

3. Perhaps the only point in common with the case by Lieff Cabraser, Heimann & Bernstein concerns the issue of retention of intellectual property rights.  It is the case that publishers have traditionally required authors to sign away copyright of their works.  In the UK, at least, there has been a movement to fight this requirement, which has had some success, but undoubtedly more could be done. 

 

If I can find time I will add some references to support some of the points above - this is a hasty response to discussion taking place on social media, where many people seem to think it's great that someone is taking on the big publishers. I never thought I would find myself in a position of defending them, but I think if you are going to attack a behemoth, you need to do so with good weapons.  

 

 

Postscript

Comments on this post are welcome - there is moderation so they don't appear immediately.

 Nick Wise attempted unsuccessfully to add a comment (sorry, Blogger can be weird), providing this helpful reference on typical duration of peer review.  Very field-dependent and may be a biased sample, I suspect, but it gives us a rough idea.

PPS. 5th October 2024.

Before I wrote this blogpost, I contacted the legal firm involved, Lieff Cabraser, Heimann & Bernstein, via their website, to raise the same points.  Yesterday I received a reply from them, explaining that "Because you are located abroad, unfortunately you are not a member of this class suit".  This suggests they don't read correspondence sent to them. Not impressed.  

Wednesday 4 September 2024

Now you see it, now you don't: the strange world of disappearing Special Issues at MDPI

 

There is growing awareness that Special Issues have become a menace in the world of academic publishing, because they provide a convenient way for large volumes of low quality work to be published in journals that profit from a healthy article processing charge. There has been a consequent backlash against Special Issues, with various attempts to rein them in. Here I'll describe the backstory and show how such attempts are being subverted. 

Basically, when it became normative for journals to publish open access papers in exchange for an article processing charge, many publishers saw an opportunity to grow their business by expanding the number of articles they published. There was one snag: to maintain quality standards, one requires academic editors to oversee the peer review process and decide what to publish. The solution was to recruit large numbers of temporary "guest editors", each of whom could invite authors to submit to a Special Issue in their area of expertise; this cleverly solved two problems at once: it provided a way to increase the number of submissions to the journal, and it avoided overloading regular academic editors. In addition, if an eminent person could be persuaded to act as guest editor, this would encourage researchers to submit their work to a Special Issue.

Problems soon became apparent, though. Dubious individuals, including those running paper mills, seized the opportunity to volunteer as guest editors, and then proceeded to fill Special Issues with papers that were at best low quality and at worst fraudulent. As described in this blogpost, the publisher Wiley was badly hit by fallout from the Special Issues programme, with its Hindawi brand being 'sunsetted' in 2023. In addition, the Swiss National Science Foundation declared they would not fund APCs for articles in Special Issues, on the grounds that the increase in the number of special issues was associated with shorter processing times and lower rejection rates, suggestive of rushed and superficial peer review. Other commentators noted the reputational risks of overreliance on Special Issues.

Some publishers that had adopted the same strategy for growth looked on nervously, but basically took the line that the golden goose should be tethered rather than killed, introducing various stringent conditions around how Special Issues operated. The publisher MDPI, one of those that had massive growth in Special Issues in recent years, issued detailed guidelines.

One of these concerned guest editors publishing in their own special issues. These guidelines have undergone subtle changes over time, as evidenced by these comparisons of different versions (accessed via Wayback Machine):
JUNE 2022: The special issue may publish contributions from the Guest Editor(s), but the number of such contributions should be limited to 20%, to ensure the diversity and inclusiveness of authorship representing the research area of the Special Issue.... Any article submitted by a Guest Editor will be handled by a member of the Editorial Board.

 21 JAN 2023: The special issue may publish contributions from the Guest Editor(s), but the number of such contributions should be limited, to ensure the diversity and inclusiveness of authorship representing the research area of the Special Issue. Any article submitted by a Guest Editor will be handled by a member of the Editorial Board.

2 JAN 2024: The special issue may publish contributions from the Guest Editor(s), but the number of such contributions should be limited to 25%, to ensure the diversity and inclusiveness of authorship representing the research area of the Special Issue. Any article submitted by a Guest Editor will be handled by a member of the Editorial Board.

3 MAY 2024: The special issue may publish contributions from the Guest Editor(s), but the number of such contributions should be limited, to ensure the diversity and inclusiveness of authorship representing the research area of the Special Issue. Any article submitted by a Guest Editor will be handled by a member of the Editorial Board.

The May 2024 version of the guidelines is nonspecific but problematic, because it is out of alignment with the criteria for accreditation by the Directory of Open Access Journals (DOAJ), who state: "Papers submitted to a special issue by the guest editor(s) must be handled under an independent review process and make up no more than 25% of the issue's total". Most of MDPI's journals are listed on DOAJ, which is a signal of trustworthiness.

So, how well is MDPI doing in terms of the DOAJ criteria? I was first prompted to ask this question when writing about an article in a Special Issue of Journal of Personalized Medicine that claimed to "reverse autism symptoms". You can read my critique of that article here; one question it raised was how on earth did it ever get published? I noted that the paper was handled by a guest editor, Richard E. Frye, who had coauthored 7 of the 14 articles in the Special Issue. I subsequently found that between 2021 and 2024 he had published 30 articles in Journal of Personalized Medicine, most in three special issues where he was guest editor. I'm pleased to say that DOAJ have now delisted the journal from their Directory. But this raises the question of how well MDPI is regulating their guest editors to prevent them going rogue and using a Special Issue as a repository for papers by themselves and their cronies.

To check up on this, I took a look at Special Issues published in 2023-2024 in 28 other MDPI journals*, focusing particularly on those with implications for public health. What I found was concerning at four levels. 

  1. Every single journal I looked at had Special Issues that broke the DOAJ rule of no more than 25% of papers co-authored by guest editors (something DOAJ refer to as "endogeny").  Some of these can be found on PubPeer, flagged with the term "stuffedSI". 
  2. A minority of Special Issues conformed to the description of a "successful Special Issue" envisaged by the MDPI guidelines: "Normally, a successful Special Issue consists of 10 or more papers, in addition to an editorial (optional) written by the Guest Editor(s)." For the journals I looked at around 60% of Special Issues had fewer than 10 articles. 
  3. Quite often, the listed guest editors did not actually do any editing. One can check this by comparing the Action Editor listed for each article. Here's one example, where a different editor was needed for three of the nine papers to avoid conflict of interest, because they were co-authored by the guest editors;  but the guest editors are not listed as action editors for any of the other six papers in the special issue. 
  4. As I did this analysis, I became aware that some articles changed status. For instance, Richard E. Frye, mentioned above, had additional articles in the Journal of Personalized Medicine that were originally part of a Special Issue but are now listed as belonging only to a Section (see https://pubpeer.com/publications/BA21B22CA3FED62B6D3F679978F591#1). This change was not transparent, but was evident when earlier versions of the website were accessed using the Wayback Machine. Some of these are flagged with the term "stealth correction" on PubPeer.

This final observation was particularly worrying, because it indicated that the publisher could change the Special Issue status of articles post-publication. The concern is that lack of oversight of guest editors has created a mechanism whereby authors can propose a Special Issue, get taken on as a guest editor, and then have papers accepted there (either their own, or from friends, which could include papermillers), after which the Special Issue status is removed. In fact, given the growing nervousness around Special Issues, removal of Special Issue status could be an advantage.
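For anyone who wants to check a suspect page for this kind of post-publication change, here is a minimal sketch: it queries the Wayback Machine's public CDX API for archived captures of a page, which can then be compared with the live version. The article URL is a placeholder, not a real example.

```python
import json
import requests

# Sketch only: list Wayback Machine captures of a page so that earlier versions
# can be compared with the current one. The target URL below is a placeholder.
CDX = "http://web.archive.org/cdx/search/cdx"
params = {
    "url": "www.example.org/some-article-page",  # placeholder
    "output": "json",
    "from": "2021",
    "to": "2024",
    "filter": "statuscode:200",
    "collapse": "digest",   # skip consecutive identical captures
}
resp = requests.get(CDX, params=params, timeout=30)
rows = json.loads(resp.text) if resp.text.strip() else []
# The first row of the JSON response holds the field names; captures follow.
for row in rows[1:]:
    timestamp, original = row[1], row[2]
    print(f"https://web.archive.org/web/{timestamp}/{original}")
```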

When I have discussed these and other issues around MDPI practices with colleagues, credible researchers tell me that there are some excellent journals published by MDPI. It seems unfortunate that, in seeking rapid growth via the mechanism of Special Issues, the publisher has risked its reputation by giving editorial responsibility to numerous guest editors without adequate oversight and encouraging quantity over quality. Furthermore, the lack of transparency demonstrated by the publisher covertly removing Special Issue status from articles by guest editors does not appear consistent with their stated commitment to ethical policies. 

 *The code for this analysis and a summary chart for the 28 journals can be found on Github.
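For readers who just want the gist of the endogeny check without visiting the repository, here is a minimal sketch (not the code used for the analysis; the input file and column names are assumptions): it computes, per Special Issue, the share of articles co-authored by a guest editor and flags anything over the 25% DOAJ threshold.

```python
import csv
from collections import defaultdict

# Hypothetical input: one row per Special Issue article, with columns
# journal, special_issue, authors (";"-separated) and guest_editors (";"-separated).
counts = defaultdict(lambda: [0, 0])  # (journal, SI) -> [total articles, guest-editor-authored]

with open("mdpi_special_issue_articles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        authors = {a.strip().lower() for a in row["authors"].split(";")}
        guests = {g.strip().lower() for g in row["guest_editors"].split(";")}
        key = (row["journal"], row["special_issue"])
        counts[key][0] += 1
        counts[key][1] += bool(authors & guests)

# Flag Special Issues exceeding the 25% endogeny threshold used by DOAJ.
for (journal, si), (total, by_guest) in sorted(counts.items()):
    if total and by_guest / total > 0.25:
        print(f"{journal} | {si}: {by_guest}/{total} articles co-authored by a guest editor")
```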

Thursday 22 August 2024

Optimizing research integrity investigations: the need for evidence

 

An article was published last week by Caron et al (2024) entitled "The PubPeer conundrum: Administrative challenges in research misconduct proceedings". The authors present a perspective on research misconduct from a viewpoint that is not often heard: three of them are attorneys who advise higher education institutions on research misconduct matters, and the other has served as a Research Integrity Officer at a hospital. 

The authors conclude that the bar for research integrity investigations should be raised, requiring a complaint to reach a higher evidential standard in order to progress, and using a statute of limitations to provide a cutoff date beyond which older research would not usually be investigated. This amounts to saying that the current system is expensive and has bad consequences, so let's change it to do fewer investigations - this will cost less and fewer bad consequences will happen.  The tl;dr version of this blogpost is that the argument fails because, on the one hand, the authors give no indication of the frequency of bad consequences, and on the other hand, they ignore the consequences of failing to act.

How we handle misconduct allegations can be seen as an optimization problem; to solve it, we need two things: data on frequency of different outcomes, and an evaluation of how serious different outcomes are.

We can draw an analogy with a serious medical condition that leads to a variety of symptoms and can only be unambiguously diagnosed by an invasive procedure that is both unpleasant and expensive. In such a case, the family doctor will base the decision whether to refer for invasive testing on information such as physical symptoms or blood test results, and will refer the patient for specialist investigations only if the symptoms exceed some kind of threshold. 

The invasive procedure may confirm that the disease is really present, a true positive, or that it is absent, a false positive. Those whose symptoms do not meet a cutoff do not progress to the invasive procedure, but may nevertheless have the disease, i.e., false negatives, or they may be free from the disease, true negatives. The more lenient the cutoff, the more true positives, but the price we pay will be to increase the rate of false positives. Conversely, with a stringent cutoff, we will reduce false positives, but will also miss true cases (i.e. have more false negatives).
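To make the trade-off concrete, here is a small worked example with invented numbers (they are purely illustrative, not estimates of real misconduct rates). The point is that moving the cutoff changes false positives and false negatives together, so the argument cannot be settled without data on both:

```python
# Illustrative only: invented numbers, not estimates of real misconduct rates.
def screen(n_allegations, prevalence, sensitivity, false_positive_rate):
    true_cases = n_allegations * prevalence
    innocent = n_allegations - true_cases
    tp = true_cases * sensitivity          # genuine cases sent for full investigation
    fp = innocent * false_positive_rate    # innocent researchers investigated
    fn = true_cases - tp                   # genuine cases missed
    ppv = tp / (tp + fp)                   # share of investigations that are justified
    return round(tp), round(fp), round(fn), round(ppv, 2)

# Lenient cutoff: most genuine cases are caught, but many innocent people are investigated.
print(screen(1000, prevalence=0.10, sensitivity=0.90, false_positive_rate=0.20))  # (90, 180, 10, 0.33)

# Stringent cutoff: far fewer false positives, but 40 genuine cases now slip through.
print(screen(1000, prevalence=0.10, sensitivity=0.60, false_positive_rate=0.05))  # (60, 45, 40, 0.57)
```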

Optimization is not just a case of seeking to maximize correct diagnoses - it must also take into account costs and benefits of each outcome. For some common conditions, it is deemed more serious to miss a true case of disease (false negative) than to send someone for additional testing unnecessarily (false positive). Many people feel they would put up with inconvenience, embarrassment, or pain rather than miss a fatal tumour. But some well-established medical screening programmes have been queried or even abandoned on the grounds that they may do more harm than good by creating unnecessary worry or leading to unwarranted medical interventions in people who would be fine left untreated. 

So, how does this analogy relate to research misconduct? The paper by Caron et al emphasizes the two-stage nature of the procedure that is codified in the US by the Office of Science and Technology Policy (OSTP), which is mandatory for federal agencies that conduct or support research. When an allegation of research misconduct is presented to a research institution, it is rather like a patient presenting themselves to a physician: symptoms of misconduct are described, and the research integrity officers must decide whether to proceed to a full investigation - a procedure which is both costly and stressful.

Just as patients will present with symptoms that are benign or trivial, some allegations of misconduct can readily be discounted. They may concern minor matters or be obviously motivated by malice. But there comes a point when the allegations can't be dismissed without a deeper investigation - equivalent to referring the patient for specialist testing. The complaint of Caron et al is that the bar for starting an investigation is specified by the regulator, and is set too low, leading to a great deal of unnecessary investigation. They make it sound rather like the situation that arose with prostate screening in the UK: use of a rather unreliable blood test led to a situation where there was overdiagnosis and overtreatment: in other words, the false positive rate was far too high. The screening programme was eventually abandoned.

My difficulty with this argument is that at no point do Caron et al indicate what the false positive rate is for investigations of misconduct. They emphasize that the current procedures for investigation of misconduct are onerous, both on the institution and on the person under investigation. They note the considerable damage that can be done when a case proves to be a false positive, where an aura of untrustworthiness may hang around the accused, even if they are exonerated. Their conclusion is that the criteria for undertaking an investigation should be made more stringent. This would undoubtedly reduce the rate of false positives, but it would also decrease the true positive detection rate.

One rather puzzling aspect of Caron et al's paper was their focus on the post-publication peer review website PubPeer as the main source of allegations of research misconduct. The impression they gave is that PubPeer has opened the floodgates to accusations of misconduct, many of which have little substance, but which the institutions are forced to respond to because of ORI regulations. This is the opposite of what most research sleuths experience, which is that it is extremely difficult to get institutions to take reports of possible research misconduct seriously, even when the evidence looks strong.

Given these diametrically opposed perspectives, what is needed is hard data on how many reported cases of misconduct proceed to a full investigation, and how many subsequently are found to be justified. And, given the authors' focus on PubPeer, it would be good to see those numbers for allegations that are based on PubPeer comments versus other sources.

There's no doubt that the volume of commenting on PubPeer has increased, but the picture presented by Caron et al seems misleading in implying that most complaints involve concerns such as "a single instance of image duplication in a published paper". Most sleuths who regularly report on PubPeer know that such a single instance is unlikely to be taken seriously; they also know that a researcher who commits research misconduct is often a serial offender, with a pattern of problems across multiple papers. Caron et al note the difficulties that arise when concerns are raised about papers that were published many years ago, where it is unlikely that original data still exist. That is a valid point, but I'd be surprised if research integrity officers receive many allegations via PubPeer based solely on a single paper from years ago; the reason that older papers come to attention is typically because a researcher's more recent work has come into question, which triggers a sleuth to look at other cases. I accept I could be wrong, though. I tend to focus on cases where there is little doubt that misconduct has occurred, and, like many sleuths, I find it frustrating when concerns are not taken seriously, so maybe I underestimate the volume of frivolous or unfounded allegations. If Caron et al want to win me over, they'd have to provide hard data showing how much investigative time is spent on cases that end up being dismissed.

Second, and much harder to estimate, what is the false negative rate: how often are cases of misconduct missed? The authors focus on the sad plight of the falsely accused researcher but say nothing about the negative consequences when a researcher gets away with misconduct. 

Here, the medical analogy may be extended further, because in one important respect, misconduct is less like cancer and more like an infectious disease. It affects all who work with the researcher, particularly younger researchers who will be trained to turn a blind eye to inconvenient data and to "play the game" rather than doing good research. The rot spreads even further: huge amounts of research funding are wasted by others trying to build on noncredible research, and research syntheses are corrupted by the inclusion of unreliable or even fictitious findings. In some high-stakes fields, medical practice or government policy may be influenced by fraudulent work. If we simply make it harder to investigate allegations of misconduct, we run the risk of polluting academic research. And the research community at large can develop a sense of cynicism when they see fraudsters promoted and given grants while honest researchers are neglected.

So, we have to deal with the problem that, currently, fraud pays. Indeed, it is so unlikely to be detected that, for someone with a desire to succeed uncoupled from ethical scruples, it is a more sensible strategy to make up data than to collect it. Research integrity officers may worry now that they are confronted with more accusations of misconduct than they can handle, but if institutions focus on raising the bar for misconduct investigations, rather than putting resources in to tackle the problem, it will only get worse.

In the UK, universities sign up to a Concordat to Support Research Integrity which requires them to report on the number and outcome of research misconduct investigations every year. When it was first introduced, the sense was that institutions wanted to minimize the number of cases reported, as it might be a source of shame.  Now there is growing recognition that fraud is widespread, and the shame lies in failing to demonstrate a robust and efficient approach to tackling it. 


Reference

Caron, M. M., Lye, C. T., Bierer, B. E., & Barnes, M. (2024). The PubPeer conundrum: Administrative challenges in research misconduct proceedings. Accountability in Research, 1–19. https://doi.org/10.1080/08989621.2024.2390007.

Thursday 8 August 2024

My experience as a reviewer for MDPI

 

Guest post by 

René Aquarius, PhD

Department of Neurosurgery

Radboud University Medical Center, Nijmegen, The Netherlands

 

After a recent Zoom call where Dorothy and I discussed several research-related topics, she invited me to write a guest blogpost about the experience I had as a peer reviewer for MDPI. As I think transparency in research is important, I was happy to accept this invitation.  

 

In mid-November 2023, I received a request to peer-review a manuscript for a special issue on subarachnoid hemorrhage in the Journal of Clinical Medicine, published by MDPI. This blog post summarizes that process. I hope it will give some insight into the nitty-gritty of the peer review process at MDPI.

 

I agreed to review the manuscript two days after receiving the invitation, and what I found was a study like many others in the field: a single-center, retrospective analysis of a clinical case series. I ended up recommending rejection of the paper two days after agreeing to review it. My biggest gripes were that the authors claimed the data were collected prospectively, yet their protocol was registered at the very end of the period in which they included patients. In addition, I discovered some important discrepancies between the protocol and the final study: the target sample size according to the protocol was 50% larger than the sample actually used, and the minimum age for patients also differed between the protocol and the manuscript. I also had problems with the statistical analysis, as the authors used more than 20 t-tests, which creates a high probability of Type I errors. The biggest problem was the lack of a control group, which made it impossible to establish whether changes in a physiological parameter could really predict intolerance to a certain drug in a small subset of patients.
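To put a rough number on that multiple-testing concern (a standard back-of-the-envelope calculation, not a figure from the manuscript itself): with 20 independent tests at alpha = 0.05, the chance of at least one false positive is 1 - 0.95^20, roughly 64%.

```python
# Family-wise error rate for k independent tests at significance level alpha.
alpha, k = 0.05, 20
fwer = 1 - (1 - alpha) ** k
print(f"P(at least one false positive) = {fwer:.2f}")  # about 0.64
```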

 

When filling out the reviewer form for MDPI, I found certain aspects peculiar. There are four options for Overall Recommendation:

  • Accept in present form
  • Accept after minor revision (correction to minor methodological errors and text editing)
  • Reconsider after major revision (control missing in some experiments)
  • Reject (article has serious flaws, additional experiments needed, research not conducted correctly)

 

Regardless of which of the last two options you select, the response is: "If we ask the authors to revise the manuscript, the revised manuscript will be sent to you for further evaluation". 

 

Although reviewer number 2 is often jokingly referred to as "the difficult one", that couldn't have been further from the truth in this case. The reviewer liked the paper and recommended acceptance after minor revision. So, with a total of two reviews, the paper received an editorial decision of rejection, with a possibility of resubmission after extensive revisions, only one day after I handed in my peer review report.

 

Revisions were quite extensive, as you will discover below, and arrived only two days after the initial rejection. I agreed to review the revised manuscript. But before I could start my review of the revision, just four days after receiving the invitation, I received a message from the editorial office that my review was no longer needed because they already had enough peer reviewers for the manuscript. I politely ignored this message, because I wanted to know if the manuscript had improved. What happened next was quite a surprise, but not in a good way. 

 

The manuscript had indeed undergone extensive revisions. The biggest change, however, was also the biggest red flag. Without any explanation the study had lost almost 20% of its participants. An additional problem was that all the issues I had raised in my previous review report remained unaddressed. I sent my newly written feedback report the same day, exactly one week after my initial rejection.

 

When I handed in my second review report, I understood why I had initially been told that my review was no longer needed. One peer reviewer had also recommended rejection of the manuscript, with concerns similar to mine. Two other reviewers, however, recommended acceptance: one with minor revisions (the English needed some improvement) and one in its present form, without any suggested revisions. This means that if I had followed the advice of the editorial office of MDPI, the paper would probably have been accepted in its current form. But because my vote was now also cast and the paper had received two rejections, the editor couldn't do much more than reject the manuscript, which happened three days after I handed in my review report.  

 

Fifteen days after receiving my first invitation to review, the manuscript had already seen two full rounds of peer-review by at least four different peer-reviewers.

 

This is not where the story ends.  

 

In December, about a month later, I received an invitation to review a manuscript for the MDPI journal Geriatrics. You’ve guessed it by now: it was the same manuscript. It's reasonable to assume this was shifted internally through MDPI's transfer service, summarised in this figure.  I can only speculate as to why I was still attached to the manuscript as a peer-reviewer, but I guess somebody forgot to remove my name from it.

from: https://www.mdpi.com/authors/transfer-service

The manuscript had, again, transformed. It was now very similar to the very first version I reviewed - almost word-for-word similar. That also meant that the number of included patients was restored to the initial number. However, the registered protocol that was previously mentioned in the methods section (and which had led to some of the most difficult-to-refute critiques) was now completely left out. The icing on the cake was that, for a reason that was not explained, another author had been added to the manuscript. There was no mention in this invitation of the previous reviews and rejections of the same manuscript. Although one might wonder whether MDPI editors were aware of this, it would be strange if they were not, since they pride themselves on their Susy manuscript submission system, where "editors can easily track concurrent and previous submissions from the same authors".

 

Because the same issues were still present in the manuscript, I rejected it for a third time on the same day I agreed to review it. In an accompanying message to the editor, I clearly articulated my problems with the manuscript and the review process.

 

The week after, I received a message that the editor had decided to withdraw the manuscript in consultation with the authors.

 

In late January 2024, the manuscript was published in the MDPI journal Medicina. I was no longer attached to the manuscript as a reviewer, and there was no indication on the website of the name of the acting editor who accepted it. 


Note from Dorothy Bishop

Comments on this blog are moderated so there may be some delay before they appear, but legitimate, on-topic contributions are welcomed. We would be particularly interested to hear from anyone else who has experiences, good or bad, as a reviewer for MDPI journals.

 

Postscript by Dorothy Bishop: 19 Aug 2024 

Here's an example of a paper that was published with the reviews visible. Two were damning and one was agreeable.  https://www.mdpi.com/2079-6382/9/12/868.  Thanks to @LymeScience for drawing our attention to this, and noting the important clinical consequences when those promoting an alternative, non-evidenced treatment have a "peer-reviewed" study to refer to.