Friday, 20 February 2026

Guest post: Stealth corrections are still a threat to scientific integrity


Authors

René Aquarius, Floris Schoeters, Alex Glynn, Guillaume Cabanac

 

An update on stealth corrections

Last year, we published an article describing stealth corrections, a phenomenon in which a publisher makes at least one post-publication change to a scientific article, without providing a correction note or any other indicator that the publication was temporarily or permanently altered.

 

Now, we have expanded our database with newly identified stealth corrections. We also wrote a freely accessible COSIG guide describing how to report stealth corrections in a transparent fashion.

 

Difficult to pinpoint

Stealth corrections are, by nature, extremely difficult to track down; most are identified by science sleuths who happen to notice a mismatch between different versions of an article. A comprehensive overview is therefore impossible, and one must assume that we have identified only a small minority of these issues.

 

For this update we applied the same pragmatic approach to documenting stealth corrections as before: registering stealth corrections on PubPeer ourselves, asking around within the science-sleuthing community, and searching the PubPeer database for terms such as “no erratum”, “no corrigendum”, or “stealth” (repeat the search yourself).

 

Stealth corrections were further categorized into the following types:

  • Changes in author information (addition or removal of authors, changes in author affiliation, etc.);
  • Changes in content (figures, data or text, etc.);
  • Changes in the record of editorial process (editor name, date of submission, acceptance or publication, etc.);
  • Changes in additional information (ethics statements, conflicts of interest statements, funding information, etc.).

New cases

We found 32 published articles that were affected by stealth corrections in addition to the 131 we had identified last year. An overview of all stealth corrections (#1-163) can be found in the online database, which also contains the links to all accompanying PubPeer posts for additional detail. Table 1 shows the type of correction per publisher for the 32 new cases.

 

Table 1. Type of correction per publisher for newly identified stealth corrections. 

Publisher | Changes in additional information | Changes in author information | Changes in content | Changes in the record of editorial process
ACM | 3 | 0 | 0 | 0
Am Phytopathological Soc | 0 | 0 | 1 | 0
CV Literasi Indonesia | 0 | 0 | 1 | 0
Elsevier | 0 | 0 | 3 | 0
Impact Journals | 0 | 0 | 1 | 0
Int Soc Computational Biology | 1 | 0 | 0 | 0
MDPI | 0 | 0 | 0 | 19
Oxford University Press | 0 | 1 | 0 | 0
Springer Nature | 0 | 0 | 1 | 0
Taylor & Francis | 0 | 0 | 1 | 0
 

 

Why are stealth corrections still a thing?

Last year we wrote “post-publication amendments that are made silently, without a visible correction note, will give rise to questions regarding the ethics and integrity of the specific journal, editors and publisher, and might undermine the validity of the published literature as a whole”. Again, we have identified stealth corrections that might be used as a shortcut to ‘repair’ more serious issues. The three publishers with the most stealth corrections in this update were MDPI, Elsevier, and the Association for Computing Machinery (ACM).

MDPI was involved in 19 new stealth corrections. Sixteen cases (#141, #144-158) were registered in August of 2024, too late for our initial pre-print and subsequent article on stealth corrections. All of these involved moving articles out of a ‘special issue’ and into a ‘section’. What stands out is that, for all 16 of these articles, the special issue editor was also an author. The Directory of Open Access Journals (DOAJ) requires that the proportion of articles co-authored by a special issue editor remain below 25% for each special issue. When it is higher than 25%, the DOAJ can delist the journal for not adhering to best practice, as detailed in its change log. Thus, by silently moving these articles out of special issues, MDPI retroactively lowers this percentage to comply with the DOAJ rules, thereby averting potential delisting of its journals. In September 2024, after publication of our pre-print, MDPI disputed that removing a Special Issue article from the digital SI website can be considered a ‘stealth correction’. Possibly, the updated correction process (which now includes ‘minor corrections’) allowed MDPI to stop this practice entirely. We have not identified any recent cases, which is an encouraging sign.
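The arithmetic behind this threshold is simple. A minimal Python sketch with made-up numbers (the sizes of the affected special issues are not given above, so the figures below are purely illustrative):

```python
DOAJ_LIMIT = 0.25  # best-practice cap on the editor-co-authored share

def editor_share(coauthored: int, total: int) -> float:
    # Fraction of a special issue's articles co-authored by its guest editor
    return coauthored / total

# Hypothetical special issue: 5 of 18 articles co-authored by the editor
before = editor_share(5, 18)   # about 0.278, above the 25% limit

# After silently moving 2 editor-co-authored articles into a regular 'section'
after = editor_share(3, 16)    # about 0.188, now below the limit
```

Moving editor-co-authored articles out shrinks both the numerator and the denominator, so the share drops quickly, which is what makes this particular stealth correction so effective.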

In the remaining three cases (#135-137), the name of a peer reviewer was silently set to anonymous, while the contents of the peer review reports did not change. According to MDPI, this was done to comply with GDPR requirements. However, it happened only after the peer reviewer was identified as being part of a review mill. The reviewer claims on PubPeer that they were not involved in writing the peer review report. These cases show that a request for anonymity can conflict with the need for transparency and the strengthening of research integrity.

Elsevier was involved in three new stealth corrections, all involving changes in content: in each case an image was silently replaced (#133, #139-141), according to PubPeer reports posted between December 2024 and May 2025. In response to our pre-print, Elsevier stated that they “do not correct articles without a formal notice”. In this update, however, we again present clear evidence of major changes to the scientific record that went through without any formal acknowledgement in the form of a correction notice, directly contradicting Elsevier's earlier statements. All of these articles were eventually retracted, but only 4-12 months after the stealth correction was noticed, leaving a substantial window in which readers could interact with these flawed articles without any proper indication that there might be a problem.

ACM silently made multiple changes to the introductions of three conference proceedings papers written by the conference chair (#159-161). References were removed, and in one case the text was heavily altered. In all three cases a notice of concern was also published, indicating that the peer review process had been compromised and strongly urging readers not to cite the conference papers. It appears that ACM retroactively tried to erase citations to the conference papers, but did so by secretly making all kinds of alterations to the documents, which is far from ideal.

This update shows that some scientific publishers continue to use stealth corrections as a way to change the scientific record. Stealth corrections can undermine the entire enterprise of science. At the level of the individual article, the lack of a transparent correction note makes it unlikely that those who read or cited the original version will be informed of the change; at the macro level, the integrity of the published literature as a whole is compromised, as readers can never know for certain whether an article has been silently corrected. Meanwhile, there is still no consensus on how corrections should be issued.

 

Conclusion and recommendations

Stealth corrections are still problematic as they are sometimes used as a shortcut to ‘repair’ other integrity issues. Again, we stress that stealth corrections are notoriously difficult to find and that this update likely only shows chance findings by science sleuths. Correct documentation and transparency are of the utmost importance to uphold scientific integrity and the trustworthiness of science.

We still recommend:

  • Tracking of all changes to the published record by all publishers in an open, uniform and transparent manner, preferably by online submission systems that log every change publicly, making stealth corrections impossible.
  • Clear definitions and guidelines on all types of corrections.
  • Sustained vigilance by the scientific community in publicly registering stealth corrections; this is now easier with our COSIG guide.

 

Acknowledgements

We thank Dorothy Bishop for hosting this update on her blog and we thank all (anonymous) science sleuths who have found and reported stealth corrections: your work is much appreciated.

Note from DVMB: Comments are moderated on this blog.  They are usually approved if they are on topic and non-anonymous. 

Monday, 2 February 2026

An analysis of PubPeer comments on highly-cited retracted articles

PubPeer is sometimes discussed as if it is some kind of cesspit where people smear honest scientists with specious allegations of fraud. I'm always taken aback when I hear this, since it is totally at odds with my experience. When I conducted an analysis of PubPeer comments concerning papers from UK universities published over a two-year period, I found that all 345 of them conformed to PubPeer's guidelines, which require comments to contain only "Facts, logic and publicly verifiable information". There were examples where another commenter, sometimes an author, rebutted a comment convincingly. In other cases, the discussion concerned highly technical aspects of research, where even experts may disagree. Clearly, PubPeer comments are not infallible evidence of problems, but in my experience, they are strictly moderated and often draw attention to serious errors in published work.

The Problematic Paper Screener (PPS) is a beautiful resource, ideal for investigating PubPeer's impact. It not only collates information on articles that are annulled (an umbrella term coined to encompass retractions, removals, and withdrawals), but also cross-references this information with PubPeer, so you can see which articles have comments. Furthermore, it provides the citation count of each article, based on Dimensions.

The PPS lists over 134,000 annulled papers; I wanted to see what proportion of retractions/withdrawals were preceded by a PubPeer comment. To make the task tractable, I focused on articles that had at least 100 citations, and which were annulled between 2021 and 2025. This gave a total of 800 articles, covering all scientific disciplines. It was necessary to read the PubPeer comments for each of these, because many comments occur after retraction, and serve solely to record the retraction on PubPeer. Accordingly, I coded each paper in terms of whether the first PubPeer comment preceded or followed the annulment.  
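The coding step described above boils down to a per-article date comparison. A minimal Python sketch with invented dates (the actual PPS export fields and dates are not reproduced here):

```python
from datetime import date

def preceded_annulment(first_comment: date, annulment: date) -> bool:
    # True when the first PubPeer comment predates the annulment notice
    return first_comment < annulment

# Hypothetical records: (date of first PubPeer comment, retraction date)
records = [
    (date(2022, 3, 1), date(2023, 6, 15)),   # commented before retraction
    (date(2024, 1, 10), date(2023, 6, 15)),  # commented only afterwards
]

prior = sum(preceded_annulment(c, a) for c, a in records)
share = prior / len(records)  # fraction with a prior comment
```

In this toy sample, one of the two articles had a prior comment, giving a share of 0.5; the same calculation over the 800 real articles yields the 58% figure reported below.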

Flowchart of analysis of PPS annulled papers
 

I had anticipated that around 10-20% of these annulled articles would have associated PubPeer comments; this proved to be a considerable underestimate. In fact, 58% of highly-cited papers that were annulled between 2021-2025 had prior PubPeer comments. Funnily enough, shortly after I'd started this analysis, I saw this comment on Slack by Achal Agrawal: "I was wondering if there is any study on what percentage of retractions happen thanks to sleuths. I have a feeling that at least around 50% of the retractions happen thanks to the work of 10 sleuths." Achal's estimate of the percentage of flagged papers was much closer than mine. But what about the number of sleuths who were responsible?

It's not possible to give more than a rough estimate of the contribution of individual commenters. Many of them use pseudonyms (some people even use a different pseudonym for each post they submit), and combinations of individuals often contributed comments on a single article. Some of the PubPeer comments had been submitted in earlier years, when they were labelled only as "Unregistered submission" or "Peer 1", etc., so any estimate will be imperfect. The best I could do was to focus on just the first comment for each article, excluding any comments occurring after a retraction. Of those who had stable names or pseudonyms, the 10 most prolific commenters had commented on between 9 and 50 articles, accounting for 27% of all retractions in this sample. Although this is a lower proportion than Achal's estimate, it's an impressive number, especially when you bear in mind that there were many comments from unknown contributors, and that the analysis covered only articles with at least 100 citations.
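Once first commenters with stable names or pseudonyms are extracted, finding the most prolific ones is a simple frequency count. A sketch with made-up names (unstable or unregistered identities are marked `None` and excluded):

```python
from collections import Counter

# Hypothetical first commenters, one entry per annulled article;
# None marks an unregistered or unstable identity, excluded from the tally.
first_commenters = ["Peer A", "Peer B", "Peer A", None, "Peer C", "Peer A"]

counts = Counter(name for name in first_commenters if name is not None)
most_prolific = counts.most_common(2)  # top named commenters with counts
```

Summing the counts of the top 10 names and dividing by the number of articles gives the 27% share reported above.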

Of course, naysayers may reply that this just goes to show that the sleuths who comment on articles are effective in causing retractions, not that they are accurate. To that I can only reply that publishers and journals are very reluctant to retract articles: they may regard it as reputationally damaging, and be concerned about litigation from disgruntled authors. In addition, they have to follow due process, and it takes considerable resources to make the necessary checks and modify the publication record. They don't do it lightly, and often don't do it at all, despite clear evidence of serious error in an article (see, e.g., Grey et al., 2025).

If an article is going to be retracted, it is better that it is done sooner rather than later. Monitoring PubPeer would be a good way of cleaning up a polluted literature - in the interests of all of us. Any publisher can do that for free: just ask an employee of the integrity department to check new PubPeer posts every day—about 40 minutes and you’re done. PubPeer also provides publishers with a convenient dashboard to facilitate this essential monitoring task.

It would be interesting to extend the analysis to less highly-cited papers, but this would be a huge exercise, particularly since this would include many paper-milled articles from mass retractions. I hope that my selective analysis will at least demonstrate that those who comment on problematic articles on PubPeer should be taken seriously. 

 

Post-script: 7 February 2026

One of the commentators with numbered comments below has complained that I am censoring criticism, and has revealed their identity on LinkedIn as Ryan James Jessup, JD/MPA. My bad: I usually paste a statement at the end of a blogpost explaining that comments are moderated so there can be a delay, but I accept non-anonymous comments that are polite and on topic. Jessup didn't take up my offer of incorporating his arguments in a section at the end of the blog, so I have accepted them and you can read them in the Comments.

I actually agree with a lot of what he says, but some points I disagree with, so here are my thoughts.

Points 1-2. He starts by stating the piece confuses correlation with causation.  On reflection I think he's right. The word "role" in the title is misleading, and I have accordingly changed the title of the post from "The role of PubPeer in retractions of highly-cited articles" to "An analysis of PubPeer comments on highly-cited articles".

3. He argues that the selection of highly-cited papers was done to fudge the result, because these papers are the most likely to be noticed and commented on. The actual reason for selecting these papers was to focus on outputs that had had some influence; many people assume PubPeer commentators just focus on the low-hanging fruit from papermills, which nobody is going to read anyhow. There is nothing to stop Jessup or anyone else from doing their own analysis using another filter to see if these results generalise to less highly-cited articles. It involves just a few hours of rather tedious coding. Maybe sample a random 800 articles?

4. He argues that "annulled" papers covers various categories.  I am glad to be able to clarify that in the sample of 800 papers that I analysed, all were retractions.

5. He disagrees that my opinion of whether PubPeer comments were factual and accurate has any value, and argues that the comments could be defamatory or otherwise falsely imply misconduct. From my experience, I reckon it would be difficult to get such material past PubPeer moderators, but if he can provide some examples, that would be helpful.

6. He says the coding method is subjective: "They read comments and decide whether the first comment preceded or followed annulment". The dates of retraction notices are provided in the PPS, so this is just a matter of checking whether the (also dated) PubPeer comment appeared before or after that date.

7. Re the identification of "top 10 sleuths".  I noted the limitations inherent in the data, so I am not sure what Jessup is complaining of here. The fact remains that a small number of individuals have been very effective in identifying issues in highly-cited articles prior to their retraction.

8.  Jessup argues that I'm saying that “journals don’t retract lightly, therefore PubPeer must be right”.  The first part of that argument has ample evidence. If he is aware of cases where PubPeer comments have indeed led to inappropriate retractions, then he should name them.

9-11. I do actually have some understanding of how retraction processes work in journals, but my concern is the failure of many journals/publishers to initiate the first step in the process.  I think we're in agreement that the current system for retracting articles from journals is broken. We also agree that PubPeer comments should be regarded as tips. My suggestion is simply that if publishers have a useful free source of tips, they should use it. A few of them do, but many don't seem motivated to be proactive because it just creates more work.

The prolific PubPeer commenters that I know would love it if the platform could be used primarily for civilised academic debate, as was the original intention. Unfortunately, science can't wait until the broken system is repaired; we do need to clean up a polluted literature. I would add that the idea that those who comment on PubPeer are doing it for the glory is laughable. The main reaction is to be ignored at best and abused at worst. They are unusual people who are obsessive about the need to have a reliable scientific literature.