Thursday, 12 March 2026

Bishopblog catalogue (updated 12 Mar 2026)


Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015) Opportunity cost: a new red flag for evaluating interventions (30 Aug 2015) The STEP Physical Literacy programme: have we been here before?
(2 Jul 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Developmental language disorder: the need for a clinically relevant definition (9 Jun 2018) Changing terminology for children's language disorders (23 Feb 2020) Developmental Language Disorder (DLD) in relation to DSM5 (29 Feb 2020) Why I am not engaging with the Reading Wars (30 Jan 2022)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019) Biomarkers to screen for autism (again) (6 Dec 2022)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014) The sinister side of French psychoanalysis revealed (15 Oct 2019) A desire for clickbait can hinder an academic journal's reputation (4 Oct 2022) Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts (4 Sep 2023)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) Incomprehensibility of much neurogenetics research ( 1 Oct 2016) A common misunderstanding of natural selection (8 Jan 2017) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Review of 'Innate' by Kevin Mitchell ( 15 Apr 2019) Why eugenics is wrong (18 Feb 2020)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (11 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014) Incomprehensibility of much neurogenetics research (1 Oct 2016)

Reproducibility
Accentuate the negative (26 Oct 2011) Novelty, interest and replicability (19 Jan 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Who's afraid of open data? (15 Nov 2015) Blogging as post-publication peer review (21 Mar 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Open code: not just data and publications (6 Dec 2015) Why researchers need to understand poker (26 Jan 2016) Reproducibility crisis in psychology (5 Mar 2016) Further benefit of registered reports (22 Mar 2016) Would paying by results improve reproducibility? (7 May 2016) Serendipitous findings in psychology (29 May 2016) Thoughts on the Statcheck project (3 Sep 2016) When is a replication not a replication? (16 Dec 2016) Reproducible practices are the future for early career researchers (1 May 2017) Which neuroimaging measures are useful for individual differences research?
(28 May 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Citing the research literature: the distorting lens of memory (17 Oct 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Improving reproducibility: the future is with the young (9 Feb 2018) Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018) Preprint publication as karaoke (26 Jun 2018) Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018) Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018) Responding to the replication crisis: reflections on Metascience 2019 (15 Sep 2019) Manipulated images: hiding in plain sight (13 May 2020) Frogs or termites: gunshot or cumulative science? (6 Jun 2020) Open data: We know what's needed - now let's make it happen (27 Mar 2021) A proposal for data-sharing that discourages p-hacking (29 Jun 2022) Can systematic reviews help clean up science? (9 Aug 2022) Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts (4 Sep 2023) Book Review: Unreliable (Csaba Szabo) (Mar 16, 2025) Gold standard science isn't gold standard if it's applied selectively - firearms (Aug 26, 2025) Gold standard science isn't gold standard if it's applied selectively - autism (Aug 27, 2025) Wellcome LEAP's new $50M program (Oct 20, 2025) The dangers of using bibliometrics with polluted data (Nov 21, 2025)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014) Why I still use Excel (1 Sep 2016) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) How Analysis of Variance Works (20 Nov 2017) ANOVA, t-tests and regression: different ways of showing the same thing (24 Nov 2017) Using simulations to understand the importance of sample size (21 Dec 2017) Using simulations to understand p-values (26 Dec 2017) One big study or two small studies? (12 Jul 2018) Time to ditch relative risk in media reports (23 Jan 2020)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011)  Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) Psychology research: hopeless case or pioneering field? (28 Aug 2015) When scientific communication is a one-way street (13 Dec 2016) Time to ditch relative risk in media reports (23 Jan 2020) Book Review. Fiona Fox: Beyond the Hype (12 Apr 2022)

Academic Publishing
Science journal editors: a taxonomy (28 Sep 2010) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013)  A short rant about numbered journal references (5 Apr 2013) Desperate marketing from J. Neuroscience ( 18 Feb 2016) Editorial integrity: publishers on the front line ( 11 Jun 2016) A New Year's letter to academic publishers (4 Jan 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015) Publishing replication failures (11 Jul 2015) Breaking the ice with buxom grapefruits: Pratiques de publication and predatory publishing (25 Jul 2017) Should editors edit reviewers? ( 26 Aug 2018) Corrigendum: a word you may hope never to encounter (3 Aug 2019) Percent by most prolific author score and editorial bias (12 Jul 2020) PEPIOPs – prolific editors who publish in their own publications (16 Aug 2020) Faux peer-reviewed journals: a threat to research integrity (6 Dec 2020) Time for publishers to consider the rights of readers as well as authors (13 Mar 2021) Universities vs Elsevier: who has the upper hand? (14 Nov 2021) We need to talk about editors (6 Sep 2022) So do we need editors? (11 Sep 2022) Reviewer-finding algorithms: the dangers for peer review (30 Sep 2022) A desire for clickbait can hinder an academic journal's reputation (4 Oct 2022) What is going on in Hindawi special issues? (12 Oct 2022) New Year's Eve Quiz: Dodgy journals special (31 Dec 2022) A suggestion for e-Life (20 Mar 2023) Papers affected by misconduct: Erratum, correction or retraction? (11 Apr 2023) Is Hindawi “well-positioned for revitalization?” (23 Jul 2023) The discussion section: Kill it or reform it? 
(14 Aug 2023) Spitting out the AI Gobbledegook sandwich: a suggestion for publishers (2 Oct 2023) The world of Poor Things at MDPI journals (Feb 9 2024) Some thoughts on eLife's New Model: One year on (Mar 27 2024) Does Elsevier's negligence pose a risk to public health? (Jun 20 2024) Collapse of scientific standards at MDPI journals: a case study (Jul 23 2024) My experience as a reviewer for MDPI (Aug 8 2024) Optimizing research integrity investigations: the need for evidence (Aug 22 2024) Now you see it, now you don't: the strange world of disappearing Special Issues at MDPI (Sep 4 2024) Prodding the behemoth with a stick (Sep 14 2024) Using PubPeer to screen editors (Sep 24 2024) An open letter regarding Scientific Reports (Oct 16 2024) What's going on at the Journal of Psycholinguistic Research? (Oct 21, 2024) Finland vs Germany: the case of MDPI (Dec 23, 2024) Tomatoes roaming the fields: another embarrassing paper for MDPI (Jan 18, 2025) IEEE has a pseudoscience problem (Feb 22, 2025) Trouble at t'(review) mill: How MDPI lets down authors (July 21, 2025) New publishing models will only work if authors embrace them (July 31, 2025) Problems with eLife's new article type: Replication studies (Oct 27, 2025) The inner workings of a paper mill (Nov 8, 2025) An open letter to the BMJ editorial board (Jan 5, 2026) An analysis of PubPeer comments on highly-cited retracted articles (Feb 2, 2026) Stealth corrections are still a threat to academic integrity (Feb 20, 2026)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation ( 27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014) Email overload ( 12 Apr 2016) How to survive on Twitter - a simple rule to reduce stress (13 May 2018)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister?
(12 July 2014)  Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014)  Shaky foundations of the TEF (7 Dec 2015) A lamentable performance by Jo Johnson (12 Dec 2015) More misrepresentation in the Green Paper (17 Dec 2015) The Green Paper’s level playing field risks becoming a morass (24 Dec 2015) NSS and teaching excellence: wrong measure, wrongly analysed (4 Jan 2016) Lack of clarity of purpose in REF and TEF ( 2 Mar 2016) Who wants the TEF? ( 24 May 2016) Cost benefit analysis of the TEF ( 17 Jul 2016)  Alternative providers and alternative medicine ( 6 Aug 2016) We know what's best for you: politicians vs. experts (17 Feb 2017) Advice for early career researchers re job applications: Work 'in preparation' (5 Mar 2017) Should research funding be allocated at random? (7 Apr 2018) Power, responsibility and role models in academia (3 May 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) More haste less speed in calls for grant proposals ( 11 Aug 2018) Has the Society for Neuroscience lost its way? ( 24 Oct 2018) The Paper-in-a-Day Approach ( 9 Feb 2019) Benchmarking in the TEF: Something doesn't add up ( 3 Mar 2019) The Do It Yourself conference ( 26 May 2019) A call for funders to ban institutions that use grant capture targets (20 Jul 2019) Research funders need to embrace slow science (1 Jan 2020) Should I stay or should I go: When debate with opponents should be avoided (12 Jan 2020) Stemming the flood of illegal external examiners (9 Feb 2020) What can scientists do in an emergency shutdown? 
(11 Mar 2020) Stepping back a level: Stress management for academics in the pandemic (2 May 2020)
TEF in the time of pandemic (27 Jul 2020) University staff cuts under the cover of a pandemic: the cases of Liverpool and Leicester (3 Mar 2021) Some quick thoughts on academic boycotts of Russia (6 Mar 2022) When there are no consequences for misconduct (16 Dec 2022) Open letter to CNRS (30 Mar 2023) When privacy rules protect fraudsters (Oct 12, 2023) Defence against the dark arts: a proposal for a new MSc course (Nov 19, 2023) An (intellectually?) enriching opportunity for affiliation (Feb 2 2024) Just make it stop! When will we say that further research isn't needed? (Mar 24 2024) Are commitments to open data policies worth the paper they are written on? (May 26 2024) Whistleblowing, research misconduct, and mental health (Jul 1 2024) I don't care about journal impact factors but I do care about visibility (Oct 27, 2024) Why I have resigned from the Royal Society (Nov 25, 2024) Seven reasons for keeping Elon Musk as a Fellow of the Royal Society (Feb 12, 2025) The dangers of using bibliometrics with polluted data (Nov 21, 2025)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019) Low-level lasers. Part 1. Shining a light on an unconventional treatment for autism (Nov 25, 2023) Low-level lasers. Part 2. Erchonia and the universal panacea (Dec 5, 2023)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 June 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014) The alt-right guide to fielding conference questions (18 Feb 2017) We know what's best for you: politicians vs. experts (17 Feb 2017) Barely a good word for Donald Trump in Houses of Parliament (23 Feb 2017) Do you really want another referendum? Be careful what you wish for (12 Jan 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) What is driving Theresa May? (27 Mar 2019) A day out at 10 Downing St (10 Aug 2019) Voting in the EU referendum: Ignorance, deceit and folly (8 Sep 2019) Harry Potter and the Beast of Brexit (20 Oct 2019) Attempting to communicate with the BBC (8 May 2020) Boris bingo: strategies for (not) answering questions (29 May 2020) Linking responsibility for climate refugees to emissions (23 Nov 2021) Response to Philip Ball's critique of scientific advisors (16 Jan 2022) Boris Johnson leads the world... in the number of false facts he can squeeze into a session of PMQs (20 Jan 2022) Some quick thoughts on academic boycotts of Russia (6 Mar 2022) Contagion of the political system (3 Apr 2022) When there are no consequences for misconduct (16 Dec 2022)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014) The rationalist spa (11 Sep 2015) Talking about tax: weasel words (19 Apr 2016) Controversial statues: remove or revise? (22 Dec 2016) The alt-right guide to fielding conference questions (18 Feb 2017) My most popular posts of 2016 (2 Jan 2017) An index of neighbourhood advantage from English postcode data (15 Sep 2018) Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018) New Year's Eve Quiz: Dodgy journals special (31 Dec 2022) Retrospective look at blog highlights of 2024 (Jan 1, 2025)

Friday, 20 February 2026

Guest post: Stealth corrections are still a threat to scientific integrity


Authors

René Aquarius, Floris Schoeters, Alex Glynn, Guillaume Cabanac


An update on stealth corrections

Last year, we published an article describing stealth corrections: a phenomenon in which a publisher makes at least one post-publication change to a scientific article without providing a correction note or any other indicator that the publication was temporarily or permanently altered.


Now, we have expanded our database with newly identified stealth corrections. We also wrote a freely accessible COSIG guide describing how to report stealth corrections in a transparent fashion.


Difficult to pinpoint

Stealth corrections are, by nature, extremely difficult to track down; most are identified by science sleuths who notice a mismatch between different versions of an article. It is impossible to provide a comprehensive overview, and one must assume that we have identified only a small minority of these cases.


For this update we applied the same pragmatic approach to documenting stealth corrections as previously: registering stealth corrections on PubPeer ourselves, asking around within the science sleuthing community, and searching the PubPeer database for terms such as “no erratum”, “no corrigendum”, or “stealth” (repeat the search yourself).


Stealth corrections were further categorized into the following types:

  • Changes in author information (addition or removal of authors, changes in author affiliation, etc.);
  • Changes in content (figures, data or text, etc.);
  • Changes in the record of editorial process (editor name, date of submission, acceptance or publication, etc.);
  • Changes in additional information (ethics statements, conflicts of interest statements, funding information, etc.).

New cases

We found 32 published articles that were affected by stealth corrections in addition to the 131 we had identified last year. An overview of all stealth corrections (#1-163) can be found in the online database, which also contains the links to all accompanying PubPeer posts for additional detail. Table 1 shows the type of correction per publisher for the 32 new cases.


Table 1. Type of correction per publisher for newly identified stealth corrections. 

  Publisher | Changes in additional information | Changes in author information | Changes in content | Changes in the record of editorial process
  ACM | 3 | 0 | 0 | 0
  Am Phytopathological Soc | 0 | 0 | 1 | 0
  CV Literasi Indonesia | 0 | 0 | 1 | 0
  Elsevier | 0 | 0 | 3 | 0
  Impact Journals | 0 | 0 | 1 | 0
  Int Soc Computational Biology | 1 | 0 | 0 | 0
  MDPI | 0 | 0 | 0 | 19
  Oxford University Press | 0 | 1 | 0 | 0
  Springer Nature | 0 | 0 | 1 | 0
  Taylor & Francis | 0 | 0 | 1 | 0

Why are stealth corrections still a thing?

Last year we wrote “post-publication amendments that are made silently, without a visible correction note, will give rise to questions regarding the ethics and integrity of the specific journal, editors and publisher, and might undermine the validity of the published literature as a whole”. Again, we have identified stealth corrections that might be used as a shortcut to ‘repair’ more serious issues. The three publishers with the most stealth corrections in this update were MDPI, Elsevier, and the Association for Computing Machinery (ACM).

MDPI was involved in 19 new stealth corrections. Sixteen cases (#141, #144-158) were registered in August 2024, too late for our initial pre-print and subsequent article on stealth corrections. All of these involved moving articles out of a ‘special issue’ and into a ‘section’. What stands out is that for all 16 of these articles, the special issue editor was also an author. The Directory of Open Access Journals (DOAJ) requires that the proportion of articles co-authored by a special issue editor stay below 25% for each special issue. When it is higher than 25%, the DOAJ can delist the journal for not adhering to best practice, as detailed in its change log. Thus, by silently moving these articles out of special issues, MDPI retroactively lowered this percentage to comply with DOAJ rules, thereby preventing potential delisting of its journals. In September 2024, after publication of our pre-print, MDPI disputed that removing a Special Issue article from the digital SI website can be considered a ‘stealth correction’. The updated correction process (which now includes ‘minor corrections’) may have led MDPI to stop this practice entirely: we have not identified any recent cases, which is an encouraging sign.

In the remaining three cases (#135-137), the name of a peer reviewer was suddenly set to anonymous, while the contents of the peer review reports did not change. According to MDPI, this was done to comply with GDPR requirements. However, it happened only after the reviewer had been identified as part of a review mill. The reviewer claims on PubPeer that they were not involved in writing the peer review reports. These cases show that a request for anonymity can work against transparency and the strengthening of research integrity.

Elsevier was involved in three new stealth corrections, all involving changes in content: in each case an image was silently replaced (#133, #139-141), according to PubPeer reports posted between December 2024 and May 2025. In response to our pre-print, Elsevier stated that they “do not correct articles without a formal notice”. However, in this update we again present clear evidence of major changes to the scientific record that went through without any formal acknowledgement in the form of a correction notice, directly contradicting Elsevier's earlier statement. All of these articles were eventually retracted, but only 4-12 months after the stealth correction was noticed, leaving a substantial window during which readers could interact with these flawed articles without any indication that there might be a problem.

ACM silently made multiple changes to the introductions of three conference proceedings papers written by the conference chair (#159-161). References were removed, and in one case the text was heavily altered. In all three cases a notice of concern was also published, indicating that the peer review process had been compromised and strongly urging readers not to cite the conference papers. It appears that ACM retroactively tried to erase citations to the conference papers, but did so by secretly making all kinds of alterations to the documents, which is far from ideal.

This update shows that some scientific publishers continue to use stealth corrections to change the scientific record. Stealth corrections can undermine the entire enterprise of science. At the level of the individual article, the lack of a transparent correction note makes it unlikely that those who read or cited the original version will learn of the change; at the macro level, the integrity of the published literature as a whole is compromised, as readers can never be certain whether an article has been silently corrected. Meanwhile, there is still no consensus on how corrections should be issued.


Conclusion and recommendations

Stealth corrections are still problematic as they are sometimes used as a shortcut to ‘repair’ other integrity issues. Again, we stress that stealth corrections are notoriously difficult to find and that this update likely only shows chance findings by science sleuths. Correct documentation and transparency are of the utmost importance to uphold scientific integrity and the trustworthiness of science.

We still recommend:

  • Tracking of all changes to the published record by all publishers in an open, uniform and transparent manner, preferably by online submission systems that log every change publicly, making stealth corrections impossible.
  • Clear definitions and guidelines on all types of corrections.
  • Sustained vigilance by the scientific community in publicly registering stealth corrections, now made easier by our COSIG guide.

 

Acknowledgements

We thank Dorothy Bishop for hosting this update on her blog and we thank all (anonymous) science sleuths who have found and reported stealth corrections: your work is much appreciated.

Note from DVMB: Comments are moderated on this blog.  They are usually approved if they are on topic and non-anonymous. 

Monday, 2 February 2026

An analysis of PubPeer comments on highly-cited retracted articles

PubPeer is sometimes discussed as if it is some kind of cesspit where people smear honest scientists with specious allegations of fraud. I'm always taken aback when I hear this, since it is totally at odds with my experience. When I conducted an analysis of PubPeer comments concerning papers from UK universities published over a two-year period, I found that all 345 of them conformed to PubPeer's guidelines, which require comments to contain only "Facts, logic and publicly verifiable information". There were examples where another commenter, sometimes an author, rebutted a comment convincingly. In other cases, the discussion concerned highly technical aspects of research, where even experts may disagree. Clearly, PubPeer comments are not infallible evidence of problems, but in my experience, they are strictly moderated and often draw attention to serious errors in published work.

The Problematic Paper Screener (PPS) is a beautiful resource that is ideal for investigating PubPeer's impact. It not only collates information on articles that have been annulled (an umbrella term coined to encompass retractions, removals, or withdrawals), but also cross-references this information with PubPeer, so you can see which articles have comments. Furthermore, it provides the citation count of each article, based on Dimensions.

The PPS lists over 134,000 annulled papers; I wanted to see what proportion of retractions/withdrawals were preceded by a PubPeer comment. To make the task tractable, I focused on articles that had at least 100 citations, and which were annulled between 2021 and 2025. This gave a total of 800 articles, covering all scientific disciplines. It was necessary to read the PubPeer comments for each of these, because many comments occur after retraction, and serve solely to record the retraction on PubPeer. Accordingly, I coded each paper in terms of whether the first PubPeer comment preceded or followed the annulment.  
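For readers who want to try a similar analysis, the filtering and coding steps described above can be sketched in a few lines of pandas. This is only an illustration: the column names and the toy data below are made up, and a real PPS export cross-referenced with PubPeer dates will use different field names.

```python
import pandas as pd
from io import StringIO

# Toy data standing in for a PPS export; real field names will differ.
csv = StringIO("""doi,citations,annulment_date,first_pubpeer_comment
10.1000/a,150,2022-05-01,2021-11-20
10.1000/b,230,2023-01-15,2023-02-02
10.1000/c,90,2021-08-10,2021-01-05
""")
df = pd.read_csv(csv, parse_dates=["annulment_date", "first_pubpeer_comment"])

# Keep only highly-cited papers (a date filter on annulment_date would
# restrict the sample to a 2021-2025 annulment window in the same way).
sample = df[df["citations"] >= 100].copy()

# Code each paper: did the first PubPeer comment precede the annulment?
sample["prior_comment"] = sample["first_pubpeer_comment"] < sample["annulment_date"]

print(sample[["doi", "prior_comment"]])
print(f"{sample['prior_comment'].mean():.0%} had prior comments")
```

In the toy data, one of the two highly-cited papers has a comment predating its annulment, so the script reports 50%; the real analysis of course requires reading the comments as well, since a comment posted after retraction may merely record the retraction.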

Flowchart of analysis of PPS annulled papers
 

I had anticipated that around 10-20% of these annulled articles would have associated PubPeer comments; this proved to be a considerable underestimate. In fact, 58% of highly-cited papers that were annulled between 2021 and 2025 had prior PubPeer comments. Funnily enough, shortly after I'd started this analysis, I saw this comment on Slack by Achal Agrawal: "I was wondering if there is any study on what percentage of retractions happen thanks to sleuths. I have a feeling that at least around 50% of the retractions happen thanks to the work of 10 sleuths." Achal's estimate of the percentage of flagged papers was much closer than mine. But what about the number of sleuths who were responsible?

It's not possible to give more than a rough estimate of the contribution of individual commenters. Many of them use pseudonyms (some people even use a different pseudonym for each post they submit), and combinations of individuals often contributed comments on a single article. Some of the PubPeer comments had been submitted in the platform's early years, when they were just labelled as "Unregistered submission" or "Peer 1" etc., so any estimate will be imperfect. The best I could do was to focus just on the first comment for each article, excluding any comments occurring after a retraction. Of those who had stable names or pseudonyms, the 10 most prolific commenters had commented on between 9 and 50 articles, accounting for 27% of all retractions in this sample. Although this is a lower proportion than Achal's estimate, it's an impressive number, especially when you bear in mind that there were many comments from unknown contributors, and the analysis focused only on articles with at least 100 citations.

Of course, the naysayers may reply that this just goes to show that the sleuths who comment on articles are effective in causing retractions, not that they are accurate. To that I can only reply that publishers/journals are very reluctant to retract articles: they may regard it as reputationally damaging, and be concerned about litigation from disgruntled authors. In addition, they have to go through due process, and it takes a lot of resources to make the necessary checks and modify the publication record. They don't do it lightly, and often don't do it at all, despite clear evidence of serious error in an article (see, e.g., Grey et al., 2025).

If an article is going to be retracted, it is better that it is done sooner rather than later. Monitoring PubPeer would be a good way of cleaning up a polluted literature - in the interests of all of us. Any publisher can do that for free: just ask an employee of the integrity department to check new PubPeer posts every day—about 40 minutes and you’re done. PubPeer also provides publishers with a convenient dashboard to facilitate this essential monitoring task.

It would be interesting to extend the analysis to less highly-cited papers, but this would be a huge exercise, particularly since this would include many paper-milled articles from mass retractions. I hope that my selective analysis will at least demonstrate that those who comment on problematic articles on PubPeer should be taken seriously. 

 

Post-script: 7 February 2026

One of the commentators with numbered comments below has complained that I am censoring criticism, and has revealed their identity on LinkedIn as Ryan James Jessup, JD/MPA. My bad - I usually paste a statement at the end of a blogpost explaining that comments are moderated so there can be a delay, but I accept non-anonymous comments that are polite and on topic. Jessup didn't take up my offer of incorporating his arguments in a section at the end of the blog, so I have accepted them and you can read them in the Comments.

I actually agree with a lot of what he says, but some points I disagree with, so here are my thoughts.

Points 1-2. He starts by stating the piece confuses correlation with causation. On reflection I think he's right. The word "role" in the title is misleading, and I have accordingly changed the title of the post from "The role of PubPeer in retractions of highly-cited articles" to "An analysis of PubPeer comments on highly-cited retracted articles".

3. He argues that the selection of highly-cited papers was done to fudge the result, because these papers are most likely to be noticed and commented on. The actual reason for selecting these papers was to focus on outputs that had had some influence; many people assume PubPeer commentators just focus on the low-hanging fruit from papermills, which nobody is going to read anyhow. There is nothing to stop Jessup or anyone else doing their own analysis using another filter to see if these results generalise to less highly-cited articles. It involves just a few hours of rather tedious coding. Maybe sample a random 800 articles?

4. He argues that "annulled" papers covers various categories.  I am glad to be able to clarify that in the sample of 800 papers that I analysed, all were retractions.

5. He disagrees that my opinion of whether PubPeer comments were factual and accurate has any value, and argues that they could be defamatory or otherwise falsely imply misconduct. From my experience, I reckon it would be difficult to get such material past PubPeer moderators, but if he can provide some examples, that would be helpful.

6. He says the coding method is subjective: "They read comments and decide whether the first comment preceded or followed annulment". In fact, the date of the retraction notice is provided in the PPS, so this is just a matter of checking whether the PubPeer comment (also dated) appeared before or after that date.

7. Re the identification of "top 10 sleuths".  I noted the limitations inherent in the data, so I am not sure what Jessup is complaining of here. The fact remains that a small number of individuals have been very effective in identifying issues in highly-cited articles prior to their retraction.

8. Jessup argues that I'm saying that “journals don’t retract lightly, therefore PubPeer must be right”. There is ample evidence for the first part of that argument. If he is aware of cases where PubPeer comments have indeed led to inappropriate retractions, then he should name them.

9-11. I do actually have some understanding of how retraction processes work in journals, but my concern is the failure of many journals/publishers to initiate the first step in the process.  I think we're in agreement that the current system for retracting articles from journals is broken. We also agree that PubPeer comments should be regarded as tips. My suggestion is simply that if publishers have a useful free source of tips, they should use it. A few of them do, but many don't seem motivated to be proactive because it just creates more work.

The prolific PubPeer commenters that I know would love it if the platform could be used primarily for civilised academic debate, as was the original intention. Unfortunately, science can't wait until the broken system is repaired; we do need to clean up a polluted literature. I would add that the idea that those who comment on PubPeer are doing it for the glory is laughable. The main reaction is to be ignored at best and abused at worst. They are unusual people who are obsessive about the need to have a reliable scientific literature.