Monday 21 October 2024

What is going on at the Journal of Psycholinguistic Research?

Last week this blog focussed on problems affecting Scientific Reports, a mega-journal published by Springer Nature. This week I look at a journal at the opposite end of the spectrum, the Journal of Psycholinguistic Research (JPR), a small, specialist journal which has published just 2187 papers since it was founded in 1971. This is fewer than Scientific Reports publishes in one year. It was brought to my attention by Anna Abalkina because it shows every sign of having been targeted by one or more Eastern European paper mills.

Now, this was really surprising to me. JPR was founded in 1971 by Robert Rieber, whose obituaries in the New York Times and the American Psychologist confirm he had a distinguished career (though both misnamed JPR!). The Advisory and Editorial boards of the journal are peppered with names of famous linguists and psychologists, starting with Noam Chomsky. So there is a sense that if this can happen to JPR, no journal is safe.

Coincidentally, last week Anna and I submitted revisions for a commentary on paper mills coauthored with Pawel Matusz. (You can read the preprint here.) Pawel is editor of the journal Mind, Brain & Education (MBE), which experienced an attack by the Tanu.pro paper mill involving papers published in 2022-3. In the commentary, we discussed characteristics of the paper mill, which are rather distinctive and quite different from what is seen in basic biomedical or physical sciences. A striking feature is that the IMRaD structure (Introduction, Methods, Results and Discussion) is used, but in a clueless fashion, with these headings inserted into what is otherwise a rambling and discursive piece of text that typically has little or no empirical content. Insofar as any methods are described, they don't occur in the Methods section, and they are too vague for the research to be replicable.

Reading these papers rapidly turns my brain to mush, but in the interest of public service I did wade through five of them and left comments on Pubpeer:  

Yeleussizkyzy, M., Zhiyenbayeva, N., Ushatikova, I. et al. E-Learning and Flipped Classroom in Inclusive Education: The Case of Students with the Psychopathology of Language and Cognition. J Psycholinguist Res 52, 2721–2742 (2023). https://doi.org/10.1007/s10936-023-10015-y  

Snezhko, Z., Yersultanova, G., Spichak, V. et al. Effects of Bilingualism on Students’ Linguistic Education: Specifics of Teaching Phonetics and Lexicology. J Psycholinguist Res 52, 2693–2720 (2023). https://doi.org/10.1007/s10936-023-10016-x

Nurakenova, A., Nagymzhanova, K. A Study of Psychological Features Related to Creative Thinking Styles of University Students. J Psycholinguist Res 53, 1 (2024). https://doi.org/10.1007/s10936-024-10042-3

Auganbayeva, M., Turguntayeva, G., Anafinova, M. et al. Linguacultural and Cognitive Peculiarities of Linguistic Universals. J Psycholinguist Res 53, 3 (2024). https://doi.org/10.1007/s10936-024-10050-3

Shalkarbek, A., Kalybayeva, K., Shaharman, G. et al. Cognitive Linguistic Analysis of Hyperbole-based Phraseological Expressions in Kazakh and English Languages. J Psycholinguist Res 53, 4 (2024). https://doi.org/10.1007/s10936-024-10052-1

My experience with the current batch of papers suggests that a relatively quick way of screening a submitted paper would be to look at the Methods section. This should contain an account of methods that would indicate what was done and how, at a level of detail sufficient for others to replicate the work. Obviously, this is not appropriate for theoretical papers, but for those purporting to report empirical work, it would work well, at least for the papers I looked at in JPR.   

All of these papers have authors from Kazakhstan, sometimes with co-authors from the Russian Federation. This led me to look at the geographic distribution of authors in the journal over time. The top countries represented by JPR authors from 2020 onwards are China (113), United States (68), Iran (52), Germany (28), Saudi Arabia (22) and Kazakhstan (19). However, these composite numbers mask striking trends. All the Kazakhstan-authored papers are from 2023-2024. There is also a notable fall-off in papers by USA-based authors in the same period, with only 11 cases in total. This is quite remarkable for a journal whose authorship was strikingly USA-dominated up until around 2015, as shown in the attached figure (screenshot from Dimensions.ai).

 

Number of papers in JPR from five top countries, 2005-2024. Exported: October 20, 2024. Criteria: Source Title is Journal of Psycholinguistic Research. © 2024 Digital Science and Research Solutions Inc. All rights reserved. Non-commercial redistribution / external re-use of this work is permitted subject to appropriate acknowledgement. This work is sourced from Dimensions® at www.dimensions.ai.

Whenever a paper mill infestation is discovered, it raises the question of how it happened. Surely the whole purpose of peer review is to prevent low-quality or fraudulent material from entering the literature? In other journals where this has happened, the peer review process was found to have been compromised, with fake peer reviewers being used. Even so, one would have hoped that an editor would scrutinize papers and realise something was amiss. As mentioned in the previous blogpost, it would be much easier to track down the ways in which fraudulent papers get into mainstream journals if journals reported information about the editor who handled each paper, and published open peer review.

Whatever the explanation, it is saddening to see a fine journal brought so low. In 2021, at the 50th anniversary of the founding of the journal, the current editor, Rafael Art. Javier, wrote a tribute to his predecessor, Robert Rieber:
"His expectation, as stated in that first issue, was that manuscripts accepted 'must add to knowledge in some way, whether they are in the form of experimental reports, review papers, or theoretical papers...and studies with negative results,' provided that they are of sufficiently high quality to make an original contribution."

Let us hope that the scourge of paper mills can be banished from the journal to allow it to be restored to the status it once had, and for Robert Rieber's words to once more be applicable.

 

Saturday 19 October 2024

Bishopblog catalogue (updated 19 October 2024)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015) Opportunity cost: a new red flag for evaluating interventions (30 Aug 2015) The STEP Physical Literacy programme: have we been here before? (2 Jul 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Developmental language disorder: the need for a clinically relevant definition (9 Jun 2018) Changing terminology for children's language disorders (23 Feb 2020) Developmental Language Disorder (DLD) in relation to DSM5 (29 Feb 2020) Why I am not engaging with the Reading Wars (30 Jan 2022)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019) Biomarkers to screen for autism (again) (6 Dec 2022)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014) The sinister side of French psychoanalysis revealed (15 Oct 2019) A desire for clickbait can hinder an academic journal's reputation (4 Oct 2022) Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts (4 Sep 2023)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) Incomprehensibility of much neurogenetics research ( 1 Oct 2016) A common misunderstanding of natural selection (8 Jan 2017) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Review of 'Innate' by Kevin Mitchell ( 15 Apr 2019) Why eugenics is wrong (18 Feb 2020)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011)  Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (1 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? ( 25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014) Incomprehensibility of much neurogenetics research ( 1 Oct 2016)

Reproducibility
Accentuate the negative (26 Oct 2011) Novelty, interest and replicability (19 Jan 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Who's afraid of open data? (15 Nov 2015) Blogging as post-publication peer review (21 Mar 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Open code: not just data and publications (6 Dec 2015) Why researchers need to understand poker (26 Jan 2016) Reproducibility crisis in psychology (5 Mar 2016) Further benefit of registered reports (22 Mar 2016) Would paying by results improve reproducibility? (7 May 2016) Serendipitous findings in psychology (29 May 2016) Thoughts on the Statcheck project (3 Sep 2016) When is a replication not a replication? (16 Dec 2016) Reproducible practices are the future for early career researchers (1 May 2017) Which neuroimaging measures are useful for individual differences research? (28 May 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Citing the research literature: the distorting lens of memory (17 Oct 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Improving reproducibility: the future is with the young (9 Feb 2018) Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018) Preprint publication as karaoke (26 Jun 2018) Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018) Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018) Responding to the replication crisis: reflections on Metascience 2019 (15 Sep 2019) Manipulated images: hiding in plain sight (13 May 2020) Frogs or termites: gunshot or cumulative science? (6 Jun 2020) Open data: We know what's needed - now let's make it happen (27 Mar 2021) A proposal for data-sharing that discourages p-hacking (29 Jun 2022) Can systematic reviews help clean up science? (9 Aug 2022) Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts (4 Sep 2023)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (1 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014) Why I still use Excel (1 Sep 2016) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) How Analysis of Variance Works (20 Nov 2017) ANOVA, t-tests and regression: different ways of showing the same thing (24 Nov 2017) Using simulations to understand the importance of sample size (21 Dec 2017) Using simulations to understand p-values (26 Dec 2017) One big study or two small studies? (12 Jul 2018) Time to ditch relative risk in media reports (23 Jan 2020)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011)  Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013)  A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015) Publishing replication failures (11 Jul 2015) Psychology research: hopeless case or pioneering field? (28 Aug 2015) Desperate marketing from J. Neuroscience ( 18 Feb 2016) Editorial integrity: publishers on the front line ( 11 Jun 2016) When scientific communication is a one-way street (13 Dec 2016) Breaking the ice with buxom grapefruits: Pratiques de publication and predatory publishing (25 Jul 2017) Should editors edit reviewers? 
( 26 Aug 2018) Corrigendum: a word you may hope never to encounter (3 Aug 2019) Percent by most prolific author score and editorial bias (12 Jul 2020) PEPIOPs – prolific editors who publish in their own publications (16 Aug 2020) Faux peer-reviewed journals: a threat to research integrity (6 Dec 2020) Time to ditch relative risk in media reports (23 Jan 2020) Time for publishers to consider the rights of readers as well as authors (13 Mar 2021) Universities vs Elsevier: who has the upper hand? (14 Nov 2021) Book Review. Fiona Fox: Beyond the Hype (12 Apr 2022) We need to talk about editors (6 Sep 2022) So do we need editors? (11 Sep 2022) Reviewer-finding algorithms: the dangers for peer review (30 Sep 2022) A desire for clickbait can hinder an academic journal's reputation (4 Oct 2022) What is going on in Hindawi special issues? (12 Oct 2022) New Year's Eve Quiz: Dodgy journals special (31 Dec 2022) A suggestion for e-Life (20 Mar 2023) Papers affected by misconduct: Erratum, correction or retraction? (11 Apr 2023) Is Hindawi “well-positioned for revitalization?” (23 Jul 2023) The discussion section: Kill it or reform it? (14 Aug 2023) Spitting out the AI Gobbledegook sandwich: a suggestion for publishers (2 Oct 2023) The world of Poor Things at MDPI journals (Feb 9 2024) Some thoughts on eLife's New Model: One year on (Mar 27 2024) Does Elsevier's negligence pose a risk to public health? (Jun 20 2024) Collapse of scientific standards at MDPI journals: a case study (Jul 23 2024) My experience as a reviewer for MDPI (Aug 8 2024) Optimizing research integrity investigations: the need for evidence (Aug 22 2024) Now you see it, now you don't: the strange world of disappearing Special Issues at MDPI (Sep 4 2024) Prodding the behemoth with a stick (Sep 14 2024) Using PubPeer to screen editors (Sep 24 2024) An open letter regarding Scientific Reports (Oct 16 2024)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation ( 27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014) Email overload ( 12 Apr 2016) How to survive on Twitter - a simple rule to reduce stress (13 May 2018)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister?
(12 July 2014)  Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014)  Shaky foundations of the TEF (7 Dec 2015) A lamentable performance by Jo Johnson (12 Dec 2015) More misrepresentation in the Green Paper (17 Dec 2015) The Green Paper’s level playing field risks becoming a morass (24 Dec 2015) NSS and teaching excellence: wrong measure, wrongly analysed (4 Jan 2016) Lack of clarity of purpose in REF and TEF ( 2 Mar 2016) Who wants the TEF? ( 24 May 2016) Cost benefit analysis of the TEF ( 17 Jul 2016)  Alternative providers and alternative medicine ( 6 Aug 2016) We know what's best for you: politicians vs. experts (17 Feb 2017) Advice for early career researchers re job applications: Work 'in preparation' (5 Mar 2017) Should research funding be allocated at random? (7 Apr 2018) Power, responsibility and role models in academia (3 May 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) More haste less speed in calls for grant proposals ( 11 Aug 2018) Has the Society for Neuroscience lost its way? ( 24 Oct 2018) The Paper-in-a-Day Approach ( 9 Feb 2019) Benchmarking in the TEF: Something doesn't add up ( 3 Mar 2019) The Do It Yourself conference ( 26 May 2019) A call for funders to ban institutions that use grant capture targets (20 Jul 2019) Research funders need to embrace slow science (1 Jan 2020) Should I stay or should I go: When debate with opponents should be avoided (12 Jan 2020) Stemming the flood of illegal external examiners (9 Feb 2020) What can scientists do in an emergency shutdown? 
(11 Mar 2020) Stepping back a level: Stress management for academics in the pandemic (2 May 2020)
TEF in the time of pandemic (27 Jul 2020) University staff cuts under the cover of a pandemic: the cases of Liverpool and Leicester (3 Mar 2021) Some quick thoughts on academic boycotts of Russia (6 Mar 2022) When there are no consequences for misconduct (16 Dec 2022) Open letter to CNRS (30 Mar 2023) When privacy rules protect fraudsters (Oct 12, 2023) Defence against the dark arts: a proposal for a new MSc course (Nov 19, 2023) An (intellectually?) enriching opportunity for affiliation (Feb 2 2024) Just make it stop! When will we say that further research isn't needed? (Mar 24 2024) Are commitments to open data policies worth the paper they are written on? (May 26 2024) Whistleblowing, research misconduct, and mental health (Jul 1 2024)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014) NeuroPointDX's blood test for Autism Spectrum Disorder ( 12 Jan 2019) Low-level lasers. Part 1. Shining a light on an unconventional treatment for autism (Nov 25, 2023) Low-level lasers. Part 2. Erchonia and the universal panacea (Dec 5, 2023)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 June 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014) The alt-right guide to fielding conference questions (18 Feb 2017) We know what's best for you: politicians vs. experts (17 Feb 2017) Barely a good word for Donald Trump in Houses of Parliament (23 Feb 2017) Do you really want another referendum? Be careful what you wish for (12 Jan 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) What is driving Theresa May? (27 Mar 2019) A day out at 10 Downing St (10 Aug 2019) Voting in the EU referendum: Ignorance, deceit and folly (8 Sep 2019) Harry Potter and the Beast of Brexit (20 Oct 2019) Attempting to communicate with the BBC (8 May 2020) Boris bingo: strategies for (not) answering questions (29 May 2020) Linking responsibility for climate refugees to emissions (23 Nov 2021) Response to Philip Ball's critique of scientific advisors (16 Jan 2022) Boris Johnson leads the world ....in the number of false facts he can squeeze into a session of PMQs (20 Jan 2022) Some quick thoughts on academic boycotts of Russia (6 Mar 2022) Contagion of the political system (3 Apr 2022) When there are no consequences for misconduct (16 Dec 2022)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014) The rationalist spa (11 Sep 2015) Talking about tax: weasel words (19 Apr 2016) Controversial statues: remove or revise? (22 Dec 2016) The alt-right guide to fielding conference questions (18 Feb 2017) My most popular posts of 2016 (2 Jan 2017) An index of neighbourhood advantage from English postcode data (15 Sep 2018) Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018) New Year's Eve Quiz: Dodgy journals special (31 Dec 2022)

Wednesday 16 October 2024

An open letter regarding Scientific Reports

16th October 2024 

to: Mr Chris Graf
Research Integrity Director, Springer Nature and Chair Elect of the World Conference on Research Integrity Foundation Governing Board.

 

Dear Mr Graf,

We are a group of sleuths and forensic meta-scientists who are concerned that Springer Nature is failing in its duty to protect the scientific literature from fraudulent and low-quality work. We are aware that, as noted in the 2023 Annual Report, you are committed to maintaining research integrity. We agree with the statement: “To solve the world’s biggest challenges, we all need research that’s reliable, trustworthy and can be built on by scientists and innovators. As a leading global research publisher, we have a pivotal role to play.” It is encouraging to hear that the Springer Nature research integrity group doubled in size in 2023. Nevertheless, we have a growing sense that all is not well concerning the mega-journal Scientific Reports.

Some of the work that has been published is so seriously flawed that it is not credible that it underwent any meaningful form of peer review. In other cases, when we have reported flawed papers to the editor or integrity team, the response has been inadequate. A striking example cropped up last week when a “corrected” version of an article was published in Scientific Reports. This article had been flagged up by Guillaume Cabanac as containing numerous “tortured phrases” that are indicative of fraudulent authors attempting to bypass plagiarism checks; the authors were allowed to “correct” the article by merely removing some (not all) of the tortured phrases. This led some of us to look more closely at the article. As is evident from comments on PubPeer, it turned out to be a kind of case study of all the red flags for fraud that we look for. As well as (still uncorrected) tortured phrases, it contained irrelevant content, irrelevant citations, meaningless gibberish, a nonsensical figure, and material recycled from other publications.

This is perhaps the most flagrant example, but we argue that it indicates problems with your editorial processes that are not going to be fixed by AI. The only ways an article like this can have been published are through editorial negligence or outright malpractice. Negligence would require a remarkable degree of professional incompetence from a handling editor. Malpractice would mean there is a corrupt handling editor who bypasses the peer review process entirely, or willingly appoints corrupt peer reviewers to approve the manuscript. We appreciate that some papers that we and others have reported have been retracted, but in other cases blatantly fraudulent papers can take years to be retracted or to receive any appropriate editorial action.

We have some specific suggestions for actions that Springer Nature could take to address these issues.

  1. Employ a task force of people with the necessary expertise to carry out an urgent audit of all editors of Scientific Reports. We have looked at the editors on your website, and it is clear that this is an enormous task, given that there are over 13,000 of them and they are not listed with disambiguating information such as ORCID iDs. Even so, in a few hours, by cross-checking this list against PubPeer, it was possible to identify the 28 cases listed below, covering a range of disciplines and all, in our view, with pretty clear-cut evidence of problems. Four are members of the Editorial Board. We stress that this is just the low-hanging fruit, which was fairly easy to detect.
  2. The list of problematic articles appended below or tabulated on the Problematic Paper Screener might provide an alternative route to identify editors who should never have been given a gatekeeping role in academic publishing. As well as checking the papers we list below, we recommend that all other articles accepted by the same editors should be scrutinised.
  3. Detection of problematic articles and editors could be helped by requiring open peer review for all journals, and by ensuring that the name and ORCID iD of the handling editor is included with the published metadata for all articles.
We hope these suggestions will be helpful in ensuring that research published in Scientific Reports is reliable and trustworthy.

Yours sincerely

Dorothy Bishop
Guillaume Cabanac
François-Xavier Coudert
René Aquarius
Nick Wise
Lonni Besançon
Simon A.J. Kimber
Anna Abalkina
Rickard Carlsson
Samuel J Westwood
Patricia Murray
Nicholas J. L. Brown
Smut Clyde
Leonid Schneider
Ian Hussey
Tu Duong
Gustav Nilsonne
Jamie Cummins
Alexander Magazinov
Elisabeth Bik
Mu Yang
Corrado Viotti
Sholto David


 

Appendices

1. Some examples of editors with concerning PubPeer entries

Editorial board Ghulam Md Ashraf
Editorial board Eun Bo Shim
Editorial board Ajay Goel
Editorial board Rasoul Kowsar

AGEING Vittorio Calabrese
AGRICULTURE Sudip Mitra
ANALYTICAL CHEMISTRY Syed Ghulam Musharraf
CELL BIOLOGY Gabriella Dobrowolny
CHEMICAL ENGINEERING Enas Taha Sayed
CIVIL ENGINEERING Manoj Khandelwal
CLINICAL ONCOLOGY Marcello Maggiolini
COMPUTATIONAL SCIENCE Praveen Kumar Reddy Maddikunta
DRUG DISCOVERY Salvatore Cuzzocrea
ENDOCRINOLOGY Sihem Boudina
ENVIRONMENTAL ENGINEERING Rama Rao Karri
ENVIRONMENTAL SCIENCE Mayeen Uddin Khandaker
GASTROENTEROLOGY AND HEPATOLOGY Sharon DeMorrow
IMMUNOLOGY Marcin Wysoczynski
INFECTIOUS DISEASES Fatah Kashanchi
MATHEMATICAL PHYSICS Ilyas Khan
MICROBIOLOGY Massimiliano Galdiero
NETWORKS AND COMPLEX SYSTEMS Achyut Shankar
NEUROLOGY Yvan Torrente
RESPIRATORY MEDICINE Soni Savai Pullamsetti
STRUCTURAL AND MOLECULAR BIOLOGY Stefania Galdiero
2. Some examples of problematic articles

https://pubpeer.com/publications/42901FD2901EC917E3EE54B8DBD749#4 (authors claim a correction is underway, but none has been published for 2 years)
https://pubpeer.com/publications/01FE09F1127DF0598985987677A101 (part of a list of many flagged papers from this author group; corrected rather than retracted)
https://pubpeer.com/publications/69EDBAECD50F31B051ECECCD1DF346 (publisher notified about this paper on 31-3-2023; no action so far)
https://pubpeer.com/publications/F8A1AD2B165888A06C18B28C860E7B (Editor-in-Chief contacted in November 2022 with authorship concerns; he responded that he would investigate, but no action has been taken so far)
https://pubpeer.com/publications/286F83F9553D29F82CD4281309A1E4 (has had an Expression of Concern for authorship irregularities since July 2022; no action taken since)
https://pubpeer.com/publications/5BEDDDA9CF92B9CDDD2AB1AA796271 (blatantly nonsensical paper reported to the publisher in June 2024; no action as yet)
https://pubpeer.com/publications/37B87CAC48DE4BC98AD40E00330143 (various corrections since 2022; in February 2023 readers were told "conclusions of this article are being considered by the Editors. A further editorial response will follow the resolution of these issues". Nineteen months later we are still waiting.)


3. Some examples of journal-level reports posted on PubPeer

Scientific Reports

other Springer Nature journals:

Chemosphere

Tuesday 24 September 2024

Using PubPeer to screen editors

 

2023 was the year when academic publishers started to take seriously the threat that paper mills posed to their business. Their research integrity experts have penned various articles about the scale of the problem and the need to come up with solutions (e.g., here and here).  Interested parties have joined forces in an initiative called United2Act. And yet, to outsiders, it looks as though some effective actions are being overlooked. It's hard to tell whether this is the result of timidity, poor understanding, or deliberate foot-dragging from those who have a strong financial conflict of interest.

As I have emphasised before, the gatekeepers to journals are editors. Therefore it is crucial that they are people of the utmost integrity and competence. The growth of mega-journals with hundreds of editors has diluted scrutiny of who gets to be an editor. This has been made worse by the bloating of journals with hundreds of special issues, each handled by "guest editors". We know that paper millers will try to bribe existing editors, place their own operatives as editors or guest editors, use fake reviewers, and stuff articles with irrelevant citations. Stemming this tide of corruption would be one effective way to reduce the contamination of the research literature. Here are two measures I suggest that publishers should take if they seriously want to clean up their journals.

1. Three strikes and you are out. Any editor who has accepted three or more paper-milled papers should be debarred from acting as an editor, and all papers that they have been responsible for accepting should be regarded as suspect. This means retrospectively cleaning up the field by scrutinising the suspect papers and retracting any from authors associated with paper mills, or which are characterised by features suggestive of paper mills, such as tortured phrases, citation stacking, gobbledegook content, fake reviews from reviewers suggested by authors, invalid author email domains, or co-authors who are known to be part of a paper mill ring. All of these are things that any competent editor should be able to detect. I anticipate this would lead to a large number of retractions, particularly from journals with many Special Issues. As well as these simple indicators, we are told that publishers are working hard to develop AI-based checks. They should use these not only to screen new submissions and to retract published papers, but also to identify editors who are allowing this to happen on their watch. It also goes without saying that nobody who has co-authored a paper-milled paper should act as an editor.

2. All candidates for roles as Editor or Guest Editor at a journal should be checked against the post-publication peer review website PubPeer, and rejected if this reveals papers that have attracted credible criticisms suggestive of data fabrication or falsification. This is a far from perfect indicator: only a tiny fraction of authors receive PubPeer comments, and those comments may concern trivial or innocent aspects of a paper. But, as I shall demonstrate, using such a criterion can reveal cases of editorial misconduct.
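The first of these measures is essentially a counting rule, and could be sketched in a few lines of code. The sketch below is purely illustrative, with invented editor and paper identifiers; in practice the hard part is compiling a reliable list of flagged papers in the first place:

```python
# Minimal sketch of the "three strikes" rule: count how many flagged
# (suspected paper-mill) papers each handling editor accepted, and
# debar anyone who reaches the threshold. All names are invented.

from collections import Counter

def editors_to_debar(flagged_papers, strikes=3):
    """flagged_papers: iterable of (paper_id, handling_editor) pairs."""
    counts = Counter(editor for _, editor in flagged_papers)
    return sorted(editor for editor, n in counts.items() if n >= strikes)

flagged = [
    ("paper-01", "Editor A"),
    ("paper-02", "Editor A"),
    ("paper-03", "Editor A"),
    ("paper-04", "Editor B"),
    ("paper-05", "Editor B"),
]
print(editors_to_debar(flagged))  # only Editor A reaches three strikes
```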

I will illustrate how this might work in practice, using the example of the MDPI journal Electronics. This journal came to my attention because it has indicators that all is not well with its Special Issues programme. 

First, in common with nearly all MDPI journals, Electronics has regularly broken the rule that specifies that no more than 25% of articles should be authored by a Guest Editor. As mentioned in a previous post, this is a rule that has come and gone in the MDPI guidelines, but which is clearly stated as a requirement for inclusion in the Directory of Open Access Journals (DOAJ). 13% of Special Issues in Electronics completed in 2023-4 broke this rule**. DOAJ have withdrawn some MDPI journals from their directory for this reason, and I live in hope that they will continue to implement this policy rigorously - which would entail delisting from their Directory the majority of MDPI journals. Otherwise, there is nothing to stop publishers claiming to be adhering to rigorous standards while failing to implement them, making listing in DOAJ an irrelevance.  
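The 25% check itself is trivial to automate, given the article and Guest Editor lists for a Special Issue. Here is a minimal sketch, with made-up author lists and a hypothetical guest editor name:

```python
# Illustrative check of the DOAJ rule that no more than 25% of Special
# Issue articles should be authored by a Guest Editor. Data are invented.

def guest_editor_share(articles, guest_editors):
    """articles: list of author-name lists; returns the fraction of
    articles with at least one Guest Editor among the authors."""
    ge = {name.lower() for name in guest_editors}
    authored = sum(
        1 for authors in articles
        if any(author.lower() in ge for author in authors)
    )
    return authored / len(articles)

articles = [
    ["A. Smith", "B. Jones"],
    ["C. Lee", "G. Editor"],
    ["D. Patel"],
    ["G. Editor", "E. Chen"],
]
share = guest_editor_share(articles, ["G. Editor"])
print(f"{share:.0%}")  # 2 of 4 articles -> 50%, breaching the 25% limit
```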

Even more intriguing, for around 11% of the 517 Special Issues of Electronics published in 2023-4, the Guest Editor doesn't seem to have done any editing. We can tell this because Special Issues are supposed to list who has acted as Academic Editor for each paper. MDPI journals vary in how rigorously they implement that rule - some journals have no record of who was the Academic Editor. But most do, and in most Special Issues, as you might expect, the Guest Editor is the Academic Editor, except for any papers where there is a conflict of interest (e.g. if authors are Guest Editors or are from the same institution as the Guest Editor). Where the Guest Editor cannot act as Academic Editor, the MDPI guidelines state that this role will be taken by a member of the Editorial Board. But, guess what? Sometimes that doesn't happen. To someone with a suspicious frame of mind and a jaundiced view of how paper mills operate, this is a potential red flag.

Accordingly, I decided to check PubPeer comments for individuals in three editorial roles at Electronics for the years 2023-4:

  • Those listed as being in a formal editorial role on the journal website. 
  • Those acting as Guest Editors 
  • Those acting as Academic Editors, despite not being in the other two categories.

For Editors, a PubPeer search by name revealed that 213 of the 931 had one or more comments. That sounds alarming, but cannot be taken at face value, because there are many innocent reasons for this result. The main one is namesakes: this is particularly common with Chinese names, which tend to be less distinctive than Western names. It is therefore important to match PubPeer comments on affiliations as well as names. Using this approach, it was depressingly easy to find instances of Editors who appeared to be associated with paper mills. I will mention just three, to illustrate the kind of evidence that PubPeer provides, but remember, there are many others deserving of scrutiny. 
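The cross-checking step can be sketched as follows. This is a hypothetical illustration with invented names and records, not PubPeer's actual data format or API; the point is simply that a name match alone is not enough, and even a crude affiliation-overlap test filters out most namesakes:

```python
# Hypothetical sketch of screening editors against PubPeer-commented
# authors: require both a name match AND an affiliation match, to weed
# out namesakes. All names, affiliations and URLs below are invented.

def normalise(text):
    """Lower-case and strip punctuation, returning a list of tokens."""
    return "".join(
        ch for ch in text.lower() if ch.isalnum() or ch.isspace()
    ).split()

def affiliation_overlap(a, b, threshold=0.5):
    """Crude token-overlap test between two affiliation strings."""
    ta, tb = set(normalise(a)), set(normalise(b))
    if not ta or not tb:
        return False
    return len(ta & tb) / min(len(ta), len(tb)) >= threshold

def flag_editors(editors, pubpeer_records):
    """Return (editor, url) pairs where name AND affiliation match."""
    flagged = []
    for ed in editors:
        for rec in pubpeer_records:
            if (ed["name"].lower() == rec["author"].lower()
                    and affiliation_overlap(ed["affiliation"],
                                            rec["affiliation"])):
                flagged.append((ed["name"], rec["url"]))
    return flagged

editors = [
    {"name": "Jane Doe",
     "affiliation": "Department of Physics, University of Example"},
    {"name": "Wei Zhang",
     "affiliation": "Institute of Computing, Example City"},
]
pubpeer_records = [
    {"author": "Jane Doe",
     "affiliation": "University of Example, Dept of Physics",
     "url": "https://pubpeer.com/publications/EXAMPLE1"},
    {"author": "Wei Zhang",  # a namesake at a different institution
     "affiliation": "School of Medicine, Another University",
     "url": "https://pubpeer.com/publications/EXAMPLE2"},
]
print(flag_editors(editors, pubpeer_records))
```

Note that the namesake "Wei Zhang" is not flagged, because the affiliations share almost no tokens; only the genuine name-plus-affiliation match survives.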

  • As well as being a section board member of Electronics, Danda B Rawat (Department of Electrical Engineering and Computer Science, Howard University, Washington, DC 20059, USA) is Editor-in-Chief of Journal of Cybersecurity and Privacy, and a section board member of two further MDPI journals: Future Internet and Sensors. A PubPeer search reveals him to be co-author of one paper with tortured phrases, and another where equations make no sense. He is listed as Editor of three MDPI Special Issues: Multimodal Technologies and Interaction: Human Computer Communications and Internet of Things; Sensors: Frontiers in Mobile Multimedia Communications; and Journal of Cybersecurity and Privacy: Applied Cryptography.
  • Aniello Castiglione  (Department of Management & Innovation Systems, University of Salerno, Italy) is Section Board Member of three journals: Electronics, Future Internet, and Journal of Cybersecurity and Privacy, and an Editorial Board member of Sustainability. PubPeer reveals he has co-authored one paper that was recently retracted because of compromised editorial processing, and that his papers are heavily cited in several other articles that appear to be used as vehicles for citation stacking. 
  •  Natalia Kryvinska (Department of Information Systems, Faculty of Management, Comenius University in Bratislava, Slovakia) is a Section Board Member of Electronics. She has co-authored several articles with tortured phrases.

Turning to the 1326 Guest Editors of Special Issues, there were 500 with at least one PubPeer comment, but as before, note that in many cases name disambiguation is difficult, so this will overestimate the problem. Once again, while it may seem invidious to single out specific individuals, it seems important to show the kinds of issues that can be found among those who are put in this important gatekeeping role. 

Finally, let's look at the category of Academic Editors who aren't listed as journal Editors. It's unclear how they are selected and who approves their selection. Again, among those with PubPeer comments, there's a lot to choose from. I'll focus here on three who have been exceptionally busy doing editorial work on several special issues. 

  • Gwanggil Jeon (Incheon National University, Korea) has acted as Academic Editor for 18 Special Issues in Electronics. He is not on the Editorial Board of the journal, but he has been Guest Editor for two special issues in Remote Sensing, and one in Sensors. PubPeer comments note recycled figures and irrelevant references in papers that he has co-authored, as well as a problematic Special Issue that he co-edited for Springer Nature, which led to several retractions.
  • Hamid Reza Karimi (Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy) has acted as Academic Editor for 12 Special Issues in Electronics. He was previously Guest Editor for two Special Issues of Electronics, one of Sensors, one of Micromachines, and one of Machines.  In 2022, he was specifically called out by the IEEE for acting "in violation of the IEEE Principles of Ethical Publishing by artificially inflating the number of citations" for several articles. 
  • Finally, Juan M. Corchado (University of Salamanca, Spain) has acted as Academic Editor for 29 Special Issues. He was picked up by my search as he is not currently listed as being an Editor for Electronics, but that seems to be a relatively recent change: when searching for information, I found this interview from 2023. Thus his role as Academic Editor seems legitimate. Furthermore, as far as PubPeer is concerned, I found only one old comment, concerned with duplicate publication. However, he is notorious for boosting citations to his work by unorthodox means, as described in this article.* I guess we could regard his quiet disappearance from the Editorial Board as a sign that MDPI are genuinely concerned about editors who try to game the system. If so, we can only hope that they employ some experts who can do the kinds of cross-checking that I have described here at scale. If I can find nine dubious editors of one journal in a couple of hours searching, then surely the publisher, with all its financial resources, could uncover many more if they really tried.

Note that many of the editors featured here have quite substantial portfolios of publications. This makes me dubious about MDPI's latest strategy for improving integrity - to use an AI tool to select potential reviewers "from our internal databases with extensive publication records". That seems like an excellent way to keep paper millers in control of the system. 

Although the analysis presented here just scratches the surface of the problem, it would not have been possible without the help of sleuths who made it straightforward to extract the information I needed from the internet. My particular thanks to Pablo Gómez Barreiro, Huanzi and Sholto David.

I want to finish by thanking the sleuths who attempt to decontaminate the literature by posting comments to PubPeer. Without their efforts it would be much harder to keep track of paper millers. The problem is large and growing. Publishers are going to need to invest seriously in employing those with the expertise to tackle this issue. 

 *As I was finalising this piece, this damning update from El Pais appeared. It seems that many retractions of Corchado papers are imminent.  

 I can't keep up.... here's today's news. 


** P.S. 25th Sept 2024. DOAJ inform me that Electronics was removed from their directory in June of this year. 

*** P.P.S. 26th Sept 2024.  Guillaume Cabanac pointed me to this journal-level report on PubPeer, where he noted a high rate of Electronics papers picked up by the Problematic Paper Screener.