Friday 30 September 2022

Reviewer-finding algorithms: the dangers for peer review

 


Last week many words were written for Peer Review Week, so you might wonder whether there is anything left to say. I may have missed something, but I think I do have a novel take on this, namely to point out that some recent developments in automation may be making journals vulnerable to fake peer review. 

Finding peer reviewers is notoriously difficult these days. Editors are confronted with a barrage of submissions, many outside their area. They can ask authors to recommend peer reviewers, but this raises concerns about malpractice: authors may recommend friends, or even individuals tied up with paper mills, who will write a positive review in return for payment.

One way forward is to harness the power of big data to identify researchers who have a track record of publishing in a given area. Many publishers now use such systems: a journal editor can select from a database of potential reviewers, generated by identifying researchers whose published papers overlap with a given submission.
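I have no inside knowledge of any particular publisher's system, but the core logic is easy enough to sketch: represent each candidate reviewer by the text of their publications, and rank candidates by textual similarity to the new submission. Here is a minimal sketch in Python; all the reviewer names and abstracts are invented for illustration.

```python
# Minimal sketch of algorithmic reviewer-matching: rank candidate
# reviewers by textual overlap between their published abstracts and
# a new submission. All reviewer data here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "Effects of phonological awareness training on early reading."

# In a real system this would be a large bibliographic database.
candidates = {
    "Reviewer A": "Phonological awareness and literacy development in children.",
    "Reviewer B": "Deep learning for protein structure prediction.",
    "Reviewer C": "Reading intervention studies in primary schools: a meta-analysis.",
}

texts = [submission] + list(candidates.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Cosine similarity between the submission (row 0) and each candidate.
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).flatten()
for name, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.2f}")
```

Note what the sketch leaves out: nothing in the ranking asks whether the candidate's publications are genuine, so a profile inflated by a paper mill scores just as well as an honest one.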

I have become increasingly concerned, however, that use of algorithmically based systems might leave a journal vulnerable to fraudulent peer reviewers who have accumulated publications by using paper mills. I became interested in this when submitting work to Wellcome Open Research and F1000, where open peer review is used, but it is the author rather than an editor who selects reviewers. Clearly, with such a system, one needs to be careful to avoid malpractice, and strict criteria are imposed. As explained here, reviewers need to be:
  1. Qualified: typically hold a doctorate (PhD/MD/MBBS or equivalent). 
  2. Expert: have published at least three articles as lead author in a relevant topic, with at least one article having been published in the last five years. 
  3. Impartial: No competing interests and no co-authorship or institutional overlap with current authors. 
  4. Global: geographically diverse and from different institutions. 
  5. Diverse: in terms of gender, geographic location and career stage.
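Criteria like these translate naturally into an automated screen. Here is a toy version of such a check; the field names and thresholds are my own invention, not F1000's actual implementation, and the Global/Diverse criteria, which apply across the reviewer set as a whole, are omitted.

```python
from dataclasses import dataclass

CURRENT_YEAR = 2022  # year of writing

@dataclass
class Candidate:
    has_doctorate: bool
    lead_author_papers_on_topic: int
    most_recent_paper_year: int
    shares_institution_with_authors: bool
    has_competing_interest: bool

def eligible(c: Candidate) -> bool:
    """Naive screen based on the published criteria. Every
    publication-based test can be satisfied by manufactured papers."""
    return (
        c.has_doctorate
        and c.lead_author_papers_on_topic >= 3
        and CURRENT_YEAR - c.most_recent_paper_year <= 5
        and not c.shares_institution_with_authors
        and not c.has_competing_interest
    )

# A profile inflated by a paper mill sails through the screen:
print(eligible(Candidate(True, 137, 2022, False, False)))  # True
```

Nothing in such a screen can distinguish a genuine publication record from a purchased one.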

Unfortunately, now that we have paper mills, which allow authors, for a fee, to generate and publish a large number of fake papers, these criteria are inadequate. Consider the case of Mohammed Sahab Uddin, who features in this account in Retraction Watch. As far as I am aware, he does not have a doctorate*, but I suspect people would be unlikely to query the qualifications of someone who had 137 publications and an H-index of 37. By the criteria above, he would be welcomed as a reviewer from an underrepresented location. And indeed, he was frequently used as a reviewer: Leslie McIntosh, who unmasked Uddin’s deception, noted that before he wiped his Publons profile, he had been listed as a reviewer on 300 papers. 

This is not an isolated case. We are only now beginning to get to grips with the scale of the problem of paper mills. There are undoubtedly many other cases of individuals who are treated as trusted reviewers on the back of fraudulent publications. Once in positions of influence, they can further distort the publication process. As I noted in last week's blogpost, open peer review offers a degree of defence against this kind of malpractice, as readers will at least be able to evaluate the peer review, but it is disturbing to consider how many dubious authors will have already found themselves promoted to positions of influence based on their apparently impressive track record of publishing, reviewing and even editing.

I started to think about how this might interact with other moves to embrace artificial intelligence. A recent piece in Times Higher Education stated: “Research England has commissioned a study of whether artificial intelligence could be used to predict the quality of research outputs based on analysis of journal abstracts, in a move that could potentially remove the need for peer review from the Research Excellence Framework (REF).” This seems to me to be the natural endpoint of the move away from trusting the human brain in the publication process. We could end up with a system where algorithms write the papers, which are attributed to fake authors, peer reviewed by fake peer reviewers, and ultimately evaluated in the Research Excellence Framework by machines. Such a system is likely to be far more successful than mere mortals, as it will be able to rapidly and flexibly adapt to changing evaluation criteria. At that point, we will have dispensed with the need for human academics altogether and have reached peak academia.

 *Correction 30/9/22: Leslie McIntosh tells me he does have a doctorate and was working on a postdoc.

Sunday 11 September 2022

So do we need editors?

It’s been an interesting week in world politics, and I’ve been distracting myself by pondering the role of academic editors. The week kicked off with a rejection of a preprint written with co-author Anna Abalkina, an expert sleuth who tracks down academic paper mills – organisations that will sell you a fake publication in an academic journal. Our paper describes a paper mill that had placed six papers in the Journal of Community Psychology, a journal which celebrated its 50th anniversary in 2021. We had expected rejection, because we had submitted the paper to the Journal of Community Psychology itself, as a kind of stress test to see whether the editor, Michael B. Blank, actually reads the papers he accepts for the journal. I had started to wonder, because you can read his decision letters on Publons, and they are identical for every article he accepts. I suspected he might be an instance of Editoris Machina, or automaton: one who just delegates editorial work to an underling, waits until reviewer reports converge on a recommendation, and then accepts or rejects accordingly without actually reading the paper. I was wrong, though. He did read our paper, and rejected it with the comment that it was a superficial analysis of six papers. We immediately posted it as a preprint and plan to publish it elsewhere.

Although I was quite amused by all of this, it has a serious side. As we note in our preprint, when paper mills succeed in breaching the defences of a journal, this is not a victimless crime. First, it gives a competitive advantage to the authors who paid the paper mill – they do this in order to have a respectable-looking publication that will help their career. I used to think this was a minor benefit, but when you consider that paper mills can also ensure that the papers they place are heavily cited, you start to realise that authors can edge ahead on conventional indicators of academic prestige, while their more honest peers trail behind. The second set of victims are those who publish in the journal in good faith. Once its reputation is damaged by the evidence that there is no quality control, all papers appearing in the journal are tainted by association. The third set of victims are busy academics who are trying to read and integrate the literature, who can get tangled up in the weeds as they try to navigate between useful and useless information. And finally, we need to be concerned about the cynicism induced in the general public when they realise that for some authors and editors, the whole business of academic publishing is a game, which is won not by doing good science, but by paying someone to pretend you have done so.

Earlier this week I shared my thoughts on the importance of ensuring that we have some kind of quality control over journal editors. They are, after all, the gatekeepers of science. When I wrote my taxonomy of journal editors back in 2010, I was already concerned at the times I had to deal with editors who were lazy or superficial in their responses to authors. I had not experienced ‘hands-off’ editors in the early days of my research career, and I wondered how far this was a systematic change over time, or whether it was related to subject area. In the 1970s and 1980s, I mostly published in journals that dealt with psychology and/or language, and the editors were almost always heavily engaged with the paper, adding their own comments and suggesting how reviewer comments might be addressed. That’s how I understood the job when I myself was an editor. But when I moved to publishing work in journals that were more biological (genetics, neuroscience), things seemed different, and it was not uncommon to find editors who really did nothing more than collate peer reviews.

The next change I experienced was when, as a Wellcome-funded researcher, I started to publish in Wellcome Open Research (WOR), which adopts a very different publication model, based on that initiated by F1000. In this model, there is no academic editor. Instead, the journal employs staff who check that the paper complies with rigorous criteria: the proposed peer reviewers must have a track record of publishing and be free of conflicts of interest. Data and other materials must be openly available so that the work can be reproduced. And the peer review is published. The work is listed on PubMed if and when peer reviewers agree that it meets a quality threshold: otherwise the work remains visible, but with its status shown as not approved by peer review.

The F1000/WOR model shows that editors are not needed, but I generally prefer to publish in journals that do have academic editors – provided the editor is engaged and does their job properly. My papers have benefitted from input from a wise and experienced editor on many occasions. In a specialist journal, such an editor will also know who the best reviewers are – those who have the expertise to give a detailed and fair appraisal of the work. However, in the absence of an engaged editor, I prefer the F1000/WOR model, where at least everything is transparent. The worst of all possible worlds is when you have an editor who doesn’t do more than collate peer reviews, but where everything is hidden: the outside world cannot know who the editor was, how decisions were made, who did the reviews, and what they said. Sadly, this latter situation seems to be pretty common, especially in the more biological realms of science. To test my intuitions, I ran a series of little Twitter polls for different disciplines, asking whether the editor who accepted respondents' papers appeared to have not read them, read them superficially, or read them in depth. Results are below.
[Charts: % of respondents stating the editor had Not Read, Read Superficially, or Read in Depth the paper they accepted, by discipline.]
Such polls of course have to be taken with a pinch of salt, as the respondents are self-selected, and the poll allows only very brief questions with no nuance. It is clear that within any one discipline, there is wide variability in editorial engagement. Nevertheless, I find it a matter of concern that in all areas, some respondents had experienced a journal editor who did not appear to have read the paper they had accepted; in areas of biomedicine, neuroscience, and genetics, and also in mega journals, this figure was as high as around 25-33%.

So my conclusion is that it is not surprising that we are seeing phenomena like paper mills, because the gatekeepers of the publication process are not doing their job. The solution would be either to change the culture for editors, or, where that is not feasible, to accept that we can do without editors. But if we go down that route, we should move to a model such as F1000 with much greater quality control over reviewers and COI, and much more openness and transparency.

 As usual comments are welcome: if you have trouble getting past comment moderation, please email me.

Tuesday 6 September 2022

We need to talk about editors


[Cartoon: Editoris spivia]

The role of journal editor is powerful: you decide what is accepted or rejected for publication. Given that publications count as an academic currency – indeed in some institutions they are literally fungible – a key requirement for editors is that they are people of the utmost integrity. Unfortunately, there are few mechanisms in place to ensure editors are honest – and indeed there is mounting evidence that many are not. I argue here that we can no longer take editorial honesty for granted, and systems need to change to weed out dodgy editors if academic publishing is to survive as a useful way of advancing science. In particular, the phenomenon of paper mills has shone a spotlight on editorial malpractice.

Questionable editorial practices

Back in 2010, I described a taxonomy of journal editors based on my own experience as an author over the years. Some were negligent, others were lordly, and others were paragons – the kind of editor we all want, who is motivated solely by a desire for academic excellence, who uses fair criteria to select which papers are published, who aims to help an author improve their work, and provides feedback in a timely and considerate fashion. My categorisation omitted another variety of editor that I have sadly become acquainted with in the intervening years: the spiv. The spiv has limited interest in academic excellence: he or she sees the role of editor as an opportunity for self-advancement. This usually involves promoting the careers of self or friends by facilitating publication of their papers, often with minimal reviewing, and in some cases may go as far as working hand in glove with paper mills to receive financial rewards for placing fraudulent papers.

When I first discovered a publication ring that involved journal editors scratching one another’s backs, in the form of rapid publication of each other’s papers, I assumed this was a rare phenomenon. After I blogged about this, one of the central editors was replaced, but others remained in post. 

I subsequently found journals where the editor-in-chief authored an unusually high percentage of the articles published in the journal. I drew these to the attention of integrity advisors at the publishers involved, but did not get the impression that they regarded this as particularly problematic or were going to take any action about it. Interestingly, there was one editor, George Marcoulides, who featured twice in a list of editors who had authored at least 15 articles in their own journal over a five-year period. Further evidence that he equates his editorial role with omnipotence came when his name cropped up in connection with a scandal in which a reviewer, Fiona Fidler, complained after she found her positive report on a paper had been modified by the editor to justify rejecting the paper: see this Twitter thread for details. It appears that the publishers regard this as acceptable: Marcoulides is still editor-in-chief at the Sage journal Educational and Psychological Measurement, and at Taylor and Francis’ Structural Equation Modeling, though his rate of publishing in both journals has declined since 2019; maybe someone had a word with him to explain that publishing most of your papers in a journal you edit is not a good look.

Scanff et al (2021) did a much bigger investigation of what they termed “self-promotion journals” - those that seemed to be treated as the personal fiefdom of editors, who would use the journal as an outlet for their own work. This followed on from a study by Locher et al (2021), which found editors who were ready to accept papers by a favoured group of colleagues with relatively little scrutiny. This had serious consequences when low-quality studies relating to the Covid-19 pandemic appeared in the literature and subsequently influenced clinical decisions. Editorial laxness appears in this case to have done real harm to public health.

So, it's doubtful that all editors are paragons. And this is hardly surprising: doing a good job as editor is hard and often thankless work. On the positive side, an editor may gain kudos from being granted an influential academic role, but often there is little or no financial reimbursement for the many hours that must be dedicated to reading and evaluating papers, assigning reviewers, and dealing with fallout from authors who react poorly to having their papers rejected. Even if an editor starts off well, they may over time start to think “What’s in this for me?” and decide to exploit the opportunities for self-advancement offered by the position. The problem is that there seems to be little pressure to keep them on the straight and narrow; it is like having a corrupt police chief, with nobody to hold them to account.

Paper mills

Many people are shocked when they read about the phenomenon of academic paper mills – defined in a recent report by the Committee on Publication Ethics (COPE) and the Association of Scientific, Technical and Medical Publishers (STM) as “the process by which manufactured manuscripts are submitted to a journal for a fee on behalf of researchers with the purpose of providing an easy publication for them, or to offer authorship for sale.” The report stated that “the submission of suspected fake research papers, also often associated with fake authorship, is growing and threatens to overwhelm the editorial processes of a significant number of journals.” It concluded with a raft of recommendations to tackle the problem from different fronts: changing the incentives adopted by institutions, investment in tools to detect paper mill publications, education of editors and reviewers to make them aware of paper mills, introduction of protocols to impede paper mills from succeeding, and speeding up the process of retraction by publishers.

However, no attention was given to the possibility that journal editors may contribute to the problem: there is talk of “educating” them to be more aware of paper mills, but this is not going to be effective if the editor is complicit with the paper mill, or so disengaged from editing as to not care about them. 

It’s important to realise that not all paper mill papers are the same. Many generate outputs that look plausible. As Byrne and Labbé (2017) noted, in biomedical genetic studies, fake papers are generated from a template based on a legitimate paper, and just vary in terms of the specific genetic sequence and/or phenotype that is studied. There are so many genetic sequences and phenotypes that the number of possible combinations is immense. In such cases, a diligent editor may get tricked into accepting a fake paper, because the signs of fakery are not obvious and aren’t detected by reviewers. But at the other extreme, some products of paper mills are clearly fabricated. The most striking examples are those that contain what Guillaume Cabanac and colleagues term “tortured phrases”. These appear to be generated by taking segments of genuine articles and running them through an AI app that uses a thesaurus to alter words, with the goal of evading plagiarism detection software. In other cases, the starting point appears to be text from an essay mill. The results are often bizarre and so incomprehensible that one only needs to read a few sentences to know that something is very wrong. Here’s an example from Elsevier’s International Journal of Biological Macromolecules, which those without access can pay $31.50 for (see analysis on PubPeer, here).

"Wound recuperating camwood a chance to be postponed due to the antibacterial reliance of microorganisms concerning illustration an outcome about the infection, wounds are unable to mend appropriately, furthermore, take off disfiguring scares [150]. Chitin and its derivatives go about as simulated skin matrixes that are skilled should push a fast dermal redesign after constantly utilized for blaze treatments, chitosan may be wanton toward endogenous enzymes this may be a fundamental preference as evacuating those wound dressing camwood foundation trauma of the wounds and harm [151]. Chitin and its derivatives would make a perfect gas dressing. Likewise, they dampen the wound interface, are penetrability will oxygen, furthermore, permit vaporous exchange, go about as a boundary with microorganisms, and are fit about eliminating abundance secretions"

And here’s the start of an Abstract from a Springer Nature collection called Modern Approaches in Machine Learning and Cognitive Science (see here for some of the tortured phrases that led to detection of this article). The article can be yours for £19.95:

“Asthma disease are the scatters, gives that influence the lungs, the organs that let us to inhale and it’s the principal visit disease overall particularly in India. During this work, the matter of lung maladies simply like the trouble experienced while arranging the sickness in radiography are frequently illuminated. There are various procedures found in writing for recognition of asthma infection identification. A few agents have contributed their realities for Asthma illness expectation. The need for distinguishing asthma illness at a beginning period is very fundamental and is an exuberant research territory inside the field of clinical picture preparing. For this, we’ve survey numerous relapse models, k-implies bunching, various leveled calculation, characterizations and profound learning methods to search out best classifier for lung illness identification. These papers generally settlement about winning carcinoma discovery methods that are reachable inside the writing.”

These examples are so peculiar that even a layperson could detect the problem. In more technical fields, the fake paper may look superficially normal, but it is easy to spot for anyone who knows the area, and who recognises that the term “signal to noise” does not mean “flag to commotion”, or that while there is such a thing as a “Swiss albino mouse” there is no such thing as a “Swiss pale-skinned person mouse”. These errors are not explicable as failures of translation by someone who does not speak good English. They would be detected by any reviewer with expertise in the field. Another characteristic of paper mill outputs, featured in this recent blogpost, is fake papers that combine tables and figures from different publications in nonsensical contexts.
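It is worth seeing just how mechanical this process is. The sketch below mimics it with a hand-coded, three-entry "thesaurus" built from the examples just mentioned; a real paraphrasing tool would look up synonyms automatically, with equally tone-deaf results.

```python
import re

# Stand-in thesaurus built from the examples above; a real tool would
# query a full thesaurus, swapping words with no regard to context.
THESAURUS = {
    "signal": "flag",
    "noise": "commotion",
    "albino": "pale-skinned person",
}

def torture(text: str) -> str:
    """Blindly replace each word with a 'synonym', ignoring context:
    the signature move behind tortured phrases."""
    return re.sub(r"[A-Za-z]+",
                  lambda m: THESAURUS.get(m.group(0).lower(), m.group(0)),
                  text)

print(torture("signal to noise ratio"))  # flag to commotion ratio
print(torture("Swiss albino mouse"))     # Swiss pale-skinned person mouse
```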

Sleuths who are interested in unmasking paper mills have developed automated methods for identifying such papers – a toy version of the idea is sketched at the end of this paragraph – and the number found is both depressing and astounding. As we have seen, though some of these outputs appear in obscure sources, many crop up in journals or edited collections that are handled by the big scientific publishing houses, such as Springer Nature, Elsevier and Wiley. When sleuths find these cases, they report the problems on the website PubPeer, and this typically raises an incredulous response as to how on earth this material got published. It’s a very good question, and the answer has to be that somehow an editor let this material through. As explained in the COPE&STM report, sometimes a nefarious individual from a paper mill persuades a journal to publish a “special issue” and the unwitting journal is then hijacked and turned into a vehicle for publishing fraudulent work. If the special issue editor poses as a reputable scientist, using a fake email address that looks similar to the real thing, this can be hard to spot.
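The sleuths' actual tools are far more sophisticated than anything I could reproduce here, but the kernel of the idea is simple: once a tortured phrase has been identified, it becomes a fingerprint that can be searched for across the literature at scale. A minimal sketch, using only the two fingerprints from the examples above:

```python
# Kernel of a tortured-phrase screen: search full text for known
# fingerprints. Real screeners maintain thousands of curated phrases;
# this list holds just the two examples discussed above.
FINGERPRINTS = {
    "flag to commotion": "signal to noise",
    "pale-skinned person mouse": "albino mouse",
}

def screen(full_text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, probable original) pairs found in a text."""
    text = full_text.lower()
    return [(bad, good) for bad, good in FINGERPRINTS.items() if bad in text]

suspect_paper = ("We report an improved flag to commotion ratio "
                 "in the Swiss pale-skinned person mouse model.")
for bad, good in screen(suspect_paper):
    print(f"suspect phrase: '{bad}' (probably '{good}')")
```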

But in other cases, with no special issue involved, we see clearcut instances of paper mill outputs that have apparently been approved by a regular journal editor. In a recent preprint, Anna Abalkina and I describe finding putative paper mill outputs in a well-established Wiley journal, the Journal of Community Psychology. Anna identified six papers in the journal in the course of a much larger investigation of papers that came from faked email addresses. For five of them, the peer review and editorial correspondence was available on Publons. The papers, from addresses in Russia or Kazakhstan, were of very low quality and frequently opaque. I had to read and re-read to work out what the paper was about, and still ended up uncertain. The reviewers, however, suggested only minor corrections. They used remarkably similar language to one another, giving the impression that the peer review process had been compromised. Yet the Editor-in-Chief, Michael B. Blank, accepted the papers after minor revisions, with a letter concluding: “Thank you for your fine contribution”.

There are two hypotheses to consider when a journal publishes incomprehensible or trivial material: either the editor was not doing their job of scrutinising material in the journal, or they were in cahoots with a paper mill. I wondered whether the editor was what I have previously termed an automaton – one who just delegates all the work to a secretary. After all, authors are asked to recommend reviewers, so all that is needed is for someone to send out automated requests to review, and then keep going until there are sufficient recommendations to either accept or reject. If that were the case, then maybe the journal would accept a paper by us. Accordingly, we submitted our manuscript about paper mills to the Journal of Community Psychology. But it was desk rejected by the Editor in Chief with a terse comment: “This a weak paper based on a cursory review of six publications”. So we can reject hypothesis 1 – that the editor is an automaton. But that leaves hypothesis 2 – that the editor does read papers submitted to his journal, and had accepted the previous paper mill outputs in full knowledge of their content. This raises more questions than it answers. In particular, why would he risk his personal reputation and that of his journal by behaving that way? But perhaps rather than dwelling on that question, we should think positively about how journals might protect themselves in future from attacks by paper mills.

A call for action

My proposal is that, in addition to the useful suggestions from the COPE&STM report, we need additional steps to ensure that those with editorial responsibility are legitimate and are doing their job. Here are some preliminary suggestions:

  1. Appointment to the post of editor should be made in open competition among academics who meet specified criteria.
  2. It should be transparent who is responsible for final sign-off for each article that is published in the journal.
  3. Journals where a single editor makes the bulk of editorial decisions should be discouraged. (N.B. I looked at the 20 most recent papers in Journal of Community Psychology that featured on Publons and all had been processed by Michael B. Blank).
  4. There should be an editorial board consisting of reputable people from a wide range of institutional backgrounds, who share the editorial load, and meet regularly to consider how the journal is progressing and to discuss journal business.
  5. Editors should be warned about the dangers of special issues and should not delegate responsibility for signing off on any papers appearing in a special issue.
  6. Editors should be required to follow COPE guidelines about publishing in their own journal, and publishers should scrutinise the journal annually to check whether the recommended procedures were followed.
  7. Any editor who allows gibberish to be published in their journal should be relieved of their editorial position immediately.

Many journals run by academic societies already adopt procedures similar to these. Particular problems arise when publishers start up new journals to fill a perceived gap in the market, and there is no oversight by academics with expertise in the area. The COPE&STM report has illustrated how dangerous that can be – both for scientific progress and for the reputation of publishers.

Of course, when one looks at this list of requirements, one may start to wonder why anyone would want to be an editor. Typically there is little financial remuneration, and the work is often done in a person’s “spare time”. So maybe we need to rethink how that works, so that paragons with a genuine ability and interest in editing are rewarded more adequately for the important work they do.

P.S. Comment moderation is enabled for this blog to prevent it being overwhelmed by spam, but I welcome comments, and will check for these in the weeks following the post, and admit those that are on topic. 

 

Comment by Jennifer Byrne, 9th Sept 2022 

(this comment by email, as Blogger seems to eat comments by Jennifer for some reason, while letting through weird spammy things!).

This is a fantastic list of suggestions to improve the important contributions of journal editors. I would add that journal editors should be appointed for defined time periods, and their contributions regularly reviewed. If for any reason it becomes apparent that an editor is not in a position to actively contribute to the journal, they should be asked to step aside. In my experience, editorial boards can include numerous inactive editors. These can provide the appearance of a large, active and diverse editorial board, when in practice, the editorial work may be conducted by a much smaller group, or even one person. Journals cannot be run successfully without a strong editorial team, but such teams require time and resources to establish and maintain.