Tuesday 6 September 2022

We need to talk about editors


[Image caption: Editoris spivia]

The role of journal editor is powerful: you decide what is accepted or rejected for publication. Given that publications count as academic currency – indeed, in some institutions they are literally fungible – a key requirement for editors is that they are people of the utmost integrity. Unfortunately, there are few mechanisms in place to ensure that editors are honest, and there is mounting evidence that many are not. I argue here that we can no longer take editorial honesty for granted, and that systems need to change to weed out dodgy editors if academic publishing is to survive as a useful way of advancing science. The phenomenon of paper mills, in particular, has shone a spotlight on editorial malpractice.

Questionable editorial practices

Back in 2010, I described a taxonomy of journal editors based on my own experience as an author over the years. Some were negligent, others were lordly, and others were paragons – the kind of editor we all want: motivated solely by a desire for academic excellence, using fair criteria to select which papers are published, aiming to help authors improve their work, and providing feedback in a timely and considerate fashion. My categorisation omitted another variety of editor that I have sadly become acquainted with in the intervening years: the spiv. The spiv has limited interest in academic excellence: he or she sees the role of editor as an opportunity for self-advancement. This usually involves promoting their own career, or those of friends, by facilitating publication of their papers, often with minimal reviewing; in some cases it may go as far as working hand in glove with paper mills to receive financial rewards for placing fraudulent papers.

When I first discovered a publication ring that involved journal editors scratching one another’s backs, in the form of rapid publication of each other’s papers, I assumed this was a rare phenomenon. After I blogged about this, one of the central editors was replaced, but others remained in post. 

I subsequently found journals where the editor-in-chief authored an unusually high percentage of the articles published in the journal. I drew these to the attention of integrity advisors of the publishers that were involved, but did not get the impression that they regarded this as particularly problematic or were going to take any action about it. Interestingly, there was one editor, George Marcoulides, who featured twice in a list of editors who authored at least 15 articles in their own journal over a five year period. Further evidence that he equates his editorial role with omnipotence came when his name cropped up in connection with a scandal where a reviewer, Fiona Fidler, complained after she found her positive report on a paper had been modified by the editor to justify rejecting the paper: see this Twitter thread for details. It appears that the publishers regard this as acceptable: Marcoulides is still editor-in-chief at the Sage journal Educational and Psychological Measurement, and at Taylor and Francis’ Structural Equation Modeling, though his rate of publishing in both journals has declined since 2019; maybe someone had a word with him to explain that publishing most of your papers in a journal you edit is not a good look.

Scanff et al (2021) did a much bigger investigation of what they termed “self-promotion journals” - those that seemed to be treated as the personal fiefdom of editors, who would use the journal as an outlet for their own work. This followed on from a study by Locher et al (2021), which found editors who were ready to accept papers by a favoured group of colleagues with relatively little scrutiny. This had serious consequences when low-quality studies relating to the Covid-19 pandemic appeared in the literature and subsequently influenced clinical decisions. Editorial laxness appears in this case to have done real harm to public health.
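To make the "self-promotion journal" idea concrete, the underlying check can be sketched in a few lines of code. The example below is purely illustrative: the journal records and the 10% threshold are invented for the purpose of the example, and Scanff et al used more refined measures, but the core question is simply what share of a journal's articles list the editor-in-chief as an author.

    # Illustrative sketch only: flag journals where the editor-in-chief authors
    # a large share of the published articles. The records and the threshold
    # below are invented; Scanff et al (2021) used more refined metrics.

    def self_publication_rate(articles, editor_name):
        """Proportion of articles that list the editor among the authors."""
        if not articles:
            return 0.0
        authored_by_editor = sum(editor_name in article["authors"] for article in articles)
        return authored_by_editor / len(articles)

    articles = [  # hypothetical journal records, one dict per article
        {"title": "Paper A", "authors": ["E. Chief", "A. Colleague"]},
        {"title": "Paper B", "authors": ["B. Other"]},
        {"title": "Paper C", "authors": ["E. Chief"]},
    ]

    rate = self_publication_rate(articles, "E. Chief")
    if rate > 0.10:  # arbitrary threshold, for illustration only
        print(f"Possible self-promotion journal: editor authors {rate:.0%} of articles")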

So, it's doubtful that all editors are paragons. And this is hardly surprising: doing a good job as editor is hard and often thankless work. On the positive side, an editor may gain kudos from being granted an influential academic role, but there is often little or no financial remuneration for the many hours that must be dedicated to reading and evaluating papers, assigning reviewers, and dealing with fallout from authors who react poorly to having their papers rejected. Even if an editor starts off well, they may over time start to think “What’s in this for me?” and decide to exploit the opportunities for self-advancement that the position offers. The problem is that there is little pressure to keep them on the straight and narrow; as with a corrupt police chief, nobody is there to hold them to account.

Paper mills

Many people are shocked when they read about the phenomenon of academic paper mills – defined in a recent report by the Committee on Publication Ethics (COPE) and the Association of Scientific, Technical and Medical Publishers (STM) as “the process by which manufactured manuscripts are submitted to a journal for a fee on behalf of researchers with the purpose of providing an easy publication for them, or to offer authorship for sale.” The report stated that “the submission of suspected fake research papers, also often associated with fake authorship, is growing and threatens to overwhelm the editorial processes of a significant number of journals.” It concluded with a raft of recommendations to tackle the problem on several fronts: changing the incentives adopted by institutions, investment in tools to detect paper mill publications, education of editors and reviewers to make them aware of paper mills, introduction of protocols to impede paper mills from succeeding, and speeding up the process of retraction by publishers.

However, no attention was given to the possibility that journal editors may contribute to the problem: there is talk of “educating” them to be more aware of paper mills, but this is not going to be effective if the editor is complicit with the paper mill, or so disengaged from editing as to not care about them. 

It’s important to realise that not all paper mill papers are the same. Many generate outputs that look plausible. As Byrne and Labbé (2017) noted, in biomedical genetic studies fake papers are generated from a template based on a legitimate paper, varying only in the specific genetic sequence and/or phenotype that is studied. There are so many genetic sequences and phenotypes that the number of possible combinations is immense. In such cases, a diligent editor may be tricked into accepting a fake paper, because the signs of fakery are not obvious and are not detected by reviewers. But at the other extreme, some products of paper mills are clearly fabricated. The most striking examples are those that contain what Guillaume Cabanac and colleagues term “tortured phrases”. These appear to be generated by taking segments of genuine articles and running them through an AI app that uses a thesaurus to alter words, with the goal of evading plagiarism detection software. In other cases, the starting point appears to be text from an essay mill. The results are often bizarre and so incomprehensible that one only needs to read a few sentences to know that something is very wrong. Here’s an example from Elsevier’s International Journal of Biological Macromolecules, which those without access can pay $31.50 for (see analysis on Pubpeer, here).

"Wound recuperating camwood a chance to be postponed due to the antibacterial reliance of microorganisms concerning illustration an outcome about the infection, wounds are unable to mend appropriately, furthermore, take off disfiguring scares [150]. Chitin and its derivatives go about as simulated skin matrixes that are skilled should push a fast dermal redesign after constantly utilized for blaze treatments, chitosan may be wanton toward endogenous enzymes this may be a fundamental preference as evacuating those wound dressing camwood foundation trauma of the wounds and harm [151]. Chitin and its derivatives would make a perfect gas dressing. Likewise, they dampen the wound interface, are penetrability will oxygen, furthermore, permit vaporous exchange, go about as a boundary with microorganisms, and are fit about eliminating abundance secretions"

And here’s the start of an Abstract from a Springer Nature collection called Modern Approaches in Machine Learning and Cognitive Science (see here for some of the tortured phrases that led to detection of this article). The article can be yours for £19.95:

“Asthma disease are the scatters, gives that influence the lungs, the organs that let us to inhale and it’s the principal visit disease overall particularly in India. During this work, the matter of lung maladies simply like the trouble experienced while arranging the sickness in radiography are frequently illuminated. There are various procedures found in writing for recognition of asthma infection identification. A few agents have contributed their realities for Asthma illness expectation. The need for distinguishing asthma illness at a beginning period is very fundamental and is an exuberant research territory inside the field of clinical picture preparing. For this, we’ve survey numerous relapse models, k-implies bunching, various leveled calculation, characterizations and profound learning methods to search out best classifier for lung illness identification. These papers generally settlement about winning carcinoma discovery methods that are reachable inside the writing.”

These examples are so peculiar that even a layperson could detect the problem. In more technical fields, the fake paper may look superficially normal, but is easy to spot by anyone who knows the area, and who recognises that the term “signal to noise” does not mean “flag to commotion”, or that while there is such a thing as a “Swiss albino mouse” there is no such thing as a “Swiss pale-skinned person mouse”. These errors are not explicable as failures of translation by someone who does not speak good English. They would be detected by any reviewer with expertise in the field. Another characteristic of paper mill outputs, featured in this recent blogpost, is the fake paper that combines tables and figures from different publications in nonsensical contexts.
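Tortured phrases of this kind also lend themselves to automated screening, which at its simplest is a dictionary lookup. The sketch below is a deliberately minimal illustration, not the tooling that sleuths actually use (real screeners rely on large curated lists of tortured phrases and full-text indexes of the literature); the phrase list here is a tiny sample drawn from the examples above.

    # Minimal sketch of tortured-phrase screening. The phrase list is a tiny
    # sample for illustration; real screening tools use large curated
    # dictionaries and search the full text of the published literature.

    TORTURED_PHRASES = {
        "flag to commotion": "signal to noise",
        "counterfeit consciousness": "artificial intelligence",
        "pale-skinned person mouse": "albino mouse",
    }

    def screen_text(text):
        """Return the known tortured phrases found in a manuscript's text."""
        lowered = text.lower()
        return [phrase for phrase in TORTURED_PHRASES if phrase in lowered]

    hits = screen_text("We measured the flag to commotion proportion in each trial.")
    for phrase in hits:
        print(f"Suspect phrase: '{phrase}' (expected: '{TORTURED_PHRASES[phrase]}')")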

Sleuths who are interested in unmasking paper mills have developed automated methods for identifying such papers, and the number found is both depressing and astounding. As we have seen, though some of these outputs appear in obscure sources, many crop up in journals or edited collections handled by the big scientific publishing houses, such as Springer Nature, Elsevier and Wiley. When sleuths find these cases, they report the problems on the website PubPeer, and this typically prompts an incredulous response: how on earth did this material get published? It’s a very good question, and the answer has to be that somehow an editor let this material through. As explained in the COPE & STM report, sometimes a nefarious individual from a paper mill persuades a journal to publish a “special issue”, and the unwitting journal is then hijacked and turned into a vehicle for publishing fraudulent work. If the special issue editor poses as a reputable scientist, using a fake email address that looks similar to the real thing, this can be hard to spot.

But in other cases, we see clearcut instances of paper mill outputs that have apparently been approved by a regular journal editor. In a recent preprint, Anna Abalkina and I describe finding putative paper mill outputs in a well-established Wiley journal, the Journal of Community Psychology. Anna identified six papers in the journal in the course of a much larger investigation of papers that came from faked email addresses. For five of them, the peer review and editorial correspondence were available on Publons. The papers, from addresses in Russia or Kazakhstan, were of very low quality and frequently opaque. I had to read and re-read to work out what each paper was about, and still ended up uncertain. The reviewers, however, suggested only minor corrections. They used remarkably similar language to one another, giving the impression that the peer review process had been compromised. Yet the Editor-in-Chief, Michael B. Blank, accepted the papers after minor revisions, with a letter concluding: “Thank you for your fine contribution”.

There are two hypotheses to consider when a journal publishes incomprehensible or trivial material: either the editor was not doing their job of scrutinising material in the journal, or they were in cahoots with a paper mill. I wondered whether the editor was what I have previously termed an automaton – one who just delegates all the work to a secretary. After all, authors are asked to recommend reviewers, so all that is needed is for someone to send out automated requests to review, and then keep going until there are sufficient recommendations to either accept or reject. If that were the case, then maybe the journal would accept a paper by us. Accordingly, we submitted our manuscript about paper mills to the Journal of Community Psychology. But it was desk rejected by the Editor in Chief with a terse comment: “This a weak paper based on a cursory review of six publications”. So we can reject hypothesis 1 – that the editor is an automaton. But that leaves hypothesis 2 – that the editor does read papers submitted to his journal, and had accepted the previous paper mill outputs in full knowledge of their content. This raises more questions than it answers. In particular, why would he risk his personal reputation and that of his journal by behaving that way? But perhaps rather than dwelling on that question, we should think positively about how journals might protect themselves in future from attacks by paper mills.

A call for action

My proposal is that, in addition to the useful suggestions from the COPE & STM report, we need additional steps to ensure that those with editorial responsibility are legitimate and are doing their job. Here are some preliminary suggestions:

  1. Appointment to the post of editor should be made in open competition among academics who meet specified criteria.
  2. It should be transparent who is responsible for final sign-off for each article that is published in the journal.
  3. Journals where a single editor makes the bulk of editorial decisions should be discouraged. (N.B. I looked at the 20 most recent papers in Journal of Community Psychology that featured on Publons and all had been processed by Michael B. Blank).
  4. There should be an editorial board consisting of reputable people from a wide range of institutional backgrounds, who share the editorial load, and meet regularly to consider how the journal is progressing and to discuss journal business.
  5. Editors should be warned about the dangers of special issues and should not delegate responsibility for signing off on any papers appearing in a special issue.
  6. Editors should be required to follow COPE guidelines about publishing in their own journal, and publishers should scrutinise the journal annually to check whether the recommended procedures were followed.
  7. Any editor who allows gibberish to be published in their journal should be relieved of their editorial position immediately.

Many journals run by academic societies already adopt procedures similar to these. Particular problems arise when publishers start up new journals to fill a perceived gap in the market, and there is no oversight by academics with expertise in the area. The COPE & STM report has illustrated how dangerous that can be – both for scientific progress and for the reputation of publishers.

Of course, when one looks at this list of requirements, one may start to wonder why anyone would want to be an editor. Typically there is little financial remuneration, and the work is often done in a person’s “spare time”. So maybe we need to rethink how that works, so that paragons with a genuine ability and interest in editing are rewarded more adequately for the important work they do.

P.S. Comment moderation is enabled for this blog to prevent it being overwhelmed by spam, but I welcome comments, and will check for these in the weeks following the post, and admit those that are on topic. 

 

Comment by Jennifer Byrne, 9th Sept 2022 

(this comment by email, as Blogger seems to eat comments by Jennifer for some reason, while letting through weird spammy things!).

This is a fantastic list of suggestions to improve the important contributions of journal editors. I would add that journal editors should be appointed for defined time periods, and their contributions regularly reviewed. If for any reason it becomes apparent that an editor is not in a position to actively contribute to the journal, they should be asked to step aside. In my experience, editorial boards can include numerous inactive editors. These can provide the appearance of a large, active and diverse editorial board, when in practice, the editorial work may be conducted by a much smaller group, or even one person. Journals cannot be run successfully without a strong editorial team, but such teams require time and resources to establish and maintain.
