Monday, 23 December 2024

Finland vs. Germany: the case of MDPI

It's been an odd week for the academic publisher MDPI. On 16th December, Finland's Publication Forum (known as JUFO) announced that from January 2025 it was downgrading its classification of 271 open access journals to the lowest level, zero. By my calculations, this includes 187 journals from MDPI and 82 from Frontiers, plus 2 others. This is likely to deter Finnish researchers from submitting work to these journals, as the rating indicates they are of low quality. As explained in an article in Times Higher Education, JUFO justified its decision on the grounds that these journals “aim to increase the number of publications with the minimum time spend for editorial work and quality assessment”.

Then the very next day, 17th December, MDPI announced that it had secured a national publishing agreement with ZB Med, offering substantial discounts to authors from over 100 German universities publishing in MDPI journals. On Bluesky, this news was not greeted with the universal joy that ZB Med might have anticipated, drawing comments such as "an embarrassment", "shocking", and "a catastrophe".

To understand the background, it's worth reading a thread on Bluesky by Mark Hanson, one of the authors of a landmark paper entitled "The Strain on Scientific Publishing". This article showed that MDPI and Frontiers stand out from other publishers in showing exponential growth in the number of papers published in recent years, a massive shift to special issues as the vehicle for this increase, and a sharp drop in publication lag from 2016 to 2022.

In their annual report for 2023, MDPI described annualised publication growth of 20.4%. They stated that they have 295,693 academic editors on editorial boards, a median lag of 17 days from receipt of a paper to the first editorial decision, and 6 weeks from submission to publication. It's not clear whether the 17-day figure includes desk rejections, but even if it does, this is remarkably fast. Of course, you could argue (and I'm sure the publishers will argue) that if you are publishing a lot more and doing it faster, you are just being efficient. However, an alternative view, and one that is apparently held by JUFO, is that this speedy processing goes hand in hand with poor editorial quality control.
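To put that growth figure in perspective, here is a quick back-of-the-envelope calculation (my own arithmetic, not a figure from the report): at 20.4% compound annual growth, output doubles roughly every four years.

```python
import math

# Annualised publication growth reported in MDPI's 2023 annual report
growth_rate = 0.204

# Compound growth: (1 + r)^t = 2  =>  t = ln(2) / ln(1 + r)
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time at {growth_rate:.1%} annual growth: {doubling_time:.1f} years")
# -> Doubling time at 20.4% annual growth: 3.7 years
```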

The problem here is that anyone who has worked as a journal editor knows that, while a 17-day turnaround might be a good goal to aim for, it is generally not feasible. There is a limited pool of experts who can do a thorough peer review, and often one has to ask as many as 20 people in order to secure 2 or 3 reviews. So it can take at least a couple of weeks to recruit reviewers, and then it is likely to be another couple of weeks before all of them have completed a comprehensive review. Given these constraints, most editors would be happy to achieve a median time to first decision of 34 days - i.e. double that reported by MDPI. So the sheer speed of decision making - regarded by MDPI as a selling point for authors - is a red flag.
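To make that arithmetic concrete, here is a toy timeline, written in Python for clarity; every number in it (the acceptance rate per invitation, the durations) is my own illustrative assumption, not data from MDPI or any particular journal:

```python
# Toy model of time to first decision under conventional peer review.
# Every number here is an illustrative assumption, not a measured value.
invitations_sent = 20          # requests needed to secure enough reviewers
acceptance_rate = 2.5 / 20     # roughly 2-3 acceptances per 20 invitations
days_to_secure_reviewers = 14  # staggered invitations, waiting for replies
days_for_reviews = 14          # time for all reviewers to report back
days_for_decision = 6          # editor reads the reviews and decides

reviews_obtained = invitations_sent * acceptance_rate
total_days = days_to_secure_reviewers + days_for_reviews + days_for_decision
print(f"Reviews obtained: {reviews_obtained:.1f}")        # 2.5
print(f"Days to first decision: {total_days} (vs. 17 reported by MDPI)")
```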

It seems that speed is achieved by adopting a rather unorthodox process of assigning peer reviewers, where the involvement of an academic editor is optional: "At least two review reports are collected for each submitted article. The academic editor can suggest reviewers during pre-check. Alternatively, MDPI editorial staff will use qualified Editorial Board members, qualified reviewers from our database, or new reviewers identified by web searches for related articles." The impression is that, in order to meet targets, editorial staff will select peer reviewers who can be relied upon to respond immediately to requests to review.

A guest post on this blog by René Aquarius supported these suspicions and further suggested that reviewers who are critical may be sidelined. After writing an initial negative review, René had promptly agreed to review a revision of a paper, but was then told his review was not needed - this is quite remarkable, given that most editors would be delighted if a competent reviewer agreed to do a timely re-review. It's worth looking not just at René's blogpost but also the comments from others, which indicate his experience is not unique.

A further issue concerns the fate of papers receiving negative reviews. René found that the paper he had recommended rejecting popped up as a new submission in another MDPI journal, and after negative reviews there, it was published in a third journal. This raises questions about MDPI's reported rejection rate of around 50%. If each resubmission is counted as a new submission, this paper alone registers two rejections out of three submissions - an apparent rejection rate of 67% - but since the same paper was simply recycled from journal to journal until it was accepted, the effective rejection rate was 0%.
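A toy calculation makes the distinction explicit; the counting scheme here is my own illustration of the scenario René described, not MDPI's published methodology:

```python
# One manuscript recycled through three MDPI journals, as in the case above.
# Counting each resubmission as a fresh submission inflates the apparent
# rejection rate, even though the paper was never kept out of the literature.
submissions = 3   # the same paper, submitted to three journals in turn
rejections = 2    # rejected twice before eventual acceptance

apparent_rate = rejections / submissions   # per-submission accounting
effective_rate = 0 / 1                     # per-paper accounting: it got in
print(f"Apparent rejection rate:  {apparent_rate:.0%}")   # 67%
print(f"Effective rejection rate: {effective_rate:.0%}")  # 0%
```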

One good thing about MDPI is that it gives authors the option of making peer review open. However, analysis of published peer reviews further dents confidence in the rigour of the process. A commentary in Science covering the work of Maria Ángeles Oviedo García noted how some MDPI peer reviews contained repetitive phrases suggesting they were generated from a template. The reviews were superficial and did not engage seriously with the content of the article. In some cases the reviewer freely admitted a lack of expertise in the topic of the article, and in others there was evidence of coercive citation (i.e., authors being told to cite the work of a specific individual, sometimes the editor).

Comments on Reddit cannot, of course, be treated as hard evidence, but they raise further questions about the peer review process at MDPI. Several commenters described recommending rejection as a peer reviewer, only to find that the paper was published with their comments omitted from the open review record. If negative peer review comments are selectively suppressed in the public record, this would be a serious breach of ethical standards by the publisher.

Lack of competent peer review and/or editorial malpractice is also suggested by the publication of papers in MDPI journals that fall well below the threshold of acceptable science. Of course, quality judgements are subjective, and it's common for researchers to complain "How did this get past peer review?" But in the case of MDPI journals, one finds articles that are so poor that they suggest the editor was either corrupt or asleep on the job. I have covered some examples in previous blogposts, here and here.

The Finns are not alone in being concerned about research quality in MDPI journals. The Swiss National Science Foundation did not mention specific publishers by name, but in November 2023 it withdrew funding for articles published in special issues. Since 2023, the Directory of Open Access Journals has withdrawn 19 MDPI journals from its index for "Not adhering to best practice". Details are not provided, but these tend to be journals where guest editors have used special issues to publish a high volume of articles by themselves and close collaborators - another red flag for scientific quality. Yet another issue is citation manipulation, where editors or peer reviewers demand the inclusion of specific references in a revised article in order to boost their own citation counts. In February 2024, China released its latest Early Warning List of journals deemed "untrustworthy, predatory or not serving the Chinese research community’s interests"; it included four MDPI journals, listed for citation manipulation.

A final red flag about MDPI is that it seems remarkably averse to retracting articles. Hindawi Publishing, which was bought by Wiley in 2021, was heavily criticised for allowing a flood of paper-milled articles to be published, but it did subsequently retract over 13,000 of them (just under 5% of 267K articles), before the closure of the brand. A search on Web of Science for documents classified as "retracted publication" or "retraction" and published by "MDPI or MDPI Ag" turned up a mere 582 retractions since 2010, which amounts to 0.04% of the 1.4 million articles listed in the database.
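The two rates can be checked directly from the counts quoted above:

```python
# Retraction rates implied by the figures quoted above.
hindawi_retracted, hindawi_total = 13_000, 267_000
mdpi_retracted, mdpi_total = 582, 1_400_000

hindawi_rate = hindawi_retracted / hindawi_total
mdpi_rate = mdpi_retracted / mdpi_total
print(f"Hindawi: {hindawi_rate:.2%} of articles retracted")   # 4.87%
print(f"MDPI:    {mdpi_rate:.2%} of articles retracted")      # 0.04%
print(f"Hindawi's rate is {hindawi_rate / mdpi_rate:.0f}x higher")  # 117x
```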

I've heard various arguments against JUFO's action: many papers published in MDPI journals are fine; you should judge an article by its content, not by where it is published; authors should be free to prefer speed over scrutiny if they wish. The reason I support JUFO, and think ZB Med was rash to sign an agreement, is that if the peer review process is not managed by experienced and respected academic editors with specialist subject knowledge, then we need to consider the impact not just on individual authors, but on the scientific literature as a whole. If we allow trivia, fatally flawed studies or pseudoscience to be represented as "peer-reviewed", this contaminates the research literature, with adverse consequences for everyone.
