Monday, 23 December 2024

Finland vs. Germany: the case of MDPI

It's been an odd week for the academic publisher MDPI. On 16th December, Finland's Publication Forum (known as JUFO) announced that from January 2025 it was downgrading its classification of 271 open access journals to the lowest level, zero. By my calculations, this includes 187 journals from MDPI and 82 from Frontiers, plus 2 others. This is likely to deter Finnish researchers from submitting work to these journals, as the rating indicates they are of low quality. As explained in an article in the Times Higher Education, JUFO justified its decision on the grounds that these journals “aim to increase the number of publications with the minimum time spend [sic] for editorial work and quality assessment”.

Then the very next day, 17th December, MDPI announced that it had secured a national publishing agreement with ZB Med, which offered substantial discounts to authors from over 100 German universities publishing in MDPI journals. On Bluesky, this news was not greeted with the universal joy that ZB Med might have anticipated, attracting comments such as "an embarrassment", "shocking", and "a catastrophe".

To understand the background, it's worth reading a thread on Bluesky by Mark Hanson, one of the authors of a landmark paper entitled "The Strain on Scientific Publishing". This article showed that MDPI and Frontiers stand out from other publishers in three respects: exponential growth in the number of papers published in recent years, a massive shift to special issues as the vehicle for this increase, and a sharp drop in publication lag between 2016 and 2022.

In their annual report for 2023, MDPI described annualised publication growth of 20.4%. They stated that they have 295,693 academic editors on editorial boards, a median lag of 17 days from receipt of a paper to first editorial decision, and a median of 6 weeks from submission to publication. It's not clear whether the 17-day figure includes desk rejections, but even if it does, this is remarkably fast. Of course, you could argue (and I'm sure the publishers will argue) that if you are publishing a lot more and doing it faster, you are just being efficient. However, an alternative view, and one apparently held by JUFO, is that this speedy processing goes hand in hand with poor editorial quality control.

The problem here is that anyone who has worked as a journal editor knows that, while a 17-day turnaround might be a good goal to aim for, it is generally not feasible. There is a limited pool of experts who can do a thorough peer review, and often one has to ask as many as 20 people in order to secure 2 or 3 reviews. So it can take at least a couple of weeks to find reviewers, and it is then likely to be another couple of weeks before all of them have completed a comprehensive review. Given these constraints, most editors would be happy to achieve a median time to first decision of 34 days - i.e. double that reported by MDPI. So the sheer speed of decision making - regarded by MDPI as a selling point for authors - is a red flag.

It seems that speed is achieved by adopting a rather unorthodox process of assigning peer reviewers, where the involvement of an academic editor is optional: "At least two review reports are collected for each submitted article. The academic editor can suggest reviewers during pre-check. Alternatively, MDPI editorial staff will use qualified Editorial Board members, qualified reviewers from our database, or new reviewers identified by web searches for related articles." The impression is that, in order to meet targets, editorial staff will select peer reviewers who can be relied upon to respond immediately to requests to review.

A guest post on this blog by René Aquarius supported these suspicions and further suggested that reviewers who are critical may be sidelined. After writing an initial negative review, René had promptly agreed to review a revision of a paper, but was then told his review was not needed - this is quite remarkable, given that most editors would be delighted if a competent reviewer agreed to do a timely re-review. It's worth looking not just at René's blogpost but also the comments from others, which indicate his experience is not unique.

A further issue concerns the fate of papers receiving negative reviews. René found that the paper he had rejected popped up as a new submission in another MDPI journal, and after negative reviews there, it was published in a third journal. This raises questions about MDPI's reported rejection rate of around 50%. If each of these resubmissions was counted as a new submission, the rejection rate would appear to be 66% (two rejections out of three submissions), but given that the same paper was recycled from journal to journal until eventual acceptance, the actual rejection rate for that manuscript was 0%.
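The arithmetic here is worth making explicit. A minimal sketch in Python, using a hypothetical single manuscript that is rejected by two journals and accepted by a third (illustrative numbers, not MDPI data):

```python
# Hypothetical counts: one manuscript, submitted to three journals in turn,
# rejected at the first two and accepted at the third.
submissions = 3   # each resubmission counted as a fresh submission
rejections = 2

# Counting each resubmission separately inflates the apparent rejection rate:
apparent_rate = rejections / submissions

# But only one distinct manuscript was involved, and it was ultimately
# published, so no manuscript was actually kept out of the literature:
manuscripts = 1
manuscripts_never_published = 0
effective_rate = manuscripts_never_published / manuscripts

print(f"apparent rejection rate:  {apparent_rate:.1%}")   # 66.7%
print(f"effective rejection rate: {effective_rate:.1%}")  # 0.0%
```

Counting each resubmission as a fresh submission makes the headline rejection rate look healthy, even though no manuscript is ever actually turned away.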

One good thing about MDPI is that it gives authors the option of making peer review open. However, analysis of published peer reviews further dents confidence in the rigour of the peer review process. A commentary in Science covering the work of Maria Ángeles Oviedo García noted how some MDPI peer reviews contained repetitive phrases that suggested they were generated by a template. They were superficial and did not engage seriously with the content of the article. In some cases the reviewer freely admitted a lack of expertise in the topic of the article, and in others, there was evidence of coercive citation (i.e., authors being told to cite the work of a specific individual, sometimes the editor).

Comments on Reddit cannot, of course, be treated as hard evidence, but they raise further questions about the peer review process at MDPI. Several commenters described having recommended rejection as a peer reviewer only to find that the paper was published without their comments. If negative peer review comments are selectively suppressed in the public record, this would be a serious breach of ethical standards by the publisher.

Lack of competent peer review and/or editorial malpractice is also suggested by the publication of papers in MDPI that fall well below the threshold of acceptable science. Of course, quality judgements are subjective, and it's common for researchers to complain "How did this get past peer review?" But in the case of MDPI journals, one finds articles that are so poor that they suggest the editor was either corrupt or asleep on the job. I have covered some examples in previous blogposts, here, and here.

The Finns are not alone in being concerned about research quality in MDPI journals. In November 2023 the Swiss National Science Foundation, without naming specific publishers, withdrew funding for articles published in special issues. Since 2023, the Directory of Open Access Journals has removed 19 MDPI journals from its index for "Not adhering to best practice". Details are not provided, but these tend to be journals where guest editors have used special issues to publish a high volume of articles by themselves and close collaborators - another red flag for scientific quality. Yet another issue is citation manipulation, where editors or peer reviewers demand the inclusion of specific references in a revision of an article in order to boost their own citation counts. In February 2024, China released its latest Early Warning List of journals deemed "untrustworthy, predatory or not serving the Chinese research community’s interests"; it included four MDPI journals listed for citation manipulation.

A final red flag about MDPI is that it seems remarkably averse to retracting articles. Hindawi, which was bought by Wiley in 2021, was heavily criticised for allowing a flood of paper-milled articles to be published, but it did subsequently retract over 13,000 of them (just under 5% of its 267,000 articles) before the brand was closed. A search on Web of Science for documents classified as "retracted publication" or "retraction" and published by "MDPI or MDPI Ag" turned up a mere 582 retractions since 2010, which amounts to 0.04% of the 1.4 million articles listed on the database.
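For anyone who wants to check these percentages, here is the arithmetic in a few lines of Python, using the figures quoted above:

```python
# Retraction rates implied by the figures given in the text.
hindawi_retractions = 13_000
hindawi_articles = 267_000
mdpi_retractions = 582
mdpi_articles = 1_400_000

hindawi_pct = 100 * hindawi_retractions / hindawi_articles  # just under 5%
mdpi_pct = 100 * mdpi_retractions / mdpi_articles           # about 0.04%

print(f"Hindawi retraction rate: {hindawi_pct:.1f}%")
print(f"MDPI retraction rate:    {mdpi_pct:.2f}%")
```

On these figures, Hindawi's retraction rate is more than a hundred times that of MDPI.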

I've heard various arguments against JUFO's action, such as: many papers published in MDPI journals are fine; you should judge an article by its content, not where it is published; and authors should be free to prefer speed over scrutiny if they wish. The reason I support JUFO, and think ZB Med is rash to sign an agreement, is that if the peer review process is not managed by experienced and respected academic editors with specialist subject knowledge, then we need to consider the impact not just on individual authors, but on the scientific literature as a whole. If we allow trivia, fatally flawed studies or pseudoscience to be represented as "peer-reviewed", this contaminates the research literature, with adverse consequences for everyone.

Monday, 25 November 2024

Why I have resigned from the Royal Society


The Royal Society is a venerable institution founded in 1660, whose original members included such eminent men as Christopher Wren, Robert Hooke, Robert Boyle and Isaac Newton. It promotes science in many ways: administering grants, advising government, holding meetings and lectures, and publishing expert reports on scientific matters of public importance.  

There are currently around 1,800 Fellows and Foreign Members of the Royal Society, all elected through a stringent and highly competitive process which includes nomination by two Fellows of the Royal Society (FRS), detailed scrutiny of the candidate's achievements and publications, reports by referees, and consideration by a committee of experts in their broad area of research.  Although most Fellows are elected on the basis of their scientific contributions, others are nominated on the basis of "wider contributions to science, engineering or medicine through leadership, organisation, scholarship or communication".
For many scientists, election to the Royal Society is the pinnacle of their scientific career. It establishes that their achievements are recognised as exceptional, and the title FRS brings immediate respect from colleagues. Of course, things do not always work out as they should. Some Fellows may turn out to have published fraudulent work, or go insane and start promoting crackpot ideas. Although there are procedures that allow a Fellow to be expelled from the Royal Society, I have been told this has not happened for over 150 years. It seems that election as a Fellow of the Royal Society, like loss of virginity, is something that can't readily be reversed.
This brings us, then, to the case of Elon Musk, who was elected as a Fellow of the Royal Society in 2018 on the basis of his technological achievements, notably in space travel and electric vehicle development. Unfortunately, since that time, his interests have extended to using social media for political propaganda, while at the same time battling what he sees as the "woke mind virus" and attacks on free speech. Whereas previously he seemed to agree with mainstream scientific opinion on issues such as climate change and medicine, over the past year or two he's started promoting alternative ideas.
In the summer of 2024, a number of FRSs became concerned at how Musk was using his social media platform (previously Twitter, now termed X) to stir up racial unrest and anti-government sentiment in the UK. Notable tweets by him from this period included incendiary comments and frank misinformation, as documented in this Guardian article.
This led to a number of Fellows expressing dismay that Musk had been elected. There was no formal consultation of the Fellowship but via informal email contacts, a group of 74 Fellows formulated a letter of concern that was sent in early August to the President of the Royal Society, raising doubts as to whether he was "a fit and proper person to hold the considerable honour of being a Fellow of the Royal Society". The letter specifically mentioned the way Musk had used his platform on X to make unjustified and divisive statements that served to inflame right-wing thuggery and racist violence in the UK. 
Somebody (not me!) leaked the letter to the Guardian, who ran a story about it on 23rd August.
I gather that at this point the Royal Society Council opted to consult a top lawyer to determine whether Musk's behaviour breached their Code of Conduct. The problem with this course of action is that if you are uncertain about doing something that seems morally right but may have consequences, then it is easy to find a lawyer who will advise against doing it. That's just how lawyers work. They're paid to rescue people from ethical impulses that may get them into trouble. And, sure enough, the lawyer determined that Musk hadn't breached the Code of Conduct. If you want to see if you agree, you can find the Code of Conduct here.
Many of the signatories of the letter, including me, were unhappy with this response. We set about assembling further evidence of behaviours incompatible with the Code of Conduct. There is a lot of material, which can be broadly divided into two categories, depending on whether it relates to "Scientific conduct" or "Principles".  

On Scientific conduct, the most relevant points from the Code of Conduct are:
2.6. Fellows and Foreign Members shall carry out their scientific research with regard to the Society's statement on research integrity and to the highest standards. 
2.10. Fellows and Foreign Members shall treat all individuals in the scientific enterprise collegially and with courtesy, including supervisors, colleagues, other Society Fellows and Foreign Members, Society staff, students and other early‐career colleagues, technical and clerical staff, and interested members of the public. 
2.11. Fellows and Foreign Members shall not engage in any form of discrimination, harassment, or bullying.
Most of those I've spoken to agree that a serious breach of these principles occurred in 2022, when Musk tweeted: "My pronouns are Prosecute/Fauci", thereby managing simultaneously to offend the LGBTQ community, express an antivaxx sentiment, and put Fauci, already under attack from antivaxxers, at further risk. Fauci was not a Fellow at the time these comments were made, but that should not matter given that the scope of the statement is "individuals in the scientific enterprise". This incident was covered by CBS News.
Now that the US election is over, Musk seems emboldened to ramp up his attacks. On 19th November 2024, he retweeted this to his millions of followers, followed by a compilation of attacks on Fauci on 21st November.

Neuralink
There are also questions about the management of Musk's research project, Neuralink, which involves developing a brain-computer interface to help people who are paralysed. While this is clearly a worthy goal, his approach to conducting research is characterised by a refusal to let anyone interfere with how he does things. This has led to accusations of failure to adhere to regulatory procedures for Good Laboratory Practice. For instance, consider these quotes from this article:
'I think what concerns people is that Neuralink could be cutting corners, and so far nobody has stopped them,' says Nick Ramsey, a clinical neuroscientist at University Medical Center Utrecht, in the Netherlands. 'There’s an incredible push by Neuralink to bypass the conventional research world, and there’s little interaction with academics, as if they think that we’re on the wrong track—that we’re inching forward while they want to leap years forward.'
In response to Musk's claim that no monkey had died because of Neuralink, the Physicians Committee for Responsible Medicine wrote to the SEC, claiming Musk’s comments were false. The group said it had obtained veterinary records from Neuralink’s experiments showing that at least 12 young, healthy monkeys were euthanized as a result of problems with Neuralink’s implant. The group alleged that Musk’s comments are misleading investors, and urged SEC regulators to investigate Musk and Neuralink for securities fraud.
The problems with Neuralink do not stop with the ethics of the animal experiments and the secrecy surrounding them. In a piece in Nature, various scientists were interviewed about the first human trial, which was conducted earlier this year. The main concern was lack of transparency. Human trials are usually registered on ClinicalTrials.gov, which was set up precisely to make it easier to track whether studies had followed a protocol. Musk did not do this. His approach to the human trials again reflects his distaste for any regulations. But the regulations are there for a purpose, and one would expect a Fellow of the Royal Society to abide by them; otherwise we end up with scandals such as Theranos or the stem cell experiments of Macchiarini and Birchall. The ethics of this kind of trial also need careful handling, especially in terms of the patient's understanding of possible adverse effects, their expectations of benefit, and the undertaking of researchers to provide long-term support for the prosthesis.

If we turn to the more general issues that come under Principles, then the Code of Conduct states: 
Fellows and Foreign Members shall not act or fail to act in any way which would undermine the Society's mission or bring the Society into disrepute.
 Here are some examples that I would regard as contrary to the Society's mission.

Promoting vaccine hesitancy
The Royal Society has done good work promoting public understanding of vaccines, as with this blogpost by Charles Bangham FRS. In contrast, as described here, Musk has promoted vaccine conspiracy theories and anti-vaccine views on his platform; one such tweet had 85 million views.
Downplaying the climate emergency
In 2023 Musk played down the seriousness of climate change, and in 2024 he participated in a bizarre interview with Donald Trump, which dismayed climate experts. Among the commenters was Michael Mann, who said “It is sad that Elon Musk has become a climate change denier, but that’s what he is. He’s literally denying what the science has to say here.” Mann was elected as a Foreign Member of the Royal Society in 2024.

Spreading deep fakes and misinformation on X
As recently as 2022, the Royal Society published a report in which Frank Kelly FRS noted the high priority that the Royal Society gives to accurate scientific communication:
The Royal Society’s mission since it was established in 1660 has been to promote science for the benefit of humanity, and a major strand of that is to communicate accurately. But false information is interfering with that goal. It is accused of fuelling mistrust in vaccines, confusing discussions about tackling the climate crisis and influencing the debate about genetically modified crops. 
Earlier this month, Martin McKee wrote in the British Medical Journal:
 Musk’s reason for buying Twitter was to influence the social discourse. And influence he did—by using his enormous platform (203 million followers) to endorse Trump, spread disinformation about voter fraud and deep fakes of Kamala Harris, and amplify conspiracy theories about everything from vaccines to race replacement theory to misogyny.
The most recent development is the announcement that Musk is to be co-director of the new Department of Government Efficiency (DOGE, an allusion to the cryptocurrency Dogecoin) in the Trump administration, with a brief to cut waste and bureaucracy. The future for US science is starting to look bleak, with Musk being given unfettered powers to cut budgets at NIH and NASA, among others. A tweet that he endorsed indicates that, rather than being based on objective evidence, the cuts will fall on those who have criticised Trump, who will find bowdlerised summaries of their work used to generate public outrage. The tweet reads: "Here’s what the U.S. Government wasted $900 Billion of your tax dollars on in 2023. The Department of Government Efficiency (@DOGE) will fix this. America deserves leaders that prioritize sensible spending", before presenting a chart listing items for cuts, with unsourced descriptions of expenditure, including:
  • Dr Fauci's monkey business on NIH's "monkey island":   $33,200,000 
  • NIH's meth-head monkeys:  portion of $12,000,000 
  • Dr Fauci's transgender monkey study: $477,121
I'm sad to say I agree with Alex Wild, Curator of Entomology at the University of Texas at Austin, who wrote a few days ago: "I hope federally funded scientists are preparing for large scale, bad faith attacks by Musk and his troll army. It’s pretty clear the DOGE operation is going to take snippets of grant proposals and papers, present them out of context, and direct weaponized harassment of individual people."

What next?  
I've been told that in the light of the evolving situation, the Royal Society Council will look again at the case of Elon Musk. In conversations I have had with them, they emphasise that they must adhere to their own procedures, which are specified in the Statutes, and which involve a whole series of stages - legal scrutiny, committee evaluation, discussion with the Fellow in question, and ultimately a vote of the Fellowship - before a Fellow or Foreign Member can be expelled. While I agree that if you have a set of rules you should stick to them, I find it telling that nobody has been expelled for over 150 years. It suggests that the Statutes are worded so that it is virtually impossible to do anything about Fellows who breach the Code of Conduct. In effect, the Statutes serve to protect the Royal Society from ever having to take action against one of its Fellows.
In the course of researching this blogpost, I've become intimately familiar with the Code of Conduct, which requires me to "treat all individuals in the scientific enterprise collegially and with courtesy, including ... Foreign Members". I'm not willing to treat Elon Musk "collegially and with courtesy". Any pleasure I may take in the honour of an FRS is diminished by the fact that it is shared with someone who appears to be modelling himself on a Bond villain: a man of immeasurable wealth and power, which he will use to threaten scientists who disagree with him. Accordingly, last week I resigned my FRS. I don't do this in the expectation of having any impact: in the context of over 350 years of Royal Society history, this is just a blip. I just feel far more comfortable being dissociated from an institution that continues to honour this disreputable man.

Note: Comments will be accepted if they are by a named individual, civil, and on topic. They are moderated and there may be a delay before they appear online. 
 
PS. 1st Dec 2024. It seems many people did not read this far and I have deleted a lot of anonymous comments.  I will close this post to comments now, as I think nobody has anything new to say and I don't think anything will be gained by more to and fro.  Thanks for all the support - which outnumbers criticism by about 20:1.

Sunday, 27 October 2024

I don't care about journal impact factors but I do care about visibility

There's been a fair bit of discussion about Clarivate's decision to pause inclusion of eLife publications on the Science Citation Index (e.g. on Research Professional).  What I find exasperating is that most of the discussion focuses on a single consequence - loss of eLife's impact factor.  For authors, there are graver consequences.   

I've reviewed for eLife but never published there; however, I have published a lot in Wellcome Open Research, another journal that aimed to disrupt the traditional publishing model and has some similarities with eLife. Wellcome Open Research has never been included in the Science Citation Index, despite the fact that it uses peer review. It has an unconventional model whereby submitted papers are immediately published, as a kind of preprint, and then updated after peer review. It is true that some papers don't get sufficient approval to proceed to the peer-reviewed stage, but the distinction between those that do and do not pass peer review is clearly flagged on the article. In addition to peer review, Wellcome Open Research maintains some quality control by limiting eligibility to researchers funded by Wellcome.


When Wellcome Open Research started up, all Wellcome-funded researchers were encouraged to publish there. As someone committed to open research, I thought this a great idea. There were no publication charges, and everything was open: the publication, the data, and the peer review. Peer reviewers even get DOIs for their reviews, some of which are worth citing in their own right. I was increasingly adopting open practices, and I think some of my best peer-reviewed work is published there.


I was shocked when I discovered that the journal wasn't included in Web of Science. I remember preparing a progress report for Wellcome and using Web of Science to check I hadn't omitted any publications.  I was puzzled that I seemed to have published far less than I remembered. Then it became clear: everything in Wellcome Open Research was missing. 


I was on the Advisory Board for Wellcome Open Research at the time, and raised this with them. They were shocked that I was upset.  "We thought you of all people didn't care about impact factors", they said. This, of course, was true. But I did care a lot about my work being visible.  I was also aware that any WOS-based H-index would exclude all the papers listed below: not a big deal for me, but potentially harmful to junior authors.  


The reply I got was similar to the argument now being made by eLife - well, the articles are indexed in Google Scholar and PubMed. That was little consolation to me, given that I had relied heavily on Web of Science in my own literature searches, believing that it screened out dodgy journals. (This belief turns out to be false - there are many very low-quality journals featured in WoS - which just rubs salt into the wound.)


I have some criticisms of eLife's publishing model, but I would like them to succeed. We urgently need alternatives to the traditional journal model operated by the big commercial publishers.  Their response to the open access movement has been to monetise it, with catastrophic consequences for science, as an unlimited supply of shoddy and fake work gets published - often in journals that are indexed in Web of Science.


I agree that we need an index of published academic work that has some quality control.  Whether alternatives like OpenAlex will do the job remains to be seen. 


Papers that aren't indexed on Web of Science

Bishop, D. V. M., & Bates, T. C. (2020). Heritability of language laterality assessed by functional transcranial Doppler ultrasound: A twin study. Wellcome Open Research, 4, 161. https://doi.org/10.12688/wellcomeopenres.15524.3


Bishop, D. V. M., Brookman-Byrne, A., Gratton, N., Gray, E., Holt, G., Morgan, L., Morris, S., Paine, E., Thornton, H., & Thompson, P. A. (2019). Language phenotypes in children with sex chromosome trisomies. Wellcome Open Research, 3, 143. https://doi.org/10.12688/wellcomeopenres.14904.2


Bishop, D. V. M., Grabitz, C. R., Harte, S. C., Watkins, K. E., Sasaki, M., Gutierrez-Sigut, E., MacSweeney, M., Woodhead, Z. V. J., & Payne, H. (2021). Cerebral lateralisation of first and second languages in bilinguals assessed using functional transcranial Doppler ultrasound. Wellcome Open Research, 1, 15. https://doi.org/10.12688/wellcomeopenres.9869.2


Frizelle, P., Thompson, P. A., Duta, M., & Bishop, D. V. M. (2019). The understanding of complex syntax in children with Down syndrome. Wellcome Open Research, 3, 140. https://doi.org/10.12688/wellcomeopenres.14861.2


Newbury, D. F., Simpson, N. H., Thompson, P. A., & Bishop, D. V. M. (2018). Stage 1 Registered Report: Variation in neurodevelopmental outcomes in children with sex chromosome trisomies: protocol for a test of the double hit hypothesis. Wellcome Open Research, 3, 10. https://doi.org/10.12688/wellcomeopenres.13828.2


Newbury, D. F., Simpson, N. H., Thompson, P. A., & Bishop, D. V. M. (2021). Stage 2 Registered Report: Variation in neurodevelopmental outcomes in children with sex chromosome trisomies: testing the double hit hypothesis. Wellcome Open Research, 3, 85. https://doi.org/10.12688/wellcomeopenres.14677.4


Pritchard, V. E., Malone, S. A., Burgoyne, K., Heron-Delaney, M., Bishop, D. V. M., & Hulme, C. (2019). Stage 2 Registered Report: There is no appreciable relationship between strength of hand preference and language ability in 6- to 7-year-old children. Wellcome Open Research, 4, 81. https://doi.org/10.12688/wellcomeopenres.15254.1


Thompson, P. A., Bishop, D. V. M., Eising, E., Fisher, S. E., & Newbury, D. F. (2020). Generalized Structured Component Analysis in candidate gene association studies: Applications and limitations. Wellcome Open Research, 4, 142. https://doi.org/10.12688/wellcomeopenres.15396.2


Wilson, A. C., & Bishop, D. V. M. (2019). ‘If you catch my drift...’: Ability to infer implied meaning is distinct from vocabulary and grammar skills. Wellcome Open Research, 4, 68. https://doi.org/10.12688/wellcomeopenres.15210.3


Wilson, A. C., King, J., & Bishop, D. V. M. (2019). Autism and social anxiety in children with sex chromosome trisomies: An observational study. Wellcome Open Research, 4, 32. https://doi.org/10.12688/wellcomeopenres.15095.2


Woodhead, Z. V. J., Rutherford, H. A., & Bishop, D. V. M. (2020). Measurement of language laterality using functional transcranial Doppler ultrasound: A comparison of different tasks. Wellcome Open Research, 3, 104. https://doi.org/10.12688/wellcomeopenres.14720.3

Monday, 21 October 2024

What is going on at the Journal of Psycholinguistic Research?

Last week this blog focussed on problems affecting Scientific Reports, a mega-journal published by Springer Nature. This week I look at a journal at the opposite end of the spectrum: the Journal of Psycholinguistic Research (JPR), a small, specialist journal that has published just 2,187 papers since it was founded in 1971 - fewer than Scientific Reports publishes in one year. It was brought to my attention by Anna Abalkina because it shows every sign of having been targeted by one or more Eastern European paper mills.

Now, this was really surprising to me. JPR was founded in 1971 by Robert Rieber, whose obituaries in the New York Times  and the American Psychologist confirm he had a distinguished career (though both misnamed JPR!). The Advisory and Editorial boards of the journal are peppered with names of famous linguists and psychologists, starting with Noam Chomsky. So there is a sense that if this can happen to JPR, no journal is safe.

Coincidentally, last week Anna and I submitted revisions for a commentary on paper mills, coauthored with Pawel Matusz. (You can read the preprint here.) Pawel is editor of the journal Mind, Brain & Education (MBE), which experienced an attack by the Tanu.pro paper mill involving papers published in 2022-23. In the commentary, we discussed characteristics of the paper mill's output, which are rather distinctive and quite different from what is seen in basic biomedical or physical sciences. A striking feature is that the IMRaD structure (Introduction, Methods, Results and Discussion) is used, but in a clueless fashion, with these headings inserted into what is otherwise a rambling and discursive piece of text that typically has little or no empirical content. Insofar as any methods are described, they don't appear in the Methods section, and they are too vague for the research to be replicable.

Reading these papers rapidly turns my brain to mush, but in the interest of public service I waded through five of them and left comments on PubPeer:

Yeleussizkyzy, M., Zhiyenbayeva, N., Ushatikova, I. et al. E-Learning and Flipped Classroom in Inclusive Education: The Case of Students with the Psychopathology of Language and Cognition. J Psycholinguist Res 52, 2721–2742 (2023). https://doi.org/10.1007/s10936-023-10015-y  

Snezhko, Z., Yersultanova, G., Spichak, V. et al. Effects of Bilingualism on Students’ Linguistic Education: Specifics of Teaching Phonetics and Lexicology. J Psycholinguist Res 52, 2693–2720 (2023). https://doi.org/10.1007/s10936-023-10016-x

Nurakenova, A., Nagymzhanova, K. A Study of Psychological Features Related to Creative Thinking Styles of University Students. J Psycholinguist Res 53, 1 (2024). https://doi.org/10.1007/s10936-024-10042-3

Auganbayeva, M., Turguntayeva, G., Anafinova, M. et al. Linguacultural and Cognitive Peculiarities of Linguistic Universals. J Psycholinguist Res 53, 3 (2024). https://doi.org/10.1007/s10936-024-10050-3

Shalkarbek, A., Kalybayeva, K., Shaharman, G. et al. Cognitive Linguistic Analysis of Hyperbole-based Phraseological Expressions in Kazakh and English Languages. J Psycholinguist Res 53, 4 (2024). https://doi.org/10.1007/s10936-024-10052-1

My experience with the current batch of papers suggests that a relatively quick way of screening a submitted paper would be to look at the Methods section. This should contain an account of methods that would indicate what was done and how, at a level of detail sufficient for others to replicate the work. Obviously, this is not appropriate for theoretical papers, but for those purporting to report empirical work, it would work well, at least for the papers I looked at in JPR.   

All of these papers have authors from Kazakhstan, sometimes with co-authors from the Russian Federation. This led me to look at the geographic distribution of authors in the journal over time. The top countries represented by JPR authors from 2020 onwards are China (113), United States (68), Iran (52), Germany (28), Saudi Arabia (22) and Kazakhstan (19). However, these composite numbers mask striking trends. All the Kazakhstan-authored papers are in 2023-2024. There's also a notable fall-off in papers by USA-based authors in the same period, with only 11 in total. This is quite remarkable for a journal that had a striking USA dominance in authors up until around 2015, as shown in the attached figure (screenshot from Dimensions.ai).

 

[Figure: Number of papers in JPR from five top countries, 2005-2024. Exported 20 October 2024 from Dimensions.ai; © 2024 Digital Science and Research Solutions Inc.]

Whenever a paper mill infestation is discovered, it raises the question of how it happened. Surely the whole purpose of peer review is to prevent low quality or fraudulent material entering the literature? In other journals where this has happened it has been found that the peer review process was compromised, with fake peer reviewers being used. Even so, one would have hoped that an editor would scrutinize papers and realise something was amiss. As mentioned in the previous blogpost, it would be much easier to track down the ways in which fraudulent papers get into mainstream journals if the journal reported information about the editor who handled the paper, and published open peer review.

Whatever the explanation, it is saddening to see a fine journal brought so low. In 2021, at the 50th anniversary of the founding of the journal, the current editor, Rafael Art. Javier, wrote a tribute to his predecessor, Robert Rieber:
"His expectation, as stated in that first issue, was that manuscripts accepted 'must add to knowledge in some way, whether they are in the form of experimental reports, review papers, or theoretical papers...and studies with negative results,' provided that they are of sufficiently high quality to make an original contribution."

Let us hope that the scourge of paper mills can be banished from the journal to allow it to be restored to the status it once had, and for Robert Rieber's words to once more be applicable.

 

Wednesday, 16 October 2024

An open letter regarding Scientific Reports

16th October 2024 

to: Mr Chris Graf
Research Integrity Director, Springer Nature and Chair Elect of the World Conference on Research Integrity Foundation Governing Board.

 

Dear Mr Graf,

We are a group of sleuths and forensic meta-scientists who are concerned that Springer Nature is failing in its duty to protect the scientific literature from fraudulent and low quality work. We are aware that, as noted in the 2023 Annual Report, you are committed to maintaining research integrity. We agree with the statement: “To solve the world’s biggest challenges, we all need research that’s reliable, trustworthy and can be built on by scientists and innovators. As a leading global research publisher, we have a pivotal role to play.” It is encouraging to hear that the Springer Nature research integrity group doubled in size in 2023. Nevertheless, we have a growing sense that all is not well concerning the mega journal Scientific Reports.

Some of the work that has been published is so seriously flawed that it is not credible that it underwent any meaningful form of peer review. In other cases, when we have reported flawed papers to the editor or integrity team, the response has been inadequate. A striking example cropped up last week when a “corrected” version of an article was published in Scientific Reports. This article had been flagged up by Guillaume Cabanac as containing numerous “tortured phrases” that are indicative of fraudulent authors attempting to bypass plagiarism checks; the authors were allowed to “correct” the article by merely removing some (not all) of the tortured phrases. This led some of us to look more closely at the article. As is evident from comments on PubPeer, it turned out to be a kind of case study of all the red flags for fraud that we look for. As well as (still uncorrected) tortured phrases, it contained irrelevant content, irrelevant citations, meaningless gibberish, a nonsensical figure, and material recycled from other publications.

This is perhaps the most flagrant example, but we argue that it indicates problems with your editorial processes that are not going to be fixed by AI. The only ways an article like this can have been published are either through editorial negligence or outright malpractice. For it to be negligence would require a remarkable degree of professional incompetence from a handling editor. The possibility of malpractice, would mean there is a corrupt handling editor who bypasses the peer review process entirely or willingly appoints corrupt peer reviewers to approve the manuscript. We appreciate that some papers that we and others have reported have been retracted, but in other cases blatantly fraudulent papers can take years to be retracted or to receive any appropriate editorial action.

We have some specific suggestions for actions that Springer Nature could take to address these issues.

  1. Employ a task force of people with the necessary expertise to carry out an urgent audit of all editors of Scientific Reports. We have looked at the editors on your website, and it is clear that this is an enormous task, given that there are over 13,000 of them, and they are not listed with disambiguating information such as ORCID iDs. Even so, in a few hours, by cross-checking this list against PubPeer, it was possible to identify the 28 cases listed below, covering a range of disciplines, and all, in our view, with pretty clear-cut evidence of problems. Four are members of the Editorial Board. We stress that this is just the low-hanging fruit which was fairly easy to detect.
  2. The list of problematic articles appended below or tabulated on the Problematic Paper Screener might provide an alternative route to identify editors who should never have been given a gatekeeping role in academic publishing. As well as checking the papers we list below, we recommend that all other articles accepted by the same editors should be scrutinised.
  3. Detection of problematic articles and editors could be helped by requiring open peer review for all journals, and ensuring that the name and ORCID iD of the handling editor is included with the published metadata for all articles.
We hope these suggestions will be helpful in ensuring that research published in Scientific Reports is reliable and trustworthy.

Yours sincerely

Dorothy Bishop
Guillaume Cabanac
François-Xavier Coudert
René Aquarius
Nick Wise
Lonni Besançon
Simon A.J. Kimber
Anna Abalkina
Rickard Carlsson
Samuel J Westwood
Patricia Murray
Nicholas J. L. Brown
Smut Clyde
Leonid Schneider
Ian Hussey
Tu Duong
Gustav Nilsonne
Jamie Cummins
Alexander Magazinov
Elisabeth Bik
Mu Yang
Corrado Viotti
Sholto David


 

Appendices

1. Some examples of editors with concerning PubPeer entries

Editorial board Ghulam Md Ashraf
Editorial board Eun Bo Shim
Editorial board Ajay Goel
Editorial board Rasoul Kowsar

AGEING Vittorio Calabrese
AGRICULTURE Sudip Mitra
ANALYTICAL CHEMISTRY Syed Ghulam Musharraf
CELL BIOLOGY Gabriella Dobrowolny
CHEMICAL ENGINEERING Enas Taha Sayed
CIVIL ENGINEERING Manoj Khandelwal
CLINICAL ONCOLOGY Marcello Maggiolini
COMPUTATIONAL SCIENCE Praveen Kumar Reddy Maddikunta
DRUG DISCOVERY Salvatore Cuzzocrea
ENDOCRINOLOGY Sihem Boudina
ENVIRONMENTAL ENGINEERING Rama Rao Karri
ENVIRONMENTAL SCIENCE Mayeen Uddin Khandaker
GASTROENTEROLOGY AND HEPATOLOGY Sharon DeMorrow
IMMUNOLOGY Marcin Wysoczynski
INFECTIOUS DISEASES Fatah Kashanchi
MATHEMATICAL PHYSICS Ilyas Khan
MICROBIOLOGY Massimiliano Galdiero
NETWORKS AND COMPLEX SYSTEMS Achyut Shankar
NEUROLOGY Yvan Torrente
RESPIRATORY MEDICINE Soni Savai Pullamsetti
STRUCTURAL AND MOLECULAR BIOLOGY Stefania Galdiero

2. Some examples of problematic articles

https://pubpeer.com/publications/42901FD2901EC917E3EE54B8DBD749#4 (authors claim a correction is underway, but none published for 2 years)
https://pubpeer.com/publications/01FE09F1127DF0598985987677A101 (part of a list of many flagged papers from this author group. Corrected rather than retracted)
https://pubpeer.com/publications/69EDBAECD50F31B051ECECCD1DF346 (notified on 31-3-2023 about this paper, no action so far)
https://pubpeer.com/publications/F8A1AD2B165888A06C18B28C860E7B. EiC contacted Nov. 22 with authorship concerns, responded that he would investigate. No action taken so far.
https://pubpeer.com/publications/286F83F9553D29F82CD4281309A1E4. Has had EoC for authorship irregularities since July 22, no action taken since.
https://pubpeer.com/publications/5BEDDDA9CF92B9CDDD2AB1AA796271 (blatantly nonsensical paper reported to publisher in June 2024; no action as yet)
https://pubpeer.com/publications/37B87CAC48DE4BC98AD40E00330143 (various corrections since 2022, and in Feb 2023 readers were told “conclusions of this article are being considered by the Editors. A further editorial response will follow the resolution of these issues”. 19 months later we are still waiting.)


3. Some examples of journal-level reports posted on PubPeer

Scientific Reports

other Springer Nature journals:

Chemosphere

Tuesday, 24 September 2024

Using PubPeer to screen editors

 

2023 was the year when academic publishers started to take seriously the threat that paper mills posed to their business. Their research integrity experts have penned various articles about the scale of the problem and the need to come up with solutions (e.g., here and here). Interested parties have joined forces in an initiative called United2Act. And yet, to outsiders, it looks as though some effective actions are being overlooked. It's hard to tell whether this is the result of timidity, poor understanding, or deliberate foot-dragging from those who have a strong financial conflict of interest.

As I have emphasised before, the gatekeepers to journals are editors. Therefore it is crucial that they are people of the utmost integrity and competence. The growth of mega-journals with hundreds of editors has diluted scrutiny of who gets to be an editor. This has been made worse by the bloating of journals with hundreds of special issues, each handled by "guest editors". We know that paper millers will try to bribe existing editors, and to place their own operatives as editors or guest editors, use fake reviewers, and stuff articles with irrelevant citations. Stemming this tide of corruption would be one effective way to reduce the contamination of the research literature. Here are two measures I suggest that publishers should take if they seriously want to clean up their journals.

1. Three strikes and you are out. Any editor who has accepted three or more paper-milled papers should be debarred from acting as an editor, and all papers that they have been responsible for accepting should be regarded as suspect. This means retrospectively cleaning up the field by scrutinising the suspect papers and retracting any from authors associated with paper mills, or which are characterised by features suggestive of paper mills, such as tortured phrases, citation stacking, gobbledegook content, fake reviews from reviewers suggested by authors, invalid author email domains, or co-authors who are known to be part of a paper mill ring. All of these are things that any competent editor should be able to detect. I anticipate this would lead to a large number of retractions, particularly from journals with many Special Issues. As well as these simple indicators, we are told that publishers are working hard to develop AI-based checks. They should use these not only to screen new submissions, and to retract published papers, but also to identify editors who are allowing this to happen on their watch. It also goes without saying that nobody who has co-authored a paper-milled paper should act as an editor.

2. All candidates for roles as Editor or Guest Editor at a journal should be checked against the post-publication peer review website PubPeer, and rejected if this reveals papers that have had credible criticisms suggestive of data fabrication or falsification. This is a far from perfect indicator: only a tiny fraction of authors receive PubPeer comments, and these may concern trivial or innocent aspects of a paper. But, as I shall demonstrate, using such a criterion can reveal cases of editorial misconduct.
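The "three strikes" tally in point 1 is mechanically simple. Here is a minimal sketch, assuming a hand-curated list of suspect acceptances; the editor names and paper IDs are invented for illustration:

```python
from collections import Counter

# Hypothetical input: (editor, paper_id) pairs for papers confirmed or
# strongly suspected to come from a paper mill. Real data would have to
# come from retraction notices, PubPeer reports, and sleuth databases.
suspect_acceptances = [
    ("Editor A", "paper-001"),
    ("Editor A", "paper-014"),
    ("Editor A", "paper-230"),
    ("Editor B", "paper-077"),
]

def editors_to_debar(acceptances, threshold=3):
    """Return editors who accepted `threshold` or more suspect papers."""
    counts = Counter(editor for editor, _ in acceptances)
    return sorted(e for e, n in counts.items() if n >= threshold)

print(editors_to_debar(suspect_acceptances))  # ['Editor A']
```

The hard part, of course, is not the counting but compiling a defensible list of suspect papers in the first place.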

I will illustrate how this might work in practice, using the example of the MDPI journal Electronics. This journal came to my attention because it has indicators that all is not well with its Special Issues programme. 

First, in common with nearly all MDPI journals, Electronics has regularly broken the rule that specifies that no more than 25% of articles should be authored by a Guest Editor. As mentioned in a previous post, this is a rule that has come and gone in the MDPI guidelines, but which is clearly stated as a requirement for inclusion in the Directory of Open Access Journals (DOAJ). 13% of Special Issues in Electronics completed in 2023-4 broke this rule**. DOAJ have withdrawn some MDPI journals from their directory for this reason, and I live in hope that they will continue to implement this policy rigorously - which would entail delisting from their Directory the majority of MDPI journals. Otherwise, there is nothing to stop publishers claiming to be adhering to rigorous standards while failing to implement them, making listing in DOAJ an irrelevance.  

Even more intriguingly, for around 11% of the 517 Special Issues of Electronics published in 2023-4, the Guest Editor doesn't seem to have done any editing. We can tell this because Special Issues are supposed to list who has acted as Academic Editor for each paper. MDPI journals vary in how rigorously they implement that rule - some journals have no record of who was the Academic Editor. But most do, and in most Special Issues, as you might expect, the Guest Editor is the Academic Editor, except for any papers where there is a conflict of interest (e.g. if authors are Guest Editors or are from the same institution as the Guest Editor). Where the Guest Editor cannot act as Academic Editor, the MDPI guidelines state that this role will be taken by a member of the Editorial Board. But, guess what? Sometimes that doesn't happen. To someone with a suspicious frame of mind and a jaundiced view of how paper mills operate, this is a potential red flag.

Accordingly, I decided to check PubPeer comments for individuals in three editorial roles at Electronics for the years 2023-4:

  • Those listed as being in a formal editorial role on the journal website. 
  • Those acting as Guest Editors 
  • Those acting as Academic Editors, despite not being in the other two categories.

For Editors, a PubPeer search by name revealed that 213 of the 931 had one or more comments. That sounds alarming, but cannot be taken at face value, because there are many innocent reasons for this result. The main one is namesakes: this is particularly common with Chinese names, which tend to be less distinctive than Western names. It is therefore important to match PubPeer comments on affiliations as well as names. Using this approach, it was depressingly easy to find instances of Editors who appeared associated with paper mills. I will mention just three, to illustrate the kind of evidence that PubPeer provides, but remember, there are many others deserving of scrutiny. 

  • As well as being a section board member of Electronics, Danda B Rawat (Department of Electrical Engineering and Computer Science, Howard University, Washington, DC 20059, USA) is Editor-in-Chief of Journal of Cybersecurity and Privacy, and a section board member of two further MDPI journals: Future Internet and Sensors. A PubPeer search reveals him to be co-author of one paper with tortured phrases, and another where equations make no sense. He is listed as Editor of three MDPI Special Issues: Multimodal Technologies and Interaction: Human Computer Communications and Internet of Things; Sensors: Frontiers in Mobile Multimedia Communications; and Journal of Cybersecurity and Privacy: Applied Cryptography.
  • Aniello Castiglione  (Department of Management & Innovation Systems, University of Salerno, Italy) is Section Board Member of three journals: Electronics, Future Internet, and Journal of Cybersecurity and Privacy, and an Editorial Board member of Sustainability. PubPeer reveals he has co-authored one paper that was recently retracted because of compromised editorial processing, and that his papers are heavily cited in several other articles that appear to be used as vehicles for citation stacking. 
  •  Natalia Kryvinska (Department of Information Systems, Faculty of Management, Comenius University in Bratislava, Slovakia) is a Section Board Member of Electronics. She has co-authored several articles with tortured phrases.
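The name-plus-affiliation check described above could be sketched as follows. This is illustrative only: the editor and PubPeer records are invented, PubPeer offers no official bulk export as far as I know (the data here was assembled by hand with help from other sleuths), and real affiliation strings would need fuzzy rather than exact matching:

```python
def normalise(text):
    """Lowercase and collapse whitespace for crude string comparison."""
    return " ".join(text.lower().split())

def flag_editors(editors, pubpeer_records):
    """Flag editors whose name AND affiliation both appear in PubPeer records.

    Requiring the affiliation to match as well guards against namesakes,
    the main source of false positives in a name-only search.
    """
    index = {(normalise(r["name"]), normalise(r["affiliation"]))
             for r in pubpeer_records}
    return [e for e in editors
            if (normalise(e["name"]), normalise(e["affiliation"])) in index]

editors = [
    {"name": "Jane Doe", "affiliation": "University of Somewhere"},
    {"name": "Wei Zhang", "affiliation": "Example Institute"},
]
records = [
    {"name": "Wei Zhang", "affiliation": "Example Institute"},   # true match
    {"name": "Wei Zhang", "affiliation": "Another University"},  # namesake
]
print(flag_editors(editors, records))
# [{'name': 'Wei Zhang', 'affiliation': 'Example Institute'}]
```

In practice affiliation strings rarely match exactly (abbreviations, translations, department names), so a serious screen would normalise institution names or use approximate string matching before requiring both fields to agree.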

Turning to the 1326 Guest Editors of Special Issues, there were 500 with at least one PubPeer comment, but as before, note that in many cases name disambiguation is difficult, so this will overestimate the problem. Once again, while it may seem invidious to single out specific individuals, it seems important to show the kinds of issues that can be found among those who are put in this important gatekeeping role. 

Finally, let's look at the category of Academic Editors who aren't listed as journal Editors. It's unclear how they are selected and who approves their selection. Again, among those with PubPeer comments, there's a lot to choose from. I'll focus here on three who have been exceptionally busy doing editorial work on several special issues. 

  • Gwanggil Jeon (Incheon National University, Korea) has acted as Academic Editor for 18 Special Issues in Electronics. He is not on the Editorial Board of the journal, but he has been Guest Editor for two special issues in Remote Sensing, and one in Sensors. PubPeer comments note recycled figures and irrelevant references in papers that he has co-authored, as well as a problematic Special Issue that he co-edited for Springer Nature, which led to several retractions.
  • Hamid Reza Karimi (Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy) has acted as Academic Editor for 12 Special Issues in Electronics. He was previously Guest Editor for two Special Issues of Electronics, one of Sensors, one of Micromachines, and one of Machines.  In 2022, he was specifically called out by the IEEE for acting "in violation of the IEEE Principles of Ethical Publishing by artificially inflating the number of citations" for several articles. 
  • Finally, Juan M. Corchado (University of Salamanca, Spain) has acted as Academic Editor for 29 Special Issues. He was picked up by my search as he is not currently listed as being an Editor for Electronics, but that seems to be a relatively recent change: when searching for information, I found this interview from 2023. Thus his role as Academic Editor seems legitimate. Furthermore, as far as PubPeer is concerned, I found only one old comment, concerned with duplicate publication. However, he is notorious for boosting citations to his work by unorthodox means, as described in this article.* I guess we could regard his quiet disappearance from the Editorial Board as a sign that MDPI are genuinely concerned about editors who try to game the system. If so, we can only hope that they employ some experts who can do the kinds of cross-checking that I have described here at scale. If I can find nine dubious editors of one journal in a couple of hours searching, then surely the publisher, with all its financial resources, could uncover many more if they really tried.

Note that many of the editors featured here have quite substantial portfolios of publications. This makes me dubious about MDPI's latest strategy for improving integrity - to use an AI tool to select potential reviewers "from our internal databases with extensive publication records". That seems like an excellent way to keep paper millers in control of the system. 

Although the analysis presented here just scratches the surface of the problem, it would not have been possible without the help of sleuths who made it straightforward to extract the information I needed from the internet. My particular thanks to Pablo Gómez Barreiro, Huanzi and Sholto David.

I want to finish by thanking the sleuths who attempt to decontaminate the literature by posting comments to PubPeer. Without their efforts it would be much harder to keep track of paper millers. The problem is large and growing. Publishers are going to need to invest seriously in employing those with the expertise to tackle this issue. 

 *As I was finalising this piece, this damning update from El País appeared. It seems that many retractions of Corchado papers are imminent.  

 I can't keep up.... here's today's news. 


** P.S. 25th Sept 2024. DOAJ inform me that Electronics was removed from their directory in June of this year. 

*** P.P.S. 26th Sept 2024.  Guillaume Cabanac pointed me to this journal-level report on PubPeer, where he noted a high rate of Electronics papers picked up by the Problematic Paper Screener.

Saturday, 14 September 2024

Prodding the behemoth with a stick

 

Like many academics, I was interested to see an announcement on social media that a US legal firm had filed a federal antitrust lawsuit against six commercial publishers of academic journals: (1) Elsevier B.V.; (2) Wolters Kluwer N.V.; (3) John Wiley & Sons, Inc.; (4) Sage Publications, Inc.; (5) Taylor and Francis Group, Ltd.; and (6) Springer Nature AG & Co, on the grounds that "In violation of Section 1 of the Sherman Act, the Publisher Defendants conspired to unlawfully appropriate billions of dollars that would have otherwise funded scientific research".   

 

So far, so good.  I've been writing about the avaricious practices of academic publishers for over 12 years, and there's plenty of grounds for a challenge. 

 

However, when I saw the case being put forward, I was puzzled.  From my perspective, the arguments just don't stack up.  In particular, three points are emphasised in the summary (quoted verbatim here from the website): 

 

  • First, an agreement to fix the price of peer review services at zero that includes an agreement to coerce scholars into providing their labor for nothing by expressly linking their unpaid labor with their ability to get their manuscripts published in the defendants’ preeminent journals.

 

But it's not true that there is an express link between peer review and publishing papers in the pre-eminent journals.  In fact, many journal editors complain that some of the most prolific authors never do any peer review - gaining an advantage by not adopting the "good citizen" behaviour of a peer reviewer.  I think this point can be rapidly thrown out.

 

  • Second, the publisher defendants agreed not to compete with each other for manuscripts by requiring scholars to submit their manuscripts to only one journal at a time, which substantially reduces competition by removing incentives to review manuscripts promptly and publish meritorious research quickly. 

 

This implies that the rationale for not allowing multiple submissions is to reduce competition between publishers.  But if there were no limit on how many journals you could simultaneously submit to, then the number of submissions to each journal would grow massively, increasing the workload for editors and peer reviewers - and much of their time would be wasted. This seems like a rational requirement, not a sinister one.

 

  • Third, the publisher defendants agreed to prohibit scholars from freely sharing the scientific advancements described in submitted manuscripts while those manuscripts are under peer review, a process that often takes over a year. As the complaint notes, “From the moment scholars submit manuscripts for publication, the Publisher Defendants behave as though the scientific advancements set forth in the manuscripts are their property, to be shared only if the Publisher Defendant grants permission. Moreover, when the Publisher Defendants select manuscripts for publication, the Publisher Defendants will often require scholars to sign away all intellectual property rights, in exchange for nothing. The manuscripts then become the actual property of the Publisher Defendants, and the Publisher Defendants charge the maximum the market will bear for access to that scientific knowledge.” 

Again, I would question the accuracy of this account.  For a start, in most science fields, peer review is a matter of weeks or months, not "over a year".  But also, most journals these days allow authors to post their articles as preprints, prior to, or at the point of submission. In fact, this is encouraged by many institutions, as it means that a Green Open Access version of the publication is available, even if the work is subsequently published in a pay-to-read version.  

 

In all, I am rather dismayed by this case, especially when there are very good grounds on which academic publishers can be challenged.  For instance:

 

1. Academic publishers claim to ensure quality control of what gets published, but some of them fail to do the necessary due diligence in selecting editors and reviewers, with the result that the academic literature is flooded with weak and fraudulent material, making it difficult to distinguish valuable from pointless work, and creating an outlet for damaging material, such as pseudoscience. This has become a growing problem with the advent of paper mills.

 

2. Many publishers are notoriously slow at responding to credible evidence of serious problems in published papers. It can take years to get studies retracted, even when they have important real world consequences.

 

3. Perhaps the only point in common with the case by Lieff Cabraser Heimann & Bernstein concerns the issue of retention of intellectual property rights. It is the case that publishers have traditionally required authors to sign away copyright of their works. In the UK, at least, there has been a movement to fight this requirement, which has had some success, but undoubtedly more could be done. 

 

If I can find time I will add some references to support some of the points above - this is a hasty response to discussion taking place on social media, where many people seem to think it's great that someone is taking on the big publishers. I never thought I would find myself in a position of defending them, but I think if you are going to attack a behemoth, you need to do so with good weapons.  

 

 

Postscript

Comments on this post are welcome - there is moderation so they don't appear immediately.

 Nick Wise attempted unsuccessfully to add a comment (sorry, Blogger can be weird), providing this helpful reference on typical duration of peer review.  Very field-dependent and may be a biased sample, I suspect, but it gives us a rough idea.

PPS. 5th October 2024.

Before I wrote this blogpost, I contacted the legal firm involved, Lieff Cabraser Heimann & Bernstein, via their website, to raise the same points. Yesterday I received a reply from them, explaining that "Because you are located abroad, unfortunately you are not a member of this class suit". This suggests they don't read correspondence sent to them. Not impressed.