Saturday, 22 February 2025

IEEE Has a Pseudoscience Problem

Guest post by Solal Pirelli


The IEEE, in full the Institute of Electrical and Electronics Engineers, is one of the main scientific publishers in the domains its name suggests. Many IEEE venues, such as ICSE in software engineering and IROS in robotics, are “top” venues that publish important research. While these are conferences and not journals, computer science and related fields are unusual in that conferences are typically the more prestigious option.

But as I’ve covered before in the case of another big computer science publisher, world-class research can coexist with world-class nonsense. Many not-so-top IEEE venues publish “AI gobbledegook sandwiches”: pointless papers that apply standard machine learning or artificial intelligence to basic data sets, producing vague predictions that supposedly improve on ill-defined baselines.

Unfortunately, bad science published by IEEE isn’t limited to boring applications of boring algorithms to boring data. In this blog post, I’ll present IEEE-published pseudoscience of various kinds, show how this correlates with other problems, and discuss why publishers don’t do enough about it. 

All kinds of quackery 

The IEEE has published numerous new “methods” to help providers or users of pseudoscientific disciplines. Ayurveda is enhanced with a “preprocessing framework” to detect diabetes, a neural network to classify herbs, and even an AI assistant. Astrology is automated with a machine learning model. Myers-Briggs personality type testing is granted another neural network.

Some IEEE papers are at the very fringe of pseudoscience, unconventional even by quack standards. A symposium on antennas and propagation published three papers by the same author on “scientific traditional Chinese medicine”, a variant based on electromagnetism and 5G with “supernatural potential” (see here, here and here). An Indian conference on electronics published four papers (here, here, here and here) by the same first author on “electro-homeopathy”, the brainchild of a 19th-century Italian count, which an Indian high court called “nothing but quackery” a decade before these papers were published.

Of course, no list of pseudoscience would be complete without perpetual motion. That’s right, the IEEE has published two papers on perpetual motion in 2017 and 2022! How these were not desk-rejected is anyone’s guess.

Even work that is not pseudoscientific in itself can propagate harmful or downright absurd stereotypes. Consider what IEEE-published and supposedly peer-reviewed papers have to say about autism: 

  • “Children with autism require constant care because you never know what will trigger them” (source)
  • “If symptoms of autism are detected early, children with autism usually return to normal development after effective medical intervention” (source)
  • “A baby born with autism spectrum disorder may have a lower-than-average heart rate. Complete blockage of the heart at birth is rare. Abnormal heart rate leads to heart block. So, there is a high chance of the child's death due to permanent heart blockage at any time.” (source)

Why it matters

One might think such papers cause no harm because they’re unlikely to be read, appearing as they mostly do in obscure venues unrelated to the IEEE’s core domains. While I personally disagree, since publishing pseudoscience risks eroding the public’s trust in legitimate research, let me offer a more objective argument: pseudoscience in papers is heavily correlated with other problematic practices that are harder to detect automatically. This makes searching for pseudoscience an effective way to find problematic venues, complementary to existing techniques.

The preprocessing framework to detect diabetes with Ayurveda? In a conference that accepts papers on the same day they are submitted, somehow speeding up the weeks or months usually necessary for proper peer review.

The neural network that classifies Ayurvedic herbs? In a conference that plagiarized its peer review policy from Elsevier’s “Transport Policy” journal. Look for fragments of this policy in your favorite search engine and you’ll find a surprising number of venues that have done so, seemingly without noticing the references to Transport Policy.
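
For the curious, here’s a minimal Python sketch of that search trick: slice a policy text into long, distinctive word fragments and turn them into exact-phrase queries. The sample policy text below is invented for illustration; it is not the actual wording of any journal’s policy.

```python
# Minimal sketch: split a peer-review policy into overlapping word
# shingles and print them as quoted search-engine queries. A long,
# specific fragment that appears verbatim on many unrelated conference
# sites is a strong hint that the policy was copied.

# Invented sample text, NOT the actual Transport Policy wording.
POLICY = (
    "Submissions are evaluated by at least two independent referees "
    "selected by the editors of Transport Policy on the basis of "
    "their expertise in the subject area of the manuscript"
)

def shingles(text: str, size: int = 8):
    """Yield overlapping n-word fragments of the text."""
    words = text.split()
    for i in range(len(words) - size + 1):
        yield " ".join(words[i : i + size])

for fragment in shingles(POLICY):
    # Quoting forces an exact-phrase match in most search engines.
    print(f'"{fragment}"')
```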

The four papers on electro-homeopathy? In a conference that published a mathematical “algorithm” amounting to high school mathematics. While exact definitions of “novelty” vary, no one could credibly claim that this paper is novel enough for a scientific conference.

The 2017 paper on perpetual motion? In a conference that didn’t notice an entirely plagiarized section in that paper, ironically from a source explaining why perpetual motion is impossible. How this is compatible with IEEE’s policy of checking all content for plagiarism is unclear.

The paper claiming “you never know what will trigger” autistic children? In a conference supposedly happening in a London office building, whose four IEEE-published editions feature only one paper from a European university among a sea of India-based authors. Did the authors of this conference’s papers really travel to the other side of the globe to present in a place not designed for presentations?

The neural network for Myers-Briggs? In a conference chaired by a professor whose Russian university is under sanctions from the US, the EU, Ukraine, and even Switzerland!

Action is rare 

The expected process here would be to report this nonsense to the publisher, who would investigate, quickly conclude these papers should never have been published, lose faith in the peer review process that led to their acceptance, and issue retractions. Barring extremely strong evidence from conference chairs that some cases were truly one-off exceptions, such retractions would cover entire editions of conferences.

This happens… sometimes. The IEEE has retracted papers before, such as this one after “only” five months. They have also retracted entire venues, such as this one totaling 400 papers, four years after it was reported.
 
But the IEEE frequently does not react at all to reports. Guillaume Cabanac, who specializes in scientific fraud detection, has repeatedly and publicly called them out. For instance, he’s reported telltale signs of ChatGPT use, such as this paper that includes “Regenerate Response” in the middle of its text and this paper that includes “I am unable to […] due to the fact I am an AI language model”. He’s also reported “tortured phrases”: attempts at evading plagiarism detection that instead create nonsense, such as “parcel misfortune” instead of “packet loss” in computer networking, sometimes in large concentrations. Cabanac and other sleuths have published “proceedings-level reports” on PubPeer, such as this one, when entire IEEE conferences have problems. None of the examples in this paragraph have led to any public reaction from the IEEE.

The IEEE occasionally issues “expressions of concern”, such as one for this paper over a year after concrete evidence of plagiarism was publicly reported. But expressions of concern are not retractions. In mid-2023, Retraction Watch noted that hundreds of IEEE papers reported by Guillaume Cabanac and Harvard lecturer Kendra Albert were still up for sale. A year and a half later, that remains the case.

One case noted above stands out in terms of both reputation and IEEE awareness: the “scientific TCM” papers were published in the 2022 and 2023 editions of the “International Symposium on Antennas and Propagation”, a six-decade-old conference whose 2024 edition boasted the IEEE President as a keynote speaker. Clearly, the IEEE is aware of the venue and its papers. What’s the point in “reporting” them?


Processes are inadequate 

The scale of publishers’ actions is nowhere near the scale of the problem. Creating a new conference or journal does not take much time if the peer-review process is fake. As long as the average time it takes a publisher to retract a venue is longer than the time it takes to create a new one, there won’t be meaningful progress.

Current publisher processes are designed to correct honest mistakes, not to fight malice. The time it takes to contact authors, wait for their response, wait for them to find original data, and so on is worth it when a single paper has a problem that can be explained by human error. But any such process is a waste of time when a paper contains blatant pseudoscience, has obviously been plagiarized, or uses terminology so bizarre no reviewer could have understood it.

To give an example of scale, here’s a collision of pseudoscience and tortured phrases. The paper on an AI assistant for Ayurveda mentioned earlier appeared in the “2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT)”. For this venue, Guillaume Cabanac’s Problematic Paper Screener currently lists 185 cases of tortured phrases manually confirmed by Cabanac himself, with another 160 pending assessment. These include “herbal language” instead of natural language, “system getting to know” instead of machine learning, “give-up-to-give-up” instead of end-to-end, and “0.33-celebration” instead of third-party.
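
Detecting these doesn’t require sophisticated tooling to get started. Here’s a toy Python sketch built from the phrases quoted above; the actual Problematic Paper Screener uses a far larger curated list and, crucially, human confirmation of every hit, so treat this as an illustration only.

```python
# Toy tortured-phrase detector: flag text containing known "fingerprint"
# phrases. The entries below are the examples quoted in this post; the
# real Problematic Paper Screener maintains a much larger curated list
# and relies on human confirmation of every hit.

TORTURED_PHRASES = {
    "parcel misfortune": "packet loss",
    "herbal language": "natural language",
    "system getting to know": "machine learning",
    "give-up-to-give-up": "end-to-end",
    "0.33-celebration": "third-party",
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original) pairs found in text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED_PHRASES.items()
            if bad in lowered]

# Invented abstract, used only to exercise the scanner.
abstract = "We present a give-up-to-give-up system getting to know pipeline."
for bad, good in scan(abstract):
    print(f"found {bad!r}, likely a disguised {good!r}")
```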

Individually contacting and waiting for hundreds of authors just in case they can explain why their paper talks about 0.33-celebrations isn’t going to cut it. Neither is individually contacting and waiting for dozens of conference editors just in case they can explain why their peer review process didn’t spot this nonsense. 

What can we do?

Given the incentives and processes at play, it’s not surprising to see the IEEE or any other big publisher publish pseudoscience. The authors of the papers mentioned in this post probably didn’t do anything illegal, except maybe for occasional plagiarism of copyrighted content, but nobody has the time and money to sue for such boring violations. This gives publishers a double excuse: they’re not publishing anything illegal, and retractions without a solid legal basis could backfire.

The scientific community needs to ban the “incompetence” defense from authors and stop associating with publishers that can’t be bothered to act quickly enough.  

Authors who publish obvious nonsense should not get a chance to explain themselves or “correct” their paper. 

Publishers make enough money from processing and selling articles. They can defend themselves from occasional lawsuits by angry authors, and they can hire scientific integrity specialists.

When I say “scientific integrity specialist”, that can unfortunately be as simple as “person looking for specific keywords in Google Scholar”. That’s what I did to find the pseudoscience above, and you can do it too; a sketch of the idea follows below. Report what you find on PubPeer, directly to publishers, or both. You can also go to the Problematic Paper Screener’s page listing articles that have not been manually assessed yet, and follow the instructions.
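
To make the keyword approach concrete, here’s a minimal Python sketch. It assumes you’ve exported search results to a local file named papers.csv with “title” and “venue” columns; the file name and columns are my assumptions for this sketch, not the output of any real tool. A hit is a lead to investigate and report, not a verdict: legitimate papers can, after all, study pseudoscience critically.

```python
# Minimal keyword screen over a local metadata export (papers.csv with
# "title" and "venue" columns; both are assumptions for this sketch).
import csv

KEYWORDS = ["ayurveda", "astrology", "electro-homeopathy",
            "perpetual motion", "myers-briggs"]

with open("papers.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        title = row["title"].lower()
        hits = [k for k in KEYWORDS if k in title]
        if hits:
            # Each hit is a candidate for manual review, not a verdict.
            print(f'{row["venue"]} | {row["title"]} | {hits}')
```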

Finally, remember that most scientists have no idea this is going on. You can help by publicly calling out problematic papers and lack of action. Ask candidates for governance boards in more democratic publishers like the IEEE what they plan to do about fraud. Discourage institutions, especially public ones in democratic countries, from making blanket deals with publishers.
