
Saturday, 22 February 2025

IEEE Has a Pseudoscience Problem

Guest post by Solal Pirelli


The IEEE, full name Institute of Electrical and Electronics Engineers, is one of the main scientific publishers in domains related to its name. Many IEEE venues, such as ICSE in software engineering and IROS in robotics, are “top” venues that publish important research. While these are conferences and not journals, computer science and related fields are unusual in that conferences are typically the more prestigious option.

But as I’ve covered before in the case of another big computer science publisher, world-class research can coexist with world-class nonsense. Many not-so-top IEEE venues publish “AI gobbledegook sandwiches”: pointless papers that apply standard machine learning or artificial intelligence techniques to basic data sets, producing vague predictions that supposedly improve on ill-defined baselines.

Unfortunately, bad science published by IEEE isn’t limited to boring applications of boring algorithms to boring data. In this blog post, I’ll present IEEE-published pseudoscience of various kinds, show how this correlates with other problems, and discuss why publishers don’t do enough about it. 

All kinds of quackery 

The IEEE has published numerous new “methods” to help providers or users of pseudoscientific disciplines. Ayurveda is enhanced with a “preprocessing framework” to detect diabetes, a neural network to classify herbs, and even an AI assistant. Astrology is automated with a machine learning model. Myers-Briggs personality type testing is granted another neural network.

Some IEEE papers are at the very fringe of pseudoscience, unconventional even by quack standards. A symposium on antennas and propagation published three papers by the same author on “scientific traditional Chinese medicine”, a variant based on electromagnetism and 5G with “supernatural potential” (see here, here and here). An Indian conference on electronics published four papers (here, here, here and here) by the same first author on “electro-homeopathy”, the brainchild of a 19th century Italian count that an Indian high court called “nothing but quackery” a decade before these papers were published.

Of course, no list of pseudoscience would be complete without perpetual motion. That’s right, the IEEE has published two papers on perpetual motion in 2017 and 2022! How these were not desk-rejected is anyone’s guess.

Even work that is not pseudoscientific in itself can propagate harmful or downright absurd stereotypes. Consider what IEEE-published and supposedly peer-reviewed papers have to say about autism: 

  • “Children with autism require constant care because you never know what will trigger them” (source)
  • “If symptoms of autism are detected early, children with autism usually return to normal development after effective medical intervention” (source)
  • “A baby born with autism spectrum disorder may have a lower-than-average heart rate. Complete blockage of the heart at birth is rare. Abnormal heart rate leads to heart block. So, there is a high chance of the child's death due to permanent heart blockage at any time.” (source)

Why it matters

One may think that such papers won’t cause harm because they’re unlikely to be read, since they are mostly in unknown venues and unrelated to the IEEE’s domain. While I personally disagree since I believe publishing pseudoscience risks breaking the public’s trust in legitimate research, let me provide a more objective argument. Pseudoscience in papers is heavily correlated with other problematic practices that are more difficult to detect automatically. This makes searching for pseudoscience an effective way to find problematic venues, complementary to existing techniques.

The preprocessing framework to detect diabetes with Ayurveda? In a conference that accepts papers on the same day they are submitted, somehow speeding up the weeks or months usually necessary for proper peer review.

The neural network that classifies Ayurvedic herbs? In a conference that plagiarized its peer review policy from Elsevier’s “Transport Policy” journal. Look for fragments of this policy in your favorite search engine and you’ll find a surprising number of venues that have done so, seemingly without noticing the references to Transport Policy.

The four papers on electro-homeopathy? In a conference that published a mathematical “algorithm” amounting to high school mathematics. While exact definitions of “novelty” vary, no one could credibly claim that this paper is novel enough for a scientific conference.

The 2017 paper on perpetual motion? In a conference that didn’t notice an entirely plagiarized section in that paper, ironically from a source explaining why perpetual motion is impossible. How this is compatible with IEEE’s policy of checking all content for plagiarism is unclear.

The paper claiming “you never know what will trigger” autistic children? In a conference supposedly happening in a London office building, whose four IEEE-published editions only feature one paper from a European university among a sea of India-based authors. Did the authors of this conference’s papers really travel to the other side of the globe to present in a place not designed for presentations?

The neural network for Myers-Briggs? In a conference chaired by a professor whose Russian university is under sanctions from the US, the EU, Ukraine, and even Switzerland!

Action is rare 

The expected process here would be to report this nonsense to the publisher, who would investigate, quickly conclude these papers should never have been published, lose faith in the peer review process that led to their acceptance, and issue retractions. Barring extremely strong evidence from conference chairs that some cases were truly one-off exceptions, such retractions would cover entire editions of conferences.

This happens… sometimes. The IEEE has retracted papers before, such as this one after “only” five months. They have also retracted entire venues, such as this one totaling 400 papers, four years after it was reported.
 
But the IEEE frequently does not react at all to reports. Guillaume Cabanac, who specializes in scientific fraud detection, has repeatedly and publicly called them out. For instance, he’s reported telltale signs of ChatGPT as in this paper that includes “Regenerate Response” in the middle of text and this paper that includes “I am unable to […] due to the fact I am an AI language model”. He’s also reported “tortured phrases”, attempts at avoiding plagiarism detection that instead create nonsense such as “parcel misfortune” instead of “packet loss” in computer networking, in sometimes large concentrations. Cabanac and other sleuths have published “proceedings-level reports” on PubPeer, such as this one, when entire IEEE conferences have problems. None of the examples in this paragraph have led to any public reaction from the IEEE.

The IEEE occasionally issues “expressions of concern”, such as one for this paper over a year after concrete evidence of plagiarism was publicly reported. But expressions of concern are not retractions. In mid-2023, Retraction Watch noted that hundreds of IEEE papers reported by Guillaume Cabanac and Harvard lecturer Kendra Albert were still up for sale. A year and a half later, that remains the case.

One case noted above is particularly noteworthy in terms of both reputation and IEEE awareness: The “scientific TCM” papers were published in the 2022 and 2023 editions of the “International Symposium on Antennas and Propagation”, a 6-decade-old conference whose 2024 edition boasted the IEEE President as a keynote speaker. Clearly, the IEEE is aware of the venue and its papers. What’s the point in “reporting” them?


Processes are inadequate 

The scale of publishers’ actions is nowhere near the scale of the problem. Creating a new conference or journal does not take much time if the peer review process is fake. As long as the average time it takes a publisher to retract a venue is longer than the time it takes to create a new one, there won’t be meaningful progress.

Current publisher processes are designed to correct honest mistakes, not to fight malice. The time it takes to contact authors, wait for their response, wait for them to find original data, and so on is worth it when a single paper has a problem that can be explained by human error. But any such process is a waste of time when a paper contains blatant pseudoscience, has obviously been plagiarized, or uses terminology so bizarre no reviewer could have understood it.

To give an example of scale, here’s a collision of pseudoscience and tortured phrases. The paper on an AI assistant for ayurveda mentioned earlier is in the “2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT)”. Guillaume Cabanac’s Problematic Paper Screener currently lists 185 cases of tortured phrases manually confirmed by Cabanac himself, with another 160 pending assessment. These include “herbal language” instead of natural language, “system getting to know” instead of machine learning, “give-up-to-give-up” instead of end-to-end, and “0.33-celebration” instead of third-party.  
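For a sense of how mechanical this detection can be, here is a minimal sketch of a tortured-phrase detector in the spirit of the Problematic Paper Screener. The phrase table is a tiny illustrative subset I assembled from the examples above, not the screener’s actual dictionary:

```python
# Minimal tortured-phrase detector: scan text for known substitutions
# that typically result from thesaurus-based paraphrasing tools.
TORTURED_PHRASES = {
    "herbal language": "natural language",
    "system getting to know": "machine learning",
    "give-up-to-give-up": "end-to-end",
    "0.33-celebration": "third-party",
    "parcel misfortune": "packet loss",
}

def find_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured, expected) pairs found in the text."""
    lowered = text.lower()
    return [(t, e) for t, e in TORTURED_PHRASES.items() if t in lowered]

abstract = ("We propose a give-up-to-give-up pipeline that avoids "
            "0.33-celebration dependencies.")
print(find_tortured_phrases(abstract))
# → [('give-up-to-give-up', 'end-to-end'), ('0.33-celebration', 'third-party')]
```

A real screener matches thousands of such fingerprints across full texts; the point is that these checks are cheap to run at scale, which makes “we didn’t notice” a weak defense for a publisher.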

Individually contacting and waiting for hundreds of authors just in case they can explain why their paper talks about 0.33-celebrations isn’t going to cut it. Neither is individually contacting and waiting for dozens of conference editors just in case they can explain why their peer review process didn’t spot this nonsense. 

What can we do?

Given the incentives and processes at play, it’s not surprising to see the IEEE or any other big publisher publish pseudoscience. The authors of the papers mentioned in this post probably didn’t do anything illegal, except maybe for occasional plagiarism of copyrighted content, but nobody has the time and money to sue for such boring violations. This gives publishers a double excuse: they’re not publishing anything illegal, and retractions without a solid legal basis could backfire.

The scientific community needs to ban the “incompetence” defense from authors and stop associating with publishers that can’t be bothered to act quickly enough.  

Authors who publish obvious nonsense should not get a chance to explain themselves or “correct” their paper. 

Publishers make enough money from processing and selling articles. They can defend themselves from occasional lawsuits by angry authors, and they can hire scientific integrity specialists.

When I say “scientific integrity specialist”, that can unfortunately be as simple as “person looking for specific keywords in Google Scholar”. It’s what I did to find pseudoscience, and you can do that too. Report these on PubPeer, directly to publishers, or both. You can also go to the Problematic Paper Screener’s page listing articles that have not been manually assessed yet, and follow the instructions.
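As a sketch of what that keyword search amounts to, here is a toy screening pass over paper titles. The keyword lists and titles are illustrative placeholders I made up, not real search results:

```python
# Toy screening pass: flag titles that combine a pseudoscience keyword
# with a machine-learning buzzword — the "gobbledegook sandwich" pattern.
PSEUDOSCIENCE = {"astrology", "homeopathy", "perpetual motion", "ayurveda"}
ML_BUZZWORDS = {"neural network", "machine learning", "deep learning"}

def flag_title(title: str) -> bool:
    """True if a title pairs a pseudoscience topic with an ML buzzword."""
    t = title.lower()
    return any(k in t for k in PSEUDOSCIENCE) and any(b in t for b in ML_BUZZWORDS)

titles = [
    "Deep Learning for Congestion Control",            # plausible topic
    "A Neural Network Approach to Astrology Charts",   # hypothetical example
]
print([t for t in titles if flag_title(t)])
# → ['A Neural Network Approach to Astrology Charts']
```

Anything this crude will produce false positives — a legitimate paper debunking astrology would be flagged — so a human still has to read each hit before reporting it, just as the sleuths above do.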

Finally, remember that most scientists have no idea this is going on. You can help by publicly calling out problematic papers and lack of action. Ask candidates for governance boards in more democratic publishers like the IEEE what they plan to do about fraud. Discourage institutions, especially public ones in democratic countries, from making blanket deals with publishers.

Sunday, 4 December 2011

Pioneering treatment or quackery? How to decide

My mother was only slightly older than I am now when she died of emphysema (chronic obstructive pulmonary disease). It’s a progressive condition for which there is no cure, though it can be managed by use of inhalers and oxygen. I am still angry at the discomfort she endured in her last years, as she turned from one alternative practitioner to another. It started with a zealous nutritionist who was a pupil of hers. He had a complicated list of foods she should avoid: I don’t remember much about the details, except that when she was in hospital I protested at the awful meal she’d been given - unadorned pasta and peas - only to be told that this was at her request. Meat, sauces, fats, cheese were all off the menu. My mother was a great cook who enjoyed good food, but she was seriously underweight and the unappetising meals were not helping. In that last year she also tried acupuncture, which she did not enjoy: she told me how it involved lying freezing on a couch having needles prodded into her stick-like body. Homeopathy was another source of hope, and the various remedies stacked up in the kitchen. Strangely enough, spiritual healing was resisted, even though my Uncle Syd was a practitioner. That seemed too implausible for my atheistic mother, whose view was: “If there is a God, why did he make us intelligent enough to question his existence?”
From time to time, friends and relatives of mine have asked my advice about other treatments that are out there. There is, for instance, the Stem Cell Institute in Panama, offering treatment for multiple sclerosis, spinal cord injury, osteoarthritis, rheumatoid arthritis, other autoimmune diseases, autism, and cerebral palsy. Or nutritional therapist Lucille Leader, who has a special interest in supporting patients with Parkinson's Disease, Multiple Sclerosis and Inflammatory Bowel Disease. My mother would surely have been interested in AirEnergy, a “compact machine that creates 'energised air' that feeds every cell in your body with oxygen that it can absorb and use more efficiently”.
Another source of queries is parents of the children with neurodevelopmental disorders who are the focus of my research. If you Google for treatments for dyslexia you are confronted by a plethora of options. There is the Dyslexia Treatment Centre, which offers Neurolinguistic Programming and hypnotherapy to help children with dyslexia, dyspraxia or ADHD. Meanwhile the Dore Programme markets a set of “daily physical exercises that aim to improve balance, co-ordination, concentration and social skills” to help those with dyslexia, dyspraxia, ADHD or Asperger’s syndrome. The Dawson Program offers vibrational kinesiology to correct imbalances in the body’s energy fields. I could go on, and on, and on….
So how on earth can we decide which treatments to trust and which are useless or even fraudulent? There are published lists of warning signs (e.g. ehow Health, Quackwatch), but I wonder how useful they are to the average consumer. For instance, the cartoon by scienceblogs will make skeptics laugh, but I doubt it will be much help for anyone with no science background who is looking for advice. So here’s my twopennyworth. First, a list of things you need to ignore when evaluating a treatment.
1. The sincerity of the practitioner. It’s a mistake to assume all purveyors of ineffective treatments are evil bastards out to make money off the desperate. Many, probably most, believe honestly in what they are doing. The nutritionist who advised my mother was a charming man who did not charge her a penny - but still did her harm by ensuring her last months were spent on an inadequate and boring diet. The problem is that if practitioners don’t adopt scientific methods of evaluating treatments, they will convince themselves they are doing good, because some people get better anyway, and they’ll attribute the improvement to their method.
2. The professionalism of the website. Some dodgy treatments have very slick marketing. The Dore Treatment, which I regard as of dubious efficacy, had huge success when it first appeared. Its founder, Wynford Dore, was a businessman who had no background in neurodevelopmental disorders but knew a great deal about marketing. He ensured that if you typed ‘dyslexia treatment’ into Google his impressive website was the first thing you’d hit.
3. Fancy-looking credentials. These can be misleading if you aren’t an expert - and sometimes even if you are. My bugbear is ‘Fellow of the Royal Society of Medicine’, which sounds very impressive - similar to Fellow of the Royal Society (which really is impressive). In fact, the threshold for fellowship is pretty low, so much so that fellows are told by the RSM that they should not use FRSM on a curriculum vitae. So when you see this on someone’s list of credentials, it means the opposite of what you think: they are likely to be a charlatan. It’s also worth realising that it’s pretty easy to set up your own organisation and offer your own qualifications. I could set up the Society of Skeptical Quackbusters and offer Fellowship to anyone I choose. The letters FSSQ might look good, but carry no guarantee of anything.
4. Testimonials. There is evidence (reviewed here) that humans trust testimonials far more than facts and figures. It’s a tendency that’s hard to overcome, despite scientific training. I still find myself getting swayed if I hear someone tell me of their positive experience with some new nutritional supplement, and thinking, maybe there’s something in it. Advertisers know this: it’s one thing to say that 9 out of 10 cats prefer KittyMunch, but to make it really effective you need a cute cat going ecstatic over the food bowl. If you are deciding whether to go for a treatment you must force yourself to ignore testimonials. For a start, you don’t even know if they are genuine: anyone who regards sick and desperate people as a business opportunity is quite capable of employing actors to pose as satisfied customers. Second, you are given no information about how typical they are. You might be less impressed by the person telling you their dyslexia was cured if you knew that there were a hundred others who paid for the treatment and got no benefit. And the cancer patients who die after a miracle cure are the ones you won’t hear about.
5. Research articles. Practitioners of alternative treatments are finding that the public is getting better educated, and they may be asked about research evidence. So it’s becoming more common to find a link to ‘research’ on websites advertising treatments. The problem is that all too often this is not what it seems. This was recently illustrated by an analysis of research publications from the Burzynski clinic, which offers the opportunity to participate in expensive trials of cancer treatment. I was interested also to see the research listed on the website of FastForword, a company that markets a computerized intervention for children’s language and literacy problems. Under a long list of Foundational Research articles, they list one of my papers that fails to support their theory that phonological and auditory difficulties have common origins. More generally, the reference list contains articles that are relevant to the theory behind the intervention, but don’t necessarily support it. Few people other than me would know that. And a recent meta-analysis of randomized controlled trials of FastForword is a notable omission from the list of references provided. Overall, this website seems to exemplify a strategy that has previously been adopted in other areas such as climate change, impact of tobacco or sex differences, where you create an impression of a huge mass of scientific evidence, which can only be counteracted if painstakingly unpicked by an expert who knows the literature well enough to evaluate what’s been missed out, as well as what’s in there. It’s similar to what Ben Goldacre has termed ‘referenciness’, or the ‘Gish gallop’ technique of creationists. It’s most dangerous when employed by those who know enough about science to make it look believable. The theory behind FastForword is not unreasonable, but the evidence for it is far less compelling than the website would suggest.
So those are the things that can lull you into a false sense of acceptance. What about the red flags, warning signs that suggest you are dealing with a dodgy enterprise? None of these on its own is foolproof, but where several are present together, beware.
  1. Is there any theory behind the intervention, and if so is it deemed plausible by mainstream scientists? Don’t be impressed by sciency-sounding theories - these are often designed to mislead. Neuroscience terms are often incorporated to give superficial plausibility: I parodied this in my latest novel, with the invention of Neuropositive Nutrition, which is based on links between nutrients, the thalamus and the immune system. I suspect if I set up a website promoting it, I’d soon have customers. Unfortunately, it can be hard to sort the wheat from the chaff, but NHSChoices is good for objective, evidence-based  information. Most universities have a communications office that may be able to point you to someone who could indicate whether an intervention has any scientific credibility.  
  2. How specific is the treatment? A common feature of dodgy treatments is that they claim to work for a wide variety of conditions. Most effective treatments are rather specific in their mode of action.
  3. Does the practitioner reject conventional treatments? That’s usually a bad sign, especially if there are effective mainstream approaches.
  4. Does the practitioner embrace more than one kind of alternative treatment? I was intrigued when doing my brief research on Fellows of the Royal Society of Medicine to see how alternative interventions tend to cluster together. The same person who is offering chiropractic is often also recommending hypnotherapy, nutritional supplements and homeopathy. Since modern medical advances have all depended on adopting a scientific stance, anyone who adopts a range of methods that don’t have scientific support is likely to be a bad bet.
  5. Are those developing the intervention cautious, and interested in doing proper trials?  Do they know what a randomised controlled trial is? If they aren’t doing them, why not? See this book for an accessible explanation of why this is important.
  6. Does it look as though those promoting the intervention are deliberately exploiting people’s gullibility by relying heavily on testimonials? Use of celebrities to promote a product is a technique used by the advertising industry to manipulate people’s judgement. It’s a red flag.
  7. Are costs reasonable? Does the website give you any idea of how much they are, or do you have to phone up for information? (bad sign!). Are people tied in to long-term treatment/payment plans? Are you being asked to pay to take part in a clinical trial? (Very unusual and ethically dubious). Do you get a refund if it doesn’t work? If yes, read the terms and conditions very carefully so you understand exactly the circumstances under which you get your money back. For instance, I’ve seen a document from the Dore organisation that promised a money-back guarantee on condition there was ‘no physiological change’. That was interpreted as change on tests of balance and eye movements. These change with age and practice, and don’t necessarily mean a treatment has worked. Failing to improve in reading did not qualify you for the refund.
  8. Can the practitioner answer the question of why mainstream medicine/education has not adopted their methods? If the answer refers to others having competing interests, be very, very suspicious. Remember, mainstream practitioners want to make people better, and anyone who can offer effective treatments is going to be more successful than someone who can’t.