Saturday 25 November 2023

Low-level lasers. Part 1. Shining a light on an unconventional treatment for autism


 

'Light enters, then a miracle happens, and good things come out!' (Quirk & Whelan, 2011*)



I'm occasionally asked to investigate weird interventions for children's neurodevelopmental conditions, and recently I've found myself immersed in the world of low-level laser treatments. The material I've dug up is not new - it's been around for some years, but has not been on my radar until now. 

A starting point is this 2018 press statement by Erchonia, a firm that makes low-level laser devices for quasi-medical interventions. 

They had tested a device that was supposed to reduce irritability in autistic children by applying low-level laser light to the temporal and posterior regions of the head (see Figure 1) in 5-minute sessions twice a week for 4 weeks.

Figure 1: sites of stimulation by low-level laser

The study, which was reported here, was carefully designed as a randomized controlled trial. Half the children received a placebo intervention. Placebo and active laser devices were designed to look identical and both emitted light, and neither the child nor the person administering the treatment knew whether the active or placebo light was being used.

According to Erchonia, “The results are so strong, nobody can argue them” (sic). Alas, their confidence turned out to be misplaced.

The rationale given by Leisman et al (with my annotations in yellow in square brackets) is as follows: "LLLT promotes cell and neuronal repair (Dawood and Salman 2013) [This article is about wound healing, not neurons] and brain network rearrangement (Erlicher et al. 2002) [This is a study of rat cells in a dish] in many neurologic disorders identified with lesions in the hubs of default mode networks (Buckner et al. 2008) [This paper does not mention lasers]. LLLT facilitates a fast-track wound-healing (Dawood and Salman 2013) as mitochondria respond to light in the red and near-infrared spectrum (Quirk and Whelan 2011*) [review of near-infrared irradiation photobiomodulation that notes inadequate knowledge of mechanism - see cartoon]. On the other hand, Erlicher et al. (2002) have demonstrated that weak light directs the leading edge of growth cones of a nerve [cells in a dish]. Therefore, when a light beam is positioned in front of a nerve’s leading edge, the neuron will move in the direction of the light and grow in length (Black et al. 2013 [rat cells in a dish]; Quirk and Whelan 2011). Nerve cells appear to thrive and grow in the presence of low-energy light, and we think that the effect seen here is associated with the rearrangement of connectivity."

I started out looking at the registration of the trial on ClinicalTrials.gov. This included a very thorough document that detailed a protocol and analysis plan, but there were some puzzling inconsistencies; I documented them here on PubPeer, and subsequently a much more detailed critique was posted there by Florian Naudet and André Gillibert. Among other things, there was confusion about where the study was done. The registration document said it was done in Nazareth, Israel, which is where the first author, Gerry Leisman, was based. But it also said that the PI was Calixto Machado, who is based in Havana, Cuba.

Elvira Cawthon, from Regulatory Insight, Inc., Tennessee, was named in the protocol as clinical consultant and study monitor. The role of the study monitor is specified as follows:

"The study Monitor will assure that the investigator is executing the protocol as outlined and intended. This includes insuring that a signed informed consent form has been attained from each subject’s caregiver prior to commencing the protocol, that the study procedure protocol is administered as specified, and that all study evaluations and measurements are taken using the specified methods and correctly and fully recorded on the appropriate clinical case report forms."

This does not seem ideal, given that the study monitor was in Tennessee, and the study was conducted in either Nazareth or Havana. Accordingly, I contacted Ms Cawthon, who replied: 

"I can confirm that I performed statistical analysis on data from the clinical study you reference that was received from paper CRFs from Dr. Machado following completion of the trial. I was not directly involved in the recruitment, treatment, or outcomes assessment of the subjects whose data was recorded on those CRFs. I have not reviewed any of the articles you referenced below so I cannot attest to whether the data included was based on the analyses that I performed or not or comment on any of the discrepancies without further evaluation at this time."

I had copied Drs Leisman and Machado into my query, and Dr Leisman also replied. He stated:

"I am the senior author of the paper pertaining to a trial of low-level laser therapy in autism spectrum disorder.... I take full responsibility for the publication indicated above and vouch for having personally supervised the implementation of the project whose results were published under the following citation:

Leisman, G. Machado, C., Machado, Y, Chinchilla-Acosta, M. Effects of Low-Level Laser Therapy in Autism Spectrum Disorder. Advances in Experimental Medicine and Biology 2018:1116:111-130. DOI:10.1007/5584_2018_234. The publication is referenced in PubMed as: PMID: 29956199.

I hold a dual appointment at the University of Haifa and at the University of the Medical Sciences of Havana with the latter being "Professor Invitado" by the Ministry of Health of the Republic of Cuba. Ms. Elvira Walls served as the statistical consultant on this project."

However, Dr Leisman denied any knowledge of subsequent publications of follow-up data by Dr Machado. I asked if I could see the data from the Leisman et al study, and he provided a link to a data file on ResearchGate, the details of which I have put on PubPeer.

Alas, the data were amazing, but not in a good way. The main data came from five subscales of the Aberrant Behavior Checklist (ABC)**, which can be combined into a Global score. (There were a handful of typos in the dataset for the Global score, which I have corrected in the following analysis.) For the placebo group, 15 of 19 children obtained exactly the same Global score on all 4 sessions. Note that there is no restriction of range for this scale: reported scores range from 9 to 154. This pattern was also seen in the five individual subscales. You might think that is to be expected if the placebo intervention is ineffective, but that's not the case. Questionnaire measures such as the one used here are never totally stable. In part this is because children's behaviour fluctuates. But even if the behaviour is constant, you expect to see some variability in responses, depending on how the rater interprets the scale of measurement. Furthermore, when study participants are selected because they have extreme scores on a measure, the tendency is for scores to improve on later testing - a phenomenon known as regression to the mean. Such unchanging scores are out of line with anything I have ever come across in the intervention literature.

If we turn to the treated group, we see that 20 of 21 children showed a progressive decline in Global scores (i.e. improvement), with each measurement improving on the previous one over the 4 sessions. This again is just not credible, because we'd expect some fluctuation in children's behaviour as well as variable ratings due to error of measurement. These results were judged to be abnormal in a further commentary by Gillibert and Naudet on PubPeer. They also noted that the statistical distribution of scores was highly improbable, with far more even than odd numbers.
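How implausible is this? A minimal simulation, sketched below in Python, makes the point. Every parameter here (the amount of rating noise, the stability of the underlying behaviour) is my own assumption, chosen generously in favour of stable scores; this is an illustration of the logic, not a reanalysis of the trial data.

```python
import numpy as np
from math import comb, factorial

rng = np.random.default_rng(2023)

# Assumed setup: 19 'placebo' children with perfectly stable underlying
# behaviour, rated on 4 occasions with only a small amount of rating
# noise. (Real test-retest noise on the ABC would be larger than this.)
n_children, n_sessions = 19, 4
true_scores = rng.integers(9, 155, size=n_children)  # reported range 9-154
rating_sd = 3                                        # assumed rater noise

identical = []
for _ in range(10_000):
    obs = np.rint(true_scores[:, None]
                  + rng.normal(0, rating_sd, size=(n_children, n_sessions)))
    identical.append(np.all(obs == obs[:, [0]], axis=1).sum())

# Average number of children scoring identically on all 4 sessions:
# close to zero, nothing like the reported 15 of 19.
print("mean children with 4 identical scores:", np.mean(identical))

# Treated group: under pure noise (no real trend), the chance that one
# child's 4 scores are strictly decreasing is 1/4! = 1/24. The chance
# that at least 20 of 21 children show this pattern is then minuscule.
p = 1 / factorial(n_sessions)
tail = sum(comb(21, k) * p**k * (1 - p)**(21 - k) for k in (20, 21))
print(f"P(>=20 of 21 strictly decreasing, noise only) = {tail:.2g}")
```

A genuine treatment effect would raise the per-child probability of decline, but with realistic measurement noise it would still fall far short of 20 children out of 21 improving at every single measurement.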

Although Dr Machado has been copied into my correspondence, he has not responded to queries. Remember, he was PI for the study in Cuba, and he is first author on a follow-up study from which Dr Leisman dissociated himself. Indeed, I subsequently found that there were no fewer than three follow-up reports, all published in a strange journal whose DOIs did not appear to be genuine:

Machado, C., Machado, Y., Chinchilla, M., & Machado, Yazmina. (2019a). Follow-up assessment of autistic children 6 months after finishing low lever (sic) laser therapy. Internet Journal of Neurology, 21(1). https://doi.org/10.5580/IJN.54101 (available from https://ispub.com/IJN/21/1/54101).

Machado, C., Machado, Y., Chinchilla, M., & Machado, Yazmina. (2019b). Twelve months follow-up comparison between autistic children vs. Initial placebo (treated) groups. Internet Journal of Neurology, 21(2). https://doi.org/10.5580/IJN.54812 (available from https://ispub.com/IJN/21/2/54812).

Machado, C., Machado, Y., Chinchilla, M., & Machado, Yazmina. (2020). Follow-up assessment of autistic children 12 months after finishing low lever (sic) laser therapy. Internet Journal of Neurology, 21(2). https://doi.org/10.5580/IJN.54809 (available from https://ispub.com/IJN/21/2/54809).

The 2019a paper starts by talking of a study of anatomic and functional brain connectivity in 21 children, but then segues to an extended follow-up (6 months) of the 21 treated and 19 placebo children from the Leisman et al study. The Leisman et al study is mentioned but not adequately referenced. Remarkably, every one of the original participants took part in the follow-up. The same trend as before continued: the placebo group stagnated, whereas the treated group continued to improve up to 6 months later, even though they received no further active treatment after the initial 4-week period. The abstract of the 2020 paper reported a further follow-up at 12 months. The huge group difference was sustained (see Figure 2). Three of the treated group were now reported as scoring in the normal range on a measure of clinical impairment.

Figure 2. Chart 1 from Machado et al 2020
 

In the 2019b paper, it is reported that, after the stunning success of the initial phase of the study, the placebo group were offered the intervention, and all took part, whereupon they proceeded to make an almost identical amount of remarkable progress on all five subscales, as well as the global scale (see Figure 3). We might expect the 'baseline' scores of the cross-over group to correspond to the scores reported at the final follow-up (as the placebo group prior to cross-over), but they don't.

Figure 3: Chart 2 of Machado et al 2019b

I checked for other Erchonia studies on clinicaltrials.gov. Another study, virtually identical except for the age range, was registered in 2020 with Dr Leon Morales-Quezada of Spaulding Rehabilitation Hospital, Boston, as Principal Investigator. Comments in the documents suggest this was conducted after Erchonia failed to get the desired FDA approval. Although I have not found a published report of this second trial, I found a recruitment advertisement, which confusingly cites the NCT registration number of the 2013 study. Some summary results are included on clinicaltrials.gov, and they are strikingly different from those of the Leisman et al trial, with no indication of any meaningful difference between active and placebo groups on the final outcome measure, and both groups showing some improvement. I have requested fuller data from Elvira Cawthon (listed as results point of contact), with cc to Dr Morales-Quezada, and will update this post if I hear back.

It would appear that at one level this is a positive story, because it shows the regulatory system working. We do not know why the FDA rejected Erchonia's request for 510(k) market clearance, but the fact that they did so might indicate that they were unimpressed by the data provided by Leisman and Machado. The fact that Machado et al reported their three follow-up studies in what appears to be an unregistered journal suggests they had difficulty persuading regular journals that the findings were legitimate. If eight 5-minute sessions with a low-level laser pointed at the head really could dramatically improve the function of children with autism 12 months later, one would imagine that Nature, Cell and Science would be scrambling to publish the articles. On the other hand, any device that has the potential to stimulate neuronal growth might also ring alarm bells in terms of potential for harm.

Use of low-level lasers to treat autism is only part of the story. Questions remain about the role of Regulatory Insight, Inc., whose statistician apparently failed to notice anything strange about the data from the first autism study. In another post, I plan to look at cases where the same organisation was involved in monitoring and analysing trials of Erchonia laser devices for other conditions such as cellulite, pain, and hearing loss.

Notes

* Quirk, B. J., & Whelan, H. T. (2011). Near-infrared irradiation photobiomodulation: The need for basic science. Photomedicine and Laser Surgery, 29(3), 143–144. https://doi.org/10.1089/pho.2011.3014. This article states "clinical uses of NIR-PBM have been studied in such diverse areas as wound healing, oral mucositis, and retinal toxicity. In addition, NIR-PBM is being considered for study in connection with areas such as aging and neural degenerative diseases (Parkinson's disease in particular). One thing that is missing in all of these pre-clinical and clinical studies is a proper investigation into the basic science of the NIR-PBM phenomenon. Although there is much discussion of the uses of NIR, there is very little on how it actually works. As far as explaining what really happens, we are basically left to resort to saying 'light enters, then a miracle happens, and good things come out!' Clearly, this is insufficient, if for no other reason than our own intellectual curiosity." 

**Aman, M. G., Singh, N. N., Stewart, A. W., & Field, C. J. (1985). The aberrant behavior checklist: A behavior rating scale for the assessment of treatment effects. American Journal of Mental Deficiency, 89(5), 485–491. N. B. this is different from the Autism Behavior Checklist which is a commonly used autism assessment. 

Sunday 19 November 2023

Defence against the dark arts: a proposal for a new MSc course

 


Since I retired, an increasing amount of my time has been taken up with investigating scientific fraud. In recent months, I've become convinced of two things: first, fraud is a far more serious problem than most scientists recognise, and second, we cannot continue to leave the task of tackling it to volunteer sleuths. 

If you ask a typical scientist about fraud, they will usually tell you it is extremely rare, and that it would be a mistake to damage confidence in science because of the activities of a few unprincipled individuals. Asked to name fraudsters, they may, depending on their age and discipline, mention Paolo Macchiarini, John Darsee, Elizabeth Holmes or Diederik Stapel, all high profile, successful individuals, who were brought down when unambiguous evidence of fraud was uncovered. Fraud has been around for years, as documented in an excellent book by Horace Judson (2004), and yet, we are reassured, science is self-correcting, and has prospered despite the activities of the occasional "bad apple".

The problem with this argument is that, on the one hand, we only know about the fraudsters who get caught, and on the other hand, science is not prospering particularly well: numerous published papers produce results that fail to replicate, and major discoveries are few and far between (Harris, 2017). We are swamped with scientific publications, but it is increasingly hard to distinguish the signal from the noise. In my view, it is getting to the point where in many fields it is impossible to build a cumulative science, because we lack a solid foundation of trustworthy findings. And it's getting worse and worse.

My gloomy prognosis is partly engendered by a consideration of a very different kind of fraud: the academic paper mill. In contrast to the lone fraudulent scientist who fakes data to achieve career advancement, the paper mill is an industrial-scale operation, where vast numbers of fraudulent papers are generated, and placed in peer-reviewed journals with authorship slots being sold to willing customers. This process is facilitated in some cases by publishers who encourage special issues, which are then taken over by "guest editors" who work for a paper mill. Some paper mill products are very hard to detect: they may be created from a convincing template with just a few details altered to make the article original. Others are incoherent nonsense, with spectacularly strange prose emerging when "tortured phrases" are inserted to evade plagiarism detectors.
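To make the "tortured phrases" idea concrete, here is a minimal screening sketch in Python, in the spirit of the approach of Cabanac, Labbé and Magazinov (2021). The phrase list is a small sample chosen to illustrate the genre rather than drawn verbatim from their catalogue, and the helper function is my own invention; the real Problematic Paper Screener works from a much larger curated list.

```python
# Synonym-substitutions of standard technical terms, of the kind
# catalogued by Cabanac et al. (2021). This handful is illustrative only.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "flag to commotion": "signal to noise",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [(t, o) for t, o in TORTURED_PHRASES.items() if t in lowered]

sample = ("We apply profound learning to improve the flag to commotion "
          "ratio of the colossal information pipeline.")
print(flag_tortured_phrases(sample))
# [('profound learning', 'deep learning'),
#  ('colossal information', 'big data'),
#  ('flag to commotion', 'signal to noise')]
```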

You may wonder whether it matters if a proportion of the published literature is nonsense: surely any credible scientist will just ignore such material? Unfortunately, it's not so simple. First, it is likely that the paper mill products that are detected are just the tip of the iceberg - a clever fraudster will modify their methods to evade detection. Second, many fields of science attempt to synthesise findings using big data approaches, automatically combing the literature for studies with specific keywords and then creating databases, e.g. of genotypes and phenotypes. If these contain a large proportion of fictional findings, then attempts to use these databases to generate new knowledge will be frustrated. Similarly, in clinical areas, there is growing concern that systematic reviews that are supposed to synthesise evidence to get at the truth instead lead to confusion because a high proportion of studies are fraudulent. A third and more indirect negative consequence of the explosion in published fraud is that those who have committed fraud can rise to positions of influence and eminence on the back of their misdeeds. They may become editors, with the power to publish further fraudulent papers in return for money, and if promoted to professorships they will train a whole new generation of fraudsters, while being careful to sideline any honest young scientists who want to do things properly. I fear in some institutions this has already happened.

To date, the response of the scientific establishment has been wholly inadequate. There is little attempt to proactively check for fraud: science is still regarded as a gentlemanly pursuit where we should assume everyone has honourable intentions. Even when evidence of misconduct is strong, it can take months or years for a paper to be retracted. As whistleblower Raphaël Lévy asked on his blog: "Is it somebody else's problem to correct the scientific literature?" There is dawning awareness that our methods for hiring and promotion might encourage misconduct, but getting institutions to change is a very slow business, not least because those in positions of power succeeded in the current system, and so think it must be optimal.

The task of unmasking fraud is largely left to hobbyists and volunteers, a self-styled army of "data sleuths", who are mostly motivated by anger at seeing science corrupted and the bad guys getting away with it. They have developed expertise in spotting certain kinds of fraud, such as image manipulation and improbable patterns in data, and they have also uncovered webs of bad actors who have infiltrated many corners of science. One might imagine that the scientific establishment would be grateful that someone is doing this work, but the usual response to a sleuth who finds evidence of malpractice is to ignore them, brush the evidence under the carpet, or accuse them of vexatious behaviour. Publishers and academic institutions are both at fault in this regard.

If I'm right, this relaxed attitude to the fraud epidemic is a disaster-in-waiting. There are a number of things that need to be done urgently. One is to change research culture so that rewards go to those whose work is characterised by openness and integrity, rather than those who get large grants and flashy publications. Another is for publishers to act far more promptly to investigate complaints of malpractice and issue retractions where appropriate. Both of these things are beginning to happen, slowly. But there is a third measure that I think should be taken as soon as possible, and that is to train a generation of researchers in fraud busting. We owe a huge debt of gratitude to the data sleuths, but the scale of the problem is such that we need the equivalent of a police force rather than a volunteer band. Here are some of the topics that an MSc course could cover:

  • How to spot dodgy datasets (see the sketch after this list)
  • How to spot manipulated figures
  • Textual characteristics of fraudulent articles
  • Checking scientific credentials
  • Checking publisher credentials/identifying predatory publishers
  • How to raise a complaint when fraud is suspected
  • How to protect yourself from legal attacks
  • Cognitive processes that lead individuals to commit fraud
  • Institutional practices that create perverse incentives
  • The other side of the coin: "Merchants of doubt" whose goal is to discredit science

I'm sure there's much more that could be added and would be glad of suggestions. 
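By way of a taster for the first topic, here is the sort of quick screen a student might learn, sketched in Python with made-up numbers. The terminal-digit and parity checks echo the even/odd anomaly that Gillibert and Naudet spotted in the laser dataset discussed in the post above; none of the values below come from any real study.

```python
from collections import Counter
from scipy.stats import chisquare

# Terminal digits of genuinely measured values are usually close to
# uniform, so a skewed distribution is a red flag. 'scores' is a
# fabricated stand-in for a column from a suspect dataset.
scores = [12, 34, 56, 78, 90, 22, 44, 66, 88, 10,
          32, 54, 76, 98, 20, 42, 64, 86, 18, 30]

digits = Counter(s % 10 for s in scores)
observed = [digits.get(d, 0) for d in range(10)]
stat, p = chisquare(observed)  # null hypothesis: all ten digits equally likely
print(f"last-digit chi-square = {stat:.1f}, p = {p:.4f}")

n_even = sum(s % 2 == 0 for s in scores)
print(f"{n_even} of {len(scores)} values are even")
# With 20 of 20 even, the binomial probability under a fair 50:50 split
# is 0.5**20 -- about one in a million. (With only 20 values the
# chi-square approximation is rough; an exact multinomial test would be
# better, but the red flag is the same either way.)
```

None of this is conclusive on its own, of course; part of the training would be learning when such screens are warranted and how to follow them up responsibly.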

Now, of course, the question is what you could do with such a qualification. If my predictions are right, then individuals with such expertise will increasingly be in demand in academic institutions and publishing houses, to help ensure the integrity of the work they produce and publish. I also hope that there will be growing recognition of the need for more formal structures to be set up to investigate scientific fraud and take action when it is discovered: graduates of such a course would be exactly the kind of employees needed in such an organisation.

It might be argued that this is a hopeless endeavour. In Harry Potter and the Half-Blood Prince (Rowling, 2005) Professor Snape tells his pupils:

 "The Dark Arts, are many, varied, ever-changing, and eternal. Fighting them is like fighting a many-headed monster, which, each time a neck is severed, sprouts a head even fiercer and cleverer than before. You are fighting that which is unfixed, mutating, indestructible."

This is a pretty accurate description of what is involved in tackling scientific fraud. But Snape does not therefore conclude that action is pointless. On the contrary, he says: 

"Your defences must therefore be as flexible and inventive as the arts you seek to undo."

I would argue that any university that wants to be ahead of the field in this enterprise could show flexibility and inventiveness by starting up a postgraduate course to train the next generation of fraud-busting wizards.

Bibliography

Bishop, D. V. M. (2023). Red flags for papermills need to go beyond the level of individual articles: A case study of Hindawi special issues. https://osf.io/preprints/psyarxiv/6mbgv
Boughton, S. L., Wilkinson, J., & Bero, L. (2021). When beauty is but skin deep: Dealing with problematic studies in systematic reviews. Cochrane Database of Systematic Reviews, 5. https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.ED000152/full
Byrne, J. A., & Christopher, J. (2020). Digital magic, or the dark arts of the 21st century—How can journals and peer reviewers detect manuscripts and publications from paper mills? FEBS Letters, 594(4), 583–589. https://doi.org/10.1002/1873-3468.13747
Cabanac, G., Labbé, C., & Magazinov, A. (2021). Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals (arXiv:2107.06751). arXiv. https://doi.org/10.48550/arXiv.2107.06751
Carreyrou, J. (2019). Bad Blood: Secrets and Lies in a Silicon Valley Startup. Pan Macmillan.
COPE & STM. (2022). Paper mills: Research report from COPE & STM. Committee on Publication Ethics and STM. https://doi.org/10.24318/jtbG8IHL 
Culliton, B. J. (1983). Coping with fraud: The Darsee Case. Science (New York, N.Y.), 220(4592), 31–35. https://doi.org/10.1126/science.6828878 
Grey, S., & Bolland, M. (2022, August 18). Guest Post—Who Cares About Publication Integrity? The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2022/08/18/guest-post-who-cares-about-publication-integrity/ 
Hanson, M., Gómez Barreiro, P., Crosetto, P., & Brockington, D. (2023). The strain on scientific publishing (arXiv:2309.15884). arXiv. https://arxiv.org/ftp/arxiv/papers/2309/2309.15884.pdf
Harris, R. (2017). Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions (1st edition). Basic Books.
Judson, H. F. (2004). The Great Betrayal: Fraud in Science. Harcourt.
Lévy, R. (2022, December 15). Is it somebody else’s problem to correct the scientific literature? Rapha-z-Lab. https://raphazlab.wordpress.com/2022/12/15/is-it-somebody-elses-problem-to-correct-the-scientific-literature/
Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Sham, M. H., Barbour, V., Coriat, A.-M., Foeger, N., & Dirnagl, U. (2020). The Hong Kong Principles for assessing researchers: Fostering research integrity. PLOS Biology, 18(7), e3000737. https://doi.org/10.1371/journal.pbio.3000737
Oreskes, N., & Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Press.
Paterlini, M. (2023). Paolo Macchiarini: Disgraced surgeon is sentenced to 30 months in prison. BMJ, 381, p1442. https://doi.org/10.1136/bmj.p1442
Rowling, J. K. (2005). Harry Potter and the Half-Blood Prince. Bloomsbury. ISBN 9780747581086
Smith, R. (2021, July 5). Time to assume that health research is fraudulent until proven otherwise? The BMJ. https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-health-research-is-fraudulent-until-proved-otherwise/
Stapel, D. (2016). Faking science: A true story of academic fraud. Translated by Nicholas J. Brown. http://nick.brown.free.fr/stapel
Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in science. Perspectives on Psychological Science, 7(6), 670–688. https://doi.org/10.1177/1745691612460687
 

Note: On-topic comments are welcome but are moderated to avoid spam, so there may be a delay before they appear.