Friday 9 February 2024

The world of Poor Things at MDPI journals


At the weekend, the Observer ran a piece by Robin McKie entitled "‘The situation has become appalling’: fake scientific papers push research credibility to crisis point". I was one of those interviewed for the article, describing my concerns about a flood of dodgy papers that was polluting the scientific literature.

Two days later I received an email from the editorial office of MDPI publishers with the header "[Children] (IF: 2.4, ISSN 2227-9067): Good Paper Sharing on the Topic of" (sic) that began:

Greetings from the Children Editorial Office!

We recently collected 10 highly cited papers in our journal related to Childhood Autism. And we sincerely invite you to visit and read these papers, because you are an excellent expert in this field of study.

Who could resist such a flattering invitation? MDPI is one of those publishers that appears to encourage publication of low-quality work, with massive growth in special issues where papers are published with remarkably rapid turnaround times. Only last week it was revealed that the publisher is affected by fake peer review that appears to be generated by AI. So I was curious to take a look.

The first article, by Frolli et al (2022a), was weird. It reported a comparison of two types of intervention designed to improve emotion recognition in children with autism, one of which used virtual reality. The first red flag was the sample size: two groups each of 30 children, all originally from the city of Caserta. I checked Wikipedia, which told me the population of Caserta was around 76,000 in 2017. Recruiting participants for intervention studies is typically slow and laborious, and this is a remarkable sample size to draw from such a small city. But credibility is stretched to breaking point on hearing that the selection criteria required that the children were all aged between 9 and 10 years and had IQs of 97 or above. No researcher in their right mind would impose unnecessary constraints on recruitment, and both the age and IQ criteria are far tighter than would usually be adopted. I wondered whether there might be a typo in this account, but we then hear that the IQ range of the sample is indeed remarkably narrow:

"The first experimental group (Gr1) was composed of 30 individuals with a mean age of 9.3 (SD 0.63) and a mean IQ of 103.00 (SD 1.70). ...... The second experimental group (Gr2) was composed of 30 individuals with a mean age of 9.4 (SD 0.49) and mean IQ of 103.13 (SD 2.04)...."

Most samples in studies using Wechsler IQ scales have SDs of at least 8, even if cutoffs are applied as selection criteria, so these values are unbelievably low.
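
To see what SD one would actually expect, here is a minimal R sketch, illustrative only and assuming IQ is normally distributed with mean 100 and SD 15, simulating the effect of the IQ cutoff of 97 used by Frolli et al (2022a):

    # Simulate a million Wechsler-style IQs: mean 100, SD 15
    set.seed(1)
    iq <- rnorm(1e6, mean = 100, sd = 15)
    # Apply the one-sided selection criterion of IQ 97 or above
    sd(iq[iq >= 97])   # ~9.6, far larger than the reported SDs of about 2

Even with the cutoff applied, the expected SD is around 9.6, so reported SDs of 1.70 and 2.04 are wildly out of line.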

This dubious paper prompted me to look at others by the first author. It was rather like pulling a thread on a hole in a sweater - things started to unravel fast. A paper published by Frolli et al (2023a) in the MDPI journal Behavioral Sciences claimed to have studied eighty 18-year-olds recruited from four different high schools. The selection criteria were again unbelievably stringent: IQ assessed on the WAIS-IV fell between 95 and 105 "to ensure that participants fell within the average range of intellectual functioning, minimizing the impact of extreme cognitive variations on our analyses". The lower bound of the selected IQ range corresponds to a z-score of -0.33, or the 37th percentile. If the population of students covered the full range of IQ, then only around 25% would meet the criterion (between the 37th and 63rd centiles), so to obtain a sample of 80 it would be necessary to test over 300 potential participants. Furthermore, there are IQ screening tests that are relatively quick to administer and could have been used in this circumstance, but the WAIS-IV is not one of them. We are told all participants were given the full test, which requires individual administration by a qualified psychologist and takes around one hour to complete. So who did all this testing, and where? The article states: "The data were collected and analyzed at the FINDS Neuropsychiatry Outpatient Clinic by licensed psychologists in collaboration with the University of International Studies of Rome (UNINT)." So we are supposed to believe that hundreds of 18-year-olds trekked to a neuropsychiatry outpatient clinic for a full IQ screening which most of them would not have passed. I cannot imagine a less efficient way of conducting such a study. I could not find any mention of compensation for participants, which is perhaps unsurprising as the research received no external funding. All of this is described as happening remarkably fast, with ethics approval in January 2023, and submission of the article in October 2023.
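
The arithmetic behind these figures is easy to verify in R, again assuming a normal IQ distribution with mean 100 and SD 15:

    # Percentile of the lower bound (IQ 95) of the selection criterion
    pnorm(95, mean = 100, sd = 15)        # 0.37, i.e. the 37th percentile
    # Proportion of an unselected population scoring between 95 and 105
    p_meet <- pnorm(105, 100, 15) - pnorm(95, 100, 15)
    p_meet                                # ~0.26
    # Number who would need testing to yield 80 eligible participants
    80 / p_meet                           # ~306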

Another paper in Children in 2023 focused on ADHD, and again reported recruiting two groups of 30 children for an intervention that lasted 5 months (Frolli et al., 2023b). The narrow IQ selection criteria were again used, with WISC-IV IQs in the range 95-105, and the mean IQs were 96.48 (SD = 1.09) and 98.44 (SD = 1.12) for groups 1 and 2 respectively. Again, the research received no external funding. The report of ethics approval is scanty: "The study was conducted in accordance with the Declaration of Helsinki. The study was approved by the Ethics Committee and the Academic Senate of the University of International Studies of Rome."

The same first author published a paper on the impact of COVID-19 on cognitive development and executive functioning in adolescents in 2021 (Frolli et al., 2021). I have not gone over it in detail, but a quick scan revealed some very odd statistical reporting. There were numerous F-ratios, but they were all negative, which is impossible: F is a ratio of two positive quantities. Furthermore, the reported p-values and degrees of freedom didn't always correspond to the F-ratio, even if the sign was ignored.
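
For anyone who wants to run this kind of check, the p-value implied by a reported F-ratio and its degrees of freedom can be recomputed in a single line of R; the values below are hypothetical, purely to illustrate the method:

    # p-value implied by a reported F-ratio and degrees of freedom;
    # if this does not match the reported p, something is wrong
    pf(4.5, df1 = 1, df2 = 58, lower.tail = FALSE)   # ~0.038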

At this point I was running out of steam, but a quick look at Frolli et al (2022b) on Executive Functions and Foreign Language Learning suggested yet more problems, with the sentence "Significance at the level of 5% (α < 0.001) has been accepted" featuring at least twice. It is hard to believe that a human being wrote this sentence, or that any human author, editor or reviewer read it without comment.

If anyone is interested in pulling at other related threads, I suspect it would be of interest to look at articles accepted for a Special Issue of the MDPI journal Disabilities co-edited by Frolli.

In his brilliant film Poor Things, Yorgos Lanthimos distorts familiar objects and places just enough to be disturbing. Lisbon looks like what I imagine Lisbon would be in the Victorian age, except that the colours are unusually vivid, there are strange flying cars in the sky, and nobody seems concerned at the central character wandering around only partially clothed (see, e.g., this review). The overall impression is that MDPI publishes papers from that universe, where everything looks superficially like genuine science, but with jarring features that tell you something is amiss. The difference is that Poor Things has a happy ending.

References 

Frolli, A.; Ricci, M.C.; Di Carmine, F.; Lombardi, A.; Bosco, A.; Saviano, E.; Franzese, L. The Impact of COVID-19 on Cognitive Development and Executive Functioning in Adolescents: A First Exploratory Investigation. Brain Sci. 2021, 11, 1222. https://doi.org/10.3390/brainsci11091222

Frolli, A.; Savarese, G.; Di Carmine, F.; Bosco, A.; Saviano, E.; Rega, A.; Carotenuto, M.; Ricci, M.C. Children on the Autism Spectrum and the Use of Virtual Reality for Supporting Social Skills. Children 2022a, 9, 181. https://doi.org/10.3390/children9020181

Frolli, A.; Cerciello, F.; Esposito, C.; Ciotola, S.; De Candia, G.; Ricci, M.C.; Russo, M.G. Executive Functions and Foreign Language Learning. Pediatr. Rep. 2022b, 14, 450-456. https://doi.org/10.3390/pediatric14040053

Frolli, A.; Cerciello, F.; Ciotola, S.; Ricci, M.C.; Esposito, C.; Sica, L.S. Narrative Approach and Mentalization. Behav. Sci. 2023a, 13, 994. https://doi.org/10.3390/bs13120994

Frolli, A.; Cerciello, F.; Esposito, C.; Ricci, M.C.; Laccone, R.P.; Bisogni, F. Universal Design for Learning for Children with ADHD. Children 2023b, 10, 1350. https://doi.org/10.3390/children10081350

Friday 2 February 2024

An (intellectually?) enriching opportunity for affiliation

Guest Post by Nick Wise 



A couple of months ago a professor received the following email, which they forwarded to me.


"Dear esteemed colleagues,

We are delighted to extend an invitation to apply for our prestigious remote research fellowships at the University of Religions and Denominations (URD). These fellowships offer substantial financial support to researchers with papers currently in press, accepted or under review by Scopus-indexed journals. We welcome scholars from diverse academic disciplines to seize this intellectually enriching opportunity.

Fellowship Details:
Fellowship Type: Remote Short-term Research Fellowship.
Research Focus: Diverse fields, spanning humanities, social sciences, interdisciplinary studies, and more.
Research Output: Publication of research articles in Scopus-indexed journals.
Affiliation: Encouragement for researchers to acknowledge URD as their additional affiliation in published articles.
Remuneration: Project-based compensation for each research article.
Payment Range: Up to $1000 USD per article (based on SJR journal ranking).
Eligibility: Papers in press, accepted, or under review by Scopus-indexed journals.

Preference: Priority for indexing before December 30, 2023.

Application Process:   

To express your interest in securing a fellowship, kindly submit your curriculum vitae to Ahmad Moghri at moghri.urd@gmail.com. When emailing your application, please use the subject line: "Research Fellowship, FULL NAME."

Upon Selection:
Successful applicants will receive formal invitations to join our esteemed fellowship program. Invitation letters and collaboration contracts will be dispatched within a maximum of 5 days.

We firmly believe that this fellowship program provides an invaluable platform for scholars to make substantial contributions to their fields while collaborating with the distinguished University of Religions and Denominations. We encourage all eligible individuals to seize this exceptional opportunity.

For inquiries or further information, please do not hesitate to contact moghri.urd@gmail.com.

Warmest Regards,"

Why would an institution pay researchers to say that they are affiliated with it? It could be that funding for the university is related to the number of papers published in indexed journals. More articles associated with the university can also improve its placing in national or international university rankings, which could lead directly to more funding, or to more students wanting to attend and bring in more money.

The University of Religions and Denominations is a private Iranian university specialising, as the name suggests, in the study of different religions and movements. Until recently the institution had very few published papers associated with it according to Dimensions, and their subject matter was all related to religion. However, last year there was a substantial increase to 103 published papers, and so far this year there are already 35. This suggests that some academics have taken up the offer in the advert to include URD as an affiliation.

Surbhi Bhatia Khan has been a lecturer in data science at the University of Salford in the UK since March 2023, and is a top 2% scientist in the world according to Stanford University's rankings. She published 29 research articles last year according to Dimensions, an impressive output, in which she was primarily affiliated to the University of Salford. In addition, though, 5 of those submitted in the second half of last year had an additional affiliation at the Department of Engineering and Environment at URD, which is not listed as one of the departments on the university website. Additionally, 19 of the 29 state that she is affiliated to the Lebanese American University in Beirut, which she was not affiliated with before 2023. She has yet to mention her role at either of these additional institutions on her LinkedIn profile.

Looking at the Lebanese American University, another private university, its publication numbers have shot up from 201 in 2015 to 503 in 2021 and 2,842 in 2023, according to Dimensions. So far in 2024 they have published 525, on track for over 6,000 publications for the year. By contrast, according to the university website, the faculty consisted of 547 full-time staff members in 2021 but had shrunk to 423 in 2023.  It is hard to imagine how such growth in publication numbers could occur without a similar growth in the faculty, let alone with a reduction.

How many other institutions are seeing incredible increases in publication numbers? Last year we saw gaming of the system on a grand scale by various Saudi Arabian universities, but how many offers like the one above are going around, whether by email or sent through WhatsApp groups or similar?

The Committee On Publication Ethics held a forum on claiming institutional affiliations in December 2023, in recognition of the fact that guidance for what merits affiliation to an institution is lacking and there are no accepted standards for how many affiliations an author should give. It looks like such guidance can’t come soon enough.

Nick Wise is a researcher at the University of Cambridge, UK.

Note: Comments are moderated to prevent spam and abuse, so please be patient if you post a comment and it does not appear immediately

P.S. 3rd Feb 2024

Someone on social media queried the "top 2% rating" for Khan. Nick tells me this is based on an Elsevier ranking for 2022: https://elsevier.digitalcommonsdata.com/datasets/btchxktzyw/6

Tuesday 5 December 2023

Low-level lasers. Part 2. Erchonia and the universal panacea



In my last blogpost, I looked at a study that claimed continuing improvements of symptoms of autism after eight 5-minute sessions in which a low-level laser was pointed at the head. The data were so extreme that I became interested in Erchonia, the company that sponsored the study, and in Regulatory Insight, Inc., whose statistician failed to notice anything odd. In exploring Erchonia's research corpus, I found that they have investigated the use of their low-level laser products for a remarkable range of conditions. A search of clinicaltrials.gov with the keyword Erchonia produced 47 records, describing studies of pain (chronic back pain, post-surgical pain, and foot pain), body contouring (circumference reduction, cellulite treatment), sensorineural hearing loss, Alzheimer's disease, hair loss, acne and toenail fungus. After excluding the trials on autism described in my previous post, fourteen of the records described randomised controlled trials in which an active laser was compared with a placebo device that looked the same, with both patient and researcher kept in the dark about which device was which until the data were analysed. As with the autism study, the research designs for these RCTs specified on clinicaltrials.gov looked strong, with statistician Elvira Cawthon from Regulatory Insight involved in data analysis.

As shown in Figure 1, where results are reported for RCTs, they have been spectacular in virtually all cases. The raw data are mostly not available, and in general the plotted data look less extreme than in the autism trial covered in last week's post, but nonetheless, the pattern is a consistent one, where over half the active group meet the cutoff for improvement, whereas less than half (typically 25% or less) of the placebo group do so. 

FIGURE 1: Proportions in active treated group vs placebo group meeting preregistered criterion for improvement (Error bars show SE)*

I looked for results from mainstream science against which to benchmark the Erchonia findings. I found a big review of behavioural and pharmaceutical interventions for obesity by the US Agency for Healthcare Research and Quality (LeBlanc et al, 2018). Figures 7 and 13 show results for binary outcomes - relative risk of losing 5% or more of body weight over a 12-month period; i.e. the proportion of treated individuals who met this criterion divided by the proportion of controls. In 38 trials of behavioural interventions, the mean RR was 1.94 [95% CI, 1.70 to 2.22]. For 31 pharmaceutical interventions, the effect varied with the specific medication, with RR ranging from 1.18 to 3.86. Only two pharmaceutical comparisons had RR in excess of 3.0. By contrast, for five trials of body contouring or cellulite reduction from Erchonia, the RRs ranged from 3.6 to 18.0. Now, it is important to note that this is not comparing like with like: the people in the Erchonia trials were typically not clinically obese; they were mostly women seeking cosmetic improvements to their appearance. So you could, and I am sure many would, argue it's an unfair comparison. If anyone knows of another literature that might provide a better benchmark, please let me know. The point is that the effect sizes reported by Erchonia are enormous relative to the kinds of effects typically seen with other treatments focused on weight reduction.
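
To make the metric concrete, the relative risk is simply the proportion improved in the treated group divided by the proportion improved in the placebo group; a trivial R sketch with hypothetical proportions (not taken from any particular trial):

    # Relative risk for a binary improvement outcome
    rr <- function(p_treated, p_control) p_treated / p_control
    rr(0.50, 0.25)   # 2.0, the scale typical of behavioural obesity trials
    rr(0.72, 0.04)   # 18, the scale of the most extreme Erchonia result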

If we look more generally at the other results obtained with low-level lasers, we can compare them to an overview of effectiveness of common medications (Leucht et al, 2015). These authors presented results from a huge review of different therapies, with effect sizes represented as standardized mean differences (SMD - familiar to psychologists as Cohen's d). I converted Erchonia results into this metric*, and found that across all the studies of pain relief shown in Figure 1, the average SMD was 1.30, with a range from 0.87 to 1.77. This contrasts with Leucht et al's estimated effect size of 1.06 for oxycodone plus paracetamol, and 0.83 for sumatriptan for migraine. So if we are to believe the results, they indicate that the effect of Erchonia low-level lasers is as good as or better than that of the most effective pharmaceutical medications we have for pain relief or weight loss. I'm afraid I remain highly sceptical.
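
For anyone wanting to try such a conversion, one standard approach (not necessarily the exact method used for the numbers above; the actual scripts are linked in the footnote) is Chinn's (2000) logit transformation, sketched here in R:

    # Convert proportions improved in each group to an approximate SMD,
    # using the logit method of Chinn (2000): d = ln(OR) * sqrt(3) / pi
    prop_to_smd <- function(p_treated, p_control) {
      or <- (p_treated / (1 - p_treated)) / (p_control / (1 - p_control))
      log(or) * sqrt(3) / pi
    }
    prop_to_smd(0.60, 0.20)   # ~0.99 for an illustrative pair of proportions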

I would not have dreamed of looking at Erchonia's track record if it were not for their impossibly good results in the Leisman et al autism trial that I discussed in the previous blogpost. When I looked in more detail, I was reminded of the kinds of claims made for alternative treatments for children's learning difficulties, where parents are drawn in with slick websites promising scientifically proven interventions, and glowing testimonials from satisfied customers. Back in 2012 I blogged about how to evaluate "neuroscientific" interventions for dyslexia. Most of the points I made there apply to the world of "photobiomodulation" therapies, including the need to be wary when a provider claims that a single method is effective for a whole host of different conditions.

Erchonia products are sold worldwide and seem popular with alternative health practitioners. For instance, in Stockport, Manchester, you can attend a chiropractic clinic where Zerona laser treatment will remove "stubborn body fat". In London there is a podiatry centre that reassures you: "There are numerous papers which show that cold laser affects the activity of cells and chemicals within the cell. It has been shown that cold laser can encourage the formation of stem cells which are key building blocks in tissue reparation. It also affects chemicals such as cytochrome c and causes a cascade of reactions which stimulates the healing. There is much research to show that cold laser affects healing and there are now several very good class 1 studies to show that laser can be effective." But when I looked for details of these "very good class 1 studies" they were nowhere to be found. In particular, it was hard to find research by scientists without vested interests in the technology.  

Of all the RCTs that I found, there were just two that were conducted at reputable universities. One of them, on hearing loss (NCT01820416), was conducted at the University of Iowa, but was terminated prematurely because an interim analysis showed no clinically or statistically significant effects (Goodman et al., 2013). This contrasts sharply with NCT00787189, which had the dramatic results reported in Figure 1 (not, as far as I know, published outside of clinicaltrials.gov). The other university-based study was the autism study based in Boston described in my previous post: again, with unpublished, unimpressive results posted on clinicaltrials.gov.

This suggests it is important when evaluating novel therapies to have results from studies that are independent of those promoting the therapy. But, sadly, this is easier to recommend than to achieve. Running a trial takes a lot of time and effort: why would anyone do this if they thought it likely that the intervention would not work and the postulated mechanism of action was unproven? There would be a strong risk that you'd end up putting in effort that would end in a null result, which would be hard to publish. And you'd be unlikely to convince those who believed in the therapy - they would no doubt say you had the wrong wavelength of light, or insufficient duration of therapy, and so on.  

I suspect the response by those who believe in the power of low-level lasers will be that I am demonstrating prejudice, in my reluctance to accept the evidence that they provide of dramatic benefits. But, quite simply, if low-level laser treatment were so remarkably effective in melting fat and decreasing pain, surely it would have quickly been publicised through word of mouth from satisfied customers. Many of us are willing to subject our bodies to all kinds of punishments in a quest to be thin and/or pain-free. If this could be done simply and efficiently without the need for drugs, wouldn't this method have taken over the world?

*Summary files (Erchonia_proportions4.csv) and script (Erchonia_proportions_for_blog.R) are on Github, here.

Saturday 25 November 2023

Low-level lasers. Part 1. Shining a light on an unconventional treatment for autism



'Light enters, then a miracle happens, and good things come out!' (Quirk & Whelan, 2011*)



I'm occasionally asked to investigate weird interventions for children's neurodevelopmental conditions, and recently I've found myself immersed in the world of low-level laser treatments. The material I've dug up is not new - it's been around for some years, but has not been on my radar until now. 

A starting point is this 2018 press statement by Erchonia, a firm that makes low-level laser devices for quasi-medical interventions. 

They had tested a device that was supposed to reduce irritability in autistic children by applying low-level laser light to the temporal and posterior regions of the head (see Figure 1) in 5-minute sessions twice a week for 4 weeks.

Figure 1: sites of stimulation by low-level laser

The study, which was reported here, was carefully designed as a randomized controlled trial. Half the children received a placebo intervention. Placebo and active laser devices were designed to look identical, and both emitted light, and neither the child nor the person administering the treatment knew whether the active or placebo light was being used.

According to Erchonia, “The results are so strong, nobody can argue them.” (sic). Alas, their confidence turned out to be misplaced.

The rationale given by Leisman et al (with my annotations in yellow in square brackets) is as follows: "LLLT promotes cell and neuronal repair (Dawood and Salman 2013) [This article is about wound healing, not neurons] and brain network rearrangement (Erlicher et al. 2002) [This is a study of rat cells in a dish] in many neurologic disorders identified with lesions in the hubs of default mode networks (Buckner et al. 2008) [This paper does not mention lasers]. LLLT facilitates a fast-track wound-healing (Dawood and Salman 2013) as mitochondria respond to light in the red and near-infrared spectrum (Quirk and Whelan 2011*) [review of near-infrared irradiation photobiomodulation that notes inadequate knowledge of mechanisms - see cartoon]. On the other hand, Erlicher et al. (2002) have demonstrated that weak light directs the leading edge of growth cones of a nerve [cells in a dish]. Therefore, when a light beam is positioned in front of a nerve’s leading edge, the neuron will move in the direction of the light and grow in length (Black et al. 2013 [rat cells in a dish]; Quirk and Whelan 2011). Nerve cells appear to thrive and grow in the presence of low-energy light, and we think that the effect seen here is associated with the rearrangement of connectivity."

I started out looking at the registration of the trial on ClinicalTrials.gov. This included a very thorough document that detailed a protocol and analysis plan, but there were some puzzling inconsistencies; I documented them here on PubPeer, and subsequently a much more detailed critique was posted there by Florian Naudet and André Gillibert. Among other things, there was confusion about where the study was done. The registration document said it was done in Nazareth, Israel, which is where the first author, Gerry Leisman, was based. But it also said that the PI was Calixto Machado, who is based in Havana, Cuba.

Elvira Cawthon, from Regulatory Insight, Inc., Tennessee, was mentioned on the protocol as clinical consultant and study monitor. The role of the study monitor is specified as follows:

"The study Monitor will assure that the investigator is executing the protocol as outlined and intended. This includes insuring that a signed informed consent form has been attained from each subject’s caregiver prior to commencing the protocol, that the study procedure protocol is administered as specified, and that all study evaluations and measurements are taken using the specified methods and correctly and fully recorded on the appropriate clinical case report forms."

This does not seem ideal, given that the study monitor was in Tennessee, and the study was conducted in either Nazareth or Havana. Accordingly, I contacted Ms Cawthon, who replied: 

"I can confirm that I performed statistical analysis on data from the clinical study you reference that was received from paper CRFs from Dr. Machado following completion of the trial. I was not directly involved in the recruitment, treatment, or outcomes assessment of the subjects whose data was recorded on those CRFs. I have not reviewed any of the articles you referenced below so I cannot attest to whether the data included was based on the analyses that I performed or not or comment on any of the discrepancies without further evaluation at this time."

I had copied Drs Leisman and Machado into my query, and Dr Leisman also replied. He stated:

"I am the senior author of the paper pertaining to a trial of low-level laser therapy in autism spectrum disorder.... I take full responsibility for the publication indicated above and vouch for having personally supervised the implementation of the project whose results were published under the following citation:

Leisman, G. Machado, C., Machado, Y, Chinchilla-Acosta, M. Effects of Low-Level Laser Therapy in Autism Spectrum Disorder. Advances in Experimental Medicine and Biology 2018:1116:111-130. DOI:10.1007/5584_2018_234. The publication is referenced in PubMed as: PMID: 29956199.

I hold a dual appointment at the University of Haifa and at the University of the Medical Sciences of Havana with the latter being "Professor Invitado" by the Ministry of Health of the Republic of Cuba. Ms. Elvira Walls served as the statistical consultant on this project."

However, Dr Leisman denied any knowledge of subsequent publications of follow-up data by Dr Machado. I asked if I could see the data from the Leisman et al study, and he provided a link to a data file on ResearchGate, the details of which I have put on PubPeer.

Alas, the data were amazing, but not in a good way. The main data came from five subscales of the Aberrant Behavior Checklist (ABC)**, which can be combined into a Global score. (There were a handful of typos in the dataset for the Global score, which I have corrected in the following analysis.) For the placebo group, 15 of 19 children obtained exactly the same global score on all 4 sessions. Note that there is no restriction of range for this scale: reported scores range from 9 to 154. This pattern was also seen in the five individual subscales. You might think that is to be expected if the placebo intervention is ineffective, but that's not the case. Questionnaire measures such as that used here are never totally stable. In part this is because children's behaviour fluctuates. But even if the behaviour is constant, you expect to see some variability in responses, depending on how the rater interprets the scale of measurement. Furthermore, when study participants are selected because they have extreme scores on a measure, the tendency is for scores to improve on later testing - a phenomenon known as regression to the mean. Such unchanging scores are out of line with anything I have ever come across in the intervention literature. If we turn to the treated group, we see that 20 of 21 children showed a progressive decline in global scores (i.e. improvement), with each measurement improving from the previous one over 4 sessions. This again is just not credible, because we'd expect some fluctuation in children's behaviour as well as variable ratings due to error of measurement. These results were judged to be abnormal in a further commentary by Gillibert and Naudet on PubPeer. They also noted that the statistical distribution of scores was highly improbable, with far more even than odd numbers.
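
The excess of even numbers lends itself to a simple check: if scores arise naturally, the final digit should be even about half the time, so a binomial test applies. A sketch in R with made-up counts, purely to illustrate the logic (the actual tallies are in the PubPeer thread):

    # Hypothetical example: suppose 130 of 160 recorded scores were even.
    # Test against the 50:50 split expected if final digits arise naturally.
    binom.test(130, 160, p = 0.5)   # two-sided p-value around 1e-15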

Although Dr Machado has been copied into my correspondence, he has not responded to queries. Remember, he was PI for the study in Cuba, and he is first author on a follow-up study from which Dr Leisman dissociated himself. Indeed, I subsequently found that there were no fewer than three follow-up reports, all appearing in a strange journal whose DOIs did not appear to be genuine: 

Machado, C., Machado, Y., Chinchilla, M., & Machado, Yazmina. (2019a). Follow-up assessment of autistic children 6 months after finishing low lever (sic) laser therapy. Internet Journal of Neurology, 21(1). https://doi.org/10.5580/IJN.54101 (available from https://ispub.com/IJN/21/1/54101).

Machado, C., Machado, Y., Chinchilla, M., & Machado, Yazmina. (2019b). Twelve months follow-up comparison between autistic children vs. Initial placebo (treated) groups. Internet Journal of Neurology, 21(2). https://doi.org/10.5580/IJN.54812 (available from https://ispub.com/IJN/21/2/54812).

Machado, C., Machado, Y., Chinchilla, M., & Machado, Yazmina. (2020). Follow-up assessment of autistic children 12 months after finishing low lever (sic) laser therapy. Internet Journal of Neurology, 21(2). https://doi.org/10.5580/IJN.54809 (available from https://ispub.com/IJN/21/2/54809)

The 2019a paper starts by talking of a study of anatomic and functional brain connectivity in 21 children, but then segues to an extended follow-up (6 months) of the 21 treated and 19 placebo children from the Leisman et al study. The Leisman et al study is mentioned but not adequately referenced. Remarkably, all the original participants took part in the follow-up. The same trend as before continued: the placebo group stagnated, whereas the treated group continued to improve up to 6 months later, even though they received no further active treatment after the initial 4-week period. The 2020 abstract reported a further follow-up to 12 months. The huge group difference was sustained (see Figure 2). Three of the treated group were now reported as scoring in the normal range on a measure of clinical impairment.

Figure 2. Chart 1 from Machado et al 2020

In the 2019b paper, it is reported that, after the stunning success of the initial phase of the study, the placebo group were offered the intervention, and all took part, whereupon they proceeded to make an almost identical amount of remarkable progress on all five subscales, as well as the global scale (see Figure 3). We might expect the 'baseline' scores of the cross-over group to correspond to the scores reported at the final follow-up (as placebo group prior to cross-over), but they don't.

Figure 3: Chart 2 of Machado et al 2019b

I checked for other Erchonia studies on clinicaltrials.gov. Another study, virtually identical except for the age range, was registered in 2020 with Dr Leon Morales-Quezada of Spaulding Rehabilitation Hospital, Boston as Principal Investigator.  Comments in the documents suggest this was conducted after Erchonia failed to get the desired FDA approval. Although I have not found a published report of this second trial, I found a recruitment advertisement, which confusingly cites the NCT registration number of the 2013 study. Some summary results are included on clinicaltrials.gov, and they are strikingly different from the Leisman et al trial, with no indication of any meaningful difference between active and placebo groups in the final outcome measure, and both groups showing some improvement. I have requested fuller data from Elvira Cawthon (listed as results point of contact) with cc. to Dr Morales-Quezada and will update this post if I hear back.

It would appear that at one level this is a positive story, because it shows the regulatory system working. We do not know why the FDA rejected Erchonia's request for 510(k) market clearance, but the fact that they did so might indicate that they were unimpressed by the data provided by Leisman and Machado. The fact that Machado et al reported their three follow-up studies in what appears to be an unregistered journal suggests they had difficulty persuading regular journals that the findings were legitimate. If eight 5-minute sessions with a low-level laser pointed at the head really could dramatically improve the function of children with autism 12 months later, one would imagine that Nature, Cell and Science would be scrambling to publish the articles. On the other hand, any device that has the potential to stimulate neuronal growth might also ring alarm bells in terms of potential for harm.

Use of low-level lasers to treat autism is only part of the story. Questions remain about the role of Regulatory Insight, Inc., whose statistician apparently failed to notice anything strange about the data from the first autism study. In another post, I plan to look at cases where the same organisation was involved in monitoring and analysing trials of Erchonia laser devices for other conditions such as cellulite, pain, and hearing loss.

Notes

* Quirk, B. J., & Whelan, H. T. (2011). Near-infrared irradiation photobiomodulation: The need for basic science. Photomedicine and Laser Surgery, 29(3), 143–144. https://doi.org/10.1089/pho.2011.3014. This article states "clinical uses of NIR-PBM have been studied in such diverse areas as wound healing, oral mucositis, and retinal toxicity. In addition, NIR-PBM is being considered for study in connection with areas such as aging and neural degenerative diseases (Parkinson's disease in particular). One thing that is missing in all of these pre-clinical and clinical studies is a proper investigation into the basic science of the NIR-PBM phenomenon. Although there is much discussion of the uses of NIR, there is very little on how it actually works. As far as explaining what really happens, we are basically left to resort to saying 'light enters, then a miracle happens, and good things come out!' Clearly, this is insufficient, if for no other reason than our own intellectual curiosity." 

**Aman, M. G., Singh, N. N., Stewart, A. W., & Field, C. J. (1985). The aberrant behavior checklist: A behavior rating scale for the assessment of treatment effects. American Journal of Mental Deficiency, 89(5), 485–491. N. B. this is different from the Autism Behavior Checklist which is a commonly used autism assessment. 

Sunday 19 November 2023

Defence against the dark arts: a proposal for a new MSc course



Since I retired, an increasing amount of my time has been taken up with investigating scientific fraud. In recent months, I've become convinced of two things: first, fraud is a far more serious problem than most scientists recognise, and second, we cannot continue to leave the task of tackling it to volunteer sleuths. 

If you ask a typical scientist about fraud, they will usually tell you it is extremely rare, and that it would be a mistake to damage confidence in science because of the activities of a few unprincipled individuals. Asked to name fraudsters they may, depending on their age and discipline, mention Paolo Macchiarini, John Darsee, Elizabeth Holmes or Diederik Stapel, all high profile, successful individuals, who were brought down when unambiguous evidence of fraud was uncovered. Fraud has been around for years, as documented in an excellent book by Horace Judson (2004), and yet, we are reassured, science is self-correcting, and has prospered despite the activities of the occasional "bad apple". The problem with this argument is that, on the one hand, we only know about the fraudsters who get caught, and on the other hand, science is not prospering particularly well - numerous published papers produce results that fail to replicate and major discoveries are few and far between (Harris, 2017). We are swamped with scientific publications, but it is increasingly hard to distinguish the signal from the noise. In my view, it is getting to the point where in many fields it is impossible to build a cumulative science, because we lack a solid foundation of trustworthy findings. And it's getting worse and worse.

My gloomy prognosis is partly engendered by a consideration of a very different kind of fraud: the academic paper mill. In contrast to the lone fraudulent scientist who fakes data to achieve career advancement, the paper mill is an industrial-scale operation, where vast numbers of fraudulent papers are generated, and placed in peer-reviewed journals with authorship slots being sold to willing customers. This process is facilitated in some cases by publishers who encourage special issues, which are then taken over by "guest editors" who work for a paper mill. Some paper mill products are very hard to detect: they may be created from a convincing template with just a few details altered to make the article original. Others are incoherent nonsense, with spectacularly strange prose emerging when "tortured phrases" are inserted to evade plagiarism detectors.

You may wonder whether it matters if a proportion of the published literature is nonsense: surely any credible scientist will just ignore such material? Unfortunately, it's not so simple. First, it is likely that the paper mill products that are detected are just the tip of the iceberg - a clever fraudster will modify their methods to evade detection. Second, many fields of science attempt to synthesise findings using big data approaches, automatically combing the literature for studies with specific keywords and then creating databases, e.g. of genotypes and phenotypes. If these contain a large proportion of fictional findings, then attempts to use these databases to generate new knowledge will be frustrated. Similarly, in clinical areas, there is growing concern that systematic reviews that are supposed to synthesise evidence to get at the truth instead lead to confusion because a high proportion of studies are fraudulent. A third and more indirect negative consequence of the explosion in published fraud is that those who have committed fraud can rise to positions of influence and eminence on the back of their misdeeds. They may become editors, with the power to publish further fraudulent papers in return for money, and if promoted to professorships they will train a whole new generation of fraudsters, while being careful to sideline any honest young scientists who want to do things properly. I fear in some institutions this has already happened.

To date, the response of the scientific establishment has been wholly inadequate. There is little attempt to proactively check for fraud: science is still regarded as a gentlemanly pursuit where we should assume everyone has honourable intentions. Even when evidence of misconduct is strong, it can take months or years for a paper to be retracted. As whistleblower Raphaël Levy asked on his blog: Is it somebody else's problem to correct the scientific literature? There is dawning awareness that our methods for hiring and promotion might encourage misconduct, but getting institutions to change is a very slow business, not least because those in positions of power succeeded in the current system, and so think it must be optimal.

The task of unmasking fraud is largely left to hobbyists and volunteers, a self-styled army of "data sleuths", who are mostly motivated by anger at seeing science corrupted and the bad guys getting away with it. They have developed expertise in spotting certain kinds of fraud, such as image manipulation and improbable patterns in data, and they have also uncovered webs of bad actors who have infiltrated many corners of science. One might imagine that the scientific establishment would be grateful that someone is doing this work, but the usual response to a sleuth who finds evidence of malpractice is to ignore them, brush the evidence under the carpet, or accuse them of vexatious behaviour. Publishers and academic institutions are both at fault in this regard.

If I'm right, this relaxed attitude to the fraud epidemic is a disaster-in-waiting. There are a number of things that need to be done urgently. One is to change research culture so that rewards go to those whose work is characterised by openness and integrity, rather than those who get large grants and flashy publications. Another is for publishers to act far more promptly to investigate complaints of malpractice and issue retractions where appropriate. Both of these things are beginning to happen, slowly. But there is a third measure that I think should be taken as soon as possible, and that is to train a generation of researchers in fraud busting. We owe a huge debt of gratitude to the data sleuths, but the scale of the problem is such that we need the equivalent of a police force rather than a volunteer band. Here are some of the topics that an MSc course could cover:

  • How to spot dodgy datasets
  • How to spot manipulated figures
  • Textual characteristics of fraudulent articles
  • Checking scientific credentials
  • Checking publisher credentials/identifying predatory publishers
  • How to raise a complaint when fraud is suspected
  • How to protect yourself from legal attacks
  • Cognitive processes that lead individuals to commit fraud
  • Institutional practices that create perverse incentives
  • The other side of the coin: "Merchants of doubt" whose goal is to discredit science

I'm sure there's much more that could be added and would be glad of suggestions. 

Now, of course, the question is what you could do with such a qualification. If my predictions are right, then individuals with such expertise will increasingly be in demand in academic institutions and publishing houses, to help ensure the integrity of work they produce and publish. I also hope that there will be growing recognition of the need for more formal structures to be set up to investigate scientific fraud and take action when it is discovered: graduates of such a course would be exactly the kind of employees needed in such an organisation.

It might be argued that this is a hopeless endeavour. In Harry Potter and the Half-Blood Prince (Rowling, 2005) Professor Snape tells his pupils:

 "The Dark Arts, are many, varied, ever-changing, and eternal. Fighting them is like fighting a many-headed monster, which, each time a neck is severed, sprouts a head even fiercer and cleverer than before. You are fighting that which is unfixed, mutating, indestructible."

This is a pretty accurate description of what is involved in tackling scientific fraud. But Snape does not therefore conclude that action is pointless. On the contrary, he says: 

"Your defences must therefore be as flexible and inventive as the arts you seek to undo."

I would argue that any university that wants to be ahead of the field in this enterprise should show flexibility and inventiveness in starting up a postgraduate course to train the next generation of fraud-busting wizards.

Bibliography

Bishop, D. V. M. (2023). Red flags for papermills need to go beyond the level of individual articles: A case study of Hindawi special issues. https://osf.io/preprints/psyarxiv/6mbgv
Boughton, S. L., Wilkinson, J., & Bero, L. (2021). When beauty is but skin deep: Dealing with problematic studies in systematic reviews. Cochrane Database of Systematic Reviews, 5. https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.ED000152/full
Byrne, J. A., & Christopher, J. (2020). Digital magic, or the dark arts of the 21st century—How can journals and peer reviewers detect manuscripts and publications from paper mills? FEBS Letters, 594(4), 583–589. https://doi.org/10.1002/1873-3468.13747
Cabanac, G., Labbé, C., & Magazinov, A. (2021). Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals (arXiv:2107.06751). arXiv. https://doi.org/10.48550/arXiv.2107.06751
Carreyrou, J. (2019). Bad Blood: Secrets and Lies in a Silicon Valley Startup. Pan Macmillan.
COPE & STM. (2022). Paper mills: Research report from COPE & STM. Committee on Publication Ethics and STM. https://doi.org/10.24318/jtbG8IHL
Culliton, B. J. (1983). Coping with fraud: The Darsee case. Science, 220(4592), 31–35. https://doi.org/10.1126/science.6828878
Grey, S., & Bolland, M. (2022, August 18). Guest Post—Who Cares About Publication Integrity? The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2022/08/18/guest-post-who-cares-about-publication-integrity/
Hanson, M., Gómez Barreiro, P., Crosetto, P., & Brockington, D. (2023). The strain on scientific publishing (arXiv:2309.15884). arXiv. https://arxiv.org/ftp/arxiv/papers/2309/2309.15884.pdf
Harris, R. (2017). Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions (1st edition). Basic Books.
Judson, H. F. (2004). The Great Betrayal: Fraud in Science. Harcourt, Orlando.
Lévy, R. (2022, December 15). Is it somebody else’s problem to correct the scientific literature? Rapha-z-Lab. https://raphazlab.wordpress.com/2022/12/15/is-it-somebody-elses-problem-to-correct-the-scientific-literature/
Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Sham, M. H., Barbour, V., Coriat, A.-M., Foeger, N., & Dirnagl, U. (2020). The Hong Kong Principles for assessing researchers: Fostering research integrity. PLOS Biology, 18(7), e3000737. https://doi.org/10.1371/journal.pbio.3000737
Oreskes, N., & Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Press.
Paterlini, M. (2023). Paolo Macchiarini: Disgraced surgeon is sentenced to 30 months in prison. BMJ, 381, p1442. https://doi.org/10.1136/bmj.p1442
Rowling, J. K. (2005). Harry Potter and the Half-Blood Prince. Bloomsbury, London. ISBN 9780747581086
Smith, R. (2021, July 5). Time to assume that health research is fraudulent until proven otherwise? The BMJ. https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-health-research-is-fraudulent-until-proved-otherwise/
Stapel, D. (2016). Faking science: A true story of academic fraud. Translated by Nicholas J. Brown. http://nick.brown.free.fr/stapel
Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in science. Perspectives on Psychological Science, 7(6), 670–688. https://doi.org/10.1177/1745691612460687

Note: On-topic comments are welcome but are moderated to avoid spam, so there may be a delay before they appear.

Thursday 12 October 2023

When privacy rules protect fraudsters

I was recently contacted with what I thought was a simple request: could I check the Oxford University Gazette to confirm that a person, X, had undergone an oral examination (viva) for a doctorate a few years ago. The request came indirectly from a third party, Y, via a colleague who knew that on the one hand I was interested in scientific fraud, and on the other hand, that I was based at Oxford.

My first thought was that this was a rather cumbersome way of checking someone's credentials. For a start, as Y had discovered, you can consult the on-line University Gazette only if you have an official affiliation with the university. In theory, when someone has a viva, the internal examiner notifies the University Gazette, which announces details in advance so that members of the university can attend if they so wish. In practice, it is vanishingly rare for an audience to turn up, and the formal notification to the Gazette may get overlooked.

But why, I wondered, didn't Y just check the official records of Oxford University listing names and dates of degrees? Well, to my surprise, it turned out that you can't do that. The university website is clear that to verify someone's qualifications you need to meet two conditions. First, the request can only be made by "employers, prospective employers, other educational institutions, funding bodies or recognised voluntary organisations". Second, "the student's permission ... should be acquired prior to making any verification request".

Anyhow, I found evidence online that X had been a graduate student at the university, but when I checked the Gazette I could find no mention of X having had an oral examination. The other source of evidence would be the University Library where there should be a copy of the thesis for all higher degrees. I couldn't find it in the catalogue. I suggested that Y might check further but they were already ahead of me, and had confirmed with the librarian that no thesis had been deposited in that name.

Now, I have no idea whether X is fraudulently claiming to have an Oxford doctorate, but I'm concerned that it is so hard for a private individual to validate someone's credentials. As far as I can tell, the justification comes from data protection regulations, which control what information organisations can hold about individuals. This is not an Oxford-specific interpretation of rules - I checked a few other UK universities, and the same processes apply.

Having said that, Y pointed out to me that there is a precedent for Oxford University to provide information when there is media interest in a high-profile case: in response to a freedom of information request, they confirmed that Ferdinand Marcos Jr did not have the degree he was claiming.

There will always be tension between openness and the individual's right to privacy, but the way the rules are interpreted means that anyone could claim they had a degree from a UK university and it would be impossible to check this. Is there a solution? I'm no lawyer, but I would have thought it should be trivial to require that, on receipt of a degree, the student is asked to give signed permission for their name, degree and date of degree to be recorded on a publicly searchable database. I can't see a downside to this, and going forward it would save a lot of administrative time dealing with verification requests.

Something like this does seem to work outside Europe. I only did a couple of spot checks, but found this for York University (Ontario):

"It is the University's policy to make information about the degrees or credentials conferred by the University and the dates of conferral routinely available. In order to protect our alumni information as much as possible, YU Verify will give users a result only if the search criteria entered matches a unique record. The service will not display a list of names which may match criteria and allow you to select."

And for Macquarie University, Australia, there is exactly the kind of searchable website that I'd assumed Oxford would have.

I'd be interested if anyone can think of unintended bad consequences of this approach. I had a bit of to-and-fro on Twitter about this with someone who argued that it was best to keep as much information as possible out of the public domain. I remain unconvinced: academic qualifications are important for providing someone with credentials as an expert, and if we make it easy for anyone to pretend to have a degree from a prestigious institution, I think the potential for harm is far greater than any harms caused by lack of privacy. Or have I missed something? 

 N.B. Comments on the blog are moderated so may only appear after a delay.


P.S. Some thoughts via Mastodon from Martin Vueilleme on potential drawbacks of a directory:

Far fetched, but I could see the following reasons:

- You live in an oppressive country that targets academics, intellectuals
- Hiding your university helps prevent stalkers (or other predators) from getting further information on you
- Hiding your university background to fit in a group
- Your thesis is on a sensitive topic or a topic forbidden from being studied where you live
- Hiding your university degree because you were technically not allowed to get it (eg women)

My (DB) response is that, balancing these risks against the risk of fraudsters benefiting from a lack of checking, the case for the open directory is strengthened, as these risks seem very slight for UK universities (at least for now!). There is also a financial cost/benefit analysis, where an open directory would seem superior: the records have to be maintained anyhow, and currently there are additional costs for the staff employed to respond to requests for validation.

Tuesday 3 October 2023

Bishopblog catalogue (updated 4 October 2023)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015) Opportunity cost: a new red flag for evaluating interventions (30 Aug 2015) The STEP Physical Literacy programme: have we been here before? (2 Jul 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Developmental language disorder: the need for a clinically relevant definition (9 Jun 2018) Changing terminology for children's language disorders (23 Feb 2020) Developmental Language Disorder (DLD) in relation to DSM5 (29 Feb 2020) Why I am not engaging with the Reading Wars (30 Jan 2022)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 Jun 2014) NeuroPointDX's blood test for Autism Spectrum Disorder (12 Jan 2019) Biomarkers to screen for autism (again) (6 Dec 2022)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014) The sinister side of French psychoanalysis revealed (15 Oct 2019) A desire for clickbait can hinder an academic journal's reputation (4 Oct 2022) Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts (4 Sep 2023)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) Incomprehensibility of much neurogenetics research (1 Oct 2016) A common misunderstanding of natural selection (8 Jan 2017) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Review of 'Innate' by Kevin Mitchell (15 Apr 2019) Why eugenics is wrong (18 Feb 2020)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (11 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014) Incomprehensibility of much neurogenetics research (1 Oct 2016)

Reproducibility
Accentuate the negative (26 Oct 2011) Novelty, interest and replicability (19 Jan 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Who's afraid of open data? (15 Nov 2015) Blogging as post-publication peer review (21 Mar 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Open code: not just data and publications (6 Dec 2015) Why researchers need to understand poker (26 Jan 2016) Reproducibility crisis in psychology (5 Mar 2016) Further benefit of registered reports (22 Mar 2016) Would paying by results improve reproducibility? (7 May 2016) Serendipitous findings in psychology (29 May 2016) Thoughts on the Statcheck project (3 Sep 2016) When is a replication not a replication? (16 Dec 2016) Reproducible practices are the future for early career researchers (1 May 2017) Which neuroimaging measures are useful for individual differences research? (28 May 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017) Citing the research literature: the distorting lens of memory (17 Oct 2017) Reproducibility and phonics: necessary but not sufficient (27 Nov 2017) Improving reproducibility: the future is with the young (9 Feb 2018) Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018) Preprint publication as karaoke (26 Jun 2018) Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018) Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018) Responding to the replication crisis: reflections on Metascience 2019 (15 Sep 2019) Manipulated images: hiding in plain sight (13 May 2020) Frogs or termites: gunshot or cumulative science? (6 Jun 2020) Open data: We know what's needed - now let's make it happen (27 Mar 2021) A proposal for data-sharing that discourages p-hacking (29 Jun 2022) Can systematic reviews help clean up science? (9 Aug 2022) Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts (4 Sep 2023)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 Jun 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 Jul 2014) Why I still use Excel (1 Sep 2016) Sample selection in genetic studies: impact of restricted range (23 Apr 2017) Prospecting for kryptonite: the value of null results (17 Jun 2017) Prisons, developmental language disorder, and base rates (3 Nov 2017) How Analysis of Variance Works (20 Nov 2017) ANOVA, t-tests and regression: different ways of showing the same thing (24 Nov 2017) Using simulations to understand the importance of sample size (21 Dec 2017) Using simulations to understand p-values (26 Dec 2017) One big study or two small studies? (12 Jul 2018) Time to ditch relative risk in media reports (23 Jan 2020)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011) Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015) Publishing replication failures (11 Jul 2015) Psychology research: hopeless case or pioneering field? (28 Aug 2015) Desperate marketing from J. Neuroscience (18 Feb 2016) Editorial integrity: publishers on the front line (11 Jun 2016) When scientific communication is a one-way street (13 Dec 2016) Breaking the ice with buxom grapefruits: Pratiques de publication and predatory publishing (25 Jul 2017) Should editors edit reviewers? (26 Aug 2018) Corrigendum: a word you may hope never to encounter (3 Aug 2019) Time to ditch relative risk in media reports (23 Jan 2020) Percent by most prolific author score and editorial bias (12 Jul 2020) PEPIOPs – prolific editors who publish in their own publications (16 Aug 2020) Faux peer-reviewed journals: a threat to research integrity (6 Dec 2020) Time for publishers to consider the rights of readers as well as authors (13 Mar 2021) Universities vs Elsevier: who has the upper hand? (14 Nov 2021) Book Review. Fiona Fox: Beyond the Hype (12 Apr 2022) We need to talk about editors (6 Sep 2022) So do we need editors? (11 Sep 2022) Reviewer-finding algorithms: the dangers for peer review (30 Sep 2022) A desire for clickbait can hinder an academic journal's reputation (4 Oct 2022) What is going on in Hindawi special issues? (12 Oct 2022) New Year's Eve Quiz: Dodgy journals special (31 Dec 2022) A suggestion for e-Life (20 Mar 2023) Papers affected by misconduct: Erratum, correction or retraction? (11 Apr 2023) Is Hindawi “well-positioned for revitalization?” (23 Jul 2023) The discussion section: Kill it or reform it? (14 Aug 2023) Spitting out the AI Gobbledegook sandwich: a suggestion for publishers (2 Oct 2023)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation (27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014) Email overload (12 Apr 2016) How to survive on Twitter - a simple rule to reduce stress (13 May 2018)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) The University as big business: the case of King's College London (18 Jun 2014) Should vice-chancellors earn more than the prime minister? (12 Jul 2014) Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014) Shaky foundations of the TEF (7 Dec 2015) A lamentable performance by Jo Johnson (12 Dec 2015) More misrepresentation in the Green Paper (17 Dec 2015) The Green Paper’s level playing field risks becoming a morass (24 Dec 2015) NSS and teaching excellence: wrong measure, wrongly analysed (4 Jan 2016) Lack of clarity of purpose in REF and TEF (2 Mar 2016) Who wants the TEF? (24 May 2016) Cost benefit analysis of the TEF (17 Jul 2016) Alternative providers and alternative medicine (6 Aug 2016) We know what's best for you: politicians vs. experts (17 Feb 2017) Advice for early career researchers re job applications: Work 'in preparation' (5 Mar 2017) Should research funding be allocated at random? (7 Apr 2018) Power, responsibility and role models in academia (3 May 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) More haste less speed in calls for grant proposals (11 Aug 2018) Has the Society for Neuroscience lost its way? (24 Oct 2018) The Paper-in-a-Day Approach (9 Feb 2019) Benchmarking in the TEF: Something doesn't add up (3 Mar 2019) The Do It Yourself conference (26 May 2019) A call for funders to ban institutions that use grant capture targets (20 Jul 2019) Research funders need to embrace slow science (1 Jan 2020) Should I stay or should I go: When debate with opponents should be avoided (12 Jan 2020) Stemming the flood of illegal external examiners (9 Feb 2020) What can scientists do in an emergency shutdown? (11 Mar 2020) Stepping back a level: Stress management for academics in the pandemic (2 May 2020) TEF in the time of pandemic (27 Jul 2020) University staff cuts under the cover of a pandemic: the cases of Liverpool and Leicester (3 Mar 2021) Some quick thoughts on academic boycotts of Russia (6 Mar 2022) When there are no consequences for misconduct (16 Dec 2022) Open letter to CNRS (30 Mar 2023)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011) The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sep 2014) NeuroPointDX's blood test for Autism Spectrum Disorder (12 Jan 2019)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqa ban: what's a liberal response? (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 Jun 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014) We know what's best for you: politicians vs. experts (17 Feb 2017) The alt-right guide to fielding conference questions (18 Feb 2017) Barely a good word for Donald Trump in Houses of Parliament (23 Feb 2017) Do you really want another referendum? Be careful what you wish for (12 Jan 2018) My response to the EPA's 'Strengthening Transparency in Regulatory Science' (9 May 2018) What is driving Theresa May? (27 Mar 2019) A day out at 10 Downing St (10 Aug 2019) Voting in the EU referendum: Ignorance, deceit and folly (8 Sep 2019) Harry Potter and the Beast of Brexit (20 Oct 2019) Attempting to communicate with the BBC (8 May 2020) Boris bingo: strategies for (not) answering questions (29 May 2020) Linking responsibility for climate refugees to emissions (23 Nov 2021) Response to Philip Ball's critique of scientific advisors (16 Jan 2022) Boris Johnson leads the world ....in the number of false facts he can squeeze into a session of PMQs (20 Jan 2022) Some quick thoughts on academic boycotts of Russia (6 Mar 2022) Contagion of the political system (3 Apr 2022) When there are no consequences for misconduct (16 Dec 2022)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 Jul 2014) The rationalist spa (11 Sep 2015) Talking about tax: weasel words (19 Apr 2016) Controversial statues: remove or revise? (22 Dec 2016) My most popular posts of 2016 (2 Jan 2017) The alt-right guide to fielding conference questions (18 Feb 2017) An index of neighbourhood advantage from English postcode data (15 Sep 2018) Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018) New Year's Eve Quiz: Dodgy journals special (31 Dec 2022)