Wednesday, 27 August 2025

Gold standard science isn't gold standard if it's applied selectively. Part 2: Research on causes of autism

In my previous blogpost, I discussed why many scientists can't take seriously the lofty ideals expressed in Trump's plans for Gold Standard Science, even though the basic principles seem excellent. Given the current Republican administration's catastrophic track record of undermining US science, their demands for high standards ring hollow. It appears that demands for Gold Standard Science will be weaponised against types of science they don't like. 

Today I provide evidence from the opposite side of the fence. If the Trump administration was really serious about Gold Standard Science, then they should lead by example and showcase projects that are impeccably rigorous, open and transparent. 

And what better place to start than with research on autism, which has been an obsession of Robert F Kennedy Jr (RFK), the 26th United States Secretary of Health and Human Services? The first inkling of what was to come was in April this year, when the media reported that he planned to find the cause of autism by September. Then, on May 27th, NIH announced a new Autism Data Science Initiative, which appears to relate to RFK's plans. 

I watched with interest a video that was released on 10th June explaining the background of the project. The NIH staff presenting the video gave an evidence-based and balanced account of what is known about the etiology of autism, which was described as complex and multifactorial, involving different types of genetic causes that may interact with environmental factors. In considering the increase in autism diagnoses over time, it was noted that changes in diagnostic criteria, and the need for a diagnosis to access services, were implicated. They implicitly accepted, however, the notion that these factors were insufficient to explain the increase, and so research on other factors was needed. 

They then explained how to apply for a share of the $50 million allocated to the initiative, which seemed designed to encourage machine learning approaches to data mining of autism-relevant datasets "to explore the contribution of genetic and non-genetic factors to the causes of autism and/or to identify patterns associated with intervention outcomes and the use of services for autism." 

A startling feature was that submissions had to be in by 27th June, one month after the scheme was announced, and 17 days after the instructional video was posted. The earliest start date was 1st September, perhaps prompted by RFK's idea of having autism etiology done and dusted by then. 

You might wonder how this works with NIH's review process: the answer is that it doesn't. The funding mechanism is an Other Transaction: "an assistance mechanism that is not a grant, contract, or cooperative agreement" and the proposal does not undergo traditional NIH review, but instead is subject to "an objective scientific review. This ensures the assessment of scientific or technical merit of applications by individuals with knowledge and expertise equivalent to that of the individuals whose applications for support they are reviewing". So far, though, it seems that the top US autism researchers have not been consulted on this scheme. Furthermore, the phrase "Autism Data Science Initiative" isn't mentioned in the massive report from the Committee on Appropriations that was published on July 31st. So this has a decidedly ad hoc feel to it. 

This hurried process does not seem an optimal way to foster Gold Standard Science, which requires thought and care to go into research plans. There seem to be two possible explanations for this rushed approach. Either those who devised the scheme are so ignorant that they don't understand how long it takes to develop a strong research proposal, or they really don't want anyone to apply to the scheme other than specific cronies who will do their bidding. 

I had thought we might have to wait until the end of September, when the successful grants are announced, to see who the lucky recipients would be. From this video clip of a recent Cabinet meeting, however, it seems that the research has already been done: the results are in and will be announced next month! 

Donald Trump and Jay Bhattacharya can talk about Gold Standard Science as much as they like: scientists can see for themselves this travesty of research process, which indicates that, when it comes to their own studies, those in power will know the answer before the data is in. No transparency, no pre-registration, no open data and code, no communication of error and uncertainty, no skepticism of findings or attempt to falsify hypotheses, and no impartial peer review. Truly this is Tinsel Standard Science.

Tuesday, 26 August 2025

Gold standard science isn't gold standard if it's applied selectively. Part 1: Firearms injuries

In May, the White House produced a report called "Restoring Gold Standard Science", which was followed by an NIH plan to implement this policy. The initial report pointed to the well-attested problems with reproducibility in science, and to high-profile cases of research fraud, and recommended nine requirements for Gold Standard Science. It must be: 
(i) reproducible;
(ii) transparent; 
(iii) communicative of error and uncertainty; 
(iv) collaborative and interdisciplinary; 
(v) skeptical of its findings and assumptions; 
(vi) structured for falsifiability of hypotheses; 
(vii) subject to unbiased peer review; 
(viii) accepting of negative results as positive outcomes; and 
(ix) without conflicts of interest. 
This is a clever move by the Trump administration because it's hard to make a coherent case against such a policy without appearing to question whether science should be done to high standards. Nevertheless, many scientists are concerned. The main issue is the mismatch between what the government is doing, in terms of defunding science, stopping grants, firing competent people and appointing incompetent cronies in their place, and the lofty ambitions stated in the plan (see, e.g., this blogpost). This leads to suspicion of the motives of those behind Gold Standard Science, which is seen as an attempt to weaponise science policy in order to attack science that it doesn't like. 
This is credible given recent history. Take the requirement for transparency. Back in 2016, Stephan Lewandowsky and I wrote an opinion piece for Nature entitled Don't Let Transparency Damage Science, noting that politicians who didn't like climate science or tobacco research were tying researchers up in red tape with spurious demands for data. In 2018, I blogged about a proposal for Strengthening Transparency in Regulatory Science by the US Environmental Protection Agency (EPA) that stated that policy should only be based on research with openly available public data. This would allow the government to dismiss regulations concerning substances such as asbestos or pesticides, where data was gathered long before open data was a thing. Similarly, if we were to argue that a result must be shown to be reproducible before it can influence policy, then politicians can justify ignoring inconvenient findings from studies that are not easy to reproduce, such as those involving long time-scales or complex methods. 
Doing science well is much harder than doing it badly; it takes time and expertise to pre-register a study, to work out the best protocol, to design an analysis to reduce bias, and to make data and scripts open and useable. I'm strongly in favour of all of those things, but, like many others, I am suspicious that demands for adherence to the highest standards may be used selectively to impede or even terminate research that the administration doesn't like. 
I was accordingly interested to see how Gold Standard Science was referenced in plans for government-funded research in this mammoth report from the US Senate Committee on Appropriations, which was posted on July 31st 2025, a couple of months after the Gold Standard Science document was written. Pages 105-172 cover the National Institutes of Health, and discuss funding for numerous health conditions. I could find just one paragraph where the importance of open data or pre-registration was mentioned, and it was this one: 
Firearm Injury and Mortality Prevention.—The Committee provides $12,500,000 to conduct research on firearm injury and mortality prevention. Given violence and suicide have a number of causes, the Committee recommends NIH take a comprehensive approach to studying these underlying causes and evidence-based methods of prevention of injury, including crime prevention. All grantees under this section will be required to fulfill requirements around open data, open code, pre-registration of research projects, and open access to research articles consistent with the National Science Foundation’s open science principles. The Director is to report to the Committees within 30 days of enactment of this act on implementation schedules and procedures for grant awards, which strive to ensure that such awards support ideologically and politically unbiased research projects. 
I found this odd. Surely, if the Bhattacharya plan is to be believed, the statements about compliance with open science principles should apply to all the research done by NIH? Yet a search of the document finds that only this paragraph (repeated in two sections) makes any mention of such practice, and this happens to concern a topic, firearms injuries, that is a contentious political issue. 
So I'm all in favour of Gold Standard Science as described in the Bhattacharya plan, but let's see these principles applied even-handedly, and not just to research that might give uncomfortable results.