Reporting bias widespread in early-childhood autism intervention trials

Only 7 percent of completed registered trials were later updated with results, one of several failings identified in a new analysis.

Selective reporting: Autism intervention trials rarely comply with open-science practices, new study suggests.

Practices that support collaboration and transparency in early-childhood autism intervention trials are inconsistent at best, a new study in Autism shows.

The open-science movement encourages investigators to register trials in advance and specify methods and outcomes to ensure scientists do not cherry-pick data or inflate the statistical significance of their work. Since 2017, for example, the U.S. National Institutes of Health has required that scientists who receive NIH funding register their trials on clinicaltrials.gov. Yet researchers often register their clinical trials after they have already started, fail to report the results and publish incomplete data or outcomes that differ from those specified at registration, the study finds.

These failings ultimately make it harder for clinicians to identify effective therapies, says study investigator Micheal Sandbank, professor of occupational science and occupational therapy at the University of North Carolina at Chapel Hill. A quarter of the available research reports lacked sufficient detail for inclusion in a 2023 meta-analysis she and her colleagues conducted. “That is potentially enough information to change what our estimates [of effectiveness] are,” she says.

The new work included a review of 252 reports of early-childhood intervention trials, 56 of which had corresponding registration information that the investigators could locate on clinicaltrials.gov or another platform. Sandbank and her colleagues examined these records for multiple factors, including how often researchers provided incomplete data.

The majority of the 56 trials, 71 percent, were registered late. Only 5 of the 56 provided complete registration information, meaning that they specified their intervention and comparison groups, study outcomes, assessment time points and analytic approach.

The researchers also found evidence of selective reporting even among the trials that provided complete registration information: the published results did not report the outcomes or analyses described in the earlier registration. The sole exception was an unpublished dissertation.

In a separate analysis, Sandbank and her colleagues considered 84 trials that were registered on clinicaltrials.gov and completed at least a year earlier. Only 7 percent (6 trials) reported their results on that same platform, and just 64 percent (53 trials) had associated published papers. Sandbank and her colleagues de-identified the dataset of studies used in the paper.

“It’s a really important contribution to look at the literature critically and identify where there are gaps and weaknesses,” says Isabel Smith, a psychologist and professor of developmental pediatrics at Dalhousie University, who has contributed to early-intervention autism trials but wasn’t involved in the study.

Although the study highlights the importance of reporting practices, habits that promote open science are relatively new in this community, something the new study does not acknowledge, says Samuel Odom, senior research scientist at the Frank Porter Graham Child Development Institute at the University of North Carolina at Chapel Hill. Odom has also contributed to early-intervention autism trials but was not involved in this study.

For example, although nearly 70 percent of the registered studies were published after the NIH began requiring the registration of clinical trials, some of those studies could have begun before the requirement went into effect, Odom points out. And not all the trials evaluated in the meta-analysis were NIH-funded, according to Sandbank.

Although trial registration became official NIH policy in 2017, the open-science movement began well before that, Sandbank says. “It’s important for us to not act like it’s brand-new, like we were suddenly bamboozled,” she says. “We can acknowledge that this is hard and we are still learning about it, but also that it’s important and we have to get better at it.”

Randomized controlled trials are relatively new for the community, and researchers need to learn how to run them, Smith says. “I think it’s not enough to say people need to report better; they need to actually learn to do this better.”

This problem is not exclusive to early-autism intervention research, Smith adds. For instance, only 41 percent of the biomedical trials examined in a 2020 study had reported their results on clinicaltrials.gov, as required under U.S. federal law. And the lapses often reflect not ill intent on the researchers’ part but a lack of time to adequately report results, Smith says.

Sandbank agrees. In a world in which academia prioritizes getting and writing grants, “there’s not a lot of motivation to go back and write up the study where you didn’t see something happen,” she says.

In their study, she and her colleagues describe several steps to address such structural issues. Their recommendations include encouraging federal funders to enforce the mandates for reporting results on clinicaltrials.gov and asking registration platforms to prompt investigators for all essential trial details. Sandbank and her colleagues also call on autism-specific journals to do more, including making pre-registration a prerequisite for publication.