A neurochemistry journal has retracted a paper that claimed prenatal vaccine exposure caused autism-like behaviors in rats. The retraction comes more than a year after the paper’s publication and amid criticism of its methodology.
The paper, originally published 10 January 2024 in Neurochemical Research, purported to show reduced sociability in rats born to mothers that received a human-sized dose of a COVID-19 mRNA vaccine while pregnant. Anti-vaccine advocates touted the result online, and the work has been cited four times, according to Clarivate’s Web of Science.
But comments posted on social media and PubPeer since the paper’s publication have raised several questions about the work, including the dose of vaccine given to the rats, the proprietary software used for the analysis and the strikingly similar data shown for different experimental conditions.
The dose is “so large, it seems like it’s maybe not done from the perspective of trying to do unbiased science,” Brian Lee, professor of epidemiology and biostatistics at Drexel University, told The Transmitter.
According to the 19 July 2025 retraction notice, a “post-publication review found inconsistencies in the number of subjects reported in the Methods and raw data. The Editor-in-Chief therefore no longer has confidence in the presented data.”
This is the second retraction for Mümin Alper Erdoğan, associate professor in the department of physiology at İzmir Katip Çelebi University. It is the first retraction for each of the other three authors, according to the Retraction Watch database. Only Erdoğan replied to The Transmitter’s emailed requests for comment. “I believe this decision was unjust and warrants open discussion,” he wrote, referring to the vaccine study retraction. Erdoğan did not respond to a follow-up request for an interview.
Concerns about the study appeared on PubPeer in January 2024, when sleuth Kevin Patrick, who posts under the pseudonym “Actinopolyspora biskrensis,” noted a variety of inconsistencies. Patrick has previously flagged issues with other papers by the same authors.
Patrick first pointed out that some of the error bars in Figure 2 were uneven, which wouldn’t happen if they had been generated with the software the authors claimed to have used, he says. He also raised concerns about Scove Systems, the AI-based behavioral analysis system the authors used in the study. When the article was first published, the website for the software would not open, and very little information about it was publicly available.
Patrick says he wanted to see some evidence that the software works. Otherwise, “it’s a black box… we don’t know if any of the behavioral measurements that they use in that device are valid at all,” he says.
Patrick says he shared his concerns with the journal in January 2024.
At that time, the journal began an investigation and determined that no further action was needed, according to an email to The Transmitter from Tim Kersjes, head of research integrity and resolutions for Springer Nature.
In February 2025, sleuth and Columbia University mouse behavioralist Mu Yang posted on PubPeer under the pseudonym “Dysdera arabisenen” about two bar graphs, also in Figure 2, that were seemingly identical, even though they purportedly represented different data.
The data in both graphs derive from a three-chamber sociability test, in which the rat being tested is put in the middle of three connected chambers. The test is designed to measure different facets of sociability, Yang says.
One of the graphs in question claimed to provide evidence that rats preferred a chamber with another rat over an empty chamber, and the other that they preferred an unfamiliar rat to a familiar one.
“The two figures almost look identical…[but] these are very different scenarios,” Yang told The Transmitter.
Yang says the chambers were also surprisingly small—not much larger than chambers she and others routinely use in mouse experiments, despite rats being much larger animals.
“I think the peer review process failed awfully,” Patrick says. “Whoever read this paper, editors and the peer reviewers, obviously made no attempt to understand what was going on.”
The journal became aware of these concerns in February and opened a new investigation, “including conducting a post-publication review of the raw data provided by the authors at that time,” Kersjes said. “After carefully considering the facts of this latest investigation, we concluded that retracting the paper was the correct action to take to maintain the validity of the scientific record.”