Despite the massive rise in preprints over the past few years, peer review at journals is still widely regarded as the gold standard for assessing the validity of results in neuroscience.
Even if we acknowledge that mistakes occasionally fall through the cracks, scientists generally assume that peer review quickly self-corrects the literature; the idea is that over the course of multiple papers, enough people vet the assumptions, approaches and ways of thinking to ensure that they are valid.
But what happens if multiple scientists share the same misconception? It’s clear that individual wrong ideas, such as the claim that vaccines can cause autism, can spread in society through failures in journalism or social media, even after science has corrected the record. But are there mechanisms by which mistakes can similarly propagate through peer-reviewed literature before failed replications or retractions catch up with them?
To get an intuition for this problem, let’s examine how many people need to make the same mistake—ignoring an alternative interpretation of a result or using an invalid experimental approach, for example—for that view to be reflected in peer-reviewed papers. Similarly, how large a group holding a strong opinion does it take to keep conflicting points of view out of the literature? A simple back-of-the-envelope calculation, which I detail below, shows that this number is surprisingly small: Because of the low rates at which neuroscientists agree to review manuscripts, small groups of labs—as few as 15 to 20 scientists—can end up in review “filter bubbles” that fail to properly vet results.
For a paper to be peer reviewed, an editor must first decide that it is worthy of their journal and that they can find reviewers for it. If candidates decline, the editor keeps reaching out until they find three (or, in some cases, two or four) reviewers. In neuroscience, reviewer pools can be small: Subfield conferences often fit the majority of a field into a small lecture hall, and in many cases it seems realistic to put the number of potential reviewers for a paper at well below 50.
The problem with current peer review, however, is not that fields are small but that the pool of scientists who actively review is smaller still: In a survey of editors at major neuroscience journals that we conducted in 2024 and 2025, we found that, depending on the subfield, journal and editor, only 20 to 50 percent of review requests are accepted.
This low acceptance rate gives the scientists who do agree to review specific papers an outsized influence. This self-selected pool could include reviewers with particularly strong opinions on a topic, or those in active disagreement with the authors. Papers bounce around the reviewer pool until they hit one of these few always-reviewing scientists, giving that reviewer significantly more influence than the average, rarely reviewing scientist. This dynamic risks funneling reviews into a narrow category of reviewer.
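To make that dynamic concrete, here is a minimal Monte Carlo sketch. Every number in it is an illustrative assumption rather than a survey result: a pool of 50 plausible reviewers, 15 of whom always accept requests while the rest accept 30 percent of the time (within the 20 to 50 percent range reported above), and an editor who invites reviewers one at a time until three agree.

```python
import random

# Illustrative assumptions, not survey data:
POOL = 50          # plausible reviewers for a given paper
ALWAYS = 15        # reviewers who never decline a request
ACCEPT_RATE = 0.3  # acceptance rate for everyone else
NEEDED = 3         # reviewers the editor must secure
TRIALS = 100_000

random.seed(0)
papers_with_always_reviewer = 0
for _ in range(TRIALS):
    order = list(range(POOL))
    random.shuffle(order)  # editor invites reviewers in random order
    accepted = []
    for reviewer in order:
        # Reviewers 0..ALWAYS-1 always say yes; the rest accept at ACCEPT_RATE.
        if reviewer < ALWAYS or random.random() < ACCEPT_RATE:
            accepted.append(reviewer)
        if len(accepted) == NEEDED:
            break
    if any(r < ALWAYS for r in accepted):
        papers_with_always_reviewer += 1

print(f"Share of papers with at least one always-reviewer: "
      f"{papers_with_always_reviewer / TRIALS:.2f}")
```

With these assumptions, roughly nine out of ten papers end up with at least one reviewer from the always-accepting group, even though that group makes up less than a third of the pool.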
To estimate the outcome of such an imbalance, we can calculate the probability that any accepted review request comes from a biased group that always agrees to review. (This is just an application of Bayes’ theorem; see the appendix for details.)
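The snippet below sketches that calculation under the same illustrative assumptions as above; the pool size, biased-group size and baseline acceptance rate are placeholders, not the survey’s figures.

```python
# P(biased | accepted) via Bayes' theorem, with illustrative numbers:
pool = 50            # assumed plausible reviewers for a paper
biased = 15          # assumed size of the always-reviewing group
p_biased = biased / pool
accept_biased = 1.0  # by assumption, this group never declines
accept_other = 0.3   # within the 20-50 percent range reported above

# P(accepted) = P(accepted | biased) P(biased) + P(accepted | other) P(other)
p_accepted = accept_biased * p_biased + accept_other * (1 - p_biased)
posterior = accept_biased * p_biased / p_accepted
print(f"P(biased | accepted) = {posterior:.2f}")  # ~0.59
# With three reviewers per paper, the chance that at least one comes from
# the biased group is roughly 1 - (1 - 0.59)**3, or about 0.93.
```

In other words, under these assumptions a group that makes up only 30 percent of the pool accounts for the majority of accepted reviews, simply by always saying yes.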

