Team effort: The distributed community has helped collectively analyze pilot data collected at the Allen Institute.
Illustration by Yihui Chang

A community-designed experiment tests open questions in predictive processing

More than 50 scientists came together to identify the key missing data needed to rigorously test theoretical models.

By Jérôme Lecoq
12 November 2025 | 6 min read

Neuroscientists study brain computation using the traditional academic model—individual laboratories leading independent research programs—that has worked so well in other fields. This system has delivered incredible insights: breakthroughs in synaptic learning, revelations about how the cortex encodes visual scenes, and more. It has been broad, detailed in places, and remarkably productive.

Despite these successes, the task ahead remains daunting. To understand how the brain functions as a whole, we must assemble these isolated pieces of knowledge, piece by piece. And as a field, we struggle to integrate knowledge, facilitate data sharing and validate insights across laboratories. These challenges seem almost inherent to the system, rooted in our recruitment practices, individual incentive structures and even our funding mechanisms. The brain demands integration, yet our system incentivizes fragmentation.

Recognizing these challenges, Christof Koch, many colleagues at the Allen Institute and I created OpenScope, a platform for shared, high-throughput neurophysiology experimentation. Modeled after large-scale observatories in fields such as astronomy and physics, OpenScope conducts new experimental projects proposed by scientists around the world. In these projects, we record from thousands of neurons across the mouse brain, using either multi-probe Neuropixels recordings or multi-area two-photon calcium imaging. The resulting datasets are first shared with the selected research teams, who carry out their proposed analyses, and then made available to the broader community.

Initially, OpenScope focused on executing independent projects selected through a double-blind review process to ensure neutrality and alignment with the field’s evolving interests. After several cycles, however, a clear opportunity emerged: Many proposals converged on questions related to predictive processing. With guidance from our steering committee—Adrienne Fairhall, Satrajit Ghosh, Mackenzie Mathis, Konrad Körding, Joel Zylberberg and Nick Steinmetz—and support from the U.S. National Institutes of Health, we decided to unite multiple laboratories around a single, community-defined experiment. The idea was to commit OpenScope’s resources to a collaboratively designed project to help unify the disparate theories and data that have emerged from experiments on predictive processing.

Our first step was to submit a proposal for a workshop on predictive processing at the 2024 Cognitive Computational Neuroscience conference in Boston, together with Körding, Colleen Gillon and Michael Berry. In preparing for the event, we realized that organizing a discussion during a workshop would not be enough to design an actual experiment. So, months before the conference, we created a shared Google Doc to begin shaping the conversation, conducting a deep review of the predictive processing literature and dedicating hours each day to reading, summarizing and synthesizing.

Community engagement: Neuroscientist Colleen Gillon co-organized a workshop at the 2024 Cognitive Computational Neuroscience conference to discuss the project.
Courtesy of Jérôme Lecoq

From the start, the review was structured to contrast existing experimental work with theoretical models of predictive processing, with the goal of identifying the key missing data needed to rigorously test those theories. We shared this living document on social media, inviting anyone to weigh in, suggest papers or contribute text. Over the course of one year, more than 50 scientists joined the effort. A critical decision, inspired by Ray Dalio’s principles, was to adopt a radically open approach: We granted everyone full editing access. I believed that this openness lowered barriers to participation and encouraged trust and respectful debate. As each person added a new piece—another paper, another insight—the review grew richer and more interconnected.

By the time of the workshop, a global community had already joined the effort. The event amplified engagement, but the real discussion unfolded in Google Docs comment threads. It was beautiful to watch: One newcomer would disagree with a piece of text, citing sources; another researcher, across the world, would respond, and long, thoughtful comment chains would follow, often greatly exceeding the original content. We were learning from one another. Some participants told me they were following the comments just for the sheer value of witnessing active, respectful disagreement on key research topics. In some cases, when we were unsure about a particular paper, we would tag the authors directly in the comments—and often, the lead authors themselves would weigh in. It felt as if the field wanted to collaborate; it just needed a small push in the right direction. We had stumbled on the right social ingredients to make it happen.

What began as a tentative outline evolved into a full-blown review article, grounded in the field’s literature and shaped collectively by a growing community. By October 2024, we had exceeded Google Docs’ comment limits and had to duplicate the document just to keep going. By the end, the community had contributed more than 1,900 comments. Each comment thread was independently resolved through a transparent consensus process. It felt like a unique experience for many of us. Through this shared document, along with weekly virtual meetings and face-to-face conversations, we proved that distributed teams—with the right tools and a spirit of openness—can integrate research programs in a way that feels organic and powerful.

International collaboration: A distributed network of scientists has contributed to the effort.

The review, now on arXiv, revealed important divergences across both experiments and theories. Predicting the next incoming stimulus could be one of the cortex’s fundamental capabilities, yet, as is often the case in biology, this function likely arises from a collection of interacting mechanisms rather than from a single unified process. Many prior studies had assumed a shared mechanism for prediction, but the review emphasized that the brain may instead employ a “bag of models” approach, dynamically engaging different strategies depending on context. To investigate these trade-offs, we designed an experiment to test how predictive mechanisms shift as the context of prediction changes.

We started conducting these experiments at the Allen Institute in April 2025 and have so far recorded pilot datasets; the distributed community has helped collectively analyze the results, enabling us to iterate on our design. We are also improving our use of social media: Our weekly meetings are now recorded and shared on YouTube so that anyone can follow along from any time zone. We have migrated from Google Docs to GitHub and continue to interact in writing through a public GitHub discussion forum. This practice keeps us close to the code that generates our stimuli and promotes code exchange. We have posted datasets from some of these experiments to the DANDI archive just weeks after data collection. We continue to welcome new participants at any stage. Now that we are generating new experimental datasets, the opportunities for researchers at any level to contribute are greater than ever.
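For readers who want to work with the data themselves, the DANDI releases follow the Neurodata Without Borders (NWB) standard, which means the files can be streamed with standard open-source Python tools rather than downloaded in full. Below is a minimal sketch using the public dandi, remfile, h5py and pynwb packages; the dandiset ID is a placeholder invented for illustration, so substitute the actual OpenScope identifier listed on the archive.

# A minimal sketch of streaming an OpenScope NWB file from the DANDI archive.
# Requires: pip install dandi pynwb remfile h5py
# The dandiset ID below is a placeholder; look up the actual OpenScope
# identifier on dandiarchive.org.
from dandi.dandiapi import DandiAPIClient
import h5py
import remfile
from pynwb import NWBHDF5IO

DANDISET_ID = "000000"  # placeholder, not a real OpenScope dandiset ID

with DandiAPIClient() as client:
    dandiset = client.get_dandiset(DANDISET_ID, "draft")
    # Take the first NWB asset in the dandiset and resolve a direct URL.
    asset = next(a for a in dandiset.get_assets() if a.path.endswith(".nwb"))
    url = asset.get_content_url(follow_redirects=1, strip_query=True)

# Stream the remote HDF5 file over HTTP rather than downloading it whole.
io = NWBHDF5IO(file=h5py.File(remfile.File(url), "r"), load_namespaces=True)
nwbfile = io.read()
print(nwbfile.session_description)

The same streaming pattern works for any NWB dataset hosted on DANDI, so it offers newcomers a low-cost way to explore the pilot recordings before joining the discussion on GitHub.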

Our “science in the open” approach could readily be applied to many open questions in systems neuroscience. In fact, several participants in our community project have expressed interest in pursuing similar efforts on other topics. The essential ingredient is simply a core group of passionate scientists to start open, respectful and transparent discussions.

AI disclosure

The author used artificial intelligence during the writing process to assess grammar, comment on English language usage and enhance the readability of the text.

