Population properties: The shape and structure of neuronal data connect brain activity to behavior.
Illustration by Kouzou Sakai / Courtesy of SueYeon Chung

Error equation predicts brain’s ability to generalize

Four statistical measurements of neural network geometry capture how well brains and artificial networks use what they already know to solve new problems, a study suggests.

By Natalia Mesa
10 April 2026 | 5 min read

The brain effortlessly tackles countless variations of the same task. A person walking around a city, for example, can avoid a crowded sidewalk or stop at a traffic light, even as the surrounding scene changes from block to block. 

This ability may come down to the geometry of the activity of neural populations, according to a new study. A single equation representing that geometry predicts how well neural networks, both biological and artificial, adapt to new but related tasks, the study found. 

When researchers analyze the collective activity of a large population of neurons, that activity takes on a certain geometric structure, such as a donut-shaped torus. This neural geometry can link population activity to behavior, but most studies use it to simply describe neuronal activity, says SueYeon Chung, assistant professor of physics and applied mathematics at Harvard University and an investigator on the new paper.

Instead, “we wanted to come up with a unifying theory between the geometry of representation and our ability to generalize across tasks,” Chung says. “There are certain shapes and structures of those geometric activities that we can look for in real data, and those shapes will precisely predict the system’s ability to generalize.” 

Four terms in the equation Chung and her colleagues derived represent distinct features of neural geometry and predict how well neural populations handle new tasks that draw on prior information, the study showed. Chung and her team applied this theory to neuronal data from rats, monkeys and artificial networks, showing that the four variables accurately predicted future behavior. 

“A lot of people say that it’s very hard to put equations on neural activity, or that AI is a black box,” says Nina Miolane, assistant professor of electrical and computer engineering at the University of California, Santa Barbara, who was not involved in the work. But this study is endeavoring to “define the mathematical structure of intelligence, not focusing only on behavior, or the final output, but opening up the black box and having the courage to say, ‘We can put this into an equation.’”

Using a set of images that vary along simple dimensions, such as shape, size and orientation, the researchers built a theoretical model that classifies stimuli, such as big shapes or little shapes, by computing the difference in average neural responses to each category. From there, the team derived a mathematical formula to predict the error rate when the model encountered new tasks—for example, differentiating between two shapes based on their orientation instead of size.
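The classifier the article describes—one whose decision axis is the difference between the average responses to each category—is a prototype (centroid) classifier. The sketch below is a minimal illustration of that idea on simulated data; the response statistics (200 neurons, Gaussian responses with shifted means) are assumptions for the example, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neural responses: 200 neurons, 50 trials per category.
# The two categories ("big" vs. "small" shapes) differ only in mean response.
n_neurons, n_trials = 200, 50
big = rng.normal(loc=0.5, scale=1.0, size=(n_trials, n_neurons))
small = rng.normal(loc=-0.5, scale=1.0, size=(n_trials, n_neurons))

# Prototype classifier: the decision axis w is the difference between
# the average responses to each category, with the boundary at the midpoint.
mu_big, mu_small = big.mean(axis=0), small.mean(axis=0)
w = mu_big - mu_small
b = -0.5 * w @ (mu_big + mu_small)

def classify(x):
    """Return +1 for the 'big' category, -1 for 'small'."""
    return np.sign(x @ w + b)

# Error rate on held-out trials drawn from the same distributions.
test_big = rng.normal(0.5, 1.0, size=(n_trials, n_neurons))
test_small = rng.normal(-0.5, 1.0, size=(n_trials, n_neurons))
acc = (np.mean(classify(test_big) == 1)
       + np.mean(classify(test_small) == -1)) / 2
print(f"held-out accuracy: {acc:.2f}")
```

The study's contribution is a closed-form prediction of this kind of error rate from the geometry of the responses, rather than from empirical testing as done here.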

Four terms in that error formula predicted the neural population’s ability to generalize: a higher correlation between neural activity and the task-related variables; a higher dimensionality of the neural responses; increased signal-to-noise factorization (meaning task-unrelated noise is kept separate from the information the neural activity is tracking); and increased signal-signal factorization (meaning the population encodes different task-related variables separately from one another). 
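One of the four terms, the dimensionality of the neural responses, is commonly estimated with the participation ratio of the response covariance's eigenvalues. The sketch below shows that measure on simulated populations; the latent-variable construction and the noise level are assumptions made for the example, and this is only one standard way to quantify dimensionality, not necessarily the exact estimator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(responses):
    """Effective dimensionality of a population:
    PR = (sum of eigenvalues)^2 / sum of squared eigenvalues
    of the trial-by-neuron response covariance."""
    cov = np.cov(responses, rowvar=False)
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

# Low-dimensional population: activity of 100 neurons driven by
# only 2 latent variables, plus a little independent noise.
latents = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 100))
low_d = latents @ mixing + 0.05 * rng.normal(size=(500, 100))

# High-dimensional population: each neuron varies independently.
high_d = rng.normal(size=(500, 100))

pr_low = participation_ratio(low_d)    # close to 2
pr_high = participation_ratio(high_d)  # close to 100
print(f"low-D PR: {pr_low:.1f}, high-D PR: {pr_high:.1f}")
```

The factorization terms are computed analogously from the geometry of the covariances: signal-noise factorization asks whether the noise directions overlap with the decision axes, and signal-signal factorization asks whether the axes encoding different task variables overlap with each other.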

Geometric abstraction: Four statistical properties determine how well neural networks, both biological and artificial, can generalize across similar tasks using prior information.
Graphic by Natalia Mesa

In both artificial and biological neural networks, dimensionality of neural representations was low at the beginning of a task and increased as task performance improved. This counterintuitive result suggests that the brain focuses on the most important variables early in learning, and these representations incorporate less-relevant information as the brain sees more examples, which improves performance when the brain needs to switch tasks, Chung says. 

The four terms predicted how well a stimulus could be decoded from neuronal activity, including in multi-unit recordings from macaque V4 and inferotemporal cortex made while the monkeys viewed images of objects, and in recordings from the rat prefrontal cortex and hippocampus made while the rats learned a spatial navigation task. The terms also accurately predicted how well a more complex artificial system could generalize. The work was published in February in Nature Neuroscience.

Previous studies have focused primarily on individual tasks, not generalizability for many tasks, says Xuexin Wei, assistant professor of neuroscience at the University of Texas at Austin, who was not involved in the work. This “provides a recipe for future investigation to tell you, ‘This is what you should be looking for in your data, and this is what potentially could account for the type of behavior,’” he adds. 

The findings add to literature showing that neural manifold geometry shapes flexible behavior. Functional separation of neural processes into independent dimensions along the manifold allows the same neurons to keep track of, and differentiate among, task variables, studies show.

In vision, for example, the brain must often distinguish between similar-looking objects, such as cats and dogs. At the level of the retina, responses to similar images are tangled together and hard to separate. The brain's task is to pull the two apart, which it does by reshaping the geometry of how the stimuli are represented. The strategy the researchers used in this study could also help predict how well the brain is able to categorize images, Wei says.

But whether the same equation could apply to other, more complex tasks is unclear, says Valeria Fascianelli, associate research scientist in computational neuroscience at Columbia University’s Zuckerman Institute, who was not involved in the study. In tasks that involve finding hidden rules when there are a lot of visual features, for example, the dimensionality of the neural representations may instead shrink over time. 

Chung says she hopes that the theory can help experimentalists make predictions about behavior from neural recordings, and begin to help bridge the gap between behavior and population activity. “Our primary goal is to help experimentalists be able to really make sense out of their high-dimensional data,” she says.
