Illustration by Matthieu Borrel

The visual system’s lingering mystery: Connecting neural activity and perception

Figuring out how the brain uses information from visual neurons may require new tools. I asked 10 neuroscientists what experimental and conceptual methods they think we’re missing.

By Grace Lindsay
13 October 2025 | 5 min read

Neuroscientists have spent decades characterizing the types of information represented in the visual system. In some of the earliest studies, scientists recorded neural activity in anesthetized animals passively viewing stimuli—a setup that led to some of the most famous findings in visual neuroscience, including the discovery of orientation tuning by David Hubel and Torsten Wiesel.

But passive viewing, whether while awake or anesthetized, sidesteps one of the more intriguing questions for vision scientists: How does the rest of the brain use this visual information? Arguably, the main reason for painstakingly characterizing the information in the visual system is to understand how that information drives intelligent behavior. Connecting the dots between how visual neurons respond to incoming stimuli and how that information is “read out” by other brain regions has proven nontrivial. It is not clear that we currently have the experimental and computational tools needed to fully characterize this process. To get a sense of what it might take, I asked 10 neuroscientists what experimental and conceptual methods they think we’re missing.

Decoding is a common approach for understanding the information present in the visual system and how it might be used. But decoding on its own—training classifiers to read out prespecified information about a visual stimulus from neural activity patterns—cannot tell us how the brain uses information to perform a task. This is because the decoders we use for data analysis do not necessarily match the downstream processes implemented by neural circuits. Indeed, there are pieces of information that can reliably be read out from the visual system but aren’t accessible to participants during tasks. Primary visual cortex contains information about the ocular origin of a stimulus, for example, but participants are not able to accurately report this information.
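The gap between decodability and behavioral access can be made concrete with a toy simulation. The sketch below (numpy only; all numbers and variable names are hypothetical, not drawn from any real data set) simulates a V1-like population in which each neuron carries a small, reliable eye-of-origin bias, then fits a simple difference-of-class-means linear decoder. The decoder reads out eye of origin well above chance, even though real observers cannot report this information:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 400

# Hypothetical V1-like population: each neuron carries a small,
# reliable bias depending on which eye the stimulus came from.
eye_bias = rng.normal(0.0, 0.5, n_neurons)   # per-neuron eye-of-origin signal
eye = rng.integers(0, 2, n_trials)           # 0 = left eye, 1 = right eye
baseline = rng.normal(5.0, 1.0, n_neurons)

# Trial-by-trial responses: baseline + signed eye-of-origin bias + noise
responses = (baseline
             + np.outer(2 * eye - 1, eye_bias)
             + rng.normal(0.0, 1.0, (n_trials, n_neurons)))

# Hold out trials, then fit a difference-of-class-means linear decoder
train, test = slice(0, 300), slice(300, None)
w = (responses[train][eye[train] == 1].mean(0)
     - responses[train][eye[train] == 0].mean(0))
b = responses[train].mean(0) @ w  # threshold at the projected grand mean
pred = (responses[test] @ w > b).astype(int)
accuracy = (pred == eye[test]).mean()
print(f"eye-of-origin decoding accuracy: {accuracy:.2f}")
```

The point of the exercise is the essay's caveat in miniature: high decoding accuracy tells us the information is present in the population, not that any downstream circuit actually uses it.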

This doesn’t mean decoding has no role to play. Attempting to identify decoders whose performance correlates with behavior, either broadly or on an image-by-image or trial-by-trial basis, can sharpen our hypotheses about how visual information is read out. A recent study used such an approach to understand how mice perform an orientation detection task, comparing a decoder trained to weight neurons individually when detecting the grating with one that takes a simple mean of population activity. The researchers found that the mean decoder more closely matched mouse behavior, a finding they validated with optogenetic perturbations.
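The two readouts being compared can be sketched in a few lines. This is an illustrative simulation, not the study's actual analysis: the task structure, neuron counts, and noise levels are all invented for the example. It pits a least-squares decoder that weights each neuron individually against a decoder that simply averages the population and applies a threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 100, 1000

# Toy detection task: on roughly half of trials a grating appears,
# driving each neuron with a different (heterogeneous) sensitivity.
present = rng.integers(0, 2, n_trials)            # 1 = grating shown
sensitivity = rng.gamma(2.0, 0.5, n_neurons)      # per-neuron response gain
responses = (np.outer(present, sensitivity)
             + rng.normal(0.0, 2.0, (n_trials, n_neurons)))

train, test = slice(0, 700), slice(700, None)

# Readout 1: individually weighted neurons (least-squares fit)
X_train = np.column_stack([responses[train], np.ones(700)])
w = np.linalg.lstsq(X_train, present[train], rcond=None)[0]
X_test = np.column_stack([responses[test], np.ones(300)])
pred_weighted = (X_test @ w > 0.5).astype(int)

# Readout 2: simple mean of population activity, thresholded at the
# midpoint between the two training-set class means
m = responses.mean(1)
thresh = 0.5 * (m[train][present[train] == 1].mean()
                + m[train][present[train] == 0].mean())
pred_mean = (m[test] > thresh).astype(int)

acc_weighted = (pred_weighted == present[test]).mean()
acc_mean = (pred_mean == present[test]).mean()
print(f"weighted decoder: {acc_weighted:.2f}, mean decoder: {acc_mean:.2f}")
```

In real data, the interesting question is not which decoder scores higher on held-out trials but which one's successes and failures line up with the animal's, trial by trial.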

Complicating matters, evidence suggests that readout mechanisms can depend on the task or context. For example, a 2008 study showed a surprising result: When monkeys were trained to perform a coarse depth discrimination task, inactivating area MT significantly worsened performance. After the same monkeys were subsequently trained on a fine discrimination task, however, MT inactivation no longer degraded coarse discrimination performance. Because training produced no observable changes in MT tuning properties, the researchers proposed that it must instead have changed the readout mechanism.

 


These kinds of studies hint at a much more complex and less intuitive mechanism for turning visual information into action than is generally appreciated. The time is ripe to consider what it would take to fully characterize this process. How visual information is read out is important not just for understanding vision-guided behavior but also for constraining our theories of other processes, such as visual attention and perceptual learning, which must work within the constraints of the readout mechanism.

What kinds of experimental designs and methods are needed? In many systems of interest, meso- and micro-scale anatomical information is still lacking. What is the nature of the connections, for example, between area MT and prefrontal cortex in macaques? What are the constraints on those connections, and how plastic are they? To make progress, we will also likely need more multi-region recording studies to see how information in the visual system correlates with activity elsewhere on a trial-by-trial basis. The fact that readout is often context dependent suggests that we should be using more naturalistic tasks: How information propagates in a simplified task using artificial stimuli may not be a good basis for understanding vision-guided behavior in ethological settings.

What advances do we need in data analysis and modeling? It’s clear that optimal, trained decoders don’t guarantee us an accurate mechanistic understanding. Are there principles we could use to constrain these decoders to steer them more toward how the brain is likely to use the information? Should we be trying a different approach altogether, perhaps by focusing more on the biophysics of the system? Or should we be building more “full-brain” models rather than trying to isolate readout in one specific location or connection? What can low-dimensional descriptions of activity in terms of subspaces and manifolds provide? How do we capture the dynamics of the readout, on both short and long timescales?

An overarching question will always be: How general are our findings? Is it possible to have guiding principles that can help us understand readout across tasks or even species? Or will we make more efficient progress by studying different questions separately? For example, the rules for how visual information guides motor planning may be quite distinct from how it is used for navigation. Is the concept of a “readout” even the right framework for these questions?

The scientists I queried pulled from a wide range of research traditions to tackle this question. They generally agreed that the traditional, modular approach to studying the brain has held us back from understanding inter-region communication, and they outlined how inspiration from a variety of different experimental and computational fields could forge a path forward. Read on for their perspectives.
