Decisions emerge from coordinated activity patterns across many brain areas. The challenge we face as neuroscientists is figuring out how. Technologies such as Neuropixels and optical imaging enable recordings from populations of neurons across many brain areas, leading to enormously impressive datasets with thousands of neurons. But making sense of these data to uncover the computations underlying decision-making has proved elusive. I think it is a great time for the field to design experiments that match the ambition of our tools. By designing decision-making tasks that vary along multiple dimensions and truly challenge our animals, we might finally understand how multiple brain areas coordinate to drive decisions.
The starting point of most decision-making experiments is to get animals to perform a task for rewards, such as juice or food. It is often tempting to train the animal to do “something simple” because the training is easy and quick. Later we can get to the “exciting stuff”: Go in with a kitchen sink of experimental tools to collect neurophysiological data and/or perturb the system and use mathematical tools to uncover how activity in the brain leads to the behavior of interest.
Though this approach sounds great in principle, analyzing the neural data associated with simple behavioral tasks can be challenging for multiple reasons. First, when the behavior is too simple, the brain does not need to compute much. When many areas could solve a problem, often they do: Relevant signals pop up all over the brain, leaving us with the somewhat puzzling conclusion that decision-related activity is global. But some tasks may simply be too trivial to require different computations from different areas, so it is unsurprising that many areas look similar in such contexts.
Second, animals perform simple tasks quickly, generating only a narrow window of neural activity from which to try to make sense of how they reached a decision. You might be left with just 50 milliseconds of potentially very noisy neural data from which to understand decision-related computations.
Third, animals often perform uninstructed movements, such as wiggling for one set of conditions but staying still for another, and this variability can bleed into the neural data and lead to false conclusions.
Finally, when the task is too simple, the animals get it right almost every time, and it can be hard to know how an animal solves the task if there is nothing to compare against.
Much of the challenge with these experiments lies in eliciting the right types of errors so that you can isolate which brain regions or patterns of activity lead to correct versus incorrect decisions. Fortunately, we can overcome this difficulty by taking a page from human cognitive neuroscience and developing decision-making tasks that challenge our animals.
We and others have realized that decoupling deliberation and perceptual judgments from action preparation and execution reveals more about how different brain areas contribute to decisions. In one study, we asked animals to discriminate the dominant color in a red-green checkerboard and report their decision by touching a red or green target. By randomizing whether the red target was on the left or the right, we decoupled the action from the judgment of the sensory stimulus. To perform this task, the brain needs to solve an exclusive-or (XOR) problem. We found that brain areas carry distinct signals—the dorsolateral prefrontal cortex reflects both the color judgment (red versus green) and combinations of color and action (e.g., red and left), thereby solving the XOR problem, whereas the dorsal premotor cortex signals only the final action (left versus right). We were surprised by how powerful this design turned out to be—so much so that it could even reveal gradients within the dorsolateral prefrontal cortex: The anterior part of the prefrontal cortex was more sensitive to the dominant color of the checkerboard, whereas the ventral aspects were most sensitive to combinations of color and action.
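To make the XOR structure concrete, here is a minimal sketch in Python (my own illustration of the task logic, not code or data from the study). It encodes the dominant color and the target layout as binary cues, then shows by brute-force search that no linear readout of the raw cues alone produces the correct reach on every trial type, whereas adding a conjunction of color and layout makes the problem linearly solvable. The variable names and the weight grid are arbitrary choices for the demonstration.

```python
import itertools

# Trial types: (dominant color: red=1/green=0, red target on: left=1/right=0)
# Correct action: reach left=1 exactly when the dominant color's target is on the left.
cases = [((1, 1), 1),  # red dominant, red target on left   -> reach left
         ((1, 0), 0),  # red dominant, red target on right  -> reach right
         ((0, 1), 0),  # green dominant, green target right -> reach right
         ((0, 0), 1)]  # green dominant, green target left  -> reach left

def solvable_linearly(features):
    """Brute-force search for a linear threshold readout over the features."""
    grid = [x / 2 for x in range(-8, 9)]  # candidate weights in [-4, 4]
    n = len(features((0, 0)))
    for *ws, b in itertools.product(grid, repeat=n + 1):
        if all((sum(w * f for w, f in zip(ws, features(x))) + b > 0) == bool(y)
               for x, y in cases):
            return True
    return False

raw = lambda x: x                   # color and layout cues alone
conj = lambda x: (*x, x[0] * x[1])  # plus a color-and-layout conjunction

print(solvable_linearly(raw))   # False: no linear readout of the raw cues works
print(solvable_linearly(conj))  # True: a conjunctive signal makes it linearly solvable
```

The conjunction feature plays the role of the combined color-and-action signal described above: once such a signal exists, a downstream area can read out the correct reach with a simple weighted sum.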
In research presented at Cosyne in March, we went even further: We showed the animal the checkerboard and revealed the targets only after a delay. In simple working memory tasks, it is difficult to differentiate motor planning from working memory and reward expectation. In contrast, because an animal cannot plan a movement during the stimulus or the delay, our task cleanly separated deliberation, working memory and motor planning. We found that areas in the dorsolateral prefrontal cortex were involved in the task from start to finish—deliberating on the stimulus, storing a representation of it in working memory and turning it into an appropriate action at the right time. Activity in the dorsolateral prefrontal cortex predicted the animal’s behavior on both correct and error trials, as well as the animal’s biases. Other areas, which we and others have recorded from during similar tasks, do not seem to carry all these signals; they reflect only the action the animal performed.
More complex tasks also make causal experiments more powerful. Causal perturbations with simple tasks—for example, using optogenetics or chemical techniques to silence specific cells or brain areas—can and do fail because other parts of the brain compensate. In contrast, a richer task forces the brain to perform a more complex computation over a longer period, which means that perturbations can target computations during specific parts of the task: You can ask which signals are disrupted, which survive and which are communicated between areas. Richer tasks also elicit more variable reaction times, error patterns and movement kinematics, giving you far more behavior to link to neural activity and more opportunities to observe how causal perturbations alter behavior and neural dynamics.
I hope that I and others in the field will move beyond our existing workhorse tasks and get creative, designing decision-making tasks that require animals to navigate complex mazes, problem-solve, manipulate objects, solve puzzles, learn hierarchical rules or adapt strategies on the fly in naturalistic settings. These types of tasks—combined with large-scale recordings, targeted perturbations and elegant computational approaches—will likely reveal how multiple brain areas work together to drive decisions.
