Thanks to the explosion of technology for recording from many neurons simultaneously, talk of “populations” is now ever-present in the neuroscience literature. We talk of recording from a population of neurons. We study their population coding. We plot and model their population dynamics. It’s a term we now use widely, freely, instinctively. But as a basis for thinking about how brains work, the concept of a neural population poses a problem: It is an analogy, and a poor one at that. It is unclear that a meaningful neural population exists—it could just be a useful fiction.
To see this, let’s think about how we define actual populations in the world. Picking a city at random, I can tell you that Melbourne’s population in 2024 was an implausibly precise 5,350,705 people. Or that, according to its 2019 census, Mombasa in Kenya had a population of 1,208,333 people. These ridiculously exact numbers are only possible because some agreed boundary defines what is and is not Melbourne, and what is and is not Mombasa. People on the inside of that boundary are in the city and people on the outside of it are not, no matter that their houses may be mere meters apart. Such boundaries are arbitrary. The ancient and noble City of London, the Square Mile at the heart of the UK’s capital that dates to at least 1191, changed its boundary in 1994. An almost perfectly straight line splits the island of New Guinea in half—one side Papua New Guinea, the other now part of Indonesia. The boundary that defines these countries’ populations in 2025 was drawn back in 1895 by British and Dutch colonialists resolving their dispute with a map, a pencil and a ruler.
Arbitrary boundaries exist so that authorities know who is in their jurisdiction, who they can count, regulate and tax. They need bear no resemblance to the ways in which the people of that population identify themselves or with whom they share kinship, culture and religion. Similarly, arbitrary boundaries are frequently how we define populations in neuroscience, calling a “population” the set of neurons we happened to record together at the same time, be it a few tens on a tetrode or hundreds of thousands with calcium imaging. This reflexive usage extends to how we then use those recordings: We study population coding by training a classifier on all the recorded neurons; we examine population dynamics by applying some dimension-reduction method or model, such as a recurrent neural network, to all the recorded neurons. We wouldn’t define a population of people by counting every person we happened to spot while looking down on a city block from a helicopter, but that’s how we do it for neurons.
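To make this concrete, here is a minimal sketch of what “studying population coding” often amounts to in practice: pool whatever neurons the recording happened to capture and train a classifier on all of them at once. Everything here is an assumption for illustration (the simulated Poisson spike counts, the two stimulus classes, the logistic-regression decoder), not anyone’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: trial-by-neuron spike counts for every neuron we
# happened to record together, regardless of any meaningful grouping.
rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 120
stimulus = rng.integers(0, 2, size=n_trials)  # two stimulus classes
rates = rng.poisson(lam=5 + 2 * stimulus[:, None], size=(n_trials, n_neurons))

# "Population coding": decode the stimulus from all recorded neurons at once.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, rates, stimulus, cv=5).mean()
print(f"decoding accuracy from the arbitrary 'population': {accuracy:.2f}")
```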
Physical boundaries offer another way to define populations. This is certainly how we think of animal populations. Members of the same species may form different populations when physically unable to meet, split apart by mountains, rivers, deserts or seas. Geographical isolation is a potent driver of speciation, most famously on isolated islands, whose unique species include Madagascar’s ring-tailed lemurs and aye-ayes.
In the brain, defining a population of neurons by the presence of physical boundaries might instinctively feel more useful than arbitrary ones. But such boundaries, if they exist at all, are rare. We could point at two regions of the cortex with different names and say they are different populations, but this is literal cartography. Defining populations by differences in the density, and sometimes the shape, of neurons is like defining populations of animals by how bunched up they are on a tundra. If connectomics has taught us anything, it’s that no neuron is truly, physically isolated from another; we can always find a short path that links them. The primary somatosensory cortex (S1) and primary motor cortex (M1), for example, show subtle differences in gene expression, but they are directly connected by synapses in both directions. They have no physical boundaries to define them as separate neural populations.
Both arbitrary and physical boundaries for defining a neural population depend on what we, the observers, happen to measure—our window on neural activity, or on neurons’ shapes and density, or their gene expression. A neural population’s size, then, depends on our measurement technique: Electrodes give us a few hundred neurons in a population, calcium imaging up to a million at present, and fMRI many millions of neurons. Testing hypotheses of population coding or population dynamics on the captured data is then bound to the scale of our measurements, not the scale at which the brain itself organizes its coding or dynamics. Instead, we’d like to define a neural population by its action, as an entity that implements a computation or function within the brain, however we define those terms. We want our “population” to mean something self-contained. How do we define that?
A clue lies in how we instinctively think about different labeled regions of the brain as being somehow meaningfully separate things. We are implicitly saying that, connected as they might be, these regions are somehow independent of each other. That is, these regions have independent dynamics: The dynamics of neurons in one region are strongly constrained by the interactions between the neurons in that region but at best weakly modified by connections from outside it. This gives us a third way of defining a neural population, a dynamical boundary: We could say that two groups of neurons that at best weakly influence each other form two distinct neural populations.
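As a sketch of what a dynamical boundary could look like in data, consider a toy linear system in which two groups of simulated neurons are coupled strongly within groups and weakly between them; fitting the interaction matrix back from the activity alone recovers the boundary. The group sizes, coupling strengths and least-squares fit are all assumptions made for illustration, not an established method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # two hypothetical groups of 20 simulated neurons each

# Ground-truth linear dynamics x[t+1] = A @ x[t] + noise, with strong
# within-group coupling and weak between-group coupling: a built-in
# dynamical boundary between neurons 0-19 and neurons 20-39.
A = rng.standard_normal((n, n)) / np.sqrt(n)
A[:20, 20:] *= 0.05  # group 2 -> group 1: weak
A[20:, :20] *= 0.05  # group 1 -> group 2: weak
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # keep the dynamics stable

# Simulate activity, then re-estimate the interactions from activity alone.
T = 5000
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = A @ X[t] + rng.standard_normal(n)
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

within = np.linalg.norm(A_hat[:20, :20]) + np.linalg.norm(A_hat[20:, 20:])
between = np.linalg.norm(A_hat[:20, 20:]) + np.linalg.norm(A_hat[20:, :20])
print(f"within-group coupling {within:.2f} vs between-group {between:.2f}")
```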
We don’t yet have well-established methods to cleave neural activity at the joins, to take what we frequently call a population—that arbitrary set of neurons we happen to record—and split it into its independent, dynamical populations. The challenge is to come up with a definition of independence. One candidate is the notion of “subspace communication”: The dynamics of one group of neurons can range across many dimensions of activity, but that group influences another group, often weakly, across only a select few of those dimensions, its communication “subspace.” As a result, most changes of dynamics in the first group are along dimensions that don’t affect the neurons in the other group. A stricter, inverse definition would define the two populations by their “null space”: There exist many dimensions along which the activity of one group of neurons can change without changing its output to the other group.
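Here is a toy sketch of both ideas, under the assumption that one group drives another through a low-rank linear coupling: the singular values of a fitted source-to-target map reveal the few “communication” dimensions, and the remaining dimensions form the null space. The group sizes, the rank of 3 and the noise level are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n_src, n_tgt, rank, T = 30, 30, 3, 2000

# Hypothetical ground truth: the source group drives the target group
# only through a rank-3 coupling, a 3-dimensional communication subspace.
U = np.linalg.qr(rng.standard_normal((n_src, rank)))[0]
V = np.linalg.qr(rng.standard_normal((n_tgt, rank)))[0]
W = U @ np.diag([5.0, 4.0, 3.0]) @ V.T

X = rng.standard_normal((T, n_src))                # source-group activity
Y = X @ W + 0.5 * rng.standard_normal((T, n_tgt))  # target-group activity

# Fit a linear map from source to target; its singular values show how
# many dimensions actually carry influence between the two groups.
W_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
s = np.linalg.svd(W_hat, compute_uv=False)
print("singular values:", np.round(s[:6], 2))  # ~3 large, the rest tiny

# The remaining source dimensions form the null space: activity can move
# along them without changing the output to the target group at all.
null_dim = int(np.sum(s < 0.1 * s[0]))
print(f"~{null_dim} of {n_src} source dimensions lie in the null space")
```

In real recordings, the number of communicating dimensions would have to be estimated rather than read off against a fixed threshold, for example by cross-validating linear maps of different ranks.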
A seeming issue with any such definition is that neural populations defined by their dynamical influence on each other could be capricious, changing with brain state. But such transience of neural populations is a feature, not a bug, of defining them by a dynamical boundary. After all, changes in brain state are just changes in the dynamics of neural activity, so the dynamical influence of neurons on each other must also change, reconfiguring the dynamical boundary that separates them. One example is the onset of slow-wave sleep, in which neurons across many millimeters of cortex begin oscillating together between hyperpolarized and depolarized states, erasing any dynamical independence that the same neurons had during waking. These changes don’t follow any of our observer-imposed boundaries; they respect neither the arbitrary boundaries of our instruments nor the physical boundaries we draw on our maps of cortex.
Defining a neural population by a dynamical boundary thus offers us two advantages: It is both a principled approach, for which we can offer quantitative, testable definitions, and a way to capture the reorganization of the brain’s dynamics. But it also shows us the inherent contradiction in the very idea of a neural “population.” For a dynamically defined “neural population” isn’t a population at all. It more closely resembles how we define species, because it is defined by one group’s inability to interact with another. The neural “population” is thus a convenient fiction of neuroscience: It is not just inaccurate in its definition and misleading in its use; it likely doesn’t even exist.