One of the key scientific questions about consciousness concerns its distribution. We know that adult humans have the capacity for consciousness, but what about human neonates, bees or artificial intelligence (AI) systems? Who else—other than ourselves—belongs in the “consciousness club,” and how might we figure this out?
It is tempting to assume, as many do, that we need a theory of consciousness to answer the distribution question. In the words of neuroscientists Giulio Tononi and Christof Koch, “we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it.” This is what philosopher Jonathan Birch has labeled the “theory-heavy” approach to the distribution problem.
But there are serious issues with the theory-heavy approach. One is that we don’t have a consensus theory of consciousness. In a highly selective review that Anil Seth and I published in 2022, we listed no fewer than 22 neurobiological theories of consciousness. This overabundance of theories could reasonably be ignored if most of them agreed on fundamental questions in the field, such as which systems have the capacity for consciousness or when consciousness first emerges in human development, but they don’t.
A further problem with the theory-heavy approach is that in order to speak to the distribution problem, a theory cannot be restricted to consciousness as it occurs in adult humans but must also apply to human infants, nonhuman animals, synthetic biological systems and AI. Yet because theories are largely based on data drawn from the study of adult humans, there will inevitably be a gap between the evidence base of a general theory and its scope. Why should we think that a theory developed to account for adult human consciousness applies to very different kinds of systems?
This isn’t just an issue of the legitimacy of applying theories developed from adult human data; it also concerns how the theory itself is specified. A comprehensive theory needs to distinguish features that might be essential to (adult) human consciousness but are not essential to consciousness as it might occur in other systems. However, it’s not clear how that distinction can be drawn if our primary data are drawn from the study of adult humans.
In light of the challenges facing the theory-heavy approach, many have tried to develop an alternative approach to the distribution problem: the marker approach.
Think of markers as a bit like a detective’s clues. Rather than constituting proof of consciousness, markers need only be indicative (or “credence-raising”). Converging evidence from different types of markers might establish the presence of consciousness, in the same way that evidence from various clues might establish guilt beyond a reasonable doubt.
The basic idea behind the marker-based approach isn’t entirely new. Consider the fact that in ordinary life, we rely on capacities for agency and verbal communication to ascribe consciousness to other people. These behavioral markers of consciousness are crucial in clinical contexts. For example, the capacity to reliably follow commands is taken to be an indicator of consciousness in severely brain-damaged patients.
But behavioral markers have their own limitations. The capacity to produce intelligent linguistic behavior is one of the most trustworthy markers of consciousness in humans, but few of us are inclined to regard the linguistic behavior of large language models such as ChatGPT or Claude as evidence that they, too, are conscious. There are also cases—involving cerebral organoids or neonates, for example—in which consciousness might exist in the absence of complex behavioral capacities. If the marker approach is to make serious headway against the distribution problem, behavioral markers will need to be supplemented with nonbehavioral markers, such as those that involve neural, physiological, metabolic or computational features.
Whatever form they take, markers will need to be validated, and we need to reach a consensus on when and why we should trust them. How, then, shall we proceed?
The standard approach appeals to the contrast between conscious and unconscious states in our consensus population (that is, adult humans who can report on their experiences). To see how this works in practice, consider a marker that has received significant attention recently: the “global effect.” Developed by Tristan Bekinschtein and his colleagues at the University of Cambridge, the global effect exploits the fact that consciousness appears to be required for the brain’s response to a certain type of auditory “oddball.” It has long been known that when people are exposed to a sequence of identical tones followed by one that differs in some way (the oddball), the brain generates a “surprise” response to the stimulus. This response is known as the P300 response because it occurs roughly 300 milliseconds after the oddball. What Bekinschtein and his colleagues found is that the brain produces P300 responses to local (or “first-order”) oddballs (e.g., the final B in AAAB) even when a person is unconscious, but consciousness does appear to be required for P300 responses to global (or “second-order”) oddballs, such as the final AAAA block in a sequence of the form AAAB AAAB AAAB AAAA.
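To make the paradigm concrete, here is a minimal Python sketch of how local and global oddball sequences might be generated. The block length, tone labels and deviant probability are illustrative assumptions on my part, not the exact parameters of Bekinschtein and colleagues’ protocol.

```python
import random

def make_block(local_deviant: bool) -> list[str]:
    """Return one four-tone block: AAAB contains a local (first-order)
    oddball in its final position; AAAA does not."""
    return ["A", "A", "A", "B"] if local_deviant else ["A", "A", "A", "A"]

def make_sequence(n_blocks: int = 20, p_global_deviant: float = 0.2,
                  frequent_is_aaab: bool = True) -> list[list[str]]:
    """Build a sequence in which one block type is frequent (habitual)
    and the other is a rare global (second-order) oddball.

    With frequent_is_aaab=True, AAAB is the established pattern, so an
    occasional AAAA block violates it at the sequence level even though
    it contains no local oddball -- the dissociation the paradigm exploits.
    """
    sequence = []
    for _ in range(n_blocks):
        is_global_deviant = random.random() < p_global_deviant
        # A global deviant is simply a block that breaks the habitual pattern.
        sequence.append(make_block(local_deviant=frequent_is_aaab != is_global_deviant))
    return sequence

if __name__ == "__main__":
    for block in make_sequence(n_blocks=8):
        print("".join(block))
```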
Taking inspiration from this finding, theorists have suggested that the global effect might be indicative of consciousness in other populations. For example, the fact that it can be found in 3-month-old infants, newborns and perhaps even in fetuses past 35 weeks’ gestational age has been taken by many to provide some evidence in favor of consciousness in early infancy (and, more tentatively, in late-stage fetuses).
Although I myself have used data about the global effect to justify claims about the emergence of consciousness in human development, this line of reasoning is not immune to criticism. The problem is that neural responses functioning as markers of adult consciousness might not function as markers of infant consciousness. And there are questions about how well the global effect tracks consciousness in systems that are even less like “us” than infants or fetuses are. Finding a global effect in an AI system surely provides little to no evidence that it is conscious.
The upshot, then, is that we need principled ways of identifying the scope of putative markers of consciousness—an account of when features that indicate the presence (or absence) of consciousness in us also indicate the presence (or absence) of consciousness in systems that are very different from us. If such an account requires a theory of consciousness, then the marker-based approach collapses back into the theory-heavy approach. Is there a middle ground between these two approaches?
Perhaps. Together with various colleagues, I’ve suggested that we can bootstrap our way toward a solution to the distribution problem through a process of iterative revision. For example, we can use our most promising theories of consciousness to tell us when the global effect is a reliable guide to the presence of consciousness, and we can use what our markers tell us about the distribution of consciousness to further refine our theories. Eschewing foundationalism, we should regard neither theories nor markers as sacrosanct but instead look for an account that maximizes the fit between them.
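To give a flavor of what such iterative revision might look like, here is a deliberately toy sketch in Python. The systems, numbers and update rule are all illustrative assumptions of mine, not a published model; the point is only to show theories and markers being re-weighted in turn until the two cohere.

```python
# Toy reflective-equilibrium loop: theory weights and marker reliabilities
# are adjusted alternately to maximize their fit with a shared consensus.
SYSTEMS = ["awake adult", "sedated adult", "newborn", "language model"]

# Credence each hypothetical theory assigns to consciousness in each system.
theory_predictions = {
    "theory_A": {"awake adult": 0.99, "sedated adult": 0.05,
                 "newborn": 0.7, "language model": 0.1},
    "theory_B": {"awake adult": 0.95, "sedated adult": 0.10,
                 "newborn": 0.3, "language model": 0.4},
}

# Whether each putative marker (e.g., the global effect) is observed (1.0) or not.
marker_observations = {
    "global_effect": {"awake adult": 1.0, "sedated adult": 0.0,
                      "newborn": 1.0, "language model": 1.0},
}

theory_weights = {t: 0.5 for t in theory_predictions}
marker_reliability = {m: 0.5 for m in marker_observations}

def consensus(system: str) -> float:
    """Weighted blend of theory credences and marker verdicts for one system."""
    num, den = 0.0, 0.0
    for t, w in theory_weights.items():
        num += w * theory_predictions[t][system]
        den += w
    for m, w in marker_reliability.items():
        num += w * marker_observations[m][system]
        den += w
    return num / den

for _ in range(50):  # iterate until theories and markers roughly cohere
    target = {s: consensus(s) for s in SYSTEMS}
    # Re-weight each theory by how well it matches the current consensus...
    for t in theory_weights:
        err = sum((theory_predictions[t][s] - target[s]) ** 2 for s in SYSTEMS)
        theory_weights[t] = 1.0 / (1.0 + err)
    # ...and each marker by how well it tracks that same consensus.
    for m in marker_reliability:
        err = sum((marker_observations[m][s] - target[s]) ** 2 for s in SYSTEMS)
        marker_reliability[m] = 1.0 / (1.0 + err)

print({s: round(consensus(s), 2) for s in SYSTEMS})
print("marker reliability:", {m: round(w, 2) for m, w in marker_reliability.items()})
```

In this idealization, no input is sacrosanct: disagreement over the language model lowers both the marker’s reliability score and the theories’ weights, rather than one side simply overruling the other.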