
LEAD article

Front Sci, 30 October 2025

Volume 3 - 2025 | https://doi.org/10.3389/fsci.2025.1546279


Consciousness science: where are we, where are we going, and what if we get there?

  • 1Center for Research in Cognition and Neuroscience (CRCN), ULB Neuroscience Institute (UNI), Université libre de Bruxelles, Brussels, Belgium
  • 2Brain, Mind, and Consciousness Program, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, Canada
  • 3School of Psychological Sciences, Tel-Aviv University, Tel Aviv, Israel
  • 4Sagol School for Neuroscience, Tel-Aviv University, Tel Aviv, Israel
  • 5Sussex Centre for Consciousness Science, University of Sussex, Brighton, United Kingdom
  • 6School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom

Abstract

Understanding the biophysical basis of consciousness remains a substantial challenge for 21st-century science. This endeavor is becoming even more pressing in light of accelerating progress in artificial intelligence and other technologies. In this article, we provide an overview of recent developments in the scientific study of consciousness and consider possible futures for the field. We highlight how several novel approaches may facilitate new breakthroughs, including increasing attention to theory development, adversarial collaborations, greater focus on the phenomenal character of conscious experiences, and the development and use of new methodologies and ecological experimental designs. Our emphasis is forward-looking: we explore what “success” in consciousness science may look like, with a focus on clinical, ethical, societal, and scientific implications. We conclude that progress in understanding consciousness will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world, usher in new realms of intervention for modern medicine, and inform discussions around both nonhuman animal welfare and ethical concerns surrounding the beginning and end of human life.

Key points

  • Understanding consciousness is one of the most substantial challenges of 21st-century science and is urgent due to advances in artificial intelligence (AI) and other technologies.
  • Consciousness research is gradually transitioning from empirical identification of neural correlates of consciousness to encompass a variety of theories amenable to empirical testing.
  • Future breakthroughs are likely to result from the following: increasing attention to the development of testable theories; adversarial and interdisciplinary collaborations; large-scale, multi-laboratory studies (alongside continued within-lab effort); new research methods (including computational neurophenomenology, novel ways to track the content of perception, and causal interventions); and naturalistic experimental designs (potentially using technologies such as extended reality or wearable brain imaging).
  • Consciousness research may benefit from a stronger focus on the phenomenological, experiential aspects of conscious experiences.
  • “Solving consciousness”—even partially—will have profound implications across science, medicine, animal welfare, law, and technology development, reshaping how we see ourselves and our relationships to both AI and the natural world.
  • A key development would be a test for consciousness, allowing a determination or informed judgment about which systems/organisms—such as infants, patients, fetuses, animals, organoids, xenobots, and AI—are conscious.

Introduction

Understanding consciousness is one of the greatest scientific challenges of the 21st century, and potentially one of the most impactful for society. This challenge reflects many factors, including (i) the many philosophical puzzles involved in characterizing how conscious experiences relate to physical processes in brains and bodies; (ii) the empirical challenge of obtaining objective, reliable, and complete data about phenomena that appear to be intrinsically subjective and private; (iii) the conceptual/theoretical challenge of developing a theory of consciousness that is sufficiently precise and not only accounts for empirical data and clinical cases but is also sufficiently comprehensive to account for all functional and phenomenological properties of consciousness; and (iv) the epistemological and methodological challenges of developing valid tests for consciousness that can determine if a given organism/system is conscious. The potential impact of understanding consciousness stems from the many interlinked implications this can have for science, technology, medicine, law, and other critical aspects of society. Existentially, a complete scientific account of consciousness is likely to profoundly change our understanding of the position of humanity in the universe.

Accordingly, consciousness has become an object of intense scrutiny from different disciplines. While the connection between mind and body is an ancient philosophical conundrum, in recent decades, the metaphysical issues have been accompanied by a set of empirical questions, with neuroscience and psychology attempting to discover and explain the connections between conscious experiences and neural activity. Yet, strikingly, the core problem had already been formulated in scientific terms at the turn of the 20th century: certain articles from that period read almost as though they had been written today. For instance, in 1902, Minot wrote a Science article titled “The problem of consciousness in its biological aspects” in which he “[…] hopes to convince you that the time has come to take up consciousness as a strictly biological problem …” (1).

Eighty-eight years later, Crick and Koch called for renewed inquiry into “the neural correlates of consciousness” (2, 3), prompted in part by the increasing availability of novel brain imaging methods that could link the biological activity of the brain with subjective experience. This empirical program continues apace, together with theory development and ever deeper interactions with philosophy. But today, there is also a sense that the field has reached an uneasy stasis. For example, a recent review (4) taking a highly inclusive approach identified over 200 distinct approaches to explaining consciousness, exhibiting a breathtaking diversity in metaphysical assumptions and explanatory strategies. In such a landscape, there is a danger that researchers talk past each other rather than to each other. Empirically, Yaron et al. (5) showed that most extant experimental research on theories of consciousness is geared toward supporting them rather than attempting to falsify or compare them, reflecting a confirmatory posture that hinders progress. This manifested both in the low percentage of experiments that ended up challenging theories, as opposed to supporting them (15%), and in the low percentage of experiments that were designed a priori to test theoretical predictions (35%, with only 7% testing more than one theory in the same experiment).

Beyond the genuine and highly complex scientific challenges that the study of consciousness must address, sociological factors may also contribute to the current sense of entrenchment: nobody likes to change their mind (6)! Emerging collaborative frameworks—especially adversarial collaborations—may help alleviate this concern, at least to some extent. But there are also further factors: the possibility that consciousness research is not sufficiently addressing why it feels like anything at all to be conscious and the role that conscious phenomenology plays in our mental, and indeed biological, lives (7–9).

This paper is structured in a forward-looking manner, moving from the past, through the present, and on to the future. First, we clarify terms and make some essential conceptual distinctions. Then, we briefly review what has been achieved so far in elucidating the neural and theoretical basis of consciousness. Next, we consider the future of our field, outlining some promising directions, approaches, methods, and applications, and advocating for a renewed focus on the phenomenological/experiential aspects of consciousness. Finally, we imagine a time in which we have “solved consciousness” and explore some of the key consequences of such an understanding for science and society.

Three distinctions about consciousness

Consciousness is a broad construct—a “mongrel” concept (10)—used by different people to mean different things. In this paper, we stress three distinctions.

The first distinction is between the notion of the level of consciousness and the notion of the contents of consciousness. In the first sense, consciousness is a property associated with an entire organism (a creature) or system: one is conscious (for example, when in a normal state of wakefulness) or not (for example, when in deep dreamless sleep or a coma). There is an ongoing vibrant debate about whether one should think of levels of consciousness as degrees of consciousness or whether they are best characterized in terms of an array of dimensions (11) or as “global states” (12). In the second sense, consciousness is always consciousness of something: our subjective experience is always “contentful”—it is always about something, a property philosophers call intentionality (3, 13). Here, again, there is some debate over the terms, for example, whether there can be fully contentless global states of consciousness (14) and whether consciousness levels (or global states) and contents are fully separable (11, 15).

The second distinction is between perceptual awareness and self-awareness (note that in this article, we use the terms consciousness and awareness interchangeably). Perceptual awareness simply refers to the fact that when we are perceptually aware, we have a qualitative experience of the external world and of our bodies within it (though of course, some perceptual experiences can be entirely fictive, such as when dreaming, vividly imagining, or hallucinating). Importantly, mere sensitivity to sensory information does not amount to perceptual awareness: the carnivorous plant Dionaea muscipula and the camera on your phone are both sensitive to their environment, but we have little reason to think that either has perceptual experiences. Sensitivity alone does not necessarily feel like anything, and this experiential character is precisely what makes the corresponding sensation a conscious sensation (16).

We take self-awareness, on the other hand, to mean experiences of “being a self.” These experiences can be of many different kinds, from low-level experiences of mood and emotion (17) to high-level experiences of being the subject of our experiences, which might be supported by some inner (metacognitive) model of ourselves and our mental states (18–20). This kind of high-level reflective self-awareness is associated with the “I” and with a sense of personal identity over time (21).

The distinction between self-awareness and perceptual awareness is not sharp. Some aspects of the experience of “being a self” seem not to involve reflective self-awareness, such as experiences of emotion, mood, body ownership, agency, and of having a first-person perspective (22, 23). Some of these aspects may arguably have perceptual features. For example, emotional experience may depend on interoception (24–26). In addition, some perspectives, such as the higher-order theories described below, suggest that a form of metacognition might play a constitutive role in all instances of perceptual awareness, not only in self-awareness (18, 27, 28).

Human beings normally possess both perceptual awareness and self-awareness, but this is probably not true at all times or for all species. In humans, reflective self-awareness may be absent in specific conscious states, such as absorption or flow (29), or in states of minimal phenomenal experience (14). Other species may lack this reflective capability altogether. For example, few will doubt that dogs have perceptual experiences as well as various non-reflective self-related experiences—though this can be contested as we currently lack a way to directly test for consciousness in other species [see (30–32) for recent attempts to tackle this problem]. Nevertheless, there is no convincing evidence that dogs have reflective self-awareness in the sense defined above. Putting these debates aside, consciousness research has thus far largely focused, with exceptions (26, 33, 34), on trying to explain perceptual awareness as a first, albeit notoriously difficult, step toward understanding other aspects of consciousness. This emphasis most likely stems from the fact that perceptual awareness is generally easier to manipulate in experiments.

The third distinction contrasts the phenomenological (i.e., experiential) aspects of consciousness with its functions. This discussion has been largely shaped by Block’s (35) influential, yet controversial (36, 37), distinction between phenomenal consciousness and access consciousness—informally, what consciousness feels like and what it does. Access consciousness is associated with the various functions that consciousness enables, such as global availability, verbal report, reasoning, and executive control. Phenomenal consciousness, on the other hand, refers to the felt qualities of conscious mental states: the complex mixture of bitterness and sweetness of a Negroni cocktail, the distinctive hue of International Klein Blue, the anxiety prompted by one’s to-do list. All such conscious mental states have phenomenal character (using the philosophical term, often referred to as “qualia”): there is something it is like for us to be in each of these states. By contrast, there is nothing it was like for the neural network AlphaGo (38) to win against the South Korean world Go champion Lee Sedol (it was Sir Demis Hassabis and the DeepMind team who drank the champagne instead). Despite its seductive use of language, we think there is also nothing it is like for GPT-5 to engage in a conversation (39, 40).

Just as there has been greater emphasis within consciousness science on studying perceptual awareness compared with self-awareness, there has also been a greater emphasis on studying the functional rather than the phenomenological aspects of consciousness. This, again, may be due to the relative ease with which functional properties related to conscious access can be studied empirically compared with phenomenological aspects (41–43). With respect to the neural underpinnings of consciousness, we have been more focused on finding the mechanisms that differentiate between a consciously processed and an unconsciously processed stimulus than on explaining the difference between two conscious experiences, again with exceptions (44–48). Additionally, with respect to the functions of consciousness, we have been more oriented toward documenting what we can do without awareness rather than because of it (49–52). The potential for complex behavior in the absence of awareness has been further emphasized by the rapid advances in artificial intelligence (AI), where complicated functions can be executed without any accompanying phenomenology, at least as far as we can tell.

What have we achieved so far?

Following this clarification of terms, we briefly review where things stand today in consciousness research. Given the enormous challenge that explaining consciousness represents, it is easy to underestimate the significant progress that has already been made. This progress has been particularly visible over the last 30 or so years, but in fact it extends much further back, with highlights including seminal work on split-brain and other neurological patients, brain stimulation studies, research on nonhuman primates, and much more (53–55).

Some basic facts are now well established. In humans and other mammals, the thalamocortical system is strongly involved in consciousness, whereas the cerebellum (despite having many more neurons) is not. Different regions of the cortex are associated with different aspects of conscious content, whether these are distinct perceptual modalities (56), experiences of volition or agency (34), emotions (57), or other aspects of the sense of “self” (58). Researchers have identified a myriad of candidate signatures of consciousness in humans, focusing on global neural patterns [e.g., neuronal complexity (59), non-linear cortical ignitions (60), stability of neural activity patterns (61)], specific electrophysiological markers of consciousness [e.g., the perceptual awareness negativity (62), alpha suppression (63), late gamma bursts (64)], and on relevant brain areas such as the “posterior hot zone” (65) or frontoparietal areas (66) as well as subcortical structures and brainstem arousal systems that may contribute to and modulate awareness (67–70). For some of these regions, notably brainstem arousal systems, there is debate about whether they represent necessary enabling conditions for consciousness and/or whether they contribute to the material basis of consciousness (67, 69).

At the same time, some previously popular hypotheses have now been empirically excluded. For example, the idea that consciousness is uniquely associated with 40 Hz (gamma band) oscillations has fallen out of favor based on substantial evidence (71, 72). In parallel, there has been a growing recognition that various confounds need to be carefully ruled out in order to interpret these findings, including those related to the enabling conditions for conscious experience, post-perceptual processes such as memory and report, and the concern that consciousness is often (but not always) correlated with greater signal strength and performance capacity (73–76). In this regard, phenomena such as blindsight, in which consciousness can be partly dissociated from performance capacity, are particularly intriguing [(77–79); but see (80, 81), for critiques].

Complementing these empirical findings, many theories of consciousness have been developed over recent years. These vary greatly in their aims and scope, in the degree of traction they have gained in the community, and in their level of empirical support (5, 12, 82–84). A selection of these theories provides a useful lens through which to focus attention on the progress made so far in the scientific study of consciousness.

Global workspace theory

One prominent theory, named “global workspace theory” (GWT), originated from “blackboard” architectures in computer science. Such architectures contain many specialized processing units that share and receive information from a common centralized resource—the “workspace.” The first version of GWT (85) was a cognitive theory that assumed that consciousness depends on global availability: just like blackboard architectures, the cognitive system consists of a set of specialized modules capable of processing their inputs automatically and unconsciously, but they are all connected to a global workspace that can broadcast information throughout the entire system and make its contents available to a wide range of specialized cognitive processes such as attention, evaluation, memory, and verbal report (86). The core claim of GWT is thus that it is the wide accessibility and broadcast of information within the workspace that constitutes conscious (as opposed to unconscious) contents. Since the 1990s, GWT has developed into a neural theory (referred to as global neuronal workspace theory) in which neural signals that exceed a threshold cause “ignition” of recurrent interactions within a global workspace distributed across multiple cortical regions—this being the process of “broadcast” (64, 87). Importantly, GWT is what is called a first-order theory: what makes a mental state conscious depends on properties of that mental state (and its neural underpinnings) only and not on some other process relating to that mental state in some way. Thus, in contrast with the assumptions of higher-order theories (HOTs, introduced below), GWT does not postulate that consciousness depends on higher-order representation or indexing of some kind.

GWT is primarily a theory of conscious access (88), focused on how mental states gain access to consciousness and how they accrue functional utility as a result. This is characterized largely in terms of supporting flexible, content-dependent behavior, including the ability to deliver subjective verbal reports [but see (89) for a discussion of the phenomenal aspect of consciousness and how the theory explains it, and see Dehaene’s section in (90)]. GWT’s clear neurophysiological predictions (centering on nonlinear “ignition” and on the involvement of frontoparietal regions) have led to a wealth of supportive experimental evidence (64). For example, divergences of activity ~250–300 ms post-stimulus have been associated with ignition (91), and measures of long-distance information sharing among cortical regions have been associated with broadcast (92). However, a major challenge for GWT lies in specifying what exactly counts as a “global workspace” (12): does it depend on the nature of the “consuming” systems, the type of broadcast, and/or on other factors?
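
The all-or-none character of “ignition” is easy to convey with a toy simulation: a single recurrent unit whose self-excitation passes through a steep sigmoid, so that weak inputs decay back to baseline while inputs above a critical strength tip the unit into a self-sustaining high-activity state. The sketch below is purely didactic, with arbitrary parameter values of our own choosing; it is not the global neuronal workspace model itself.

```python
import numpy as np

def simulate_ignition(stim_strength, w_rec=1.2, theta=0.5, tau=10.0, steps=400):
    """Toy workspace unit: activity x decays passively but is amplified by a
    steep sigmoidal recurrent term. Inputs above a critical strength push x
    past an unstable fixed point, 'igniting' a self-sustaining state."""
    x, trace = 0.0, []
    for t in range(steps):
        drive = stim_strength if t < 50 else 0.0          # brief stimulus pulse
        recurrent = w_rec / (1.0 + np.exp(-(x - theta) / 0.05))
        x += (-x + recurrent + drive) / tau               # Euler step, dt = 1
        trace.append(x)
    return np.array(trace)

weak, strong = simulate_ignition(0.3), simulate_ignition(0.6)
print(f"activity long after a weak stimulus:   {weak[-1]:.2f}")   # decays to ~0
print(f"activity long after a strong stimulus: {strong[-1]:.2f}")  # sustained, ~1.2
```

The qualitative signature (bimodal, late-diverging responses to near-threshold stimuli) is what the ~250–300 ms divergence findings cited above are taken to reflect.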

Higher-order theories

A second prominent theory of consciousness is Rosenthal’s (93) higher-order thought theory, which proposes that a mental state is a conscious mental state when one has a “higher-order” thought that one is in that mental state. This core idea has now been elaborated on in different ways, resulting in a family of higher-order theories (HOTs). Unlike first-order theories, higher-order theories all claim that mental states are conscious when they are the target of a “higher-order” mental state of a specific kind (18, 93–95). The nature of the relationship between first-order and higher-order states varies among HOTs, but they all share the basic notion that for a first-order mental state X to be conscious, there must be a higher-order state Y that in some way monitors or meta-represents X. Take the experience of consciously seeing a red chair. According to HOTs, the first-order representation (perhaps instantiated as a pattern of neural activity in the visual cortex) of red is not by itself sufficient to produce a conscious experience. Instead, there need to be additional “higher-order” states that point to or (meta)represent the first-order representation for it to be experienced as red. Crucially, such higher-order states need not be conscious themselves (i.e., we do not need to be aware of a mental state with content like, “I am now seeing red”). Rather, it is their very existence that makes the target content conscious. HOTs capture the intuitively plausible notion that a mental state is a conscious mental state as soon as I am aware of being in that mental state. This offers an equally intuitive distinction between conscious and unconscious mental states: I am conscious of some situation when I know about that situation; otherwise, I am unconscious of that situation.

Many HOTs locate the neural basis of the relevant meta-representations in anterior regions of the human brain, with an emphasis on the prefrontal cortex (96). Future “neural HOTs” will likely develop richer mappings between brain states and the theoretical distinction between first- and higher-order states (97). These theories are therefore supported by evidence implicating these regions in consciousness and undermined by evidence that anterior regions are not necessary for consciousness. As such, they have motivated studies investigating the neural correlates of consciousness (NCCs) with this question in mind (98). Of particular note are experiments that attempt to control for how well participants perform at a perceptual task: such studies (including in “blindsight” participants) have shown that when conditions are matched for performance, differences between conscious and unconscious perception are found in anterior cortical regions (75, 99), and that interference with prefrontal function using transcranial magnetic stimulation (TMS) or multivariate neurofeedback affects subjective aspects of perception (such as confidence) without changing performance (100, 101). Studies associating perceptual metacognitive abilities with anterior prefrontal function also provide intriguing supportive evidence, albeit less direct (e.g., 102, 103). Additional support can be drawn from demonstrations of decoding of the content of consciousness from frontal areas (104).
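
The perceptual metacognitive abilities mentioned above are commonly quantified with model-free indices such as the area under the type-2 receiver operating characteristic curve, which asks how well confidence ratings discriminate a participant’s own correct from incorrect trials. A minimal sketch follows, with synthetic data standing in for real behavior (meta-d′ is the usual model-based alternative):

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC: probability that a randomly chosen correct
    trial carries higher confidence than a randomly chosen error (ties count
    half). 0.5 indicates no metacognitive sensitivity."""
    correct = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    hits, errors = conf[correct], conf[~correct]
    greater = (hits[:, None] > errors[None, :]).mean()
    ties = (hits[:, None] == errors[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(0)
correct = rng.random(400) < 0.75                  # a 75%-accurate observer
confidence = rng.integers(1, 5, 400) + correct    # confidence loosely tracks accuracy
print(f"type-2 AUROC: {type2_auroc(correct, confidence):.2f}")  # > 0.5
```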

However, HOTs currently do not fully specify the actual neural mechanism(s) mediating the implementation of first- versus higher-order states: how exactly does one brain state “point” at another, and what motivates the choice of which first-order state to point at or re-represent? Another challenge is that they focus on the contents of consciousness and provide less explanation for the level of consciousness. These under-specifications reflect the relatively limited empirical formulation of HOTs—despite their considerable philosophical backbone (105)—as compared with other theories (5). These aspects of the theory are currently being developed (45), and an ongoing adversarial collaboration (ETHoS) is specifically aimed at comparing the empirical predictions of four HOT variants.

Integrated information theory

A very different perspective is provided by “integrated information theory” (IIT), developed by Giulio Tononi and colleagues since the 1990s (44, 106, 107). Rather than asking what in the brain gives rise to consciousness, IIT identifies features of conscious experience (described in five axioms) that it assumes are essential and then asks what properties a physical substrate of consciousness must have for these features to be present. A striking claim of IIT is that any physical substrate that possesses these properties will exhibit some level of consciousness (108). The two most illustrative essential features, or axioms, are (unsurprisingly) information and integration. According to IIT, every conscious experience is necessarily both informative (in virtue of ruling out many alternative experiences; i.e., every experience is the way it is, and not some other way) and integrated (every experience is a unified scene). IIT introduces a mathematical measure, phi (Φ), which, broadly speaking, measures the extent to which a physical system entails irreducible maxima of integrated information and thereby, according to the theory, provides a full measure of consciousness. Different versions of IIT introduce different varieties of Φ, with the latest being IIT 4.0 (107), but all associate consciousness with the underlying “cause–effect structure” of a physical system and not just with the dynamics (e.g., neural activity) that the physical system supports. IIT is arguably the most ambitious theory we discuss because it addresses both the level and content of consciousness, proposes a sufficient basis for consciousness, and explicitly addresses phenomenological aspects of consciousness, such as spatiality (109) and temporality (110).

IIT has been criticized on the grounds that measurement of Φ is challenging or infeasible for anything other than very simple systems. Other “weak” versions of IIT have been proposed in which Φ is easier to measure, but this comes at the cost of abandoning claims of an identity relationship between Φ and consciousness (111). Another line of criticism is that the axioms proposed by full IIT do not satisfy standard philosophical criteria of being self-evidently true (112). Concerns like these have led to robust debate over whether the core claims of IIT are empirically testable and over what should be expected from a scientific theory of consciousness (40, 113, 114).
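
To illustrate why “weak” formulations are more tractable, the sketch below computes a crude whole-minus-sum proxy for integration in a toy two-node system: the temporal mutual information of the whole system minus that of its parts. We stress that this is not Φ as defined by IIT (there is no search over partitions and no cause–effect structure analysis); the system and its coupling are arbitrary choices made purely for illustration.

```python
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    """Empirical mutual information (bits) between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy two-node binary system: each node noisily copies the OTHER node's
# previous state, so all temporal information crosses the bipartition.
rng = np.random.default_rng(1)
T, p_flip = 100_000, 0.1
s = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    s[t, 0] = s[t - 1, 1] ^ int(rng.random() < p_flip)
    s[t, 1] = s[t - 1, 0] ^ int(rng.random() < p_flip)

states = [tuple(row) for row in s]
whole = mutual_info(states[:-1], states[1:])                       # whole system
parts = sum(mutual_info(list(s[:-1, k]), list(s[1:, k])) for k in range(2))
print(f"whole-minus-sum integration proxy: {whole - parts:.2f} bits")  # ~1 bit
```

In this toy system, each part taken alone carries no information about its own future, while the whole system carries substantial information about its future, so the proxy is large; severing the coupling between the nodes would drive it to zero.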

The most commonly referenced experimental support for IIT comes from evidence examining empirically applicable proxies for integrated information (Φ) under different global states of consciousness. In a canonical series of studies (115, 116), Massimini and colleagues have developed a measure of consciousness, called the “perturbation complexity index” (PCI), which quantifies the complexity of the brain’s response to cortical stimulation. Most commonly, the method uses TMS to inject a brief pulse of energy into the cortex, an electroencephalogram to measure the response, and the information-theoretic metric of Lempel–Ziv complexity (which quantifies the diversity of patterns within a signal) to quantify the complexity of the response. High PCI values arguably correspond to high levels of integration and information in the underlying dynamics. However, it is important to emphasize that the PCI, while inspired by and based on IIT, is not a measure or approximation of Φ, and differences in PCI across conscious levels may also be affected by differences in how unconscious processes operate at these levels. The PCI results, while fascinating, cannot be taken to directly support the distinctive aspects of IIT that rely on the definition of Φ, and are also compatible with or supportive of other theories, notably GWT. Nevertheless, the PCI method has shown exciting promise in important practical scenarios, such as detecting residual consciousness in unresponsive patients following severe brain injury (59).
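
The compression step at the heart of such measures is simple to convey in code. The sketch below assumes an already binarized (channels × time) evoked response and returns a normalized Lempel–Ziv-style complexity; for brevity it uses LZ78-style phrase counting rather than the LZ76 algorithm of the published PCI, and it omits the source modeling, statistical thresholding, and normalization steps of the validated pipeline.

```python
import numpy as np

def lz_phrase_count(bits):
    """LZ78-style parsing: count the distinct phrases needed to spell out a
    sequence. Rich, unpredictable sequences require more phrases."""
    phrases, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def pci_like(response, thresh=1.0):
    """Binarize a (channels x time) response at 'thresh', flatten, and return
    a normalized compression-based complexity. A didactic proxy only."""
    bits = "".join((np.abs(response) > thresh).astype(int).astype(str).ravel())
    n = len(bits)
    return lz_phrase_count(bits) / (n / np.log2(n))  # ~1 for maximally rich input

rng = np.random.default_rng(2)
diverse = rng.normal(size=(32, 300))                        # channels all differ
stereotyped = np.tile(rng.normal(size=(1, 300)), (32, 1))   # channels identical
print(f"complexity, diverse response:     {pci_like(diverse):.2f}")
print(f"complexity, stereotyped response: {pci_like(stereotyped):.2f}")
```

A spatially differentiated response compresses poorly (high complexity), whereas a stereotyped, globally uniform response compresses well (low complexity), mirroring the intuition that PCI-like measures index both differentiation and integration.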

In terms of neural correlates, IIT theorists claim that brain activity sufficient for conscious perception is localized to posterior regions (e.g., the posterior cortical “hot zone”). This claim is based on the argument that neural connectivity in these regions is well suited to generating high levels of (irreducible) integrated information, rather than the anterior regions favored by HOTs and GWT (117).

Predictive (and recurrent) processing theory

The final theory we mention here is not really (or at least not primarily) a theory of consciousness but rather a general theory of brain function—of perception, cognition, and action—from which more specific connections between brain processes and aspects of consciousness can be derived and tested (118). According to “predictive processing” (PP), the brain continually minimizes sensory “prediction error” signals, either by updating its predictions about the causes of sensory signals or by performing actions to bring about predicted or desired sensory inputs (the latter process being termed “active inference”) (119–121). This ongoing process of prediction error minimization provides a mechanism by which the view of perception as a process of Bayesian inference, or “best-guessing” (122), and as a means of predictive regulation of physiological variables can be implemented (123, 124). In its most ambitious and all-encompassing version, the “free energy principle,” prediction error minimization arises out of fundamental constraints regarding control and regulation that apply to all physical systems that maintain their organization over time in the face of external perturbations (125, 126).

Several distinct theories of consciousness fall under the umbrella of PP (e.g., 23, 127, 128). These typically share the claim that the contents of conscious experiences arise from (top-down) predictions rather than from a “read out” of (bottom-up) sensory signals. Informally, the contents of perceptual experience are given by the brain’s “best guess” of the causes of its sensorium or, even more informally, as a “controlled hallucination” in which the brain’s predictions are reined in by sensory signals arising from the world and the body (23).
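
The basic logic can be captured in a few lines: in a single-level Gaussian scheme, a latent estimate is nudged by precision-weighted bottom-up (sensory) and top-down (prior) prediction errors until the two balance, recovering the Bayes-optimal “best guess.” The sketch below is a generic textbook-style illustration with arbitrary precisions of our own choosing, not any specific published model.

```python
def perceive(observation, prior_mean, prior_prec=1.0, sens_prec=4.0,
             lr=0.05, steps=200):
    """Single-level Gaussian predictive coding: nudge the latent estimate mu
    with precision-weighted prediction errors until top-down and bottom-up
    errors cancel. With an identity generative model, this converges to the
    Bayes-optimal (precision-weighted) posterior mean."""
    mu = prior_mean
    for _ in range(steps):
        sensory_error = sens_prec * (observation - mu)  # bottom-up error
        prior_error = prior_prec * (prior_mean - mu)    # top-down error
        mu += lr * (sensory_error + prior_error)        # gradient step
    return mu

obs, prior = 2.0, 0.0
analytic = (4.0 * obs + 1.0 * prior) / (4.0 + 1.0)      # precision-weighted mean
print(f"converged estimate: {perceive(obs, prior):.3f} "
      f"(analytic posterior mean: {analytic:.3f})")     # both 1.600
```

Raising the prior precision pulls the estimate toward the prediction (strengthening the “controlled” element of the controlled hallucination); raising the sensory precision pulls it toward the data.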

One particularly influential theory under the PP umbrella deserves mention: recurrent processing theory (RPT), also known as “local recurrency” or “re-entry” theory, associates consciousness with top-down (recurrent) signaling in the brain but does not appeal directly to the Bayesian aspects of PP (129, 130). Instead, RPT uses neurophysiological evidence to motivate the view that local recurrence (e.g., in visual cortex) is sufficient for phenomenal experience to occur and that feedforward (bottom-up) activity is always insufficient for conscious perception, no matter how “deep” into the brain this activity reaches (36). RPT’s focus on local recurrence is usually used to contrast the theory with other theories that involve widespread broadcast (GWT) or higher-order processes (HOT) (90), but as theories gain precision, it could be that aspects of RPT also surface in other theories (83). For example, the “ignition” process central to GWT might involve local recurrence. Nonetheless, a key difference between RPT and these other theories remains that RPT allows that phenomenal experience could be present without cognitive access (36).

The core commitments of PP do not directly specify a necessary or sufficient basis for consciousness to happen, nor do they specify how to distinguish conscious from unconscious processing. RPT is an exception here, proposing sufficient conditions, given the right enabling background conditions. Instead, the value of PP for theories of consciousness may largely reside in providing resources for developing and testing systematic or explanatory correlations between brain processes and properties of conscious experience, both functional and experiential (118). PP accounts tend to focus on conscious content rather than conscious level (e.g., 131, 132); they speak to both phenomenological (in terms of the nature of top-down predictions) and functional aspects of consciousness and address aspects of selfhood and embodiment more directly than other theories discussed here (e.g., 40, 133). Notably, variants of the theories discussed above can be expressed within the framework of PP, so there can be ‘PP versions’ of, for example, GWT and HOT (95, 134).

Whether PP succeeds as a theory in consciousness science will depend both on evidence that prediction error minimization is indeed a core brain operation and on its ability to draw explanatorily and predictively powerful links between elements of predictive processing and aspects of conscious experience. While there is substantial evidence linking top-down signaling to conscious perception (135, 136), evidence for explicit sensory prediction error signals playing the roles proposed by PP remains mixed (137), at least when compared to the well-studied dopaminergic reward prediction error signal (138). Further, while abundant evidence shows that participant expectations can shape conscious perception (139), much remains to be done to causally connect the computational entities of PP with specific forms of consciousness. For some, this is a shortcoming of the theory: it might be too general and accordingly not informative enough to explain consciousness. Conversely, more specific formulations of the top-down principle, such as RPT, have been criticized for being too narrow, for example, focusing on visual processing only and failing to explain how this relates to other modalities and how conscious information is integrated across modalities.

This short tour of several of the many theories of consciousness [for a recent comprehensive survey, see (4)] highlights that there is not only a lack of agreement about the answers in consciousness science but also a lack of consensus about approaches and relevant questions. This does not mean there has been no progress. On the contrary, the last two decades have witnessed an enlightening move away from a simple search for NCCs in a comparatively theory-free and therefore explanatorily impoverished way to a rich landscape of different theories with varying degrees of experimental support. The Consciousness Theories Studies (ConTraSt) database study (https://contrastdb.tau.ac.il) has recently quantified the differences in the extent of research relating to the four theories of consciousness described above and demonstrated how research results tend to align with the predictions of the supported theory [see Figure 1 and (5)]. There are also some striking commonalities as well as differences among theories. For example, recurrent processing emerges as a key principle in GWT, IIT, PP, and some versions of HOT as well as other theories. Such unifying principles might point toward a “minimal unifying model” of consciousness, at least in biological systems (140).

Figure 1 [image]. (A) Bar chart of experiments supporting versus challenging each theory (global workspace: 239 supporting, 56 challenging; first order and predictive processing: 141 and 23; integrated information: 125 and 10; higher order: 14 and 9). (B) Cumulative experiments per theory from 2000 to 2025, all increasing over time. (C) Brain maps of fMRI experiments supporting each theory (global workspace, N = 66; higher order, N = 4; integrated information, N = 25; first order, N = 35).

Figure 1. Results of the Consciousness Theories Studies (ConTraSt) database study (5). Updated results of the ConTraSt database, now including 511 experiments published until mid-2025, which interpreted their findings in light of four prominent theories of consciousness: global workspace theory (GWT), higher-order theories (HOT), integrated information theory (IIT), and recurrent processing theory (RPT). Notably, there are currently no papers in the database for predictive processing theory (PPT). This is mainly because the database is based on the work done by Yaron et al. (5), where PPT was not included, and new uploads referring to this theory have not been made yet. (A) Distribution of experiments across theories. Green sections in the bars represent the number of experiments interpreted as supporting the theory; purple sections represent experiments interpreted as challenging it. (B) Effects over time: a cumulative distribution of experiments supporting the theories. (C) Functional magnetic resonance imaging (fMRI) findings for experiments supporting each of the theories. The same conventions used by Yaron et al. (5) are used here: for each activation, the color intensity indicates the relative frequency of experiments reporting activations in that brain area. While overlaying all findings demonstrates that most of the cortex has been implicated in consciousness, the breakdown by theory presents four different pictures, each aligning with the predictions of the supported theory. This further illustrates the confirmatory posture that most authors in the field have—intentionally or not—espoused.

Where are we going?

Thus far, we have surveyed some of the current main directions in the study of consciousness. As our overview makes clear, the sheer diversity of approaches and theories that characterize the field raises questions about how it can best make progress. In this section, we consider the most promising directions to follow in this ongoing quest, which some consider potentially endless (141). What will be the state of our field 50 years from now? Will our successors look back with satisfaction at the progress made toward “solving consciousness,” or will they feel that the research has been going in circles, not getting any closer?

Considering that prophecy is given to fools, we will refrain from making a prediction here. But we note that the history of science abounds with unfulfilled scientific promises to solve one mystery or another, like producing cold fusion (142), curing cancer (143), achieving room temperature superconductivity, or indeed fully simulating the human brain (144). On the other hand, science often outperforms human predictions: 50 years ago, it probably seemed unthinkable that a computer would ever beat a human chess champion (145), converse fluently (146), or be able to create art (147). Bearing this in mind, what will the future of consciousness science look like? In the following sections, we sketch out nascent trends that will most likely shape the field in the coming decade: a shift toward theory-driven research, the necessity of collaborative and interdisciplinary work, the adoption of new methods, and an emphasis on applications. We hope that developments like these may help the field move beyond the current “uneasy stasis” we mentioned earlier.

From correlates to testable theories

The first major shift is a transition from “searching for the NCCs” to an increased focus on theory-driven empirical research (12, 8284). While the former has been largely dominated by a data-driven, bottom-up approach consisting, for instance, of manipulating consciousness in hopes of identifying neural contrasts between consciously perceived and non-consciously perceived stimuli, the latter is driven by empirical predictions derived from specific theories of consciousness. Generally speaking, the agenda seems to be gradually transitioning toward providing explanations that go beyond descriptions [see (9), for a critical review]. This seems to be a step in the right direction, though more work is needed to potentially turn this simple step into a major leap.

First, theories must be thoroughly scrutinized to identify both their core constructs (148) and testable predictions that have high explanatory power. Most, if not all, theories include claims and concepts that are somewhat fuzzy—often almost metaphorical—and these are then translated into neural terms in ways that are sometimes too simplistic, for example by debating whether consciousness is subserved by the front or the back of the brain (149, 150). Further elucidation and formalization are needed to make it possible for the theories to be fully tested. Addressing such issues would open up another research strategy, focused on the “search for computational correlates of consciousness” (151)—that is, identifying which computational differences best characterize the distinction between conscious and unconscious information processing. This in turn requires further precision. For example, what does it mean for information to be globally broadcast (152), and how do the receiving neurons understand the message? Similarly, how exactly does a higher-order brain state point at first-order brain states (96)? Or how is the unfolded cause–effect structure of a certain conscious state (107) physically implemented in neural terms? Only when predictions are fully fleshed out will we be able to assess their explanatory power using clear measures (153, 154).

Second, the explananda (explanatory targets) of the theories should be better defined, especially given claims that they might not be explaining the same things and the fact that they are supported by different types of empirical data, at least to some degree (5, 82). We believe that a greater focus on the phenomenological, experiential aspects of consciousness—for example, by studying quality spaces (45, 48, 155) or by pursuing computational phenomenology (14, 156158)—is likely to yield substantial dividends here, by making the explananda more precise and thereby sharpening the distinctions among theories.

Third, as Seth and Bayne (12) argue, current theories should become not only more precise (for example, by using computational modeling) and more testable (for example, by developing new measures) but also more comprehensive. That is, theories should progressively be able to explain more distinct aspects of consciousness, and a good theory should explain as many aspects of consciousness as possible (82). An alternative and potentially complementary strategy is to focus on explaining the minimal, universally present features of consciousness (140)—perhaps reflecting a kind of “minimal phenomenal experience” (159).

Another shift in emphasis encouraged by theory-driven predictions is a focus on causal as well as on correlational evidence. Causal predictions generally provide stricter tests of a theory and hence more informative evidence. An example of a theory-based causal prediction can be found in INTREPID (https://arc-intrepid.com/about/), one of the current crop of adversarial collaborations. There, the team is using optogenetics in mice to contrast the effects of merely inactive versus optogenetically inactivated neurons in the visual cortex on visual perception, testing a prediction derived from IIT. Outside the context of theory testing, some have used optogenetics to examine the dependence of conscious perception on cortico-cortical and cortico-thalamic connectivity (157).

Finally, to allow us to home in on promising theories and reduce our credence in less useful ones, the field should focus on evaluating these theories through experiments designed a priori to test their predictions. At least some of these experiments should probe multiple theories simultaneously, to create meaningful contrasts between them. This leads us to the next suggested move.

From isolation to collaboration

Until recently, consciousness has mostly been studied by dozens of laboratories around the world, each working largely independently. Each scholar has addressed the problem using their own tools, ideas, and theoretical approaches and pursued their research alone or with a small group. Yet, other fields have taught us that big questions often cannot be solved by individuals or small groups and that such questions may be better addressed through collaborative science (e.g., 160–162). Applied to our field, collaborative approaches can be used at multiple levels.

Selecting research questions

Defining key questions that are worth pursuing can be taken up by the community at large (163) or by a joint process involving multiple researchers and scholars. One form of such collaboration that we have already mentioned is adversarial collaboration, championed by Kahneman (6). Here, theoretical opponents work together to design experiments that would test their approaches, pushing each other toward better theoretical and experimental definitions of their claims.

A recent program initiated by the Templeton World Charity Foundation (TWCF) adopted this method in an attempt to “accelerate research on consciousness” by encouraging theory leaders to mutually engage and design experiments likely to arbitrate between competing theories. A series of such adversarial collaborations is now underway, pioneered by the Cogitate Consortium (Figure 2; 117). Time will tell if these collaborations allow us to arbitrate between theories. The first results of the Cogitate Consortium interestingly—and perhaps unsurprisingly—do not fully align with the predictions made by either of the theories in question, namely IIT and GWT. A challenge for this consortium, and likely for future adversarial collaborations, is that the agreed-upon experiments did not directly test the core aspects of either theory—a problem that in turn may follow from each theory making different assumptions and having distinct explananda. Yet, the experiments provided meaningful tests of the neuroscientific predictions of these theories, and the failure to confirm some of these predictions will hopefully lead to self-correction by the theories and to shifting the credence assigned by the community to each theory (164).

Figure 2 [image]. Five adversarial collaborations launched between 2019 and 2023, mapping the theories under test (IIT, GWT, PPT, RPT, and HOT) to their hypothesized neural substrates: the posterior cortical “hot zone” (IIT), a global workspace with local processors (GWT), prediction circuits (PPT), lower-order representation (RPT), and meta-representation (HOT). Abbreviations include HOROR, HOSS, PRM, and SOMA. Adapted from Seth and Bayne (2022) and Lamme (2010).

Figure 2. An illustration of the ongoing adversarial collaborations funded by the Templeton World Charity Foundation. Such collaborations invite theory leaders to jointly conceive experiments aimed at falsifying the core tenets of different theories. The experiment designs and theoretical predictions to be tested are preregistered and the experiments are performed and replicated by independent teams. In total, eight theories (see text) will be tested. To date, five adversarial collaborations have been launched. Cogitate (initiated in 2019) tested predictions of information integration theory (IIT) and global neuronal workspace theory (GWT). Data collection is complete and the first experimental results have been published (117). A second adversarial collaboration (2021) is comparing IIT and GWT in nonhuman animals. Third, INTREPID (2022) is testing IIT against predictive processing theory (PPT) and neurorepresentationalism. A fourth collaboration (2020) contrasts higher-order theories (HOTs) of consciousness—specifically higher-order representation of a representation (HOROR) (94)—with some first-order theories, in particular recurrent processing theory (RPT) (130) and perceptual reality monitoring (PRM) (75). Finally, ETHoS (2023) aims to test four HOT variants: HOROR, PRM, higher-order state space (HOSS) (95), and the self-organizing metarepresentational account (SOMA) (18, 27). The outcomes of such vast empirical programs will likely shape the field over the next decade, but whether they will decisively rule out specific theories remains to be seen.

Defining research methods

Since its inception, the field of consciousness science has been characterized by controversies about how best to operationalize, manipulate, and measure consciousness [for example, (165–168)]. Unsurprisingly, this lack of consensus on practices is accompanied by a myriad of conflicting findings and claims, for example, on the scope of unconscious processing (169–171). Developing new methods and protocols that achieve broad uptake and consensus by virtue of having demonstrated validity would significantly advance the field (172), akin, for example, to collaborative attempts to define the goals of research in metacognition (173). Beyond making progress toward resolving key questions about what consciousness is and how it should be best studied, this would also allow direct comparisons between datasets obtained in different laboratories across the world and hopefully increase the chances of converging on agreed-upon claims about conscious versus unconscious processing. Notably, a single consensus approach would not suffice by itself and is likely to be extremely hard to obtain given the inherent complexities in studying consciousness. Rather, field-specific standardized approaches could usefully complement the rich variety of experimental and theoretical approaches currently flourishing. A relevant example is a recent collaborative effort (174) to define best practices for characterizing unconscious processing, for example, which awareness scale is preferable in each context, when tests of awareness should be administered, etc.

Collecting data

The plea for better-powered, multi-laboratory studies has been made in many fields, and in recent years such attempts have abounded (175–177). This may be even more crucial for consciousness research and psychological science more generally, where effects are typically weak and short-lived (178). Indeed, several such initiatives are already underway, some benefitting from the engagement and involvement of large swathes of the public. Examples include The Perception Census, a large-scale citizen science study of perceptual diversity (https://perceptioncensus.dreamachine.world), the SkuldNet COST Consortium (http://skuldnet.org/), and the Cogitate Consortium (117).

Interdisciplinarity

Consciousness is one of the most complex phenomena known to science, and understanding it requires the collaboration of scholars of different disciplines. In many ways, our field seems to have been a frontrunner in interdisciplinarity, as seen already in the “Towards a science of consciousness” conferences and at the annual meetings of the Association for the Scientific Study of Consciousness. Collaborations between neuroscientists, psychologists, and philosophers (11, 12, 48, 179–181); psychologists and computational neuroscientists (182); neuroscientists, philosophers, and physicists (183); and psychiatrists and psychologists (184) are just a few examples of the ways an interdisciplinary approach can advance the field of consciousness. Crucially, effective interdisciplinarity takes decades; this is perhaps one of the core arguments in support of bottom-up, curiosity-driven fundamental research: to allow interdisciplinary connections to form and flourish.

One aspect of interdisciplinarity worth highlighting is the benefit of involving philosophers of science (not only philosophers of mind). The challenge of understanding consciousness is of such magnitude that, even with substantial progress being made, there remain robust discussions about appropriate definitions, conceptual foundations, and constraints on empirical research, as illustrated by the recent debate over IIT (40, 113, 114). Here, philosophy of science can provide a systematic meta-theory that can help the community to converge around exactly what should be explained and how.

Embracing new methods

Consciousness science could also benefit greatly from the development of new experimental methods, just as the field of neuroscience has benefited from functional brain imaging and other innovations. One promising arena for new methods is the opportunity to study consciousness in less constrained, more naturalistic environments. Given the challenge of studying consciousness, and a historical skepticism toward our ability to do so (185), the field has thus far mainly focused on finding the most controlled and simplified paradigms and operational definitions. Yet, recently, research has been conducted in more ‘real world’-like settings, relying on state-of-the-art technologies that continue to evolve (186). Studies using virtual/augmented reality (187–190) suggest new ways to study consciousness, potentially in tandem with wearable brain imaging technologies such as optically pumped magnetometers (191). A particular advantage of these “extended reality” technologies is that they provide powerful new ways of investigating aspects of conscious experience that would otherwise remain difficult to study. Examples include experiences of embodiment through the use of avatars or virtual/augmented body parts (192–194), the influence of social context on the neural basis of consciousness, and the ability to suppress real-life objects (not merely computer screen images) from consciousness (195). In general, methods like these enable us to get closer to studying conscious (and unconscious) processes as they happen in real life, for example, when we are walking down a street—a situation of enormous sensory richness compared with a typical laboratory experiment (196).

Much can also be gained from combining the old and the new. An example here is the emerging approach of computational neurophenomenology, which merges new computational modeling of neurocognitive systems with relatively old philosophical and behavioral methods from phenomenological research to help build more informative bridges between brain mechanisms and conscious experience (151, 197). The key to doing so successfully may lie in flexibly recognizing how “old” and “new” approaches might benefit each other, rather than treating them in an either/or fashion, or as beholden to certain paradigms or explanatory targets.

One useful approach is to focus on the phenomenological aspects of consciousness. Here, a promising method for understanding what an experience is like for an organism—how it is similar to or different from other possible experiences—is to investigate empirical mappings between (objective) neural and (subjective) perceptual similarity structures in high-dimensional spaces (45–48). As von Uexküll (198) and, more recently, Yong (199) point out, the world looks, smells, and tastes very different to a fly than it does to us: each organism senses its environment through sensory modalities that have been shaped by different evolutionary constraints and hence yield conscious experiences that are markedly different. Recent investigations using intracranial recordings have linked visual experiences to stable points in a high-dimensional neural state space (200–204). Such empirical investigations, in turn, bear on the philosophical claim that the qualitative nature of conscious experience arises by virtue of its relational similarity to other experiences—the “quality space” hypothesis (48, 205, 206). However, it remains unclear how these proposals for the neural encoding of qualitative features of (and relationships between) sensory experiences can be integrated into the theories of consciousness reviewed above [though see IIT for one approach (107)].
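
Such neural–perceptual mappings are commonly probed with representational similarity analysis: build one dissimilarity matrix over stimuli from neural responses and another from perceptual judgments, then rank-correlate them across stimulus pairs. Below is a minimal sketch in which synthetic data stand in for real recordings and judgments.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Synthetic stand-ins: 8 stimuli x 50 neural features, plus perceptual
# dissimilarity judgments that partly track the same latent structure.
latent = rng.normal(size=(8, 3))                              # "true" qualities
neural = latent @ rng.normal(size=(3, 50)) + 0.5 * rng.normal(size=(8, 50))
perceptual_rdm = pdist(latent) + 0.3 * rng.normal(size=28)    # 8*7/2 = 28 pairs

# Representational similarity analysis: rank-correlate the two dissimilarity
# structures over stimulus pairs (ranks, because neural and perceptual
# dissimilarities live on incommensurable scales).
neural_rdm = pdist(neural, metric="correlation")
rho, p = spearmanr(neural_rdm, perceptual_rdm)
print(f"neural-perceptual RDM correlation: rho = {rho:.2f} (p = {p:.4f})")
```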

While these new methods offer exciting opportunities, they should not lead us to forget hard-won lessons from earlier epochs of psychology (207). Of particular relevance to consciousness research is the issue of demand characteristics: the ways in which attributes of the experimental context may implicitly influence participant behavior and experience (207). If experimental conditions are not carefully matched for participants’ expectations, observed differences may arise from mixtures of compliance and suggestibility-driven changes in experience, rather than from whatever other underlying process may be under examination (208, 209). Even more problematic may be experimenters’ own expectations about their study participants (210–212).

A focus on methods should also attend to the general problems of reliability and replicability that have plagued many areas of science, including psychology, where initiatives aiming to address these issues have become particularly prominent. Both old and new methods should therefore be developed with appropriate rigor and should embrace open science practices, such as pre-registration and open data and materials, wherever possible. This probably means that most experiments will require larger samples and more work, such as measuring and controlling for demand characteristics, than has typically been the case. This seems a small price to pay to ensure that the field generates research of lasting value.
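
To give a sense of what “large samples” can mean in practice, the following sketch runs a prospective power analysis; the effect size, alpha, and power values are hypothetical illustrations rather than recommendations.

```python
# A minimal sketch of prospective power analysis for a two-group comparison.
# All parameter values below are hypothetical illustrations.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.3,  # a smallish effect (Cohen's d), common in psychology
    alpha=0.05,       # conventional false-positive rate
    power=0.9,        # 90% chance of detecting the effect if it is real
)
print(f"Required participants per group: {n_per_group:.0f}")  # roughly 235
```

Even under these unremarkable assumptions, the required sample runs to hundreds of participants, far beyond the sample sizes of many classic consciousness experiments.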

What if we succeed?

The final section of this paper ventures from the near future to the far horizon: imagine a time in which we have “solved consciousness,” whenever that may be. What would be the consequences of such an understanding? How would a complete explanation of the mechanisms of consciousness change the way we understand ourselves? One aspect seems clear—the more certainty we gain in our methods and in our theories, the more we will be able to translate findings from consciousness research into applications that address real-world problems. The impetus to apply consciousness science to address practical problems is increasing, given the accelerating progress in AI, developments in neuronal organoids (213, 214), and increasing societal attention to ethical concerns relating to nonhuman animal welfare and to the beginning and end of human life. For instance, a potential contribution of particular importance to society will be the development of a test (or tests) for consciousness, allowing one to determine, or at least provide an informed judgment about, which entities—infants, patients, fetuses, animals, lab-grown brain organoids, xenobots, AI—are conscious (32, 215, see also the section on “Clinical implications”).

Akin to calls to consider what may ensue if we succeed in building artificial general intelligence (216), here we consider the potential implications of success in consciousness science in four areas: scientific, clinical, ethical, and societal (Figure 3). Importantly, many of these implications may already apply to partial “solutions” as we, hopefully, incrementally approach a full scientific understanding of consciousness.

Figure 3. Scientific, clinical, ethical, and societal implications of “solving” consciousness. The diagram shows a central circle labeled “Consciousness” surrounded by domains including science, society, law, existential questions, animal welfare, human enhancement, wellbeing, prenatal policy, medicine, artificial intelligence, and neurotechnology, each with a brief description (e.g., ethical implications, the neural basis of self, AI consciousness, and brain–body interactions).

Scientific implications

Consciousness science remains somewhat marginal relative to the wider ecosystem of neuroscience and cognitive science (217). Every year, tens of thousands of neuroscientists attend the Society for Neuroscience Annual Meeting in the United States, but only a small fraction of their abstracts mention consciousness or subjective experience (in 2023, for instance, 92 abstracts included “consciousness”, compared with 4,297 that mentioned “behavior” and 7,237 that mentioned “brain”). Instead, the community’s approach to the brain remains mostly engineering-like: geared toward understanding how its component parts interact to produce behavior. This mechanistic approach will continue to accelerate in the next decades with the increased precision of tools such as optogenetics to probe and stimulate circuit function, affording unprecedented control over brain states and behavior. However, without progress in consciousness science, we may reach a point at which the brain, animal or human, is understood at a level similar to our current understanding of AI systems. We might understand, at least in part, how it works, yet still not know how, or even whether, its mechanisms are linked to conscious experience. In contrast, success in consciousness science will provide a rich account of the biological basis of behavior and allow us to precisely determine when and whether certain brain states are conscious or underpin conscious experiences. In turn, it will provide neuroscience with the tools to control consciousness with the same precision as is currently being pursued for behavior³.

Interfacing mechanistic neuroscience with consciousness science will be aided by asking what function(s) consciousness serves. Why should it matter that certain mental states are conscious and others are not if both are able to guide behavior? Indeed, much research on behavioral control and decision-making has proceeded without heeding consciousness as a variable (85). The key concepts of feeling, reward, value, valence, and utility have been approached differently in different fields, and have seldom been connected with consciousness research (33, 69, 218, 219). Debate continues about which broader behavioral functions specifically depend on phenomenal experience. Meanwhile, there is renewed interest in approaching this crucial “why” question from evolutionary (220, 221), philosophical (222), and psychological perspectives (8).

Clinical implications

Success in consciousness science will usher in a new realm of interventions in modern medicine. The field already has a substantial practical impact on the clinical approach to neurological disorders of consciousness (223). Pioneering neuroimaging work over the past two decades has allowed communication with non-responsive patients previously considered to be in a vegetative state (224, 225). As discussed above, other approaches have applied indices of neural complexity, inspired by theories such as IIT, to distinguish between subgroups of patients with and without neural signatures that imply minimal levels of conscious experience (116, 226, 227). Thankfully, such cases of nonresponsive consciousness following coma are relatively rare in the population as a whole. In contrast, in an increasingly ageing population, the incidence of nonresponsive advanced dementias is likely to rise substantially. We know relatively little about what subjective experience is like in advanced dementia (228, 229), but progress in consciousness science will enable similar measures of consciousness level to be applied to such patients, to guide their care and to provide information to families about what their loved one’s experience may be like.
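
To give a flavor of how complexity-based indices work, here is a deliberately simplified sketch: it binarizes a signal and counts its distinct Lempel-Ziv phrases. This is not the published perturbational complexity index pipeline, and the signals below merely stand in for real recordings.

```python
# A deliberately simplified neural-complexity sketch: dictionary-based
# Lempel-Ziv complexity of a binarized signal. This is NOT the published
# PCI pipeline; the signals below merely stand in for real recordings.
import numpy as np

def lz_complexity(seq: str) -> int:
    """Count phrases in a simple dictionary-based (LZ78-style) parsing."""
    phrases, i = set(), 0
    while i < len(seq):
        j = i + 1
        # Grow the current phrase until it has not been seen before
        while j <= len(seq) and seq[i:j] in phrases:
            j += 1
        phrases.add(seq[i:j])
        i = j
    return len(phrases)

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)               # stand-in for one channel
binarized = "".join("1" if x > np.median(noise) else "0" for x in noise)

print("complexity (noise-like):", lz_complexity(binarized))     # many phrases
print("complexity (regular):   ", lz_complexity("01" * 5_000))  # far fewer
```

Differentiated, hard-to-compress activity yields many distinct phrases, whereas stereotyped activity yields few; clinically used indices build on this contrast, after first perturbing the cortex and recording its response.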

Consciousness science also has considerable potential to improve our understanding and management of mental health conditions. These conditions, including depression, anxiety, schizophrenia, and autism spectrum disorders, are leading drivers of the global disease burden (230). In the European Union alone, the healthcare and socioeconomic costs of mental health conditions are estimated to total €600 billion/year (equivalent to >4% of gross domestic product) (231). Major unmet needs remain in this field, with first-line pharmacological and psychotherapeutic interventions, often discovered through serendipity alone, having remained unchanged for decades and showing limited effect sizes overall (232). One concern is that drug discovery efforts focus on behavioral markers of a certain condition (e.g., anxiety or chronic pain) in model animals such as mice, which may not accurately represent the conscious experience of these conditions in humans (233–235). Indeed, in 2017, Thomas Insel, a former head of the United States National Institute of Mental Health, conceded that even US$20 billion of investment had failed to “move the needle” on mental health (236).

From this perspective, it is striking that emotion and affective states in general have been comparatively neglected in consciousness research (though see references 25, 33, 209, 237–241), especially so if phenomenal experience is indeed a matter of it feeling like something to be a conscious creature. Experimental medicine in this space is hampered by the fact that we still know very little about the neural mechanisms that underpin debilitating changes in subjective experience (242). Another interesting approach is to harness unconscious processes to develop more effective treatments for phobias. In one proof-of-concept study, researchers were able to decode fear-related representations that occurred unconsciously and reward them using neurofeedback (243). This, in turn, was reported to reduce physiological fear responses to consciously perceived stimuli without the participant having to undergo (often aversive) conscious exposure therapy.
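
The logic of this decoded-neurofeedback approach can be illustrated with a toy sketch; the data, the decoder, and the reward mapping below are hypothetical simplifications rather than the cited study’s methods.

```python
# A toy illustration of decoded neurofeedback: train a decoder on multivoxel
# patterns, then map its live output to a reward signal. Data, labels, and
# the reward mapping are hypothetical simplifications.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.standard_normal((200, 500))   # 200 trials x 500 voxels
y_train = rng.integers(0, 2, size=200)      # 1 = feared-category pattern

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def feedback_reward(pattern: np.ndarray) -> float:
    """Reward tracks decoder evidence for the target representation,
    with no conscious presentation of the feared stimulus itself."""
    return float(decoder.predict_proba(pattern.reshape(1, -1))[0, 1])

live_sample = rng.standard_normal(500)      # one real-time brain sample
print(f"Reward this trial: {feedback_reward(live_sample):.2f}")
```

Over many trials, rewarding spontaneous occurrences of the target pattern is what allows fear responses to be reshaped without aversive conscious exposure.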

The next frontier is to create an effective bridge from mechanisms to new interventions in mental health by studying markers of subjective experience in both human and animal models. Both sides of this interaction are crucial, as psychiatry and neuroscience need consciousness science and vice versa. Dysfunctions of consciousness in mental health disorders are probably distinctive in humans, having idiosyncratic content—mice may not become consciously depressed, for instance. At the same time, however, many of the same computational primitives supporting the neural control of behavior are conserved across species, raising the potential for biomarkers of conscious experience to be validated in humans and back-translated into animal models of mental health conditions (239–241). More broadly, if we gain a precise understanding of the biological basis of consciousness, it should be possible to design circuit-level interventions, for instance, brain–computer interfaces, that directly target, remediate, and potentially enhance aspects of conscious experience.

Ethical implications

Consciousness in nonhuman animals

Consideration of consciousness is often intertwined with ethical and moral obligations toward (presumed) conscious organisms (244). In ancient traditions, such as the Dhārmic religions, moral obligations toward nonhuman animals often rested on conscious aspects of experience (245, 246). For example, damage caused to an animal or system would be ethically problematic only if it caused that being to consciously experience pain. Today, the corresponding philosophical term is “sentientism”—the notion that moral status follows from the capacity for phenomenal experience. Note that some versions of sentientism restrict the claim to the capacity for valenced experience, for example, the ability to feel some form of “good” or “bad” such as “pleasure” or “pain” [for a discussion of the importance of a general theory of suffering, see Lee (247) and Metzinger (248)]. Most consciousness researchers now reject the notion that only humans have consciousness (163)—a notion that can be traced in different ways to Aristotle and Descartes (249). However, they disagree on the dividing line between conscious and non-conscious entities, or indeed on whether such a dividing line exists at all. IIT, for instance, implies that consciousness is probably a widespread feature of all living and potentially also some non-living complex systems (108). Conversely, cognitive theories such as HOT suggest that a meta-representational neural system is a prerequisite for consciousness, one that might be limited in scope to those creatures who also have the capacity for, perhaps implicit, metacognition (75, 250, 251). As theoretical and experimental progress refines our credence in these various theories, we will be able to better characterize our confidence in which animals are conscious and what kinds of conscious experiences they may have (32). A consensual theory of consciousness would crystallize this dividing line, possibly affecting not only the use of model animals in neuroscience itself (244) but also societal perceptions of animals’ suffering and their use by humans as sources of food, clothing, and medical products (252–255).

The ethical implications of success in consciousness science are intertwined with the kind of scientific explanation of consciousness that would correspond to such success. The example of life is informative. Vitalists held that there was a firm dividing line between the living and the non-living and that this was associated with some additional property, élan vital, that supported life. Biomedical science dissolved the need for such an extra property and led to a move away from “living versus non-living” being a central dividing line of moral status; for example, this can be seen in the adoption of brain death by Western medicine as the more relevant criterion for moral obligations toward sustaining life support. Indeed, consciousness itself now takes a more central stage in discussions of moral obligations toward humans and other animals. In the future, however, advances in consciousness science may begin to dissolve or reformulate those dividing lines based on consciousness, leading them to be replaced by other as yet unknown considerations.

Law

The law presents another broad area of societal implications. Many legal frameworks distinguish between the notions of mens rea (“guilty mind”) and actus reus (“guilty act”), in which mens rea picks out the conscious intent to engage in particular conduct. The neuroscience and biology of voluntary action, along with a deeper philosophical understanding of the concept of “free will”, can be perceived as undermining the foundational notion that conscious intent is under the individual’s control. Already, since the 1960s, brain injuries and diseases have been leveraged for legal defenses or post-hoc exoneration, as in the case of Charles Whitman, the “Texas Tower Sniper” found to have a large brain tumor affecting his amygdala (256). Today, the “my brain made me do it” defense is becoming increasingly popular, despite inherent conceptual difficulties (257). As we elucidate the neural basis of voluntary action (34, 258, 259) and the effects of unconscious processes on decision-making (260), it will become increasingly difficult to discern when moral and criminal responsibility should, if ever, apply (34, 261). These issues are not abstract. Whatever one’s philosophical position on free will may be, judgments are passed every day on people whose brain development and operation will have been affected by factors outside their control.

Artificial consciousness

Success in consciousness science may also result in a detailed understanding of mechanisms that can, in principle, be recreated in an artificial system (40, 262, 263). In philosophy, the position that mental states—including conscious ones—can be instantiated, rather than merely simulated, in an artificial system with different structural and/or material properties is known as substrate independence (or, in more restrained versions, substrate flexibility). Substrate independence/flexibility is closely related to multiple realizability—the idea that similar mental states can be implemented in different ways, although not necessarily in different kinds of material (264). Both substrate independence/flexibility and multiple realizability are, in turn, related to functionalism—the idea that mental states depend on the functional organization of a system, which can include its internal causal structure, rather than on its material properties (265). Within the category of functionalism, the more specific notion of computational functionalism claims that computation provides a sufficient basis for consciousness (266), so that consciousness could be implemented in non-biological information processing devices such as artificial neural networks of the sort deployed in cutting-edge AI systems (262). However, even if some version of computational functionalism is true, which remains debated (40, 267), abstracting the biological information processing sufficient for consciousness away from the messy realities of sensory and motor systems may be difficult, if not impossible, leading any artificial consciousness to end up looking animal-like rather than existing disembodied in software (268, 269). Further, functionalism, whether computational or otherwise, may not be true in the end, in which case other factors may be necessary for consciousness, such as being “biological” or “alive”—a position broadly known as biological naturalism (40, 270).

If artificial consciousness were achieved, whether by design or inadvertently, it would mark a huge shift: consciousness would decouple from biological life, heralding major ethical challenges on at least a similar scale to those discussed in relation to animals. The ethical problems could be even more severe in some regards, since we humans might not be able to recognize, or have any relevant intuitions about, artificial consciousness or its qualitative character. There may also be the potential to mass-produce artificially conscious systems, perhaps with the click of a mouse, raising the possibility (even if a very low-probability one) of introducing vast quantities of new suffering into the world, potentially of a form we could not recognize. These observations provide some reasons why we should not deliberately pursue the goal of creating artificial consciousness (271). There are other reasons too. Artificially conscious systems may be more likely to have their own interests, as distinct from the “interests” endowed by human designers. This could exacerbate the problem of ensuring that their behavior remains aligned with human and broader planetary interests: the “value alignment” problem already prominent in discussions of AI ethics.

In the near future, it is more likely that artificial intelligence, in the form of machine learning, and consciousness will continue to decouple. Such decoupling is likely to be counterintuitive and to affect what we can conceive. For instance, the philosopher John Searle wrote in 2015, “[...] I am convinced that consciousness matters enormously. Try to imagine me writing this book unconsciously, for example.” (272). Now, with the advent of large language models such as the GPT series, generative AI can write coherent and novel prose, and we can imagine how it could write a book unconsciously—though it might not yet be a very good one (273). Here, a different ethical question comes into focus. It is likely that artificial systems such as large language models will continue to improve in their mimicry of humans, leading a large sector of the population to adopt the “intentional stance” (274) toward such systems, attributing to them psychological properties such as beliefs, desires, and intentions, or perhaps even genuinely perceiving and believing these systems to be conscious. In these scenarios, people will intuitively assume such systems are conscious, even if scientists and engineers protest otherwise (40, 275–277). Such misfirings of mindreading machinery may sometimes be benign, for instance when children believe their favorite Pixar character is real (278–280), but this will not always be true. Significant challenges arise when people invest emotional significance in their relationships with seemingly conscious agents, as in the case of a Belgian man who died by suicide after interacting with a chatbot (281).

More generally, the growth and societal penetration of ever more powerful pseudo-conscious artefacts could have considerable impact (40, 282). People may be more open to psychological and behavioral manipulation if they believe the AI systems they are interacting with really feel and understand things. There may be calls to restructure our moral and legal systems around an intuition that an AI is conscious, even when it is not, thereby potentially diverting resources and moral attention away from humans and animals that actually are conscious. Alternatively, we may learn to treat AI systems as if they are not conscious, even though we cannot help feeling that they are [perhaps illusions of artificial consciousness will be cognitively impenetrable, in the same way that some visual illusions are (40)]. In situations like this, we risk brutalizing our minds, a danger identified long ago by Kant, among others (283).

A key question running through all these issues is whether AI systems are more similar to humans in ways that turn out not to matter for consciousness (e.g., seducing our anthropomorphic biases through linguistic ability) and less similar in ways that turn out to be critical (e.g., lacking the biological basis common to all known instances of consciousness, or not implementing the right kind of functional recurrency). A mature science of consciousness, guided by experiment and theory, will play a critical role in these debates.

Societal implications

An increasingly mechanistic understanding of consciousness, provided by experimental science, is likely to reshape how humans see themselves and their place in the universe. The fields of evolution, genetics, and comparative cognition have eroded notions of mysterious human uniqueness in favor of explanations in terms of biological mechanisms. We envisage a continuation of this process as consciousness research continues to mature (though, of course, a backlash is always possible if elements of society feel threatened). The form of “success” that we envisage in consciousness science may render the moral and ethical implications of consciousness more nuanced. In 1950, Alan Turing suggested that “[…] at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” (284). Perhaps a similar process will mean that we are soon no longer vexed by the question, “But is it conscious?”

More problematic may be the societal and ethical questions that accompany the ability to control consciousness in other animals and humans (22), much as the development of clustered regularly interspaced short palindromic repeats (CRISPR) technology has given biomedical scientists unprecedented artificial control over life. There, the concern is not whether the RNA strands or bacteria going into a CRISPR-Cas9 system are categorized as living or non-living. Instead, the overriding concern is that humanity now has the power to determine who and what lives, and the parameters, such as genetics, that affect the trajectory of such a life (see also 285 for a perspective based on bioelectrics rather than genetics). The molecular biology community has led the way in creating globally agreed standards on which forms of genome editing are permissible and which are prohibited—standards that have largely been adhered to, though with some exceptions. By analogy to CRISPR, similar concerns may be raised by a model of consciousness that is detailed enough to allow lasting, systematic manipulations of subjective experience—for instance, through pharmaceutical or brain–machine interface stimulation (286). Here, too, there will be a need for robust frameworks for scientific governance surrounding what types of “consciousness editing” are permissible, whether in the manipulation of consciousness where it already exists or in the de novo creation of conscious systems (248, 263). There will be related ethical questions about what forms of consciousness are ethically desirable or undesirable. These issues may arise sooner than we think. The rise of brain organoids and assembloids as model systems for neuroscience already poses a challenge in understanding how and whether synthetic consciousness may be created in a lab (214, 287, 288).

Conclusions

From Copernicus to Darwin, from Freud to Turing and modern biology and neuroscience, the history of scientific endeavor has repeatedly dethroned humanity from the center of the universe, each time widening our wonder rather than reducing it, and each time rendering our view of ourselves as more part of nature rather than apart from nature. There is every reason to believe that a deeper understanding of consciousness will follow a similar trajectory, enriching our lives with meaning and beauty, rather than draining them of these things.

But every revolution in understanding is different, and consciousness will be different again. How will a complete scientific understanding of consciousness penetrate our everyday conscious experiences and our appreciation of the human condition? As we ourselves are the explanatory target for consciousness science, might it be difficult, perhaps impossible, for us to fully appreciate the explanatory power of a successful theory of consciousness, even if one is developed? In this scenario, a distinctive cognitive disconnect may persist for consciousness science, so that our scientific understanding remains divorced from our lived experience. The alternative is that our experience of being human will change in ways that are currently extremely hard to foresee. One possibility is that we come to see ourselves as more fully embodied, rather than as conscious minds carried about by meat machines (289).

Closer to home, a mature and adequately funded (290) science of consciousness will intervene in many contemporary debates, perhaps resolving them decisively or at least changing their nature. Discussions over how to treat animals, adult human patients, and unborn children will be substantially informed by knowledge of the degree and form of conscious experiences in these organisms. The current politically dominated discourse in these areas may come to seem as archaic as religious debates about the nature of the solar system seem to us today, though this, of course, depends on how political attitudes evolve.

Just as humans are now beginning to be able to create life from scratch, we will also be able to create conscious minds from scratch. What will this new ability to “play God” do to us? Importantly, we may be able to create specific kinds of consciousness rather than just new conscious organisms as we do when having children.

Finally, it is tempting to wonder what other discoveries could follow or precede the scientific explanation of consciousness—discoveries that could match its potential for reframing the human condition. Predictions here are especially futile, given the prevalence of unknown unknowns, but one possibility is finding evidence of extraterrestrial intelligent life. Such a discovery could highlight the diversity of conscious minds and the uniqueness of our own, and change how we see ourselves within the vastness of the universe. The difference between a universe teeming with mere life and one suffused with awareness is simply astronomical.

Acknowledgments

The authors would like to express their gratitude to Stephen M. Fleming (University College London) for his contributions to the development of the ideas presented in the manuscript. We also thank Anastassia Loukianov (Université Libre de Bruxelles, Belgium) for her assistance in editing and coordinating the integration of multiple documents.

Statements

Author contributions

AC: Conceptualization, Project administration, Visualization, Writing – original draft, Writing – review & editing.

LM: Conceptualization, Visualization, Writing – original draft, Writing – review & editing.

AKS: Conceptualization, Visualization, Writing – original draft, Writing – review & editing.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

Funding

The authors declared that financial support was received for this work and/or its publication.

AC is a Research Director with the National Fund for Scientific Research (FRS-FNRS) Belgium, a member of the Royal Academy of Sciences, Letters and Fine Arts of Belgium, and a Fellow of the Canadian Institute for Advanced Research (CIFAR) Brain, Mind and Consciousness program. This work was supported by the European Research Council Advanced grant no. 101055060 “EXPERIENCE” to AC.

LM is a Tanenbaum Fellow and the co-director of the CIFAR Brain, Mind and Consciousness program. Her work is supported by the European Research Council Starting grant no. 101077144 “IndUncon”.

AKS is the co-director of the CIFAR Brain, Mind and Consciousness program. His work is supported by the European Research Council Advanced Grant no. 101019254 “CONSCIOUS”.

The funders were not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.

Conflict of interest

AKS is an advisor to Conscium Ltd and AllJoined Inc.

The remaining authors declared that this work was conducted in the absence of financial relationships that could be construed as a potential conflict of interest.

The Frontiers in Science Editorial Office assisted in the conceptualization and design of this article’s figures.

The authors AC and LM declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.

Generative AI statement

The authors declared that no generative AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

  1. ^ See https://www.arc-ethos.org.
  2. ^ Proxies are distinct from approximations. They are used to stand in for quantities that are not directly measurable or accessible. For example, carbon dioxide levels from ice cores serve as proxies for historical global temperatures. In contrast, approximations are values that are close to, and tend to asymptote toward, a target quantity but which may be easier or possible to measure. To illustrate, velocities calculated according to Newton’s laws are good approximations of actual velocity if relativistic effects are small. Something being used as a proxy does not mean it can be used as, or interpreted as, an approximation.
  3. ^ As Metzinger puts it, a future neurotechnology may enable “phenotechnology” (22).

References

1. Minot CS. The problem of consciousness in its biological aspects. Science (1902) 16(392):1–12. doi: 10.1126/science.16.392.1

2. Crick FHC and Koch C. Towards a neurobiological theory of consciousness. Semin Neurosci (1990) 2:263–75. Available at https://profiles.nlm.nih.gov/101584582X469

3. Metzinger T. Neural correlates of consciousness: empirical and conceptual questions. Cambridge, MA: MIT Press (2000). doi: 10.7551/mitpress/4928.001.0001

4. Kuhn RL. A landscape of consciousness: toward a taxonomy of explanations and implications. Prog Biophys Mol Biol (2024) 190:28–169. doi: 10.1016/j.pbiomolbio.2023.12.003

5. Yaron I, Melloni L, Pitts M, and Mudrik L. The ConTraSt database for analysing and comparing empirical studies of consciousness theories. Nat Hum Behav (2022) 6(4):593–604. doi: 10.1038/s41562-021-01284-5

6. Kahneman D. Adversarial collaboration: an EDGE lecture by Daniel Kahneman [video] (2022). Available at: https://www.edge.org/adversarial-collaboration-daniel-kahneman

7. Lau HC. Are we studying consciousness yet? In: Weiskrantz L and Davies M, editors. Frontiers of consciousness. New York, NY: Oxford University Press (2008) 245–58. doi: 10.1093/acprof:oso/9780199233151.003.0008

8. Cleeremans A and Tallon-Baudry C. Consciousness matters: phenomenal experience has functional value. Neurosci Conscious (2022) 2022(1):niac007. doi: 10.1093/nc/niac007

9. Schurger A and Graziano M. Consciousness explained or described? Neurosci Conscious (2022) 2022(1):niac001. doi: 10.1093/nc/niac001

10. Zeman A. Consciousness. Brain (2001) 124(7):1263–89. doi: 10.1093/brain/124.7.1263

11. Bayne T, Hohwy J, and Owen AM. Are there levels of consciousness? Trends Cogn Sci (2016) 20(6):405–13. doi: 10.1016/j.tics.2016.03.009

12. Seth AK and Bayne T. Theories of consciousness. Nat Rev Neurosci (2022) 23(7):439–52. doi: 10.1038/s41583-022-00587-4

13. Brentano F. Psychology from an empirical standpoint. London: Routledge and Kegan Paul; New York: Humanities Press (1973).

14. Metzinger T. The elephant and the blind. Cambridge, MA: MIT Press (2024). doi: 10.7551/mitpress/15196.001.0001

15. Hudetz AG. Does consciousness have dimensions? J Conscious Stud (2024) 31(7):55–73. doi: 10.53765/20512201.31.7.055

16. Nagel T. What is like to be a bat? Philos Rev (1974) 83:434–50. doi: 10.2307/2183914

17. Damasio A. Self comes to mind: constructing the conscious brain. New York, NY: Pantheon Books (2010).

18. Cleeremans A, Achoui D, Beauny A, Keuninckx L, Martin J-R, Muñoz-Moldes S, et al. Learning to be conscious. Trends Cogn Sci (2020) 24(2):112–23. doi: 10.1016/j.tics.2019.11.011

19. Fleming SM. Know thyself: the science of self-awareness. New York, NY: Basic Books (2021).

20. Frith CD. Consciousness, (meta) cognition, and culture. Q J Exp Psychol (Hove) (2023) 76(8):1711–23. doi: 10.1177/17470218231164502

21. Hofstadter D. I am a strange loop. New York, NY: Basic Books (2007).

22. Metzinger T. The ego tunnel – the science of the mind and the myth of the self. New York, NY: Basic Books (2009).

23. Seth AK. Being you. A new science of consciousness. London: Faber & Faber (2021).

24. Bechara A, Damasio H, and Damasio AR. Emotion, decision making and the orbitofrontal cortex. Cereb Cortex (2000) 10(3):295–307. doi: 10.1093/cercor/10.3.295

25. Seth AK. Interoceptive inference, emotion, and the embodied self. Trends Cogn Sci (2013) 17(11):565–73. doi: 10.1016/j.tics.2013.09.007

26. Barrett LF. The theory of constructed emotion: an active inference account of interoception and categorization. Soc Cogn Affect Neurosci (2017) 12(11):1833. doi: 10.1093/scan/nsx060

27. Cleeremans A. The radical plasticity thesis: how the brain learns to be conscious. Front Psychol (2011) 2:86. doi: 10.3389/fpsyg.2011.00086

28. Shea N and Frith CD. The global workspace needs metacognition. Trends Cogn Sci (2019) 23(7):560–71. doi: 10.1016/j.tics.2019.04.007

29. Csikszentmihalyi M. Flow: the psychology of optimal experience. New York, NY: Harper and Row (1990).

30. Ben-Haim MS, Dal Monte O, Fagan NA, Dunham Y, Hassin RR, Chang SWC, et al. Disentangling perceptual awareness from nonconscious processing in rhesus monkeys (Macaca mulatta). Proc Natl Acad Sci USA (2021) 118(15):e2017543118. doi: 10.1073/pnas.2017543118

31. Birch J. Global workspace theory and animal consciousness. Philos Top (2020) 48(1):21–37. doi: 10.5840/philtopics20204812

32. Bayne T, Seth AK, Massimini M, Shepherd J, Cleeremans A, Fleming SM, et al. Tests for consciousness in humans and beyond. Trends Cogn Sci (2024) 28(5):454–66. doi: 10.1016/j.tics.2024.01.010

33. Damasio A. The feeling of what happens: body and emotion in the making of consciousness. New York, NY: Harcourt Brace & Company (1999).

34. Haggard P. Human volition: towards a neuroscience of will. Nat Rev Neurosci (2008) 9(12):934–46. doi: 10.1038/nrn2497

35. Block N. On a confusion about a function of consciousness. Behav Brain Sci (1995) 18(2):227–47. doi: 10.1017/S0140525X00038188

36. Lamme VAF. How neuroscience will change our view on consciousness. Cogn Neurosci (2010) 1(3):204–20. doi: 10.1080/17588921003731586

37. Cohen MA and Dennett DC. Consciousness cannot be separated from function. Trends Cogn Sci (2011) 15(8):358–64. doi: 10.1016/j.tics.2011.06.008

38. Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, et al. Mastering the game of go without human knowledge. Nature (2017) 550(7676):354–9. doi: 10.1038/nature24270

39. Chalmers D. Could a large language model be conscious. Boston Review (2023). Available at: https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/

40. Seth AK. Conscious artificial intelligence and biological naturalism. Behav Brain Sci (2025) 1–42. doi: 10.1017/S0140525X25000032

41. Amir YZ, Assaf Y, Yovel Y, and Mudrik L. Experiencing without knowing? Empirical evidence for phenomenal consciousness without access. Cognition (2023) 238:105529. doi: 10.1016/j.cognition.2023.105529

42. Suzuki K, Seth AK, and Schwartzman DJ. Modelling phenomenological differences in aetiologically distinct visual hallucinations using deep neural networks. Front Hum Neurosci (2024) 17:1159821. doi: 10.3389/fnhum.2023.1159821

43. Moriguchi Y, Watanabe R, Sakata C, Zeleznikow-Johnston A, Wang J, Saji N, et al. Comparing color qualia structures through a similarity task in young children versus adults. Proc Natl Acad Sci USA (2025) 122(11):e2415346122. doi: 10.1073/pnas.2415346122

44. Tononi G, Boly M, Massimini M, and Koch C. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci (2016) 17(7):450–61. doi: 10.1038/nrn.2016.44

45. Lau H, Michel M, LeDoux JE, and Fleming SM. The mnemonic basis of subjective experience. Nat Rev Psychol (2022) 1(8):479–88. doi: 10.1038/s44159-022-00068-6

46. Tsuchiya N and Saigo H. A relational approach to consciousness: categories of level and contents of consciousness. Neurosci Conscious (2021) 2021(2):niab034. doi: 10.1093/nc/niab034

47. Kawakita G, Zeleznikow-Johnston A, Tsuchiya N, and Oizumi M. Gromov–Wasserstein unsupervised alignment reveals structural correspondences between the color similarity structures of humans and large language models. Sci Rep (2024) 14:15917. doi: 10.1038/s41598-024-65604-1

48. Fleming SM and Shea N. Quality space computations for consciousness. Trends Cogn Sci (2024) 28(10):896–906. doi: 10.1016/j.tics.2024.06.007

49. Bargh JA. Before you know it: the unconscious reasons we do what we do. New York, NY: Atria Books (2017).

50. Dijksterhuis A and Nordgren LF. A theory of unconscious thought. Perspect Psychol Sci (2006) 1(2):95–109. doi: 10.1111/j.1745-6916.2006.00007.x

51. Hassin RR, Uleman JS, and Bargh JA. The new unconscious. Oxford: Oxford University Press (2005).

52. Winkielman P and Berridge KC. Unconscious emotion. Curr Dir Psychol Sci (2004) 13(3):120–3. doi: 10.1111/j.0963-7214.2004.00288.x

53. Penfield W. The mystery of the mind: a critical study of consciousness and the human brain. Princeton, NJ: Princeton University Press (1975).

54. Gazzaniga MS, Bogen JE, and Sperry RW. Some functional effects of sectioning the cerebral commissures in man. Proc Natl Acad Sci USA (1962) 48(10):1765–9. doi: 10.1073/pnas.48.10.1765

55. LeDoux JE, Michel M, and Lau H. A little history goes a long way toward understanding why we study consciousness the way we do today. Proc Natl Acad Sci USA (2020) 117(13):6976–84. doi: 10.1073/pnas.1921623117

56. Dehaene S and Changeux JP. Experimental and theoretical approaches to conscious processing. Neuron (2011) 70(2):200–27. doi: 10.1016/j.neuron.2011.03.018

57. Horikawa T, Cowen AS, Keltner D, and Kamitani Y. The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions. iScience (2020) 23(5):101060. doi: 10.1016/j.isci.2020.101060

58. Northoff G, Heinzel A, de Greck M, Bermpohl F, Dobrowolny H, and Panksepp J. Self-referential processing in our brain—a meta-analysis of imaging studies on the self. Neuroimage (2006) 31(1):440–57. doi: 10.1016/j.neuroimage.2005.12.002

59. Casarotto S, Comanducci A, Rosanova M, Sarasso S, Fecchio M, Napolitani M, et al. Stratification of unresponsive patients by an independently validated index of brain complexity. Ann Neurol (2016) 80(5):718–29. doi: 10.1002/ana.24779

60. Del Cul A, Baillet S, and Dehaene S. Brain dynamics underlying the nonlinear threshold for access to consciousness. PloS Biol (2007) 5(10):e260. doi: 10.1371/journal.pbio.0050260

61. Schurger A, Sarigiannidis I, Naccache L, Sitt JD, and Dehaene S. Cortical activity is more stable when sensory stimuli are consciously perceived. Proc Natl Acad Sci USA (2015) 112(16):E2083–92. doi: 10.1073/pnas.1418730112

62. Dembski C, Koch C, and Pitts M. Perceptual awareness negativity: a physiological correlate of sensory consciousness. Trends Cogn Sci (2021) 25(8):660–70. doi: 10.1016/j.tics.2021.05.009

63. Samaha J, Iemi L, and Postle BR. Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy. Conscious Cogn (2017) 54:47–55. doi: 10.1016/j.concog.2017.02.005

64. Fisch L, Privman E, Ramot M, Harel M, Nir Y, Kipervasser S, et al. Neural “ignition”: enhanced activation linked to perceptual awareness in human ventral stream visual cortex. Neuron (2009) 64(4):562–74. doi: 10.1016/j.neuron.2009.11.001

65. Koch C, Massimini M, Boly M, and Tononi G. Neural correlates of consciousness: progress and problems. Nat Rev Neurosci (2016) 17(5):307–21. doi: 10.1038/nrn.2016.22

66. Mashour GA, Roelfsema PR, Changeux JP, and Dehaene S. Conscious processing and the global neuronal workspace hypothesis. Neuron (2020) 105(5):776–98. doi: 10.1016/j.neuron.2020.01.026

67. Parvizi J and Damasio A. Consciousness and the brainstem. Cognition (2001) 79(1–2):135–60. doi: 10.1016/S0010-0277(00)00127-X

68. Spindler LRB, Luppi AI, Adapa RM, Craig MM, Coppola P, Peattie ARD, et al. Dopaminergic brainstem disconnection is common to pharmacological and pathological consciousness perturbation. Proc Natl Acad Sci USA (2021) 118(30):e2026289118. doi: 10.1073/pnas.2026289118

69. Solms M. The hidden spring. London: Profile Books (2021).

70. Fang Z, Dang Y, Ping A, Wang C, Zhao Q, Zhao H, et al. Human high-order thalamic nuclei gate conscious perception through the thalamofrontal loop. Science (2025) 388(6742):eadr3675. doi: 10.1126/science.adr3675

71. Luo Q, Mitchell D, Cheng X, Mondillo K, McCaffrey D, Holroyd T, et al. Visual awareness, emotion, and gamma band synchronization. Cereb Cortex (2009) 19(8):1896–904. doi: 10.1093/cercor/bhn216

72. Yuval-Greenberg S, Tomer O, Keren AS, Nelken I, and Deouell LY. Transient induced gamma-band response in EEG as a manifestation of miniature saccades. Neuron (2008) 58(3):429–41. doi: 10.1016/j.neuron.2008.03.027

73. De Graaf TA, Hsieh P-J, and Sack AT. The ‘correlates’ in neural correlates of consciousness. Neurosci Biobehav Rev (2012) 36(1):191–7. doi: 10.1016/j.neubiorev.2011.05.012

74. Aru J, Bachmann T, Singer W, and Melloni L. Distilling the neural correlates of consciousness. Neurosci Biobehav Rev (2012) 36(2):737–46. doi: 10.1016/j.neubiorev.2011.12.003

75. Lau H. In consciousness we trust: the cognitive neuroscience of subjective experience. Oxford: Oxford University Press (2022). doi: 10.1093/oso/9780198856771.001.0001

76. Tsuchiya N, Wilke M, Frässle S, and Lamme VAF. No-report paradigms: extracting the true neural correlates of consciousness. Trends Cogn Sci (2015) 19(12):757–70. doi: 10.1016/j.tics.2015.10.002

77. Lau HC and Passingham RE. Relative blindsight in normal observers and the neural correlate of visual consciousness. Proc Natl Acad Sci USA (2006) 103(49):18763–8. doi: 10.1073/pnas.0607716103

78. Weiskrantz L, Warrington EK, Sanders MD, and Marshall J. Visual capacity in the hemianopic field following a restricted occipital ablation. Brain (1974) 97(4):709–28. doi: 10.1093/brain/97.1.709

79. Bao Y, Zhou B, Yu X, Mao L, Gutyrchik E, Paolini M, et al. Conscious vision in blindness: a new perceptual phenomenon implemented in the “wrong” side of the brain. Psych J (2024) 13(6):885–92. doi: 10.1002/pchj.787

80. Overgaard M, Fehl K, Mouridsen K, Bergholt B, and Cleeremans A. Seeing without seeing? Degraded conscious vision in a blindsight patient. PloS One (2008) 3(8):e3028. doi: 10.1371/journal.pone.0003028

81. Phillips I. Blindsight is qualitatively degraded conscious vision. Psychol Rev (2021) 128(3):558–84. doi: 10.1037/rev0000254

82. Doerig A, Schurger A, and Herzog MH. Hard criteria for empirical theories of consciousness. Cogn Neurosci (2021) 12(2):41–62. doi: 10.1080/17588928.2020.1772214

83. Northoff G and Lamme VAF. Neural signs and mechanisms of consciousness: is there a potential convergence of theories of consciousness in sight? Neurosci Biobehav Rev (2020) 118:568–87. doi: 10.1016/j.neubiorev.2020.07.019

84. Signorelli CM, Szczotka J, and Prentner R. Explanatory profiles of models of consciousness—towards a systematic classification. Neurosci Conscious (2021) 2021(2):niab021. doi: 10.1093/nc/niab021

85. Baars BJ. A cognitive theory of consciousness. Cambridge: Cambridge University Press (1988).

86. Fodor JA. The modularity of mind. Boston, MA: Bradford Books (1983). doi: 10.7551/mitpress/4737.001.0001

87. Dehaene S, Sergent C, and Changeux JP. A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proc Natl Acad Sci USA (2003) 100(14):8520–5. doi: 10.1073/pnas.1332574100

88. Block N. Consciousness, accessibility, and the mesh between psychology and neuroscience. Behav Brain Sci (2007) 30(5–6):481–99. doi: 10.1017/S0140525X07002786

89. Naccache L. Why and how access consciousness can account for phenomenal consciousness. Philos Trans R Soc Lond B Biol Sci (2018) 373(1755):20170357. doi: 10.1098/rstb.2017.0357

90. Mudrik L, Boly M, Dehaene S, Fleming SM, Lamme V, Seth A, et al. Unpacking the complexities of consciousness: theories and reflections. Neurosci Biobehav Rev (2025) 170:106053. doi: 10.1016/j.neubiorev.2025.106053

91. Van Vugt B, Dagnino B, Vartak D, Safaai H, Panzeri S, Dehaene S, et al. The threshold for conscious report: signal loss and response bias in visual and frontal cortex. Science (2018) 360(6388):537–42. doi: 10.1126/science.aar7186

92. Gaillard R, Dehaene S, Adam C, Clémenceau S, Hasboun D, Baulac M, et al. Converging intracranial markers of conscious access. PloS Biol (2009) 7(3):e61. doi: 10.1371/journal.pbio.1000061

93. Rosenthal D. Consciousness and mind. Oxford: Oxford University Press (2006).

94. Brown R. The HOROR theory of phenomenal consciousness. Philos Stud (2015) 172(7):1783–94. doi: 10.1007/s11098-014-0388-7

95. Fleming SM. Awareness as inference in a higher-order state space. Neurosci Conscious (2020) 2020(1):niz020. doi: 10.1093/nc/niz020

96. Lau H and Rosenthal D. Empirical support for higher-order theories of consciousness. Trends Cogn Sci (2011) 15(8):365–73. doi: 10.1016/j.tics.2011.05.009

97. Brown R, Lau H, and LeDoux JE. Understanding the higher-order approach to consciousness. Trends Cogn Sci (2019) 23(9):754–68. doi: 10.1016/j.tics.2019.06.009

98. Crick FHC and Koch C. A framework for consciousness. Nat Neurosci (2003) 6(2):119–26. doi: 10.1038/nn0203-119

99. Persaud N, Davidson M, Maniscalco B, Mobbs D, Passingham RE, Cowey A, et al. Awareness-related activity in prefrontal and parietal cortices in blindsight reflects more than superior visual performance. Neuroimage (2011) 58(2):605–11. doi: 10.1016/j.neuroimage.2011.06.081

100. Cortese A, Amano K, Koizumi A, Kawato M, and Lau H. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance. Nat Commun (2016) 7(1):13669. doi: 10.1038/ncomms13669

101. Shekhar M and Rahnev D. Distinguishing the roles of dorsolateral and anterior PFC in visual metacognition. J Neurosci (2018) 38(22):5078–87. doi: 10.1523/JNEUROSCI.3484-17.2018

102. Fleming SM, Ryu J, Golfinos JG, and Blackmon KE. Domain-specific impairment in metacognitive accuracy following anterior prefrontal lesions. Brain (2014) 137(10):2811–22. doi: 10.1093/brain/awu221

103. Miyamoto K, Setsuie R, Osada T, and Miyashita Y. Reversible silencing of the frontopolar cortex selectively impairs metacognitive judgment on non-experience in primates. Neuron (2018) 97(4):980–989.e6. doi: 10.1016/j.neuron.2017.12.040

104. Panagiotaropoulos TI. An integrative view of the role of prefrontal cortex in consciousness. Neuron (2024) 112(10):1626–41. doi: 10.1016/j.neuron.2024.04.028

105. Carruthers P and Gennaro R. Higher-order theories of consciousness. In Zalta EN and Nodelman U, editors. The Stanford Encyclopedia of Philosophy (Fall 2020 edition). Stanford, CA: Stanford University Press (2020). Available at: https://plato.stanford.edu/entries/consciousness-higher/

106. Tononi G. Consciousness as integrated information: a provisional manifesto. Biol Bull (2008) 215(3):216–42. doi: 10.2307/25470707

107. Albantakis L, Barbosa L, Findlay G, Grasso M, Haun AM, Marshall W, et al. Integrated information theory (IIT) 4.0: formulating the properties of phenomenal existence in physical terms. PloS Comput Biol (2023) 19(10):e1011465. doi: 10.1371/journal.pcbi.1011465

108. Tononi G and Koch C. Consciousness: here, there and everywhere? Philos. Trans R Soc Lond B Biol Sci (2015) 370(1668):20140167. doi: 10.1098/rstb.2014.0167

109. Haun A and Tononi G. Why does space feel the way it does? Towards a principled account of spatial experience. Entropy (2019) 21(12):1160. doi: 10.3390/e21121160

110. Comolatti R, Grasso M, and Tononi G. Why does time feel the way it does? Towards a principled account of temporal experience. arXiv [preprint] (2024). doi: 10.48550/arXiv.2412.13198

111. Mediano PAM, Rosas FE, Bor D, Seth AK, and Barrett AB. The strength of weak integrated information theory. Trends Cogn Sci (2022) 26(8):646–55. doi: 10.1016/j.tics.2022.04.008

112. Bayne T. On the axiomatic foundations of the integrated information theory of consciousness. Neurosci Conscious (2018) 2018(1):niy007. doi: 10.1093/nc/niy007

113. Klincewicz M, Cheng T, Schmitz M, Sebastián MÁ, and Snyder JS. What makes a theory of consciousness unscientific? Nat Neurosci (2025) 28(4):689–93. doi: 10.1038/s41593-025-01881-x

114. Tononi G, Albantakis L, Barbosa L, Boly M, Cirelli C, Comolatti R, et al. Consciousness or pseudo-consciousness? A clash of two paradigms. Nat Neurosci (2025) 28(4):694–702. doi: 10.1038/s41593-025-01880-y

115. Rosanova M, Gosseries O, Casarotto S, Boly M, Casali AG, Bruno MA, et al. Recovery of cortical effective connectivity and recovery of consciousness in vegetative patients. Brain (2012) 135(4):1308–20. doi: 10.1093/brain/awr340

116. Casali AG, Gosseries O, Rosanova M, Boly M, Sarasso S, Casali KR, et al. A theoretically based index of consciousness independent of sensory processing and behavior. Sci Transl Med (2013) 5(198):198ra105. doi: 10.1126/scitranslmed.3006294

117. Cogitate Consortium, Ferrante O, Gorska-Klimowska U, Henin S, Hirschhorn R, Khalaf A, et al. Adversarial testing of global neuronal workspace and integrated information theories of consciousness. Nature (2025) 642(8066):133–42. doi: 10.1038/s41586-025-08888-1

118. Hohwy J and Seth AK. Predictive processing as a systematic basis for identifying the neural correlates of consciousness. PhiMiSci (2020) 1:3. doi: 10.33735/phimisci.2020.II.64

119. Friston KJ, Daunizeau J, Kilner J, and Kiebel SJ. Action and behavior: a free-energy formulation. Biol Cybern (2010) 102(3):227–60. doi: 10.1007/s00422-010-0364-z

120. Clark A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav Brain Sci (2013) 36(3):181–204. doi: 10.1017/S0140525X12000477

121. Hohwy J. The predictive mind. Oxford: Oxford University Press (2013). doi: 10.1093/acprof:oso/9780199682737.001.0001

122. Helmholtz HL. Handbuch der physiologischen Optik. Leipzig: L. Voss (1st edition 1867; 3rd edition 1910, with extensive commentary by editors Gullstrand A, von Kries J, and Nagel W).

123. Barrett LF and Simmons WK. Interoceptive predictions in the brain. Nat Rev Neurosci (2015) 16(7):419–29. doi: 10.1038/nrn3950

124. Seth AK. The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies. In: Metzinger T and Windt J, editors. Open MIND. Frankfurt am Main: MIND Group (2015). doi: 10.15502/9783958570108

125. Friston K. The free-energy principle: a unified brain theory? Nat Rev Neurosci (2010) 11(2):127–38. doi: 10.1038/nrn2787

126. Friston K, Da Costa L, Sajid N, Heins C, Ueltzhöffer K, Pavliotis GA, et al. The free energy principle made simpler but not too simple. Phys Rep (2023) 1024:1–29. doi: 10.1016/j.physrep.2023.07.001

127. Pennartz CMA. Consciousness, representation, action: the importance of being goal directed. Trends Cogn Sci (2018) 22(2):137–53. doi: 10.1016/j.tics.2017.10.006

128. Friston KJ. The mathematics of mind-time. Aeon (2017). Available at: https://aeon.co/essays/consciousness-is-not-a-thing-but-a-process-of-inference

129. Lamme VA and Roelfsema PR. The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci (2000) 23(11):571–9. doi: 10.1016/S0166-2236(00)01657-X

130. Lamme VAF. Towards a true neural stance on consciousness. Trends Cogn Sci (2006) 10(11):494–501. doi: 10.1016/j.tics.2006.09.001

131. Parr T, Corcoran AW, Friston KJ, and Hohwy J. Perceptual awareness and active inference. Neurosci Conscious (2019) 2019(1):niz012. doi: 10.1093/nc/niz012

132. Novicky F, Parr T, Friston K, Mirza MB, and Sajid N. Bistable perception, precision and neuromodulation. Cereb Cortex (2024) 34(1):bhad401. doi: 10.1093/cercor/bhad401

133. Limanowski J and Blankenburg F. Minimal self-models and the free energy principle. Front Hum Neurosci (2013) 7:547. doi: 10.3389/fnhum.2013.00547

134. Whyte CJ and Smith R. The predictive global neuronal workspace: a formal active inference model of visual consciousness. Prog Neurobiol (2021) 199:101918. doi: 10.1016/j.pneurobio.2020.101918

135. Hardstone R, Zhu M, Flinker A, Melloni L, Devore S, Friedman D, et al. Long-term priors influence visual perception through recruitment of long-range feedback. Nat Commun (2021) 12(1):6288. doi: 10.1038/s41467-021-26544-w

136. Tal A, Sar-Shalom M, Krawitz T, Biderman D, and Mudrik L. Awareness is needed for contextual effects in ambiguous object recognition. Cortex (2024) 173:49–60. doi: 10.1016/j.cortex.2024.01.003

137. Solomon SS, Tang H, Sussman E, and Kohn A. Limited evidence for sensory prediction error responses in visual cortex of macaques and humans. Cereb Cortex (2021) 31(6):3136–52. doi: 10.1093/cercor/bhab014

138. Schultz W. Dopamine reward prediction error coding. Dialogues Clin Neurosci (2016) 18(1):23–32. doi: 10.31887/DCNS.2016.18.1/wschultz

139. de Lange FP, Heilbron M, and Kok P. How do expectations shape perception? Trends Cogn Sci (2018) 22(9):764–79. doi: 10.1016/j.tics.2018.06.002

140. Wiese W. The science of consciousness does not need another theory, it needs a minimal unifying model. Neurosci Conscious (2020) 2020(1):niaa013. doi: 10.1093/nc/niaa013

141. Chalmers D. Facing up to the problem of consciousness. J Conscious Stud (1995) 2(3):200–19. Available at: https://consc.net/papers/facing.pdf

142. Huizenga JR. Cold fusion: the scientific fiasco of the century. Rochester, NY: University of Rochester Press (1992).

143. Rettig M. Using the multiple intelligences to enhance instruction for young children and young children with disabilities. Early Child Educ J (2005) 32:255–9. doi: 10.1007/s10643-004-0865-2

144. Markram H. The Blue Brain Project. Nat Rev Neurosci (2006) 7(2):153–60. doi: 10.1038/nrn1848

145. Newborn M. Deep Blue: an artificial intelligence milestone. New York, NY: Springer (2010). doi: 10.1007/978-0-387-21790-1

146. OpenAI, Achiam J, Adler S, Agarwal S, Ahmad L, Akkaya I, et al. GPT-4 technical report. arXiv [preprint, version 1] (2023). doi: 10.48550/arXiv.2303.08774

147. du Sautoy M. The creativity code: art and innovation in the age of AI. Cambridge, MA: Harvard University Press (2019).

148. Fazekas P, Cleeremans A, and Overgaard M. A construct-first approach to consciousness science. Neurosci Biobehav Rev (2024) 156:105480. doi: 10.1016/j.neubiorev.2023.105480

149. Boly M, Massimini M, Tsuchiya N, Postle BR, Koch C, and Tononi G. Are the neural correlates of consciousness in the front or in the back of the cerebral cortex? Clinical and neuroimaging evidence. J Neurosci (2017) 37(40):9603–13. doi: 10.1523/JNEUROSCI.3218-16.2017

150. Odegaard B, Knight RT, and Lau H. Should a few null findings falsify prefrontal theories of conscious perception? J Neurosci (2017) 37(40):9593–602. doi: 10.1523/JNEUROSCI.3217-16.2017

151. Cleeremans A. Computational correlates of consciousness. Prog Brain Res (2005) 150:81–98. doi: 10.1016/S0079-6123(05)50007-4

152. Dehaene S and Naccache L. Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition (2001) 79(1–2):1–37. doi: 10.1016/S0010-0277(00)00123-2

153. Schupbach JN and Sprenger J. The logic of explanatory power. Philos Sci (2011) 78(1):105–27. doi: 10.1086/658111

154. Ylikoski P and Kuorikoski J. Dissecting explanatory power. Philos Stud (2010) 148(2):201–19. doi: 10.1007/s11098-008-9324-z

155. Tsuchiya N, Phillips S, and Saigo H. Enriched category as a model of qualia structure based on similarity judgments. Conscious Cogn (2022) 101:103319. doi: 10.1016/j.concog.2022.103319

156. Ramstead MJD, Seth AK, Hesp C, Sandved-Smith L, Mago J, Lifshitz M, et al. From generative models to generative passages: a computational approach to (neuro) phenomenology. Rev Philos Psychol (2022) 13(4):829–57. doi: 10.1007/s13164-021-00604-y

157. Sandved-Smith L. A computational model of Minimal Phenomenal Experience (MPE). Preprints.org [preprint, version 3] (2024). Available at: https://www.preprints.org/manuscript/202411.0649/v1

158. Suzuki M and Larkum ME. General anesthesia decouples cortical pyramidal neurons. Cell (2020) 180(4):666–676.e13. doi: 10.1016/j.cell.2020.01.024

159. Metzinger T. Minimal phenomenal experience: meditation, tonic alertness, and the phenomenology of “pure” consciousness. PhiMiSci (2020) 1(I):1–44. doi: 10.33735/phimisci.2020.I.46

160. ATLAS Collaboration, Aad G, Abat E, Abdallah J, Abdelalim AA, Abdesselam A, et al. The ATLAS experiment at the CERN Large Hadron Collider. J Instrum (2008) 3(8):S08003. doi: 10.1088/1748-0221/3/08/S08003

161. Abbott BP, Abbott R, Adhikari R, Ajith P, Allen B, Allen G, et al. LIGO: the laser interferometer gravitational-wave observatory. Rep Prog Phys (2009) 72(7):076901. doi: 10.1088/0034-4885/72/7/076901

162. Abbott LF, Angelaki DE, Carandini M, Churchland AK, Dan Y, Dayan P, et al. An international laboratory for systems and computational neuroscience. Neuron (2017) 96(6):1213–8. doi: 10.1016/j.neuron.2017.12.013

163. Francken JC, Beerendonk L, Molenaar D, Fahrenfort JJ, Kiverstein JD, Seth AK, et al. An academic survey on theoretical foundations, common assumptions and the current state of consciousness science. Neurosci Conscious (2022) 2022(1):niac011. doi: 10.1093/nc/niac011

164. Negro N. (Dis)confirming theories of consciousness and their predictions: towards a Lakatosian consciousness science. Neurosci Conscious (2024) 2024(1):niae012. doi: 10.1093/nc/niae012

165. Eriksen CW. Subception: fact or artifact? Psychol Rev (1956) 63(1):74–80. doi: 10.1037/h0044441

166. Holender D. Semantic activation without conscious identification in dichotic listening, parafoveal vision and visual masking: a survey and appraisal. Behav Brain Sci (1986) 9(1):1–23. doi: 10.1017/S0140525X00021269

167. Reingold EM and Merikle PM. Using direct and indirect measures to study perception without awareness. Percept Psychophys (1988) 44(6):563–75. doi: 10.3758/BF03207490

168. Shanks DR. Regressive research: the pitfalls of post hoc data selection in the study of unconscious mental processes. Psychon Bull Rev (2017) 24(3):752–75. doi: 10.3758/s13423-016-1170-y

169. Hassin RR. Yes it can: on the functional abilities of the human unconscious. Perspect Psychol Sci (2013) 8(2):195–207. doi: 10.1177/1745691612460684

170. Newell BR and Shanks DR. Unconscious influences on decision making: a critical review. Behav Brain Sci (2014) 37(1):1–19. doi: 10.1017/S0140525X12003214

171. Peters MAK, Kentridge RW, Phillips I, and Block N. Does unconscious perception really exist? Continuing the ASSC20 debate. Neurosci Conscious (2017) 2017(1):nix015. doi: 10.1093/nc/nix015

172. Rothkirch M and Hesselmann G. What we talk about when we talk about unconscious processing—a plea for best practices. Front Psychol (2017) 8:835. doi: 10.3389/fpsyg.2017.00835

173. Rahnev D, Balsdon T, Charles L, de Gardelle V, Denison R, Desender K, et al. Consensus goals in the field of visual metacognition. Perspect Psychol Sci (2022) 17(6):1746–65. doi: 10.1177/17456916221075615

174. Stockart F, Schreiber M, Amerio P, Carmel D, Cleeremans A, Deouell LY, et al. Studying unconscious processing: contention and consensus. Behav Brain Sci (2025) 1–77. doi: 10.1017/S0140525X25101489

175. Open Science Collaboration. Estimating the reproducibility of psychological science. Science (2015) 349(6251):aac4716. doi: 10.1126/science.aac4716

176. Frank MC, Alcock KJ, Arias-Trejo N, Aschersleben G, Baldwin D, Barbu S, et al. Quantifying sources of variability in infancy research using the infant-directed-speech preference. Adv Methods Pract Psychol Sci (2020) 3(1):24–52. doi: 10.1177/2515245919900809

177. Moshontz H, Campbell L, Ebersole CR, IJzerman H, Urry HL, Forscher PS, et al. The psychological science accelerator: advancing psychology through a distributed collaborative network. Adv Methods Pract Psychol Sci (2018) 1(4):501–15. doi: 10.1177/2515245918797607

178. Greenwald AG, Draine SC, and Abrams RL. Three cognitive markers of unconscious semantic activation. Science (1996) 273(5282):1699–702. doi: 10.1126/science.273.5282.1699

179. Maoz U, Yaffe G, Koch C, and Mudrik L. Neural precursors of decisions that matter—an ERP study of deliberate and arbitrary choice. eLife (2019) 8:e39787. doi: 10.7554/eLife.39787

180. Raccah O, Block N, and Fox KCR. Does the prefrontal cortex play an essential role in consciousness? Insights from intracranial stimulation in the human brain. J Neurosci (2021) 41(10):2076–87. doi: 10.1523/JNEUROSCI.1141-20.2020

181. Amerio P, Michel M, Goerttler S, Peters MAK, and Cleeremans A. Unconscious perception of Vernier offsets. Open Mind (Camb.) (2024) 8:739–65. doi: 10.1162/opmi_a_00145

182. Kriegeskorte N and Douglas PK. Cognitive computational neuroscience. Nat Neurosci (2018) 21(9):1148–60. doi: 10.1038/s41593-018-0210-5

183. Furstenberg A, Breska A, Sompolinsky H, and Deouell LY. Evidence of change of intention in picking situations. J Cogn Neurosci (2015) 27(11):2133–46. doi: 10.1162/jocn_a_00842

184. Fletcher PC and Frith CD. Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nat Rev Neurosci (2009) 10(1):48–58. doi: 10.1038/nrn2536

185. Sutherland NS. The international dictionary of psychology. New York, NY: Continuum (1989).

186. Mudrik L, Hirschhorn R, and Korisky U. Taking consciousness for real: increasing the ecological validity of the study of conscious vs. unconscious processes. Neuron (2024) 112(10):1642–56. doi: 10.1016/j.neuron.2024.03.031

187. Suzuki K, Lush P, Seth AK, and Roseboom W. Intentional binding without intentional action. Psychol Sci (2019) 30(6):842–53. doi: 10.1177/0956797619842191

188. Hirschhorn R, Biderman D, Biderman N, Yaron I, Bennet R, Plotnik M, et al. Multi-trial inattentional blindness in virtual reality. Behav Res Methods (2024) 56(4):3452–68. doi: 10.3758/s13428-024-02401-8

189. Herbelin B, Salomon R, Serino A, and Blanke O. Neural mechanisms of bodily self-consciousness and the experience of presence in virtual reality. In: Gaggioli A, Ferscha A, Riva G, Dunne S, and Viaud-Delmon I, editors. Human computer confluence. Berlin: De Gruyter (2016). 80–96.

190. Cohen M, Botch T, and Robertson C. How colorful is visual experience? Evidence from gaze-contingent virtual reality. J Vis (2020) 20(11):917. doi: 10.1167/jov.20.11.917

191. Brookes MJ, Leggett J, Rea M, Hill RM, Holmes N, Boto E, et al. Magnetoencephalography with optically pumped magnetometers (OPM-MEG): the next generation of functional neuroimaging. Trends Neurosci (2022) 45(8):621–34. doi: 10.1016/j.tins.2022.05.008

192. Blanke O and Metzinger T. Full-body illusions and minimal phenomenal selfhood. Trends Cogn Sci (2009) 13(1):7–13. doi: 10.1016/j.tics.2008.10.003

193. Lenggenhager B, Tadi T, Metzinger T, and Blanke O. Video ergo sum: manipulating bodily self-consciousness. Science (2007) 317(5841):1096–9. doi: 10.1126/science.1143439

194. Sanchez-Vives MV and Slater M. From presence to consciousness through virtual reality. Nat Rev Neurosci (2005) 6(4):332–9. doi: 10.1038/nrn1651

195. Korisky U and Mudrik L. Dimensions of perception: 3D real-life objects are more readily detected than their 2D images. Psychol Sci (2021) 32(10):1636–48. doi: 10.1177/09567976211010718

196. Stangl M, Maoz SL, and Suthana N. Mobile cognition: imaging the human brain in the ‘real world’. Nat Rev Neurosci (2023) 24(6):347–62. doi: 10.1038/s41583-023-00692-y

197. Sandved-Smith L, Bogotá JD, Hohwy J, Kiverstein J, and Lutz A. Deep computational neurophenomenology: a methodological framework for investigating the how of experience. OSF [preprint] (2024). doi: 10.31219/osf.io/qfgmj

198. von Uexküll J. A foray into the worlds of animals and humans with a theory of meaning. Minneapolis, MN: University of Minnesota Press (2010).

199. Yong E. An immense world: how animal senses reveal the hidden realms around us. New York, NY: Random House (2022).

200. Broday-Dvir R, Norman Y, Harel M, Mehta AD, and Malach R. Perceptual stability reflected in neuronal pattern similarities in human visual cortex. Cell Rep (2023) 42(6):112614. doi: 10.1016/j.celrep.2023.112614

201. Vishne G, Gerber EM, Knight RT, and Deouell LY. Distinct ventral stream and prefrontal cortex representational dynamics during sustained conscious visual perception. Cell Rep (2023) 42(7):112752. doi: 10.1016/j.celrep.2023.112752

202. Chang L and Tsao DY. The code for facial identity in the primate brain. Cell (2017) 169(6):1013–1028.e14. doi: 10.1016/j.cell.2017.05.011

203. Malach R. Local neuronal relational structures underlying the contents of human conscious experience. Neurosci Conscious (2021) 2021(2):niab028. doi: 10.1093/nc/niab028

204. Huang Z, Mashour GA, and Hudetz AG. Functional geometry of the cortex encodes dimensions of consciousness. Nat Commun (2023) 14(1):72. doi: 10.1038/s41467-022-35764-7

205. Clark A. A theory of sentience. Oxford: Oxford University Press (2000). doi: 10.1093/acprof:oso/9780198238515.001.0001

206. Rosenthal D. How to think about mental qualities. Philos Issues (2010) 20(1):368–93. doi: 10.1111/j.1533-6077.2010.00190.x

207. Corneille O and Lush P. Sixty years after Orne's American Psychologist article: a conceptual framework for subjective experiences elicited by demand characteristics. Pers Soc Psychol Rev (2023) 27(1):83–101. doi: 10.1177/10888683221104368

208. Lush P, Botan V, Scott RB, Seth AK, Ward J, and Dienes Z. Trait phenomenological control predicts experience of mirror synaesthesia and the rubber hand illusion. Nat Commun (2020) 11(1):4853. doi: 10.1038/s41467-020-18591-6

209. Amoruso E, Terhune DB, Kromm M, Kirker S, Muret D, and Makin TR. Reassessing referral of touch following peripheral deafferentation: the role of contextual bias. Cortex (2023) 167:167–77. doi: 10.1016/j.cortex.2023.04.019

210. Rosenthal R. Interpersonal expectations: effects of the experimenter’s hypothesis. In: Rosenthal R and Rosnow R, editors. Artifacts in behavioral research. New York, NY: Oxford University Press (2009) 138–210. doi: 10.1093/acprof:oso/9780195385540.003.0006

211. Doyen S, Klein O, Pichon C-L, and Cleeremans A. Behavioral priming: It’s all in the mind, but whose mind? PloS One (2012) 7(1):e29081. doi: 10.1371/journal.pone.0029081

212. Corneille O and Lush P. Sixty years after Orne's American Psychologist article: a conceptual framework for subjective experiences elicited by demand characteristics. Pers Soc Psychol Rev (2023) 27(1):83–101. doi: 10.1177/10888683221104368

213. Lancaster MA, Renner M, Martin CA, Wenzel D, Bicknell LS, Hurles ME, et al. Cerebral organoids model human brain development and microcephaly. Nature (2013) 501(7467):373–9. doi: 10.1038/nature12517

214. Smirnova L, Caffo BS, Gracias DH, Huang Q, Morales Pantoja IE, Tang B, et al. Organoid intelligence (OI): the new frontiers in biocomputing and intelligence in a dish. Front Sci (2023) 1:1017235. doi: 10.3389/fsci.2023.1017235

215. Birch J. The edge of sentience: risk and precaution in humans, other animals, and AI. Oxford: Oxford University Press (2024). doi: 10.1093/9780191966729.001.0001

216. Russell S. If we succeed. Dædalus (2022) 151(2):43–57. doi: 10.1162/daed_a_01899

217. Michel M, Beck D, Block N, Blumenfeld H, Brown R, Carmel D, et al. Opportunities and challenges for a maturing science of consciousness. Nat Hum Behav (2019) 3(2):104–7. doi: 10.1038/s41562-019-0531-8

218. Barrett LF and Bar M. See it with feeling: affective predictions during object perception. Philos Trans R Soc Lond B Biol Sci (2009) 364(1521):1325–34. doi: 10.1098/rstb.2008.0312

219. LeDoux JE. The deep history of ourselves. New York, NY: Viking (2019).

220. Ginsburg S and Jablonka E. The evolution of the sensitive soul: learning and the origins of consciousness. Cambridge, MA: MIT Press (2019).

221. Feinberg TE and Mallatt JM. The ancient origins of consciousness: how the brain created experience. Cambridge, MA: MIT Press (2016). doi: 10.7551/mitpress/10714.001.0001

222. Siewert C. The significance of consciousness. Princeton, NJ: Princeton University Press (1998).

223. Owen AM. Detecting consciousness: a unique role for neuroimaging. Annu Rev Psychol (2013) 64:109–33. doi: 10.1146/annurev-psych-113011-143729

224. Owen AM, Coleman MR, Boly M, Davis MH, Laureys S, and Pickard JD. Detecting awareness in the vegetative state. Science (2006) 313(5792):1402. doi: 10.1126/science.1130197

225. Cruse D, Chennu S, Chatelle C, Bekinschtein TA, Fernández-Espejo D, Pickard JD, et al. Bedside detection of awareness in the vegetative state: a cohort study. Lancet (2011) 378(9809):2088–94. doi: 10.1016/S0140-6736(11)61224-5

226. Sarasso S, Casali AG, Casarotto S, Rosanova M, Sinigaglia C, and Massimini M. Consciousness and complexity: a consilience of evidence. Neurosci Conscious (2021) 2021(2):niab023. doi: 10.1093/nc/niab023

227. Luppi AI, Cain J, Spindler LRB, Górska UJ, Toker D, Hudson AE, et al. Mechanisms underlying disorders of consciousness: bridging gaps to move toward an integrated translational science. Neurocrit Care (2021) 35(Suppl 1):37–54. doi: 10.1007/s12028-021-01281-6

228. Huntley JD, Fleming SM, Mograbi DC, Bor D, Naci L, Owen AM, et al. Understanding Alzheimer’s disease as a disorder of consciousness. Alzheimers Dement (N Y) (2021) 7(1):e12203. doi: 10.1002/trc2.12203

229. O’Shaughnessy NJ, Chan JE, Bhome R, Gallagher P, Zhang H, Clare L, et al. Awareness in severe Alzheimer’s disease: a systematic review. Aging Ment Health (2021) 25(4):602–12. doi: 10.1080/13607863.2020.1711859

230. GBD 2019 Mental Disorders Collaborators. Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet Psychiatry (2022) 9(2):137–50. doi: 10.1016/S2215-0366(21)00395-3

231. Organisation for Economic Co-operation and Development and European Union. Health at a Glance: Europe 2018 – State of Health in the EU Cycle. Paris: OECD Publishing (2018). doi: 10.1787/health_glance_eur-2018-en

232. Leichsenring F, Steinert C, Rabung S, and Ioannidis JPA. The efficacy of psychotherapies and pharmacotherapies for mental disorders in adults: an umbrella review and meta-analytic evaluation of recent meta-analyses. World Psychiatry (2022) 21(1):133–45. doi: 10.1002/wps.20941

233. LeDoux J. Anxious: using the brain to understand and treat fear and anxiety. London: Penguin (2016).

234. Taschereau-Dumouchel V, Michel M, Lau H, Hofmann SG, and LeDoux JE. Putting the “mental” back in “mental disorders”: a perspective from research on fear and anxiety. Mol Psychiatry (2022) 27(3):1322–30. doi: 10.1038/s41380-021-01395-5

235. Tracey I. Why pain hurts. Trends Cogn Sci (2022) 26(12):1070–2. doi: 10.1016/j.tics.2022.09.020

236. Henriques G. Twenty billion fails to ‘move the needle’ on mental illness. Psychology Today (2017). Available at: https://www.psychologytoday.com/gb/blog/theory-knowledge/201705/twenty-billion-fails-move-the-needle-mental-illness

237. LeDoux JE and Brown R. A higher-order theory of emotional consciousness. Proc Natl Acad Sci USA (2017) 114(10):E2016–25. doi: 10.1073/pnas.1619316114

238. Adolphs R, Mlodinow L, and Barrett LF. What is an emotion? Curr Biol (2019) 29(20):R1060–4. doi: 10.1016/j.cub.2019.09.008

239. Bach DR. Cross-species anxiety tests in psychiatry: pitfalls and promises. Mol Psychiatry (2022) 27(1):154–63. doi: 10.1038/s41380-021-01299-4

240. Barron HC, Mars RB, Dupret D, Lerch JP, and Sampaio-Baptista C. Cross-species neuroscience: closing the explanatory gap. Philos Trans R Soc Lond B Biol Sci (2021) 376(1815):20190633. doi: 10.1098/rstb.2019.0633

241. Schmack K, Ott T, and Kepecs A. Computational psychiatry across species to study the biology of hallucinations. JAMA Psychiatry (2022) 79(1):75–6. doi: 10.1001/jamapsychiatry.2021.3200

242. Nour MM, Liu Y, and Dolan RJ. Functional neuroimaging in psychiatry and the case for failing better. Neuron (2022) 110(16):2524–44. doi: 10.1016/j.neuron.2022.07.005

243. Taschereau-Dumouchel V, Cortese A, Chiba T, Knotts JD, Kawato M, and Lau H. Towards an unconscious neural reinforcement intervention for common fears. Proc Natl Acad Sci USA (2018) 115(13):3470–5. doi: 10.1073/pnas.1721572115

244. Mazor M, Brown S, Ciaunica A, Demertzi A, Fahrenfort J, Faivre N, et al. The scientific study of consciousness cannot and should not be morally neutral. Perspect Psychol Sci (2023) 18(3):535–43. doi: 10.1177/17456916221110222

245. Carpenter AD. Illuminating community: animals in classical Indian thought. In: Adamson P and Edwards GF, editors. Animals: a history. Oxford: Oxford University Press (2018). 63–86.

246. Finnigan B. Buddhism and animal ethics. Philos Compass (2017) 12(7):e12424. doi: 10.1111/phc3.12424

247. Lee AY. Is consciousness intrinsically valuable? Philos Stud (2019) 176(3):655–71. doi: 10.1007/s11098-018-1032-8

248. Metzinger T. Suffering. In: Almqvist K and Haag A, editors. The return of consciousness: a new science on old questions. Stockholm: Axel and Margaret Johnson Foundation (2017).

249. Makari G. Soul machine: the invention of the modern mind. London: W.W. Norton (2016).

250. Kepecs A and Mainen ZF. A computational framework for the study of confidence in humans and animals. Philos Trans R Soc Lond B Biol Sci (2012) 367(1594):1322–37. doi: 10.1098/rstb.2012.0037

251. LeDoux JE. The four realms of existence: a new theory of being human. Cambridge, MA: Harvard University Press (2023). doi: 10.2307/jj.6695537

252. Birch J. Should animal welfare be defined in terms of consciousness? Philos Sci (2022) 89(5):1114–23. doi: 10.1017/psa.2022.59

253. Birch J. The search for invertebrate consciousness. Noûs (2022) 56(1):133–53. doi: 10.1111/nous.12351

254. Godfrey-Smith P. Living on Earth: life, consciousness and the making of the natural world. New York, NY: Farrar, Straus and Giroux (2024).

255. Andrews K, Birch J, Sebo J, and Sims T. Background to the New York declaration on animal consciousness [online] (2024). Available at: https://sites.google.com/nyu.edu/nydeclaration/background

256. Johnson M. How responsible are killers with brain damage? Scientific American (2018). Available at: https://www.scientificamerican.com/article/how-responsible-are-killers-with-brain-damage/

257. Hardcastle VG. My brain made me do it? Neuroscience and criminal responsibility (2nd edition). In: Johnson LSM and Rommelfanger KS, editors. The Routledge handbook of neuroethics. Abingdon: Routledge (2018). 185–97.

258. Caspar EA, Christensen JF, Cleeremans A, and Haggard P. Coercion changes the sense of agency in the human brain. Curr Biol (2016) 26(5):585–92. doi: 10.1016/j.cub.2015.12.067

259. Brass M and Haggard P. To do or not to do: the neural signature of self-control. J Neurosci (2007) 27(34):9141–5. doi: 10.1523/JNEUROSCI.0924-07.2007

260. Ferguson ML, Hassin R, and Bargh JA. Implicit motivation: past, present, and future. In: Shah J and Gardner W, editors. Handbook of motivation science. New York, NY: Guilford Press (2007).

261. Muñoz JM, García-López E, and Rusconi E. Editorial: neurolaw: the call for adjusting theory based on scientific results. Front Psychol (2020) 11:582302. doi: 10.3389/fpsyg.2020.582302

262. Butlin P, Long R, Elmoznino E, Bengio Y, Birch J, Constant A, et al. Consciousness in artificial intelligence: insights from the science of consciousness. arXiv [preprint] (2023). doi: 10.48550/arXiv.2308.08708

263. Aru J, Larkum M, and Shine JM. The feasibility of artificial consciousness through the lens of neuroscience. Trends Neurosci (2023) 46(12):1008–17. doi: 10.1016/j.tins.2023.09.009

264. Polger T and Shapiro L. The multiple realization book. Oxford: Oxford University Press (2016).

265. Putnam H. Psychological predicates. In: Capitan WH and Merrill DD, editors. Art, mind, and religion. Pittsburgh: University of Pittsburgh Press (1967). doi: 10.2307/jj.6380610.6

266. Shagrir O. The rise and fall of computational functionalism. In: Ben-Menahem Y, editor. Hilary Putnam (contemporary philosophy in focus). Cambridge: Cambridge University Press (2005).

267. Searle JR. The problem of consciousness. In: Revonsuo A and Kamppinen M, editors. Consciousness in philosophy and cognitive neuroscience (1st edition). New York, NY: Psychology Press (1994).

268. Godfrey-Smith P. Metazoa: animal life and the birth of the mind. New York, NY: Farrar, Straus and Giroux (2020).

269. Cao R. Multiple realizability and the spirit of functionalism. Synthese (2022) 200(6):506. doi: 10.1007/s11229-022-03524-1

270. Searle J. Biological naturalism. In: Schneider S and Velmans M, editors. The Blackwell companion to consciousness, (2nd edition). Hoboken: John Wiley & Sons (2017) 327–36. doi: 10.1002/9781119132363.ch23

271. Metzinger T. Artificial suffering: an argument for a global moratorium on synthetic phenomenology. J AI Consci (2021) 8(1):43–66. doi: 10.1142/S270507852150003X

272. Searle JR. Seeing things as they are. New York, NY: Oxford University Press (2015). doi: 10.1093/acprof:oso/9780199385157.001.0001

273. Altman S. ‘A machine-shaped hand’: Read a story from OpenAI’s new creative writing model. The Guardian. (2025). Available at: https://www.theguardian.com/books/2025/mar/12/a-machine-shaped-hand-read-a-story-from-openais-new-creative-writing-model

274. Dennett DC. The intentional stance (1st edition). Cambridge, MA: MIT Press (1989).

275. Dennett D. I’ve been thinking. London: Allen Lane (2023).

276. Shanahan M. Talking about large language models. Commun ACM (2024) 67(2):68–79. doi: 10.1145/3624724

277. Colombatto C and Fleming S. Folk psychological attributions of consciousness to large language models. Neurosci Conscious (2024) 2024(1):niae013. doi: 10.1093/nc/niae013

278. Heider F and Simmel M. An experimental study of apparent behavior. Am J Psychol (1944) 57(2):243–59. doi: 10.2307/1416950

279. Gray HM, Gray K, and Wegner DM. Dimensions of mind perception. Science (2007) 315(5812):619. doi: 10.1126/science.1134475

280. Epley N, Waytz A, and Cacioppo JT. On seeing human: a three-factor theory of anthropomorphism. Psychol Rev (2007) 114(4):864–86. doi: 10.1037/0033-295X.114.4.864

281. Xiang C. “He would still be here”: man dies by suicide after talking with AI chatbot, widow says. Vice (2023). Available at: https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

282. Shevlin H. Consciousness, machines, and moral status [online] (2024). Available at: https://philpapers.org/rec/SHECMA-6

283. Chrisley R. A human-centered approach to AI ethics: a perspective from cognitive science. In: Dubber MD, Pasquale F, and Das S, editors. The Oxford handbook of ethics of AI. New York, NY: Oxford University Press (2020) 462–74. doi: 10.1093/oxfordhb/9780190067397.013.29

284. Turing AM. Computing machinery and intelligence. Mind (1950) LIX(236):433–60. doi: 10.1093/mind/LIX.236.433

285. Levin M. Bioelectric signaling: reprogrammable circuits underlying embryogenesis, regeneration, and cancer. Cell (2021) 184(8):1971–89. doi: 10.1016/j.cell.2021.02.034

286. Gordon EC and Seth AK. Ethical considerations for the use of brain-computer interfaces for cognitive enhancement. PloS Biol (2024) 22(10):e3002899. doi: 10.1371/journal.pbio.3002899

287. Bayne T, Seth AK, and Massimini M. Are there islands of awareness? Trends Neurosci (2020) 43(1):6–16. doi: 10.1016/j.tins.2019.11.003

288. Kim JI, Imaizumi K, Jurjuţ O, Kelley KW, Wang D, Thete MV, et al. Human assembloid model of the ascending neural sensory pathway. Nature (2025) 642(8066):143–53. doi: 10.1038/s41586-025-08808-3

289. Bloom P. Descartes’ baby: how the science of child development explains what makes us human. New York, NY: Basic Books (2004).

290. Bradford NA, Shen A, Odegaard B, and Peters MAK. Aligning consciousness science and U.S. funding agency priorities. Commun Biol (2024) 7(1):1315. doi: 10.1038/s42003-024-07011-w

Annex: further reading

Horowitz A. Smelling themselves: dogs investigate their own odours longer when modified in an “olfactory mirror” test. Behav Processes (2017) 143:17–24. doi: 10.1016/j.beproc.2017.08.001

Kob L. Exploring the role of structuralist methodology in the neuroscience of consciousness: a defense and analysis. Neurosci Conscious (2023) 2023(1):niad011. doi: 10.1093/nc/niad011

Keywords: consciousness, adversarial collaborations, phenomenal experience, artificial intelligence, medicine, ethics, society

Citation: Cleeremans A, Mudrik L and Seth AK. Consciousness science: where are we, where are we going, and what if we get there? Front Sci (2025) 3:1546279. doi: 10.3389/fsci.2025.1546279

Received: 16 December 2024; Accepted: 15 September 2025;
Published: 30 October 2025.

Edited by:

Lucina Q. Uddin, University of California, Los Angeles, United States

Reviewed by:

Thomas K. Metzinger, Johannes Gutenberg University Mainz, Germany
Anthony G. Hudetz, University of Michigan, United States

Copyright © 2025 Cleeremans, Mudrik and Seth. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Axel Cleeremans, axcleer@ulb.ac.be

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.