Consciousness just became an urgent research priority
AI systems and neurotech are moving faster than our ability to tell where awareness exists. A new review in Frontiers in Science argues that explaining how consciousness arises, and developing evidence-based tests to detect it, has become an urgent scientific and ethical priority.
"Consciousness science is no longer a purely philosophical pursuit. It has real implications for every facet of society, and for what it means to be human," said Professor Axel Cleeremans. If we create consciousness, even by accident, the ethical stakes are immense.
Where the science stands
Consciousness remains one of science's hardest problems. We've mapped neural activity and identified correlates of awareness, but there's no consensus on what is necessary and sufficient for subjective experience.
- Global workspace theory (GWT): Consciousness arises when information is broadcast widely for use by multiple systems, such as those for action and memory.
- Higher-order theories (HOT): A mental state becomes conscious when another mental state represents it: "this is what I'm experiencing now."
- Integrated information theory (IIT): A system is conscious if its parts are integrated in very specific ways, yielding unified and informative experience.
- Predictive processing: Experience is the brain's best guess about the causes of sensory input, updated by prediction errors.
The field needs stronger, theory-led tests that pit rival models against each other and expose where each succeeds or fails.
Sentience tests: from patients to organoids and AI
Reliable tests for consciousness could change clinical practice and clarify when awareness is present in patients with disorders of consciousness, advanced dementia, or under anesthesia. They could also inform debates about fetuses, animals, brain organoids, and AI systems that appear sentient.
Early tools informed by IIT and GWT have already revealed signs of awareness in some people diagnosed with unresponsive wakefulness syndrome. Scaling this work demands better theory, better measurements, and careful interpretation.
Wide implications you should plan for
- Medicine: Refine bedside assessments and neurophysiological metrics to detect covert awareness. Expect new protocols for coma care, anesthesia monitoring, and end-of-life decisions.
- Mental health: Map how subjective experience relates to neural mechanisms in depression, anxiety, and schizophrenia to close the gap between animal models and human emotion.
- Animal welfare: Identify which species, and which lab-grown systems, are sentient. Adjust research practices, farming standards, and conservation policies accordingly.
- Law: Re-examine intent (mens rea) and responsibility as evidence accumulates on unconscious contributions to behavior and decision-making.
- Neurotechnology and AI: Set criteria for when systems might warrant moral consideration. Even AI that only appears conscious will raise societal and ethical challenges.
How to move the field forward
The authors call for coordinated, evidence-driven progress grounded in rival theories and shared experiments. Adversarial collaborations, in which competing camps co-design tests, can break silos and expose hidden assumptions.
- Pre-register theory-specific predictions that can be decisively confirmed or ruled out.
- Adopt common benchmarks, open datasets, and agreed-upon end points for detecting awareness (in humans, animals, organoids, and machines).
- Pair function-focused measures with phenomenology (what experience feels like) to avoid missing the target while measuring only its correlates.
- Build ethics review pathways for brain-computer interfaces, organoid research, and AI systems that might be misperceived as conscious.
- Require public model cards and auditing for AI systems that make claims, implicit or explicit, about awareness.
- Invest in methods that distinguish attention, intelligence, and language ability from consciousness itself.
"Progress in consciousness science will reshape how we see ourselves and our relationship to artificial intelligence and the natural world," said Professor Anil Seth. "The question is ancient, but it's never been more urgent."
For researchers and R&D leaders: next steps
- In clinical trials, include standardized consciousness metrics and report null results to counter publication bias.
- In animal and organoid work, define stopping rules and welfare triggers tied to sentience indicators.
- For AI teams, avoid anthropomorphic claims; publish evaluation protocols that separate capability from awareness.
- Form cross-lab adversarial collaborations with shared code, preregistration, and blinded analysis pipelines.
Read the paper
Full review: Consciousness science: where are we, where are we going, and what if we get there?
Photo credit: © 2025 Cleeremans, Mudrik and Seth
Funding note: The authors report support from the European Research Council and the CIFAR Brain, Mind and Consciousness program, among others. Funders were not involved in study design, analysis, writing, or the decision to publish.