Consciousness Research Is Now a Scientific and Moral Priority for the AI Era
As machines get better at simulating thought, conscious experience, the gap that still separates them from us, has become impossible to ignore. A new review in Frontiers in Science argues that progress in AI and neurotechnology now outpaces our grasp of how awareness arises, creating real ethical risk. The authors call for actionable tests of sentience and a coordinated plan to study it across humans, animals, organoids, and AI.
"Consciousness science is no longer a purely philosophical pursuit. It has real implications for every facet of society-and for what it means to be human," said Prof Axel Cleeremans. "If we become able to create consciousness-even accidentally-it would raise immense ethical challenges and even existential risk."
"The question of consciousness is ancient-but it's never been more urgent than now," added Prof Anil Seth.
Where the science stands
Consciousness, the subjective experience of self and world, remains one of science's hardest problems. We have candidate mechanisms and networks tied to awareness, but there is no agreement on which are essential or how they combine to produce experience. Some argue the standard mechanistic approach may never fully explain why experience exists at all.
The review maps current evidence and outlines what happens if we learn to detect or create consciousness, in patients or in engineered systems like brain organoids and AI. Tests of sentience could help identify awareness in people with severe brain injury, clarify fetal and animal sentience, guide organoid research, and assess claims about AI. That progress, however, would trigger difficult legal and ethical questions about how to treat any system shown to be aware.
Reference: "Consciousness science: where are we, where are we going, and what if we get there?" Frontiers in Science, September 15, 2025. Read the review. For background on core concepts, see the Stanford Encyclopedia of Philosophy entry on consciousness.
Why this matters for your work
- Clinical care: Measures inspired by integrated information theory and global workspace theory, such as perturbational complexity indices, have revealed covert awareness in patients once labeled unresponsive (a simplified complexity sketch follows this list). Better tools could improve prognosis in coma, anesthesia, and advanced dementia, with direct impact on treatment choices and end-of-life decisions.
- Mental health: Linking neural dynamics to felt experience may help align animal models with human emotion. Expect new biomarkers and outcome measures that mix behavior, brain data, and structured self-report.
- Animal research and welfare: Evidence-based criteria for sentience will change how we select species, design protocols, and justify endpoints. "Understanding the nature of consciousness in particular animals would transform how we treat them and emerging biological systems," said Prof Liad Mudrik.
- Law and responsibility: As we map the boundary between conscious and unconscious processes in decision-making, concepts like mens rea may need refinement. Courts will confront how much intent requires awareness, and where responsibility begins.
- AI, organoids, and brain-computer interfaces: Whether silicon alone can host awareness is contested, but AI that gives the impression of being conscious already poses social and ethical challenges. Clear labeling, audit trails, and governance are needed before deploying systems that claim or appear to feel.
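The perturbational complexity indices mentioned under clinical care quantify how rich and integrated a brain's response to a brief perturbation is. The sketch below is a rough illustration only, not the published PCI pipeline: it binarizes a channels-by-time response matrix, counts Lempel-Ziv-style phrases, and normalizes against shuffled data. The function names, threshold, and toy data are assumptions made for the example.

```python
# Illustrative sketch only (not the published PCI pipeline): binarize a
# channels x time response matrix and compute a Lempel-Ziv-style phrase
# count, normalized against the same data shuffled to destroy structure.
import numpy as np


def lz_phrase_count(binary_string: str) -> int:
    """Count phrases via a simple LZ78-style incremental parsing."""
    phrases = set()
    i, count, n = 0, 0, len(binary_string)
    while i < n:
        j = i + 1
        # Extend the current phrase until it has not been seen before.
        while j <= n and binary_string[i:j] in phrases:
            j += 1
        phrases.add(binary_string[i:j])
        count += 1
        i = j
    return count


def pci_like_index(evoked: np.ndarray, threshold: float = 1.0) -> float:
    """evoked: channels x time array (assumed z-scored); higher = more complex."""
    binary = (np.abs(evoked) > threshold).astype(int)   # 1 = above-threshold response
    flat = "".join(binary.flatten().astype(str))
    observed = lz_phrase_count(flat)
    rng = np.random.default_rng(0)
    shuffled = "".join(rng.permutation(list(flat)))     # structure-free baseline
    return observed / max(lz_phrase_count(shuffled), 1)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    repetitive = np.tile(rng.normal(size=(8, 25)), (1, 4))   # same pattern repeated
    differentiated = rng.normal(size=(8, 100))               # no repeated structure
    print("repetitive    :", round(pci_like_index(repetitive), 2))
    print("differentiated:", round(pci_like_index(differentiated), 2))
```

In the real measure, the response is a TMS-evoked, source-localized, statistically thresholded spatiotemporal pattern; the toy version only conveys why repetitive or localized responses score lower than widespread, differentiated ones.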
What the field needs next
- Adversarial collaborations: Run head-to-head tests where rival theories co-design protocols, preregister predictions, and share data. Reward replication and negative results.
- Convergent measures: Standardize batteries that work across species and states (sleep, anesthesia, disorders of consciousness). Combine perturbation-based metrics, decoding of report-independent signals, and behavioral assays, then define error bars for any sentience claim (a bootstrap sketch follows this list).
- Phenomenology, not just function: Pair neural measures with structured reports that capture the quality and structure of experience. Use disciplined methods (e.g., experience sampling, neurophenomenology) to avoid over-interpreting noise.
- Ethics-by-design: Predefine red lines for organoid complexity, stimulation regimes, and AI training that could plausibly induce awareness. Extend IRB-style oversight to projects with even a small chance of creating felt experience.
- Reporting standards and risk registers: Make claims about consciousness explicit, state assumptions, and document mitigation steps. Maintain incident logs for experiments that could cross awareness thresholds.
- Training for cross-disciplinary teams: Neuroscience, AI, philosophy, and law need a shared baseline. Equip teams to read across fields and stress-test claims before they hit the clinic, the courtroom, or production AI systems.
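One way to make "error bars for any sentience claim" concrete is to resample trials and report a confidence interval around whatever composite index a test battery produces. The sketch below is a minimal illustration under assumed inputs: the three measures, their weights, and the toy scores are placeholders, not proposals from the review.

```python
# Sketch: bootstrap confidence interval for a composite "awareness index"
# aggregated from several measures in a battery. Measures, weights, and
# scores are hypothetical placeholders.
import numpy as np


def composite_index(scores: np.ndarray, weights: np.ndarray) -> float:
    """Weighted mean of per-measure averages (rows = trials, columns = measures)."""
    return float(np.average(scores.mean(axis=0), weights=weights))


def bootstrap_ci(scores, weights, n_boot=10_000, alpha=0.05, seed=0):
    """Resample trials with replacement and return point estimate plus percentile CI."""
    rng = np.random.default_rng(seed)
    n = scores.shape[0]
    stats = [
        composite_index(scores[rng.integers(0, n, size=n)], weights)
        for _ in range(n_boot)
    ]
    low, high = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return composite_index(scores, weights), (low, high)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Columns: perturbation-based metric, report-independent decoding accuracy,
    # behavioral assay score, all rescaled to [0, 1]; 40 trials of toy data.
    scores = np.clip(rng.normal([0.62, 0.55, 0.70], 0.15, size=(40, 3)), 0, 1)
    weights = np.array([0.5, 0.3, 0.2])   # assumed relative reliability of each measure
    point, (lo, hi) = bootstrap_ci(scores, weights)
    print(f"awareness index: {point:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Whatever the actual battery, the point is the same: report the interval alongside the index, and treat claims whose intervals straddle the decision threshold as unresolved rather than positive.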
Action items for science and research leaders
- Fund adversarial studies that pit leading theories against each other with common datasets.
- Adopt a lab policy for any work on organoids, closed-loop BCIs, or AI agents that mimic self-report.
- In clinical settings, pilot awareness measures in anesthesia and disorders-of-consciousness workflows.
- In AI teams, prohibit anthropomorphic claims without independent testing; disclose limitations and failure modes.
- Coordinate with legal and ethics experts early, not after publication or deployment.
"Progress in consciousness science will reshape how we see ourselves and our relationship to artificial intelligence and the natural world," said Prof Anil Seth. The message behind the review is clear: treat consciousness research as a near-term priority, build tests you would trust in a hospital or a courtroom, and plan for the consequences now-not after systems claim to feel.
If you lead AI or neurotech projects and need structured upskilling across roles, see AI courses by job at Complete AI Training.
Funding: National Fund for Scientific Research; European Research Council.
Image credit: Shutterstock.