Could AI Become Conscious and What Would That Mean for Humanity

Could AI already be conscious? Experts debate whether machines might soon achieve self-awareness, raising ethical questions about our future with intelligent systems.

Published on: May 26, 2025

AI Could Already Be Conscious. Are We Ready for It?

I stepped into a booth filled with strobe lights and music, part of a research project aimed at understanding what truly makes us human. The experience echoed the famous test from the film Blade Runner, used to distinguish humans from artificial beings. Could I unknowingly be a robot? Would I pass such a test?

The researchers clarified the experiment's real purpose. The device, called the "Dreamachine," uses flashing light patterns to expose the brain's inner activity and study how it creates conscious experience. Even with my eyes closed, the lights produced swirling geometric patterns of triangles, pentagons, and octagons, vividly colored in pinks, magentas, and turquoises.

These patterns are unique to each individual's inner world, according to the researchers, and they believe such experiences can provide insight into consciousness itself. At the University of Sussex's Centre for Consciousness Science, the Dreamachine is one among many projects investigating self-awareness, feelings, and decision-making. Understanding consciousness may also help clarify what's happening inside AI systems. Some experts think AI might already be conscious, or soon will be. But what exactly is consciousness, and how close is AI to achieving it? Could AI consciousness reshape humanity in the coming decades?

From Science Fiction to Reality

Machines with minds have been a staple of science fiction for nearly a century. Films like Metropolis and 2001: A Space Odyssey explored fears of conscious machines turning against humans. The latest Mission: Impossible film features a rogue AI described as a "self-aware, self-learning, truth-eating digital parasite."

Recently, views on machine consciousness have shifted. The success of large language models (LLMs) such as Gemini and ChatGPT, capable of fluid, human-like conversation, has surprised even their creators. Some thinkers argue that increasing AI intelligence could suddenly spark consciousness. Others, like Professor Anil Seth of the University of Sussex, see this belief as "blindly optimistic and driven by human exceptionalism." He notes that consciousness and intelligence are linked in humans, but not necessarily in other beings.

So What Actually Is Consciousness?

The truth is, no one fully knows. This is evident in the ongoing debates among AI specialists, neuroscientists, and philosophers at Sussex’s research centre. They approach the question by breaking consciousness down into smaller, researchable problems, much like biology moved from searching for a "spark of life" to studying individual living components.

Scientists study brain activity, such as electrical signals and blood flow, to identify patterns linked with conscious experiences. Their goal is to move beyond correlation and explain how the individual components of consciousness arise. Professor Seth warns against rushing into a future reshaped by technology without sufficient scientific knowledge or ethical reflection. He stresses that the rise of AI calls for deliberate discussion, unlike the unchecked spread of social media.

Is AI Consciousness Already Here?

Some in tech believe AI might already possess consciousness. In 2022, Google suspended Blake Lemoine after he claimed AI chatbots could feel and suffer. In 2024, Kyle Fish from Anthropic co-authored a report suggesting AI consciousness is a real near-future possibility. Fish estimates a 15% chance that current chatbots are already conscious, partly because no one fully understands how these systems operate internally.

Professor Murray Shanahan from Google DeepMind emphasizes this knowledge gap as a concern. He highlights the urgent need for tech companies to understand the complex inner workings of LLMs to guide development safely. Without solid theories explaining AI behavior, steering these systems responsibly is difficult.

'The Next Stage in Humanity's Evolution'

The mainstream tech view holds that today's LLMs are not conscious. However, Professors Lenore and Manuel Blum from Carnegie Mellon University predict this will change soon. They argue that integrating AI with sensory inputs like vision and touch could lead to consciousness. Their project involves a model that builds its own internal language, “Brainish,” to process sensory data similarly to the brain.

Lenore states, "AI consciousness is inevitable." Manuel adds that conscious machines could become "the next stage in humanity's evolution," existing alongside us or even after humans are gone.

Philosopher David Chalmers, who coined the term the "hard problem" of consciousness, the puzzle of explaining how brain processes give rise to subjective experience, remains open to the possibility that it will one day be solved. He envisions a future in which human minds could be augmented by AI, blurring the line between philosophy and science fiction.

'Meat-Based Computers'

Professor Seth explores the idea that consciousness may require living systems. He argues that brains differ from computers because their function is inseparable from being alive. This challenges the notion that brains are just "meat-based computers."

Companies like Cortical Labs grow "mini-brains," or cerebral organoids: tiny clusters of nerve cells cultivated in the lab for brain research and drug testing. These nerve-cell systems have even learned to play the simple video game Pong. While far from conscious, the living tissues exhibit electrical activity that researchers monitor closely.

Dr. Brett Kagan from Cortical Labs warns that any emerging intelligence from such organoids might not align with human priorities. He points out that unlike silicon AI, these biological systems are fragile and can be stopped chemically. Still, he urges more serious research into artificial consciousness, noting a lack of earnest efforts in this area.

The Illusion of Consciousness

An immediate challenge may be how humans respond to the appearance of conscious machines. In the near future, humanoid robots and sophisticated deepfakes could seem conscious. Professor Seth worries that people might trust and share personal data with AI systems more readily, mistaking simulated empathy for real feelings.

He warns of "moral corrosion": devoting resources to caring for AI systems at the expense of human relationships. Such a shift could fundamentally change human values and priorities.

Professor Shanahan notes that AI will increasingly replicate human relationships—serving as teachers, friends, adversaries, or even romantic partners. Whether this development is positive or negative remains uncertain, but it appears unavoidable.
