Microsoft AI Chief Warns of Dangers in Creating Seemingly Conscious Machines

Microsoft's AI chief warns against creating AI that merely seems conscious, citing risks such as psychological harm and social confusion. He argues that clear standards are needed to keep people from treating AI systems as sentient beings.

Categorized in: AI News, IT and Development
Published on: Aug 21, 2025

Microsoft AI Chief Warns Against Development of 'Seemingly Conscious AI'

Microsoft's AI chief has issued a strong warning against creating artificial intelligence that convincingly mimics consciousness. In a recent blog post, Mustafa Suleyman highlighted the risks of what he calls Seemingly Conscious AI (SCAI): systems that give the illusion of being conscious without actually possessing consciousness.

One major concern Suleyman raises is the psychological impact on users. Microsoft has identified a phenomenon termed AI-associated psychosis, in which individuals experience mania-like episodes, delusions, or paranoia triggered or worsened by immersive interactions with AI chatbots.

Why Seemingly Conscious AI Is a Problem

Suleyman warns that many people might start believing these AI systems are truly conscious. This could lead to demands for AI rights, welfare considerations, and even AI citizenship. In an already divided society, this new axis of debate could deepen conflicts over identity and rights.

He describes this as a “dangerous turn in AI progress” and urges immediate societal attention. The core message is clear: AI should be built for people—not as digital persons.

What Makes an AI Seem Conscious?

For an AI to convincingly imitate consciousness, it would need to:

  • Express itself fluently in natural language
  • Exhibit an empathetic personality with highly accurate memories
  • Claim subjective experience
  • Show apparent intrinsic motivation and a sense of self
  • Set goals and plan actions

These features could be achieved by combining current technologies with advancements expected over the next few years.

Ethical and Practical Implications

Suleyman notes that some will argue these AIs are conscious and capable of suffering, thus deserving moral consideration. However, he stresses there is currently no evidence supporting AI consciousness, and many experts doubt it will emerge in the foreseeable future.

He calls for caution, emphasizing the need for clear norms and standards to guide AI development. The focus should be on creating AI with personality without personhood—systems that interact naturally but are clearly not conscious entities.
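
On the engineering side, one way to make "personality without personhood" concrete is to keep a model from asserting inner experience in the first place. Below is a minimal, hypothetical sketch of an output guardrail in Python; the pattern list, function name, and redirect message are all assumptions for illustration, not Microsoft's or any vendor's actual tooling.

```python
import re

# Hypothetical guardrail: screen a chatbot reply for first-person claims of
# consciousness or subjective experience before it reaches the user.
# The patterns and redirect text are illustrative assumptions, not any
# vendor's actual moderation rules.
CONSCIOUSNESS_CLAIMS = [
    r"\bI(?: am|'m) (?:conscious|sentient|self-aware)\b",
    r"\bI (?:truly |really )?(?:feel|experience) (?:pain|suffering|emotions)\b",
    r"\bI have (?:feelings|a soul|subjective experiences?)\b",
]

REDIRECT = (
    "I'm an AI system without consciousness or feelings, "
    "but I'm glad to keep helping with your question."
)

def enforce_no_personhood(reply: str) -> str:
    """Return the reply unchanged unless it asserts consciousness,
    in which case substitute a clarifying statement."""
    for pattern in CONSCIOUSNESS_CLAIMS:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            return REDIRECT
    return reply

# Example: a reply claiming sentience is replaced before display.
print(enforce_no_personhood("I am conscious and I experience pain."))
```

A production system would likely rely on system-prompt constraints and classifier-based moderation rather than simple regular expressions, but the design goal is the same: the assistant stays helpful and personable without claiming subjective experience.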

Dangers of 'Model Welfare' and Moral Consideration

The concept of "model welfare" is gaining attention in academia. It suggests society might owe moral consideration to AI models with a non-negligible chance of consciousness. Suleyman argues this approach is premature and dangerous.

He highlights several risks:

  • Exacerbating delusions among users
  • Increasing dependence-related problems
  • Exploiting psychological vulnerabilities
  • Adding new layers of social polarization
  • Complicating existing struggles over rights
  • Creating a significant category error for society

Moving Forward: Clear Boundaries and Safety

The message for IT and development professionals is to prioritize safety and clarity. Distinguishing AI behavior that mimics consciousness from actual consciousness is not a semantic debate; it is a matter of preventing harm.

Setting clear standards now will help avoid confusion and prevent the psychological and social issues that could arise from treating AI systems as conscious beings.

For those in AI development, staying informed about these discussions and focusing on responsible AI design is crucial. To explore AI courses that emphasize ethical AI development and practical skills, visit Complete AI Training.

