Brain Surgeon Explains Why AI Will Never Be Human

A brain surgeon argues the mind isn't just the brain, so AI can't be human. Clinical realities and ethics point to treating AI as a tool; identity, agency, and meaning stay human.

Published on: Sep 28, 2025

Spoiler: AI runs on hardware; the human mind doesn't appear confined to it. That single difference rewrites the debate.

AI is algorithmic. It executes instructions designed by people. The human mind shows capacities that don't reduce cleanly to code: creativity, understanding, and sentience - expressed as compassion, love, empathy, elation, sadness, fear, anger, disgust, pleasure, pride, excitement, embarrassment, regret, jealousy, grief, hope, and faith.

The popular assumption: mind = brain = machine

Many respected voices argue the mind is nothing but brain activity. Carl Sagan called mind "a consequence of [the brain's] anatomy and physiology and nothing more." Daniel Dennett described the brain as a deterministic machine. Todd Feinberg and Jon Mallatt see no explanatory gap between brain and subjective experience. Kenneth Miller argues consciousness is "something that matter does."

If mind is only matter, then - in theory - a big enough machine could equal it. That's the premise behind hopes of uploading minds and predictions that AI will overtake human cognition and identity.

Think outside the skull

Neurosurgeon Michael Egnor and science writer Denyse O'Leary challenge that premise in The Immortal Mind (2025). Their argument: the mind associates with the brain but is not identical to it. Comparing AI to clinical realities makes the gap clear.

What brains and minds do vs. what machines do

  • People can function without the whole brain. Rare case reports describe individuals born without a cerebellum who reach adulthood, and children missing much of the cortex who still show signs of awareness and experience. Personality can persist after major neurosurgery. Remove hardware or corrupt software, and AI fails predictably.
  • Higher intellect doesn't localize neatly. Perception, movement, memory, and emotion map to regions. Abstract understanding and free choice don't localize with the same precision. AI systems are fully mapped across identifiable models, modules, and data paths.
  • Split brains don't split persons. Severing hemispheres to treat epilepsy doesn't yield two separate selves. Personal identity remains unified. Split a computer and functions halt or degrade; you don't get a coherent duplicate mind.
  • Conjoined twins share tissue, not intellect. Shared brain-adjacent structures do not merge reasoning or will. Each person learns, decides, and acts as a distinct self. Networked AIs share what they're engineered to share, and their "personalities" reflect code and data, not independent wills.
  • Electrical stimulation can trigger feelings, not reasoning. Wilder Penfield's work showed that stimulating cortex evokes memories and emotions, but not abstract thought or free choice. Seizures don't produce genuine reasoning either. Jolting a computer produces faults, not spontaneous insight.
  • Near-death experiencers report consciousness during flatline. A subset of revived patients describe accurate perceptions and meaningful episodes while brain activity was minimal or undetectable, including veridical details later confirmed. No AI can leave its chassis and report anything from outside its sensors. Recent cardiac-arrest studies have documented such reports for scientific review.

Why this matters to science, research, and healthcare

Category errors waste time. Treating AI like a person invites false expectations and bad design. Treating patients like machines overlooks lived experience, agency, and values.

AI will be powerful - and limited. It ingests data, surfaces patterns, and simulates behavior. It does not originate first-person experience, moral agency, or meaning. That distinction guides safety, validation, and clinical use.

Policy and ethics hinge on personhood. If mind is more than mechanism, then rights, duties, and consent stay with humans. AI deserves risk controls, not human status.

Practical rules you can use

  • Use AI as a tool, not a teammate. Demand traceability, calibration data, and failure modes. No anthropomorphic language in protocols.
  • Separate simulation from experience. A system that "expresses" empathy is executing patterns. Verify outcomes, not vibes.
  • In clinical settings, keep the human in the loop. AI can assist with triage, summaries, and prediction. Judgment, consent, and responsibility remain human (see the sketch after this list).
  • Audit claims of "human-level" AI. Ask what tasks, with what data, under what constraints, and who is accountable when it's wrong.
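
These rules lend themselves to a concrete pattern. Below is a minimal sketch in Python, with hypothetical names such as AISuggestion, Decision, and sign_off, of one way to keep AI output as a traceable suggestion that only becomes a decision after a named human signs off. It illustrates the idea under stated assumptions; it is not a clinical implementation or any particular product's API.

```python
# Minimal sketch (hypothetical names throughout): an AI output is recorded as a
# suggestion with provenance, and only becomes a decision after a named human
# reviewer signs off. Illustrates "tool, not teammate", traceability, and
# human-in-the-loop review; not a production or clinical implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AISuggestion:
    model_id: str       # which model/version produced the output (traceability)
    input_summary: str  # what data the model saw
    output: str         # the suggestion itself, e.g. a triage category
    confidence: float   # reported confidence, to be checked against calibration data


@dataclass
class Decision:
    suggestion: AISuggestion
    reviewer: str    # the accountable human
    accepted: bool   # the human may accept or override the suggestion
    rationale: str   # why, in the reviewer's own words
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def sign_off(suggestion: AISuggestion, reviewer: str,
             accepted: bool, rationale: str) -> Decision:
    """Turn an AI suggestion into a decision only with a named human reviewer."""
    if not reviewer:
        raise ValueError("No decision without an accountable human reviewer.")
    return Decision(suggestion, reviewer, accepted, rationale)


# Usage: the record keeps model version, inputs, confidence, reviewer, and
# rationale together, so "human-level" claims can be audited after the fact.
if __name__ == "__main__":
    s = AISuggestion(model_id="triage-model-v2",
                     input_summary="vitals + chief complaint",
                     output="category: urgent",
                     confidence=0.82)
    d = sign_off(s, reviewer="Dr. Example", accepted=False,
                 rationale="Exam findings contradict the model's category.")
    print(d)
```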

The bottom line

AI is impressive code running on silicon. The human mind shows properties - identity, understanding, and will - that don't line up with circuits, modules, or uploads. That isn't mysticism; it's consistent with what surgeons, neuroscientists, and patients have shown us.

Be kind to your robot. Just don't call her human.

Build responsible AI skills

If your team needs practical training for science or healthcare workflows - with a clear line between simulation and decision-making - explore role-based options at Complete AI Training.