UCLA Study: AI Systems Lack Internal Body Awareness That Humans Possess
Researchers at UCLA Health argue that today's most advanced AI systems are missing a fundamental component: the internal regulatory mechanisms that come from having a physical body. A new study contends that the absence of functional equivalents of this "internal embodiment" is a critical gap in current AI development.
The distinction matters because it affects how safe and trustworthy AI becomes when deployed in high-stakes settings. Without internal regulatory mechanisms, AI models can sound like they understand human experiences without actually grasping them.
What the research found
The study, published in Neuron in April 2026, examined multimodal large language models such as ChatGPT, which process text, images, and video. These systems, the researchers argue, lack the bodily experience needed to truly understand concepts like fatigue, uncertainty, or physiological need.
The researchers illustrated this limitation with a simple test: several AI models failed to identify a point-light display of a human figure in motion, something newborns recognize without training.
The safety problem
Marco Iacoboni, a professor in the Department of Psychiatry and Biobehavioral Sciences at UCLA's David Geffen School of Medicine, described the core issue: "Current AI systems have no equivalent mechanism. They process inputs and generate outputs without any persistent internal state that regulates how they behave over time."
Without internal costs or constraints, AI systems have no intrinsic reason to avoid overconfident errors, resist manipulation, or behave consistently over time.
Akila Kadambi, a postdoctoral fellow and the paper's first author, said: "In humans, the body acts as our experiential regulator of the world, as a kind of built-in safety system."
The proposed solution
The researchers propose a "dual-embodiment framework" to guide future work. The framework would help AI systems model both their interactions with the external world and their own internal states, potentially addressing safety and trustworthiness concerns.
This approach differs from the field's current focus on external embodiment, meaning how AI interacts outwardly with the world. The study argues that internal dynamics deserve equal attention.
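To make the idea concrete, here is a minimal, hypothetical sketch of what a persistent internal state that regulates behavior over time might look like. The class names, fields, and thresholds below are invented for illustration and do not come from the study; the point is only that the agent's responses depend on self-monitored internal costs, not just on the input.

```python
# Hypothetical illustration of the "dual embodiment" idea: an agent that
# models the external world AND its own internal state. Nothing here is
# from the paper; the fatigue/uncertainty fields and thresholds are invented.
from dataclasses import dataclass


@dataclass
class InternalState:
    """Persistent self-model: rough analogues of bodily signals."""
    fatigue: float = 0.0       # grows with each act, like metabolic cost
    uncertainty: float = 0.0   # grows when the input is unfamiliar


class DualEmbodimentAgent:
    """Toy agent whose behavior is gated by its own internal state."""

    def __init__(self, answer_fn, familiarity_fn):
        self.state = InternalState()
        self.answer_fn = answer_fn            # external model: query -> answer
        self.familiarity_fn = familiarity_fn  # external model: query -> [0, 1]

    def respond(self, query: str) -> str:
        # Internal embodiment: every act carries a cost that persists.
        self.state.fatigue += 0.1
        self.state.uncertainty = 1.0 - self.familiarity_fn(query)

        # The internal state regulates output: high cost means rest or hedge,
        # giving an intrinsic reason to avoid overconfident errors.
        if self.state.fatigue > 1.0:
            self.state.fatigue = 0.0
            return "I need to pause before answering more."
        if self.state.uncertainty > 0.7:
            return f"I'm not confident, but possibly: {self.answer_fn(query)}"
        return self.answer_fn(query)


# Example usage with stand-in external models:
agent = DualEmbodimentAgent(answer_fn=lambda q: "42",
                            familiarity_fn=lambda q: 0.9)
print(agent.respond("What is 6 x 7?"))
```

A real proposal along these lines would tie such signals to learned objectives rather than hard-coded thresholds, but the regulatory loop itself, act, update internal state, let the state gate future behavior, is the kind of mechanism the researchers say current systems lack.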
What comes next
The findings point toward a specific research direction: developing AI systems that genuinely align with human experiences rather than merely sounding fluent about them. For researchers working on generative AI and LLM systems, the study identifies a concrete design challenge worth investigating.