A new study from UCLA Health argues that today's most advanced artificial intelligence systems are missing a fundamental property of human cognition — and that the gap could have significant consequences for AI safety.
Published in the journal Neuron, the paper proposes that current AI systems lack two essential ingredients that humans take for granted: a body that interacts with the physical world, and an internal awareness of that body's own states, such as fatigue, uncertainty or physiological need. The researchers call this combined property "internal embodiment."
"While there is a current focus in world modeling on external embodiment, such as our outward interactions with the world, far less attention is given to internal dynamics," said Akila Kadambi, a postdoctoral fellow in the Department of Psychiatry and Biobehavioral Sciences at UCLA's David Geffen School of Medicine and the paper's first author. "AI systems right now have no equivalent. They can sound experiential, whether they should be or not, and that's a real problem for many reasons, especially when these systems are being deployed in consequential settings."
To illustrate the gap, the researchers showed leading AI models a point-light display — a simple image of dots suggesting a human figure in motion that even newborns can recognize. Several models failed to identify the figure as a person; one described it instead as a constellation of stars.
"Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation or behave consistently," said Dr. Marco Iacoboni, professor in the Department of Psychiatry and Biobehavioral Sciences and a senior author on the paper.
The authors propose a "dual-embodiment framework" to guide future AI development and call for new benchmarks to evaluate whether systems can monitor their own internal states.
Edited by SMDP Staff