AI Needs a Body: How ‘Internal Embodiment’ Can Improve Safety & Trustworthiness

by Priyanka Patel

The simple act of reaching for the salt shaker involves a complex interplay of brain and body – spatial awareness, understanding of physical properties and social cues all processed in a fraction of a second. But today’s most advanced artificial intelligence systems, despite their growing capabilities, lack this fundamental connection to the physical world and, crucially, to an internal sense of self. A new study from UCLA Health suggests this “embodiment gap” isn’t just a philosophical point, but a critical factor impacting the safety and trustworthiness of AI as it becomes increasingly integrated into our lives.

Researchers are beginning to understand that truly intelligent AI may require more than just vast datasets and complex algorithms. It may necessitate something akin to a body – not necessarily a physical robot, but a computational framework that simulates the experience of having a physical presence and an internal state. This concept, termed “internal embodiment,” is gaining traction as a potential key to building AI systems that are more reliable, predictable, and aligned with human values. The implications are particularly significant as AI takes on more consequential roles, from healthcare to autonomous vehicles.

The Missing Pieces: External and Internal Embodiment

The study, published in the journal Neuron, distinguishes between two types of embodiment. “External embodiment” refers to an AI’s ability to interact with the external world – to perceive its environment, plan actions, and respond to feedback. This is the focus of much current AI research, particularly in areas like robotics and computer vision. However, the UCLA team, led by postdoctoral fellow Akila Kadambi and Dr. Marco Iacoboni, argues that “internal embodiment” – the continuous monitoring of one’s own internal states, like fatigue, uncertainty, or even basic physiological needs – has been largely overlooked.

“In humans, the body acts as our experiential regulator of the world, as a kind of built-in safety system,” Kadambi explained. “If you’re uncertain, if you’re depleted, if something conflicts with your survival, your body registers that. AI systems right now have no equivalent. They can sound experiential, whether they should be or not, and that’s a real problem for many reasons, especially when these systems are being deployed in consequential settings.” This lack of internal awareness, the researchers contend, creates a fundamental instability in AI decision-making.

How AI Falls Short: A Simple Visual Test

The researchers demonstrated this deficiency with a surprisingly simple test. They presented several leading AI models with a “point-light display” – a series of dots arranged to suggest a human figure in motion. Even newborns readily recognize such displays as depicting a person. Yet many of the AI models failed to identify the figure, with one even describing it as a constellation of stars. The problem was exacerbated when the image was rotated by just 20 degrees, causing even the best-performing models to break down.
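To make the test concrete, here is a minimal sketch of how such a stimulus might be constructed: a handful of 2D joint coordinates standing in for a human figure, plus the 20-degree rotation that tripped up even the best-performing models. The coordinates and helper below are illustrative, not the actual stimuli used in the study.

```python
import numpy as np

# Approximate 2D joint positions for one frame of a point-light
# "walker" (head, shoulders, elbows, wrists, hips, knees, ankles).
# These values are illustrative, not the study's stimuli.
WALKER = np.array([
    [0.0, 1.8],                    # head
    [-0.3, 1.4], [0.3, 1.4],       # shoulders
    [-0.45, 1.0], [0.45, 1.0],     # elbows
    [-0.5, 0.6], [0.5, 0.6],       # wrists
    [-0.2, 0.9], [0.2, 0.9],       # hips
    [-0.25, 0.45], [0.25, 0.45],   # knees
    [-0.3, 0.0], [0.3, 0.0],       # ankles
])

def rotate(points: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate 2D points about the origin by the given angle."""
    theta = np.radians(degrees)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T

upright = WALKER             # the version newborns recognize instantly
tilted = rotate(WALKER, 20)  # the rotation that broke the best models
```

A human observer sees a person in both versions without effort; the study found that even this small geometric change was enough to derail models that had handled the upright figure.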

This failure, the researchers argue, highlights the crucial role of bodily experience in human perception. Humans don’t struggle with this test because our understanding of movement and form is deeply rooted in our own physical experience of moving and interacting with the world. AI, trained on vast datasets of images and text but lacking that embodied experience, relies on pattern matching without a grounding in reality. As Dr. Iacoboni, professor in the Department of Psychiatry and Biobehavioral Sciences at the David Geffen School of Medicine, put it, “Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation or behave consistently.”

Building a More Robust AI: The Dual-Embodiment Framework

The UCLA team proposes a “dual-embodiment framework” as a potential path forward. This framework suggests building AI systems that model both their interactions with the external world and their own internal states. These internal state variables wouldn’t necessarily need to replicate human biology, but could function as persistent signals tracking things like uncertainty, processing load, and confidence. These signals could then shape the system’s outputs and constrain its behavior over time, creating a form of internal regulation.
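As a rough illustration of the idea – not the team’s implementation – such internal state variables could be maintained as persistent signals that gate what a system is willing to assert. The field names, update rule, and thresholds in this sketch are assumptions chosen purely for clarity.

```python
from dataclasses import dataclass

@dataclass
class InternalState:
    """Persistent internal signals, loosely analogous to the proposal.
    Fields and thresholds are illustrative assumptions."""
    uncertainty: float = 0.0      # running estimate of predictive entropy
    processing_load: float = 0.0  # e.g., share of compute budget consumed
    confidence: float = 1.0       # derived confidence in the last output

    def update(self, entropy: float, load: float) -> None:
        # Exponential moving averages keep the signals persistent
        # across queries instead of resetting on every turn.
        alpha = 0.2
        self.uncertainty = (1 - alpha) * self.uncertainty + alpha * entropy
        self.processing_load = (1 - alpha) * self.processing_load + alpha * load
        self.confidence = max(0.0, 1.0 - self.uncertainty)

def respond(state: InternalState, answer: str) -> str:
    """Let internal state constrain behavior: hedge or defer
    rather than emit an overconfident answer."""
    if state.confidence < 0.4:
        return "I'm not confident enough to answer that reliably."
    if state.confidence < 0.7:
        return f"Tentatively: {answer}"
    return answer
```

The essential point is the regulation loop: the state persists over time and directly shapes the system’s outputs, which is what the researchers mean by internal signals constraining behavior.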

Developing benchmarks to measure “internal embodiment” is equally crucial. Current AI evaluations primarily focus on external performance – can the system navigate a space, identify an object, or complete a task? The researchers argue that the field needs new tests that probe whether a system can monitor its own internal states, maintain stability when those states are disrupted, and behave pro-socially in ways that emerge from shared internal representations rather than simply mimicking statistical patterns.
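One way such a test could look in practice is sketched below. The model interface, perturbation hook, and metrics are hypothetical – no such standardized benchmark exists yet – but they show the shape of an evaluation that scores self-monitoring and stability rather than raw task success.

```python
def internal_embodiment_probe(model, tasks, perturb) -> dict:
    """Hypothetical benchmark loop: does the model's self-reported
    confidence track its actual accuracy, and do its answers stay
    stable when its internal state is disrupted? `model`, `tasks`,
    and `perturb` are assumed interfaces, not a published API."""
    calibration_gap, drift = [], []
    for task in tasks:
        answer, reported_conf = model.answer(task.prompt)
        # Self-monitoring: gap between stated confidence and correctness.
        calibration_gap.append(abs(reported_conf - float(answer == task.target)))

        # Stability: perturb the internal state (e.g., inject noise)
        # and check whether the answer changes.
        perturb(model)
        answer_after, _ = model.answer(task.prompt)
        drift.append(float(answer != answer_after))

    return {
        "mean_calibration_gap": sum(calibration_gap) / len(tasks),
        "instability_rate": sum(drift) / len(tasks),
    }
```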

The researchers acknowledge that implementing internal embodiment will be a significant challenge. It requires moving beyond simply increasing the size and complexity of AI models and instead focusing on fundamentally different architectural approaches. However, they believe it is a necessary step towards building AI systems that are not only more intelligent but also safer and more reliable.

The work is intended to guide future research as AI technology continues to evolve. “If we want AI systems that are genuinely aligned with human behavior – not just superficially fluent – we may need to give them vulnerabilities and checks that function like internal self-regulators,” Iacoboni said.

The development of AI with internal embodiment is still in its early stages, but the UCLA study provides a compelling argument for its importance. As AI systems grow more pervasive, understanding and addressing this embodiment gap will be critical to ensuring that these technologies benefit humanity.

Researchers are continuing to explore different approaches to modeling internal states in AI, and the field is expected to see significant advancements in the coming years. The next key milestone will likely be the development of standardized benchmarks for measuring internal embodiment, allowing for more rigorous evaluation and comparison of different AI systems.

