
Apple’s new AI study underscores what philosophers have long argued: the mind cannot be severed from the body.
Earlier this week, Apple researchers published a sobering analysis of the reasoning limits of today’s most advanced AI models—including OpenAI’s GPT-4, Claude, Gemini, and DeepSeek. When given logic puzzles of increasing difficulty—like the Tower of Hanoi or complex River Crossing scenarios—models that performed well on simple tasks collapsed entirely as complexity rose. Accuracy dropped to near-zero. Worse, the models appeared to “give up” rather than push harder, using fewer tokens and halting their own reasoning early.
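For readers unfamiliar with it, the Tower of Hanoi illustrates how sharply these puzzles scale: moving n disks between three pegs takes a minimum of 2^n − 1 moves, so every added disk doubles the length of a correct solution. A minimal recursive solver, offered here as an illustrative sketch rather than anything from Apple’s evaluation code, makes the point:

```python
# Illustrative sketch: optimal Tower of Hanoi solver.
# Moves n disks from `source` to `target`, using `spare` as scratch space.
def hanoi(n, source, target, spare, moves):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks out of the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top of it

for n in (3, 10, 20):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(n, len(moves))  # 3 -> 7 moves, 10 -> 1,023 moves, 20 -> 1,048,575 moves
```

The algorithm itself is trivial; what explodes is the length of the correct move sequence, and it was along exactly this axis of scaling that the models’ accuracy collapsed.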
This wasn’t just a failure of computation—it was a failure of commitment.
As outlets including The Guardian and MarketWatch have noted, Apple’s study lands at a tense moment: on the eve of WWDC, and amid growing hype around artificial general intelligence (AGI). But beyond the strategic timing, the findings reinforce a more fundamental truth:
You cannot simulate human intelligence without a body.
The Mind-Body Divide: An Unsolved Problem
AI researchers often talk about “reasoning,” “memory,” and even “intelligence” as though they are abstract computations. But human intelligence didn’t evolve in the abstract. It evolved to serve survival—to keep a body alive in an unpredictable physical world, in service of genetic continuation.
Take away the body, and you take away the purpose. You remove:
- Affective feedback: Fear, hunger, desire—what drives decisions
- Sensorimotor grounding: The loop between action and consequence
- Developmental embodiment: Learning through motion, touch, risk
- Evolutionary salience: The difference between “wrong” and “fatal”
Today’s AI models lack all of these. They are pattern-matchers in a void, trained to sound intelligent without needing to be intelligent. Apple’s study shows that once the pattern breaks, once novelty exceeds training scope, LLMs don’t generalize; they collapse.
Can AGI Ever Be Real Without a Body?
This isn’t a new idea. Philosophers like Maurice Merleau-Ponty and cognitive scientists like Francisco Varela have long argued that consciousness and cognition are embodied. More recently, roboticists like Rodney Brooks and AI critics like Gary Marcus have echoed the same: Without a body to navigate, sense, and suffer, “intelligence” remains an illusion.
If we ever achieve true AGI, it will likely require:
- A physical or embodied form with complex sensory inputs
- The ability to fail and adapt in a high-stakes, open world
- An internal motivation system that mimics drive—not just prompts
- A developmental arc, not just pre-training on scraped data
None of this exists in today’s LLMs. Nor can it, as long as we treat intelligence as a software problem.
What This Means for the AI Conversation
Apple’s paper may have strategic motives—it casts doubt on rival models while Apple ramps up its own on-device AI—but the message stands. If we are to move beyond tools toward something that truly thinks, we must wrestle with the mind-body problem, not just parameter count.
That means:
- Shifting attention from delusions about server-based AGI to embodied cognition research
- Questioning benchmarks that merely reward shallow imitation
- Rejecting the idea that “sounding smart” is the same as “being smart”
We don’t need to fear AGI, even though society is clearly far from ready for it; witness the current palaver over LLMs taking jobs. We must understand AGI, and perhaps redefine it, not as a digital endpoint but as a living, evolving, embodied being. That is at least decades, possibly centuries, away. It may be impossible.
But if it is possible, it will not be built in a data center. It will be born into a body.
(Article drafted by ChatGPT. Edited by Roger Harris)
