Every few months, headlines declare that artificial general intelligence (AGI) is “just around the corner.” We are told that large language models (LLMs) like GPT, Claude, and Gemini—trained on staggering amounts of human text—are already inching toward superintelligence. Their dazzling fluency in language and their ability to pass exams, draft essays, write code, and even simulate reasoning all seem to suggest that we are on the cusp of building a machine mind.
But I want to press pause here. Because what we call “intelligence” in humans is not just about generating words, nor even about reasoning through problems. It is about seeing the world whole, responding to it with intuition, emotion, and embodied understanding, and only then clothing that deeper comprehension in words. This distinction—between what language expresses and what truly grounds human experience—explains why our current paradigm in AI, for all its utility, cannot get us to AGI.
In The Master and His Emissary, Iain McGilchrist revisits a long-misunderstood truth about the brain: its two hemispheres are not redundant halves but complementary modes of attending to the world. The left hemisphere is narrow, focused, analytic, language-dominant. It parses, categorizes, and manipulates. The right hemisphere, by contrast, is broad, integrative, and contextual. It sees relationships, perceives patterns, grasps metaphor, appreciates music, understands lived experience.
McGilchrist describes the left hemisphere as the emissary—a skilled servant, but one that works best when guided by the master, the right hemisphere. The right brain apprehends reality in all its richness; the left brain then abstracts from that reality to produce tools, language, and strategies. When the emissary takes over—when we live only in the world of categories, symbols, and words—we lose the very grounding that makes those abstractions meaningful.
Now think about large language models. They are the left hemisphere writ large. They excel at manipulating symbols, following statistical patterns, recombining fragments of text into coherent wholes. But what they lack—utterly and completely—is the right hemisphere’s grounding in lived experience, perception, embodiment, and holistic context. They are emissaries without a master.
And that is a profound limitation.
Jonathan Haidt, in The Righteous Mind, deepens this critique. His research into moral psychology shows that our reasoning mind (the rider) is mostly a PR department for our intuitions (the elephant). We feel first, we act, we intuit—and only afterward do we rationalize those actions with language. Language, in other words, evolved not to discover truth but to manage reputation, justify choices, and bind groups together.
This has huge implications for AI. If language is not the wellspring of thought but its after-the-fact rationalization, then training a machine exclusively on language is like building a city on scaffolding without laying a foundation. You can get impressive façades, yes—but the deeper machinery of thought, the evolutionary machinery that produces intuition, emotion, and values, is missing.
Our machines are riders with no elephants. Left brains with no right.
This is why LLMs feel so uncannily human while being, in essence, profoundly alien. They speak our language better than most of us ever could. They can draft Shakespearean sonnets, summarize physics papers, and counsel us on relationships. But notice: they never see. They never feel. They never apprehend the world directly. They are entirely derivative.
When humans speak, words are tethered to perception, to a body moving through space, to a nervous system shaped by millions of years of survival. When an LLM speaks, words are tethered only to other words. It’s as if you had an infinitely large dictionary that could rearrange itself in perfect syntax but had never once stepped outside to feel the wind or smell the rain.
This is why even the best models still hallucinate facts, fail to grasp basic common sense, and collapse when pushed outside the training distribution. They lack the grounding that comes from the right hemisphere’s way of knowing—the kind that sees wholes rather than parts, feels before it rationalizes, perceives before it names.
McGilchrist’s warning is that when the left hemisphere dominates—when abstraction and manipulation replace embodied understanding—we become brittle, disconnected, and blind to meaning. A society ruled by the emissary loses sight of what the master knows: that life is not a puzzle to be solved but a mystery to be inhabited.
With LLMs, we are building precisely such a left-brained intelligence. One that can parse infinitely, but not perceive; justify endlessly, but never feel. If we mistake this for superintelligence, we risk creating systems that amplify our cleverness while hollowing out our wisdom.
This isn’t just a philosophical worry. It has practical stakes. A purely “left-brain” machine may optimize relentlessly for stated goals while missing the broader context, producing catastrophic unintended consequences. It may rationalize convincingly without ever grasping the underlying reality it claims to describe. In short: it may become an emissary run amok.
If McGilchrist and Haidt are right, then AGI will not emerge from better language models alone. It will require machines that engage the world more like the right hemisphere does: through perception, embodiment, intuition, and holistic context, not merely through symbols recombined into more symbols.
Until then, what we have are brilliant mimics of language, not minds.
Of course, it’s possible I am underestimating the power of language itself. Some theorists argue that language is not merely post hoc rationalization but the very engine of abstract thought. If that’s true, then scaling LLMs—especially when coupled with multimodal perception—might eventually yield emergent properties akin to right-brain understanding.
Another possibility is that embodiment can be simulated at scale. If virtual environments grow rich enough, and agents learn by acting within them, perhaps the boundary between text-based rationalization and embodied intuition could blur.
Finally, it may be that my analogy over-commits to hemispheric differences. The brain is not literally split into “language left” and “holistic right” in such a tidy way. McGilchrist’s framework is powerful, but the reality is more nuanced. AI might stumble into intelligence by paths very unlike our own.
For now, though, the lesson is clear: we should be wary of mistaking linguistic brilliance for general intelligence. Our brains evolved with a master and an emissary, with an elephant and a rider. We live first through intuition, perception, and embodied being, and only afterward rationalize it with words.
LLMs are all rider, no elephant. All emissary, no master. All left brain, no right.
If we want to build true AGI—or avoid the dangers of a world ruled by clever but ungrounded machines—we will need to remember this. Intelligence is not what happens in words. It is what happens before words.