Sunday, March 1, 2026

Something interesting to look at on Sunday morning

Words, code, guardrails & weasels: OpenAI, Anthropic, and the Pentagon

I've copied the entire “tweet” below in case you don't want to click. But you might want to glance through the thread. This is the “tweet” where Gimus says his badge stopped working.

* * * * *

I work in government affairs at OpenAI.

My job is federal partnerships. When an agency wants our models, I make sure the paperwork is beautiful. Paperwork is my love language. On my desk I have a framed quote that says "Policy Is Just Code That Runs on People." I bought the frame at Target. It was in the Live Laugh Love section. I did not see the irony at the time. I still don't.

We had a good week.

On Monday, we closed a $110 billion funding round. One hundred and ten billion dollars. Amazon put in fifty. Nvidia put in thirty. Valuation: $730 billion. The largest private fundraise in the history of anyone raising anything. There was a company-wide Slack message about it. The message used the word "transformative" twice and the word "safety" once. The word "safety" was in the last sentence, after the link to the new branded hoodie pre-order. The hoodies are nice. They're the soft kind.

On Tuesday, we fired a research scientist for insider trading on Polymarket.

Why Gemini 3.1 is so good [long chains of reasoning, across disciplinary boundaries]

YouTube:

What's really happening when Google ships the smartest AI model on the planet, prices it at a seventh of the competition, and doesn't care if you keep using Claude or ChatGPT? The common story is that this is another benchmark race—but the reality is more interesting when the company generating $100 billion in annual free cash flow is playing a fundamentally different game. In this video, I share the inside scoop on why Gemini 3.1 Pro reveals more about problem types than model rankings:

  • Why Google's vertical stack from TPU silicon to Nobel Prize research is an impregnable fortress
  • How Deep Think solved 18 previously unsolved problems across math, physics, and economics
  • What separates reasoning problems from effort, coordination, ambiguity, and emotional intelligence problems
  • Where the question "which AI should I use" becomes the wrong question entirely

For knowledge workers watching the model landscape differentiate, the margin between routing models well and using one model for everything is widening every single month.

Chapters

00:00 Google Shipped the Smartest Model and Doesn't Care If You Use It
03:15 Arc AGI 2: The Largest Single-Generation Reasoning Gain Ever
05:30 What Google Optimized For vs Anthropic and OpenAI
07:10 Demis Hassabis: Solve Intelligence, Then Solve Everything Else
09:45 Google's Vertical Stack: From Transistor Design to Protein Folding
13:20 Why Google Can Afford to Lose the Model Race
15:00 What Gemini 3.1 Pro Is and Isn't
17:30 Naked Reasoner vs Equipped Reasoner vs Specialist Coder
19:45 Deep Think: Disproving Conjectures and Catching Peer Review Errors
23:10 Hard Is Not One Thing: Six Types of Difficult Problems
28:40 Which Problems Does Pure Reasoning Actually Help?
32:15 What This Means for Your Work Tomorrow
35:50 Google's Quiet Game: Building the Thing Underneath the Thing

These two short passages give you a flavor:

14:03: “The model crossed disciplinary boundaries that human specialists very rarely cross because the model doesn't see disciplinary boundaries and that is one of the strengths of an AI model.”

15:10: “Gemini is good for certain kinds of problems. And they share specific characteristics. The inputs are well-defined, like a protein sequence. The problem can be stated extremely precisely. And the solution requires a long and sustained chain of logical deduction that a human mind can verify but often cannot generate without years of specialized training.”

If you don’t want to watch the whole thing, start with “What Gemini 3.1 Pro Is and Isn’t,” @ 15:00.

Carving at the joints: Plato, Zhuangzi, Guo Xiang

First, a prompt I gave Claude 5.4. Then Claude’s reply.

* * * * *

There’s a cliché about carving Nature at its joints.

There’s one version from Plato’s Phaedrus. Socrates has likened a well-formed speech to an animal with its various appropriately arranged parts and is now examining two different speeches on love (265e-266a):

... we are not to attempt to hack off parts like a clumsy butcher, but to take example from our two recent speeches. The single general form which they postulated was irrationality; next on the analogy of a single natural body with its pairs of like-named members, right arm or leg, as we say, and left, they conceived of madness as a single objective form existing in human beings. Wherefore the first speech divided off a part on the left, and continued to make divisions ...

There’s a different version from Guo Xiang’s redaction of Zhuangzi (from Wikipedia):

Cook Ting was cutting up an ox for Lord Wen-hui. At every touch of his hand, every heave of his shoulder, every move of his feet, every thrust of his knee, zip! zoop! He slithered the knife along with a zing, and all was in perfect rhythm, as though he were performing the dance of the Mulberry Grove or keeping time to the Ching-shou Music.

“Ah, this is marvelous!” said Lord Wen-hui. “Imagine skill reaching such heights!”

Cook Ting laid down his knife and replied, “What I care about is the Way, which goes beyond skill. When I first began cutting up oxen, all I could see was the ox itself. After three years I no longer saw the whole ox. And now I go at it by spirit and don't look with my eyes. Perception and understanding have come to a stop and spirit moves where it wants. I go along with the natural makeup, strike in the big hollows, guide the knife through the big openings, and follow things as they are. So I never touch the smallest ligament or tendon, much less a main joint.”

Plato dates: 428-348 BC. Guo Xiang dates: 252-312 AD.

Is there any chance that this idea made its way from Ancient Greece to Ancient China? Or is it likely that we have a case of independent invention? After all, butchering an animal is a common enough task, and the difference between an unskilled butcher and a highly skilled one must have been apparent to many people.

The question of transmission versus independent invention is genuinely interesting here, and the honest answer is that scholars are quite uncertain, though the weight of evidence favors independent invention.

The chronological gap is suggestive but not decisive. The Zhuangzi text itself predates Guo Xiang — Zhuangzi lived roughly 369-286 BC, making him a near-contemporary of Plato, not a distant inheritor. Guo Xiang's role was that of editor and commentator, so the Cook Ding story almost certainly originates in the 4th-3rd century BC, not the 3rd-4th century AD. This matters a great deal, because it means we're not looking at a case where ideas could have filtered eastward via the Silk Road over centuries — both texts are roughly contemporaneous.

There was contact between the Greek and Chinese worlds in this era, but it was extremely attenuated. Alexander's campaigns reached Bactria and the Punjab, and there are documented cases of idea-transmission in specific domains (certain astronomical and mathematical concepts, some artistic motifs). But the philosophical literature shows no clear fingerprints of cross-cultural borrowing in this period. The intermediary links needed to carry a specific Platonic metaphor from Athens to the Warring States courts of China in the 4th-3rd century BC are simply not there.

The more compelling case for independent invention rests on a few observations. First, as you note, skilled butchery was universally observable. Any culture that slaughtered animals for food — which is to say, every culture — could notice that an expert carver seems to find the natural seams while a novice hacks. The perceptual phenomenon is available everywhere. Second, and more importantly, the two passages are actually doing quite different philosophical work with the same raw material. Plato's Socrates is making an epistemological and rhetorical point about how to carve up concepts — how a good speech should divide a subject at its natural joints rather than arbitrarily. The emphasis is on correct classification, on getting the logical structure of reality right. Cook Ding's lesson is about something closer to wu wei — effortless action, yielding to the natural order, the dissolution of the self-conscious subject into the flow of the Way. The knife goes where the ox already opens; the cook stops trying. These are recognizably different philosophical preoccupations, and each fits neatly into its own tradition without requiring outside influence.

So what you have is a case where a shared human experience — watching an expert butcher work — gets recruited into philosophical argument independently in two places, but gets recruited to illuminate quite different problems. That's actually philosophically interesting in its own right: it suggests the analogy has a kind of natural aptness that makes it available to very different thinkers facing very different questions. The butcher doesn't just carve the ox at its joints; he also, apparently, carves the history of philosophy at a joint.

Saturday, February 28, 2026

Elysian Park, Hoboken

Computation, Chess, and Language in Artificial Intelligence

New working paper. Title above, links, abstract, contents and introduction below:

Academia.edu: https://www.academia.edu/164885566/Computation_Chess_and_Language_in_Artificial_Intelligence
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6319062
ResearchGate: https://www.researchgate.net/publication/401355671_Computation_Chess_and_Language_in_Artificial_Intelligence

Abstract: This paper reexamines the foundations of artificial intelligence by contrasting chess and natural language as paradigmatic domains. Chess, long treated as a benchmark for intelligence, is finite, rule-governed, and geometrically well-defined. It lends itself naturally to symbolic search and evaluation. Natural language, by contrast, operates in an unbounded and geometrically complex reality. Its rules are open-ended, its objectives diffuse, and its domain inseparable from embodied experience. With chess as its premier case – McCarthy’s “Drosophila of AI” – AI has been guided by a deeper assumption: that the first principles of intelligence reduce to the first principles of computation. Drawing on Miriam Yevick’s distinction between symbolic and neural computational regimes, I propose that intelligence must be understood as operating in a geometrically complex world under finite resource constraints. Embodiment is therefore a formal condition of intelligence, not an incidental feature. Recognizing the structural difference between bounded games and open-ended cognition clarifies both the historical trajectory of AI and the conceptual limits of current systems.

Contents

Introduction: Chess, Language, and Intelligence 3 
Chess and Language as Paradigmatic Cases for Artificial Intelligence 5 
Three Principles of Intelligence (That Aren't Principles of Computation) 12 
Chronology of Chess, Language, and AI 15

Introduction: Chess, Language, and Intelligence

Chess has been a central concern of AI from the beginning. AI researchers didn’t become interested in natural language until the 1970s. Before that, computational research on natural language was the domain of computational linguistics (CL), which started with machine translation (of texts from one natural language to another) as its primary problem. Thus we have two different disciplines, AI and CL.

In a sense, AI was fundamentally a philosophical exercise. It was an attempt to demonstrate, in effect, that we could understand the human mind in terms of computation. But rather than advance its philosophical objective through argument, it chose computational demonstration as its mode of expression. Chess became a central concern for two reasons: 1) It was widely regarded as exhibiting the pinnacle of human reasoning ability; if we could create a computer program to play championship-level chess, then we could create a computer program capable of any cognitive or even perceptual task humans can do. 2) The nature of chess made it well-suited for computational investigation.

The article that opens this working paper – Chess and Language as Paradigmatic Cases for Artificial Intelligence – concentrates on this and then goes on to make the point that language is utterly unlike chess in this respect. The chess domain is bounded and well-defined. Natural language is not; it is ill-defined and unbounded.

That’s as far as I got in the article, but I had been aiming for an argument that AI is still, in effect, mesmerized by the chess paradigm. I didn’t make it that far because language is so obviously different from chess that it is difficult to see how anyone could make that mistake.

What I have come to realize, only after I’d finished the article, is that it isn’t so much chess that has mesmerized AI. Rather, it is computation itself. AI has been implicitly assuming that the First Principles of intelligence reduce to the First Principles of Computing. The first principles of computing can be found in the work of Alan Turing (the abstract idea of computing) and others.

The first principles of intelligence are more stringent. As Claude put it in a recent dialog:

First principle of intelligence: Must operate in unbounded, geometrically complex physical reality with finite resources.

Those two qualifications, an unbounded, geometrically complex reality, and finite computational resources, change the nature of the problem considerably. I note, in passing, that this allows us to assign formal significance to the concept of embodiment, for it is embodiment that commits intelligence to operating with finite resources in a geometrically complex universe.

Miriam Yevick’s 1975 paper, “Holographic or Fourier Logic,” is the crucial document, but it’s been forgotten. Using identification in the visual domain as her case, she showed that, where we are dealing with geometrically simple objects, sequential symbolic processing is the most efficient computational regime. But when we are dealing with geometrically complex objects, neural net processing is the most efficient computational regime. AI started out with symbolic processing in the 1950s and arrived at neural nets in the 2010s. But it hasn’t explicitly recognized that one must fit the mode of processing to the nature of the world. In that (perhaps a bit peculiar) sense, the researchers in the currently-dominant paradigm don’t know what they’re doing.

The second article in this working paper, Three Principles of Intelligence (That Aren't Principles of Computation), discusses this in more detail. I had it generated by Claude 4.5 after a long series of dialogs over several days.

The last article is a chronology of events in the history of chess and language in AI.

Friday, February 27, 2026

Are You Going With Me? – Pat Metheny

Friday Fotos: An urban canyon in Hoboken, NJ

Two Ways to Use AI: Homo Economicus vs. Homo Ludens

The academy has a problem, and it's been getting worse for over a century.[1]

We organize knowledge into disciplines—history, psychology, neuroscience, linguistics, economics—each with its own journals, conferences, and vocabulary. This structure, inherited from 19th-century German universities, serves one purpose brilliantly: it lets specialists gather details efficiently within well-defined boundaries.

But knowledge doesn't respect boundaries. The most important questions—How does the mind work? What makes us creative? Why do societies change?—require insights from multiple disciplines. The pattern you need to see often spans several "bins" of specialized knowledge.

Here's the paradox: we've been talking about interdisciplinary work for decades. Universities have interdisciplinary centers everywhere. Yet the actual structure of academic life—hiring, promotion, publication, funding—still runs on disciplinary rails laid down 150 years ago.

Now we have large language models. And we face a choice about how to use them.

The Economicus Approach

One path is to use LLMs to amplify and accelerate current arrangements. Let's call this the Homo economicus approach—the economic human, focused on optimizing production.

In this mode, LLMs become tools for:

  • Writing literature reviews faster
  • Reviewing papers for journals more efficiently
  • Generating incremental research at scale
  • Producing more publications per year
  • Staying safely within disciplinary boundaries

This sounds productive. More papers, faster reviews, greater output. But it doubles down on exactly what's broken. We already produce too many narrow specialist papers that too few people read. Using AI to produce more of them faster just amplifies the dysfunction.

The economicus approach treats knowledge production like manufacturing: maximize output, minimize cost, optimize existing processes. Stay in your lane. Don't take risks. Generate the next incremental advance.

The Ludens Alternative

There's another path. Call it Homo ludens—the playing human, focused on exploration and discovery.

In this mode, LLMs become tools for genuine cross-disciplinary integration. Not producing papers, but discovering connections. Not automating existing processes, but enabling new formations.

Here's what this looks like in practice:

Strategic Search Across Disciplines

Say you're investigating how language develops in children. Traditional approach: read the developmental psychology literature, maybe venture into linguistics if you're bold.

Ludens approach with LLMs: "Find work from any field that addresses the relationship between motor development and symbolic capacity."

The LLM doesn't care about departmental boundaries. It surfaces relevant work from neuroscience, evolutionary biology, comparative psychology, and anthropology—connections that specialists, confined to their silos, would miss.
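A minimal sketch of what such a boundary-crossing query might look like in code. Here `complete()` is only a stand-in for whatever LLM API you happen to use, and the field list and prompt wording are illustrative assumptions, not a real interface:

```python
# Sketch of a cross-disciplinary search prompt. The field list and the
# prompt wording are illustrative; `complete()` below is a placeholder
# for any LLM API call.

FIELDS = ["neuroscience", "evolutionary biology",
          "comparative psychology", "anthropology"]

def build_search_prompt(question: str, fields: list[str]) -> str:
    """Frame a question so the model ranges across the named disciplines
    instead of defaulting to the questioner's home field."""
    field_list = ", ".join(fields)
    return (
        f"Find work from any field -- including {field_list} -- that "
        f"addresses the following, and note which discipline each "
        f"result comes from:\n\n{question}"
    )

prompt = build_search_prompt(
    "What is the relationship between motor development "
    "and symbolic capacity?",
    FIELDS,
)
print(prompt)
# In practice you would then send it off: response = complete(prompt)
```

The design point is small but real: the discipline list lives in the prompt, not in the searcher's head, so the same question can be re-asked with a different set of fields at essentially zero cost.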

Constraint Satisfaction Across Domains

Rigorous integration requires checking whether your ideas satisfy constraints from multiple fields simultaneously. Is your model of language acquisition consistent with what we know about brain development? Does it align with evolutionary timescales? Does it match observed behavior?

An LLM can rapidly check these cross-domain constraints: "Does this cognitive science claim contradict findings in neurobiology? What about developmental timelines?" It doesn't replace judgment, but it surfaces contradictions and connections that would take months of reading to discover.

Pattern Discovery in Unexpected Places

The most valuable insights often come from recognizing that two fields are studying the same phenomenon with different vocabularies. LLMs excel at this kind of pattern matching across terminological boundaries.

"What work in any discipline addresses hierarchical control systems switching between modes?" The answer might come from neuroscience (neural modulation), robotics (control architectures), or organizational psychology (decision-making frameworks). These aren't citations to pad your bibliography—they're genuinely different perspectives on the same deep problem.

The Center-Out Method

Start with a specific case—a text, an event, a phenomenon—and radiate outward to topics it touches. An LLM can help map these connections systematically: given this particular case study, what frameworks from different disciplines illuminate different aspects of it? [2]

This mirrors how actual insight works: you're wrestling with something specific, and you need whatever intellectual tools help, regardless of which department developed them.

Why This Matters

The difference isn't just practical—it's philosophical.

Economicus treats LLMs as labor-saving devices. Do what we already do, but faster and cheaper. This keeps us trapped in the existing system, just at higher speed.

Ludens treats LLMs as exploration tools. Find patterns we couldn't see before. Make connections that disciplinary blinders obscured. Enable the integrative work that institutions make nearly impossible.

The economicus approach optimizes local maxima—you get better and better at what you're already doing. The ludens approach helps you find new maxima you didn't know existed.

The Play Element

There's a deeper reason the ludens approach matters: genuine discovery requires play.

Not play as opposed to serious work, but play in the sense of free exploration before commitment. Trying unusual combinations. Following tangential connections. Seeing what emerges without knowing in advance what you're looking for.

This is how children learn, how scientists make breakthroughs, how jazz musicians create. You need freedom to explore widely before you settle on what's worth pursuing seriously.

The economicus approach eliminates this exploratory freedom in the name of efficiency. It optimizes production, but production of what? More of what we already have.

The ludens approach embraces exploration. You're not trying to write the next incremental paper. You're trying to discover what you don't yet know you're looking for.

The Current Moment

Right now, institutions are moving toward the economicus approach. Using LLMs to review more papers, generate more text, process more grant applications. It's understandable—they're under pressure to handle increasing volume.

But this is a catastrophic missed opportunity.

LLMs are genuinely good at working across disciplinary boundaries. They don't have careers to protect or departments to represent. They can pattern-match across the entire literature without caring which journal it appeared in. They're natural tools for the kind of integrative work that the current system makes nearly impossible.

Using them instead to accelerate existing processes is like using the internet purely to send faxes faster.

Thursday, February 26, 2026

Down by the river [Hoboken, Hudson River]

Dancing is fun, and good for you, too.

Danielle Friedman, Yes, Even You Can Dance, NYTimes.

For many people, dance feels more like play than exercise, which helps to explain its enduring appeal as a workout.

What began as “aerobic dancing” in the 1970s has evolved with exercise science (and contemporary playlists) into today’s cardio dance classes, which are typically high-energy sessions that engage the whole body. [...]

In recent decades, a growing body of research has found that dance may be just as beneficial for cardiovascular health as other common forms of aerobic exercise, when performed at a moderate to vigorous intensity.

Studies also suggest that dance can be an effective way to cultivate strength, balance and coordination, and can help to manage chronic pain. [...]

Dancing can have powerful psychological and cognitive benefits, helping to improve mood and memory. A 2024 review study found that, for some people, dancing was more effective for improving symptoms of depression than any other form of exercise.

When you dance with other people, you may also experience the many health benefits of being social, said Erica Hornthal, a dance therapist based in Chicago.

The article goes on to explain how you can create your own dance workout.

Or, you can just move to the music. Think about how you danced when you were a kid, Ms. Hornthal said, shaking off stress, letting loose and having fun.

“I really believe anyone can dance,” said Sadie Kurzban, founder of the cardio dance franchise 305 Fitness. “You can have no rhythm and still dance. You can be seated and still dance.”

Wednesday, February 25, 2026

The Paradox of Contemporary AI: Intellectual Success and Institutional Failure

We’re faced with a paradox: On the one hand, the last 15 years of work in machine learning has to be seen as a profound INTELLECTUAL SUCCESS. In particular, it’s clear that the success of the transformer architecture – which first became apparent with GPT-3 – has brought us to the threshold of a new intellectual and technological era. However, existing architectures – and I’m thinking in particular of LLMs built on transformers – aren’t sufficient, as Gary Marcus, Yann LeCun and now even Ilya Sutskever, among others, have argued.

Thus we must face what has happened since then. An intellectual monoculture, one based on scaling and the construction of ever larger data farms, has come to dominate the field, and that has to be seen as a profound INSTITUTIONAL FAILURE. I say “institutional” quite deliberately because it wasn’t just this individual and that one and the other one and on through a whole list of individuals. No, the failure must be attributed to institutions within which all those individuals function. 

* * * * * 

NOTE: This article at 3 Quarks Daily gives some of the reasons I regard this intellectual monoculture to be an institutional failure: Aye Aye, Cap’n! Investing in AI is like buying shares in a whaling voyage captained by a man who knows all about ships and little about whales.

Three Tablescapes

Wuthering Heights as initiation

Ross Douthat, Whatever Happened to Grown-Up Movies for Kids? NYTimes, Feb. 24, 2026.

What Douthat has in mind is “telling grown-up stories in a fashion suited to the ages between, say, 10 and 16.” He goes on:

What I want is emphatically not more Y.A. culture or “tween” books or Marvel sequels. Rather I want more adult culture that’s accessible to early teenagers, that presents grown-up themes without being explicit about everything, that feels like a bridge connecting childhood and adulthood rather than a young-adult detour or a jarringly coarse acceleration.

He then goes on to contrast the novel, Wuthering Heights, with the current movie version:

which has been framed by its director, Emerald Fennell, as an attempt to channel her own experience encountering the Emily Brontë novel as a teenager. For Fennell that means not just giving us masturbation on the moors but also sexualizing every inch of the story, every cracked egg and kneaded loaf of dough, just as a hormonal teenage mind might do.

But that’s not what the Brontë novel offered to her teenage-reader self. It told a story in which sexuality is a potent force but not a pornographic one, in which extremity is everywhere but obscenity is not, and there are undercurrents and implications that the younger reader can grasp in part and the adult reader more completely.

“Wuthering Heights” the novel initiates, in other words, where “Wuthering Heights” the movie browbeats. And that feeling of initiation is what neither explicit R-rated entertainments nor the Y.A. fiction/superhero complex can really offer: a sense of encountering a world that’s fully adult but that makes allowances for innocence and inexperience, and that can be grasped provisionally with the promise of a greater understanding later on.

Snow World

Tuesday, February 24, 2026

Three Principles of Intelligence (That Aren't Principles of Computation)

Note: Claude 4.5 drafted this article after a long series of dialogs over several days. This is a continuation of the thinking in my current article in 3 Quarks Daily: Chess and Language as Paradigmatic Cases for Artificial Intelligence.


In the 1950s, artificial intelligence emerged from a productive confusion. We had just formalized computation itself—Turing and von Neumann had given us the fundamental principles of what computers could do. When we turned these powerful new machines toward intelligence, we naturally assumed the principles would be the same.

They aren't.

Computation vs. Intelligence

The principles of computation are domain-independent. A universal Turing machine can compute anything computable, whether that's arithmetic, chess moves, or protein folding. The Church-Turing thesis tells us that all models of computation are equivalent in what they can ultimately compute, given unlimited time and memory.

This universality is computation's glory—and intelligence's red herring.

Intelligence, as it actually exists in nature, operates under entirely different constraints. It must function in the physical world, with finite resources, solving problems that often don't have clean formal specifications. These aren't just practical limitations to be worked around; they're constitutive features that shape what intelligence is and how it must work.

Principle 1: Geometric Complexity Determines Computational Regime

The critical variable isn't how hard a problem is in some abstract computational sense, but the geometric complexity of the domain.

Consider chess versus visual object recognition. Chess is played on an 8×8 grid with a small set of piece types following rigid rules. The game tree is astronomically large—around 10^120 possible games—but it's finite and well-defined. You can represent board positions symbolically, enumerate legal moves, and search through possibilities systematically.
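That 10^120 figure is Shannon's classic back-of-the-envelope estimate, and it is easy to reproduce: roughly 10^3 move-pair choices per full move (about 30 options each for White and Black), compounded over a typical 40-move game. A quick sketch:

```python
import math

# Shannon's back-of-the-envelope estimate of the chess game tree:
# ~10^3 move-pair choices per full move (about 30 options each for
# White and Black), over a typical 40-move game.
choices_per_full_move = 10**3
typical_game_length = 40  # full moves

games = choices_per_full_move ** typical_game_length
print(math.log10(games))  # 120.0
```

The number is astronomically large, but the point in the text stands: it is a single, well-defined, finite quantity, which is exactly what makes chess tractable for systematic search.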

Vision operates in continuous three-dimensional space with effectively unbounded variation. Objects appear at different scales, orientations, and lighting conditions. There's no finite set of "legal configurations." You can't enumerate all possible images the way you can enumerate chess positions.

This difference in geometric complexity demands different computational approaches. Chess yields to systematic search through a definable space—what we might call sequential or symbolic processing. Vision requires something else: massively parallel processing that can handle continuous variation and incomplete information—holographic or neural processing.

In 1975, Miriam Yevick demonstrated this formally: the geometric complexity of objects in a domain determines the computational regime needed to identify them. Simple geometric objects can be handled by sequential symbolic systems. Complex geometric objects require holographic processing. This wasn't mere speculation—she made a formal mathematical argument about pattern recognition systems.

The field ignored her insight. We assumed all problems were fundamentally like chess—just harder. If symbolic AI could master chess, we thought, it would eventually master vision, language, and physical reasoning through better algorithms and more compute.

We were wrong. Vision didn't yield to symbolic AI no matter how much compute we threw at it. It required a regime shift to neural networks—systems whose architecture matches the geometric complexity of the visual world.

Principle 2: Intelligence Operates in Unbounded, Geometrically Complex Reality

Here's what makes intelligence different from computation in the abstract: intelligence evolved to work in the physical world, which is geometrically complex and open-ended. There's no finite game tree for "objects I might encounter" or "situations I might face."

This has profound implications. You can solve chess by exploring its game tree faster than humans can. But you can't solve vision or language understanding the same way because there's no complete tree to explore. The space isn't closed and enumerable—it's unbounded.

This is why Deep Blue beating Kasparov in 1997 didn't generalize the way we thought it would. Chess was solved by a room-sized supercomputer with custom hardware doing exactly what computers do best: blindingly fast systematic search. By 2025, a smartphone runs chess engines that would destroy both Deep Blue and Kasparov.

But that same smartphone can't run a GPT-4 level language model. Language still requires massive data centers. Why? Because language connects to the unbounded complexity of physical and social reality. No amount of faster chess-style search bridges that gap.

The field learned to beat humans at chess by doing what computers naturally excel at. Then we mistook this for a general template. We thought: "Intelligence is search through problem spaces. We just need bigger computers to search bigger spaces." But geometric complexity isn't about bigger—it's about different.

Principle 3: Embodiment as Formal Constraint

Embodiment isn't a philosophical talking point. It's a formal constraint on intelligence architecture.

When we say intelligence must be embodied, we mean: it must operate with finite computational resources in a geometrically complex physical world. This changes everything.

Abstract computation doesn't care about efficiency—a proof is valid whether it takes a second or a century. Physical computation must complete before the hardware fails. But biological intelligence faces a sharper constraint: it must acquire the energy it uses to compute. A deer's visual system can't require more calories than the deer can acquire. The computation must pay for itself.

This constraint shapes what kinds of solutions are viable. You can't exhaustively search unbounded spaces. You can't maintain perfect world models. You must make do with approximate, good-enough processing that operates in real time with available resources.

Crucially, this means different problems need different solutions—not just more or less compute, but fundamentally different architectures matched to the geometric complexity of the domain.

Why This Matters Now

Current AI has powerful neural networks that excel at pattern recognition in geometrically complex domains—vision, speech, even aspects of language. But the field still carries assumptions from the symbolic AI era:

  • That intelligence is domain-independent
  • That scaling compute will eventually solve any problem
  • That we can ignore embodiment and resource constraints
  • That all problems are fundamentally like chess

These assumptions persist even though we've abandoned symbolic AI. We've swapped the implementation (symbols → neural networks) but kept the framework (more compute → general intelligence).

This is why we need to distinguish computation principles from intelligence principles. Turing and von Neumann gave us the former. For the latter, we need to recognize that geometric complexity, unbounded reality, and embodied constraints aren't bugs to be worked around—they're the constitutive features that determine what intelligence is and how it must work.

The principles of intelligence aren't the principles of computation. Understanding this distinction is the key to understanding both what current AI can do and what it cannot.