Tuesday, February 24, 2026

Three Principles of Intelligence (That Aren't Principles of Computation)

Note: Claude 4.5 drafted this article after a long series of dialogs over several days. This is a continuation of the thinking in my current article in 3 Quarks Daily, Chess and Language as Paradigmatic Cases for Artificial Intelligence.


In the 1950s, artificial intelligence emerged from a productive confusion. We had just formalized computation itself—Turing and von Neumann had given us the fundamental principles of what computers could do. When we turned these powerful new machines toward intelligence, we naturally assumed the principles would be the same.

They aren't.

Computation vs. Intelligence

The principles of computation are domain-independent. A universal Turing machine can compute anything computable, whether that's arithmetic, chess moves, or protein folding. The Church-Turing thesis tells us that all models of computation are equivalent in what they can ultimately compute, given unlimited time and memory.

This universality is computation's glory—and intelligence's red herring.

Intelligence, as it actually exists in nature, operates under entirely different constraints. It must function in the physical world, with finite resources, solving problems that often don't have clean formal specifications. These aren't just practical limitations to be worked around; they're constitutive features that shape what intelligence is and how it must work.

Principle 1: Geometric Complexity Determines Computational Regime

The critical variable isn't how hard a problem is in some abstract computational sense, but the geometric complexity of the domain.

Consider chess versus visual object recognition. Chess is played on an 8×8 grid with a small set of piece types following rigid rules. The game tree is astronomically large—around 10^120 possible games—but it's finite and well-defined. You can represent board positions symbolically, enumerate legal moves, and search through possibilities systematically.
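The closed, symbolic character of chess shows up even in miniature. From any square, the set of legal knight moves is a small, exactly enumerable list; a few lines of Python (a toy sketch, not an engine) make the point:

```python
# Enumerate legal knight moves on an 8x8 board. The point: the domain is
# discrete, symbolic, and exactly enumerable -- every possibility can be
# listed and checked against rigid rules.

KNIGHT_DELTAS = [(1, 2), (2, 1), (2, -1), (1, -2),
                 (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(file, rank):
    """Return every legal destination for a knight on (file, rank),
    with both coordinates in 0..7."""
    return [(file + df, rank + dr)
            for df, dr in KNIGHT_DELTAS
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

# A corner knight has exactly 2 legal moves; a central knight has 8.
print(len(knight_moves(0, 0)))  # → 2
print(len(knight_moves(3, 3)))  # → 8
```

Nothing analogous exists for an image: there is no finite list of deltas that enumerates "the ways a deer can appear in a visual field."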

Vision operates in continuous three-dimensional space with effectively unbounded variation. Objects appear at different scales, orientations, and lighting conditions. There's no finite set of "legal configurations." You can't enumerate all possible images the way you can enumerate chess positions.

This difference in geometric complexity demands different computational approaches. Chess yields to systematic search through a definable space—what we might call sequential or symbolic processing. Vision requires something else: massively parallel processing that can handle continuous variation and incomplete information—holographic or neural processing.
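The second regime can be sketched just as briefly. A convolution over an image patch, shown here in pure Python as a toy (real vision systems stack many such layers), computes every output as a weighted sum over a neighborhood of continuous values. The same uniform operation applies whatever real values the pixels take; no enumeration of "legal" inputs appears anywhere:

```python
# A 3x3 convolution over a grayscale patch: each output combines many
# continuous inputs at once, the parallel-numeric style of processing,
# in contrast to one-move-at-a-time symbolic search.

def convolve(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, strictly speaking)
    of a 2D list of floats with a 3x3 kernel."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0.0
            for ki in range(3):
                for kj in range(3):
                    acc += image[i + ki][j + kj] * kernel[ki][kj]
            row.append(acc)
        out.append(row)
    return out

# A vertical edge: dark left half, bright right half.
patch = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # classic edge detector

print(convolve(patch, sobel_x))  # → [[4.0, 4.0], [4.0, 4.0]]
```

The edge detector responds strongly wherever the dark/bright boundary falls in its window, and it does so for any real-valued pixels, not just this hand-built example.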

In 1975, Miriam Yevick demonstrated this formally: the geometric complexity of objects in a domain determines the computational regime needed to identify them. Simple geometric objects can be handled by sequential symbolic systems. Complex geometric objects require holographic processing. This wasn't mere speculation—she made a formal mathematical argument about pattern recognition systems.

The field ignored her insight. We assumed all problems were fundamentally like chess—just harder. If symbolic AI could master chess, we thought, it would eventually master vision, language, and physical reasoning through better algorithms and more compute.

We were wrong. Vision didn't yield to symbolic AI no matter how much compute we threw at it. It required a regime shift to neural networks—systems whose architecture matches the geometric complexity of the visual world.

Principle 2: Intelligence Operates in Unbounded, Geometrically Complex Reality

Here's what makes intelligence different from computation in the abstract: intelligence evolved to work in the physical world, which is geometrically complex and open-ended. There's no finite game tree for "objects I might encounter" or "situations I might face."

This has profound implications. You can beat humans at chess by searching its game tree faster than they can. But you can't crack vision or language understanding the same way, because there's no complete tree to explore. The space isn't closed and enumerable; it's unbounded.

This is why Deep Blue's victory over Kasparov in 1997 didn't generalize the way we thought it would. Kasparov was beaten by a room-sized supercomputer with custom hardware doing exactly what computers do best: blindingly fast systematic search. By 2025, a smartphone runs chess engines that would destroy both Deep Blue and Kasparov.
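Systematic search of this kind fits in a few lines once the move space is closed. Here is a full-depth negamax search on the game of Nim (take 1-3 stones, taking the last stone wins), used as a hedged stand-in for chess-style search; real engines add pruning and heuristics, but the skeleton is the same:

```python
# Exhaustive game-tree search, viable exactly because the space of moves
# is finite and enumerable. Negamax on Nim: each position's value is the
# best value achievable against a perfect opponent.

def negamax(stones):
    """Return +1 if the player to move can force a win, else -1."""
    if stones == 0:
        return -1  # previous player took the last stone and already won
    return max(-negamax(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the move whose resulting position is worst for the opponent."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: -negamax(stones - take))

print(negamax(4))    # → -1: with 4 stones, the player to move loses
print(best_move(5))  # → 1: take one stone, leaving the losing count of 4
```

The search visits every reachable position. That strategy is available for Nim and (with heuristics and enormous hardware) for chess; it is simply undefined for "recognize a deer," where no such tree exists.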

But that same smartphone can't run a GPT-4 level language model. Language still requires massive data centers. Why? Because language connects to the unbounded complexity of physical and social reality. No amount of faster chess-style search bridges that gap.

The field learned to beat humans at chess by doing what computers naturally excel at. Then we mistook this for a general template. We thought: "Intelligence is search through problem spaces. We just need bigger computers to search bigger spaces." But geometric complexity isn't about bigger—it's about different.

Principle 3: Embodiment as Formal Constraint

Embodiment isn't a philosophical talking point. It's a formal constraint on intelligence architecture.

When we say intelligence must be embodied, we mean: it must operate with finite computational resources in a geometrically complex physical world. This changes everything.

Abstract computation doesn't care about efficiency—a proof is valid whether it takes a second or a century. Physical computation must complete before the hardware fails. But biological intelligence faces a sharper constraint: it must acquire the energy it uses to compute. A deer's visual system can't require more calories than the deer can acquire. The computation must pay for itself.
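The "computation must pay for itself" constraint can be made concrete with a back-of-envelope calculation. The figures are rough, widely cited estimates treated here as assumptions: the human brain draws about 20 watts, roughly a fifth of resting metabolic energy:

```python
# Back-of-envelope metabolic budget of biological computation,
# assuming the commonly cited ~20 W power draw of the human brain.

BRAIN_POWER_W = 20.0
SECONDS_PER_DAY = 86_400
JOULES_PER_KCAL = 4_184

joules_per_day = BRAIN_POWER_W * SECONDS_PER_DAY
kcal_per_day = joules_per_day / JOULES_PER_KCAL

print(round(kcal_per_day))  # → 413: kilocalories the organism must forage just to think
```

A few hundred kilocalories a day is a real foraging cost, which is why exhaustive-search architectures were never an evolutionary option.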

This constraint shapes what kinds of solutions are viable. You can't exhaustively search unbounded spaces. You can't maintain perfect world models. You must make do with approximate, good-enough processing that operates in real time with available resources.

Crucially, this means different problems need different solutions—not just more or less compute, but fundamentally different architectures matched to the geometric complexity of the domain.

Why This Matters Now

Current AI has powerful neural networks that excel at pattern recognition in geometrically complex domains—vision, speech, even aspects of language. But the field still carries assumptions from the symbolic AI era:

  • That intelligence is domain-independent
  • That scaling compute will eventually solve any problem
  • That we can ignore embodiment and resource constraints
  • That all problems are fundamentally like chess

These assumptions persist even though we've abandoned symbolic AI. We've swapped the implementation (symbols → neural networks) but kept the framework (more compute → general intelligence).

This is why we need to distinguish computation principles from intelligence principles. Turing and von Neumann gave us the former. For the latter, we need to recognize that geometric complexity, unbounded reality, and embodied constraints aren't bugs to be worked around—they're the constitutive features that determine what intelligence is and how it must work.

The principles of intelligence aren't the principles of computation. Understanding this distinction is the key to understanding both what current AI can do and what it cannot.
