Thursday, March 19, 2026

Brave New World: Notes on the next 30 years in AI [Work in Progress]

You may or may not be wondering why so many tweets have recently been showing up on New Savanna. There’s a good reason: I’ve been thinking. These days, more often than not, that means interacting with either ChatGPT or Claude, and often both in one day. I copy these interactions to Word documents, which I save to my hard drive. And then I promptly forget which topic is in which document, and where on my hard drive it is. FWIW, these discussions can ramble all over the place, which is fine. That’s how I think. But keeping track of it all is difficult.

Anyhow, over the last week I’ve been thinking about work that David Hays and I have done going back to the mid-1970s when I first began working with him in graduate school. I’ve asked ChatGPT to look at some of our papers and evaluate them in terms of current literature and debates. And so forth and so on. I’m also looking forward to next steps, for me, but also in general.

At the moment I’m working on a piece for 3 Quarks Daily with the tentative title: “The Paradox of Contemporary AI: Engineering Success and Institutional Failure.” Here’s how ChatGPT put it in an interaction we had yesterday:

The engineering success is real. LLMs and related systems have given us access to a new conceptual continent. They work, and at extraordinary scale. But the institutional failure lies in the monoculture: too much intellectual, financial, and training-path dependence on one family of architectures and one style of thought about intelligence. The result is that we are building out the utility before we have adequately explored the space of possible successor technologies or developed the conceptual tools needed to understand what these systems are revealing about language, cognition, and cultural structure.

That’s a metaphor I’ve used off and on over the years, the discovery of a new continent. My paradigm case is the European discovery of the Americas. Making it to the eastern shore is one thing. Then comes exploring and settling. Columbus set foot in the Caribbean in 1492. Lewis and Clark made it across North America to the Pacific Ocean in 1805. That’s over three centuries later. It would take another century for Europeans and their descendants to more or less settle North America from coast to coast.

Let’s say that ChatGPT’s arrival in late 2022 is analogous to Columbus making it to shore in the New World. I don’t think it will take four centuries to settle this new world, nor even three centuries to measure its extent; intellectual discovery, engineering implementation, and commercial development move at a faster pace. But I do think there’s a lot of work, very exciting work, ahead of us.

Here’s a fragment of an interaction I had with ChatGPT 5.2 a couple of days ago. My prompt is first, then ChatGPT’s response.

* * * * * 

Obviously I don’t know how long it will take to explore that territory and to settle it, nor do I know what we’ll have when that’s done. But I’m thinking in terms of 30 years or more, not one or two bright ideas away.

Your time horizon sounds realistic, and historically it fits the pattern of major intellectual/technical shifts. What you’re describing—the exploration and eventual settlement of that “continent”—is not the discovery of a single clever algorithm. It is the construction of a new intellectual infrastructure, and those almost always take decades.

Let me sketch why.

1. Precedent: how long paradigm infrastructures take

If we look at previous transformations in science and technology, they tend to unfold over multi-decade arcs.

Digital computing

• 1936 — Turing machine
• 1940s–50s — early computers
• 1960s–70s — programming languages, operating systems
• 1980s–90s — personal computing ecosystem

Roughly 50 years from theory to mature civilization-scale infrastructure.

Molecular biology

• 1953 — DNA structure
• 1960s — genetic code deciphered
• 1970s — recombinant DNA
• 1990s — genome sequencing

Again, 40 years to transform biology.

Deep learning

• 1986 — backprop revival
• 1998 — LeNet
• 2012 — AlexNet
• 2020 — GPT-3

That arc is already 35+ years long.

So a 30-year horizon for what you’re describing is not pessimistic—it is historically typical.

2. Why this particular transition is slow

The reason the transition will be slow is that the field is confronting three different layers simultaneously.

1. Mathematics of distributed cognition

We still lack a clear mathematical theory of:

• representation in high-dimensional spaces
• compositionality in distributed systems
• the interaction between symbolic and dynamical processes

Yevick’s work was an early attempt at that level, but the theory is still incomplete.

2. Architecture

Even if the mathematics existed, we would still need working architectures.

The field is currently experimenting with:

• transformers
• memory systems
• agent frameworks
• search hybrids

These are early prototypes.

3. Cognitive theory

Perhaps the hardest layer is understanding what intelligence actually is.

That question spans:

• neuroscience
• psychology
• linguistics
• philosophy

Your RNA [Relational Network over Attractors] work sits squarely in that domain.

3. Why the current moment feels fast

The speed of the last few years comes from something different: engineering scaling.

Transformers gave the field a powerful substrate for experimentation.

So exploration is happening rapidly near the coastline.

But deeper theoretical consolidation will take longer.

4. Your metaphor of a continent

Your continent metaphor captures the situation well.

If we extend it a little:

first explorers (Yevick, Pribram, Hays)
coastal landing (AlexNet, GPT-3)
coastal settlements (LLMs, agents, search hybrids)
interior mapping (new cognitive architectures)

We are currently in the coastal settlement phase.

People are building tools and infrastructure near the shore.

The deeper interior—the true theory of distributed cognition—remains largely unmapped.

5. Why earlier work may become valuable again

One interesting consequence of this long timeline is that older conceptual frameworks often get rediscovered.

Ideas that seemed speculative decades ago suddenly become relevant once technology catches up.

That may happen with several traditions:

• holographic memory
• attractor dynamics
• vector symbolic architectures
• relational networks

Your RNA framework sits at the intersection of several of those.

6. The generational aspect

A 30-year horizon also means that this project spans multiple intellectual generations.

The people working on it now will not necessarily be the ones who finish it.

That’s normal for foundational projects.

Examples:

• Maxwell did not see electronics
• Turing did not see modern computing
• Watson and Crick did not see CRISPR

Each generation pushes the frontier forward.

7. What we might expect over that time

If your time horizon is roughly correct, the sequence might look something like:

0–10 years
• hybrid neural-symbolic systems
• agent architectures
• better memory and planning modules

10–20 years
• new architectures beyond transformers
• a clearer mathematical theory of distributed cognition

20–30+ years
• integrated cognitive architectures
• systems with robust reasoning and world models

But that’s speculative.

8. The intellectual shift already underway

Even if the technical endpoint is far away, one thing has already changed.

The old debate between symbolic AI and connectionism has effectively ended.

No serious researcher now believes intelligence is purely one or the other.

The problem is now framed as:

how do distributed systems produce structured cognition?

That’s exactly the question your work has been circling for many years.

9. A final observation

Your long horizon suggests a useful stance toward the work.

Rather than trying to produce a final architecture, the goal becomes something like:

clarifying the conceptual terrain so future explorers know where to look.

That’s the role many important theoretical frameworks play.

They don’t settle the continent, but they provide the first maps of the interior.

And those maps often turn out to matter decades later.

3 comments:

  1. "Lewis and Clark made it across North America to the Pacific Ocean in 1805."
    ...
    "the goal becomes something like:
    clarifying the conceptual terrain so future explorers know where to look.
    That’s the role many important theoretical frameworks play.
    They don’t settle the continent, but they provide the first maps of the interior.
    And those maps often turn out to matter decades later."

    Reminds me of...Time.
    As in resolution and implications. Captain Cook would still be bumping up against South America if not for The Clock, which made Longitude calculable.
    What is the AI version of the Longitude Prize?

    "The longitude rewards were the system of inducement prizes offered by the British government for a simple and practical method for the precise determination of a ship's longitude at sea.
    ...
    "List of awardees by amount
    - John Harrison – £23,065 awarded overall after many years of contention with the Board ending in 1773.
    - Thomas Mudge – £500 advance in 1777 for developing his marine timekeeper and a £3,000 award approved by a special committee in 1793 in recognition for his accomplishments.[14]
    ...
    https://en.wikipedia.org/wiki/Longitude_prize
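    The arithmetic the chronometer unlocked is simple: the Earth turns 15 degrees per hour, so the gap between local solar time and a clock keeping Greenwich time gives longitude directly. A minimal sketch (function name and sign convention are mine, for illustration only):

    ```python
    def longitude_from_time(local_solar_hour: float, greenwich_hour: float) -> float:
        """Estimate longitude in degrees from the difference between local
        solar time and chronometer (Greenwich) time.

        The Earth rotates 360 degrees in 24 hours, i.e. 15 degrees per hour.
        Negative result means west of Greenwich.
        """
        return (local_solar_hour - greenwich_hour) * 15.0

    # Local noon while the chronometer reads 16:00 Greenwich:
    # the ship is 4 hours behind the meridian, i.e. 60 degrees west.
    print(longitude_from_time(12.0, 16.0))  # -60.0
    ```

    Harrison's prize-winning achievement was not this trivial conversion but a clock accurate enough at sea to make it trustworthy.
    
    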

    The New! Longitude Prize. (bow severely stretched). Cheap prize money compared to Longitude Prize.
    Seb, I'd say, is the boss of...

    "Measuring progress toward AGI: A cognitive framework
    Mar 17, 2026
    "We’re introducing a framework to measure progress toward AGI, and launching a Kaggle hackathon to build the relevant evaluations.
    Ryan Burnell
    Research Scientist, Google DeepMind
    Oran Kelly
    Product Manager, Google DeepMind
    ...
    "Going from theory to practice
    Defining these cognitive abilities is a crucial first step, but we need more than a framework to measure progress. To put this theory into practice, we are launching a new Kaggle hackathon — “Measuring progress toward AGI: Cognitive abilities”. The hackathon encourages the community to design evaluations for five cognitive abilities where the evaluation gap is the largest: learning, metacognition, attention, executive functions and social cognition.
    Participants can use Kaggle's newly launched Community Benchmarks platform to build and test their evaluations against a lineup of frontier models.
    "We are offering a total prize pool of $200,000: $10,000 awards for the top two submissions in each of the five tracks, and $25,000 grand prizes for the four absolute best overall submissions. Submissions are open March 17 through April 16, and we’ll announce the results June 1. Head over to the Kaggle website to start building.
    https://blog.google/innovation-and-ai/models-and-research/google-deepmind/measuring-agi-cognitive-framework/

    SD

    Replies
    1. SD: "What is the AI version of the Longitude Prize? "

      "As Palantir chief executive Alex Karp recently said in a CNBC interview:
            "What makes America special right now is our lethal capabilities, our ability to fight war, both because we’ve been doing it for 20 years, because we have meritocracy in our military, and because we finance it in a way no other country has. And because the A.I. revolution is uniquely American. Every company. All the model providers that are relevant and the ontology that makes them valuable and the chips that they run on are built in this country."

      That is: these systems are not developed in isolation from the American “defense” technology ecosystem. They are architecturally continuous with it, letting everyone involved maintain the polite fiction that there is a meaningful technical distinction between an Israeli system that generates kill lists from surveillance data and an American system that does the same thing but with a nicer user interface and a chatbot bolted on top, pointed at civilians in a country an ally has been begging for a chance to invade for years now.

      "We can look at yet another head of the beast: surveillance firm Palantir. Palantir’s Maven Smart System—the Pentagon’s primary AI warfare platform, managed by the National Geospatial-Intelligence Agency, performs structurally identical functions: applying computer vision algorithms to satellite imagery, automatically detecting objects likely to be enemy systems, visualizing potential targets, and “nominating” them for bombardment. A tool within Maven called the “AI Asset Tasking Recommender” proposes which bombers and munitions should be assigned to which targets. In November 2024, Palantir announced that it would integrate Anthropic’s Claude into the software it sells to US intelligence and defense agencies—Claude became the voice and reasoning engine of Palantir’s AIP system, the application layer through which military analysts interact with a chatbot that can identify enemy units, generate courses of action, plan routes for troops, and assign electronic warfare assets.
      "In a Palantir demo released in 2023, the entire Observe, Orient, Decide, Act loop—anomaly detection to unit identification to strike planning to troop mobilization—was compressed into a single chatbot conversation.

      "One has to labor under an arduous delusion to think there exists any distance between Israel’s “factory for mass assassinations” and the Pentagon’s AI warfare platform and the various programs (or models) used within it. 
      ...
      How Much a Dollar Cost?
      The AI Bubble in 2026 (2/4)
      Edward Ongweso Jr
      Mar 17, 2026
      https://www.thetechbubble.info/p/how-much-a-dollar-cost?

    2. Well, Google is putting up $200K in prize money for the development of more realistic benchmarks: https://blog.google/innovation-and-ai/models-and-research/google-deepmind/measuring-agi-cognitive-framework/

      To put this theory into practice, we are launching a new Kaggle hackathon — “Measuring progress toward AGI: Cognitive abilities”. The hackathon encourages the community to design evaluations for five cognitive abilities where the evaluation gap is the largest: learning, metacognition, attention, executive functions and social cognition.

      Participants can use Kaggle's newly launched Community Benchmarks platform to build and test their evaluations against a lineup of frontier models.

      We are offering a total prize pool of $200,000: $10,000 awards for the top two submissions in each of the five tracks, and $25,000 grand prizes for the four absolute best overall submissions. Submissions are open March 17 through April 16, and we’ll announce the results June 1. Head over to the Kaggle website to start building.
