As you probably know by now, OpenAI has delivered GPT-5. For a while Altman was touting it as the coming of (the mythical) AGI. Whatever it is, it is not that. I’ve been using it. For my purposes little has changed. If I hadn’t been told that we’ve got a new model, I probably wouldn’t have noticed.
Futurist Bryan Alexander has a nice rundown on it, reviewing its features and moving on to how it’s been received: “My sense is that there was an initial outburst of interest, followed in just hours by a storm of complaints, criticisms, and outrage.”
What I think is that we've hit the wall that Gary Marcus has been talking about. But it's not a hard wall. It’s a soft, spongy wall, but a very thick one. So we’re not going through it, not by simply scaling up current tech. This wall will absorb anything the industry, as it currently exists, is likely to throw at it.
We need new architectures. Unfortunately, the industry seems intent on doubling down on the current one. I’m worried that it will get mired in sunk costs. And that has knock-on effects: it discourages academic research in new directions, and it certainly influences training as well. You can't train students to develop new tech if no one's interested in doing it.
What we need:
Symbolic AI
I think we need the sort of symbolic capacity that Marcus talks about, and that David Ferrucci has been working on. But that’s not all, not by a long shot.
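To make “symbolic capacity” a bit more concrete, here is a minimal sketch of the kind of thing statistical pattern-matching doesn't give you for free: explicit rules applied by forward chaining, where every inference step is discrete and inspectable. The rules and facts are invented for illustration; this is a toy, not Marcus's or Ferrucci's actual proposal.

```python
# Toy forward-chaining rule engine. Each rule is (premises, conclusion):
# if all premises are in the fact set, the conclusion is added.
RULES = [
    ({"has_fur", "gives_milk"}, "is_mammal"),
    ({"is_mammal"}, "is_animal"),
]

def infer(facts):
    """Apply the rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

The point isn't the (trivial) logic but the transparency: unlike a weight matrix, the chain of inference can be read off directly.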
Variable bandwidth associative memory
I've got a new working paper that starts out by talking about mirror recognition, works its way to the default mode network in the brain and ends up talking about something that ChatGPT called an “associative drift engine” (pp. 11-15). I think of current machine learning models as associative memory. Associative memories are content-addressable. In obvious ways that’s very convenient. But there’s a problem. Here’s how ChatGPT put the issue:
In content-addressable systems:
- Access is based on similarity: you input a pattern (probe), and you get back items that match it.
- But you only get what the probe activates.
- If your probe is too specific, you only get exact matches.
- If your probe is too vague, you get noise—or nothing useful.
So the challenge is:
How can we design a system that varies the specificity or scope of the probe, allowing it to search narrowly or broadly, sharply or fuzzily, depending on its current mode of operation?
That's to support mind-wandering and day-dreaming, loose thinking that gets you somewhere you don't know about but recognize when you get there. My current series of posts, Intellectual creativity, humans-in-the-loop, and AI, contains detailed examples of cases where those capacities are essential.
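Here is a minimal sketch, in plain Python, of what a variable-bandwidth probe might look like: a single `bandwidth` parameter sets the similarity threshold, so the same probe can retrieve narrowly (near-exact matches only) or broadly (loose, daydream-like associations). The vectors and item names are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# A tiny content-addressable memory: items keyed by feature vectors.
MEMORY = {
    "dog":    [1.0, 0.9, 0.0, 0.1],
    "wolf":   [0.9, 1.0, 0.1, 0.0],
    "poem":   [0.0, 0.1, 1.0, 0.8],
    "sonnet": [0.1, 0.0, 0.8, 1.0],
}

def recall(probe, bandwidth):
    """Return items whose similarity to the probe clears a threshold
    set by `bandwidth` (near 0 = narrow/exact, near 1 = broad/fuzzy)."""
    threshold = 1.0 - bandwidth  # a narrow probe demands high similarity
    return [name for name, vec in MEMORY.items()
            if cosine(probe, vec) >= threshold]

probe = MEMORY["dog"]
narrow = recall(probe, bandwidth=0.005)  # near-exact matches only
broad = recall(probe, bandwidth=0.6)     # looser, more associative recall
```

The same probe returns only "dog" at narrow bandwidth but pulls in "wolf" as the bandwidth widens; crank it all the way open and even "poem" and "sonnet" come back. Mind-wandering, on this picture, is retrieval run at high bandwidth.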
Organic growth of the core memory
Penultimately we need to be able to grow the core model (e.g. the LLM) rather than having to retrain it to accommodate new stuff. I’ve got a bunch of posts on what I’m calling polyviscosity, many of which address this issue with respect to the brain. In particular, see:
- Consciousness, reorganization and polyviscosity, Part 2: ‘Fluidity’ & its requirements (mini-ramble)
- Consciousness, reorganization and polyviscosity, Part 4: Glia
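As a crude illustration of growth-by-accretion rather than retraining: a store whose retrieval is nearest-neighbor over whatever it currently holds, so new items can be appended at any time without revisiting, let alone re-deriving, the old ones. This is a toy, not a proposal for how to actually grow an LLM's core.

```python
class GrowableMemory:
    """Item store that grows by appending; retrieval is nearest-neighbor,
    so adding new entries never requires touching existing ones."""

    def __init__(self):
        self.items = []  # list of (key_vector, value) pairs

    def add(self, key, value):
        self.items.append((key, value))  # growth = append, no retraining

    def nearest(self, probe):
        """Return the value whose key is closest to the probe."""
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(self.items, key=lambda kv: dist(kv[0], probe))[1]

mem = GrowableMemory()
mem.add([0.0, 0.0], "old fact")
mem.add([1.0, 1.0], "new fact")  # added later, old entry untouched
```

Contrast this with a trained network, where accommodating "new fact" means adjusting weights that also encode "old fact."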
Autonomy
Finally, we need to figure out how to make a fully and richly autonomous system. I address this in my working paper, Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind. The final section, “Kinds of Minds” (pp. 49-58) considers the issue, albeit briefly, adopting the concept of strategic autonomy as defined by Ali Minai, who observes:
This is the highest level of autonomy where the intelligent system decides autonomously what goals or purposes it should pursue in the world. An AI system at this level would be a fully independent, sentient, autonomous being like a free human. Such AI is still strictly the stuff of science fiction and futurist literature.
Indeed.
I figure all of that is the work of an intellectual generation or three. It’s not going to be accomplished by a half-dozen brilliant dissertations or the industry equivalent.
OpenEvolve.
They are trying to implement selective optimisation toward autonomous MCP / agent...
"... adopting the concept of strategic autonomy as defined by Ali Minai"
"An AI system at this level would be a fully independent, sentient, autonomous being like a free human. Such AI is still strictly the stuff of science fiction and futurist literature."
These links use map / world generation and Perlin noise as examples of optimising an algorithm, yet the "code" being evolved could also be a heuristic, a pattern, or a reflexive action that a person chooses to update / optimise in concert with an AI.
A step toward; "the work of an intellectual generation or three. It’s not going to be accomplished by a half-dozen brilliant dissertations or the industry equivalent."
codelion/openevolve (Apache-2.0 license)
Open-source implementation of AlphaEvolve
"OpenEvolve: the most advanced open-source evolutionary coding agent. Turn your LLMs into autonomous code optimizers that discover breakthrough algorithms. From random search to state-of-the-art: watch your code evolve in real-time."
https://github.com/codelion/openevolve
https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
https://en.m.wikipedia.org/wiki/AlphaEvolve#cite_note-openevolve_github-5
aperoc/toolkami
Minimal AI agent framework that just works with only seven tools
https://github.com/aperoc/toolkami
AlphaEvolve: ToolKami Style (last updated 2025-09-23)
TL;DR: We implemented AlphaEvolve as an LLM workflow with MCP tools to optimize Perlin noise implementation for procedural generation of images. Code is available at the end of this post.
When I had just started experimenting with ToolKami, Google released AlphaEvolve: A coding agent for scientific and algorithmic discovery. The paper made waves in the news—unsurprisingly, given its impressive results and its innovative combination of two powerful techniques:
https://blog.toolkami.com/alphaevolve-toolkami-style/
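A stripped-down sketch of the evolutionary loop these projects build on: mutate candidates, keep the fittest, repeat. In AlphaEvolve-style systems the "mutation" step is an LLM rewriting code and the fitness function is a benchmark; here it's just Gaussian noise on a number and a toy objective, which is enough to show the shape of the loop.

```python
import random

def evolve(fitness, seed, generations=200, pop_size=20, sigma=0.3):
    """Minimal mutate-and-select loop: each generation perturbs the
    current champion and keeps whichever candidate scores highest."""
    random.seed(0)  # deterministic, for the sake of the example
    best = seed
    for _ in range(generations):
        candidates = [best + random.gauss(0, sigma) for _ in range(pop_size)]
        candidates.append(best)  # elitism: the champion always survives
        best = max(candidates, key=fitness)
    return best

# Maximize f(x) = -(x - 3)^2, whose optimum is at x = 3.
champion = evolve(lambda x: -(x - 3) ** 2, seed=0.0)
```

Everything interesting in AlphaEvolve / OpenEvolve lives in what replaces `random.gauss`: structured, LLM-driven edits to programs rather than noise on a scalar.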
Generative GameDev
An Oxford PhD explores the intersection of Generative AI and Game Development with Unity.
By Stefan Webb
https://gamedev.blog
Voronoi map generation in Civilization VII (2k.com)
https://news.ycombinator.com/item?id=45382300
Hope these are relevant and not too serendipitous!
SD