Here's the content of a tweet by Séb Krier (you should also check out the comments on the original):
Yes, I've been saying this for a while now. See for example https://x.com/sebkrier/status/1968753358216302894 and Danzig's work here: https://cset.georgetown.edu/wp-content/uploads/Machines-Bureaucracies-and-Markets-as-Artificial-Intelligences.pdf
I don't think the predominant narrative of AI as a singular entity, a Sand God, a discrete moment in time, or a 'separate species' (as Tegmark puts it) is correct or helpful. As Danzig argues, AI is indeed "alien," but only in the same way a stock market or the DMV is alien: they are all reductionist, correlative intelligences.
They strip the world of context, reducing reality to standardized inputs like prices or tokens to process information at scales humans cannot. To me at least, this shared "alien" nature normalizes AI as the latest evolution in a lineage of artificial processors we’ve lived with for centuries.
So instead of a unitary being or species, AGI should be understood as a collection of complex systems, models, and products that functions similarly to (and integrates with) existing human macro-systems. An amplifier for the bureaucracies and markets that already govern us, not a discrete 'biological-style' agent. Its governance is a continuous sociopolitical struggle (insert always has been meme) that is shaped by many different forces, not a one-time mathematical proof of safety before a launch.
Relatedly, I feel like the current discourse also has a blind spot for the 'demand' side. We obsess over the supply (R&D, model scaling, 'the AGI') as if these systems are created in a vacuum. I think this is how people end up with scenarios where AGIs are just doing things for their own sake, completely detached from human preferences (with the humans themselves usually described as 'disempowered').
But they aren't; they are pulled and shaped by downstream demand, cost constraints, and efficiency needs. This economic reality has implications for how the technology develops. See also Drexler's CAIS model (https://owainevans.github.io/pdfs/Reframing_Superintelligence_FHI-TR-2019.pdf) - Drexler anticipated much of this, and the core intuitions hold up even if some details are now dated. You won’t see one omniscient agent, but a proliferation of specialized systems, models of varying sizes, and distinct products rising in parallel, because that is what is economically viable.
This is why the AGI governance conversation often feels so confused. If you view AGI as a singular biological entity, you make two mistakes: safetyists project human-like 'intent' where they should be looking at incentives, and policymakers reach for a singular 'FDA' when instead they need to look at different markets, sectors, products, etc.
You can’t have a single regulator or discrete safety rules for 'The Economy' or 'The Bureaucracy,' and you won't be able to have one for 'Intelligence' either. Models still matter of course - none of this means you shouldn't test, evaluate, and understand them better - but I think we overindex on this frame a bit. And as Dean says, none of this is to downplay concerns and risks: but I do think it has implications for how to understand and address them.