Pioneer of causal AI, Judea Pearl, argues that no amount of scaling will get LLMs to AGI.
— Big Brain AI (@realBigBrainAI) February 18, 2026
He believes current large language models face fundamental mathematical limitations that can't be solved by making them bigger.
"There are certain limitations, mathematical limitation that… pic.twitter.com/xEpBKQReEj
From the tweet:
When hospitals collect data on treatment effects, that raw data never reaches the LLMs.
Instead, the models consume doctors' written interpretations: analyses shaped by people who already have a mental model of how disease and treatment work.
In other words, LLMs are learning from the map, not the territory.
"In other words, LLMs are learning from the map, not the territory."
Add wars: conventional, asymmetric, and technological...
"Conversely, this stupidly military and technological war corresponds to a primacy of the model over the event, that is to
fictitious stakes and to a non-sequitur. War extends/continues the absence
at the heart of politics through other means."
Jean Baudrillard, "L'esprit du terrorisme" [The Spirit of Terrorism], Le Monde, November 11, 2001. Translated by Dr Rachel Bloul, School of Social Sciences, Australian National University.
https://humanities.psydeshow.org/political/baudrillard-eng.htm
SD