I've posted a new working paper. Title above, links, abstract, contents, and introduction below.
Academia: https://www.academia.edu/129111476/Dialog_with_Claude_3_5_on_the_Intellectual_Potential_of_Man_Machine_Interaction_A_Working_Paper_April_30_2025
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5237019
ResearchGate: https://www.researchgate.net/publication/391320022_Dialog_with_Claude_35_on_the_Intellectual_Potential_of_Man-Machine_Interaction
Abstract: This paper takes the form of a discussion between me and Claude 3.5. We began by discussing how neural-network based chess programs like AlphaZero play differently from humans, and whether humans can learn from these new approaches. This led to a key distinction between computational limitations (humans simply can't calculate as deeply as computers) and conceptual barriers that might be overcome through developing new theoretical frameworks. We explored how knowledge develops over historical time - not through changes in human biological capacity, but through the development of new conceptual frameworks and cultural tools. This was illustrated through examples like A Connecticut Yankee in King Arthur's Court (showing the vast gulf between different historical periods' knowledge) and cargo cults (showing the difference between mimicking surface behaviors and understanding underlying principles). The discussion then moved to how chess and language represent fundamentally different kinds of problems. Chess has a simple geometric footprint (an 8x8 grid, six piece types) and a finite though vast possibility space. Language, in contrast, must interface with the entire world of human experience and lacks such clear boundaries. This connected to the historical development of AI - chess yielded to symbolic AI approaches (Deep Blue) while language required statistical/neural approaches (modern LLMs). This reflects fundamental differences in the problems - chess being rule-based and well-defined, language being messy and contextual. Similar patterns appeared in computer vision, where early symbolic approaches struggled with the fractal-like complexity of real-world objects, leading to the success of machine learning approaches.
Note: This abstract is a lightly edited version of a summary of the discussion that I asked Claude 3.5 to create.
Contents
Introduction: “Superintelligence” through man-machine interaction 2
Humans learning to think like AIs in chess 4
Super-intelligence 6
Connecticut Yankee 8
Cargo Cults 9
Chess and Search 9
Language 10
Recursus 13
Introduction: “Superintelligence” through man-machine interaction
I don’t know when the word “superintelligence” was first used, what it was used for, or who coined the term. This Google Ngram plot shows evidence of the term going back to the early 20th century:
But the term doesn’t take hold until the second decade of this millennium. This chart gives us a clearer picture:
The big jump starts at 2014, the year Nick Bostrom published his book, Superintelligence: Paths, Dangers, Strategies, which went on to become a New York Times bestseller and to garner praise from Elon Musk, Bill Gates, Peter Singer, Derek Parfit, and Sam Altman.
Here’s how Bostrom defines superintelligence: “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (p. 26). That’s it. He treats it as an unstructured thing that provides intellectual power, like gasoline powering an automobile engine. Nowhere does he explain the mechanisms through which superintelligence operates.
By contrast, starting in the 1950s, the cognitive sciences have devoted a great deal of attention to mechanisms. Most of the controversies in linguistics have been about structures and mechanisms. Research in artificial intelligence has classically been a search for specific computational mechanisms. In this context, one might as well talk about Oooomph as talk about superintelligence. They mean pretty much the same thing, and that thing doesn’t seem to have anything to do with mechanisms.
Back in the mid-1980s David Hays and I reviewed a wide range of material in linguistics, cognitive psychology, neuroscience, developmental psychology, and neurobiology and published an article in which we proposed five principles of natural intelligence: 1) modal, 2) diagonalization, 3) action, 4) figural, and 5) indexing. We associated each principle, however informally, with some mathematics and with neural structures. As far as I know, the literature on superintelligence has nothing remotely comparable on offer. Our proposal is surely speculative, but there’s something there. There’s nothing to the concept of superintelligence except more, more, more.
* * * * *
This paper uses the term “superintelligence” rather lightly. It takes the form of a conversation with Claude 3.5, which I initiate by inquiring about how neural-net-based chess engines play the game differently from humans. That is, I am interested in specific behaviors. After discussing chess for a bit we move on to superintelligence. There we develop a distinction “between something that's novel but ultimately comprehensible to humans [...] versus something that's inherently beyond human comprehension” (Claude’s formulation). Pay particular attention to the discussion of search, pp. 9-10, which leads to language, where I talk about the geometric footprint of a problem domain. The geometric footprint of chess is simple and finite while the geometric footprint of language is complex and apparently unbounded.
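To make the “simple and finite” point concrete, here is a back-of-envelope sketch (mine, not from the paper) of a crude upper bound on chess board configurations: each of the 64 squares holds one of 13 states (empty, or one of six piece types in either color). The bound wildly overcounts, since most such configurations are illegal, but it shows that the space, however vast, is finite and easily characterized. No comparable enumeration exists for the states of a natural language.

```python
# Crude upper bound on chess board configurations:
# 64 squares, each in one of 13 states (empty, or 6 piece types x 2 colors).
# Most configurations counted this way are illegal; the point is only that
# the space is finite and trivially bounded.
SQUARES = 64
STATES_PER_SQUARE = 13  # empty + 6 white pieces + 6 black pieces

upper_bound = STATES_PER_SQUARE ** SQUARES
order_of_magnitude = len(str(upper_bound)) - 1

print(f"Upper bound: {STATES_PER_SQUARE}^{SQUARES} ~ 10^{order_of_magnitude}")
# prints "Upper bound: 13^64 ~ 10^71"
```

A vast number, but a bounded one, computed in a single line from the game's geometric footprint. That is the contrast with language the dialog develops.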
We’re by no means at the end; there are a few more pages in the dialog. But that should give you a sense of its style. The object is to tease out important distinctions and details, distinctions and details important for thinking about the nature of the underlying mechanisms.

