I’ve got a new article at 3 Quarks Daily:
Chess and Language as Paradigmatic Cases for Artificial Intelligence
Chess has been a central concern of AI from the beginning. AI researchers didn’t become interested in natural language until the 1970s. Before that, computational research on natural language was the domain of computational linguistics (CL), which started with machine translation (of texts from one natural language to another) as its primary problem. Thus we have two different disciplines, AI and CL.
In a sense, AI was fundamentally a philosophical exercise. It was an attempt to demonstrate, in effect, that we could understand the human mind in terms of computation. But rather than advance its philosophical objective through argument, it chose computational demonstration as its mode of expression. Chess became a central concern for two reasons: 1) It was widely regarded as exhibiting the pinnacle of human reasoning ability; if we could create a computer program that played championship-level chess, then surely we could create programs capable of any cognitive or even perceptual task humans can do. 2) The nature of chess made it well-suited to computational investigation.
My article concentrates on this and then goes on to make the point that language is utterly unlike chess in this respect. The chess domain is bounded and well-defined. Natural language is not; it is ill-defined and unbounded.
That’s really as far as I got. Which is OK. But what I was aiming for was an argument that AI is still, in effect, mesmerized by the chess paradigm. I couldn’t quite make it that far. Language is just so obviously different.
What I’ve come to realize, only after I’d finished the article, is that it isn’t so much chess that has mesmerized AI. Rather it is computation itself. AI has been implicitly assuming that the First Principles of intelligence reduce to the First Principles of computing. The first principles of computing can be found in the work of Alan Turing (the abstract idea of computing) and John von Neumann (for the physical implementation of computing).
The first principles of intelligence are more stringent. As Claude put it in our dialog last night:
First principle of intelligence: Must operate in unbounded, geometrically complex physical reality with finite resources.
Those two qualifications, an unbounded, geometrically complex reality and finite computational resources, change the nature of the problem considerably. Miriam Yevick’s 1975 paper, “Holographic or Fourier Logic,” is the crucial document, but it has been forgotten. Using object identification in the visual domain as her case, she showed that where we are dealing with geometrically simple objects, sequential symbolic processing is the most efficient computational regime, but where we are dealing with geometrically complex objects, neural-net processing is the most efficient computational regime. AI started out with symbolic processing in the 1950s and arrived at neural nets in the 2010s. But it hasn’t explicitly recognized that one must fit the mode of processing to the nature of the world. In that (perhaps a bit peculiar) sense, the researchers in the currently dominant paradigm don’t know what they’re doing.
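Yevick’s contrast can be suggested with a toy sketch. This is my illustration, not her formalism: the function names, the 16×16 arrays, and the normalized-correlation measure are all assumptions made for the example. A geometrically simple object like a square admits a short symbolic description that a few rules can check; a geometrically complex pattern has no such short description and is better recognized holistically, by correlating a candidate against the stored pattern as a whole, which is the Fourier/convolution style of processing her paper analyzed and the kind that neural nets exploit:

```python
import numpy as np

# Symbolic route: a square is fully captured by a tiny description,
# so membership can be decided by a handful of rule checks.
def is_square(corners):
    """Do four points form a square? Check side and diagonal lengths."""
    pts = np.asarray(corners, dtype=float)
    d = sorted(np.linalg.norm(pts[i] - pts[j])
               for i in range(4) for j in range(i + 1, 4))
    sides, diags = d[:4], d[4:]   # 6 pairwise distances: 4 sides, 2 diagonals
    return (np.allclose(sides, sides[0])
            and np.allclose(diags, diags[0])
            and np.isclose(diags[0], sides[0] * np.sqrt(2)))

# Holistic route: a complex pattern has no short rule, so we match it
# as a whole by normalized correlation against a stored template.
def correlate(a, b):
    """Normalized cross-correlation of two equal-shaped arrays."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
template = rng.random((16, 16))                        # "geometrically complex" pattern
noisy = template + 0.05 * rng.standard_normal((16, 16))  # same pattern, slightly perturbed
unrelated = rng.random((16, 16))                       # a different complex pattern

print(is_square([(0, 0), (1, 0), (1, 1), (0, 1)]))     # True
print(correlate(template, noisy) > correlate(template, unrelated))  # True
```

The point of the sketch is the asymmetry: `is_square` amounts to a few lines of symbolic rules, while no comparably short rule set exists for the random blob, whose recognition instead runs through the whole-pattern correlation. That, in miniature, is the trade-off between processing regimes that Yevick formalized.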
I’ve written a number of blog posts and articles about Yevick’s work. Try these two articles:
Next Year in Jerusalem: The brilliant ideas and radiant legacy of Miriam Lipschutz Yevick [in relation to current AI debates], 3 Quarks Daily, October 9, 2023, https://3quarksdaily.com/3quarksdaily/2023/10/next-year-in-jerusalem-the-brilliant-ideas-and-radiant-legacy-of-miriam-lipschutz-yevick-in-relation-to-current-ai-debates.html
What Miriam Yevick Saw: The Nature of Intelligence and the Prospects for A.I., A Dialog with Claude 3.5 Sonnet, Working Paper, January 3, 2025, https://www.academia.edu/126773246/What_Miriam_Yevick_Saw_The_Nature_of_Intelligence_and_the_Prospects_for_A_I_A_Dialog_with_Claude_3_5_Sonnet_Version_2
