Wednesday, December 14, 2022

Does AI remain in the grip of chess-thinking?

I don’t know. These are some thoughts I found in notes I made about two years ago.

* * * * *

Much of the AI world remains captive to its original concerns and framing from the early 1950s. Skipping over cybernetics – with its origins in pre-digital-computer analog control circuitry – we have programs for checkers, chess, and symbolic logic (from Russell and Whitehead’s Principia Mathematica). Those are very constrained artificial worlds. While both chess and checkers are accessible to children, they also give scope to the most sophisticated adults. Symbolic logic is not really open to children, other than math prodigies, but certainly has room for the most sophisticated and energetic of adults.

By the 1980s, however, AI had discovered that, for a computer, coming to grips with the physical world was as hard as, if not harder than, playing chess or proving theorems. I first encountered this idea in the work of David Marr, the vision researcher and theorist, but it has become associated with Hans Moravec and is known as Moravec’s paradox. As Rodney Brooks has put it:

... according to early AI research, intelligence was “best characterized as the things that highly educated male scientists found challenging”, such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. “The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence.”

While there is a certain logic to the standard AI position, it does bias thinking about the computational simulation and/or realization of mental capacities. It leads the field to over-value the success of chess programs, which now play the best chess in the world, and to under-value the weakness of, say, natural language programs, like GPT-3 or machine translation, something I commented on in “To the extent that chess is a prototypical domain for AI, AI researchers are seriously deceived [compute power isn’t enough]”. Whatever these natural language programs do, they certainly do not rival the best human performances. They don’t even rival mediocre human performances.

One effect of over-valuing chess is to believe that if we summon more computational resources – more memory, more CPU cycles – performance will improve proportionally and will eventually equal and then exceed human performance. I suspect that GPT-3 used more computational resources than the largest chess (and Go) engines, but, when measured against human performance, it is mediocre at best. Now, measured against the performance of previous engines, that performance is impressive, and worthy of serious thought, but it’s not human class, and there’s no reason to think that, merely by scaling up, such engines will achieve human-class performance.

The underlying thinking does seem to be that language is just, you know, symbols, like chess. And if we collect enough examples, we’ll have it made in the shade. There is at best only a weak sense of language as being grounded in sensorimotor experience. THAT’s the legacy of chess-thinking, if you will. The sensorimotor basis of chess is trivial; that of language is not.

In chess the number of pieces is finite and their proper use is simply and strictly characterized. Further, if we adopt a convention for stopping a game when pieces are no longer being exchanged, then the total number of possible chess games is finite, huge but nonetheless finite. That is to say, chess, like tic-tac-toe, is a finite game.
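
To make the finiteness point concrete, here’s a minimal Python sketch – an illustration of mine, nothing more – that exhaustively enumerates tic-tac-toe. It walks every legal game and prints 255,168, the standard count of complete games. Chess under a stopping convention is finite in exactly the same sense; its tree is simply far too large to walk this way.

# A minimal sketch of what "finite game" means: tic-tac-toe is small enough
# that every legal game can be enumerated outright. Chess, under a stopping
# convention, is finite in the same sense, just far too large to enumerate.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board=None, player='X'):
    """Count every complete game reachable from the given position."""
    if board is None:
        board = [None] * 9
    if winner(board) is not None or all(s is not None for s in board):
        return 1                      # game over: win or draw
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player
            total += count_games(board, 'O' if player == 'X' else 'X')
            board[i] = None
    return total

if __name__ == "__main__":
    print(count_games())   # 255168 complete games -- huge, but finite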

Language is not finite. If we think of words as being analogous to chess pieces, their number is unbounded, though finite at any given moment. The proper usage of many, perhaps most, words is neither simple nor sharply characterized; there is no neat separation of legitimate from improper uses. Moreover, usage is often subject to matters not in evidence within language itself; that is, usage is subject to conditions obtaining in the world, as perceived by speakers and writers. The chess board is fixed and finite. While any text, whether spoken or written, is necessarily finite, its size and structure are not at all fixed; such matters are open to indefinite variation.
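
Here’s another small sketch, again purely illustrative, of what that unboundedness looks like. A toy recursive grammar – the nouns and the relative-clause pattern are invented for the example – yields more distinct sentences at every embedding depth, with no limit in principle; there is no analogue of the fixed 64 squares and 32 pieces.

# An illustrative toy: a recursive relative-clause pattern. Because the rule
# can apply to its own output, the set of distinct grammatical strings it
# generates is unbounded, unlike the fixed inventory of chess pieces and moves.

import itertools

def sentences(max_depth):
    """Yield sentences like 'the dog that saw the cat that saw the rat barked'."""
    nouns = ["dog", "cat", "rat"]
    for depth in range(max_depth + 1):
        for chain in itertools.product(nouns, repeat=depth + 1):
            subject = f"the {chain[0]}"
            for noun in chain[1:]:
                subject += f" that saw the {noun}"
            yield subject + " barked"

if __name__ == "__main__":
    for depth in range(4):
        print(depth, sum(1 for _ in sentences(depth)))
    # Counts grow without bound as embedding depth grows: 3, 12, 39, 120, ...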

Thus any attempt to deal with language as though it were an extension of chess is bound to fail. Of course, no AI researchers explicitly think about natural language in that way. But the work they do suggests that they are nonetheless guided by such a conception. Why? Because it is computationally convenient.

And then there’s the empiricism of it all. As long as the code works, who cares why? This problem is exacerbated by the nature of machine learning techniques; just what the machine has learned tends to be opaque to investigators. So why bother figuring it out?

* * * * *

The difference I’ve pointed out between chess and language, finite vs. unbounded, is well known. But I can’t help but wonder whether the overall “envelope” within which AI operates isn’t derived from the one that arose from the computational study of chess and similar problems. If so, that envelope is constraining the development of the field. I can’t help but think that the problem of bringing these large deep learning models to heel will destroy that envelope and force a reconsideration of the field.
