So...GPT-3’s got me rethinking AI and other things, like mind, intelligence, and the structure of the world. I remain convinced that GPT-3 is special, but not THAT special. It’s gotten me to think through some things that have been on my mind for a while, mostly: why do these statistical techniques work at all? I think “World, mind, and learnability: A note on the metaphysical structure of the cosmos” is on the right track, and I note that its line of thought runs parallel to that in “Stagnation, Redux: It’s the way of the world [good ideas are not evenly distributed, no more so than diamonds]”. That surely requires a post – but not now [likely it requires a book, later]. And I suppose that frames my current concerns – mind, the economics of growth, the cosmos.
Yikes!
I’m also thinking that AI remains captive to its original concerns and framing from back in the early 1950s. Skipping over cybernetics we have programs for checkers, chess, and symbolic logic (from Russell and Whitehead’s Principia Mathematica). That is, we’re dealing with very constrained artificial worlds. While both chess and checkers are open to children, they also give scope to the most sophisticated adults. Symbolic logic is not really open to children, other than math prodigies, but certainly has room for the most sophisticated and energetic of adults.
By the 1980s, however, the field had discovered that, for a computer, coming to grips with the physical world was as hard as, if not harder than, playing chess or proving theorems. I first encountered this idea in the work of David Marr, the vision researcher and theorist, but it has become associated with roboticist Hans Moravec. As Rodney Brooks has put it:
... according to early AI research, intelligence was “best characterized as the things that highly educated male scientists found challenging”, such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. “The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence.”
While there is a certain logic to the standard AI position, it does bias thinking about the computational simulation and/or realization of mental capacities. It leads the field to over-value the success of chess programs, which now play the best chess in the world, and to under-value the weaknesses of, say, natural language programs, like GPT-3 or machine translation – something I commented on in “To the extent that chess is a prototypical domain for AI, AI researchers are seriously deceived [compute power isn’t enough]”. Whatever these natural language programs do, they certainly do not rival the best human performances. They don’t even rival mediocre human performances.
But the underlying thinking does seem to be that language is just, you know, symbols, like chess. And if we collect enough examples, we’ll have it made in the shade. There is at best only a weak sense of language as being grounded in sensorimotor experience. THAT’s the legacy of chess-thinking, if you will. The sensorimotor basis of chess is trivial; that of language is not.
And then there’s the empiricism of it all. As long as the code works, who cares why? This problem is exacerbated by the nature of machine learning techniques; just what the machine has learned tends to be opaque to investigators. So why bother figuring it out?
More later.
Addendum: AI is gripped by chess-thinking the way literary criticism is dominated by the search for meaning, and the correlative inability to formulate coherent accounts of the text and of meaning. See my posts, The problematics of text and form and the transmutation zone between symbol and thought [once more into the methodological breach], and On the nature of academic literary criticism as an intellectual discipline: text, form, and meaning [where we are now].
I believe, BTW, that such disciplinary inertia is the hallmark of an evolutionary process; cultural evolution in this case. Everything takes place within the bounds of the originating disciplinary framework.