Over at Marginal Revolution, Alex Tabarrok posted about Pathways in "The Chinese Room Thinks." He claims:
It seems obvious that the computer is reasoning. It certainly isn’t simply remembering. It is reasoning and at a pretty high level! To say that the computer doesn’t “understand” seems little better than a statement of religious faith or speciesism. [...] The sheer ability of AI to reason, counter-balances our initial intuition, bias and hubris, making the defects in Searle’s argument easier to accept.
Hmmm... I'm not so sure. I commented:
Well, back in the early 1970s ARPA (now DARPA) sponsored the Speech Understanding Project. It was a five-year effort spread over, I believe, four research groups. The goal was to produce a question-answering system (a term of art) that would take a spoken-language question as input and produce the correct answer to the question. The problem domain was defined by a database of Navy ships. So you ask, "When was the USS Forrestal commissioned?" If it answers "1955," that's scored as correct. A correct answer was taken to mean that the system had understood the question.
Is that understanding? Well, it's something. But it's hard to say that it is understanding in any deep sense. Still, that required a major research effort involving tens if not (low) hundreds of people, and much of the work was quite interesting, if you've got a taste for that sort of thing, which I did in those days. What GPT-3 did in response to Jerry Seinfeld's bit about the Roosevelt tramway is a deeper kind of understanding. As is Pathways' performance in the examples Alex posted above.
How much deeper? If, on a scale of one to ten, we say ARPA's speech understanding systems work at level 1, where do we put Pathways? Does such a metric make sense? Is understanding one-dimensional?
If you ask me to explain Einstein's theory of relativity, I can certainly produce something. But it won't be very deep. Certainly better than 1, but probably not a 5, depending on how you define those levels. In what sense do I understand the theory of relativity?
(BTW, YouTube has a bunch of videos where something is explained on five levels: 1) grade school, 2) high school, 3) college graduate, 4) graduate student, 5) professional.)
What about Steven Spielberg's Jaws? We can ask questions: What was the sheriff's name? Who was the second person killed? How many people were killed? What animal did the killing? Those are relatively easy questions. These are more difficult: Why was the mayor reluctant to close the beach? Quint was haunted by an experience he had as a young adult. Tell us about that experience. Why did it haunt him? I suspect that most people who've seen the movie would have little trouble answering those questions.
And then there's this sort of question, one that I set out to answer: How does the movie exhibit Girard's theory of sacrifice? Of course, one might argue that it doesn't exhibit Girard's theory at all, and some, of course, would say that Girard's theory doesn't hold water. What kind of understanding is going on there? I suspect that most people who've seen Jaws could not deal with those questions. They know little, if anything at all, about Girard and likely don't care much for that sort of thing. Does that mean they don't understand the movie? To say so would be perverse.
But I also think that most people who've seen the movie understand it on a deeper level than is required to answer the first set of questions about the film. And they probably cannot verbalize much of their understanding. Does that mean they don't really understand the movie?
It's not at all clear to me just what one is claiming when one says that Pathways understands. For that matter, what is one claiming when one says that Pathways doesn't really understand?
It seems to me that we're caught between a rock and a hard place. The rock is our current conceptual system, which seems to treat understanding either as a binary variable – one either understands or one doesn't – or as a one-dimensional variable where the distinctions between one level and the next are not at all clear. The hard place is to start coming up with a new conceptual vocabulary that gives us a finer-grained and more useful way to comprehend and think about what these alien mentalities – is that the word we want? – are doing. I think we've got to start investigating that hard place.
Here's my notice of Pathways.
* * * * *
Addendum: Let's assume for a minute that we're going to rate understanding on a scale, say, from 1 to 10. GPT-3 rates, say, 3. Along comes Pathways and it's clearly better than GPT-3. Where does it go? 4? 5? 6?
No.
Given that considerable distance remains between Pathways and humans, I'd say that a 1-10 scale is insufficient. Let's make it 1-100. GPT-3 goes in at 23 and Pathways at, say, 38.
That is to say, each time one of these remarkable results comes in I think it enlarges our sense of the measure space. Maybe it even forces us to start adding dimensions. It just makes the world larger.
* * * * *
Gary Marcus comments:
This week seems like win for AI, but it's actually a step back:

• No info about training set [Dall-E]
• Sparse disclosure of methods and errors [both]
• Anecdotal data only [PaLM jokes]
• Cherry-picking [both]
• No access for scientific community [both] https://t.co/HCn1g5yG7L

— Gary Marcus 🇺🇦 (@GaryMarcus) April 7, 2022