Monday, November 28, 2016

AI Panics (When will they learn?) – A post at Language Log

The last month or so has seen renewed discussion of the benefits and dangers of artificial intelligence, sparked by Stephen Hawking's speech at the opening of the Leverhulme Centre for the Future of Intelligence at Cambridge University. In that context, it may be worthwhile to point again to the earliest explicit and credible AI warning that I know of, namely Norbert Wiener's 1950 book The Human Use of Human Beings [...]:
[T]he machine plays no favorites between manual labor and white-collar labor. Thus the possible fields into which the new industrial revolution is likely to penetrate are very extensive, and include all labor performing judgments of a low level, in much the same way as the displaced labor of the earlier industrial revolution included every aspect of human power. […]

The introduction of the new devices and the dates at which they are to be expected are, of course, largely economic matters, on which I am not an expert. Short of any violent political changes or another great war, I should give a rough estimate that it will take the new tools ten to twenty years to come into their own. […]
Liberman goes on to offer an old sorta' prognostication of his own (more of a cautionary note) and quotes more of Wiener's book. His point in quoting Wiener, which he makes explicit in a reply to a comment by Victor Mair, is that Wiener's time scale was way off:
Wiener seriously underestimated the difficulty of pattern recognition, of robotic control for complex mechanisms, and of integrating the two. Considerable progress has been made in those areas but there are still unsolved problems. He also underestimated the difficulty of speech recognition and text analysis.

In my opinion, current prognosticators tend to similarly underestimate the difficulty of human-like communicative interaction. It's relatively easy to give the impression of solving the problem (Eliza, Siri) without really even trying to solve it.
Thus Siri has no understanding of questions put to it or of the answers it provides, even if the answers are good ones. But there is powerful technology behind Siri, powerful in a way that could scarcely have been imagined in Wiener's time. 

I've appended a comment I made to Liberman's post.

* * * * *

Back in the mid-1970s I was studying computational semantics with David Hays. Every now and then I would ask him, "When do you think we'll be able to do X?" where X ranged over various interesting things one might want of linguistic computing. He always refused to answer, asserting that these things are deeply unpredictable. Remember, he was in the first generation of researchers into machine translation and he'd been on the committee that wrote the ALPAC report. He had practical experience in such things.

In 1975 he got invited to review the computational linguistics literature for the journal Computers and the Humanities. He asked me to draft the text (as I'd been reviewing the literature for the American Journal of Computational Linguistics). I did so and included a bit about an article about computational semantics I was publishing in MLN (Modern Language Notes), as it spoke directly to humanist concerns and included an analysis of a Shakespeare sonnet. We then floated, as a thought experiment, the idea of a computational system capable of reading a Shakespeare play, in some interesting, but unspecified, sense of the word 'reading.' We called it Prospero and set no date on when Prospero would be operational, but in my mind I figured we'd have it in 20 years or so.

Well, the article appeared in 1976 ("Computational Linguistics and the Humanist"). Add 20 to that and we have 1996. Was anything like Prospero available then? No. Not only that, but the symbolic computing that was at the center of our review, and of Prospero, was being pushed into the background by statistical methods. It's now 2016, 40 years after that paper. We don't have anything like Prospero – though I believe Patrick Henry Winston is using the Macbeth story (but not Shakespeare's play) in an investigation of story comprehension – and I see no prospects for Prospero in the near future. And yet, by the practical standards of 1976, Siri is a marvel, as is Google's translation tech, and self-driving vehicles. Etc.

It's a brave new world that has such machines in it, and most of it is still unexplored.

* * * * *

I've been entertaining the idea that, in some ways, we're on the edge of the Marvelous Future. No, we're not flying around in jet packs; getting humans to low-earth orbit is not as routine as Kubrick depicted in 2001; the computational marvels of the Star Trek computer are still in the unforeseeable future, not to mention Cmdr Data; and environmental catastrophe seems to be closing in on us. But we're living in a very different world from that of 1950 and confront very different possibilities. Technology is at the center of it. Now we have to accommodate our thinking about society to fit the very different world before us. We need to think about universal basic income. Among other things.

I just watched a conversation in which economist Glenn Loury (of Brown) cited Dani Rodrik to the effect that, given globalization, national sovereignty, and democracy, you can have any two of the three, but not all three.
