
Saturday, November 4, 2017

"Philosophy doesn't do nuance" – Will computers ever be able, you know, to think?


Philosophy doesn’t do nuances well. It might fancy itself a model of precision and finely honed distinctions, but what it really loves are polarisations and dichotomies. Internalism or externalism, foundationalism or coherentism, trolley left or right, zombies or not zombies, observer-relative or observer-independent, possible or impossible worlds, grounded or ungrounded … Philosophy might preach the inclusive vel (‘girls or boys may play’) but too often indulges in the exclusive aut aut (‘either you like it or you don’t’).

The current debate about AI is a case in point. Here, the dichotomy is between those who believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living room, or Nest in your kitchen (I am the happy owner of all three). Think instead of the false Maria in Metropolis (1927); Hal 9000 in 2001: A Space Odyssey (1968), on which [I J] Good was one of the consultants; C3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent Smith in The Matrix (1999) or the disembodied Samantha in Her (2013). You’ve got the picture. Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, I shall refer to the disbelievers as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.
Singularitarians:
...believe in three dogmas. First, that the creation of some form of artificial ultraintelligence is likely in the foreseeable future. This turning point is known as a technological singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.

Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary responsibility of the current generation is to ensure that the Singularity either does not happen or, if it does, that it is benign and will benefit humanity.
Here Floridi hits pay dirt: "Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence." But, retorts the Singularitarian, it IS possible, no? Yeah, sure:
But this ‘could’ is mere logical possibility – as far as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this is a trick, blurring the immense difference between ‘I could be sick tomorrow’ when I am already feeling unwell, and ‘I could be a butterfly that dreams it’s a human being.’
And then there's that good old chestnut, exponential growth, as though it's somehow impossible for an exponential 'hockey stick' curve to level out into a sigmoid curve.
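To see why that matters, here's a quick sketch (mine, not Floridi's) in Python: an exponential curve and a logistic (sigmoid) curve with the same growth rate are nearly indistinguishable over the early stretch, which is exactly where the extrapolating gets done, yet one blows up forever while the other flattens out at its ceiling. The rate and ceiling values are arbitrary, chosen only for illustration.

```python
import math

def exponential(t, r=0.5):
    """Unconstrained exponential growth: the 'hockey stick'."""
    return math.exp(r * t)

def logistic(t, r=0.5, ceiling=1000.0):
    """Same early behaviour, but growth saturates at the ceiling: a sigmoid."""
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-r * t))

# Early on the two curves track each other closely; later they part company.
for t in range(0, 31, 5):
    print(f"t={t:2d}   exponential={exponential(t):12.1f}   logistic={logistic(t):8.1f}")
```

Looking only at the early data points, nothing tells you which of the two curves you are on.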

But, Floridi argues, their arch-enemies, whom he dubs the AItheists, are just as bad:
AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.
Quod erat demonstrandum.

NOT.

Here he references, among others, John Searle, whom I recently discussed in "Searle almost blows it on computational intelligence, almost, but not quite [biology]". I give Searle points for his suggestion that specifically human biochemistry is essential for duplicating the causal powers of the human brain, but, as far as I can see, that has little or nothing to do with his famous Chinese Room argument, which is the center of his critique.

Floridi goes on to observe that, in his (in)famous 1950 article, Turing had dismissed the question of mechanical thought as "too meaningless to deserve discussion".
True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.
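Floridi's "well-known results" include Turing's halting problem, and the diagonal argument behind it is short enough to sketch in Python (my illustration, not anything in the essay; `halts` here is a hypothetical perfect decider, and the point is precisely that it cannot exist):

```python
def halts(program, argument):
    """Hypothetical decider: True iff program(argument) would eventually halt.
    The argument below shows no such algorithm can exist, so this is only a stub."""
    raise NotImplementedError("no general halting decider exists")

def diagonal(program):
    """Do the opposite of whatever halts() predicts program does when fed itself."""
    if halts(program, program):
        while True:          # halts() said it halts, so loop forever instead
            pass
    return "halted"          # halts() said it loops, so halt instead

# Now ask what halts(diagonal, diagonal) should return.
# If True, diagonal(diagonal) loops forever; if False, it halts straight away.
# Either way the 'perfect' decider is wrong about at least one input, so no
# algorithm can always deliver the correct yes-or-no answer -- Floridi's point.
```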
He goes on to talk of the "Fourth Revolution" (from the title of his current book) in our self-understanding:
We are not at the centre of the Universe (Copernicus), of the biological kingdom (Charles Darwin), or of rationality (Sigmund Freud). And after Turing, we are no longer at the centre of the infosphere, the world of information processing and smart agency, either. We share the infosphere with digital technologies. These are ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us reevaluate human exceptionality and our special role in the Universe, which remains unique.
Compare that with this paragraph from "The Evolution of Cognition", which David Hays and I published in 1990, seven years before Deep Blue beat Kasparov, then the reigning world chess champion:
A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add "because . . . " to the assertion "I'm important." But the evolution of technology will eventually invalidate any claim that follows "because." Sooner or later we will create a technology capable of doing what, heretofore, only we could.
We go on to suggest:
Perhaps adults who, as children, grow up with computers might not find these issues so troublesome. Sherry Turkle (1984) reports conversations of young children who routinely play with toys which "speak" to them—toys which teach spelling, dolls with a repertoire of phrases. The speaking is implemented by relatively simple computers. For these children the question about the difference between living things and inanimate things—the first ontological distinction which children learn (Keil 1979)—includes whether or not they can "talk," or "think," which these computer toys can do.
Is that happening? Has anyone compared ideas about computational "intelligence" between adults who grew up with computer toys and were programming by, say, middle school or younger, and those who were not raised on computer-enabled toys and do not program at all? We conclude:
In general it seems obvious to us that a generation of 20-year-olds who have been programming computers since they were 4 or 5 years old are going to think differently than we do. Most of what they have learned they will have learned from us. But they will have learned it in a different way. Their ontology will be different from ours. Concepts which tax our abilities may be routine for them, just as the calculus, which taxed the abilities of Leibniz and Newton, is routine for us. These children will have learned to learn Rank 4 concepts.
