Wednesday, October 2, 2013

Artificial General Intelligence (AGI): Curiouser and Curiouser

A year ago, physicist David Deutsch mused on the possibility of fully general artificial intelligence. The bottom line: we don't know what we're doing, though he takes a roundabout way to get there.
In 1950, Turing expected that by the year 2000, ‘one will be able to speak of machines thinking without expecting to be contradicted.’ In 1968, Arthur C. Clarke expected it by 2001. Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been.

This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many.
It certainly seems that way: we're no closer than we were 50 years ago. At least we've managed to toss out a lot of ideas that won't work. Or have we?

Deutsch thinks the problem is philosophical: "I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that is essential to their future integration is also a prerequisite for developing them in the first place."

He places great emphasis on the ideas of Karl Popper. I admire Popper, but I don't quite see what Deutsch sees in him. Still, he manages to marshal an interesting quote:
As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.
That is to say, minds are built from within. And the building is done by fundamental units that are themselves alive and so trying to achieve something, if only the minimal something of remaining alive. Those fundamental units are, of course, cells.

Here, in the final paragraph, Deutsch makes a curious error:
For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees.
"Information," really? That's a neat trick, chalking AGI up to the mere difference between ape DNA and human DNA. Alas, the trick's a useless one, as we don't understand how any DNA really works. Not yet. Not down to the bottom.

That is, something is information only with respect to a device that treats it as information. If we don't know how that device works, then information talk is empty. I make this point in a bit more detail in Culture Information Memes WTF!
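The point can be illustrated with a toy sketch (mine, not Deutsch's): the very same bytes count as different "information" depending on the device, here a decoder, that reads them. Nothing in the bytes themselves settles which reading is the right one.

```python
import struct

# One fixed sequence of four bytes.
payload = b"\x42\x48\x49\x21"

# Read by an ASCII decoder, it's a short string.
as_text = payload.decode("ascii")           # "BHI!"

# Read as a big-endian unsigned integer, it's a number.
as_int = struct.unpack(">I", payload)[0]    # 1112033569

# Read as a big-endian 32-bit float, it's a different number.
as_float = struct.unpack(">f", payload)[0]  # about 50.07

print(as_text, as_int, as_float)
```

Same physical pattern, three incompatible "facts": which one the pattern carries depends entirely on the interpreting device, which is the point being made about DNA and the cellular machinery that reads it.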

But the caption to the illustration at the head of the article DOES get it right: "Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough."

When it comes to AGI we're in a situation that's quite different from, say, planning to send humans to Mars. That would be an expensive, difficult, and dangerous task. But it seems doable because, as far as we know, we grasp all the fundamental principles involved in creating the technology. After all, we've already sent small vehicles to Mars, vehicles which have taken pictures and done other interesting and useful things. We know we can get there. And we know how to equip humans to survive in outer space. We've sent humans to the moon and we've sent them into Earth orbit. We just have to put all this stuff together and we can go to Mars.

But with AGI, the problem is that we don't understand the principles.

3 comments:

  1. Is it preposterous to suggest that such principles would have to take into consideration the mind's capacity for illnesses such as schizophrenia or hallucination, though not necessarily producing such abilities?

  2. No, that's not at all preposterous.
