Tuesday, June 23, 2020

AGI and Superintelligence vs. Humans on Mars – such different kinds of discussions

Every once in a while I find myself thinking about AGI (artificial general intelligence) and superintelligent machines: How long before they emerge, and so on? Color me skeptical.

Beyond that, however, I don’t think we can argue the issue in an interesting and illuminating way, that is, in a way that helps proponents refine their understanding and so advance their intellectual projects. A suitable intellectual framework doesn’t exist. Sure, I’ve got my views, and I have no intention of abandoning them, but I don’t regard my arguments as particularly strong, no more than I find strong arguments in favor of the emergence of AGI or even superintelligence. We’re all just tap-dancing and hand-waving.

In contrast, consider the question of whether or not we should colonize Mars. We have a very rich framework in which to discuss that. We’ve already landed men on the moon and returned them to Earth. Astronauts have spent weeks and months, even a year, living in low-Earth orbit aboard the International Space Station. We’ve landed robots on Mars and gotten useful information back. All of that experience is relevant to sending humans to Mars and – here’s the point – we’ve got frameworks in which we can evaluate that experience against the requirements of a manned mission to Mars.

What do we have in the realm of machine intelligence? We’ve got impressive working systems of machine translation. But we wouldn’t use those systems to translate legal documents, for example, and we don’t have any way of evaluating those systems that gives us a detailed sense of what we would need to do to create MT systems adequate to legal translation. And so it goes in various domains. In a few of them, such as chess and Go, the performance of artificial systems is superior to human performance. But individual humans can do all of these things, and more. How do we create a machine that can do that, much less one that can improve itself, say, beyond human capability? We haven’t a clue.

In the case of a manned mission to Mars we have a rich understanding of the mechanical, kinematic, chemical, thermodynamic, electrical, and electronic principles involved in building the devices needed to perform the mission. We also know quite a bit about the biological and psychological requirements of supporting human life for such a mission. But our understanding of the basic principles of intelligence – perception, cognition, reasoning, and so forth – is sadly lacking. We don’t know how humans do it – though we have learned a lot – and don’t know how to design machines that can perform at a human level. We have a rich knowledge of the basic principles involved in a manned mission to Mars, but a poor knowledge of the basic principles involved in constructing AGI or in understanding human intelligence.

Arguments about AGI seem more like science fiction than like doing a feasibility study of a mission we’re considering.

What other domains are more like AGI than like a manned mission to Mars? How do we recognize the difference between such domains?

* * * * *

How we go to Mars, episode 1 of ?

