“The interests of humanity may change, the present curiosities in science may cease, and entirely different things may occupy the human mind in the future.” One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
–Stanislaw Ulam, writing about John von Neumann
Any sufficiently advanced technology is indistinguishable from magic.
–Arthur C. Clarke
Superintelligence in the Upload
I’m neither worried nor pleased at the prospect of superintelligent computers. Why? Because they aren’t going to happen in the foreseeable future, if ever. I figure the future is going to be more interesting than that. Why? Because: the next singularity.
Singularities – in the sense of a new regime “beyond which human affairs, as we know them, could not continue” – are not new in human history. Nor are they hard-edged. They are most easily seen in retrospect. Our nineteenth-century predecessors could not have imagined the Internet or microsurgery, nor could our ninth-century predecessors have imagined the steam locomotive or the telephone.
I’m sure that future developments in computing will be amazing, and many of them will likely be amazing in ways we cannot now imagine, but I doubt that our successors will see superintelligent computers, not in the foreseeable future and perhaps not ever. Yes, there will be a time in the future when technology changes so drastically that we cannot now imagine, and thus cannot predict, what will happen. No, that change is not likely to take the form of superintelligent computing.
Why do I hold these views? On the one hand, there is ignorance. I am not aware of any concept of intelligence that is so well articulated that we could plan to achieve it in a machine in a way comparable to planning a manned mission to Mars. In the latter case we have accomplished relevant projects – manned flight to the moon, unmanned flight to the Martian surface – and have reason to believe that our basic grasp of the relevant underlying physical principles is robust. In the case of intelligent machines, yes, we do have a lot of interesting technology, none of which approximates intelligence as closely as a manned mission to the moon approximates a manned mission to Mars. More tellingly, we are not in possession of a robust understanding of the underlying mechanisms of intelligent perception, thought, and action.
And yet we do know a great deal about how human minds work and about, for example, how we have achieved the knowledge that allows us to build DNA sequencing devices and smart phones, or to support humans in near-earth orbit for weeks at a time. This knowledge suggests that superintelligent computing is unlikely, at least if “superintelligence” is defined to mean surpassing human intelligence in a broad and fundamental way.
Human Intelligence and Its Cultural Elaboration
When the work of developmental psychologist Jean Piaget finally made its way into the American academy in the middle of the last century, the developmental question became: Is the difference between children’s thought and adult thought simply a matter of accumulated facts, or is it a matter of fundamental conceptual structures? Piaget, of course, argued for the latter. In his view the mind was constructed in “layers,” where the structures of higher layers were constructed over, and presupposed, those of lower layers. It’s not simply that 10-year-olds know more facts than 5-year-olds, but that they reason about the world in a more sophisticated way. No matter how many specific facts a 5-year-old masters, he or she cannot think like a 10-year-old because he or she lacks the appropriate logical forms. Similarly, the thought of 5-year-olds is more sophisticated than that of 2-year-olds, and that of 15-year-olds is more sophisticated than that of 10-year-olds.
This is, by now, quite well known and not controversial in broad outline, though Piaget’s specific proposals have been modified in many ways. What’s not so well known is that Piaget extended his ideas to the development of scientific and mathematical ideas in history in the study of genetic epistemology. In his view, later ideas developed over earlier ones through a process of reflective abstraction, in which the mechanisms of earlier ideas become objects manipulated by newer, emerging ideas. In a series of studies published in the 1980s and 1990s, the late David Hays and I developed similar ideas about the long-term cultural evolution of ideas.
The basic idea of cognitive rank was suggested by Walter Wiora’s work on the history of music, The Four Ages of Music (1965). He argued that music history should be divided into four ages. The first age was that of music in preliterate societies, and the second age was that of the ancient high civilizations. The third age was the one Western music entered during and after the Renaissance. The fourth age began with the twentieth century. (For a similar four-stage theory based on estimates of informatic capacity, see, for example, D. S. Robertson, “The Information Revolution,” Communication Research 17 (1990): 235–254.)
This scheme is simple enough. What was so striking to us was that so many facets of culture and society could be organized into these same historical strata. It is a commonplace that all aspects of Western culture and society underwent a profound change during the Renaissance. The modern nation state was born, the scientific revolution happened, art adopted new forms of realistic depiction, attitudes toward children underwent a major change, as did the nature of marriage and family, new forms of commerce were adopted, and so forth. If we look at the early part of the twentieth century, we see major changes in all realms of symbolic activity (mathematics, the sciences, the arts) while many of our social and political forms remain structured on older models.