Friday, May 10, 2024

What HAS AI changed? What WILL it change?

The answers to those questions range between “nothing” and “everything” depending on this and that. What is the scope of the question, earth only, or the whole universe? Are we talking about governing laws or their expression, to date, or through all of time? What are your metaphysical presuppositions, strict physical reductionism or (extravagant) pluralism? At the very least it seems to have changed the number and complexity of the questions we face, the limits of the bounding box of those questions.

Back in 1990 David Hays and I published what I regard as a foundational article, one of two such articles we have published:

The Evolution of Cognition, Journal of Social and Biological Structures, 13(4): 297-320, 1990, https://www.academia.edu/243486/The_Evolution_of_Cognition

In that article we said:

... there are researchers who think it inevitable that computers will surpass human intelligence and some who think that, at some time, it will be possible for people to achieve a peculiar kind of immortality by “downloading” their minds to a computer. As far as we can tell such speculation has no ground in either current practice or theory. It is projective fantasy, projection made easy, perhaps inevitable, by the ontological ambiguity of the computer. We still do, and forever will, put souls into things we cannot understand, and project onto them our own hostility and sexuality, and so forth.

A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add “because . . . “ to the assertion “I’m important.” But the evolution of technology will eventually invalidate any claim that follows “because.” Sooner or later we will create a technology capable of doing what, heretofore, only we could.

That’s a fairly sweeping statement, though I must admit that I’m not quite sure what we meant by it. I’m quite sure that we meant that downloading (or uploading) one’s mind really is a projective fantasy. I still believe that. Did we mean that “inevitable that computers will surpass human intelligence” is projective fantasy as well? If that’s what we meant, don’t we contradict it at the very end of the next paragraph? But just what does that mean, that sooner or later we will create a technology capable of doing what, heretofore, only we could? Does it mean that, task by task, problem domain by problem domain, some technology will surpass human performance in that domain, for that task? Or did we mean that some single technology (the term AGI hadn’t been coined then) will surpass us in all domains?

Who knows?

That article was about the cultural evolution of cognition through a series of ranks, for want of a better word. Each rank was catalyzed by the emergence and maturation of a new cognitive technology. Rank 1 culture was enabled by speech. Writing saw the emergence of Rank 2 culture. Rank 3 culture first appeared in early modern Europe as the result of a conjunction of Arabic arithmetic with European mechanism. The seeds of Rank 4 thinking appear in the 19th century in statistical mechanics and Darwinian evolution and are catalyzed by Turing’s abstract conceptualization of computation and von Neumann’s scheme for embodying that conceptualization in physical devices, modern computers.

None of these transitions appears as a step function on a scale measured in years, but rather would seem to follow the pattern of the familiar logistic function of exponential growth up to some plateau. Yet over the long run these curves come closer together. Rank 1 emerged on the order of 100s of thousands of years ago; Rank 2, multiple 1000s; Rank 3, multiple 100s, and Rank 4, within the lifetime of my grandparents or parents, depending on where you locate the starting point (I was born in 1947). Are we now moving around on the plateau of Rank 4, or are we moving toward Rank 5? If the latter, where are we on that curve?

Throughout these various transitions, human biology remains (fundamentally) unchanged while human culture, and thus the conditions of human life, are changed radically. The brains of humans living in different cultures are pretty much the same, but the behaviors and perceptions of which they are capable vary radically. What remains the same with AI and what changes?

For billions of years, the universe consisted entirely of inanimate matter. And then, over a long period of time, life emerged. Was that a fundamental change or not? The laws of physics remained the same, no? [Assuming, of course, that the laws of physics are everywhere the same throughout the universe.] Can the laws of biology be (effectively) reduced to the laws of physics? Some say “yes” and some say “no.” But you can see why the question is an important one, no? Does the appearance of AI represent the emergence of new laws? If so, are they laws of mind or laws of matter? If not, does that mean that there is not anything fundamentally new about AI? And yet it may very well change human life as profoundly as writing did 1000s of years ago. Or is it more like the emergence of language 100s of 1000s of years ago? Are humans something fundamentally new in the biological world, or are we merely naked apes?

What of the future? The future consists of events and phenomena that have not happened yet. And yet some of them keep slipping into the real at a steady pace, even as I type, even as you read – which is necessarily some time after I type. That implies that there is a range of events which is in the future for me as I type, but is in the past for you as you read. The more you read, the wider that range becomes. If you stop reading for a moment, does that range take a pause, or is it still widening though you aren’t paying attention?

What of hype? What’s real about the future for Sam Altman may be hype to me. That is, it’s real hype to me, but to Sam, my opinion is flat-out wrong.

AI is somehow mixed up in all this. It really is. No hype.

* * * * *

CODA: The other foundational article: Principles and Development of Natural Intelligence, Journal of Social and Biological Structures, Vol. 11, No. 8, July 1988, 293-322, https://www.academia.edu/235116/Principles_and_Development_of_Natural_Intelligence
