Tuesday, April 4, 2023

Can we make sense of AI Doom? [Living in a Complex world]

Over at Marginal Revolution Tyler Cowen has a post entitled, Does natural selection favor AIs over humans? Model this! It's about a paper claiming that, indeed, natural selection favors AIs. Cowen has his doubts: "I genuinely do not understand why he sees so much force in his own paper." The post has generated a fair amount of discussion so far, and I've contributed a somewhat lengthy comment, which I reproduce below. I follow that with a passage from a paper on natural selection that David Hays and I published some years ago.

A Drake Equation for AI Doom

First, here's a post by Rohit Krishnan (Strange Loop Canon) from December, in which he lays out an analog to the Drake equation for AGI, as follows:

Scary AI = I * A1 * A2 * U1 * U2 * A3 * S * D * F

He then goes on to discuss each of the variables in that equation, which he assumes to be independent. It's an interesting exercise.
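To see how such a product of independent factors behaves, here's a minimal Python sketch. The factor names are Krishnan's; the probability values are invented placeholders, not his estimates:

```python
import math

# Krishnan's Drake-style decomposition:
#   Scary AI = I * A1 * A2 * U1 * U2 * A3 * S * D * F
# Each factor is treated as an independent probability.
# The values below are illustrative placeholders only.
factors = {
    "I": 0.9, "A1": 0.8, "A2": 0.7, "U1": 0.5, "U2": 0.5,
    "A3": 0.4, "S": 0.3, "D": 0.2, "F": 0.1,
}

# Multiply the factors together to get the overall probability.
p_scary_ai = math.prod(factors.values())
print(f"P(Scary AI) = {p_scary_ai:.7f}")  # → 0.0003024
```

The point of such a decomposition is that even when each individual step looks plausible on its own, the conjunction of many independent steps can come out very small.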

AI Doom as a millennial cult

Second: It's clear that the AI Doom world is awash in very sophisticated epistemic theater, that is to say, complicated exercises designed to demonstrate great intelligence and methodological sophistication. Make no mistake, these are "serious" people. And they are. But the cause about which they are serious is a religious one, not scientific or technological, much less social-scientific.

Is there a way to argue that point without assuming the conclusion? That's not clear to me. You don't have to know much history or social science to know that millennial movements have been and are common. Nor is it difficult to see that AI Doom appears to be such a movement. Its adherents may not have arrived at a specific date when the world as humans know it will come to an end, but they devote a lot of time and effort to estimating one. Moreover, and for what it's worth, here and there at LessWrong there's explicit discussion of whether they may be living in a cult.

Does that make it so? On the one hand, we have apocalyptic claims being made about AI and backed up by often complex argumentation. If those claims are right, then the world is in deep trouble. But on the other hand, we know that millennialism is a real phenomenon, one that does not exclude participation by intelligent people. If those claims are being made by millennialists, we can safely disregard them. They are no more valid claims about reality than flat-earthism is.

I have two further observations: 1) It's pretty clear to me that LessWrong is an insular world. Complex, sophisticated and perhaps even important arguments are being made there, but there is little to no incentive – much less actual effort – to advance them outside that world. For example, I've been following a discussion about the importance of publishing in the formal academic literature, or at least on arXiv. That goes the other way as well: there's lots of work in the formal literature that's relevant to discussions within LessWrong, but that literature is not consulted.

2) Robin Hanson has a post in which he lays out 13 features of AGI that qualify it as sacred. He concludes:

I hope you can see how these AGI idealizations and values follow pretty naturally from our concept of the sacred. Just as that concept predicts the changes that religious folks seeking a more sacred God made to their God, it also predicts that AI fans seeking a more sacred AI would change it in these directions, toward this sort of version of AGI.

I haven't had a chance to fully consider it, but it's certainly a step in the right direction.

Opportunity cost

3) This is a complex matter. It can easily be argued back and forth around and about for hours, days, weeks, months, and so forth. Time and resources devoted to it are time and resources not being devoted to such things as pandemic preparedness, China and Taiwan, preventing nuclear war.... or for that matter, AI capabilities. Given that the world is at stake, I suppose one could argue that there is no better way to use the available resources.

And so the world ended, not with a bang, not with a whimper, but with an endless calculation.

* * * * *

Epistemology for a complex world

And then there's the epistemological argument David Hays and I made in A Note on Why Natural Selection Leads to Complexity:

Gibson's ecological psychology is grounded in the assertion that we cannot understand how perception works without understanding the structure of the environment in which the perceptual system must operate. In the context of his analysis of visual perception (1979) Gibson addressed an issue formulated most poignantly by Rene Descartes in his Meditations on First Philosophy (1641): How do I distinguish between a valid perception and an illusory image, such as a dream? The difference, Gibson (1979:256-257) tells us, between mere image and reality is that, upon repeated examination, reality shows us something new, whereas images only reiterate what we have seen before.

A surface is seen with more or less definition as the accommodation of the lens changes; an image is not. A surface becomes clearer when fixated; an image does not. A surface can be scanned; an image cannot. When the eyes converge on an object in the world, the sensation of crossed diplopia disappears, and when the eyes diverge, the “double image” reappears; this does not happen for an image in the space of the mind. . . . No image can be scrutinized -- not an afterimage, not a so-called eidetic image, not the image in a dream, and not even a hallucination. An imaginary object can undergo an imaginary scrutiny, no doubt, but you are not going to discover a new and surprising feature of the object this way.

Gibson presupposes an organism which is actively examining its environment for useful information. It can lift, touch, turn, taste, tear and otherwise manipulate the object so that its parts and qualities are exposed to many sensory channels. Under such treatment reality continually shows new faces. Dream, on the other hand, simply collapses. Dream objects are not indefinitely rich. They may change bafflingly into other objects, but in themselves they are finite.

With this simple remark, Gibson answered not only Descartes's question but also the first question in epistemology. If we want to assure ourselves of the reliability of our knowledge of the universe, we must first assure ourselves that we are observing it. Descartes realized that percepts do not contain within themselves markers indicating where they came from. If the percept is all you have, where can you find such a marker? Gibson suggests that the search for a marker is beside the point, in the same way that Alexander's solution to the Gordian knot problem made it clear that untying it won't work. Gibson's answer is like Alexander's sword, in that it rejects the stipulated terms of the problem. Reality is in the eye of the beholder, or more precisely, in the eye-hand coordination of the active observer. Therefore the answer has to be found in the actions available to the observer.

Reality is not perceived, it is enacted – in a universe of great, perhaps unbounded, complexity.

Now, it follows immediately that the more capable observer can obtain more knowledge of the universe. To repeat the same acts of manipulation and observation gains nothing; to do justice to the universe's unlimited capacity to surprise, the observer needs a large repertoire of manipulative techniques and perceptual channels. Indeed, nothing assures us that we, with elaborate technology for experiment and observation, have exhausted nature's store of surprises. After billions of years of biological evolution, thousands of years of cultural evolution, and centuries of scientific and technical cumulation, we can still hope to add to our capability and learn about new kinds of phenomena.

Is it the case that the Doomers calculate and calculate and never see anything new? If so, they're deluding themselves.
