Saturday, March 5, 2022

These bleeding-edge AI thinkers have little faith in human progress and seem to fear their own shadows

I suspect, though, if you put the question to them, they’d deny it. “Are you crazy! Of course we believe in human progress.” Don’t believe them. It’s not that I think they’re being deceptive. Rather, I think they’re deceived about the implications of their ideas.

Just what bleeding-edge AI thinkers am I talking about? Scott Alexander has a long and tangled post called Biological Anchors: A Trick That Might Or Might Not Work. He says the post is “trying to review and summarize Eliezer Yudkowsky's recent dialogues on AI safety.”[1] Those people, and others like them.

Let me explain

At the heart of those discussions is a long report in which Ajeya Cotra attempts to estimate when human-level AI will finally see the light of day – I’ve not read it myself, but I’ve read quite a lot about it. These thinkers assume that human-level AI is inevitable, fear that it might go rogue and turn on us, and are very worried that there is little we can do to stop that from happening – though, I note, some of them are sitting on funding they’d like to give out to researchers who want to try.

From my point of view that means that they’ve given up on humanity and have all but ceded the future to computers. As you may know, I take a different view of these matters. I believe that, over the long haul, humankind has evolved ever more powerful systems of thought, and that we are currently in the process of doing so again. This is not the place to offer even a short synopsis of how this has happened, beyond noting that it involves reflective abstraction (as discussed by Jean Piaget) and the recursive elaboration of new systems of thought over old, so that the processes of old systems become objects operated on by the newer systems.[2] Computers and the idea(s) of computation are central to the current evolutionary ramp.
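For readers who think in code, here is a rough analogy – mine, and only an analogy, not a claim about how minds work: in a programming language with first-class functions, a procedure that one layer of code simply runs can become an object that a newer layer inspects and transforms. A minimal sketch in Python:

```python
# A crude programming analogy for reflective abstraction:
# a process at one level becomes an object at the next.

def add(a, b):
    # An "old system": a procedure we simply run.
    return a + b

def traced(proc):
    # A newer layer treats the old procedure as an object,
    # wrapping it so the new layer can watch it at work.
    def wrapper(*args):
        result = proc(*args)
        print(f"{proc.__name__}{args} -> {result}")
        return result
    return wrapper

add = traced(add)   # the old process is now material for the new layer
add(2, 3)           # prints: add(2, 3) -> 5
```

The point of the analogy is only that the same move recurs: what runs as a process at one level becomes an object of manipulation at the next.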

Computers didn’t come out of nowhere. They don’t grow on trees. We invented them. We had to conceive, design, construct, and operate them. Time and again. Computers allow us to explore ideas in ways that would be impossible without them. Even the idea of computation, without being linked to any specific computational activity, has proven enormously fruitful.

I doubt that these AI thinkers would find anything surprising or even interesting in that paragraph; they take it for granted. They shouldn’t, not if they care about the future.

The process of developing new computational regimes requires us to develop new ideas. We have done and are doing that. If we are to develop artificial systems with (near) human-level conceptual capabilities, will that not force us, and give us the means, to think more deeply about ourselves? Other fields are developing new ideas as well, perhaps even fundamentally new ones. Those ideas in our heads, in our collective culture, are where progress lies. They are the matrix in which computation, the ideas and the technologies alike, exists.

Why fear your own creations?

The men and women who created atomic weaponry feared their own creations. They had good reason to. The science was there; they knew that, once the engineering had been done, the bombs would be horribly powerful. They were proven correct in Japan in 1945.

The situation with computing is quite different. We have yet to create human-level general AI, and yet the very thinkers who are trying to bring it about are already afraid of what will happen if they succeed. They fear the shadows they cast ahead of themselves as they walk.

Why?

Notes

[1] My recent post, What is AI alignment about?, contains a small snippet from near the end of those conversations.

[2] This blog post isn’t the place to set forth these ideas, which the late David Hays and I have developed over the course of years. I’ve blogged extensively about these ideas. See, in particular, posts with these labels:

Note that not all of those posts will be specifically about the ideas that Hays and I have developed. Many are about ideas that I feel resonate with our ideas.

For a more systematic guide to those ideas, see Mind-Culture Coevolution: Major Transitions in the Development of Human Culture and Society, New Savanna, July 4, 2020, https://new-savanna.blogspot.com/2014/08/mind-culture-coevolution-major.html.

Here is a downloadable PDF, https://www.academia.edu/37815917/Mind-Culture_Coevolution_Major_Transitions_in_the_Development_of_Human_Culture_and_Society.

If you want to go straight to the Singularity, see my post, Redefining the Coming Singularity – It’s not what you think, New Savanna, May 16, 2017, https://new-savanna.blogspot.com/2014/09/redefining-coming-singularity-its-not.html.

That post is also available as a downloadable PDF, with some additional material, https://www.academia.edu/8847096/Redefining_the_Coming_Singularity_It_s_not_what_you_think.
