Scott Aaronson has just posted "The Problem of Human Specialness in the Age of AI."
For the past year and a half, I’ve been moonlighting at OpenAI, thinking about what theoretical computer science can do for AI safety. [...] In addition to “how do we stop AGI from going disastrously wrong?,” I find myself asking “what if it goes right? What if it just continues helping us with various mental tasks, but improves to where it can do just about any task as well as we can do it, or better? Is there anything special about humans in the resulting world? What are we still for?”
Here's a comment I posted over there:
@Matteo Villa, #31:
Should we not just let go of human specialness, just like humanity had to do when discovering that we are not at the centre of the universe and not even at the centre of our solar system?
I've been wondering the same thing. It's not as though the universe was made for us or is somehow ours to do with as we see fit. It just is.
From Benzon and Hays, The Evolution of Cognition, 1990:
A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add “because . . . “ to the assertion “I’m important.” But the evolution of technology will eventually invalidate any claim that follows “because.” Sooner or later we will create a technology capable of doing what, heretofore, only we could.
Something I've just begun to think about: What role can these emerging AIs play in helping us to synthesize what we know? Ever since I entered college in the Jurassic era I've been hearing laments about how intellectual work is becoming more and more specialized. I've seen and see the specialization myself. How do we put it all together? That's a real and pressing problem. We need help.
I suppose one could say: "Well, when a superintelligent AI emerges it'll put it all together." That doesn't help me all that much, in part because I don't know how to think about superintelligent AI in any way I find interesting. No way to get any purchase on it. That discussion – and I suppose the OP (alas) fits right in – just seems to me rather like a rat chasing its own tail. A lot of sound and fury signifying, you know...
But trying to synthesize knowledge, trying to get a broader view – that's something I can think about, in part because I've spent a lot of time doing it, and we need help. Will GPT-5 be able to help with the job? GPT-6?
BTW, before the Copernican Revolution we weren't special (by "we" I mean Europeans and their descendants; I don't know offhand how the rest of the world thought about these matters). Earth was at the "bottom" of the cosmos. Of course that was a very different cosmos from the one we're imagining today. That was a cosmos ordered by God.
Maybe the evolutionary psychologists have something to say about this need to think of ourselves as the masters of the universe.
ADDENDUM: Perhaps some of my thoughts about the (coming) Fourth Arena are relevant:
- Stakes in the Sand: Prediction, Cultural Ranks, and the Fourth Arena [+LLM Bonus]
- We’ve stepped over the threshold into the Fourth Arena, but don’t recognize it
- Welcome to the Fourth Arena – The World is Gifted
- The Fourth Arena 2: New beings in time
- The Fourth Arena: What’s Up in the world these days? We’re moving to a new, a new what?
- Living with Abundance in a Pluralist Cosmos: Some Metaphysical Sketches – That post has a link to my working paper of the same title, which is the "founding document," if you will, of my thinking about this. It also lists 8 propositions in my proposed metaphysics and contains links to and abstracts of some relevant working papers.
- The Abundance Principle and The Fourth Arena