The problem with most of the thinking about the future effects of AGI, superintelligence, and so forth, is that it's looking at the future through the eyes of the past – well, it's the present now.
But when the future actually unfolds, our minds will change accordingly. As a result, these various current speculations, whether utopian, dystopian, or some-other-topian, will seem quaint and old-fashioned. It's like those articles that show up every now and then comparing how the future looked to Victorians with what actually happened. It turns out they were wrong. Way wrong.
But then how could they really know? Same with comparing "The Jetsons" with the world now, or Kubrick's "2001." We're living in the future they imagined. And – surprise, surprise! – they got it wrong. Just like we're inevitably getting it wrong with our speculations about superintelligence and AI Doom. The problem isn't that it's too science fiction. It's the WRONG science fiction.
We need to learn how to think differently. We need to evolve a new conceptual system, with a new ontology. One that's in concert with artificial intelligence and machine learning rather than treating them as some exotic Other. We keep comparing it to us, and us to it, and so the future seems really strange and scary. Paradoxically, if we accept these things as something new and deeply different, they won't seem so strange and threatening.