Yesterday, July 10, Anonymous showed up with a link to a rather quixotic video extolling the marvels of a future in which AI helps humanity to a better way of life:
Kim Solez frames his video as a riposte to Eliezer Yudkowsky’s recent recitation of AI Doom, thus:
> Recently there has been concern expressed about the safety of machine learning/artificial intelligence in the long run by Eliezer Yudkowsky in AGI Ruin: A List of Lethalities, widely quoted elsewhere. The first of the 40+ bolded sections of the piece is the most significant because of the way it is equally true of an AI utopia: “AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than humans would be able to learn from less evidence than humans require.” Reading the beginning of that paragraph, it is true that AI has taught humans much more beautiful moves in the game of Go than we would ever have been able to design ourselves. That means that AI can teach us ways of cooperating with each other that are superior to humans’ own innate ability to cooperate successfully. Therefore, increasing use of machine learning is not the beginning of the slippery slope toward humanity’s demise; it could be exactly the opposite, a transition toward a world better than anything we ever imagined. We can determine which of these two contrasting futures happens, and AI can assist with that! An AI may also be more foresighted and have a longer temporal horizon, both of which promote cooperation.
This got me to thinking. AI has been predicting the future since its beginning. That future is always one in which machines are at least as smart as, if not smarter than, humans. Which is to say, AI has always had folks in the snake oil business. It seems to me that, if you’re selling snake oil, you should be touting its virtues, not listing the many ways in which snake oil can kill you.
And yet that seems to be what the AI snake oil salesmen have decided upon. I’m sure that, here and there, you’ll find people extolling the wonders of a world in which AI is in the ascendant, as Solez is doing. But the gloom-and-doomers are louder, seem more numerous, are better organized, and are certainly better funded. Why?
Sure, there’s plenty of hype in the AI business. But the purpose of that hype is to drum up business in the present and the near-term future. That hype is not a form of AI utopianism, nor does it lead to one.
Where is the Walt Disney of AI? Why is there no AI utopianism? AI speculation about the future seems to be caught in the same dystopian maelstrom that characterizes current futurism in general.
On gloom and doom, see two posts from 2020 featuring Ross Douthat:
- Has "civilization" entered a phase of decadence?
- Ross Douthat on decadence || the Space Age || I've been wrestling with a longish post, working title: From Progress Studies to Progress
On Walt Disney: