Saturday, October 8, 2022

Hedging THE EVENT: Yes, the super-rich are different from you and me; they think they can buy their way out of mortality

I'm bouncing this to the top because it links up with my current interest in AI Doom. Rushkoff has since turned this article into a book, Survival of the Richest, which Cory Doctorow has reviewed.

* * * * *
Douglas Rushkoff was recently paid "about half my annual professor’s salary" to talk (his account was republished at The Guardian) with "five super-wealthy guys — yes, all men — from the upper echelon of the hedge fund world."
Which region will be less impacted by the coming climate crisis: New Zealand or Alaska? Is Google really building Ray Kurzweil a home for his brain, and will his consciousness live through the transition, or will it die and be reborn as a whole new one? Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system and asked, “How do I maintain authority over my security force after the event?”

The Event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down.

This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? The billionaires considered using special combination locks on the food supply that only they knew. Or making guards wear disciplinary collars of some kind in return for their survival. Or maybe building robots to serve as guards and workers — if that technology could be developed in time.

That’s when it hit me: At least as far as these gentlemen were concerned, this was a talk about the future of technology.
This, of course, is nuts.
There’s nothing wrong with madly optimistic appraisals of how technology might benefit human society. But the current drive for a post-human utopia is something else. It’s less a vision for the wholesale migration of humanity to a new state of being than a quest to transcend all that is human: the body, interdependence, compassion, vulnerability, and complexity. As technology philosophers have been pointing out for years now, the transhumanist vision too easily reduces all of reality to data, concluding that “humans are nothing but information-processing objects.”
How'd we come to this?
Of course, it wasn’t always this way. There was a brief moment, in the early 1990s, when the digital future felt open-ended and up for our invention. Technology was becoming a playground for the counterculture, who saw in it the opportunity to create a more inclusive, distributed, and pro-human future. But established business interests only saw new potentials for the same old extraction, and too many technologists were seduced by unicorn IPOs. Digital futures became understood more like stock futures or cotton futures — something to predict and make bets on. So nearly every speech, article, study, documentary, or white paper was seen as relevant only insofar as it pointed to a ticker symbol. The future became less a thing we create through our present-day choices or hopes for humankind than a predestined scenario we bet on with our venture capital but arrive at passively.

This freed everyone from the moral implications of their activities. Technology development became less a story of collective flourishing than personal survival. [...] So instead of considering the practical ethics of impoverishing and exploiting the many in the name of the few, most academics, journalists, and science-fiction writers instead considered much more abstract and fanciful conundrums: Is it fair for a stock trader to use smart drugs? Should children get implants for foreign languages? Do we want autonomous vehicles to prioritize the lives of pedestrians over those of their passengers? Should the first Mars colonies be run as democracies? Does changing my DNA undermine my identity? Should robots have rights?

Asking these sorts of questions, while philosophically entertaining, is a poor substitute for wrestling with the real moral quandaries associated with unbridled technological development in the name of corporate capitalism.
* * * * *

Here's a fictional addendum, from Kim Stanley Robinson's New York 2140.

By that time the sea had risen 50 feet above where it had been in the 20th century, changing the city in substantial ways. Extreme income inequality still existed. And now a hurricane had hit the city with a 20-foot surge. Major disaster. An angry mob is headed toward some super-luxury residential towers north of Harlem. The private security force starts firing over their heads. Inspector Gen is with a contingent of police between the mob and the towers. Gen gets the private security to go inside and starts talking with their boss, now under arrest, whom she’d met a week earlier on the water. He’s thinking (p. 506):
And it also looked like he was considering his options, not as this tower’s security head, but as an individual who could get sued or go to jail. Who had perhaps made mistakes, after being ordered to do an illegal and impossible thing, by bosses who did not care about him. Best options for himself, he was now considering.
Hmmm.
