Saturday, November 19, 2022

SBF and FTX – "the horror! the horror!" [some links, the future as epistemological 'ground-zero']

1.) I don't have a lot to say about this debacle in the world of billionaires and cryptocurrency. If you're curious, Zvi Mowshowitz has a long blog post, Sadly, FTX, in which he discusses the WHOLE THING, from the BEGINNING, and which links to many sources. Issues addressed in the post:
  1. What just happened?
  2. What happened in the lead-up to this happening?
  3. Why did all of this happen?
  4. What is going to happen to those involved going forward?
  5. What is going to happen to crypto in general?
  6. Why didn’t we see this coming, or those who did see it speak louder?
  7. What does this mean for FTX’s charitable efforts and those getting funding?
  8. What does this mean for Effective Altruism? Who knew what when?
  9. What if anything does this say about utilitarianism?
  10. How are we casting and framing the movie Michael Lewis is selling, in which he was previously (it seems) planning on portraying Sam Bankman-Fried as the Luke Skywalker to CZ’s Darth Vader? Presumably that will change a bit.

2.) A plain-English account: Jamie Bartlett, Sam Bankman-Fried's crypto-gold turned to dust, Nov. 18, 2022.

3.) Ross Douthat has an interesting 'moderating' [think of moderating a nuclear reaction] column, The Case for a Less-Effective Altruism (NYT 11.18.22).

4.) Not so long ago I came across an article that sheds light on one of the signal features of this debacle, an intense focus on predicting the future. This is particularly relevant to Effective Altruism's interest in so-called long-termism. The article:

Sun-Ha Hong, Predictions Without Futures, History and Theory, Vol. 61, No. 3 (July 2022), 1–20. https://onlinelibrary.wiley.com/doi/epdf/10.1111/hith.12269

From page 4:

Notably, today’s society feverishly anticipates an AI “breakthrough,” a moment when the innate force of technological progress transforms society irreversibly. Its proponents insist that the singularity is, as per the name, the only possible future (despite its repeated promise and deferral since the 1960s—that is, for almost the entire history of AI as a research problem). Such pronouncements generate legitimacy through a sense of inevitability that the early liberals sought in “laws of nature” and that the ancien régime sought in the divine. AI as a historical future promises “a disconnection” from past and present, and it cites that departure as the source of the possibility that even the most intractable political problems can be solved not by carefully unpacking them but by eliminating all of their priors. Thus, virtual reality solves the problems with reality merely by being virtual, cryptocurrency solves every known problem with currency by not being currency, and transhumanism solves the problem of people by transcending humanity. Meanwhile, the present and its teething problems are somewhat diluted of reality: there is less need to worry so much about concrete, existing patterns of inequality or inefficiency, the idea goes, since technological breakthroughs will soon render them irrelevant. Such technofutures saturate the space of the possible with the absence of a coherent vision for society.

That paragraph can be taken as a trenchant reading of long-termism, which is obsessed with prediction and, by focusing attention on the needs of people in the future, tends to empty the present of all substance.

Nor does it matter that the predicted technofuture is, at best, highly problematic (p. 7):

In short, the more unfulfilled these technofutures go, the more pervasive and entrenched they become. When Elon Musk claims that his Neuralink AI can eliminate the need for verbal communication in five to ten years [...] the statement should not be taken as a meaningful claim about concrete future outcomes. Rather, it is a dutifully traditional performance that, knowingly or not, reenacts the participatory rituals of (quasi) belief and attachment that have been central to the very history of artificial intelligence. After all, Marvin Minsky, AI’s original marketer, had loudly proclaimed the arrival of truly intelligent machines by the 1970s. The significance of these predictions does not depend on their accurate fulfillment, because their function is not to foretell future events but to borrow legitimacy and plausibility from the future in order to license anticipatory actions in the present.

Here, “the future . . . functions as an ‘epistemic black market.’” The conceit of the open future furnishes a space of relative looseness in what kinds of claims are considered plausible, a space where unproven and speculative statements can be couched in the language of simulations, innovation, and revolutionary duty.
