Thursday, September 8, 2022

Prediction as a way of foreclosing the future – And no foreclosure is more absolute than apocalypse [AGI]

I’ve just come across a recent article that sheds light on one of the signal features of contemporary AI culture, in particular, of the subculture devoted to the study and prevention of existential risk from advanced artificial intelligence. The article:

Sun-Ha Hong, “Predictions Without Futures,” History and Theory 61, no. 3 (July 2022): 1–20. https://onlinelibrary.wiley.com/doi/epdf/10.1111/hith.12269

Abstract: Modernity held sacred the aspirational formula of the open future: a promise of human determination that doubles as an injunction to control. Today, the banner of this plannable future is borne by technology. Allegedly impersonal, neutral, and exempt from disillusionment with ideology, belief in technological change saturates the present horizon of historical futures. Yet I argue that this is exactly how today’s technofutures enact a hegemony of closure and sameness. In particular, the growing emphasis on prediction as AI’s skeleton key to all social problems constitutes what religious studies calls cosmograms: universalizing models that govern how facts and values relate to each other, providing a common and normative point of reference. In a predictive paradigm, social problems are made conceivable only as objects of calculative control—control that can never be fulfilled but that persists as an eternally deferred and recycled horizon. I show how this technofuture is maintained not so much by producing literally accurate predictions of future events but through ritualized demonstrations of predictive time.

From page 4:

Notably, today’s society feverishly anticipates an AI “breakthrough,” a moment when the innate force of technological progress transforms society irreversibly. Its proponents insist that the singularity is, as per the name, the only possible future (despite its repeated promise and deferral since the 1960s—that is, for almost the entire history of AI as a research problem). Such pronouncements generate legitimacy through a sense of inevitability that the early liberals sought in “laws of nature” and that the ancien régime sought in the divine. AI as a historical future promises “a disconnection” from past and present, and it cites that departure as the source of the possibility that even the most intractable political problems can be solved not by carefully unpacking them but by eliminating all of their priors. Thus, virtual reality solves the problems with reality merely by being virtual, cryptocurrency solves every known problem with currency by not being currency, and transhumanism solves the problem of people by transcending humanity. Meanwhile, the present and its teething problems are somewhat diluted of reality: there is less need to worry so much about concrete, existing patterns of inequality or inefficiency, the idea goes, since technological breakthroughs will soon render them irrelevant. Such technofutures saturate the space of the possible with the absence of a coherent vision for society.

That paragraph can be taken as a trenchant reading of long-termism, which is obsessed with prediction and, by focusing attention on the needs of people in the future, tends to empty the present of all substance.

Nor does it matter that the predicted technofuture is, at best, highly problematic (p. 7):

In short, the more unfulfilled these technofutures go, the more pervasive and entrenched they become. When Elon Musk claims that his Neuralink AI can eliminate the need for verbal communication in five to ten years [...] the statement should not be taken as a meaningful claim about concrete future outcomes. Rather, it is a dutifully traditional performance that, knowingly or not, reenacts the participatory rituals of (quasi) belief and attachment that have been central to the very history of artificial intelligence. After all, Marvin Minsky, AI’s original marketer, had loudly proclaimed the arrival of truly intelligent machines by the 1970s. The significance of these predictions does not depend on their accurate fulfillment, because their function is not to foretell future events but to borrow legitimacy and plausibility from the future in order to license anticipatory actions in the present.

Here, “the future . . . functions as an ‘epistemic black market.’” The conceit of the open future furnishes a space of relative looseness in what kinds of claims are considered plausible, a space where unproven and speculative statements can be couched in the language of simulations, innovation, and revolutionary duty. What is being traded here are not concrete achievements or end states but the performative power of the promise itself. In this context, claims do not live or die by specifically prophesied outcomes; rather, they involve a rotating array of promissory themes that create space for optimism and investment. Cars that can really drive themselves without alarming swerves, facial recognition systems that can really determine one’s sexuality, and so on—the final justifications for such totally predictive systems are always placed in the “near” future, partly shielded from conventional tests of viability or even morality.

The prediction of an AI apocalypse is not, however, an optimistic one. But it does affirm the power and potency of the technology. And the notion of the future as an ‘epistemic black market’ underscores the idea that such predictive activity is a form of theater: epistemic theater.

Cosmograms:

It seems fitting, then, to think through technofutures via a concept taken from religious studies by a historian of science. John Tresch describes cosmograms as unified pictures of the world—“central points of reference that enable people to bring themselves into agreement.” This is not to say that a cosmogram boasts an explicit theory of everything, which might then be proven or disproven like a formula. Nor do such “unified pictures” enact a total and dogmatic belief upon their subjects. Technofutures as cosmograms do not insist on a particular technology or technological outcome per se. Rather, predicting the emergence of intelligent robots by a certain year is a way to build and replenish a familiar array of beliefs about human transcendence and technological solutionism while accommodating a wide variety of definitions of intelligence, robots, and progress.

Here I would emphasize the phrase “enable people to bring themselves into agreement.” AI is regarded as a transformative technology. Reaching agreement on just how AI should best transform society would be difficult. But if one believes that AI is likely to pose an existential threat, that belief presents a much narrower target on which agreement must be reached. All can agree that they do not want humankind to be extinguished.

There’s more in the article.
