Casey Mock, Pete Hegseth Got His Happy Meal, Tomorrow's Mess, March 2, 2026.
Concerning the current dust-up between the Pentagon and Anthropic:
Something like this was always going to happen. Not because of Hegseth specifically, not because of this administration, but because of the narrative the AI safety community — the world that produced Anthropic, and whose language Anthropic still speaks even while disavowing its label — has been pushing for at least the last three years.
[Oh, much longer than that, much longer. – BB]
Imagine a six-year-old whose entire media diet is a steady stream of McDonald’s commercials, a Happy Meal ad at every break, each focused on whatever toy currently comes with the McNuggets. Now put that child in a car that drives past a McDonald’s. What happens?
The Rationalist and Effective Altruist communities — the intellectual cultures that gave us Anthropic, that influence many of its employees, and that still shape how Dario Amodei talks about his company and his technology — have spent the better part of a decade insisting, with increasing urgency, that artificial intelligence is the most consequential technology in human history. Maybe it’s civilization-ending; maybe it’s civilization-saving. Either way, it’s the hinge on which everything henceforth turns.
Policymakers and the media have largely accepted that premise, surrendering the argument for treating AI like a normal technology subject to normal governance. Policies pushed by Effective Altruist groups, like 2024’s SB1047 in California, deprioritize harms happening today in favor of theoretical existential ones in the future, despite the fact that today’s harms can be existential for the people experiencing them. These groups incessantly made the case that whoever controls this technology controls the future, and so the hypothetical future needs to be prioritized now. In a Washington now run by people who tend toward impulsiveness and contempt for institutional constraint, it’s easy to see where this was headed. Hegseth saw the ads for the toy, and now he wanted his Happy Meal. [...]
Yet the prognostications of the doomer community have been, nearly without exception, wrong — not in small ways, but in the foundational sense that the imagined trajectory keeps failing to materialize. [...]
Thus, this news reveals the rationalists’ under-examined blind spot: they cannot model the messy Pete Hegseths of the world, even as their claims whet Hegseth’s appetite. The rationalist view of the world assumes, at some level, that the relevant actors are optimizing over well-understood, predictable variables, with a clear understanding of what best serves their self-interest. What it cannot account for is bad faith, impulsiveness, ideological motivation untethered from evidence, random instances of force majeure, and personal whims and petty rivalries. And so while the doomer community spent years warning about uncontrollable AI systems that do things their creators didn’t intend, they apparently did not consider what would happen when the humans currently running the United States government got access to technology they’d been told was the hinge of history.
I've published an article about doomers in 3 Quarks Daily: On the Cult of AI Doom, September 12, 2026.