Robert Wright, Iran and the immorality of OpenAI, Anthropic, and Google, Nonzero Newsletter, Mar. 6, 2026.
I'm not going to try to summarize the first three-quarters of this article, which is about how the irrational projective tendencies (my formulation [1], but not quite Wright's) of US foreign policy lead the country into senseless war after senseless war. Here's where he ends up:
All of this helps explain why the US has devoted so much time and energy to enterprises that kill or immiserate millions and millions of people—not just the military interventions we stage, but the profuse supplying of weapons (for Israel’s war on Gaza, for example), and the economic strangulation of nations like Cuba and Venezuela and Iran. All of these endeavors had the support of intensely motivated special interest groups. By and large, the deployment of US troops and arms and sanctions—our big, blunt, coercive instruments—has nothing to do with serving America’s actual interests, much less the interests of the world. And they repeatedly—as now in Iran—cover us in moral disgrace.
This is one reason I harp, however ineffectually, on the importance of respecting international law. The machinery for making US foreign policy is so out of control—so wildly misaligned with American interests, the global interest, and morality—that it urgently needs to be constrained by some clear and coherent set of rules. And so long as it’s not constrained by such a thing, we shouldn’t kid ourselves: The US military (and I say this as an Army brat who grew up with a genuine affection for the military and genuine pride in my father’s service during World War II and after) is now mainly an instrument of mayhem and is increasingly a source of global instability.
All of which brings us back to Anthropic, whose Claude large language model is integrated into Maven, software that’s operated by Palantir and used by the Pentagon to identify targets. The Washington Post reports that “as planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance.” Given that the Iranian elementary school was hit on the first day of the war, it seems fairly likely that Claude played a role in the selection of that target and thus in the death of more than 100 young girls—many times more kids than were killed in the worst American school shooting.
This might seem to vindicate Dario Amodei’s refusal to give the Pentagon carte blanche to use Claude in “fully autonomous” weapons systems. But before we give him the Nobel Peace Prize, note two things: (1) This kind of contractual carveout almost certainly wouldn’t have made a difference in this case even if honored. No doubt there was a “human in the kill chain”—someone who, at a minimum, scanned the list of targets generated by Maven and said, “Yep, looks like a list of targets. Let’s do it!” (2) Even if Amodei’s scruples had somehow magically prevented the bombing of that school, Claude would still be an accomplice to mass murder. More than 1,000 Iranian civilians have already been killed in this war—a war that flagrantly violates international law and continues to lack a coherently articulated rationale. Anyone who makes money by aiding endeavors like this has a lot to answer for.
Last week Amodei, in explaining Anthropic’s position on Pentagon contracts, emphasized the company’s overall commitment to national security. He wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” If Amodei genuinely believes that the US military is devoted to addressing actual “existential” threats to the US, he’s too naive to be entrusted with anything as important as running a big AI company.
Obviously, this indictment applies about equally to OpenAI’s Sam Altman (who gladly swooped in and snatched the Pentagon largesse that Amodei will now be denied) and to Google’s Sundar Pichai and Demis Hassabis and to xAI’s Elon Musk. All the big AI companies are putting their tools at the disposal of the Pentagon to use as it sees fit.[2]
Notes
[1] This paragraph, from my post, TO WAR! Part 1: War and America's National Psyche, will give you some idea of my thinking about the projective dynamic of America's urges to war:
As some of you may know, my thinking on these matters has been strongly influenced by an essay Talcott Parsons published in 1947 on “Certain Primary Sources of Aggression in the Social Structure of the Western World”. Parsons argued that Western child-rearing practices generate a great deal of insecurity and anxiety at the core of personality structure. This creates an adult who has a great deal of trouble dealing with aggression and is prone to scapegoating. Inevitably, there are many aggressive impulses that cannot be acted on. They must be repressed. Ethnic scapegoating is one way to relieve the pressure of this repressed aggression. That, Parsons argued, is why the Western world is flush with nationalistic and ethnic antipathy. I suspect, in fact, that this dynamic is inherent in nationalism as a psycho-cultural phenomenon.
[2] Between the Trump administration in Washington and the Big Tech Billionaires in Silicon Valley, this country is currently dominated by a confluence of crazies, perhaps the largest in American history.