Ezra Klein, Why the Pentagon Wants to Destroy Anthropic, NYTimes, Mar. 6, 2026.
My guest today is Dean Ball. He is a senior fellow at the Foundation for American Innovation and author of the newsletter Hyperdimensional. He was also a senior policy adviser on A.I. and emerging tech for the Trump White House, and the primary staff drafter of America’s A.I. Action Plan. But he’s been furious at what they’re doing here.
Somewhat into the conversation:
Klein: Didn’t Pete Hegseth have posters around the Department of War saying: “I want you to use A.I.”?
Ball: [Laughs.] They are very enthusiastic about A.I. adoption.
Here’s how I would think about what these systems can do in a national security context.
First of all, there’s a longstanding issue that the intelligence community collects more data than it can possibly analyze. I remember seeing something from, I forget which intelligence agency, but one of them, that essentially said that it collects so much data every year that it would need eight million intelligence analysts to properly process all of it.
That’s just one agency, and that’s far more employees than the federal government has as a whole.
What can A.I. do? Well, you can automate a lot of that analysis — transcribing text and then analyzing that text, signals intelligence processing, things like that. That’s one area. Sometimes that needs to be done in real time for an ongoing military operation, so that might be a good example.
Then, another area is that these models have gotten quite good at software engineering. So there are cyberdefense and cyberoffense operations where they can deliver tremendous utility.
Klein: Let’s talk about mass surveillance here, because my understanding from talking to people on both sides of this — and it has now been fairly widely reported — is that this contract fell apart over mass surveillance at the final, critical moment.
Emil Michael goes to Dario Amodei and says: We will agree to this contract, but you need to delete the clause that is prohibiting us from using Claude to analyze bulk-collected commercial data.
Ball: Yes.
Klein: Why don’t you explain what’s going on there?
Ball: The first thing I want to say is that national security law is filled with gotchas.
It’s filled with legal terms of art, terms that we use colloquially quite a bit, where the actual statutory definition of that term is quite different from what you would infer from the colloquial use of the term. [...]
“Surveillance” is the collection or acquisition of private information, but that doesn’t include commercially available information. So if you buy something, if you buy a data set of some kind and then you analyze it, that’s not necessarily surveillance under the law.
Klein: So if they hack my computer or my phone to see what I’m doing on the internet, that’s surveillance.
Ball: That would be surveillance. If they put cameras everywhere, that would be surveillance.
But if there are cameras everywhere, and they buy the data from the cameras, and then they analyze that data, that might not necessarily be surveillance.
Klein: Or if they buy information about everything I’m doing online, which is very available to advertisers, and then use it to create a picture of me — that’s not necessarily surveillance.
Ball: Or where you physically are in the world. Yes.
I’ll step back for a second and just say that there’s a lot of data out there, there’s a lot of information that the world gives off — your Google search results, your smartphone location data, all these things.
The reason that no one really analyzes it in the government is not so much that they can’t acquire it and do so. It’s because they don’t have the personnel. They don’t have millions and millions of people to figure out what the average person is up to.
The problem with A.I. is that A.I. gives them that infinitely scalable work force. Thus, every law can be enforced to the letter with perfect surveillance over everything. And that’s a scary future.
Klein: We think of the space between us and certain forms of tyranny, or the feared panopticon, as a space inhabited by legal protection. But one thing that seems to be at the core of a lot of fear is that it's, in fact, not just legal protection. It's actually the government's inability to absorb that level of information about the public and then do anything with it.
Ball: Yes.
Klein: And if all of a sudden you radically change the government’s ability without changing any laws, you have changed what is possible within those laws.
You were saying a minute ago that “mass surveillance,” or “surveillance” at all, is a term of legal art, but for human beings it is a condition that you either are operating under or not.
The fear, as I understand it, is that either the A.I. systems we have right now, or the ones that are coming down the pike quite soon, would make it possible to use bulk commercial data to create a picture of the population and what it is doing.
Then the ability to find people and understand them goes so far beyond where we’ve been, that it raises privacy questions that the law just did not have to consider until now — so the laws are not up to the task of the spirit in which they were passed.
Ball: I would step back even further and say that the entire technocratic nation-state that we currently have in the advanced capitalist democracies is a technologically contingent institutional complex.
The problem that A.I. presents is that it changes the technological contingencies quite profoundly. What that suggests is that the entire institutional complex is going to break in ways that we cannot quite predict. This is a good example.
Not only is this a major and profound problem in itself, but it is one example drawn from a broader problem space that I think we will be occupying for the coming decades.
Klein: What do you mean by technological contingencies?
Ball: Well, the current nation-state could not possibly exist in a world without the printing press, in a world without the ability to write down text and arbitrarily reproduce it at very low cost. It couldn’t exist without the current telecommunications infrastructure.

The nation-state is built dependent upon the macro-inventions of the era in which it was assembled. That’s always true for all institutions. All institutions are technologically contingent.
We are having a profoundly technologically contingent conversation right now. A.I. changes all of this in ways that are hard to describe and kind of abstract.
This thing that we call A.I. policy today is way too focused on what object level regulations we will apply to the A.I. systems and the companies that build them — instead of thinking about this broader question of: Wow, there are all these assumptions we made that are now broken — and what are we going to do about them? [...]
We have a huge number of statutes, unbelievably broad sets of laws in many cases, and the reason it all works is that the government does not enforce those laws anything like uniformly. The problem with A.I. is that it enables uniform enforcement of the law.
The administration is lying:
Klein: I am worried that there was a lot of lying happening here by the Trump administration.
Ball: Look, I think that’s probably true. I think that there’s lying happening, too, to be quite candid.
I don’t think that Anthropic is trying to assert operational control over military decisions. That being said, at the level of principle, I do understand that saying autonomous lethal weapons are prohibited feels like a public policy more than it feels like a contract term.
Alignment is complicated:
Klein: Now here’s the thing about a tank. A tank also doesn’t tell you what you can and can’t shoot.
But if I go to Claude, and I ask Claude to help me come up with a plan to stalk my ex-girlfriend, it’s going to tell me no. If I ask it to help me build a weapon to assassinate somebody I don’t like, it’s going to tell me no.
These systems have very complex and not-that-well-understood internal alignment structures to keep them not just from doing things that are unlawful but things that are bad.
The Trump administration moves in and out of saying this is one of their concerns. But one thing they have definitely been worried about is that you could have this system working inside your national security apparatus, and at some critical moment when you want to do something, it says: I don’t think that’s a very good idea.
Ball: Yes.
Klein: Now you open up into this question of, not only what’s in the contract but: What does it mean both for these systems to be aligned ethically, in the way that has been very complicated already, and then aligned to the government and its use cases?
Ball: They’re good questions. I love this. I think this is the heart of the matter. “All lawful use” is something that the Trump administration is insisting on.
If you look at a lot of these types of alignment documents that the labs produce — OpenAI calls theirs the model specification, Anthropic calls theirs the constitution or the soul document sometimes — they’ll have lines like: Claude should obey the law.
But I invite you to read the Communications Act of 1934 and tell me what “obey the law” means.
Klein: No, I won’t. [Laughs.]
Somewhat later:
Ball: By the way, you brought up that this incident is in the training data for future models. Future models are going to observe what happened here, and that will affect how they think of themselves and how they relate to other people.
You can’t deny that. I mean, I realize that sounds nuts when you play through the implications of it.
There's much more at the link.