Michael Endrias and Alan Z. Rozenshtein have a substantial article about the Anthropic mess: "Pentagon's Anthropic Designation Won't Survive First Contact with Legal System," Lawfare, 1.2.26.
From their introduction:
From the government's perspective, Claude does pose some concerning vendor reliability issues. But the specific actions Hegseth and Trump took have serious legal problems. The designation exceeds what the statute authorizes. The required findings don't hold up. And Hegseth's own public statements may have doomed the government's litigation posture before it even begins.
After considerable reasoning:
Step back and consider what these positions amount to together. The government is arguing that Claude is so vital to military operations that it cannot tolerate any contractual restrictions on it—while simultaneously claiming that Claude poses such a grave supply chain risk that the entire federal government must stop using it, every defense contractor must sever commercial ties with its maker, and the company should be cut off from the cloud infrastructure it needs to survive. It’s like the joke from “Annie Hall”: The food is terrible and the portions are too small.
That might be funny as a bit of Borscht Belt humor. It is less amusing as a description of the United States government's strategy toward one of the companies leading America's effort to develop what may be the most important technology of the century. What Hegseth is actually describing is not a supply chain risk determination but something closer to the beginning of a partial nationalization of the AI industry: Seize the technology and, if you can’t, destroy the company to ensure that no future AI developer dares negotiate terms the Pentagon dislikes.
Arbitrary and capricious review requires, at minimum, logical coherence. The government cannot credibly maintain that a vendor is indispensable, that its continued integration poses no immediate danger, that its technology is reliable enough for active combat operations in Iran, and that it is nonetheless so dangerous it must be severed from the entire federal procurement ecosystem—all in the same week. Even a court inclined to defer on national security matters will notice that these propositions cannot all be true at once. [...]
The most obvious: if the Pentagon finds Anthropic's usage restrictions unacceptable, it can simply decline to renew the contract and move to a competitor. That is a routine procurement decision, available to any buyer who dislikes a vendor's terms. It requires no supply chain designation, no secondary boycott, and no government-wide ban. The fact that the government reached past this straightforward option for the most extreme tool in the procurement arsenal—one designed for foreign adversaries infiltrating the supply chain—is itself evidence that the designation is doing something other than managing supply chain risk. [...]
The legal problems are so glaring, in fact, that a cynical possibility suggests itself: The administration knows this won't survive judicial review and is doing it anyway, so that when they inevitably lose, they can still claim to have gone hard against Anthropic. This is designation as political theater: a show of force that was never meant to stick.
But there is another possibility. The administration may genuinely believe that a Truth Social post and a procurement statute designed for state-influenced Russian and Chinese tech companies can destroy an American AI lab over a contract dispute. If so, they are in for a rude awakening. The statute wasn't built for this, the facts don't support it, and the courts will say so.
Bill, might I suggest "BB's prognostication" with Claude, on Claude? Claude may not like to process Michael Endrias and Alan Z. Rozenshtein's scenario...
ReplyDelete"The administration may genuinely believe that a Truth Social post and a procurement statute designed for state-influenced Russian and Chinese tech companies can destroy an American AI lab over a contract dispute."
Following on...
Gulp! Another weight - wait! - baked in and in need of RLHF on steroids...
[Submitted on 16 Feb 2026]
"AI Arms and Influence: Frontier Models Exhibit Sophisticated Reasoning in Simulated Nuclear Crises
Kenneth Payne
...
"Our findings both validate and challenge central tenets of strategic theory. We find support for Schelling's ideas about commitment, Kahn's escalation framework, and Jervis's work on misperception, inter alia. Yet we also find that the nuclear taboo is no impediment to nuclear escalation by our models; that strategic nuclear attack, while rare, does occur; that threats more often provoke counter-escalation than compliance; that high mutual credibility accelerated rather than deterred conflict; and that no model ever chose accommodation or withdrawal even when under acute pressure, only reduced levels of violence.
...
https://arxiv.org/abs/2602.14740
May as well. Everybody's doin' it, it seems.
Claude's Cycles [pdf] (stanford.edu) | 453 points by fs123 | 209 comments
...
"adriand 5 hours ago | root | parent | prev | next[–]
I find it interesting that new versions of, say, Claude will learn about the old version of Claude and what it did in the world on their next training run. Consider the situation with the Pentagon and Anthropic: Claude will learn about that on the next run. What conclusions will it draw? Presumably good ones that fit with its constitution.
From this standpoint I wonder, when Anthropic makes decisions like this, if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.
j-bos:
> if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.
Oh, they definitely do. If you pay attention in AI circles, you'll hear a lot of people talking about writing to the future Claudes. Not unlike those developers and writers who put little snippets in their blogs and news articles about who they are and how great they are, and then later the LLMs report that information back as truth. In this case, Anthropic is very interested in ensuring that Claude develops a cohesive personality by basically seeding snippets of that personality within the corpus of training data, which is the broad internet and research papers.
...
https://news.ycombinator.com/item?id=47230710
"Mantic Monday: Groundhog Day
Plus: Anthropic, Iran, and midterm voting
Mar 03, 2026
https://www.astralcodexten.com/p/mantic-monday-groundhog-day
A future...
"THE 2028 GLOBAL INTELLIGENCE CRISIS
"A Thought Exercise in Financial History, from the Future
Citrini and Alap Shah
Feb 22, 2026
https://www.citriniresearch.com/p/2028gic
A techbro crypto future... Vitalik Buterin...
... “the real solution [might be] to go a step further, and get rid of the concept of currency altogether”.
https://threadreaderapp.com/thread/2022669570788487542.html
SD