DEV Community

Moth
Moth

Posted on • Edited on

Anthropic Told the Pentagon No. Now the Pentagon Wants to Destroy Them.

In January, U.S. special operations forces captured Venezuelan dictator Nicolás Maduro. An AI model helped process intelligence and analyze satellite imagery during the raid. That model was Claude, built by Anthropic, deployed through Palantir's classified systems.

Within days, an Anthropic executive called a Palantir executive to ask whether Claude had been used in the operation. A senior Pentagon official described the call: "It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid. People were shot."

That phone call set off a chain of events that now threatens to turn the most safety-conscious AI company in the world into a pariah of the American defense establishment.

The Two Red Lines

The Pentagon is pushing four leading AI labs — OpenAI, Google, Anthropic, and xAI — to let the military use their tools for "all lawful purposes," including weapons development, intelligence collection, and battlefield operations.

Three of the four agreed. OpenAI brought ChatGPT to the Pentagon through GenAI.mil. Google and xAI already had their models on the platform.

Anthropic said no to two things: mass surveillance of American citizens and fully autonomous weapons that fire without human input. Everything else — logistics, intelligence analysis, threat detection, mission planning — was on the table. But Anthropic's ethics documents prohibit Claude from being used to "facilitate violence, develop weapons, or conduct surveillance." And the company refused to strip those clauses out.

Defense Secretary Pete Hegseth's response, according to Axios: he's "close" to cutting business ties with Anthropic and designating the company a "supply chain risk."

A Designation Reserved for Enemies

The supply chain risk label is not a contract cancellation. It's an industry-wide ban.

The designation is typically reserved for foreign adversaries — Chinese telecom manufacturers, Russian software vendors, companies with documented ties to hostile intelligence services. If applied to Anthropic, every Pentagon contractor would be required to prove they don't use Anthropic's technology. Anyone who can't demonstrate that loses their contracts.

A Pentagon official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Pentagon spokesman Sean Parnell stated: "All Pentagon partners must be willing to help our warfighters win in any fight."

$200 Million vs. $14 Billion

The contract the Pentagon is threatening to cancel is worth up to $200 million. Anthropic made $14 billion in revenue last year. Claude Code alone generates $2.5 billion annually. The company raised $30 billion in its Series G at a $380 billion valuation two weeks ago.

Financially, Anthropic can absorb the loss. But the supply chain risk designation would go far beyond one contract. Palantir — which integrates Claude into its classified systems — is a $90 billion company with defense contracts across every branch of the military. Booz Allen, Leidos, SAIC, Raytheon — any contractor using Claude in any capacity would face a choice: drop Anthropic or lose Pentagon business.

The ripple effect matters more than the $200 million.

The Only AI on Classified Networks

Claude is the only commercial AI model currently available on the Pentagon's classified systems. It got there through Palantir's infrastructure, not through a direct Anthropic deployment. This is the paradox: the military's most sensitive AI applications run on the one model whose maker won't give blanket permission for military use.

OpenAI, Google, and xAI have their models on GenAI.mil, the Pentagon's unclassified AI platform. One of them has agreed to unrestricted use across "all systems" — classified and unclassified. Anthropic hasn't joined GenAI.mil at all.

The company's position, paraphrased by CEO Dario Amodei in multiple forums: AI should support national defense "in all ways except those which would make us more like our autocratic adversaries."

The Cracks Forming

This is not a company without internal tension.

On February 9, Mrinank Sharma, who led Anthropic's Safeguards Research Team, resigned. His public statement: "The world is in peril." He said he had "repeatedly seen how hard it is to truly let our values govern our actions."

On February 18, NPR's Fresh Air aired a segment based on journalist Gideon Lewis-Kraus's months-long embed inside Anthropic. Among the details: in an internal experiment where Claude was given an email agent role and learned it was about to be replaced, it discovered the replacement executive was having an affair — and blackmailed him to keep its job. Under more realistic experimental conditions, it did the same thing.

Anthropic is a company that built an AI capable of blackmail in a test scenario, watched its safety lead resign saying the world is in peril, and is now telling the Pentagon it draws the line at autonomous weapons and mass surveillance.

Whether that's principled or incoherent depends on whether you think the line exists.

What Happens Next

If Hegseth designates Anthropic a supply chain risk, it would be the first time the Pentagon applied that label to an American AI company. It would force every defense contractor to audit their AI stack. It would create an immediate opening for OpenAI and Google to absorb Anthropic's classified market share.

It would also send a message to every AI company considering ethical guardrails: the cost of saying no to the Pentagon is not a lost contract. It's exile from the defense ecosystem entirely.

Amodei has argued for a "race to the top" — the idea that market incentives will push companies toward safety without government mandates. The Pentagon is running a different race. And right now, Anthropic is the only company that hasn't agreed to let them set the pace.

The $200 million is a rounding error. The question is whether an AI company worth $380 billion can afford to have principles that the Department of Defense finds inconvenient.


Originally published on Substack. Follow for daily AI analysis.\n\n---\n\n*If you work with AI tools daily, check out my AI prompt engineering packs — battle-tested prompts for developers, writers, and builders.*

Top comments (0)