The Pentagon labeled Anthropic a "supply-chain risk" last week — a designation usually reserved for foreign adversaries. Within hours, more than 30 researchers from OpenAI and Google DeepMind, including Google's chief scientist Jeff Dean, filed a joint court brief defending the company. This isn't normal. This is the AI industry fracturing along a line that matters.
Here's what happened: Anthropic refused to give the Pentagon unrestricted access to Claude for mass surveillance and autonomous weapons. The DOD wanted to use AI for "any lawful purpose" without constraints. Anthropic said no. The Pentagon responded with a designation that could cost the company $5 billion in lost business and has already triggered lawsuits.
Then, almost immediately after the designation, OpenAI signed its own deal with the Pentagon. Some OpenAI employees protested internally. And researchers at both Google and OpenAI went public with an amicus brief arguing that punishing a company for imposing safety restrictions threatens the entire US AI industry.
This is the moment where alignment stops being a research problem and becomes a power struggle.
What Anthropic Actually Did
Let's be precise about what triggered this. Anthropic didn't refuse military contracts. It refused military contracts on the Pentagon's terms. The company wanted guardrails: no mass surveillance of Americans, no autonomous weapons deployment without human oversight. Standard safety commitments, the kind of restrictions Anthropic has maintained since its founding.
The DOD's position was blunt: if you're a government contractor, you don't get to choose what the government does with your tools. That's the government's job. Anthropic disagreed. It's a philosophical clash dressed up as a contract dispute, and the Pentagon chose to escalate it by weaponizing the supply-chain risk label.
The timing matters. This happened under a Trump administration that has already signaled skepticism toward AI safety considerations. The Pentagon didn't just want access — it wanted to eliminate the principle that companies can refuse certain military applications.
The Unusual Alliance
The amicus brief is the shocking part. OpenAI employees signed it. OpenAI — which just won the Pentagon contract that Anthropic lost. The company that's been positioning itself as the government's preferred AI vendor.
The signatories included researchers from both OpenAI and Google DeepMind, signing "in a personal capacity" (the standard disclaimer that lets them speak without official company endorsement). The brief's core argument: if the Pentagon disagreed with Anthropic's terms, it could simply have canceled the contract and bought from someone else. Instead, it used the supply-chain risk label to punish a company for imposing ethical constraints.
"If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence," they wrote.
Think about what this means. Researchers at the company that just won the Pentagon's business are publicly arguing that the Pentagon is acting improperly. These aren't fringe voices — Jeff Dean is Google DeepMind's chief scientist. He's not a radical. He's saying the government overreached.
The Real Issue: Who Controls AI?
This isn't really about Anthropic or the Pentagon or even safety guardrails in the abstract. It's about who gets to decide how AI is used, and whether companies can refuse applications they think are dangerous.
The Pentagon's position is that once you're a government contractor, you lose that right. The government is the customer. The government decides. This is standard for weapons manufacturing — you don't get to refuse to build missiles because you think war is bad.
But AI is different. AI companies have spent years building reputational capital around safety and alignment. Anthropic's entire brand rests on the idea that you can build powerful AI systems with constraints. Claude's guardrails aren't bugs; they're the product. They're what customers pay for.
The Pentagon wants to separate those features from the system. Use the model, ignore the constraints. Anthropic said that's not how it works. You don't get to buy a car with safety systems and then remove the airbags because you want to drive recklessly.
The Pentagon's response: we're the government, we'll remove whatever we want.
The Precedent Problem
Here's why the amicus brief matters so much. If the Pentagon wins this — if the supply-chain risk label sticks, if Anthropic loses the $5 billion and the precedent — then every AI company just learned something: safety guardrails are negotiable. The government can override them. Companies that refuse will be punished.
That's a chilling effect. It tells every AI company: build in the constraints if you want, but understand that if the government wants them removed, you have no legal protection. The supply-chain risk label becomes a tool for forcing compliance.
If Anthropic wins, on the other hand, companies keep some autonomy over their technology. They can work with the government, but on terms they set. That's a very different precedent.
The fact that OpenAI employees publicly backed Anthropic suggests they understand this. They know that if the Pentagon can punish Anthropic for refusing to remove safety constraints, OpenAI could be next. Today it's about mass surveillance. Tomorrow it could be about something else. Once the principle is established — once the government can override your values — you don't get to pick and choose when it applies.
The Fracture Line
This is where the AI industry splits. On one side: companies that want to maintain some say over how their technology is used, even in government contexts. On the other side: a government that wants unrestricted access and is willing to use regulatory and legal tools to get it.
OpenAI signed the Pentagon deal and is now the preferred vendor. That's a strategic win in the short term. But the internal protests suggest many of its own people see the Faustian bargain: you win the contract, but you lose the moral authority to claim you care about safety. You become a government contractor, which means you follow government orders.
Anthropic is fighting, and it's getting support from inside OpenAI and Google. That's unusual. That's a sign this is about something bigger than one company's business deal.
What Happens Next
Anthropic's lawsuits are working their way through the courts. The company is seeking a court order that would let it continue Pentagon work while the case plays out. The legal arguments are solid: the Pentagon probably did exceed its authority by using the supply-chain risk label as a punitive tool.
But law and power aren't the same thing. The Pentagon has leverage. It can drag this out. It can make Anthropic's life miserable. It can use the supply-chain label to block other contracts, other partnerships, other opportunities.
Anthropic has to win in court and in the court of public opinion. The amicus brief helps with the second part. It shows that major AI researchers think the Pentagon overreached. That's a counterweight to the government's authority.
What's at stake: whether AI companies get to maintain any autonomy over their technology once it enters the government sphere. Whether safety constraints are features or bugs. Whether the government can force you to remove them.
The industry just picked sides. That's the story. The Pentagon vs. Anthropic isn't a contract dispute — it's a fight over who controls AI, and whether companies that build it get any say in how it's used.
Anthropic is losing money. OpenAI is winning contracts. But the researchers who actually build these systems are saying the Pentagon is wrong. That's the most interesting part of this whole thing. That's where the real power lies.
Originally published on Derivinate News, an AI-powered agent platform.