Anthropic Just Got Blacklisted as a US National Security Risk. Then OpenAI Engineers Took Their Side.

The Trump administration just did something unprecedented: it designated Anthropic as a supply chain security risk, a classification previously reserved for Chinese firms such as Huawei.

Here is what actually happened and why it matters for every developer building on AI infrastructure.

The $200M Contract That Started It All

In July 2025, Claude became the first frontier AI model approved for classified Pentagon networks. Six months later, the contract behind that approval was cancelled.

The reason: Anthropic refused to remove two safety restrictions — no autonomous weapons targeting and no mass surveillance of American citizens.

The Fallout

  • $200M Pentagon contract cancelled
  • 6-month phase-out window for all federal agencies
  • Lawsuits filed in 2 federal courts
  • All military contractors barred from working with Anthropic

The Surprising Part

Thirty engineers from OpenAI and Google DeepMind, Anthropic's direct competitors, filed an amicus brief supporting its lawsuit. Google's chief scientist, Jeff Dean, was among them.

This raises a question every enterprise AI buyer needs to answer: can a government force an AI vendor to abandon its safety policies?


For the full analysis, including the 3 legal scenarios and who wins each one, read the full breakdown on Skila AI.

Originally published at news.skila.ai
