zecheng

Posted on • Originally published at lizecheng.net

Anthropic Just Told You Who They Are. Believe Them.

Three things happened on February 24. Separately, they're news. Together, they're a strategy reveal most people will miss.

Pentagon gives Anthropic an ultimatum. Anthropic rewrites its safety policy. Anthropic launches enterprise plugins into a software market it had already moved by $830 billion.

Same day. Same company. Not a coincidence.

Let's start with the threat. Defense Secretary Pete Hegseth sat down with Dario Amodei and said, in effect: give us unrestricted Claude access by Friday, or we designate you a supply chain risk and invoke the Defense Production Act — a Korean War-era law that lets the government conscript private companies into national security production. Whether they want to or not.

Here's what makes this interesting. Claude is already the only AI model running on classified US defense systems, deployed through Palantir. It was reportedly used in the operation to capture Venezuela's Maduro. The Pentagon doesn't want to replace Claude. They want to unleash it — mass surveillance, autonomous weapons, the full menu. Anthropic's usage policy explicitly bans both.

So Anthropic faces a choice no amount of fundraising prepared them for: comply and torch the "responsible AI" brand, or resist and get conscripted anyway.

Now look at what else dropped that same day.

RSP 3.0. Anthropic's new Responsible Scaling Policy. The old version had a hard line: don't train more powerful models unless safety measures are confirmed first. That line is gone. The new version says development will only be "delayed" if leadership believes Anthropic leads the AI race AND catastrophic risks are significant. Chief Scientist Jared Kaplan's logic: "If competitors are blazing ahead, pausing wouldn't help — it would result in a less safe world."

Read that again. The condition for slowing down now requires two things to be true simultaneously. And one of them — "we're in the lead" — is something Anthropic can always argue against. OpenAI exists. Google exists. There's always someone to point to.

This isn't a policy update. It's a permission structure.

Two weeks earlier, Anthropic closed a $30 billion raise at roughly $380 billion valuation. Annualized revenue: $14 billion. Claude Code alone contributes $2.5 billion of that. The money came in. The guardrails came down. The timing is not subtle.

And then — the enterprise play. Claude Cowork launched integrations with Google Workspace, Slack, DocuSign, FactSet, LegalZoom, Similarweb, WordPress, S&P Global, MSCI, and more. Private plugin marketplaces. Multi-step workflows across Excel and PowerPoint with context passing between apps.

Market reaction? Salesforce up 4%. Thomson Reuters up 11%. FactSet up 6%. Intapp up 7.1%. S&P gained 0.77% to 6,890.07, Nasdaq up 1.04% to 22,863.68. Weeks ago, Anthropic's legal plugin triggered an $830 billion global software selloff. This time, the same company catalyzed recovery.

You see the pattern now.

Raise capital. Loosen constraints. Expand in every direction — military, enterprise, developer — simultaneously. Anthropic is not pivoting from safety. They're graduating from it. "We were the responsible ones" becomes past tense, a credential on the resume, not an operating principle.

Zecheng's read on this: the company that built its brand on "we're the careful ones" just showed you that brand was a stage, not an identity. And honestly? It makes sense. You can't raise $30 billion and stay cautious. The investors didn't write those checks for restraint. They wrote them for market dominance.

The builder angle matters here too. Doug O'Laughlin from SemiAnalysis noted on Latent Space that 4% of public GitHub commits are now authored by Claude Code, projected to hit 20%+ by end of 2026. Liam Ottley's side-by-side testing showed Claude Code beating OpenClaw on every metric that matters for real business automation — terminal access, filesystem integration, API orchestration. The enterprise plugin launch isn't a feature announcement. It's infrastructure for making Claude the default operating layer for knowledge work.

Here's what I think happens next. The Pentagon situation resolves quietly — some version of expanded access with nominal policy language that lets both sides save face. The safety policy change is permanent. And the enterprise play is where the real war gets fought, not against governments, but against Microsoft and Google for who owns the AI workflow layer.

Anthropic told you exactly who they are on February 24. The question isn't whether you believe them. It's whether you've updated your model of the world accordingly.

Because the "safe AI company" era just ended. What comes next is just an AI company — with very, very good technology and $30 billion in pressure to use it.
