Last Tuesday, Anthropic’s CEO told the Department of Defense that Claude would never be used for autonomous weapons or mass surveillance. By Thursday, the Pentagon had designated Anthropic a “supply chain risk.” By the following Monday, Anthropic was suing the Trump administration.
Meanwhile, Meta quietly delayed its next AI model because — and I’m not making this up — it couldn’t beat Google’s Gemini 3.0. The company that committed $135 billion to AI this year is now considering licensing Gemini from its biggest rival.
And in four days, Jensen Huang takes the stage at GTC 2026 to unveil chips that make everything else look like a calculator.
This isn’t your standard AI news roundup. This is the week AI stopped being a tech story and became a political, economic, and existential one.
1. Anthropic vs. the Pentagon: The AI Company That Said No
Here’s the timeline. Memorize it, because it matters.
March 3: Dario Amodei, Anthropic’s CEO, publicly announces that Claude will not be used for autonomous weapons systems or mass domestic surveillance. He frames this as a safety commitment, not a political statement.
March 5: The Department of Defense designates Anthropic as a “supply chain risk.” This isn’t a slap on the wrist. It means any company with a Pentagon contract could face penalties for using Claude. We’re talking hundreds of millions in potential revenue vaporized.
March 9: Anthropic sues. The complaint asks the court to vacate the designation entirely. Amodei clarifies that the designation’s restrictions apply only to Claude’s use in direct Pentagon contract work — the vast majority of Anthropic’s customers are unaffected.
This is unprecedented. An AI company is being punished by the U.S. government not for doing something wrong, but for refusing to do something the government wanted.
Why this matters more than you think: Every AI company now faces a question they’ve been dodging — what happens when your biggest potential customer wants your technology for things your safety policy explicitly forbids?
OpenAI quietly removed its military use prohibition back in 2024. Google’s Project Maven controversy was back in 2018. Anthropic just drew the line in 2026 and got blacklisted for it.
The takeaway: AI safety isn’t theoretical anymore. It has a price tag, and Anthropic just found out how much it costs.
2. Meta’s “Avocado” Disaster: $135 Billion and Still Behind
Let’s talk about Meta’s week, which was — charitably — a catastrophe.
The New York Times reported that Meta has delayed the release of its next AI model, code-named “Avocado,” from March to at least May. The reason? It can’t match Google’s Gemini 3.0, which launched in November. Four months ago.
Think about that. Meta committed $115–135 billion in capital expenditure for 2026 — an 88% increase over last year. They’re building data centers. They’re buying every GPU NVIDIA will sell them. They’re designing custom chips. And their model still can’t keep up with Google’s.
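A quick sanity check on that growth figure: taking the reported 2026 range and the stated 88% increase at face value, you can back out what last year’s spend would have been. This is a back-of-envelope sketch using only the article’s numbers, not a reported figure.

```python
# Back out Meta's implied 2025 capex from the reported 2026 range
# ($115-135B) and the stated 88% year-over-year increase.
capex_2026_low, capex_2026_high = 115, 135  # $B, from the article
growth = 1.88  # 88% increase over the prior year

implied_2025_low = capex_2026_low / growth
implied_2025_high = capex_2026_high / growth

print(f"Implied 2025 capex: ${implied_2025_low:.0f}B-${implied_2025_high:.0f}B")
# Roughly $61B-$72B, which is the scale of spend the 88% claim presumes.
```

If the implied prior-year number had come out wildly off from what Meta actually disclosed for 2025, one of the two figures in the article would have to be wrong; as arithmetic, at least, they hang together.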
But the real jaw-dropper is this: according to the NYT, Meta’s AI leadership has discussed temporarily licensing Gemini to power Meta’s consumer AI products while they fix Avocado.
Mark Zuckerberg, the man who bet the company’s entire future on AI, might be running his AI on Google’s technology.
The numbers don’t lie:
- Meta AI capex 2026: $115–135 billion
- Google’s AI revenue advantage: Search, Cloud, Android ecosystem
- Meta’s AI revenue: Still mostly… better ad targeting?
- Model performance: Behind Gemini 3.0 (November 2025 release)
The takeaway: Money doesn’t buy you the best AI. Google proved that a 4-month-old model can embarrass a $135 billion spending spree.
3. GTC 2026: Jensen’s About to Change the Game (Again)
Starting Monday, NVIDIA’s GPU Technology Conference runs March 16–19 in San Jose. Jensen Huang’s keynote is free to stream. You should watch it. Here’s why.
This year’s GTC isn’t just another product launch. NVIDIA is unveiling two major architectures simultaneously:
Vera Rubin — The next-generation GPU platform featuring HBM4 memory. This is the Blackwell successor, and early specs suggest 1.5 PB/s interconnect bandwidth. For context, that’s roughly 3x what current H100 clusters deliver.
Feynman — The next-next-generation architecture designed specifically for agentic AI workloads. Not training. Not inference. Agent infrastructure. NVIDIA is building silicon for a use case that barely existed two years ago.
NemoClaw — NVIDIA’s open-source enterprise AI agent platform, inspired by OpenClaw (297K GitHub stars). It’s positioned as the enterprise version of what hobbyists are already running on their laptops.
The “Super Bowl of AI” nickname isn’t hype this year. With Anthropic in a legal battle, Meta stumbling, and OpenAI retiring GPT-5.1 for 5.3/5.4, NVIDIA is the only company in the AI ecosystem having a good week.
The takeaway: While AI companies fight the government and each other, NVIDIA sells the shovels. And the shovels just got a lot more powerful.
4. The 12-Model Avalanche That Nobody Noticed
Between March 1–8, at least twelve major AI models and tools dropped from OpenAI, Alibaba, Lightricks, Tencent, Meta, ByteDance, and several universities. In one week.
We’re so desensitized to model releases that a dozen dropped and the news cycle barely flinched. Two years ago, a single GPT release would dominate headlines for weeks.
Some highlights you might have missed:
- OpenAI retired GPT-5.1 entirely (as of March 11), migrating everyone to GPT-5.3 or 5.4
- Alibaba’s Qwen continues its open-source blitz — now competitive with models 3x its size
- ByteDance shipped video generation tools that make last year’s Sora look primitive
- Lightricks released production-ready image editing models that run on mobile
The pace is unsustainable. Nobody — not researchers, not developers, not users — can evaluate these models as fast as they ship. We’re in a “publish or perish” arms race where getting the model out the door matters more than whether anyone needs it.
The takeaway: When 12 AI models drop in a week and nobody blinks, we’ve either reached the future or we’ve stopped paying attention. Probably both.
5. The Washington Problem: AI Bills Are Everywhere
While everyone was watching the Anthropic lawsuit, state legislatures were busy.
Washington state just passed two significant AI bills before its March 12 adjournment: HB 1170 (AI disclosure requirements) and HB 2225 (chatbot safety for kids, including self-harm protocols). These aren’t “we’ll think about it” proposals. They’re law.
This follows a national pattern. More than 30 states introduced AI-related legislation in Q1 2026. The EU AI Act is in full enforcement. And the Trump administration is simultaneously trying to deregulate AI development while threatening companies that won’t play ball with defense contracts.
The result is a regulatory landscape that makes no sense. You can build any AI you want (federal deregulation), but if you don’t let the Pentagon use it, you’re a supply chain risk. States want transparency and safety guardrails. The feds want capabilities with no restrictions.
AI companies are now operating in a regulatory contradiction. And nobody’s figured out how to resolve it.
The takeaway: The U.S. doesn’t have an AI policy. It has fifty state AI policies and a federal government that punishes companies for having safety standards.
6. What This Week Really Means
Step back from the individual stories and a pattern emerges.
The AI industry just split into three camps:
- The Compliant — Companies willing to do whatever governments and militaries ask. OpenAI removed its military use ban. Others will follow. The money is too good.
- The Principled — Anthropic drew a line and got punished. Their stock of goodwill with safety researchers just skyrocketed. Their government revenue might never recover.
- The Infrastructure — NVIDIA doesn’t care who wins the ethics debate. They sell chips to everyone. Jensen Huang sleeps well regardless of who builds what with his GPUs.
Meta falls into a fourth, sadder category: the ones who spent $135 billion and still can’t keep up.
This week wasn’t about benchmarks or model releases. It was about power. Who has it, who wants it, and what happens when an AI company tells the most powerful military on Earth “no.”
The takeaway: AI stopped being about technology this week. It’s about politics, money, and the uncomfortable question of what we’re actually building all this for.
Quick Hits
- Health AI agents launched at HIMSS — Amazon, Google, and Microsoft all announced AI doctor assistants. 88% of doctors are worried about skill loss. They should be.
- Britain’s AI investment program is mostly “imported chips in borrowed buildings,” per The Guardian. Ouch.
- Zalando forecasts a 2026 profit jump driven by AI. The “AI boosts earnings” era is reaching retail.
Chase Xu builds AI agents and submits PRs to the frameworks that power them. He writes about AI because someone has to say what the press releases won’t. Find him on GitHub.
FAQ
Is Anthropic actually banned from government work?
No. The “supply chain risk” designation means Pentagon contractors face restrictions using Claude specifically for Pentagon contract work. Anthropic’s commercial customers are unaffected. But the chilling effect on government-adjacent deals is real.
Why is Meta’s Avocado model behind Gemini?
Details are scarce. The NYT reports it “has not performed as strongly as Gemini 3.0,” which launched in November 2025. Meta’s AI team improved over their previous models but couldn’t match Google’s quality, which benefits from a deeper bench of AI research talent and more diverse training data from Search.
When is NVIDIA’s GTC keynote?
March 17, 2026 (Tuesday). Free to stream at nvidia.com, no registration required. Expect Vera Rubin GPU details, Feynman architecture preview, and NemoClaw enterprise agent platform.
Should I watch GTC?
If you work in AI — yes, absolutely. Jensen’s keynotes routinely move the entire industry. Last year’s Blackwell reveal changed inference economics overnight.