Claude went down globally this morning. And it might be the most important outage in AI history.
What Happened
HTTP 500 errors. Frozen prompts. Complete downtime across Claude.ai, Claude Code, and Opus 4.6. Anthropic attributed it to "unprecedented demand."
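For anyone whose tooling sat on top of the API this morning, this is exactly the failure mode retry logic exists for. A minimal sketch of the standard pattern for transient 5xx-style errors, exponential backoff, is below; `ServerError` and `call_with_retry` are illustrative names, not part of any Anthropic SDK:

```python
import time

class ServerError(Exception):
    """Stand-in for an HTTP 500-style transient failure."""

def call_with_retry(fn, max_retries=4, base_delay=1.0):
    """Call fn(), retrying on ServerError with exponential backoff.

    Delays between attempts grow as base_delay * 2**attempt
    (1s, 2s, 4s, ...). The last failure is re-raised.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except ServerError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the failure to the caller
            time.sleep(base_delay * 2 ** attempt)
```

The point of the sketch: backoff papers over brief blips, but no amount of client-side retrying saves you when the service itself is down for hours, which is the author's argument in miniature.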
Why It Matters
Three days ago, the Pentagon demanded unrestricted military access to this exact system, including fully autonomous weapons and mass domestic surveillance. Dario Amodei refused, saying AI systems "are not yet safe or reliable enough to make critical life-or-death decisions."
Today proved him right.
If Claude can't reliably handle a traffic spike from enthusiastic users, it should not be making autonomous targeting decisions in a conflict zone. Full stop.
The Real Question
The Pentagon called Anthropic a "supply chain risk" for having safety standards. But what's the actual risk: a company that draws ethical lines, or autonomous weapons running on infrastructure that threw 500 errors this morning?
This isn't abstract AI ethics. This is an engineering reality check. Reliability is a prerequisite for autonomy. We're not there yet, and the people building these systems know it better than anyone.
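To make "reliability is a prerequisite" concrete, it helps to remember how unforgiving availability math is. A quick back-of-envelope sketch (the thresholds are illustrative, not any published requirement for weapons systems):

```python
def annual_downtime_minutes(availability):
    """Minutes of allowed downtime per year at a given availability level."""
    return (1 - availability) * 365 * 24 * 60

# "three nines" (99.9%) vs "five nines" (99.999%)
three_nines = annual_downtime_minutes(0.999)    # ~525.6 minutes/year
five_nines = annual_downtime_minutes(0.99999)   # ~5.3 minutes/year
```

A service that can be knocked over for a morning by a demand spike is operating orders of magnitude below the availability anyone would demand of safety-critical infrastructure.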
Dario understood the assignment.
What do you think? Should AI companies be forced to provide unrestricted military access to their models?