When Your AI Finds Bugs That Have Been Hiding for 27 Years
Anthropic released a new model today. Sort of.
Claude Mythos is a general-purpose model, similar to Claude Opus 4.6, but with a terrifying specialty: it found vulnerabilities in every major operating system and web browser. So Anthropic decided not to release it publicly.
Instead, they launched Project Glasswing — a restricted access program that gives the model only to vetted security researchers and major tech partners. AWS, Apple, Microsoft, Google, and the Linux Foundation get access. The rest of us get to read about it.
What Makes Mythos Different?
The numbers from Anthropic's internal evaluations are stark:
- Opus 4.6: Near-0% success rate at autonomous exploit development
- Mythos Preview: 181 successful exploits out of several hundred attempts on Firefox's JavaScript engine, plus 29 more achieving register control
Nicholas Carlini, one of Anthropic's security researchers, said: "I've found more bugs in the last couple of weeks than I found in the rest of my life combined."
The 27-Year-Old Bug
The model found a TCP packet vulnerability in OpenBSD that had been sitting there for 27 years. Send a couple of malformed packets to any OpenBSD server, and it crashes. The fix was a single line of code. The bug had survived decades of human auditing.
This is the new reality: AI can now find vulnerabilities that humans have missed for the entire history of the internet.
What Mythos Can Actually Do
The technical details are sobering:
- Browser exploit chains: It wrote a Firefox exploit that chained four vulnerabilities together, using JIT heap spraying to escape both renderer and OS sandboxes
- Privilege escalation: Found Linux vulnerabilities where a user with no permissions can elevate themselves to administrator
- Remote code execution: Built an NFS server exploit granting root access to unauthenticated users by splitting a 20-gadget ROP chain across multiple packets
- Autonomous testing: It doesn't just find bugs — it develops working exploits
Why Restrict Access?
Anthropic's argument is straightforward: the capability to find thousands of high-severity vulnerabilities across every major OS is dangerous. If this technology proliferates beyond responsible actors, every piece of critical infrastructure becomes vulnerable.
They're putting $100M in usage credits and $4M in direct donations toward open-source security organizations. They're giving partners time to fix what Mythos finds before the same capability becomes available to less scrupulous actors.
The Industry Response
Greg Kroah-Hartman, maintainer of the Linux kernel, noted that "something happened a month ago, and the world switched" — AI-generated security reports went from obvious slop to genuinely useful.
Daniel Stenberg of curl fame is now spending "hours per day" on AI-found vulnerabilities. Thomas Ptacek published "Vulnerability Research Is Cooked," arguing that the era of human-driven security research is ending.
What This Means for Developers
If you're running open-source software, you're already benefiting from Mythos-style discovery. The patches for OpenBSD and Linux are public. But the asymmetry is clear: defenders get a heads-up, but attackers won't wait forever.
The uncomfortable truth: the same model that's patching 27-year-old bugs could find exploits in your codebase tomorrow. The question isn't whether AI will transform security — it's whether you'll be ready when it does.
Project Glasswing isn't about hiding technology. It's about buying time. The model will eventually escape containment, either through competitors catching up or through inevitable leaks. But Anthropic's gamble is that by then, critical infrastructure will be hardened enough to survive.