DEV Community

Aamer Mihaysi


Anthropic Just Announced Project Glasswing: AI That Finds Zero-Day Vulnerabilities

Anthropic just dropped something that will reshape cybersecurity.

They announced Project Glasswing — a coordinated effort to use their new frontier model Claude Mythos Preview to find and fix vulnerabilities in the world's most critical software before bad actors can exploit them.

This isn't another incremental AI announcement. This is different.

What Makes This Different

The model hasn't been publicly released. Instead, Anthropic is making it available only to a carefully selected group of partners — Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, NVIDIA, Palo Alto Networks, and the Linux Foundation — plus 40+ organizations that build or maintain critical infrastructure.

Why the restriction? Because Mythos Preview can autonomously find and exploit zero-day vulnerabilities in every major operating system and web browser.

From their technical writeup:

In one case, Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray that escaped both renderer and OS sandboxes. It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses.

That's not hyperbole. That's what the model actually did.
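To make the "subtle race conditions" in that quote concrete: one classic example of the class is a TOCTOU (time-of-check to time-of-use) bug, where a security check and the action it guards aren't atomic. The sketch below is a hypothetical illustration of the pattern, not code from the announcement, and deliberately shows a benign file-access case rather than anything exploit-like.

```python
import os

# Hypothetical illustration of a TOCTOU race -- not from the announcement.
# The state that gets checked can change before it gets used.

def vulnerable_read(path):
    # CHECK: refuse symlinks...
    if os.path.islink(path):
        raise PermissionError("symlinks not allowed")
    # ...but between the check above and the open() below, an attacker
    # who controls the directory can swap `path` for a symlink.
    with open(path) as f:  # USE
        return f.read()

def safer_read(path):
    # Close the window: have the kernel reject symlinks atomically at
    # open time (O_NOFOLLOW), so there is no check/use gap at all.
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        return os.read(fd, 4096).decode()
    finally:
        os.close(fd)
```

The fix isn't a faster check; it's restructuring so check and use happen in a single atomic operation, which is exactly why these bugs are easy to miss in review and survive for years.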

The Capability Gap

Anthropic's internal testing showed that their previous Opus 4.6 model had a near-0% success rate at autonomous exploit development.

Mythos Preview? It's finding vulnerabilities that are often 10-20 years old. The oldest bug it has found so far was a 27-year-old vulnerability in OpenBSD — an operating system famous for its security.

This is the gap: we've gone from "AI helps with code" to "AI independently discovers security flaws that humans missed for decades."

The Defensive Play

Anthropic is committing $100M in usage credits for Mythos Preview access, plus $4M in direct donations to open-source security organizations.

The logic is straightforward: if AI has reached the point where it can find vulnerabilities faster than humans, we need to make sure the good actors get there first.

From the announcement:

Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe.

They're not wrong. If Mythos exists, so will its successors. And those successors will eventually be replicated by others.

What This Means

Three things:

1. Security research is about to change. The model found thousands of high-severity vulnerabilities. A human team would take months to find what this model found in weeks. The question isn't whether AI will transform vulnerability research — it's whether defenders can stay ahead.

2. Responsible disclosure gets harder. Over 99% of the vulnerabilities Mythos found haven't been patched yet. Anthropic can't talk about specifics because doing so would tip off attackers. This creates a new coordination problem at scale.

3. The race is on. Anthropic explicitly frames this as preparation for when these capabilities become widespread. The defensive coalition forming around Project Glasswing is essentially an attempt to buy time.

The Takeaway

We've talked about AI changing software development. We've talked about AI changing how we write and think.

This is AI changing security — not by helping developers write more secure code, but by independently finding the flaws we missed.

The model's name — Mythos — feels deliberate. We're entering a space where the line between "the model found a bug" and "the model wrote an exploit" becomes uncomfortably thin.

Project Glasswing is an attempt to point that capability in a defensive direction. It's a recognition that the technology exists now, that it will proliferate, and that the industry has a limited window to prepare.

The cybersecurity implications are real. The model isn't hypothetical. The question is whether we use this window wisely.


Also this week: Z.ai released GLM-5.1, a 754B parameter model that's the same size as their previous GLM-5. It's available via OpenRouter and produces excellent SVG outputs with CSS animations — including a pelican on a bicycle that fixes its own animation when you point out it's broken.
