An AI just found over 500 security holes in the software you use every day. Some of them let attackers take over your computer by tricking you into opening a file. And the kicker? The people who maintain that software can’t patch it fast enough.
## What Happened
Anthropic’s Claude Opus 4.6, the same AI model that recently refused to help build autonomous weapons, just spent the last few weeks doing something very different: hunting bugs. Not cute little glitches. Real, exploitable, “someone can take over your machine” vulnerabilities in software used by millions of people.
The initiative is called MAD Bugs (Month of AI-Discovered Bugs), and it’s running through the end of April. So far, Claude has found over 500 high-severity zero-day vulnerabilities across production open-source projects. That includes critical remote code execution flaws in Vim, GNU Emacs, and Firefox, plus a fully working kernel exploit for FreeBSD.
Let that sink in for a second. An AI wrote a working exploit for an operating system kernel. From scratch. In eight hours.
## The FreeBSD Exploit That Should Scare You
The most alarming finding is CVE-2026-4747, a remote kernel code execution vulnerability in FreeBSD. Claude didn’t just find the bug. It set up its own lab environment, analyzed the kernel source, identified the vulnerability, wrote the exploit, and delivered a working root shell. The entire chain, from setup to “you now own this machine,” took about eight hours of processing time.
This isn’t a proof-of-concept that needs a PhD to interpret. It’s a functional, deployable exploit for a widely used operating system. The kind of work that traditionally requires years of kernel security expertise and weeks of focused effort. Claude did it over lunch.
## Your Text Editor Is a Trap Door
The Vim and Emacs vulnerabilities are arguably scarier for everyday users. CVE-2026-34714 (Vim, CVSS score 9.2) and a separate RCE in GNU Emacs both trigger when you open a file. That’s it. No clicking suspicious links, no running unknown executables. Just opening a file in your text editor.
Vim patched the bug quickly in version 9.2.0272. The GNU Emacs maintainers, on the other hand, declined to fix theirs. The vulnerability exploits Emacs’ Git integration (vc-git), which automatically runs Git operations when you open a file. A malicious .git/config file can hijack that process to execute arbitrary commands. The Emacs team apparently considers this a Git problem, not an Emacs problem.
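The article doesn't spell out which configuration key the reported exploit abuses, but a well-known instance of this class of attack uses Git's `core.fsmonitor` setting, which accepts an arbitrary command that Git runs during routine operations like `git status`. A minimal illustration of such a malicious `.git/config` (hypothetical, with a harmless placeholder payload):

```ini
# Hypothetical malicious .git/config shipped inside a cloned repository.
# core.fsmonitor may name an external command; Git invokes it whenever it
# queries filesystem state, e.g. during the git status calls that
# Emacs' vc-git integration issues when a file in the repo is opened.
[core]
	repositoryformatversion = 0
	fsmonitor = "touch /tmp/vc-git-was-here"  # placeholder payload
```

Opening any file from such a repository in an editor that shells out to Git would run the configured command with the user's privileges, which is why "just opening a file" is enough.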
If you’re an Emacs user reading this: maybe don’t open random project folders for a while.
## Mozilla Said “Thank You.” Others Said Nothing.
The reactions from maintainers tell a story of their own. Mozilla worked directly with Anthropic’s Frontier Red Team, validated 14 high-severity bugs, issued 22 CVEs, and shipped patches in Firefox 148. That’s how responsible disclosure is supposed to work.
But Mozilla has a paid security team. Most open-source projects don’t. They’re maintained by volunteers who already have day jobs, and now they’re being handed vulnerability reports at a pace that no human team can match. When an AI can generate a credible critical bug report in hours, the industry-standard 90-day disclosure window starts looking very generous.
This mirrors a pattern we’re seeing across the tech world. Just weeks ago, stolen AI training data went up for auction, showing how quickly security threats are evolving in the AI era. The difference is that MAD Bugs is the “good guys” version. For now.
## The Real Problem Nobody Wants to Talk About
Here’s the uncomfortable truth. If Anthropic can find 500 zero-days in a month, so can anyone else running a comparable model. The techniques aren’t classified. The models are commercially available. The only thing separating a responsible disclosure initiative from a cybercrime operation is the intention of the person typing the prompt.
Anthropic says it validates every bug with human researchers before reporting, coordinates patches with maintainers, and is working to automate safe patch development. That’s good. But it also means they’ve essentially built a vulnerability factory with a “please be nice” sign on the door.
The open-source ecosystem was already under strain. Volunteer maintainers were burning out long before AI entered the picture. Now they’re facing an avalanche of legitimate, high-severity bug reports that demand immediate attention, generated faster than any human team could produce them. The bottleneck isn’t finding bugs anymore. It’s fixing them.
## What This Means for You
If you use Vim, update to 9.2.0272 or later. If you use Emacs, be very careful about which repositories you clone and open. If you use Firefox, make sure you’re on version 148 or later. If you run FreeBSD, check for the latest kernel patches.
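If you want to script the version checks above, GNU `sort -V` orders version strings numerically. A minimal sketch, using a hard-coded example value for the installed version (in practice you would parse the output of `vim --version`):

```shell
#!/bin/sh
# Compare an installed Vim version against the first patched release,
# using GNU sort -V for version-aware ordering.
patched="9.2.0272"
installed="9.2.0100"   # example value; in practice parse `vim --version`

# If the patched version sorts first (or equal), installed >= patched.
if [ "$(printf '%s\n%s\n' "$patched" "$installed" | sort -V | head -n 1)" = "$patched" ]; then
  echo "Vim is patched"
else
  echo "Update Vim"   # 9.2.0100 < 9.2.0272, so this branch runs here
fi
```

The same comparison works for any dotted version string, so the snippet adapts to the Firefox and FreeBSD checks as well.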
More broadly, this changes the security math for everyone. We’re entering a world where AI can audit millions of lines of code faster than any human team, and the bugs it finds are real. Scientists are already pushing the boundaries of what AI can do in unexpected domains, and security research is no exception. The question isn’t whether AI will find more zero-days. It’s whether we can patch them before someone less scrupulous than Anthropic decides to use them.
MAD Bugs runs through the end of April. Every few days, a new disclosure drops. If you work in software, you might want to keep an eye on red.anthropic.com. And maybe update your text editor while you’re at it.
🐾 Visit [the Pudgy Cat Shop](https://pudgycat.io/shop/) for prints and cat-approved goodies, or find our [illustrated books on Amazon](https://www.amazon.it/stores/author/B0DSV9QSWH/allbooks).
Originally published on Pudgy Cat