Yesterday, Cal.com closed their source code. One of the world's largest Next.js open source projects. Done. Co-founder Bailey Pumfleet figures that publishing your code in the age of AI is like handing the blueprint of a bank vault to 100x more burglars than before. His partner Peer Richelsen followed up, saying any open-source application is at risk and should take all of its code, or at least the sensitive parts, private. Meanwhile, Peter steipete (OpenClaw/OpenAI) responded "bad news" with a screenshot of GPT 5.4-Cyber reverse-engineering closed source without breaking a sweat.
Is open source dead?
TLDR: Cal.com is right to be scared. Mythos (Anthropic's cyber AI) cracked a 27-year-old OpenBSD bug last week. But their remedy collides with a principle that's 143 years old and that nobody in this entire debate has mentioned once. When you apply that principle to their decision, you realize they didn't solve the problem they think they solved.
They're not alone. Tailwind, cURL, Ghostty, tldraw, GitHub. Different projects, same reflex, same reason: AI. The open source ecosystem is in full retreat and nobody is stopping to ask if the retreat even leads somewhere safer.
I build with OSS every day. Hetzner, Postgres, Redis, Next.js, whatever npm package I pull without thinking twice. And now one of open source's poster boys just raised both hands. Everyone reacts. Nobody asks the real question.
Is closing the code the right response to a problem that is, itself, very real?
Mythos Cracked a 27-Year-Old OpenBSD Bug
On April 7, Anthropic released the technical report on Mythos Preview. The numbers are not what you call subtle.
A 27-year-old TCP SACK integer overflow in OpenBSD, an operating system famous for its obsessive security posture. A 16-year-old heap buffer overflow in FFmpeg's H.264 decoder, present in nearly every phone, browser, and computer on the planet. Five million automated fuzzer runs never caught it. Mythos identified it through semantic reasoning about code logic, not brute force. A 17-year-old remote code execution vulnerability in FreeBSD's NFS server (CVE-2026-4747). Unauthenticated root access. Twenty-gadget ROP chain split over multiple packets. Fully autonomous.
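Anthropic's report doesn't publish the exploit details, but the bug class is well understood. Here is a minimal sketch of a generic integer-overflow pattern of the kind described, invented for illustration and assuming nothing about the actual OpenBSD code: attacker-controlled lengths get summed in fixed-width arithmetic, wrap around, and slip past a bounds check.

```python
# Illustrative only: a generic integer-overflow pattern, NOT the
# actual OpenBSD TCP SACK code. Lengths arriving off the wire are
# summed in a fixed-width integer; Python ints don't wrap, so we
# mask to emulate 16-bit arithmetic.

UINT16_MAX = 0xFFFF

def total_len_u16(block_lengths):
    """Sum attacker-controlled block lengths in 16-bit arithmetic."""
    total = 0
    for n in block_lengths:
        total = (total + n) & UINT16_MAX  # wraps past 65535
    return total

def bounds_check_passes(block_lengths, buffer_size=1024):
    """A naive check that trusts the (wrapped) total."""
    return total_len_u16(block_lengths) <= buffer_size

# Two blocks that individually look huge but wrap to a tiny total:
blocks = [0xFFF0, 0x0020]           # true sum = 65552, wrapped = 16
assert sum(blocks) > 1024           # the real payload is far too big...
assert bounds_check_passes(blocks)  # ...but the check is fooled
```

A fuzzer has to stumble on exactly this pair of values; a model reasoning about the arithmetic can derive them. That is the "semantic reasoning, not brute force" difference.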
On Firefox 147, Opus 4.6 turned known vulnerabilities into working exploits twice. Mythos did it 181 times.
Over 99% of the vulnerabilities remain unpatched. The model is not publicly available. Project Glasswing gave access to Amazon, Apple, Microsoft, and a handful of others. Anthropic committed $100M in usage credits. The same week, the announcement wiped $15 billion off cybersecurity stocks.
Now, the part Cal.com did not mention. Mythos found those bugs in open source code, yes. But Anthropic's own report states explicitly that Mythos finds and exploits zero-days in "every major operating system and every major web browser." Including closed-source. Two days after the announcement, AISLE (a cybersecurity startup) tested the exact showcase vulnerabilities against small, cheap, open-weights models. Eight out of eight detected the FreeBSD NFS vulnerability. The smallest model had 3.6 billion parameters and cost $0.11 per million tokens.
AISLE's conclusion: the moat in AI cybersecurity is the system, not the model.
Cal.com is right to panic. Wrong about the fix.
Kerckhoffs Warned Us. In 1883.
The entire "close everything" camp is building on a foundation that was debunked before electricity was common in homes.
Auguste Kerckhoffs, 1883, La Cryptographie Militaire. One sentence that built modern cryptography: a system must not require secrecy, and it should be able to fall into the enemy's hands without causing problems. 143 years. Never invalidated. Claude Shannon reformulated it in 1949: the enemy knows the system being used. Every serious security framework in existence since then assumes the attacker has the source code. That is not an ideological position. That is how you build systems that actually resist attack.
Cal.com just based their entire 2026 security strategy on a principle the industry abandoned before the light bulb. They locked the vault and assumed nobody can see through the walls. Mythos sees through walls. GPT 5.4-Cyber sees through walls. The next model, six months from now, will see through thicker ones.
Security through obscurity has a 143-year losing record.
We've Seen This Panic Before. It Ended Fine.
Late 90s, early 2000s. Automated fuzzing tools arrive. Same panic, same articles. Some of them on Slashdot, which gives you a sense of the era.
Maintainers complained about the workload explosion. Bad actors used the fuzzing tools before patches shipped. The sky was falling. A commenter named williamyf laid it out in the Cal.com Slashdot thread last week: the same tone in the articles back then, the same predictions, the same outcome. The big companies eventually stepped up with free tooling and compute for OSS projects. Maintainers adapted their procedures. The software world kept turning.
The answer was not to close the code. It was to adapt.
Cal.com is replaying a mistake the industry already corrected 25 years ago. The tool changed. Fuzzers then, LLMs now. The panic is identical. The correct response has not changed either.
(Honestly, if you had told a Slashdot commenter in 2001 that a scheduling startup would close its source in 2026 because of AI and call it a security strategy, they would have laughed you out of the thread.)
Closing the Code Is Just a Puzzle With More Steps
When a dev sees closed source, they don't see a wall. They see a puzzle. I know because I am that dev.
I mapped 27 undocumented Ghost endpoints in 17 minutes using Chrome DevTools Protocol. Full audit log. Database export in a single API call. TypeScript wrapper, 830 lines, 40/40 tests. Ghost is open source and I ran the whole experiment locally, publicly.
That was the publishable demo. I've since run the exact same method on three commercial applications. Products you probably use. Results identical. Two of those writeups will never come out.
The method does not care about your source code. It needs a browser. Chrome DevTools Protocol exposes every API call your application makes. An agent reads the traffic natively, iteratively, builds a complete map of your endpoints and data flow. No repo access. For Cal.com specifically, without touching their GitHub: TypeScript bundle is minified but not encrypted, mobile traffic is observable, every API call fires in DevTools the moment you load the scheduler.
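The mapping step can be sketched in a few lines. This assumes you've already captured Chrome DevTools Protocol events through a CDP client attached to the browser's debugging websocket; the sample events below are invented for illustration, though `Network.requestWillBeSent` is a real CDP event.

```python
# Reduce a captured CDP event stream to an endpoint map.
# Sample events are fabricated; only the event name is real CDP.
import json
from urllib.parse import urlparse

captured = [
    '{"method": "Network.requestWillBeSent", "params": {"request": '
    '{"method": "GET", "url": "https://app.example.com/api/bookings?user=42"}}}',
    '{"method": "Network.requestWillBeSent", "params": {"request": '
    '{"method": "POST", "url": "https://app.example.com/api/bookings"}}}',
    '{"method": "Page.loadEventFired", "params": {}}',
]

def map_endpoints(raw_events):
    """Reduce captured CDP traffic to a set of (HTTP method, path) pairs."""
    endpoints = set()
    for line in raw_events:
        event = json.loads(line)
        if event.get("method") != "Network.requestWillBeSent":
            continue  # ignore non-network CDP events
        req = event["params"]["request"]
        endpoints.add((req["method"], urlparse(req["url"]).path))
    return sorted(endpoints)

print(map_endpoints(captured))
# → [('GET', '/api/bookings'), ('POST', '/api/bookings')]
```

Run that loop while an agent clicks through the app and the endpoint map writes itself. Nothing in it ever reads a repository.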
Closing the code hides the blueprint. Not the building.
And the thing is, the devs who would have filed a GitHub issue about a permission check that leaks? Those devs now won't say a thing. You turned your most helpful users into silent bystanders.
Closing the code is a puzzle, not a wall.
You Just Locked the Empty Vault

Cal.com was, by their own description, the world's largest Next.js open source project. The community that found their bugs for free just vanished. That community owed them nothing. It showed up because the code was open.
cURL's Daniel Stenberg ran his bug bounty for 7 years. The confirmed-vulnerability rate started above 15%. By 2025, it collapsed below 5% because AI slop flooded the process until real reports drowned. He shut it down. Mitchell Hashimoto at Ghostty was more direct: "This is not anti-AI, this is anti-idiot." tldraw closed all external pull requests. Same problem, same exhaustion.
So the community is under stress everywhere. Fair. The question is whether closing helps or makes it worse.
Consider what Cal.com actually locked. Their Next.js code. And consider what they cannot lock, because it was never in the repo: their shipping velocity, their Google and Outlook integrations, their enterprise base, their five years of product experience. Red Hat's Linux is 100% open and IBM paid $34 billion for it. IBM did not buy the code. They bought the support, the certification, the trust.
Your code is the least defensible part of your business.
Karen from Accounting could have told them that. The asset on the balance sheet is the customer list and the renewal rate, not the GitHub repository. But nobody invites Karen to the security meeting.
And now for the part that should really worry Cal.com's customers. Close the code, and you don't just lose the people who filed issues for free. You start drifting back toward the world before open source. Broadcom bought VMware in late 2023; customer bills went up 10x within six months. Oracle Database still sits at $47,500 per CPU. Your side project runs at $15/month on a Hetzner VPS because Linux, Postgres, Redis, and Nginx are all open source. If every commercial OSS company closes its code out of Mythos panic, you don't lose the infrastructure layer. You lose the layers on top: schedulers, billing, analytics, auth. You drift back into a world where every component is an enterprise invoice.
That is what replaces open source. Not something better. Something more expensive. 🤷
The Real Answer Is Speed, Not Secrecy
Peter steipete chose the opposite direction. His strategy for OpenClaw: rapid iteration and code hardening, even though it introduced occasional regressions and people yelled at him for it. He sees it as the only way forward. I think he's right.
In a Mythos world, the defense is not hiding your code. The defense is patching faster than attackers exploit. Anthropic's own report says it: the advantage goes to whichever side gets the most out of these tools. In the short term, that could be attackers. In the long term, it should be defenders, because defenders have something attackers don't: the commit access to fix the code.
But "should" requires work. Publish a real SECURITY.md with an actual response SLA, not a template you copied from a GitHub starter repo. Automate your CVE scans and treat flagged dependencies like production incidents, not backlog items that sit for three sprints. Shorten your patch cycle. The gap between "vulnerability discovered" and "patch deployed" is the only window that matters now, and every day you leave it open is a day Mythos (or the next thing after Mythos) can walk through.
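That window can be measured. A minimal sketch of the metric, with invented advisory records for illustration:

```python
# Track the exposure window: days between disclosure and deployed
# patch. The advisory records are fabricated for illustration.
from datetime import date

advisories = [
    {"id": "CVE-2026-0001",
     "disclosed": date(2026, 3, 1), "patched": date(2026, 3, 3)},
    {"id": "CVE-2026-0002",
     "disclosed": date(2026, 3, 10), "patched": date(2026, 4, 2)},
]

def exposure_days(advisory):
    """Days a known vulnerability stayed unpatched in production."""
    return (advisory["patched"] - advisory["disclosed"]).days

def worst_window(records):
    """The single number to drive down: your longest open window."""
    return max(exposure_days(a) for a in records)

print(worst_window(advisories))  # → 23
```

Wire something like this into CI so that breaching the SLA in your SECURITY.md fails the build, the same way a failing test would. What gets measured gets patched.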
I ran my own dependency audit the week the Mythos report dropped. Found two outdated packages with known CVEs that had been sitting there since I last touched the project. Not because I didn't care. Because the process wasn't automated. That gap is what kills you. Not whether your code is on GitHub.
Open source plus rapid hardening is not open source plus hope for the best. It is disciplined work. But it's the only approach that survives in a world where the attacker's toolkit gets better every six months and closing the door doesn't actually close the door.
Long Live Open Source
So yes. Open source died yesterday. The one that counted on "many eyes" to compensate for the absence of real security discipline. The one that published code hoping the community would find the bugs. Mythos buried that one on April 7, when it surfaced a bug that had sat in OpenBSD for 27 years.
Pumfleet is right on that specific point. That model is done.
What survives is the OSS that takes security as seriously as a proprietary project. That publishes its SECURITY.md with a real SLA. That pays its maintainers (Tailwind couldn't pay 8 people despite 75 million monthly downloads: that's a business model problem, not an open source problem). That iterates fast. That has an explicit threat model, not a wish to never run into a motivated attacker.
I read the Mythos paper, I watched Cal.com close their code, and I chose the opposite. I'm not closing anything. I'm accelerating my patch cycles. I'm publishing my advisories. I'm staying open, because the value of my work has never been in my code. Staying open is the best protection I have against what's coming.
The king is dead. Long live the king.
Sources
Anthropic, "Assessing Claude Mythos Preview's cybersecurity capabilities," red.anthropic.com, April 7, 2026.
AISLE, "AI Cybersecurity After Mythos: The Jagged Frontier," aisle.com, April 2026.
Cal.com, "Cal.com Goes Closed Source," cal.com/blog, April 14, 2026.
(*) The cover is AI-generated. Two French comic characters arguing about vault security while a lobster watches, amused. Standard Tuesday.