On April 21, 2026, Anthropic quietly removed Claude Code from its $20 Pro plan. No email, no announcement, no changelog. People noticed because the pricing page changed overnight. A few hours later it got reverted, and Anthropic's Head of Growth posted on X that it was a test on 2% of new signups.
Simon Willison wrote the best account of what happened. He pays for Claude Max himself and is not exactly a hater. His point is simple: the revert doesn't matter. What matters is that Anthropic considered it.
The experiment got pulled. The signal didn't.
It is going to get more expensive
Claude Code is back in Pro today. But if you zoom out a bit, the direction is obvious.
GitHub did the same thing on the same day with Copilot. Earlier in April, Anthropic blocked third-party tools from running on Pro and Max subscriptions. Enterprise plans moved to per-token billing. Weekly caps got tighter. Peak-hour limits got stricter.
Claude Code already makes Anthropic billions of dollars a year. The company just raised $30B. This is not a company in crisis. It is a company figuring out how to charge its heaviest users more.
The logic makes sense from their side. From ours, it means one thing: the tool we built our workflow around can change price on us overnight, and there is nothing we can do about it.
The gap nobody talks about
A hundred dollars a month in San Francisco is not a hundred dollars a month in Bilbao, Buenos Aires, Lagos, or Jakarta. It is not the same for a two-person agency billing in euros, or a freelancer on a retainer, or a bootstrapped startup watching runway.
Most of the AI tooling conversation assumes one user: a well-paid dev at a US tech company. For that person, $100/month is noise. For a lot of working developers, it adds up fast. An agency with 15 devs on Max is $18,000 a year, and that is before you count Cursor, Copilot, or anything else.
But the price is not even the main problem. The main problem is depending on something you do not control. When your main tool can 5x its price on a Tuesday as a "test", you have a problem that is bigger than the bill.
You do not have to build anything from scratch
When people hear "build your own AI" they imagine training a model, GPU clusters, a whole research team. That is not what this is about.
It is more like cooking at home instead of eating out every day. You do not have to grow the wheat to make bread. You buy the pieces, you put them together, you eat.
The pieces already exist. There are solid open-source models you can run locally or use through an API for a fraction of what Claude costs. There are tools like Ollama that make running one of these models about as hard as installing an app. There are terminal workflows like OpenCode that work much like Claude Code but against whatever model you want.
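To make "run a model locally" concrete, here is a minimal sketch of talking to a model served by Ollama over its local HTTP API. It assumes Ollama is running on its default port (11434); the model name `qwen2.5-coder` is an example, not a recommendation, so swap in whatever you pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (streaming off for a single reply)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to the locally running model and return its text reply."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Assumes you already ran something like: ollama pull qwen2.5-coder
    print(ask_local("qwen2.5-coder", "Write a one-line function that reverses a string."))
```

That is the whole integration surface: one JSON endpoint on localhost. Tools like OpenCode are doing a more polished version of this same call.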
The other piece, which is actually the important one, is your stuff. Your codebase, your docs, your conventions. That is the part nobody else has. That is what makes a generic model useful for the work you actually do.
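One cheap way to put "your stuff" to work, without any training, is to prepend your own conventions or docs to every prompt. A sketch, assuming a hypothetical `CONVENTIONS.md` at the project root (the file name and prompt framing are illustrative, not a standard):

```python
from pathlib import Path

def with_project_context(prompt: str, conventions_path: str = "CONVENTIONS.md") -> str:
    """Prepend your project's conventions to a prompt so a generic model
    answers in your project's terms. Falls back to the bare prompt if the
    file does not exist."""
    path = Path(conventions_path)
    if not path.exists():
        return prompt
    return (
        "Follow these project conventions when answering:\n\n"
        + path.read_text()
        + "\n\n---\n\n"
        + prompt
    )
```

It is crude, but it captures the point: the same generic model gets noticeably more useful the moment it sees your patterns instead of nobody's.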
When the big players go open, you should notice
If this still sounds like a fringe thing, look at Cursor.
Cursor is one of the most popular AI coding tools right now. They just launched their flagship model, Composer 2, positioning it as their own in-house thing. Less than 24 hours later, a developer named Fynn found a model ID in their API traffic that gave it away: Composer 2 is built on top of Kimi K2.5, an open-weight model from a Chinese lab called Moonshot AI.
Cursor confirmed it and later published a technical report admitting the Kimi base. VC Tomasz Tunguz made the point well: a $50B company reaches near-parity with state-of-the-art at a fraction of the price, running on an open model they did not build. That is not ideology. That is math.
If a company valued at $50B reached that conclusion, a dev in Bilbao or an agency in Madrid can reach it too. At a smaller scale, same idea.
It is not only about saving money
Cost is the easy argument. The more interesting part is what you gain when you control the model.
You can shape it to your work. A model fed your own code and your own patterns gets better at your specific job, in a way a generalist model cannot, no matter how big it is.
Your code stays with you. Depending on what you work on, that is the difference between "we can use AI" and "legal says no".
No rate limits that reset every Tuesday. No weekly caps. No versions deprecated on someone else's schedule. No experiment changing your plan overnight.
None of this makes Claude irrelevant. What it does is make Claude one tool in a stack, instead of the stack.
What to try
The gap between what you can run locally and what you pay a subscription for is smaller than most people think. If you have not looked at this in a year, you are working with outdated assumptions.
A reasonable starting point: install Ollama, pull a coding model, plug it into OpenCode or something similar, and take the next small task you would give Claude Code. Give it to the local setup instead. Notice where it falls short and where it surprises you.
You will find that for a lot of the routine stuff (boilerplate, refactors, small features, tests), local models are closer to what you pay for than the marketing lets on. For hard problems, you will still want a frontier model. That is fine. The point is not to replace Claude. The point is to stop depending on it.
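"One tool in a stack" can be as dumb as a routing function: routine-looking tasks go to the local model, everything else to the frontier one. This is a naive sketch; the model names and the keyword heuristic are placeholders for whatever split you land on after trying it.

```python
# Hypothetical names: whatever you serve locally vs. whatever you pay for.
LOCAL_MODEL = "qwen2.5-coder"
FRONTIER_MODEL = "claude"

# Naive heuristic: words that usually signal routine work.
ROUTINE_HINTS = ("boilerplate", "refactor", "rename", "docstring", "unit test")

def pick_model(task_description: str) -> str:
    """Route routine-looking tasks to the local model, hard ones upstream."""
    desc = task_description.lower()
    if any(hint in desc for hint in ROUTINE_HINTS):
        return LOCAL_MODEL
    return FRONTIER_MODEL
```

The heuristic will be wrong sometimes, and that is fine; the point is that *you* decide where each task goes, and the frontier model becomes the fallback rather than the default.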
If running local is not an option, the same logic works through APIs. There are providers selling access to these open models for a lot less than Anthropic charges.
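The per-token math is worth doing yourself. The prices below are placeholders, not real quotes from any provider; look up current rates before deciding anything. The shape of the calculation is the point:

```python
def monthly_cost(price_per_million_tokens: float, tokens_per_month: int) -> float:
    """Dollars per month at a given per-million-token rate."""
    return price_per_million_tokens * tokens_per_month / 1_000_000

# Hypothetical numbers: a frontier model at $15 per million output tokens
# vs. an open-model provider at $1 per million, over a heavy 50M-token month.
frontier = monthly_cost(15.0, 50_000_000)       # 750.0
open_provider = monthly_cost(1.0, 50_000_000)   # 50.0
```

Even if the real numbers differ, the ratio is what matters, and for routine work it is usually lopsided.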
Closing
The question in 2026 is not "cloud or local". It is "which model for which task, and who decides".
If Claude Code costs $100/month next week, what happens to how you work? If the answer is "everything breaks", that is not really an AI problem. That is a dependency problem. And it is true for every other tool in your stack.
The hedge is not loyalty. It is having a plan B. Spend a Saturday on it.