If you use Claude Code every day like I do to build apps, you’ve probably experienced this too.
The most painful part is when it suddenly stops in the middle of your work. You could be writing code, doing deep research, or finishing the structure of an article, and then you hit a usage limit. At that moment, your concentration on the work is completely broken.
Two days ago, that changed.
Anthropic has announced a computing resources partnership with SpaceX.
In addition, usage limits for Claude Code have also been increased.
Most notably, the 5-hour usage limit for Claude Code has been doubled for Pro, Max, Team, and seat-based Enterprise plans.
For those using Claude Code, this is simply a welcome change.
However, I think it would be a shame to simply end this news with the statement that “Claude Code has become more usable.”
The core issue is that competition between AI development tools is no longer only about how intelligent the models are.
No matter how advanced an AI is, if it hits its limits exactly when you need it most, it stops being useful in practice. On the other hand, an AI that can keep working reliably for a long time without interruption has real value.
The usability of AI tools is not determined only by the chat interface users see on the surface. The real experience also depends on the infrastructure behind it, including computing resources, data centres, power supply, and GPU availability.
The usage limits for Claude Code have been doubled.
There are three main points to what Anthropic announced this time.
The first change is doubling the 5-hour usage limit for Claude Code.
This applies to Pro, Max, Team, and seat-based Enterprise plans.
The second change is the removal of peak-time restrictions on Claude Code for Pro and Max accounts. This change directly addresses complaints about stricter restrictions during peak hours.
The third change is a significant increase in API rate limits for the Claude Opus models. This is more relevant to developers and enterprise users than to individual users.
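For developers, rate limits usually surface as intermittent "too many requests" errors. As a rough, hypothetical sketch (the class and function names here are illustrative stand-ins, not the real Anthropic SDK), a client typically absorbs these with exponential backoff:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 'too many requests' response from a model API."""

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff plus a little jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            # Wait base, 2x base, 4x base, ... plus jitter before retrying.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
    raise RuntimeError("still rate-limited after retries")

# Simulated endpoint: rejects the first two calls, then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

print(call_with_backoff(flaky_call, base_delay=0.01))  # → ok
```

Higher rate limits mean this retry path triggers less often, which matters most for automated pipelines that make many Opus calls in a row.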
In other words, this change is not so much about making it “a little easier to use” as about relaxing restrictions for people who use Claude seriously.
Claude Code, in particular, is a tool that proves more valuable in longer-term projects than in one-off questions.
Have it read the code.
Have it investigate the cause.
Have it correct the code while looking at multiple files.
Have it isolate the error.
Have it think about the implementation strategy.
Have it check the result again after making the corrections.
When used in this way, limitations cease to be just numbers.
Will it stop midway through the process? Or can it be entrusted with the entire workflow from start to finish? This is where the experience makes all the difference.
The essence of the matter is not that “Claude has become smarter,” but rather that “the amount of work you can continuously delegate to it has increased.”
In AI news, attention tends to focus on stories like “a new model has been released,” “benchmarks have improved,” or “answer accuracy has increased.”
Of course, the model’s performance is important.
However, when you actually use it in work or development, other problems become apparent.
The question is: how long can it be used continuously?
For example, let’s say you’re using AI to fix code.
First, you explain the problem.
The AI reads the code.
It guesses the cause.
It proposes a solution.
It actually makes the correction.
An error occurs.
It investigates again.
It looks at other files as well.
It makes another correction.
This process doesn’t end with just one question.
Rather, the value of AI development tools lies in their ability to move the work forward through repeated interaction.
If a limit is encountered along the way, the workflow will be interrupted.
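The loop above can be sketched in a few lines. This is a deliberately toy model (the test runner and the model are fake stand-ins I made up for illustration), but it shows why a usage budget matters: the budget can run out before the tests pass, leaving the task unfinished.

```python
# Hypothetical sketch of the read -> diagnose -> patch -> re-test loop,
# with a usage budget that can cut the workflow short.

def run_tests(code):
    """Stand-in test runner: passes once every bug marker is gone."""
    return "BUG" not in code

def propose_fix(code):
    """Stand-in for the model: fixes one bug marker per iteration."""
    return code.replace("BUG", "", 1)

def fix_loop(code, budget=10):
    """Iterate until tests pass or the usage budget is exhausted."""
    used = 0
    while not run_tests(code):
        if used >= budget:
            # Hitting a usage limit mid-loop leaves the task unfinished.
            return code, "interrupted"
        code = propose_fix(code)
        used += 1
    return code, "done"

code, status = fix_loop("def add(a, b): return a + b  # BUG BUG")
print(status)  # → done
```

Doubling the limit is, in this picture, doubling the budget: the same loop simply reaches "done" in more of the cases where it previously returned "interrupted".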
On the human side, once the process stops, there is a cost to restarting. You end up wondering: “Where did we leave off?”, “What did we fix?”, and “What do we need to check next?”
Therefore, doubling the usage limit for Claude Code is not simply a change in the upper limit.
This change is quite significant for those who use AI as a “worker” rather than as a “consultant.”
The partnership with SpaceX demonstrates that computing resources have become a competitive advantage for AI companies.
What’s even more interesting about this announcement is that the relaxation of Claude Code’s limits is being discussed alongside a computing resources partnership with SpaceX.
Anthropic has stated that it has signed a contract to utilize the computing power of SpaceX’s Colossus 1 data centre, where it will have access to over 300 megawatts of new capacity and more than 220,000 NVIDIA GPUs.
What’s important here is that for AI companies, computing resources are no longer just a behind-the-scenes tool.
The user experience of an AI service is not determined solely by the model.
The model must be intelligent.
The UI must be user-friendly.
It must have strong tool integration.
And it must have sufficient computing resources.
Only when all of these elements are present does it finally become an AI that can be used in practical applications.
In particular, with development support tools like Claude Code, the load increases as the number of users grows.
Asking short questions via chat is a relatively light use case.
However, having it read the entire codebase, work across multiple files, or maintain long contexts will significantly increase the burden on computing resources.
In other words, AI companies need to be not only “companies that create good models,” but also “companies that secure large amounts of computing resources.”
I think this is the most significant point of this news.
A new way of looking at AI development tools, one that also connects to Codex and Cursor
This story isn’t just about Claude Code.
This also relates to Codex, Cursor, Windsurf, GitHub Copilot, and other AI development tools.
From now on, when looking at AI development tools, simply asking
“Which model is the smartest?”
will no longer suffice.
Rather, the following perspective is necessary.
How long can it continue working?
How many files can it handle?
Is it less likely to encounter limitations?
Is it stable even during peak hours?
Does it avoid interrupting the human workflow?
Can it handle everything from error isolation to correction?
This is quite important in practical work.
When comparing the performance of AI systems, our attention is often drawn to benchmarks.
However, from the perspective of someone actually using the tool, there are many situations where the more important question is,
“Can I complete this task right now?”
An AI that is highly intelligent but quickly hits its limits.
An AI that is a little slower but can continue working steadily.
Which one is easier to use for work depends on the situation.
I think Anthropic’s recent move will slightly change the criteria for evaluating AI development tools.
What’s important to the user is how much of the work they can continue to delegate.
This relaxation of restrictions should be especially beneficial for those who use Claude Code extensively.
Conversely, if you only use it occasionally to ask questions, you might not notice much of a difference.
However, for those who use AI to read, correct, and verify code, and even track down other problems, it is quite meaningful.
AI development tools are evolving from simply providing answers to becoming integral parts of the workflow itself.
In that case, the quality of the first answer isn’t the only important factor.
It must not stop midway.
It must maintain context.
It must be able to handle long tasks.
When a person returns, they must be able to understand where the work left off.
These factors contribute to actual usability.
The news that the usage limit for Claude Code has been doubled is, on the surface, a relaxation of the limits for users.
However, behind this lies the reality that AI companies are making significant moves to secure computing resources.
Competition in AI tools isn’t determined solely by model performance.
Nor by UI alone.
Nor by agent functionality alone.
From now on, the key will be
How stably, for how long, and to how many people can intelligent AI be provided?
The relaxation of Claude Code’s limits is a very visible example of this change.
From the user’s perspective, it simply means the tool is more usable. But if you take a step back, you can see that the competition among AI development tools has expanded from comparing models to comparing infrastructure.
In the future, when looking at Codex, Cursor, and other AI development tools, I think it will become increasingly important to consider not just “which one is the smartest,” but also “which one can keep the work going without interruption.”
My experience as a heavy user
In my experience, I use Claude Code on my Mac every day.
I’ve hit the five-hour usage limit more times than I can count. On Tuesday evening, I ran five coding sessions one after another.
On Thursday afternoon, I also used it alongside Codex for cross-checking. That feeling of stopping just before “finishing” is something only people who use AI in their work can really understand.
I’ve also noticed it sometimes gets slower during peak hours, especially in the morning on the US East Coast. That slowdown was something I really wanted to see fixed.
So in that sense, today’s change is something I’ve been waiting for. But when I read about the idea of on-orbit GPUs, I had a different reaction.
I started to think about how far we are going just to make things more convenient.
I would highly appreciate it if you:
❣ Join my Patreon: https://www.patreon.com/GaoDalie_AI
- Book an Appointment with me: https://topmate.io/gaodalie_ai
- Support the Content (every Dollar goes back into the video):https://buymeacoffee.com/gaodalie98d
- Subscribe to the Newsletter for free: https://substack.com/@gaodalie