Jensen Huang has been making bold claims these past two years—"the age of agentic AI has arrived," "trillion-dollar opportunities." In October last year, he said every NVIDIA engineer was using Cursor. At this year's GTC, he painted a picture of 75,000 humans paired with 7.5 million agents.
It sounds like coding agents are already everywhere. I checked the actual numbers.
The Numbers Are Surprising
Claude.ai has roughly 10–20 million monthly active users. Third-party estimates put Claude Code's weekly active users at around 1.6 million. Cursor has over 2 million users, with 1 million paying. On the open-source side, OpenCode has 140k stars on GitHub, and Cline has been installed over 5 million times in VS Code.
These aren't small numbers. But GitHub Copilot alone has nearly 20 million users, and a JetBrains survey from early this year found that 74% of developers are already using some kind of AI coding tool.
Copilot mainly does autocomplete; it doesn't qualify as an agent. Tools that actually work in agent mode, where you give them a task and they read files, write code, and run tests on their own, have perhaps a few million to ten-plus million users combined.
Something the world's most valuable company calls "world-changing" has only this many users. I originally guessed at least tens of millions.
It's bustling in China, though. The Kimi platform has over 30 million monthly active users, and ByteDance's Trae has over 6 million registered users. But these numbers include plenty of non-coding scenarios; actual coding agent users aren't that many.
Many people are discussing how to configure Skills, how to connect MCP. But maybe we need to take a step back: why can't so many people take that first step?
Not Just About Writing Code
The most common misunderstanding is treating coding agents as "tools for writing code." People who don't code think it's irrelevant; people who do code think it's just an upgraded Copilot.
In reality, these things do far more than code. Stitching videos, batch processing files, scraping data from websites, debugging unfamiliar software: they can do all of it. Essentially, a coding agent helps you control your computer to get tasks done; code is just its operating language.
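To make "code is just its operating language" concrete, here is the kind of disposable script an agent might write and run for a request like "prefix my text files with their last-modified date." The task and function name are hypothetical, purely for illustration; the point is that the user never needs to see or write this code.

```python
import datetime
import os

def prefix_with_mtime(folder: str) -> list[str]:
    """Rename each .txt file in `folder` to 'YYYY-MM-DD_<name>'; return the new names."""
    renamed = []
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".txt"):
            continue  # leave other file types alone
        path = os.path.join(folder, name)
        # Derive the date prefix from the file's last-modified timestamp.
        day = datetime.date.fromtimestamp(os.path.getmtime(path)).isoformat()
        new_name = f"{day}_{name}"
        os.rename(path, os.path.join(folder, new_name))
        renamed.append(new_name)
    return renamed
```

A person would describe the task in one sentence; the agent writes, runs, and throws away a script like this.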
Mobile phones are the exception. Phones are inherently GUI-centric, so coding agents don't work well on them. Those projects using models to control phones had their moment in the spotlight, but the approach is completely different and still not quite there.
The NPCs Have Woken Up
The deeper problem is mindset.
We've grown accustomed to predetermined interactions with software. Click here for this, drag there for that—everything has been designed. NPCs in games work the same way: they give you three options, you pick one; whether you read the dialogue or not doesn't matter.
Now the NPCs can suddenly think. They wait for you to speak, then do as you say.
This feeling is just like when ChatGPT first came out. I made quite a few tutorials teaching people how to use it, then realized most people got stuck on how to express themselves. They felt they needed to become "prompt engineers": speaking precisely, making the AI obey, with fancier tricks for advanced users.
It's not that complicated. Just treat the coding agent like a colleague. It understands what you say and can browse files in your project. Explain what you want to do clearly, and you're mostly done.
Don't Do It Yourself
There's another pitfall related to habits.
The more capable you are, the easier it is to fall into this trap. When facing a problem, the first instinct is to do it yourself. Like being a manager: clearly someone else could do it, but you always feel you'd be faster.
But coding agents might be ten times faster than you. The quality might be temporarily lower, but iteration speed makes up for it. The problem is once you start doing it yourself, you slide back into old habits—asking DeepSeek or Doubao when you hit a snag, fixing it yourself, using AI as a consultant. The work is still yours.
I now delegate 95% of my computer work to coding agents. Research, writing, programming, sending emails, operating web pages. Once you cross that threshold, you don't need anyone to teach you, because you can have it help you figure out how to use it better. That's the meta-skill.
Human in the Loop
But don't be too optimistic either.
Concepts like agent managers and multi-agent orchestration sound beautiful—AI managing AI, fully autonomous operation. The direction is right, but we're not there yet.
In practice, having someone knowledgeable in the loop makes a huge difference in efficiency. Let agents orchestrate themselves completely, and tasks fall apart as soon as they get complex. The Worker & Manager dynamic I discussed in a previous blog post is exactly about this.
In the short term, we still need someone in the middle. But this person's role isn't to do the work with their own hands; it's to clarify what is needed, check whether the results are right, and make the call at key moments.
Once you cross this threshold, many things naturally fall into place. If you don't, you'll always be on the outside watching others use it.
Originally published at https://guanjiawei.ai/en/blog/coding-agent-adoption-gap