I closed Cursor after ten minutes.
It was September. Everyone was talking about AI agents and context windows and something called "agent mode" that supposedly changed everything. I downloaded Cursor, opened it, watched it index my codebase ... and felt my chest tighten. The interface was wrong. The shortcuts were wrong. Everything I'd spent fifteen years optimizing was suddenly foreign.
I went back to VS Code that afternoon. Told myself I'd try again later. Kept maxing out my GitHub Copilot credits instead.
Three weeks later, I tried again. This time I stayed long enough to build something real. Now I have ChatGPT, Claude, Cursor with multiple models, and OpenClaw + Ollama running in a virtual environment. I use different tools for different problems. I've become the person I couldn't imagine being in that first ten minutes.
The divide I almost fell into isn't about willingness to try new things. It's deeper than that. The split won't be good developers versus bad developers. It'll be AI masters versus everyone else. And the gap is already widening faster than most people realize.
The $2,000 Engineer
I've been talking to principal engineers across the industry. Large enterprise companies. Household names. People whose scope spans systems most developers never touch. And I'm seeing a pattern that contradicts the mainstream narrative entirely.
The analysts say AI is a 2-5x multiplier. The principal engineers I'm talking to say that's wrong. They argue AI isn't multiplication ... it's exponentiation.
One of them spends over $2,000 a month on tokens. Not because he's wasteful. Because he's built systems that compound. He's orchestrating agent loops, chaining workflows, training algorithms on his codebase until they anticipate his architectural decisions. The output isn't faster code. It's different code. Solutions he wouldn't have reached through traditional means.
These aren't junior engineers copy-pasting Claude outputs. These are masters of their craft who recognized that systematic thinking ... the skill that got them to principal ... is the exact skill that unlocks AI's real potential.
They're not prompting. They're building operating systems.
The License Turned In
I have another story. An old colleague. Tech lead at a well-known lifestyle brand. Solid engineer. Smart person. Someone who could solve problems other people got stuck on.
His company doesn't mandate AI usage. Some teams use it, some don't. Leadership treats it like a preference ... Vim versus Emacs, tabs versus spaces. Personal choice.
He turned in his AI license.
Said he wanted to stay "pure." Keep his skills sharp the traditional way. Prove he could still code without assistance. And I get the instinct ... I had it in those first ten minutes with Cursor. The discomfort of a new tool feels like a threat to your identity.
But here's what I can't reconcile. No one can deny the speed. No one can deny the output. The engineers spending $2,000 a month aren't producing marginally better work. They're producing categorically different work. Solutions at different altitudes. Architectures that wouldn't emerge from traditional workflows.
My old colleague thinks he can catch up when he's ready. That the gap is just familiarity with a new interface. Ten minutes of discomfort, stretched across a few weeks.
He's wrong.
What Masters Actually Do
The narrative that AI is "just prompting" is dangerously incomplete. Yes, you can get value from good prompts. But the engineers who are pulling ahead aren't writing better prompts. They're thinking in systems.
A regular AI user goes to their codebase and says: "I need a function that does X." They write a prompt. They get frustrated when the output is non-deterministic. They try again. They settle for something close enough.
The master builds differently. They start with guardrails. Error handling patterns. Architectural constraints encoded into the environment. They train their AI on their standards until it knows ... before they ask ... what "good" looks like in their system.
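One way to picture "encoding standards into the environment" is that the AI never sees a bare request: every task is wrapped in the team's guardrails before it reaches the model. The sketch below is illustrative, not a real API ... `TEAM_STANDARDS` and `build_prompt` are assumed names, and the standards themselves are placeholders for whatever your codebase enforces.

```python
# A minimal sketch: every prompt is assembled from encoded team standards,
# so the model knows what "good" looks like before it is asked anything.
# TEAM_STANDARDS and build_prompt are illustrative names, not a real API.

TEAM_STANDARDS = """\
- All public functions carry type hints and docstrings.
- No bare `except:` clauses; catch specific exceptions.
- Database access goes through the repository layer, never raw SQL in handlers.
"""

def build_prompt(task: str, standards: str = TEAM_STANDARDS) -> str:
    """Wrap a raw task in the architectural constraints the model must honor."""
    return (
        "You are generating code for our codebase. Non-negotiable standards:\n"
        f"{standards}\n"
        f"Task: {task}\n"
        "Flag any request that cannot be satisfied within these standards."
    )

print(build_prompt("Add a function that fetches a user by id."))
```

The point of the wrapper is that the standards travel with every request automatically; nobody has to remember to paste them in.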
Principal engineers have an unfair advantage here. They're already thinking in systems. They've spent years building mental models of how components interact, where complexity hides, which abstractions hold and which collapse. AI doesn't replace that thinking. It amplifies it.
The master engineer doesn't fight the non-determinism. They harness it. They know that asking the same question three times yields three valid approaches, and their job is selecting the right one for the context. They're curators, not just consumers.
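The curator's loop can be sketched in a few lines: ask the same question several times, score each answer against the current context, keep the best. Everything here is a stand-in ... `generate_candidates` fakes a non-deterministic model with canned variants, and `score` is a toy preference order, not a real evaluation.

```python
import random

def generate_candidates(task: str, n: int = 3) -> list[str]:
    """Stand-in for a non-deterministic model: same task, varied outputs.
    In practice this would be n separate calls to your LLM of choice."""
    variants = [
        f"# iterative solution for: {task}",
        f"# recursive solution for: {task}",
        f"# vectorized solution for: {task}",
    ]
    return random.sample(variants, k=n)  # shuffled, like real model variance

def score(candidate: str) -> int:
    """Context-specific selection criteria -- here a toy preference order."""
    preferences = ["vectorized", "iterative", "recursive"]
    for rank, keyword in enumerate(preferences):
        if keyword in candidate:
            return len(preferences) - rank
    return 0

def curate(task: str) -> str:
    """Ask three times, keep the answer that best fits the context."""
    return max(generate_candidates(task), key=score)

print(curate("sum a large array"))
```

The `score` function is where the engineering judgment lives; swapping in real evaluation criteria (tests passing, latency, adherence to standards) is the whole game.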
This is why "I use ChatGPT sometimes" doesn't mean what people think it means. The tool is available to everyone. The operating system around the tool is what separates the masters from everyone else.
The Non-Determinism Problem
People complain that AI is unreliable. Same prompt, different outputs. Unpredictable behavior. Hard to trust.
But humans are equally non-deterministic. Ask an engineer the same architecture question three times ... morning, afternoon, after a bad deploy ... and you'll get three different answers based on sleep, stress, blood sugar, whether their standup ran long. We've always managed this variability with standards and accountability. Code review. Design patterns. Architectural decision records.
The same solution applies to AI. The problem isn't tool reliability. It's absence of standards.
The engineers who are thriving aren't luckier or working with better models. They've built constraint systems. Their AI operates inside encoded rules ... lint configs, architectural tests, workflow definitions that keep the creativity pointed in productive directions.
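An architectural test is one concrete form such a constraint system takes: a check that runs over any code, human- or AI-written, and rejects output that crosses a forbidden boundary. The sketch below assumes a made-up layering rule (handler code may not import database drivers directly); the layer names and banned modules are illustrative.

```python
import ast

# Hypothetical layering rule: handler code must not import DB drivers directly.
FORBIDDEN = {"handlers": {"sqlalchemy", "psycopg2"}}  # layer -> banned imports

def imported_modules(source: str) -> set[str]:
    """Collect top-level module names imported by a source file."""
    tree = ast.parse(source)
    mods: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def violations(layer: str, source: str) -> set[str]:
    """Return banned modules this layer's code imports directly."""
    return imported_modules(source) & FORBIDDEN.get(layer, set())

bad = "import psycopg2\n\ndef get_user(uid): ..."
print(violations("handlers", bad))  # {'psycopg2'}
```

Wired into CI, a check like this doesn't care whether a model or a person wrote the code ... the constraint holds either way, which is what keeps non-deterministic output pointed in productive directions.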
This is what I missed in my first ten minutes with Cursor. I thought the tool was the unlock. It's not. The unlock is the system you build around the tool.
The Two Camps
This story has been told before. New technology arrives. Two camps form. One embraces the change, builds expertise, pulls ahead. The other waits for the dust to settle, for best practices to emerge, for the tool to mature.
Only one camp ever wins.
The divide isn't about talent. Talent is too distributed, too random. The divide is timing. How early someone decided this was worth getting good at.
I see it in my own trajectory. Those first ten minutes of discomfort could have stretched into weeks, months, years. I could have kept maxing out Copilot credits in VS Code, telling myself I was staying current, while the real practitioners were building fundamentally different capabilities.
The engineers spending $2,000 a month on tokens aren't spending money. They're buying optionality. They're stacking advantages that will compound again before the late adopters even recognize the game has changed.
What This Means for Leaders
If you're leading engineering teams, you have a decision to make. You can treat AI like a productivity tool ... optional, preference-based, something engineers pick up if they're interested. Or you can recognize it as a fundamental shift in what engineering excellence looks like.
The "purists" on your team aren't preserving their skills. They're preserving their comfort. And comfort is expensive in a market that's moving this fast.
The question isn't whether your engineers use AI. It's whether they've built systematic approaches to using it well. Whether they can describe their eval harnesses, their agent loops, their constraint systems. Whether they've thought beyond prompting to orchestration.
If they can't ... they're not in the first camp. They're in the second.
The measuring stick has changed. You're not being evaluated against the engineer next to you anymore. You're being evaluated against the one who started practicing eighteen months ago.
The split is here. The camps are formed. Darwin already explained what happens next.