It's everywhere, it's eating up every other dev topic debate: learn to dev with AI or you'll be replaced. As a dev, I can't help but feel threatened by these claims. Way more than I should.
I mean, I love hacking around with the new cool toy as much as anybody. I use code assistants all day long, from Cursor tab to RefactAI. I felt so clever when I automated ticket creation with Jira MCP with full codebase and commit history knowledge. I felt not so clever when my "instant" refactoring took me a full day to review and rewrite to fix "just that little bug". I love automating my work as much as the next dev, but I also love to measure the results of automation.
So when big names come up with dull articles about the "4 levels of dev with AI", which are this close to "top 5 agents to pop a unicorn out of thin air in 2025", I feel something weird. They use code assistants dozens of times less than I do, yet they see "evidence" of a total world transformation I don't even see the beginning of in my day-to-day use.
Where is the collapse of LLM reasoning? Where are the small models winning in the end? Where are the control groups? Show me the data, I'll believe you. Where are the failures too? I've experienced more than one, surely you have data about that too if you're a big tech CTO, no? And then again, why prophesy so much before having the data? The natural move would be to train your team, maybe even a bit secretly, then outperform everybody and finally brag once you have secured the win. Right?
There must be something else.
"Something else" is often money
The first obvious reason is of course to have people burn billions in LLM tokens. This kind of narrative is really effective with C-levels who aren't really using it daily. It makes them feel like they're losing ground on a huge competitive advantage, and they'll force their teams to use LLMs for everything.
There is a nice side effect: you get social experimentation for free. I mean, hacking around commands, cursorRules and CLAUDE.md files is fun, but really the final product will have most of it baked in. But people are doing it for free—nope, people are paying to do that! They are running the experimentation at scale while burning tokens, documenting failures on the way. How convenient for an LLM provider?
And last but not least: hype drives capital. Meredith Whittaker puts it better than anybody:
Now, I get why a token dealer would overhype their product. But why would other "big money people" be so eager to buy it that they wouldn't run A/B testing or measure gains? There's a bit more at play than FOMO or paying rent forever to an LLM provider. When I hear "Most of your code will be written by AI", I can't help but hear "now I own your means of production, so you better keep a low profile".
The good ol' dehumanization dream
How many times have we seen a desperate look from a PO, customer or designer that says "But, it's just a form, why is it that hard? You lazy devs will invent anything to avoid working". Because we are a bunch of unmanageable weirdos, aren't we? Sometimes I wonder if I don't ship software just to prove I'm an uncontrollable smartass. Why do we always have an obscure HN link that makes our CEO's "big vision" fall flat? Why is there always one of us to run a script proving the financial forecasts are wrong? Why do we keep inventing squads and guilds and chapters and epics? Can't we just sit—not STAND—at our desks and do as the boss says?
Now, if you're managing a lot of devs, I get the temptation: "I have a big vision for humanity—that incidentally makes me rich and famous, but that's not the point—AI transforms that into a polished product". No more "what about… ?", no more pushbacks. No more human messiness. And if humans are not happy, the "big boss" can now close the precious token tap and give it to someone else. This would be much simpler, right? This would be all too tempting to make the devs more replaceable, more interchangeable, wouldn't it? Truly, I find it hard to ignore the power dynamics behind all of that.
It's a pattern we've seen before, from the Luddites to mines, from silk workers to the automotive industry or agriculture. With any wave of automation, there's also a shift of power that gets pushed. Whether it's wages, whether it's unions, whether it's outsourcing, there's often a power struggle that goes far beyond the actual productivity gain. And power is where we believe it is, right? (Yes, I'm totally citing Game of Thrones as the top of my philosophical references).
Be careful what you wish for
For the sake of argument again, let's admit that AI indeed replaces a good part of "the work". Now let me pose a caricatured problem to emphasize things. We have two teams designing the same product. One is business-school people only, with zero tech or design knowledge, who automate engineering and design with AI. The other is a group of hacky devs and UX designers who will automate PPTs, marketing and C-level jobs. Who do you put your money on, honestly?
Fun fact, there are only two things where I'm sure AI is saving me a lot of time as of now: writing tickets and doing large searches. Claude Code (or Goose) with Jira MCP is excellent at writing tickets from messy feedback and looking in the codebase to see if the workload is OK. Perplexity can save me an afternoon of searching with excellent links I could never have found. Large code refactoring though? Some of them "worked", but I'm still unsure if I spent more time reviewing them and eliminating dead code than I would have writing them from scratch.
And then, what if in the end we only need very small local models, called by pretty sophisticated agents that are mostly regular code? What if it's pseudo-code-to-code models, transformers or Syntax-Guided Synthesis that win? Why do we need "Language" models if it's autonomous anyway? Or is it? What if writing a specialized agent for your codebase becomes a required skill? Kimi K2 or Devstral aren't that far away; we don't need an expensive Claude to grep and make todos, to name only a few things.
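To make the "mostly regular code, tiny model" idea concrete, here is a minimal sketch of what such a specialized agent could look like. Everything specific in it is an assumption for illustration: the endpoint URL and model name stand in for any OpenAI-compatible server running a small model locally (Ollama, llama.cpp, whatever you have), and the "grep for TODOs" task is just an example. The point is simply that the searching is plain code and the model gets exactly one small, well-scoped job.

```python
# Minimal sketch of a "specialized agent": plain code does the searching,
# a small local model does one well-scoped summarization step.
# The endpoint and model name are assumptions (any OpenAI-compatible
# local server, e.g. Ollama or llama.cpp, would do).
import json
import pathlib
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed local server
MODEL = "qwen2.5-coder:7b"                                      # assumed small model


def grep_codebase(root: str, needle: str, max_hits: int = 20) -> list[str]:
    """Search the codebase with regular code -- no tokens spent on grepping."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if needle in line:
                hits.append(f"{path}:{lineno}: {line.strip()}")
                if len(hits) >= max_hits:
                    return hits
    return hits


def ask_local_model(instruction: str, context: list[str]) -> str:
    """One small, bounded completion instead of an autonomous token loop."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "Turn code search results into a short, prioritized todo list."},
            {"role": "user", "content": instruction + "\n\n" + "\n".join(context)},
        ],
    }
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    todos = grep_codebase(".", "TODO")
    print(ask_local_model("Summarize these TODO comments:", todos))
```

That's the whole trick: the looping, searching and tool calling is code you own, and the model can be swapped for whatever runs on your laptop.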
At this point, I think we just don't know what "AI for dev" means. We don't know "how much" and we don't know "what kind". I have zero clue which approach will work or not, nor at which magnitude. What I see though, is that there's one narrative that's pushed way harder than the others. One where devs lose power and buy tokens by the billions.
How convenient, because in many other narratives there are teams of 5 hackers disrupting giants on the cheap, like early Rails days on steroids.
What's the sane approach then?
If authoritarianism is the bourgeoisie being scared, then what should we think of Sam Altman being scared by GPT-5 and doing whatever he can to convince your boss that you're gonna be addicted to his tokens? Maybe he is more scared for his job than for ours. We don't have proof of either, but this should at least be explored, right?
In this light, the skeptic developer crowd is absolutely needed. We need to have some people not budging until the product is actually good with proven gains. They are the reference against which we measure, the control group. Early adopters should be outliers, not the bulk of the market. This is how we develop good products in the first place and why we don't like it forced on us. The urge to push an unfinished product feels dubious to us at best and rightfully so.
The sane approach is just to stay curious, test and measure. You're a hacky enthusiast? Fine, build some agents and test them. You're a pragmatic skeptic? Totally fine as well, don't use it until it makes your job really faster. We need to know that too. I totally understand that a large company would run an A/B test of teams with and without LLM tools and measure the output. I don't understand why they would force the results if it's really efficiency that is at stake.
All those prophecies really make me think that it's not only efficiency that is at stake though. It's shifting people's perception of where the power is. But remember, if a "visionary" CEO fires you to be replaced by AI, don't forget to run Claude on your full company folder and ask it to make the slides, financial projections and OKRs for the quarter. Whether you email that to them as a middle finger or keep it to yourself to spin up your own thing is up to you. Just note that maybe it's not you who's been replaced by AI in the end.
Cover is Waterfront by Joseph Kaplan
Top comments (25)
That is very well written and I couldn't agree more.
The hype is structural in the VC-backed tech industry
The reason skeptics are being dismissed is pretty clear too
It is difficult to get a man to understand something,
when his salary depends on his not understanding it.
-- Upton Sinclair
And just as I was reading your post, this was right below it on my DEV feed.
:/
When I am coding and in the zone, my brain is like a house of cards. To finish a complex thought, I need silence and focus. The problem with AI tools is that they are so unpredictable. Every time an autocomplete pops up, I am interrupted and have to evaluate: “Is this really what I want?” — and this disturbs my thinking so much that I still sometimes prefer to code with all AI turned off.
The same goes for prompting: it can derail me because I prompt, and then I need so much of my brain to review the too good-looking result. It takes me an hour to get into the zone and just minutes to be thrown out of it. And AI tools often do that. It’s like having a coworker sitting next to you with their own ideas and uncomfortable questions. Sometimes it helps, but often it doesn’t.
On the other hand, it can be powerful, and there are things I’ve found work really well. So I do use it a lot, and it brings me big productivity gains.
And yes, I would prefer to have these gains with smaller local tools instead of being dependent.
In the end, I guess the amount of time I spend tooling, playing around, and trying to keep up to date on this topic is equal to what it saves me. 😂
The “dev with AI or die” mantra misses a deeper truth: technology rarely kills craft, it reshapes it. Painters didn’t vanish with the camera. Writers didn’t vanish with the typewriter. Developers won’t vanish with AI, they’ll just be judged by what new things they BUILD because of it.
Well, I understand we must not be the film camera shop owner who blindly refuses to see the market. But there's one narrative that's pushed way too hard indeed.
I really liked your article, especially the part about lock-in. It's an aspect that often gets overlooked: the more we use centralized LLMs, the more we become dependent on the token dealer, and the harder it becomes to break free.
That said, I firmly believe in "learn to use AI or die." Personally, I believe anyone who thinks otherwise is either categorically refusing to use it or is using it incorrectly.
You shouldn't be writing code anymore; that's no longer our job. Our job today has moved up a level, to something with much higher value, much closer to technical leadership.
We need to grow as architects, as testers, as creators of documentation. We need to learn how to define context, set boundaries, build memory banks, craft system instructions and, above all, invest in security.
Think of it as leveling up: imagine becoming a team lead who’s given a group of devs with great potential but zero knowledge of your business domain. You need to learn how to train them to deliver what you want. You need to build your team and learn how to manage it.
Start treating AI as a team to manage, not as a tool to use.
Many people don't. And that's why, when you're one-sided and only see the instant results of AI coding, you're more than happy. This is the TikTok movement of coding. Pull the lever, and something appears. That something is code that may work. And if it doesn't, then you'll have to pull the lever again. There's some clever mind engineering done here.
Totally agree that intellectual laziness is a huge trap here. Many people seem to be content with "less intellectual load, no matter if it's actually longer". There's indeed a junk food/TikTok/sedentary-lifestyle metaphor or something along those lines.
Very good post!
Three weeks ago, I decided to stop using AI assistants daily for coding. I realized they were making me feel lazy, unmotivated and even sleepy, like running on autopilot, producing bad intentions and bad code.
I truly enjoy automating my work, but only to handle repetitive tasks so I can focus my energy on the logic and creative parts. After just four days of “detox” from LLMs and AI assistants, everything became clear again, and my brain returned to its natural way of thinking.
Yeah, I think that LLM disconnection will become a habit too. Too easy and tempting to go for it out of sheer laziness. But that's another topic...
I think (as you kinda mentioned) at the end of the day, this is all just about money. How many people can we scare into buying some large number of tokens for an LLM because they think they need it? How many new developers can we make believe that LLMs are the greatest thing since sliced bread?
I'm really tired of the anxiety-driven narratives around AI.
To me, AI is just a tool that helps me explain mistakes and restructure small pieces of code. It doesn't replace my thinking.
I primarily use ChatGPT to handle these minor tasks, allowing me to break free from some of the little chores. That feeling is quite refreshing.
I like this AND there are many voices:
AI is for sure getting better and cheaper, from the hardware to the open-source models and agents mentioned.
I think of this as an awesome opportunity for the little guys to run fast and out-execute the big guys.
I find the challenge is in not worrying "too too" much about mankind and our nature (we're nature too). Will AI help, hurt, or both? I expect both, like most of our powerful inventions.
I appreciate your post. And thank you for nudging me to write down my current thoughts.
All the best to everyone in this community!