AI tools made developers feel 20% faster. Then researchers measured them at 19% slower. Read that again.
That's not a rounding error. That's a 39-point gap between how fast developers felt and how fast they actually were: 20 points of perceived speedup stacked on top of 19 points of measured slowdown.
This is the reality check the industry desperately needs. The "perception vs. reality" gap is a classic example of the Dunning-Kruger effect meeting high-speed autocomplete; we feel like we're flying because the cursor is moving, but we're often just creating more "dark matter" code that requires future cleanup.
It’s a sobering reminder that velocity (speed with direction) is not the same as speed (how fast you're moving). If the direction involves subtle bugs and architectural drift, we're just moving toward a legacy nightmare 19% faster.
I think the underlying issue is that people misuse the tool. Honestly, SDD is not vibe coding. You can't expect an agent to perform the task you ask for, the way you want it, from a single prompt. That's why you need more detailed and deeper context, able to survive across sessions, able somewhat to "learn" from existing and produced code. This is a looong discussion. The study you pointed to didn't take into consideration some parameters that are pillars of SDD, like scope definition and tool management. The poorer your specs, the poorer the output, and from there comes all the iterative rework that is slowing down most of the "developers". I explain more here (dev.to/marcosomma/how-i-accidental...). Happy to get your feedback.
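To make the "context that survives across sessions" point concrete, here's a minimal sketch (my own illustration, not the linked post's implementation; `SPEC.md` and `build_prompt` are hypothetical names):

```python
from pathlib import Path

# Hypothetical per-repo spec file: scope, constraints, conventions.
SPEC_FILE = Path("SPEC.md")

def build_prompt(task: str) -> str:
    """Prepend the persistent spec so every session starts from the same
    scope, instead of re-deriving context from a one-off prompt."""
    spec = SPEC_FILE.read_text() if SPEC_FILE.exists() else ""
    return f"{spec}\n\n## Task\n{task}" if spec else task
```

The point is that the spec is versioned next to the code, so it improves as the codebase does instead of living and dying inside one chat session.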
Good point, but it’s context-dependent: in familiar codebases, AI often adds review and debugging overhead, making experienced devs slower, while in greenfield or unfamiliar work it can still help. The real problem is teams relying on perceived productivity instead of measuring actual outcomes before making decisions.
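Agreed on measuring actual outcomes; even a crude per-task log beats gut feel. A minimal sketch in Python (the file name and schema are assumptions for illustration):

```python
import csv
from pathlib import Path

LOG = Path("task_times.csv")  # hypothetical log, one row per finished task

def record_task(task_id: str, condition: str, seconds: float) -> None:
    """Append a completed task; condition is 'ai' or 'no_ai' so the two
    workflows can be compared on wall-clock time later."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["task_id", "condition", "seconds"])
        writer.writerow([task_id, condition, f"{seconds:.0f}"])
```

A few weeks of rows like that is enough to compare median completion times per condition, which is exactly the perceived-vs-actual check the study ran.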
All I can say is that AI errors add up faster than we think. That's one of the weaknesses every AI model faces right now.
It doesn't matter to those who don't look at the code generated by the AI. A thinly veiled nod to the creators of OpenClaw, Hermes Agent, Claude Code, who clearly embrace this. 🙃
Please don't do that on a serious project. The damage from a production bug (depending on the impact) can ruin a company.
The felt-speed vs. measured-speed gap is the more interesting story. My guess: AI removes the friction of starting (no blank page anymore), which is what creates the felt-faster sensation. But the actual lag is in the correction loops on AI output, and total wall-clock time loses to the felt relief.
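Here's a toy model of that hypothesis; every number below is invented for illustration, nothing comes from the METR study:

```python
def wall_clock(start_friction: float, first_draft: float,
               fix_cycle: float, cycles: int) -> float:
    """Total minutes on a task: getting started, producing a first draft,
    then repeated review/correction loops."""
    return start_friction + first_draft + fix_cycle * cycles

# Without AI: slow start, one review pass.
baseline = wall_clock(start_friction=30, first_draft=60, fix_cycle=20, cycles=1)  # 110 min
# With AI: near-instant start, but more and longer correction loops.
with_ai = wall_clock(start_friction=2, first_draft=15, fix_cycle=25, cycles=4)    # 117 min

# Code appears at minute 17 instead of minute 90, so it *feels* faster,
# yet total wall-clock time is worse once the correction loops are counted.
```

Swap in your own numbers; the shape of the argument is that felt speed tracks time-to-first-code while measured speed tracks the whole loop.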
the study measured experienced devs on their own familiar repos - arguably the worst scenario for AI. show me numbers for junior devs on greenfield tasks and the story might look totally different
Can you provide the source? I searched here, but I couldn't find it.
metr.org/blog/2025-07-10-early-202...
Here's an interesting exercise--replace "AI" with "Agile" throughout this post and see what you think.
Interesting gap: perceived gains don't always match real productivity, especially when AI adds hidden overhead like review and correction.
yeah, you are right