How I Got There
It started with a number that scared me.
I was curious one week — how much code am I actually writing myself? So I trac...
This hits hard. I recently had my own 'Code Red' while building Commerza (a custom ecommerce engine). I got lazy and asked an AI to refactor a massive backend file—it butchered the logic and deleted 40% of my work.
Because I was 'vibe coding' and hadn't set up Git yet, I spent 9 hours manually rebuilding what I didn't understand as well as I thought. Now, I don't ship a single line unless I can explain the logic. Frameworks and AI are great, but the 'manual struggle' is where the actual engineering happens.
This story gave me chills. Thank you for sharing it.
"9 hours manually rebuilding what I didn't understand": that's the real cost. Not the time, but the realization that you didn't actually understand what you thought you did.
"No Git, no backup, AI butchered the logic": that's the perfect storm. I've felt that pit in my stomach.
"Now I don't ship a line unless I can explain the logic": that's the rule. Slower. Harder. Real.
"Manual struggle is where actual engineering happens" is the quote of the year.
Thank you for this. 🙌
Working without Git is just asking for trouble.
This is the most honest thing I've read about AI and coding.
The interview story (10 minutes stretching to 45) is the kind of specific, embarrassing detail that makes this real. I've felt that freeze. It's terrifying.
"The skill you're not practicing today is the skill you won't have tomorrow": bookmarking that.
Thank you for saying the quiet part out loud. 🙌
Thank you.
Freeze is exactly the right word. It's not that we can't solve the problem. It's that our brains have been trained to reach for AI instead of reaching for ourselves.
I'm glad the "skill you're not practicing" line hit. I wrote it for myself, to catch myself before I prompt without thinking.
Thanks for being here. 🙌
Thank you. This means a lot.
The problem isn't vibe coding. It's coding without feedback loops. If you generate code and ship it without reading it, sure, you'll atrophy. But if you prompt, read the output, understand why it works, and refactor what doesn't fit... you're still learning. Faster, even. The interview scenario proves you lost one specific skill (solving algo puzzles under pressure with no tools). That's a valid concern for interviews. Less clear it matters for actual production work where you always have tools available.
Fair pushback, and you make good points. 🙏
You're right: the problem isn't vibe coding, it's coding without feedback loops. If you prompt, read, understand, and refactor, that's learning, not atrophying. I should have made that clearer.
The interview example is specific. But my concern is broader: habits scale. If you stop reading AI-generated code because it's usually right, that habit carries over. And one day it won't be right, and you won't notice until it's in production.
But you're absolutely right that with discipline, vibe coding can be a superpower.
Thanks for this genuinely helpful nuance. 🙌
What would happen if you told AI to do something using programming terminology, instead of "do me x feature"? For example: "Build function x; it takes data x and outputs data y; inside the function, use a while loop," and so on. Would you be able to keep your skills while not slowing down?
That's kind of how I work. In my case it's often too complex to be understandable from one huge description; I want things done in phases, in steps. I've noticed I don't enjoy manual coding that much; I'm kind of done with small typos and boring typing. Sometimes it's just one function, or a group of functions with one goal in mind, and I extend upon those coding goals. The designs are mine, but I don't care much about sealed classes, factories, and the like; that's something AI can do fine. I'm more interested in the real-world logic and the ideas for how to solve something. I find myself more of a designer who gets the code as he thinks through the design. Though the problems I work on are hard, so perhaps I am biased.
I don't work on shopping sites and the like; I do emulations of real-world industrial automation.
This is such a smart question. I think you're onto something.
You're describing outsourcing the typing, not the thinking. You still break down the problem, choose the approach, and decide on the syntax; AI just does the keystrokes.
Would that preserve skills? I think yes, mostly. Your design skills stay sharp. But syntax muscle memory might still fade.
I tried something similar during my experiment. It was faster than writing everything myself, and I felt more engaged than just prompting "build x feature."
The real test: could you explain the code afterward without looking? If yes, you're learning.
Thanks for this; you've given me something to think about. 🙌
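As a minimal sketch of the contract-style prompting described in this exchange: the function name `movingAverage`, its signature, and the sample data below are all hypothetical, chosen only to illustrate "specify the contract, outsource the keystrokes." The comment shows the kind of prompt; the code is what you would still read and verify by hand.

```javascript
// Hypothetical prompt, phrased in programming terms rather than "build me a feature":
//   "Write function movingAverage(values, window): takes an array of numbers
//    and a window size, returns an array of window averages; use a while loop."
// The AI does the typing; the contract, loop choice, and edge cases stay yours.
function movingAverage(values, window) {
  const result = [];
  let i = 0;
  // Slide the window until its right edge passes the end of the array.
  while (i + window <= values.length) {
    let sum = 0;
    for (let j = i; j < i + window; j++) sum += values[j];
    result.push(sum / window);
    i++;
  }
  return result;
}

console.log(movingAverage([1, 2, 3, 4], 2)); // [1.5, 2.5, 3.5]
```

The test of whether this workflow preserves skill is the one named above: could you re-derive the loop bounds without looking?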
The 8.3% number stings, but that is not the real problem. The question is whether you understand the other 91.7%. There is a huge gap between AI code you can explain and code you just hoped worked; that gap is where things break.
You've nailed the real issue. The 8.3% number was just a symptom; the scary part is how much code I thought I understood but actually didn't.
That gap you mentioned? That’s where subtle bugs, security issues, and technical debt quietly grow.
My experiment wasn't against AI; it was for understanding. If you can't explain what the AI wrote, you don't own the code. You're just hosting it.
Thanks for putting it better than I did. 🙌
Exactly. And that debt is silent until it isn't; that's the part that never shows up in the commit history.
Through your fingers - into your mind!
Very honest post. Thanks for sharing.
Maybe it's not only about doing it yourself until you need the help, but about knowing what you know, and what you don't.
Things you don't know are far more important to understand by heart than patterns you merely recognize by reflex. Is knowing .map and the like helpful? Sure. Does it make the difference, manual-code-wise? I am not sure.
Writing code, typing through your fingers what you don't understand, helps you gain the knowledge. Copy/pasting from Stack Overflow can be as harmful as having it generated.
Maybe going back to those old magazine times, where we re-typed the Assembler code from pages 17-24 into our own machines, can help :)
"Through your fingers, into your mind": that's a beautiful way to put it.
You're right that knowing syntax isn't the real differentiator. The real differentiator is knowing what you don't know. AI can fill gaps, but it can't tell you where the gaps are.
The magazine example is perfect. Typing Assembler from pages 17-24 wasn't efficient. But it worked. The code entered through the fingers.
Thanks for this perspective; it's deeper than just "don't use AI." 🙌
I feel this hard. The vibe coding high is great until you're staring at a blank screen during a technical interview. I use Cursor for almost everything now and definitely notice my syntax memory slipping while my vision stays sharp. It's a weird trade-off for moving fast on the big picture. Still figuring it out in Cursor. Austin taught me: just start the thing, even if you have to google the basics again later.
This is such an honest comment. Thank you.
"The vibe coding high is great until you're staring at a blank screen during a technical interview": that's the whole problem in one sentence. Feels amazing until it doesn't.
Syntax memory slipping while vision stays sharp: that's the weird trade-off. You can see the big picture, but the basics are fading.
Austin's advice to just start the thing, even if you have to google the basics again, is the real solution. Not waiting. Just doing.
Thanks for sharing this. You're not alone. 🙌
I see AI in two distinct ways:
1. Experienced developers
Seasoned developers will use AI effectively because they know how to evaluate its output critically. They can quickly analyze AI-generated code, refine prompts, and steer the model toward better results. For them, AI is a productivity multiplier: a tool for building robust AI agents, writing unit tests and JSDoc, debugging, reviewing code for performance and memory issues, and handling many other development tasks.
2. Inexperienced or underprepared developers
Freshers, junior developers, or those with weak fundamentals may rely on AI blindly for generating code and shipping it without proper understanding or review. Their primary focus tends to be meeting delivery timelines and keeping managers or clients satisfied, rather than code quality, maintainability, or long-term impact.
As AI models and agents continue to become more capable and sophisticated, developers who fall into the second group are at high risk of becoming irrelevant. Those who don’t invest in strengthening their fundamentals and leveling up their skills will gradually be pushed out of the industry.
This is such a clear framework. Thank you. 🙏
Same AI tool, two different outcomes. The difference isn't the tool; it's the foundation underneath.
Experienced developers use AI as a multiplier. Inexperienced developers use it as a crutch. The industry incentivizes speed, and AI delivers speed, so the second group ships faster but never builds fundamentals.
Your prediction about them becoming irrelevant is harsh but probably accurate. The only real job security is growing alongside AI, not being replaced by it.
Thanks for this thoughtful breakdown. 🙌
I’ve been thinking about this a lot too, especially the losing syntax muscle memory part. I was wondering, do we actually need syntax muscle memory anymore?
Not fundamentals; those are obviously still critical. But the exact syntax for every method or language feature? We already rely on docs, autocomplete, and AI. If I can understand the generated code, does it really matter whether I could have written that syntax from memory? Coding interviews are a relevant example of where syntax memory matters, but what if interviews eventually move away from syntax-heavy coding tasks?
I also don't know the answer, so I try a hybrid approach: for some tasks I generate boilerplate with AI, but then refactor and improve the code by hand, just to keep my syntax skills in shape 🙂
This is such a thoughtful question and honestly, I don't have a perfect answer.
You're right to ask: do we actually need syntax muscle memory anymore? If AI can generate it, and I can understand it, does it matter that I couldn't write it from scratch?
Here's where I land and I'm still figuring this out:
Syntax memory isn't valuable by itself. Nobody gets paid for knowing whether map comes before filter. But syntax memory is a proxy for something deeper: fluency.
When you don't have to pause for syntax, your brain is free to think about structure, architecture, edge cases. When you're constantly looking things up, you lose flow. And flow is where good design happens.
You're right that interviews might change. They probably will. But the deeper question isn't "can I pass an interview?" It's "can I hold a complex system in my head without breaking flow?"
Your hybrid approach (generate boilerplate, refactor by hand) is actually brilliant. You get the speed of AI and the comprehension of doing it yourself. The refactoring step is where the learning lives.
Thank you for asking this; it's the kind of nuance this conversation needs. 🙌
Fluency coming from syntax is a really valid point. I hadn't looked at it from that perspective; it definitely gave me something to think about.
Between AI generation and writing by hand, maybe it really is about finding the right balance? Something you can only figure out in practice 🙂
Exactly. Balance is the real answer. And you're right: you only figure it out by practicing and paying attention.
Thanks for the thoughtful discussion. 🙌
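To make the hybrid workflow from this exchange concrete, here is a small sketch. The `users` data shape and both function names are hypothetical; the point is the before/after: a verbose loop of the kind AI often generates, refactored by hand into the idiomatic form. Both versions behave identically; the value is in doing the rewrite yourself.

```javascript
// Hypothetical AI-generated boilerplate: correct, but verbose.
function activeNamesGenerated(users) {
  const names = [];
  for (let i = 0; i < users.length; i++) {
    if (users[i].active === true) {
      names.push(users[i].name);
    }
  }
  return names;
}

// The hand refactor: same behavior, but rewriting it yourself
// keeps .filter/.map in your fingers.
function activeNames(users) {
  return users.filter((u) => u.active).map((u) => u.name);
}

const users = [
  { name: "Ada", active: true },
  { name: "Grace", active: false },
  { name: "Linus", active: true },
];
console.log(activeNames(users)); // ["Ada", "Linus"]
```

The refactoring step forces you to prove the two versions are equivalent, which is exactly the comprehension check discussed above.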
The "muscle atrophy" framing resonates. What I've started doing with my team: one day a week we turn off AI assistants entirely and pair program on a tricky bug. It's slower but the quality of architectural thinking that comes back is noticeable — people start reaching for first principles again instead of prompting their way around a problem. The trap isn't AI itself, it's using it as a substitute for understanding rather than a multiplier on it.
This is such a practical approach. Thank you for sharing.
One day a week, no AI, pair programming on a tricky bug: specific, doable, and the slower pace is the point. That's where thinking happens.
"The trap isn't AI itself; it's using it as a substitute for understanding rather than a multiplier": that's the perfect framing. I'm borrowing that.
I love that this is a team practice, not just individual. That's how real change happens.
How has the team reacted to the "no AI day"? 🙌
You used Claude to write this post, mmhm, I see the long dashes haha. But I can relate: after using AI, your brain just gets slower, and lazy. I think we are heading towards a time where nobody can actually code anymore; just the quality of prompts will determine the results.
Haha you caught me. 🙈 Guilty as charged on the em dashes.
But here's the honest truth: I used AI to help structure and organize my thoughts (the disclosure is at the bottom). The experiences, the 8.3% statistic, the interview story, the 30-day experiment: that's all mine. AI just helped me say it more clearly.
You're right though the fear of "nobody can actually code anymore" is real. That's exactly why I ran this experiment. Because I felt my brain getting slower and lazier. And I wanted to see if I could reverse it.
"Quality of prompts will determine results": that's the world we're heading toward. But I'd rather be the person who understands the code, not just the person who writes good prompts.
Thanks for the honest comment and for noticing the dashes. 😂🙌
The day 14 observation is the most telling part of this. Not the speed difference — the fact that you could explain the feature in 30 seconds without looking at the code. That's the real metric nobody tracks: how well do you understand what you just shipped?
I've noticed the same pattern with infrastructure work. I can prompt AI to generate a Terraform module or a CloudFormation stack in seconds, but if I can't explain why the subnet CIDR ranges are laid out the way they are, or why the security group rules reference specific ports, I've just created infrastructure debt that I'll pay for during the next production incident. The code works, but my mental model of the system has gaps — and those gaps are invisible until something breaks at 2am.
This is such an important addition. Thank you.
"How well do you understand what you just shipped?" That's the real metric. Not speed. Not tickets. Comprehension.
The infrastructure example is perfect. AI generates code that works. But your mental model has gaps. And those gaps are invisible until something breaks at 2 AM.
That's the debt nobody talks about. Not code debt. Mental model debt.
Thank you for this. 🙌
Really enjoyed reading this! You're right: as developers, we need to think through every possible way to build the logic for a problem. But today our brains are just rotting...
Thank you and you're absolutely right. 🙏
"Brain is just rotting": that's the quiet fear most of us don't say out loud. Not dramatically. Just slowly, one prompted line at a time.
The good news? It's reversible. The brain comes back when you force it to work again.
Thanks for reading. 🙌
This hit harder than I expected. I tried going “no AI” for a few days and realized how much I’ve started depending on it for even small things like syntax or quick debugging.
That "small things" realization is the scariest part.
Not architecture or complex systems, but syntax and quick debugging. The stuff you used to do without thinking. When that starts feeling hard without AI, you know something has quietly shifted.
The fact that you tried no AI for a few days is more than most people do. Most never run the experiment.
The dependency isn't permanent. It comes back. But the first few days are humbling.
Thanks for this. 🙌
It is the same when I solve DSA questions: I use AI only as a last resort.
"AI as a last resort": that's a simple rule that would save most of us.
DSA questions are the perfect example. Struggle first, then AI. That's where the learning happens.
Thanks for this. 🙌
AI saves our time... It's hard and slow without it. :(
You're not wrong: AI saves a ton of time. And going without it is genuinely hard and slow. I felt that too. 😅
The question isn't whether AI is useful. It's whether the time it saves is worth what it costs. And that answer is different for everyone.
For me, the slow way reminded me what I'm capable of without a prompt box. I don't want to lose that.
Thanks for the honest comment. 🙌