DEV Community

Devesh Korde

I Talk to AI While I Code. Here's What Works, What Fails, and Where I Stop.

I'll be honest. A year ago, if you told me I'd be having full conversations with an AI while building features at work, I would have laughed. Not because I didn't believe the tech was coming, but because I didn't think it would actually be useful in the messy, context-heavy, "why is this CSS not working" reality of day-to-day development.

I was wrong.

I now use AI tools almost every day. Not as a replacement for thinking, but as something closer to a really fast colleague who never gets annoyed when I ask dumb questions at 11 PM. But I've also learned where it falls apart, where it confidently leads you off a cliff, and where I personally choose to not use it at all.

What Actually Works
Let me start with the stuff that has genuinely changed how I work. Not in a "this is the future" hype way, but in a "this saved me 45 minutes today" way.

Debugging Partner
This is the single biggest win. When I'm staring at an error message that makes no sense, or a component that renders fine locally but breaks in production, explaining the problem to an AI often gets me to the answer faster than StackOverflow ever did.

We had a page that got progressively slower the longer a user kept it open. No errors, no warnings, just gradual performance degradation. I described the component tree and the observable patterns we were using, and Claude caught that a switchMap inside a nested subscription wasn't completing when the parent component was destroyed, because the outer observable was tied to a shared service that lived outside the component lifecycle. The subscriptions kept piling up silently. Not something you'd catch in a code review unless you were specifically looking for it.
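That bug is easy to reproduce in miniature. Here is a dependency-free TypeScript sketch of the mechanism (a toy event stream stands in for the RxJS observable; all the names are illustrative, not from the real codebase): a shared service outlives any single component, so every component instance that subscribes without tearing down leaves a handler behind. In real Angular/RxJS code the usual fix is `takeUntil` or `takeUntilDestroyed` rather than a manual unsubscribe.

```typescript
type Listener<T> = (value: T) => void;

// Toy stand-in for an RxJS Subject living inside a shared service.
class TinyStream<T> {
  private listeners = new Set<Listener<T>>();
  subscribe(fn: Listener<T>): () => void {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn); // the unsubscribe "teardown"
  }
  emit(value: T): void { this.listeners.forEach((fn) => fn(value)); }
  get count(): number { return this.listeners.size; }
}

// Shared service: created once, lives for the whole app session.
const sharedService = new TinyStream<number>();

class LeakyComponent {
  ngOnInit() { sharedService.subscribe((v) => this.render(v)); } // teardown discarded
  ngOnDestroy() { /* nothing: the subscription outlives the component */ }
  render(_: number) {}
}

class FixedComponent {
  private unsubscribe: (() => void) | null = null;
  ngOnInit() { this.unsubscribe = sharedService.subscribe((v) => this.render(v)); }
  ngOnDestroy() { this.unsubscribe?.(); } // tie teardown to the component lifecycle
  render(_: number) {}
}

// Simulate the page staying open: components mount and unmount repeatedly.
for (let i = 0; i < 100; i++) {
  const c = new LeakyComponent();
  c.ngOnInit();
  c.ngOnDestroy();
}
console.log(sharedService.count); // 100 handlers piled up silently
```

Each mount/unmount cycle of `LeakyComponent` adds a listener that nothing ever removes, which is exactly the "no errors, just gradual decay" profile from the story.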

The key here is that AI doesn't just search for your error message. It reasons about the interaction between different parts of your code. That's the difference.

Boilerplate and Repetitive Code
I write a lot of Angular components at work and React components for my personal projects. The amount of boilerplate involved in setting up a new component, a service, a route configuration, a form with validation, is significant. AI handles this extremely well.

I describe what I need in plain English. "Create an Angular component that takes a list of items and displays them in a table with sorting and a tooltip on each status column." And I get back something that's 80-90% correct. The remaining 10-20% is where my actual expertise comes in, adjusting it to fit our codebase, our styling conventions, our state management patterns.

That last bit is important. AI gives you a starting point. Your job is to shape it into something that belongs in your project.
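To make that 80-90% concrete, here is roughly the kind of sorting core such a generated table component might contain, as plain TypeScript (names like `sortBy` and `SortDirection` are illustrative, not from any real codebase). Wiring it into your template, styling, and state management is the remaining part you still own.

```typescript
type SortDirection = "asc" | "desc";

// Generic, immutable sort over one column of a table's row objects.
function sortBy<T>(items: T[], key: keyof T, dir: SortDirection = "asc"): T[] {
  const sign = dir === "asc" ? 1 : -1;
  // Copy first so the component's input list is never mutated.
  return [...items].sort((a, b) => {
    const av = a[key];
    const bv = b[key];
    if (av < bv) return -1 * sign;
    if (av > bv) return 1 * sign;
    return 0;
  });
}

// Usage: sortBy(rows, "status", "desc")
```

The immutability matters in Angular and React alike: sorting in place would mutate an `@Input` or a prop, which is exactly the sort of convention a generated snippet tends to miss.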

Learning New Concepts
When I was exploring machine learning, I worked through algorithms like logistic regression, KNN, and Naive Bayes. AI was incredibly helpful here, not to write the code for me, but to explain the intuition behind the math.

"Why does KNN struggle with high-dimensional data?" is the kind of question where a textbook gives you a formal answer and AI gives you an analogy that actually clicks. Both are useful, but when you're learning something new and just need to build intuition, the conversational explanation is faster.

Same thing happened when I was setting up my blog with Next.js and MDX. I had questions about static generation, dynamic routes, metadata APIs. Instead of reading through three different docs pages and piecing it together, I could ask one question and get a focused answer with context.

Read the full blog here....

Top comments (13)

Kalpaka

The subscription leak story is the one that sticks. Not because the fix was complex, but because the failure was invisible. No errors, no crashes, just gradual decay that looked like normal performance variance until it wasn't.

That's the debugging gap AI fills best and the one we talk about least. Not the "fix this error message" cases but the ones where you can't even articulate what's wrong yet. Describing the architecture to an external reasoner forces you to externalize the mental model, and often the act of describing is where you spot the seam.

One limitation I'd add: AI reasons about the snapshot you give it. It can't notice that this is the third time in six months your team has had a subscription leak at a service boundary. That pattern recognition across time is still entirely on you.

Devesh Korde

Thanks for the thoughtful elaboration, mate.

Max Othex

The "rubber duck that talks back" framing is spot on. We use a similar mental model at Othex — AI as a thinking partner rather than a code generator. The shift changes everything about how you prompt and what you actually get out of the interaction.

The failure modes you describe (architecture, judgment calls) are real. Where we draw the line: AI can suggest patterns, but the decision of which pattern fits this context stays with the human. That contextual judgment is exactly what's hard to automate — it requires understanding the whole system, the team, the constraints.

One thing we've noticed: AI is great at "what are the tradeoffs here" questions but often terrible at "which tradeoff matters most for our specific situation." The second question requires knowing things that aren't in the codebase.

Devesh Korde

True

Harsh

Honest take, and rare to see someone actually define the limits instead of just hyping the wins. The part about not outsourcing architectural decisions resonated. I've seen codebases where you can tell which files were "AI-shaped" vs. ones written with real project context in mind. The gap is usually in how edge cases are handled and how well the code fits the surrounding patterns. That 10-20% you mention is exactly where engineering judgment lives.

Devesh Korde

Yeah, that's true.

wong2 kim

The "rubber duck that talks back" framing is perfect. That's exactly how I use AI — not to write my apps for me, but to think through problems out loud with something that can actually respond.

As someone who builds iOS apps entirely with AI assistance (no CS background), the debugging partner role has been the biggest game-changer. SwiftUI's error messages are notoriously unhelpful, and having AI reason about the interaction between view hierarchies and state management has saved me countless hours.

One thing I'd add to the "where I stop" list: domain-specific product decisions. AI can scaffold a pregnancy tracking app in minutes, but it has no idea what information an expecting parent actually needs at week 28 vs. week 36. That judgment comes from talking to real users — and no amount of prompting replaces that.

Devesh Korde

Yeah

Elmar Chavez

This is true. When I first discovered ChatGPT, I noticed I progressed faster in learning web development. Not by making it write the code, but by treating it as a coding partner that has the answers to every "why" question I have about a piece of code. Those answers are typically not something you can get from a simple Google search.

Devesh Korde

That's true.
