Lessons from Building with AI Coding Assistants
So I've been building a lot of stuff with AI coding assistants lately—like, a lot. And one thing I keep running into is this: the AI will just... make decisions for you if you're not paying attention. Sometimes good ones! But sometimes it'll confidently march you right off a cliff.
This isn't me hating on AI coding. I'm fully bought in at this point. But there's a difference between using an AI coder and actually directing one.
Here's some stuff I've learned that might save you some headaches.
Challenge It When It Changes Your Approach
AI coders have opinions. They'll make architecture decisions without asking you first, and if you're not watching closely, you might not even notice until way later.
The Websocket Ambush
I was building something using pipes—a deliberate choice for my use case. Somewhere mid-conversation, the AI just... switched to websockets. No "hey, should we consider this?" No discussion at all. It just started writing websocket code like we'd agreed on it.
We hadn't agreed on anything.
I had to stop and say "no, go back, I want pipes for this." And it did! Immediately! But if I hadn't caught it, I would've ended up with an architecture I didn't want.
The "Write Scripts to Disk" Thing
Same project, different issue. The AI started writing executable scripts to disk as part of its solution. Technically functional? Sure. A good idea in most production environments? Absolutely not—that's a security red flag in a lot of contexts.
I had to explicitly ban that approach before it became a problem.
The pattern: AI optimizes for "does it work." You have to optimize for "is this appropriate for my situation"—security, compliance, maintainability, all that stuff that AI doesn't naturally think about.
Tell It to Do Its Own Testing
This one saves me a bit of time: make the AI test its own code before giving it to you.
The Bad Command Loop
I kept getting commands and code that would fail. Copy, paste, error. Copy, paste, different error. Over and over. Basically the AI was using me as its test runner.
Then it clicked—the AI can execute commands itself. It can iterate on its own. It can figure out what actually works before handing me something.
So I told it: "Test this yourself first. Iterate until it works, then show me the solution."
It actually worked!
It ran through several failed attempts on its own, found something that actually worked, and gave me that. All that debugging time I would've spent? Gone. The AI did it for me.
The takeaway: If you're testing code that keeps failing, ask yourself—could the AI have tested this itself? If yes, just tell it to do that. You're the architect, not the test runner.
Sometimes You Need Research, Not Code
Sometimes the AI is stumbling because it doesn't have enough context. It's trying things, hitting walls, trying more things—and you're both stuck.
Using Another AI to Unblock the First One
I had my AI coder grinding through a problem, throwing idea after idea at it. Nothing was sticking. We were both frustrated (yes, I personify it, don't judge me haha).
Here's what I ended up doing: I opened a separate chat with Gemini Pro—not to write code, but just to talk through approaches. "Hey, I'm trying to do X. What are the proper patterns for this? What should I be thinking about?"
Gemini did research from the web and gave me a solid breakdown of the right approach. I took that conversation and fed it to my AI coder.
Instant clarity. It stopped flailing and just... did the thing.
Why this works: Your coding AI is optimized for implementation. Sometimes you need a "thinking" conversation before you can have a "building" conversation. It's like making sure your requirements are figured out before you start designing or building.
If you have access to Deep Research tools, those work great for this. Let the research tool go deep on the problem, then bring those findings back to your coder.
AI-Generated Deployment Scripts (Carefully!)
Okay, this one needs a big warning label, but it's too useful not to mention.
You can have your AI write helper scripts for cloud CLI commands—deployments, configuration, infrastructure setup. This saves a ton of time. But you absolutely have to review everything before running it.
Terraform Became My Friend
I needed to set up a backend app—VM or Cloud Run, SSL certs, static IPs, load balancer stuff. Lots of moving pieces.
I used Deep Research first to figure out the best approach. What's the right pattern? What services do I need? What order should things happen?
Then I had the AI generate a Terraform script based on that research. The script pre-checked dependencies, let me input my variables cleanly, created everything in the right order, and actually worked on the first run.
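To make that concrete, here's a rough sketch of the shape that Terraform took. It's not my actual script: the project ID, region, domain, resource names, and container image are all placeholders, and the load balancer pieces are only hinted at in a trailing comment.

```hcl
# Rough sketch only. Project, region, domain, and names are placeholders.

variable "project_id" {
  description = "GCP project to deploy into"
  type        = string
}

variable "region" {
  description = "Region for the Cloud Run service"
  type        = string
  default     = "us-central1"
}

variable "domain" {
  description = "Domain for the managed SSL certificate"
  type        = string
}

provider "google" {
  project = var.project_id
  region  = var.region
}

# "Pre-check dependencies": fail at plan time if the project isn't reachable.
data "google_project" "current" {
  project_id = var.project_id
}

# Reserved static IP for the load balancer frontend.
resource "google_compute_global_address" "lb_ip" {
  name = "backend-lb-ip"
}

# Google-managed SSL certificate for the domain.
resource "google_compute_managed_ssl_certificate" "cert" {
  name = "backend-cert"
  managed {
    domains = [var.domain]
  }
}

# The Cloud Run service itself.
resource "google_cloud_run_v2_service" "app" {
  name     = "backend-app"
  location = var.region

  template {
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello" # placeholder image
    }
  }
}

# The serverless NEG, backend service, URL map, HTTPS proxy, and forwarding
# rule follow the same pattern; Terraform's dependency graph handles ordering.
```

The specifics matter less than the workflow: `terraform plan` shows you exactly what the AI's script is about to create before anything actually happens.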
Honestly, this whole experience made me fall in love with Terraform. AI generating infrastructure-as-code plus Terraform's built-in safety features (plan before apply) is a really nice combo.
The Giant Caveat
ALWAYS ALWAYS review deployment scripts before running them. Check for:
- Destructive actions (`destroy`, `delete`, `rm -rf`, anything like that)
- Resource creation that might cost more than you expected
- Security group or IAM changes that could open holes
- Anything touching production data
The AI doesn't understand the blast radius of infrastructure changes the way you do. It will happily generate a script that nukes your database if that's technically what you asked for.
For bigger infrastructure stuff, Terraform is great specifically because it has built-in safety mechanisms. But even then—review the plan output before applying.
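If you want one concrete guard rail to ask the AI for, Terraform's `prevent_destroy` lifecycle flag is a good example. The database resource below is purely hypothetical (names, region, and tier are placeholders), but the lifecycle block is real Terraform: any plan that would destroy the resource fails outright.

```hcl
# Hypothetical example; the database instance and its settings are placeholders.
resource "google_sql_database_instance" "main" {
  name             = "app-db"
  database_version = "POSTGRES_15"
  region           = "us-central1" # placeholder region

  settings {
    tier = "db-f1-micro"
  }

  # Terraform errors out on any plan that would destroy this resource,
  # which catches the "nukes your database" class of mistakes.
  lifecycle {
    prevent_destroy = true
  }
}
```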
Always Review, Always Ask Why
This is kind of the meta-lesson that wraps everything else together.
AI coding assistants will give you code. Lots of it. Fast. The temptation is to just ship it and move on. Don't.
What reviewing actually means:
- Actually read the code. Not skim—read. Does the logic make sense?
- Ask for reasoning. "Why did you use this approach?" is totally valid. If the AI can't explain it well, that's a red flag.
- Ask for documentation. Have the AI document how things work and write up the design decisions after development.
- Give feedback. "This is too complex" or "Can we simplify?" pushes it toward better solutions.
- Check the edges. AI is great at the happy path. It's often weak on error handling, edge cases, and security stuff.
Wrapping Up
After doing a lot of AI-assisted development, my productivity has genuinely gone way up. But not because the AI replaced my thinking—because it amplified it.
The pattern that works:
- You decide the approach. AI implements it.
- You validate the output. AI iterates based on your feedback.
- You catch the context-specific stuff. AI doesn't know your infrastructure, your security requirements, or your users.
- You stay curious. Ask why. Push back. Give direction.
We're the architects.