Look, we've all done it. You ask ChatGPT for a "quick helper function," it dumps 50 lines of code that look legit, you paste it in, and—boom—CI is red and your PR is getting roasted.
I've been writing about AI tooling for my book, and I've definitely learned some of this the hard way. Here's what's been keeping me out of trouble (mostly) when working with these tools:
1. AI suggestions are drafts, not gospel.
Think of them like that one coworker who's really confident but wrong 30% of the time. It's a starting point. That's it.
2. Ignore the confidence.
AI doesn't second-guess itself. It'll hallucinate an entire API that doesn't exist and sound completely sure about it. Don't fall for it.
3. Small changes beat big rewrites.
Debugging 5 lines you tweaked? Easy. Debugging 500 lines of AI slop you don't fully understand? That's how you spend your weekend.
4. If you don't understand it, don't merge it.
Seriously. If the logic feels like magic, it's not clever—it's a time bomb. You can't maintain what you can't explain.
5. You own the commit.
The AI isn't getting paged at 3 AM when the site's down. You are. Act like it.
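On point 2: the cheapest hallucination check is just asking Python whether the module the AI is so confident about actually exists, before you waste an afternoon on it. A minimal sketch (the `turbo_json` package here is made up for illustration, exactly the kind of thing an AI might invent):

```python
import importlib.util

# Hypothetical scenario: the AI confidently told you to
# "just pip install turbo_json, it's 10x faster."
# Ten seconds of checking beats an afternoon of debugging.
for module in ("json", "turbo_json"):
    found = importlib.util.find_spec(module) is not None
    print(f"{module}: {'exists' if found else 'not installed (or not real)'}")
```

Same idea applies to functions and flags: grep the library's actual docs or source for the name before you build on it.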
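And on points 4 and 5: before a "quick helper" goes in a commit with your name on it, pin it down with a few cases you worked out by hand. Here's a sketch using a made-up example—pretend the AI handed you this `slugify` helper:

```python
import re

def slugify(title):
    """AI-suggested helper (hypothetical): turn a post title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Cheap vibe check: assert on inputs whose answers you computed yourself,
# including at least one ugly edge case.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  AI Code: Trust but Verify  ") == "ai-code-trust-but-verify"
assert slugify("---") == ""  # edge case: nothing but separators
print("smoke tests passed")
```

If you can't write three asserts for it, that's your signal you don't understand it well enough to merge it.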
Bottom line: AI makes typing faster, but it doesn't make thinking optional.
So—what's your vibe check process for AI code? How are you catching the sketchy stuff before it ships? Drop your war stories below.