The first time I used Claude to help me write a FastAPI endpoint, I thought — okay, this is it. This is the thing that changes everything.
And it did. Just not in the way I expected.
I'm a backend engineer. I build APIs, design schemas, think about concurrency, deploy things to EC2 and hope they don't die at 2am. I use Claude and Cursor every day now. Genuinely can't imagine going back.
But somewhere in the last few months, I quietly picked up a second job nobody put in my offer letter.
AI output reviewer.
The Drunk Intern Problem
Here's the most accurate description of working with AI tools I've come across recently — it's like being handed an incredibly fast, highly enthusiastic, slightly drunk intern.
They ship fast. Like, embarrassingly fast. You give them a task and they're back in 3 seconds with something that looks completely reasonable.
And that's exactly the problem.
Because "looks completely reasonable" and "is actually correct" are two very different things. And the intern can't tell the difference. So now you have to.
I ran into this headfirst while building a seat-locking system for a real-time booking platform.
What I Was Building
The platform was Uptown — a venue and event booking product. The core feature: users browse available seats, select one, complete payment.
Simple enough until you think about what happens when two users try to book the same seat at the same time.
The flow needed a locking mechanism. When a user selects a seat, you temporarily lock it — say for 10 minutes — so nobody else can grab it while they're in the payment flow. Lock expires? Seat goes back to available.
Classic distributed systems problem. I'd read about it. Never actually had to build it under real pressure before.
So I did what any reasonable engineer does in 2025. I opened Claude and explained the problem.
The Clean Implementation That Wasn't
Claude came back fast. Clean endpoint, sensible schema, even handled the lock expiry with a background task. Looked great. Genuinely passed the vibe check.
Missed the race condition entirely.
Here's what happens at scale. Two users open the booking page. Same seat. Both hit the lock endpoint within milliseconds of each other.
Both requests check availability at nearly the same time. Both see the seat as available. Both proceed. Both write a lock.
Two users. One seat. Both think they won.
The AI read the availability and wrote the lock as two separate operations. No atomicity. No database-level guarantee that only one request wins. It optimized for "looks correct" — not "survives production."
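The shape of that bug fits in a few lines. This is a hypothetical sketch, not the generated code — the names are mine, and a dict plus interleaved calls stands in for a database under two concurrent requests:

```python
# Sketch of the non-atomic check-then-write pattern. The gap between
# is_available() and write_lock() is where two requests can interleave.

seat_locks: dict[str, str] = {}  # seat_id -> user_id, standing in for a table

def is_available(seat_id: str) -> bool:
    # Step 1: read availability
    return seat_id not in seat_locks

def write_lock(seat_id: str, user_id: str) -> None:
    # Step 2: write the lock -- a *separate* operation, which is the bug
    seat_locks[seat_id] = user_id

# Two "simultaneous" requests, interleaved the way a live system can:
check_a = is_available("A1")
check_b = is_available("A1")   # B checks before A has written anything
if check_a:
    write_lock("A1", "user_a")
if check_b:
    write_lock("A1", "user_b")  # B's write lands too: both requests "won"
```

Both checks pass, both writes happen, and whichever write lands last silently wins. Nothing in the code looks wrong; the wrongness lives in the ordering.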
The fix required rethinking the operation entirely — making the check and the write happen in a single atomic database call so only one request could ever win. Simple in hindsight. Not obvious if you haven't thought about what milliseconds actually mean in a live system.
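One way to get that single atomic call, sketched with assumed table and column names: let a unique constraint on the seat do the arbitration, and collapse check-and-write into one `INSERT ... ON CONFLICT DO NOTHING`. The demo uses SQLite so it runs self-contained, but the same statement works in PostgreSQL:

```python
import sqlite3

# Hypothetical schema -- the real one isn't shown in the post.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE seat_locks (
        seat_id    TEXT PRIMARY KEY,  -- uniqueness enforced by the database
        user_id    TEXT NOT NULL,
        expires_at REAL NOT NULL
    )
""")

def try_lock(seat_id: str, user_id: str, expires_at: float) -> bool:
    # Check and write are now ONE statement: the insert succeeds only if
    # no lock row exists for this seat, so at most one request can win.
    cur = db.execute(
        "INSERT INTO seat_locks (seat_id, user_id, expires_at) "
        "VALUES (?, ?, ?) ON CONFLICT(seat_id) DO NOTHING",
        (seat_id, user_id, expires_at),
    )
    return cur.rowcount == 1  # a row was written -> this request won

won_a = try_lock("A1", "user_a", 1_700_000_600.0)
won_b = try_lock("A1", "user_b", 1_700_000_600.0)  # conflicts, writes nothing
```

A production version would also need to treat expired rows as free — for example by deleting stale locks inside the same transaction — but the core guarantee is the constraint: the database, not application code, decides who wins the race.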
The AI didn't think about milliseconds. It's never had to.
The Quiet Problem Nobody Talks About
Race conditions were the dramatic failure. But there was a slower, quieter one that crept up alongside it.
Database bloat.
Every seat selection created a lock record. Expired locks accumulated silently. Nobody cleaned them up. The table kept growing. Availability queries got slower. Nothing broke immediately — it just degraded, the way things do in production when nobody's watching closely.
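One plausible mitigation (hypothetical names again, SQLite standing in for PostgreSQL) is a periodic cleanup job that deletes rows past their expiry, so the table stops growing:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE seat_locks (seat_id TEXT PRIMARY KEY, "
    "user_id TEXT NOT NULL, expires_at REAL NOT NULL)"
)

def purge_expired(db: sqlite3.Connection, now: float) -> int:
    # Remove every lock whose expiry has passed; return the rows deleted.
    # In production this would run on a schedule (cron, Celery beat, etc.).
    cur = db.execute("DELETE FROM seat_locks WHERE expires_at < ?", (now,))
    db.commit()
    return cur.rowcount

now = time.time()
db.execute("INSERT INTO seat_locks VALUES ('A1', 'user_a', ?)", (now - 60,))   # expired
db.execute("INSERT INTO seat_locks VALUES ('B2', 'user_b', ?)", (now + 600,))  # still live
removed = purge_expired(db, now)
```

An index on `expires_at` keeps the delete cheap as the table grows — exactly the kind of detail that only matters six months in.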
The AI didn't flag this either. It answered the question I asked: how do I lock a seat. Not the question I should have also asked: what happens to this data six months from now?
That's not a complaint. That's just the limit of the tool. It lives in the moment of the prompt. Systems live in time.
So Where Does This Leave Us
I'm not writing this to dunk on AI tools. They've made me genuinely faster in ways that matter.
Boilerplate? Gone. Getting unstuck on syntax? Seconds. Exploring design options? Way faster than it used to be.
But there's a layer underneath all of that where the tools consistently fall short. Not because they're bad. Because that layer requires something they don't have — context about your specific system, judgment built from watching things break, and the experience of being responsible for something when it went wrong at 2am.
System design lives there. Production intuition lives there. The decision of which correct-looking option is actually right for your constraints — that lives there too.
AI gave me a fast, confident co-pilot. I still have to know where we're going.
The engineers who'll struggle aren't the ones AI is supposedly replacing. They're the ones who leaned on the tool before they built the judgment underneath it.
Still a backend engineer. Just with a weird new coworker and a much stronger opinion about atomic database operations.
Built with FastAPI, PostgreSQL, and one slightly drunk AI intern.