You've probably been there. You gave a prompt to an AI, the output worked, you shipped the feature, and then two weeks later something broke in a way you didn't see coming.
Or a moment where you asked the right question and the AI handed you something genuinely brilliant.
That's what got me thinking: coding with AI is a lot like playing poker.
In a game of poker, the player gets a set of cards at the start. Some hands are clearly strong, while others might seem weak.
But the game isn't just about the cards. It's about how you play them.
For a beginner, the game can feel like a gamble. Should they fold? Should they raise? Is their hand even good enough?
But some players can turn the tide in their favour. They pay attention to how others are playing, decide when to take risks and when to step back, and sometimes even win with cards that do not look promising at all.
This is what working with AI looks like. LLMs will hand you solutions, sometimes clean and elegant, sometimes brittle and wrong. Some developers know how to work with this, guiding the AI to produce better results. But someone who is just starting out may not grasp how much a given solution can impact their work.
Let's dive in!
Here are a few scenarios I have come across in codebases and during my own work.
I'll go with the vibe ✨
A developer starts their day and picks up a task: create a new UI page in the project. They quickly open their favorite AI-powered IDE, paste in the design screenshot, and fire off the prompt!
And the AI delivers 🎉. The layout is right, the spacing looks good, the code is clean. It might even look better than the original. So it gets shipped.
A few moments later, the design team updates the brand colors. Every other page in the project adapts automatically, except that one. The AI somehow didn't use the theme variables, existing component classes, or the design system the project uses. It built a good-looking UI, but parts of it were made from scratch.
A few additions to the prompt or workflow could have resulted in a much better outcome.
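To make the gap concrete, here is a minimal sketch of the difference. The token names and values are hypothetical, not from any real project:

```javascript
// What the AI produced: values baked in, cut off from the design system.
// A rebrand updates nothing here. (Hypothetical example.)
const aiGeneratedStyles = {
  background: '#1a1a2e', // hardcoded hex
  accent: '#e94560',
};

// What the project needed: references to existing theme tokens
// (token names are made up for illustration), so a brand update
// propagates to this page like every other one.
const themedStyles = {
  background: 'var(--color-surface)',
  accent: 'var(--color-accent)',
};
```

Often a single prompt line like "use the project's existing theme variables and shared components" is enough to steer the AI toward the second version.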
No time to die Optimize
When creating a table with selectable rows in JavaScript, the AI might suggest using new Set() to track which rows are selected: O(1) lookups, a clean API, the right tool for the job.
const toggleRow = (id) => {
  setSelectedRows(prev => {
    const updated = new Set(prev);
    updated.has(id) ? updated.delete(id) : updated.add(id);
    return updated;
  });
};

const isSelected = (id) => selectedRows.has(id); // O(1) — instant
A developer reads the code and hesitates. They have never used Set before. The code looks unfamiliar and slightly uncomfortable, so they swap it out for array.includes(), something they know and trust.
const toggleRow = (id) => {...};
const isSelected = (id) => selectedRows.includes(id); // O(n) — loops
What they did not realize is that includes() loops through the entire array on every check. With a large number of rows and rapid selections, the UI might start to stutter.
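A rough illustration of why this matters. This is a sketch, not a rigorous benchmark; the row count is made up and actual timings vary by machine:

```javascript
// 100,000 selectable rows — plausible for a data-heavy table.
const rowIds = Array.from({ length: 100_000 }, (_, i) => i);

const selectedArray = [...rowIds];    // array-based selection state
const selectedSet = new Set(rowIds);  // Set-based selection state

// includes() walks the array element by element until it finds a match,
// so checking the last id scans all 100,000 entries.
const inArray = selectedArray.includes(99_999); // O(n)

// has() hashes the id and jumps straight to it.
const inSet = selectedSet.has(99_999); // O(1)

// Both return true, but call isSelected() on every render for every row
// and the array version does up to n comparisons per check while the
// Set version does roughly one.
```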
Refactor to the moon 🚀
A developer picks up a large, messy function that has been in the codebase for a while. Everyone knows it needs a refactor. They run it through the AI, and the result is genuinely good: cleaner structure, better separation, easier to read.
But they don't stop there.
"Can you optimize this further?" And the AI gets to work. Each pass introduces more abstraction: generic utilities, dynamic dispatch, configurable pipelines.
The code is technically impressive, but it has made onboarding someone new much harder.
// Messy, but everyone understood it
function processOrder(order) {
  if (order.type === 'digital') { ... }
  if (order.type === 'physical') { ... }
  ...
}

// First pass — good refactor, clean, readable
function processOrder(order) {
  const handler = getOrderHandler(order.type);
  return handler.process(order);
}

// Second pass — nobody asked for this
function processOrder(order) {
  return OrderPipeline
    .create(order.context)
    .withMiddleware(OrderMiddlewareRegistry.resolve(order.type))
    .execute(order);
}
The AI knows your tech stack but does not know your scale, requirements, or your users. That's the hand only you can play.
Just like you, I am trying to navigate this new era of building with AI. Here are some things I have learned that help me in my day-to-day work and improve my game.
Learning to verify, not just accept
Focus on understanding the fundamental concepts. Does something look unfamiliar? Ask the AI to explain it.
Use Plan mode in your favorite AI tool or IDE whenever a solution has multiple moving parts. It lets you see where the AI is headed before any code is written, and you can modify the plan according to your needs.
Break your requirements into small chunks and explain them to the AI; this results in more precise solutions.
Treat every AI solution as a pull request that needs review.
Learn the best practices and security checklists for your stack, whatever it is. AI won't always apply them unless you know to ask.
Ask for help, and discuss issues, solutions, and features whenever possible. Discussions unlock ideas and new ways of building.
Try to keep up with the latest AI innovations, new workflows, anything that makes things easier for you, so that you can focus on building what's important.
The masters of their craft
Ensure the codebase follows good architecture, naming conventions, rules, and patterns. This gives the AI better references, which results in consistent, manageable solutions.
Try not to over-engineer if possible. Sometimes a simpler, well-optimized solution is better in the long run and easier for the next developer to pick up.
Help / teach others whenever you can. Explaining a concept to a person is the same skill as guiding an AI effectively. If you can walk someone through a problem, you already know how to prompt your way to a good solution.
AI models are improving at a pace that's hard to keep up with. Maybe we are just a few months away from a model intelligent enough to enforce best practices by default, catch every edge case, and never let anyone ship bad code.
But not today. And we've still got pending tasks to push to prod.
We are all figuring this out as we go. I hope something in here made sense and helped in a way.
Now go break something 👀. That's where the real learning happens anyway! 👋