When fixing bugs or building features, we often reach for AI, either for quick context or for direct code changes.
But most of the time, we move too fast.
We assume the AI understands the entire system:
- business rules
- edge cases
- dependencies
- architectural decisions
- existing side effects
It does not.
And because AI responds so confidently, it becomes easy to trust the output without widening our own scope of attention.
I learned this the hard way.
More than once, an AI-generated fix introduced new side effects in the codebase because I relied entirely on the generated solution and did not think hard enough about the surrounding flows.
Over time, I realized that using AI effectively is less about prompting tricks and more about developing “AI soft skills”.
Not how to ask AI anything.
But how to think before asking AI something.
## Tests Are Your Safety Net
Before implementation, make sure you already know:
- positive scenarios
- negative scenarios
- edge cases
- regression-prone areas
Even writing them in pseudo form helps.
Something like:
- user should stay logged in after refresh
- dropdown should not close on inner click
- old API consumers should still work
- retry should happen only once
- loading state should disappear on failure
Once you have this clarity, you can ask AI to generate actual tests from those scenarios.
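For instance, here is a minimal sketch of what AI-generated tests for the retry scenario might look like, assuming a Vitest setup. `fetchWithRetry` is a hypothetical helper, written inline only so the example is self-contained:

```typescript
import { describe, it, expect, vi } from "vitest";

// Hypothetical unit under test: call `fn`, retrying at most once on failure.
async function fetchWithRetry<T>(fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch {
    return await fn(); // exactly one retry; a second failure propagates
  }
}

describe("retry behavior", () => {
  it("retries a failed request exactly once", async () => {
    const fn = vi.fn()
      .mockRejectedValueOnce(new Error("network error"))
      .mockResolvedValueOnce("ok");

    await expect(fetchWithRetry(fn)).resolves.toBe("ok");
    expect(fn).toHaveBeenCalledTimes(2); // initial attempt + one retry
  });

  it("does not retry more than once", async () => {
    const fn = vi.fn().mockRejectedValue(new Error("still failing"));

    await expect(fetchWithRetry(fn)).rejects.toThrow("still failing");
    expect(fn).toHaveBeenCalledTimes(2); // no second retry
  });
});
```

The scenario list did the real work here; the test code is almost mechanical once the expected behavior is pinned down.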
The difference is huge.
Without tests:
AI becomes the decision maker.
With tests:
AI becomes an implementation assistant.
## Don’t Ask AI to “Fix It”
Most prompts are dangerously incomplete.
Examples:
“Fix this bug”
“Refactor this component”
“Optimize this query”
But software systems are interconnected.
A better approach is to give AI:
- expected behavior
- constraints
- what should NOT break
- related modules
- architectural patterns already in use
Treat AI like a new engineer joining the team.
The more context you provide, the safer the output becomes.
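For example, the contrast between a vague prompt and a context-rich one might look like this (the module, hook, and component names are made up for illustration):

```text
Weak prompt:
"Fix this dropdown bug."

Better prompt:
"The dropdown in SearchFilter.tsx closes when the user clicks inside it.
Expected behavior: close only on outside click or Escape.
Constraints: reuse our shared useClickOutside hook; no new dependencies.
Do not break: keyboard navigation or the mobile bottom-sheet variant.
Related modules: SearchFilter.tsx, useClickOutside.ts, FilterPanel.tsx.
Follow the controlled-component pattern already used in FilterPanel.tsx."
```

The second prompt takes a minute longer to write, but it pins down expected behavior, constraints, and blast radius before any code is generated.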
## AI Changes How We Engineer
The more I use AI in development, the more I realize that coding is slowly becoming a communication problem.
Not just:
- writing code
- fixing bugs
- generating components
But:
- defining scope clearly
- identifying constraints
- thinking about side effects
- communicating intent properly
The quality of AI output is heavily influenced by the quality of engineering thinking behind the prompt.
And honestly, this changes how I approach development now.
I pause more.
I think more about:
- what could break
- which flows are connected
- whether enough context was provided
- whether the generated code actually matches product expectations
AI can accelerate implementation.
But engineering attention still matters.
Maybe even more than before.
Curious how others are handling this.
Have AI-generated changes ever introduced hidden side effects in your projects?
Do you write tests first before asking AI to implement?
How much AI-generated code do you review deeply, versus trusting it directly?
Have your prompting habits changed over time while working with AI tools?
Would love to know how other developers are adapting their workflows around this.