When I first started experimenting seriously with AI-assisted development, I ran into the same problem many people seem to hit.
The AI could write code.
But the results were inconsistent.
Sometimes the output was excellent. Other times it was strange, overly complicated, or subtly wrong. Even when the code worked, it sometimes didn’t quite fit the system it was supposed to belong to.
At first I thought this was just the nature of the tools.
But after a few weeks of experimenting I realised the real issue wasn’t the AI.
It was the way I was asking it to work.
The biggest mistake I was making was simple: I was asking the AI to start coding immediately.
Once I changed that, everything improved.
The problem with “just build this”
Early on, my prompts looked something like this:
Build an API endpoint for validating invoices.
Or:
Refactor this validation function.
Or:
Add support for buyer-specific rules.
The AI would respond quickly and confidently. Files would appear, functions would change, and new structures would sometimes emerge that looked quite reasonable.
But there was a subtle issue.
The AI was making architectural decisions on my behalf.
Sometimes those decisions were fine. Other times they introduced inconsistencies or patterns that didn’t quite match the rest of the system.
The problem wasn’t that the AI was bad at coding.
The problem was that it was improvising.
The pattern that changed everything
Eventually I started using a much stricter prompt pattern.
Instead of asking the AI to implement something directly, I now ask it to go through a small planning process first.
The pattern looks like this:
- Scan the repository
- Explain the relevant architecture
- Propose a minimal implementation plan
- Wait for approval
- Then implement the change
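The gate between planning and implementation can even be made explicit in a small driver script. This is purely an illustrative sketch: the `ask_model` stub, the prompt wording, and the `approve` callback are all hypothetical stand-ins, not any particular tool's API.

```python
# Sketch of the plan-first workflow. Nothing here is tied to a real
# model client; ask_model is a placeholder you would replace.

PLAN_PROMPT = (
    "Scan the repository and explain the relevant architecture. "
    "Then propose a minimal implementation plan for: {task}. "
    "Do not write any code yet."
)

IMPLEMENT_PROMPT = (
    "The plan below has been approved. Implement it, changing only "
    "the files listed in the plan.\n\n{plan}"
)

def ask_model(prompt):
    # Placeholder: substitute a call to your actual model client here.
    return f"[model response to: {prompt[:40]}...]"

def plan_then_implement(task, approve):
    """Run the plan phase, gate on human approval, then implement."""
    plan = ask_model(PLAN_PROMPT.format(task=task))
    if not approve(plan):   # the human stays in the loop here
        return None         # stop before any code is written
    return ask_model(IMPLEMENT_PROMPT.format(plan=plan))
```

The important part is the gate: implementation literally cannot start until `approve` returns true, which mirrors the "wait for approval" step above.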
The difference this makes is surprisingly large.
When the AI explains the architecture first, it usually discovers existing structures that should be reused. When it proposes a plan, it often reveals assumptions that can be corrected before any code is written.
By the time implementation begins, the AI is no longer guessing.
It is executing a plan.
Why this works
There’s a simple reason this pattern is effective.
AI models are very good at generating code, but they don’t naturally pause to reason about the system unless you explicitly ask them to.
If you jump straight to implementation, the model fills in the missing context however it thinks is best.
That can work sometimes.
But if the system has any real complexity, it’s much safer to force the AI to describe the system first.
This accomplishes two things.
First, it gives the AI a chance to understand the existing architecture before making changes.
Second, it gives the human developer a chance to review the plan and catch mistakes before they turn into code.
What this looks like in practice
When I’m working on my current project, the prompt often looks something like this:
Scan the repository and explain how invoice validation currently works.
Then propose the smallest safe implementation plan for adding buyer-specific rules.
List the files that would need to change, and wait for approval before implementing anything.
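Because the prompt has the same three-part shape every time, it can be kept as a reusable template. A minimal sketch, with hypothetical names (`PROMPT_TEMPLATE`, `planning_prompt`) chosen only for illustration:

```python
# Hypothetical template for the three-part planning prompt above:
# explain the area, propose a minimal plan, wait for approval.

PROMPT_TEMPLATE = """\
Scan the repository and explain how {area} currently works.
Then propose the smallest safe implementation plan for {change}.
List the files that would need to change, and wait for approval
before implementing anything."""

def planning_prompt(area, change):
    """Fill the template for a specific feature request."""
    return PROMPT_TEMPLATE.format(area=area, change=change)
```

Templating the prompt keeps the "wait for approval" instruction from being forgotten when you are deep in a task.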
The response usually includes a clear explanation of the existing system along with a step-by-step implementation plan.
Only after reviewing and approving that plan do I ask the AI to proceed.
At that point, implementation becomes very fast.
How the developer’s role changes
This workflow also changes how the developer interacts with the AI.
Instead of acting like someone requesting code, the developer starts acting more like someone directing an engineering process.
The AI proposes plans.
The developer approves or modifies them.
Then the AI implements the result.
In other words, the human remains responsible for the architecture, while the AI accelerates the implementation.
That separation turns out to be extremely important.
The surprising result
Once I started using this pattern consistently, the quality of the AI’s output improved dramatically.
The code fit the existing system better. Changes were smaller and more targeted. The AI was less likely to invent unnecessary abstractions or refactor unrelated parts of the repository.
The tools didn’t change.
The prompt did.
And that turned out to make all the difference.
A simple rule
If you’re experimenting with AI coding tools, one small change can improve your results immediately.
Don’t start with implementation.
Start with understanding.
Ask the AI to explain the system first, propose a plan second, and implement only after the plan is clear.
It slows the process down by a few minutes.
But it saves a lot of confusion later.