I have been a freelance frontend developer for a while now.
I use GitHub Copilot and Cursor every day, AI coding tools that are supposed to make me faster.
And one thing kept happening that drove me completely crazy.
Everything worked. And everything was wrong.
I would ask Copilot to build something.
It would build it.
Clean. Working. No errors.
And completely disconnected from everything else in my project.
- Wrong folder
- Wrong naming convention
- New utility function even though one already existed
- New component even though one was three files away
Every single time.
I kept thinking:
Does this thing even see my project? Or is it just generating into a void?
After a lot of frustration, I realized the answer is a bit of both.
Your AI does not read your project. It reads your prompt.
This sounds obvious when you say it out loud.
But it took me an embarrassingly long time to actually understand it.
When you open a new Copilot session and write a prompt, it does not automatically scan your entire codebase.
It works with what is in context.
What you give it.
What it can see.
If you do not explicitly point it to your React project structure, it improvises.
And improvised structure looks clean on the surface.
But underneath, it creates a parallel system next to yours.
- Two naming conventions
- Two utility patterns
- Two ways of organizing the same thing
Not because Copilot is broken.
Because it had no map.
What I started doing differently
I stopped assuming Copilot knew where it was.
Before every session, I now tell it explicitly:
- What the React project structure looks like
- Where components live
- What naming conventions we use
- What already exists before it builds anything new
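The easiest way to make this stick is to write it down once instead of retyping it every session. GitHub Copilot reads a repo-level `.github/copilot-instructions.md` file, and Cursor has its own rules files for the same purpose. Here is a sketch of what mine looks like; the stack, folder names, and conventions below are examples from a hypothetical project, not a template you should copy as-is:

```markdown
<!-- .github/copilot-instructions.md -->
# Project context
- React + TypeScript, built with Vite
- Components live in src/components/<Feature>/, one component per folder
- Shared helpers live in src/utils/ — check there before writing a new utility
- Naming: PascalCase for components, camelCase for hooks, hooks prefixed with `use`
- Before generating a new component, check src/components/ for an existing one
```

With a file like this in the repo, every new session starts with a map instead of a blank slate.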
It sounds like extra work. In practice it takes about thirty seconds per session.
The drift stopped almost immediately.
No more parallel systems.
No more rebuilding what already existed.
No more wrong folders.
The deeper problem
React project structure is just one thing your AI does not know unless you tell it.
There is also:
- Typing
- State management
- Accessibility
- Component boundaries
- useEffect patterns
Every area where you have not given your AI explicit constraints, it improvises.
And improvised code looks fine until it does not.
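The same instructions file can carry these per-area constraints too. Again, a sketch: the specific rules below are illustrations of the kind of constraint I mean, not recommendations for every project:

```markdown
# Code constraints
- Typing: no `any`; export prop types as `interface <Component>Props`
- State: local UI state via useState; do not introduce new global stores
- Accessibility: interactive elements are buttons or links, never clickable divs
- useEffect: effects are for syncing with external systems, not for derived state
```

Each line closes off one area where the AI would otherwise improvise.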
What actually makes AI coding predictable
It is not a better prompt.
It is a better structure.
The question is not whether your AI is good enough.
The question is whether you have given it enough structure to work with.
Want to find the gaps in your project?
If your AI output feels inconsistent, the problem is usually not the tool.
It is the structure underneath.
I built a free 20-point checklist that helps you identify exactly that: the structural gaps that make AI output unpredictable.
No guessing.
No endless follow up prompts.
Just a clear picture of what to fix before you prompt again.
👉 Get the React AI Debug Checklist — free
And if you want the full system
Rules across:
- Architecture
- Typing
- State
- Accessibility
- And more