If you are building a Todo app in a single index.js file, AI coding assistants feel like magic. They predict the entire function before you finish typing the signature.
But if you are working in a production repo with fifty interacting components, a shared types library, and complex state management, the magic dissolves.
You type a function call, and the AI suggests an import that looks perfect. The variable names match your conventions. The path looks correct. You hit Tab.
Then your build fails.
The function fetchUserData does not exist in ../../utils/user. The AI hallucinated the interface because it fit the pattern, not the reality.
This is the "System Boundary Problem." It is the reason why AI tools are incredible at writing pure functions but terrible at architecting systems.
Here is the mechanical breakdown of why LLMs struggle with multi-file context, and how to work around it without fighting the tool.
The Mismatch Between Context Windows and Dependency Trees
To a compiler, your codebase is a graph. It resolves symbols by walking that graph of imports and exports. It knows definitively that UserType in module A is exactly the same shape as UserType in module B.
To an LLM, your codebase is a limited buffer of text tokens.
Even with "long context" models (128k+ tokens), the model cannot "see" your entire repository at once with the same fidelity as a compiler. It relies on Retrieval Augmented Generation (RAG) to find relevant snippets.
When you are typing in OrderService.ts, the AI tool tries to fetch relevant context. It grabs the current file. It might grab the open tabs. It searches for files with similar names.
But it often misses the implied context.
It might miss the global type definition file that modifies the Request object. It might miss the middleware that transforms the data payload before it reaches your function.
The result is code that is syntactically valid but architecturally wrong. The AI writes code for the file, not for the system.
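A minimal sketch of that failure mode, with every name invented for illustration: a normalizing middleware defined in another file changes the payload shape, so any suggestion written against the raw request is wrong.

```typescript
// Hypothetical example: the "implied context" lives in a middleware file
// that the assistant's retrieval step often never fetches.

type RawPayload = { user_id: string; order_total: string };
type Normalized = { userId: string; orderTotal: number };

// Lives in middleware/normalize.ts -- outside the current file's context.
function normalize(raw: RawPayload): Normalized {
  return { userId: raw.user_id, orderTotal: Number(raw.order_total) };
}

// The handler only ever sees the normalized shape. An AI reading just this
// file has no way to know the snake_case fields were already renamed.
function handleOrder(payload: Normalized): string {
  return `${payload.userId}:${payload.orderTotal * 2}`;
}

console.log(handleOrder(normalize({ user_id: "u1", order_total: "21" })));
```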
Why AI Hallucinates Interfaces at Boundaries
The most dangerous failure mode in multi-file projects is the "confident hallucination" of APIs.
I recently watched a junior engineer debug a React component for an hour. The AI had suggested a prop called isLoading on a custom button component.
The code looked standard:
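Something like this sketch — the isLoading/loadingState names come from the story, everything else is my reconstruction:

```typescript
// What the AI suggested at the call site, in JSX:
//   <CustomButton label="Save" isLoading={isSaving} />

// What the component actually accepted (values illustrative):
type LoadingState = "idle" | "pending" | "resolved";

interface CustomButtonProps {
  label: string;
  loadingState: LoadingState; // the real prop -- an enum, not a boolean
}

// A quick runtime check surfaces the same mismatch tsc would flag:
const defaults: CustomButtonProps = { label: "", loadingState: "idle" };
const accepted = new Set(Object.keys(defaults));
const suggested = { label: "Save", isLoading: true };
const bogus = Object.keys(suggested).filter((k) => !accepted.has(k));
console.log(bogus); // ["isLoading"]
```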
The problem? CustomButton didn't have an isLoading prop. It had a loadingState enum.
The AI predicted isLoading because, statistically, that is how the vast majority of React buttons on GitHub are written. It regressed to the mean of its internet training data rather than reading the specific definition file in the project.
This happens because the definition file was outside the immediate context window. The model filled the gap with a probabilistic guess.
How Circular Dependencies Confuse Suggestions
LLMs struggle with dependency cycles because they generate text linearly, with no global view of the import graph.
In a complex backend, you often have services that depend on each other implicitly. UserModule needs AuthModule, which checks UserModule for permissions.
When you ask an AI to refactor a function involved in this cycle, it often tries to inline the logic or create a direct import that causes a circular dependency error at runtime.
It treats the code as a local optimization problem. It sees: "I need to get user permissions." It writes: "Import AuthService." It does not see: "Importing AuthService here creates a cycle that crashes the app on startup."
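One standard way out is to invert the dependency so the cycle never forms: instead of the auth side importing the user module back, it receives the permission check as a function. The sketch below collapses the two hypothetical modules into one file for brevity; all names are invented.

```typescript
// Instead of: auth.ts imports user.ts, which imports auth.ts (a cycle),
// the auth service takes the check it needs as a parameter.

type PermissionCheck = (userId: string) => boolean;

// "auth module": no direct import of the user module anymore.
function createAuthService(hasPermission: PermissionCheck) {
  return {
    canDelete(userId: string): boolean {
      return hasPermission(userId);
    },
  };
}

// "user module": owns the data and supplies the check at wiring time.
const admins = new Set(["alice"]);
const auth = createAuthService((id) => admins.has(id));

console.log(auth.canDelete("alice")); // true
console.log(auth.canDelete("bob")); // false
```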
How to Force Context Into the Window
You cannot fix the model's blindness. But you can fix what you feed it.
If you are asking an AI to write code that touches multiple files, you have to manually stuff the context window.
Open Related Files
Most coding assistants prioritize open tabs. If you are working on a controller, force-open the model file, the types file, and the utility file. Don't just rely on the AI to find them.
Flatten the Documentation
If you are working with a complex internal API, do not expect the AI to "know" how it works.
I keep a "scratchpad" file where I paste the raw TypeScript interfaces or function signatures I am working with. I feed this scratchpad to the AI as context.
If the documentation is too long, I condense it. I sometimes use a Document Summarizer to strip a 50-page documentation file down to just the key class definitions and method signatures. I paste that summary at the top of my prompt. It gives the model the map without burning the token limit.
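A scratchpad of this kind might look like the sketch below: a hypothetical Order API boiled down to shapes and one-line stubs, just enough for the model to see the real signatures.

```typescript
// scratchpad.ts -- condensed context pasted above the prompt.
// The Order API here is hypothetical; only the shapes matter.
export interface Order {
  id: string;
  userId: string;
  total: number;
}

// Bodies reduced to one-liners so the file stays compilable
// while the signatures remain visible to the model.
export function applyDiscount(order: Order, pct: number): Order {
  return { ...order, total: order.total * (1 - pct) };
}

export function orderSummary(order: Order): string {
  return `${order.id}:${order.total}`;
}
```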
Why You Must Audit Imports First
The first thing to check in any AI-generated code block is the imports.
Do not check the logic first. Check the top of the file.
Does this path exist?
Is this named export actually exported?
Is this library actually in package.json?
AI loves to import libraries that should exist but don't. It will happily import date-fns even if your project uses moment.js, simply because date-fns is popular in the training set.
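This audit can even be partially automated. A sketch (helper names are mine, not from any existing tool) that scans a generated file's import specifiers and flags packages not declared in package.json:

```typescript
// Flag imported packages that are missing from package.json dependencies.

function externalImports(source: string): string[] {
  // Match `from "pkg"` specifiers that are not relative paths.
  const re = /from\s+["']([^."'][^"']*)["']/g;
  const pkgs = new Set<string>();
  for (const m of source.matchAll(re)) {
    // Keep only the package root ("date-fns/format" -> "date-fns").
    const name = m[1].startsWith("@")
      ? m[1].split("/").slice(0, 2).join("/")
      : m[1].split("/")[0];
    pkgs.add(name);
  }
  return [...pkgs];
}

function missingDeps(
  source: string,
  packageJson: { dependencies?: Record<string, string> }
): string[] {
  const declared = new Set(Object.keys(packageJson.dependencies ?? {}));
  return externalImports(source).filter((p) => !declared.has(p));
}

// Example: the AI imported date-fns, but the project uses moment.
const generated = `import { format } from "date-fns";\nimport moment from "moment";`;
const pkg = { dependencies: { moment: "^2.29.0" } };
console.log(missingDeps(generated, pkg)); // ["date-fns"]
```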
Defining the Scope of AI Tasks
We need to stop treating AI as a "Partner" and start treating it as a "Function Generator."
If you ask AI to "Refactor the authentication flow," it will fail. The scope is too wide. The dependencies are too scattered.
If you ask AI to "Write a regex to validate this email string," it will succeed. The scope is local.
For the in-between tasks—like refactoring a class—you need to break it down.
I use a Task Prioritizer workflow for this. I list the dependencies involved in the refactor, map out the order of operations (e.g., "Update Interface -> Update Service -> Update Controller"), and then execute them one by one. I don't let the AI plan the refactor. I only let it type the characters.
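For contrast, the "local scope" task mentioned above really is where suggestions shine. A sketch of what that looks like — the pattern is intentionally loose, and real validation should lean on a library or RFC 5322:

```typescript
// A narrow, self-contained task the AI handles well: a rough email check.
// Deliberately permissive: one @, no whitespace, a dot in the domain.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isEmailish(s: string): boolean {
  return EMAIL_RE.test(s);
}

console.log(isEmailish("dev@example.com")); // true
console.log(isEmailish("not-an-email")); // false
```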
The Codebase is the Truth
The most important lesson for working with AI in a large repo is this: The AI does not know your codebase. It only knows your open file.
Every time you accept a suggestion, you are bridging the gap between the AI's training data (the internet average) and your specific reality.
If you treat the suggestion as a "draft" rather than a "solution," you catch the hallucinations early. If you treat it as correct by default, you will spend your entire afternoon debugging imports that don't exist.
-Leena:)