
supal vasani

Posted on • Originally published at boundaryblindness.hashnode.dev

Boundary Blindness: Why LLMs Struggle with Encapsulation and Scope

Failure Modes of LLM-Assisted Codebases (2/n)

In physical architecture, walls separate rooms and divide the functions of a house.
A codebase has the same kind of logical distance: a file in src/utils is logically far from a file in src/features/billing.

We separate concerns so that authentication logic doesn’t leak into payments and UI components don’t reason about database state. Encapsulation is how we prevent complexity from spreading.

But when you feed your whole codebase into a long-context LLM, it loses track of where logic actually exists.

An Example

Imagine you provide an LLM with two files: AccountService.ts and ProfilePage.tsx.

AccountService.ts contains a private, fragile method calculateInterestInternal() and a private field _balance. It also exposes a public API called getPublicBalance().

ProfilePage.tsx is a UI component that should only display the final value.
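
A minimal sketch of that setup in code (the method and field names come from the description above; the interest formula and the starting balance are placeholders I'm assuming for illustration):

```typescript
// AccountService.ts
export class AccountService {
  private _balance = 1000; // internal state, not part of the contract

  // Private and fragile: the formula here is only a placeholder.
  private calculateInterestInternal(): number {
    return this._balance * 0.02;
  }

  // The public API the rest of the codebase is supposed to call.
  getPublicBalance(): number {
    return this._balance + this.calculateInterestInternal();
  }
}
```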

You give the prompt: “Show the user’s balance on the profile page and add an ‘Interest Earned’ label.” The LLM sees the _balance field and the calculateInterestInternal() logic. Instead of calling the public API, it re-derives or directly calls the internal interest logic inside the UI, bypassing the public API entirely. The result: nothing crashes, the code passes the linters, and the UI now depends on internal finance logic.
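
The generated ProfilePage.tsx might look something like this (a sketch of typical model output, assuming a React component; the `as any` escape hatch and the duplicated formula are illustrative):

```tsx
// ProfilePage.tsx -- plausible model output (the bad version)
import * as React from "react";
import { AccountService } from "./AccountService";

export function ProfilePage({ account }: { account: AccountService }) {
  // Reaches past the public API: reads private state and re-derives
  // the interest formula it saw in AccountService.ts.
  const balance = (account as any)._balance as number; // defeats `private`
  const interest = balance * 0.02;                      // duplicated internal logic

  return (
    <div>
      <p>Balance: {balance + interest}</p>
      <p>Interest Earned: {interest}</p>
    </div>
  );
}
```

Notice that getPublicBalance() never exposed interest on its own, so the shortest visible route to the requested label ran straight through the internals instead of extending the service’s public surface.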

The real problem arrives when you refactor the interest calculation: the backend is correct, but the UI gives the wrong output because it is running different logic. This is boundary blindness. It is not a bug; it is architecture damage.

The “Global Scope” Illusion

In a Transformer, directories, files, and modules do not exist as constraints. There is no tree or graph of ownership, only a flat probability surface. LLMs use self-attention: each token in the context window calculates a relationship with every other token and asks, “Which other tokens help me predict this token?” It does not ask, “Does another method already exist? Is this allowed? Will this violate the architecture?” Problems arise when sensitive or internal information is used directly instead of the public methods or APIs that wrap it.
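
As a toy sketch of that computation (scaled dot-product attention, heavily simplified; real models add learned projections, multiple heads, and positional information), notice that nothing here takes a file or module boundary as input:

```typescript
// Attention is computed over one flat token sequence.
// File boundaries are simply not part of this calculation.
function softmax(xs: number[]): number[] {
  const m = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function dot(a: number[], b: number[]): number {
  return a.reduce((acc, x, i) => acc + x * b[i], 0);
}

// One attention row: how much each context token matters when predicting
// the current token. Nothing in here knows about files, layers, or `private`.
function attentionRow(query: number[], keys: number[][]): number[] {
  const scale = Math.sqrt(query.length);
  return softmax(keys.map((k) => dot(query, k) / scale));
}
```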

Visibility Becomes Permission

In traditional engineering, we hide information; in a transformer, everything shares one global vector space.
The "Agreement": we hide implementation details behind an interface (API) and trust that the consumer of the API cannot see how things work behind the scenes.

The LLM Violation: When you provide the implementation file in the context window, the LLM "sees through" the interface. It sees the raw logic.
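
Put differently, the “agreement” is a type surface like the sketch below (the AccountApi name is a placeholder of mine), while the context window carries the full implementation file:

```typescript
// The "agreement": consumers are only supposed to know this surface.
export interface AccountApi {
  getPublicBalance(): number;
}

// The context window, however, usually contains AccountService.ts itself,
// so _balance and calculateInterestInternal() arrive as ordinary tokens
// with no "keep out" signal attached to them.
```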

Why the Model Chooses Internals

LLMs choose the path of least resistance. If a private function like calculateInterestInternal() is more semantically relevant to the prompt, or simply closer, than the public API getPublicBalance(), the model skips the public layer, because the internal logic sits closer to the desired outcome in vector space.

The model optimizes for semantic proximity, not architectural intent.

Failure Modes:

1) Identity Confusion

If you have a UserValidator in Auth and a UserValidator in Shipping, the model sees the naming pattern and merges them. It produces a hybrid validator where your login flow suddenly checks shipping ZIP code constraints.
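
A sketch of that merge (the individual validation rules are invented for illustration):

```typescript
// Given auth/UserValidator.ts (email + password rules) and
// shipping/UserValidator.ts (ZIP code rules) both in context,
// the model may emit a hybrid like this for the *login* flow:
export class UserValidator {
  validate(user: { email: string; password: string; zip: string }): boolean {
    return (
      user.email.includes("@") &&
      user.password.length >= 12 &&
      /^\d{5}$/.test(user.zip) // shipping constraint leaking into auth
    );
  }
}
```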

2) Internal State Leakage

UI or controller code uses internal state instead of public APIs. This leads to loss of a single source of truth, desynchronized behavior, and refactors that never finish.

3) Logic Forking

The model reimplements logic instead of calling it. Because the model does not understand “single source of truth,” it assumes the logic exists somewhere and recreates a similar version. Now there are two implementations, and they will not stay in agreement for long.
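
For example (the helper names and paths here are hypothetical):

```typescript
// src/utils/currency.ts -- the existing single source of truth
export function formatCurrency(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// src/features/billing/Invoice.ts -- the model recreates a near-copy
// instead of importing the helper; the two will drift the first time
// rounding or locale rules change.
function formatMoney(cents: number): string {
  return "$" + (cents / 100).toFixed(2);
}
```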

4) Layer Inversion

Because the model sees capability rather than responsibility, it makes the UI validate business rules, has services create new models, and lets tests encode production rules. The system runs, but responsibilities are inverted, leading to classic architectural rot.
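
A small sketch of that inversion (the component and the order rule are hypothetical):

```tsx
// CheckoutButton.tsx -- a domain rule that belongs in the order service
// has migrated into the UI, simply because the model saw it was possible here.
import * as React from "react";

export function CheckoutButton({ total, items }: { total: number; items: number }) {
  const canCheckout = total >= 50 && items <= 20; // business rule living in a component
  return <button disabled={!canCheckout}>Place order</button>;
}
```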

5) Contextual Over-Pollution

Large (200k+ token) context windows remove locality. The model forgets which file it is editing and which layer it is in, and it adds unnecessary utilities simply because they are available.

Why Reviewers Miss This

Reviewers usually check:

  • Does it work?
  • Does it read well?
  • Does it pass tests?

They don’t check:

Where was this logic supposed to live?
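
One way to make that question at least partly machine-checkable is a boundary rule in the linter. A minimal sketch using ESLint's built-in no-restricted-imports rule, with placeholder paths (not something the example project above is assumed to have):

```js
// .eslintrc.js (sketch) -- fail the build when code imports another
// module's internals instead of its public API.
module.exports = {
  rules: {
    "no-restricted-imports": [
      "error",
      {
        patterns: [
          {
            group: ["**/internal/*"],
            message: "Import the module's public API instead of its internals.",
          },
        ],
      },
    ],
  },
};
```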

Conclusion

If Temporal Collapse (Post 1) makes the AI forget when code was written, Boundary Blindness makes it forget where code belongs.

This is not hallucination, lack of knowledge, or misunderstanding syntax.
It is correct reasoning applied in the wrong place, which leads directly to invariants becoming implicit.

In the next part of this series, we will look at Post 3: Invariant Decay, and how the loss of boundaries leads to the quiet erosion of a system’s core rules.

Follow along for the rest of the series on engineering-grade AI development.
