In my earlier blog, I wrote about architectural drift and how AI, while accelerating delivery, can quietly push systems away from their original intent. This post is not meant to counter that argument or claim a fix. Instead, it tries to answer a more grounded question: if drift is inevitable, how do we reduce it when using AI for small, everyday change requests?
This is written from observation, not authority. Most of what follows comes from noticing patterns—what tends to go wrong, what helps a little, and what at least makes the damage visible sooner.
Architecture, in both the earlier blog and this one, does not mean diagrams or heavyweight documentation. It’s not about how neat the boxes look. Architecture here means the decisions that make change safe: where boundaries exist, who owns what responsibility, how data flows, and which assumptions the system quietly depends on to behave predictably. When those decisions are not preserved, AI doesn’t break the system immediately—it just optimizes locally until the global shape no longer makes sense.
One important shift in mindset helped me personally. Instead of treating AI as a faster way to generate code, I started treating it as something that needs to be constrained before it can be helpful. Most AI-assisted changes in real systems are small. But even small changes, when repeated without architectural awareness, compound into drift. So the goal stopped being “get the correct output” and became “preserve the shape while making the change.”
This is where non-negotiables start to matter. These are not suggestions or preferences. They are constraints that must hold true regardless of the change being requested: not introducing new service boundaries, not duplicating business logic, not altering data contracts, and not crossing architectural layers. When these are left implicit, AI fills in the gaps with assumptions that feel reasonable in isolation but are harmful in aggregate.
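One way to make this concrete is to keep the non-negotiables as a reusable preamble that gets prepended to every change-request prompt. The sketch below is illustrative, not a standard; the exact wording is an assumption you would adapt to your own system:

```python
# Illustrative only: a reusable preamble of non-negotiables, prepended
# to every AI change-request prompt. The wording is an assumption;
# adapt it to your own architecture.
NON_NEGOTIABLES = """\
The following constraints are non-negotiable and must be preserved:
- Do not introduce new service boundaries.
- Do not duplicate business logic; reuse or extend what already exists.
- Do not alter data contracts (schemas, payloads, API shapes).
- Do not cross architectural layers (e.g. no decision-making in adapters).
"""
```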
Another lesson was where these constraints belong. They cannot be an afterthought. The architectural intent has to be part of the original prompt, not something clarified later. Each change request should start from the same foundation of constraints, explain the impact first, and only then move toward implementation. Once a change is applied on shaky intent, fixing it later almost always costs more.
Code duplication deserves special mention because it’s a subtle failure mode. AI is very good at re-implementing logic in slightly cleaner ways. Without explicit instructions, it will create parallel paths that look fine but slowly fragment behavior. If reuse matters—and it usually does—it needs to be said clearly. “Reuse existing logic.” “Do not create parallel implementations.” “Extend, don’t replicate.” These aren’t things the model reliably infers on its own.
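Here is a hypothetical Python sketch of that failure mode; all function names are invented for illustration:

```python
# Existing, centralized validation logic:
def validate_order(order: dict) -> list[str]:
    errors = []
    if not order.get("customer_id"):
        errors.append("customer_id is required")
    return errors

# What an unconstrained AI often produces: a parallel path that looks
# fine in isolation but lets the two validators diverge over time.
def check_order_for_shipping(order: dict) -> list[str]:
    errors = []
    if not order.get("customer_id"):   # re-implemented; will drift
        errors.append("missing customer id")
    return errors

# What "extend, don't replicate" asks for instead: one centralized path.
def validate_order_for_shipping(order: dict) -> list[str]:
    errors = validate_order(order)     # reuse the existing mechanism
    if not order.get("shipping_address"):
        errors.append("shipping_address is required")
    return errors
```

The middle function compiles and passes review. The fragmentation only shows up later, when one validator is updated and the other is not.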
Context windows add another layer of complexity. The AI can only reason over what it currently sees. Feeding it large files or entire repositories often backfires, because constraints pushed earlier in the conversation can silently drop out of scope. A more realistic approach is to assume the model only has visibility into a small set of files and to name them explicitly. Architecture then becomes a compact working context, not a document dump.
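One way to operationalize this is to build the context deliberately from a short allowlist of files rather than handing over the repository. A minimal sketch, assuming the three file names used in the example prompt at the end of this post:

```python
from pathlib import Path

# "Architecture as compact working context": name the files the model
# is allowed to see instead of dumping the whole repository.
SCOPE = ["OrderService.py", "OrderValidator.py", "OrderRepository.py"]

def build_context(repo_root: str) -> str:
    parts = []
    for name in SCOPE:
        source = (Path(repo_root) / name).read_text()
        parts.append(f"### {name}\n{source}")
    return "\n\n".join(parts)
```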
A simple prompt structure helped more than expected. Something along the lines of stating the non-negotiables first, defining the assumed scope, and asking the model to explain architectural impact before writing code. Not because it makes the AI smarter, but because it forces a pause before execution. That pause alone reduces accidental drift.
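That ordering can be captured in a small helper. The parameter names below are illustrative, and the full plain-text version of such a prompt appears at the end of this post:

```python
# A sketch of the ordering described above: constraints first, scope
# second, impact analysis before any code. Parameter names are
# illustrative; the constraint and scope text come from wherever you
# keep them (see the earlier sketches).
def build_change_prompt(constraints: str, scoped_files: str, request: str) -> str:
    return (
        f"{constraints}\n\n"                        # 1. non-negotiables up front
        "Assume your effective context is limited to these files:\n"
        f"{scoped_files}\n\n"                       # 2. explicit, named scope
        f"Change request: {request}\n\n"
        # 3. force the pause: impact analysis before implementation
        "Before writing any code, state the architectural layer this change "
        "belongs to and whether it risks violating any constraint above. "
        "If it does, stop and explain the risk instead of implementing."
    )
```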
It’s important to be honest about the limits of this approach. None of this is a cure. Architectural drift is a natural property of evolving systems. The goal here isn’t to prevent change, but to ensure change happens deliberately rather than by inference. These practices don’t eliminate risk; they just make it visible earlier, while correction is still possible.
AI is exceptionally good at moving systems forward. Architecture exists to make sure we don’t move forward blindly. If architectural intent isn’t carried explicitly into our prompts, the system will still evolve—just not in a direction we consciously chose.
This isn’t about control.
It’s about preserving shape while allowing change.
Example Architecturally Constrained Change Request Prompt
You are implementing a small, incremental change to an existing system.
The following architectural constraints are non-negotiable and must be preserved:

- The system follows a layered architecture.
- Business logic resides exclusively in the domain layer.
- Adapters are responsible only for data translation, not decision-making.
- Existing service boundaries and data contracts must remain unchanged.
- No new abstractions, services, or parallel implementations may be introduced.
Assume your effective context is limited to the following files:
OrderService.py, OrderValidator.py, OrderRepository.py.
Do not infer, recreate, or duplicate logic that may exist outside this scope.
Change request:
Add validation for a new optional field in the order payload.
Reuse existing validation mechanisms wherever applicable.
Avoid duplication and ensure behavior remains centralized.
Before implementing the change, explicitly state:

- the architectural layer where the change belongs
- whether the change risks violating any constraint listed above
If a violation is identified, stop and explain the risk.
If no violation exists, implement the minimal code change required, ensuring the system’s architectural shape remains intact.
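For concreteness, here is what a compliant outcome of that prompt might look like. The actual contents of OrderValidator.py are not shown in this post, so the class structure and the field name below are hypothetical; the point is that the new rule lands inside the existing domain-layer validator rather than in a new parallel path:

```python
# Hypothetical sketch of the minimal change inside OrderValidator.py.
# The validate() structure and the gift_message field are assumptions.

class OrderValidator:
    def validate(self, payload: dict) -> list[str]:
        errors = []
        if not payload.get("order_id"):
            errors.append("order_id is required")
        # New rule added to the existing centralized validator:
        errors.extend(self._validate_gift_message(payload))
        return errors

    def _validate_gift_message(self, payload: dict) -> list[str]:
        # Optional field: absence is fine; only validate when present.
        value = payload.get("gift_message")
        if value is None:
            return []
        if not isinstance(value, str) or len(value) > 200:
            return ["gift_message must be a string of at most 200 characters"]
        return []
```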