<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Duncan Brown</title>
    <description>The latest articles on DEV Community by Duncan Brown (@dbrown).</description>
    <link>https://dev.to/dbrown</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2661776%2F79913167-3588-4d5c-ad88-40b4aa347461.jpg</url>
      <title>DEV Community: Duncan Brown</title>
      <link>https://dev.to/dbrown</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dbrown"/>
    <language>en</language>
    <item>
      <title>AI Works Better When Behaviour Is Explicit</title>
      <dc:creator>Duncan Brown</dc:creator>
      <pubDate>Mon, 06 Apr 2026 16:53:29 +0000</pubDate>
      <link>https://dev.to/dbrown/ai-works-better-when-behaviour-is-explicit-35fp</link>
      <guid>https://dev.to/dbrown/ai-works-better-when-behaviour-is-explicit-35fp</guid>
      <description>&lt;p&gt;AI coding assistants are generally quite good at producing code.&lt;/p&gt;

&lt;p&gt;However, they are less reliable when they have to decide what that code should do.&lt;/p&gt;

&lt;p&gt;In other words, they struggle less with syntax than with intent.&lt;/p&gt;

&lt;p&gt;Given a clear description of behaviour, an assistant can often produce a reasonable implementation. Given an ambiguous one, it will still produce something — but that “something” may not align with what was actually intended.&lt;/p&gt;

&lt;p&gt;This is a consequence of how we describe systems, not a failure of the model.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Problem Is Ambiguity
&lt;/h2&gt;

&lt;p&gt;In most codebases, behaviour is only partially explicit.&lt;/p&gt;

&lt;p&gt;Some of it lives in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;method names
&lt;/li&gt;
&lt;li&gt;comments
&lt;/li&gt;
&lt;li&gt;documentation
&lt;/li&gt;
&lt;li&gt;tests
&lt;/li&gt;
&lt;li&gt;conversations between developers
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rest is assumed.&lt;/p&gt;

&lt;p&gt;Those assumptions work reasonably well when the same people are working on the system and context is shared. They break down as soon as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;new developers join
&lt;/li&gt;
&lt;li&gt;features are modified across teams
&lt;/li&gt;
&lt;li&gt;systems grow in scope
&lt;/li&gt;
&lt;li&gt;or code is generated rather than written
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI coding assistants make the breakdown more visible.&lt;/p&gt;

&lt;p&gt;They don’t share your assumptions.&lt;/p&gt;

&lt;p&gt;They operate on what is written, not what is implied.&lt;/p&gt;

&lt;p&gt;Try this experiment: pick an open-source project and scan its code, PRs, and tickets for the kinds of artifacts mentioned above.&lt;/p&gt;

&lt;p&gt;Are you able to write a functional specification that perfectly describes the system’s intended behaviour? Do you think a coding agent would perform any better?&lt;/p&gt;




&lt;h2&gt;
  
  
  Where BDD Fits
&lt;/h2&gt;

&lt;p&gt;Behaviour-driven development is often discussed as a testing technique.&lt;/p&gt;

&lt;p&gt;More accurately, it is a way of making behaviour explicit.&lt;/p&gt;

&lt;p&gt;A scenario like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gherkin"&gt;&lt;code&gt;&lt;span class="nf"&gt;Given &lt;/span&gt;a document is submitted
&lt;span class="nf"&gt;When &lt;/span&gt;it is reviewed
&lt;span class="nf"&gt;Then &lt;/span&gt;it should receive a score between 0 and 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;doesn’t describe implementation.&lt;/p&gt;

&lt;p&gt;It describes intent, and that distinction matters.&lt;/p&gt;

&lt;p&gt;When behaviour is expressed this way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;there is less room for interpretation
&lt;/li&gt;
&lt;li&gt;there are fewer implicit assumptions
&lt;/li&gt;
&lt;li&gt;and there exists a clearer boundary between what the system does and how it does it
&lt;/li&gt;
&lt;/ul&gt;
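&lt;p&gt;As a minimal sketch of that boundary, the scenario above can be pinned down as an executable contract. (The &lt;code&gt;review_document&lt;/code&gt; function and its word-count scoring heuristic are illustrative placeholders, not from any real codebase.)&lt;/p&gt;

```python
# Minimal sketch: the scenario's outcome expressed as an executable check.
# review_document and its scoring heuristic are hypothetical placeholders.

def review_document(text):
    """Review a submitted document and return a score between 0 and 100."""
    score = min(len(text.split()), 100)  # placeholder scoring logic
    # The Then-clause of the scenario, enforced at the boundary:
    assert score in range(0, 101), "score must be between 0 and 100"
    return score
```

&lt;p&gt;However the scoring is eventually implemented, the implementer&amp;#8217;s job — human or AI — is reduced to satisfying a stated outcome rather than inventing one.&lt;/p&gt;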

&lt;p&gt;That clarity certainly benefits humans, but it also benefits systems that generate code.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters for AI-Assisted Development
&lt;/h2&gt;

&lt;p&gt;When behaviour is &lt;em&gt;implicit&lt;/em&gt;, AI has to infer intent.&lt;/p&gt;

&lt;p&gt;That inference is where things start to go wrong.&lt;/p&gt;

&lt;p&gt;An assistant may:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;implement the “most likely” interpretation
&lt;/li&gt;
&lt;li&gt;generalize beyond what was intended
&lt;/li&gt;
&lt;li&gt;introduce edge cases that were never discussed
&lt;/li&gt;
&lt;li&gt;or omit constraints that were never stated explicitly
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result often looks reasonable in isolation, but it may not match the actual expectations of the system.&lt;/p&gt;

&lt;p&gt;When behaviour is expressed explicitly — for example, through Gherkin-style scenarios — that ambiguity is reduced.&lt;/p&gt;

&lt;p&gt;The assistant no longer has to guess what the system should do.&lt;/p&gt;

&lt;p&gt;It can focus on how to implement what has already been defined.&lt;/p&gt;

&lt;p&gt;This shifts the problem from interpretation to execution as we move from an &lt;em&gt;imperative&lt;/em&gt; style to a &lt;em&gt;declarative&lt;/em&gt; style.&lt;/p&gt;




&lt;h2&gt;
  
  
  BDD as a Constraint System
&lt;/h2&gt;

&lt;p&gt;In previous discussions, I’ve described architecture as a constraint system.&lt;/p&gt;

&lt;p&gt;Patterns like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bounded contexts
&lt;/li&gt;
&lt;li&gt;aggregates
&lt;/li&gt;
&lt;li&gt;dependency direction
&lt;/li&gt;
&lt;li&gt;ubiquitous language
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;all restrict how a system is allowed to evolve.&lt;/p&gt;

&lt;p&gt;Behaviour-driven development introduces another form of constraint:&lt;/p&gt;

&lt;p&gt;It constrains &lt;strong&gt;behaviour&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A well-defined set of scenarios limits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what the system is &lt;strong&gt;expected&lt;/strong&gt; to do
&lt;/li&gt;
&lt;li&gt;how it &lt;strong&gt;should respond under specific conditions&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;and what &lt;strong&gt;outcomes&lt;/strong&gt; are considered &lt;strong&gt;valid&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These constraints operate at a different level than architectural boundaries, but they serve the same purpose.&lt;/p&gt;

&lt;p&gt;They reduce the space in which incorrect changes can occur.&lt;/p&gt;

&lt;p&gt;For humans, this improves communication.&lt;/p&gt;

&lt;p&gt;For AI-assisted workflows, it reduces guesswork.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tooling, Not the Point
&lt;/h2&gt;

&lt;p&gt;Frameworks like Cucumber or other Gherkin-based tools are often used to execute these scenarios.&lt;/p&gt;

&lt;p&gt;That’s useful, but it’s not the most important part.&lt;/p&gt;

&lt;p&gt;The primary value of BDD in this context is not test execution.&lt;/p&gt;

&lt;p&gt;It’s the act of making behaviour explicit.&lt;/p&gt;

&lt;p&gt;You can get much of the benefit even without a full BDD toolchain, as long as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;behaviour is clearly described
&lt;/li&gt;
&lt;li&gt;expectations are shared
&lt;/li&gt;
&lt;li&gt;and scenarios are treated as part of the system’s definition
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tooling helps enforce it, similarly to how we might &lt;a href="https://dev.to/dbrown/architecture-is-a-constraint-system-3cb1"&gt;use ArchUnit to enforce architectural constraints&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The clarity is what makes it work.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where This Helps — and Where It Doesn’t
&lt;/h2&gt;

&lt;p&gt;Making behaviour explicit improves outcomes, but it does &lt;strong&gt;not&lt;/strong&gt; eliminate the need for discipline.&lt;/p&gt;

&lt;p&gt;BDD does not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;define architectural boundaries
&lt;/li&gt;
&lt;li&gt;prevent poor domain modelling
&lt;/li&gt;
&lt;li&gt;or replace the need for governance
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It complements those things.&lt;/p&gt;

&lt;p&gt;It works best when combined with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/dbrown/architecture-is-a-constraint-system-3cb1"&gt;clear architectural constraints&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;a &lt;a href="https://dev.to/dbrown/ai-coding-assistants-and-the-erosion-of-ubiquitous-language-301a"&gt;well-defined ubiquitous language&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;and &lt;a href="https://dev.to/dbrown/ai-governance-doesnt-need-to-start-big-30e"&gt;enforcement mechanisms&lt;/a&gt; that keep the system aligned over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without those, even well-written scenarios can (and do) drift.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;AI coding assistants are not inherently unreliable.&lt;/p&gt;

&lt;p&gt;As anyone who's used one knows, they are sensitive to ambiguity.&lt;/p&gt;

&lt;p&gt;When intent is &lt;em&gt;implicit&lt;/em&gt;, they &lt;strong&gt;infer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When behaviour is &lt;em&gt;explicit&lt;/em&gt;, they &lt;strong&gt;implement&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Behaviour-driven development is an excellent way to make that intent visible.&lt;/p&gt;

&lt;p&gt;Not as a testing technique alone, but as a constraint on what the system is supposed to do.&lt;/p&gt;

&lt;p&gt;In systems that evolve quickly — whether through teams, automation, or AI-assisted development — that constraint becomes increasingly valuable.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>ai</category>
      <category>testing</category>
      <category>ddd</category>
    </item>
    <item>
      <title>AI Governance Doesn’t Need to Start Big</title>
      <dc:creator>Duncan Brown</dc:creator>
      <pubDate>Mon, 23 Mar 2026 15:46:20 +0000</pubDate>
      <link>https://dev.to/dbrown/ai-governance-doesnt-need-to-start-big-30e</link>
      <guid>https://dev.to/dbrown/ai-governance-doesnt-need-to-start-big-30e</guid>
      <description>&lt;p&gt;I was recently contacted by a professional on LinkedIn about my experience with commercial AI governance platforms.&lt;/p&gt;

&lt;p&gt;The assumption behind the question was clear: that “AI governance” is something that requires a formal product, a structured framework, or a sufficiently large organization before it becomes relevant.&lt;/p&gt;

&lt;p&gt;In my experience, that assumption is backwards.&lt;/p&gt;

&lt;p&gt;Governance doesn’t begin when you adopt a platform; rather, it begins the moment you introduce AI into a system.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Common Assumption
&lt;/h2&gt;

&lt;p&gt;There’s a tendency to think about governance as something that arrives later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;once the system becomes complex enough
&lt;/li&gt;
&lt;li&gt;once there are enough users
&lt;/li&gt;
&lt;li&gt;once risk becomes visible
&lt;/li&gt;
&lt;li&gt;once the organization can justify the investment
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, teams start evaluating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;governance frameworks
&lt;/li&gt;
&lt;li&gt;compliance tooling
&lt;/li&gt;
&lt;li&gt;vendor platforms
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Until then, governance is often treated as optional, or deferred entirely.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With Waiting
&lt;/h2&gt;

&lt;p&gt;The issue with this approach is not that governance tools are unnecessary.&lt;/p&gt;

&lt;p&gt;It’s that delaying governance allows systems to evolve without constraints. Even a well-architected solution, which serves as a "front line" constraint, can allow AI features and integrations to drift in ways that weren't initially accounted for.&lt;/p&gt;

&lt;p&gt;And once patterns are established — even informal ones — they tend to persist.&lt;/p&gt;

&lt;p&gt;You start to see things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI-related changes bypassing normal review processes
&lt;/li&gt;
&lt;li&gt;prompt or instruction updates made without traceability
&lt;/li&gt;
&lt;li&gt;unclear ownership of AI-driven behaviour
&lt;/li&gt;
&lt;li&gt;inconsistent handling of data boundaries or outputs
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these decisions are individually catastrophic.&lt;/p&gt;

&lt;p&gt;But they accumulate.&lt;/p&gt;

&lt;p&gt;By the time governance is introduced formally, it’s often correcting an already established system rather than guiding its evolution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Start Small, But Start Explicitly
&lt;/h2&gt;

&lt;p&gt;Governance does not need to begin as a framework or a product.&lt;/p&gt;

&lt;p&gt;It can begin as a small set of explicit practices.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign a clear owner for each AI feature
&lt;/li&gt;
&lt;li&gt;Require pull request review for prompt or instruction changes
&lt;/li&gt;
&lt;li&gt;Define unacceptable outputs before releasing a feature
&lt;/li&gt;
&lt;li&gt;Log prompts and outputs for later inspection
&lt;/li&gt;
&lt;li&gt;Establish basic rules for what data is allowed to reach the model
&lt;/li&gt;
&lt;/ul&gt;
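&lt;p&gt;The logging practice in particular is cheap to start. Here is a sketch, assuming nothing more than a JSON-lines file; the function and field names are illustrative, not a prescribed schema:&lt;/p&gt;

```python
import json
import time

def log_interaction(log_path, feature_owner, prompt, output):
    """Append one prompt/output pair as a JSON line for later inspection."""
    record = {
        "ts": time.time(),       # when the interaction happened
        "owner": feature_owner,  # who is accountable for this AI feature
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record))
        f.write("\n")
```

&lt;p&gt;A flat file is enough at first. The point is that every prompt change and model response leaves a trace that review processes can see.&lt;/p&gt;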

&lt;p&gt;None of these require a platform.&lt;/p&gt;

&lt;p&gt;None of them require a large organization.&lt;/p&gt;

&lt;p&gt;But each one introduces a constraint, and those constraints shape how the system evolves (noticing a pattern here?).&lt;/p&gt;




&lt;h2&gt;
  
  
  Governance as an Iterative System
&lt;/h2&gt;

&lt;p&gt;These practices don’t need to be complete from the start.&lt;/p&gt;

&lt;p&gt;They can (and should) evolve.&lt;/p&gt;

&lt;p&gt;As the system grows, you might add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;more structured evaluation processes
&lt;/li&gt;
&lt;li&gt;clearer data classification rules
&lt;/li&gt;
&lt;li&gt;formal review cadences
&lt;/li&gt;
&lt;li&gt;stronger enforcement through tooling
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At some point, a governance platform may make sense.&lt;/p&gt;

&lt;p&gt;But by then, it is supporting an existing set of practices rather than defining them.&lt;/p&gt;

&lt;p&gt;That distinction matters.&lt;/p&gt;

&lt;p&gt;A tool can reinforce governance, but it cannot replace it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where This Fits Architecturally
&lt;/h2&gt;

&lt;p&gt;In previous posts, I’ve written about architectural constraints — boundaries, layering, and ubiquitous language.&lt;/p&gt;

&lt;p&gt;Governance plays a similar role.&lt;/p&gt;

&lt;p&gt;It constrains how a system is allowed to change.&lt;/p&gt;

&lt;p&gt;Without governance, systems drift.&lt;/p&gt;

&lt;p&gt;With even minimal governance, that drift slows down: not because change is prevented, but because change becomes deliberate.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Practical Way to Think About Readiness
&lt;/h2&gt;

&lt;p&gt;One of the reasons I built a small &lt;a href="https://ai-readiness.duncanbrown.tech" rel="noopener noreferrer"&gt;AI readiness assessment tool&lt;/a&gt; was to make these governance dimensions more visible.&lt;/p&gt;

&lt;p&gt;Most teams can answer questions about models or infrastructure.&lt;/p&gt;

&lt;p&gt;Fewer can answer questions about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ownership
&lt;/li&gt;
&lt;li&gt;traceability
&lt;/li&gt;
&lt;li&gt;data boundaries
&lt;/li&gt;
&lt;li&gt;failure handling
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Those&lt;/em&gt; are the questions that tend to matter later.&lt;/p&gt;

&lt;p&gt;If you’re evaluating AI readiness, it’s worth starting there.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;AI governance doesn’t need to feel like a large, future initiative.&lt;/p&gt;

&lt;p&gt;It can start with a handful of explicit rules.&lt;/p&gt;

&lt;p&gt;The important part is not completeness.&lt;/p&gt;

&lt;p&gt;It’s that the system begins with constraints.&lt;/p&gt;

&lt;p&gt;From there, governance can evolve alongside the system itself. And if a formal governance product or framework eventually becomes helpful, integrating it into an already-constrained system is much smoother.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>architecture</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Architecture Is a Constraint System</title>
      <dc:creator>Duncan Brown</dc:creator>
      <pubDate>Tue, 10 Mar 2026 14:58:17 +0000</pubDate>
      <link>https://dev.to/dbrown/architecture-is-a-constraint-system-3cb1</link>
      <guid>https://dev.to/dbrown/architecture-is-a-constraint-system-3cb1</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Good architecture is about more than just structure: It’s about constraints that limit how a system can decay as it evolves.  &lt;/p&gt;

&lt;p&gt;Patterns from domain-driven design such as bounded contexts, aggregates, and ubiquitous language work because they restrict how change can occur.&lt;/p&gt;




&lt;p&gt;When people talk about software architecture, the conversation usually centers on structure.&lt;/p&gt;

&lt;p&gt;We draw diagrams showing layers, services, modules, and boundaries. We talk about patterns like ports and adapters or hexagonal architecture. The focus tends to be on &lt;em&gt;how the system is arranged&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;But arrangement is only the visible part of architecture.&lt;/p&gt;

&lt;p&gt;The deeper role of architecture is to define how a system is allowed to evolve.&lt;/p&gt;

&lt;p&gt;Every architectural choice introduces constraints: rules about which components may depend on others, how concepts are expressed in code, and where certain kinds of logic are permitted to live. These constraints rarely attract much attention when things are going well. They mostly become visible when they are violated. That's a big part of their job, after all.&lt;/p&gt;

&lt;p&gt;Without them, systems still function. Code compiles. Features ship. But the system gradually becomes harder to reason about, especially for humans. Dependencies spread, terminology drifts, and the boundaries that once made the design understandable begin to blur.&lt;/p&gt;

&lt;p&gt;Well-designed architecture slows that process down: Not by preventing change, but by narrowing the range of changes that are possible without deliberate effort. A good architecture also makes it easy to add to the system and evolve it.&lt;/p&gt;

&lt;p&gt;This is why architectural discipline matters most when a system begins evolving quickly. The faster a system changes, the more valuable those constraints become.&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraints in Practice
&lt;/h2&gt;

&lt;p&gt;Architectural constraints often appear in patterns that experienced developers recognize immediately, even if we rarely describe them explicitly as constraints.&lt;/p&gt;

&lt;p&gt;Consider dependency direction in a layered or hexagonal system: In a ports-and-adapters architecture, the domain model defines the core policy of the system, while infrastructure implements the mechanisms that support it. Dependencies flow inward toward the domain, not outward from it.&lt;/p&gt;

&lt;p&gt;That rule is not just stylistic. It constrains how the system evolves. When infrastructure concerns leak into the domain layer — logging, persistence frameworks, messaging libraries, etc. — the domain model becomes coupled to its environment, something most architectures work to avoid. Over time, the domain stops representing the business and starts representing the technology stack. Whether in focused microservices or expansive monoliths, the result is the same.&lt;/p&gt;

&lt;p&gt;Domain-driven design introduces several patterns that reinforce this same idea.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bounded contexts&lt;/strong&gt; constrain where particular models are allowed to exist. A concept that is meaningful in one context may not belong in another. Crossing that boundary without translation creates ambiguity. It also potentially allows for duplicated, unaligned implementations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anti-corruption layers&lt;/strong&gt; (ACLs) provide another form of constraint: They prevent external models from directly shaping or unduly influencing the internal domain model, forcing integration to occur through a deliberate translation step. Without that boundary, external concepts tend to seep into the core model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aggregates&lt;/strong&gt; also function as constraints. They define consistency boundaries within the domain, limiting where invariants &lt;em&gt;must&lt;/em&gt; hold and how state changes are coordinated. This restricts how data can be modified and prevents uncontrolled coupling between entities.&lt;/p&gt;

&lt;p&gt;Even a &lt;strong&gt;ubiquitous language&lt;/strong&gt; acts as a constraint system. When domain terminology is shared between developers, domain experts, and stakeholders, the codebase becomes an executable form of the language of the business. Renaming or redefining those concepts becomes a deliberate architectural decision rather than a casual refactor.&lt;/p&gt;

&lt;p&gt;None of these patterns exist merely to make diagrams "look neat."&lt;/p&gt;

&lt;p&gt;They restrict how change can occur, and that restriction is precisely what makes systems of all sizes understandable over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Constraints Are Missing
&lt;/h2&gt;

&lt;p&gt;When these constraints are weak - or, worse, implicit - systems do not fail immediately.&lt;/p&gt;

&lt;p&gt;They drift, as I've discussed in some of my earlier posts.&lt;/p&gt;

&lt;p&gt;Bounded contexts begin to blur as concepts migrate between modules without translation. External models slowly reshape internal ones because no anti-corruption layer stands in the way. Aggregates lose their consistency boundaries as logic spreads across services. The ubiquitous language fragments as developers introduce new terms that appear clearer locally but weaken the shared vocabulary of the domain.&lt;/p&gt;

&lt;p&gt;Each change is individually reasonable.&lt;/p&gt;

&lt;p&gt;Taken together, they gradually dissolve the architecture.&lt;/p&gt;

&lt;p&gt;This is why architectural discipline rarely fails all at once. Instead, it erodes through a sequence of small decisions that no one notices until the system becomes difficult to reason about.&lt;/p&gt;

&lt;p&gt;At that point, the architecture no longer constrains the system’s evolution.&lt;/p&gt;

&lt;p&gt;Instead, it merely documents what once existed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encoding Constraints
&lt;/h2&gt;

&lt;p&gt;Conceptual patterns alone are rarely enough to preserve architectural intent over time.&lt;/p&gt;

&lt;p&gt;Rather, they must be reinforced by artifacts and tooling that make those constraints explicit.&lt;/p&gt;

&lt;p&gt;Some will argue that these practices amount to over-engineering, especially for smaller systems.&lt;/p&gt;

&lt;p&gt;That argument couldn't be further from the truth.&lt;/p&gt;

&lt;p&gt;It's like having a disaster recovery plan: Too many businesses believe they don't really need one, right up until their first data catastrophe.&lt;/p&gt;

&lt;p&gt;Architectural artifacts provide the human-readable layer of governance. A specification document, a ubiquitous language definition, or a context map makes the intended structure of the system visible. These artifacts capture decisions that would otherwise live only in conversations or memory.&lt;/p&gt;

&lt;p&gt;Tooling provides the enforcement layer and pays immediate dividends.&lt;/p&gt;

&lt;p&gt;Static analysis rules, architectural tests, or module boundaries ensure that violations are caught early. Tools like ArchUnit (a favourite of mine) can verify dependency direction, preventing infrastructure concerns from leaking into the domain layer or enforcing separation between contexts.&lt;/p&gt;
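&lt;p&gt;ArchUnit is JVM-specific, but the idea is portable. As a language-neutral illustration (this is not ArchUnit itself, and the module names are stand-ins for your own stack), a dependency-direction check can be as small as a script that scans a domain package for imports of infrastructure modules:&lt;/p&gt;

```python
import ast
import pathlib

# Illustrative stand-ins for "infrastructure" modules that the
# domain layer must not import. Adjust to your own stack.
FORBIDDEN_IN_DOMAIN = {"sqlalchemy", "requests", "pika"}

def domain_violations(domain_dir):
    """Return (file, module) pairs where domain code imports infrastructure."""
    violations = []
    for path in sorted(pathlib.Path(domain_dir).rglob("*.py")):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                roots = [alias.name.split(".")[0] for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                roots = [(node.module or "").split(".")[0]]
            else:
                continue
            for root in roots:
                if root in FORBIDDEN_IN_DOMAIN:
                    violations.append((path.name, root))
    return violations
```

&lt;p&gt;Run in CI, a check like this plays the same role ArchUnit plays on the JVM: violations of dependency direction fail the build instead of accumulating silently.&lt;/p&gt;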

&lt;p&gt;Together, these layers create a system of overlapping constraints.&lt;/p&gt;

&lt;p&gt;The artifacts describe the architecture.&lt;/p&gt;

&lt;p&gt;The tooling defends it.&lt;/p&gt;

&lt;p&gt;This combination makes architectural intent durable enough to survive the natural pressure of ongoing change.&lt;/p&gt;

&lt;p&gt;This is the real (not-so-)secret sauce when it comes to software engineering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture and Change Velocity
&lt;/h2&gt;

&lt;p&gt;Constraints matter most when systems begin to evolve quickly.&lt;/p&gt;

&lt;p&gt;As systems grow and change accelerates, implicit assumptions begin to fail.&lt;/p&gt;

&lt;p&gt;Features are added by new developers who were not present when the architecture was first established. Integrations multiply. Refactoring becomes more frequent. Small changes accumulate across multiple bounded contexts.&lt;/p&gt;

&lt;p&gt;When this happens, architecture stops being a static description of the system and becomes a mechanism for governing its evolution.&lt;/p&gt;

&lt;p&gt;The patterns described earlier — bounded contexts, anti-corruption layers, aggregates, ubiquitous language — are not simply modelling techniques. They are also constraints that limit how the system can change without deliberate effort.&lt;/p&gt;

&lt;p&gt;Tooling and architectural artifacts reinforce those constraints.&lt;/p&gt;

&lt;p&gt;Architectural tests prevent dependencies from drifting in the wrong direction. Explicit ubiquitous language definitions anchor meaning across teams. Context maps clarify where concepts are allowed to exist.&lt;/p&gt;

&lt;p&gt;These mechanisms do not eliminate change -- they shape it, and that shaping becomes increasingly valuable as the rate of change increases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Software architecture is often described in terms of structure: layers, modules, services, and patterns.&lt;/p&gt;

&lt;p&gt;In practice, its deeper role is to constrain the ways a system can decay.&lt;/p&gt;

&lt;p&gt;When constraints are clear, enforced, and shared across the team, systems can evolve quickly without losing their integrity. When they are implicit, erosion becomes almost inevitable.&lt;/p&gt;

&lt;p&gt;Domain-driven design provides many of the conceptual tools needed to establish those constraints. Tooling and governance practices help ensure they remain intact as systems grow.&lt;/p&gt;

&lt;p&gt;Architecture, in this sense, is about preserving a system's ability to remain understandable as it changes, not just designing its structure.&lt;/p&gt;

&lt;p&gt;This becomes even more important in environments where systems evolve rapidly — whether driven by larger teams, increased automation, or AI-assisted development.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>ddd</category>
      <category>softwaredesign</category>
      <category>ai</category>
    </item>
    <item>
      <title>Why Most Teams Overestimate Their AI Readiness (It’s an Architecture Problem)</title>
      <dc:creator>Duncan Brown</dc:creator>
      <pubDate>Thu, 05 Mar 2026 19:06:02 +0000</pubDate>
      <link>https://dev.to/dbrown/why-most-teams-overestimate-their-ai-readiness-its-an-architecture-problem-40g1</link>
      <guid>https://dev.to/dbrown/why-most-teams-overestimate-their-ai-readiness-its-an-architecture-problem-40g1</guid>
      <description>&lt;p&gt;Integrating an AI model into an application is relatively straightforward.&lt;/p&gt;

&lt;p&gt;Building a system that can safely evolve once AI becomes part of it, however, is not.&lt;/p&gt;

&lt;p&gt;When organizations talk about “AI readiness,” the conversation usually centers around questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which model should we use?&lt;/li&gt;
&lt;li&gt;Which vendor should we choose?&lt;/li&gt;
&lt;li&gt;How good are our prompts?&lt;/li&gt;
&lt;li&gt;Can our infrastructure handle the load?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While those questions do matter, they rarely determine whether an AI-enabled system remains stable over time.&lt;/p&gt;

&lt;p&gt;In practice, long-term success depends far more on the surrounding &lt;strong&gt;architecture and governance&lt;/strong&gt; of the system itself.&lt;/p&gt;

&lt;p&gt;AI readiness is a question of architecture more than one of tooling.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;AI readiness is often framed as a tooling problem: one of models, APIs, or infrastructure.&lt;/p&gt;

&lt;p&gt;In practice, it’s usually a governance problem.&lt;/p&gt;

&lt;p&gt;Systems that successfully ship AI features tend to have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;explicit architectural boundaries,&lt;/li&gt;
&lt;li&gt;clear domain language,&lt;/li&gt;
&lt;li&gt;operational guardrails,&lt;/li&gt;
&lt;li&gt;and processes that can absorb increased change velocity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without those, AI not only adds capability, but it also accelerates architectural drift.&lt;/p&gt;




&lt;h2&gt;
  
  
  Integration Is the "Easy" Part
&lt;/h2&gt;

&lt;p&gt;Most modern backend systems can integrate an AI model with relatively little effort. Third-party APIs and services have made sure of that.&lt;/p&gt;

&lt;p&gt;You add an adapter, wire up an API call, and expose a new capability through the application layer. &lt;/p&gt;

&lt;p&gt;Heck, you might even introduce a full-on &lt;a href="https://martinfowler.com/bliki/AntiCorruptionLayer.html" rel="noopener noreferrer"&gt;Anti-Corruption Layer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At that point, the system appears “AI-enabled.”&lt;/p&gt;

&lt;p&gt;But integration success is not the same thing as production readiness.&lt;/p&gt;

&lt;p&gt;Once AI becomes part of a system, the nature of the system itself changes.&lt;/p&gt;

&lt;p&gt;AI features introduce new forms of behaviour that traditional application architectures were not designed to govern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;outputs that are probabilistic rather than deterministic&lt;/li&gt;
&lt;li&gt;decisions that must be traceable after the fact&lt;/li&gt;
&lt;li&gt;prompts and model configurations that evolve over time&lt;/li&gt;
&lt;li&gt;operational guardrails that must be enforced outside the core domain model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These concerns expand the governance surface area of the system, and with it the pace at which the system evolves.&lt;/p&gt;

&lt;p&gt;And faster evolution exposes weaknesses in architectural discipline.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Readiness Is Governance Maturity
&lt;/h2&gt;

&lt;p&gt;A system that is genuinely ready to support AI features tends to demonstrate several characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architectural boundaries are explicit and enforceable.&lt;/li&gt;
&lt;li&gt;Domain language is clearly defined and consistently applied.&lt;/li&gt;
&lt;li&gt;Refactoring discipline is supported by automated tests.&lt;/li&gt;
&lt;li&gt;Governance mechanisms exist to prevent silent structural drift.&lt;/li&gt;
&lt;li&gt;Teams can increase change velocity without destabilizing the system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These characteristics have little to do with AI itself: They are properties of a well-governed software architecture.&lt;/p&gt;

&lt;p&gt;AI simply makes them non-optional.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where Systems Actually Break
&lt;/h2&gt;

&lt;p&gt;When problems appear, they rarely originate in the model itself.&lt;/p&gt;

&lt;p&gt;They usually emerge in the surrounding system.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architectural boundaries begin to erode as AI-related behavior spreads across layers.&lt;/li&gt;
&lt;li&gt;Cross-cutting concerns such as logging, tracing, or validation leak into domain code.&lt;/li&gt;
&lt;li&gt;Domain language becomes inconsistent as new abstractions appear.&lt;/li&gt;
&lt;li&gt;Review processes struggle to keep up with change velocity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these failures are caused by AI directly.&lt;/p&gt;

&lt;p&gt;They are symptoms of insufficient governance, and AI increases the rate at which those weaknesses surface.&lt;/p&gt;




&lt;h2&gt;
  
  
  Evaluating Readiness
&lt;/h2&gt;

&lt;p&gt;Because these factors are architectural rather than purely technical, they are often overlooked when organizations evaluate readiness.&lt;/p&gt;

&lt;p&gt;More useful questions tend to look like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who is accountable for the behaviour of this AI feature?&lt;/li&gt;
&lt;li&gt;What categories of data are allowed to reach the model?&lt;/li&gt;
&lt;li&gt;Can the system reconstruct an AI-driven decision end-to-end?&lt;/li&gt;
&lt;li&gt;How are unacceptable outputs defined and tested?&lt;/li&gt;
&lt;li&gt;What happens operationally if the feature fails?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions are less about models and more about governance.&lt;/p&gt;

&lt;p&gt;They determine whether the surrounding system can sustain AI behaviour safely over time.&lt;/p&gt;

&lt;p&gt;Those governance dimensions — such as accountability, data boundaries, observability, guardrails, and incident response — are far more predictive of long-term success than model selection.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Structured Assessment
&lt;/h2&gt;

&lt;p&gt;To make these dimensions easier to reason about, I built a small readiness assessment tool that evaluates AI adoption through an architectural and governance lens.&lt;/p&gt;

&lt;p&gt;The assessment uses a deterministic scoring model to evaluate areas such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;architectural boundary discipline&lt;/li&gt;
&lt;li&gt;governance maturity&lt;/li&gt;
&lt;li&gt;semantic alignment&lt;/li&gt;
&lt;li&gt;observability and auditability&lt;/li&gt;
&lt;li&gt;tolerance for increased change velocity&lt;/li&gt;
&lt;/ul&gt;
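&lt;p&gt;To make the idea concrete, here is a minimal sketch of what a deterministic scoring model over dimensions like these could look like. The dimension order, weights, and 0..4 rating scale below are hypothetical, not the rubric the actual assessment uses:&lt;/p&gt;

```java
import java.util.stream.IntStream;

// Hypothetical deterministic readiness score: five dimensions, each rated
// 0..4, combined with fixed weights and normalized to 0..100. Same inputs
// always yield the same score; no model calls are involved.
class ReadinessScore {

    // Index 0: boundary discipline, 1: governance maturity,
    // 2: semantic alignment, 3: observability and auditability,
    // 4: change-velocity tolerance. Weights are illustrative and sum to 1.0.
    static final double[] WEIGHTS = {0.30, 0.25, 0.15, 0.20, 0.10};

    static int score(int[] ratings) {
        double total = IntStream.range(0, WEIGHTS.length)
                .mapToDouble(i -> (ratings[i] / 4.0) * WEIGHTS[i])
                .sum();
        return (int) Math.round(total * 100.0);
    }
}
```

&lt;p&gt;Because the scoring is deterministic, two people who give the same answers always see the same result, which keeps the assessment auditable.&lt;/p&gt;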

&lt;p&gt;Rather than focusing on vendor selection or model capabilities, the goal is to surface structural and governance risks that often remain implicit.&lt;/p&gt;

&lt;p&gt;If you're curious, you can try the assessment here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ai-readiness.duncanbrown.tech" rel="noopener noreferrer"&gt;AI Feature Production Readiness Assessment&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The goal isn’t to produce a definitive score; rather, it’s to make architectural and governance readiness visible before AI accelerates the system’s evolution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;AI does not fundamentally change software architecture, but it does change the &lt;strong&gt;rate at which systems evolve&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When evolution accelerates, implicit rules become fragile, as I've discussed in a &lt;a href="https://dev.to/dbrown/when-ai-refactors-break-layering-and-how-to-prevent-it-k"&gt;previous article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the AI era, readiness is less about models and more about governance.&lt;/p&gt;

&lt;p&gt;Architectural discipline determines whether speed becomes progress — or erosion.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>ai</category>
      <category>devops</category>
      <category>leadership</category>
    </item>
    <item>
      <title>AI Coding Assistants and the Erosion of Ubiquitous Language</title>
      <dc:creator>Duncan Brown</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:09:37 +0000</pubDate>
      <link>https://dev.to/dbrown/ai-coding-assistants-and-the-erosion-of-ubiquitous-language-301a</link>
      <guid>https://dev.to/dbrown/ai-coding-assistants-and-the-erosion-of-ubiquitous-language-301a</guid>
      <description>&lt;p&gt;In my previous two posts, I’ve written about integrating AI into Spring Boot systems without collapsing architectural boundaries, and about how AI-assisted refactors can quietly erode layering.&lt;/p&gt;

&lt;p&gt;There’s another form of drift that’s subtler still:&lt;/p&gt;

&lt;p&gt;Semantic drift.&lt;/p&gt;

&lt;p&gt;AI coding agents don't just change structure faster. They change language faster.&lt;/p&gt;




&lt;h2&gt;
  
  
  Ubiquitous Language Is Structural, Not Cosmetic
&lt;/h2&gt;

&lt;p&gt;In domain-driven design, ubiquitous language isn’t just a naming preference.&lt;/p&gt;

&lt;p&gt;It’s the shared vocabulary across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers
&lt;/li&gt;
&lt;li&gt;Domain experts
&lt;/li&gt;
&lt;li&gt;Product owners
&lt;/li&gt;
&lt;li&gt;Stakeholders
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When done well, the code mirrors the language of the business. This has the obvious benefit of allowing technical and non-technical people to describe and understand the same concepts with little friction.&lt;/p&gt;

&lt;p&gt;The name of an aggregate, entity, value object, or concept is not arbitrary — it encodes meaning.&lt;/p&gt;

&lt;p&gt;That meaning becomes part of the system’s architecture. &lt;/p&gt;

&lt;p&gt;Meaning is easier to erode than structure.&lt;/p&gt;

&lt;p&gt;Let's see how.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Simple Example of Semantic Drift
&lt;/h2&gt;

&lt;p&gt;Imagine a domain centred around document reviews.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Review&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;Score&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the business, everyone says:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Review”&lt;/li&gt;
&lt;li&gt;“Score”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll assume for now that the definitions of those terms are &lt;em&gt;implicit&lt;/em&gt;; that is, it's assumed that everyone "just knows" what someone is talking about when they use the words "Review" and "Score", and that there is no explicitly defined ubiquitous language.&lt;/p&gt;

&lt;p&gt;Now imagine asking an AI assistant to “improve naming clarity”, “refactor for readability”, or even “implement a more robust and comprehensive scoring system”.&lt;/p&gt;

&lt;p&gt;You might end up with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Assessment&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;Rating&lt;/span&gt; &lt;span class="n"&gt;rating&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From a purely linguistic perspective, this looks reasonable.&lt;/p&gt;

&lt;p&gt;From a domain perspective, however, it may not be, and in fact most likely is not.&lt;/p&gt;

&lt;p&gt;If stakeholders still talk about “reviews” and “scores,” but the code now talks about “assessments” and “ratings,” the shared language has split.&lt;/p&gt;

&lt;p&gt;Nothing breaks; tests still pass; layers remain intact.&lt;/p&gt;

&lt;p&gt;But meaning has drifted. Developers begin adopting the new terms, eschewing the “older” ones, even as those new terms take on additional meanings.&lt;/p&gt;

&lt;p&gt;Now imagine a meeting between the developers, management, and stakeholders. When one side refers to either set of terms, one of two outcomes is likely:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The other side(s) quietly assume the term is synonymous with &lt;em&gt;their&lt;/em&gt; term, and that their understanding matches as well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The other side(s) question the use of the term, and the side using the term does some vocal hand-waving and says, "They're pretty much the same thing," which may or may not be accurate.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Either way, the door has been opened for future issues that will likely compound over time.&lt;/p&gt;

&lt;p&gt;(There's actually a third, correct way to handle the above scenario: actively question the use of the term, insist on establishing an explicit, mutually understood definition, and add it to a shared ubiquitous language document.&lt;/p&gt;

&lt;p&gt;Obvious, and frequently ignored.)&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Risk
&lt;/h2&gt;

&lt;p&gt;Layering drift breaks structure.&lt;/p&gt;

&lt;p&gt;Semantic drift breaks alignment.&lt;/p&gt;

&lt;p&gt;A system can preserve its ports and adapters and &lt;em&gt;still&lt;/em&gt; lose its conceptual integrity.&lt;/p&gt;

&lt;p&gt;Once a ubiquitous language fragments (or is never formalized in the first place):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discussions become translation exercises.&lt;/li&gt;
&lt;li&gt;Onboarding slows down.&lt;/li&gt;
&lt;li&gt;Invariants become harder to reason about.&lt;/li&gt;
&lt;li&gt;Governance loses its anchor.&lt;/li&gt;
&lt;li&gt;Refactoring becomes riskier because intent is no longer clear.&lt;/li&gt;
&lt;li&gt;Assumptions and connotations make their way into code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And AI accelerates this.&lt;/p&gt;

&lt;p&gt;AI assistants frequently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduce “more common” synonyms.&lt;/li&gt;
&lt;li&gt;Replace domain-specific terms with generic ones.&lt;/li&gt;
&lt;li&gt;Suggest loaded abstractions like &lt;code&gt;Manager&lt;/code&gt;, &lt;code&gt;Handler&lt;/code&gt;, or &lt;code&gt;Processor&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Collapse business vocabulary into framework vocabulary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As with AI-assisted refactors, none of these changes are malicious.&lt;/p&gt;

&lt;p&gt;They might be locally optimal.&lt;/p&gt;

&lt;p&gt;But neither structure nor semantics are local.&lt;/p&gt;

&lt;p&gt;They are cumulative.&lt;/p&gt;




&lt;h2&gt;
  
  
  Making Ubiquitous Language Explicit
&lt;/h2&gt;

&lt;p&gt;If structural boundaries need governance, so does language.&lt;/p&gt;

&lt;p&gt;Ubiquitous language should live not only:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In conversations.&lt;/li&gt;
&lt;li&gt;In someone’s head.&lt;/li&gt;
&lt;li&gt;In scattered comments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should exist as a canonical artifact.&lt;/p&gt;

&lt;p&gt;Although it may sound tempting, don't bury that artifact in &lt;code&gt;AGENTS.md&lt;/code&gt;, and don't allow that "artifact" to be implied by class names alone.&lt;/p&gt;

&lt;p&gt;Use something like a dedicated &lt;code&gt;spec.md&lt;/code&gt; or &lt;code&gt;ubiquitous-language.md&lt;/code&gt; file. I prefer to put mine in a dedicated &lt;code&gt;docs/&lt;/code&gt; folder off the project root.&lt;/p&gt;

&lt;p&gt;Ideally, it should be structured and machine-parseable. Even a YAML file is a good idea.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Review:
  definition: An evaluation of a submitted document.
  invariants:
    - Has exactly one Score.
    - References one Document.
  synonyms_disallowed:
    - Assessment
    - Evaluation

Score:
  definition: A normalized integer between 0 and 100 representing review quality.
  invariants:
    - Range: 0..100
  synonyms_disallowed:
    - Rating
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not documentation for humans alone.&lt;/p&gt;

&lt;p&gt;It can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Referenced during code review.&lt;/li&gt;
&lt;li&gt;Used to validate naming conventions.&lt;/li&gt;
&lt;li&gt;Fed into AI prompts as a constraint.&lt;/li&gt;
&lt;li&gt;Parsed by tooling to generate glossaries.&lt;/li&gt;
&lt;li&gt;Used by front-end interfaces to maintain consistency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once language is explicit, it can be protected.&lt;/p&gt;
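&lt;p&gt;As a sketch of what that protection can look like, here is a naming check driven by the same definitions. In a real setup the disallowed synonyms would be parsed from the YAML artifact; they are inlined here (using the example terms above) to keep the snippet self-contained:&lt;/p&gt;

```java
import java.util.Map;

// Sketch of a naming check driven by the ubiquitous-language artifact.
// The mapping mirrors the synonyms_disallowed entries in the YAML example.
class LanguageCheck {

    // disallowed synonym mapped to its canonical term
    static final Map DISALLOWED = Map.of(
            "Assessment", "Review",
            "Evaluation", "Review",
            "Rating", "Score");

    // Returns a complaint for a type name that uses a disallowed synonym,
    // or null when the name is acceptable.
    static String check(String typeName) {
        Object canonical = DISALLOWED.get(typeName);
        if (canonical == null) {
            return null;
        }
        return typeName + " is a disallowed synonym; use " + canonical;
    }
}
```

&lt;p&gt;A check like this can run in CI against newly introduced type names, or be handed to a coding agent as an explicit constraint.&lt;/p&gt;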

&lt;p&gt;&lt;strong&gt;The language must be versioned.&lt;/strong&gt; Any proposed change to the language should go through an established workflow and be agreed upon by all required parties. Rolling back should be as simple as reverting to a previous commit. I cannot stress this enough.&lt;/p&gt;

&lt;p&gt;Some additional guidelines for using your ubiquitous language artifact:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ideally, keep it in a single shared repository and treat it as the sole source of truth. I prefer to keep mine in a backend application that has tasks for exposing and/or exporting the language for consumption by other projects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can (and should) also be updated as necessary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a guardrail to your &lt;code&gt;AGENTS.md&lt;/code&gt; to let the coding agent know &lt;strong&gt;not&lt;/strong&gt; to introduce any new terms or concepts that aren't already a part of the ubiquitous language definition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have the agent confer with the developer whenever it believes a new term or concept is needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tokenize the terms and verbiage of any user-facing interfaces (e.g. web apps) to draw from the ubiquitous language. A great bonus use: hovering over domain terms can surface their definitions as an embedded glossary.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Governance Protects Meaning
&lt;/h2&gt;

&lt;p&gt;Architectural governance is not only about enforcing layers.&lt;/p&gt;

&lt;p&gt;It is about preserving intent.&lt;/p&gt;

&lt;p&gt;There are at least two dimensions to protect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structural boundaries (what depends on what).&lt;/li&gt;
&lt;li&gt;Semantic boundaries (what things mean).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Structural governance might be enforced through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ArchUnit rules.&lt;/li&gt;
&lt;li&gt;Module boundaries.&lt;/li&gt;
&lt;li&gt;Build-time checks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Semantic governance requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Canonical definitions.&lt;/li&gt;
&lt;li&gt;Explicit vocabulary.&lt;/li&gt;
&lt;li&gt;Deliberate review of term changes.&lt;/li&gt;
&lt;li&gt;Guardrails in AI-assisted workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If AI is part of your development process, it must be aligned not only with your layering rules, but with your language.&lt;/p&gt;

&lt;p&gt;AI cannot respect a domain vocabulary that is not formalized.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Structural drift is quiet but visible.&lt;/p&gt;

&lt;p&gt;Semantic drift is quieter.&lt;/p&gt;

&lt;p&gt;Both compound over time.&lt;/p&gt;

&lt;p&gt;If you care about architectural longevity in AI-assisted systems, governance must protect both structure and meaning.&lt;/p&gt;

&lt;p&gt;Without that, speed quickly becomes erosion.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>ddd</category>
      <category>ai</category>
    </item>
    <item>
      <title>When AI Refactors Break Layering (And How to Prevent It)</title>
      <dc:creator>Duncan Brown</dc:creator>
      <pubDate>Thu, 26 Feb 2026 23:38:22 +0000</pubDate>
      <link>https://dev.to/dbrown/when-ai-refactors-break-layering-and-how-to-prevent-it-k</link>
      <guid>https://dev.to/dbrown/when-ai-refactors-break-layering-and-how-to-prevent-it-k</guid>
      <description>&lt;p&gt;In a previous post, I wrote about integrating AI into a Spring Boot system without compromising architectural boundaries: We can consider that the initial integration problem.&lt;/p&gt;

&lt;p&gt;There’s a second, more subtle problem: AI-assisted refactoring changes the risk profile of cross-cutting modifications.&lt;/p&gt;

&lt;p&gt;Not all “refactors” are created equal.&lt;/p&gt;

&lt;p&gt;Classic refactoring — in the Fowler sense — is local, behaviour-preserving, and structural, e.g.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract method.
&lt;/li&gt;
&lt;li&gt;Rename class.
&lt;/li&gt;
&lt;li&gt;Move function.&lt;/li&gt;
&lt;li&gt;etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are relatively safe when existing tests are solid. &lt;/p&gt;

&lt;p&gt;(In fact, I'd argue that developers are unlikely to attempt a refactor of any import on a production codebase larger than a few classes without automated tests in place. If they value their sanity, at least.)&lt;/p&gt;

&lt;p&gt;But many AI-generated “refactors” are not local or even of the Fowler variety.&lt;/p&gt;

&lt;p&gt;They are often cross-cutting changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add logging across the application.
&lt;/li&gt;
&lt;li&gt;Introduce tracing.
&lt;/li&gt;
&lt;li&gt;Add validation.
&lt;/li&gt;
&lt;li&gt;Add retries.
&lt;/li&gt;
&lt;li&gt;Add metrics.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These can and frequently do touch multiple layers at once.&lt;/p&gt;

&lt;p&gt;And that’s where architectural boundaries quietly start to erode.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Concrete Example: Logging Done “Everywhere”
&lt;/h2&gt;

&lt;p&gt;Imagine you ask an AI assistant:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt; Add structured logging to the application using SLF4J.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What often happens next is predictable.&lt;/p&gt;

&lt;p&gt;You start seeing this appear across multiple classes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.slf4j.Logger&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.slf4j.LoggerFactory&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Domain aggregate, entity, or value object&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Review&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;Logger&lt;/span&gt; &lt;span class="n"&gt;log&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
        &lt;span class="nc"&gt;LoggerFactory&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getLogger&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Review&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="o"&gt;...&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the coding assistant’s perspective, this is a reasonable implementation.&lt;/p&gt;

&lt;p&gt;From an architectural perspective, it’s a boundary violation.&lt;/p&gt;

&lt;p&gt;In the case above, the domain model now depends on an infrastructure concern.&lt;/p&gt;

&lt;p&gt;Nothing breaks immediately (most likely), but a rule has been silently crossed.&lt;/p&gt;

&lt;p&gt;Now multiply that pattern across tracing, metrics, validation, or retries, and the boundaries and layering blur further.&lt;/p&gt;

&lt;p&gt;This isn’t a malicious change; it’s just a locally optimal one in the "eyes" of the coding assistant.&lt;/p&gt;

&lt;p&gt;It's clear, however, that left unchecked, those changes compound and create pitfalls down the road.&lt;/p&gt;
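&lt;p&gt;One way to keep the cross-cutting concern out of the domain is to wrap the use case at the application layer. The sketch below uses simplified stand-in types (and &lt;code&gt;java.util.logging&lt;/code&gt; rather than SLF4J, to stay dependency-free); the names are hypothetical:&lt;/p&gt;

```java
import java.util.logging.Logger;

// Simplified domain type: no logging, no infrastructure imports.
class Review {
    private final int score;
    Review(int score) { this.score = score; }
    int score() { return score; }
}

// Application-layer use case port.
interface SubmitReview {
    Review submit(int score);
}

// Pure implementation: still no logging here.
class SubmitReviewService implements SubmitReview {
    public Review submit(int score) { return new Review(score); }
}

// The cross-cutting concern lives in a decorator at the application
// boundary, not inside the domain model.
class LoggingSubmitReview implements SubmitReview {
    private static final Logger log = Logger.getLogger("reviews");
    private final SubmitReview delegate;

    LoggingSubmitReview(SubmitReview delegate) { this.delegate = delegate; }

    public Review submit(int score) {
        log.info("Submitting review with score " + score);
        Review review = delegate.submit(score);
        log.info("Review created");
        return review;
    }
}
```

&lt;p&gt;The domain class stays free of logger imports, while callers that want structured logging simply wire in the decorator.&lt;/p&gt;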

&lt;h2&gt;
  
  
  The Pattern Behind the Example
&lt;/h2&gt;

&lt;p&gt;Logging is just one example.&lt;/p&gt;

&lt;p&gt;As mentioned above, the same pattern shows up when you ask an AI assistant to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add metrics across the application.&lt;/li&gt;
&lt;li&gt;Introduce tracing.&lt;/li&gt;
&lt;li&gt;Add validation rules.&lt;/li&gt;
&lt;li&gt;Implement retries or timeouts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are all cross-cutting concerns: They touch multiple layers at once.&lt;/p&gt;

&lt;p&gt;An AI assistant, optimizing for task completion, will often choose the shortest path:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import the library directly.&lt;/li&gt;
&lt;li&gt;Add the dependency where it’s immediately needed.&lt;/li&gt;
&lt;li&gt;Modify multiple classes in place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From a purely functional perspective, these are reasonable.&lt;/p&gt;

&lt;p&gt;From an architectural perspective, they’re risky, because architectural boundaries are rarely (if ever) encoded in the type system alone.&lt;/p&gt;

&lt;p&gt;They exist as intent, and intent is fragile when it’s implicit.&lt;/p&gt;

&lt;p&gt;("Making the implicit explicit," as Eric Evans suggests, is one of the greatest strengths of domain-driven design, in my professional opinion.)&lt;/p&gt;

&lt;p&gt;If architectural boundaries exist only as convention, they will eventually be crossed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Governance Is About Alignment, Not Rigidity
&lt;/h2&gt;

&lt;p&gt;To be clear, architectural governance is not the same as business rule enforcement.&lt;/p&gt;

&lt;p&gt;Domain invariants — things like scoring logic or validation rules — are enforced at runtime by the application itself. They belong within domain aggregate roots.&lt;/p&gt;

&lt;p&gt;Architectural boundaries are notably different: They shape how the system is allowed to evolve.&lt;/p&gt;

&lt;p&gt;If you’re using DDD or hexagonal architecture (and hopefully you are if you're reading this article), those boundaries are intentional and confer myriad benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The domain does not depend on infrastructure.&lt;/li&gt;
&lt;li&gt;Controllers do not bypass application use cases.&lt;/li&gt;
&lt;li&gt;Infrastructure does not bleed upward.&lt;/li&gt;
&lt;li&gt;Slices remain isolated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those "boundary rules" don’t enforce behaviour; rather, they enforce structure, and structure (along with sufficient coverage by automated tests) is what makes long-term refactoring safe.&lt;/p&gt;

&lt;p&gt;When changes can be generated quickly — whether by a team of developers or a single developer using AI — I argue that &lt;strong&gt;implicit rules are not enough.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Governance needs to exist in more than one form, whether your goal is to leverage AI coding assistance or go "full human" mode:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Human-readable contracts (&lt;strong&gt;what the architecture allows&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;Build-time enforcement (&lt;strong&gt;tests that fail on boundary violations&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;AI-aware constraints (&lt;strong&gt;so automated edits respect the same rules&lt;/strong&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal isn’t to create a straitjacket. It’s to ensure that the developer, the compiler, and any AI assistant are all operating under the same architectural intent.&lt;/p&gt;

&lt;p&gt;The above makes the implicit explicit. Adding overlapping (and mutually reinforcing) guardrails (automated tests, a well-defined &lt;code&gt;AGENTS.md&lt;/code&gt; or &lt;code&gt;CLAUDE.md&lt;/code&gt; file, etc.) is the best way to achieve this goal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Architectural governance isn’t about rigidity. At least, it doesn't have to be.&lt;/p&gt;

&lt;p&gt;It’s about alignment — across developers, build tooling, and AI assistants.&lt;/p&gt;

&lt;p&gt;When boundaries are explicit and enforceable, refactoring becomes safer and architectural drift slows dramatically.&lt;/p&gt;

&lt;p&gt;(For those interested, I’ve packaged these ideas into a small Spring Boot baseline that demonstrates the pattern end-to-end. Details are available in my profile.)&lt;/p&gt;

&lt;p&gt;I’ll be writing more on architectural governance and AI-assisted development in the coming weeks. If this topic resonates, feel free to follow along.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>ddd</category>
      <category>ai</category>
      <category>springboot</category>
    </item>
    <item>
      <title>AI + Spring Boot: How to Avoid Architectural Drift</title>
      <dc:creator>Duncan Brown</dc:creator>
      <pubDate>Mon, 23 Feb 2026 20:31:40 +0000</pubDate>
      <link>https://dev.to/dbrown/ai-spring-boot-how-to-avoid-architectural-drift-1kbo</link>
      <guid>https://dev.to/dbrown/ai-spring-boot-how-to-avoid-architectural-drift-1kbo</guid>
      <description>&lt;p&gt;Over the past couple of years, I’ve watched a lot of teams add AI features to backend systems.&lt;/p&gt;

&lt;p&gt;The pattern is predictable.&lt;/p&gt;

&lt;p&gt;It starts small:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Let’s just call OpenAI from this controller.”&lt;/li&gt;
&lt;li&gt;“We’ll clean it up later.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A few weeks later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SDK types leak into the domain.&lt;/li&gt;
&lt;li&gt;Prompt strings are scattered across services.&lt;/li&gt;
&lt;li&gt;Tests start hitting real APIs.&lt;/li&gt;
&lt;li&gt;Infrastructure concerns bleed into business logic.&lt;/li&gt;
&lt;li&gt;The boundaries that once felt clear become blurry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing explodes immediately; it just becomes harder to reason about.&lt;/p&gt;

&lt;p&gt;Teams move fast and often leave discipline for later (only for it to become technical debt).&lt;/p&gt;

&lt;p&gt;While this isn't always a big deal, in my experience, it's bitten more teams in the rear end than not.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Risk
&lt;/h2&gt;

&lt;p&gt;AI integration isn’t inherently messy, but integrating it without structural discipline is.&lt;/p&gt;

&lt;p&gt;Most examples online focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to call the API
&lt;/li&gt;
&lt;li&gt;How to parse JSON
&lt;/li&gt;
&lt;li&gt;How to stream responses
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Very few focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where that logic belongs
&lt;/li&gt;
&lt;li&gt;How to preserve domain purity
&lt;/li&gt;
&lt;li&gt;How to keep tests deterministic
&lt;/li&gt;
&lt;li&gt;How to prevent provider lock-in
&lt;/li&gt;
&lt;li&gt;How to keep AI-assisted refactors from eroding structure
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s where long-term problems start.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Safer Pattern (DDD + Hexagonal)
&lt;/h2&gt;

&lt;p&gt;If you’re using Spring Boot with DDD or hexagonal architecture (I argue DDD paired with a hexagonal architecture is rarely a bad idea), AI integration should respect the same boundaries as any other external dependency.&lt;/p&gt;

&lt;p&gt;We don't need to follow strict DDD for less-complex systems, but, at a minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The domain defines a port.
&lt;/li&gt;
&lt;li&gt;The application layer orchestrates use cases.
&lt;/li&gt;
&lt;li&gt;The infrastructure layer implements the adapter.
&lt;/li&gt;
&lt;li&gt;Profiles control whether you use:

&lt;ul&gt;
&lt;li&gt;A stub (deterministic)&lt;/li&gt;
&lt;li&gt;A real provider (OpenAI, etc.)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Domain port&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;AiService&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;AiAnalysis&lt;/span&gt; &lt;span class="nf"&gt;analyzeDocument&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DocumentContent&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The domain depends on this interface — not on any SDK.&lt;/p&gt;

&lt;p&gt;Then, in infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Infrastructure adapter&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OpenAiAdapter&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;AiService&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;AiAnalysis&lt;/span&gt; &lt;span class="nf"&gt;analyzeDocument&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DocumentContent&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Call provider SDK&lt;/span&gt;
        &lt;span class="c1"&gt;// Parse structured JSON&lt;/span&gt;
        &lt;span class="c1"&gt;// Map to domain AiAnalysis&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The quick, immediate wins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Controllers never call OpenAI directly.
&lt;/li&gt;
&lt;li&gt;The domain never imports SDK types.
&lt;/li&gt;
&lt;li&gt;Tests can swap in a stub implementation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Perhaps most importantly, the integration stays contained.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stub by Default
&lt;/h2&gt;

&lt;p&gt;One pattern I’ve found especially valuable:&lt;/p&gt;

&lt;p&gt;Make stub AI the default.&lt;/p&gt;

&lt;p&gt;That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unit tests are deterministic.
&lt;/li&gt;
&lt;li&gt;Local development does not require API keys.
&lt;/li&gt;
&lt;li&gt;You can simulate failure scenarios reliably.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then enable the real provider via profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;SPRING_PROFILES_ACTIVE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This separates development discipline from production capability.&lt;/p&gt;

&lt;p&gt;It may sound obvious, especially in the age of TDD, but I still see developers coupling specific AI providers (and, indeed, providers of various services) to their codebase right out of the gates, tests included.&lt;/p&gt;
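&lt;p&gt;A minimal stub behind the &lt;code&gt;AiService&lt;/code&gt; port from earlier might look like the sketch below. &lt;code&gt;DocumentContent&lt;/code&gt; and &lt;code&gt;AiAnalysis&lt;/code&gt; are simplified stand-ins for the real domain types, and in Spring the stub would be selected via a profile rather than constructed directly:&lt;/p&gt;

```java
// Simplified stand-ins for the domain types used by the port.
record DocumentContent(String text) {}
record AiAnalysis(String summary, int confidence) {}

// The domain port from earlier in the article.
interface AiService {
    AiAnalysis analyzeDocument(DocumentContent content);
}

// Deterministic stub: same input, same output. Tests stay deterministic
// and local development needs no API key.
class StubAiService implements AiService {
    public AiAnalysis analyzeDocument(DocumentContent content) {
        String text = content.text();
        String summary = text.length() > 40
                ? text.substring(0, 40) + "..."
                : text;
        return new AiAnalysis("STUB: " + summary, 100);
    }
}
```

&lt;p&gt;Because the stub implements the same port as the real adapter, swapping between them is a wiring decision, not a code change.&lt;/p&gt;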




&lt;h2&gt;
  
  
  AI-Assisted Development Needs Guardrails
&lt;/h2&gt;

&lt;p&gt;There’s another layer most people ignore at their own peril.&lt;/p&gt;

&lt;p&gt;Even if your architecture starts clean, AI coding assistants can gradually degrade it.&lt;/p&gt;

&lt;p&gt;Without guardrails, tools will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduce framework dependencies into the domain
&lt;/li&gt;
&lt;li&gt;Inline prompt strings in controllers
&lt;/li&gt;
&lt;li&gt;Refactor across layers
&lt;/li&gt;
&lt;li&gt;Create “utility” dumping-ground classes
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solution isn’t to stop using AI tools, especially when, targeted correctly, they can amplify your skill set and productivity.&lt;/p&gt;

&lt;p&gt;(As a small aside, I've found treating a coding agent as a smart "junior developer" of sorts to be the starting point of an effective mindset.)&lt;/p&gt;

&lt;p&gt;It’s to make your architectural constraints explicit.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Domain purity rules
&lt;/li&gt;
&lt;li&gt;Ports &amp;amp; adapters boundaries
&lt;/li&gt;
&lt;li&gt;No SDK usage outside infrastructure
&lt;/li&gt;
&lt;li&gt;Deterministic testing requirements
&lt;/li&gt;
&lt;li&gt;Profile-based wiring constraints
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Paired with something like ArchUnit (a personal favourite of mine), those constraints become build-time enforcement.&lt;/p&gt;

&lt;p&gt;That dramatically reduces architectural drift.&lt;/p&gt;
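&lt;p&gt;To illustrate the principle without pulling in a dependency, here is a toy version of the kind of rule ArchUnit automates: domain code must not import infrastructure or SDK packages. ArchUnit checks this properly against compiled classes (e.g. a &lt;code&gt;noClasses().that().resideInAPackage("..domain..")&lt;/code&gt; rule); this string-based sketch, with hypothetical package names, just shows the idea:&lt;/p&gt;

```java
// Toy boundary check: flags domain source that imports forbidden packages.
// The package names are hypothetical; in practice ArchUnit inspects the
// compiled classes rather than raw source text.
class BoundaryCheck {

    static final String[] FORBIDDEN_IN_DOMAIN = {
            "import com.example.infrastructure.",
            "import com.example.openai.",   // hypothetical SDK package
            "import org.slf4j."
    };

    static boolean violatesDomainPurity(String domainSource) {
        for (String forbidden : FORBIDDEN_IN_DOMAIN) {
            if (domainSource.contains(forbidden)) {
                return true;
            }
        }
        return false;
    }
}
```

&lt;p&gt;Run as part of the build, a rule like this fails fast the moment an assistant (or a human) takes the shortest path across a boundary.&lt;/p&gt;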




&lt;h2&gt;
  
  
  Why I Built a Template Around This
&lt;/h2&gt;

&lt;p&gt;I ended up building a Spring Boot template that demonstrates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vertical slice structure
&lt;/li&gt;
&lt;li&gt;DDD + hexagonal boundaries
&lt;/li&gt;
&lt;li&gt;AI behind a domain port
&lt;/li&gt;
&lt;li&gt;Stub AI by default
&lt;/li&gt;
&lt;li&gt;Profile-gated OpenAI integration
&lt;/li&gt;
&lt;li&gt;In-memory + Postgres adapters
&lt;/li&gt;
&lt;li&gt;Flyway migrations
&lt;/li&gt;
&lt;li&gt;ArchUnit enforcement
&lt;/li&gt;
&lt;li&gt;An &lt;code&gt;AGENTS.md&lt;/code&gt; governance file designed for AI-assisted development
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s intentionally minimal and doesn’t try to solve every problem.&lt;/p&gt;

&lt;p&gt;What it does do is establish a disciplined baseline you can extend safely.&lt;/p&gt;

&lt;p&gt;If you’re integrating AI into backend systems and care about architectural longevity, it may be useful to you, even in the age of vibe coding.&lt;/p&gt;

&lt;p&gt;Feel free to leave any questions you might have in the comments! And if you think the template I put together might be of use to you, please let me know.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>springboot</category>
      <category>ddd</category>
      <category>java</category>
    </item>
  </channel>
</rss>
