It’s an hour until you’re free for the weekend, and you’re trying to knock out one
last ticket before you escape into whatever assuredly action-packed plans await you.
You spot a seemingly harmless task: "Add Middle Initial to User Name Display."
You chuckle. Easy. A palate cleanser. A victory lap.
You assign the ticket, flip it from New to Active, and let your IDE warm up
while you drift into a pleasant daydream about not being here.
But then the search results begin to appear.
Slowly. Line by line.
And your reverie begins to rot.
IUserNameStrategy, UserNameContext, UserNameDisplayStrategyFactory,
StandardUserNameDisplayStrategy, FormalUserNameDisplayStrategy,
InformalUserNameDisplayStrategy, UserNameDisplayModule, …
Incredulous, your chuckle hardens into a throat-scraping noise somewhere
between a laugh and a cry.
"What in the Gang-of-Fuck is happening here," you think, feeling your pulse tick up.
"Either someone read Refactoring.Guru like it was scripture and decided to
baptize the codebase in every pattern they learned, or a grizzled enterprise
veteran escaped some Netflix-adjacent monolith and is trying to upskill in
TypeScript. Because surely no sane developer would build this much indirection
for such a trivial feature… right?"
That tiny, spiraling task is a perfect microcosm of a perennial debate in engineering circles: when does abstraction help, and when does it become a hindrance?
I recently stumbled across the humorous article You're Not Building Netflix: Stop Coding Like You Are by Adam — The Developer. It resonates in many ways, but its broader critique is ultimately misdirected.
Adam opens with a more complete version of the code mocked in my prologue and uses the verbosity and obscurity of that abstraction pile as a springboard for a near-blanket rebuke of enterprise patterns. The author does allow for some abstractions, but only within a narrow application and scope.
The problem isn’t that the complaint is wrong. It’s that it points the finger at the wrong culprit, all but literally missing the forest for the trees.
Abstractions are fundamental and essential. They are the elementary particles of software, the quarks and leptons that bind into the subatomic structures that become the atoms our earliest techno-wizards fused into molecules. Today we combine those same basic elements into the compounds and contraptions made for use by millions. Without abstractions, we are left helpless in the ever-increasing parallel streams of pulsating electrical currents, rushing through specialized, intricately forged rocks that artisan wizards once trapped lightning inside and somehow convinced to think.
But even with all that power at our disposal, the way we use these building blocks matters. Chemistry offers a fitting parallel. Food chemists, for example, have spent decades learning how to repurpose industrial byproducts into stabilizers, textures, preservatives, and anything else that can be quietly slipped into a recipe. Much of this work is impressive and innovative, but some of it is little more than creative waste disposal disguised as convenience: a brilliant hack in the short term and a lingering problem in the long one.
Developers can fall into the same pattern. We learn a new technique or pattern or clever trick and then spend the next year pouring it into every beaker we can find. We are not always discovering better processes. Sometimes we are just repackaging the same product and calling it progress. When that happens, we are not solving problems. We are manufacturing new ones that future maintainers will curse our names over.
A developer must be architect, engineer, mechanic, and driver all at once. It is fine to know how to fix a specific issue, but once that problem is solved, that knowledge should become a building block for solving the next one. If we keep returning to maintain the same solution day after day, then what we built was never a solution at all. It was a slow-burning maintenance burden that we misfiled as "done."
Abstractions exist to reduce complexity, not to multiply it. Their purpose is to lighten the cognitive load, to lift the details off your desk so you can see the shape of the problem in front of you. Terse, repetitive, wire-on-the-floor code that looks like it tumbled out of a flickering green CRT from 1999 may impress the authors who have stared at machine code long enough to discern hair color from a data stream, but it does not serve the broader team or the system that outlives them. Abstractions only do their job when they are aligned with the actual problem being solved, and that brings us to the part many developers skip entirely: modeling your software after the problem you are solving.
Seeing the Problem Before Solving It
When you build a system, any system, even a disposable script, the first responsibility is understanding why it exists. What problem does it address. Has that problem been solved before. If so, what makes the existing solution insufficient for you now. Understanding that difference is the foundation that everything else must sit on.
I learned this the hard way as a homeowner. My house is old enough to have grounded me if I talked back to it as a teenager. A couple of years ago we went through a near-total remodel. We did some work before and shortly after our daughter was born, but within a year new problems started surfacing. We brought in a structural engineer. The slab foundation was heaving. After some exploration we discovered the culprit: the original cast iron sewage line had split along both the top and bottom, creating pressure changes and settling issues throughout the house.
The fix was not small. We pulled up every inch of flooring. Replaced baseboards. Repaired drywall. Fixed the broken line. Repainted entire sections. Redid trim. Installed piers. Pumped in foundation foam. Cashed in favors. Lost many weekends. And yet, even with all that, it still cost far less than buying an equivalent house in the current market at the current rates.
The lesson is simple. Things are rarely a total loss. Even when a structure looks hopeless, even when someone has effectively set fire to best practices, even when regulations or markets or technologies have shifted dramatically, there are almost always assets worth salvaging inside the wreckage. You should not bulldoze unless you know you have truly exhausted the alternatives.
Before throwing away any system and starting another from scratch, assess what you already have. Understand what is broken, what is sound, and what simply needs reinforcement. Software, like houses, tends to rot in specific places for specific reasons. Understanding those reasons is what separates renovation from reinvention.
The Nightstand Problem
The same principle applies at a smaller scale. You may already own a perfectly functional house with perfectly functional furniture, yet still want a nightstand you do not currently possess. Your choices are straightforward. You can hope someone has decided to let go of one that happens to meet your criteria. That is the open source gamble. You can buy one, constrained only by budget and whatever definition of quality the manufacturer is committed to that week. Or you can build one yourself, limited only by your skills, imagination, and tolerance for sawdust.
If your goal is personal satisfaction or experimentation, then by all means build the nightstand. But if your goal is to sell or support a product that helps make money, you are no longer just hobby-carpenting. You are operating in the domain of enterprise software.
And when you are building enterprise software, you must view the system from the top down while designing from the bottom up. From the top down, you think about every consumer of your system. In academic terms these are actors. Any system intended to be used, whether by humans or machines, is defined by the interactions between its actors and its responsibilities. Even an autonomous system is both an actor and the architected environment it operates within.
This perspective matters because it forces your abstractions to model the real world rather than some internal taxonomy of clever names. Good abstractions emerge from an understanding of the domain. Bad abstractions emerge from an understanding of a design pattern book.
And if you want maintainability, clarity, and longevity, you always want the first.
Building from Both Directions
Designing software means working from two directions at once. On one hand, you must understand the behavior your system must exhibit. On the other, you must understand the shape of the world it lives in. Systems are not invented out of whole cloth; they crystallize out of the interactions between intentions and constraints. If you ignore either direction, you end up with something brittle, confused, overbuilt, or perpetually unfinished.
There is nothing sacred about any particular architectural style. Pick Domain-Driven, Clean, Vertical Slice, Hexagonal, Layered, or something entirely different. The choice matters far less than your consistency and your commitment to encapsulating concerns properly. Different problems require different arrangements of the same conceptual ingredients. From high altitude, two domains may look identical. Once you descend toward the details, you often discover that one is a bird and the other is an airplane. The trick is knowing when to zoom out and when to zoom in.
Plenty of developers jump immediately into code, but the outside of the system is always the real beginning. What is it supposed to do. Who uses it. Who does it talk to. Who builds it. Who runs it. Who deploys it. Who monitors it. How do you prove it works. These questions define the problem space, and the problem space determines the boundaries and responsibilities your abstractions must reflect.
Even something as small as a script must obey this reality.
Consider a simple provisioning script. First it reads a certificate from the local filesystem so it can authenticate with a remote host. Next it opens an SFTP connection to a distribution server and retrieves a zip file. Then it extracts the archive to a temporary directory provided by the operating system. Finally it executes whatever installers or configuration commands the archive contains.
On the surface this is straightforward, yet every step is shaped by the environment in which it operates. Tools differ between platforms. Available executables change. File paths and separators vary. Temporary directory locations vary. Even the existence or reliability of SFTP clients varies. None of this means we must implement every possible alternative upfront, but it does mean we should acknowledge the existence of alternatives and avoid designing ourselves into a corner where adding support later requires rewriting the entire script.
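To make that concrete, here is a minimal sketch of how such a script might be shaped so the environment-specific details stay at the edges. Every name here is invented for illustration; the only point is that each step of the behavior gets its own seam.

public interface ICertificateSource
{
    // Reads the client certificate used to authenticate with the remote host.
    Task<string> LoadPemAsync(CancellationToken ct);
}

public interface IFileTransfer
{
    // Downloads the distribution archive and returns the local path it was saved to.
    Task<string> DownloadAsync(Uri source, string certificatePem, CancellationToken ct);
}

public interface IArchiveExtractor
{
    // Extracts the archive into an OS-provided temporary directory and returns that directory.
    Task<string> ExtractToTempAsync(string archivePath, CancellationToken ct);
}

public interface IInstallerRunner
{
    // Executes whatever installers or configuration commands the extracted archive contains.
    Task RunAsync(string extractedPath, CancellationToken ct);
}

// The script itself only expresses the behavior; nothing here knows which OS,
// which SFTP client, or which temp-directory convention is in play.
public sealed class ProvisioningScript(
    ICertificateSource certificates,
    IFileTransfer transfer,
    IArchiveExtractor extractor,
    IInstallerRunner installer)
{
    public async Task RunAsync(Uri distribution, CancellationToken ct)
    {
        var pem = await certificates.LoadPemAsync(ct);
        var archive = await transfer.DownloadAsync(distribution, pem, ct);
        var workingDir = await extractor.ExtractToTempAsync(archive, ct);
        await installer.RunAsync(workingDir, ct);
    }
}

Swapping one SFTP client for another, or one temp-directory convention for another, now means providing a different implementation rather than rewriting the flow.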
This principle scales upward. You may choose to place your application data inside a database, but scattering SQL statements across your codebase is an anti-pattern in nearly any architecture not explicitly about database engines or ORM internals. Unless you are writing an RDBMS, data access is rarely the star of the show. The real substance lives in the application logic that interprets, transforms, regulates, or composes that data. Mixing data access concerns directly into that logic creates friction. Separating them reduces friction, which improves maintainability, which improves confidence, which improves speed.
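A minimal sketch of what that separation can look like, with names invented for illustration: the rule stays free of SQL, and the SQL stays in one replaceable place.

// The rule speaks in domain terms and knows nothing about data access...
public sealed record InvoiceSummary(string Number, decimal AmountDue, DateOnly DueDate);

public interface IInvoiceStore
{
    Task<IReadOnlyList<InvoiceSummary>> GetOverdueAsync(DateOnly asOf, CancellationToken ct);
}

public sealed class DunningService(IInvoiceStore store)
{
    public async Task<decimal> TotalOverdueAsync(DateOnly asOf, CancellationToken ct)
    {
        var overdue = await store.GetOverdueAsync(asOf, ct);
        return overdue.Sum(i => i.AmountDue);   // the business rule lives here, free of SQL
    }
}

// ...while the SQL (or ORM, or generated query) lives behind IInvoiceStore in exactly one place.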
The guiding question is always the same: does this choice help my system model the problem more clearly, or does it merely model my current implementation?
If it is the former, great. If it is the latter, you are accumulating technical debt even if the code looks clean.
Abstractions aligned with the domain allow your system to grow gracefully. But abstractions aligned with your tooling force your system to grow awkwardly and inconsistently.
This is the difference between designing from both directions and designing from just one.
Behavior as the Backbone of Architecture
At some point in every software project, the discussion inevitably turns to architecture. Engineers debate whether they should adopt Domain-Driven Design or Clean Architecture, whether their services ought to be hexagonal, layered, vertical-sliced, modular, or some other fashionable geometric configuration, and whether interfaces belong everywhere or nowhere at all. These conversations are interesting, even entertaining, but they often drift into abstraction for abstraction’s sake. The problem is rarely the patterns themselves; rather, it is that these debates frequently occur in a vacuum, disconnected from the actual behaviors the system must exhibit. Humans love patterns, but software only cares about whether it does the right thing.
The most reliable way to design a system, therefore, is to begin with its behavior. A system exists to do something, and if we do not articulate that something clearly, everything downstream becomes guesswork and improvisation. This is precisely where behavior-driven development demonstrates its value. I explore this more deeply in BDD: Make the Business Write Your Tests, but in short, BDD forces us to express the responsibilities of the system in language that is precise, verifiable, and shared by both technical and nontechnical stakeholders. A behavior becomes a specification, a test, a boundary, and a contract all at once.
From an architectural perspective, this shift in thinking is transformative. When we model the largest and most meaningful behaviors first and place an abstraction around them, we create an outer shell that defines the system at a conceptual level. From there, we move inward, breaking behaviors down iteratively into smaller and more specific responsibilities. Each division suggests a natural abstraction, but these abstractions are not arbitrary. They emerge directly from the behavior they represent. They are shaped not by the developer’s preferred patterns but by the needs of the domain itself. This recursive approach ensures that abstractions mirror intent rather than implementation details.
Importantly, this recursion is not fractal. We are not attempting to subdivide reality endlessly. Rather, we refine behaviors only until they are sufficiently well understood to be implemented cleanly. Much as one does not explain quantum chromodynamics to teach someone how to scramble an egg, we do not decompose software beyond what clarity and accuracy require. And while many languages encourage the use of interfaces as the primary mechanism for abstraction, the interface is not the abstraction itself. It is merely a convenient way to enforce a contract. The real abstraction is the conceptual boundary it represents. Whether that boundary is expressed as an interface, a type, a configuration object, or a module is irrelevant as long as the contract is clear and consistent.
This is why starting with abstractions like an IHost that orchestrates an IApplication works so well. These constructs mirror the system’s highest-level behaviors. Once defined, they allow us to drill inward, step by step, carving out responsibilities until the domain takes shape as a set of interlocking, behavior-aligned components. When abstractions are created this way, they tend to be stable. They align with the problem domain rather than the transient needs of a particular implementation, and therefore they seldom need to change unless the underlying behavior changes.
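To be clear about what I mean by that outer shell, here is a conceptual sketch. These are not the Microsoft.Extensions.Hosting types; they are the kind of top-level contracts I am describing, with names chosen to mirror the idea.

// Conceptual contracts only; not the Microsoft.Extensions.Hosting types.
public interface IApplication
{
    // The single highest-level behavior the system exposes: do the thing it exists to do.
    Task RunAsync(CancellationToken ct);
}

public interface IHost
{
    // The host owns lifecycle, configuration, and the environment the application runs in.
    Task RunAsync(IApplication application, CancellationToken ct);
}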
Frequent modification of an abstraction is a warning sign. A well-formed abstraction typically changes only under three conditions: the business behavior has evolved, an overlooked edge case has surfaced, or the original abstraction contained a conceptual flaw. Outside of those circumstances, the need to repeatedly modify an abstraction usually indicates that its boundaries were drawn incorrectly. When adjusting one behavior forces changes across multiple components, the issue is rarely "too many" or "too few" abstractions in an abstract sense. Instead, it is a failure of alignment. The abstraction does not adequately contain the concerns it is supposed to model, and complexity is leaking out of its container and into the rest of the codebase.
Modern tooling makes this problem even more evident. With the availability of source generators, analyzers, expressive type systems, code scaffolding, and dynamic configuration pipelines, there is increasingly little justification for sprawling boilerplate or repetitive structural code. Boilerplate is not a mark of engineering rigor. It is simply untested and uninteresting glue repeated dozens of times because someone did not take steps to automate it. Good abstractions, by contrast, elevate meaning. They allow the domain to be expressed directly without forcing the developer to wade through noise.
This leads naturally to what I consider the ideal state of modern development: a system that is entirely automated from the moment code touches a repository until the moment it reaches a production-like environment. Compilation, testing, packaging, deployment, orchestration, and infrastructure provisioning should not require human involvement. The only manual step should be expressing intent in the form of new or updated behaviors. Every function that exists within the system should originate as a behavior-driven specification capable of running the entire application inside a controlled test environment, complete with containerized dependencies and UI automation tools such as Playwright. Those same tests should also be able to stub dependencies so the scenarios can run in isolation. When the system itself is treated as the first unit under test, orchestration becomes a priority rather than an afterthought.
Achieving this level of automation depends on stability, and that stability depends on disciplined abstraction. Any element that may vary across environments, including configuration values, credentials, infrastructure, connection details, and policies, must be isolated behind settings and contracts that the application can consume without knowing anything about the environment it runs in. Once this encapsulation is in place, behavior-driven specifications can operate confidently, verifying the correctness of the system from the outside in even while its internal components remain free to evolve.
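A small sketch of that encapsulation, with illustrative names; whether the values come from environment variables, a vault, or a config file is a detail hidden behind the contract.

// Anything that varies by environment sits behind a contract the application consumes.
public sealed record StorageSettings(Uri Endpoint, string Container);

public interface ISettingsProvider
{
    // How values are resolved (environment variables, vault, config file) is an infrastructure detail.
    T Get<T>() where T : class;
}

public sealed class DocumentArchiver(ISettingsProvider settings)
{
    // The archiver never learns which environment it is running in.
    public Uri ArchiveLocation => settings.Get<StorageSettings>().Endpoint;
}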
Finally, it is worth stating explicitly that hand-writing repetitive boilerplate code in a CRUD-heavy application, such as repositories, controllers, mappers, DTOs, validators, or entire edge-to-edge layers, is not admirable craftsmanship. It is busywork. If you have twenty entities with identical structural behavior and you are manually writing twenty sets of nearly identical files, the issue is not insufficient discipline. It is insufficient automation. Whether through source generators, templates, reflection-based pipelines, or dynamic modules, these problems can and should be solved generically. Engineers should focus their manual effort on the places where meaning lives: the domain, the behavior, and the boundaries.
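As one illustrative way to solve the general case once (the names are mine, and a source generator or reflection-based module could emit the same shape):

// Solve the structural behavior once instead of hand-writing it per entity.
public interface IEntity<TId>
{
    TId Id { get; }
}

public interface IRepository<TEntity, TId> where TEntity : IEntity<TId>
{
    Task<TEntity?> FindAsync(TId id, CancellationToken ct);
    Task SaveAsync(TEntity entity, CancellationToken ct);
    Task DeleteAsync(TId id, CancellationToken ct);
}

// Twenty entities, one implementation (hand-written or generated), zero copy-pasted files.
public sealed class InMemoryRepository<TEntity, TId> : IRepository<TEntity, TId>
    where TEntity : IEntity<TId>
    where TId : notnull
{
    private readonly Dictionary<TId, TEntity> _store = new();

    public Task<TEntity?> FindAsync(TId id, CancellationToken ct) =>
        Task.FromResult<TEntity?>(_store.TryGetValue(id, out var entity) ? entity : default);

    public Task SaveAsync(TEntity entity, CancellationToken ct)
    {
        _store[entity.Id] = entity;
        return Task.CompletedTask;
    }

    public Task DeleteAsync(TId id, CancellationToken ct)
    {
        _store.Remove(id);
        return Task.CompletedTask;
    }
}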
Good abstractions do not eliminate complexity; they contain it. Bad abstractions distribute it. And behavior-driven, problem-first design is how we tell the difference.
From Story to Spec: Describing Behavior First
To make this concrete, return to our original "Add Middle Initial to User Name Display" ticket. Most teams would handle this with a couple of unit tests directly against whatever UserNameService or UserNameFormatter happens to exist. The tests would exercise a particular class, call a particular method, and assert on a particular string. That can work, but it starts at the implementation, not at the behavior.
If instead we begin with behavior, the specification sounds more like this:
When a user has a middle name, show the middle initial between the first and last name.
When a user does not have a middle name, omit the gap entirely.
When a display style changes (for example, "formal" versus "informal"), the rules about how the middle initial appears should still hold.
That is the contract. It does not mention classes, factories, or strategies. It talks about what the system must do from the outside.
With something like my project TinyBDD, that kind of behavior becomes executable in a fairly direct way. Using the xUnit adapter, a scenario might look like this:
using TinyBDD.Xunit;
using Xunit;

[Feature("User name display")]
public class UserNameDisplayScenarios : TinyBddXunitBase
{
    [Scenario("Standard display includes middle initial when present")]
    [Fact]
    public async Task MiddleInitialIsRenderedWhenPresent()
    {
        await Given("a user with first, middle, and last name", () =>
                new UserName("Ada", "M", "Lovelace"))
            .When("formatting the user name for standard display", user =>
                UserNameDisplay.Standard.Format(user))
            .Then("the result places the middle initial between first and last", formatted =>
                formatted == "Ada M. Lovelace")
            .AssertPassed();
    }

    [Scenario("Standard display omits missing middle initial")]
    [Fact]
    public async Task NoMiddleInitialWhenMissing()
    {
        await Given("a user with only first and last name", () =>
                new UserName("Ada", null, "Lovelace"))
            .When("formatting the user name for standard display", user =>
                UserNameDisplay.Standard.Format(user))
            .Then("no dangling spaces or periods appear", formatted =>
                formatted == "Ada Lovelace")
            .AssertPassed();
    }
}
In these scenarios, the behavior is the first-class citizen. The test does not care whether you use a UserNameDisplayStrategyFactory, a dependency-injected IUserNameFormatter, or a static helper hidden in a dusty corner of your codebase. It cares that given a user, when you format their name, you get the right string.
The abstractions are already visible in the code, but only as a side effect of expressing behavior:
UserName represents the domain concept of a person’s name, not a UI or persistence model.
UserNameDisplay.Standard represents a particular display style that the business cares about.
The behavior is encoded in the transition from UserName to the formatted string, not in a particular class hierarchy.
Notice what is not present: we do not have separate strategies for every permutation of name structure, locale, and display preference. We have a single coherent abstraction around "displaying a user name in the standard way," and the test drives the rules we actually need.
Letting Abstractions Fall Out of the Domain
Once you have a behavior-focused spec, the abstractions almost draw themselves. One reasonable implementation might look like this:
public sealed record UserName(
    string First,
    string? Middle,
    string Last);

public interface IUserNameDisplay
{
    string Format(UserName name);
}

public sealed class StandardUserNameDisplay : IUserNameDisplay
{
    public string Format(UserName name)
    {
        if (!string.IsNullOrWhiteSpace(name.Middle))
        {
            return $"{name.First} {name.Middle[0]}. {name.Last}";
        }

        return $"{name.First} {name.Last}";
    }
}

public static class UserNameDisplay
{
    public static readonly IUserNameDisplay Standard = new StandardUserNameDisplay();
}
This is not an argument that every trivial formatting problem deserves an interface and a concrete class. You could inline this logic in a static helper and your tests above would still pass. The point is that the abstraction here is small, meaningful, and directly aligned with the behavior we care about. If later the domain grows to include multiple display styles, cultures, or localization concerns, there is already a clear seam to extend. You can introduce additional IUserNameDisplay implementations where and when they are genuinely needed, not because a pattern catalog declared that every noun deserves a factory.
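For example, if the business later asks for a formal style, the seam is already there. A minimal sketch, with the formal rule itself invented for illustration:

// A hypothetical second style, added only once the business actually asks for it.
public sealed class FormalUserNameDisplay : IUserNameDisplay
{
    public string Format(UserName name) =>
        string.IsNullOrWhiteSpace(name.Middle)
            ? $"{name.Last}, {name.First}"
            : $"{name.Last}, {name.First} {name.Middle[0]}.";
}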
If, however, you discover that adding a new behavior requires touching half the classes in the system, that is a sign you have modeled implementation variants rather than domain concepts. The behavior spec remains constant; the code churn reveals where your abstractions are misaligned.
Scaling the Same Idea Up to the System Level
So far this is all very local. A name goes in, a formatted string comes out. Real systems have much more interesting behaviors: accepting traffic, orchestrating workflows, integrating with external services, healing from transient failures, deploying safely, and so on.
The same discipline still applies. You can treat the application itself as the unit under test and express its behavior with the same style of specification. A high-level scenario might read something like this:
Given a configured application host and its dependencies
When the host starts
Then the public API responds to a health probe
And all critical services report healthy
And any failing dependency is surfaced clearly rather than silently ignored
As an executable TinyBDD scenario, that might look like:
using TinyBDD.Xunit;
using Xunit;

[Feature("Application startup and health")]
public class ApplicationHealthScenarios : TinyBddXunitBase
{
    [Scenario("Host starts and exposes a healthy API surface")]
    [Fact]
    public async Task HostStartsAndReportsHealthy()
    {
        await Given("a test host with default configuration", () =>
                TestApplicationHost.CreateDefault())
            .When("the host is started", async host =>
            {
                await host.StartAsync();
                return host;
            })
            .Then("the health endpoint returns OK", async host =>
                await AssertHealthEndpointOk(host, "/health"))
            .And("all critical health checks pass", async host =>
                await AssertCriticalChecksPass(host))
            .AssertPassed();
    }

    private static Task AssertHealthEndpointOk(TestApplicationHost host, string path)
    {
        // This could exercise a real HTTP endpoint against a TestServer or containerized instance.
        // The assertion lives here, but the behavior is defined in the scenario above.
        throw new NotImplementedException();
    }

    private static Task AssertCriticalChecksPass(TestApplicationHost host)
    {
        // Could query IHealthCheckPublisher, metrics, logs, or an in-memory probe endpoint.
        throw new NotImplementedException();
    }
}
The implementation details behind TestApplicationHost are intentionally omitted here, because they are not the main point. What matters is that at the boundary, we are still describing behavior: the host starts, the API responds, health checks pass. Internally, TestApplicationHost can wrap an IHost, use Testcontainers, spin up a WebApplicationFactory, or compose a full stack in Docker. The abstraction exists to let the behavior remain stable while infrastructure details evolve.
This is the same pattern you used on the small scale with UserNameDisplay, only now it operates at the level of the entire application. The outermost abstraction represents the system as it is experienced from the outside. Everything underneath exists to satisfy that experience.
Declarative Core, Automated Edge
Once specifications like these exist, they become the backbone of your automation. The natural end state is a development flow where:
A new behavior is introduced as a TinyBDD scenario.
That scenario boots the application in a realistic but controlled environment.
The rest of the stack compiles, configures, and deploys itself into a test harness without manual intervention.
The same scenarios run in local, CI, and pre-production environments, with only configuration differing between them.
The actual application code can then remain highly declarative. Controllers or handlers describe what should happen when a request arrives. Domain services express rules and policies in terms of value objects and aggregates. Infrastructure concerns hide behind interfaces and adapters. Source generators or templates can remove boilerplate around repetitive CRUD or mapping concerns. The tests remain focused on behavior: does the system do what we said it would do.
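A tiny sketch of what that declarative shape looks like in practice, with illustrative names; the handler states what happens, and everything environmental hides behind a contract.

// Illustrative names only: the handler declares what happens; rules and infrastructure hide behind contracts.
public sealed record RenameUser(Guid UserId, UserName NewName);

public interface IUserDirectory
{
    Task RenameAsync(Guid userId, UserName newName, CancellationToken ct);
}

public sealed class RenameUserHandler(IUserDirectory directory)
{
    // No SQL, no HTTP clients, no configuration lookups; just the behavior.
    public Task Handle(RenameUser command, CancellationToken ct) =>
        directory.RenameAsync(command.UserId, command.NewName, ct);
}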
Abstractions in this world are not ornamental. They are the scaffolding that holds the behavior in place while the infrastructure and implementation details shift around it. As long as the specifications stay clear and the boundaries remain aligned with the domain, you can move quickly without losing correctness. And if you ever find yourself adding yet another UserNameDisplayStrategyFactoryFactory just to keep a scenario passing, you will at least have a clear, behavior-centric lens through which to recognize that something has gone wrong.
Design Patterns, Declarative Thinking, and Solving the General Case
Before closing, there is one more point worth addressing, because it tends to resurface in every conversation about abstraction: the role of design patterns. Patterns are often misunderstood in practice. For some teams they become a dictionary of shapes to copy. For others they become a superstition, something to be avoided because they "feel enterprise." In reality, design patterns are nothing more than reusable expressions of relationships that occur frequently in software. They only become harmful when applied without context.
Used well, patterns are a form of declarative modeling. They describe how things relate, not how many classes must be introduced to satisfy a template. This distinction is one reason I created PatternKit, which contains fluent implementations of every GoF pattern. The aim is not to celebrate patterns for their own sake, but to show that they can be expressed clearly and idiomatically, without the ceremony that has accumulated around them over the past few decades. A fluent strategy or builder is readable because it conveys meaning, not because it adheres to a UML diagram. A properly shaped composite or decorator is useful because it matches the problem at hand, not because the catalog says "now is the time."
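To give a flavor of what I mean by fluent here, consider the following standalone sketch. It is not PatternKit's actual API, just an illustration of a strategy expressed as a readable chain of cases.

// A standalone sketch of a fluent strategy; not PatternKit's API, just the flavor of it.
public sealed class Strategy<TIn, TOut>
{
    private readonly List<(Func<TIn, bool> When, Func<TIn, TOut> Then)> _cases = new();
    private Func<TIn, TOut>? _fallback;

    public Strategy<TIn, TOut> When(Func<TIn, bool> predicate, Func<TIn, TOut> result)
    {
        _cases.Add((predicate, result));
        return this;
    }

    public Strategy<TIn, TOut> Otherwise(Func<TIn, TOut> result)
    {
        _fallback = result;
        return this;
    }

    public TOut Apply(TIn input)
    {
        foreach (var (when, then) in _cases)
        {
            if (when(input)) return then(input);
        }

        return _fallback is not null
            ? _fallback(input)
            : throw new InvalidOperationException("No case matched and no default was provided.");
    }
}

// Usage reads like the rule it encodes:
public static class NameDisplayRules
{
    public static readonly Strategy<UserName, string> Standard =
        new Strategy<UserName, string>()
            .When(n => !string.IsNullOrWhiteSpace(n.Middle), n => $"{n.First} {n.Middle![0]}. {n.Last}")
            .Otherwise(n => $"{n.First} {n.Last}");
}

The chain reads like the rule it encodes, which is the whole point: the pattern serves the meaning instead of the other way around.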
Patterns at their best are accelerants for thought. They give structure to complex behavior. They reveal seams in the domain. They help us express intent without prescribing a particular class arrangement. When applied declaratively, patterns become lightweight tools that reinforce clarity rather than obstacles that obscure it.
This is the same principle that guides good abstractions. We should always aim to solve the general case when appropriate, rather than re-solving the same narrow problem in twenty different places. Shared operations belong in helpers or extensions not because we want fewer lines of code, but because meaning belongs in one place rather than scattered across many. Wrapping behavior in a well-designed abstraction is not indulgence; it is about shaping the domain so the rest of the system can grow without friction. Once the domain is sufficiently modeled, higher-order helpers, pipelines, or policy objects can provide a unified vocabulary for orchestrating that domain. These are the moments when patterns shine: when they articulate a common structure behind several similar problems and offer a clean way to express the variation between them.
Patterns should not be forced; they should emerge. If you find yourself retrofitting the domain to suit a pattern, the pattern is wrong. If a pattern clarifies the domain, it is the right one. When in doubt, your behaviors and your domain model will tell you the truth.
Closing Thoughts
At every scale, from a one-line helper to a distributed system with dozens of services, the same principles hold. Begin with behavior. Shape abstractions around the real problems rather than your implementation preferences. Allow patterns to emerge naturally when they clarify meaning. Lean on automation and declarative structures to eliminate noise. Let stability arise from good boundaries rather than rigid frameworks. And above all, keep re-examining the domain as it evolves. Systems live for years; code is rewritten many times. The only reliable compass is the domain itself.
You do not hate abstractions. What you hate are the wrong ones: misplaced layers, premature hierarchies, needless ceremony, half-understood patterns applied because someone did not want to think. But abstractions that arise from behavior and domain, shaped carefully and used intentionally, are not burdens. They are the tools that let us build systems that last.
And if you stay anchored to this approach, you won’t end up navigating a small cavern system of strategies and factories, wondering how a middle initial turned into a guided tour of every pattern someone ever took a little too seriously. You won’t need to fear abstraction at all. Instead, you will wield it where it belongs.