Leon Pennings

Originally published at blog.leonpennings.com

Software Engineering Is Living The Golden Hammer Antipattern — And Everyone Loves It

Why the industry simultaneously agrees with Brooks and ignores him — and why it's structured to stay that way

The Paradox Nobody Talks About

Ask any experienced software engineer about essential versus accidental complexity. They will nod. Ask them about Brooks' central argument in No Silver Bullet — that the hard part of software is the conceptual work of understanding the problem, not the mechanical work of expressing it in code. They will nod again.

Then watch what happens when the next project starts.

Someone opens Spring Initializr. Someone proposes microservices. Someone puts Kubernetes in the architecture diagram before a single domain concept has been named. The technology stack is decided in the first week. The business domain is still being understood in month six.

Nobody in that room forgot Brooks. The choice was never really about Brooks.

That is the paradox this essay is about. Not that the industry is ignorant of the problem — but that it is structured to reproduce it perfectly, indefinitely, at enormous and invisible cost.

What Brooks Actually Said

In 1975, Frederick Brooks published The Mythical Man-Month, based on his experience managing the development of OS/360 at IBM. The project was late, over budget, and initially didn't work particularly well. Brooks spent the rest of his career trying to understand why.

The insight most people remember is the coordination problem. Adding people to a late software project makes it later. Nine women cannot make a baby in one month. Communication overhead scales quadratically. You cannot parallelise work that is fundamentally interdependent. Everyone knows this. It shows up in every post-mortem, every engineering blog, every conference talk about why the rewrite took three years instead of six months.

What people remember less clearly is the deeper argument Brooks made in his 1986 essay No Silver Bullet, later added to the anniversary edition of the book.

Brooks drew a distinction between two kinds of complexity in software. Essential complexity is inherent to the problem itself — the rules, the relationships, the invariants, the genuine difficulty of the business domain being modelled. Accidental complexity is everything else — the tools, the frameworks, the infrastructure, the deployment machinery, the coordination overhead introduced by the way we choose to build systems.

His claim was precise and devastating: there is no silver bullet because the hard part of software is essential complexity, and no tool or methodology can compress it. You cannot automate your way out of needing to understand the problem. You cannot framework your way past the conceptual work.
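The distinction can be made concrete with a small sketch. The Invoice and its rules below are hypothetical, invented for illustration: the point is that the invariants in the pay method would have to exist in any stack, in any language, because they belong to the problem, not to the tooling.

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    """A hypothetical domain object. Amounts are in cents."""
    total: int
    paid: int = 0

    def pay(self, amount: int) -> None:
        # These invariants are the essential complexity: no framework,
        # orchestrator, or deployment pipeline makes them go away.
        if amount <= 0:
            raise ValueError("payment must be positive")
        if self.paid + amount > self.total:
            raise ValueError("cannot overpay an invoice")
        self.paid += amount

    @property
    def settled(self) -> bool:
        return self.paid == self.total
```

Everything wrapped around a class like this, such as serialisation, transport, retries, and container orchestration, is accidental: necessary in some form, but compressible, swappable, and not the hard part.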

Then he said something that was either ignored or misunderstood: the industry's persistent belief that the next tool, the next methodology, the next architectural pattern will finally solve the problem of software difficulty is itself the symptom of failing to make this distinction.

That was 1986. Since then the industry has produced structured programming, object orientation, UML, SOA, agile, microservices, event-driven architecture, CQRS, cloud-native development, and AI-assisted coding.

Each one arrived as a silver bullet. Each one was greeted with the same enthusiasm. Each one was applied before the domain was understood.

Brooks' own framework predicted every step of it.

The Golden Hammer The Industry Forgot To Question

There is a well-known antipattern in software called the golden hammer. It describes the tendency to over-apply a familiar tool regardless of whether it fits the problem, and it is named after Maslow's observation that if all you have is a hammer, everything looks like a nail.

The modern software industry does not have one golden hammer. It has a coordinated set of them — and they are chosen as a bundle, before the problem is understood, in almost every project that starts today.

The bundle looks like this: a popular framework for the application layer, microservices for decomposition, an event-driven or REST-based communication model, a cloud platform for deployment, and Kubernetes for orchestration. The specific tools vary by organisation and year. The pattern does not vary.

What makes this particular golden hammer different from the textbook antipattern is a crucial property: it is unfalsifiable.

A normal golden hammer eventually gets retired. Something demonstrates it was the wrong tool — the screw still won't turn, the nail bent, the joint failed. There is a moment of visible failure that creates pressure to reconsider.

The modern software stack has no such moment. If the system runs in production, the stack gets the credit. If the system struggles — if changes are expensive, if the team grows endlessly, if understanding the codebase requires months of archaeology — the blame goes to requirements changing, team turnover, business complexity, or simply the nature of software. The stack is never in the dock.

This is not an accident. It is a structural property of how software success is defined. A system running in production passes the only test anyone applies. There is no test for whether it could have been built at a fraction of the cost with a fraction of the complexity. Nobody built that version. Nobody ever does.

The golden hammer persists not because people are lazy or ignorant — but because the thing that should replace it is invisible to every organisational instrument the industry has built.

Agile Was The Correction. Then It Was Captured.

In 2001, the Agile Manifesto proposed something that was, underneath its somewhat vague language, a precise epistemological claim.

Software development is fundamentally a process of learning. You do not fully understand the domain at the start. You build a version of your understanding, expose it to reality — specifically to the domain experts who live in that business every day — and you refine it. Each iteration is not primarily a delivery mechanism. It is a question: did we understand the domain correctly?

The working software at the end of a sprint is not the point. It is the test. The test of whether your conceptual model of the business — your understanding of what the domain actually is, what rules govern it, what concepts belong together — corresponds to reality. Domain experts are not approving features. They are stress-testing your model.

That is what Agile was. A mechanism for continuously refining essential understanding through structured contact with reality.

That is not what Agile became.

What Agile became was a process for efficiently transcribing user stories into framework components. Two-week sprints. Velocity points. Definition of done. Backlog refinement. The ceremonies survived. The epistemology was quietly discarded.

And then CI/CD completed the transformation.

Continuous integration and continuous deployment are genuinely valuable practices for managing the operational complexity of releasing software. But they introduced a subtle and devastating redefinition of what "production ready" means.

Before, production readiness was at least nominally connected to domain correctness — does this system correctly implement the business? After, production readiness means the pipeline is green. Tests pass. Build succeeds. Deploy proceeds.

These are not the same question. A passing test suite validates that the code does what the code was written to do. It says nothing about whether the code was written to do the right thing. Whether the domain concepts are correctly identified. Whether the invariants are correctly enforced. Whether the model reflects the business reality or merely the user story that described one interaction with it.

You can have one hundred percent test coverage and zero domain correctness. The pipeline will be green. The system will go to production. The retrospective will be positive.
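The gap between "the pipeline is green" and "the domain is right" fits in a few lines. Everything here is hypothetical, a made-up free-shipping rule, but the pattern is the point: the test encodes the same misreading as the code, so both agree with each other and neither agrees with the business.

```python
FREE_SHIPPING_THRESHOLD = 50.0
SHIPPING_FEE = 5.0


def checkout_total(subtotal: float, discount: float) -> float:
    discounted = subtotal * (1 - discount)
    # The developer's reading of the story: free shipping when the
    # amount actually CHARGED clears the threshold.
    shipping = 0.0 if discounted >= FREE_SHIPPING_THRESHOLD else SHIPPING_FEE
    return discounted + shipping


def test_checkout_total():
    # The test encodes the same reading, so it passes: full coverage,
    # green pipeline, ship it.
    assert checkout_total(60.0, 0.25) == 50.0  # 45.00 + 5.00 shipping


test_checkout_total()
```

But suppose the domain expert meant the threshold applies to the pre-discount subtotal, so a 60.00 order ships free. The code is wrong, the test is wrong in the same way, and no instrument in the pipeline can tell.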

The feedback loop Agile promised — between domain experts and the conceptual model being built — was replaced by a feedback loop between the code and its own tests. We optimised the loop while removing the thing it was supposed to validate.

The Sociological Lock-In

So far this looks like an intellectual failure. Engineers and organisations that know better making choices they shouldn't. A problem of discipline or culture that better education might eventually correct.

It is not. It is structural. And the structure actively selects against correction.

Consider how a software project begins. Before a single domain conversation happens, several things must occur. The project must be staffed. That requires a job posting. A job posting requires a technology stack. The project must be estimated. Estimation requires a known architecture. The kickoff deck must be prepared. The kickoff deck needs something in the architecture diagram.

All of these organisational necessities demand a technology decision at the precise moment when the only intellectually honest answer is: we don't know yet. We haven't understood the domain.

That answer is organisationally impossible to give. So the stack gets chosen. Not out of ignorance. Not out of laziness. Out of genuine organisational necessity. The machinery of project initiation requires it.

And once the stack is chosen, it shapes everything that follows. The hiring criteria. The team composition. The onboarding process. The architecture decisions. The decomposition strategy. The system that emerges is not primarily a model of the business domain. It is primarily an expression of the technology choices made before the domain was understood.

This is not the worst part.

The worst part is what happens at the hiring stage.

Conceptual thinking — the ability to reason about what a business concept actually is, what it should own, what it should never be responsible for, where the real boundaries lie — is extremely difficult to assess in an interview. It requires time, domain context, and a level of conversation that most hiring processes cannot accommodate. It does not show up cleanly on a CV.

Tool fluency shows up immediately. Spring Boot, Kubernetes, Kafka, event-driven architecture — these are expressible, searchable, assessable. You can screen for them in thirty seconds. You can test them in a one-hour technical interview. You can verify them with a take-home assignment.

So organisations hire for tool fluency. Not because they don't value conceptual thinking. Because tool fluency is what their hiring process can see.

The consequence is a team that reaches for the familiar tools. The team ships systems using those tools. Those systems run in production. The hiring criteria get validated. The loop closes.

Engineers who push back on premature technology decisions get filtered out at the CV screen, outvoted in the kickoff meeting, or labelled as impractical idealists who don't understand how real projects work. The selection pressure is quiet, consistent, and almost entirely invisible.

When everyone hired thinks the same way, the golden hammer stops looking like a hammer. It looks like engineering.

The Cost Nobody Can See

Here is the claim that cannot be proven and cannot be dismissed.

A system built with a full modern distributed stack — framework, microservices, cloud infrastructure, orchestration — could in many cases have been built far more simply, maintained by a fraction of the team, and been more correct, more stable, and more responsive to business change.

That statement cannot be verified. Because the simpler version was never built. Nobody built it. The team that chose the distributed architecture never built the alternative to compare against. The organisation that approved the budget never saw a competing proposal. The engineers who maintained the system never worked on a well-modelled equivalent.

This is not a gap in the data. It is the mechanism of the problem.

Brooks identified it precisely: most systems are built only once. There is no second system built with different assumptions, run for five years, and compared on total cost of ownership, ease of change, and conceptual correctness. The counterfactual does not exist. Therefore the cost of the wrong choice is permanently invisible.

And here is what makes it truly unfalsifiable: the entire industry is paying the same inflated price. There is no reference point. When every team uses the same stack, incurs the same coordination overhead, grows to the same size, and struggles with the same maintenance costs — those costs stop being visible as costs. They become the definition of what software costs. Normal and wasteful become indistinguishable.

But the difference is not just in cost. It is in what the work actually consists of every single day.

In a team organised around accidental complexity, the daily work is about the technology. Configuring services. Connecting components. Managing framework upgrades. Fixing pipeline failures. Debugging integration issues. Updating dependencies. Understanding the codebase means knowing which service owns which endpoint and how the data flows between them. The business domain is somewhere in there, translated into controllers and DTOs and event schemas, but it is not what the day is about.

In a team organised around essential complexity, the daily work is about the domain. Which concept owns this responsibility. What this rule actually means. What the domain expert said yesterday that changed how they understand the model. The implementation follows from that understanding — and because the model is clear, the implementation is the smaller part of the day, not the larger.

The difference is visible — immediately and without any instrumentation — in the daily standup.

In one team, the language is technical. Spring, Kafka, the pipeline, the service, the endpoint, the migration. Progress is reported in terms of tickets and story completion. The word "business" appears occasionally, usually in the phrase "business requirement."

In the other team, the language is conceptual. The Order, the Invoice, the Payment, what a Shipment is responsible for, whether a Client and a User are really the same thing. Technology appears occasionally, usually briefly, because the implementation of a well-understood concept is rarely the hard part.

You do not need metrics or cost analyses to know which team is working on the right problems. You need one standup.

If every item on the standup is about accidental complexity, go back. Ask what the essential complexity actually demands. Then, and only then, choose the technology that serves it.

If every garage in the world were built to the standard of a luxury hotel, nobody would know a garage could cost less. The price would simply be what it is. The inflated standard would be the only standard anyone had ever seen.

That is where the software industry is today. Paying Burj Al Arab prices for a garage that needed to store a jar of paint. And maintaining a universal, genuine, unforced consensus that this is simply what garages cost.

Two Rules That Cost Nothing

Most prescriptions for this problem are expensive. Hire differently. Retrain your engineers. Adopt a new methodology. Bring in consultants. Run workshops.

These are not wrong. But they require budget, time, and organisational will that most teams do not have in the moment a project starts.

There are two rules that cost nothing, require no external help, and can be applied starting tomorrow.

Do not choose technology upfront.

Technology enters the project when the domain demands it, not when the kickoff deck needs an architecture diagram. The first weeks of a project produce domain understanding — what the business actually is, what concepts exist in it, what rules govern them. Technology choices follow from that understanding, added only when essential complexity makes them necessary, and only to the degree that it does.

This feels impossible in most organisations. The job posting needs a stack. The estimate needs an architecture. The kickoff slide needs something in the boxes.

Those are real constraints. They are also exactly the organisational machinery that inverts Brooks before the first line of code is written. Recognising that the machinery is the problem is the first step toward not letting it make the decision by default.

Mandate that standups be about business concepts only. Never technology.

This is the litmus test made into a practice. If someone says "I'm working on the Kafka consumer," the immediate question is: what business concept does that serve, and does that business concept actually require it? If the answer is unclear, the technology choice is premature. If the answer is clear, state the business concept first and let the technology be the footnote it should be.

A standup where every item is about services, frameworks, pipelines, and endpoints is a standup where the team has been captured by accidental complexity. It will feel entirely normal. It will sound like engineering. The terminology will be confident and precise.

But the business domain — the essential complexity that justifies the system's existence — will be invisible. And a team that cannot talk about the business in its daily standup is a team that is not working on the business. It is working on the technology that was supposed to serve it.

These two rules do not solve the problem entirely. The sociological pressures remain. The hiring pipelines remain. The organisational machinery remains. But they create two moments — one at the start of a project, one every single day — where the inversion becomes visible. Where someone can point at the standup and say: we have not mentioned a business concept in three days. What are we actually building?

That question, asked consistently, is more powerful than any methodology.

Closing

The most expensive software is the software everyone agrees is fine.

It runs in production. The pipeline is green. The team is stable. The architecture is recognisable. The job postings write themselves. The onboarding takes three months instead of three days, but that is just how software works. The changes take longer than they should, but the domain is complex. The team keeps growing, but the system keeps growing too. The costs keep rising, but software is expensive.

None of this is inevitable. All of it is a consequence of a single inversion: accidental complexity chosen before essential complexity is understood. A choice made not out of ignorance, but out of organisational necessity, sociological pressure, and the permanent invisibility of the alternative.

Brooks saw it in 1975. Named it clearly. Watched the industry quote him extensively and change nothing.

The golden hammer is not a mistake. It is the product. The template is not a shortcut. It is the destination. The assembly is not the means. It has become the craft.

Two rules. No technology upfront. Standups about the business only.

They will feel radical. They are just Brooks, applied.

Everyone agrees with Brooks.

Then the next project starts.
