DEV Community

Vivian Voss

Posted on • Originally published at vivianvoss.net

Why We Reach for the Layer

Illustration: a pen-and-ink plate in the tradition of a 1900s newspaper, with a quote by Antoine de Saint-Exupéry on the left.

On Second Thought — Episode 06

The ORM hides the SQL. The cache hides the ORM. The service mesh hides the services. The operator hides the YAML, which already hid the kubelet, which already hid the container, which already hid the process. By Tuesday, nobody quite remembers what the original problem was. They are too busy configuring its sixth wrapper.

This is the post about that wrapper.

The Axiom

When something does not work as one wishes, one adds a layer on top. The pattern is invisible because it is universal. We do it in code, in infrastructure, in process, in organisation. We wrap APIs in clients, clients in adapters, adapters in service objects, service objects in factories. We wrap deploys in pipelines, pipelines in operators, operators in platforms, platforms in portals. We wrap teams in tribes, tribes in chapters, chapters in centres of excellence. The plaster is the one tool that fits every wound, and the wound, rather conveniently, is never the one that gets examined.
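The wrapping chain can be caricatured in a few lines of Python. The names here are hypothetical; the point is that every class adds nothing but indirection, and the only real work happens in the innermost line.

```python
class PaymentsApi:
    """The thing that actually does the work."""
    def charge(self, cents):
        return {"status": "ok", "cents": cents}

class PaymentsClient:
    """Wraps the API."""
    def __init__(self):
        self._api = PaymentsApi()
    def charge(self, cents):
        return self._api.charge(cents)

class PaymentsAdapter:
    """Wraps the client."""
    def __init__(self):
        self._client = PaymentsClient()
    def charge(self, cents):
        return self._client.charge(cents)

class PaymentsService:
    """Wraps the adapter."""
    def __init__(self):
        self._adapter = PaymentsAdapter()
    def charge(self, cents):
        return self._adapter.charge(cents)

class PaymentsServiceFactory:
    """Wraps the service. The sixth wrapper is left as an exercise."""
    @staticmethod
    def build():
        return PaymentsService()

print(PaymentsServiceFactory.build().charge(500))  # → {'status': 'ok', 'cents': 500}
```

Four of the five classes could be deleted without changing a single observable behaviour, which is precisely why none of them ever are.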

The reflex is so deeply trained that the alternative does not occur as an option. The question "could we remove the underlying thing instead of wrapping it?" is rarely asked because the team that built the underlying thing is in the next room, the project that delivered it is in the previous quarter's review, and the engineer who would have to do the removal has thirteen tickets that close more cleanly with a wrapper. So the wrapper goes in, and a year later, the wrapper has its own wrapper.

The Origin

Layered architecture has a perfectly respectable origin. Edsger Dijkstra published "The Structure of the THE-Multiprogramming System" in CACM in May 1968, introducing disciplined layering as a means of bounding complexity. Each layer presented a strictly defined interface to the layer above it; an engineer could reason about one layer without holding the entire stack in their head. It was a brilliant move and remains one.

David Parnas, four years later, gave the underlying principle its enduring name. His 1972 paper, "On the Criteria To Be Used in Decomposing Systems into Modules", introduced Information Hiding: a module should hide what is likely to change behind a stable interface, so that change in one place does not propagate to all the others. Layers were one application of the principle. The intent was to contain complexity, not to defer it.
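Parnas's principle is easy to sketch. In this hypothetical fragment, callers see only insert() and lookup(); the representation, a pair of sorted lists here, is the decision the module keeps secret and could swap for a hash map or a B-tree without a single caller changing.

```python
import bisect

class SymbolTable:
    """Stable interface: insert() and lookup(). The storage is the hidden decision."""

    def __init__(self):
        # Hidden representation: parallel sorted lists. Likely to change;
        # therefore hidden, in exactly Parnas's sense.
        self._keys = []
        self._values = []

    def insert(self, key, value):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._values[i] = value          # overwrite existing key
        else:
            self._keys.insert(i, key)
            self._values.insert(i, value)

    def lookup(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return None
```

The layer here contains a decision; it does not postpone one. That is the distinction the rest of this post is about.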

Somewhere between Parnas's paper and the third generation of cloud abstractions, the verb shifted. Containing became postponing. The layer that once prevented the lower one from leaking now exists primarily to defer the moment in which one would have to look at it. The Kubernetes operator does not hide a stable abstraction; it hides a YAML format that nobody wishes to read. The retry decorator does not bound a clean interface; it papers over an upstream service that has never been made reliable. The ORM does not abstract the database; it postpones the conversation about what the queries should actually be.
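The retry decorator is the most familiar of these plasters. A minimal sketch, with a hypothetical flaky upstream standing in for the service nobody fixes:

```python
import time

def retry(attempts=3, base_delay=0.01):
    """The plaster: wrap the call in retries instead of making the callee reliable."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if attempt == attempts - 1:
                        raise  # plaster exhausted; the wound shows through
                    time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        return wrapper
    return decorate

calls = {"n": 0}

@retry(attempts=5)
def fetch_profile(user_id):
    # Stand-in for an upstream that fails twice before answering.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream flaked again")
    return {"id": user_id}

print(fetch_profile(42))  # succeeds on the third attempt
```

The decorator works, which is the problem: it converts an unreliable upstream into a slightly slower reliable one, and the conversation about why the upstream fails never happens.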

The vocabulary survived. The discipline did not. One hardly notices the inversion.

The Cost

Manny Lehman, working at IBM in the early 1970s, formulated what came to be called the laws of software evolution. The second law, in its mature form: the complexity of an evolving system increases unless explicit work is done to maintain or reduce it. Few sentences in computer science have aged this well. Lehman compared it, half-seriously, to the second law of thermodynamics: entropy is the default; order requires energy. The work to maintain or reduce, in practice, is the work that nobody is funded to do, because it produces no new feature, ships no new ticket, and leaves no diagram for the architecture review.

Defensive code-paths multiply as a consequence. Every API call gets wrapped in retries. Every value gets wrapped in null-checks. Every cache gets wrapped in invalidation logic. Phil Karlton, who worked at Carnegie Mellon and later at Netscape, is widely credited with the observation that there are two hard things in computer science: cache invalidation and naming things. The line was popularised by Tim Bray around 1996. We have, with rather industrious enthusiasm, made the first one our default architectural pattern, and we still cannot agree on the name of the variable that holds the result.

The cost is not only the cache. The cost is what happens to the people inside the system. The Senior Engineer's day shifts from building to understanding. She spends the morning tracing why a request that should take six milliseconds is taking eight hundred, walks through three retry decorators, two adapter classes, a service mesh sidecar, and a fallback strategy that has not been triggered since 2023, and finds at the bottom of all of it a database query that wants for an index. The index goes in; the eight hundred milliseconds become six. The retry decorator stays. The adapter stays. The sidecar stays. Removing them would be another quarter of work, and the quarter has other plans.
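The bottom of that stack is usually something this small. A sketch using Python's built-in sqlite3 module: the same query, read as a full table scan before the index and an index search after it. The table and index names are invented for the illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE requests (id INTEGER PRIMARY KEY, user_id INTEGER, path TEXT)"
)
con.executemany(
    "INSERT INTO requests (user_id, path) VALUES (?, ?)",
    [(i % 1000, f"/r/{i}") for i in range(10_000)],
)

QUERY = "EXPLAIN QUERY PLAN SELECT * FROM requests WHERE user_id = 42"

# Before: no index, so SQLite walks the whole table.
plan_before = con.execute(QUERY).fetchall()
print(plan_before[0][3])  # e.g. "SCAN requests"

# The actual fix: one line, no wrapper.
con.execute("CREATE INDEX idx_requests_user ON requests(user_id)")

# After: the planner uses the index.
plan_after = con.execute(QUERY).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH requests USING INDEX idx_requests_user"
```

One CREATE INDEX statement does what three retry decorators, two adapters, and a sidecar were compensating for. The wrappers, of course, stay.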

The Junior Engineer never gets to building, because the layers between her and the system have grown taller than the system itself. She is taught the operator before the syscall, the framework before the language, the platform before the protocol. When the abstraction breaks (and it always does), there is no layer beneath to fall back to. The foundation was never taught. It was skipped.

This was the substance of Episode 02. It is also the substance of this one. The two are linked because the layer-reaching reflex and the foundation-skipping curriculum are two halves of the same economy: an industry that compounds abstractions because compounding abstractions can be hired for, certified for, conferenced about, and sold. Reduction cannot be hired for. There is no certification.

The Question

Reduction is the hardest discipline in software. It looks easy from the outside because the result is, by definition, small. The result is small because someone has spent twenty years making it small.

SQLite, the most widely deployed database engine on earth, carries roughly 156,000 lines of mature C code (the canonical figure published by the project for version 3.42, May 2023). It has stayed one library because, every time a feature was proposed, the maintainers asked whether the existing surface could be made to do the work instead. The test suite is 92 million lines. The library is 156,000. That ratio is not an accident. It is the operational definition of reduction as a discipline.

awk has run essentially unchanged since Aho, Weinberger, and Kernighan published it at Bell Labs in 1977. Forty-nine years on, the language is the same language. Engineers who learned it in the 1980s can read code written this morning. The design was small enough to be finished, and the maintainers had the discipline to recognise that "finished" is a category that exists.

pf, the OpenBSD packet filter, has been one configuration file with one syntax since OpenBSD 3.0 in December 2001. Daniel Hartmeier began writing it in June 2001, after IPFilter was removed for licensing reasons. The syntax has been refined; the model has not been replaced. Twenty-five years later, an administrator who learned pf in 2003 can read a pf.conf written this week. There is no v2. There is no successor. There is no parallel implementation that one is encouraged to migrate to. There is one tool that does the work it was built to do.

None of these were elegant by accident. They were elegant by patience, which is the one resource the sprint cycle does not allocate. Each of them required a maintainer or a small team to refuse, repeatedly, the temptation to add. Refusing is not a quarterly metric. It is not a ticket category. It is not a Slack reaction. It is the silent work that holds the small body of software that the rest of the industry quietly stands on without thanking.

The deeper question is not whether we should layer less. The deeper question is what kind of organisation, what kind of contract, what kind of incentive structure could allow reduction to be a fundable activity rather than a private virtue practised by the few. Today it is funded only by accident: by maintainers who are paid for something else, by retired engineers donating evenings, by small institutions that never grew into the structures that would have stopped them from doing it.

What would happen if a team were given one sprint, just one, not to add a layer but to remove one? Who has the authority to ask the question? Who would carry the cost of the answer being yes?

The plaster is cheap. The wound is not.

Read the full article on vivianvoss.net →


By Vivian Voss — System Architect & Software Developer. Follow me on LinkedIn for daily technical writing.
