This blog post was jointly written by Brittany DePoi and Jonathan E. Magen.
Introduction
Structured programming with the goal of modularity was developed in the late 1960s. It's been several decades since, and yet even the most experienced among us get bitten by modularity issues. This goes for the modularity of our code as well as the modularity of our systems themselves. Nowhere is this emphasized more strongly than in the prevalence of so-called "integrated releases" which, more often than not, involve multiple products being released in not-so-perfect synchrony. Such events are often scheduled quarterly and can last well into the night, if not take up an entire weekend. In some companies, these integrated releases fall almost entirely on traditional operations teams, as the authors of various pieces of software have long since thrown their work product "over the wall". This highlights the artificial separation between teams so prevalent in legacy organizations, the very separation the modern DevOps movement targets for erasure.
Whether you are conceptually breaking apart code into understandable units (soft modularity) or physically into separate microservices, libraries or artifacts (hard modularity), how you structure the dependencies between your modules is critical.
With the hype, and overhead, of microservices, it is important to consider lessons in hard modularity that extend beyond any particular language or architecture. It is possible to conceptually break apart business logic into many small artifacts, yet end up in an even messier situation than before!
Monolithic microservices, often referred to as distributed monoliths, result when deployable artifacts remain overly coupled. A microservice is only as small as a system's largest confidently and independently deployable unit.
Soft Modularity: widely known, usually beloved
We define soft modularity as conceptually breaking apart code into understandable units. It is a topic widely studied in schools and, hopefully, used in practice. Soft modularity is the kind of modularity expressed through object orientation, function boundaries, or even breaking a program up into separate files. It can easily be argued that there is inherent value in such organization: maintainability is supported through abstraction, which reduces cognitive load.
A soft modularity scale of maturity therefore ranges from spaghetti code to appropriately segregated logic and data. Note that such maturity does not necessarily align with module granularity: both a giant single-file codebase and an over-engineered component hierarchy are problematic in their own ways. Striving toward soft modularity maturity is critical for software maintainability. It is our contention, however, that soft modularity is only one side of the coin.
Enter Hard Modularity
The existence of a soft modularity implies a hard modularity. And so, we define hard modularity as "physically" breaking apart logical units into separate services, libraries, or artifacts. Symptoms and causes of insufficient hard modularity may include:
- "Integrated" releases
- Data consistency bugs emerging between multiple components from using databases as a communication layer (shared mutable state)
- Multiple, differing definitions of data validity from different conceptions held by cooperating stakeholders
- Having to change at the pace of your slowest-adapting consumer from lacking clean contracts (e.g. interfaces without versioning)
- Being surprised by breaking changes from libraries (we've all been bitten by using "latest")
- Inability to adapt to changing conditions or huge cognitive overhead due to inappropriately abstracted/inaccurate domain models
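Several of these symptoms trace back to contracts without versioning. As one hedged sketch of a remedy (all names here are hypothetical), a producer can tag every response with an explicit schema version and keep old versions alive until every consumer has migrated:

```python
# Minimal sketch of an explicitly versioned contract: consumers request the
# schema version they understand, and the producer keeps supporting old
# versions until everyone has migrated. All names are hypothetical.

def render_user_v1(user):
    # Original contract: a single "name" field.
    return {"version": 1, "name": user["full_name"]}

def render_user_v2(user):
    # New contract: split name fields. v1 consumers are unaffected.
    first, _, last = user["full_name"].partition(" ")
    return {"version": 2, "first_name": first, "last_name": last}

RENDERERS = {1: render_user_v1, 2: render_user_v2}

def render_user(user, requested_version=1):
    # Default to the oldest supported version so existing consumers
    # never break when a new version ships.
    return RENDERERS[requested_version](user)
```

Because the default stays pinned to the oldest supported version, shipping v2 never forces the slowest-adapting consumer to move on your schedule.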
Hard modularity adds critical value as well: manageability is supported through decoupling, which reduces the need for synchronized movement.
Putting it all together
It's not that hard and soft modularity are two opposite ends of the same spectrum, or even rungs on a ladder. Instead, they are two axes of the same 2-dimensional space, into which we can plot the locations of system archetypes familiar to us. Envisioning this plane, we can easily consider the relative locations of classical monoliths, distributed monoliths, all the way to well-architected microservices.
Hard
Modularity
▲ Well-Architected
│ Microservices
│ x
│
│
│
│
│ Well-Architected
│ Monolith
│ x
│
│
│
│
│
│
│
│ Distributed
│ Monolith
│ x
│ "Traditional"
│ Monolith
│ x
│ Soft
└────────────────────────────────────────▶ Modularity
This isn't to say that one is, in all situations and for all cases, better than the other. Both axes apply to applications regardless of granularity: traditional monoliths, microservices, and "Serverless" functions can benefit from each kind of maturity. In fact, plenty of teams are able to mitigate negative effects, such as those listed above, successfully with a traditional monolith.
Consider the case of a monolith written in Elixir/OTP, which actually supports incredibly high soft modularity through constructs like GenServers and other OTP Behaviors. An entire service might be developed, deployed, and managed monolithically (perhaps using some of the awesome tooling available for the BEAM) and still support a rapid pace of change sufficient to meet requirements. Soft modularity maturity supports maintainability, while hard modularity maturity facilitates effective operational management under continuous change. We, as an industry, have grown to appreciate such qualities when aspiring to continuous integration/continuous delivery (CI/CD).
If we can agree that some applications should be monolithic, it stands to reason that they should also pursue hard modularity as good architectural practice to relieve coupling pain points. To illustrate this, we can look to ElasticSearch as a prime example of a well-architected monolith whose development and operation remain unencumbered under a rapid pace of change. We can attribute its long-term success to aspects of hard modularity: it manages its own internal state, communicates via message passing (an API), enforces strong versioning, and provides appropriate abstractions for easy use of its features. Another familiar example is the unassailable success of the Linux kernel, in spite of a recent trend away from monolithic kernels.
In contrast to the success of a well-architected monolith, something to watch out for is the emergence of a distributed monolith. This occurs when a service is broken into multiple pieces without proper attention to hard modularity concerns. The perils of this have been expounded upon by others. Suffice it to say that a collection of microservices lacking both the soft modularity needed to develop it effectively and the hard modularity needed to manage it effectively constitutes the worst of both worlds.
Think of a collection of microservices in which each service individually contains all the right logic (high soft modularity maturity). If the system does not also have appropriate separation of data manipulation (especially important for microservices sharing a database), versioning (of interfaces and artifacts), and deployment automation, you still have a distributed monolith. A good measure of hard modularity maturity, among others, is being able to deploy on a Friday.
Discussion Points for Product Teams
We tend to think about soft modularity:
- Tables in databases
- Classes/Functions/Size of services
- Components in a webapp
But soft modularity on its own isn't enough. Teams also need to consider hard modularity: not just what logic belongs in each of the pieces, but how to break the arrows between them.
- Release automation as code
- Decoupled deployment and release
- Versioning (artifacts, APIs, schemas) and compatibility
- Upgrade strategy
- Abstraction benefits versus overhead
- Logging dependencies
Don't be the blocker your dependencies have to accommodate. Insulate yourself, and don't break others.
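"Decoupled deployment and release" above is often achieved with feature flags: new code ships dark, and turning it on (or rolling it back) is a configuration change rather than a redeploy. A minimal sketch, with a hypothetical in-memory flag store standing in for a real flag service:

```python
# Sketch of decoupling deployment from release with a feature flag:
# both code paths are deployed, but only the flag decides which one is
# "released". The FLAGS dict is a hypothetical stand-in for a flag service.

FLAGS = {"new_pricing_engine": False}

def legacy_price(item):
    return item["base_price"]

def new_price(item):
    # New behavior, deployed dark until the flag is flipped.
    return item["base_price"] * (1 - item.get("discount", 0))

def price(item):
    if FLAGS["new_pricing_engine"]:
        return new_price(item)
    return legacy_price(item)
```

Flipping the flag releases (or un-releases) the new path instantly, so a botched release becomes a configuration rollback instead of an emergency redeploy.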
Questions to ask when evaluating your service:
- In what ways does the service isolate itself from dependencies?
- How does the service insulate itself against cascade failures?
- What does the service do to prevent itself from being the cause of a cascade failure?
- Can the service be deployed independently?
- Can the service be tested in isolation?
- Can changes be made to management automation for one service without impacting other services?
- Can a botched release result in downtime for external dependencies?
- Do changes in responses cause failures in consumer logic?
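The last question is commonly addressed with a "tolerant reader": the consumer extracts only the fields it needs, supplies defaults for optional ones, and ignores everything else, so additive changes to a producer's response are non-breaking. A small sketch with hypothetical field names:

```python
# Tolerant-reader sketch: pick out required fields (failing loudly if they
# are missing), default the optional ones, and ignore unknown fields so
# additive producer changes never break this consumer. Names are hypothetical.

def read_order(payload):
    return {
        "order_id": payload["order_id"],            # required field
        "status": payload.get("status", "unknown"),  # optional, with default
    }
```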
More Related Reading
- Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems
- Graceful degradation: Harvest and Yield in the age of microservices
- BeyondProd: A new approach to cloud-native security
- Microservice Architecture Patterns
- Microservices: A definition of this new architectural term
Learning More & Getting in Touch
We really love this stuff. Please feel free to reach out to Brittany or Jonathan to discuss these ideas further.