Welcome to another Sunday Blog on System Design.
This Sunday, we’ll explore how to deal with requirements and changes while designing a system.
Here are the rules we've already summarized. As I often say:
Learn the rules, follow the rules — and only once you're good at it, should you even think about bending them.
I suggest going through these two articles to fully grasp the ideas discussed here: Part 1 | Part 2
- Avoid functional decomposition (what we were doing in universities), and remember: a good system design speaks — through how components interact.
- The client should not be the core business. Let the client be the client — not the system.
- Decompose based on volatility — list the areas of volatility.
- There is rarely a one-to-one mapping between a volatility area and a component.
- List the requirements, then identify the volatilities using both axes:
  - What can change for existing customers over time?
  - Keeping time constant, what differs across customers? What are the different use cases? (Remember: almost always, these axes are independent.)
- Verify whether a solution/component is masquerading as a requirement. Verify it is not variability. A volatility is not something that can be handled with if-else; that’s variability.
- Use the layered approach and proper naming conventions:
  - Names should be descriptive and avoid atomic business verbs.
  - Use `<NounOfVolatility>Manager`, `<Gerund>Engine`, `<NounOfResource>Access`.
  - Atomic verbs should only be used for operation names, not service names.
- The layers given in the template should correspond to four questions:
  - Client: Who interacts with the system?
  - Managers: What is required of the system?
  - Engines: How does the system perform the business logic?
  - ResourceAccess: How does the system access the resources?
  - Resource: Where is the system state?
- Validate that your design follows the golden ratio (Manager:Engine). Some valid ratios: 1:(0/1), 2:1, 3:2, 5:3. More than 5 Managers? You’re likely going wrong.
- Volatility decreases from top to bottom, while reusability increases from top to bottom.
- Slices are subsystems. No more than 3 Managers in a subsystem.
- Design iteratively, build incrementally.
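To make the naming convention and the four layer questions concrete, here is a minimal sketch in Python (all class, method, and domain names here are invented for illustration; The Method itself is language-agnostic):

```python
# Hypothetical sketch of the naming convention:
# <NounOfVolatility>Manager, <Gerund>Engine, <NounOfResource>Access.
# Note the atomic verb ("place") appears only in an operation name,
# never in a service name.

class PricingEngine:
    """Engine (gerund): HOW the system performs the business logic."""
    def calculate_total(self, order):
        return sum(item["price"] * item["qty"] for item in order)

class OrderAccess:
    """ResourceAccess: HOW the system accesses the resource."""
    def __init__(self):
        self.store = {}  # stands in for the Resource: WHERE the state is

    def save(self, order_id, total):
        self.store[order_id] = total

class OrderManager:
    """Manager: WHAT is required of the system for this volatility area."""
    def __init__(self, engine, access):
        self._engine = engine
        self._access = access

    def place_order(self, order_id, order):
        total = self._engine.calculate_total(order)
        self._access.save(order_id, total)
        return total
```

A Client (the WHO) would then call only the Manager, never the Engine or ResourceAccess directly, in keeping with the design don'ts below.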
Design Don'ts
- A Client should not call multiple Managers for a single use case.
- A Client must not call Engines.
- Managers must not queue calls to more than one Manager in the same use case. The need to have two (or more) Managers respond to a queued call is a strong indication that more Managers (and maybe all of them) would need to respond — so you should use a Pub/Sub Utility service instead.
- Engines and `ResourceAccess` services do not receive queued calls.
- Clients, Engines, `ResourceAccess`, or `Resource` components do not publish events.
- Engines, `ResourceAccess`, and `Resources` do not subscribe to events. This must be done in a Client or a Manager.
- Engines never call each other.
- `ResourceAccess` services never call each other.
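The Pub/Sub rule can be sketched as follows, assuming a hypothetical in-process Pub/Sub Utility and invented Manager names: instead of one Manager queuing calls to several peers, it publishes an event, and every interested Manager subscribes. Only Managers (or Clients) subscribe, per the don'ts above.

```python
# Minimal sketch of a Pub/Sub Utility service (names hypothetical).

class PubSubUtility:
    def __init__(self):
        self._subscribers = {}  # event name -> list of callbacks

    def subscribe(self, event, callback):
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        for callback in self._subscribers.get(event, []):
            callback(payload)

class BillingManager:
    def __init__(self, bus):
        self.invoices = []
        bus.subscribe("OrderPlaced", self.on_order_placed)

    def on_order_placed(self, order):
        self.invoices.append(order)

class ShippingManager:
    def __init__(self, bus):
        self.shipments = []
        bus.subscribe("OrderPlaced", self.on_order_placed)

    def on_order_placed(self, order):
        self.shipments.append(order)
```

The publisher never knows, or cares, how many Managers respond, which is exactly why Pub/Sub is the right tool once more than one Manager needs the event.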
Requirements and Changes
Requirements change. Accept it—that’s what requirements do.
Changing requirements drive demand for software professionals, ensuring job security and better compensation.
Designing systems strictly against initial requirements is a flawed and painful approach. Despite being common practice, it often fails because requirements are inherently incomplete, inaccurate, or subject to change. Capturing every use case up front is nearly impossible, and even if it were done perfectly, changes are inevitable. Designing against rigid, static requirements therefore leads to wasted effort, rework, and frustration. Instead, systems should be built to accommodate change.
Core Use Case
In any system, most use cases are simply variations of a few essential behaviors. These essential behaviors are called core use cases, which represent the fundamental business needs of the system and rarely change. All other use cases—such as error handling, customer-specific adaptations, or incomplete scenarios—are non-core and change frequently.
Despite a system potentially having hundreds of use cases, there are usually only 1 to 5 core use cases. These are not always clearly stated in requirement documents and must be discovered through analysis and abstraction. Identifying core use cases is a key responsibility of the architect, often requiring iteration and collaboration with stakeholders. While you shouldn't design directly against detailed requirements, analyzing them helps reveal what’s truly core and what is volatile.
As an architect, your primary goal is to identify the smallest set of components needed to support all core use cases. Since non-core use cases are just variations of these, they can be handled by different interactions among the same components—not by changing the architecture itself.
This approach is called composable design. It focuses on building flexible, reusable components rather than targeting specific use cases, which are often incomplete, inconsistent, and subject to change. Implementation-level changes (like integration logic inside managers) may occur, but the architecture remains stable.
Composable design makes systems resilient to requirement changes and enables validation by checking if all core use cases can be satisfied through specific component interactions. This can be done with call chain diagrams, which show how components interact to fulfill a use case, offering a practical way to verify the design without needing perfect or complete requirements.
Here is an example of a call chain diagram:
Call chain diagrams are a fast and simple way to validate whether a system design can support a specific use case by showing interactions between components. However, they have limitations—they don’t show the order, duration, or frequency of calls, and can become unclear with complex interactions. Despite this, they are often sufficient for basic validation and are especially useful for communicating with nontechnical stakeholders due to their simplicity.
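Since a call chain is just an ordered list of component-to-component calls, it can even be checked mechanically against some of the design don'ts. Here is a rough sketch (the component names and the layer-from-suffix heuristic are both assumptions for illustration, not part of The Method):

```python
# Represent a call chain as (caller, callee) pairs and flag a few of
# the design don'ts. Layer is inferred from the naming convention.

def layer_of(name):
    for suffix, layer in [("Manager", "Manager"), ("Engine", "Engine"),
                          ("Access", "ResourceAccess"), ("Client", "Client")]:
        if name.endswith(suffix):
            return layer
    return "Resource"

def violations(chain):
    """Return the design-don't violations found in a call chain."""
    found = []
    for caller, callee in chain:
        a, b = layer_of(caller), layer_of(callee)
        if a == "Client" and b == "Engine":
            found.append(f"{caller} -> {callee}: Clients must not call Engines")
        if a == "Engine" and b == "Engine":
            found.append(f"{caller} -> {callee}: Engines never call each other")
        if a == "ResourceAccess" and b == "ResourceAccess":
            found.append(f"{caller} -> {callee}: ResourceAccess services never call each other")
    return found

# A valid chain for a hypothetical "place order" core use case:
ok_chain = [("WebClient", "OrderManager"),
            ("OrderManager", "PricingEngine"),
            ("OrderManager", "OrderAccess")]

# An invalid chain: a Client calling an Engine directly.
bad_chain = [("WebClient", "PricingEngine")]
```

A check like this captures only which calls occur, mirroring the limitation noted above: call chain diagrams say nothing about the order, duration, or frequency of the calls.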
Smallest set
As an architect, your mission is to design the smallest possible set of components that can support all core use cases, minimizing complexity and development effort. "Smallest" doesn’t mean a single monolithic component, nor does it mean one component per use case—both extremes are poor designs due to internal complexity or high integration cost.
Instead, aim for an architecture with around 10–20 components, which strikes a balance between simplicity and flexibility. This range, seen across various systems (like the human body or a car), is powerful due to combinatorics: a small number of reusable components can be combined in many ways to support numerous use cases.
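The combinatorial claim can be made concrete with a back-of-envelope count: if we treat every non-empty subset of components as a potential collaboration (and ignore call order, which only increases the count), n components already yield 2^n - 1 combinations. This is a sketch of the arithmetic, not a claim about any specific system:

```python
# Number of non-empty subsets of n reusable components: 2**n - 1.
# Even the low end of the 10-20 range dwarfs the handful of core use
# cases a system actually has to support.

def possible_combinations(n):
    return 2 ** n - 1
```

So roughly a dozen well-factored components offer thousands of possible integrations, which is why a small, reusable set can absorb so many use case variations.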
Good architecture encapsulates volatility and uses logical layers (e.g., Managers, Engines, Resources) to remain adaptable as requirements evolve. Once you’ve reached a component set that can’t be reasonably reduced further without compromising clarity or function, you’ve found your optimal design—your smallest set.
Design Duration
Identifying core use cases and areas of volatility may take weeks or months, but that’s part of requirements analysis, not design. Once this groundwork is done, producing a valid design using composable principles (like The Method) should take a day or a week at most—and with experience, possibly just a few hours. The key idea is that design itself is fast when you're clear on what the system truly needs.
Handling Change
A fundamental rule of system design is:
Features are always aspects of integration, not implementation.
This means features emerge not from isolated code or components, but from how those components are combined. It's a universal and fractal rule—whether it's a car transporting you or a laptop enabling word processing, the feature arises from integration, not any single part.
Trying to implement features directly (as if they were standalone pieces of code) goes against how systems truly work. Functional decomposition, which focuses on coding features in isolation, leads to rigid, fragile systems that are hard to change—since changes affect many areas at once.
Fighting change by deferring it or dismissing user needs kills a system. Customers need immediate solutions, not promises for the next release. A system that can't adapt quickly will be abandoned, even if it's still technically alive. To keep a system alive and relevant, architecture must embrace change—and fast response to evolving requirements is essential.
The key to handling change is not avoiding it—but containing its impact. In a well-architected system using volatility-based decomposition (as defined in The Method), changes typically affect use cases, which are implemented by Managers. While a Manager might need to be rewritten due to a change in behavior, the core components it integrates—Engines, ResourceAccess, Resources, Utilities, and Clients—remain intact.
This structure ensures that:
- Managers are expendable and cheap to rewrite.
- Most of the system's effort and complexity lies in the reusable components beneath the Manager.
- By preserving and reusing these components, you contain the cost and effort of adapting to change.
This approach allows rapid adaptation without major rewrites—true agility. You don’t redesign the whole system when a requirement changes—you just rewire the existing pieces.
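The "rewire, don't rewrite" idea can be sketched as follows (all names and pricing rules are hypothetical): when the use case changes, only the Manager's integration logic is rewritten, while the Engines underneath are reused untouched.

```python
# Reusable Engines: stable, encapsulated business logic.
class DiscountEngine:
    def apply(self, total):
        return total * 0.9   # hypothetical 10% discount

class TaxEngine:
    def apply(self, total):
        return total * 1.2   # hypothetical 20% tax

class CheckoutManagerV1:
    """Original use case: tax only."""
    def __init__(self, tax):
        self._tax = tax

    def checkout(self, total):
        return self._tax.apply(total)

class CheckoutManagerV2:
    """Changed requirement: discount before tax.
    Same Engines as V1 -- only the Manager's wiring changed."""
    def __init__(self, discount, tax):
        self._discount = discount
        self._tax = tax

    def checkout(self, total):
        return self._tax.apply(self._discount.apply(total))
```

The cheap, expendable part is the Manager; the effort invested in the Engines is preserved across the requirement change.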
Updated rules
Let’s update the set of rules we’ve learned. We’ll keep these at our fingertips while designing systems in future articles.
- Avoid functional decomposition (what we were doing in universities), and remember: a good system design speaks — through how components interact.
- The client should not be the core business. Let the client be the client — not the system.
- Decompose based on volatility — list the areas of volatility.
- There is rarely a one-to-one mapping between a volatility area and a component.
- List the requirements, then identify the volatilities using both axes:
  - What can change for existing customers over time?
  - Keeping time constant, what differs across customers? What are the different use cases? (Remember: almost always, these axes are independent.)
- Verify whether a solution/component is masquerading as a requirement. Verify it is not variability. A volatility is not something that can be handled with if-else; that’s variability.
- Use the layered approach and proper naming conventions:
  - Names should be descriptive and avoid atomic business verbs.
  - Use `<NounOfVolatility>Manager`, `<Gerund>Engine`, `<NounOfResource>Access`.
  - Atomic verbs should only be used for operation names, not service names.
- The layers given in the template should correspond to four questions:
  - Client: Who interacts with the system?
  - Managers: What is required of the system?
  - Engines: How does the system perform the business logic?
  - ResourceAccess: How does the system access the resources?
  - Resource: Where is the system state?
- Validate that your design follows the golden ratio (Manager:Engine). Some valid ratios: 1:(0/1), 2:1, 3:2, 5:3. More than 5 Managers? You’re likely going wrong.
- Volatility decreases from top to bottom, while reusability increases from top to bottom.
- Slices are subsystems. No more than 3 Managers in a subsystem.
- Design iteratively, build incrementally.
- Design with the smallest set of reusable components needed to support core use cases. A good architecture integrates ~10–20 components to support them composably. Features are outcomes of integration, not implementation.
Design Don'ts
- A Client should not call multiple Managers for a single use case.
- A Client must not call Engines.
- Managers must not queue calls to more than one Manager in the same use case. The need to have two (or more) Managers respond to a queued call is a strong indication that more Managers (and maybe all of them) would need to respond — so you should use a Pub/Sub Utility service instead.
- Engines and `ResourceAccess` services do not receive queued calls.
- Clients, Engines, `ResourceAccess`, or `Resource` components do not publish events.
- Engines, `ResourceAccess`, and `Resources` do not subscribe to events. This must be done in a Client or a Manager.
- Engines never call each other.
- `ResourceAccess` services never call each other.
Conclusion
Starting next week, we’ll begin exploring real-world examples of software design using The Method.
See you next Sunday!
Here are links to previous articles in case you missed them:
- Why Functional Decomposition Leads to Bad System Design
- System Design Basics: Why The Right Method Matters
- The Anti-Design Thinking in System Design
- Volatility-Based Decomposition: A System Design Example
- Principles of Volatility-Based Decomposition in System Design
- Template for System Design Using ‘The Method’
- Template for System Design Using ‘The Method’: Part II
- Design Don’ts in System Design with ‘The Method’