A thoughtful comment on my recent LinkedIn post raised three concerns about the Independent Variation Principle that deserve a serious answer.
The concerns are, in essence: that proactive separation by change driver risks massive over-fragmentation; that the mere existence of a driver does not mean it will activate with any meaningful probability; and that to eliminate subjectivity, the change driver must either become a constant or be determined with high probability.
I take these concerns seriously, because they point to the exact place where IVP must distinguish itself from the Single Responsibility Principle.
If IVP inherits SRP's problems, the formal apparatus adds overhead without benefit.
It does not inherit them, and the reasons are structural.
IVP does not over-fragment
The concern is valid against SRP.
SRP is a separation-only criterion: every new "reason to change" creates another module boundary, with no counterweight.
The more reasons you discover or speculate about, the more you split.
Over-fragmentation is a real and well-documented consequence of strict SRP application.
IVP does not work this way.
It has four axioms, and two are directly relevant here.
IVP-3 separates elements with different change driver assignments into different modules.
IVP-4 unifies elements with the same change driver assignment into the same module.
The result is a partition: exactly as many modules as there are distinct driver sets, no more, no fewer.
Over-fragmentation in the structural sense --- splitting co-varying elements across separate modules --- is ruled out by IVP-4, which forces co-varying elements back together.
SRP operates at the class level, where no unification counterpart exists.
Martin's Common Closure Principle (CCP) addresses unification at the package level, but it is a separate principle operating at a different granularity, with the same definitional problem: "change for the same reason" is as undefined as SRP's "reason to change."
IVP provides both separation and unification at any granularity, with a single formal criterion.
That is why strict SRP application tends toward class explosion, while IVP's partition constraint structurally prevents it.
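The partition constraint can be made concrete. Below is a minimal sketch, assuming elements can be labeled with sets of change drivers; every element name and driver name here is invented for illustration, not taken from the IVP formalism itself.

```python
from collections import defaultdict

# Hypothetical driver assignments: element -> set of change drivers.
assignments = {
    "vat_rate_table":     {"tax_authority"},
    "tax_form_generator": {"tax_authority"},
    "order_repository":   {"database_vendor"},
    "order_schema":       {"database_vendor"},
    "invoice_emailer":    {"smtp_provider"},
}

def partition_by_drivers(assignments):
    """Group elements by driver set: IVP-3 separates elements with
    different driver sets, IVP-4 unifies elements with the same set."""
    modules = defaultdict(list)
    for element, drivers in assignments.items():
        modules[frozenset(drivers)].append(element)
    return dict(modules)

modules = partition_by_drivers(assignments)
# Exactly one module per distinct driver set: 3 modules here, not 5.
assert len(modules) == 3
assert sorted(modules[frozenset({"tax_authority"})]) == [
    "tax_form_generator", "vat_rate_table"]
```

Because the grouping key is the whole driver set, discovering a new driver only creates a new module when some element actually carries a distinct assignment; it never scatters co-varying elements.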
What about speculative drivers --- drivers that might emerge in the future but have no current evidence?
IVP's answer is not "separate just in case."
It is the opposite.
A speculative driver has no knowledge slice: there is no domain knowledge to embody, because the driver does not yet exist in the system's causal structure.
The distinction is not about likelihood but about current causal existence.
The tax authority is a driver because it currently governs elements in the system --- tax rules are already encoded, and the authority can issue changes that force modifications to those elements, whether or not it does so this year.
A regulation that does not yet exist governs no current elements and therefore has no knowledge slice to separate.
When it comes into existence and begins governing elements, it becomes a driver at that point, and the decomposition is updated accordingly.
Assigning a speculative driver to existing elements introduces spurious inequality in the driver assignments, splitting elements that currently co-vary for no reason.
This decreases the quality of the decomposition.
IVP formally prescribes operating on the current driver structure.
The same logic that motivates YAGNI in feature development applies structurally here: speculative drivers are not merely unnecessary boundaries --- they actively degrade the decomposition by introducing spurious splits.
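The degradation is visible in a small sketch. Assuming elements are labeled with sets of change drivers (all names invented for illustration), tagging one element with a regulation that does not yet exist splits a module whose elements currently co-vary.

```python
from collections import defaultdict

def partition(assignments):
    """One module per distinct driver set."""
    modules = defaultdict(list)
    for element, drivers in assignments.items():
        modules[frozenset(drivers)].append(element)
    return modules

current = {
    "vat_rate_table":     {"tax_authority"},
    "tax_form_generator": {"tax_authority"},
}
# Same elements, but one is tagged with a speculative future regulation.
speculative = {
    "vat_rate_table":     {"tax_authority", "future_eu_regulation"},
    "tax_form_generator": {"tax_authority"},
}

assert len(partition(current)) == 1      # co-varying elements stay together
assert len(partition(speculative)) == 2  # spurious split, no current cause
```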
Drivers are not predictions
A change driver is not a prediction that change will happen.
It is an identification of what could cause change within the system's operational domain --- the causal structure that governs how domain knowledge enters and evolves in the system.
The tax authority exists as a source of potential change whether or not it updates tax rates this year.
A database technology is a separate source of variation from a messaging system whether or not you plan to replace either one.
IVP does not ask "how likely is this change?"
It asks "if this change happened, what would be forced to change with it?"
That is a question about causal propagation structure, not about the likelihood of the triggering event.
A driver that rarely activates does not make its boundary wasteful.
All module boundaries carry some fixed cost --- cognitive overhead, interface ceremony, build structure --- but the cost of maintaining a correct boundary for a dormant driver is small and predictable.
The cost of not having the boundary when the driver activates is large and unpredictable: the change propagates through a structure that never accounted for the variation, touching modules that have no reason to be involved.
The alternative --- merging modules because "it probably won't change" --- saves little in the meantime and pays the full propagation cost when it does.
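The propagation question itself is computable once the dependency structure is written down. Here is a hedged sketch over a hypothetical dependency graph; the element names, the `dependents` edges, and the `governs` map are all assumptions made up for the example.

```python
# element -> elements that depend on it (forced to change with it)
dependents = {
    "vat_rate_table":     ["tax_form_generator", "invoice_totals"],
    "tax_form_generator": [],
    "invoice_totals":     ["invoice_emailer"],
    "invoice_emailer":    [],
}

# driver -> elements it directly governs
governs = {"tax_authority": ["vat_rate_table"]}

def forced_changes(driver):
    """Transitively collect every element a driver's activation touches."""
    frontier = list(governs.get(driver, []))
    touched = set()
    while frontier:
        element = frontier.pop()
        if element not in touched:
            touched.add(element)
            frontier.extend(dependents.get(element, []))
    return touched

# Without a boundary isolating the tax slice, a single rate change
# reaches the emailer, which has no reason to know about tax rules.
print(forced_changes("tax_authority"))
```

Note that nothing in the computation mentions how likely the tax authority is to act; the question is purely about where a change would land if it did.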
IVP makes subjectivity empirical
This concern rests on the assumption that identifying change drivers introduces a new subjectivity problem.
In practice, every modularization approach already demands the same kind of cognitive work --- understanding what varies independently of what.
Where they differ is in what they do with that understanding.
SRP asks practitioners to identify "reasons to change."
Separation of Concerns asks for "concerns."
DDD asks for "bounded contexts."
If we read port-and-adapter boundaries in Clean Architecture and Hexagonal Architecture as implicitly demarcating sources of independent variation, then these architectural styles, too, involve judgments about what varies independently of what.
Each of these frameworks provides useful heuristics, and experienced practitioners apply them with real skill.
But in each case, the boundary criterion is fuzzy. SRP's "reason to change" is undefined. DDD's bounded context boundary is diagnosed by observing where the ubiquitous language breaks down --- a real signal, but not one that resolves competing boundary placements. And Clean Architecture's Dependency Rule governs dependency direction but offers no criterion for deciding which layer a given element belongs to in the first place.
The difference is what IVP provides in return.
IVP offers a formal independence test: can this source of change activate without forcing changes to elements governed by that other source?
That test shifts disagreements from irresolvable differences of classification ("is this one responsibility or two?") to concrete, falsifiable factual questions ("does a GDPR change force a SOX change in this system?").
The question can be answered by tracing regulatory dependencies and data flows in the domain --- it does not require predicting whether GDPR will actually change.
Two architects may still reach different answers based on different experience with the domain.
But the nature of their disagreement changes: it becomes empirical rather than classificatory, and it can be investigated by examining domain evidence rather than settled by argument from authority.
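The independence test can be phrased as a check over the same kind of causal structure. The sketch below assumes a toy system with hypothetical GDPR- and SOX-governed elements; the names and edges are invented, and a real analysis would trace actual regulatory dependencies and data flows.

```python
# element -> elements that depend on it
dependents = {
    "consent_store":    ["consent_api"],
    "consent_api":      [],
    "audit_ledger":     ["financial_report"],
    "financial_report": [],
}

# driver -> elements it directly governs
governs = {
    "gdpr": ["consent_store"],
    "sox":  ["audit_ledger"],
}

def reach(driver):
    """Change closure: every element a driver's activation touches."""
    frontier, touched = list(governs[driver]), set()
    while frontier:
        element = frontier.pop()
        if element not in touched:
            touched.add(element)
            frontier.extend(dependents.get(element, []))
    return touched

def independent(a, b):
    """Can driver a activate without forcing changes to elements
    governed by driver b? True iff a's closure misses b's elements."""
    return reach(a).isdisjoint(governs[b])

# In this hypothetical system, a GDPR change forces no SOX change.
assert independent("gdpr", "sox")
```

Disagreement between two architects then reduces to disagreement about the edges in `dependents` and `governs`, which is exactly the empirical, inspectable kind of question the text describes.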
Change drivers do not need to be constants.
They do not need high-probability activation.
They need to be causally identifiable --- which infrastructure drivers generally are, and which business domain drivers become through the same counterfactual reasoning and domain analysis that architects already perform.
Where to go from here
The full formal treatment is developed in Volume 1 of the IVP book series (forthcoming).