Every developer who's lived through a major migration knows the feeling.
The new system looks clean on the whiteboard. The architecture is elegant. You can explain it to a product manager in fifteen minutes. Everyone agrees it's the right direction.
And then the migration begins.
And then you discover that the migration — not the destination system — is the actual problem. The one nobody planned for properly. The one that takes three times as long as estimated and generates failure modes nobody anticipated.
IPv6 is the canonical example. The protocol itself isn't particularly complicated — you could explain it to a motivated high school student. But the real-world rollout across thousands of heterogeneous systems, legacy hardware, and entrenched network assumptions? Still not done. Decades later. Still being worked on.
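You can see the asymmetry in miniature in any dual-stack client. Here's a deliberately simplified sketch (real-world clients follow RFC 8305, "Happy Eyeballs", and race connection attempts concurrently; this one is sequential): notice that almost every line exists to manage the coexistence of old and new, not the new protocol itself.

```python
import socket

def connect_dual_stack(host: str, port: int, timeout: float = 2.0) -> socket.socket:
    """Prefer IPv6, fall back to IPv4. A crude, sequential sketch of what
    Happy Eyeballs does properly with concurrent attempts."""
    errors: list[OSError] = []
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort IPv6 results ahead of IPv4: the new protocol gets first try.
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)
    for family, socktype, proto, _canonname, addr in infos:
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock  # first address family that works wins
        except OSError as exc:
            errors.append(exc)
            sock.close()
    raise ConnectionError(f"all address families failed for {host}: {errors}")
```

The fallback, the ordering, the error collection: all of it is migration scaffolding. The destination protocol contributes one sort key.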
Migration from System A to System B is almost always harder than designing System B.
The System A Nobody Talks About
Here's a thesis: the social infrastructure we operate inside — territorial nation-states, standing armies, modern finance, bureaucracy, the welfare state, the whole stack — is a System A. It emerged in the nineteenth century under specific conditions, for specific reasons, to solve specific coordination problems of the industrial era.
Digitalization is a protocol change.
And we're spending almost all of our collective attention debating what System B should look like, while almost nobody is seriously modeling the migration itself.
This is a mistake that developers should recognize immediately.
The Signal-to-Noise Problem in How We Think About This
The way most public discourse handles this is... not good.
If you apply a simple filter — "will this still matter in two years?" — to your news feed, roughly 95% of it disappears immediately. The political scandals, the AI tool announcements, the platform drama. Gone. What's left is the actual structural change: the forces that are genuinely reshaping how societies coordinate and how existing systems handle (or fail to handle) new conditions they were never designed for.
The noise is urgent. The signal is important. They're almost always different things.
One useful mental model: imagine a historian writing a hundred years from now, trying to explain what actually happened during this period. What would they flag as significant? What would look, from their vantage point, like the things we should have been paying attention to?
That's a surprisingly effective filter. Applied consistently, it basically inverts your media consumption.
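For the developers in the room, the two heuristics compress into a pair of predicates. Purely illustrative: the fields and scores below are invented, stand-ins for a judgment call no classifier actually makes for you.

```python
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    matters_in_two_years: float   # invented score, 0.0 to 1.0
    historian_would_flag: float   # invented score, 0.0 to 1.0

def is_signal(item: Item, threshold: float = 0.5) -> bool:
    """An item is signal only if it passes both tests: it still matters
    in two years, and a historian a century out would flag it."""
    return (item.matters_in_two_years >= threshold
            and item.historian_would_flag >= threshold)

feed = [
    Item("Platform drama, day three", 0.05, 0.01),
    Item("New AI tool announcement", 0.2, 0.1),
    Item("State rolls out digital-only ID infrastructure", 0.9, 0.7),
]
signal = [item for item in feed if is_signal(item)]
# Roughly 95% of a real feed fails both predicates.
```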
The Wau Holland Test
In the 1980s, Wau Holland co-founded the Chaos Computer Club in Germany — the beginning of the hacker movement in Europe, and the precursor to modern digital civil liberties organizations like the Electronic Frontier Foundation.
Two things he said in 1981 have stayed with me.
First: at the founding of the CCC, they were genuinely afraid computers might be banned. Not because they were paranoid, but because they understood that these devices were so structurally disruptive that a rational state shouldn't really be able to permit them. (His theory on why they weren't banned: banks needed spreadsheets to make money on financial markets. Institutional capture by incumbent interests as the mechanism that permits radical technology. Checks out.)

Second: "What is cracking open a computer, compared to cracking open a society?"
He was saying in 1981 what most technologists are still reluctant to say now: we're not building tools. We're reshaping the fundamental coordination infrastructure of human civilization. The social fabric itself is in play.
That's not hype. It's just what happens when you introduce a general-purpose communication substrate that's orders of magnitude more efficient than anything that existed before. You change everything that depends on communication — which is everything that requires coordination — which is basically all of society.
Virilio's Invariant
Paul Virilio had a useful principle: whoever invents the airplane also invents the plane crash.
More precisely: every expansion of your action space is coupled to an expansion of your problem space. New capabilities come with new vulnerabilities. You don't get one without the other.
This isn't pessimism. It's just a property of complex systems. When you extend the stack, you extend the attack surface.
The important thing is that these two spaces — action space and problem space — don't expand symmetrically. They follow different dynamics. Problem spaces tend to open on their own (you don't need to do anything to generate new problems when you deploy a new system). Action spaces open too, but whether you use them is a choice.
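A toy model of the coupling, nothing more (every name here is invented): the point is that there is no operation that grows the action space alone.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    name: str
    failure_modes: tuple[str, ...]  # coupled at construction, never optional

@dataclass
class System:
    actions: set[str]
    problems: set[str]

    def deploy(self, cap: Capability) -> None:
        """Extending the action space extends the problem space in the
        same operation. There is no deploy() that adds only actions."""
        self.actions.add(cap.name)            # using these is a choice
        self.problems.update(cap.failure_modes)  # these arrive on their own

s = System(actions=set(), problems=set())
s.deploy(Capability("airplane", ("plane crash",)))
s.deploy(Capability("public API", ("injection", "credential stuffing", "DDoS")))
assert s.problems  # you never opted in to any of these
```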
In a digital century, the question isn't whether new problems will emerge. They will. The question is whether you're going to try to solve digital-era problems with pre-digital tools, or whether you're going to build for the actual conditions.
This is true at the infrastructure level. It's also true at the institutional level. Legal systems, governance frameworks, economic coordination mechanisms — all of these are currently being stress-tested against conditions they were never designed for. The response so far is mostly: patch the legacy system.
Any developer knows how that ends.
The 1914 Deployment Scenario
Here's the scenario I find most clarifying.
Imagine it's 1914 and you have reasonably good visibility into what the next decade looks like. Not perfect visibility — you don't know names or exact dates — but you can see the structural trajectory clearly. World war. Twenty million dead. And attached to it, almost mechanically, a second conflict with sixty million more.
Now: what are your obligations?
If you start a startup, you're probably going to watch it get destroyed by a war you could see coming. If you optimize for your current position in a political system, you're optimizing for a system that won't exist in five years.
But more than that: there's a question about what you say. About whether the capacity for anticipation creates a responsibility to act on it.
I think it does. And the corollary I find most uncomfortable: anyone who can see the trajectory and stays silent is, in some meaningful sense, complicit in what follows.
This is not a comfortable thought. But I think it's the correct one.
What This Actually Asks of Technical People
If you're building systems — infrastructure, applications, platforms — you're participating in the migration whether you think about it that way or not. Every architectural decision has a downstream effect on what System B looks like and how smooth or violent the transition is.
That's not a reason to panic. It's a reason to think carefully about what you're building and what the second- and third-order effects are.
The Oppenheimer question — what is the responsibility of technical action? — isn't a history lesson. It's a current production issue.
We're doing a live migration of the coordination infrastructure of human civilization. Rollbacks are not available. The staging environment is inadequate. The documentation is incomplete.
This is the actual problem. The rest is noise.
Alex Markowetz hosts The Gesamtschau, a podcast using computer science as a lens for understanding societal change. Episode 1 is out now in six languages.