*Welcome to Part 3 of the **Cloud Fragility** series. In Part 2, we exposed how Identity acts as a single point of failure. Today, we tackle the financial physics of the cloud: why your data is stuck, even if your code is portable.*
## The Great API Distraction
For the past fifteen years, we've obsessed over the wrong kind of lock-in. Everyone worried: “If I use DynamoDB or Azure Functions, am I trapping my code forever?”
So, we poured billions of hours and dollars into building abstraction layers, adopting Kubernetes, and patching together generic Terraform providers—all just to keep our compute “portable.”
We pretty much won that battle. Now, containers run anywhere you want. Moving code isn’t that hard. But while we were busy making our compute portable, the cloud giants were quietly making our data immovable.
These days, real vendor lock-in isn’t about APIs at all. It comes down to physics and money—the mass of your data and the toll roads built to keep it parked right where it is.
## The Physics of Data Gravity
“Data Gravity” sounds like theory, but it’s painfully real. The more data you pile up, the harder it gets to move. Apps and services stick to it, like satellites caught in orbit.
In the AI era, data gravity isn’t just a force—it’s a black hole.
If you’ve got petabytes of training data or years of transaction logs in AWS S3, moving that mountain to Google Cloud isn’t just a “migration project.” It’s a physics problem.
Transferring that much data over the internet? It takes forever. And trying to keep live databases in sync across providers—good luck. The operational risks are huge. Cloud providers know this. They built their pricing to make gravity work for them.
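How long is "forever"? A back-of-the-envelope sketch makes the physics concrete. The link speed and utilization below are illustrative assumptions, not any provider's numbers:

```python
# Rough transfer-time estimate for bulk data egress over a network link.
# Link speed and sustained utilization are illustrative assumptions.

def transfer_days(data_bytes: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Days to move `data_bytes` over a `link_gbps` link at sustained `utilization`."""
    effective_bps = link_gbps * 1e9 * utilization  # bits per second actually achieved
    seconds = (data_bytes * 8) / effective_bps     # bytes -> bits, divide by throughput
    return seconds / 86_400

PETABYTE = 1e15  # decimal petabytes, as billing and marketing use them

# 5 PB over a sustained 10 Gbps link at 70% utilization:
print(f"{transfer_days(5 * PETABYTE, link_gbps=10):.0f} days")  # on the order of two months
```

And that's assuming a dedicated link you can saturate around the clock, with no retries, no throttling, and no production traffic competing for it.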
## The Egress Fee Trap (The Roach Motel Model)
Here’s the trap: Egress fees.
Cloud networking pricing works like a roach motel. Data checks in for free, but you’ll pay through the nose to get it out. Ingress? Free. They’ll run gold-plated fiber to your door to get your data onto their platform.
But try pulling 5 petabytes out of AWS us-east-1 to move it somewhere else, and suddenly you’re looking at a six-figure exit tax.
This changes how people architect systems. Instead of designing for the best tech, architects design to dodge the exit tax. You keep your data in AWS not because it’s the best place for it, but because you can’t afford the ransom to move.
When it costs $200,000 just to get your data out the door, you’re not a customer. You’re a hostage.
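To see where that six-figure number comes from, here is a sketch of a tiered egress bill. The tier boundaries and per-GB rates below are hypothetical, loosely modeled on typical public internet-egress rate cards; real rates vary by provider, region, and date:

```python
# Hypothetical tiered egress pricing (USD per GB), an illustrative assumption
# modeled on typical internet data-transfer-out rate cards, not a real quote.
TIERS = [
    (10_240, 0.09),        # first ~10 TB of the month
    (40_960, 0.085),       # next ~40 TB
    (102_400, 0.07),       # next ~100 TB
    (float("inf"), 0.05),  # everything above that
]

def egress_cost_usd(total_gb: float) -> float:
    """Cost of moving `total_gb` out, walking down the tier ladder."""
    cost, remaining = 0.0, total_gb
    for tier_gb, rate in TIERS:
        used = min(remaining, tier_gb)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# 5 PB ≈ 5,000,000 GB (decimal):
print(f"${egress_cost_usd(5_000_000):,.0f}")  # roughly a quarter of a million dollars
```

Even at the deepest discount tier, petabyte-scale egress lands comfortably in six figures, which is exactly why nobody budgets for the exit until it's too late.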
## The “Private Networking” Handcuffs
Egress fees are the blunt tool. The sneakier, more effective lock-in comes through proprietary networking.
Services like AWS PrivateLink, Azure Private Endpoints, and Google Private Service Connect are pitched as security features. And they are—no argument there. They keep traffic off the public internet.
But they’re also dangerously sticky architectural glue.
Once you wire up your entire microservices jungle using PrivateLink endpoints, you’re stuck. Tearing that out is way harder than refactoring code. You haven’t just used their VMs—you’ve baked their proprietary networking into your app’s DNA.
Moving to another cloud means rewiring your entire nervous system.
## The Brutal Reality of “Multi-Cloud Networking”
A lot of companies try multi-cloud to avoid these traps, only to end up with twice the complexity and double the cost.
Connecting AWS to Azure isn’t simple. It means expensive middlemen services, VPN tunnels with spotty performance, or dedicated fiber that takes months to set up. And if you try using the public internet for this? That’s a resilience nightmare (see our guide on Why the Public Internet is Not an SLA).
So what happens? Most “multi-cloud” setups turn into isolated islands of data that rarely talk, because the toll to connect them is just too high.
## Architecting for Data Freedom
If you want real leverage over your cloud provider, you have to design for data mobility from day one—and yes, it costs more upfront.
- Acknowledge the Exit Tax: Always factor egress fees into your TCO. If a solution looks cheap but has hidden egress risks, it’s not actually cheap.
- Neutral Territory Data: For your most critical datasets, think about housing them in carrier-neutral facilities (like Equinix) on your own hardware, and connecting to clouds with dedicated, low-latency links. You own the data gravity. The clouds just rent access.
- Avoid Proprietary Plumbing When You Can: Be careful with deep networking integrations like PrivateLink. Always ask yourself, “If I had to move this to another provider tomorrow, how long would it actually take?”
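Folding the exit tax into TCO changes which option looks "cheap." The sketch below compares two storage options over three years; every price in it is an illustrative assumption, not a real rate card:

```python
# Sketch: fold a one-time exit tax into a simple TCO comparison.
# All prices below are illustrative assumptions, not any provider's rates.

def tco_usd(monthly_storage_usd: float, months: int,
            exit_gb: float, egress_usd_per_gb: float) -> float:
    """Total cost of ownership: running cost plus a one-time exit transfer."""
    return monthly_storage_usd * months + exit_gb * egress_usd_per_gb

DATASET_GB = 500_000  # 500 TB

# Option A: hyperscaler object storage, cheaper to sit in, costly to leave.
a = tco_usd(monthly_storage_usd=10_500, months=36,
            exit_gb=DATASET_GB, egress_usd_per_gb=0.09)

# Option B: carrier-neutral colo storage, pricier monthly, near-zero egress.
b = tco_usd(monthly_storage_usd=11_500, months=36,
            exit_gb=DATASET_GB, egress_usd_per_gb=0.0)

print(f"A: ${a:,.0f}  B: ${b:,.0f}")
```

With these numbers, Option A wins on sticker price but loses once the exit transfer is priced in. The point isn't these specific figures; it's that a TCO model without an egress line item is structurally biased toward the roach motel.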
## Series Context: The Physics of Failure
- Part 1: Covered how shared dependencies cause cascading failures.
- Part 2: Explained how Identity is the single control plane that can lock you out.
- Part 3 (Current): Shows how Networking is the financial and physical barrier to leaving.
- Part 4: Will tie this all together, explaining how these architectural traps led directly to the massive cloud bill increases seen in 2026.
The pattern is clear: The code is portable. The data is anchored.
>_ Engineering Artifacts & Tools
The ability to move your application’s code from AWS to Azure is technically interesting, but financially irrelevant if your data is too heavy and too expensive to move along with it. True portability isn’t about APIs. It’s about the physics of moving bits.
Stop guessing at your exit tax. We just launched the Engineering Workbench, featuring our Cloud Egress Calculator to help you model true data movement costs out of AWS, Azure, and GCP.
Need the code? Access our Terraform modules and routing architectures in the Canonical Architecture Specifications hub.
