Over the last two years, three themes have kept popping up in conversations with the CIOs and CTOs I work with:
1) we are too dependent on a handful of cloud and SaaS giants,
2) regulators are pushing harder on data sovereignty,
3) AI is powerful, but nowhere near the magic “replace everyone” story we were sold.
Put together, this is exactly the mix behind today’s talk about decentralization, sovereign clouds and the “AI hangover”.
I'm the CTO at Pynest, a software development and staff-augmentation company working with distributed teams in the US and Europe. We sit in the middle of all of this: modern cloud architectures, strict data rules, and clients who want AI everywhere but are suddenly more cautious than 12–18 months ago.
Centralization Got Us Far — And Into Trouble
For a decade, the default answer to almost any infra question was: “Just put it on a hyperscaler”. The same happened in security tooling: endless “platform consolidation” via acquisitions, fewer vendors, bigger suites.
The result is very strong capabilities, but also massive single points of failure and huge targets. When one of the big providers suffers a breach or a regional outage, the blast radius is now entire sectors, not single apps. That “too much power in too few hands” problem is exactly what the Mozilla Foundation has been warning about for years in its Internet Health work.
From a CTO's chair, this now looks less like “efficiency” and more like concentration risk.
Sovereign Clouds: Control, Compliance… And Fragmentation
In Europe the answer is not just “more multi-cloud”, but “more sovereign cloud”. EU initiatives like Gaia-X and national sovereign cloud programs in Germany and France are trying to ensure that sensitive data lives under local laws, not only under the US CLOUD Act and similar regimes.
For our EU-based clients this already shows up in requirements:
- Critical workloads must run on EU-controlled infrastructure, or at least in EU-only regions with clear legal separation.
- There must be an exit strategy: data formats, APIs and contracts that allow migration away from a single hyperscaler.
- Security and compliance teams want clear answers to “who can touch this data, from which jurisdiction, and under what process”.
From the engineering side, that means more work on abstraction layers, standard interfaces and data-layer design. Instead of “one big cloud”, we design for a fabric: some workloads on a sovereign provider, some still on AWS/Azure/GCP, tied together with clear contracts and strong identity.
It is less “move everything to sovereign cloud” and more “treat sovereignty as a first-class constraint in architecture”.
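To make “sovereignty as a constraint” slightly more concrete, here is a minimal sketch of the kind of abstraction layer I mean: application code talks to a neutral storage interface, and a residency decision picks the backing provider. Everything here (the `ObjectStore` interface, the in-memory stand-ins, the zone names) is illustrative, not a real SDK.

```typescript
// Sketch of a provider-neutral storage interface: application code depends on
// ObjectStore, not on a vendor SDK, so critical paths can be re-hosted on a
// sovereign provider later. All names are illustrative, not a real library.

interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | null>;
}

// In-memory stand-in; real adapters would wrap a sovereign provider's API or
// a hyperscaler SDK behind this same interface.
class InMemoryStore implements ObjectStore {
  private objects = new Map<string, Uint8Array>();
  async put(key: string, data: Uint8Array): Promise<void> {
    this.objects.set(key, data);
  }
  async get(key: string): Promise<Uint8Array | null> {
    return this.objects.get(key) ?? null;
  }
}

// The "fabric": one adapter per zone, chosen by a residency constraint instead
// of hard-coding a provider in business logic.
const stores: Record<"eu-sovereign" | "global", ObjectStore> = {
  "eu-sovereign": new InMemoryStore(), // would be the EU/sovereign adapter
  "global": new InMemoryStore(),       // would be the AWS/Azure/GCP adapter
};

async function saveDocument(residency: "eu-sovereign" | "global", key: string, data: Uint8Array) {
  await stores[residency].put(key, data);
}

saveDocument("eu-sovereign", "contracts/2026-001.pdf", new Uint8Array([1, 2, 3]));
```

The interface is deliberately boring; the value is that the provider choice becomes one line of configuration instead of something baked into every service.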
The AI Hangover: From “Replacement” to “Augmentation”
A lot of the centralization push came from AI as well. The promise was: plug your data into a huge model hosted by a huge vendor and watch the magic. In reality, we got something more mundane but still useful.
Even big industry voices are pushing back on the “AI will replace developers” hype. AWS CEO Matt Garman recently called plans to replace junior staff with AI “one of the dumbest things I’ve ever heard”, arguing that you kill your future senior talent if you remove entry-level roles.
From what we see at Pynest:
- AI coding tools absolutely speed up repetitive tasks, code search and experiments.
- They do not replace the hard parts: architecture, debugging in messy systems, trade-offs under constraints, working with real stakeholders.
- Teams that tried to “replace juniors with AI” quickly ran into a simple issue: nobody is growing into the next generation of seniors.
So yes, there is an “AI hangover”: expectations are being corrected. AI is moving into a more realistic place — as an accelerator, not a magic outsourcing of thinking.
Decentralization in Practice: Architecture, Not Slogans
What does “decentralization” actually mean for a CIO or CTO, beyond slogans?
From my perspective, there are four practical shifts:
Data and workloads become region-aware by design
You model where data is allowed to live and which services can talk across borders. Sovereign cloud zones, EU-only storage, “US only” partitions for some clients — this becomes part of your domain model, not just a hosting checkbox.

You reduce deep lock-in to a single stack
You do not have to go full multi-cloud with everything, but you do design for portability in critical paths: neutral data formats, open standards, portable CI/CD, identity as a central layer instead of provider-specific glue.

AI is brought closer to the data, not the other way around
Instead of shipping all your sensitive datasets to some central “AI factory”, you bring models to where the data already safely lives — via private endpoints, on-prem deployments or sovereign providers that support AI workloads.

Resilience beats “one big platform”
The goal is not a perfectly unified tool, but graceful failure: if a provider, region or product dies, you degrade but do not go dark.
This is less about ideology and more about operational survival in a world of sanctions, new laws and very creative attackers.
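As one example of the first shift, here is a hedged sketch of what “region-aware by design” can look like in the domain model: datasets carry an explicit residency policy, and placement is checked before anything is deployed. The dataset names, regions and rules below are assumptions for illustration, not a real policy engine.

```typescript
// Hypothetical sketch of making residency part of the domain model: every
// dataset declares where it may live, and placement is validated before a
// workload is scheduled. Names and rules are illustrative only.

type Region = "eu-central" | "eu-sovereign" | "us-east" | "global";

interface DatasetPolicy {
  name: string;
  allowedRegions: Region[]; // where this data is legally allowed to live
}

const policies: DatasetPolicy[] = [
  { name: "customer-pii",       allowedRegions: ["eu-sovereign"] },
  { name: "core-transactions",  allowedRegions: ["eu-sovereign", "eu-central"] },
  { name: "aggregated-metrics", allowedRegions: ["eu-central", "us-east", "global"] },
];

// Fail fast if a deployment would put a dataset in a region it must not touch.
function assertPlacement(datasetName: string, targetRegion: Region): void {
  const policy = policies.find(p => p.name === datasetName);
  if (!policy) throw new Error(`No residency policy defined for ${datasetName}`);
  if (!policy.allowedRegions.includes(targetRegion)) {
    throw new Error(`${datasetName} must not be placed in ${targetRegion}`);
  }
}

assertPlacement("aggregated-metrics", "us-east"); // ok
try {
  assertPlacement("customer-pii", "us-east");     // not allowed outside the sovereign zone
} catch (err) {
  console.error((err as Error).message);
}
```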
How We Approach It at Pynest
On our projects at Pynest we see these themes from two sides: as a development partner and as a staff-augmentation provider embedded into client teams.
A few practical patterns we use:
“Soft multi-cloud” for regulated clients
For EU fintech or health projects we often design with a primary sovereign/EU cloud and a secondary hyperscaler, with data clearly split: PII and core transactions stay in the sovereign zone, anonymized or aggregated data can go to global AI services.

Data contracts instead of one “mega warehouse”
Rather than copy everything into one central place, we use lakehouse-style setups with strong data contracts between domains. That makes it easier to move or re-host pieces without breaking the whole system.

AI as a co-pilot on top of that fabric
Our AI work (observability, test generation, support automation) sits on top of this distributed architecture instead of dictating it. If a provider changes pricing or regulations shift, you should be able to swap out an AI component without re-building your entire platform.
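To show what a data contract in this setup might look like, here is a small hedged sketch: each shared dataset carries residency and sensitivity metadata, and a simple guard decides whether it may be handed to a global AI service. The contract shape and field names are my own illustration, not a fixed standard.

```typescript
// Hedged sketch of a "data contract" between domains: each shared dataset has
// an explicit schema plus residency and sensitivity metadata, so a consumer
// (including an AI component) can be swapped or re-hosted without guessing
// what it is allowed to see. Field names and the contract shape are assumptions.

interface DataContract {
  dataset: string;
  owner: string;                      // owning domain team
  residency: "eu-sovereign" | "global";
  sensitivity: "pii" | "anonymized" | "aggregated";
  schema: Record<string, "string" | "number" | "boolean">;
}

const paymentsContract: DataContract = {
  dataset: "payments.settled",
  owner: "payments-domain",
  residency: "eu-sovereign",
  sensitivity: "pii",
  schema: { paymentId: "string", amountEur: "number", customerId: "string" },
};

const usageContract: DataContract = {
  dataset: "payments.daily_totals",
  owner: "payments-domain",
  residency: "global",
  sensitivity: "aggregated",
  schema: { day: "string", totalEur: "number", count: "number" },
};

// Simple guard: only non-PII, globally shareable datasets may be sent to a
// global AI service; everything else stays inside the sovereign zone.
function canShareWithGlobalAI(contract: DataContract): boolean {
  return contract.residency === "global" && contract.sensitivity !== "pii";
}

console.log(canShareWithGlobalAI(usageContract));    // true
console.log(canShareWithGlobalAI(paymentsContract)); // false
```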
Inside Pynest itself, we follow the same logic: critical HR and candidate data lives in controlled locations; our internal AI agents (for resume parsing, onboarding workflows, internal support) work with those datasets under strict access rules rather than sending everything to random external tools.
What CIOs Should Be Asking in 2026
If I had to reduce all of this to a short checklist for 2026, it would be:
“Where are we too centralized?”
Which single vendors, clouds or products can take down key parts of our business if they fail or change terms?

“Where does sovereignty really matter?”
For which datasets and workloads do we need legal and technical control over where data lives and who can see it?

“Is AI driving our architecture, or the other way round?”
Are we bending our systems around the latest AI product pitch, or fitting AI into a deliberate, resilient design?

“Do we have an exit plan?”
If tomorrow we had to leave a provider or move a region for political, regulatory or cost reasons, do we know roughly how we would do it?
The tension between centralization and decentralization is not going away. Sovereign clouds will grow, AI will keep evolving, and big platforms will still be there. The job of modern technology leaders is to design systems that benefit from scale without becoming hostages to it.
For me, that is the real story underneath the buzzwords: not “centralized vs decentralized” as a religion, but “how do we build systems and teams that can survive the next wave of change without a full rebuild every three years?”