A contact on LinkedIn asked a question that every cloud architect eventually hears:
“Your manager says, ‘We need to be multi-cloud, AWS plus GCP. In six months.’
You’re currently 100% in AWS. Do you push back, agree, or propose a middle path? The reason behind the request matters more than the request itself.”
Here is exactly how I answered and why.
The Hidden Costs of the Multi-Cloud Trend
Transitioning to a multi-cloud architecture is often sold as a strategic victory. When management sets a six-month deadline to integrate GCP into an existing 100% AWS environment, the first job of any engineer is to evaluate operational reality rather than marketing hype. Drawing on eight years of professional experience as a Solutions Architect, I consider this one of the most dangerous directives an engineering team can receive.
Questioning the Directive First
The very first step is always to clarify the objective. Is the company facing strict regulatory compliance that genuinely requires two clouds? Or is management simply afraid of "vendor lock-in"? If the reasoning is fear-based rather than business-driven, the resulting architecture will be flawed from day one.
The one non-negotiable exception is mergers and acquisitions. If your company just acquired an organization running natively on GCP, integrating that environment is a hard business mandate, not a trend.
Evaluating the True Costs
Data Egress
Cloud providers want your data to stay inside their ecosystem. Moving even moderate volumes of data between AWS and GCP triggers significant egress fees. The hyperscalers let data in for free but charge heavily to move it out. The network architecture required to bridge the two environments adds complexity and cost that is rarely budgeted.
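As a back-of-envelope illustration, even a modest cross-cloud data flow adds up quickly. The volume and the per-GB rate in this sketch are assumptions; always check current pricing:

```hcl
# Back-of-envelope egress estimate (all figures below are assumptions)
locals {
  monthly_cross_cloud_gb = 10240 # ~10 TB/month moved AWS -> GCP (assumed)
  aws_egress_usd_per_gb  = 0.09  # ballpark internet egress rate (assumed)

  # Roughly 920 USD/month for egress alone
  monthly_egress_usd = local.monthly_cross_cloud_gb * local.aws_egress_usd_per_gb
}
```

And that figure is before inter-region traffic, NAT gateways, or dedicated interconnect charges are counted.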
Team Capacity
Forcing a single team to master both AWS and GCP is an engineering anti-pattern. The alternatives, hiring a completely new team or launching extensive retraining programs, cannot be done securely or effectively in just six months.
Architectural Coupling
The danger level of a six-month timeline depends entirely on your compute layer.
If your AWS environment relies heavily on proprietary managed services like Lambda and DynamoDB, a GCP integration is an operational nightmare.
However, if your architecture is already heavily containerized using EKS and stateless microservices, dropping those workloads into Google Kubernetes Engine is significantly less complex.
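To illustrate why containerized workloads port more easily: the Kubernetes API is the common denominator, so the same Terraform resource can be pointed at a GKE cluster instead of EKS. This is a sketch, not a migration plan; the kubeconfig context and image name are hypothetical:

```hcl
# Sketch: the same Deployment spec works on EKS or GKE,
# only the provider's target cluster changes (names are hypothetical)
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "gke_my-gcp-project_europe-west1_main" # hypothetical GKE context
}

resource "kubernetes_deployment" "api" {
  metadata {
    name = "api"
  }
  spec {
    replicas = 3
    selector {
      match_labels = { app = "api" }
    }
    template {
      metadata {
        labels = { app = "api" }
      }
      spec {
        container {
          name  = "api"
          image = "ghcr.io/example/api:1.0" # hypothetical image
        }
      }
    }
  }
}
```

The Deployment itself is cloud-agnostic; the proprietary surface area (load balancers, storage classes, IAM bindings) is where the real porting work hides.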
Pipeline Fragmentation
Managing infrastructure state across two hyperscalers requires immense discipline. The cognitive load of preventing configuration drift while deploying to two different environments is almost never factored into management timelines. Securing two separate Identity and Access Management perimeters at the same time doubles the risk of a breach.
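As a sketch of that doubled perimeter: granting the "same" read-only access to a CI identity must be expressed twice, in two unrelated permission models. The role and service-account names below are hypothetical:

```hcl
# AWS side: attach a managed policy to an IAM role (role name is hypothetical)
resource "aws_iam_role_policy_attachment" "ci_readonly" {
  role       = "ci-role"
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}

# GCP side: the "same" intent, expressed in a completely different model
resource "google_project_iam_member" "ci_readonly" {
  project = "my-gcp-project"
  role    = "roles/viewer"
  member  = "serviceAccount:ci@my-gcp-project.iam.gserviceaccount.com"
}
```

Every audit, every least-privilege review, and every incident response now has to reason about both models at once.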
Here is a minimal Terraform example that illustrates the immediate fragmentation:
```hcl
# AWS provider
provider "aws" {
  region = "eu-west-1"
}

# GCP provider already doubling the cognitive load
provider "google" {
  project = "my-gcp-project"
  region  = "europe-west1"
}

# Two separate remote backends become mandatory
terraform {
  backend "s3" {} # AWS state
  # GCP state needs its own GCS backend
}
```
A single terraform apply now touches two completely different ecosystems. State drift detection, IAM policies, and security scanning all become twice as complex.
When (and Only When) Multi-Cloud Actually Makes Sense
In rare cases, multi-cloud is the right call: strict data-residency regulations that force workloads into specific GCP regions, a highly specialized service (such as BigQuery for massive analytics with no cost-effective AWS equivalent), or a true disaster-recovery strategy that demands geographic and provider diversity.
When those conditions are met, the safe middle path is not a big-bang six-month migration. Start with a narrow, non-critical proof-of-concept workload in GCP (e.g., a new analytics pipeline), keep the core platform in AWS, abstract common patterns with Terraform modules, and enforce strict cost and security gates before any production traffic moves.
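A minimal sketch of that abstraction, assuming a hypothetical shared module that hides the per-cloud details behind one interface (the module path and its inputs are illustrative, not an existing registry module):

```hcl
# Hypothetical shared module: one interface, per-cloud implementations inside
module "analytics_poc" {
  source = "./modules/pipeline" # hypothetical local module path

  cloud       = "gcp" # "aws" | "gcp" selects the implementation
  environment = "poc" # non-critical proof of concept only
  budget_usd  = 500   # hard monthly cost gate before any expansion
}
```

The point of the module boundary is that the cost and security gates live in one place, while the proof of concept stays small enough to abandon cheaply if the numbers do not work out.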
Conclusion
Multi-cloud is not inherently bad, but rushing into it for the wrong reasons is expensive, risky, and almost always avoidable. The reason behind the request matters more than the request itself. Ask why first. Then protect the team and the architecture with data, not dogma.
Sources
AWS, Data Transfer Out Pricing (to the internet / other clouds): https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer
Martin Fowler, “Don’t get locked up into avoiding lock-in” (multi-cloud discussion): https://martinfowler.com/articles/oss-lockin.html
HashiCorp, Workspace Best Practices for HCP Terraform (multi-cloud state management): https://developer.hashicorp.com/terraform/cloud-docs/workspaces/best-practices