Rex Zhen
AWS SRE's First Day with GCP: 7 Surprising Differences


Introduction

I've worked with AWS for over 10 years across different employers, building and maintaining production infrastructure at scale. Despite hearing about GCP for years, I never seriously explored it—until last weekend.

I decided to start a personal ML project in GCP, thinking "how different could it be?" Four hours later, I was genuinely impressed. Not just by the features, but by how GCP approaches cloud infrastructure fundamentally differently.

Here's my honest take: When I look back at AWS now, it reminds me of Perl and Jenkins. They still survive in production because they have all the features modern solutions offer—but often through workarounds accumulated over years. AWS works, absolutely. But GCP feels like it was designed with hindsight.

Let me share the 7 differences that surprised me most.

1. Organization Structure: Finally, Hierarchies That Make Sense

In AWS:
Organizations, OUs (Organizational Units), and Control Tower were added as afterthoughts—literal add-ons introduced years after AWS launched. Managing multi-account structures feels like retrofitting organization onto a system that wasn't designed for it.

In GCP:
The hierarchy is natural and intuitive: Organization → Folders → Projects. It's exactly like organizing your local filesystem. Need to group projects by team? Create a folder. Need to separate dev/staging/prod? Subfolders. Want to apply policies at any level? Just do it.
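The hierarchy maps directly onto Terraform resources. A minimal sketch, assuming a hypothetical organization ID and project names:

```hcl
# Hypothetical org ID and names; folders nest like directories.
resource "google_folder" "platform" {
  display_name = "platform"
  parent       = "organizations/123456789012"
}

resource "google_folder" "prod" {
  display_name = "prod"
  parent       = google_folder.platform.name
}

resource "google_project" "ml_app" {
  name       = "ml-app"
  project_id = "ml-app-prod-example" # must be globally unique
  folder_id  = google_folder.prod.name
}
```

Policies (IAM bindings, org policy constraints) can attach at the organization, folder, or project level and inherit downward.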

Why it matters:
When you're building infrastructure templates for multiple projects, GCP's structure lets you organize and manage resources the way your brain actually thinks about them. In AWS, you're constantly fighting the account model.

2. Encryption Keys: Default Keys That Actually Work Across Projects

In AWS:
KMS default keys cannot be shared across accounts. If you want cross-account encryption, you need customer-managed keys (CMKs) with complex cross-account IAM policies. It's easy to either leave security gaps or accidentally lock yourself out. The permission model is messy.

In GCP:
Default encryption keys work seamlessly across projects within your organization. Need custom keys? The permission model is straightforward and maintainable. You can grant access without the IAM policy gymnastics AWS requires.
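To illustrate the custom-key case: a key ring can live in one central project while a service agent from a different project gets encrypt/decrypt rights in a single IAM binding. A rough sketch with hypothetical project IDs and numbers:

```hcl
# Key ring in a central "security" project (hypothetical ID).
resource "google_kms_key_ring" "central" {
  project  = "security-proj-example"
  name     = "shared-keys"
  location = "us-central1"
}

resource "google_kms_crypto_key" "bucket_key" {
  name     = "bucket-cmek"
  key_ring = google_kms_key_ring.central.id
}

# Grant the GCS service agent of a *different* project use of the key.
resource "google_kms_crypto_key_iam_member" "gcs_use" {
  crypto_key_id = google_kms_crypto_key.bucket_key.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member        = "serviceAccount:service-111111111111@gs-project-accounts.iam.gserviceaccount.com"
}
```

That single `roles/cloudkms.cryptoKeyEncrypterDecrypter` binding is the whole cross-project story; there is no equivalent of AWS's dual key-policy-plus-IAM-policy dance.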

Real impact:
I spent an embarrassing amount of time in my AWS days debugging "Access Denied" errors on S3 buckets with KMS encryption across accounts. GCP eliminates this entire class of problems.

3. Cross-Zone Data Transfer: FREE

This one blew my mind.

In AWS:
Cross-AZ (Availability Zone) data transfer costs $0.01/GB in each direction—so every gigabyte crossing an AZ boundary is effectively billed at $0.02 ($0.01 on egress plus $0.01 on ingress). For high-throughput applications, this adds up fast.

In GCP:
Cross-zone data transfer within the same region is completely free. Zero. Nada.

Why this is huge:

  • Regional Kubernetes clusters? No cost penalty for pod-to-pod communication across zones.
  • Multi-AZ databases? Replication traffic is free.
  • High-availability architectures don't cost extra just for being resilient.

This single difference can save thousands of dollars monthly for data-intensive workloads.

4. Network Resources: Shared VPC Changes Everything

In AWS:
VPCs are tightly bound to individual accounts. Want centralized network management? You need Transit Gateway ($36/month base + data transfer fees), VPC peering, or complex PrivateLink configurations. Each approach has tradeoffs.

In GCP:
Shared VPC lets you create network resources in one project (say, an SRE/platform project) and share them with other projects. Developers in application projects don't even see—let alone manage—the underlying network configuration.

The paradigm shift:
As an SRE, this is a dream. I can:

  • Manage all networking in a dedicated "management" project
  • Grant developers access to their application projects
  • Developers deploy without touching VPCs, subnets, or firewall rules
  • Centralized network policies and security controls

In AWS, achieving this level of separation requires significantly more architectural complexity.
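The Shared VPC wiring itself is two resources. A minimal sketch, with hypothetical project IDs:

```hcl
# Networking lives in the host (management) project...
resource "google_compute_shared_vpc_host_project" "host" {
  project = "sre-mgmt-example"
}

# ...and application projects attach to it as service projects.
resource "google_compute_shared_vpc_service_project" "app" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "ml-app-example"
}
```

Subnet-level IAM (`roles/compute.networkUser`) then controls which service projects may place workloads in which subnets.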

5. Security Groups → Firewall Rules: Making Sense Again

In AWS:
Security Groups are... fine. But the name is confusing (they're not really "groups"), and the attachment model (per-instance/ENI) can get messy at scale.

In GCP:
They're called Firewall Rules. They work at the VPC level. You can target resources by tags, service accounts, or IP ranges. The model just makes sense, especially if you came from traditional system administration.

Why I prefer this:
As someone who managed firewalls before moving to cloud, GCP's firewall rules feel intuitive. Apply rules at the network level, target specific workloads with tags. It's how you'd think about network security naturally.
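A typical tag-targeted rule looks like this (VPC name and ranges are hypothetical):

```hcl
# Allow internal HTTP/HTTPS only to instances carrying the "web" tag.
resource "google_compute_firewall" "allow_internal_web" {
  name    = "allow-internal-web"
  network = "shared-vpc-example"

  allow {
    protocol = "tcp"
    ports    = ["80", "443"]
  }

  source_ranges = ["10.0.0.0/8"]
  target_tags   = ["web"]
}
```

The rule lives on the network; instances opt in simply by carrying the tag (or a target service account), instead of you attaching a security group to every ENI.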

6. Global Load Balancer: A Feature AWS Doesn't Really Have

This is where GCP really shines.

In AWS:
You have regional load balancers (ALB, NLB) and Route 53 for DNS-based routing. Want true global load balancing? You're building it yourself with health checks, failover, and complicated DNS configurations.

In GCP:
Global HTTP(S) Load Balancer gives you:

  • A single anycast IP that routes to the nearest healthy backend globally
  • Automatic SSL termination at 150+ edge locations worldwide
  • Seamless integration with Cloud CDN
  • Built-in DDoS protection with Cloud Armor

Real-world impact:
Imagine a user in Tokyo connecting to your application hosted in Iowa:

  • Without Global LB: the TCP + TLS handshake runs all the way to Iowa. At ~150ms round-trip latency, the ~3 round trips needed before the first request can be sent mean ~450ms just to establish the connection.
  • With Global LB: the handshake terminates at a Tokyo edge location (~5ms round trips), bringing connection setup down to ~50ms; only the actual request traverses the long haul, over Google's backbone.

For global applications, this is a game-changer. The Global LB + Cloud CDN combination gives you CDN-like performance for dynamic content, not just static assets.
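The resource chain behind a global load balancer is verbose but regular: forwarding rule → target proxy → URL map → backend service. A skeleton with hypothetical names (a real setup attaches instance groups or NEGs to the backend service):

```hcl
resource "google_compute_global_address" "lb_ip" {
  name = "global-lb-ip" # the single anycast IP
}

resource "google_compute_health_check" "default" {
  name = "http-health"
  http_health_check {
    port = 80
  }
}

resource "google_compute_backend_service" "default" {
  name          = "app-backend"
  protocol      = "HTTP"
  health_checks = [google_compute_health_check.default.id]
  # backend { group = ... } # attach regional instance groups / NEGs here
}

resource "google_compute_url_map" "default" {
  name            = "app-url-map"
  default_service = google_compute_backend_service.default.id
}

resource "google_compute_target_http_proxy" "default" {
  name    = "app-proxy"
  url_map = google_compute_url_map.default.id
}

resource "google_compute_global_forwarding_rule" "default" {
  name       = "app-fwd-rule"
  ip_address = google_compute_global_address.lb_ip.address
  target     = google_compute_target_http_proxy.default.id
  port_range = "80"
}
```

Swap the HTTP proxy for `google_compute_target_https_proxy` plus a managed certificate and you get the SSL-at-the-edge behavior described above.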

7. Terraform + CI/CD: Finally, A Clear Pattern

In AWS:
Setting up Terraform with CI/CD involves:

  • Creating IAM roles for Terraform
  • S3 backend for state (which account?)
  • DynamoDB for state locking
  • Cross-account assume-role chains for multi-account deployments
  • Custom scripts to manage all of this

The solution works but feels cobbled together.
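For contrast, here's the shape of a typical AWS backend block (bucket and table names hypothetical)—note the two separate services that must be provisioned and wired together:

```hcl
terraform {
  backend "s3" {
    bucket         = "tfstate-example"       # versioned S3 bucket, created separately
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tfstate-locks-example" # separate table just for state locking
    encrypt        = true
  }
}
```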

In GCP:

  • Create a management project
  • Enable Storage API
  • Create a GCS bucket for Terraform state (versioning built-in, locking built-in)
  • Use service account impersonation
  • Done.

It's cleaner, more straightforward, and easier to understand—especially when onboarding new team members.
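The whole GCP setup fits in one small block, sketched here with hypothetical bucket and service account names:

```hcl
terraform {
  backend "gcs" {
    bucket = "tfstate-example" # versioning and locking handled by GCS itself
    prefix = "prod"
  }
}

provider "google" {
  # No long-lived keys: authenticate as yourself, act as the deploy SA.
  impersonate_service_account = "terraform@mgmt-proj-example.iam.gserviceaccount.com"
}
```

No lock table, no cross-account role chains—impersonation grants are plain IAM bindings on the service account.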

Cost Comparison

After 4 hours of exploration, I compared pricing for the services I evaluated. GCP is consistently less expensive across the board:

  • Compute instances: 10-20% cheaper for equivalent specs
  • Storage: Regional GCS ($0.020/GB) vs Regional S3 ($0.023/GB)
  • Cross-zone transfer: FREE vs $0.02/GB
  • Kubernetes: GKE's free tier credit covers the $0.10/hour control-plane fee for one zonal or Autopilot cluster (~$73/month); EKS charges $0.10/hour (~$73/month) for every cluster

The savings add up quickly, especially for multi-cluster or high-throughput workloads.

The Elephant in the Room: Why Are Companies Still on AWS?

If GCP is cheaper, more intuitive, and has better features for modern architectures—why does AWS dominate the market?

The answer is the same reason some companies still run Perl codebases and Jenkins pipelines in 2025: inertia, existing investment, and organizational momentum.

AWS has:

  • First-mover advantage: Launched in 2006; most enterprises built on AWS before GCP was viable
  • Ecosystem lock-in: Countless third-party tools, integrations, and marketplace solutions
  • Enterprise sales muscle: AWS has built deep relationships over 15+ years
  • Talent pool: More engineers with AWS experience
  • Feature breadth: AWS still has more services overall (though GCP is catching up)

It's not that AWS is bad. It's that it's carrying 15+ years of architectural decisions, backward compatibility, and accumulated complexity.

I admire the engineers who maintain production Perl code with fresh eyes every day. Similarly, I respect AWS engineers navigating the complexity—because AWS absolutely can do everything GCP does. It just requires more workarounds.

My Takeaway

After 10 years with AWS, spending a day with GCP felt like taking off a weighted vest I didn't know I was wearing.

GCP isn't perfect. The documentation can be sparse compared to AWS. The service catalog is smaller. The community is smaller. These are real considerations.

But for greenfield projects—especially if you're building modern, cloud-native applications—GCP's design philosophy feels like it learned from AWS's growing pains.

If you're an AWS veteran who's never seriously looked at GCP, I'd encourage you to spend a weekend trying it. You might be surprised.


Discussion

What's your experience switching between cloud providers? Have you found similar differences, or do you disagree with my takes? Let me know in the comments—I'm curious to hear from both AWS and GCP veterans.


Rex Zhen is a Senior SRE with 10+ years of AWS experience, currently exploring GCP for ML infrastructure projects. Connect on LinkedIn or follow for more cloud engineering insights.


Appendix: What I Built Today

For the curious, here's what I actually deployed in my first 4 hours:

  • Two-project architecture:

    • Management project: Terraform state, networking, monitoring
    • Application project: GKE, Vertex AI, ML workloads
  • Infrastructure:

    • GCS bucket with versioning for Terraform state
    • Terraform backend configured
    • Planning: VPC with Shared VPC setup, GKE cluster with CPU/GPU node pools

The fact that I got this far in 4 hours—including learning GCP concepts from scratch—speaks to GCP's intuitive design.

Total cost estimate: ~$15-23/month (using Spot VMs and destroy-when-not-in-use pattern).


Tags: #GCP #AWS #CloudEngineering #SRE #DevOps #Kubernetes #Terraform #CloudComputing
