DEV Community

Shrijith Venkatramana

AWS vs. GCP: A Developer’s Guide to Picking the Right Cloud

Hi there! I'm Shrijith Venkatramana, founder of Hexmos. Right now, I’m building LiveAPI, a first-of-its-kind tool that automatically indexes API endpoints across all your repositories. LiveAPI helps you discover, understand, and use APIs in large tech infrastructures with ease.

Cloud platforms are the backbone of modern development, and choosing between Amazon Web Services (AWS) and Google Cloud Platform (GCP) can feel like picking a favorite code editor—both are powerful, but they shine in different ways. As developers, we want tools that are reliable, scalable, and easy to integrate into our workflows. This article breaks down the key differences between AWS and GCP, focusing on services that matter most to developers. We’ll dive into compute, storage, databases, networking, pricing, and more, with practical examples to show how they work.

Why Compare AWS and GCP?

AWS has been the cloud giant since 2006, boasting a massive service catalog and a global footprint. GCP, younger but scrappy, entered the scene in 2011 with a focus on developer-friendly tools and Google’s expertise in AI and data. AWS dominates market share (about 32% in 2024), while GCP holds around 10%, but market share doesn’t tell the whole story. Your choice depends on your project’s needs—whether it’s a startup MVP, a machine learning pipeline, or a global app.

This comparison will help you understand where each platform excels, with tables for quick reference and code examples to ground the concepts. Let’s jump in.

Compute: EC2 vs. Compute Engine

Compute services are where your apps live, so let’s start with AWS EC2 (Elastic Compute Cloud) and GCP Compute Engine. Both offer virtual machines (VMs) for running applications, but their setup and flexibility differ.

  • EC2 provides a wide range of instance types (e.g., t3.micro for lightweight apps, g5.xlarge for GPU workloads). It’s highly customizable, with options for spot instances to save costs.
  • Compute Engine focuses on simplicity and performance, leveraging Google’s infrastructure for fast VM spin-up. It’s great for workloads needing consistent performance.

Key Differences:

| Feature | AWS EC2 | GCP Compute Engine |
|---|---|---|
| Instance types | 400+ types, highly specialized | Fewer types, streamlined options |
| Spot/preemptible VMs | Spot Instances (up to 90% discount) | Preemptible VMs (up to 80% discount) |
| Autoscaling | Robust, part of Auto Scaling Groups | Managed instance groups, simpler setup |
| Custom machine types | Limited | Highly flexible (choose CPU/RAM) |

Example: Launching a simple Node.js server on both.

For EC2, you’d use an Amazon Linux 2 AMI, SSH into the instance, and set up Node.js:

```bash
#!/bin/bash
# Run on an EC2 instance (Amazon Linux 2).
sudo yum update -y
# Note: the default yum repos may not carry a current Node.js; you may
# need to enable one first (e.g., amazon-linux-extras or NodeSource).
sudo yum install -y nodejs
node --version  # prints whatever version the repo provides

cat > server.js <<'EOF'
const http = require('http');
http.createServer((req, res) => {
  res.end('Hello from EC2!');  // end() sends the body and closes the response
}).listen(8080, () => console.log('Server running on port 8080'));
EOF
node server.js
# Output: Server running on port 8080
```

For Compute Engine, you’d use a similar setup on a Debian-based VM:

```bash
#!/bin/bash
# Run on a Compute Engine instance (Debian).
sudo apt-get update
# Note: Debian's default nodejs package can be quite old; use the
# NodeSource repo if you need a specific version.
sudo apt-get install -y nodejs
node --version  # prints whatever version the repo provides

cat > server.js <<'EOF'
const http = require('http');
http.createServer((req, res) => {
  res.end('Hello from Compute Engine!');  // end() sends the body and closes the response
}).listen(8080, () => console.log('Server running on port 8080'));
EOF
node server.js
# Output: Server running on port 8080
```

Takeaway: Use EC2 for fine-grained control and a massive instance catalog. Choose Compute Engine for quick setup and custom machine types. Check AWS EC2 pricing for details.

Serverless: Lambda vs. Cloud Functions

Serverless lets you focus on code without managing servers. AWS Lambda and GCP Cloud Functions are the go-to options here.

  • Lambda supports multiple languages (Node.js, Python, Java, etc.) and integrates tightly with AWS services like S3 and API Gateway. It’s battle-tested for complex workflows.
  • Cloud Functions is lightweight, ideal for event-driven tasks like responding to HTTP requests or Pub/Sub messages. It’s simpler but less feature-rich.

Key Differences:

| Feature | AWS Lambda | GCP Cloud Functions |
|---|---|---|
| Runtimes | Node.js, Python, Java, Go, etc. | Node.js, Python, Go, Java (limited) |
| Max execution time | 15 minutes | 9 minutes |
| Cold start latency | Higher (varies by language) | Lower (Google’s infra advantage) |
| Event triggers | S3, SNS, API Gateway, etc. | Pub/Sub, HTTP, Storage, etc. |

Example: A simple HTTP-triggered function to return a JSON response.

AWS Lambda (Python):

```python
import json

def lambda_handler(event, context):
    # API Gateway proxy integrations expect a statusCode plus a
    # JSON-encoded string body.
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello from Lambda!'})
    }
# Response body: {"message": "Hello from Lambda!"}
```
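Since a Lambda handler is just a Python function, you can smoke-test it locally before deploying. A minimal sketch (the handler mirrors the one above; no AWS account needed):

```python
import json

def lambda_handler(event, context):
    # Same API Gateway-style response shape as the handler above.
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello from Lambda!'})
    }

# Invoke directly with a fake event and no context object.
response = lambda_handler({}, None)
print(response['statusCode'], json.loads(response['body'])['message'])
# Output: 200 Hello from Lambda!
```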

GCP Cloud Functions (Python):

```python
# Cloud Functions passes the handler a Flask request object; returning
# a dict lets the framework serialize it to JSON automatically.
def hello_world(request):
    return {'message': 'Hello from Cloud Functions!'}
# Response body: {"message": "Hello from Cloud Functions!"}
```

Takeaway: Lambda is better for complex, AWS-integrated serverless apps. Cloud Functions is simpler for lightweight, event-driven tasks. See GCP Cloud Functions docs.

Storage: S3 vs. Cloud Storage

Storage is critical for hosting files, backups, or static assets. AWS S3 (Simple Storage Service) and GCP Cloud Storage are object storage solutions with similar capabilities but different flavors.

  • S3 is the industry standard, with features like versioning, lifecycle policies, and fine-grained access control.
  • Cloud Storage emphasizes simplicity and Google’s global network for fast access.

Key Differences:

| Feature | AWS S3 | GCP Cloud Storage |
|---|---|---|
| Storage classes | Standard, Glacier, Intelligent-Tiering | Standard, Nearline, Coldline, Archive |
| Access control | IAM, bucket policies, ACLs | IAM, simpler bucket-level policies |
| Data transfer speed | Fast, depends on region | Very fast (Google’s network) |
| Static website hosting | Built-in | Requires extra setup |

Example: Uploading a file using Python’s SDK.

AWS S3 (boto3):

```python
import boto3

# Assumes AWS credentials are configured (env vars or ~/.aws/credentials)
# and that the bucket already exists.
s3 = boto3.client('s3')
bucket_name = 'my-bucket'
file_name = 'example.txt'

with open(file_name, 'w') as f:
    f.write('Hello, S3!')
s3.upload_file(file_name, bucket_name, file_name)
print(f'Uploaded {file_name} to {bucket_name}')
# Output: Uploaded example.txt to my-bucket
```

GCP Cloud Storage (google-cloud-storage):

```python
from google.cloud import storage

# Assumes Application Default Credentials are configured
# (e.g., via `gcloud auth application-default login`).
client = storage.Client()
bucket_name = 'my-bucket'
file_name = 'example.txt'

with open(file_name, 'w') as f:
    f.write('Hello, Cloud Storage!')
bucket = client.bucket(bucket_name)
blob = bucket.blob(file_name)
blob.upload_from_filename(file_name)
print(f'Uploaded {file_name} to {bucket_name}')
# Output: Uploaded example.txt to my-bucket
```

Takeaway: S3 is feature-rich and great for complex workflows. Cloud Storage is simpler and faster for global access. Explore S3 documentation.

Databases: RDS vs. Cloud SQL

For relational databases, AWS RDS (Relational Database Service) and GCP Cloud SQL are managed solutions supporting MySQL, PostgreSQL, and more.

  • RDS offers deep integration with AWS services and supports engines like Aurora (AWS’s proprietary DB).
  • Cloud SQL is developer-friendly, with automatic backups and strong PostgreSQL support.

Key Differences:

| Feature | AWS RDS | GCP Cloud SQL |
|---|---|---|
| Supported engines | MySQL, PostgreSQL, Aurora, etc. | MySQL, PostgreSQL, SQL Server |
| Scalability | Vertical and horizontal (Aurora) | Vertical, read replicas |
| Backup automation | Built-in, customizable | Built-in, simpler setup |
| High availability | Multi-AZ deployments | Regional instances |

Example: Connecting to a PostgreSQL database.

AWS RDS (Python with psycopg2):

```python
import psycopg2

# Host and credentials are placeholders; use your instance's endpoint
# and keep secrets out of source code in real deployments.
conn = psycopg2.connect(
    host="my-rds-instance.us-east-1.rds.amazonaws.com",
    database="mydb",
    user="admin",
    password="password"
)
cursor = conn.cursor()
cursor.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name VARCHAR(50))")
cursor.execute("INSERT INTO users (name) VALUES ('Alice')")
cursor.execute("SELECT * FROM users")
print(cursor.fetchall())
# Output: [(1, 'Alice')]
conn.commit()
conn.close()
```

GCP Cloud SQL (Python with psycopg2):

```python
import psycopg2

# Host and credentials are placeholders; Cloud SQL also supports
# connecting via the Cloud SQL Auth Proxy instead of a public IP.
conn = psycopg2.connect(
    host="35.XXX.XXX.XXX",
    database="mydb",
    user="admin",
    password="password"
)
cursor = conn.cursor()
cursor.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name VARCHAR(50))")
cursor.execute("INSERT INTO users (name) VALUES ('Bob')")
cursor.execute("SELECT * FROM users")
print(cursor.fetchall())
# Output: [(1, 'Bob')]
conn.commit()
conn.close()
```
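Both drivers implement the same DB-API 2.0 interface, so habits transfer directly between RDS and Cloud SQL; in particular, prefer parameterized queries over string interpolation. A runnable sketch using the stdlib sqlite3 module as a stand-in (sqlite3 uses `?` placeholders where psycopg2 uses `%s` — same idea):

```python
import sqlite3

# In-memory database as a stand-in for an RDS/Cloud SQL instance.
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
# The placeholder keeps user-supplied values out of the SQL string.
cursor.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
cursor.execute("SELECT id, name FROM users")
rows = cursor.fetchall()
print(rows)
# Output: [(1, 'Alice')]
conn.close()
```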

Takeaway: RDS is ideal for AWS-centric apps and Aurora’s performance. Cloud SQL is simpler for standard SQL databases. See Cloud SQL docs.

Networking: VPC vs. VPC Network

Networking is about securely connecting your resources. AWS VPC (Virtual Private Cloud) and GCP VPC Network let you create isolated environments.

  • AWS VPC is highly configurable, with subnets, route tables, and security groups.
  • GCP VPC Network is simpler, with global routing and fewer manual configurations.

Key Differences:

| Feature | AWS VPC | GCP VPC Network |
|---|---|---|
| Scope | Regional | Global (spans regions) |
| Firewall rules | Security groups, network ACLs | Firewall rules (simpler) |
| Peering | VPC peering, Transit Gateway | VPC Network Peering |
| Subnet management | Manual | Automatic or manual |
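To make the firewall difference concrete, here is a sketch of opening port 80 on each platform with their CLIs — the security group ID and network tag are placeholders, and exact flags may vary with CLI versions:

```shell
# AWS: ingress rules live on a security group attached to instances.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# GCP: firewall rules live on the VPC network and match instances by tag.
gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
```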

Takeaway: AWS VPC is for granular control in complex setups. GCP VPC Network is easier for global apps. Check AWS VPC guide.

AI/ML: SageMaker vs. Vertex AI

For machine learning, AWS SageMaker and GCP Vertex AI provide tools for building, training, and deploying models.

  • SageMaker is a full-fledged ML platform with Jupyter notebooks, model hosting, and AutoML.
  • Vertex AI leverages Google’s AI expertise, offering pre-trained models and simpler workflows.

Key Differences:

| Feature | AWS SageMaker | GCP Vertex AI |
|---|---|---|
| Pre-trained models | Limited | Extensive (e.g., Vision AI, NLP) |
| Notebook integration | Built-in Jupyter | Colab or custom notebooks |
| AutoML | SageMaker Autopilot | Vertex AI AutoML |
| Deployment | Endpoints, batch transform | Endpoints, simpler setup |

Example: Training a simple model (pseudo-code, as ML setup is complex).

SageMaker (Python):

```python
import sagemaker
from sagemaker.sklearn import SKLearn

# Role ARN, bucket path, and framework version are placeholders.
sagemaker_session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"
sklearn_estimator = SKLearn(
    entry_point="train.py",  # your training script
    role=role,
    instance_type="ml.m5.large",
    framework_version="0.23-1"
)
# Launches a managed training job against data staged in S3.
sklearn_estimator.fit({"train": "s3://my-bucket/train-data"})
print("Model trained!")
```

Vertex AI (Python):

```python
from google.cloud import aiplatform

# Project, bucket path, and container URI are placeholders.
aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.CustomTrainingJob(
    display_name="my-training-job",
    script_path="train.py",
    container_uri="gcr.io/cloud-aiplatform/training/sklearn-cpu.0-23:latest"
)
# The data location is passed to your script as a command-line argument.
job.run(
    machine_type="n1-standard-4",
    replica_count=1,
    args=["--train-data", "gs://my-bucket/train-data"]
)
print("Model trained!")
```

Takeaway: SageMaker is robust for custom ML workflows. Vertex AI is better for leveraging Google’s AI models. Explore Vertex AI docs.

Pricing: Cost Structures and Savings

Pricing is a big factor. AWS and GCP both use pay-as-you-go models, but their cost structures differ.

  • AWS has complex pricing with many variables (e.g., instance type, region, data transfer). Savings Plans and Reserved Instances can cut costs.
  • GCP offers simpler pricing and sustained-use discounts (automatic for long-running VMs). Committed Use Discounts are similar to AWS Reserved Instances.

Key Differences:

| Feature | AWS | GCP |
|---|---|---|
| Free tier | 750 hours/month (t2/t3.micro) | $300 credit, 1 micro VM free |
| Sustained discounts | Savings Plans (up to 72% off) | Automatic (up to 30% off) |
| Billing granularity | Per-second (some services) | Per-second |
| Cost transparency | Complex; use Cost Explorer | Simpler; use Billing Dashboard |
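GCP's sustained-use discount accrues automatically in usage tiers over the month: for eligible machine types, roughly the first quarter of the month bills at full price, then subsequent quarters at 80%, 60%, and 40% of list price. A sketch of that arithmetic — the tier rates are an assumption drawn from GCP's published pricing model:

```python
# Incremental sustained-use rates per quarter of the billing month
# (assumed N1-style tiers: 100%, 80%, 60%, 40% of list price).
TIER_RATES = [1.0, 0.8, 0.6, 0.4]

def effective_rate(fraction_of_month: float) -> float:
    """Blended fraction of list price paid for running a VM
    for `fraction_of_month` (0..1) of the billing month."""
    paid = 0.0
    for i, rate in enumerate(TIER_RATES):
        lo, hi = i * 0.25, (i + 1) * 0.25
        if fraction_of_month <= lo:
            break
        paid += (min(fraction_of_month, hi) - lo) * rate
    return paid / fraction_of_month

print(round(effective_rate(1.0), 2))  # full month -> 0.7, i.e., "up to 30% off"
print(round(effective_rate(0.5), 2))  # half month -> 0.9
```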

Takeaway: AWS is cost-effective with planning but complex. GCP is simpler with automatic discounts. Use GCP Pricing Calculator.

Picking the Right Cloud for Your Project

Both AWS and GCP are powerful, but your choice depends on your needs:

  • Choose AWS if you need a massive service catalog, deep integrations, or are already in the AWS ecosystem. It’s great for enterprises or complex, multi-service apps.
  • Choose GCP if you prioritize simplicity, fast global performance, or Google’s AI/ML tools. It’s ideal for startups or data-heavy projects.

Tips for Deciding:

  • Prototype first: Use free tiers to test both platforms.
  • Check integrations: Ensure your tools (e.g., CI/CD, monitoring) work well with the platform.
  • Consider team expertise: AWS has a steeper learning curve; GCP is more developer-friendly.

For example, a Node.js microservice with a PostgreSQL backend might lean toward GCP for simpler setup and cost savings. A global e-commerce platform with heavy integrations might favor AWS for its robust ecosystem.

Ultimately, both platforms are production-ready. Experiment, compare costs, and pick what aligns with your team’s skills and project goals.

Top comments (3)

Nathan Tarbert

pretty cool breakdown tbh, always helps me rethink my own setup decisions - you think choice between these two really matters long-term or is it more about how people use them?

Shrijith Venkatramana • Edited

Choice can matter - especially for startups, where velocity matters. I've seen people who bootstrapped their business with a dozen AWS Lambdas, for example. They would've gone out of biz if they had picked something more complex or too simple. It was "just right" for their needs.
