Kamal Rhrabla
Using Google Cloud Storage as an S3 Alternative (AWS S3 → GCS Guide)

If you’re coming from AWS, Amazon S3 is usually the default choice for object storage. On Google Cloud, the direct equivalent is Google Cloud Storage (GCS).

This article explains how to think about GCS if you already know S3, including:

  • Concept mapping (bucket/object/policy)
  • Common features (versioning, lifecycle, encryption)
  • Access control (IAM vs ACLs)
  • Signed URLs (pre-signed URLs equivalent)
  • A small hands-on walkthrough with the gcloud CLI

1) Quick mapping: S3 concepts → GCS concepts

| AWS S3 | Google Cloud Storage |
| --- | --- |
| Bucket | Bucket |
| Object | Object |
| Prefix / “folder” | Object name prefix (folders are virtual) |
| S3 Versioning | Object Versioning |
| S3 Lifecycle Rules | Lifecycle Management |
| Bucket Policy / IAM | Cloud IAM policies (preferred) |
| ACLs | ACLs (supported, but often discouraged) |
| SSE-S3 / SSE-KMS | Google-managed encryption / CMEK (Cloud KMS) |
| Pre-signed URL | Signed URL |
| S3 Event Notifications | Pub/Sub Notifications + Eventarc (common patterns) |

Mental model: like S3, GCS is object storage: you store objects in buckets and control access with policies.


2) Why choose GCS over S3?

You might choose GCS when:

  • Your workloads already run on GCP (Cloud Run, GKE, Compute Engine)
  • You want tight integration with IAM, Cloud KMS, Cloud Logging, BigQuery
  • You need a clean path to analytics or pipelines (e.g., GCS → BigQuery/Dataflow)

You might stick with S3 when:

  • Your ecosystem is primarily AWS-based
  • You heavily rely on S3-specific features or tooling already standardized in your org

3) Create a bucket (the GCS way)

Pick a naming + location strategy

  • Bucket names are globally unique
  • You choose a location type (region / dual-region / multi-region)
  • You can also choose a storage class (Standard, Nearline, Coldline, Archive)
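Because bucket creation fails on an invalid or already-taken name, it can help to sanity-check names before calling the API. A minimal validator sketch (the function name is mine, and the rules are a simplification of the documented naming guidelines):

```python
import re

# Simplified GCS bucket naming rules: 3-63 characters; lowercase letters,
# digits, hyphens, underscores and dots; must start and end with a letter
# or digit; must not begin with the reserved "goog" prefix. Note that
# underscores are allowed in GCS bucket names (unlike S3).
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name: str) -> bool:
    return bool(_BUCKET_RE.match(name)) and not name.startswith("goog")
```

This catches the common mistakes (uppercase letters, too-short names, reserved prefixes) before you hit the API; global uniqueness can only be checked by actually attempting creation.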

Create a bucket with gcloud

Replace placeholders and run:

PROJECT_ID="your-project-id"
BUCKET_NAME="your-unique-bucket-name"
LOCATION="us-central1"

gcloud storage buckets create gs://$BUCKET_NAME \
  --project=$PROJECT_ID \
  --location=$LOCATION

If you used aws s3 mb s3://..., this is the equivalent.


4) Upload, download, list objects

Upload a file

echo "hello gcs" > hello.txt

gcloud storage cp hello.txt gs://$BUCKET_NAME/hello.txt

List objects

gcloud storage ls gs://$BUCKET_NAME

Download the object

rm hello.txt
gcloud storage cp gs://$BUCKET_NAME/hello.txt hello.txt
cat hello.txt
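The same operations from application code, using the official Python client (a sketch, assuming the google-cloud-storage package is installed and Application Default Credentials are configured):

```python
def upload_file(bucket_name: str, src: str, dest: str) -> None:
    # Lazy import so the sketch parses without the package installed;
    # in real code, import at module level.
    from google.cloud import storage  # pip install google-cloud-storage
    storage.Client().bucket(bucket_name).blob(dest).upload_from_filename(src)

def list_objects(bucket_name: str) -> list[str]:
    from google.cloud import storage
    return [blob.name for blob in storage.Client().list_blobs(bucket_name)]

def download_file(bucket_name: str, src: str, dest: str) -> None:
    from google.cloud import storage
    storage.Client().bucket(bucket_name).blob(src).download_to_filename(dest)
```

If you are used to boto3, the shape is familiar: a client, a bucket handle, and per-object operations.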

5) Access control: IAM vs ACLs (S3 users pay attention here)

The recommended approach: IAM + Uniform bucket-level access

On GCS, the modern best practice is typically:

  • Use IAM for access control
  • Enable Uniform bucket-level access (UBLA)
  • Avoid object-level ACL complexity unless you really need it

Enable UBLA:

gcloud storage buckets update gs://$BUCKET_NAME --uniform-bucket-level-access

Then grant permissions using IAM.

Example: grant read-only access to a user for a bucket

USER_EMAIL="dev@example.com"

gcloud storage buckets add-iam-policy-binding gs://$BUCKET_NAME \
  --member="user:$USER_EMAIL" \
  --role="roles/storage.objectViewer"

S3 analogy:

  • This is closer to controlling access via IAM policy rather than relying on object ACLs.
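The equivalent binding from Python looks roughly like this (a sketch; the function name is mine, and it assumes the google-cloud-storage package plus permission to set IAM policy on the bucket):

```python
def grant_object_viewer(bucket_name: str, user_email: str) -> None:
    # Lazy import so the sketch parses without the package installed.
    from google.cloud import storage  # pip install google-cloud-storage

    bucket = storage.Client().bucket(bucket_name)
    # Read-modify-write of the bucket's IAM policy, mirroring the
    # gcloud add-iam-policy-binding command above.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {"role": "roles/storage.objectViewer", "members": {f"user:{user_email}"}}
    )
    bucket.set_iam_policy(policy)
```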

6) Public buckets: how to do it (and how to avoid accidents)

Public access is often a footgun. If your goal is serving public website assets, consider fronting Cloud Storage with a load balancer and Cloud CDN instead of exposing the bucket directly.

If you do need public read access, do it intentionally.

Make objects publicly readable (example)

gcloud storage buckets add-iam-policy-binding gs://$BUCKET_NAME \
  --member="allUsers" \
  --role="roles/storage.objectViewer"

If this feels similar to an S3 bucket policy allowing s3:GetObject for "Principal": "*", it is.

Tip: For production, pair this with guardrails:

  • Organization policies
  • Security reviews
  • Logging + alerting

7) Signed URLs: the “pre-signed URL” equivalent

Signed URLs are a common pattern when you want:

  • Users to download private objects without logging in
  • Users to upload directly to storage without exposing service credentials

Generate a signed URL (example)

There are multiple ways to do this on GCP (CLI, libraries, service accounts).
A typical pattern is:

  • Use a service account with permission to sign URLs
  • Generate a URL with an expiration time

In application code, the Cloud Storage client libraries (Node.js, Python, Go, and others) can generate both signed download URLs and signed upload URLs (PUT/POST).
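For instance, with the Python client a V4 signed download URL can be generated roughly like this (a sketch; it assumes the google-cloud-storage package and credentials able to sign, such as a service account key):

```python
import datetime

def make_download_url(bucket_name: str, object_name: str, minutes: int = 15) -> str:
    # Lazy import so the sketch parses without the package installed.
    from google.cloud import storage  # pip install google-cloud-storage

    blob = storage.Client().bucket(bucket_name).blob(object_name)
    # V4 signed URLs are capped at 7 days of validity.
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=minutes),
        method="GET",
    )
```

Anyone holding the returned URL can fetch the object until it expires, with no Google login, which is exactly how S3 pre-signed URLs behave.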

8) Lifecycle rules: manage cost like S3 lifecycle policies

Lifecycle rules help you:

  • Delete objects older than N days
  • Move objects to cheaper classes after N days
  • Clean up incomplete multipart uploads (GCS lifecycle supports an AbortIncompleteMultipartUpload action for XML API multipart uploads)

Example lifecycle JSON (delete after 30 days)

Create a file named lifecycle.json:

{
  "rule": [
    {
      "action": { "type": "Delete" },
      "condition": { "age": 30 }
    }
  ]
}

Apply it:

gcloud storage buckets update gs://$BUCKET_NAME --lifecycle-file=lifecycle.json
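If you manage many buckets, generating the lifecycle file from code keeps the rules consistent. A small sketch (the helper name is hypothetical):

```python
import json

def delete_after_days(days: int) -> dict:
    # Build the same lifecycle config as lifecycle.json above:
    # delete any object older than `days` days.
    return {"rule": [{"action": {"type": "Delete"}, "condition": {"age": days}}]}

# Write it out for `gcloud storage buckets update --lifecycle-file=...`
with open("lifecycle.json", "w") as f:
    json.dump(delete_after_days(30), f, indent=2)
```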

9) Versioning: protect against accidental overwrites

Enable object versioning:

gcloud storage buckets update gs://$BUCKET_NAME --versioning

This is very similar to enabling S3 Versioning: you keep older generations of objects when overwritten/deleted (depending on lifecycle/retention settings).


10) Encryption: default vs customer-managed keys (CMEK)

By default, GCS encrypts data at rest. If you need tighter control (compliance, audit, key rotation requirements), you can use:

  • CMEK with Cloud KMS

This resembles S3 SSE-KMS.

If you want, I can write a follow-up post:

  • “GCS + Cloud KMS (CMEK) step-by-step”
  • including least-privilege IAM for the KMS key

11) Logging and auditing (don’t skip this)

For production:

  • Make sure you understand Cloud Audit Logs for storage access
  • Use Cloud Logging to track access patterns
  • Add alerts for suspicious behavior (e.g., many deletes, access from unexpected identities)

12) Common pitfalls for S3 users moving to GCS

  1. Bucket names are global (same as S3), so naming can be frustrating—plan ahead.
  2. “Folders” aren’t real (same as S3). Everything is just object prefixes.
  3. IAM vs ACLs: don’t mix unless you know why—prefer UBLA + IAM.
  4. Public access is easy to misconfigure—avoid “quick public” in real systems.
  5. Costs depend on storage class, operations, and egress—use lifecycle rules early.

Wrap-up

If you know S3, you already understand 80% of GCS. The biggest shift is usually access control: on GCP, you’ll typically standardize on IAM + Uniform bucket-level access, plus signed URLs for controlled sharing.
