Juan Diego Isaza A.
Cloudflare R2 vs S3: Real Costs, Latency, and Ops

If you’re running VPS-hosted apps and you’re debating Cloudflare R2 vs S3, the decision isn’t just “cheaper storage vs bigger ecosystem.” It’s about where your bytes sit, how they leave the bucket, and how much operational friction you’re willing to accept when you scale.

R2 and S3 both speak “S3-compatible object storage,” but they behave very differently in pricing, edge delivery, and tooling maturity. Below is the pragmatic comparison I wish someone had handed me before I built yet another media pipeline.

1) Pricing: the egress trap vs predictable bills

The headline difference is simple:

  • Amazon S3: storage is reasonably priced, but egress fees and request costs can turn into the dominant line item.
  • Cloudflare R2: charges zero egress fees, which is especially compelling when paired with Cloudflare’s network. You still pay for storage and per-operation requests, but egress is the line item that usually explodes.

For VPS-hosting workloads, egress is often the hidden tax:

  • Serving images/video to users
  • Shipping backups to another region
  • Feeding analytics/ML pipelines
  • CDN cache misses

In practice, S3 can be perfectly fine for internal data flows (within AWS or near it), but for public delivery-heavy workloads, egress is where teams get surprised.
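To see why, it helps to put rough numbers on it. Here’s a back-of-the-envelope sketch in Node.js; the per-GB prices are illustrative placeholders, not quotes — check the current pricing pages before deciding anything:

```javascript
// Back-of-the-envelope monthly bill showing why egress dominates for
// delivery-heavy apps. Per-GB prices are illustrative placeholders only.
const PRICES = {
  s3: { storagePerGb: 0.023, egressPerGb: 0.09 },
  r2: { storagePerGb: 0.015, egressPerGb: 0 }, // R2's headline: zero egress
};

function monthlyCostUsd(provider, storageGb, egressGb) {
  const p = PRICES[provider];
  return storageGb * p.storagePerGb + egressGb * p.egressPerGb;
}

// 500 GB stored, 5 TB served to users per month:
console.log(monthlyCostUsd("s3", 500, 5000)); // ≈ 461.5 — egress is ~97% of it
console.log(monthlyCostUsd("r2", 500, 5000)); // ≈ 7.5
```

The exact figures will be wrong by the time you read this; the shape of the bill — storage as a rounding error next to egress — is the point.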

Opinionated take: if your app is user-facing and bandwidth-heavy, R2’s pricing model is easier to reason about. If you’re deep in AWS, S3’s “everything is one VPC hop away” can outweigh the bill shock.

2) Performance and latency: edge adjacency matters

Object storage latency is rarely about raw disk speed—it’s about how many networks you traverse.

  • S3 is region-based. Latency is great inside the same AWS region and predictable across AWS services.
  • Cloudflare R2 is designed to play nicely with Cloudflare’s edge. If your traffic is global and you already use Cloudflare CDN, R2 can reduce painful “origin-to-user” distance.

For VPS hosting on providers like DigitalOcean or Hetzner, you’re typically not colocated with AWS regions. If your VPS is in a European data center and your users are global, S3 may add a cross-provider hop plus a region hop. R2 can be a better fit when your distribution layer is already Cloudflare.

Rule of thumb:

  • Mostly compute inside AWS + private data paths → S3 wins on integration and predictable regional performance.
  • Global content delivery + external VPS compute → R2 often wins on “time-to-first-byte in the real world,” mainly because it fits the edge-first pattern.
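If you want data instead of a rule of thumb, measure from the VPS itself. A minimal probe sketch — the fetch implementation is injectable so it’s easy to stub, and the URLs you pass in are whatever test objects you put in each bucket:

```javascript
// Rough probe: median HEAD round-trip time from this machine to a URL — a
// decent proxy for time-to-first-byte on small objects. Run it from the VPS
// that will actually talk to the bucket; numbers from a laptop mislead.
async function medianTtfbMs(url, { tries = 5, fetchImpl = fetch } = {}) {
  const samples = [];
  for (let i = 0; i < tries; i++) {
    const start = performance.now();
    await fetchImpl(url, { method: "HEAD" });
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median, in milliseconds
}

// Usage (URLs are placeholders for your own public or presigned test objects):
// console.log(await medianTtfbMs("https://<your-r2-host>/probe.bin"));
// console.log(await medianTtfbMs("https://<bucket>.s3.<region>.amazonaws.com/probe.bin"));
```

Run it a few times at different hours; a single sample tells you almost nothing about either provider.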

3) Ecosystem and features: mature platform vs focused product

S3 is the industry baseline. That matters.

S3 strengths

  • Feature depth: lifecycle rules, replication patterns, event notifications, Object Lock, inventory reports, and more
  • Broad tooling support: everything integrates with S3 first
  • Enterprise governance: IAM policies are complex but powerful

R2 strengths

  • Simplicity: fewer knobs, fewer footguns
  • S3-compatible API for common operations
  • Works naturally when your delivery and security stack is Cloudflare (WAF, CDN, cache rules)

The tradeoff: if you need very specific S3 features (certain replication workflows, specialized governance, or “AWS-native everything”), R2 may feel constrained. If you want a straightforward blob store behind a CDN for a VPS-hosted app, S3 can feel like overkill.
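As a concrete taste of that feature depth, here’s what a single S3 lifecycle rule looks like in the shape the AWS SDK v3 PutBucketLifecycleConfigurationCommand expects for its LifecycleConfiguration input. The rule name, prefix, and day counts are made-up values for illustration:

```javascript
// Sketch of an S3 lifecycle configuration: transition old backups to a cold
// storage class, then expire them. ID, prefix, and day counts are made up.
const lifecycleConfig = {
  Rules: [
    {
      ID: "expire-old-backups",       // hypothetical rule name
      Status: "Enabled",
      Filter: { Prefix: "backups/" }, // only applies under this prefix
      Transitions: [{ Days: 30, StorageClass: "GLACIER" }], // cold tier after 30 days
      Expiration: { Days: 365 },      // delete after a year
    },
  ],
};
```

This is the kind of knob S3 has many of; whether that reads as “powerful” or “footgun inventory” depends on your team.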

4) Actionable example: point a VPS app at R2 or S3 with the same code

Because R2 is S3-compatible, you can often use the same AWS SDK code and swap endpoints/credentials.

Here’s a minimal Node.js example using AWS SDK v3 that works for both. The key difference: for R2, set the account-specific endpoint and use “auto” as the region; for S3, use your real AWS region and the SDK’s default endpoint.

```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import fs from "node:fs";

// OBJECT_STORE=r2 flips the same code path to Cloudflare R2.
const isR2 = process.env.OBJECT_STORE === "r2";

const client = new S3Client({
  // R2 expects "auto"; S3 needs a real region.
  region: isR2 ? "auto" : process.env.AWS_REGION,
  // R2 needs its account-specific endpoint; S3 uses the SDK default.
  endpoint: isR2 ? process.env.R2_ENDPOINT : undefined,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
  },
});

// Stream the file instead of buffering it in memory.
const Body = fs.createReadStream("./backup.tar.gz");

await client.send(
  new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: `backups/backup-${Date.now()}.tar.gz`,
    Body,
    ContentType: "application/gzip",
  })
);

console.log("Uploaded");
```

Operational tip for VPS hosting: run uploads from your VPS (e.g., a cron job) and keep your object store as the source of truth. Then put a CDN in front of it for reads—especially for media.
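To keep that cron-driven pattern from accumulating backups forever, you’ll also want a pruning step. A tiny hypothetical helper — it leans on the fact that the Date.now()-based keys from the upload example sort lexicographically (all timestamps are 13 digits until the year 2286):

```javascript
// Hypothetical retention helper: given the keys under backups/, keep the
// newest `keep` and return the rest, which you can then pass to a bulk
// delete call (e.g. the SDK's DeleteObjectsCommand).
function backupsToPrune(keys, keep = 7) {
  // Sort ascending, reverse to newest-first, drop the ones we keep.
  return [...keys].sort().reverse().slice(keep);
}

const keys = [
  "backups/backup-1700000000000.tar.gz",
  "backups/backup-1700086400000.tar.gz",
  "backups/backup-1700172800000.tar.gz",
];
console.log(backupsToPrune(keys, 2)); // only the oldest key remains to prune
```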

5) What I’d choose for VPS hosting (and why)

For typical VPS-hosted products—Next.js sites, API backends, SaaS dashboards, image-heavy marketing pages—my default pick is Cloudflare R2 when:

  • The workload is egress-heavy (public downloads, images, video segments)
  • You already use Cloudflare for DNS/CDN/WAF
  • You want cost predictability without spending time on “S3 bill archaeology”

I still choose S3 when:

  • The architecture is AWS-centric (Lambda, ECS, Athena, Glue, etc.)
  • Governance requirements demand S3’s mature policy/retention tooling
  • You rely on S3-specific features that R2 doesn’t match 1:1
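The heuristics above condense into a few lines. The inputs and their priority order are my own framing, not an official decision tree:

```javascript
// Toy decision helper codifying the heuristics in this section.
function pickObjectStore({
  awsCentric = false,          // compute and pipelines live in AWS
  needsS3OnlyFeatures = false, // e.g. specific replication/governance tooling
  egressHeavy = false,         // public downloads, images, video segments
  usesCloudflare = false,      // DNS/CDN/WAF already on Cloudflare
} = {}) {
  if (awsCentric || needsS3OnlyFeatures) return "s3";
  if (egressHeavy || usesCloudflare) return "r2";
  return "either"; // at small scale the difference rarely matters
}

console.log(pickObjectStore({ egressHeavy: true, usesCloudflare: true })); // "r2"
console.log(pickObjectStore({ awsCentric: true })); // "s3"
```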

Soft note on hosting: if you run your app on DigitalOcean or Hetzner, pairing a lean VPS with object storage that matches your traffic pattern can make a bigger difference than upgrading CPU tiers. For bandwidth-heavy apps, using Cloudflare’s stack end-to-end (including R2) is often the cleanest path; for AWS-heavy pipelines, S3 remains the safest long-term default.


Some links in this article are affiliate links. We may earn a commission at no extra cost to you if you make a purchase through them.
