Juan Diego Isaza A.

Cloudflare R2 vs S3 for VPS Hosting: Real Costs & UX

If you’re weighing Cloudflare R2 vs S3 for a VPS-hosted app, you’re really deciding where your bandwidth bill and operational friction will land. Object storage is “cheap” until you start serving files at scale, and then egress, request pricing, and latency become the whole game.

1) The decision drivers for VPS-hosted apps

When you run on a VPS (DigitalOcean, Hetzner, Linode, Vultr—pick your flavor), object storage usually backs one of these:

  • User uploads (avatars, videos, documents)
  • Static assets (JS/CSS/images) offloaded from your VPS
  • Backups and logs
  • Build artifacts / container layers

For VPS hosting, the practical constraints aren’t just durability and API compatibility—both providers are solid there. The constraints are:

  • Egress cost predictability: your VPS provider already charges for bandwidth; adding object storage egress can double-charge you.
  • Latency to your compute: if your VPS is in a different region/provider than your object store, every request pays the network tax.
  • S3 ecosystem gravity: tools, SDK defaults, IAM policies, and third-party integrations often assume S3.

Opinionated take: if your product serves a lot of public files, egress dominates. If your product is mostly private storage and integration-heavy, S3’s ecosystem dominates.

2) Pricing reality: egress is the knife fight

The headline difference most developers feel: Cloudflare R2 markets zero egress fees (when used through Cloudflare’s network), while Amazon S3 charges for data transfer out.

What that means in practice for a VPS stack:

  • If your VPS app serves files directly from object storage to end users, S3 egress can become your largest variable cost.
  • R2’s no-egress positioning can be a real lever when you have heavy download traffic (quick back-of-envelope below).
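
To make that concrete, here’s a quick back-of-envelope. The prices are illustrative assumptions (S3 internet egress is on the order of $0.09/GB in common regions; check current pricing pages), not quotes:

// Rough monthly egress estimate. Prices below are assumptions for
// illustration only; verify against each provider's current pricing.
const S3_EGRESS_PER_GB = 0.09; // approx. S3 internet egress, first tier
const R2_EGRESS_PER_GB = 0.0;  // R2's zero-egress positioning

function monthlyEgressCost(gbServedPerMonth, pricePerGb) {
  return gbServedPerMonth * pricePerGb;
}

// e.g. 5 TB of public downloads per month:
console.log(monthlyEgressCost(5000, S3_EGRESS_PER_GB)); // ~$450 on S3
console.log(monthlyEgressCost(5000, R2_EGRESS_PER_GB)); // $0 on R2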

But don’t stop at egress:

  • Request costs: list, put, get, and multipart uploads add up. Workloads with tons of tiny GETs might surprise you.
  • Lifecycle policies: S3 has extremely mature lifecycle tiers and archival options; R2 is improving but isn’t S3-deep (see the sketch after this list).
  • Hidden network path: “no egress” isn’t magic if your traffic pattern pulls data out to somewhere that isn’t effectively on the same edge path.
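
For reference, this is the kind of lifecycle automation S3 makes easy. A minimal sketch using the same SDK; the bucket name, logs/ prefix, and retention windows are illustrative assumptions:

import { S3Client, PutBucketLifecycleConfigurationCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Archive objects under logs/ to Glacier after 30 days, delete after a year.
await s3.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: "my-bucket", // illustrative name
  LifecycleConfiguration: {
    Rules: [{
      ID: "archive-old-logs",
      Status: "Enabled",
      Filter: { Prefix: "logs/" },
      Transitions: [{ Days: 30, StorageClass: "GLACIER" }],
      Expiration: { Days: 365 },
    }],
  },
}));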

A useful mental model:

  • High public traffic + CDN-friendly assets → R2 tends to win on cost clarity.
  • Private enterprise-y workflows + lots of integrations/tiers → S3 tends to win on features.

3) Performance and architecture with a VPS

Latency depends on where compute lives and how you deliver content.

Serving public assets

With a VPS (say on DigitalOcean) you generally want:

  1. Store objects in R2 or S3
  2. Put a CDN in front (often Cloudflare)
  3. Make your VPS mostly uninvolved in file delivery

R2 fits naturally if you’re already using Cloudflare at the edge. If your app is global and your users are everywhere, pushing traffic to the edge matters more than shaving a few milliseconds between your VPS and the object store.
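
If you take the R2-plus-Cloudflare route, a small Worker can serve public objects straight from an R2 bucket binding, so your VPS never touches the bytes. A minimal sketch; the BUCKET binding name and cache policy are assumptions, not requirements:

// Cloudflare Worker with an R2 bucket bound as BUCKET (via wrangler.toml).
export default {
  async fetch(request, env) {
    const key = new URL(request.url).pathname.slice(1); // e.g. "assets/app.js"
    const object = await env.BUCKET.get(key);
    if (object === null) {
      return new Response("Not found", { status: 404 });
    }
    const headers = new Headers();
    object.writeHttpMetadata(headers); // content-type etc. stored with the object
    headers.set("etag", object.httpEtag);
    headers.set("cache-control", "public, max-age=86400"); // let the edge cache it
    return new Response(object.body, { headers });
  },
};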

Serving private objects

Private downloads (signed URLs, authenticated access) can be fast on both, but pay attention to:

  • Where your signing/auth logic runs (your VPS)
  • Whether your object store supports the auth scheme your libraries assume (S3 patterns are the default)

If you’re running on Hetzner or another cost-efficient VPS environment, the delta between “cheap compute” and “expensive egress” becomes even more obvious. You can save $10–$30/month on a VPS and accidentally add $300/month in S3 transfer fees.

4) S3 compatibility: migration friction is real

R2 supports an S3-compatible API, and for many apps that’s enough. But “compatible” doesn’t always mean “drop-in identical,” especially around:

  • IAM policy nuance
  • Certain headers/edge cases in SDKs
  • Eventing and deep AWS-native integrations

If your stack is already AWS-centric (Lambda triggers, Athena, Glue, etc.), S3 is the path of least resistance.

If your stack is VPS-centric (Docker on a single node, Postgres, Redis) and you just need reliable blobs, R2’s S3-style API usually gets you there with less cost anxiety.

Actionable example: generating a signed URL (S3/R2 style)

This Node.js snippet generates a presigned GET URL using the AWS SDK v3. It works against S3 out of the box, and against R2 by pointing the client at a custom endpoint.

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({
  region: "auto", // R2 expects "auto"; for plain S3 use a real region like "us-east-1"
  endpoint: process.env.S3_ENDPOINT, // e.g. https://<accountid>.r2.cloudflarestorage.com
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
  },
});

const command = new GetObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: "uploads/report.pdf",
});

// The URL is valid for 60 seconds; the client downloads directly from the bucket.
const url = await getSignedUrl(client, command, { expiresIn: 60 });
console.log(url);

Operational tip: in a VPS-hosted app, presigned URLs keep file streaming off your server, which reduces CPU, bandwidth, and failure modes.
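
The same trick works in reverse for user uploads: hand the browser a presigned PUT URL so files travel straight to the bucket instead of through your VPS. A minimal sketch reusing the client config from the GET example; the key and content type are illustrative:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({
  region: "auto", // R2 expects "auto"; use a real region for S3
  endpoint: process.env.S3_ENDPOINT,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
  },
});

const uploadUrl = await getSignedUrl(
  client,
  new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: "uploads/avatar.png", // illustrative key
    ContentType: "image/png",  // the uploader must send the same Content-Type
  }),
  { expiresIn: 300 } // five minutes is plenty for a browser upload
);
console.log(uploadUrl);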

5) What I’d choose (and when) in VPS hosting

If you’re building a typical VPS-hosted SaaS—uploads, static assets, moderate scale—Cloudflare R2 is hard to ignore. The no-egress story aligns with the most common pain point: paying twice for bandwidth (once to your VPS provider, again to your object store).

If you’re building something deeply integrated with AWS services, or you need the full matrix of storage classes, replication options, and “AWS-native everything,” S3 is still the default for a reason.

Soft recommendation to keep it practical: if you already deploy on DigitalOcean or Hetzner and you’re fronting your app with Cloudflare, R2 often produces the cleanest bill and simplest CDN story. If your infrastructure roadmap is AWS-first, stick with S3 and spend your energy optimizing caching and lifecycle policies instead of fighting platform gravity.


Some links in this article are affiliate links. We may earn a commission at no extra cost to you if you make a purchase through them.
