If you’re weighing Cloudflare R2 vs S3, you’re probably not debating “object storage” in the abstract—you’re trying to ship a faster, cheaper VPS-hosted app without getting wrecked by bandwidth bills or operational overhead.
What matters for VPS-hosted workloads
When you run on a VPS (think app servers on DigitalOcean, Hetzner, Linode, or Vultr), object storage usually backs one of these:
- Static assets (images, JS bundles, downloads)
- User uploads (profile pics, documents)
- Backups and log archives
- Media storage for APIs
In VPS hosting, the hidden cost is often egress (data leaving the storage provider). The second pain point is latency when your storage is far from your compute. The third is S3-API compatibility—because you want tooling to “just work.”
Cloudflare R2 vs S3: cost, egress, and real-world bills
Here’s the most opinionated, practical summary:
- AWS S3 is the default because it’s mature, feature-rich, and integrates with everything. But it’s also the place where egress costs can quietly dominate your invoice if you’re serving lots of public assets.
- Cloudflare R2 is compelling because it’s designed to reduce or eliminate the classic object-storage tax: paying to move your own data out.
In a VPS setup, you frequently serve assets to end users over the public internet. With S3, that often means either:
- Paying S3 egress directly, or
- Putting CloudFront in front (another moving part), or
- Aggressively caching elsewhere to avoid repeated origin fetches
With Cloudflare R2, the pitch is simple: predictable storage costs without the usual egress sting (depending on how you deliver content). If your app is bandwidth-heavy—image galleries, mod downloads, podcast files—R2 can change the economics.
That said, S3 still wins if you need deep enterprise features (complex replication patterns, certain compliance regimes, or AWS-native integrations). R2 is newer and intentionally narrower.
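To make the egress difference concrete, here’s a back-of-the-envelope comparison. The per-GB rates below are illustrative assumptions (based on commonly cited S3 internet-egress pricing and R2’s zero-egress model), not quotes—check the current pricing pages before budgeting on them.

```javascript
// Rough monthly egress cost comparison. Rates are assumptions for
// illustration only -- verify against current provider pricing.
const S3_EGRESS_PER_GB = 0.09; // assumed S3 internet egress, USD/GB
const R2_EGRESS_PER_GB = 0.0;  // R2 advertises no egress fees

function monthlyEgressCost(gbServedPerMonth, ratePerGb) {
  return gbServedPerMonth * ratePerGb;
}

const served = 5000; // e.g. ~5 TB of public assets served per month
console.log(`S3 egress:  $${monthlyEgressCost(served, S3_EGRESS_PER_GB).toFixed(2)}`);
console.log(`R2 egress:  $${monthlyEgressCost(served, R2_EGRESS_PER_GB).toFixed(2)}`);
```

At these assumed rates, 5 TB of monthly public traffic is roughly $450/month of pure egress on S3 and $0 on R2—which is why bandwidth-heavy apps feel the difference first.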
Performance and architecture: where your VPS lives matters
Object storage latency isn’t magic; it’s mostly geography plus network path.
If your VPS is on Hetzner in Germany and your object storage is on the other side of the planet, you’ll feel it in:
- Time-to-first-byte on uploads/downloads
- API response times if your app fetches from storage on request
- Worker/cron jobs that process many small objects
Practical guidance:
- If your app server frequently reads objects during requests, pick storage close to compute—or introduce caching.
- If most access is from browsers, your delivery network matters more than the location of your VPS.
With R2, you’re naturally nudged toward an edge-first pattern: store in R2, deliver through Cloudflare’s network. With S3, you can do the same with CloudFront, but you’re now operating more AWS pieces.
For many VPS-hosted apps, the simplest “good” architecture is:
- VPS serves dynamic pages/API
- Object storage holds uploads/assets
- CDN caches public content aggressively
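A small pattern that supports this architecture: have your app emit asset URLs that point at a CDN-backed domain, so browsers fetch bytes from the edge instead of proxying through your VPS. The domain below is a hypothetical placeholder—in practice it’s a custom domain mapped to your bucket or CDN distribution.

```javascript
// Build public URLs for stored objects so browsers fetch them from a CDN,
// not from your VPS. "assets.example.com" is a hypothetical custom domain.
const ASSET_BASE_URL = process.env.ASSET_BASE_URL || "https://assets.example.com";

function publicAssetUrl(key) {
  // Encode each path segment but keep the slashes that separate them.
  const encoded = key.split("/").map(encodeURIComponent).join("/");
  return `${ASSET_BASE_URL}/${encoded}`;
}

// Templates and API responses embed these URLs; the CDN serves the bytes.
console.log(publicAssetUrl("uploads/profile pics/alice.png"));
// -> https://assets.example.com/uploads/profile%20pics/alice.png
```

The design point: your VPS stays responsible only for dynamic HTML and API responses, while every cacheable byte rides the CDN.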
S3 compatibility and an actionable migration-style example
Both choices can be S3-API compatible in day-to-day tooling, which matters because it keeps migrations and multi-cloud setups realistic.
A practical way to keep your VPS code portable is using the AWS SDK and environment variables, so you can swap endpoints without rewriting everything.
Here’s a minimal Node.js example that can talk to either AWS S3 or Cloudflare R2 by changing config:
```js
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import fs from "node:fs";

const client = new S3Client({
  region: process.env.S3_REGION || "auto",
  endpoint: process.env.S3_ENDPOINT, // e.g. https://<accountid>.r2.cloudflarestorage.com
  forcePathStyle: true, // often needed for S3-compatible providers
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
  },
});

async function upload() {
  const file = fs.readFileSync("./example.png");
  await client.send(new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: "uploads/example.png",
    Body: file,
    ContentType: "image/png",
  }));
  console.log("Uploaded");
}

upload().catch(console.error);
```
On a VPS (systemd, Docker, or bare Node), you just set the environment variables per environment:
- AWS S3: standard AWS region + no custom endpoint
- Cloudflare R2: `S3_ENDPOINT` points to your R2 endpoint; region often set to `auto`
This is the lowest-friction path if you want to test R2 without committing.
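For clarity, here are the two config shapes side by side as a plain function. This is a sketch of the pattern described above, not an official API; the R2 endpoint format is an assumption based on Cloudflare’s account-scoped URLs, and the `us-east-1` fallback is just an example default.

```javascript
// Return the S3Client config for the chosen provider, driven by env vars.
// The endpoint format and default region here are illustrative assumptions.
function storageConfig(provider, env) {
  if (provider === "r2") {
    return {
      region: "auto", // R2 is not region-scoped; "auto" is the conventional value
      endpoint: env.S3_ENDPOINT, // e.g. https://<accountid>.r2.cloudflarestorage.com
      forcePathStyle: true,
      credentials: { accessKeyId: env.S3_ACCESS_KEY, secretAccessKey: env.S3_SECRET_KEY },
    };
  }
  // AWS S3: a real region, no custom endpoint needed
  return {
    region: env.S3_REGION || "us-east-1", // example default
    credentials: { accessKeyId: env.S3_ACCESS_KEY, secretAccessKey: env.S3_SECRET_KEY },
  };
}

const r2Config = storageConfig("r2", process.env);
console.log(r2Config.region); // "auto"
```

Passing either result straight into `new S3Client(...)` keeps the provider choice a one-line deployment decision rather than a code change.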
Which should you pick for VPS hosting?
My take:
- Pick S3 if you’re already deep in AWS, you need every advanced knob, or you’re optimizing for “known quantity” more than cost.
- Pick Cloudflare R2 if egress is your biggest pain, you serve lots of public assets, or you want a simpler bill for bandwidth-heavy workloads.
Two pragmatic decision tests:
- Do you pay for a lot of outbound traffic today? If yes, R2 is worth a serious trial.
- Do you rely on AWS-native features beyond basic object storage? If yes, S3 will likely save engineering time.
Finally, consider where your VPS is. If you’re running app servers on DigitalOcean or Hetzner, you can keep your core compute costs low and still choose R2 or S3 based on bandwidth and workflow. In practice, I’ve seen teams pair a cheap VPS with R2 for user uploads and get a noticeably calmer monthly bill—without turning the architecture into a science project.
In the end, both are viable. Start with the one that reduces your biggest recurring cost or operational risk, and keep your code S3-compatible so switching later stays boring.
Some links in this article are affiliate links. We may earn a commission at no extra cost to you if you make a purchase through them.