If you’re choosing object storage for a VPS-backed app, Cloudflare R2 vs. Amazon S3 is the comparison that keeps coming up—because storage pricing and egress can quietly become your biggest “why is my bill exploding?” line item.
What actually matters for VPS-hosted workloads
On VPS hosting, you typically run your API, workers, and cron jobs on machines you control (or at least rent). Object storage is your durable “bucket of blobs” for:
- User uploads (images, videos, PDFs)
- Backups and archives
- Static assets for web apps
- Data pipelines (logs, exports)
For these workloads, the deciding factors aren’t marketing checklists—they’re:
- Egress costs (moving data out to the public internet or to your VPS)
- Latency to your compute (where your VPS sits)
- S3 API compatibility (tooling and SDK friction)
- Operational ergonomics (IAM, policies, lifecycle rules)
Pricing and egress: the non-negotiable difference
This is where Cloudflare R2 is intentionally disruptive.
Amazon S3
S3 storage pricing is rarely the problem. The networking is. Egress and request costs can dominate if you serve assets publicly or have chatty workloads.
- Great durability and ecosystem maturity
- But egress can punish “VPS + public downloads” architectures
- You’ll also pay per-request, and those add up under load
Cloudflare R2
R2’s headline is zero egress fees (in many common scenarios), which changes the math for serving files to users or syncing content across regions.
My take: if your app is asset-heavy (images, downloads, media), egress is the first thing to model, not the last. For a small VPS app that suddenly gets traction, S3 egress is the kind of surprise that can force a redesign.
That said, don’t treat “no egress” as “no cost.” You still pay for storage and operations, and you should validate request pricing against your access pattern.
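To make that concrete, here’s a back-of-the-envelope cost model in Node. The rates below are illustrative placeholders I’ve plugged in for the sketch—check each provider’s current pricing page before trusting any number; the point is the shape of the math, not the exact figures.

```javascript
// Rough monthly bill model. All rates are illustrative assumptions —
// verify against the providers' current pricing pages.
const RATES = {
  s3: { storagePerGB: 0.023, egressPerGB: 0.09, perMillionReads: 0.4 },
  r2: { storagePerGB: 0.015, egressPerGB: 0.0, perMillionReads: 0.36 },
};

function monthlyCost(provider, { storedGB, egressGB, readsMillions }) {
  const r = RATES[provider];
  return (
    storedGB * r.storagePerGB +     // storage at rest
    egressGB * r.egressPerGB +      // data served out to the internet
    readsMillions * r.perMillionReads // request/operation charges
  );
}

// Example workload: 500 GB stored, 2 TB served to users, 10M reads/month.
const workload = { storedGB: 500, egressGB: 2000, readsMillions: 10 };
console.log("S3-like:", monthlyCost("s3", workload).toFixed(2));
console.log("R2-like:", monthlyCost("r2", workload).toFixed(2));
```

Notice that in this asset-heavy example the egress term dwarfs everything else on the S3 side—which is exactly why it should be the first thing you model.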
Performance and topology: where your VPS lives matters
With VPS hosting, geography is architecture.
- If your VPS is in a data center close to AWS regions, S3 latency is usually predictable.
- If your users are global, a CDN in front of either service often matters more than raw bucket latency.
R2 sits closer to Cloudflare’s network edge conceptually, and it pairs naturally with Cloudflare’s caching layer. S3 can do similar things, but you typically assemble the pieces yourself.
Opinionated rule of thumb:
- If your app serves lots of public assets: R2 + caching is hard to ignore.
- If your app is internal, batchy, or AWS-native: S3 still feels like the default.
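The “R2 + caching” point is easy to quantify: with a CDN in front of either bucket, only cache misses generate origin egress. A tiny sketch (the hit rate here is an assumption you’d measure in production—it varies with content mix and TTLs):

```javascript
// Effective origin egress behind a CDN: only cache misses reach the bucket.
// hitRate is the fraction of requests served from cache (measure it; don't guess).
function originEgressGB(totalServedGB, hitRate) {
  if (hitRate < 0 || hitRate > 1) {
    throw new RangeError("hitRate must be in [0, 1]");
  }
  return totalServedGB * (1 - hitRate);
}

// Serving 2 TB/month at a 90% hit rate leaves roughly 200 GB of origin
// egress — a tenfold cut to the S3 egress bill (or the R2 request bill).
console.log(originEgressGB(2000, 0.9));
```

This is why, for global audiences, the cache in front of the bucket often matters more than which bucket you picked.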
Also consider where your compute runs. Many teams host on providers like DigitalOcean or Hetzner for simple, cost-effective VPS deployments. In those cases, you’re already “off AWS,” so the S3 ecosystem benefits need to be real, not theoretical.
Compatibility and operations: S3 is the lingua franca
S3 won because it became the API everyone supports.
S3 strengths
- Mature IAM and policy tooling
- Deep lifecycle management options
- Broad integration: backups, CI/CD tools, data platforms
R2 strengths (and where to be careful)
R2 is S3-compatible for many operations, which is a huge deal. Most common SDK calls work, and many tools that speak S3 will talk to R2.
But “S3-compatible” isn’t always “S3-identical.” If you rely on edge-case behaviors, advanced IAM workflows, or specific AWS integrations, you should test your exact toolchain.
In practice: for typical VPS workloads (uploads, backups, static assets), S3-compatibility is usually enough.
Actionable example: point an S3 SDK at R2
Here’s a minimal Node.js example using the AWS SDK v3 to upload a file to an S3-compatible endpoint (like R2). This is the quickest way to validate your migration path.
```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import fs from "node:fs";

const client = new S3Client({
  region: "auto",
  endpoint: process.env.R2_ENDPOINT, // e.g. https://<accountid>.r2.cloudflarestorage.com
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

async function upload() {
  const Body = fs.createReadStream("./avatar.png");
  const cmd = new PutObjectCommand({
    Bucket: process.env.R2_BUCKET,
    Key: "uploads/avatar.png",
    Body,
    ContentType: "image/png",
  });
  await client.send(cmd);
  console.log("Uploaded");
}

upload().catch(console.error);
```
If you can upload, list, and read objects with your existing tooling, most migrations become a pricing decision rather than a rewrite.
Recommendation matrix (and a soft landing)
If I were deploying a VPS-hosted app today, I’d decide like this:
- Choose S3 if you’re deeply tied to AWS services, need the most mature IAM/policy surface area, or want maximum “it just works” integration with third-party tools.
- Choose Cloudflare R2 if egress is a major risk, you serve lots of public content, or you want an S3-like workflow without the typical bandwidth penalties.
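That decision logic can be encoded as a quick sanity-check helper. The threshold below—“does monthly egress exceed what you store?”—is an illustrative heuristic of mine, not official guidance from either provider:

```javascript
// Encodes the rule of thumb above. Thresholds are illustrative assumptions.
function suggestStorage({ monthlyEgressGB, storedGB, awsNative }) {
  if (awsNative) return "s3"; // deep AWS ties usually win on integration
  // If you serve more than you store each month, egress dominates the bill.
  if (monthlyEgressGB > storedGB) return "r2";
  return "either"; // close call — model real prices before deciding
}

console.log(suggestStorage({ monthlyEgressGB: 2000, storedGB: 500, awsNative: false })); // "r2"
console.log(suggestStorage({ monthlyEgressGB: 50, storedGB: 500, awsNative: true }));   // "s3"
```

Treat the output as a prompt to run the numbers, not a verdict.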
For teams running lean VPS stacks—say on DigitalOcean Droplets or Hetzner servers—R2 can be a practical way to keep bills predictable while still using familiar S3 tooling. If you already use Cloudflare for DNS/CDN, the operational simplicity is a nice bonus—but it shouldn’t be the only reason.
Some links in this article are affiliate links. We may earn a commission at no extra cost to you if you make a purchase through them.