If you’re picking object storage for a VPS stack, Cloudflare R2 vs. Amazon S3 isn’t a theoretical debate: the choice directly changes your bill, your latency, and how painful egress surprises get. For most VPS-hosted apps, the real question is: do you need AWS’s ecosystem and features, or do you need predictable costs and simple global delivery?
Pricing & egress: where most VPS budgets die
The defining difference is egress.
- Amazon S3 charges for storage and requests, and typically charges data transfer out (egress) when users download your files. Those costs compound fast for image-heavy sites, downloads, backups you restore often, or API-driven media.
- Cloudflare R2 charges for storage and operations but not for egress, which is the single most compelling reason it’s showing up in modern VPS architectures.
Opinionated take: if your VPS is on a fixed monthly budget, unpredictable egress is the #1 way object storage turns into a problem. S3 is still “the default,” but that default is optimized for AWS-centric environments—less so for a single VPS serving public assets.
The caveat: S3 can be cheap if your data rarely leaves the bucket (internal processing) or you’re using AWS services that keep traffic inside AWS. If you’re hosting on a VPS outside AWS, you’re more likely to pay egress.
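To make the egress difference concrete, here is a back-of-envelope calculation. The $0.09/GB figure is an assumed ballpark for S3 internet data-transfer-out (actual rates vary by region and volume tier; check current pricing):

```shell
# Rough monthly egress cost for 1 TB of public downloads.
# $0.09/GB is an assumed ballpark S3 internet rate, not a quoted price.
awk 'BEGIN {
  gb = 1024              # 1 TB of egress per month
  s3 = gb * 0.09         # S3 data-transfer-out estimate
  printf "S3 egress: ~$%.2f/mo; R2 egress: $0.00/mo\n", s3
}'
```

At image-gallery or download-site traffic levels, that line item alone can exceed the cost of the VPS itself.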
Performance & latency for VPS-hosted apps
Performance isn’t just “which is faster,” it’s where your users are and where your compute is.
- S3 performance is excellent, but you typically pair it with CloudFront (or another CDN) for global delivery. That adds configuration surface area but gives you mature caching, signed URLs, origin failover patterns, etc.
- R2 sits naturally close to Cloudflare’s edge and pairs well with Cloudflare caching. For typical VPS workloads (Next.js assets, user uploads, downloadable builds), you can often get solid global delivery without stitching multiple AWS services together.
In the VPS hosting world, many people run compute on DigitalOcean, Hetzner, or similar providers and just want object storage that doesn’t punish them when traffic spikes. In that scenario, R2’s “global by default” story is attractive.
One more nuance: latency from your VPS to the object store matters for write-heavy flows (uploads, transforms, frequent reads without caching). If your app server sits in a region far from the storage region, you’ll feel it. S3 has many regions; with R2 you’ll likely design around edge caching and/or asynchronous processing.
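A quick way to check this from your VPS is to time a small request against each candidate endpoint with curl’s write-out variables (the URL below is a placeholder; substitute a real object in your S3 region or R2 bucket):

```shell
# Measure DNS, connect, and total time to a storage endpoint from the VPS.
# YOUR_ENDPOINT and the object path are placeholders; use your own.
curl -s -o /dev/null \
  -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
  https://YOUR_ENDPOINT/health-check-object
```

Run it a few times from the VPS itself; a consistently high connect time is a sign your compute and storage regions are poorly matched.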
Features & compatibility: S3 wins on depth, R2 wins on simplicity
S3 is the most feature-complete object storage platform:
- Rich eventing patterns (SNS/SQS/Lambda)
- Mature lifecycle policies and replication options
- Deep IAM and policy tooling
- A huge ecosystem of integrations
R2 focuses on the common 80%:
- S3-compatible API (critical for drop-in tooling)
- Straightforward buckets/keys and public/private patterns
- Easy pairing with Cloudflare Workers for lightweight processing
Practical guidance: if you rely on specific S3-only features (complex replication topologies, very granular IAM org structures, tight AWS-native event pipelines), S3 is still the safest choice. If you mostly need “store files, serve files, don’t get wrecked by egress,” R2 is compelling.
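Because R2 speaks the S3 API, even the standard AWS CLI can target it just by overriding the endpoint. A minimal sketch, assuming you have generated R2 API credentials and know your Cloudflare account ID (all placeholders below):

```shell
# List buckets on R2 via the S3-compatible API.
# ACCOUNT_ID and the key pair are placeholders; use your own R2 credentials.
export AWS_ACCESS_KEY_ID=YOUR_R2_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_R2_SECRET_KEY
aws s3 ls --endpoint-url "https://ACCOUNT_ID.r2.cloudflarestorage.com"
```

This is what “S3-compatible” buys you in practice: most existing tooling works against R2 with an endpoint and credential swap, no code changes.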
Actionable example: switch a VPS backup from S3 to R2 (S3 API)
Most backup tools speak the S3 API. That means you can often point them at R2 with minimal changes.
Here’s a concrete example using rclone on a VPS to push nightly backups to an S3-compatible endpoint (works for S3 and typically for R2 by changing endpoint/credentials):
# Install rclone (varies by distro)
# Debian/Ubuntu:
# sudo apt-get update && sudo apt-get install -y rclone
# Configure a remote (interactive):
rclone config
# Then run a backup sync (example directory -> bucket path)
# Replace before running:
#   REMOTE        - the remote name you set in rclone config
#   bucket-name   - your target bucket
#   your/app      - the key prefix inside the bucket
#   /var/backups  - the local folder to back up
rclone sync /var/backups REMOTE:bucket-name/your/app \
--s3-provider Other \
--s3-endpoint https://YOUR_S3_COMPAT_ENDPOINT \
--s3-access-key-id $S3_ACCESS_KEY \
--s3-secret-access-key $S3_SECRET_KEY \
--fast-list \
--transfers 8 \
--checkers 16
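To make the sync actually nightly, a simple cron entry is enough (the 03:17 schedule and log path are arbitrary examples; the remote and bucket names are the same placeholders as above):

```shell
# crontab fragment: run the backup at 03:17 every night and log the output.
# Install with `crontab -e` as the user that owns the rclone config.
17 3 * * * rclone sync /var/backups REMOTE:bucket-name/your/app --fast-list >> /var/log/rclone-backup.log 2>&1
```

Keep the cron job under the same user that ran `rclone config`, so the remote definition and credentials resolve.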
Notes that matter in production:
- Keep credentials in environment variables or a root-only config file.
- Test restore. Cheap backups are useless if restores are slow or broken.
- If you’re serving public assets, put a CDN in front (Cloudflare makes this easy for R2; S3 commonly uses CloudFront).
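“Test restore” can itself be scripted: pull the bucket contents into a scratch directory, then verify the remote against the source. A sketch using the same placeholder remote and paths as the backup example:

```shell
# Restore into a scratch directory to confirm the data comes back intact.
rclone copy REMOTE:bucket-name/your/app /tmp/restore-test

# Verify that remote objects match the local originals (sizes/hashes).
rclone check /var/backups REMOTE:bucket-name/your/app
```

Run this periodically, not just once: a backup pipeline that silently stopped syncing looks identical to a healthy one until you check.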
So, which should you choose for VPS hosting?
My rule of thumb:
- Choose S3 if you’re already on AWS, you need AWS-native integrations, or you want the most battle-tested feature set and don’t mind managing egress with a CDN and careful architecture.
- Choose Cloudflare R2 if you run your app on a VPS (think Hetzner or DigitalOcean) and you want predictable costs for public downloads, images, and user-generated content.
Soft nudge: if you’re already using Cloudflare for DNS, CDN, or WAF, R2 tends to fit naturally into that toolbox, especially for “static assets + uploads” workloads where egress pricing is the whole game.
Some links in this article are affiliate links. We may earn a commission at no extra cost to you if you make a purchase through them.