
Posted on • Originally published at yepchaos.com

Object Storage & CDN Journey

A chat application needs reliable object storage: media uploads, backups, logs. Sounds simple, but there are a lot of choices. I went through six different solutions before landing on something that actually made sense.

The S3 API

Before getting into the journey, one thing worth explaining: almost every object storage provider today implements the S3 API — the interface originally built by AWS for their Simple Storage Service.

It's a RESTful interface: buckets as containers, objects accessed by unique keys, HTTP methods for everything. The key thing is it's a standard. Providers like Wasabi, MinIO, Backblaze, Cloudflare R2 — they all speak S3. That means I can swap providers without rewriting application logic, just change the endpoint and credentials. That portability matters a lot when you're still figuring out the right fit.

The Provider Journey

AWS S3

The obvious starting point. Reliable, feature-rich, integrates with everything. I used it early on and it worked fine, but between storage, request charges, and especially egress, the pricing runs higher than the alternatives. I stopped using it before things got expensive.

Backblaze B2

Backblaze B2 has egress-free pricing, which sounds great. The problem: it only has American data centers. My servers and users aren't in America, so the latency was noticeable and unacceptable for a real-time chat app.

Tigris (via Fly.io)

Tigris (Fly.io) provides globally distributed, S3-compatible storage with low latency, addressing the B2 latency limitations. However, its pricing model includes per-request charges in addition to storage. For an API-heavy workload like a chat system, this would scale poorly, so I decided not to go with it.

MinIO

I actually deployed MinIO in my cluster. It's open-source, S3-compatible, and simple to run. But running it yourself means managing infrastructure, handling high availability, paying for the compute. For a small project it's overkill — I was spending more time on storage ops than on the actual product.

Wasabi

Wasabi has egress-free pricing and good performance. I settled here for a while. But there's a catch: Wasabi doesn't support public bucket permissions.

For private files, that's fine — I generate pre-signed URLs from the backend, the user gets a temporary link, no credentials exposed. But for public files like profile pictures, I had to build a backend service to forward them to users. Extra latency, extra backend load, not ideal.

I made it work, but then realized a bigger problem.

The Wasabi Pricing Problem

Wasabi charges for a minimum of 1TB regardless of how much you actually store. My total data (user uploads, database backups, cluster backups) was under 10GB. I was paying $8/month to store 10GB, which means paying for roughly 100x the capacity I actually used. That's bad.

Fixing the Public File Problem First

Before I figured out the pricing issue, I spent time solving the public file latency problem with Cloudflare caching. Worth documenting, because it works well if you're stuck on Wasabi or a similar provider without public buckets.

The setup: every public file request goes through my backend at /api/v1/media/file/*. I set Cloudflare cache rules on that path — mark responses eligible for cache, force an edge TTL of 1 year, bypass backend Cache-Control headers. Once a file is cached at Cloudflare's edge, it never hits my backend or Wasabi again.

Here's a real cached response:

  • CF-Cache-Status: HIT — served from Cloudflare's edge, not my backend
  • Age: 774 — seconds it's been cached at the edge
  • Cache-Control: max-age=31536000 — browser caches it for 1 year too

Zero extra cluster resources, no Wasabi bandwidth on repeat requests, low latency globally. If you're using Wasabi and hitting this problem, this approach works.
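Those headers are also easy to check programmatically. A tiny helper of my own (hypothetical, just to pin down what each header means) could look like this:

```python
# CF-Cache-Status, Age, and Cache-Control are real headers Cloudflare sets;
# these helpers (my own, for illustration) read them off a response.


def served_from_edge(headers: dict) -> bool:
    """True if Cloudflare answered from its cache instead of the origin."""
    return headers.get("CF-Cache-Status", "").upper() == "HIT"


def cached_for_seconds(headers: dict) -> int:
    """How long the object has been sitting in the edge cache."""
    return int(headers.get("Age", "0"))


# The response from the post parses like this:
headers = {
    "CF-Cache-Status": "HIT",             # served from the edge
    "Age": "774",                         # 774 seconds in cache
    "Cache-Control": "max-age=31536000",  # browsers cache it for a year
}
```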

But because of Wasabi's fixed 1TB minimum, I decided to move anyway.

Final Setup: Cloudflare R2

Cloudflare R2 has a free tier of 10GB. My entire dataset fits in that. No egress fees, native CDN built in — so no need for the Cloudflare caching workaround above (though good to know it works). I moved everything to R2 and now pay nothing for storage.

For backups, I'm keeping Backblaze B2 in mind for when data grows — egress-free and cheap for large volumes, as long as the latency to my users is acceptable for backup use cases (it is).

Current state:

  • Cloudflare R2 — user uploads, all active data, everything under 10GB (free tier)
  • Backblaze B2 — future home for backups once R2 free tier isn't enough

The egress-free advantage of Wasabi turned out to be irrelevant at my scale. Under 1TB, you're just paying the minimum anyway. R2's free tier made the decision easy.
