Sean Brill

Building a Developer-First Cloud Storage Platform on Azure Blob (Lessons Learned)

When you build apps long enough, you eventually run into file storage.

User uploads. Media previews. Private downloads. Public sharing. Expiring links. Access control.

On paper, services like S3 and Firebase Storage solve this. In practice, I kept running into friction:

  • Overly complex permission models
  • Confusing bucket structures
  • Boilerplate-heavy integrations
  • Public/private edge cases
  • Performance surprises with large files

So I decided to build a storage layer from scratch on top of Azure Blob Storage and document what I learned along the way.

This eventually became FileFreak.io, but the more interesting part is the architecture and tradeoffs.


Why Not Just Use S3 Directly?

You absolutely can.

But most applications don't need the full flexibility of S3. They need:

  • Secure uploads
  • Private-by-default storage
  • Controlled sharing
  • Clean metadata management
  • Reliable streaming
  • Good UX

The problem is not raw storage. It's everything around it.


Architecture Overview

High-level stack:

  • Backend: Node.js with streaming-based request handling
  • Storage: Azure Blob Storage (hot tier)
  • Database: MSSQL for metadata and access control
  • Auth: JWT sessions + Argon2 password hashing
  • Realtime: WebSockets for upload progress tracking

Azure Blob handles durability and scale.
The backend handles logic, security, and developer ergonomics.


Handling Large Uploads Without Blowing Memory

One of the biggest mistakes I made early on was buffering too much data in memory.

The correct approach is fully streaming uploads:

await blockBlobClient.uploadStream(req);

(Note that `uploadStream` takes the readable stream as its first argument and returns a promise; it is not a writable target you can pipe into.)

Key lessons:

  • Never buffer entire files in memory
  • Respect backpressure
  • Destroy streams properly on error
  • Handle partial uploads cleanly
  • Watch for ERR_STREAM_WRITE_AFTER_END issues

Streaming architecture matters more than people think.


Private by Default

Storage systems tend to default toward public access or complicated ACLs.

Instead, I designed the system around:

  • All files private by default
  • Signed access for downloads
  • Public links as explicit opt-in
  • Optional password protection
  • Expiration timestamps stored in metadata

Security is easier to reason about when the default state is locked down.


Real-Time Upload Progress

Instead of polling for upload status, I used WebSockets to emit progress events during streaming uploads.

This significantly improves UX compared to traditional form-based uploads.

It's a small detail, but it makes the system feel modern.


Metadata Is Everything

Blob storage is not your database.

Every file is paired with structured metadata stored in MSSQL:

  • Owner ID
  • Folder hierarchy
  • Access level
  • Public token (if generated)
  • Expiration time
  • Size
  • MIME type

Storage handles durability.
The database handles logic.

Trying to overload blob metadata quickly becomes painful.
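As a rough illustration, a new upload's metadata row might be built like this. The field names are assumptions based on the list above, not the actual MSSQL schema, but the key point is visible in the defaults: every file starts private, with no public token and no expiry.

```javascript
// Illustrative per-file metadata record (hypothetical field names).
function buildFileRecord({ ownerId, folderId, name, size, mimeType }) {
  return {
    ownerId,
    folderId,               // position in the folder hierarchy
    name,
    size,                   // bytes
    mimeType,
    accessLevel: "private", // private by default
    publicToken: null,      // only set when sharing is explicitly enabled
    expiresAt: null,        // optional expiry for public links
    createdAt: new Date().toISOString(),
  };
}
```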


Performance Considerations

Azure Blob hot tier pricing is attractive at around $0.02 per GB per month.

But storage cost is rarely the main expense.

The real considerations are:

  • Egress bandwidth
  • API compute
  • Streaming efficiency
  • Database load
  • File preview handling

Optimizing for streams instead of buffers made a measurable difference in memory stability under load.


Building the UI Layer

On the frontend, I focused on:

  • A file explorer-style dashboard
  • Drag-and-drop uploads
  • Nested folders
  • Trash and restore
  • In-browser previews for images, videos, and PDFs
  • Secure sharing links

The goal was to reduce friction, not increase flexibility.

Most apps don't need 500 configuration flags.
They need reliability and clarity.


What's Next

The next phase is exposing the storage layer as:

  • A public API
  • SDKs for easy integration
  • Programmatic file management
  • Expanded permission controls

The interesting question is not whether storage exists. It's whether developers want maximum flexibility or sensible defaults with guardrails.


Quick Feature Snapshot

What's live today:

  • Private-by-default cloud storage
  • Fast uploads with real-time progress
  • Streaming uploads and downloads
  • In-browser previews
  • Folder organization and trash restore
  • Secure public sharing links with optional password protection and expiration

Coming next:

  • Public developer API
  • SDKs for integration
  • Enhanced team permissions

I'm Curious

For those of you who've built apps involving file uploads:

  • What's the most frustrating part?
  • Is the pain UX, permissions, pricing, performance, or something else?
  • Would you trade flexibility for simplicity?

If you're curious about what this evolved into, it became FileFreak.io. But the architecture lessons were the real takeaway.
