Here's a question most developers never think to ask: when a user uploads a file in your app, where does that file actually go first?
If you're using any standard framework setup - multer in Express, Django's request.FILES, Rails's ActionDispatch - the answer is: through your server. The file lands on your server, sits in memory or a temp directory, and then your server streams it up to S3 or R2 or whatever storage backend you're using.
That flow looks like this:
User → Your Server → Cloud Storage
And it causes three problems that most SaaS developers quietly accept as normal:
1. Latency doubles. The file has to travel to your server and then from your server to storage. Two hops instead of one. For a 10MB file, that's noticeable. For a 100MB file, it's painful.
2. You pay for bandwidth you didn't need to use. Every byte that passes through your server consumes compute time and, depending on your hosting setup, egress fees. You're essentially paying to be a middleman.
3. Your server becomes a bottleneck. Multiple concurrent large uploads can saturate your server's network interface or exhaust memory. You end up scaling your API servers to handle file upload load, which is a genuinely expensive solution to a problem that shouldn't exist.
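To make those three problems concrete, here's a minimal sketch of the pass-through pattern, using only Node built-ins. The handler details are illustrative, not taken from any particular framework:

```javascript
// Pass-through upload: every byte crosses your server before storage.
import http from 'node:http'

const server = http.createServer((req, res) => {
  const chunks = []
  // Problem 2: all upload bandwidth flows through this process.
  req.on('data', (chunk) => chunks.push(chunk))
  req.on('end', () => {
    // Problem 3: a 100MB upload means roughly 100MB held in memory here.
    const file = Buffer.concat(chunks)
    // Problem 1: only now does the second hop begin, streaming `file`
    // up to S3/R2. The user waits for two transfers, not one.
    res.end(`received ${file.length} bytes`)
  })
})
```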
The alternative: presigned URLs
Cloud storage providers solved this years ago. The idea is simple: instead of accepting the file on your server and forwarding it, your server generates a short-lived signed URL that gives the client direct write access to a specific location in storage. The client then uploads directly.
The flow becomes:
User → Cloud Storage (directly)
  ↑
Your Server (just issues the URL, never sees the file)
Your server's only job is to generate the URL. The actual file transfer bypasses it entirely.
Here's what that looks like in practice with Cloudflare R2 as an example:
```javascript
// On your server - generate a presigned URL
// (ACCOUNT_ID, the credentials, fileName, and contentType come from
// your own config and the incoming request)
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({
  region: 'auto',
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: ACCESS_KEY_ID,
    secretAccessKey: SECRET_ACCESS_KEY,
  },
})

const command = new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: `uploads/${fileName}`,
  ContentType: contentType,
})

const presignedUrl = await getSignedUrl(client, command, { expiresIn: 3600 })
// Return this URL to the client
```
// Return this URL to the client
```javascript
// On the client — upload directly to R2
await fetch(presignedUrl, {
  method: 'PUT',
  body: file,
  headers: { 'Content-Type': file.type },
})
```
The client gets a URL, uploads directly to R2, and your server never touches the file data. Upload speed improves significantly because you've eliminated one of the two network hops. No extra bandwidth costs. No memory pressure on your API server.
Why most developers still don't use presigned URLs
Knowing about presigned URLs is one thing. Actually using them consistently in a multi-tenant SaaS is another.
The moment you move beyond a single-user app, you run into questions that presigned URLs alone don't answer:
Tenant isolation. If every user uploads to uploads/${filename}, you immediately have a collision problem. So you build a namespacing convention: tenants/${userId}/${filename}. Fine - but now you're generating keys, managing that convention in every part of your codebase that touches storage, and hoping a bug never lets a key from one tenant resolve to another tenant's path.
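A sketch of the kind of key-building helper that convention forces you to write. The names and sanitisation rules here are hypothetical, not from any particular product:

```javascript
// Build a namespaced storage key so one tenant's uploads can never
// land under another tenant's prefix.
function buildTenantKey(tenantId, filename) {
  // Reject tenant ids that could break out of the tenant prefix.
  if (!/^[A-Za-z0-9_-]+$/.test(tenantId)) {
    throw new Error(`invalid tenant id: ${tenantId}`)
  }
  // Replace path separators and other unsafe filename characters.
  const safeName = filename.replace(/[^A-Za-z0-9._-]/g, '_')
  return `tenants/${tenantId}/${safeName}`
}

// buildTenantKey('user_123', 'report.pdf') -> 'tenants/user_123/report.pdf'
// buildTenantKey('user_123', '../secret')  -> 'tenants/user_123/.._secret'
```

The point isn't that this is hard to write once; it's that every storage path in your codebase now has to go through it, forever.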
Quota enforcement. Presigned URLs are issued before the upload happens. If User A has used 4.8GB of their 5GB quota and tries to upload a 500MB file, you need to check that before issuing the URL, not after. So now you need to track per-tenant storage consumption in your database, update it after every upload, decrement it after every delete, and handle the race conditions when multiple uploads happen simultaneously.
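The check-then-reserve step might look like this in-memory sketch. A real implementation would use an atomic, conditional database update rather than a Map; all names here are illustrative:

```javascript
// Per-tenant usage in bytes. Stands in for a database column.
const usedBytes = new Map()

// Reserve space *before* issuing a presigned URL. Reserving up front,
// rather than recording usage after the upload finishes, closes the race
// where two concurrent uploads both pass the check and blow past the quota.
function tryReserve(tenantId, sizeBytes, quotaBytes) {
  const used = usedBytes.get(tenantId) ?? 0
  if (used + sizeBytes > quotaBytes) return false // refuse to issue the URL
  usedBytes.set(tenantId, used + sizeBytes)
  return true
}

const GB = 1024 ** 3
// User A at 4.8GB of a 5GB quota, attempting a 500MB file:
tryReserve('user_a', 4.8 * GB, 5 * GB)        // initial usage -> true
tryReserve('user_a', 500 * 1024 ** 2, 5 * GB) // -> false, rejected up front
```

You'd also need the mirror-image release on delete, and a reconciliation job for uploads that were reserved but never completed.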
Usage tracking. Your product probably wants to show users how much storage they've consumed. That means a database column, a counter that stays in sync with actual storage, and a query path to surface that data per tenant. Another moving part.
None of this is impossibly hard. But it's a week of work that has nothing to do with your actual product, and it's work you'll repeat on the next SaaS you build too.
Solving the whole problem at once
This is the gap Tenantbox fills. It's a thin API layer that handles the multi-tenancy concerns (isolation, quota enforcement, usage tracking) while keeping the presigned URL architecture intact.
You send it a tenant_id and a filename. It returns a presigned URL. Your client uploads directly to Tenantbox storage. Your server never touches the file. But the namespacing, the quota check, and the usage update all happen automatically on Tenantbox's side.
```javascript
// Step 1 — request a presigned URL from Tenantbox
const { data } = await axios.post(
  'https://api.tenantbox.dev/api/storage/upload/',
  {
    tenant_id: 'user_123',
    filename: 'report.pdf',
    content_type: 'application/pdf',
  },
  { headers: { 'Authorization': 'Bearer sk_live_xxx' } }
)

// Step 2 — client uploads directly to R2
await axios.put(data.presigned_url, fileBuffer, {
  headers: { 'Content-Type': 'application/pdf' },
})
```
No tenant setup is needed in advance: the first upload for a tenant_id creates the tenant automatically. Quota limits are enforced before the URL is issued, so an over-quota upload fails at the request stage, not mid-transfer. Storage consumption is tracked per tenant without any database work on your side.
The $0 egress fee is worth noting too: files go straight from the client to Tenantbox storage, so your own servers never relay (or pay for) the bytes.
Demo
When to use each approach
To be direct about when you actually need something like Tenantbox versus raw presigned URLs:
Raw presigned URLs are fine if you have a single-tenant app, all users share the same storage pool, and you don't need per-user quota enforcement. Many apps never need more than this.
You need the multi-tenant layer if you're building a B2B SaaS, each customer should have isolated storage, you need to enforce per-customer storage limits, or you want to show customers their storage usage. At that point, you're going to build the isolation and tracking logic anyway; the question is whether you want to build and maintain it yourself or use something that's already built.
The broader principle
The underlying architecture - presigned URLs for direct client-to-storage transfers - is worth internalising regardless of what storage solution you end up using. The performance benefit alone is significant, and it simplifies your server considerably by removing file data from the request-handling path entirely.
If you're still routing uploads through your server, try switching one endpoint to presigned URLs and measure the difference in upload completion times. The gap is usually larger than people expect.
Tenantbox is free to get started at tenantbox.dev if you want the multi-tenancy layer without building it yourself.
What does your current file upload architecture look like? I'm curious whether most people reading this are already using presigned URLs or still going through the server. Drop a comment.
Built by Voltageitlabs.