Table of Contents
- The Big Picture: Storage as Postgres + Object Store Hybrid
- Buckets, Objects, and the Metadata Layer
- Uploads: From TUS Resumable to S3-Compatible Magic
- Security Deep Dive: Row-Level Security Meets File Access
- Serving Files: Signed URLs, Public Buckets, and Edge Delivery
- Advanced Patterns: Image Transformations and Integrations
- Common Pitfalls and Pro Tips
- Wrapping Up: Level Up Your File Handling
The Big Picture: Storage as Postgres + Object Store Hybrid
Supabase Storage isn't just another blob store: it's a clever marriage of PostgreSQL for metadata and access control with an S3-compatible object store (primarily Amazon S3 under the hood, with support for others) for the actual heavy lifting of file bytes.
Why this hybrid?
- Postgres gives you battle-tested relational power: fine-grained Row Level Security (RLS), transactions, and queries.
- S3 gives you infinite scale, durability, and cheap storage for raw bytes.
The result? You get file permissions as expressive as your database policies, while the heavy files live where they belong: on object storage.
Pro Tip:
Think of Storage as "Postgres wearing an S3 hat." Every file operation starts in Postgres (auth, policy check), then proxies to S3.
Note:
Don't assume Storage is purely serverless like Firebase—it's backed by real Postgres instances, so heavy metadata operations (listing thousands of files) can hit query limits if you're not careful.
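One way to stay clear of those limits is to page through large buckets rather than listing everything in one call. Here's a minimal sketch assuming a supabase-js v2 client; the helper name, bucket, and page size are illustrative:

```javascript
// Hypothetical helper: walk a large folder page by page using the
// documented limit/offset options of list(), instead of one giant call.
async function listAllObjects(storage, bucket, folder = '', pageSize = 100) {
  const all = [];
  for (let offset = 0; ; offset += pageSize) {
    const { data, error } = await storage
      .from(bucket)
      .list(folder, {
        limit: pageSize,
        offset,
        sortBy: { column: 'name', order: 'asc' },
      });
    if (error) throw error;
    all.push(...data);
    if (data.length < pageSize) break; // a short page means we're done
  }
  return all;
}
```

In practice you'd rarely need *all* objects client-side; combine this with the `search` option to narrow results server-side first.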
Buckets, Objects, and the Metadata Layer
Everything starts with buckets. You create them in the dashboard or via API:
Javascript
const { data, error } = await supabase.storage.createBucket('avatars', {
  public: false,
});
Internally:
- Buckets are rows in a Postgres table (storage.buckets).
- Each bucket has config like public, allowed_mime_types, etc.
Files (objects) live in another table (storage.objects):
- Columns include id, bucket_id, name (path), metadata (JSONB), owner (user UUID), etc.
- The actual file path in S3 is something like bucket-name/user-id/random-uuid.ext, but you never see it; Supabase abstracts it away.
This metadata layer lets you query files relationally:
SQL
-- Example: Find all avatars owned by a user
SELECT * FROM storage.objects
WHERE bucket_id = 'avatars'
AND owner = 'uuid-of-user';
Tip
Use the metadata JSONB column for custom tags (e.g., { "project_id": 123 }) and query them efficiently with GIN indexes if needed.
Accessibility Note
When storing images, always add alt text metadata and enforce it in your frontend forms—Supabase won't do it for you.
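One way to enforce that is a small guard at the upload boundary. This is a hypothetical helper of my own, and it assumes your storage-js version supports the custom metadata upload option:

```javascript
// Hypothetical guard: reject image uploads that arrive without alt text,
// then store the text as custom metadata alongside the object.
async function uploadImageWithAlt(storage, bucket, path, file, altText) {
  if (!altText || !altText.trim()) {
    throw new Error('alt text is required for image uploads');
  }
  return storage.from(bucket).upload(path, file, {
    contentType: file.type,
    metadata: { alt: altText.trim() }, // custom metadata, queryable later
  });
}
```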
Uploads: From TUS Resumable to S3-Compatible Magic
Tsx
// React upload handler (standard, non-resumable upload)
const uploadFile = async (file: File) => {
  const { data, error } = await supabase.storage
    .from('avatars')
    .upload(`public/${user.id}/${file.name}`, file, {
      upsert: true,
      contentType: file.type,
    });
  if (error) console.error(error);
};
Behind the scenes:
- Client → Supabase Storage API (Node.js/Fastify service).
- API checks auth + RLS policy for insert on storage.objects.
- For large files, Supabase uses TUS protocol (resumable uploads) via libraries like Uppy.
- Chunked data streams directly to S3 (multipart upload).
- Once complete, metadata row is committed in Postgres transactionally.
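When should you reach for TUS instead of the standard upload? Supabase's docs recommend resumable uploads for larger files; the 6 MB cutoff below follows that guidance, but treat it as a tunable assumption rather than a hard rule:

```javascript
// Pick an upload strategy by file size. The 6 MB threshold mirrors Supabase's
// guidance for switching to TUS resumable uploads; adjust to taste.
const RESUMABLE_THRESHOLD = 6 * 1024 * 1024;

function shouldUseResumableUpload(fileSizeBytes) {
  return fileSizeBytes > RESUMABLE_THRESHOLD;
}
```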
Since late 2024/early 2025, Supabase Storage is fully S3-compatible, so you can use AWS SDKs directly:
Javascript
// Using AWS SDK v3 with Supabase credentials
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
const s3 = new S3Client({
  region: 'auto', // Supabase uses 'auto'
  endpoint: 'https://<project-ref>.supabase.co/storage/v1/s3',
  credentials: {
    accessKeyId: 'your-supabase-access-key',
    secretAccessKey: 'your-supabase-secret-key',
  },
  forcePathStyle: true,
});

// Standard SDK calls now target your Supabase project
await s3.send(new PutObjectCommand({
  Bucket: 'avatars',
  Key: 'user123/photo.jpg',
  Body: fileBuffer, // your file's bytes (Buffer or stream)
  ContentType: 'image/jpeg',
}));
Security Deep Dive: Row-Level Security Meets File Access
This is where Supabase shines.
Policies live on storage.objects table, just like database tables.
Example policy: Users can only upload to their own folder:
SQL
CREATE POLICY "Users can upload own avatars"
ON storage.objects FOR INSERT
TO authenticated
WITH CHECK (
  bucket_id = 'avatars'
  AND (storage.foldername(name))[1] = auth.uid()::text
);
- storage.foldername(name) extracts path segments.
- auth.uid() pulls the JWT user ID.
- Combine with public buckets for CDNs: public buckets bypass auth but still respect allowed_mime_types.
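On the client side, that policy means the first path segment must be the uploader's UID. A small sketch of building compliant paths (the helper name and sanitization rule are my own):

```javascript
// Hypothetical helper: build an object path whose first folder segment is the
// user's UID, so it passes the WITH CHECK clause of the policy above.
function avatarPath(userId, filename) {
  // Strip path separators from the filename so it can't escape the folder.
  const safeName = filename.replace(/[/\\]/g, '_');
  return `${userId}/${safeName}`;
}

// Usage: supabase.storage.from('avatars').upload(avatarPath(user.id, file.name), file)
```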
Serving Files: Signed URLs, Public Buckets, and Edge Delivery
Public bucket file: https://<project-ref>.supabase.co/storage/v1/object/public/avatars/user123/photo.jpg
Private: Use signed URLs (time-limited):
Javascript
const { data } = await supabase.storage
  .from('private-docs')
  .createSignedUrl('reports/q4.pdf', 60); // 60 seconds
Internally, signed URLs are JWTs verified by the Storage API, which proxies to S3 with presigned URLs.
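Because signed URLs expire, it helps to wrap createSignedUrl in a tiny cache that re-signs shortly before expiry. A sketch under my own assumptions (the refresh margin and helper are not a Supabase API):

```javascript
// Hypothetical cache: hand out a signed URL, re-signing once the current one
// is within `marginMs` of expiring.
function makeSignedUrlCache(storage, bucket, expiresInSec = 300, marginMs = 30_000) {
  const cache = new Map(); // path -> { url, expiresAt }
  return async function getUrl(path, now = Date.now()) {
    const hit = cache.get(path);
    if (hit && hit.expiresAt - now > marginMs) return hit.url;
    const { data, error } = await storage.from(bucket).createSignedUrl(path, expiresInSec);
    if (error) throw error;
    cache.set(path, { url: data.signedUrl, expiresAt: now + expiresInSec * 1000 });
    return data.signedUrl;
  };
}
```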
Edge caching? Supabase leverages CDN providers for public assets, giving low-latency global delivery.
Advanced Patterns: Image Transformations and Integrations
Supabase Storage supports on-the-fly image transforms (via imgproxy integration):
Javascript
// Resize to 200x200
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('user123/photo.jpg', {
    transform: { width: 200, height: 200, resize: 'cover' },
  });
Under the hood: requests hit an edge worker that pulls from S3, transforms the image, and caches the result.
Pair with Edge Functions for custom processing (watermarking, virus scanning).
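The transform API also composes nicely into responsive images. A sketch of building a srcset from getPublicUrl (helper name and widths are illustrative):

```javascript
// Hypothetical helper: generate a srcset string by requesting the same image
// at several transform widths.
function buildSrcSet(storage, bucket, path, widths = [200, 400, 800]) {
  return widths
    .map((width) => {
      const { data } = storage
        .from(bucket)
        .getPublicUrl(path, { transform: { width, resize: 'cover' } });
      return `${data.publicUrl} ${width}w`;
    })
    .join(', ');
}
```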
Pro Tip
Cache transformed images aggressively: use Cache-Control headers in metadata.
Common Pitfalls and Pro Tips
Pitfall: Listing huge buckets → Use pagination + search params.
Pitfall: Signed URLs expiring mid-session → Refresh on demand.
Pro Tip: Monitor usage via pg_stat_statements for metadata bottlenecks.
Pro Tip: For ultra-scale, consider direct S3 uploads with presigned URLs generated from Edge Functions.
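That last pattern pairs storage-js's createSignedUploadUrl (run server-side, e.g. inside an Edge Function) with uploadToSignedUrl on the client. A sketch of the client half, assuming the server already returned the signed path and token:

```javascript
// Client half of direct-to-storage uploads: the server (Edge Function) calls
// createSignedUploadUrl and returns { path, token }; the client then finishes
// the upload without ever holding privileged credentials.
async function uploadViaSignedUrl(storage, bucket, signed, file) {
  const { data, error } = await storage
    .from(bucket)
    .uploadToSignedUrl(signed.path, signed.token, file);
  if (error) throw error;
  return data;
}
```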
