Why Supabase Storage?
Supabase Storage provides an S3-compatible object storage solution with built-in CDN, image transformations, and Row Level Security. If you need file uploads with authentication baked in, this is your shortcut.
Quick Start
```shell
npm install @supabase/supabase-js
```

```javascript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  'https://your-project.supabase.co',
  'your-anon-key'
);

// Upload a file
const { data, error } = await supabase.storage
  .from('avatars')
  .upload('public/avatar.png', file, {
    cacheControl: '3600',
    upsert: false,
  });

if (error) {
  console.error('Upload failed:', error.message);
} else {
  console.log('Uploaded:', data.path);
}
```
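Every Storage call resolves to a `{ data, error }` pair instead of throwing. If you prefer exceptions, a tiny helper can convert the pattern; `unwrap` below is a hypothetical name, not part of the Supabase SDK:

```javascript
// Hypothetical helper: turn Supabase's { data, error } result into a
// plain value, throwing when the call failed.
function unwrap({ data, error }) {
  if (error) {
    throw new Error(`Storage error: ${error.message ?? error}`);
  }
  return data;
}

// Works the same on a mocked result as on a real one:
const ok = unwrap({ data: { path: 'public/avatar.png' }, error: null });
console.log(ok.path); // "public/avatar.png"

try {
  unwrap({ data: null, error: { message: 'Bucket not found' } });
} catch (e) {
  console.log(e.message); // "Storage error: Bucket not found"
}
```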
Image Transformations on the Fly
No need for a separate image processing service:
```javascript
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('public/avatar.png', {
    transform: {
      width: 200,
      height: 200,
      resize: 'cover',
      quality: 80,
    },
  });

console.log(data.publicUrl);
```
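Under the hood, those transform options become query parameters on a render endpoint. The sketch below shows the mapping; the `/render/image/public/...` path and parameter names reflect Supabase's public transform URLs as I understand them, but treat the exact format as an assumption and use `getPublicUrl()` in real code:

```javascript
// Sketch (assumed URL format): map transform options onto the
// public image-render endpoint's query string.
function buildRenderUrl(projectUrl, bucket, objectPath, transform) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(transform)) {
    params.set(key, String(value));
  }
  return `${projectUrl}/storage/v1/render/image/public/${bucket}/${objectPath}?${params}`;
}

const url = buildRenderUrl(
  'https://your-project.supabase.co',
  'avatars',
  'public/avatar.png',
  { width: 200, height: 200, resize: 'cover', quality: 80 }
);
console.log(url);
```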
Signed URLs for Private Files
```javascript
// Second argument is the expiry in seconds
const { data, error } = await supabase.storage
  .from('private-docs')
  .createSignedUrl('report.pdf', 60);

if (!error) console.log(data.signedUrl);
```
List and Manage Files
```javascript
const { data, error } = await supabase.storage
  .from('documents')
  .list('contracts', {
    limit: 100,
    offset: 0,
    sortBy: { column: 'created_at', order: 'desc' },
  });

// data is null on error, so guard before iterating
data?.forEach((file) => {
  console.log(`${file.name} - ${file.metadata.size} bytes`);
});

// Move a file
await supabase.storage
  .from('documents')
  .move('contracts/old.pdf', 'archive/old.pdf');

// Delete files
await supabase.storage
  .from('documents')
  .remove(['contracts/draft.pdf']);
```
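The listing above prints raw byte counts from `metadata.size`. For logs or UI you'll usually want them human-readable; `formatBytes` is a hypothetical helper, not part of the SDK:

```javascript
// Hypothetical helper for the listing loop: render a byte count
// from file.metadata.size as a human-readable string.
function formatBytes(bytes) {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let n = bytes;
  let i = 0;
  while (n >= 1024 && i < units.length - 1) {
    n /= 1024;
    i++;
  }
  return `${n.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
}

console.log(formatBytes(512));     // "512 B"
console.log(formatBytes(1048576)); // "1.0 MB"
```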
Row Level Security for Storage
The killer feature: storage policies written in plain SQL:
```sql
CREATE POLICY "Users can upload to own folder"
ON storage.objects FOR INSERT
WITH CHECK (
  bucket_id = 'avatars' AND
  auth.uid()::text = (storage.foldername(name))[1]
);

CREATE POLICY "Public read access"
ON storage.objects FOR SELECT
USING (bucket_id = 'public-assets');
```
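On the client side, that first policy simply means uploads must land under a top-level folder named after the user's id, because `(storage.foldername(name))[1]` is the first path segment. A minimal sketch of that convention, with `userId` standing in for the value of `auth.uid()`:

```javascript
// Build an object path under the user's own folder, matching the
// "Users can upload to own folder" policy above.
function userObjectPath(userId, filename) {
  return `${userId}/${filename}`;
}

// JS mirror of the SQL check: first path segment must equal the user id.
function policyAllows(userId, objectPath) {
  return objectPath.split('/')[0] === userId;
}

const path = userObjectPath('8f7c-user-uuid', 'avatar.png');
console.log(path);                               // "8f7c-user-uuid/avatar.png"
console.log(policyAllows('8f7c-user-uuid', path)); // true
console.log(policyAllows('someone-else', path));   // false

// In real code you would then upload with:
// await supabase.storage.from('avatars').upload(path, file);
```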
S3-Compatible Protocol
Use any S3 client with Supabase Storage:
```javascript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  forcePathStyle: true,
  region: 'us-east-1',
  endpoint: 'https://your-project.supabase.co/storage/v1/s3',
  credentials: {
    accessKeyId: 'your-access-key',
    secretAccessKey: 'your-secret-key',
  },
});

await s3.send(
  new PutObjectCommand({
    Bucket: 'documents',
    Key: 'report.pdf',
    Body: fileBuffer,
  })
);
```
Resumable Uploads for Large Files
```javascript
import * as tus from 'tus-js-client';

const upload = new tus.Upload(file, {
  endpoint: `${SUPABASE_URL}/storage/v1/upload/resumable`,
  retryDelays: [0, 3000, 5000],
  headers: {
    authorization: `Bearer ${accessToken}`,
    'x-upsert': 'true',
  },
  metadata: {
    bucketName: 'videos',
    objectName: 'large-video.mp4',
    contentType: 'video/mp4',
  },
  onProgress: (bytesUploaded, bytesTotal) => {
    console.log(`${((bytesUploaded / bytesTotal) * 100).toFixed(1)}%`);
  },
});

upload.start();
```
Real-World Use Case
A SaaS founder needed user-uploaded documents with access control. Instead of building S3 + CloudFront + Lambda@Edge for auth, they used Supabase Storage with RLS policies. Setup time: 30 minutes instead of 2 days. The CDN handles image transforms automatically: no Sharp, no ImageMagick, no Lambda functions.
Building something with file storage? I create custom data pipelines and automation tools. Check out my web scraping toolkit on Apify or reach me at spinov001@gmail.com for custom solutions.