If you've ever built file upload functionality in NestJS, you know the pain. This post is for you.
The All-Too-Familiar Story
You're building a NestJS backend. The product team wants file uploads — avatars, documents, whatever. Simple enough, right?
Then reality sets in.
You wire up multer, wrestle with S3's SDK, write a wrapper service, add a custom interceptor, figure out how to validate file types (and realize checking the extension isn't safe), figure out what happens when you need to test all this, and then the stakeholder says: "Can we support Google Cloud Storage too? We're moving vendors next quarter."
And you're back to square one.
The Real Pains Every NestJS Developer Faces
You already know these. Let's just name them so we can solve them.
| # | Pain |
|---|------|
| 1 | Vendor lock-in — S3, GCS, Azure, and local fs all have different SDKs. Switching providers means rewriting your upload service. |
| 2 | Upload boilerplate — Multer gives you the raw file. You still have to pick a destination, generate a filename, call your storage service, and return the URL. Every time. |
| 3 | Extension spoofing — Checking file.mimetype or the filename extension trusts the user. A renamed .exe sails right through. |
| 4 | Filename chaos — Original name (collision-prone), manual UUID (scattered), or nothing. No standard, pluggable way to control it. |
| 5 | Multi-tenant path gymnastics — You prepend users/${userId}/ everywhere and hope you never miss one. |
| 6 | Untestable uploads — Mock the AWS SDK (brittle), spin up a real S3 in CI (slow), or skip tests entirely (dangerous). |
| 7 | No observability — No audit trail of what got uploaded or by whom. No health check before Kubernetes routes traffic. |
| 8 | Plain-text sensitive files — Object-level encryption means wiring KMS, managing keys, and never missing a single put call. One slip and you have a compliance problem. |
| 9 | Silent upload failures — S3 returns a 503, your upload fails, the user gets a 500. No retry, no fallback. |
| 10 | Uploads proxied through your server — Every file goes browser → NestJS → S3. Double bandwidth, Node.js event loop blocked, server is the bottleneck. |
Introducing @fozooni/nestjs-storage
@fozooni/nestjs-storage is a unified, driver-based storage module for NestJS that addresses every one of these pains behind a clean, consistent API.
Supported backends: Local filesystem, Amazon S3, Cloudflare R2, Google Cloud Storage, Azure Blob Storage, MinIO, Backblaze B2, DigitalOcean Spaces, Wasabi
One API to rule them all — switch storage backends by changing a config value. Your application code doesn't change.
It's MIT licensed, ships CJS + ESM with full TypeScript definitions, tested on Node 18, 20, and 22, and backed by 400+ tests.
Installation
npm install @fozooni/nestjs-storage
# or pnpm add / yarn add
Install only the peer dependencies you need:
# For S3, Cloudflare R2, MinIO, B2, DigitalOcean, or Wasabi
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
# For Google Cloud Storage
npm install @google-cloud/storage
# For Azure Blob Storage
npm install @azure/storage-blob
# For file upload interceptors (multer bridge)
npm install multer @types/multer
# For magic bytes validation
npm install file-type
# For health checks
npm install @nestjs/terminus
# For OpenTelemetry tracing
npm install @opentelemetry/api
# For event emitter bridge (optional)
npm install @nestjs/event-emitter
Quick Start
Register the module in your AppModule:
import { Module } from '@nestjs/common';
import { StorageModule } from '@fozooni/nestjs-storage';
@Module({
imports: [
StorageModule.forRoot({
default: 'local',
disks: {
local: {
driver: 'local',
root: './storage',
},
},
}),
],
})
export class AppModule {}
Use it in any service:
import { Injectable } from '@nestjs/common';
import { StorageService } from '@fozooni/nestjs-storage';
@Injectable()
export class UploadService {
constructor(private readonly storage: StorageService) {}
async upload(filename: string, data: Buffer) {
await this.storage.put(`uploads/${filename}`, data);
return this.storage.url(`uploads/${filename}`);
}
}
That's it. Now let's solve the real pains.
Pain → Solution: A Deep Dive
Solution 1: Vendor Lock-In — Unified API + Zero-Config Driver Swap
Configure multiple disks and switch between them without touching your business logic:
StorageModule.forRoot({
default: 'local', // ← Change this one value to switch backends
disks: {
local: { driver: 'local', root: './storage' },
s3: { driver: 's3', bucket: 'my-bucket', region: 'us-east-1', key: process.env.AWS_KEY, secret: process.env.AWS_SECRET },
r2: { driver: 'r2', bucket: 'my-bucket', accountId: process.env.CF_ACCOUNT_ID, key: process.env.R2_KEY, secret: process.env.R2_SECRET },
gcs: { driver: 'gcs', bucket: 'my-bucket', projectId: 'my-project', keyFilename: '/path/to/keyfile.json' },
azure: { driver: 'azure', containerName: 'uploads', accountName: process.env.AZURE_ACCOUNT, accountKey: process.env.AZURE_KEY },
minio: { driver: 'minio', bucket: 'my-bucket', endpoint: 'https://minio.internal', key: process.env.MINIO_KEY, secret: process.env.MINIO_SECRET },
},
})
Every disk implements the same FilesystemContract interface — put, get, delete, copy, move, url, temporaryUrl, exists, size, getMetadata, and more. Swap the default key and your app code is untouched.
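To make the interchangeability concrete, here's a rough sketch of coding against such a contract. The method names come from the list above, but the exact signatures and the `InMemoryDisk` class are illustrative assumptions, not the library's actual definitions:

```typescript
// Assumed shape of a subset of the contract (signatures are illustrative).
interface FilesystemContract {
  put(path: string, data: Buffer): Promise<void>;
  get(path: string): Promise<Buffer>;
  exists(path: string): Promise<boolean>;
  delete(path: string): Promise<void>;
  url(path: string): string;
}

// Any backend that satisfies the contract is interchangeable.
// Here, a toy in-memory disk for illustration:
class InMemoryDisk implements FilesystemContract {
  private files = new Map<string, Buffer>();
  async put(path: string, data: Buffer) { this.files.set(path, data); }
  async get(path: string) {
    const file = this.files.get(path);
    if (!file) throw new Error(`Not found: ${path}`);
    return file;
  }
  async exists(path: string) { return this.files.has(path); }
  async delete(path: string) { this.files.delete(path); }
  url(path: string) { return `/storage/${path}`; }
}
```

Code written against the interface never learns which backend it's talking to, which is exactly why swapping the `default` key is safe.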
Want to inject a specific disk directly? Use @InjectDisk:
@Injectable()
export class BackupService {
constructor(
@InjectDisk('s3') private readonly s3: FilesystemContract,
@InjectDisk('local') private readonly local: FilesystemContract,
) {}
async backup(path: string) {
const file = await this.s3.get(path);
await this.local.put(`backups/${path}`, file);
}
}
Need to switch environments dynamically? forRootAsync with ConfigService:
StorageModule.forRootAsync({
imports: [ConfigModule],
useFactory: (config: ConfigService) => ({
default: config.get('STORAGE_DRIVER', 'local'),
disks: {
local: { driver: 'local', root: config.get('STORAGE_ROOT', './storage') },
s3: { driver: 's3', bucket: config.get('AWS_BUCKET'), region: config.get('AWS_REGION'), key: config.get('AWS_KEY'), secret: config.get('AWS_SECRET') },
},
}),
inject: [ConfigService],
})
Solution 2: Boilerplate Interceptors — StorageFileInterceptor
Forget manual file-to-storage wiring. One decorator handles the full upload-to-storage flow:
@Controller('upload')
export class UploadController {
@Post('avatar')
@UseInterceptors(
StorageFileInterceptor('avatar', {
disk: 's3',
path: 'avatars',
namingStrategy: new UuidNamingStrategy(),
limits: { fileSize: 5 * 1024 * 1024 },
}),
)
uploadAvatar(@UploadedFile() file: StoredFile) {
// file.path, file.url, file.size, file.mimetype — all ready to use
return { url: file.url };
}
@Post('gallery')
@UseInterceptors(StorageFilesInterceptor('photos', 10, { disk: 's3', path: 'gallery' }))
uploadGallery(@UploadedFiles() files: StoredFile[]) {
return files.map((f) => ({ url: f.url, path: f.path }));
}
}
By the time your controller method runs, the files are already stored. StoredFile gives you path, public URL, size, MIME type, original name, and disk name.
Solution 3: Extension Spoofing — MagicBytesValidator
Don't trust what the user tells you the file is. Check the actual bytes:
@Post('document')
uploadDocument(
@UploadedFile(
new ParseFilePipe({
validators: [
new FileExtensionValidator({ allowedExtensions: ['jpg', 'png', 'pdf'] }),
new MagicBytesValidator({ allowedTypes: ['image/jpeg', 'image/png', 'application/pdf'] }),
],
}),
)
file: Express.Multer.File,
) {
// Validated at the binary level — not just the extension
}
A file renamed from .exe to .pdf will fail the magic bytes check. Supported formats include JPEG, PNG, GIF, PDF, ZIP, Word documents, MP3, MP4, WebM, 7z, gzip, BMP, TIFF, PSD, and more.
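For context, this is roughly what a magic-bytes check does under the hood. It's a simplified illustration, not the library's implementation (which covers far more formats):

```typescript
// A few well-known file signatures (the first bytes of the file on disk):
const SIGNATURES: Record<string, number[]> = {
  'image/png': [0x89, 0x50, 0x4e, 0x47],       // \x89PNG
  'image/jpeg': [0xff, 0xd8, 0xff],            // JPEG SOI marker
  'application/pdf': [0x25, 0x50, 0x44, 0x46], // %PDF
};

// Return the detected MIME type, or undefined if no signature matches.
function sniffMimeType(buffer: Buffer): string | undefined {
  for (const [mime, sig] of Object.entries(SIGNATURES)) {
    if (sig.every((byte, i) => buffer[i] === byte)) return mime;
  }
  return undefined; // unknown content: reject by default
}

// A Windows executable starts with "MZ" (0x4d 0x5a) no matter what it's named:
sniffMimeType(Buffer.from([0x4d, 0x5a, 0x90, 0x00])); // → undefined (rejected)
sniffMimeType(Buffer.from('%PDF-1.7'));               // → 'application/pdf'
```

The filename and the Content-Type header never enter the decision, so renaming the file changes nothing.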
Solution 4: Messy Filenames — Pluggable Naming Strategies
Four built-in strategies, usable per-call or as a disk-level default:
new UuidNamingStrategy() // → avatars/550e8400-e29b-41d4-a716-446655440000.jpg
new HashNamingStrategy() // → avatars/d8e8fca2dc0f896fd7cb4cb0031ba249.jpg (deduplication!)
new DatePathNamingStrategy() // → avatars/2026/03/16/550e8400-...jpg
new OriginalNamingStrategy() // → avatars/my-photo.jpg
HashNamingStrategy is especially useful for deduplication — the same file content always produces the same filename. Need custom logic? Implement NamingStrategy with a single generate(file): string method.
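As a sketch of that extension point, a custom strategy might look like the following. The `generate(file)` method matches the description above, but the exact shape of the `file` argument is an assumption:

```typescript
// Assumed minimal shape of the uploaded file object passed to generate().
interface UploadedFileLike {
  originalname: string;
}

// A hypothetical strategy: prefix the sanitized original name with a
// timestamp, so names sort by upload time and rarely collide.
class TimestampNamingStrategy {
  generate(file: UploadedFileLike): string {
    // Replace anything outside a safe character set to avoid path tricks.
    const safe = file.originalname.replace(/[^a-zA-Z0-9._-]/g, '_');
    return `${Date.now()}-${safe}`; // e.g. "1710576000000-invoice.pdf"
  }
}
```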
Solution 5: Multi-Tenant Path Chaos — Scoped Disks
Instead of manually prefixing every path, create a scoped disk:
const userDisk = storage.scope(`users/${userId}`);
await userDisk.put('avatar.jpg', buffer); // → users/user-123/avatar.jpg
await userDisk.putFile('documents', file); // → users/user-123/documents/uuid.jpg
const exists = await userDisk.exists('avatar.jpg'); // → checks users/user-123/avatar.jpg
Scopes are nestable:
const tenantDisk = storage.scope(`tenants/${tenantId}`);
const privateDisk = tenantDisk.scope('private');
await privateDisk.put('contract.pdf', data);
// → tenants/acme/private/contract.pdf
No more scattered prefix concatenation. No more missed paths.
Solution 6: Untestable Uploads — FakeDisk + Assertions
describe('UploadService', () => {
  let module: TestingModule;
  let storageUtils: StorageTestUtils;
  beforeEach(async () => {
    module = await Test.createTestingModule({
      imports: [StorageModule.forRoot({ default: 'local', disks: { local: { driver: 'local', root: './tmp' } } })],
      providers: [UploadService],
    }).compile();
    storageUtils = StorageTestUtils.fake(module, 'local');
  });
  it('stores the uploaded file', async () => {
    const service = module.get(UploadService);
const fakeFile = StorageTestUtils.fakeFile('test.jpg', Buffer.from('fake-image'));
await service.upload(fakeFile);
storageUtils.assertExists('uploads/test.jpg');
storageUtils.assertCount(1);
storageUtils.assertContentEquals('uploads/test.jpg', Buffer.from('fake-image'));
});
});
FakeDisk is a full in-memory implementation. Your tests run fast, offline, and with zero side effects. Assertions cover existence, content, counts, and directory state.
Solution 7: No Observability — Events + Health Checks
Storage Events — Hook into every file operation:
@Injectable()
export class AuditLogger {
constructor(private readonly storageEvents: StorageEventsService) {
storageEvents.on(StorageEvents.PUT, (event) => {
this.log('UPLOAD', event.disk, event.path, event.timestamp);
});
storageEvents.on(StorageEvents.DELETE, (event) => {
this.log('DELETE', event.disk, event.path, event.timestamp);
});
}
}
Events emitted: PUT, PUT_FILE, DELETE, DELETE_MANY, COPY, MOVE. Each carries disk name, path, and timestamp. If you have @nestjs/event-emitter installed, events flow through it automatically — no configuration needed.
Health Checks — Kubernetes-ready storage health:
@Get()
@HealthCheck()
check() {
return this.health.check([
() => this.storageHealth.check('storage', 'local'),
() => this.storageHealth.checkDisks('storage', ['s3', 'gcs']),
]);
}
Health checks write and read a small probe file. If the round-trip works, the disk is healthy. Your readiness probe will catch S3 connectivity issues before your app starts serving traffic.
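Conceptually, the probe is a tiny write-read round-trip. Here's a hand-rolled sketch of the idea; the library's health indicator does this for you, and `probeDisk` with its disk shape is purely illustrative:

```typescript
// Minimal disk surface the probe needs (illustrative, not the library's type).
interface ProbeDisk {
  put(path: string, data: Buffer): Promise<void>;
  get(path: string): Promise<Buffer>;
  delete(path: string): Promise<void>;
}

async function probeDisk(disk: ProbeDisk): Promise<boolean> {
  const path = `.health-probe-${Date.now()}`;
  const payload = Buffer.from('ping');
  try {
    await disk.put(path, payload);       // can we write?
    const echoed = await disk.get(path); // can we read it back?
    return echoed.equals(payload);       // did the round-trip survive intact?
  } catch {
    return false;                        // any failure means unhealthy
  } finally {
    await disk.delete(path).catch(() => {}); // best-effort cleanup
  }
}
```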
Solution 8: Plain-Text Sensitive Files — EncryptedDisk
Wrap any disk with transparent AES-256-GCM encryption. No code changes in the rest of your app:
const encryptedDisk = storage.encrypted('s3', {
key: process.env.ENCRYPTION_KEY, // 32-byte hex key
});
await encryptedDisk.put('contracts/signed.pdf', pdfBuffer);
// → Stored on S3 as encrypted bytes, IV prepended per object
const decrypted = await encryptedDisk.get('contracts/signed.pdf');
// → Returns original plaintext buffer
Every object gets a unique IV. The same put call twice produces different ciphertext. Works with any driver — local, S3, Azure, GCS, or any custom backend.
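For the curious, the IV-per-object scheme can be sketched with Node's built-in `crypto` module. This illustrates the idea only; the actual on-disk layout used by `EncryptedDisk` is an assumption here, not its documented format:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const ALGO = 'aes-256-gcm';
const IV_LEN = 12;  // recommended IV length for GCM
const TAG_LEN = 16; // GCM authentication tag length

function encrypt(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(IV_LEN); // fresh random IV per object
  const cipher = createCipheriv(ALGO, key, iv);
  const body = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  // Assumed stored layout: [iv][ciphertext][auth tag]
  return Buffer.concat([iv, body, cipher.getAuthTag()]);
}

function decrypt(key: Buffer, stored: Buffer): Buffer {
  const iv = stored.subarray(0, IV_LEN);
  const tag = stored.subarray(stored.length - TAG_LEN);
  const body = stored.subarray(IV_LEN, stored.length - TAG_LEN);
  const decipher = createDecipheriv(ALGO, key, iv);
  decipher.setAuthTag(tag); // GCM also detects tampering, not just decrypts
  return Buffer.concat([decipher.update(body), decipher.final()]);
}
```

Because the IV is random each time, encrypting the same plaintext twice yields different bytes, which is the property the paragraph above describes.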
Solution 9: Transient Failures — RetryDisk + CachedDisk
RetryDisk adds automatic retry with full-jitter exponential backoff:
const reliableDisk = storage.withRetry('s3', {
maxRetries: 3,
baseDelay: 100, // ms
maxDelay: 5000, // ms
factor: 2,
jitter: true,
});
Retries on StorageNetworkError (transient failures, 5xx responses). Skips retry on StorageFileNotFoundError and StoragePermissionError — no point retrying a 404. Emits a storage.retry event on each attempt so you can track it.
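Full jitter means each delay is drawn uniformly between zero and an exponentially growing cap. Here's a sketch of the delay calculation; the option names mirror the config above, but the library's internals may differ:

```typescript
// Full-jitter exponential backoff: delay ~ uniform[0, min(maxDelay, base * factor^attempt)]
function backoffDelay(
  attempt: number, // 0-based retry attempt
  { baseDelay = 100, maxDelay = 5000, factor = 2, jitter = true } = {},
): number {
  const capped = Math.min(maxDelay, baseDelay * factor ** attempt);
  return jitter ? Math.random() * capped : capped;
}

// Without jitter: attempt 0 → 100 ms, attempt 1 → 200 ms, attempt 10 → capped at 5000 ms.
// With jitter, delays are spread out so many clients retrying at once don't stampede.
```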
CachedDisk eliminates redundant metadata round-trips:
const cachedDisk = storage.cached('s3', {
ttl: 60_000, // 60 seconds default TTL
methods: { exists: 30_000, size: 10_000 }, // per-method overrides
});
Caches exists, size, lastModified, mimeType, and getMetadata. Auto-invalidated on any write. Especially useful in request handlers that check existence before uploading.
Solution 10: Proxied Browser Uploads — Presigned POST
Instead of routing every upload through your NestJS server, generate a one-time presigned POST URL and let the browser upload directly to S3:
@Post('presign')
async getUploadUrl(@Body() dto: { filename: string; contentType: string }) {
const { url, fields } = await this.storage.presignedPost(`uploads/${dto.filename}`, {
expires: 300, // 5 minutes
maxSize: 10 * 1024 * 1024,
allowedMimeTypes: ['image/jpeg', 'image/png'],
});
return { url, fields };
}
The client POSTs directly to S3 using the returned url and fields. Your server never touches the file bytes. Works with S3, R2, and GCS.
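On the client side, the flow might look like the following. `buildPresignedForm` is a hypothetical helper for illustration, not part of the package:

```typescript
// Build the multipart body for a presigned POST upload.
// The provider-supplied fields (policy, signature, key, ...) must be
// appended before the file itself.
function buildPresignedForm(fields: Record<string, string>, file: Blob): FormData {
  const form = new FormData();
  for (const [name, value] of Object.entries(fields)) form.append(name, value);
  form.append('file', file);
  return form;
}

// In the browser (sketch):
//   const { url, fields } = await fetch('/upload/presign', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify({ filename: file.name, contentType: file.type }),
//   }).then((r) => r.json());
//   await fetch(url, { method: 'POST', body: buildPresignedForm(fields, file) });
```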
The Disk Decorator Pattern
One of the most useful architectural features in @fozooni/nestjs-storage is composable disk decorators. Every decorator wraps a disk, adds behavior, and delegates everything else transparently.
Your App
↓
RetryDisk ← automatic retries on transient failure
↓
CachedDisk ← metadata caching
↓
OtelDisk ← OpenTelemetry span per operation
↓
CdnDisk ← CDN URL rewriting + signed CloudFront URLs
↓
EncryptedDisk ← AES-256-GCM transparent encryption
↓
S3Disk ← actual storage
Each decorator extends DiskDecorator, an abstract base class that auto-delegates all 30+ interface methods to the wrapped disk. You only override the methods you care about. You can compose them in any order, stack as many as you need, and the abstraction never leaks.
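Stripped to its essence, the pattern looks like this. It's a minimal sketch, not the library's actual `DiskDecorator` (which auto-delegates the full 30+ method surface):

```typescript
// A two-method stand-in for the real disk interface (illustrative).
interface DiskLike {
  put(path: string, data: Buffer): Promise<void>;
  get(path: string): Promise<Buffer>;
}

// The base decorator wraps a disk and delegates everything by default.
class DiskDecoratorSketch implements DiskLike {
  constructor(protected readonly inner: DiskLike) {}
  put(path: string, data: Buffer) { return this.inner.put(path, data); }
  get(path: string) { return this.inner.get(path); }
}

// A concrete decorator overrides only the operation it cares about.
class LoggingDisk extends DiskDecoratorSketch {
  override async put(path: string, data: Buffer) {
    console.log(`put ${path} (${data.length} bytes)`);
    return super.put(path, data);
  }
}
```

Because every decorator both accepts and implements the same interface, stacking them is just nested construction: `new LoggingDisk(new AnotherDecorator(baseDisk))`.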
Available decorators:
| Decorator | What it adds |
|---|---|
| EncryptedDisk | AES-256-GCM at-rest encryption |
| CachedDisk | Metadata caching with TTL |
| RetryDisk | Exponential backoff retry |
| ReplicatedDisk | Write replication to multiple backends |
| CdnDisk | CDN URL rewriting + CloudFront signed URLs |
| OtelDisk | OpenTelemetry tracing per operation |
| QuotaDisk | Byte-level storage quota enforcement |
| ScopedDisk | Path-prefix isolation |
All are available as factory methods on StorageService: storage.encrypted(), storage.cached(), storage.withRetry(), storage.replicated(), storage.withTracing(), storage.withQuota(), storage.scope().
Advanced Features
File Versioning
VersionedDisk wraps any driver and takes a snapshot before every overwrite. Works with S3 native versioning, GCS object generations, or a local .versions/ directory:
const versioned = storage.disk('docs'); // configured as VersionedDisk
const versions = await versioned.listVersions('contracts/signed.pdf');
await versioned.restoreVersion('contracts/signed.pdf', versions[1].versionId);
Range Requests (HTTP 206)
Serve video, large PDFs, or binary downloads with partial content support:
@Get('stream/:path')
async stream(@Param('path') path: string, @Req() req: Request, @Res() res: Response) {
return this.storage.serveRange(path, req, res);
// → Sets Content-Range, Content-Length, HTTP 206 automatically
}
Disk Migration
Moving from S3 to R2? The StorageMigrator streams files without loading them all into memory:
for await (const progress of migrator.migrate('s3', 'r2', {
concurrency: 10,
verify: true, // checksum verification per file
deleteSource: false, // dry-run first
})) {
console.log(`${progress.transferred}/${progress.total} — ${progress.currentFile}`);
}
Streaming ZIP Archives
Bundle multiple stored files into a ZIP and stream it directly to the client:
@Get('export')
async export() {
return StorageArchiver.createZip(['reports/a.pdf', 'reports/b.pdf'], 's3');
// → Returns StreamableFile, no files loaded into memory simultaneously
}
Bonus: Convenience Methods You'll Actually Use
// Inverse of exists()
if (await storage.missing('uploads/avatar.jpg')) {
throw new NotFoundException('Avatar not found');
}
// Read and JSON-parse in one call (supports Zod schema for validation)
const config = await storage.json<AppConfig>('config/settings.json', AppConfigSchema);
// File checksum
const hash = await storage.checksum('uploads/contract.pdf', 'sha256');
// Bulk delete with partial failure handling
const { succeeded, failed } = await storage.deleteMany(['tmp/a.jpg', 'tmp/b.jpg', 'tmp/c.jpg']);
// Stream a file to the HTTP response — sets Content-Type, Content-Length, Content-Disposition
@Get('download/:path')
download(@Param('path') path: string) {
return this.storage.getStreamableFile(path);
}
// Temporary files with auto-expiry
await storage.putTemp('session/preview.pdf', pdfBuffer, 300); // expires in 5 minutes
// S3: uses native Expires header. Local: writes .ttl sidecar, cleaned up by StorageTempCleanupService.
// Typed driver-specific metadata
const meta = await disk.getMetadata<S3FileMetadata>('uploads/photo.jpg');
meta.etag; // typed
meta.storageClass; // typed
meta.versionId; // typed
GitHub & Installation
The package lives at: https://github.com/fozooni/nestjs-storage
npm install @fozooni/nestjs-storage
Full documentation, API reference, and more examples are in the README.
Ideas, Contributions & What's Next
The package ships 9 backends, 8 disk decorators, 400+ tests, and a full streaming/versioning/migration API. Here's what's being considered next:

- More cache backends — Redis-backed CacheBackend and QuotaStore implementations
- Image transformation pipeline — Resize/compress on upload via sharp
- More schematics — Generate interceptors, health check controllers, and test scaffolding
- Additional cloud providers — as the ecosystem grows
If you have an idea, open a GitHub issue. If you want to contribute:
- Fork the repo
- Create a feature branch
- Write tests
- Open a PR
First-time contributors are welcome. The DiskDecorator pattern makes adding new behaviors especially straightforward — most decorators are under 100 lines.
Give It a Star ⭐
If @fozooni/nestjs-storage saves you time or solves a problem you've been dealing with, consider starring the repo on GitHub. It helps other developers find the package, and it means a lot to the people building it.
https://github.com/fozooni/nestjs-storage
Have questions? Drop a comment below. Found a bug? Open an issue on GitHub. Built something cool with this package? Share it — I'd love to see it.