Adam - The Developer

Efficient S3 File Uploads: Speed & Large File Handling in NestJS

Uploading files efficiently to S3 isn't just about getting data from point A to point B—it's about doing it fast, reliably, and at scale. Whether you're handling 5MB images or 5GB videos, the right approach makes all the difference.

🚀 The Core Strategy: Direct-to-S3 Uploads

Never route files through your server. This is the #1 performance killer.

❌ The Wrong Way (Slow & Resource-Heavy)

Client → Your Server → S3

Problems: server bottleneck, memory spikes, timeouts, and poor scalability.

✅ The Right Way (Fast & Scalable)

Client → S3 (directly)

Your Server → Generates presigned URL only

Benefits: Maximum speed, no server memory issues, infinite scalability


📦 Implementation: Basic Presigned URL Upload

S3 Service Setup

// s3.service.ts
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class S3Service {
  private readonly s3Client: S3Client;
  private readonly bucketName: string;

  constructor(private configService: ConfigService) {
    this.s3Client = new S3Client({
      region: this.configService.get('AWS_REGION'),
      credentials: {
        accessKeyId: this.configService.get('AWS_ACCESS_KEY_ID'),
        secretAccessKey: this.configService.get('AWS_SECRET_ACCESS_KEY'),
      },
      // Performance optimizations
      maxAttempts: 3,
      requestHandler: {
        connectionTimeout: 5000,
        socketTimeout: 5000,
      },
    });
    this.bucketName = this.configService.get('AWS_BUCKET_NAME');
  }

  async generatePresignedUrl(
    filename: string,
    contentType: string,
    fileSize: number,
  ): Promise<{ uploadUrl: string; key: string }> {
    const key = `uploads/${uuidv4()}/${filename}`;

    const command = new PutObjectCommand({
      Bucket: this.bucketName,
      Key: key,
      ContentType: contentType,
      ContentLength: fileSize,
    });

    // Short expiration for security, adequate for upload speed
    const uploadUrl = await getSignedUrl(this.s3Client, command, {
      expiresIn: 900, // 15 minutes
    });

    return { uploadUrl, key };
  }
}

Upload Controller

// upload.controller.ts
import { Controller, Post, Body, UseGuards } from '@nestjs/common';
import { UploadService } from './upload.service';
import { JwtAuthGuard } from '../auth/jwt-auth.guard';

@Controller('api/upload')
@UseGuards(JwtAuthGuard)
export class UploadController {
  constructor(private readonly uploadService: UploadService) {}

  @Post('initiate')
  async initiateUpload(@Body() body: {
    filename: string;
    contentType: string;
    fileSize: number;
  }) {
    return await this.uploadService.initiateUpload(
      body.filename,
      body.contentType,
      body.fileSize,
    );
  }
}

Client-Side Upload with Progress

import axios from 'axios';

async function uploadFileToS3(file: File, token: string) {
  // Step 1: Get presigned URL
  const { data } = await axios.post(
    '/api/upload/initiate',
    {
      filename: file.name,
      contentType: file.type,
      fileSize: file.size,
    },
    {
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${token}`,
      },
    }
  );

  const { uploadUrl, key } = data;

  // Step 2: Upload directly to S3 with progress tracking
  await axios.put(uploadUrl, file, {
    headers: {
      'Content-Type': file.type,
    },
    onUploadProgress: (progressEvent) => {
      if (progressEvent.total) {
        const percentage = Math.round((progressEvent.loaded / progressEvent.total) * 100);
        updateProgressBar(percentage);
      }
    },
  });

  return key;
}

🚄 Multipart Upload: For Large Files (100MB+)

For files over 100MB, use S3's multipart upload. This provides:

  • Faster uploads: Parallel part uploads
  • Resumable uploads: Retry individual failed parts
  • Better reliability: Network issues don't kill the entire upload

Multipart Upload Service

// multipart.service.ts
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  AbortMultipartUploadCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class MultipartService {
  private readonly s3Client: S3Client;
  private readonly bucketName: string;
  // Optimal chunk size for network efficiency
  private readonly CHUNK_SIZE = 10 * 1024 * 1024; // 10MB

  constructor(private configService: ConfigService) {
    this.s3Client = new S3Client({
      region: this.configService.get('AWS_REGION'),
      credentials: {
        accessKeyId: this.configService.get('AWS_ACCESS_KEY_ID'),
        secretAccessKey: this.configService.get('AWS_SECRET_ACCESS_KEY'),
      },
    });
    this.bucketName = this.configService.get('AWS_BUCKET_NAME');
  }

  async initiateMultipartUpload(
    filename: string,
    contentType: string,
  ) {
    const key = `uploads/${uuidv4()}/${filename}`;

    const command = new CreateMultipartUploadCommand({
      Bucket: this.bucketName,
      Key: key,
      ContentType: contentType,
    });

    const response = await this.s3Client.send(command);

    return {
      uploadId: response.UploadId,
      key: key,
      chunkSize: this.CHUNK_SIZE,
    };
  }

  async getPresignedPartUrl(
    key: string,
    uploadId: string,
    partNumber: number,
  ): Promise<string> {
    const command = new UploadPartCommand({
      Bucket: this.bucketName,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber,
    });

    // Longer expiration for large file parts
    return await getSignedUrl(this.s3Client, command, { 
      expiresIn: 3600 // 1 hour
    });
  }

  async completeMultipartUpload(
    key: string,
    uploadId: string,
    parts: Array<{ PartNumber: number; ETag: string }>,
  ) {
    const command = new CompleteMultipartUploadCommand({
      Bucket: this.bucketName,
      Key: key,
      UploadId: uploadId,
      MultipartUpload: { 
        Parts: parts.sort((a, b) => a.PartNumber - b.PartNumber) 
      },
    });

    return await this.s3Client.send(command);
  }

  async abortMultipartUpload(key: string, uploadId: string) {
    const command = new AbortMultipartUploadCommand({
      Bucket: this.bucketName,
      Key: key,
      UploadId: uploadId,
    });

    await this.s3Client.send(command);
  }
}

Multipart Controller

// multipart.controller.ts
import { Controller, Post, Body, UseGuards } from '@nestjs/common';
import { MultipartService } from '../s3/multipart.service';
import { JwtAuthGuard } from '../auth/jwt-auth.guard';

@Controller('api/upload/multipart')
@UseGuards(JwtAuthGuard)
export class MultipartController {
  constructor(private readonly multipartService: MultipartService) {}

  @Post('initiate')
  async initiateMultipart(
    @Body() body: { filename: string; contentType: string; fileSize: number },
  ) {
    return await this.multipartService.initiateMultipartUpload(
      body.filename,
      body.contentType,
    );
  }

  @Post('part-url')
  async getPartUrl(
    @Body() body: { key: string; uploadId: string; partNumber: number },
  ) {
    const url = await this.multipartService.getPresignedPartUrl(
      body.key,
      body.uploadId,
      body.partNumber,
    );
    return { url };
  }

  @Post('complete')
  async completeMultipart(
    @Body() body: {
      key: string;
      uploadId: string;
      parts: Array<{ PartNumber: number; ETag: string }>;
    },
  ) {
    return await this.multipartService.completeMultipartUpload(
      body.key,
      body.uploadId,
      body.parts,
    );
  }

  @Post('abort')
  async abortMultipart(@Body() body: { key: string; uploadId: string }) {
    await this.multipartService.abortMultipartUpload(body.key, body.uploadId);
    return { success: true };
  }
}

Client-Side Multipart Upload with Parallel Parts

import axios from "axios";

class MultipartUploader {
  constructor(file, options = {}) {
    this.file = file;
    this.chunkSize = options.chunkSize || 10 * 1024 * 1024; // 10MB
    this.maxConcurrent = options.maxConcurrent || 3; // Upload 3 parts simultaneously
    this.onProgress = options.onProgress || (() => {});
    this.uploadedBytes = 0;
  }

    async upload() {
    // Step 1: Initiate multipart upload
    const { data: initData } = await axios.post(
        '/api/upload/multipart/initiate',
        {
            filename: this.file.name,
            contentType: this.file.type,
            fileSize: this.file.size,
        },
        {
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${token}`,
        },
        }
    );

    const { uploadId, key, chunkSize } = initData;
    this.chunkSize = chunkSize;

    // Step 2: Calculate parts
    const numParts = Math.ceil(this.file.size / this.chunkSize);
    const parts = Array.from({ length: numParts }, (_, i) => i + 1);

    const completedParts: any[] = [];

    // Step 3: Upload parts in batches (concurrency limit)
    while (parts.length > 0) {
        const batch = parts.splice(0, this.maxConcurrent);

        const batchPromises = batch.map(async (partNumber) => {
        const start = (partNumber - 1) * this.chunkSize;
        const end = Math.min(start + this.chunkSize, this.file.size);
        const blob = this.file.slice(start, end);

        // Get presigned URL for this part
        const { data: partData } = await axios.post(
            '/api/upload/multipart/get-presigned-url',
            { key, uploadId, partNumber },
            { headers: { Authorization: `Bearer ${token}` } }
        );

        const { uploadUrl } = partData;

        // Upload the part
        await axios.put(uploadUrl, blob, {
            headers: { 'Content-Type': this.file.type },
            onUploadProgress: (e) => {
            if (e.total) {
                const percentage = Math.round((e.loaded / e.total) * 100);
                updateProgressBar(partNumber, percentage);
            }
            },
        });

        return { PartNumber: partNumber, ETag: partData.etag };
        });

        const results = await Promise.all(batchPromises);
        completedParts.push(...results);
    }

    // Step 4: Complete multipart upload
    await axios.post(
        '/api/upload/multipart/complete',
        { key, uploadId, parts: completedParts },
        { headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` } }
    );

    return key;
    }

    async uploadPart(key: string, uploadId: string, partNumber: number) {
    // Step 1: Get presigned URL for this part
    const { data: urlData } = await axios.post(
        '/api/upload/multipart/part-url',
        { key, uploadId, partNumber },
        {
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${token}`,
        },
        }
    );

    const { url } = urlData;

    // Step 2: Extract chunk
    const start = (partNumber - 1) * this.chunkSize;
    const end = Math.min(start + this.chunkSize, this.file.size);
    const chunk = this.file.slice(start, end);

    // Step 3: Upload chunk via Axios
    const response = await axios.put(url, chunk, {
        headers: { 'Content-Type': this.file.type },
        onUploadProgress: (e) => {
        if (e.total) {
            this.uploadedBytes += e.loaded;
            const totalProgress = (this.uploadedBytes / this.file.size) * 100;
            this.onProgress(totalProgress);
        }
        },
    });

    // Step 4: Extract ETag from response headers
    const etag = response.headers['etag'];
    if (!etag) throw new Error(`Part ${partNumber} upload failed: missing ETag`);

    return {
        PartNumber: partNumber,
        ETag: etag.replace(/"/g, ''),
    };
    }

}

// Usage
const uploader = new MultipartUploader(file, {
  maxConcurrent: 5, // Upload 5 parts at once for faster speed
  onProgress: (percentage) => {
    console.log(`Upload progress: ${percentage.toFixed(2)}%`);
    updateProgressBar(percentage);
  },
});

await uploader.upload();

⚡ Performance Optimizations

1. S3 Transfer Acceleration

Enable S3 Transfer Acceleration for 50-500% faster uploads over long distances:

// In S3Service constructor
this.s3Client = new S3Client({
  region: this.configService.get('AWS_REGION'),
  credentials: { /* ... */ },
  useAccelerateEndpoint: true, // Enable Transfer Acceleration
});

Setup: Enable in S3 bucket settings → Properties → Transfer Acceleration
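
If you prefer code over the console, the same setting can be flipped with the SDK's PutBucketAccelerateConfigurationCommand. This is a one-time setup sketch; the bucket name and region are placeholders, and the caller needs the s3:PutAccelerateConfiguration permission.

// enable-acceleration.ts (setup sketch, run once)
import { S3Client, PutBucketAccelerateConfigurationCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

await s3.send(
  new PutBucketAccelerateConfigurationCommand({
    Bucket: 'your-bucket-name', // placeholder bucket
    AccelerateConfiguration: { Status: 'Enabled' },
  })
);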

2. Optimal Chunk Sizes

| File Size   | Recommended Chunk Size | Reason                    |
| ----------- | ---------------------- | ------------------------- |
| < 100MB     | Single upload          | Overhead not worth it     |
| 100MB - 1GB | 10MB chunks            | Balance speed/reliability |
| 1GB - 5GB   | 25MB chunks            | Fewer API calls           |
| > 5GB       | 100MB chunks           | Maximum efficiency        |

/**
 * Calculates the optimal S3 multipart upload chunk size (in bytes)
 * based on the total file size to balance speed, reliability, and API efficiency.
 */
calculateOptimalChunkSize(fileSize: number): number {
  if (fileSize < 100 * 1024 * 1024) return fileSize; // Single upload
  if (fileSize < 1024 * 1024 * 1024) return 10 * 1024 * 1024; // 10MB
  if (fileSize < 5 * 1024 * 1024 * 1024) return 25 * 1024 * 1024; // 25MB
  return 100 * 1024 * 1024; // 100MB
}

Tips:

  • For unstable networks → smaller chunks (5–10MB) for easier retries
  • For high-speed connections → larger chunks (25–100MB) for better throughput
  • S3 caps multipart uploads at 10,000 parts, so the chunk size must be at least fileSize ÷ 10,000, and at least 5MB for every part except the last (see the sketch after this list)
  • Combine with parallel uploads (e.g., Promise.allSettled()) to fully utilize bandwidth
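
To make the 10,000-part cap concrete, here is a small helper, not from the service above but a sketch under the stated S3 limits, that bumps a preferred chunk size until any file fits:

const MIN_PART_SIZE = 5 * 1024 * 1024; // S3 minimum for all parts except the last
const MAX_PARTS = 10_000;              // S3 multipart upload part limit

function clampChunkSize(fileSize: number, preferredChunkSize: number): number {
  // Smallest chunk that still keeps the part count at or below 10,000
  const minForPartLimit = Math.ceil(fileSize / MAX_PARTS);
  return Math.max(preferredChunkSize, minForPartLimit, MIN_PART_SIZE);
}

// e.g. a 200GB file with a preferred 10MB chunk gets bumped to ~20MB automatically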

3. Parallel Upload Configuration

import axios from 'axios';

const OPTIMAL_CONCURRENCY = {
  // based on network speed
  slow: 2,      // < 5 Mbps
  medium: 3,    // 5-50 Mbps
  fast: 5,      // 50-100 Mbps
  veryFast: 8,  // > 100 Mbps
};

// auto-detect network speed
async function detectNetworkSpeed() {
  const start = Date.now();

  // request to get a 1MB test file
  const response = await axios.get('https://your-cdn.com/test-1mb.bin', {
    responseType: 'blob', // ensure we get the binary data
  });

  // accessing the blob ensures the download fully completes
  const blob = response.data;
  const duration = (Date.now() - start) / 1000; // seconds

  const speedMbps = (blob.size * 8) / (1024 * 1024 * duration); // convert bytes to megabits

  if (speedMbps < 5) return OPTIMAL_CONCURRENCY.slow;
  if (speedMbps < 50) return OPTIMAL_CONCURRENCY.medium;
  if (speedMbps < 100) return OPTIMAL_CONCURRENCY.fast;
  return OPTIMAL_CONCURRENCY.veryFast;
}

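Wiring the detected concurrency into the uploader is then a one-liner. A usage sketch, assuming the MultipartUploader class plus the file, token, and updateProgressBar helpers from earlier:

// Measure the network first, then start the multipart upload with a matching concurrency
const maxConcurrent = await detectNetworkSpeed();

const uploader = new MultipartUploader(file, {
  token,
  maxConcurrent,
  onProgress: (percentage) => updateProgressBar(percentage),
});

await uploader.upload();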

4. Connection Pooling & Keep-Alive

// s3.service.ts
import { NodeHttpHandler } from '@smithy/node-http-handler';
import { Agent } from 'https';

constructor(private configService: ConfigService) {
  const agent = new Agent({
    keepAlive: true,
    maxSockets: 50, // Allow multiple concurrent connections
    keepAliveMsecs: 1000,
  });

  this.s3Client = new S3Client({
    region: this.configService.get('AWS_REGION'),
    credentials: { /* ... */ },
    requestHandler: new NodeHttpHandler({
      httpsAgent: agent,
      connectionTimeout: 5000,
      socketTimeout: 5000,
    }),
  });
}

📊 Monitoring Upload Performance

// upload.service.ts
import { Injectable, Logger } from '@nestjs/common';

@Injectable()
export class UploadService {
  private readonly logger = new Logger(UploadService.name);

  // AlertService is your own alerting abstraction (Slack, PagerDuty, etc.), injected like any provider
  constructor(private readonly alertService: AlertService) {}

  async trackUploadMetrics(
    key: string,
    fileSize: number,
    duration: number,
  ) {
    const speedMbps = (fileSize * 8) / (duration * 1024 * 1024);

    // Log to monitoring service (DataDog, CloudWatch, etc.)
    this.logger.log({
      event: 'upload_completed',
      key,
      fileSize,
      duration,
      speedMbps,
      timestamp: new Date(),
    });

    // Alert if speed is below threshold
    if (speedMbps < 1) {
      this.alertService.warn('Slow upload detected', {
        key,
        speedMbps,
      });
    }
  }
}
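
The duration has to be measured somewhere, usually on the client. One hedged way to feed trackUploadMetrics is to time the upload in the browser and report it; the '/api/upload/metrics' route below is hypothetical and would map to whichever controller calls the service above.

import axios from 'axios';

async function uploadWithMetrics(file: File, token: string) {
  const start = performance.now();
  const key = await uploadFileToS3(file, token); // from the presigned-URL example earlier
  const duration = (performance.now() - start) / 1000; // seconds

  await axios.post(
    '/api/upload/metrics', // hypothetical endpoint
    { key, fileSize: file.size, duration },
    { headers: { Authorization: `Bearer ${token}` } }
  );

  return key;
}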

🛡️ Handling Upload Failures Gracefully

Retry Logic with Exponential Backoff

async function uploadPartWithRetry(chunk, url, maxRetries = 3) {
  let lastError;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await uploadChunk(chunk, url); // uploadChunk: your helper that PUTs the chunk to its presigned URL
    } catch (error) {
      lastError = error;

      if (attempt < maxRetries) {
        // Exponential backoff: 1s, 2s, 4s
        const delay = Math.pow(2, attempt) * 1000;
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
  }

  throw lastError;
}

Resume Failed Uploads

class ResumableUploader extends MultipartUploader {
  constructor(file, options = {}) {
    super(file, options);
    this.uploadState = this.loadUploadState() || {
      uploadId: null,
      key: null,
      completedParts: [],
    };
  }

  saveUploadState() {
    localStorage.setItem(
      `upload_${this.file.name}`,
      JSON.stringify(this.uploadState)
    );
  }

  loadUploadState() {
    const saved = localStorage.getItem(`upload_${this.file.name}`);
    return saved ? JSON.parse(saved) : null;
  }

  async upload() {
    // Resume existing upload if available
    if (this.uploadState.uploadId) {
      return await this.resumeUpload();
    }

    // Start new upload
    return await super.upload();
  }

  async resumeUpload() {
    const { uploadId, key, completedParts } = this.uploadState;
    const completedPartNumbers = new Set(
      completedParts.map(p => p.PartNumber)
    );

    // Upload only remaining parts
    const numParts = Math.ceil(this.file.size / this.chunkSize);
    const remainingParts = [];

    for (let i = 1; i <= numParts; i++) {
      if (!completedPartNumbers.has(i)) {
        remainingParts.push(i);
      }
    }

    // Upload the remaining parts in batches, persisting state as each batch finishes
    const allParts = [...completedParts];
    while (remainingParts.length > 0) {
      const batch = remainingParts.splice(0, this.maxConcurrent);
      const results = await Promise.all(
        batch.map((partNumber) => this.uploadPart(key, uploadId, partNumber))
      );
      allParts.push(...results);
      this.uploadState.completedParts = allParts;
      this.saveUploadState();
    }

    // Complete the upload with every part (previously + newly uploaded)
    await axios.post(
      '/api/upload/multipart/complete',
      { key, uploadId, parts: allParts },
      { headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${this.token}` } }
    );

    localStorage.removeItem(`upload_${this.file.name}`);
    return key;
  }
}

📈 Speed Benchmarks

Typical upload times for a 1GB file:

| Method                             | Time      | Speed     | Notes      |
| ---------------------------------- | --------- | --------- | ---------- |
| Through server                     | 4-6 min   | ~2.5 MB/s | Bottleneck |
| Direct presigned URL               | 1.5-2 min | ~8 MB/s   | Good       |
| Multipart (3 parts)                | 45-60 sec | ~17 MB/s  | Better     |
| Multipart (5 parts) + Acceleration | 30-40 sec | ~25 MB/s  | Best       |

✅ Production Checklist

  • ✅ Direct-to-S3 uploads implemented
  • ✅ Multipart upload for files > 100MB
  • ✅ Parallel part uploads (3-5 concurrent)
  • ✅ S3 Transfer Acceleration enabled
  • ✅ Optimal chunk sizes configured
  • ✅ Connection pooling enabled
  • ✅ Progress tracking implemented
  • ✅ Retry logic with exponential backoff
  • ✅ Resume capability for failed uploads
  • ✅ Upload speed monitoring
  • ✅ S3 lifecycle policies for cleanup (see the sketch after this list)
  • ✅ CloudFront CDN for download speed
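
The lifecycle-policy item isn't shown earlier in the post. One common rule is aborting stale multipart uploads so orphaned parts don't accumulate; a sketch using the SDK's PutBucketLifecycleConfigurationCommand, assuming your objects live under the uploads/ prefix and a placeholder bucket name:

// lifecycle-setup.ts (setup sketch, run once)
import { S3Client, PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

await s3.send(
  new PutBucketLifecycleConfigurationCommand({
    Bucket: 'your-bucket-name', // placeholder bucket
    LifecycleConfiguration: {
      Rules: [
        {
          ID: 'abort-stale-multipart-uploads',
          Status: 'Enabled',
          Filter: { Prefix: 'uploads/' },
          AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 },
        },
      ],
    },
  })
);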

🎯 Key Takeaways

  1. Never route files through your server - Use presigned URLs
  2. Use multipart uploads for large files (> 100MB)
  3. Upload parts in parallel - 3-5 concurrent uploads optimal
  4. Enable S3 Transfer Acceleration - Massive speed boost for global users
  5. Implement retry logic - Network issues happen
  6. Monitor upload speeds - Alert on degraded performance
  7. Optimize chunk sizes - Bigger files need bigger chunks

The difference between a slow upload system and a fast one often comes down to these fundamentals. Get them right, and your users will notice.
