Adam - The Developer

Efficient S3 File Uploads: Speed & Large File Handling in Spring Boot

Uploading files efficiently to S3 isn't just about getting data from point A to point B—it's about doing it fast, reliably, and at scale. Whether you're handling 5MB images or 5GB videos, the right approach makes all the difference.

🚀 The Core Strategy: Direct-to-S3 Uploads

Never route files through your server. This is the #1 performance killer.

❌ The Wrong Way (Slow & Resource-Heavy)

Client → Your Server → S3

Problems: Server bottleneck, memory spikes, timeouts, limited scalability.

✅ The Right Way (Fast & Scalable)

Client → S3 (directly)

Your Server → Generates presigned URL only

Benefits: Maximum speed, no server memory pressure, and scalability limited only by S3 itself


📦 Implementation: Basic Presigned URL Upload

Maven Dependencies

<dependencies>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3</artifactId>
        <version>2.20.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

Application Properties

# application.properties
aws.region=us-east-1
aws.access-key-id=${AWS_ACCESS_KEY_ID}
aws.secret-access-key=${AWS_SECRET_ACCESS_KEY}
aws.s3.bucket-name=${AWS_BUCKET_NAME}

S3 Configuration

// S3Config.java
package com.example.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.http.apache.ApacheHttpClient;

import java.time.Duration;

@Configuration
public class S3Config {

    @Value("${aws.region}")
    private String region;

    @Value("${aws.access-key-id}")
    private String accessKeyId;

    @Value("${aws.secret-access-key}")
    private String secretAccessKey;

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                .region(Region.of(region))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
                .httpClient(ApacheHttpClient.builder()
                        .connectionTimeout(Duration.ofSeconds(5))
                        .socketTimeout(Duration.ofSeconds(5))
                        .maxConnections(50)
                        .build())
                .build();
    }

    @Bean
    public S3Presigner s3Presigner() {
        return S3Presigner.builder()
                .region(Region.of(region))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
                .build();
    }
}
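
Static keys keep the example self-contained, but in production you'd typically lean on the SDK's default credentials chain instead of embedding keys in properties. A minimal sketch of that variant (the class name is illustrative):

// DefaultChainS3Client.java - production-friendly alternative (sketch)
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class DefaultChainS3Client {

    // Resolves credentials from the default chain: environment variables,
    // system properties, ~/.aws/credentials, container/instance metadata
    public static S3Client create(String region) {
        return S3Client.builder()
                .region(Region.of(region))
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build();
    }
}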

S3 Service

// S3Service.java
package com.example.service;

import com.example.dto.PresignedUrlResponse;
import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

import java.time.Duration;
import java.util.UUID;

@Service
@RequiredArgsConstructor
public class S3Service {

    private final S3Presigner s3Presigner;

    @Value("${aws.s3.bucket-name}")
    private String bucketName;

    public PresignedUrlResponse generatePresignedUrl(
            String filename, 
            String contentType, 
            long fileSize) {

        String key = "uploads/" + UUID.randomUUID() + "/" + filename;

        PutObjectRequest putObjectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .contentType(contentType)
                .contentLength(fileSize)
                .build();

        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(15)) // Short expiration for security
                .putObjectRequest(putObjectRequest)
                .build();

        PresignedPutObjectRequest presignedRequest = 
                s3Presigner.presignPutObject(presignRequest);

        return new PresignedUrlResponse(
                presignedRequest.url().toString(), 
                key
        );
    }
}

DTOs

// PresignedUrlResponse.java
package com.example.dto;

import lombok.AllArgsConstructor;
import lombok.Data;

@Data
@AllArgsConstructor
public class PresignedUrlResponse {
    private String uploadUrl;
    private String key;
}

// UploadInitiateRequest.java
package com.example.dto;

import lombok.Data;

@Data
public class UploadInitiateRequest {
    private String filename;
    private String contentType;
    private long fileSize;
}

Upload Controller

// UploadController.java
package com.example.controller;

import com.example.dto.PresignedUrlResponse;
import com.example.dto.UploadInitiateRequest;
import com.example.service.S3Service;
import lombok.RequiredArgsConstructor;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/upload")
@RequiredArgsConstructor
public class UploadController {

    private final S3Service s3Service;

    @PostMapping("/initiate")
    public ResponseEntity<PresignedUrlResponse> initiateUpload(
            @RequestBody UploadInitiateRequest request) {

        PresignedUrlResponse response = s3Service.generatePresignedUrl(
                request.getFilename(),
                request.getContentType(),
                request.getFileSize()
        );

        return ResponseEntity.ok(response);
    }
}
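
The controller above trusts whatever the client sends. Before minting a URL, it's worth validating the request server-side. A minimal sketch (the allow-list and size cap are illustrative policy choices; 5GB happens to be S3's single-PUT limit). The IllegalArgumentException handler shown later under Exception Handling would map these to 400 responses:

// UploadValidator.java - illustrative request validation (sketch)
import java.util.Set;

public final class UploadValidator {

    // Illustrative policy choices - adjust to your application
    private static final Set<String> ALLOWED_TYPES =
            Set.of("image/jpeg", "image/png", "video/mp4", "application/pdf");
    private static final long MAX_SIZE = 5L * 1024 * 1024 * 1024; // 5GB, S3's single-PUT limit

    public static void validate(String contentType, long fileSize) {
        if (!ALLOWED_TYPES.contains(contentType)) {
            throw new IllegalArgumentException("Unsupported content type: " + contentType);
        }
        if (fileSize <= 0 || fileSize > MAX_SIZE) {
            throw new IllegalArgumentException("Invalid file size: " + fileSize);
        }
    }
}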

Client-Side Upload with Progress

import axios from 'axios';

async function uploadFileToS3(file, token) {
  // Step 1: Get presigned URL
  const { data } = await axios.post(
    '/api/upload/initiate',
    {
      filename: file.name,
      contentType: file.type,
      fileSize: file.size,
    },
    {
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${token}`,
      },
    }
  );

  const { uploadUrl, key } = data;

  // Step 2: Upload directly to S3 with progress tracking
  await axios.put(uploadUrl, file, {
    headers: {
      'Content-Type': file.type,
    },
    onUploadProgress: (progressEvent) => {
      if (progressEvent.total) {
        const percentage = Math.round(
          (progressEvent.loaded / progressEvent.total) * 100
        );
        updateProgressBar(percentage); // your UI hook (not shown)
      }
    },
  });

  return key;
}

🚄 Multipart Upload: For Large Files (100MB+)

For files over 100MB, use S3's multipart upload. This provides:

  • Faster uploads: Parallel part uploads
  • Resumable uploads: Retry individual failed parts
  • Better reliability: Network issues don't kill the entire upload

Multipart Service

// MultipartService.java
package com.example.service;

import com.example.dto.*;
import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedUploadPartRequest;
import software.amazon.awssdk.services.s3.presigner.model.UploadPartPresignRequest;

import java.time.Duration;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

@Service
@RequiredArgsConstructor
public class MultipartService {

    private final S3Client s3Client;
    private final S3Presigner s3Presigner;

    @Value("${aws.s3.bucket-name}")
    private String bucketName;

    // Default chunk size; see the chunk-size guidance later in this post
    private static final long CHUNK_SIZE = 10 * 1024 * 1024; // 10MB

    public MultipartInitiateResponse initiateMultipartUpload(
            String filename, 
            String contentType) {

        String key = "uploads/" + UUID.randomUUID() + "/" + filename;

        CreateMultipartUploadRequest createRequest = 
                CreateMultipartUploadRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .contentType(contentType)
                        .build();

        CreateMultipartUploadResponse response = 
                s3Client.createMultipartUpload(createRequest);

        return new MultipartInitiateResponse(
                response.uploadId(),
                key,
                CHUNK_SIZE
        );
    }

    public String getPresignedPartUrl(
            String key, 
            String uploadId, 
            int partNumber) {

        UploadPartRequest uploadPartRequest = UploadPartRequest.builder()
                .bucket(bucketName)
                .key(key)
                .uploadId(uploadId)
                .partNumber(partNumber)
                .build();

        UploadPartPresignRequest presignRequest = 
                UploadPartPresignRequest.builder()
                        .signatureDuration(Duration.ofHours(1)) // Longer for large parts
                        .uploadPartRequest(uploadPartRequest)
                        .build();

        PresignedUploadPartRequest presignedRequest = 
                s3Presigner.presignUploadPart(presignRequest);

        return presignedRequest.url().toString();
    }

    public CompleteMultipartUploadResponse completeMultipartUpload(
            String key,
            String uploadId,
            List<CompletedPartDto> parts) {

        List<CompletedPart> completedParts = parts.stream()
                .map(p -> CompletedPart.builder()
                        .partNumber(p.getPartNumber())
                        .eTag(p.getETag())
                        .build())
                .sorted((a, b) -> Integer.compare(a.partNumber(), b.partNumber()))
                .collect(Collectors.toList());

        CompletedMultipartUpload completedUpload = 
                CompletedMultipartUpload.builder()
                        .parts(completedParts)
                        .build();

        CompleteMultipartUploadRequest completeRequest = 
                CompleteMultipartUploadRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .uploadId(uploadId)
                        .multipartUpload(completedUpload)
                        .build();

        return s3Client.completeMultipartUpload(completeRequest);
    }

    public void abortMultipartUpload(String key, String uploadId) {
        AbortMultipartUploadRequest abortRequest = 
                AbortMultipartUploadRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .uploadId(uploadId)
                        .build();

        s3Client.abortMultipartUpload(abortRequest);
    }
}

Multipart DTOs

// MultipartInitiateResponse.java
package com.example.dto;

import lombok.AllArgsConstructor;
import lombok.Data;

@Data
@AllArgsConstructor
public class MultipartInitiateResponse {
    private String uploadId;
    private String key;
    private long chunkSize;
}

// PartUrlRequest.java
package com.example.dto;

import lombok.Data;

@Data
public class PartUrlRequest {
    private String key;
    private String uploadId;
    private int partNumber;
}

// PartUrlResponse.java
package com.example.dto;

import lombok.AllArgsConstructor;
import lombok.Data;

@Data
@AllArgsConstructor
public class PartUrlResponse {
    private String url;
}

// CompleteMultipartRequest.java
package com.example.dto;

import lombok.Data;
import java.util.List;

@Data
public class CompleteMultipartRequest {
    private String key;
    private String uploadId;
    private List<CompletedPartDto> parts;
}

// CompletedPartDto.java
package com.example.dto;

import lombok.Data;

@Data
public class CompletedPartDto {
    private int partNumber;
    private String eTag;
}

// AbortMultipartRequest.java
package com.example.dto;

import lombok.Data;

@Data
public class AbortMultipartRequest {
    private String key;
    private String uploadId;
}

Multipart Controller

// MultipartController.java
package com.example.controller;

import com.example.dto.*;
import com.example.service.MultipartService;
import lombok.RequiredArgsConstructor;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.Map;

@RestController
@RequestMapping("/api/upload/multipart")
@RequiredArgsConstructor
public class MultipartController {

    private final MultipartService multipartService;

    @PostMapping("/initiate")
    public ResponseEntity<MultipartInitiateResponse> initiateMultipart(
            @RequestBody UploadInitiateRequest request) {

        MultipartInitiateResponse response = 
                multipartService.initiateMultipartUpload(
                        request.getFilename(),
                        request.getContentType()
                );

        return ResponseEntity.ok(response);
    }

    @PostMapping("/part-url")
    public ResponseEntity<PartUrlResponse> getPartUrl(
            @RequestBody PartUrlRequest request) {

        String url = multipartService.getPresignedPartUrl(
                request.getKey(),
                request.getUploadId(),
                request.getPartNumber()
        );

        return ResponseEntity.ok(new PartUrlResponse(url));
    }

    @PostMapping("/complete")
    public ResponseEntity<Map<String, String>> completeMultipart(
            @RequestBody CompleteMultipartRequest request) {

        multipartService.completeMultipartUpload(
                request.getKey(),
                request.getUploadId(),
                request.getParts()
        );

        return ResponseEntity.ok(Map.of("success", "true", "key", request.getKey()));
    }

    @PostMapping("/abort")
    public ResponseEntity<Map<String, Boolean>> abortMultipart(
            @RequestBody AbortMultipartRequest request) {

        multipartService.abortMultipartUpload(
                request.getKey(),
                request.getUploadId()
        );

        return ResponseEntity.ok(Map.of("success", true));
    }
}

Client-Side Multipart Upload with Parallel Parts

import axios from "axios";

class MultipartUploader {
  constructor(file, options = {}) {
    this.file = file;
    this.chunkSize = options.chunkSize || 10 * 1024 * 1024; // 10MB
    this.maxConcurrent = options.maxConcurrent || 3; // Upload 3 parts simultaneously
    this.onProgress = options.onProgress || (() => {});
    this.partProgress = {}; // latest uploaded byte count per part (axios progress is cumulative per request)
  }

  async upload(token) {
    // Step 1: Initiate multipart upload
    const { data: initData } = await axios.post(
      '/api/upload/multipart/initiate',
      {
        filename: this.file.name,
        contentType: this.file.type,
        fileSize: this.file.size,
      },
      {
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${token}`,
        },
      }
    );

    const { uploadId, key, chunkSize } = initData;
    this.chunkSize = chunkSize;

    // Step 2: Calculate parts
    const numParts = Math.ceil(this.file.size / this.chunkSize);
    const parts = Array.from({ length: numParts }, (_, i) => i + 1);

    const completedParts = [];

    // Step 3: Upload parts in batches (concurrency limit)
    while (parts.length > 0) {
      const batch = parts.splice(0, this.maxConcurrent);

      const batchPromises = batch.map(async (partNumber) => {
        return await this.uploadPart(key, uploadId, partNumber, token);
      });

      const results = await Promise.all(batchPromises);
      completedParts.push(...results);
    }

    // Step 4: Complete multipart upload
    await axios.post(
      '/api/upload/multipart/complete',
      { key, uploadId, parts: completedParts },
      {
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${token}`,
        },
      }
    );

    return key;
  }

  async uploadPart(key, uploadId, partNumber, token) {
    // Step 1: Get presigned URL for this part
    const { data: urlData } = await axios.post(
      '/api/upload/multipart/part-url',
      { key, uploadId, partNumber },
      {
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${token}`,
        },
      }
    );

    const { url } = urlData;

    // Step 2: Extract chunk
    const start = (partNumber - 1) * this.chunkSize;
    const end = Math.min(start + this.chunkSize, this.file.size);
    const chunk = this.file.slice(start, end);

    // Step 3: Upload chunk
    const response = await axios.put(url, chunk, {
      headers: { 'Content-Type': this.file.type },
      onUploadProgress: (e) => {
        if (e.total) {
          // e.loaded is cumulative for this request, so store the latest
          // value per part and sum across parts to avoid double counting
          this.partProgress[partNumber] = e.loaded;
          const uploaded = Object.values(this.partProgress)
            .reduce((sum, bytes) => sum + bytes, 0);
          this.onProgress((uploaded / this.file.size) * 100);
        }
      },
    });

    // Step 4: Extract ETag from response headers
    const etag = response.headers['etag'];
    if (!etag) throw new Error(`Part ${partNumber} upload failed: missing ETag`);

    return {
      partNumber: partNumber,
      eTag: etag.replace(/"/g, ''),
    };
  }
}

// Usage
const uploader = new MultipartUploader(file, {
  maxConcurrent: 5, // Upload 5 parts at once for faster speed
  onProgress: (percentage) => {
    console.log(`Upload progress: ${percentage.toFixed(2)}%`);
    updateProgressBar(percentage);
  },
});

await uploader.upload(authToken);

⚡ Performance Optimizations

1. S3 Transfer Acceleration

Enable S3 Transfer Acceleration for uploads that AWS reports can be 50-500% faster over long distances:

// S3Config.java - Updated (acceleration is opted into via S3Configuration,
// which works for both the client and the presigner)
import software.amazon.awssdk.services.s3.S3Configuration;

@Bean
public S3Client s3Client() {
    return S3Client.builder()
            .region(Region.of(region))
            .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
            .serviceConfiguration(S3Configuration.builder()
                    .accelerateModeEnabled(true) // Enable Transfer Acceleration
                    .build())
            .httpClient(ApacheHttpClient.builder()
                    .connectionTimeout(Duration.ofSeconds(5))
                    .socketTimeout(Duration.ofSeconds(5))
                    .maxConnections(50)
                    .build())
            .build();
}

@Bean
public S3Presigner s3Presigner() {
    return S3Presigner.builder()
            .region(Region.of(region))
            .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
            .serviceConfiguration(S3Configuration.builder()
                    .accelerateModeEnabled(true) // Presigned URLs then use the accelerate endpoint
                    .build())
            .build();
}

Setup: Enable in S3 bucket settings → Properties → Transfer Acceleration
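
You can also flip it on programmatically with the SDK; a one-off admin sketch (the wrapper class is illustrative):

// AccelerationSetup.java - one-off admin call (sketch)
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.AccelerateConfiguration;
import software.amazon.awssdk.services.s3.model.BucketAccelerateStatus;
import software.amazon.awssdk.services.s3.model.PutBucketAccelerateConfigurationRequest;

public class AccelerationSetup {

    // Enables Transfer Acceleration on the bucket
    public static void enable(S3Client s3Client, String bucketName) {
        s3Client.putBucketAccelerateConfiguration(
                PutBucketAccelerateConfigurationRequest.builder()
                        .bucket(bucketName)
                        .accelerateConfiguration(AccelerateConfiguration.builder()
                                .status(BucketAccelerateStatus.ENABLED)
                                .build())
                        .build());
    }
}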

2. Optimal Chunk Sizes

| File Size   | Recommended Chunk Size | Reason                    |
|-------------|------------------------|---------------------------|
| < 100MB     | Single upload          | Overhead not worth it     |
| 100MB - 1GB | 10MB chunks            | Balance speed/reliability |
| 1GB - 5GB   | 25MB chunks            | Fewer API calls           |
| > 5GB       | 100MB chunks           | Maximum efficiency        |

/**
 * Calculates the optimal S3 multipart upload chunk size (in bytes)
 * based on the total file size to balance speed, reliability, and API efficiency.
 */
public long calculateOptimalChunkSize(long fileSize) {
    long MB_100 = 100L * 1024 * 1024;
    long GB_1 = 1024L * 1024 * 1024;
    long GB_5 = 5L * 1024 * 1024 * 1024;

    if (fileSize < MB_100) return fileSize; // Single upload
    if (fileSize < GB_1) return 10L * 1024 * 1024; // 10MB
    if (fileSize < GB_5) return 25L * 1024 * 1024; // 25MB
    return 100L * 1024 * 1024; // 100MB
}

Tips:

  • For unstable networks → smaller chunks (5–10MB) for easier retries
  • For high-speed connections → larger chunks (25–100MB) for better throughput
  • AWS caps multipart uploads at 10,000 parts, so the chunk size must be at least fileSize / 10,000 (and every part except the last must be at least 5MB)
  • Combine with parallel uploads (e.g., CompletableFuture, as sketched below) to fully utilize bandwidth
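
For the JVM side of that last tip, here is a minimal sketch of parallel part uploads with CompletableFuture. The class is illustrative, and uploadPart is a hypothetical callback that PUTs one chunk and returns its CompletedPart:

// ParallelPartUploader.java - minimal parallelism sketch
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.IntFunction;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import software.amazon.awssdk.services.s3.model.CompletedPart;

public class ParallelPartUploader {

    // Hypothetical callback: PUTs one chunk and returns its CompletedPart
    private final IntFunction<CompletedPart> uploadPart;

    public ParallelPartUploader(IntFunction<CompletedPart> uploadPart) {
        this.uploadPart = uploadPart;
    }

    public List<CompletedPart> uploadAll(int numParts, int concurrency) {
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        try {
            // Kick off every part on the bounded pool...
            List<CompletableFuture<CompletedPart>> futures =
                    IntStream.rangeClosed(1, numParts)
                            .mapToObj(part -> CompletableFuture.supplyAsync(
                                    () -> uploadPart.apply(part), pool))
                            .collect(Collectors.toList());

            // ...then wait for all of them (join() propagates failures)
            return futures.stream()
                    .map(CompletableFuture::join)
                    .collect(Collectors.toList());
        } finally {
            pool.shutdown();
        }
    }
}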

3. Parallel Upload Configuration

import axios from 'axios';

const OPTIMAL_CONCURRENCY = {
  // based on network speed
  slow: 2,      // < 5 Mbps
  medium: 3,    // 5-50 Mbps
  fast: 5,      // 50-100 Mbps
  veryFast: 8,  // > 100 Mbps
};

// auto-detect network speed
async function detectNetworkSpeed() {
  const start = Date.now();

  const response = await axios.get('https://your-cdn.com/test-1mb.bin', {
    responseType: 'blob',
  });

  const blob = response.data;
  const duration = (Date.now() - start) / 1000; // seconds

  const speedMbps = (blob.size * 8) / (1024 * 1024 * duration);

  if (speedMbps < 5) return OPTIMAL_CONCURRENCY.slow;
  if (speedMbps < 50) return OPTIMAL_CONCURRENCY.medium;
  if (speedMbps < 100) return OPTIMAL_CONCURRENCY.fast;
  return OPTIMAL_CONCURRENCY.veryFast;
}

4. Connection Pooling & Keep-Alive

// S3Config.java - Enhanced connection pooling
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;

@Bean
public S3Client s3Client() {
    SdkHttpClient httpClient = ApacheHttpClient.builder()
            .connectionTimeout(Duration.ofSeconds(5))
            .socketTimeout(Duration.ofSeconds(5))
            .maxConnections(50) // Allow multiple concurrent connections
            .connectionTimeToLive(Duration.ofMinutes(1))
            .useIdleConnectionReaper(true)
            .build();

    return S3Client.builder()
            .region(Region.of(region))
            .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
            .httpClient(httpClient)
            .build();
}

📊 Monitoring Upload Performance

// UploadMetricsService.java
package com.example.service;

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class UploadMetricsService {

    public void trackUploadMetrics(
            String key,
            long fileSize,
            long durationMs) {

        double durationSeconds = durationMs / 1000.0;
        double speedMbps = (fileSize * 8.0) / (durationSeconds * 1024 * 1024);

        // Log to monitoring service (DataDog, CloudWatch, etc.)
        log.info("Upload completed - Key: {}, Size: {} bytes, Duration: {} ms, Speed: {} Mbps",
                key, fileSize, durationMs, String.format("%.2f", speedMbps));

        // Alert if speed is below threshold
        if (speedMbps < 1.0) {
            log.warn("Slow upload detected - Key: {}, Speed: {} Mbps",
                    key, String.format("%.2f", speedMbps));
            // Trigger alert to monitoring system
        }
    }
}

Integration with Micrometer for Metrics

// MetricsConfig.java
package com.example.config;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {

    @Bean
    public Timer uploadTimer(MeterRegistry registry) {
        return Timer.builder("s3.upload.duration")
                .description("S3 upload duration")
                .register(registry);
    }
}

// Usage in service
@Service
@RequiredArgsConstructor
public class UploadService {
    private final Timer uploadTimer;

    public void recordUpload(Runnable uploadTask) {
        uploadTimer.record(uploadTask);
    }
}
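
Logs are a start, but throughput can also be exported as a metric. A small Micrometer sketch that records the speedMbps value computed in UploadMetricsService above (the metric name and component are illustrative choices):

// UploadSpeedMetrics.java - throughput as a Micrometer metric (sketch)
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class UploadSpeedMetrics {

    private final DistributionSummary uploadSpeed;

    public UploadSpeedMetrics(MeterRegistry registry) {
        // Distribution of observed upload throughput values
        this.uploadSpeed = DistributionSummary.builder("s3.upload.speed")
                .baseUnit("Mbps")
                .description("Observed S3 upload throughput")
                .register(registry);
    }

    // Call with the speedMbps computed in UploadMetricsService
    public void record(double speedMbps) {
        uploadSpeed.record(speedMbps);
    }
}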

🛡️ Handling Upload Failures Gracefully

Retry Logic with Exponential Backoff

async function uploadPartWithRetry(chunk, url, maxRetries = 3) {
  let lastError;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      // uploadChunk performs the actual PUT (e.g., axios.put(url, chunk))
      return await uploadChunk(chunk, url);
    } catch (error) {
      lastError = error;

      if (attempt < maxRetries) {
        // Exponential backoff: 1s after attempt 1, 2s after attempt 2
        const delay = Math.pow(2, attempt - 1) * 1000;
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
  }

  throw lastError;
}
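
On the server side, the AWS SDK already retries transient failures, and the retry mode can be set explicitly when building the client. A sketch under that assumption (whether to override the default is your call; the factory class is illustrative):

// RetryingS3ClientFactory.java - explicit SDK retry configuration (sketch)
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.core.retry.RetryMode;
import software.amazon.awssdk.core.retry.RetryPolicy;
import software.amazon.awssdk.services.s3.S3Client;

public class RetryingS3ClientFactory {

    // STANDARD mode: exponential backoff with jitter, 3 attempts total by default
    public static S3Client create() {
        return S3Client.builder()
                .overrideConfiguration(ClientOverrideConfiguration.builder()
                        .retryPolicy(RetryPolicy.forRetryMode(RetryMode.STANDARD))
                        .build())
                .build();
    }
}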

Resume Failed Uploads

class ResumableUploader extends MultipartUploader {
  constructor(file, options = {}) {
    super(file, options);
    this.uploadState = this.loadUploadState() || {
      uploadId: null,
      key: null,
      completedParts: [],
    };
  }

  saveUploadState() {
    localStorage.setItem(
      `upload_${this.file.name}`,
      JSON.stringify(this.uploadState)
    );
  }

  loadUploadState() {
    const saved = localStorage.getItem(`upload_${this.file.name}`);
    return saved ? JSON.parse(saved) : null;
  }

  async upload(token) {
    // Resume existing upload if available
    if (this.uploadState.uploadId) {
      return await this.resumeUpload(token);
    }

    // Start a new upload (note: for a fresh upload to be resumable,
    // uploadId/key/completedParts must be recorded via saveUploadState()
    // as the upload progresses; MultipartUploader doesn't do this itself)
    return await super.upload(token);
  }

  async resumeUpload(token) {
    const { uploadId, key, completedParts } = this.uploadState;
    const completedPartNumbers = new Set(
      completedParts.map(p => p.partNumber)
    );

    // Upload only remaining parts
    const numParts = Math.ceil(this.file.size / this.chunkSize);
    const remainingParts = [];

    for (let i = 1; i <= numParts; i++) {
      if (!completedPartNumbers.has(i)) {
        remainingParts.push(i);
      }
    }

    // Upload remaining parts in batches, persisting progress after each batch
    const newParts = [];
    while (remainingParts.length > 0) {
      const batch = remainingParts.splice(0, this.maxConcurrent);
      const results = await Promise.all(
        batch.map(partNum => this.uploadPart(key, uploadId, partNum, token))
      );
      newParts.push(...results);
      // Save state so an interruption here can still resume later
      this.uploadState.completedParts.push(...results);
      this.saveUploadState();
    }

    // Complete upload
    const allParts = [...completedParts, ...newParts];
    await axios.post(
      '/api/upload/multipart/complete',
      { key, uploadId, parts: allParts },
      {
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${token}`,
        },
      }
    );

    localStorage.removeItem(`upload_${this.file.name}`);
    return key;
  }
}

Spring Boot Side: Handling Orphaned Uploads

// MultipartCleanupService.java
package com.example.service;

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.List;

@Slf4j
@Service
@RequiredArgsConstructor
public class MultipartCleanupService {

    private final S3Client s3Client;

    @Value("${aws.s3.bucket-name}")
    private String bucketName;

    // Run cleanup daily at 2 AM
    @Scheduled(cron = "0 0 2 * * ?")
    public void cleanupAbandonedUploads() {
        log.info("Starting cleanup of abandoned multipart uploads");

        ListMultipartUploadsRequest listRequest = ListMultipartUploadsRequest.builder()
                .bucket(bucketName)
                .build();

        ListMultipartUploadsResponse response = s3Client.listMultipartUploads(listRequest);
        // Note: this reads only the first page (up to 1,000 uploads); use
        // response.isTruncated() and the key/upload-id markers to paginate
        List<MultipartUpload> uploads = response.uploads();

        Instant oneDayAgo = Instant.now().minus(1, ChronoUnit.DAYS);

        int abortedCount = 0;
        for (MultipartUpload upload : uploads) {
            if (upload.initiated().isBefore(oneDayAgo)) {
                try {
                    AbortMultipartUploadRequest abortRequest = 
                            AbortMultipartUploadRequest.builder()
                                    .bucket(bucketName)
                                    .key(upload.key())
                                    .uploadId(upload.uploadId())
                                    .build();

                    s3Client.abortMultipartUpload(abortRequest);
                    abortedCount++;
                    log.info("Aborted abandoned upload: {} (initiated: {})", 
                            upload.key(), upload.initiated());
                } catch (Exception e) {
                    log.error("Failed to abort upload: {}", upload.key(), e);
                }
            }
        }

        log.info("Cleanup completed. Aborted {} uploads", abortedCount);
    }
}
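
The scheduled task works, but S3 can also do this housekeeping itself through a bucket lifecycle rule (the "lifecycle policies" item in the checklist below). A sketch of applying one with the SDK; the rule ID, prefix, and one-day threshold are illustrative:

// LifecycleSetup.java - abort incomplete multipart uploads via lifecycle rule (sketch)
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

public class LifecycleSetup {

    // One-off admin call: have S3 abort any multipart upload still
    // incomplete one day after initiation
    public static void applyAbortRule(S3Client s3Client, String bucketName) {
        LifecycleRule rule = LifecycleRule.builder()
                .id("abort-incomplete-multipart-uploads")
                .status(ExpirationStatus.ENABLED)
                .filter(LifecycleRuleFilter.builder().prefix("uploads/").build())
                .abortIncompleteMultipartUpload(AbortIncompleteMultipartUpload.builder()
                        .daysAfterInitiation(1)
                        .build())
                .build();

        s3Client.putBucketLifecycleConfiguration(
                PutBucketLifecycleConfigurationRequest.builder()
                        .bucket(bucketName)
                        .lifecycleConfiguration(BucketLifecycleConfiguration.builder()
                                .rules(rule)
                                .build())
                        .build());
    }
}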

📈 Speed Benchmarks

Illustrative upload times for a 1GB file (actual figures depend on network, region, and client):

| Method                             | Time      | Speed     | Notes      |
|------------------------------------|-----------|-----------|------------|
| Through server                     | 4-6 min   | ~2.5 MB/s | Bottleneck |
| Direct presigned URL               | 1.5-2 min | ~8 MB/s   | Good       |
| Multipart (3 parts)                | 45-60 sec | ~17 MB/s  | Better     |
| Multipart (5 parts) + Acceleration | 30-40 sec | ~25 MB/s  | Best       |

✅ Production Checklist

  • ✅ Direct-to-S3 uploads implemented
  • ✅ Multipart upload for files > 100MB
  • ✅ Parallel part uploads (3-5 concurrent)
  • ✅ S3 Transfer Acceleration enabled
  • ✅ Optimal chunk sizes configured
  • ✅ Connection pooling enabled (Apache HTTP Client)
  • ✅ Progress tracking implemented
  • ✅ Retry logic with exponential backoff
  • ✅ Resume capability for failed uploads
  • ✅ Upload speed monitoring (Micrometer/CloudWatch)
  • ✅ Scheduled cleanup for abandoned multipart uploads
  • ✅ S3 lifecycle policies for cleanup
  • ✅ CloudFront CDN for download speed
  • ✅ CORS configuration on S3 bucket
  • ✅ Security: Short presigned URL expiration times
  • ✅ Exception handling and logging

🔧 Additional Configuration

Enable Scheduled Tasks

// Application.java
package com.example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

@SpringBootApplication
@EnableScheduling  // Enable @Scheduled annotations
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

CORS Configuration for S3 Bucket

Add this CORS configuration to your S3 bucket:

[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
        "AllowedOrigins": ["https://yourdomain.com"],
        "ExposeHeaders": ["ETag"],
        "MaxAgeSeconds": 3000
    }
]
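
The ExposeHeaders entry is load-bearing: without it the browser can't read the ETag header that the multipart client extracts from each part response. If you'd rather manage CORS in code, a sketch that mirrors the JSON above (the wrapper class is illustrative):

// CorsSetup.java - apply the same CORS rules via the SDK (sketch)
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

public class CorsSetup {

    // One-off admin call mirroring the JSON CORS config above
    public static void applyCors(S3Client s3Client, String bucketName) {
        CORSRule rule = CORSRule.builder()
                .allowedHeaders("*")
                .allowedMethods("GET", "PUT", "POST", "DELETE")
                .allowedOrigins("https://yourdomain.com")
                .exposeHeaders("ETag") // lets the browser read part ETags
                .maxAgeSeconds(3000)
                .build();

        s3Client.putBucketCors(PutBucketCorsRequest.builder()
                .bucket(bucketName)
                .corsConfiguration(CORSConfiguration.builder()
                        .corsRules(rule)
                        .build())
                .build());
    }
}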

Exception Handling

// GlobalExceptionHandler.java
package com.example.exception;

import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;
import software.amazon.awssdk.services.s3.model.S3Exception;

import java.util.Map;

@Slf4j
@RestControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(S3Exception.class)
    public ResponseEntity<Map<String, String>> handleS3Exception(S3Exception e) {
        log.error("S3 operation failed", e);
        return ResponseEntity
                .status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(Map.of(
                        "error", "S3 operation failed",
                        "message", e.awsErrorDetails().errorMessage()
                ));
    }

    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<Map<String, String>> handleIllegalArgument(
            IllegalArgumentException e) {
        return ResponseEntity
                .status(HttpStatus.BAD_REQUEST)
                .body(Map.of("error", e.getMessage()));
    }
}

Security: JWT Authentication Example

// SecurityConfig.java
package com.example.config;

import lombok.RequiredArgsConstructor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;

@Configuration
@EnableWebSecurity
@RequiredArgsConstructor
public class SecurityConfig {

    private final JwtAuthenticationFilter jwtAuthFilter; // your JWT filter implementation (not shown here)

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
                .csrf(csrf -> csrf.disable())
                .authorizeHttpRequests(auth -> auth
                        .requestMatchers("/api/upload/**").authenticated()
                        .anyRequest().permitAll()
                )
                .sessionManagement(session -> session
                        .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
                )
                .addFilterBefore(jwtAuthFilter, UsernamePasswordAuthenticationFilter.class);

        return http.build();
    }
}

Application Properties - Complete Example

# application.properties

# AWS Configuration
aws.region=us-east-1
aws.access-key-id=${AWS_ACCESS_KEY_ID}
aws.secret-access-key=${AWS_SECRET_ACCESS_KEY}
aws.s3.bucket-name=${AWS_BUCKET_NAME}

# Server Configuration
server.port=8080
server.max-http-header-size=65536

# File Upload Limits (for metadata only, not actual files)
spring.servlet.multipart.enabled=false

# Logging (SDK-wide DEBUG is verbose; handy during integration, lower it in production)
logging.level.software.amazon.awssdk=DEBUG
logging.level.com.example=INFO

# Connection Pool
aws.s3.connection-timeout=5000
aws.s3.socket-timeout=5000
aws.s3.max-connections=50

# Scheduled Tasks
spring.task.scheduling.pool.size=5

🎯 Key Takeaways

  1. Never route files through your server - Use presigned URLs for direct S3 uploads
  2. Use multipart uploads for large files (> 100MB) with proper chunk sizing
  3. Upload parts in parallel - 3-5 concurrent uploads optimal for most networks
  4. Enable S3 Transfer Acceleration - Massive speed boost for global users
  5. Implement retry logic - Network issues happen, plan for them
  6. Monitor upload speeds - Use Micrometer/CloudWatch to track performance
  7. Optimize chunk sizes - Bigger files need bigger chunks (10-100MB)
  8. Clean up abandoned uploads - Use scheduled tasks to abort old multipart uploads
  9. Configure connection pooling - Apache HTTP Client with keep-alive and multiple connections
  10. Secure your endpoints - Always authenticate upload initiation requests

Performance Checklist Summary

  • Architecture: Direct-to-S3 (not through server)
  • Large Files: Multipart upload with optimal chunking
  • Parallelism: 3-5 concurrent part uploads
  • Speed: Transfer Acceleration enabled
  • Reliability: Exponential backoff retry logic
  • Resumability: State persistence for interrupted uploads
  • Monitoring: Metrics and alerts configured
  • Cleanup: Scheduled task for orphaned uploads
  • Security: JWT auth + short presigned URL expiration

The difference between a slow upload system and a fast one often comes down to these fundamentals. Get them right, and your users will notice.

