The Architecture That Lets Us Sleep: Scalable Uploads with S3 presigned URLs

Kelechi Oliver A.

I've tackled the complexities of file management across numerous projects, and one truth always holds: large file uploads pose a significant threat to backend stability and available bandwidth. The traditional method of funnelling huge files through your application server is a recipe for bottlenecks. Fortunately, we don't have to settle for that complexity. By integrating Amazon S3, we can utilize features like pre-signed URLs and Multipart Uploads to offload the entire data transfer process. This crucial step not only simplifies our architecture but ensures the upload remains scalable and efficient without overloading our own infrastructure.

In this article, we will build a Next.js video gallery app optimized for handling large file uploads. To ensure maximum performance and keep the interface snappy, we'll implement file chunking and execute the entire upload process off the main UI thread. This approach eliminates blocking, resulting in a significantly faster and more reliable user experience.

The complete project can be found on GitHub.

Prerequisites

Before diving in, ensure you have the following:

  • AWS Account with permissions to use S3
  • Node.js (v18+) installed locally

An AWS S3 Pre-Signed URL is a special, time-limited link created by your backend that provides temporary access to a private S3 object. It allows a client that holds no AWS credentials to securely upload or download a file directly to S3, without ever exposing your sensitive access keys. For detailed documentation, see Download and upload objects with presigned URLs - Amazon Simple Storage Service.

With the foundational concepts established, we're done with definitions and ready to build. Let's pivot to development and construct a practical application. This project will serve as a clear, working illustration of our key takeaway: combining S3 Pre-Signed URLs with Multipart Uploads radically improves both the efficiency and the user experience of handling large file uploads.

This is what we'll be building. The complete code can be found on GitHub.

Initialise a Next.js project

npx create-next-app@latest mini-video-gallery --typescript --tailwind --eslint --app --src-dir --import-alias "@/*"
cd mini-video-gallery

Next, let’s install the required dependencies

# AWS SDK for S3 operations
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner

# shadcn/ui setup
npx shadcn@latest init
npx shadcn@latest add button progress card separator

Since we’re working with AWS, we need to set up environment variables for the credentials used to authenticate requests.

Create a new file, .env.local, and paste the following inside:

AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
AWS_REGION=us-east-1
AWS_S3_BUCKET_NAME=your_s3_bucket_name

Get an access key from the AWS IAM console; read more on how to generate access keys here. Also, ensure you have an existing S3 bucket or create a new one.

Replace your_aws_access_key_id, your_aws_secret_access_key and your_s3_bucket_name with their actual values.

Next, let's update the Next.js configuration. Note that webpack 5 (which Next.js uses) already supports web workers out of the box via the `new Worker(new URL(...))` syntax; the experimental `webpackBuildWorker` flag below additionally runs webpack compilation in a separate worker to keep builds fast.

Update next.config.ts

import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  experimental: {
    webpackBuildWorker: true,
  },
};

export default nextConfig;

Now let’s set up our API routes to handle generating pre-signed URLs.

Note: The code in the following sections consists of snippets; check the relevant files on GitHub for the complete implementation.

  1. src/app/api/presigned-url/route.ts This route generates presigned URLs for simple uploads (<20 MB)
import { NextRequest, NextResponse } from 'next/server';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Client = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export async function POST(request: NextRequest) {
  const { fileName, fileType, fileSize } = await request.json();

  const timestamp = Date.now();
  const uniqueFileName = `${timestamp}-${fileName}`;

  const command = new PutObjectCommand({
    Bucket: process.env.AWS_S3_BUCKET_NAME,
    Key: uniqueFileName,
    ContentType: fileType,
  });

  const presignedUrl = await getSignedUrl(s3Client, command, {
    expiresIn: 3600,
  });

  return NextResponse.json({
    presignedUrl,
    fileName: uniqueFileName,
    fileSize,
  });
}
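On the client side, a simple upload then takes just two requests: ask our route for a presigned URL, then PUT the file bytes straight to S3. Here is a rough sketch of what that client helper could look like (the `uploadFileSimple` name is illustrative, not from the repo):

```typescript
// Hypothetical client helper: request a presigned URL, then PUT the file to S3.
export async function uploadFileSimple(file: File): Promise<string> {
  // 1. Ask our API route for a presigned PUT URL
  const res = await fetch('/api/presigned-url', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fileName: file.name,
      fileType: file.type,
      fileSize: file.size,
    }),
  });
  const { presignedUrl, fileName } = await res.json();

  // 2. PUT the bytes directly to S3 -- the file never touches our server
  const upload = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!upload.ok) throw new Error(`S3 upload failed: ${upload.status}`);

  return fileName; // the unique key the backend generated
}
```

Notice that the Content-Type sent here must match the ContentType the URL was signed with, or S3 will reject the request.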
  2. src/app/api/multipart-upload/route.ts This route handles multipart uploads for large files (>20 MB). Get the complete code from the project repository on GitHub
import { NextRequest, NextResponse } from 'next/server';
import { 
  S3Client, 
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// S3 client configuration...

export async function POST(request: NextRequest) {
  const { action, fileName, fileType, uploadId, partNumber, parts } = await request.json();

  switch (action) {
    case 'initiate':
      return await initiateMultipartUpload(fileName, fileType);
    case 'getUploadUrl':
      return await getUploadPartUrl(fileName, uploadId, partNumber);
    case 'complete':
      return await completeMultipartUpload(fileName, uploadId, parts);
    // ... other cases
  }
}
  3. src/app/api/files/route.ts This route fetches uploaded videos from S3
import { S3Client, ListObjectsV2Command, GetObjectCommand } from '@aws-sdk/client-s3';

// S3 client configuration as before...

export async function GET() {
  const command = new ListObjectsV2Command({
    Bucket: process.env.AWS_S3_BUCKET_NAME!,
    MaxKeys: 100,
  });

  const response = await s3Client.send(command);

  // Filter for video files and generate signed URLs
  const videoExtensions = ['.mp4', '.mov', '.avi', '.mkv', '.webm', '.m4v'];
  // ... processing logic
}

Now, let’s create a web worker to handle file uploads in the background

  1. src/workers/multipart-upload.worker.ts This worker handles large file uploads in the background
interface MultipartUploadData {
  fileName: string;
  fileType: string;
  file: File;
  chunkSize: number;
}

self.onmessage = async function(e) {
  const { type, data } = e.data;

  if (type === 'START_MULTIPART_UPLOAD') {
    await handleMultipartUpload(data);
  }
};

async function handleMultipartUpload(data: MultipartUploadData) {
  // 1. Initiate a multipart upload
  // 2. Split the file into chunks
  // 3. Upload each part with progress tracking
  // 4. Complete multipart upload
}

This web worker runs outside the main UI thread: it handles file chunking, provides real-time progress updates, and manages errors and retries.
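Fleshing out the four-step skeleton, the worker's upload loop might look like this — a sketch only; the actual worker in the repo also handles retries and error messages back to the main thread:

```typescript
// Pure helper: number of parts needed for a file of `fileSize` bytes
export function partCount(fileSize: number, chunkSize: number): number {
  return Math.ceil(fileSize / chunkSize);
}

async function handleMultipartUpload(data: {
  file: File;
  fileName: string;
  fileType: string;
  chunkSize: number; // e.g. 10 MB -- note S3's minimum part size is 5 MB
}) {
  const { file, fileName, fileType, chunkSize } = data;

  // 1. Initiate the multipart upload via our API route
  const init = await fetch('/api/multipart-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'initiate', fileName, fileType }),
  }).then((r) => r.json());

  const totalParts = partCount(file.size, chunkSize);
  const parts: { ETag: string; PartNumber: number }[] = [];

  // 2 & 3. Slice the file and upload each chunk through a presigned part URL
  for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
    const start = (partNumber - 1) * chunkSize;
    const chunk = file.slice(start, start + chunkSize);

    const { presignedUrl } = await fetch('/api/multipart-upload', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        action: 'getUploadUrl',
        fileName: init.fileName,
        uploadId: init.uploadId,
        partNumber,
      }),
    }).then((r) => r.json());

    const res = await fetch(presignedUrl, { method: 'PUT', body: chunk });
    // S3 returns each part's ETag in a response header; the full list of
    // ETags is required to complete the upload
    parts.push({ ETag: res.headers.get('ETag')!, PartNumber: partNumber });

    // Report progress back to the main thread
    self.postMessage({
      type: 'PROGRESS',
      progress: Math.round((partNumber / totalParts) * 100),
    });
  }

  // 4. Ask S3 (via our route) to assemble the parts into one object
  await fetch('/api/multipart-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      action: 'complete',
      fileName: init.fileName,
      uploadId: init.uploadId,
      parts,
    }),
  });

  self.postMessage({ type: 'COMPLETE', fileName: init.fileName });
}
```

Reading the ETag response header from the browser only works because the bucket's CORS configuration (covered later in this article) lists ETag under ExposeHeaders.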

Next, let's add UI components

  1. src/components/FileUpload.tsx This component provides an interface to select files, or drag and drop them, to trigger the upload
export default function FileUpload({ onUploadComplete }: FileUploadProps) {
  const [selectedFile, setSelectedFile] = useState<File | null>(null);
  const [uploading, setUploading] = useState(false);
  const [uploadProgress, setUploadProgress] = useState<UploadProgress>({});

  const MAX_FILE_SIZE_FOR_SIMPLE_UPLOAD = 20 * 1024 * 1024; // 20MB

  const uploadFile = async () => {
    if (!selectedFile) return; // guard: no file selected yet

    if (selectedFile.size <= MAX_FILE_SIZE_FOR_SIMPLE_UPLOAD) {
      await handleSimpleUpload(selectedFile);
    } else {
      await handleMultipartUpload(selectedFile);
    }
  };

  // Drag & drop interface with progress bar
  return (
    <Card className="w-full max-w-2xl mx-auto">
      <CardHeader>
        <CardTitle className="flex items-center gap-2">
          <Upload className="h-5 w-5" />
          Upload Video
        </CardTitle>
      </CardHeader>
      ...
   )
}
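For large files, the component hands the file off to the worker and listens for messages. A hypothetical helper sketching that handoff (the `startMultipartUpload` name and the PROGRESS/COMPLETE message shapes are assumptions here; check the repo for the exact protocol):

```typescript
// Hypothetical helper: spawn the upload worker and wire up progress callbacks.
export function startMultipartUpload(
  file: File,
  onProgress: (pct: number) => void,
  onComplete: (fileName: string) => void
): Worker {
  // webpack 5 / Next.js resolves `new URL(...)` worker imports at build time
  const worker = new Worker(
    new URL('../workers/multipart-upload.worker.ts', import.meta.url)
  );

  worker.onmessage = (e: MessageEvent) => {
    const { type, progress, fileName } = e.data;
    if (type === 'PROGRESS') onProgress(progress);
    if (type === 'COMPLETE') {
      onComplete(fileName);
      worker.terminate(); // free the thread once the upload is done
    }
  };

  // Kick off the upload; the File object is structured-cloned to the worker
  worker.postMessage({
    type: 'START_MULTIPART_UPLOAD',
    data: { file, fileName: file.name, fileType: file.type, chunkSize: 10 * 1024 * 1024 },
  });

  return worker;
}
```

Inside the component, `handleMultipartUpload` would call this helper and feed `onProgress` into `setUploadProgress`, keeping all the heavy work off the render thread.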

Let’s also add a component to display the uploaded videos.

  1. src/components/VideoGallery.tsx This component calls the list-files endpoint to fetch the files uploaded to S3.
export default function VideoGallery({ refreshTrigger }: VideoGalleryProps) {
  const [videos, setVideos] = useState<VideoFile[]>([]);
  const [selectedVideo, setSelectedVideo] = useState<VideoFile | null>(null);

  // Fetch videos from API
  // Display in grid layout
  // Video player for selected video
  // Download functionality
}

Next, we need to include a layout component.

  1. src/app/page.tsx Our layout component places the file upload component at the top and the video gallery component at the bottom.
export default function Home() {
  const [refreshTrigger, setRefreshTrigger] = useState(0);

  const handleUploadComplete = () => {
    setRefreshTrigger(prev => prev + 1);
  };

  return (
    <div className="min-h-screen bg-gray-50">
      <div className="container mx-auto px-4 py-8">
        <FileUpload onUploadComplete={handleUploadComplete} />
        <VideoGallery refreshTrigger={refreshTrigger} />
      </div>
    </div>
  );
}

Finally, this is what the complete project structure should look like. Note: additional files can be copied from the GitHub repository.

├── src/
│   ├── app/
│   │   ├── api/
│   │   │   ├── presigned-url/route.ts
│   │   │   ├── multipart-upload/route.ts
│   │   │   └── files/route.ts
│   │   └── page.tsx
│   ├── components/
│   │   ├── FileUpload.tsx
│   │   ├── VideoGallery.tsx
│   │   └── ui/ (shadcn components)
│   └── workers/
│       └── multipart-upload.worker.ts
├── .env.local
└── next.config.ts

CORS Configuration

Uploading files to S3 from the browser can result in CORS issues; we need to configure S3 to allow upload requests from our web app running locally.

Follow these steps to configure CORS from the AWS console:

  1. Go to S3 Console
  2. Select your Bucket, then click Permissions
  3. Scroll to "Cross-origin resource sharing (CORS)"
  4. Edit and paste the configuration below
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
        "AllowedOrigins": ["http://localhost:3000", "https://yourdomain.com"],
        "ExposeHeaders": ["ETag", "x-amz-version-id"],
        "MaxAgeSeconds": 3000
    }
]

Running the app

With that done, we can run the app using npm run dev. Test the app by uploading sample videos, and inspect the Network tab in Chrome DevTools — you'll see the upload requests going directly to S3 rather than through your Next.js server.

Conclusion

Congratulations! We've reached the finish line! By leveraging S3 Pre-Signed URLs and Multipart Uploads, we've built a file gallery app that is not just functional, but genuinely robust. We successfully offloaded heavy file processing to a separate thread on the client side, keeping our main UI buttery smooth. Best of all, because the heavy lifting now relies on the massive, resilient power of AWS S3 instead of our own backend bandwidth, we can all sleep a little sounder tonight, knowing our application is scaled and ready for whatever our users throw at it.

As with everything in software engineering, we can extend this approach further with parallel uploads, retry logic, and progressive streaming for even more robustness.

Thanks for reading to the end. Happy coding, and bye for now.

References
Cover Photo by Thomas William on Unsplash
