When you let users upload files to S3, the most intuitive solution is to create an API route that accepts the file and uses the AWS SDK on the server to forward it to your bucket. But this way every file is transferred twice, consuming resources and turning your server into a bottleneck.
Uploading files directly from the browser saves your server's bandwidth and simplifies your infrastructure. Here's how to do it securely and cleanly with React and Next.js.
Table of Contents
- TL;DR
- Motivation
- Disposition
- The Core: The Server Signs, the Client Transfers
- Step 1: The Server Side API Route for Uploads
- Step 2: A Decoupled Client Side Architecture
- Step 3: Implementing Secure Downloads
- Handling Environment Variables
- Where to go from here?
TL;DR
If you know what you're doing and just need a working example to reference, here you can find the complete code used in this article.
This guide covers secure file uploads to S3 using React, Next.js, and AWS SDK v3 with pre-signed URLs.
Motivation
The React + Next.js frontend app lets users upload files securely to S3 straight from the browser and access those files afterwards. A pretty straightforward feature, but let's dig deeper:
- An operations or support team reviews these files in an internal admin board (e.g. verifying contracts or evaluating numbers in reports).
- The files contain sensitive information and cannot be accessed publicly.
- Each user can only see their own files, never other users' files.
Instead of building your own static file server, you can delegate storage and file management to any S3-compatible object storage service.
Disposition
Whether you're using Amazon S3, DigitalOcean Spaces, Linode Object Storage, or similar, this approach works with any S3-compatible storage service. To upload and read files, we can use @aws-sdk/client-s3.
The most intuitive solution is often to create an API route that accepts a file and then uses the AWS-SDK on the server to forward it to your bucket. This server proxy makes your application the bottleneck.
+------------------+       +---------------+       +---------------+
|  Client Browser  |  ---> |  Your Server  |  ---> |   S3 Bucket   |
+------------------+       +---------------+       +---------------+
         |                         |
   (1) Upload File          (2) Forward File
         |                         |
         +-------------------------+
             File travels twice
Every uploaded megabyte must pass through your server, consuming its bandwidth and memory, before being sent on to S3. The file is transferred twice, and large files can exceed memory or execution time limits.
The same happens when a user downloads a file.
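For contrast, the naive proxy route might look something like this sketch (an anti-pattern, shown only to make the problem concrete):
// Anti-pattern sketch: the whole file is buffered in server memory
// before being forwarded to S3 - every byte travels twice.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { NextResponse } from "next/server";

const s3 = new S3Client({ region: process.env.S3_REGION! });

export async function POST(request: Request) {
  const formData = await request.formData();
  const file = formData.get("file") as File;
  const body = Buffer.from(await file.arrayBuffer()); // entire file in memory

  await s3.send(new PutObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: file.name,
    Body: body,
    ContentType: file.type,
  }));

  return NextResponse.json({ ok: true });
}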
The industry-standard solution is to offload this work by letting the client (browser) upload directly to S3.
The Core: The Server Signs, the Client Transfers
Instead of doing the heavy lifting, your API route performs a quick, secure handshake:
1. The Client Requests an Upload: Your front-end makes a small API call to your server, providing metadata like the file's content type (MIME type).
2. The Server Generates a Pre-Signed URL: Your server, which safely stores your S3 credentials, uses the SDK to generate a temporary, single-use URL with specific permissions (e.g., "allow a PUT operation for image/png for the next 10 minutes").
3. The Client Uploads Directly to S3: The client receives this URL and uses it to send the file straight to S3, bypassing your server entirely.
              +----------------------+
              |      Your Server     |
              | (Auth + Signed URLs) |
              +----------+-----------+
                         |
         +---------------+----------------+
         |                                |
(1) Request Upload URL        (1) Request Download URL
         |                                |
         v                                v
   +-----+-----+                    +-----+-----+
   |  Client   |                    |  Client   |
   | (Browser) |                    | (Browser) |
   +-----+-----+                    +-----+-----+
         |                                |
(2) Upload File using URL    (2) Use URL to Download File
         |                                |
         v                                v
   +-----+-----+                    +-----+-----+
   |    S3     | -----------------> |    S3     |
   | (Storage) |  (3) File Stored   | (Storage) |
   +-----------+                    +-----------+
This pattern is secure, highly scalable, and more cost-effective. Let's build this step by step.
The following code examples are taken from a Next.js project that uses Feature-Sliced Design (FSD) for its architecture.
You can follow along with the complete, working code in the example repository linked above.
Step 1: The Server Side API Route for Uploads
This Next.js API route validates the request and generates the pre-signed URL. It's also the perfect place to enforce business logic, like attaching metadata.
// src/app/api/upload/route.ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { NextResponse } from "next/server";
import crypto from "crypto";
// --- This is a placeholder for your authentication logic ---
// In a real app, you would get the user's session from the request
// headers, cookies, or a token.
async function getUserIdFromSession(request: Request): Promise<number> {
// Example: return the user ID from your auth system (e.g., NextAuth, Clerk)
// For now, we'll return a hardcoded ID.
return 123;
}
// -------------------------------------------------------------
// This is safe to initialize here as it only runs on the server.
const s3 = new S3Client({
region: process.env.S3_REGION!,
endpoint: `https://${process.env.S3_ENDPOINT!}`,
credentials: {
accessKeyId: process.env.S3_ACCESS_KEY_ID!,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
},
});
export async function POST(request: Request) {
try {
const { contentType } = await request.json();
// 1. Get the current user's ID securely on the server
const userId = await getUserIdFromSession(request);
const key = crypto.randomBytes(16).toString("hex");
const command = new PutObjectCommand({
Bucket: process.env.S3_BUCKET_NAME!,
Key: key,
ContentType: contentType,
Metadata: {
uploadedBy: userId.toString(),
uploadedAt: new Date().toISOString(),
},
});
const signedUrl = await getSignedUrl(s3, command, { expiresIn: 600 }); // URL is valid for 10 minutes
return NextResponse.json({ signedUrl, key });
} catch (error) {
console.error("Failed to create pre-signed upload URL:", error);
return NextResponse.json({ error: "Failed to create upload URL" }, { status: 500 });
}
}
This API route is the foundation for secure serverless file uploads in any Next.js + AWS S3 SDK integration.
Step 2: A Decoupled Client Side Architecture
To keep our code clean and testable, we'll separate the data access layer (how we talk to our APIs) from the business logic (what the feature does).
The Data Access Layer
This file's only job is to execute fetch requests. It handles how we communicate with our APIs.
// src/shared/api/s3.ts
// Requests a pre-signed URL from our backend.
export async function getPresignedUrl(contentType: string): Promise<{ signedUrl: string; key: string }> {
const response = await fetch('/api/upload', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ contentType }),
});
if (!response.ok) throw new Error('Failed to get pre-signed URL.');
return response.json();
}
// Uploads the file to the provided S3 URL.
export async function uploadToS3(signedUrl: string, file: File): Promise<void> {
const response = await fetch(signedUrl, {
method: 'PUT',
body: file,
headers: { 'Content-Type': file.type },
});
if (!response.ok) throw new Error('S3 upload failed.');
}
The Business Logic Layer
This file defines what the feature does - the sequence of steps - without knowing the implementation details.
// src/features/file-management/model/uploadFile.ts
import { getPresignedUrl, uploadToS3 } from "@shared/api/s3";
// This function orchestrates the upload process.
export async function uploadFile(file: File): Promise<{ success: boolean, key?: string }> {
try {
const { signedUrl, key } = await getPresignedUrl(file.type);
await uploadToS3(signedUrl, file);
// Step 3: Upload file key to store in database
// console.log("Uploading file key to store in database...");
// await storeUploadedFileKey(key, additionalData);
return { success: true, key };
} catch (error) {
console.error("Upload failed:", error);
return { success: false };
}
}
With this separation, your UI component only needs to import and call uploadFile(theFile). The component remains simple, and your logic is reusable and easy to unit test.
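For example, a hypothetical minimal uploader component might look like this (the component name and import path are illustrative, not from the example repo):
"use client";
// e.g. src/features/file-management/ui/FileUploader.tsx (hypothetical)
import { useState, type ChangeEvent } from "react";
import { uploadFile } from "../model/uploadFile";

export function FileUploader() {
  const [status, setStatus] = useState<"idle" | "uploading" | "done" | "error">("idle");

  async function handleChange(event: ChangeEvent<HTMLInputElement>) {
    const file = event.target.files?.[0];
    if (!file) return;
    setStatus("uploading");
    const result = await uploadFile(file);
    setStatus(result.success ? "done" : "error");
  }

  return (
    <div>
      <input type="file" onChange={handleChange} />
      {status === "uploading" && <p>Uploading...</p>}
      {status === "done" && <p>Upload complete!</p>}
      {status === "error" && <p>Upload failed. Please try again.</p>}
    </div>
  );
}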
Step 3: Implementing Secure Downloads
The download flow uses the same secure pattern. First, we create an API route that generates a pre-signed GetObject URL for a secure S3 file download.
// src/app/api/download/route.ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { NextRequest, NextResponse } from "next/server";
// ... s3 client setup is the same as in the upload route ...
export async function GET(request: NextRequest) {
const key = request.nextUrl.searchParams.get('key');
if (!key) {
return NextResponse.json({ error: "Missing 'key' query parameter" }, { status: 400 });
}
const command = new GetObjectCommand({
Bucket: process.env.S3_BUCKET_NAME!,
Key: key,
// This header forces the browser to download the file.
ResponseContentDisposition: `attachment`,
// You can override the filename for the user here
// But then you must also know the file extension
// ResponseContentDisposition: `attachment; filename="${key}"`
});
const signedUrl = await getSignedUrl(s3, command, { expiresIn: 300 }); // 5 minute validity
return NextResponse.json({ signedUrl });
}
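One caveat: as written, this route signs a URL for any key a logged-in user requests. To enforce the requirement that users only access their own files, you could verify ownership before signing, for example by checking the uploadedBy metadata attached in Step 1. A sketch, assuming the same getUserIdFromSession helper from Step 1:
// Sketch: reject download requests for files the current user does not own.
import { HeadObjectCommand } from "@aws-sdk/client-s3";

async function assertOwnership(key: string, userId: number): Promise<void> {
  const head = await s3.send(
    new HeadObjectCommand({ Bucket: process.env.S3_BUCKET_NAME!, Key: key })
  );
  // S3 returns user-defined metadata keys in lowercase.
  if (head.Metadata?.uploadedby !== userId.toString()) {
    throw new Error("Forbidden: file belongs to another user");
  }
}

// In the GET handler, before signing:
// await assertOwnership(key, await getUserIdFromSession(request));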
Now for the client-side, which involves a browser hack to trigger the download without navigating away from the page.
// src/features/file-management/model/downloadFile.ts
import { getDownloadUrl } from "@shared/api/s3"; // Assumes a getDownloadUrl function in the shared API layer
// This function orchestrates the business logic for downloading a file.
export async function downloadFile(key: string): Promise<{ success: boolean }> {
try {
// Step 1: Get the pre-signed download URL
const { signedUrl } = await getDownloadUrl(key);
// Step 2: Trigger the download in the browser
const link = document.createElement('a');
link.href = signedUrl;
link.setAttribute('download', key);
document.body.appendChild(link);
link.click();
document.body.removeChild(link);
return { success: true };
} catch (error) {
console.error("File download process failed:", error);
return { success: false };
}
}
Together, these steps form a scalable, production-ready file upload and download workflow using React, Next.js, TypeScript, and the AWS S3 SDK.
A Deeper Look at the Client Side Download
You might be wondering about the link.click() part. Why not just use window.location.href = signedUrl?
First, a direct navigation would rip the user out of your single-page application. Second, for content types the browser can render (like images, PDFs, or text files), it would likely just display the file instead of saving it.
The programmatic anchor tag (<a>) solves both problems. Here's a line-by-line breakdown:
- const link = document.createElement('a'); - We create a new, invisible link element in memory.
- link.href = signedUrl; - We set its destination to our secure, temporary S3 URL.
- link.setAttribute('download', key); - The download attribute is a command to the browser: "Do not navigate to this href. Instead, treat its content as a file to be saved, and use this attribute's value as the suggested filename." Note that browsers ignore the download attribute on cross-origin URLs like our pre-signed S3 link, so it's the Content-Disposition: attachment header we set on the server that reliably forces the download behaviour.
- document.body.appendChild(link); - The link must be attached to the document for it to be clickable.
- link.click(); - We programmatically simulate a user clicking the link, which triggers the browser's "Save As..." dialog.
- document.body.removeChild(link); - The link has done its job, so we immediately remove it from the DOM to keep our page clean.
Handling Environment Variables
As a final note, manage your credentials using a .env file and provide a template for other developers with a .env.example. Be sure to add .env to your .gitignore.
# .env.example
# S3 Configuration
S3_BUCKET_NAME=your-s3-bucket-name
S3_REGION=your-bucket-region
S3_ENDPOINT=your-s3-endpoint
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
Proper environment variable management for credentials is crucial for a secure S3 integration in any web app that uses serverless solutions.
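The route handlers above use non-null assertions (process.env.S3_REGION!) for brevity. If you prefer to fail fast at startup, a tiny helper (an assumed pattern, not part of the example repo) can make missing variables explicit:
// e.g. src/shared/config/env.ts (hypothetical helper)
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// Usage: const bucket = requireEnv("S3_BUCKET_NAME");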
Where to go from here?
Once a file is in your S3 bucket, the next logical step is often to trigger serverless functions based on S3 events (e.g. S3 automation, Lambda triggers, and object storage workflows).
You could:
- Generate thumbnails for uploaded images.
- Transcode files into different formats.
- Run data processing jobs on uploaded files.
- Notify other services that a new file is available.
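If you use AWS Lambda, a minimal S3-triggered handler might look like the sketch below (assuming the @types/aws-lambda types and a bucket configured to send ObjectCreated events to the function):
// A hypothetical minimal Lambda handler reacting to new objects in the bucket.
import type { S3Event } from "aws-lambda";

export async function handler(event: S3Event): Promise<void> {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded in S3 event payloads.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    console.log(`New object: s3://${bucket}/${key}`);
    // e.g. generate a thumbnail, transcode the file, or notify another service
  }
}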
Please share how you treat user-provided data.
Do you analyse it with metrics? If so, what kinds of insights are you looking to uncover?
Thanks for reading!