Siddhant Khare

Bypass AWS API Gateway limits: Upload large files using AWS S3 presigned URLs

tl;dr: AWS API Gateway has a payload size limit of 10 MB, which was determined based on customer feedback and load testing when the service launched[1]. The limit is currently not configurable, though AWS may consider raising it in the future for customers who need larger payloads[1]. A compressed payload smaller than 10 MB is accepted; however, API Gateway decompresses it, and if the uncompressed payload exceeds 10 MB the request may be rejected with an "Error 413 Request entity too large"[1]. The recommended workaround for large file transfers is to use Amazon S3: return an S3 presigned URL in the API Gateway response and let the client upload directly to S3[2]. This bypasses the 10 MB API Gateway limit and allows uploads straight to S3, which can handle objects of up to 5 TB using multipart uploads[3].

Introduction

In this blog post, we'll explore how to upload large files to AWS S3 using presigned URLs. Originally, our file upload mechanism was handled through AWS API Gateway, which is limited to accepting files up to 10 MB. To accommodate larger files, we've implemented a solution using S3 presigned URLs, allowing files to be uploaded directly from the client interface without size limitations imposed by API Gateway.

Process Overview

The process involves two main steps (sketched in code just after this list):

  1. Fetch the presigned URL from S3 via an API Gateway → Lambda setup.
  2. Use the obtained URL to upload the file directly from the client.
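
In terms of raw HTTP, the flow looks roughly like the sketch below. It uses fetch rather than the axios code shown later, the API Gateway endpoint URL is a placeholder, and file is assumed to be a File object taken from an <input type="file"> element:

// Rough sketch of the two-step flow; the endpoint URL is a placeholder.
async function uploadViaPresignedUrl(file) {
    // Step 1: ask the backend (API Gateway -> Lambda) for a presigned URL
    const response = await fetch('https://YOUR_API_ID.execute-api.eu-west-2.amazonaws.com/prod/upload-url');
    const { url } = await response.json();

    // Step 2: PUT the file straight to S3; API Gateway never sees the payload
    await fetch(url, {
        method: 'PUT',
        headers: { 'Content-Type': file.type },
        body: file
    });
}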

Diagram explanation

[Diagram: the client requests a presigned URL through API Gateway → Lambda, then uploads the file directly to S3, bypassing the API Gateway payload limit.]

Lambda Function for Generating Presigned URLs

First, let's look at the Lambda function responsible for generating the presigned URL. We'll use Node.js:

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

// In the v3 SDK, Signature Version 4 is the default, so no signatureVersion
// option is needed on the client.
const client = new S3Client({
    region: 'eu-west-2'
});

exports.handler = async (event, context) => {
    // Define the parameters for the S3 bucket and the object key
    const params = {
        Bucket: 'YOUR_BUCKET_NAME',
        Key: 'OBJECT_KEY'
    };
    const expires = 3600; // Expiration time for the signed URL (in seconds)

    try {
        // Presign a PutObject request so the caller can upload directly to S3
        const url = await getSignedUrl(client, new PutObjectCommand(params), {
            expiresIn: expires
        });

        return {
            statusCode: 200,
            headers: {
                'Access-Control-Allow-Origin': '*'
            },
            body: JSON.stringify({ url })
        };
    } catch (error) {
        return {
            statusCode: 500,
            headers: {
                'Access-Control-Allow-Origin': '*'
            },
            body: JSON.stringify({ error: error.message })
        };
    }
};
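
In practice, you'll usually want the object key to come from the request rather than being hardcoded. As a minimal sketch, assuming a Lambda proxy integration and a filename query string parameter (neither of which is part of the original setup), the params could be built like this:

// Hypothetical variation: derive the object key from a `filename` query
// string parameter instead of hardcoding it. Validate or sanitize the
// filename before using it as a key in a real deployment.
const filename = event.queryStringParameters?.filename ?? `upload-${Date.now()}`;

const params = {
    Bucket: 'YOUR_BUCKET_NAME',   // same placeholder bucket as above
    Key: `uploads/${filename}`    // group uploads under a common prefix
    // Note: if you also set ContentType here, the client's PUT request must
    // send a matching Content-Type header or the signature check will fail.
};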

Client-Side Upload Using React

Next, let's discuss how to use this presigned URL on the client side to upload a file. Here we use React:

import axios from 'axios';

// Function to upload the file using the presigned URL
export async function upload(file) {
    const getUrlResponse = await getSignedUrl();
    const url = getUrlResponse.body;

    // Handle cases where the URL is not obtained
    if (!url) return;

    await uploadFile(url, file);
}

// Function to retrieve the signed URL from the API Gateway endpoint
export async function getSignedUrl() {
    const result = await axios.get('API_GATEWAY_URL')
        .then(response => {
            return { body: response.data.url };
        })
        .catch(err => {
            console.error(err); // Surface the failure instead of swallowing it silently
            return { body: null };
        });

    return result;
}

// Function to perform the file upload directly to S3
export async function uploadFile(url, file) {
    const result = await axios.put(url, file, {
        headers: { 'Content-Type': file.type }
    }).then(response => {
        return { body: response.data };
    }).catch(err => {
        console.error(err);
        return { body: null };
    });

    return result;
}
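
To tie this together, here's a minimal usage sketch. The component name, module path, and file input wiring are illustrative, not part of the original code:

import React from 'react';
import { upload } from './upload'; // hypothetical path to the module above

// Minimal component wiring the upload helper to a file input
export function FileUploader() {
    const handleChange = async (event) => {
        const file = event.target.files?.[0];
        if (!file) return;

        await upload(file); // fetches the presigned URL, then PUTs the file to S3
    };

    return <input type="file" onChange={handleChange} />;
}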

CORS Configuration on S3

One common issue during implementation is a CORS error. You can configure CORS in your S3 bucket settings with a JSON array of rules; make sure PUT is allowed for the upload and adjust the allowed origin to match your client's URL:

[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT"],
        "AllowedOrigins": ["http://localhost:3000"]
    }
]
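
If you prefer to keep this configuration in code rather than set it in the console, the same rules can be applied with the SDK. This is a sketch using the same placeholder bucket name and region as above:

const { S3Client, PutBucketCorsCommand } = require('@aws-sdk/client-s3');

const client = new S3Client({ region: 'eu-west-2' });

// One-off script to apply the CORS rules shown above to the bucket
async function applyCors() {
    await client.send(new PutBucketCorsCommand({
        Bucket: 'YOUR_BUCKET_NAME',
        CORSConfiguration: {
            CORSRules: [
                {
                    AllowedHeaders: ['*'],
                    AllowedMethods: ['PUT'],
                    AllowedOrigins: ['http://localhost:3000']
                }
            ]
        }
    }));
}

applyCors().catch(console.error);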

Conclusion

By using AWS S3 presigned URLs, we can bypass the limitations of AWS API Gateway for large file uploads, offering a scalable and secure solution directly from client applications. This method not only simplifies the process but also keeps your data secure during transit.


Stay Connected and Get More Insights

If you found this guide helpful and are dealing with similar challenges, don't hesitate to reach out for personalized consulting at Superpeer. For more tech insights and updates, consider following me on GitHub. Let's innovate together!

Top comments (3)

Constantin

Small caveat: bucket names can be abused for a financial DDoS, since you are also paying for authorized API calls against your bucket.

repost.aws/questions/QUbqrBw4QhQIu...

In general, using presigned URLs is a good idea; however, one should be aware of this potential attack vector.

Frank the tank

Use multipart uploads and problem solved.

amythical

But the payload would still go to a server first and then be pushed to storage like S3. Wouldn't that mean running a large server, instead of keeping a smaller instance and moving the upload functionality to the client and S3?
Thoughts?