DEV Community

Justin Schroeder
Better uploads with Vue Formulate, S3, and Lambda

Not many developers enjoy building forms — and even the oddballs who say they do don't enjoy file uploads (or they're lying 🤷‍♂️). It's a universal experience — file uploads are a pain, and worse — after all the necessary technical work the end user experience is still typically poor.

Gather around friends, today I'd like to share another way to upload files that makes writing file uploads as easy as <FormulateInput type="file" />, provides a slick user experience, and requires no server-side code (well — AWS Lambdas are technically servers...ehh, you get the idea).


This is a long article, but the end results are worth it. Here's what we'll be covering:

  1. Setting up an AWS account
  2. Creating an S3 storage bucket
  3. Configuring CORS for the bucket
  4. Creating an IAM role
  5. Creating the Lambda and API
  6. Adding the function code
  7. Configuring the API Gateway
  8. Testing the endpoint
  9. Writing the uploader function

See? It's a lot, but remember the end result is <FormulateInput type="file" /> resulting in direct uploads to AWS S3. Stick with me and we'll make it through.

Ye olde way

In ye olde days we uploaded files by slapping one or more <input type="file"> inputs in a <form> that included the HTML attribute enctype="multipart/form-data". This did all the hard work of buttoning up the file for us and submitting it to our backend. Our backend code would then handle those files and usually place them somewhere on the filesystem. For example, here is a PHP script (from the official PHP docs) that handles a file upload:

<?php
$uploaddir = '/var/www/uploads/';
$uploadfile = $uploaddir . basename($_FILES['userfile']['name']);

if (move_uploaded_file($_FILES['userfile']['tmp_name'], $uploadfile)) {
    echo "File is valid, and was successfully uploaded.\n";
} else {
    echo "Possible file upload attack!\n";
}

Nice — so we can see PHP magically created some kind of temporary file with the contents of the uploaded file, and we move that temporary file to a permanent location on the filesystem (if we want to keep the file). This methodology still works today across various platforms, so why is it passé? Let’s highlight some of the ways this simple approach falls short:

  • There is no user feedback that the file is uploading. No progress bar, no loading animations, no disabled submit button. The user just sits there waiting for the form to submit. Have a lot of files? Your user will definitely get confused and click that submit button multiple times. Neato 👌
  • If there's an issue with the file upload, the user won't find out until after they've waited for the entire upload to complete.
  • Your backend needs to be configured to handle file uploads. For PHP this requires configuring php.ini variables like upload_max_filesize, post_max_size and max_input_time.
  • If you're using a node server you need to be even more careful with uploads. Due to the single-threaded nature of node you can easily cause your server to run out of memory and crash.
  • If you're using a serverless stack your backend won't even have a filesystem to store the uploads on (that's where this article comes in handy 👍).
  • Your servers have a finite amount of disk space and it will eventually run out.

Some of these issues can be solved by passing the file "through" your server and then on to a cloud service like S3. For example, the PHP code above could use a stream wrapper to pass the file through to an S3 bucket instead of the local filesystem. However, this is effectively double-uploading — 1) the client uploads the file to your server 2) then your server uploads the file to S3.

An even better user experience is to upload files via fetch or XMLHttpRequest (XMLHttpRequest is still preferred since fetch doesn't support progress updates). However, rigging up these AJAX uploaders is a lot of work even when using pre-existing libraries, and they come with their own backend shortcomings.
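For context, the heart of any such AJAX uploader is just an XMLHttpRequest PUT with a listener on its upload progress event. A minimal sketch (percentFromEvent and uploadWithProgress are illustrative names of my own, not library APIs):

```javascript
// Convert an XHR upload progress event into a whole-number percentage.
function percentFromEvent (e) {
  return e.lengthComputable ? Math.round((e.loaded / * 100) : 0

// Browser-side sketch: PUT a file to a URL and report progress as it uploads.
function uploadWithProgress (url, file, onProgress) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()'PUT', url)
    xhr.upload.addEventListener('progress', e => onProgress(percentFromEvent(e)))
    xhr.onload = () => (xhr.status >= 200 && xhr.status < 300 ? resolve() : reject(new Error('Upload failed')))
    xhr.onerror = () => reject(new Error('Upload failed'))
    xhr.send(file)

This is essentially the wiring Vue Formulate will do for us later in this article — we just have to tell it where to PUT the file.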

There's another way

What if our backend servers never touched the file uploads at all? What if we could upload our files directly to our cloud provider from the client's browser? What if our backend/database only stored the URL to the uploaded file?

Vue Formulate allows you to turbo-charge your file and image inputs to do just that by implementing a custom uploader function. The following describes how this can be accomplished with AWS Lambda and S3. What is Vue Formulate? Glad you asked — it's the easiest way to build forms for Vue — and I wrote an introduction article about it you might be interested in.

To provide the best user experience, Vue Formulate handles file uploads in an opinionated way. The library handles all of the UX like creating a dropzone, showing selected files, progress bars, file validation, displaying upload errors, and pushing completed uploads into the form's model. All you need to provide is an instance of Axios or a custom uploader function that performs your desired XHR request (don't worry, we're going to work through that together in this article).

By the time a user submits the form and your @submit handler is called Vue Formulate has already completed any file uploads in your form and merged the file URLs into the form data. Your backend can be sent a simple JSON payload and never needs to deal with the original files themselves. Even better, with just a little work, we can make those files upload directly to S3.
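Concretely, a submit handler ends up receiving plain, serializable data — something along these lines (the field names, URL, and exact shape here are illustrative, not a guaranteed format):

```javascript
// What form data can look like once Vue Formulate has finished the upload:
// the file field holds the uploaded file's URL, not a raw File object.
const formData = {
  name: 'Justin',
  resume: [{ url: '', name: 'resume.pdf' }]

// The backend only ever needs to persist URLs from a simple JSON payload.
const payload = JSON.stringify(formData)
const urls = => upload.url)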

So how does this "direct uploading" work — and how do we do it in a secure way? S3 supports a feature that allows the creation of "signed URLs", which are generated URLs that include all the necessary credentials to perform one pre-approved operation — such as putting an object into an S3 bucket 😉! However, to create these signed URLs we need some code to be executed in a secured environment — this environment could be a standard backend server, but for our purposes we're going to use a simple Lambda function. This is a great use case for Lambda as it is a small, discrete operation that only needs to be run when a user adds files to our form (no need to have a server running 24/7 waiting to perform this operation).


Our custom Vue Formulate uploader function will perform a few steps:

  1. Collect the files to be uploaded.
  2. Request a signed upload URL from our AWS Lambda function.
  3. Upload the file(s) to our S3 bucket using the signed upload URL.

Once we've added our custom uploader to our Vue Formulate instance, all of our file and image inputs will automatically use this mechanism. Sounds good, yeah? Ok — let's get cracking!

1. Setup an AWS Account

If you don't already have an AWS account, you'll need to set one up first. This is a standard signup process — you'll need to verify yourself and provide billing information (don't worry, AWS Lambda function call pricing and AWS S3 storage pricing are really cheap).

2. Create an S3 Storage Bucket

Use the services dropdown to navigate to S3 so that we can create a new storage bucket. You'll need to answer a series of questions when creating the bucket. This includes:

  • Bucket name — I generally try to pick names that could be subdomains if I decide to rig up a DNS record for them in the future.
  • Region name (pick the one geographically closest to you)
  • Bucket settings for Block Public Access — uncheck all of these boxes since we're going to allow public downloads. In this example, we won't be creating private file uploads, but this same process works for that use case.
  • Bucket versioning — you can leave this disabled, it's cheaper and we'll be using random ids to ensure we don't accidentally overwrite existing files with new uploads.
  • Tags — These are optional and only if you want to use them. These can be helpful for tracking billing costs if you are using a lot of AWS resources.
  • Advanced Settings - Leave "Object Lock" disabled.

3. Configure CORS for the bucket

Next, we need to ensure that we configure CORS for the bucket to enable our direct uploading. In this case I'm going to apply a liberal Access-Control-Allow-Origin: * since I want my example to work from any domain. You can be more specific with your access control if you want to limit which domains are allowed to upload files to your S3 storage bucket.

Click on your bucket, then select "Permissions" in the tab bar. Scroll down to "Cross-origin resource sharing", click "Edit", and enter the following JSON configuration. Finally, hit "Save Changes":

[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]

4. Create an IAM role

Next, we'll need to create an IAM role for Lambda. Use the services menu to navigate to the IAM service (Identity Access Management). Click on roles in the sidebar and choose "Create role". Select the Lambda "use case" from the services use cases and move on to the next step.

This is where we attach "policies" (basically permissions). We'll add the AWSLambdaBasicExecutionRole which gives our new role the ability to run Lambda functions.

Next, add tags if you want them (not required), and finally, give your role a name and a description you'll recognize, then create the role.

Next, we need to add the ability for this role to access the S3 bucket we created. Choose the role we just created, select "Attach policies", and then click the "Create Policy" button at the top. Then follow these steps:

  1. Select the S3 service
  2. Select the actions PutObject and PutObjectAcl
  3. Specify the bucket ARN, and "Any" (*) object in the bucket.
  4. Review and name the policy, then create it.
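If you prefer the JSON editor, the policy those four steps produce looks roughly like this (substitute your own bucket ARN):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
```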

Creating the S3 access policy

Naming the S3 access policy

Finally, go back to the role we created, refresh the list of policies, search for our newly created policy, and add it to the role.

Selecting the access policy

Role policies when complete

5. Create the Lambda and API

Use the services dropdown to search for the Lambda service. Open it, choose "Create Function", and follow the prompts:

  1. Select "Author from scratch"
  2. Choose a function name, for this example I'll use "VueFormulateUploadSigner".
  3. Change the execution role and select "Use existing Role". Choose the new role that we created in the previous step.
  4. Leave the advanced settings unchanged and create the function.

Lambda creation screen

Remember, this Lambda function is responsible for creating our signed upload URL, so we need an endpoint to trigger the lambda's execution. To do this, click the "+ add trigger" button, select "API Gateway", and follow the prompts:

  1. Select "Create an API"
  2. For "API type" choose "HTTP API"
  3. For security, select "open" (You can always come back and add JWT later if it's needed for your specific application)
  4. Leave the additional settings blank and "Add" the gateway.

6. Add the function code

We need our lambda function to create a signed putObject URL for us. In the Function code section double click on index.js. This file is the actual code that will be executed when our Lambda is run. In this case we want to use the AWS SDK for node.js to create a signed putObject URL for S3.

Here's some code that does just that. You can copy and paste it directly into the code editor — although you should read through it to understand what it is doing.

var S3 = require('aws-sdk/clients/s3');

const CORS = {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Headers': 'Content-Type'
}

/**
 * Return an error response code with a message
 */
function invalid (message, statusCode = 422) {
    return {
      isBase64Encoded: false,
      statusCode,
      body: JSON.stringify({ message }),
      headers: {
        "Content-Type": "application/json",
        ...CORS
      }
    }
}

/**
 * Generate a random slug-friendly UUID
 */
function uuid (iterations = 1) {
    let randomStr = Math.random().toString(36).substring(2, 15)
    return iterations <= 0 ? randomStr : randomStr + uuid(iterations - 1)
}

/**
 * Our primary Lambda handler.
 */
exports.handler = async (event) => {
    // Handle CORS preflight requests
    if (event.requestContext.http.method === 'OPTIONS') {
        return {
            statusCode: 200,
            headers: CORS
        }
    }

    // Let's make sure this request sent us the data we need
    const body = JSON.parse(event.body)

    // First, let's do some basic validation to ensure we received proper data
    if (!body || typeof body !== 'object' || !body.extension || !body.mime) {
        return invalid('Request must include "extension" and "mime" properties.')
    }

    /**
     * We generate a random filename to store this file at. This is generally
     * good practice as it helps prevent unintended naming collisions, and
     * helps reduce the exposure of the files (slightly). If we want to keep
     * the name of the original file, store that server-side with a record of
     * this new name.
     */
    const filePath = `${uuid()}.${body.extension}`

    /**
     * These are the configuration options that we want to apply to the signed
     * 'putObject' URL we are going to generate. In this case, we want to add
     * a file with a public upload. The expiration here ensures this upload URL
     * is only valid for 5 minutes.
     */
    var params = {
        Bucket: process.env.BUCKET_NAME,
        Key: filePath,
        Expires: 300,
        ACL: 'public-read'
    }

    /**
     * Now we create a new instance of the AWS SDK for S3. Notice how there are
     * no credentials here. This is because AWS will automatically use the
     * IAM role that has been assigned to this Lambda runtime.
     *
     * The signature that gets generated uses the permissions assigned to this
     * role, so you must ensure that the Lambda role has permissions to
     * `putObject` on the bucket you specified above. If this is not true, the
     * signature will still get produced (getSignedUrl is just computational,
     * it does not actually check permissions) but when you try to PUT to the
     * S3 bucket you will run into an Access Denied error.
     */
    const client = new S3({
        signatureVersion: 'v4',
        region: 'us-east-1', // use your bucket's region here
    })

    try {
        /**
         * Now we create the signed 'putObject' URL that will allow us to
         * upload files directly to our S3 bucket from the client side.
         */
        const uploadUrl = await new Promise((resolve, reject) => {
            client.getSignedUrl('putObject', params, function (err, url) {
                return (err) ? reject(err) : resolve(url)
            })
        })

        // Finally, we return the uploadUrl in the HTTP response
        return {
            headers: {
                'Content-Type': 'application/json',
                ...CORS
            },
            statusCode: 200,
            body: JSON.stringify({ uploadUrl })
        }
    } catch (error) {
        // If there are any errors in the signature generation process, we
        // let the end user know with a 500.
        return invalid('Unable to create the signed URL.', 500)
    }
}

Adding the code to the function

Once you add this code, click "Deploy". Now — the last thing we need to do in Lambda is add the BUCKET_NAME environment variable.

Scroll down from the code editor and choose "Edit" under environment variables. Enter a new key BUCKET_NAME and set the value to our S3 bucket name. Hit save, and your Lambda is ready to go!

The BUCKET_NAME environment variable

7. Configure the API Gateway

We're getting close! Before we can start sending HTTP traffic to our Lambda we need to configure the API Gateway we created.

Navigate to the API Gateway service and you should see a service with the same name as our Lambda with an -API suffix — let's click into that. The API Gateway service is a powerful utility that makes it easy to configure which Lambdas respond to which API requests. If you choose "Develop > Routes" you'll see that our Lambda has already attached itself to the /{lambdaName} route.

API Gateway Routes

Personally, I prefer this route to be something more like /signature. We can easily change it, and while we're at it, let's restrict this endpoint to only respond to POST requests.

Editing a route

There's a problem though. Since we've restricted the endpoint to POST only, the browser's CORS OPTIONS preflight requests will fail.

Let's add another route for the same /signature path that also points to our Lambda (our code there will handle the CORS request). Create the route, and then click "Create and attach an integration" on the detail screen for the OPTIONS route and follow the prompts:

  1. Select "Lambda function" for the integration type.
  2. Select the region and function of our Lambda.
  3. Create the integration.

Creating the OPTIONS route

Routes after configuration

When making changes to this default API, the changes are auto-deployed on the default "stage". You can think of stages like environments. Adding multiple stages is beyond the scope of this article — for such a simple function, using the default stage is perfectly fine.

If you navigate back to the main page for this API, you'll see we have an "invoke URL" for $default — this is your new API's URL!

The API Endpoint

(You can change this to a custom domain if you wish, but this guide doesn't focus on that)

8. Test your endpoint!

Phew — that took some doing, but we should be up and running at this point. To test, copy the "invoke URL" and append /signature to the end of it. Let's try to ping our endpoint with a cURL request. Be sure to replace the values with your own endpoint values:

curl -d '{"extension": "pdf", "mime": "application/pdf"}' \
  -H 'Content-Type: application/json' \
  -X POST https://<your-invoke-url>/signature

You should get back a JSON response with a signed URL (the exact domain and signature parameters will reflect your own bucket and credentials):

{
    "uploadUrl": "https://<your-bucket><signed-query-parameters>"
}

Success! Our Lambda code creates upload URLs that expire after 5 minutes — this isn't a problem since Vue Formulate will use the signed url immediately, but if you're playing around with the URL by hand it's worth keeping the expiration limit in mind.
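Incidentally, the expiration window is visible right in the signed URL itself via the X-Amz-Expires query parameter. A quick way to inspect it (the URL below is made up for illustration):

```javascript
// Read the lifetime (in seconds) out of a signed S3 URL.
function signedUrlExpirySeconds (signedUrl) {
  return Number(new URL(signedUrl).searchParams.get('X-Amz-Expires'))

const example = ''
// signedUrlExpirySeconds(example) returns 300 — i.e. 5 minutes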

Note: the above cURL request hits an actual live Lambda I manage — feel free to test with it, but be aware that all files are automatically deleted after 24 hours 👍

9. The uploader function

The last step in our process is writing a custom uploader for Vue Formulate. Remember, when Vue Formulate receives a file from the end user it passes that file off to an uploader function (or axios). We want to use a custom implementation of the uploader function to fetch a signed URL and then perform an XMLHttpRequest (xhr) to that URL with our file data. The implementation details of this will vary ever so slightly depending on the specifics of your project but here's how this can be done globally via a Vue Formulate plugin:


// s3-uploader-plugin.js
async function uploadToS3 (file, progress, error, options) {
  const matches =\.([a-zA-Z0-9]+)$/)
  const extension = (matches) ? matches[1] : 'txt'
  progress(5)
  const response = await fetch(options.uploadUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      extension,
      mime: file.type || 'application/octet-stream'
    })
  })
  if (response.ok) {
    const { uploadUrl } = await response.json()
    const xhr = new XMLHttpRequest()'PUT', uploadUrl)
    xhr.upload.addEventListener('progress', e => progress(Math.round(e.loaded / * 90) + 10))
    xhr.setRequestHeader('Content-Type', 'application/octet-stream')
    try {
      await new Promise((resolve, reject) => {
        xhr.onload = e => ((xhr.status - 200) < 100) ? resolve() : reject(new Error('Failed to upload'))
        xhr.onerror = e => reject(new Error('Failed to upload'))
        xhr.send(file)
      })
      progress(100)
      const url = new URL(uploadUrl)
      return {
        url: `${url.protocol}//${}${url.pathname}`,
        name:
      }
    } catch {
      // we'll suppress this since we have a catch-all error below
    }
  }
  // Catch-all error
  error('There was an error uploading your file.')
}

export default function (instance) {
  instance.extend({
    uploader: uploadToS3
  })
}


import Vue from 'vue'
import VueFormulate from '@braid/vue-formulate'
import S3UploaderPlugin from './s3-uploader-plugin'

// Your main.js file or wherever you initialize Vue Formulate.

Vue.use(VueFormulate, {
    // Use API Gateway URL + route path 😉
    uploadUrl: '',
    plugins: [S3UploaderPlugin]
})

A working example

You're done! With those changes in place, all file and image inputs in your Vue Formulate instance will automatically upload their contents directly to S3 from the client's browser.

You can use as many file uploads as you'd like on any and all forms in your project with no additional configuration.

Here's an example in action:
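The embedded demo amounts to a form along these lines (the field names here are illustrative):

```html
<FormulateForm @submit="submitted">
  <FormulateInput type="text" name="name" label="Your name" />
  <FormulateInput type="image" name="avatar" label="Your avatar" />
  <FormulateInput type="submit" label="Save" />
</FormulateForm>
```

By the time the submitted handler runs, the avatar field's value already contains the S3 URL of the uploaded image rather than the raw file.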

If you're intrigued, check out Vue Formulate. You can follow me, Justin Schroeder, on Twitter — as well as my co-maintainer Andrew Boyd.
