Using an S3 object storage provider in Node.js

Reece Charsville
I am a backend developer. I run a digital agency based in the UK called Forward Digital. We build mobile apps, websites and web applications.

This post originally appeared on my blog here.

Introduction

Today I am going to be running through how to use an S3 object storage provider.

(Just want to see the code? The GitHub is here)

For those who don't know, S3 object storage is a cloud service for hosting files. It is accessible via an API, which means it can easily be integrated into your projects. There are hundreds of use cases, but some of the most common involve hosting user-generated content, such as profile images uploaded by users.

Some of the most popular providers of S3 storage include Amazon AWS, Vultr and DigitalOcean. They all provide the same service but differ in price, locations, capacity and bandwidth, so it's worth looking around to see which one suits your needs best.

My first experience with S3 was using AWS. AWS is great... but it's also very confusing, especially for a backend developer like me who tries to steer clear of DevOps as much as he can. I trawled through the AWS documentation trying to understand how to implement the S3 service, and after many hours of playing with buckets, policies and IAM roles I got it working. After my ordeal I decided to try other providers to see how the implementations differ (in the hope of finding a simpler solution). It turns out that the implementation is the same across providers!

So, I will run you through a very simple example of how to implement basic S3 object storage in Node.js. The example I am going to give uses Express and multer for the file upload, however the object storage code is framework agnostic and only requires the aws-sdk.

Preparing our Node.js project

Before we can connect to our S3 provider there are 4 things you will need. These are:

  • The bucket's endpoint URL
  • The bucket name
  • Access key
  • Secret access key

These should be provided to you once you have set up your bucket through your chosen provider's dashboard. You will want to ensure that your keys are kept private and secure, so in this example we will store them in environment variables using dotenv.

Firstly, let's create our .env file in our project root:

# e.g. my-bucket
S3_BUCKET_NAME=your_bucket_name
# e.g. https://eu.amazons3.com/
S3_ENDPOINT_URL=your_endpoint_url
S3_ACCESS_KEY=your_access_key
S3_SECRET_KEY=your_secret_access_key
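If any of these variables are missing, the S3 client will fail in confusing ways at request time, so it can be worth validating them once at startup. Here is a minimal sketch; the requireEnv helper name is my own invention, not part of dotenv or the aws-sdk:

```typescript
// Hypothetical helper: read a required environment variable,
// throwing at startup rather than failing later mid-request.
export function requireEnv(name: string): string {
    const value = process.env[name];
    if (!value) {
        throw new Error(`Missing required environment variable: ${name}`);
    }
    return value;
}
```

Calling requireEnv('S3_BUCKET_NAME') and friends once when the app boots surfaces a misconfigured environment immediately instead of on the first upload.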

Now that we have the information needed to create a connection, let's go ahead and install the package for initialising one.

The first thing we need is aws-sdk, the npm package used for connecting to and interacting with S3 storage. Run the following command to install it:

npm install aws-sdk

In this example we are using TypeScript. The aws-sdk v2 package ships with its own TypeScript definitions, so no separate install is needed for types. (Note that the @aws-sdk/types package on npm targets the newer modular v3 SDK, not the aws-sdk package used here.) If you are using JavaScript you can ignore this note.

Setting up the connection

Once installed, we can create our connection.ts:

import * as S3 from 'aws-sdk/clients/s3';

export default function Connect(path: string | null = ''): S3 {
    return new S3({
        apiVersion: 'latest',
        // Append the optional path to the bucket's endpoint URL
        endpoint: `${process.env.S3_ENDPOINT_URL}${path ?? ''}`,
        credentials: {
            // Non-null assertions: these variables are required, so fail fast if unset
            accessKeyId: process.env.S3_ACCESS_KEY!,
            secretAccessKey: process.env.S3_SECRET_KEY!,
        },
    });
}

Let's go through this code. Firstly we import the S3 client from the aws-sdk. The aws-sdk includes a lot of features, so we only need to import the S3 client for this implementation.

Next we create our Connect function. This function instantiates an S3 client using the credentials that we stored in our environment variables.

Our Connect function takes an optional path parameter. When this is set we can specify the path that we want to upload our file to. For example, we may want to upload an image to a subdirectory called images, so we would set the path to 'images'. This path is then appended to the endpoint URL, so our endpoint becomes https://eu.amazons3.com/images. If we don't set the path parameter, the connection will default to the bucket's root.

In our configuration we also provide an S3 API version. In this example I will use latest, but you may want to pin a version that works for you. You can read more about API versions and why you should pick one here.

Uploading a file

Now that we have a working S3 client instance, we can use it to upload files. Let's create a function for uploading a file. For this example we are using multer, so TypeScript users can install the types with npm i --save-dev @types/multer.

Our upload.ts will look like this:

import { PutObjectOutput, PutObjectRequest } from 'aws-sdk/clients/s3';
import { AWSError } from 'aws-sdk/lib/error';
import * as S3 from 'aws-sdk/clients/s3';
import Connect from './connection';

export default async function Upload(bucket: string, file: Express.Multer.File, objectName: string, path: string | null = null): Promise<string> {
    return new Promise<string>((resolve, reject) => {
        const s3: S3 = Connect(path);
        const params: PutObjectRequest = { Bucket: bucket, Key: objectName, Body: file.buffer, ACL: 'public-read', ContentType: file.mimetype };
        s3.putObject(params, (err: AWSError, data: PutObjectOutput) => {
            // Return after rejecting so we don't also resolve on error
            if (err) return reject(err);
            // Omit the path segment when no subdirectory was supplied
            const prefix: string = path ? `${path}/` : '';
            resolve(`${process.env.S3_ENDPOINT_URL}${bucket}/${prefix}${objectName}`);
        });
    });
}

In our Upload function we are passing in 4 parameters:

  • bucket: The name of the bucket you set up with the provider and stored in our environment variable (e.g. my-bucket).
  • file: The actual file that we are uploading.
  • objectName: The name that we would like to use when we store the file in the cloud. This name should include your file extension. If you are uploading a gif then this should be image.gif as opposed to just image.
  • path (optional): Passed straight through to the connection we made previously. By default it is set to null, which means the file is uploaded to the root of the bucket. If you supply 'images' to this parameter then the file you upload will be stored in a subdirectory called images.

Our Upload function returns a Promise, which resolves with the URL of the uploaded file once the S3 client has finished uploading.

Inside our new Promise, we first use our Connect function to get an initialised S3 client, passing through our optional path parameter.

Then we create our S3 request parameters. In the parameters we set 5 options:

  • Bucket: This is the name of the bucket. We set this using our bucket parameter.
  • Key: This is the name that is used when the file is stored in the bucket. We use our objectName parameter here.
  • Body: This is the file we are uploading. This option takes a file buffer, so we use our parameter file.buffer.
  • ACL: This option is used to specify the access of the file we are uploading. In this example we are using 'public-read'. This means that anyone who has the URL of the file we upload can read it. If you want to read more about the different ACL types then read here.
  • ContentType: This is used to tell S3 the type of file we are uploading. It takes in a file mime type, which we pass in using file.mimetype.

Next we call the putObject method on the S3 client, passing in our request parameters and defining a callback. The callback gives us an error if the upload fails, so we check whether it has a value and reject our Promise if there is an error. If there is no error then we resolve our promise with the URL of our object, constructed from the endpoint URL, bucket name, path and object name. As an example, if we uploaded image.gif to an images folder inside my-bucket, the URL would be https://eu.amazons3.com/my-bucket/images/image.gif.
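One subtlety worth noting: when path is null, naive string interpolation would produce a URL containing the literal text null. Pulling the URL construction into a small pure helper makes it easy to get right and to test. A sketch; buildObjectUrl is a hypothetical name of my own, and it assumes the endpoint URL ends with a trailing slash as in the examples above:

```typescript
// Hypothetical helper: builds the public URL of an uploaded object.
// Assumes the endpoint URL already ends with a trailing slash
// (e.g. https://eu.amazons3.com/).
export function buildObjectUrl(
    endpoint: string,
    bucket: string,
    path: string | null,
    objectName: string,
): string {
    // Omit the path segment entirely when no subdirectory was given,
    // so a null path never ends up as the literal string "null"
    const prefix = path ? `${path}/` : '';
    return `${endpoint}${bucket}/${prefix}${objectName}`;
}
```

The Upload function could then resolve with buildObjectUrl(process.env.S3_ENDPOINT_URL, bucket, path, objectName) instead of building the string inline.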

Deleting a file

When it comes to deleting a file, the process is very similar to uploading.

We can create a delete.ts:

import { DeleteObjectOutput, DeleteObjectRequest } from 'aws-sdk/clients/s3';
import { AWSError } from 'aws-sdk/lib/error';
import * as S3 from 'aws-sdk/clients/s3';
import Connect from './connection';

export default async function Delete(bucket: string, objectName: string, path: string | null = null): Promise<DeleteObjectOutput> {
    return new Promise<DeleteObjectOutput>((resolve, reject) => {
        const s3: S3 = Connect(path);
        const params: DeleteObjectRequest = { Bucket: bucket, Key: objectName };
        s3.deleteObject(params, (err: AWSError, data: DeleteObjectOutput) => {
            // Return after rejecting so we don't also resolve on error
            if (err) return reject(err);
            resolve(data);
        });
    });
}

This function takes in 3 of the parameters we have seen before:

  • bucket: The name of the bucket we created with the provider and stored in the environment variables.
  • objectName: The name that we used when storing the object, e.g. image.gif.
  • path: The path to the object. E.g. 'images' would delete the object with the supplied objectName inside the images subdirectory. If null, this defaults to the root of the bucket.

Inside our Promise we use our Connect function to get an initialised S3 client.

We create our request parameters, setting the Bucket and Key options using our function's parameters.

Then we use the deleteObject method on the client, passing in our request parameters and defining a callback. Just like before, we check whether the callback received an error and reject the promise if one occurred.

If no error occurs then we resolve the deleteObject response.

Setting up our Express endpoints

We have defined some functions to connect to our S3 provider, upload objects and delete objects. The next question is how do we use them?

We will use Express and Multer as an example to demonstrate how to use them.

Using our Express app we can define a POST endpoint like the following:

app.post(
    '/upload',
    multer().single('formFile'),
    async (req, res) => {
        // Return early so we don't attempt to upload a missing file
        if (!req.file) return res.status(400).send('Bad Request: No file was uploaded');
        // If you want to retain the original filename and extension just use originalname like below
        // const filename: string = req.file.originalname;
        const fileExtension: string = req.file.originalname.split('.').pop() ?? '';
        const filename: string = `my-custom-filename.${fileExtension}`;
        const url: string = await Upload(process.env.S3_BUCKET_NAME!, req.file, filename, 'images/logo');
        res.status(201).send(url);
    });

This creates an endpoint called /upload which accepts multipart form data. We use the multer middleware with this endpoint. The multer middleware will look in the submitted form data for the field with the key formFile. This key should be paired with a file. The middleware then attaches the file object to the request under the property file.

In our handler we check that a file has been supplied and return a Bad Request response if none was sent.

In the example I have shown how to use a custom filename. We read the file extension from our file's original name first. Then we create a new filename, appending the original file extension, e.g. my-custom-filename.gif.
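Note that split('.').pop() returns the whole name when a file has no dot in it at all, so if you expect extensionless uploads a slightly more defensive helper may be worthwhile. A sketch, with getExtension being a hypothetical name of my own:

```typescript
// Hypothetical helper: extract a file extension, or null when there isn't one.
export function getExtension(filename: string): string | null {
    const dot = filename.lastIndexOf('.');
    // No dot, or only a leading dot (e.g. ".env"), means no usable extension
    if (dot <= 0) return null;
    return filename.slice(dot + 1);
}
```

With this you can decide explicitly what to do when getExtension returns null, instead of silently appending the whole original name as an "extension".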

Next we call our Upload function. We pass in the bucket name stored in our environment variables; the file in the request; our custom filename; and in the example I am uploading to the subdirectory images/logo.

After awaiting our Upload we will have the URL of the uploaded file, and we can send this in our endpoint's response.

If you would like to see how to use the delete function with an Express endpoint then take a look at the example project.

Example project

I have created a full working example project on GitHub which uses the code we have gone through today. Check it out here.
