
Build a file upload service with NodeJS, Typescript, Clean Architecture and AWS S3

joaosczip ・ 5 min read

In the old days, before the cloud became what it is today, the way programs stored their files (images, documents, etc.) was a little different from now.

We had local servers running our applications, and any file that was uploaded was stored on the same server the application ran on (or on another machine, but still locally).

The problem with that architecture is that it can cause several issues for our server, especially when the volume of stored files is significant (gigabytes of files, for example). The two main problems are storage and security. Storage pressure can slow the server down, and as for security, losing files is disastrous, especially if those files are confidential.

Client-Server architecture with local servers

Today the scenario is a little different: instead of local servers, we have many cloud providers offering all kinds of services, including storage. In this example, I'll show you how to create an API with NodeJS, Express, and Typescript to handle file uploads to AWS S3 (Simple Storage Service), applying Clean Architecture, IoC, and SOLID principles, and using multer to handle the form-data that we'll send.

I’m assuming that you already have an AWS account and an IAM user with the right permissions to use S3.

API Structure

Let's start with our domain layer. In NodeJS environments, we don't have the File interface that browsers provide, so we need to create it on our own. Inside the domain folder, let's create models/file.ts.
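A minimal sketch of this interface could look like the following; the exact field names are assumptions, modeled on the properties multer exposes:

```typescript
// domain/models/file.ts
export interface File {
  name: string;      // original file name, e.g. "avatar.png"
  type: string;      // MIME type, e.g. "image/png"
  content: Buffer;   // raw file bytes
  size: number;      // size in bytes
  extension: string; // e.g. "png"
}
```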

This interface provides all the information that we need to handle and manipulate the incoming files. It’s very similar to the interface from the browsers.

We also need an interface in our domain to represent the response of the file upload action. In our case it'll be very simple: just the path of the object in the storage service.
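It could be as small as this (the name UploadedFile is an assumption):

```typescript
// domain/models/uploaded-file.ts
export interface UploadedFile {
  path: string; // the object's key/path in the storage service
}
```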

Now we can start with our main use case, the File Upload. Inside domain, let's create the file usecases/file-upload.ts.
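A sketch of the contract, assuming the File and UploadedFile models above and a `@/domain` path alias:

```typescript
// domain/usecases/file-upload.ts
import { File } from '@/domain/models/file';
import { UploadedFile } from '@/domain/models/uploaded-file';

// Contract only: the concrete implementation lives in the application layer
export interface FileUpload {
  upload(files: File[]): Promise<UploadedFile[]>;
}
```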

Remember that our use case is just the contract; the implementation will live in the application layer. As you can see, the imports use tsconfig-paths aliases, keeping them cleaner and more flexible to change.

When I know exactly what I need to implement and what I'll use to do it, I always start with the domain layer, because it's the core of our service.

Moving on to our concrete implementation in the application layer, let's create a class that implements the FileUpload interface. This class will be responsible for grouping the files received from the controller and sending them to the infra service that communicates with the AWS SDK.

Before creating our class, let's define the protocol for the infra service. We'll call it FileUploader and place it in application/protocols/file-uploader.ts.
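A sketch of that protocol; the exact signature (single file or list, returning the resulting path or paths) is an assumption based on how it is used later:

```typescript
// application/protocols/file-uploader.ts
import { File } from '@/domain/models/file';

// Infra-side protocol: receives one file (or several) and returns the
// resulting path(s) of the objects in the storage service
export interface FileUploader {
  upload(file: File | File[]): Promise<string | string[]>;
}
```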

Now we can create our implementation of FileUpload (the use case, not the infra protocol). Our concrete class will be called RemoteFileUpload.
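A sketch of what it might look like with tsyringe; the 'FileUploader' token string and the mapping of returned paths are assumptions:

```typescript
// application/usecases/remote-file-upload.ts
import { inject, injectable } from 'tsyringe';
import { File } from '@/domain/models/file';
import { UploadedFile } from '@/domain/models/uploaded-file';
import { FileUpload } from '@/domain/usecases/file-upload';
import { FileUploader } from '@/application/protocols/file-uploader';

@injectable()
export class RemoteFileUpload implements FileUpload {
  constructor(
    @inject('FileUploader') private readonly fileUploader: FileUploader
  ) {}

  async upload(files: File[]): Promise<UploadedFile[]> {
    // Send the received files to the infra service and collect the paths
    const paths = (await this.fileUploader.upload(files)) as string[];
    return paths.map((path) => ({ path }));
  }
}
```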

This is a very simple implementation. As you can see, we're implementing FileUpload and using tsyringe as the IoC container to inject an implementation of FileUploader (which we haven't created yet).

We need to add the @injectable decorator to indicate that all of the class's dependencies will be injected when the application starts. The @inject decorator receives the token that we want to resolve, in our case, FileUploader.

Now we need to create the class that implements FileUploader and communicates with the aws-sdk. This class belongs in the infra layer, so let's create infra/aws-file-uploader.ts.
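A sketch using the v2 aws-sdk S3 client; the config module and the choice of the file name as the S3 key are assumptions:

```typescript
// infra/aws-file-uploader.ts
import { S3 } from 'aws-sdk';
import { File } from '@/domain/models/file';
import { FileUploader } from '@/application/protocols/file-uploader';
import { config } from '@/config'; // hypothetical module exposing bucketName/region

export class AwsFileUploader implements FileUploader {
  private readonly s3 = new S3({ region: config.region });

  async upload(files: File | File[]): Promise<string | string[]> {
    if (Array.isArray(files)) {
      // Upload the files one at a time and collect the resulting keys
      return Promise.all(files.map((file) => this.uploadSingle(file)));
    }
    return this.uploadSingle(files);
  }

  private async uploadSingle(file: File): Promise<string> {
    const result = await this.s3
      .upload({
        Bucket: config.bucketName,
        Key: file.name,
        Body: file.content,
        ContentType: file.type,
      })
      .promise();
    return result.Key;
  }
}
```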

This class uses the aws-sdk library (don't forget to install it) to communicate with AWS S3 and perform the uploads. It can send a single file or a list of files (one at a time). Here, the implementation itself isn't the most important part; pay attention to the architectural details. The way I chose to implement it was simply what met my needs.

As you can see, I'm using a config file to store the bucket name and the region. This file reads the environment variables that I defined in my docker-compose (you can define them in your .env if running locally).
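A minimal sketch of that config module; the environment variable names are assumptions:

```typescript
// config/index.ts
// Reads the AWS settings from the environment (docker-compose or a local .env)
export const config = {
  bucketName: process.env.AWS_BUCKET_NAME ?? '',
  region: process.env.AWS_REGION ?? 'us-east-1',
};
```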

Remember that in current versions of the SDK there's no need to pass the AWS secret and key when instantiating a service; it will look for those values in your environment.

With our application and infra layers complete, it's time to create the controller that will handle the requests and call the use case with the files.
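A sketch of the controller; the HttpRequest/HttpResponse shapes, the 'FileUpload' token, and the status codes are assumptions:

```typescript
// presentation/controllers/upload-files-controller.ts
import { inject, injectable } from 'tsyringe';
import { File } from '@/domain/models/file';
import { FileUpload } from '@/domain/usecases/file-upload';

// Thin abstractions over Express's request/response objects
type HttpRequest = { body: { files: File[] } };
type HttpResponse = { statusCode: number; body: unknown };

interface Controller {
  handle(request: HttpRequest): Promise<HttpResponse>;
}

@injectable()
export class UploadFilesController implements Controller {
  constructor(
    @inject('FileUpload') private readonly fileUpload: FileUpload
  ) {}

  async handle(request: HttpRequest): Promise<HttpResponse> {
    try {
      const uploaded = await this.fileUpload.upload(request.body.files);
      return { statusCode: 201, body: uploaded };
    } catch (error) {
      return { statusCode: 500, body: { error: 'upload failed' } };
    }
  }
}
```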

The controller also uses the IoC container to receive a FileUpload instance, which in our case will be RemoteFileUpload. I won't go deep into the details of the HttpRequest and HttpResponse types; they just abstract Express's request and response objects. The interface that the controller implements has just one method, handle, which receives the request and returns a response.

With the controller created, let's create our route and call the controller inside it. Another thing to notice about the route is that we need multer to handle the files on the request (because we can't send files as JSON).
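A sketch of the route; the '/upload' path and the 'files' form field name are assumptions:

```typescript
// main/routes/upload-files.ts
import { Router } from 'express';
import multer from 'multer';
import { container } from 'tsyringe';
import { fileHandler } from '@/main/middlewares/file-handler';
import { UploadFilesController } from '@/presentation/controllers/upload-files-controller';

// memoryStorage keeps the uploaded bytes in a Buffer on req.files
const upload = multer({ storage: multer.memoryStorage() });
const router = Router();

router.post('/upload', upload.array('files'), fileHandler, async (req, res) => {
  const controller = container.resolve(UploadFilesController);
  const { statusCode, body } = await controller.handle({ body: req.body });
  res.status(statusCode).json(body);
});

export default router;
```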

Because multer has its own File type, we created a middleware to map the multer file to our own File interface.
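The mapping middleware could be sketched like this, assuming multer's memory storage (which exposes the bytes as a Buffer):

```typescript
// main/middlewares/file-handler.ts
import { Request, Response, NextFunction } from 'express';
import { File } from '@/domain/models/file';

// Maps multer's file objects to our domain File interface and puts the
// result on the request body, where the controller expects it
export const fileHandler = (
  req: Request,
  _res: Response,
  next: NextFunction
): void => {
  const multerFiles = (req.files ?? []) as Express.Multer.File[];
  const files: File[] = multerFiles.map((file) => ({
    name: file.originalname,
    type: file.mimetype,
    content: file.buffer,
    size: file.size,
    extension: file.originalname.split('.').pop() ?? '',
  }));
  req.body.files = files;
  next();
};
```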

As you can see, we first add the upload middleware from multer and then our own fileHandler middleware, so by the time the request reaches the controller, the files have been mapped to the format we expect and placed inside the body.

Now, the rest is just configuration. We need to configure our service startup to use express, and also set up the IoC container with the mapping of contract => implementation.
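The container registration could look like this; the token strings must match the ones used in the @inject decorators (both are assumptions carried over from the earlier sketches):

```typescript
// main/config/container.ts
import 'reflect-metadata';
import { container } from 'tsyringe';
import { RemoteFileUpload } from '@/application/usecases/remote-file-upload';
import { AwsFileUploader } from '@/infra/aws-file-uploader';

// contract => implementation mapping, resolved by tsyringe at runtime
container.register('FileUploader', { useClass: AwsFileUploader });
container.register('FileUpload', { useClass: RemoteFileUpload });
```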

Remember to put the import of the IoC configuration inside your entrypoint.

I hope this tutorial helped clarify some topics, like handling files with express, IoC containers, using the AWS SDK, and file uploads.

The goal of this article was to show how we can create a robust and flexible architecture even when we need to communicate with external services and/or use third-party libraries. The focus wasn't on Clean Architecture itself, but on what we can do using the principles behind it along with some SOLID principles.

I'll leave the link to the repository here if you're interested in seeing how the project turned out.
