Felipe Campos

Starter application using Node.js + React.js + Nginx + Docker, deployed to AWS EC2 - Step by step

Introduction

On the internet I see many ways to develop web applications, with different technologies and languages combined with each other, but I rarely see an up-to-date tutorial on how to take an application from the beginning all the way to production, running on a predetermined IP address that can be accessed through the browser.

Here I will cover a tutorial on how to carry out this task. The application is composed of React.js on the front-end, Node.js on the back-end, Nginx to serve the application in production, and Docker to containerize everything, finally running on an EC2 instance on AWS.


Step 1 - Accessing the repository

The first step is to access the web application repository, which can be accessed at this link: https://github.com/fco3lho/node-react-nginx-docker-starter

This application is very simple, containing only a POST route and a GET route. It basically consists of creating tasks and showing them on the screen.


Step 2 - Understand the application

Let's understand the application.

Front-end

In the front-end folder are all the React.js files. The main one is src/App.js, and the package.json file lists the dependencies used by src/App.js, such as Axios. Let's analyze:


  • At the top of the App() function in src/App.js, we have the declaration of 3 state variables that will be used by the next 2 functions.
  • In the middle, we have the handleSubmit() function, which sends the task to the back-end so it can be registered.
  • Below, we have the useEffect() hook, which runs when the application starts and again whenever its dependency changes, in this case the condition variable. Its job is to update the task list whenever a new item is inserted. This logic is sketched below.
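
The screenshot is not reproduced here, so below is a minimal sketch of this logic, assuming the usual axios + hooks setup (the variable names and the /api/insert and /api/get routes are assumptions):

```jsx
import { useState, useEffect } from "react";
import axios from "axios";

function App() {
  // The 3 variables used by the two functions below
  const [task, setTask] = useState("");
  const [tasks, setTasks] = useState([]);
  const [condition, setCondition] = useState(false);

  // Sends the new task to the back-end so it can register it
  const handleSubmit = async (event) => {
    event.preventDefault();
    await axios.post("http://localhost/api/insert", { task });
    setTask("");
    setCondition(!condition); // re-runs the effect below, refreshing the list
  };

  // Runs on startup and whenever `condition` changes
  useEffect(() => {
    axios.get("http://localhost/api/get").then((res) => setTasks(res.data));
  }, [condition]);

  // ...the returned JSX is sketched in the next section...
}

export default App;
```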


Here in the front-end HTML, there are two parts:

  • The first part is the form for entering a task.
  • The second part is a map that renders all registered tasks in order of ID, as sketched below.
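
Continuing the sketch from the previous section, the returned JSX looks roughly like this (the markup details are assumptions):

```jsx
return (
  <div>
    {/* First part: the form for entering a task */}
    <form onSubmit={handleSubmit}>
      <input
        value={task}
        onChange={(event) => setTask(event.target.value)}
        placeholder="New task"
      />
      <button type="submit">Add</button>
    </form>

    {/* Second part: map that renders all registered tasks in order of ID */}
    <ul>
      {tasks.map((item) => (
        <li key={item.id}>{item.description}</li>
      ))}
    </ul>
  </div>
);
```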

Back-end

In the back-end folder are all the Node.js files. The main one is index.js, and the package.json file lists all the dependencies used by index.js, such as express and mysql2. Let's analyze:


  • At the top of the code, all the dependencies are imported and initialized.
  • In the middle, the connection to the database is created: the host is defined as mysql, since that will be the name of our containerized service later, the user and password default to root, and the database is named tasks.
  • At the bottom, the connection is established, logging a success message on connection or an error message on failure. A sketch of this part follows below.
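
A minimal sketch of this part of index.js, assuming the usual express + mysql2 setup (exact variable names may differ from the original):

```javascript
const express = require("express");
const mysql = require("mysql2");

const app = express();
app.use(express.json());

// Host is "mysql" because that will be the name of the containerized service;
// user and password default to root, and the database is named tasks
const db = mysql.createConnection({
  host: "mysql",
  user: "root",
  password: "root",
  database: "tasks",
});

// Establish the connection, logging success or failure
db.connect((err) => {
  if (err) {
    console.log("Error connecting to the database: " + err);
  } else {
    console.log("Connected to the database!");
  }
});
```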


  • At the top of the code, the GET request is made, which returns all the content present in the database to the front-end.

  • In the middle, the POST request is made, which inserts a new task, coming from the front-end, into the database.

  • At the bottom, the server is started, in this case on port 3001. These routes are sketched below.
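
A sketch of the routes and server startup (the route paths, table and column names are assumptions):

```javascript
// GET: return all tasks in the database to the front-end
app.get("/api/get", (req, res) => {
  db.query("SELECT * FROM tasks ORDER BY id", (err, result) => {
    if (err) return res.status(500).send(err);
    res.send(result);
  });
});

// POST: insert a new task coming from the front-end
app.post("/api/insert", (req, res) => {
  const { task } = req.body;
  db.query("INSERT INTO tasks (description) VALUES (?)", [task], (err, result) => {
    if (err) return res.status(500).send(err);
    res.send(result);
  });
});

// Run the server on port 3001
app.listen(3001, () => {
  console.log("Server running on port 3001");
});
```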

Database


  • Schema for the database, creating just a single table with a task ID and a description, as sketched below.
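
A minimal sketch of that schema (the table and column names are assumptions; the database itself is created by docker-compose later):

```sql
-- Single table holding a task ID and its description
CREATE TABLE IF NOT EXISTS tasks (
  id INT AUTO_INCREMENT PRIMARY KEY,
  description VARCHAR(255) NOT NULL
);
```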

Nginx

The nginx folder contains the default.conf file, and it lives inside the front-end folder, as we will need React's build folder and default.conf in a single place so they can be used together in the Dockerfile.


  • At the top, we declare which port Nginx will listen on.
  • In the middle, we handle the / location, which serves the front-end, along with the CORS configuration to accept requests from all origins.
  • At the bottom, we handle the /api location, the route the front-end uses to communicate with the back-end; its proxy_pass directive points at the back-end address. In short, this block proxies every /api request to the back-end, as sketched below.
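
A sketch of default.conf along those lines (the exact directives in the repository may differ):

```nginx
server {
    # Port Nginx listens on
    listen 80;

    # Serve the React build on /
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        add_header Access-Control-Allow-Origin *;  # accept requests from all origins
    }

    # Route the front-end uses to reach the back-end
    location /api {
        proxy_pass http://localhost:3001;
    }
}
```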

Dockerfile front-end


  • In line 2, the Node image is used and named builder for Docker.
  • In lines 4 and 5, the /usr/src/app directory is created and defined as the working directory.
  • In lines 7 and 8, the package.json file is copied from the local machine to the container's working directory, and then the npm install command is executed.
  • On line 10, the rest of the files from the local machine are copied to the container's working directory.
  • On line 12, the npm run build command is executed to create a production-optimized version of the application's front-end.
  • In line 15, the Nginx image is used.
  • On line 17, port 80 is exposed.
  • On line 19, the file ./nginx/default.conf from the local machine is copied to the Nginx configuration folder, /etc/nginx/conf.d/default.conf, in the container's working directory.
  • On line 20, the builder's build folder, named on line 2, is copied to the Nginx application folder, /usr/share/nginx/html. The whole file is sketched below.
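
Putting these bullets together, the front-end Dockerfile looks roughly like this (the line numbers above refer to the original file; the exact base-image tags are assumptions):

```dockerfile
# Build stage: Node image, named "builder"
FROM node:alpine AS builder

# Create /usr/src/app and make it the working directory
WORKDIR /usr/src/app

# Copy package.json and install the dependencies
COPY package.json .
RUN npm install

# Copy the rest of the files
COPY . .

# Create a production-optimized build
RUN npm run build

# Production stage: Nginx image
FROM nginx

EXPOSE 80

# Nginx configuration and the builder's build folder
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
```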

Dockerfile back-end


  • The same logic as the front-end Dockerfile is followed; however, on line 11 pm2 is installed globally, on line 13 port 3001 is exposed, and on line 15 the pm2 command is used to run the back-end, as sketched below.
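
A sketch of the back-end Dockerfile under the same assumptions (pm2-runtime is the usual way to keep pm2 in the foreground inside a container; the original may use a different pm2 command):

```dockerfile
FROM node:alpine

WORKDIR /usr/src/app

COPY package.json .
RUN npm install

COPY . .

# Line 11: global installation of pm2
RUN npm install -g pm2

# Line 13: back-end port
EXPOSE 3001

# Line 15: run the back-end with pm2
CMD ["pm2-runtime", "index.js"]
```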

docker-compose.yml

This is the file that orchestrates the other Dockerfiles. Its most important parts are the creation of the MySQL container and the dependencies between containers, since the rest of the front-end and back-end logic lives in the Dockerfiles.


  • In the environment of the mysql container, the database password and the database name are declared.
  • In the volumes of the mysql container, the database schema we need is copied to the MySQL initialization directory in the container, which creates our tasks table.
  • The depends_on entries define what each container needs in order to start: the backend depends on the mysql container's health check, and the frontend depends on the backend. A sketch of the file follows below.
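
A sketch of docker-compose.yml based on this description (the service names mysql, backend and frontend come from the text; the schema path and healthcheck details are assumptions):

```yaml
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: tasks
    volumes:
      # Schema copied to MySQL's initialization directory, creating the tasks table
      - ./database/schema.sql:/docker-entrypoint-initdb.d/schema.sql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-proot"]
      interval: 5s
      timeout: 5s
      retries: 10

  backend:
    build: ./backend
    ports:
      - "3001:3001"
    depends_on:
      mysql:
        condition: service_healthy

  frontend:
    build: ./frontend
    ports:
      - "80:80"
    depends_on:
      - backend
```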

Step 3 - Create an EC2 instance on AWS

  1. Log in to the AWS console.
  2. Search for and open the EC2 service.
  3. Click 'Launch instance' and create an instance with:
    • Operating system: Ubuntu 22.04;
    • Architecture: 64-bit;
    • Instance type: t2.micro (Free Tier);
    • A new RSA key pair with the .pem extension. The key is downloaded when you create it, and you must keep it to be able to connect to the instance.
  4. The instance creation summary should look like this: [screenshot of the instance summary]
  5. Click 'Launch instance'.

Note: Take care not to be charged by AWS if you only want to use the Free Tier.


Step 4 - Defining the instance security groups

  1. Now go to the instances tab and click on the ID of the instance you created.
  2. In the instance details, click on the 'Security' tab.
  3. Now click on the security group, which in my case is named 'sg-03e9813c42229fff2 (launch-wizard-1)'.
  4. Click 'Edit inbound rules'.
  5. Save the following inbound rules: [screenshot of the inbound rules]

At a minimum, the instance must accept SSH on port 22 (to connect from the terminal), HTTP on port 80 (for Nginx) and TCP on port 3001, since the front-end reaches the back-end through the instance's public IP.

Step 5 - Connect to the instance through the terminal

  1. Return to the instances tab and click on your instance ID.
  2. Click 'Connect'.
  3. The tutorial on how to connect to your instance will appear.

In my case, my key is in the 'Downloads' folder, so I will open the terminal there and execute the commands mentioned in the tutorial, like this:

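The connect commands look like this (the key name and public DNS below are placeholders; use the values AWS shows in the connect tab):

```bash
# Restrict the key's permissions, as required by ssh
chmod 400 your-key.pem

# Connect as the default ubuntu user, using the instance's public DNS
ssh -i "your-key.pem" ubuntu@<your-instance-public-dns>
```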

Here we are already connected to the instance.


Step 6 - Install Docker on the instance

1.

sudo apt update

2.

sudo apt upgrade -y

3.

sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

4.

echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

5.

sudo apt update
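With the repository configured in the previous steps, Docker Engine itself still needs to be installed. The package list below is the one from Docker's official Ubuntu guide (the Compose plugin is included because Step 8 uses docker compose):

6.

sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y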

The docker user group exists but contains no users, which is why you’re required to use sudo to run Docker commands.

To create the docker group and add your user:

1.

sudo groupadd docker

2.

sudo usermod -aG docker $USER

3.

newgrp docker

Step 7 - Clone the application repository and configure it with the instance IP

  1. Clone the repository with the link https://github.com/fco3lho/node-react-nginx-docker-starter.git, using the following command:

    git clone https://github.com/fco3lho/node-react-nginx-docker-starter.git
    
  2. Enter the project folder using the following command:

    cd node-react-nginx-docker-starter/
    

Now, let's copy the public IPv4 address of the instance, which can be found in the instance summary:

[screenshot of the instance summary showing the public IPv4 address]

Copy the address, which in my case is 15.228.99.248.

Now, let's first configure Nginx, going into its folder and modifying the configuration file:

  1. Use the following command to enter the Nginx configuration folder:

    cd frontend/nginx
    
  2. Use the following command to modify the configuration file:

    nano default.conf
    
  3. Change the proxy_pass in the location /api block from localhost to the public IPv4 address of your instance, as in the sketch after this list.

  4. Save using the command CTRL + O, Enter to confirm and CTRL + X to exit.
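
After the edit, the /api block looks like this (shown with my instance's IP; use your own):

```nginx
location /api {
    proxy_pass http://15.228.99.248:3001;
}
```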

Now let's configure the address that the frontend sends its requests to in App.js.

  1. Use the following command to go to the App.js folder:

    cd ../src/
    
  2. Use the following command to modify App.js:

    nano App.js
    
  3. Modify the two Axios routes, replacing localhost with the public IP address of the instance, as in the sketch after this list.

  4. Save using the command CTRL + O, Enter to confirm and CTRL + X to exit.
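
The change looks like this, using the assumed route names from the earlier sketch and my instance's IP (use your own):

```javascript
// Before:
await axios.post("http://localhost/api/insert", { task });
axios.get("http://localhost/api/get").then((res) => setTasks(res.data));

// After:
await axios.post("http://15.228.99.248/api/insert", { task });
axios.get("http://15.228.99.248/api/get").then((res) => setTasks(res.data));
```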

Now the application is configured to run on the EC2 instance. Let's execute it.


Step 8 - Run the application in Docker

  1. Go back to the project root folder with the following command:

    cd ../../
    
  2. Run the application with Docker, using the following command:

    docker compose up -d
    

Step 9 - Congratulations!

Now you can access the application in your browser, just enter the public IPv4 address of your instance in the address bar.

[screenshot of the application running in the browser]

Notes

  • Be careful when creating or changing configurations in AWS, as this may incur some costs.
  • As this is an instance with modest Free Tier resources, it is sometimes necessary to reload the page for a newly created task to appear.
  • I apologize for any English errors; I enjoyed creating this tutorial while also practicing my English.

If you've made it this far, follow me on GitHub and connect with me on LinkedIn.
