TL;DR
In the cloud computing world, a containerized application promotes the decoupling principle and offers a logical packaging mechanism: container-based applications can be deployed easily and behave consistently across environments. As a React enthusiast, I am going to share with you yet another way to package a React application.
Preparation
For the following steps, I assume that you have some basic knowledge of Docker, React, and Linux-based folder structures.
Let's start
Init our React application
For convenience's sake, I initialize a blank React application with create-react-app:
phuong@Arch ~/kitchen/js$ npx create-react-app fooapp
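Before containerizing anything, it can't hurt to sanity-check that the generated app builds and runs locally. These are the standard scripts that create-react-app sets up in package.json:

```shell
cd fooapp
# Start the development server on http://localhost:3000 (Ctrl+C to stop)
npm start
# Or produce the optimized production bundle in the build/ directory,
# which is exactly what the Docker build stage will do later
npm run build
```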
Build our application using node:alpine image
We will build the application inside a Node.js image to guarantee complete isolation. After changing into the application folder (in this case, cd fooapp), create a file named Dockerfile like below:
FROM node:alpine as builder
WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
Line 1: We declare the image we use to build our React app and attach the builder label to it.
Line 2: The WORKDIR directive sets /app as the working directory inside the container.
Line 3: Copy our application source into the container.
Line 4: Install the dependencies of our React application inside the container.
Line 5: Run the build command; the application is compiled into chunks and saved in a directory named build.
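A common refinement (optional, and not required for the steps below) is to copy the package manifests before the rest of the source, so Docker can cache the npm install layer across code changes:

```dockerfile
FROM node:alpine as builder
WORKDIR /app
# Copy only the dependency manifests first; this layer stays cached
# until package.json or package-lock.json actually changes
COPY package.json package-lock.json ./
RUN npm install
# Copy the rest of the source; code edits no longer re-run npm install
COPY . ./
RUN npm run build
```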
Serving our application using Nginx
But wait: our built application obviously cannot serve itself; we need a server to serve it as static resources. I recommend the nginx image as our server because of its popularity, low resource consumption, simple configuration, and high performance.
We need a configuration file for the nginx server, so let's create nginx.conf in the root of the application folder. Note the try_files directive: it falls back to index.html so that client-side routes (for example, those handled by React Router) still resolve instead of returning 404:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
To avoid copying node_modules and other unwanted folders into our container, we simply list them in a .dockerignore file:
.git
node_modules
build
So our complete Dockerfile will be:
FROM node:alpine as builder
WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Line 7: The FROM directive starts a second stage based on the nginx:alpine image (the node:alpine stage is used only for building).
Line 8: We copy our nginx configuration file into the container.
Line 9: --from=builder instructs Docker to copy the build folder from stage 1, which we labeled builder above.
Line 10: Expose port 80 to the outside of the container.
Line 11: This directive tells nginx to stay in the foreground, which follows the container best practice of one process per container.
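One benefit of this multi-stage build is that node_modules and the Node.js toolchain never reach the final image. If you ever need to debug the build stage on its own, Docker's --target flag lets you build just that stage:

```shell
# Build only the first stage (labeled "builder" in the Dockerfile)
docker build --target builder -t fooapp:builder .
# Compare image sizes: the nginx-based image should be far smaller
docker images | grep fooapp
```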
Wrap everything up
Let's check our directory: it should now contain the Dockerfile, nginx.conf, and .dockerignore alongside the application sources.
Let's start building our image with the command:
docker build -t fooapp:v1 .
To verify that everything is fine, we run our newly built container with the command:
docker run --rm -d -p 8080:80 fooapp:v1
The --rm flag tells Docker to remove the container once it stops, -d runs it in detached mode, and -p 8080:80 binds port 8080 on our host machine to port 80 of the application container.
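Assuming the container is up, a quick way to verify the port mapping from the command line (in addition to the browser) is:

```shell
# Should return HTTP/1.1 200 OK with a Server: nginx header
curl -I http://localhost:8080
```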
Voilà
Now we should be able to access our application from the browser at http://localhost:8080.
In conclusion, I would like to thank you for taking the time to read my first post. The steps and arguments above are just my personal thoughts; if anything is wrong, let me hear from you. Feel free to drop a comment below. Thanks. :)
P.S.: I also published my Git repo for this post at