DEV Community

Mac

Dockerizing React app and Express API with MongoDB

A simple guide on how to move your React app, Express API, and MongoDB into Docker containers.

For the sake of simplicity, I assume you already have a working front-end and back-end, as well as a connected database.

The best approach is to keep both the api and client repos in one folder. You can have a single remote repo containing both, or use two separate remote repos and combine them under a parent repo using git submodules. That's what I did.
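The submodule setup can be sketched as below. To keep the example self-contained, two throwaway local repos stand in for your real api and client remotes (with real remotes you'd pass their URLs instead, and the `protocol.file.allow` override wouldn't be needed):

```shell
set -e
work=$(mktemp -d)

# two stand-in repos playing the role of your api and client remotes
for name in api client; do
  git init -q "$work/$name"
  git -C "$work/$name" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"
done

# the parent repo that pulls both in under services/
git init -q "$work/blog"
cd "$work/blog"
# newer git versions block file:// submodules by default, hence the override
git -c protocol.file.allow=always submodule add "$work/api" services/api
git -c protocol.file.allow=always submodule add "$work/client" services/client
git -c user.email=dev@example.com -c user.name=dev \
  commit -q -m "add api and client as submodules"
git submodule status
```

After this, `.gitmodules` in the parent repo records where each submodule comes from, and `services/api` and `services/client` are checked-out working trees.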

React App

I used Create-React-App (CRA) with TypeScript for my project. It was a simple blog with a couple of views.

The first thing is to create a Dockerfile in the client root folder. To do that just type:

$ touch Dockerfile

Open the file and let's fill it out. I'm using TypeScript with my CRA, so first I have to build the application and then serve the output as static files. To achieve that, we'll go with a two-stage Docker build.

The first stage uses node to build the app. I use the alpine variant, as it's the lightest, so our image stays tiny.

FROM node:12-alpine as builder

WORKDIR /app
COPY package.json /app/package.json
RUN npm install
COPY . /app
RUN npm run build

That’s how the beginning of the Dockerfile looks. We use node:12-alpine as the builder stage, then set the working directory to /app, which creates a new folder in the container. We copy package.json into that folder and install all packages. Next, we copy everything from the /services/client folder into the container. The last bit of this step is to run the build.
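Since `COPY . /app` copies the whole client folder, it's worth adding a `.dockerignore` next to the Dockerfile so your local `node_modules`, old build output, and git metadata don't leak into the image. A minimal sketch:

```
node_modules
build
.git
```

This also speeds up the build, because the Docker build context sent to the daemon gets much smaller.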

Now we have to serve the freshly created build. For that we'll use nginx, again the alpine variant to cut down on size.

FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

We copy the build output from the previous stage into nginx's html folder, then expose port 80, the port on which the container will listen for connections. The last line starts nginx in the foreground.
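One caveat if your app uses client-side routing (e.g. React Router): nginx's default config will return 404 when someone refreshes a deep link, because there's no file at that path. A small custom config fixes that; this is a sketch, and the `nginx.conf` filename is my own choice:

```nginx
server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    # fall back to index.html so client-side routes survive a refresh
    try_files $uri $uri/ /index.html;
  }
}
```

To use it, add `COPY nginx.conf /etc/nginx/conf.d/default.conf` to the nginx stage of the Dockerfile.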

That’s all for the client part. The whole Dockerfile should look like this:

FROM node:12-alpine as builder

WORKDIR /app
COPY package.json /app/package.json
RUN npm install
COPY . /app
RUN npm run build

FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Express API

The API is quite simple as well: RESTful routes to create posts, auth, etc. Let's start by creating a Dockerfile in the api root folder, same as in the previous part.

I used ES6 features, so I have to compile everything to vanilla JS before running it, and I went with Babel. As you can guess, it's going to be a two-stage build again.

FROM node:12-alpine as builder

WORKDIR /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install
COPY . /app
RUN npm run build

It’s very similar to the client's Dockerfile, so I won't explain it again. There is one difference, though.

RUN apk --no-cache add --virtual builds-deps build-base python

I used bcrypt to hash passwords before saving them to the database. It's a very popular package, but it has some problems on alpine images. You might see errors similar to:

node-pre-gyp WARN Pre-built binaries not found for bcrypt@3.0.8 and node@12.16.1 (node-v72 ABI, musl) (falling back to source compile with node-gyp)

npm ERR! Failed at the bcrypt@3.0.8 install script.

It’s a well-known problem, and the solution is to install the additional build tools and Python before installing npm packages, so bcrypt can be compiled from source.

The next stage, similarly to the client, takes the built API and runs it with node.

FROM node:12-alpine

WORKDIR /app
COPY --from=builder /app/dist /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install --only=prod

EXPOSE 8080
CMD ["npm", "start"]

One difference is that we install only production packages. We don't need Babel anymore, as everything was compiled in stage one. Then we expose port 8080 to listen for requests and start node.
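For reference, the npm scripts this setup assumes might look like the fragment below in the api's package.json. The exact Babel CLI flags and the entry file name are assumptions about your project, not something fixed by Docker:

```json
{
  "scripts": {
    "build": "babel src --out-dir dist",
    "start": "node index.js"
  }
}
```

Note that `start` runs relative to /app in the final image, where the contents of dist were copied, so the entry point must match whatever your build emits there.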

The whole Dockerfile should look like this:

FROM node:12-alpine as builder

WORKDIR /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install
COPY . /app
RUN npm run build

FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/dist /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install --only=prod

EXPOSE 8080
CMD ["npm", "start"]

Docker-compose

The last step is to combine the api and client containers with a MongoDB container. For that we use a docker-compose file, placed in the parent repo's root directory, as it has to access both the client's and api's Dockerfiles.

Let’s create docker-compose file:

$ touch docker-compose.yml

We should end up with a file structure like the one below:

.
├── docker-compose.yml
└── services
    ├── api
    │   └── Dockerfile
    └── client
        └── Dockerfile

Fill the docker-compose file with the following code; I'll explain it afterwards.

version: "3"

services:
  api:
    build: ./services/api
    ports:
      - "8080:8080"
    depends_on:
      - db
    container_name: blog-api

  client:
    build: ./services/client
    ports:
      - "80:80"
    container_name: blog-client

  db:
    image: mongo
    ports:
      - "27017:27017"
    container_name: blog-db

It’s really as simple as that. We have three services: client, api, and db. There is no Dockerfile for mongo; Docker will download the image from Docker Hub and create a container from it. That means our database is ephemeral, with data lost when the container is removed, but for a start it's enough.
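If you want the data to survive container removal, a named volume can be mounted at Mongo's data directory. A sketch of how the db service could look with persistence (the volume name mongo-data is my own choice):

```yaml
  db:
    image: mongo
    ports:
      - "27017:27017"
    container_name: blog-db
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data:
```

The top-level volumes key declares the named volume; Docker manages where it lives on the host.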

In the api and client services we have a build key, which points to each service's Dockerfile location (its root folder). The ports key maps container ports to host ports, and containers on the same docker-compose network can reach each other by name. The api service also has a depends_on key, which tells Docker to start the db container before the api container. Note that depends_on only controls start order; it doesn't wait until MongoDB is actually ready to accept connections.
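Because depends_on only guarantees start order, the api can still hit the database before it accepts connections. A small retry wrapper on the api side helps; this is a minimal sketch, where `connect` stands in for your real call (e.g. mongoose.connect with your URI), not an actual driver API:

```javascript
// Retry an async connect function a few times with a fixed delay,
// so the API survives the window where the db container is still booting.
async function connectWithRetry(connect, retries = 5, delayMs = 2000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect();
    } catch (err) {
      if (attempt === retries) throw err;
      console.log(`DB not ready (attempt ${attempt}), retrying in ${delayMs}ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

module.exports = { connectWithRetry };
```

You'd call it once at startup, e.g. `connectWithRetry(() => mongoose.connect(uri))`, and only start listening for requests after it resolves.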

One more bit for MongoDB: in the back-end codebase we have to update the Mongo connection string. Usually we point to localhost:

mongodb://localhost:27017/blog

But with docker-compose it has to point to the container name:

mongodb://blog-db:27017/blog
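A convenient way to keep both local development and the container setup working is to read the host from an environment variable, which docker-compose can set via an environment key on the api service. A sketch, where MONGO_HOST, MONGO_PORT, and MONGO_DB are my own variable names:

```javascript
// Build the Mongo connection string from the environment,
// falling back to local-development defaults.
function mongoUri(env = process.env) {
  const host = env.MONGO_HOST || "localhost";
  const port = env.MONGO_PORT || "27017";
  const db = env.MONGO_DB || "blog";
  return `mongodb://${host}:${port}/${db}`;
}

module.exports = { mongoUri };
```

With `MONGO_HOST=blog-db` set in docker-compose, the api connects to the db container; without it, it falls back to localhost.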

The final touch is to run everything with the following command from the parent repo's root directory (where docker-compose.yml lives):

$ docker-compose up

That’s all. More reading than coding I guess. Thanks for staying till the end :)

Top comments (2)

Ali Almoullim

Nice tutorial! You should, however, include data persistence for the db container, as IIRC it's not set up by default.

Mac • Edited

Thanks 🙏 😊

That's the plan for later, to be honest. The idea was to create a quick and simple guide first.