Brandon Wie

Docker + ReactJS tutorial - Part 1

Development to Production workflow + multi-stage builds + docker compose

Generate a React app using CRA

npx create-react-app my-app # npm
# or
yarn create react-app my-app # yarn
  • go to the directory you just created
cd my-app

Create Dockerfile

touch Dockerfile

The Dockerfile contains all of the steps we need to customize an image.

# Dockerfile
# node doesn't have to be this version
FROM node:19-alpine3.16 
WORKDIR /app
COPY package.json .
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["yarn", "start"]

Let's go through it line by line.

FROM node:19-alpine3.16
  • we specify a base image; whenever you customize an image, you have to start from an existing image you want to build on
WORKDIR /app
  • sets the working directory of the container
  • any subsequent commands run and any files copied go into this directory, so everything related to our app ends up in one place
  • strictly speaking, the image would still build without it, but it keeps the container's filesystem organized
COPY package.json .
  • take the package.json file and copy it into the image
  • once package.json is in the image, we can run an npm/yarn install to install all of our dependencies
  • the working directory is already set, so the destination can be written as either /app or .
RUN yarn install
  • run yarn install (or npm install) to install the dependencies
COPY . .
  • now copy the rest of our source files into the image

Why copy package.json separately when COPY . . copies it again?

  • it's an optimization that lets Docker build the image faster on future builds
  • installing dependencies is a very expensive operation
  • each instruction in the Dockerfile represents a layer, and Docker builds the image from these layers
  • on build, Docker caches the result of each layer
  • package.json doesn't change often (only when we add or update a dependency), so Docker can reuse the cached results of the COPY package.json and RUN yarn install layers
  • when we build the image again, Docker uses those cached results instead of reinstalling everything
  • if we copied everything with a single COPY . ., Docker would have no way of knowing whether we changed our source code or the dependencies in package.json, so every build would have to run a full install regardless, and the cache couldn't be used
  • therefore, by splitting the COPY into two steps, we ensure that a full install only runs when package.json actually changes
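
To see the caching in action, build the image, change only a source file (not package.json), and build again; the dependency layers should be reported as cached. A minimal sketch, using the same react-image tag as the Build image step below:

# first build: every layer is executed
docker build -t react-image .

# edit a file under src/ (package.json untouched), then rebuild:
# the COPY package.json and RUN yarn install layers come from cache,
# and only COPY . . and the later steps are re-executed
docker build -t react-image .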

EXPOSE 3000
CMD ["yarn", "start"]
  • the CRA dev server listens on port 3000, so we expose port 3000, and finally we use CMD to run yarn start and launch the development server

Build image

docker build -t react-image .
  • -t, --tag: name and optionally a tag in the 'name:tag' format

  • by default, nothing outside the container can talk to processes inside the container

  • so EXPOSE 3000 by itself doesn't publish anything to the host; it only documents which port the app inside the container listens on
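
Before running the container, you can confirm the image was built (a quick sanity check, not part of the original steps):

# list local images whose repository name is react-image
docker images react-image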

docker run -d -p 3001:3000 --name react-app react-image
  • -d: run in detached mode (run in the background)
  • -p: port forwarding (forwarding port from the host machine to the container)
  • --name: name of the container

  • 3001: port on the host machine (the hole we poke open to the outside world)

  • 3000: port inside the container (where the forwarded traffic is sent)
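
If the dev server started correctly, you can verify the port forwarding from the host (assuming curl is available; visiting http://localhost:3001 in a browser works just as well):

# request the CRA dev server through the published host port
curl http://localhost:3001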


Docker networking - forwarding ports

[diagram: Docker networking - forwarding ports]


.dockerignore file

A .dockerignore file prevents unnecessary files from being copied into the image.

Let's check the files inside the container first:

docker exec -it react-app sh # or bash if sh doesn't work

ls -a
  • docker exec: run a command in a running container
  • -it: interactive mode (-i keeps STDIN open, -t allocates a terminal)
  • react-app: container name
  • sh or bash: shell (not every image is using the bash shell)
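
You can also run a one-off command without opening an interactive shell (a small convenience, assuming the same container name):

# list the contents of /app in the running container
docker exec react-app ls -a /app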

You will see a bunch of files that are unnecessary to keep inside.

Let's create .dockerignore in the root folder of your local environment.

# .dockerignore
node_modules
Dockerfile
.git
.gitignore
.dockerignore
.env

Remove the previous container, rebuild the image, and run the container

docker stop react-app
docker rm react-app # or use `-f` to force-remove a running container without stopping it first

docker build -t react-image .
docker run -d -p 3001:3000 --name react-app react-image

Open a shell in the container and check that the target files are ignored properly

docker exec -it react-app sh
ls -a

Now Docker runs the container named react-app with the image that we created. However, local changes won't apply to the app inside the container yet. Let's look into that next.


Manage data in a Docker container

By default, all files created inside a container are stored on a writable container layer. This means that:

  • The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.

  • A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.

  • Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.

Docker has two options for containers to store files on the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts.

Docker also supports containers storing files in-memory on the host machine. Such files are not persisted. If you’re running Docker on Linux, tmpfs mount is used to store files in the host’s system memory. If you’re running Docker on Windows, named pipe is used to store files in the host’s system memory.

from Docker documentation


Comparisons of the three

Volumes

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker.
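
As a quick illustration of a named volume (the volume and image names here are placeholders, not part of this tutorial):

# create a volume managed by Docker and mount it into a container
docker volume create app-data
docker run -d -v app-data:/data --name some-app some-image

# inspect where Docker stores it on the host
docker volume inspect app-data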


Bind mounts

Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its absolute path on the host machine.

By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.


tmpfs mounts

If you’re running Docker on Linux, you have a third option: tmpfs mounts. When you create a container with a tmpfs mount, the container can create files outside the container’s writable layer.

As opposed to volumes and bind mounts, a tmpfs mount is temporary, and only persisted in the host memory. When the container stops, the tmpfs mount is removed, and files written there won’t be persisted.
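
A minimal sketch of a tmpfs mount on Linux (paths and names are illustrative only):

# mount an in-memory filesystem at /app/cache; its contents vanish when the container stops
docker run -d --tmpfs /app/cache --name some-app some-image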


Enough with explanations, let's continue.

Bind mounts

To make your local development environment communicate with the container you created, you need to use the bind mount method.

Stop and remove the container

docker rm react-app -f

And run the container with a bind mount

docker run -v $(pwd):/app -d -p 3001:3000 --name react-app react-image
  • -v: bind mount (it can also create a volume, depending on the first field)

    • -v <local directory>:<container directory>
    • -v $(pwd):/app: bind mount the current working directory to the /app directory in the container
    • you can also bind mount just the src folder if that's all you need to sync (see the syntax examples after the next list)
  • the official documentation recommends that new users use --mount instead of --volume | -v for bind mounts because it is more explicit and readable

docker run --mount type=bind,source="$(pwd)",target=/app -d -p 3001:3000 --name react-app react-image
  • the documentation shows that volumes and bind mounts both use the -v flag; the only difference is the first field:
    • for Volumes: In the case of named volumes, the first field is the name of the volume, and is unique on a given host machine. For anonymous volumes, the first field is omitted.
    • for Bind mounts: In the case of bind mounts, the first field is the path to the file or directory on the host machine.
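
A side-by-side sketch of the two -v forms (image and volume names are placeholders):

# named volume: the first field is a volume name managed by Docker
docker run -v app-data:/data some-image

# bind mount: the first field is an absolute path on the host
docker run -v $(pwd):/app some-image

# bind mount only the src folder into the container
docker run -v $(pwd)/src:/app/src react-image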

Hot Reload

*(Update 01.26.2022) I found that HMR works on macOS without setting the CHOKIDAR_USEPOLLING value. I followed the implementation below and HMR works perfectly. Please leave a comment if it doesn't work.

To enable hot reload,

add CHOKIDAR_USEPOLLING=true as an ENV in your Dockerfile

What is chokidar anyway? A minimal and efficient cross-platform file-watching library.

...
...
ENV CHOKIDAR_USEPOLLING=true
COPY . .
...

or you can add it to your docker run command with the -e flag

 docker run -e CHOKIDAR_USEPOLLING=true -v $(pwd):/app -d -p 3001:3000 --name react-app react-image

(Important) Hot reload issue with CRA v5.0 (I used v5.0.1)

CRA 5.0 fails to hot-reload in a docker container

  1. Create a setup.js file in the root directory
   // setup.js
   const fs = require('fs');
   const path = require('path');

   if (process.env.NODE_ENV === 'development') {
     const webPackConfigFile = path.resolve(
       './node_modules/react-scripts/config/webpack.config.js'
     );
     let webPackConfigFileText = fs.readFileSync(webPackConfigFile, 'utf8');

     if (!webPackConfigFileText.includes('watchOptions')) {
       if (webPackConfigFileText.includes('performance: false,')) {
         webPackConfigFileText = webPackConfigFileText.replace(
           'performance: false,',
           "performance: false,\n\t\twatchOptions: { aggregateTimeout: 200, poll: 1000, ignored: '**/node_modules', },"
         );
         fs.writeFileSync(webPackConfigFile, webPackConfigFileText, 'utf8');
       } else {
         throw new Error(`Failed to inject watchOptions`);
       }
     }
   }

The setup.js script finds react-scripts' webpack.config.js file and adds watchOptions to it.

  2. Change the start script in package.json
   "scripts": {
    "start": "node ./setup && react-scripts start",
    ...
   },
  3. Set WDS_SOCKET_PORT to the published host port (3001 here) as an ENV in the Dockerfile
...
ENV WDS_SOCKET_PORT=3001
COPY . .
...

or you can add it to the docker run command with the -e flag

docker run -e WDS_SOCKET_PORT=3001 -v $(pwd):/app -d -p 3001:3000 --name react-app react-image 
  • otherwise, you'll see WebSocketClient.js:16 WebSocket connection to 'ws://localhost:3000/ws' failed in your browser console
  4. Remove the running container and re-run it.
docker rm react-app -f

docker run -v $(pwd):/app -d -p 3001:3000 --name react-app react-image

NOW YOU HAVE AN UP-AND-RUNNING DOCKER CONTAINER WITH HOT RELOAD!

See you in the next part.

