Containers have been around in Linux for years. A container is a concept where one or more processes are isolated from the rest of the system. Containers bundle all the files they need to run those processes independently of the host system, which is why they have become a commonly used tool for deployments.
They solve a problem that many developers deal with on more systems than just Linux: containers let an application run consistently on any operating system without worrying about missing dependencies or varying configurations. Docker brought the power of these Linux containers to everyone, which is why they show up in so many production environments.
What is Docker
Docker is an open-source tool that handles the lifecycle of your containers. It's used to simplify how you build and ship the code you run in production. That means you can create a Docker container that has all of the dependencies you need to run your app and manage it all the way up to production.
Depending on your needs, Docker containers can be used instead of virtual machines. A virtual machine is more resource-intensive than a container because it needs a full copy of the operating system along with the virtualized hardware to run it on. That eats up a lot of RAM really fast.
Docker containers, on the other hand, share the host operating system's kernel and use the resources of the physical server directly, so they don't need their own copies of the OS or hardware. That makes them super lightweight, and they can run on top of any number of system configurations and still run the application exactly like you expect it to.
With Docker, you can develop locally in a container that you can share with other developers, then use that same container to deploy to production once the changes are agreed on. When it's ready for prod, you can deploy your application as a container or as an orchestrated service, and it will run exactly the same way it did locally.
Why you should be familiar with it
This brings up some of the reasons we use Docker for both enterprise and smaller applications. Containers help solve the "runs on my machine" problem: developers can share a container image and build and run the exact same container on different machines. When you can run code consistently without worrying about what settings the underlying server has, you can develop locally on any machine without changing a bunch of configurations.
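To make that concrete, the usual way to share an image is through a container registry such as Docker Hub. A rough sketch, where my-app and your-username are just placeholder names, looks like this:
# tag the image with your registry namespace
docker tag my-app:1.0 your-username/my-app:1.0
# push it so other developers can pull it
docker push your-username/my-app:1.0
# on another machine, pull and run the exact same image
docker pull your-username/my-app:1.0
docker run your-username/my-app:1.0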
Working with Docker containers also makes deploying to any environment easier. You don't have to account for the extra resource consumption that comes from virtual machines. This will help improve the performance and reliability of your application by giving you one tool that lets you manage all of the changes your code and container have to go through.
How to work with Docker
There are two key components to working with Docker that you need to know about: images and containers.
Images
Docker images are the templates for creating Docker containers. They have the instructions that dictate which packages and preconfigured server environments are needed for your application to run. Your images are made of a collection of files that are used to build a functional container.
These files include the dependencies, the code for the application, and any other installations you need. There are a couple of ways to make a new image: you can take a running container, change some things, and save it as a new image, or you can make one from scratch by writing a new Dockerfile.
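For the first approach, Docker has a commit command that snapshots an existing container into a new image. A minimal sketch, assuming a container named my-container is already running:
docker commit my-container my-image:0.2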
We'll go through an example Docker image and talk about what's happening. Let's start by making a Dockerfile to run a React app.
# pull official base image
FROM node:alpine3.12
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
RUN npm install react-scripts@3.4.1 -g
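# expose the port the dev server listens on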
EXPOSE 3000
# add app
COPY . ./
# start app
CMD ["npm", "start"]
The beginning of each line in this file is a keyword used in Docker to help it understand what to do. In this file, we start with the official base image for Node to set up the environment we'll need to run a React app. Then we create the working directory for the container.
This is where the application code will live inside the container. Then you add the local node_modules binaries to the PATH and install the dependencies listed in your package.json. Next, we tell Docker that the container listens on port 3000. Lastly, you copy the app into the working directory and start it.
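One thing to watch out for: the COPY . ./ step copies everything in your project folder into the image, including a local node_modules folder if you have one. A common way to avoid that (this is an optional extra, not part of the Dockerfile above) is a .dockerignore file sitting next to the Dockerfile:
node_modules
build
.git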
Now we can build the image with the Docker command:
docker build -t local-react:0.1 .
Don't forget that `.` at the end of the line! That's what tells Docker to build the image from the files and folders in the current working directory.
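Once the build finishes, you can confirm the image exists by listing it:
docker images local-react
You should see the local-react repository with the 0.1 tag in the output.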
Containers
Now that you have an image built, you can create a container with it. Running your image as a container uses this Docker command:
docker run --publish 3000:3000 --detach --name lr local-react:0.1
This command takes your image and runs it as a container. Back in the image, you included a line to make port 3000 of the container available outside of the container itself. With the `--publish` flag, you're forwarding traffic from port 3000 of the machine hosting the Docker container to the container's port 3000. Without it, the container's port isn't reachable from the host, so network traffic would never get to your app.
The `--detach` flag runs the container in the background, so it doesn't take input from or display output in your terminal. This is a common option, and you can always reattach the container to the terminal later if you need to. The `--name` flag lets you give the container a name to use in later commands. In this case, the name is `lr`.
Now you should be able to visit `localhost:3000` and see your application running.
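From here, a few everyday commands come in handy for managing the container. These use the lr name from the run command above:
# list running containers
docker ps
# view the app's logs
docker logs lr
# reattach the container to your terminal
docker attach lr
# stop and remove the container when you're done
docker stop lr
docker rm lr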
Conclusion
Docker might not be used everywhere, but it is a common technology that's good to know about. It helps with development on different machines and once you have Docker installed, it's fun to play with tools that would otherwise take way too long to set up.
Make sure you follow me on Twitter because I post about stuff like this and other tech topics all the time!
If you want to define your CI/CD configs in Python instead of YAML, try Conducto for your CI/CD pipeline. It's pretty easy to get up and running.