As a budding software developer, I always have my ear to the ground for new technologies and tools worth learning. Docker is one of those tools.
What is Docker?
In a way, Docker is like a completely packaged virtual machine. It is a tool for designing, deploying, and running applications on any machine, together with all the parts the application needs, such as libraries and dependencies. Docker accomplishes this through its use of "containers," which let the developer rest assured that the application will run on any other Linux machine, regardless of any customizations on that machine that differ from the machine used to write and test the original code.
Containers
So, what is a container anyway? A container is a way of running a command that is isolated from everything else on the system. A container gives the command access only to the resources it should have access to, without giving it access to anything else on the machine. A process running inside a container is not aware of any other processes going on outside of the container (there is a small example after the list below). Containers make it so:
- As long as Docker is installed on a machine, the app will run
- Containerized apps on a single server don't conflict with one another
- If you update an app you don't have to worry about it breaking other contained apps
- Docker apps will not break when the OS updates packages that are unrelated to Docker
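As a quick illustration, here is a minimal sketch of that isolation, assuming Docker is installed and the machine can pull images from Docker Hub:

```bash
# Run a single command inside an Ubuntu 22.04 container and remove the
# container as soon as the command exits (--rm).
docker run --rm ubuntu:22.04 cat /etc/os-release

# The output describes Ubuntu 22.04 no matter what OS the host is running,
# because the command only sees the container's own filesystem.
```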
Why Containers?
When Docker was originally pitched, it was compared to the containers used in the global shipping industry. Shipping containers are a standardized size, and any company that plans on transporting them knows exactly what to expect when they show up to pick one up. It does not matter what kind of truck or boat you plan to transport it on: if it fits, it sits. Docker containers work the same way. The machine running a containerized app doesn't really have to worry about what is inside the container, and the app in the container doesn't care what kind of machine it is running on.
Docker Images
In Docker language, an Image refers to a blueprint from which brand-new containers can be started. Images don't change, but you can start a container from an image, perform operations in it, and save another image based on the latest state of the container. Starting a container is like booting up a machine after it was powered down: instead of starting up a computer or spinning up a new virtual machine, you create a new container from scratch that looks exactly like the image you chose.
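To make that cycle concrete, here is a rough sketch of starting a container from an image, changing something inside it, and saving the result as a new image. The names mycontainer and my-notes-image are just placeholders for illustration:

```bash
# Start an interactive container from the official Ubuntu image.
docker run -it --name mycontainer ubuntu:22.04 bash

# ...inside the container, make a change, for example:
#   echo "hello" > /notes.txt
#   exit

# Save the container's current state as a brand-new image.
docker commit mycontainer my-notes-image

# Any container started from my-notes-image will contain /notes.txt.
docker run --rm my-notes-image cat /notes.txt
```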
Dockerfiles
Dockerfiles can be thought of as a "Readme" for the images and containers. A Dockerfile is a set of precise instructions that tells Docker how to build a new image and what defaults to set for containers started from it. It should produce the same image for anybody building the application at any point in time. In other words, the Dockerfile describes how to set up the environment the application needs, except it is in the form of executable code.
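For example, a minimal Dockerfile for a hypothetical Python app might look like the sketch below. The file names app.py and requirements.txt are assumptions made purely for illustration:

```dockerfile
# Start from an official Python base image.
FROM python:3.11-slim

# Work out of /app inside the image.
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code into the image.
COPY . .

# Default command for containers started from this image.
CMD ["python", "app.py"]
```

Running `docker build -t my-app .` in the project directory should then produce the same image for anyone who builds it, at any point in time.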
Volumes
Volumes are like separate "containers" for persisting data. By default, containers leave nothing behind: any changes made inside a container are lost as soon as it is removed or shut down. That is where volumes come in. Volumes persist and share data, and the container doesn't have to know anything about the host machine to use them. If a container using a volume is shut down, you can start up another container and point it at the same volume the first container was using.
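Here is a small sketch of that idea in practice; the volume name app-data is made up for this example:

```bash
# Create a named volume managed by Docker.
docker volume create app-data

# Start a container that mounts the volume at /data and writes a file there,
# then removes itself when the command finishes.
docker run --rm -v app-data:/data ubuntu:22.04 sh -c "echo saved > /data/state.txt"

# The first container is gone, but a new container pointed at the same
# volume still sees the data it left behind.
docker run --rm -v app-data:/data ubuntu:22.04 cat /data/state.txt
```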
So that is it. Those are the most basic of basic components of Docker. I hope this brief write-up has been helpful in understanding the basics of the Docker system and how it may benefit you in your future endeavors.