Welcome to the first post on learning Kubernetes! Before we dive into the complexities, we have to talk about the building block that made it all possible: Docker.
If you’ve ever worked in software development, you’ve likely encountered the phrase: "But it works on my machine!"
Why Docker?
Imagine you’ve just finished a new feature and your code is merged into the Version Control System (VCS), and the build pipeline created an artifact which is deployed to the server.
Dev Server: It works perfectly. ✅
Test Server: Working as expected. ✅
Production: You deploy, and the build fails. ❌
What happened?
There could be multiple reasons for this, but the most frequent is an environment misconfiguration: dependencies that existed in the dev and test environments are missing from production.
Traditionally, you could not ship the entire environment and its dependencies along with your code; you shipped only the build artifact to the servers. That is where containers come into the picture.
How Does Docker Solve This?
With Docker, instead of shipping just your code, you package everything the code needs to run (libraries, dependencies, configurations, and even the base OS binaries) into a single unit. This makes the chance of misconfiguration negligible.
What is a Container?
A container is an isolated, lightweight sandbox environment containing the application code, libraries, and runtime dependencies: everything required to run the application, irrespective of the host operating system.
Key Distinction: Unlike a Virtual Machine (VM), a container does not package a full operating system. It carries only the bare-minimum binaries and uses the host's OS kernel. This makes containers incredibly lightweight, fast, and portable across any machine that has Docker installed.
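A quick way to see this kernel sharing in action, assuming Docker is installed and the daemon is running (the `alpine` image here is just an illustrative choice):

```shell
# Kernel version on the host
uname -r

# Kernel version inside a container: it matches the host's,
# because the container shares the host kernel instead of
# booting an operating system of its own.
docker run --rm alpine uname -r
```

A VM running on the same machine would report its own guest kernel instead, which is exactly the overhead containers avoid.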
Definition: Docker is the platform that allows you to build, ship, and run these containers anywhere.
How It Works: Dockerfile to Runtime
To containerize an application, we follow a specific workflow involving three main components: Dockerfiles, Images, and Containers.
The Dockerfile
A Dockerfile is a simple text document containing a list of instructions. When you run docker build, Docker reads these instructions to create an image.

The Image

A Docker image is a snapshot of your application: it contains everything your app needs to run. Rather than copying images to each server by hand, we store the image in a registry, much as source code is stored in a VCS. Every environment then pulls the same image from that registry using the docker pull command.

The Container

When you want to run your app, you use the docker run command, which takes the image and turns it into a running instance. That instance is the container.
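To make the workflow concrete, here is a minimal sketch of a Dockerfile for a hypothetical Node.js app (the base image, file names, and port are illustrative assumptions, not something from this series):

```dockerfile
# Start from a base image that provides the OS binaries and runtime
FROM node:20-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifest first so this layer can be cached
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Document the port the app listens on, and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

Running docker build -t myapp:1.0 . would turn this file into an image, and docker run -p 3000:3000 myapp:1.0 would start a container from that image.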
The Docker Architecture
Docker's architecture has three main components:
The Client: The docker CLI where you issue commands (for example, your laptop).
The Docker Daemon (dockerd): The "brain" of Docker that sits on the host machine. It listens for API requests from the client and manages images and containers.
The Registry: The remote storage where images are hosted.
The Lifecycle of a Docker Command:
docker build: The Client tells the Daemon to create an image from a locally stored Dockerfile.
docker push: The Daemon pushes your local image to a remote Registry so it can be used on the server.
docker pull: Run on the environment you are deploying to; it asks the Daemon to download a specific image from the Registry.
docker run: The Daemon instructs the Container Runtime to spin up a container using that pulled image.
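Put together, the lifecycle might look like the following command sequence (the registry address, image name, tag, and port are hypothetical, and pushing would require credentials for that registry):

```shell
# On your laptop: the Client asks the local Daemon to build the image
docker build -t registry.example.com/myapp:1.0 .

# Push the image to the remote Registry so servers can use it
docker push registry.example.com/myapp:1.0

# On the server: download the exact same image from the Registry
docker pull registry.example.com/myapp:1.0

# Start a container from the pulled image, in the background
docker run -d -p 8080:8080 registry.example.com/myapp:1.0
```

Because dev, test, and production all pull the same image, the environment each one runs is byte-for-byte identical.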
Summary
By using Docker, we ensure that the environment in Dev is identical to the environment in Prod. If it works on your machine, it will work on the server, because you are shipping "your machine" (the container) along with the code.
Now that we understand why we need containers, the next step in our series is learning how to manage multiple containers at once. That is where Kubernetes enters the story.
Stay tuned for Part 2!