Docker and Kubernetes

Introduction

Last year, I participated in an external activity with the theme "Development of a Container Monitoring Visualization Dashboard in a Docker and Kubernetes Environment." To hold on to the fundamental concepts of Docker and Kubernetes that I learned at the time, I'm writing them down here as I understood them.


What is Docker?

When you run nginx on a server or execute a .sh script, from the operating system's perspective it's just running a process. Docker is similar: running a container ultimately means executing a process. The difference isn't in how the process runs, but in the environment the process perceives. In Docker, processes operate within an isolated environment (filesystem, network, process list, etc.).
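
To make the "a container is just an isolated process" point concrete, here is a minimal sketch (assuming Docker is installed on a Linux host, where containers share the host kernel directly; the container name is arbitrary):

```bash
# Start an nginx container in the background.
docker run -d --name web nginx

# Docker's view: processes running inside the container.
docker top web

# Host's view: the same nginx processes appear in the host's process list,
# just like any other process.
ps aux | grep [n]ginx
```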

1) Why was it adopted?
The biggest advantage of Docker is that it reduces problems arising from different execution environments across servers. Even programs that build well locally often fail to run when moved to a server due to library dependency conflicts or version differences. With Docker, you can bundle applications with their required dependencies for deployment, significantly reducing trial and error from these environmental differences.
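
As a rough sketch of what "bundling an application with its dependencies" can look like, here is an illustrative Dockerfile for a hypothetical Python app (the file names, base image, and start command are examples, not taken from a specific project):

```dockerfile
# Pin the runtime version via the base image.
FROM python:3.12-slim
WORKDIR /app

# Bake the library dependencies into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

Anyone who builds this image gets the same runtime and libraries, regardless of what is installed on their own machine.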

2) Definition
Defined more precisely, Docker is a platform that lets you package and run applications in units called Containers. Containers share the operating system kernel while separating the filesystem, network, and process space, so that processes run in an isolated environment.
Below are key terms for understanding Docker:

  • Image: An execution template containing files/dependencies/configurations needed for execution = Package
  • Container: An actually running instance based on an image = Process
  • Registry: A repository for storing or distributing images (e.g., Docker Hub, Amazon ECR, Google Container Registry, etc.)

In summary, Docker is a tool that creates identical execution environments as images, allowing you to run containers the same way anywhere.
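
In command form, the three terms map roughly onto the following (the image name and registry account are hypothetical):

```bash
# Image: build an execution template from a Dockerfile.
docker build -t myapp:1.0 .

# Container: start a running instance of that image.
docker run -d --name myapp-1 myapp:1.0

# Registry: tag and push the image so other machines can pull it.
docker tag myapp:1.0 myaccount/myapp:1.0
docker push myaccount/myapp:1.0
```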

3) Differences from Virtual Machines
Both VMs and Docker provide isolated execution environments, but they differ in their isolation method and weight.
Unlike Containers, VMs virtualize an entire machine: each VM runs its own guest OS on top of a hypervisor, and the application runs on top of that full OS. This difference typically results in the following characteristics:

  • Resource Usage: VMs are relatively heavy as they include the OS, while Containers are lightweight.
  • Startup Speed: VMs require a boot process, but Containers execute closer to process execution, running much faster.
  • Isolation Level: VMs have stronger isolation as they're separated at the OS level, while Containers have thinner isolation boundaries compared to VMs as they share the kernel.

In summary, VMs are like virtual computers including the OS, while Containers are closer to isolated process execution that shares the kernel.
When you need a different OS environment, VMs are advantageous, and when you want to deploy programs quickly and consistently on the same kernel basis, Containers are beneficial.
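
One quick way to see the shared-kernel point for yourself (assuming Docker running natively on a Linux host; on macOS or Windows, Docker runs inside a Linux VM, so the comparison looks different):

```bash
# Kernel version reported by the host.
uname -r

# Kernel version reported from inside a container: it is the same,
# because the container shares the host kernel instead of booting its own OS.
docker run --rm alpine uname -r
```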


What is Kubernetes?

If Docker provides the unit for creating and running Containers, Kubernetes (abbreviated as k8s) is a system for managing those Containers from an operational perspective. Its purpose is to automate the handling of problems that arise when services grow and Containers multiply (deployment, failure recovery, scaling, networking, etc.).

1) The Inconvenience of Docker Operations
With Docker alone, running and stopping Containers isn't difficult. However, as services grow and the number of Containers increases, operations become cumbersome. For example, using only Docker commands it's not easy to consistently handle tasks like automatically recovering when a Container terminates abnormally (self-healing), scaling out to multiple instances as traffic grows, and replacing versions without downtime during deployment. These operational tasks end up being handled by ad-hoc scripts or by people directly, and as the scale grows, management complexity rises quickly.
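
With plain Docker you can cover parts of this by hand, but each piece is a separate manual step. A rough sketch (the app name and ports are hypothetical):

```bash
# Partial self-healing: ask the Docker daemon to restart the container if it dies.
docker run -d --restart unless-stopped --name myapp-1 -p 8080:8080 myapp:1.0

# "Scaling" means manually starting more copies on other host ports...
docker run -d --restart unless-stopped --name myapp-2 -p 8081:8080 myapp:1.0
docker run -d --restart unless-stopped --name myapp-3 -p 8082:8080 myapp:1.0

# ...and a zero-downtime deployment means scripting the stop/start/health-check
# dance yourself for every instance. This is the gap Kubernetes fills.
```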

2) Kubernetes Core Concepts
Kubernetes operates around several core objects:

  • Pod
    • The minimum unit for running Containers in Kubernetes
    • Usually manages one Container or bundles several closely related Containers together.
  • Deployment
    • An object that maintains Pods at the desired count and manages version updates (rolling update method that replaces with new versions without shutting down the service)
  • Service
    • An object that provides a fixed access point, since Pod IPs change whenever Pods are replaced or restarted
    • Also provides internal load balancing across those Pods.
  • Node
    • The physical server or virtual machine where Pods actually run
    • Multiple nodes together form a Cluster (a unit that bundles and operates multiple servers as one).
  • Control Plane
    • The management area that stores the Cluster's state and decides which Pods to place on which Nodes

The core of Kubernetes isn't directly manipulating Containers, but maintaining a desired state that you declare. If you define goals like 'n Pods should be running' or 'this Service should always be reachable', Kubernetes continuously adjusts the cluster to match.
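
A minimal sketch of that declarative style, combining a Deployment and a Service (the app name, image, and ports are hypothetical; you would apply it with kubectl apply -f):

```yaml
# Deployment: "keep 3 Pods of this app running and roll out new versions gradually."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myaccount/myapp:1.0
          ports:
            - containerPort: 8080
---
# Service: a stable access point in front of whichever Pods currently carry the label.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

If a Pod dies, the Deployment starts a replacement, and the Service keeps routing traffic to whatever Pods currently match the selector.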


Summary

Docker is a tool that packages applications and execution environments as Images and allows you to run Containers based on them in a consistent manner.

Kubernetes is a Container orchestration system designed to automate operational issues like deployment/recovery/scaling/networking when multiple Containers come together to form a service.

Using Docker + Kubernetes together, you can create consistent execution environments with Docker and operate those environments stably and efficiently with Kubernetes. Developers only need to prepare code and images, and Kubernetes automates the rest of the operational burden. This combination has become the standard for modern cloud-native application development and operations.
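
In command form, a typical workflow might look roughly like this (the image name and manifest file are hypothetical):

```bash
# Build and publish the image with Docker...
docker build -t myaccount/myapp:1.1 .
docker push myaccount/myapp:1.1

# ...then declare the new desired state and let Kubernetes handle the rollout.
kubectl apply -f deployment.yaml
kubectl set image deployment/myapp myapp=myaccount/myapp:1.1
kubectl rollout status deployment/myapp
```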
