Explain Docker and Kubernetes Like I'm Five


Hey guys.

So at one of my previous workplaces, I had to install Docker on my computer, but I don't really know what the purpose of said program is.

My bosses also mentioned something about Kubernetes at the time and I don't know if they are related or not. I did a bit of reading, but I'm still confused.

Can you help a fellow developer out? I basically want to know what purpose these two programs serve.

Much appreciated,
Andrei

DISCUSS (4)
 

Docker and Kubernetes are both container management software. You can think of them as a direct analog to tools like VMware ESXi, VirtualBox or Xen, just for containers instead of virtual machines.

Of course, that then shifts the question to: What are containers?

The basic concept of a container is best explained by comparison with virtual machines. In essence, a VM virtualizes and emulates physical hardware. A container, on the other hand, virtualizes and emulates the operating system, providing slightly less isolation with an often much lower performance impact.

So, with a container, you have an isolated view of the 'system' that is completely separate from the host and other containers. You see your own view of the network interfaces, your own view of the filesystem, and your own view of the system itself, just like with a VM.
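To make that a bit more concrete, here is a minimal sketch using the Docker SDK for Python (the `docker` package). It assumes Docker and the SDK are installed and a local daemon is running; the Alpine image and the commands are just illustrative choices, not anything specific from this thread.

```python
# Minimal sketch, assuming a local Docker daemon and the `docker` Python SDK
# (pip install docker). Image and commands are illustrative placeholders.
import docker

client = docker.from_env()

# Run a throwaway Alpine container and print its own view of the system:
# it has its own hostname and its own root filesystem, separate from the
# host and from other containers.
output = client.containers.run(
    "alpine:latest",
    "sh -c 'hostname && ls /'",
    remove=True,  # delete the container once the command exits
)
print(output.decode())
```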

There are a couple of very specific advantages over virtual machines:

  • It's a lot more lightweight than an actual virtual machine, since you're not emulating any hardware. This means that containers can often run faster than VMs, and you can fit more of them on the same physical hardware.
  • It lets the environment inside the container be a lot more lightweight. Because of how containers are usually designed, you often don't need a full set of system services in the container, but only the actual software you're trying to run.
  • It provides much tighter integration on the host side. For example, with good container software, you can directly manage processes inside the container from the host using regular host tools instead of having to log in to the container itself (there's a small sketch of this right after this list).
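As an illustration of that last point (purely a sketch, with the same assumptions as above: a local Docker daemon plus the Python SDK, and a placeholder Alpine image), you can inspect the processes inside a container from the host without ever logging in to it:

```python
# Sketch of host-side process management via the Docker SDK for Python.
# Assumes a local daemon; the image and the sleep command are placeholders.
import docker

client = docker.from_env()

# Start a long-running container in the background.
container = client.containers.run("alpine:latest", "sleep 60", detach=True)

# From the host, list the processes running inside the container.
# They are ordinary host processes, just namespaced away from everything else.
info = container.top()
print(info["Titles"])
for proc in info["Processes"]:
    print(proc)

container.stop()
container.remove()
```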

There are also some disadvantages relative to VMs:

  • Your containers have to use the same OS kernel as the host (so if your host is Linux, your containers have to be Linux also). There are some exceptions to this (for example, FreeBSD can run containers that look like Linux on the inside), but they're pretty rare. This also usually applies to the CPU architecture (so you can't run ARM containers on x86), though there are ways to work around that on some platforms.
  • You're entirely dependent on the host OS for security. Because you're sharing the kernel with the host system, anything that compromises the kernel in the container has access to the whole system. With a VM, you would still have to compromise the hypervisor to get access to the rest of the system, and even that may not get you full access (for example, if it's a type-2 hypervisor like VirtualBox being run as a normal user).
  • Because containers are tied to the host system's OS, it's more complicated to do live migrations of running containers than it is to do live migrations of running VMs.
 

So in essence, with Docker and Kubernetes you can create a sort of "branch" of your OS that you can modify and work with without impacting your actual OS?

This is my understanding of what you have said, and I can see the usefulness of it, since I assume you can also create a "branch" of a remote OS.

What I can't wrap my head around is how this can be useful for websites, since I've heard of people using them for websites, though I can accept that something may have gotten lost in translation there :) .

Anyhow, thanks for the reply, it cleared a lot of things up.

 

So in essence, with Docker and Kubernetes you can create a sort of "branch" of your OS that you can modify and work with without impacting your actual OS?

Kind of. The only part that has to be truly shared is the kernel itself, so on Linux you could run a completely different distribution in a container from what the host is running (for example, running Fedora in a container on an Ubuntu host).

The isolation goes a bit further than that though. You can have a completely different network configuration in the container as well (so your container might be on a different subnet with a different IP and possibly even a different VLAN from the host), and can similarly isolate other aspects from the host.
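If it helps, here is a small sketch of that idea using the Docker SDK for Python (assuming a running Docker daemon; the Fedora image is just an example): the container reports a Fedora userspace, but the kernel version it sees is the host's.

```python
# Sketch: a Fedora userspace in a container on, say, an Ubuntu host.
# Assumes the `docker` Python SDK and a running daemon; images are examples.
import docker

client = docker.from_env()

# The container sees Fedora's own files and release information...
distro = client.containers.run(
    "fedora:latest", "cat /etc/os-release", remove=True
).decode()

# ...but the kernel it reports is the host's kernel, because the kernel is
# the one piece that is genuinely shared.
kernel = client.containers.run(
    "fedora:latest", "uname -r", remove=True
).decode()

print(distro)
print("Kernel shared with the host:", kernel.strip())
```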

What I can't wrap my head around is how this can be useful for websites, since I've heard of people using them for websites, though I can accept that something may have gotten lost in translation there :) .

There are two ways containers tend to be used when dealing with websites:

  • To isolate multiple sites hosted on the same system. In essence, the same concept as virtual hosts in most web servers, just taken a bit further. The benefit here is mostly for the person doing the hosting for the site, as they can give each customer their own shell access to fully manage the files for their site without having to worry about them interfering with each other.
  • To isolate the various components of the site/service from each other for improved security. For example, you might be running multiple web applications on the same site under different paths, and with containers you could give each web app its own container so that someone compromising one can't easily interfere with the others (a small sketch of this follows below).
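Here's a hypothetical sketch of that second pattern, again with the Docker SDK for Python (the nginx images, container names and ports are placeholders, not anything from this thread): each web app gets its own container and its own published port, so compromising one does not directly expose the other.

```python
# Hypothetical sketch: two web apps isolated in separate containers.
# Assumes a local Docker daemon and the `docker` Python SDK; image names,
# container names and ports are placeholders.
import docker

client = docker.from_env()

apps = {
    "app-one": 8081,  # e.g. a blog
    "app-two": 8082,  # e.g. a shop
}

for name, host_port in apps.items():
    client.containers.run(
        "nginx:alpine",
        name=name,
        detach=True,                  # keep serving in the background
        ports={"80/tcp": host_port},  # map container port 80 to a host port
    )

# Each container has its own filesystem, process table and network namespace,
# so someone who breaks into one app still has to break out of the container
# to touch the other.
for c in client.containers.list():
    print(c.name, c.status)
```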

I see. Stuff will probably become clearer if I ever get a chance to work with them on a day-to-day basis.

Thank you for your replies, they helped me finally get a grasp of these two things :) .
