
Zachary Izdepski


Microservices, Kubernetes and Docker

Just a few days ago I had some issues with an EC2 instance. I'm pretty new to the development field, and as such I fumbled through inbound rules, played with the Vim editor to cobble together an environment file, and ran up against permission and dependency disparities between my little MacBook Air and the instance I was just getting acquainted with. I eventually weeded out the bugs, got things running, and vowed to explore other solutions to my deployment woes for the next go around. It just seemed like there was so much that could go wrong when configuring an instance by hand that there had to have been advancements in that area. I'd heard of other services like Heroku, Elastic Beanstalk and DigitalOcean, but what really grabbed my attention was the rapid rise in the demand for and utilization of microservices, particularly containerization with Docker and Kubernetes. So, in this blog I'd like to explore what those are and how they have launched us deeper into the world of the virtual cloud.

To begin, we need to understand the basic purpose of Docker, the problems it solves, and some vocabulary. Imagine this common scenario: you've built your app, it runs great, and you send it off to be tested. You get an email a day later saying there are several bugs in the app and it is not ready to be deployed to the market. Perplexed, and maybe a little frustrated, you respond with "Well, I have no issues on my end. It must be an issue with your machine!" There are so many variables involved in running a large application in a foreign environment that issues like this are commonplace. Docker solves this problem by creating an "image" of your environment, which is essentially your app bundled together with the OS-level libraries and dependencies it needs. Running that image produces something called a container, which can be run on any computer anywhere, as long as it is run with Docker. What goes into the image is defined in a Dockerfile, a plain-text list of build instructions (multi-container setups are typically described in a YAML Compose file, a format favored for its readability). The container will have everything it needs to run just like it does on your own machine, controlling for all those pesky variables and delivering a product that is airtight right out of development. But that is just the beginning of what Docker can accomplish. These containers are replicable, and as such ready to handle variable workloads. This can be accomplished with the Docker daemon alone, but it has also become popular to use Kubernetes.
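To make that concrete, here is a minimal sketch of what a Dockerfile might look like for a hypothetical Node.js app. The base image, the port, and the `index.js` entry point are illustrative assumptions, not anything from a real project:

```dockerfile
# Start from a small official Node.js base image (assumed for this example)
FROM node:18-alpine

# Work inside /app in the image
WORKDIR /app

# Copy the dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production

# Copy the rest of the source code into the image
COPY . .

# The app is assumed to listen on port 3000 and start from index.js
EXPOSE 3000
CMD ["node", "index.js"]
```

From there, `docker build -t my-app .` bakes the image and `docker run -p 3000:3000 my-app` starts a container from it, behaving the same on my MacBook Air as on an EC2 instance.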

A common misconception about Docker containers is that they are virtual machines (VMs). They are not: a virtual machine runs its own guest OS, while containers share the host's OS kernel. This has the advantage of sharing bins and libraries between containers, which prevents redundancy and saves space. In the same vein, Kubernetes offers an architecture that prioritizes efficiency. It is an open-source container orchestration tool, meaning it does what it sounds like: it orchestrates the number and placement of containers. A Kubernetes cluster uses a single control plane to manage nodes, which are worker machines that can be either virtual or physical. Each node runs pods, and each pod can hold one or more containers. The Kubernetes control plane handles the scheduling of pods onto nodes, and controllers such as the ReplicationController (or its modern successor, the Deployment, which also keeps a rollout history) make sure the desired number of pod replicas is running and can scale that number up or down. If a container fails, it is restarted; if a whole pod fails, a replacement is scheduled to take its place until the desired count is restored.
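As a rough illustration, here is what a Deployment manifest might look like for the hypothetical `my-app` image from the Dockerfile sketch above; the image tag, replica count, and port are assumptions for the example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # desired number of pod replicas
  selector:
    matchLabels:
      app: my-app
  template:                    # pod template: each pod runs one container here
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0    # assumed image built from the Dockerfile above
          ports:
            - containerPort: 3000
```

Applying this with `kubectl apply -f deployment.yaml` asks the control plane to keep three replicas running across the nodes; if a pod dies, a replacement is scheduled automatically, and `kubectl scale deployment my-app --replicas=5` adjusts the count on demand.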

Cloud-based computing has come a long way in the last several years, and while there are some disadvantages to decentralized deployment services, such as higher complexity and tougher troubleshooting, it is clear that the pros far outweigh the cons. At scale, it's easy to see why more and more companies are investing in cloud tech, with the understanding that sometimes harder is better.
