Abhay Goswami

Why do we need Docker?

Hi everyone, this is my first post about Docker. I recently got the chance to learn Docker and apply it in production at my company.

Before learning Docker, we should first be clear about why we need it - which means looking back at the old-school development process.

The bad old days

Applications run businesses. If applications break, businesses suffer and sometimes go away. These statements get truer every day!

Most applications run on servers. And in the past, we could only run one application per server. The open-systems world of Windows and Linux just didn’t have the technologies to safely and securely run multiple applications on the same server.

So, the story usually went something like this... Every time the business needed a new application, IT would go out and buy a new server. And most of the time nobody knew the performance requirements of the new application! This meant IT had to make guesses when choosing the model and size of servers to buy.

As a result, IT did the only thing it could do - it bought big, fast servers with lots of resiliency. Under-powered servers might be unable to execute transactions, which might result in lost customers and lost revenue. So IT usually bought bigger servers than were actually needed. This resulted in huge numbers of servers operating at as low as 5-10% of their potential capacity. A tragic waste of company capital and resources!

Hello VMware!

VMware, Inc. gave the world a gift - the virtual machine (VM). We finally had a technology that would let us safely and securely run multiple business applications on a single server.

All of a sudden, we could squeeze massive amounts of value out of existing corporate assets, such as servers, resulting in a lot more bang for the company’s buck.

But... and there’s always a but! As great as VMs are, they’re not perfect!

The fact that every VM requires its own dedicated OS is a major flaw. Every OS consumes CPU, RAM and storage that could otherwise be used to power more applications. Every OS needs patching and monitoring. VMs are slow to boot and portability isn’t great - migrating and moving VM workloads between hypervisors and cloud platforms is harder than it needs to be.

Hello Containers!

For a long time, the big web-scale players like Google have been using container technologies to address these shortcomings of the VM model.

In the container model, the container is roughly analogous to the VM. The major difference, though, is that containers do not each require a full-blown OS. In fact, all containers on a single host share a single OS. This frees up huge amounts of system resources such as CPU, RAM, and storage.
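A quick way to see this for yourself, assuming you have Docker installed (the tiny alpine image is just a convenient example):

```sh
# On the host, check the kernel version:
uname -r
# e.g. 5.15.0-91-generic

# Inside a container, the same kernel version is reported,
# because the container shares the host's kernel:
docker run --rm alpine uname -r
```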

Containers are also fast to start and ultra-portable. Moving container workloads from your laptop, to the cloud, and then to VMs or bare metal in your data center is a breeze.
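As a minimal sketch of that workflow (the registry and image name below are hypothetical):

```sh
# Build an image once, e.g. on your laptop:
docker build -t registry.example.com/myapp:1.0 .

# Push it to a registry:
docker push registry.example.com/myapp:1.0

# Run the exact same image on any other Docker host -
# a cloud VM, a data-center server, a colleague's machine:
docker run -d -p 8080:8080 registry.example.com/myapp:1.0
```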

Linux containers

Modern containers started in the Linux world and are the product of an immense amount of work from a wide variety of people over a long period of time. Just as one example, Google Inc. has contributed many container-related technologies to the Linux kernel. Without these, and other contributions, we wouldn't have modern containers today.

Some of the major technologies that enabled the massive growth of containers in recent years include kernel namespaces, control groups, and of course Docker. The modern container ecosystem is deeply indebted to the many individuals and organizations that laid the strong foundations that we currently build on!
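To get a feel for what Docker builds on, you can poke at these kernel features directly. A rough sketch on a typical Linux box, using the unshare tool from util-linux:

```sh
# Kernel namespaces exist independently of Docker. This starts
# a shell in new PID and mount namespaces:
sudo unshare --pid --mount --fork /bin/sh

# Inside that shell, remount /proc and the visible process
# tree shrinks to just this namespace:
mount -t proc proc /proc
ps aux
```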

Windows containers

Over the past few years, Microsoft Corp. has worked extremely hard to bring Docker and container technologies to the Windows platform.

In achieving this, Microsoft has worked closely with Docker, Inc.

The core Windows technologies required to implement containers are collectively referred to as Windows Containers. The user-space tooling to work with these Windows Containers is Docker. This makes the Docker experience on Windows almost exactly the same as Docker on Linux. This way developers and sysadmins familiar with the Docker toolset from the Linux platform will feel at home using Windows containers.
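For example, on a Windows machine with Docker installed, the familiar CLI works unchanged (assuming a host version that matches the image):

```sh
# Run a Windows Nano Server container, just like on Linux:
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver
```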

Windows containers vs Linux containers

It’s vital to understand that a running container uses the kernel of the host machine it is running on. A container designed to run on a host with a Windows kernel will not run on a Linux host. At a high level: Windows containers require a Windows host, and Linux containers require a Linux host.
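You can see this by trying to run an image built for the other kernel (the exact error text varies by Docker version):

```sh
# On a Linux host, a Windows-only image simply won't run:
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022
# docker: ... no matching manifest for linux/amd64 ...
```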

Thanks for reading.

The next post will cover Docker itself.

Make sure you follow me for updates, and if you liked the post, please like it and give your feedback so that I can improve my next posts.

Latest comments (12)

Jesse Phillips

On Windows, Docker hosts a VM, which allows it to execute Linux containers on a Windows host.

Also, the kernel is basically the shared part of containers; all the user-land applications which make up the OS are part of the container.
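A small sketch of that split (alpine chosen only as a tiny example image):

```sh
# The user land comes from the image, not the host - this prints
# Alpine's release info even when the host runs another distro:
docker run --rm alpine cat /etc/os-release
```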

Mehdi Mousavi

I used Docker on Mac - worst performance ever.

Michiel Hendriks

> And in the past, we could only run one application per server. The open-systems world of Windows and Linux just didn’t have the technologies to safely and securely run multiple applications on the same server.

That's absolutely not true.

Raphael Habereder

Well... technically not. But it was standard practice in big companies to size your servers to serve only one specific application. That way you could reduce the impact of an overloaded server on other applications.
Apache2 as a proxy was notorious for burning CPUs to the ground if you got hit with an unexpectedly high number of users. That's why NGINX won over the industry in a landslide, thanks to its far more efficient design of parallel workers.

While "one dedicated server per app" was probably only a "big company thing", it was a thing that people actually did and to this day still continue to do with VMs.

Michiel Hendriks

And it continues with containers. One "app" per container is the same nonsense. Technology doesn't automatically fix bad practices. These "big companies" are now rolling out k8s clusters to run a single application instance.
An extra fun thing about these container images is that they form a new security issue which is even more complicated to fix. The security feature that the images are immutable is a problem for patching.

Raphael Habereder

Now you got me hooked.

One "app" per container is the same nonsense.

Why is that nonsense in your opinion?
One app/service per container is absolutely the right thing to do. If you split your app into multiple containers, you can scale your app by its independent building blocks.
If you split it up too much, you get a dependency nightmare and performance hits (network requests are much slower than you might think).

On the other hand, if you bundle multiple apps into a container, how are you going to horizontally scale them? In a bundle you scale everything, which you most probably never want to do.
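As a minimal sketch of what per-component scaling looks like (the web and worker image names are hypothetical):

```sh
# A stack split into independently scalable services:
cat > compose.yaml <<'EOF'
services:
  web:
    image: example/web:1.0
  worker:
    image: example/worker:1.0
  db:
    image: postgres:16
EOF

# Scale only the component that needs it:
docker compose up -d --scale worker=5
```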

> Technology doesn't automatically fix bad practices.

I absolutely agree, but don't see where this fits into the context of containers.

These "big companies" are now rolling our k8s clusters to run a single application instance.

Nobody in their right mind should be rolling out a cluster for a dedicated app. The norm is a cluster for a whole department/team. They manage their cluster and usually create namespaces within it to isolate their services and dependencies.

> An extra fun thing about these container images is that they form a new security issue which is even more complicated to fix. The security feature that the images are immutable is a problem for patching.

I need more context for this. Everything you do has security issues. There is no perfect security. Never. Having immutable images is a strong point in regards to security. You don't patch live containers; you build new ones with the current patchset and are done. Having a CI is common practice in development, so why should immutability be a problem for patching?

Michiel Hendriks

> Why is that nonsense in your opinion?
> One app/service per container is absolutely the right thing to do.

Yes, one service per container is a better setup. Services are often a collection of applications. If you look at the way Discourse is set up, you'll see that it combines a bunch of applications to form the service which runs within a container. The "discourse" service is a single container, even though it runs a bunch of applications.

I've also seen software which comes with multiple containers, each containing a single application. So to manage a single service you now have to manage multiple containers.

Using containers for scaling? Most software out there does not need to scale over multiple instances. Most software doesn't even do this properly anyway. Using multiple containers for a service does have an impact on scaling: maintainability and management scales down.

Single-application containers for scaling come from a similarly bad concept: that software should be composed of microservices. Most software doesn't need to be built from microservices. Quite often, starting from the idea that everything should be a microservice running in a separate container results in a platform which scales badly and requires a lot of services.

> Nobody in their right mind should be rolling out a cluster for a dedicated app.

Nobody in their right mind should be running a server per application. Yet that was the premise of this article. These bad decisions are being made.

> I need more context for this. Everything you do has security issues.

The problem is that you need to apply patches to all container images. But not every company rolls container images. What if I need to hotfix the struts2 library used by some software you run, because it contains a massive exploit? You can't just drop in the fixed file. You now need to create infrastructure to create and manage images.

Software is not exclusively run in SaaS.

Look, I'm not saying Docker et al. are bad. They do have their uses, but don't treat them as the only tool that exists. Sadly, Docker is seen as the only way to make use of namespaces and control groups (cgroups), which it is not. We have these awesome pieces of technology, but we chose to pick a single tool and regard every problem as a nail.

Raphael Habereder

> I've also seen software which comes with multiple containers, each containing a single application. So to manage a single service you now have to manage multiple containers.

That's not a service or an application, that is a stack. And for a stack, that is common practice: isolate components and run/scale them independently.

> Single-application containers for scaling come from a similarly bad concept: that software should be composed of microservices.

Why? If you want to scale as efficiently as possible, you have to slice your services correctly. Too big a slice and you waste resources, too small a slice and you lose the benefit by introducing more intra-communication and thereby latency.

> Nobody in their right mind should be running a server per application. Yet that was the premise of this article. These bad decisions are being made.

You still don't explain why. It has been common practice for a very long time, with well-explained reasons why you should isolate your workloads as much as possible.

> Most software doesn't need to be built from microservices. Quite often, starting from the idea that everything should be a microservice running in a separate container results in a platform which scales badly and requires a lot of services.
Then they need a better architect. Microservices aren't the solution for everything, but they have a use case. Damning a technology for misuse is the wrong approach.
As for scaling, we currently deploy about 2000 microservices at a client. They scale buttery smooth on a rock-solid platform. That is possible thanks to some amazing architects who do correct and painstakingly detailed slicing of services.

> The problem is that you need to apply patches to all container images. But not every company rolls container images. What if I need to hotfix the struts2 library used by some software you run, because it contains a massive exploit? You can't just drop in the fixed file.

I actually can. Give me the file, I commit it, and 10 minutes later the whole thing is staged, tested, and deployed. If you don't have 10 minutes, you should stay away from a production machine.
People who say "it's impossible" have never used a well-made CI pipeline. Even shops that deploy on bare metal don't manually patch servers or applications. They have Ansible, Puppet, Jenkins, and other technologies in place long before deployments have to be made.
You. Don't. Touch. Production. Period.

You test, stage, deploy. Done. That is all it takes, and it's a very technology-agnostic process. Be my guest to do it by hand, or be reasonable and automate it.
Continuous integration shouldn't be a new concept to anyone who develops software.
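As a rough sketch of that flow for the struts2 example above (the image names and paths are hypothetical):

```sh
# Rebuild the image with the patched library instead of
# touching any running container:
cat > Dockerfile <<'EOF'
FROM mycompany/app:1.4
COPY struts2-core-patched.jar /opt/app/lib/struts2-core.jar
EOF

docker build -t mycompany/app:1.4-hotfix1 .
docker push mycompany/app:1.4-hotfix1
# CI then rolls the new tag through staging into production.
```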

> Software is not exclusively run in SaaS.

Big software is, because it is convenient and removes the all-too-common human error.

> Look, I'm not saying Docker et al. are bad. They do have their uses, but don't treat them as the only tool that exists.

That is sound reasoning, but this is a Docker-centric post with some solid points.
You are dragging bad practices and non-optimal technical choices in here, and doing exactly what you say you don't do. With those two arguments you can blow every opinion on every technical post out of the water; they apply to pretty much anything. If you use a tool incorrectly, the tool isn't at fault.

Containers, used right, can be used for pretty much anything. Saying "but we have cgroups" is a shitty argument, because cgroups on their own do just resource control. Nothing else.

Sure, underlying Docker are tons of existing technologies; that just doesn't make those older technologies any less cumbersome to use directly. It's the same as saying "why use Docker, we have LXC", when actually working with LXC is a royal pain in the arse.
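For a sense of the difference, here is a rough sketch of capping memory with raw cgroups v2 versus Docker (paths assume the unified cgroup hierarchy):

```sh
# Raw cgroups v2: create a group, set a memory cap, and move
# the current shell into it by hand:
sudo mkdir /sys/fs/cgroup/demo
echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/demo/memory.max
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs

# The same limit with Docker, in one flag:
docker run --rm --memory 256m alpine true
```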

Tom

Not sure what they mean by their use of "safely and securely". If they mean that processes are not completely isolated from one another, sure, they aren't. That's something cgroups solve.

However, the system permissions model and networking stack absolutely allowed multiple services to be served from a single host securely. VMs provide far lower-level isolation, though, while still not perfect. See: virtual machine escapes.

Stephen Leyva (He/Him)

In fact it’s never true XD

Francisco Quintero 🇨🇴

Hi, great post here.

I've used both VMs and (Docker) Containers and can't say either is better in terms of portable environments.

In fact (plug here :D), I wrote a post about VMs vs Docker containers where I share my bad experiences with containers.

At the end of the day, every tool has its pros and cons.

Abhay Goswami

Yeah, everything has its own advantages and disadvantages depending on the needs. But I'd love to see the other perspective on VMs. 😊
Thanks for sharing 🙏