Discussion on: Why we need Docker?

Michiel Hendriks • Edited

> And in the past, we could only run one application per server. The open-systems world of Windows and Linux just didn’t have the technologies to safely and securely run multiple applications on the same server.

That's absolutely not true.

Stephen Leyva (He/Him)

In fact, it's never been true XD

Tom

Not sure what they mean by their use of "safely and securely". If they mean that processes are not completely isolated from one another, sure, they aren't. That's something that cgroups solve.

However, the system permissions model and networking stack absolutely allowed multiple services to be served from a single host securely. VMs do provide isolation at a far lower level, though, while still not being perfect. See: virtual machine escapes.
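For what it's worth, here's a minimal sketch of the resource-control side (assuming Linux with cgroup v2 mounted at /sys/fs/cgroup, the memory controller enabled, and root privileges; the group name is made up). No container runtime involved:

```python
# Confine a process's memory using nothing but the cgroup v2 filesystem.
import os
import subprocess

CGROUP = "/sys/fs/cgroup/demo"  # hypothetical group name

os.makedirs(CGROUP, exist_ok=True)

# Cap memory at 64 MiB for anything placed in this group.
with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(64 * 1024 * 1024))

proc = subprocess.Popen(["sleep", "60"])

# Move the process into the group; the kernel enforces the limit from here on.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(proc.pid))

proc.wait()
```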

Raphael Habereder

Well... technically not. But it was standard practice in big companies to size your servers to serve only one specific application. That way you could reduce the impact of an overloaded server on the other applications.
Apache2 as a proxy was notorious for burning CPUs to the ground if you got hit with an unexpectedly high number of users. That's why NGINX won the industry over in a landslide, thanks to its far more efficient event-driven worker design.
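To illustrate the design difference (a toy sketch, not NGINX code): one process with an event loop can juggle thousands of idle connections without dedicating a worker to each. Assumes port 8080 is free.

```python
# Minimal event-driven echo server: one process, one loop, many connections.
import asyncio

async def handle(reader, writer):
    data = await reader.readline()  # while a client is idle, nothing blocks
    writer.write(data)              # echo the line back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```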

While "one dedicated server per app" was probably only a "big company thing", it was a thing that people actually did and to this day still continue to do with VMs.

Michiel Hendriks

And it continues with containers. One "app" per container is the same nonsense. Technology doesn't automatically fix bad practices. These "big companies" are now rolling out k8s clusters to run a single application instance.
An extra fun thing about these container images is that they form a new security issue which is even more complicated to fix. The security feature that images are immutable is a problem for patching.

Raphael Habereder

Now you got me hooked.

One "app" per container is the same nonsense.

Why is that nonsense in your opinion?
One app/service per container is absolutely the right thing to do. If you split your app into multiple containers, you can scale your app by its independent building blocks.
If you split it up too much, you get a dependency nightmare and performance hits (network requests are much slower than you might think).
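That last point is easy to check yourself. A rough micro-benchmark (numbers are machine-dependent; the point is the orders of magnitude between an in-process call and a loopback HTTP round trip):

```python
# Compare an in-process function call with a loopback HTTP request.
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, format, *args):  # keep benchmark output clean
        pass

# Port 0 lets the OS pick a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def local_call():
    return "ok"

N = 500
t0 = time.perf_counter()
for _ in range(N):
    local_call()
t1 = time.perf_counter()
for _ in range(N):
    urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
t2 = time.perf_counter()

print(f"in-process call: {(t1 - t0) / N * 1e9:8.0f} ns")
print(f"loopback HTTP:   {(t2 - t1) / N * 1e6:8.0f} µs")
server.shutdown()
```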

On the other hand, if you bundle multiple apps into a container, how are you going to horizontally scale them? In a bundle you scale everything, which you most probably never want to do.

> Technology doesn't automatically fix bad practices.

I absolutely agree, but don't see where this fits into the context of containers.

These "big companies" are now rolling our k8s clusters to run a single application instance.

Nobody in their right mind should be rolling out a cluster for a dedicated app. The norm is a cluster for a whole department/team. They manage their cluster and usually create namespaces in it to isolate their services and dependencies.

> An extra fun thing about these container images is that they form a new security issue which is even more complicated to fix. The security feature that images are immutable is a problem for patching.

I need more context for this. Everything you do has security issues. There is no perfect security. Never. Having immutable images is a strong point in terms of security. You don't patch live containers. You build new ones with the current patchset and are done. Having a CI is common practice in development, so why should immutability be a problem for patching?
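The whole "patch" step reduces to a rebuild. A hedged sketch of that flow, driving the actual Docker CLI from a script (the registry, repository and tag scheme below are made up for the example):

```python
# Rebuild-instead-of-patch: produce a fresh, immutable image and push it.
import subprocess
from datetime import datetime, timezone

IMAGE = "registry.example.com/myteam/myservice"  # hypothetical repository
tag = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")

# --pull refreshes the base image, so the current patchset is baked in
# at build time rather than patched into a running container.
subprocess.run(
    ["docker", "build", "--pull", "-t", f"{IMAGE}:{tag}", "."], check=True
)
subprocess.run(["docker", "push", f"{IMAGE}:{tag}"], check=True)

# A rollout (kubectl, compose, etc.) then swaps tags: old containers are
# replaced wholesale, never modified in place.
```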

Michiel Hendriks • Edited

> Why is that nonsense in your opinion?
> One app/service per container is absolutely the right thing to do.

Yes, one service per container is a better setup. Services are often a collection of applications. If you look at the way Discourse is set up, you'll see that it combines a bunch of applications to form the service which runs within a container. The "discourse" service is a single container, even though it runs a bunch of applications.
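Internally, such an image just runs a small supervisor as PID 1 that keeps the member processes alive. A rough sketch of the idea (real images use runit, s6 or supervisord; the commands below are placeholders):

```python
# Toy supervisor: start several processes and restart any that die.
import subprocess
import time

COMMANDS = [
    ["nginx", "-g", "daemon off;"],  # web front end
    ["redis-server"],                # cache / message bus
    ["python3", "app.py"],           # the application itself (hypothetical)
]

procs = {tuple(cmd): subprocess.Popen(cmd) for cmd in COMMANDS}
while True:
    for cmd, proc in procs.items():
        if proc.poll() is not None:  # the process has exited
            print("restarting:", " ".join(cmd))
            procs[cmd] = subprocess.Popen(list(cmd))
    time.sleep(1)
```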

I've also seen software which comes with multiple containers, each containing a single application. So to manage a single service you now have to manage multiple containers.

Using containers for scaling? Most software out there does not need to scale over multiple instances. Most software doesn't even do this properly anyway. And using multiple containers for a service does have an impact on scaling: maintainability and manageability scale down.

Single-application containers for scaling is as bad a concept as the idea that all software should be composed of microservices. Most software doesn't need to be built from microservices. Quite often, starting from the idea that everything should be a microservice running in a separate container results in a platform which scales badly and requires a lot of services.

> Nobody in their right mind should be rolling out a cluster for a dedicated app.

Nobody in their right mind should be running a server per application either. Yet that was the premise of this article. These bad decisions are being made.

> I need more context for this. Everything you do has security issues.

The problem is that you need to apply patches to all container images. But not every company rolls its own container images. What if I need to hotfix the struts2 library that some software you run uses, because it contains a massive exploit? You can't just drop in the fixed file. You now need to create infrastructure to build and manage images.

Software is not exclusively run in SaaS.

Look, I'm not saying Docker et al. are bad. They do have their uses, but don't treat them as the single tool that exists. Sadly, Docker is seen as the only way to make use of namespaces and control groups (cgroups), which it is not. We have these awesome pieces of technology, but we chose to pick a single tool and regard every problem as a nail.

Raphael Habereder

> I've also seen software which comes with multiple containers, each containing a single application. So to manage a single service you now have to manage multiple containers.

That's not a service or an application, that's a stack. And for a stack, that is common practice. Isolate components and run/scale them independently.

> Single-application containers for scaling is as bad a concept as the idea that all software should be composed of microservices.

Why? If you want to scale as efficiently as possible, you have to slice your services correctly. Too big a slice and you waste resources; too small a slice and you lose the benefit by introducing more inter-service communication and thereby latency.

> Nobody in their right mind should be running a server per application either. Yet that was the premise of this article. These bad decisions are being made.

You still don't explain why. It's been common practice for a long time, and there are very old but well-explained reasons why you should isolate your workloads as much as possible.

> Most software doesn't need to be built from microservices. Quite often, starting from the idea that everything should be a microservice running in a separate container results in a platform which scales badly and requires a lot of services.

Then they need a better architect. Microservices aren't the solution for everything, but they have a use case. Damning a technology for misuse is the wrong approach.
As for scaling, we currently deploy about 2000 microservices at a client. They scale buttery smooth on a rock-solid platform. That is possible thanks to some amazing architects who do correct and painstakingly detailed slicing of services.

> The problem is that you need to apply patches to all container images. But not every company rolls its own container images. What if I need to hotfix the struts2 library that some software you run uses, because it contains a massive exploit? You can't just drop in the fixed file.

I actually can. Give me the file, I commit it, and 10 minutes later the whole thing is staged, tested and deployed. If you don't have 10 minutes, you should stay away from a production machine.
People who say "it's impossible" have never used a well-made CI pipeline. Even bare-metal shops don't manually patch servers or applications. They have Ansible, Puppet, Jenkins and other technologies in place long before deployments have to be made.
You. Don't. Touch. Production. Period.

You test, stage, deploy. Done. That is all it takes, and it is a very technology-agnostic process. Be my guest and do it by hand, or be reasonable and automate it.
Continuous Integration shouldn't be a new concept to anyone who develops software.
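Stripped of any particular CI server, the gate is just this (a bare-bones sketch; the commands and the deploy script are placeholders for whatever your stack uses):

```python
# Minimal test -> stage -> deploy gate: any step failing stops the rollout.
import subprocess
import sys

STEPS = [
    ["pytest", "-q"],                                 # test
    ["docker", "build", "-t", "app:candidate", "."],  # stage: build the artifact
    ["./deploy.sh", "app:candidate"],                 # deploy (hypothetical script)
]

for step in STEPS:
    print("running:", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit(f"step failed: {' '.join(step)} (nothing reached production)")

print("deployed")
```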

> Software is not exclusively run in SaaS.

Big software is, because it is convenient and removes the all-too-common human error.

> Look, I'm not saying Docker et al. are bad. They do have their uses, but don't treat them as the single tool that exists.

That is sound reasoning, but this is a Docker-centric post with some solid points.
You are dragging bad practices and suboptimal technical choices in here, and you're doing exactly what you say you don't. With those two arguments you can bomb every opinion on every technical post out of the water; they apply to pretty much anything. If you use a tool incorrectly, the tool isn't at fault.

Containers, used right, can be used for pretty much anything. Saying "but we have cgroups" is a shitty argument, because cgroups on their own do just resource control. Nothing else.
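The isolation part comes from namespaces, which you can also drive without Docker if you really want to (a sketch assuming Linux, Python 3.12+ for os.unshare, and root privileges):

```python
# A child process gets its own UTS namespace and hostname; the parent's
# hostname is untouched. Namespaces isolate, cgroups meter.
import os
import socket

pid = os.fork()
if pid == 0:
    os.unshare(os.CLONE_NEWUTS)    # private hostname namespace
    socket.sethostname("sandbox")  # visible only inside this namespace
    print("child hostname: ", socket.gethostname())
    os._exit(0)

os.waitpid(pid, 0)
print("parent hostname:", socket.gethostname())
```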

Sure, underneath Docker are tons of existing technologies; that just doesn't make those old technologies any less cumbersome to use. It's the same as saying "why use Docker, we have LXC", when actually working with LXC is a royal pain in the arse.