Miguel Angel Muñoz Sanchez for AWS Community Builders

Originally published at en.paradigmadigital.com

And you? Do you hate or love Kubernetes?

This month, we will deal with a very controversial topic within the AWS world.

In the past few months, I have encountered many people who hate Kubernetes and try to avoid it at all costs. These are not people committed to legacy models; quite the opposite: they have extensive backgrounds in application modernization and in the use of AWS.

This is interesting because Kubernetes adoption keeps growing every day, and it has many defenders.

Kubernetes is a great product. I think no one questions this, and it has a lot of valid use cases.

But there are also many valid alternatives to Kubernetes that may be more suitable, depending on the use case.

But what problems do I see with Kubernetes, and why do I like other alternatives better? I invite you to follow me through this beautiful story of hatred for a technology stack.

What Is Kubernetes?

"Kubernetes is a container orchestrator". I have had a lot of discussions about this sentence, which is still on the official page of Kubernetes. It has even generated quite a few jokes (someone has a lovely kimono with that phrase screenprinted on the back). But in short (and agreeing with the owner of the Kimono), Kubernetes is not a container orchestrator; it is many more things, and this is where perhaps the first problem arises: Kubernetes is not easy; it never was, and it never will be.

To explain this, we must go back to the origin of Kubernetes. In 1979… well, we don't need to go back quite that far, but the ancestors of containerized applications date back to then. ;) In 2006, several Google engineers began to develop something curious within the Linux kernel called "process containers", later renamed "control groups" or, as it is better known, "cgroups". This feature made possible the birth of containers as we know them today.

Google began to use this functionality internally for its applications, and since it needed to manage them, it created Borg (the ancestor of Kubernetes) to manage its containers.
Years later, Docker was released (earlier container implementations already existed, but none were as good or as simple).

The first problem arose here: containers were an incredible idea, but there was no way to manage and govern them (there were some specific solutions, but none were complete).

Everything changed at the end of 2014. Google rewrote Borg with everything it had learned and released it as Kubernetes. Kubernetes was here to fix all these problems and quickly became the preferred technology for managing and governing containers. This is a concise summary in broad strokes; we could write page after page about the history of containers since 1979. Back to 2014: it was a prolific year. At the end of the year, AWS launched two container-based services that we will talk about later: Lambda and ECS.

The Problems Begin

Kubernetes was the most complete and mature solution. By the time it launched Kubernetes, Google had been using and maintaining Borg for almost ten years.

However, it is a solution designed to manage thousands and thousands of containers distributed among hundreds of physical clusters, which is quite noticeable when using it.
It is not intended for small workloads but for massive workloads.

Kubernetes solved many things but also brought new problems, such as managing Kubernetes networking, workload isolation, scaling, security hardening, encryption, the deployment model, the operating model, patching, and so on.

A clear example of these problems is scaling. Scaling a pod (a group of one or more containers) up and down is easy. But scaling the infrastructure the pods run on (the nodes) up and down is much more complicated: you have to provision the infrastructure, pods do not have homogeneous sizes, you have to spread pod replicas across different servers, consolidate pods onto the same server so that other nodes can be drained and shut down, and so on.

Although many solutions help (such as Karpenter), the effort and cost of managing and maintaining a Kubernetes cluster remain very high.
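
To make the asymmetry concrete, here is a minimal sketch using the official Kubernetes Python client (the `kubernetes` package): scaling the pods of a deployment is a one-line patch, while scaling the nodes underneath is a whole infrastructure problem that this snippet does nothing about. The deployment name and namespace are hypothetical.

```python
# Scaling pods is the easy half: patch the replica count and the scheduler does the rest.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at an existing cluster
apps = client.AppsV1Api()

# Hypothetical deployment "my-api" in the "default" namespace.
apps.patch_namespaced_deployment_scale(
    name="my-api",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```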

Kubernetes requires deep expertise in the technology and a dedicated team to maintain the stack.

In 2014, cloud adoption was still limited. OpenStack, for example, was at its peak. At that time, the IT world was much more complex and dependent on raw infrastructure.

Managing layers and layers of complexity was our daily bread. But we are not in 2014 anymore, and the Cloud has made life easier for us and shifted the paradigm.

We live in a time when we try to simplify these tasks as much as possible and empower developers to speed up deployments. Adding these layers of infrastructure complexity goes against the grain.

This is where a series of cloud services come into play that are simpler than a pure Kubernetes cluster but can still run containerized workloads: Lambda, Fargate, ECS (Elastic Container Service), and EKS (Elastic Kubernetes Service).

I'm going to be honest: none of them, including EKS, a managed Kubernetes service, has the power of pure Kubernetes. But we don't need that power.

Is It Necessary to Deploy Kubernetes in All Use Cases?

The answer is no. Many use cases do not require something as complex as Kubernetes.

  • Lambda is a serverless service that lets code run directly without provisioning infrastructure.
    You cannot manage the container itself; Lambda provides a pre-built container, or execution context, that runs your code directly.

  • Fargate (ECS or EKS) is a service that lets you run containers in serverless mode without worrying about the cluster that runs the workload.

Both Lambda and Fargate shield you from all that complexity; they take care of managing it. You deploy your code or your image, and that's it: something straightforward yet compelling at the same time.
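
To give an idea of what "deploy your code and that's it" means in practice, here is a minimal sketch of a Lambda function in Python. The handler name and payload fields are hypothetical; the point is that everything below this code (runtime, container, scaling, patching) is managed by AWS.

```python
import json

def handler(event, context):
    # 'event' carries the trigger payload (an API Gateway request, an S3 notification, etc.).
    # There is no server, cluster, or node for us to manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```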

Both use an exciting open-source technology developed by Amazon called Firecracker.

  • ECS is an AWS-managed container orchestrator. It's much simpler than Kubernetes: it is just a container orchestrator and delegates the rest of its tasks to other AWS services.

  • EKS is a managed Kubernetes service that abstracts away the infrastructure deployment. EKS manages part of our Kubernetes clusters for us (the control plane, or masters). It removes part of the complexity of Kubernetes while still affording us some flexibility.

All these services make it easier to deploy container-based applications by abstracting away the complexity of Kubernetes.
They are more limited and designed for specific use cases.
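
As a rough sketch of how little ceremony this takes compared with operating your own cluster, this is roughly what launching a containerized task on ECS with Fargate looks like using boto3. The cluster name, task definition, and subnet ID are placeholders, and the task definition is assumed to be registered already.

```python
import boto3

ecs = boto3.client("ecs")

# Launch one copy of an already-registered task definition on Fargate:
# no nodes to provision, patch, or drain.
response = ecs.run_task(
    cluster="demo-cluster",            # placeholder cluster name
    taskDefinition="demo-task:1",      # placeholder task definition and revision
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])
```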

Why deploy something as complex as Kubernetes if we can use more straightforward tools?

Well, it isn't easy to explain, but there are usually reasons for this, which we will look at and discuss here.

Kubernetes Does Not Have Lock-In

It is pretty common to think that Kubernetes has no Lock-In, and it is one of the most frequently used justifications for prioritizing Kubernetes over other alternatives. But, unfortunately, Kubernetes does have Vendor Lock-In.

First, an application developed for Kubernetes must run on Kubernetes; you will not be able to run it on another type of container platform, much less outside the world of containers.

And that is a Lock-In. It is not very big because the container model is quite flexible and allows us to move in a "simple" way. But let's be honest: no one deploys vanilla Kubernetes.

Vanilla Kubernetes has little Lock-In (although it has some), but it is challenging to deploy and requires additional software to deal with all the complexity associated with it.

This is where the Vendors come in, proposing different Stacks with added tools that solve or ease many of the problems we have mentioned.

The problem is that each Vendor adds its own features to add value to its Stack, thereby creating Lock-In between the different Stacks. It is ironic to talk about avoiding Lock-In while using Stacks that rely on their own tools and even modify the Kubernetes model.

Many people think that migrating from one flavor of Kubernetes to another is transparent, while moving to a cloud service will be expensive.

Once we are working with containers, the effort is very similar.
A recent study compared migrating a standard project to different flavors of managed Kubernetes and to ECS. Interestingly, the migration time and effort were precisely the same.

We have talked about Lock-In at other times on this blog.
It is a necessary evil, so we must manage it.

There is a strong tendency to avoid it, partly because certain Vendors have abused Lock-In and partly because it has not been managed properly in the past.

We need to assess whether a Lock-In like the one Lambda, Fargate, ECS, or EKS may carry suits us and makes life easier for us, and to weigh how much it would cost us to move to another technology.

The important thing isn't having a Lock-In (because it's impossible to avoid) but managing it correctly.

Kubernetes is Multi-Cloud

This is the biggest lie ever told in Cloud computing, and the answer is no: Kubernetes is not Multi-Cloud. You can run Kubernetes on multiple clouds, but that does not mean it works the same way in each Cloud.

An example I like to give is Terraform. Terraform allows infrastructure to be deployed on every Cloud, but Terraform code written for AWS will only work on AWS; it will not work on any other Cloud.

Terraform allows us to use the same structure and language, but not the same content. The same goes for Kubernetes (although, in reality, it is containers that give us that power, not Kubernetes).
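
To make the "same language, different content" point concrete without reproducing HCL, here is an analogous sketch in Python: both functions do the same conceptual thing (create a storage bucket), in the same language, yet neither would run against the other provider. Bucket names and regions are placeholders.

```python
import boto3                      # AWS SDK for Python
from google.cloud import storage  # Google Cloud client library

def create_bucket_aws(name: str) -> None:
    # AWS-specific client, parameters, and region names.
    s3 = boto3.client("s3", region_name="eu-west-1")
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )

def create_bucket_gcp(name: str) -> None:
    # GCP-specific client, parameters, and region names.
    gcs = storage.Client()
    gcs.create_bucket(name, location="europe-west1")
```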

A Kubernetes cluster on AWS won't work the same way on Azure or Google Cloud, because although the different Clouds are similar, their implementations are different. Just looking at the differences between their networking models and their IAM (Identity and Access Management) models makes this clear.

For a while now, and thanks to the great Corey Quinn, I have recommended the same thing whenever multi-cloud comes up: first of all, try to set it up multi-region with the same cloud provider.

Managing something as simple as persistence becomes very complicated when we move from a single region to multiple regions.

And each layer we add gets increasingly complicated—and we are talking about the same Cloud where the model is the same, and the APIs are compatible. If we move to another Cloud, the problem multiplies exponentially.

Kubernetes == Cloud

There's a pretty big assumption that if we're using Kubernetes, we're using the Cloud.

Although every Cloud provider offers a managed Kubernetes service, Kubernetes as a technology was not born in the Cloud; it evolved in parallel to it.

Indeed, Kubernetes best practices align significantly with cloud and application-modernization best practices.

The use of containers makes sense in microservices architectures.

Containers are not new. The use of containers, or rather of their ancestors, goes way back, and many Unix system administrators have used those ancestors, so evolving to Kubernetes was not difficult and can even be seen as something natural.

This in itself is not a problem. Having an On-Prem container strategy is not bad per se. The problem is that Kubernetes is sometimes used as part of a non-existent cloud evolution.

I am talking about a cloud strategy that consists of using Kubernetes in the Cloud as if it were On-Prem infrastructure.
This is a terrible idea because we end up using the Cloud as an attached data center, and the Cloud does not work the same way as a data center.

In Kubernetes, Not Everything Fits

In the end, Kubernetes has been given so much flexibility that almost any workload can run on it.

This seems reasonable at first, but the fact that a workload can run there does not mean it is the best solution, even less so if we want to evolve it.

A clear example would be databases in Kubernetes.

It is possible to run a database on Kubernetes, but it just doesn't make sense. Ultimately, you are not containerizing a microservice but an entire database server.

*What good is a pod if it requires an entire server?* (You might be surprised at what you see out there.)

Another horrible example is the famous "Lift and Shift to Kubernetes." What is the point of moving from a virtualized server to a pod in Kubernetes?

It's possible, but we're just asking for trouble and using container technology for something that's not its intended purpose.

The problem is not whether Kubernetes can run these workloads. The problem is that it is the wrong use case, and it is becoming far too common.

With great power comes great responsibility, and in the case of Kubernetes, this power is being used to containerize workloads that should not be running on Kubernetes.

Summary

I won't lie: Kubernetes is not a bad solution. There are use cases where it is the best solution.

Here at Paradigma, some colleagues are working on Kubernetes projects where there is no other option than using Kubernetes, and they are doing a great job.
I have seen quite a few Kubernetes Clusters that are very well set up, well operated, and necessary.

I don't hate Kubernetes; what I dislike are bad Kubernetes implementations, which, unfortunately, have become the most common kind of late.

A good technology that should be used for one type of use case is being used for the wrong use cases.

This is a problem because, a lot of times, we are creating unnecessary complexity. These bad implementations are doomed to failure.

We commonly start by setting up a Kubernetes cluster to run our future workloads without having given those workloads any thought.

First we assemble the cluster, and then we define the workloads. There is also the possibility of developing directly on Kubernetes without considering whether it will be the best solution.

We are in 2023. The division between infrastructure and development is a thing of the past. We must think about the workload we are going to build and choose the best place to run it.

*I recommend going from less to more complexity, workload by workload, evaluating each jump.*

The order that I propose would be the following:

  1. Lambda
  2. Fargate
  3. ECS
  4. EKS
  5. Kubernetes on EC2
  6. Kubernetes On-Prem

Evaluating the "why" at each jump is essential.

If I can't use Lambda, I must ask myself why and whether that reason is justified. In many cases, Lambda is ruled out because the container must always be running. But is that requirement justified, or is it just that a service that is always running and does not depend on events is more comfortable or familiar for me? The same goes for Fargate, which is often discarded because it does not allow persistent disks to be mounted.

Although ECS, EKS, and Kubernetes allow persistent disks to be mounted in pods, it is not recommended and should be avoided as much as possible.

We must do this exercise for every workload and every step. Kubernetes is often abused because it lets us carry bad habits over from the past. But this is not an advantage; it is a problem.

It is also essential to analyze each workload on its own, without being conditioned by the overall picture.
If, for example, 80% of our workloads can run on Fargate and the rest require EKS, that is fine: set up a small cluster for the remaining 20% and run the other 80% on Fargate.

Last but not least, we must not forget about EC2. There are workloads that it does not make sense to containerize right now. A containerized monolith is still a monolith. In these cases, staying on EC2 and evolving the application to other models in the future is not a bad idea.

That is all I have to say about my hatred for Kubernetes, or rather for its misuse.
P.S.: The post includes links to exciting technologies such as Karpenter and Firecracker. I recommend that you take a look at them.
