Dumebi Okolo
Why Does Your Team Need Kubernetes?

To effectively answer the question “Why Kubernetes?”, you need to understand why solutions like Kubernetes were created and the problems teams and developers faced before them. A good way to understand this is with a familiar scenario.


Understanding Application Scalability

Imagine a solopreneur (let’s call her Janet Dough) who owns an e-commerce website that sells cars. Typically, she gets around ten visitors to the website a day.
As part of a marketing campaign, Janet Dough decides to give away an exotic car to anyone who shops between 12 pm and 2 pm on Friday.
From roughly ten visitors a day, the website is set to see a million or more visitors within the timeframe of the giveaway.
Because of the influx of potential buyers and winners, the website is likely to be overwhelmed by the traffic, becoming slow, difficult to navigate, or crashing outright.
Janet Dough has been in a similar situation before and witnessed this problem first-hand.
Janet’s experience is not unique; many businesses and teams face similar challenges. It is a good example of what happens when an application is built for minimal traffic but not fortified to scale for peak traffic periods.


Let's talk about scalability.

What Is Application Scaling?

The scalability of an application refers to its ability to expand or contract over time and effectively handle an increasing number of requests per minute (RPM).
For Janet Dough, her business website was originally designed to handle minimal traffic, and no proper plans had been made for such a huge influx of visitors and potential buyers.

Types of Application Scaling

In designing systems, scaling is an important aspect of handling changing loads efficiently.

Manual Scaling

Manual scaling is the process of a team or individual manually adjusting the resources (such as CPU, memory, or storage) allocated to an application or system to handle changing loads.
A system can be scaled manually in two ways:
Vertical scaling (scaling up) expands the capabilities of a single server or node by adding more resources to it. Vertical scaling is reasonably simple but has constraints, because there is a maximum workload a single server can manage.
Horizontal scaling (scaling out), unlike vertical scaling, is the process of adding more machines or containers that all run the app in parallel. While this solves some of the issues with vertical scaling, it still costs more money and requires manual effort.

Auto-Scaling

Auto-scaling is a process that automatically adjusts the computing resources allocated to an application based on real-time demand. In cloud environments, this feature allows systems to scale up (add resources) or scale down (reduce resources) as workload increases or decreases, ensuring optimal performance and cost-efficiency without manual intervention.
More recently, teams and system engineers have moved to automatic scaling to better handle fluctuating demand without constant human intervention.

For Janet Dough, after weighing manual scaling against auto-scaling, she landed on a platform with built-in auto-scaling: Kubernetes.


What is Kubernetes?

Kubernetes, also known as K8s, is an open source system for automating the deployment, scaling, and management of containerised applications.
You can’t fully understand the essence of Kubernetes without understanding containers.

What are containers?

Containers are lightweight, executable application components that combine source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.
Docker is the most popular tool for creating and running Linux containers.
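As a minimal sketch (the image name, service name, and ports are hypothetical, not from the original article), a containerised web app can be described in a Docker Compose file like this:

```yaml
# docker-compose.yaml — a hypothetical sketch of one containerised service
services:
  shop:
    image: nginx:alpine   # stands in for Janet's web application image
    ports:
      - "8080:80"         # expose the container's port 80 on the host's 8080
```

Running `docker compose up` with a file like this starts the container with everything it needs bundled inside the image.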


What Does Kubernetes Do?

Kubernetes schedules and automates container-related tasks throughout the application lifecycle, including the following:

Deployment

Deploy a specified number of containers to a specified host and keep them running in a desired state.

Rollouts

A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume or roll back rollouts.
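As a hedged sketch of how rollout behaviour is declared (the field values here are illustrative, not from the article), a Deployment spec can constrain how replicas are replaced during a rollout:

```yaml
# Fragment of a Deployment spec controlling rollout behaviour
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica down during the rollout
      maxSurge: 1         # at most one extra replica above the desired count
```

With these settings, Kubernetes replaces pods gradually, so the application stays available while a new version rolls out.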

Service discovery

Kubernetes can automatically expose a container to the internet or to other containers by using a domain name system (DNS) name or IP address.
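For illustration (the names and ports below are hypothetical), a Service gives a set of pods a stable DNS name that other containers can use:

```yaml
# A Service giving pods labelled app: shop a stable in-cluster DNS name
apiVersion: v1
kind: Service
metadata:
  name: shop              # reachable in-cluster as "shop"
spec:
  selector:
    app: shop             # routes to pods carrying this label
  ports:
  - port: 80              # port the Service listens on
    targetPort: 8080      # port the container actually serves
```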

Storage provisioning

Set Kubernetes to mount persistent local or cloud storage for your containers as needed.
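A minimal sketch of requesting storage (the claim name and size are hypothetical): a PersistentVolumeClaim asks Kubernetes to provision storage that a container can then mount.

```yaml
# A PersistentVolumeClaim requesting 1 GiB of persistent storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shop-data
spec:
  accessModes:
  - ReadWriteOnce         # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
```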

Load balancing

Based on the CPU usage or custom metrics, Kubernetes load balancing distributes the workload across a network to maintain performance and stability.
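As one common pattern (the names here are illustrative), a Service of type `LoadBalancer` asks the cloud provider for an external load balancer that spreads incoming traffic across the matching pods:

```yaml
# A LoadBalancer Service distributing external traffic across pods
apiVersion: v1
kind: Service
metadata:
  name: shop-lb
spec:
  type: LoadBalancer      # provisions an external load balancer in the cloud
  selector:
    app: shop
  ports:
  - port: 80
    targetPort: 8080
```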

Autoscaling

When traffic spikes, Kubernetes can spin up new clusters as needed to handle the additional workload and release the added clusters when no longer in use.
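A sketch of how this is configured (the target names and thresholds are hypothetical): a HorizontalPodAutoscaler watches a metric and adjusts a Deployment's replica count between set bounds.

```yaml
# A HorizontalPodAutoscaler scaling a Deployment between 3 and 10 replicas
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU passes 70%
```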

Self-healing for high availability

When a container fails, Kubernetes can restart or replace it automatically to prevent downtime. It can also take down containers that don’t meet your health check requirements.
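Health checks like these are declared as probes on the container (the image and endpoint below are hypothetical):

```yaml
# Fragment of a pod spec: Kubernetes restarts the container if the
# liveness probe fails repeatedly
containers:
- name: shop
  image: shop:1.0
  livenessProbe:
    httpGet:
      path: /healthz       # hypothetical health endpoint in the app
      port: 8080
    initialDelaySeconds: 10  # give the app time to start first
    periodSeconds: 5         # probe every five seconds
```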


Why Choose Kubernetes?

Considering the time and effort it takes to scale a system manually, the combination of Kubernetes and containerisation is a breath of fresh air. It takes tedious, repetitive tasks out of the developer’s hands and gives them to a system that does the heavy lifting for you.
For example, applying a manifest as simple as the one below creates three replicas of the application, which Kubernetes spreads across available nodes for resilience and efficiency.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: janet-dough        # Kubernetes names must be lowercase DNS-compliant
spec:
  replicas: 3
  selector:
    matchLabels:
      app: janet-dough
  template:
    metadata:
      labels:
        app: janet-dough   # must match the selector above
    spec:
      containers:
      - name: janet-dough-container
        image: janet-dough:latest
        ports:
        - containerPort: 8080
```

Possible Limitations of Kubernetes

While self-hosting a Kubernetes cluster in a cloud-based environment is possible, setup and management can become complex for an enterprise organisation.
However, with managed Kubernetes services, the provider typically manages the Kubernetes control plane components and helps automate routine processes for updates, load balancing, scaling, and monitoring. For example, Red Hat OpenShift is a managed Kubernetes platform that can be deployed in any environment and on all major public clouds, including Amazon Web Services (AWS).
Furthermore, although Kubernetes is the technology of choice for orchestrating container-based cloud applications, it depends on other components, ranging from networking, ingress, load balancing, and storage to continuous integration and continuous delivery (CI/CD), to be fully functional.
This means that the individual or administrator handling it needs working knowledge of these components.


Considering all of this, Janet Dough decided that she could not manually handle the scaling of her website alongside everything else on her plate.
By choosing Kubernetes, she was able to handle the large influx of visitors her website received without crashes or downtime.
Kubernetes also provides a suite of other tools and features, including logging and monitoring, depending on your specific needs.
