Samira Awad

Cloud, Containers, and Kubernetes: Fundamentals for Developers

Introduction to Cloud and Containerization

In this article, we will explore what the cloud is and how it relates to containerization, focusing on technologies like Docker and Kubernetes. To understand these concepts effectively, we will also discuss Linux, as much of the cloud infrastructure is based on this operating system.

Linux as an Operating System

An operating system is the set of programs that allows a device to function. Examples include Windows, Linux, and macOS on computers, and Android and iOS on mobile devices. Because Linux is open source, it is used by many companies and individuals worldwide, which has resulted in various distributions tailored to different needs. Linux also dominates servers and supercomputers thanks to its flexibility and efficiency, and it is primarily managed through the command line.

The Cloud: What Is It Exactly?

When we say an application stores data in the cloud, we mean that this data is on a server that the application accesses via the internet. This server is generally located in a datacenter alongside other servers from different companies, although it could also be in a private location. The physical location of the server does not matter; what is important is the remote connection and access.

In more technical terms, "cloud" also refers to the IT infrastructure that we outsource to providers like Microsoft, Amazon, or Google. These providers lease us computing resources (servers, storage, etc.) from their datacenters, allowing us to run cloud services without worrying about hardware maintenance. This offers flexibility, as we can rent servers on demand and pay only for the time used. This model, popularized by Amazon with its Elastic Compute Cloud (EC2) and its elastic scaling (which automatically increases or decreases the number of servers in minutes based on service demand), has revolutionized how we manage servers.

Linux in the Cloud and on Servers

Due to its stability, security, and efficiency, Linux is the most used operating system in the cloud and on virtual servers. Many companies have migrated their servers to the cloud, making "cloud" and "internet" practically synonymous for many. The open-source nature of Linux allows the community to audit it and quickly identify vulnerabilities, providing a significant security advantage over closed systems.

Moreover, Linux is more cost-effective as it does not require usage licenses, making it a preferred choice for companies that manage large numbers of servers. With advanced tools for remote and automated management, Linux enables efficient administration of hundreds or thousands of machines.
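
One popular way to achieve that kind of remote, automated administration is a tool like Ansible, which runs the same task on hundreds of machines from a single control node. Here is a minimal sketch; the inventory file and group name are hypothetical:

```bash
# Verify connectivity to every server listed in the inventory
ansible all -i inventory.ini -m ping

# Install nginx on every machine in the "webservers" group, in one command
ansible webservers -i inventory.ini -m apt -a "name=nginx state=present" --become
```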

Servers and the Dominance of Linux

Managing and configuring servers via the command line is more efficient and allows for better remote administration than a graphical interface does. This is a major reason why Linux dominates virtual servers and the cloud over other operating systems.

In terms of cost, Linux is the more economical option because it requires no licenses, unlike its competitors, where each server needs its own. This matters because a company providing a service may run many servers, and since the 2000s a single machine, physical or in the cloud, can host multiple virtual machines, each of which would otherwise need a license.

Regarding security, being open source allows the community to audit the code and detect vulnerabilities more rapidly. In closed systems, only company personnel have access to the code, so only they can identify security flaws.

It is called the cloud because, long ago, network diagrams used a cloud drawing to represent external networks whose inner workings were unknown. As the internet grew, that external network became synonymous with the internet itself, and the cloud symbol came to represent it in computing.




From the Cloud to Containers: Docker as an Efficient Solution

The cloud has transformed how companies manage and deploy applications, allowing them to leverage scalable and flexible infrastructures without worrying about the physical maintenance of servers. However, as applications grow in complexity, the need for more efficient solutions to manage dependencies, environments, and scalability arises.

This is where containerization comes into play, a technology that has revolutionized software development and deployment in the cloud. Docker, the most popular container system, simplifies the packaging of applications with all their dependencies, ensuring they run consistently in any environment, whether local or in the cloud. Next, we will explore how Docker and containers have changed how applications are distributed, executed, and scaled in cloud infrastructure.

Software Containers

Containers are a way to distribute and run applications in isolation. They contain the application along with all its dependencies, solving common compatibility issues between environments. In traditional installations, dependencies (packages and services) are managed separately, but many computers have different versions and configurations of these dependencies, creating problems.

As a solution, containers include everything needed for the application to function correctly in any system, regardless of the environment configuration.
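
As a minimal sketch of this packaging, here is what a Dockerfile for a hypothetical Node.js web app could look like; the base image, file names, and port are illustrative assumptions:

```dockerfile
# Start from an official Node.js base image (illustrative choice)
FROM node:20-alpine

# Install dependencies first so Docker can cache this layer
WORKDIR /app
COPY package*.json ./
RUN npm install --production

# Copy the application code itself
COPY . .

# Document the listening port and define how to start the app
EXPOSE 3000
CMD ["node", "server.js"]
```

Building this with `docker build -t my-app .` produces an image that bundles the runtime, the dependencies, and the code, so the same artifact behaves identically on a laptop or a server.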

Problems that Containers Solve

  1. Distribution: Imagine you develop an application that works on your computer but not on others. Tracking down the configuration differences can be complicated and slow. Containers address this by packaging the application with its dependencies and configuration in an image, ensuring it works the same way on any system. The downside is that images can take up a lot of space, especially if the application has many dependencies. This is why containers are mainly used for applications running on servers, where it is crucial that what works on the developer's computer also works on the server.

  2. Execution: Running an application in a container is similar to running any other app, with the advantage that containers are isolated. We can limit a container's access to system resources (CPU, memory, etc.), preventing it from interfering with other running applications, as the sketch after this list shows. Although this isolation is weaker than that of a virtual machine, it is sufficient for most cases and allows for more efficient and controlled use of resources.
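
Here is a minimal sketch of that resource limiting with Docker, reusing the hypothetical image built above:

```bash
# Run the container detached, capped at 1 CPU and 512 MB of RAM
docker run -d --name my-app --cpus="1.0" --memory="512m" -p 3000:3000 my-app
```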

Difference Between Virtual Machines and Containers

  • Containers: Share the host operating system's kernel. Each application runs isolated in its own environment, but all containers use the same underlying OS.
  • Virtual Machines: Simulate a complete computer with its own operating system and kernel. This provides stronger isolation between applications and greater security, but requires more resources.

Containers are lighter than virtual machines, and although they do not offer the same level of isolation, it is often sufficient for most applications. The most popular container system is Docker. For example, if we develop a website in a container, we can replicate the image across different servers to ensure availability: if something happens to the server where it is running, we can deploy that same image on another, healthy server, saving rework.
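
A sketch of that replication flow through a container registry; the account name and tag are hypothetical:

```bash
# Tag the local image and push it to a registry (Docker Hub in this sketch)
docker tag my-app myuser/my-app:1.0
docker push myuser/my-app:1.0

# On any other server, pull and run the exact same image
docker pull myuser/my-app:1.0
docker run -d -p 3000:3000 myuser/my-app:1.0
```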



Kubernetes: Container Orchestrator

Kubernetes is a system for orchestrating containers, helping to manage, scale, and maintain applications distributed across them. "Scaling" means increasing or decreasing the number of containers based on demand, such as in a website that experiences more traffic at certain times of the day.

This saves costs by not running containers that are not being used. We can launch more replicas when demand is higher and reduce them when it is lower. These replicas can be launched on the same server, different servers, or even on different servers in various parts of the world to be closer to the users viewing the page.
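
In practice, that scaling is a one-line operation; here is a sketch assuming a hypothetical Deployment named `web` already exists in the cluster:

```bash
# Manually scale to 5 replicas when traffic is high
kubectl scale deployment web --replicas=5

# Or let Kubernetes adjust between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```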

Kubernetes handles:

  • Launching containers when needed and stopping them when no longer required.
  • Balancing the load between containers by distributing user requests among several containers to optimize performance.
  • Running on a cluster of servers, launching applications on whichever server has available capacity.

Kubernetes can run on one or more servers, forming a cluster: the set of servers available to run applications. When Kubernetes needs to run a new replica of the application (packaged as a "pod", the smallest deployable unit in Kubernetes), it checks which server has capacity and launches the app on that server.
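
As a minimal sketch of how such replicas are declared, here is a Deployment manifest reusing the hypothetical image from the Docker examples; given this file, Kubernetes decides which servers in the cluster run the three pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                        # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myuser/my-app:1.0   # hypothetical image from the Docker example
          ports:
            - containerPort: 3000
```

Applying it with `kubectl apply -f deployment.yaml` hands the scheduling work over to the cluster: if a pod or server dies, Kubernetes launches a replacement elsewhere.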

We can also think about how the pods running inside the cluster interconnect and exchange traffic over the network; there are various ways to route this traffic within the cluster. Likewise, we can consider how new versions of an application can be rolled out without interrupting the service.
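
One common way to route that traffic is a Service, which load-balances requests across every pod matching a label; here is a sketch matching the hypothetical Deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # send traffic to every pod carrying this label
  ports:
    - port: 80         # port the Service exposes inside the cluster
      targetPort: 3000 # port the container actually listens on
```

For zero-downtime releases, Deployments perform rolling updates by default: a command like `kubectl set image deployment/web web=myuser/my-app:1.1` gradually replaces old pods with new ones while the Service keeps routing traffic to whichever pods are healthy.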

In Summary

Kubernetes is an ideal infrastructure tool for large applications, as it automates the management of multiple containers across different servers, facilitating scalability and availability. It is not recommended for small systems, as its complexity and cost do not justify the benefits.

Choosing which components to use, how they will interconnect, and how to configure them can be challenging to manage. This is why most cloud providers offer managed Kubernetes: clusters that come preinstalled and preconfigured with a chosen set of components, allowing us to focus solely on deploying our applications.
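
For example, with Google Cloud's managed offering (GKE), getting a ready-to-use cluster can be a single command; the cluster name and node count here are illustrative:

```bash
# Create a managed Kubernetes cluster with three worker nodes
gcloud container clusters create demo-cluster --num-nodes=3
```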

History of Kubernetes

Kubernetes is a set of software components that collectively provide this infrastructure. It was created by Google and released as an open-source project in 2014 under the Apache 2.0 license; version 1.0 arrived in 2015, when the project was donated to the newly created Cloud Native Computing Foundation. The name comes from the Greek for "helmsman", the person who steers a ship. One of the key design decisions was to make it extensible, meaning its software components can be improved, extended, or replaced to provide better functionality. This has led to projects that add or swap Kubernetes components to offer enhanced infrastructure capabilities.

Microservices and Kubernetes

Microservices break down a large application into smaller components, where each fulfills a specific task and communicates with other parts by sending packets over the network. For example, one part of a website shows news, another shows related news, another handles comments, and another manages authentication.

In the past, it was common to have a single large site/app that encompassed everything, so if we needed the app to handle more demand and have greater availability, the entire system had to grow as a single entity. The microservices architecture allows services to be scaled individually based on their needs. For instance, the service displaying the homepage of a website may require more replicas than the authentication service due to higher traffic demand.
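
As a sketch of that independent scaling, each microservice gets its own Deployment and replica count; the service names here are hypothetical:

```bash
# The high-traffic homepage service runs many replicas...
kubectl scale deployment homepage --replicas=10

# ...while the lighter authentication service needs far fewer
kubectl scale deployment auth --replicas=2
```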

Kubernetes simplifies the management of microservices, as it automatically adds and removes replicas, distributes traffic, and streamlines administration.

Conclusion

The cloud offers great flexibility: with a click, we can add servers, storage space, configure backups, create databases, or enable global load balancing. However, this level of ease and elasticity comes at a cost, and it is not always the best option for all companies.

For organizations that already have the necessary infrastructure, the cloud may not be the most economical choice. But for those starting out and requiring a scalable solution, the cloud can be ideal, significantly reducing the workload. Some companies also opt for a hybrid strategy, maintaining part of their infrastructure locally while leveraging the cloud for services requiring more flexibility.

Kubernetes is a powerful tool for managing applications that use containers, especially in microservices environments. While it is a complex infrastructure, it is extremely useful for large applications requiring scalability and high availability. Cloud providers typically offer managed Kubernetes, reducing complexity and allowing developers to focus on deploying their applications.


Jobs Related to the Cloud

When a job posting mentions "cloud infrastructure," it usually refers to the ability to handle different cloud service providers and the products they offer. People working in this field utilize resources provided by providers like Amazon, Google, or Microsoft to deploy and maintain applications. For example, in a development team, the frontend manages the application that interacts with users, while the backend handles services running on the cloud infrastructure.

Those maintaining cloud services need to know how to use virtual servers, storage systems, databases, and monitoring tools, among the other products these platforms offer.

If the job posting refers to "cloud technologies," it may be talking about tools related to the Cloud Native Computing Foundation (CNCF). This foundation, created around Kubernetes, groups together a series of open-source projects used alongside Kubernetes to create cloud-native solutions. These technologies are designed to work on elastic infrastructures, leveraging containers, microservices, and other dynamic functionalities.

Additionally, there is the option to work on developing the cloud infrastructure itself, being part of the team that ensures servers are always available and functioning correctly.


*My own notes, taken from the brilliant content of "Aprendiendo con Marga"* <3
