DEV Community

Roman Belshevitz (Balashevich) for Otomato

What is it like to live with Kubernetes in production?

Kubernetes' success is due in no small part to its flexibility and power as a container orchestration system. It can scale virtually indefinitely, providing the backbone for many of the world's most popular online services. And as a proven open-source solution with a rich ecosystem, it's easily accessible and simple to set up, whether for personal learning, development, or testing.

When it comes to deploying Kubernetes in production, however, things get a bit more complex. There are numerous aspects you need to consider that cover the full spectrum of critical success factors for an online application, including stability, security, and how you manage and monitor your application once it's up and running. If you get any of these points wrong, it can be costly.

Please note, however: Kubernetes is a complicated topic. You will have to be patient and dig into many aspects, or…contact us at Otomato, at least for advice 🤓. Either way, the author will be glad if colleagues consider this text a first step on the road to running production workloads on K8s.

So, how is Kubernetes actually used in production?

Kubernetes is designed to deploy, scale, and manage containerized applications. It schedules the containers themselves and manages the workloads that run on them. This ability to manage both the applications and the underlying infrastructure enables the integration of software development and platform management — which in turn improves application manageability and portability. With the right architecture and processes in place, it's possible to make frequent changes to a highly available application without taking it offline. You can even move an application from one cluster to another. This is the reality of using Kubernetes in production. However, to harness the potential of Kubernetes in this way, you need to configure it properly from the start.

Kubernetes in production: important considerations

Production-ready is a multi-faceted term, because what “production-ready” means depends on your use case. While it could be argued that a Kubernetes cluster is production-ready the moment it can serve traffic, there are a number of generally accepted minimum requirements. Let's explore these below.

🎯 I (first, indeed). Security

Obviously, you need to prioritize security and ensure that attack surfaces are kept to a minimum. That means securing everything — including your applications and the data they use. Kubernetes itself is a rapidly evolving product, with updates released regularly. Make sure you stay up to date.

🎯 II. Portability and scalability

These two aspects can be grouped together, as they ultimately boil down to the same thing: maximizing cluster performance. The goal is to ensure that you continue to benefit from Kubernetes' ability to self-heal nodes, automatically scale any infrastructure, and adapt to your expanding business without sacrificing performance.

🎯 III. Speed of deployment

Kubernetes is designed to simplify infrastructure and application management. Its automated features, coupled with an appropriate operating model such as DevOps, allow you to increase the speed of deployment through automated CI/CD pipelines.

🎯 IV. Disaster recovery

Using Kubernetes in conjunction with a DevOps approach can help tremendously with disaster recovery, reducing recovery time from hours to minutes. Even in the event of a complete cluster collapse, K8s lets you restore your container infrastructure quickly and easily from your declarative manifests.

🎯 V. Observability

Observability is all about getting an overview of your running services so that you can make informed decisions, before and after deployment, and minimize risk.

Kubernetes in production: best practices

In production, best practices primarily concern how you will run Kubernetes at scale. This involves a number of issues related to managing a large application that may be distributed across different time zones and clouds, with multiple development teams working on it simultaneously. With that in mind, let's list the K8s features that are useful for application development in a team.

⚙️ Namespaces allow different services to run side by side within a single cluster. When you divide the cluster in this way, multiple teams can work on it simultaneously. Namespaces are easy to create and delete (hint: it can be even easier) and can reduce server costs while increasing quality by providing a convenient environment for testing before deployment.
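A namespace is a one-resource manifest. As a minimal sketch (the name `team-a-staging` is just an illustrative choice):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-staging
  labels:
    team: team-a        # labels make namespaces easy to select and clean up
    environment: staging
```

Applying it with `kubectl apply -f namespace.yaml` (and deleting it with `kubectl delete namespace team-a-staging`) tears down everything inside it in one step.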

⚙️ Role-Based Access Control (RBAC). For an added layer of security, RBAC gives administrators control over who can see and do what in a cluster. It allows the creation of role groups (e.g., 'Developer' or 'Administrator') to which permissions can be assigned. Administrators can then specify which roles can access which resources, including details such as whether they have read or write access to them. Roles are scoped to a namespace (ClusterRoles apply cluster-wide), allowing you to specify who can create, read, or write Kubernetes resources within that space.
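As a sketch, a namespaced read-only 'Developer' role might look like this (the group name `developers` and namespace are hypothetical and would come from your identity provider):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-readonly
  namespace: team-a-staging
rules:
- apiGroups: ["", "apps"]              # "" is the core API group (pods, services)
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch"]      # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-readonly
  namespace: team-a-staging
subjects:
- kind: Group
  name: developers                     # assumed group from your auth system
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-readonly
  apiGroup: rbac.authorization.k8s.io
```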

⚙️ Network Policies. Remember that a namespace is just a virtual separation. Applications can still easily communicate with each other. To restrict which services (or even which namespaces) can communicate with each other, you should implement a network policy.
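For example, a NetworkPolicy like the following (pod labels `app: backend` / `app: frontend` are illustrative assumptions) only admits ingress to the backend pods from the frontend pods, provided your CNI plugin enforces policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: team-a-staging
spec:
  podSelector:
    matchLabels:
      app: backend          # policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only these pods may connect
```

Note that once a pod is selected by any Ingress policy, all traffic not explicitly allowed is denied.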

Advanced protection in production

🛡️ Pod-to-pod TLS traffic. Encryption between pods is not native in Kubernetes, but can be implemented via a service mesh such as Istio.
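With Istio installed, enforcing mutual TLS for a namespace is a small policy; a minimal sketch (assuming the sidecars are already injected):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: team-a-staging
spec:
  mtls:
    mode: STRICT   # reject any plaintext pod-to-pod traffic in this namespace
```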

🛡️ Accessing the Cluster API. TLS should be implemented for all API traffic, along with API authentication and authorization for all calls.

🛡️ Access to the Kubelet. The Kubelet provides HTTPS endpoints that allow control over pods and nodes. All production clusters should therefore enable authentication and authorization via the Kubelet.
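In the kubelet's configuration file, that roughly means disabling anonymous access and delegating authorization to the API server; a sketch of the relevant fields:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false     # no unauthenticated requests
  webhook:
    enabled: true      # verify bearer tokens against the API server
authorization:
  mode: Webhook        # delegate authorization decisions to the API server
```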

🛡️ User and Workload Capabilities. Restrict resource usage by setting quotas and limits, specifying container privileges, and limiting network, API, and node access to pods.
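Quotas and default limits are set per namespace; a minimal sketch (all numbers are illustrative and should be sized to your workloads):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a-staging
spec:
  hard:
    requests.cpu: "10"       # total CPU requested across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"               # cap on pod count
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a-staging
spec:
  limits:
  - type: Container
    defaultRequest:          # applied when a container specifies no request
      cpu: 100m
      memory: 128Mi
    default:                 # applied when a container specifies no limit
      cpu: 500m
      memory: 512Mi
```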

🛡️ Cluster component protection. Finally, restrict access to etcd, enable full audit logging, start a key rotation schedule, encrypt secrets, and set up standards and audits for third-party applications.
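Encrypting Secrets at rest in etcd is configured on the API server with an EncryptionConfiguration; a sketch (the key name is arbitrary and the secret is a placeholder you must generate yourself):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>   # placeholder, generate your own
  - identity: {}   # fallback so previously unencrypted data stays readable
```

Listing a new key first and re-writing all Secrets is the usual rotation step.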

🛡️ Data storage and Kubernetes. It is a complex issue. The most important point to consider is whether your application is stateful or stateless. This is because when a container is restarted, the data it contains is lost. If your application is stateless, this is not a problem, but for applications that need persistent data storage (e.g., MongoDB or MySQL), you need to use a different Kubernetes feature: Volumes.
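A stateful workload claims storage with a PersistentVolumeClaim and mounts it; a minimal sketch for a MySQL pod (names, sizes, and the `mysql-root` Secret are illustrative assumptions, and a StatefulSet would be the more robust choice in practice):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
  namespace: team-a-staging
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: team-a-staging
spec:
  containers:
  - name: mysql
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-root   # assumed pre-created Secret
          key: password
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # data survives container restarts
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data
```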

🛡️ Defining CI/CD pipelines well — with Kubernetes in mind. Cloud-native software like Kubernetes has made continuous deployment a reality for many organizations. But to achieve this, you require solid technical practices and well-organized CI/CD pipelines. These can help you improve code quality and increase the speed at which you can deploy new features.

This is not the time to be careless!

Put more plainly, when properly organized with DevOps, CI/CD also solves the problem of having to give your entire development team kubectl access to your cluster, something you should generally avoid. Incidentally, AWS has a whole manual on this topic!

In conclusion. Even Kubernetes has its limitations

Kubernetes delivers more reliable services than other systems thanks to its ability to self-heal and auto-scale. Still, it lacks some tools. For example, if you are running a large distributed system, monitoring external dependencies such as RDS databases or other services from public cloud providers can be problematic.

The author hopes that this brief overview will help colleagues competently and effectively organize the process of “rolling out” software to consumers and ensure the trouble-free operation of their applications.

However, keep in mind (we at Otomato always do!) that the rush for ever-increasing speed brings its own challenges. The most common issue with speeding up software development is an increased risk of bugs. To mitigate this risk, you could add some manual steps before deployment. Please note: doing so risks diverging your development and production environments.
