t.v.vignesh for Timecampus

Infrastructure Engineering - The Kubernetes Way

This blog is part of a series on Kubernetes and its ecosystem where we will dive deep into the infrastructure one piece at a time.

In the previous post, "The First Principles of Infrastructure Engineering", we set the context for this series by laying out our vision and goals for incrementally scalable, adaptable cloud native infrastructure that caters to today's most important demands. With that settled, let us start our journey in this direction and see how various tools and platforms make it all happen and help us realize those goals.

When we talk about infrastructure, the first parallel we can draw is Kubernetes and its ecosystem itself, given the broad spectrum of problems it addresses and the thriving ecosystem, community and enterprise backing around it. And that is what this blog post is about: how Kubernetes addresses the various pressing challenges across the infrastructure as a whole.

You may ask: is Kubernetes the only player that does this?

Well, the answer is NO; there are definitely other players like CoreOS and OpenStack. While you should choose what is best for you, Kubernetes and its ecosystem have a great future ahead of them, especially because of the huge community backing, which becomes evident if you take a look at the thriving landscape here. And most of the time, Kubernetes will be a great fit for the majority of use cases.

But nothing stops you from using them together, either. For instance, you can run Kubernetes on OpenStack, or OpenStack on Kubernetes, if you wish.

What does the Cloud Native Stack look like?

Just getting your application running inside a virtual machine on a cloud provider does not make you cloud native. As Pivotal put it, "Cloud native is an approach to building and running applications that fully exploit the advantages of the cloud computing model", which means we need a completely different thought process in how we look at infrastructure.

These are some of the important parts of the cloud native stack, as described by Janakiram MSV in The New Stack.

The Cloud Native Stack

And each block you see in this image is a completely separate problem to be solved when working with applications at any scale. Let us see how Kubernetes and the various tools around it take up this difficult job, abstracting away many of the complex details involved in onboarding and maintaining infrastructure.

(You can go through the book from The New Stack on The State of Kubernetes and its ecosystem here: https://thenewstack.io/ebooks/kubernetes/state-of-kubernetes-ecosystem-second-edition-2020/)

Compute

Every application requires some degree of computing power depending on its business logic and demand, and the infrastructure should allow compute to be scaled up or down incrementally as requirements keep changing.

Kubernetes (essentially a project conceptualized from Borg at Google) lets you manage compute efficiently and scale it up or down as needed.

The compute available to the application depends on various factors: the type of node on which you run the containers (the number of CPU cores available to the node, the processors attached, the RAM available and, for compute-intensive cases, accelerators like GPUs), and the constraints you have set manually, including CPU and memory limits, pod affinity/anti-affinity and so on.

And the best part is that all this compute is shared within the cluster or a specific namespace, allowing you to manage the resources you have effectively. And if you are not sure how your computing needs will change, all you need to do is set up an autoscaler (HPA, VPA or CA) and it will scale your compute up or down depending on the constraints you set.
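
As a minimal sketch (the names, image and thresholds below are made up for illustration), a Deployment can declare CPU and memory requests and limits, and a HorizontalPodAutoscaler can scale it on CPU utilization:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/team/api:1.0.0   # hypothetical image
          resources:
            requests:               # what the scheduler reserves on a node
              cpu: 250m
              memory: 256Mi
            limits:                 # hard ceiling enforced at runtime
              cpu: 500m
              memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # add replicas when average CPU usage crosses 70%
```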

Now, what if you want to do serverless? It is easy, especially when you work within the Kubernetes ecosystem, since projects like Knative, OpenFaaS and others do exactly this. (Ultimately, serverless is nothing but abstracting everything away from developers and offering elasticity in compute as needed.)
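
For instance, a minimal Knative Service looks like the sketch below (using Knative's public hello-world sample image); Knative then handles scaling the revision based on traffic, including scale-to-zero:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # Knative's sample image
          env:
            - name: TARGET
              value: "Kubernetes"
```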

Or if you are holding on to legacy infrastructure and cannot afford to run all your compute in containers, you can go for projects like KubeVirt, which allows exactly that: running containers and virtual machines side by side in your cluster.

Having all these choices makes Kubernetes a great choice for managing compute at any scale.

Storage

Managing state is difficult for a reason: managing your storage improperly can create bottlenecks in your application as you scale. Every disk is limited by its IOPS (Input/Output Operations Per Second), and as you scale, the storage layer can cause quite a few problems and often becomes the bottleneck.

This might require clustering your storage, possibly with multiple replicas, and on top of this you would also need to maintain periodic backups and handle seamless failovers. Now, while Kubernetes does have its own way of managing storage using Persistent Volumes, Claims and Storage Classes, it does not handle every storage problem and rather leaves that to the storage provider.

It has been possible to plug many storage provisioners and volume plugins into Kubernetes, but the future aligns with CSI (Container Storage Interface), which promises compatibility with any storage provider of your choosing.

For example, if you would like to bring a file system like Ceph onto Kubernetes, there is a Ceph-CSI driver, and if you would like to reduce the friction of maintaining Ceph, you can even go for something like Rook with Ceph CSI. There are many players in this space, each providing a different set of options including File, Block, NFS and more, and you can read about how they perform and differ here.
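
To sketch the idea (the class name is arbitrary, and a real Ceph-CSI StorageClass also needs cluster-specific parameters and secrets which are omitted here), a StorageClass points at a CSI provisioner and a PersistentVolumeClaim requests storage from it:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-block                  # arbitrary name
provisioner: rbd.csi.ceph.com       # Ceph-CSI RBD driver; real setups add parameters and secrets
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-block      # ask this class to dynamically provision the volume
  resources:
    requests:
      storage: 10Gi
```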

If you would like to get a better idea of the various storage options available, I would recommend watching this talk by Saad Ali from Google:

Network

Networking is critical for any application since it essentially is the data path for all your requests, responses and internal communications. This becomes even more important in the Kubernetes world, since your cluster is going to be composed of various pods across the same or multiple namespaces, each fulfilling specific responsibilities, and their ability to communicate with each other is critical to the application's working.

This is made possible by the in-built service discovery mechanisms in Kubernetes, like CoreDNS (kube-dns being its predecessor) with some help from kube-proxy, and by basic Kubernetes constructs like Services (of various types including ClusterIP, NodePort and LoadBalancer) and Ingress.

Having constructs like these makes it much easier to establish communication via labels rather than having to hardcode IP addresses or do complex routing, all of which is handled by the DNS server given the proper FQDN (Fully Qualified Domain Name).
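
As a small illustration (the names are hypothetical), a ClusterIP Service selects pods by label and becomes resolvable through CoreDNS at a stable name such as orders.<namespace>.svc.cluster.local:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  type: ClusterIP
  selector:
    app: orders        # routes to any pod carrying this label, no hardcoded IPs
  ports:
    - port: 80         # port clients call
      targetPort: 8080 # port the container listens on
```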

Networking becomes even more interesting with the introduction of CNI (Container Network Interface), which has become the standard for networking abstractions (similar to what CSI does for storage). This allows users to run any CNI plugin underneath, choosing from the large number of in-tree and third-party plugins available like Calico, Weave Net, Flannel, etc., and you can look at a comparison of some of them here.

Service mesh implementations also leverage CNI. For instance, Linkerd has its own CNI plugin which takes care of managing the iptables rules for every pod in the service mesh, and Istio has something similar too.

Having all of these as Kubernetes constructs also allows us to moderate traffic using Network Policies, and the possibilities are endless.
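
For example, a NetworkPolicy like the sketch below (labels and namespace are made up) only admits traffic to the orders pods from pods labelled app: frontend; once the policy selects the pods, everything else is denied:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: orders           # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```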

All of this makes Kubernetes a great fit for networking as well. And if you would like to know more about the networking stack, I would recommend watching this talk by Bowei Du and Tim Hockin from Google:

Or this talk from Michael Rubin from Google:

Container-Optimized OS

With more and more virtualization and isolation in place, it makes sense to trim down the operating system, the kernel and all the packages used, so that resources are used efficiently and there are fewer bugs to fix or patch. This is why container-optimized operating systems came into being. For instance, there are distributions like Flatcar from Kinvolk and RancherOS from Rancher, and cloud providers even maintain their own container-optimized OS like this, this and this.

If you would like to know why a container-optimized OS is really needed, I would recommend watching this talk:

Container Runtime

Docker started out as a pioneer for containers (even though cgroups and containers were used in production even before Docker), and this led to a lot of people using Docker as the container runtime when it all started. But Kubernetes looked at it differently, with the Pod as its basic unit. While Kubernetes initially shipped support for Docker through a shim (dockershim) and maintained it for years, it recently announced that it is deprecating Docker as a container runtime in favor of runtimes that implement CRI (Container Runtime Interface), a standard that makes it possible to plug in various container runtimes such as containerd and CRI-O (with low-level OCI runtimes like runc underneath).
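
If a node has more than one runtime handler configured (for example a sandboxed runtime alongside the default), Kubernetes exposes that choice through a RuntimeClass. A rough sketch, assuming a handler named runsc has already been configured in the node's CRI runtime:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                   # must match a handler configured in containerd/CRI-O on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor       # run this pod with the alternative runtime
  containers:
    - name: app
      image: registry.example.com/team/app:1.0.0   # hypothetical image
```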

If you would like to have a look at how the runtimes differ, this should give you a better idea, and if you would like to explore more, you can also watch this talk by Phil Estes from IBM:

Service Mesh

A service mesh is meant to provide a platform to scale networking incrementally without chaos, maintain control over the communication between services, and offload a lot of operations like authentication, authorization, encryption, tracing, canary testing and more away from the application, thereby reducing complexity as your application scales. But remember, a service mesh might be overkill for small projects and for those that don't use microservices, though it can definitely pay off as you scale up your application.

Tools like Istio, Linkerd and Consul are known for empowering users with their service mesh offerings, each running a sidecar proxy alongside the actual container. While setting up a service mesh was initially complicated, things have gradually come to a point where CLIs and Helm charts can set up everything for you in one shot.
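
As an example of how little the application itself has to change, Linkerd's proxy injector only needs an annotation on the namespace (or workload) to add the sidecar automatically, and Istio uses a similar namespace label (istio-injection: enabled). A minimal sketch with a hypothetical namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  annotations:
    linkerd.io/inject: enabled   # pods created in this namespace get the Linkerd sidecar injected
```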

The Service Mesh Interface (SMI) also paves the way for a lot of future developments in this space, bringing standardization and interoperability without having to change the way your application works. If you would like to know more about what a service mesh is and whether you really need one, you can go through this blog post by William Morgan, creator of Linkerd, or you can watch this talk by Brendan Burns from Microsoft:

Security & Hardening

Security, as people rightly say, must be a first-class citizen in everything we do. While Kubernetes is not secure by default, it gives you a lot of options and ways to secure the infrastructure and all the workloads you run on top of it. This can include anything from controlling access to your cluster and namespaces, controlling inter-pod traffic, preventing root access, scanning images for vulnerabilities and more, and each needs a different approach on the way to the goal of Zero Trust security (e.g. BeyondCorp).

There are a lot of steps to security in Kubernetes. For instance, you can use RBAC for authorization, encrypt the etcd store, scan images in the registry using something like Clair, add sidecars or libraries with Open Policy Agent (OPA) to define policies as per your needs, add the appropriate Security Contexts and Pod Security Policies to control the permissions your containers have, configure mTLS to encrypt traffic within your mesh, and so on, each of which brings you one step closer in your journey towards Zero Trust security.
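
Two of those steps can be sketched quickly (the names and namespace are illustrative): an RBAC Role that only allows reading pods, and a container securityContext that drops root and extra privileges:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only access, nothing more
---
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
  namespace: dev
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.0.0   # hypothetical image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]               # drop all Linux capabilities
```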

Container Orchestration Engine

Now that we have all the major parts of the underlying infrastructure in place and all the workloads we need to run on top, it is important to have an orchestration layer to manage it all. This is where a project like Kubernetes falls into place. But as we discussed already, Kubernetes is not the only option out there (though it is definitely the leader in this area). There are other options available like OKD, which powers OpenShift, Rancher Kubernetes Engine (RKE) and more.

And even if you go for Kubernetes, you will find various distributions depending on the cloud provider you choose or the environment you would like to run your clusters in. For instance, you could use KubeEdge if you would like to work with IoT devices, allowing you to run the cluster offline, optimized for smaller hardware and with support for different processor architectures.

Build Management

Once you have your applications ready and have checked the latest code in to version control, the next important step is to actually produce the artifacts you need in order to push the relevant images to the registry and then pull them to run in the cluster. This can be preceded by a lot of steps, including scanning for vulnerabilities, running automated tests and so on, all of which are typically automated in the DevSecOps flow.

Some of the pioneers in this area include GitLab CI, Jenkins X, GitHub Actions and more, which can be combined with tools like Skaffold, Skopeo and the like to build a complete pipeline for managing your builds.

There are a lot of tools you can use to build your OCI-compatible images. This can mean going for a container builder like Docker, BuildKit (again from Docker/Moby), Buildah and more, as shared here. It all boils down to what you would like to optimize for. If you would like to know how they compare, you can have a look at the presentation by Akihiro Suda from NTT here on the various builders and how they differ.
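
As a rough sketch of such a pipeline (the job name is arbitrary; the variables are GitLab CI's predefined registry variables), a single GitLab CI job can build an image and push it to the project's registry:

```yaml
stages:
  - build

build-image:
  stage: build
  image: docker:24.0
  services:
    - docker:24.0-dind               # Docker-in-Docker service to run the build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```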

If you are interested in knowing more about how to set up your CI/CD pipelines, I wrote a small tutorial on configuring Skaffold with GitLab CI/CD, connecting to a GKE cluster and pushing/pulling images to/from a GCR registry here.

Release Management

Once the application has been tested, built, scanned and pushed, the final step in the pipeline is to actually release the application to multiple environments as needed. This is where release management plays a major role. Irrespective of the tool being used, the final asset applied to Kubernetes is nothing but a YAML or JSON manifest describing the various resources.

This is the case even if a package manager like Helm is used, since it does nothing but template the YAML files with the various values supplied by users; the resulting charts can later be pushed to an OCI-compliant registry for versioning and sharing as releases. The CD Foundation also hosts a number of projects under its umbrella meant to address exactly this problem.
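
A tiny, hypothetical example of that templating: values supplied by the user are substituted into the chart's templates, and the rendered result is the plain YAML that actually gets applied:

```yaml
# values.yaml (supplied or overridden by the user)
replicaCount: 2
image:
  repository: registry.example.com/team/app   # hypothetical image
  tag: "1.2.3"

# templates/deployment.yaml (excerpt) - rendered by Helm into plain YAML before being applied
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```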

If you would like to go through Continuous delivery with Kubernetes in detail, I would recommend going through this from Google.

Container Registry

As we discussed in Build Management, the artifacts generated (be they container images, Helm charts, etc.) are typically pushed to an OCI-compliant registry, which can then be used as the single source of truth when deploying to Kubernetes clusters.

While it all started with Docker Hub leading the way, you have other options like Quay, GCR, ECR and ACR providing an image registry as a SaaS offering, and if that doesn't work for you, you can even host your own private registry using something like Harbor, making the possibilities endless.

These registries often come with features like user account management and authorization, allowing you to share the same registry with multiple users and clients as needed, using the appropriate tokens or credentials.
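
For private registries, those credentials usually end up as a docker-registry type Secret that workloads reference via imagePullSecrets; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  imagePullSecrets:
    - name: regcred    # a docker-registry Secret created beforehand with the registry credentials
  containers:
    - name: app
      image: registry.example.com/team/app:1.2.3   # image in the private registry
```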

Observability

Now that we have a huge stack in place, the next important thing to keep in mind is that all of it has to be monitored proactively for anomalous behavior indicating possible issues, failures or changes in the behavior of the entire system. This becomes really important at scale, because with a microservices architecture and multiple tools and services working together, it is very difficult to drill down into any problem unless an observability stack is in place.

This is one of the places where the Kubernetes ecosystem really shines. You can go for an ELK (Elasticsearch, Logstash, Kibana) or EFK (Elasticsearch, Fluentd, Kibana) stack, use Prometheus for metrics, Grafana for dashboards, Jaeger for distributed tracing and so on, with new players like Loki and Tempo making this space really interesting. Or, if you don't want to bother maintaining this stack, you can go for SaaS providers like Datadog, Google Cloud Operations (formerly Stackdriver), etc., after setting up instrumentation in your applications where applicable.
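
For instance, if you run Prometheus via the Prometheus Operator, scraping a new service is just another manifest. A hedged sketch, assuming your app's Service exposes a named metrics port and your Prometheus instance selects monitors carrying a release: prometheus label:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-metrics
  labels:
    release: prometheus        # assumption: your Prometheus selects monitors with this label
spec:
  selector:
    matchLabels:
      app: my-app              # the Service(s) to scrape
  endpoints:
    - port: metrics            # named port on the Service exposing /metrics
      interval: 30s
```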

Now, this becomes even more interesting if you have a service mesh in the equation. You can observe and visualize a lot of application-level metrics without changing the code, since sidecar proxies like Linkerd's or Envoy expose their own set of metrics to be collected and visualized.

But what about the agents and libraries which expose all the data and metrics? Things are more interesting than ever with the merger of OpenTracing and OpenCensus into OpenTelemetry, a standard to look forward to for all observability in the future.

If you would like to get a more comprehensive idea of the future (or should I say present) of observability, I would recommend going through this talk by Tom and Fredric:

Developer Experience

This has been an area of friction for Kubernetes users so far, since it takes quite a lot of time to get feedback on the changes they make because of all the complexity that Kubernetes comes with. You have to write code, build it, push it to the registry and run it in the cluster, and if you want to debug, you are left with not-so-great ways to do it.

In other words, the inner dev loop with Kubernetes has not been great until recently, when a huge number of tools emerged to address this problem. Though there are a lot of tools which try to improve DevEx (developer experience) with Kubernetes, they don't cater to every scenario out there, so you might want to choose the tool which works for you depending on whether you use local clusters, remote clusters or hybrid clusters for development.

Some of these tools are Skaffold with Cloud Code, Tilt, Bridge to Kubernetes, Okteto, Service Preview with Telepresence, Garden and so on, each addressing a similar problem differently. While they have made the job a lot easier, there is still some way to go since, in my opinion, Kubernetes should essentially fade away behind the scenes, allowing developers to focus just on the logic at hand, but I am very sure we are on the right path towards it.
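
To give a feel for that inner loop, here is a minimal, hypothetical skaffold.yaml; with it in place, skaffold dev watches your source, rebuilds the image and redeploys the manifests on every change (the image name and manifest path are made up, and the schema version may differ with your Skaffold release):

```yaml
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: my-app
build:
  artifacts:
    - image: registry.example.com/team/app    # hypothetical image, built from the local Dockerfile
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                            # hypothetical path to the Kubernetes manifests
```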

So, with this, I hope you got an idea of what the typical "Cloud Native" stack looks like and how we manage our infrastructure "The Kubernetes Way". There is a lot more we could talk about, but I will save it for future blog posts where we will dive deeper into each part of the stack we touched on here.

I would just like to conclude by saying that making your application "Cloud Native" is a continuous journey, since there is always room to improve in one place or another, and it should be done incrementally, in phases, without getting overwhelmed. Trust me, I have done it and I know how it feels, but if you start small and scale gradually, it should be an interesting journey.

Any questions? Looking for help? Feel free to reach out to me @techahoy. And if this helped, do share it with your friends, hang around and follow us for more like this every week. See you all soon.
