This blog is part of a series on Kubernetes and its ecosystem, where we dive deep into the infrastructure one piece at a time.
Now that we have answered the basic questions about Kubernetes and looked at the various architectural patterns you can opt for, the next piece of the puzzle is figuring out the deployment strategy that is the right fit for you. And that is what we are going to discuss in this blog post.
You might have seen people talk about Private Cloud, Public Cloud, On-Premise and Hybrid Cloud deployments. With Kubernetes, though, many of these distinctions fade away, since most of them are typically not about Kubernetes itself but about the infrastructure supporting your Kubernetes deployment.
In a public cloud deployment, the cloud provider takes care of almost everything around your Kubernetes cluster, giving you near-unlimited scalability, minimal maintenance and lower costs, since you share the resources with other tenants. This makes it a great option for businesses that do not host highly confidential data and can work on shared infrastructure.
Ultimately, the public cloud operates on a shared security model, with both the cloud provider and the users playing significant roles. You can read about the GKE Shared Security Model (also called the GCP Shared Responsibility Model), the AWS Shared Responsibility Model and the Azure Shared Responsibility Model to learn what the different cloud providers say about the responsibilities they take on and those that are offloaded to you.
While a public cloud typically has soft partitions in place, sharing resources between multiple tenants has often been viewed as a security concern by some organizations. Sectors like banking/finance, health and the military also have strict regulations on where and how you host data, along with the various data localization laws that govern each region.
In such cases, a private cloud can give you more isolation and control over all resources (hardware and software) while still letting the cloud provider manage the resources for your tenant. It is like your own private data center in the cloud. While this adds significant overhead to the pricing and extra pieces for a dedicated DevOps team to manage, it can be worth it compared to managing everything on-premise, especially when you also need workload elasticity.
Not all cloud providers support private cloud (remember that a Virtual Private Cloud and a private cloud are quite different things), and very few support private cloud without a VPC.
Virtual Private Cloud
This is the typical go-to option when you want to opt for a private cloud. A Virtual Private Cloud (VPC) is essentially a private cloud running on public cloud infrastructure, with tenants separated by different subnets, private IPs and peering, simulating a dedicated environment (while the underlying infrastructure is still shared).
This fits most use cases that require regulatory compliance, keeping data transmission, processing and storage within a private environment, while costing about the same as the public cloud.
On-Premise

While cloud adoption has grown hugely over the years, on-premise systems still have their own place. They offer the highest level of isolation, with your own infrastructure, network, security, DNS and so on, allowing the business to have complete control over all the infrastructure in use, reduce recurring costs, establish use-case-specific network optimizations and even keep functioning during global failures at the cloud providers. This makes it a good bet if you are working with huge amounts of compute and data over long periods of time and have the resources to run on-premise datacenters. But do note that on-premise is not without challenges, and it is always better to have a cloud strategy to fall back on while trying to do it all on-premise.
Hybrid Cloud

While there are a lot of deployment strategies out there, hybrid cloud stands out since it allows you to mix deployment models or cloud providers depending on your needs and make the complete system work together as one. For example, you can use an on-premise deployment for regulated workloads and a public cloud deployment for the rest, or use GCP in the US and India while opting for AWS, Azure or Alibaba in China.
This is made possible by Kubernetes being a standard, portable platform across cloud providers, by the ability to manage infrastructure as code, by the ability to set up networking between clusters whenever needed with the help of multi-cluster service meshes, and by the ability to orchestrate deployments using KubeFed or Crossplane.
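As a rough sketch of what such orchestration looks like, here is a KubeFed `FederatedDeployment` that propagates a single workload to two member clusters (the cluster names and app name below are hypothetical, and the exact fields depend on your KubeFed version):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: web-app
  namespace: demo
spec:
  template:
    # An ordinary Deployment spec, propagated as-is to member clusters
    metadata:
      labels:
        app: web-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
            - name: web-app
              image: nginx:1.25
  placement:
    # Hypothetical member clusters previously joined to the KubeFed control plane
    clusters:
      - name: gke-us-east
      - name: eks-ap-south
  overrides:
    # Per-cluster tweaks, e.g. fewer replicas in the smaller cluster
    - clusterName: eks-ap-south
      clusterOverrides:
        - path: "/spec/replicas"
          value: 2
```

The `template`/`placement`/`overrides` split is the key idea: one portable workload definition, with per-cloud differences isolated into small, declarative overrides.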
There are now proprietary options in the market to enable such hybrid cloud deployments, with services like Google Anthos, Azure Stack and AWS Outposts aimed at enterprises looking to start this journey with most of the heavy lifting done by the cloud providers. But do watch out for the pricing, since it can end up being costly over long periods.
Hybrid deployments have to be done with great care, since they add a lot of complexity to the infrastructure you have to manage. Also keep the pricing in mind (e.g. cross-region network calls can end up costing quite a lot).
Thinking of hybrid deployments brings us to workload portability, because unless your workload is portable, a hybrid deployment strategy may not be feasible. This also means reducing your dependence on proprietary services from your cloud providers as much as possible, since otherwise you might end up doing cross-cloud or cross-region API calls whenever your other cloud provider or on-premise systems don't support them. Sometimes you might even have to build abstractions within your applications, since the same kind of service often does not have the same API across cloud providers, which adds complexity, especially in hybrid architectures. Alternatively, you can use something like Crossplane to handle this for you to some extent.
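To make the application-level abstraction idea concrete, here is a minimal sketch in Python. The interface, the `InMemoryStore` backend and the `archive_report` helper are all hypothetical names invented for this example; in a real system the per-cloud backends would wrap the GCS or S3 client SDKs behind the same small interface:

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Application-level abstraction over cloud object storage.

    Real backends (one per provider) would wrap the provider SDKs;
    only the backends know about provider-specific APIs."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in backend, useful for local development and tests."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def archive_report(store: ObjectStore, report_id: str, body: bytes) -> str:
    """Application code depends only on the interface, so moving a
    workload between clouds means swapping the injected backend,
    not rewriting the callers."""
    key = f"reports/{report_id}"
    store.put(key, body)
    return key
```

The cost of this pattern is maintaining the abstraction yourself; the benefit is that the calling code stays identical across every environment in a hybrid setup.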
But if none of these are an issue, then containers and an orchestration system like Kubernetes can take care of workload portability, especially now that OCI is in place for containers, and CSI, CNI, CRI and SMI for storage, networking, runtime and service mesh respectively, creating a healthy standards-based ecosystem for all. For a workload to be truly portable, all the underlying resources should be portable with no (or very limited) changes, and these standards enable exactly that without lock-in.
While Kubernetes constructs like Pod and Deployment don't lock you in to a particular provider, you have to take into account the underlying infrastructure from the cloud provider (storage, compute and networking), which can sometimes affect the way Kubernetes runs your workloads across providers.
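Storage is a common place where this leaks through: the Deployment and PersistentVolumeClaim stay portable, but the StorageClass behind them names a provider-specific CSI driver and has to change per environment. A sketch (the class name `fast-ssd` is made up; parameters vary by driver version):

```yaml
# On GKE: SSD persistent disks via the GCE PD CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
---
# On EKS: the equivalent class uses the EBS CSI driver instead
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
```

Because workloads reference the StorageClass only by name, keeping the name identical across clusters confines the provider-specific bits to this one object.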
The Best Practices
If you are on your way to picking a cloud provider, do make sure you check out their best practices documentation, which can really help you understand how to organize and manage your resources to reap the maximum benefit with the future in mind. Google Cloud, AWS, Azure and VMware all publish such best practices guides, and there are a lot of case studies that can help as well as you start your journey.
Bare Metal, Virtual Machines, Containers and Serverless
The next question you might have is: what is the best unit of deployment for me? Should I go bare metal, virtual machines, containers or serverless? This depends completely on your use case and on the degree of abstraction and control you want over your infrastructure.
- Bare Metal: Bare metal servers have no hypervisor on top, making them single tenant and giving you complete control over storage, networking and compute. You get the benefit of more storage, faster deploy times, faster speeds and efficient container deployments, since you don't have to deal with VMs, which can be a significant overhead on top of your host operating system. But this also means opting for dedicated instances from your cloud provider, which can cost more since you have to account for the elasticity you may need in advance. You also have to manage everything yourself (your cloud provider largely gets out of the picture once the bare metal server is in place), which can add operational overhead if you don't have the right team and tools.
- Virtual Machines: Virtual machines made multitenancy possible, reducing costs for users since hardware gets shared according to need. VMs also give you the ability to scale up/down whenever needed by adding more VMs and load balancing between them, but this is not as elegant as containers or serverless: every VM you add comes with an operating system to take care of, which can become an operational nightmare during patches/upgrades and add huge licensing costs if you are running a proprietary OS. VMs are also not as efficient as containers, since containers share the underlying compute, storage and networking better, giving you the ability to spin up more containers for the same cost you are spending on your VMs.
- Containers: Containers have brought about a revolution in DevSecOps and infrastructure, with Docker pioneering the movement and making it accessible to all (even though the underlying technology was in use before). Containers make it possible to isolate your workloads without managing new virtual machines, give you consistent/reproducible deployments across multiple environments, allow for efficient scalability and drastically reduce licensing costs. Adding an orchestration system like Kubernetes or Swarm makes them even more powerful, giving us the ability to treat containers like cattle and handle all the kinds of failures you can have in a typical distributed system. This has truly changed the way we operate today, but it does require significant tooling to be in place to work properly.
- Serverless: Serverless has long been seen as the final step in elastic computing. While that sounds ambitious, it cannot completely replace containers, virtual machines or bare metal deployments; it is better seen as a great complement to them, given its own significant limitations. When going serverless, you have to take a few things into account: cold/warm/hot starts of serverless functions, since that decides the latency of the responses you get, and the fact that every cloud provider imposes an execution timeout (15 minutes for AWS Lambda, 9 minutes for Google Cloud Functions, 10 minutes for Azure Functions, and so on), making it unsuitable for long-running jobs. In addition, there are restrictions on the programming languages you can use in your serverless function (unless you opt for a container-based deployment, which essentially makes it a container-based deployment 🤔). If you still want to use serverless for long-running jobs, you might have to reach out to your provider for dedicated/premium plans, or maintain your own serverless infrastructure within your Kubernetes cluster using something like Knative, OpenFaaS, Kubeless or similar and set your own limits.
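As a sketch of that last option, a Knative Service lets you choose your own per-request timeout instead of inheriting a fixed provider limit (the service and image names below are hypothetical, and the maximum timeout you can set is still bounded by your Knative installation's configured defaults):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: report-generator
spec:
  template:
    spec:
      # Your own request timeout (here 30 minutes), rather than a
      # hard-coded provider limit
      timeoutSeconds: 1800
      containers:
        - image: gcr.io/example/report-generator:latest  # hypothetical image
```

Running your own serverless layer like this trades the provider's limits for the operational cost of maintaining the platform yourself, which is exactly the abstraction-versus-control trade-off this section is about.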
As you might already realize from what we have discussed, the best way forward is to choose the right strategy for your use case and embrace hybrid strategies where needed, so that you can keep all the factors, including performance, scalability, security, usability and costs, in check.
And if this helped, do share it with your friends, hang around and follow us for more like this every week. See you all soon.