DEV Community


Kubernetes Backup & Restore made easy!

Techworld with Nana
DevOps Consultant | YouTuber ๐ŸŽฌ | Docker Captain ๐Ÿณ | AWS Container Hero โ˜๏ธ | Based in Austria ๐Ÿ‡ฆ๐Ÿ‡น
ใƒป5 min read

DevOps tool of the month is a series where, each month in 2021, I introduce one new useful DevOps tool 🙌🏼

For July I chose: Kasten's K10 platform 🎉 - a data management platform to back up and restore your applications easily and protect your data.


In this tutorial, we are going to talk about the challenging task of data management in Kubernetes and a tool that makes it very easy for K8s administrators: Kasten's K10.

What does data management in K8s actually mean, and why is it a challenging task? ๐Ÿง

Imagine we have the following real-world setup in our K8s production cluster.

Production Setup


An EKS cluster where our microservices application is running. Our microservices use an Elasticsearch database, which also runs in the cluster. In addition, our application uses Amazon RDS, a managed database service outside the cluster.

This means our application has data services both inside and outside the cluster, and this data is physically stored on some storage backend. The RDS data is stored on AWS, of course. Elasticsearch's data is consumed in the cluster through K8s Persistent Volume components, but it also needs to be physically stored somewhere: this could be cloud storage on AWS, Google Cloud etc., or on-premise servers.
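As a rough illustration, the in-cluster Elasticsearch data would typically be requested through a PersistentVolumeClaim like the following (a minimal sketch; the name, namespace and storage class are placeholder assumptions):

```yaml
# Hypothetical PVC for the in-cluster Elasticsearch data.
# The StorageClass decides where the data physically lives
# (e.g. an AWS EBS volume on an EKS cluster).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2   # placeholder: an EBS-backed class on EKS
  resources:
    requests:
      storage: 50Gi
```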

Data Management Use Cases for this setup

1. Underlying infrastructure fails โ›”๏ธ
The underlying infrastructure where the cluster is running fails, and we lose all the pods and the whole cluster. We would need to recreate the cluster with the same cluster state (K8s components, stored in etcd) and application data. So we need to restore our whole cluster.
2. ElasticSearch DB gets corrupted โ›”๏ธ
Or let's say our Elasticsearch DB gets corrupted or hacked into, and we lose all the data. Again, we need to restore our database to the latest working state.
3. Replicating Kubernetes cluster (Multi-Cloud or Hybrid Cloud) โ˜๏ธโ˜๏ธ
Or another common use case: our cluster is running on AWS, but we want to make our production cluster more reliable and flexible by not depending on just one cloud provider, replicating it in a Google Cloud environment with the same application setup and application data.

The challenge

In all these cases, the challenge is:

how do we capture an application backup that includes all the data the application needs, whether it's databases in the K8s cluster or a managed data service outside the cluster?

So that if our cluster fails, or something happens to our application and we lose all the data, we can restore or replicate the application state with all its components (pods, services, ConfigMaps etc.) and its data?

And that is a challenging task.

Bad Solution Options ๐Ÿคจ

Now let's look at what alternatives we have available:

  • VM or etcd Backups ๐Ÿ‘€

If you do VM backups of your cluster nodes or etcd backups, you will save the state of the cluster, but what about the application data? It is not stored on the worker nodes; it is stored outside the cluster on a cloud platform or on on-premise servers.

โœ… Cluster State backup up
โŒ Application Data not backed up

  • Use Cloud Providers Backup and Restore Mechanism ๐Ÿ‘€

On the other hand, for cloud storage backends, the cloud providers themselves have their own backup and replication mechanisms. But it's only partially managed by the platform, so you still have to configure the data backups yourself. Plus, it's just the data in the volume; this doesn't include the cluster state.

โœ… Data in Volume backed up
โŒ Only partially managed by cloud platform

  • Write Custom Backup and Restore Scripts for the different infrastructure levels ๐Ÿ‘€

Many teams write custom scripts to piece together backup solutions on different levels, like components and state inside the cluster and data outside the cluster. But these scripts can get very complex very quickly, because the data and state are spread across many levels and many platforms. And the script code usually ends up being too tightly tied to the underlying platform where the data is stored.

The same goes for the restore side: teams write custom restore or cluster-recreation logic from all the different backup sources.

So overall, your team may end up with lots of complex self-managed scripts, which are usually hard to maintain. ๐Ÿ˜ฃ

โœ… Tailored to application
โŒ Complex scripts
โŒ Too tied to the underlying platform
โŒ Difficult to maintain

How K10 solves these problems ๐Ÿš€

These are exactly the challenges that Kasten's K10 tool addresses. So how does K10 solve these problems?

๐Ÿ’ก Abstraction of underlying infrastructure ๐Ÿ’ก

K10 abstracts away the underlying infrastructure to give you a consistent data management support, no matter where the data is actually stored:
K10 abstraction

So teams can choose whichever infrastructure or platform they want for their application, without sacrificing operational simplicity, because K10 has a pretty extensive ecosystem, and integrates with various relational and NoSQL databases, many different Kubernetes distributions and all clouds.

So instead of backup scripts for each platform or level, you just have K10's single, easy-to-use UI to create complete application backups of the cluster:
K10 data management platform

So everything that is part of the application, like K8s components (pods, services etc.), application data in volumes, and data in managed data services outside the cluster, will be captured in the application snapshot by K10.

So you can easily take that snapshot and reproduce or restore your cluster on any infrastructure you want. ๐Ÿ™Œ
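Getting K10 into a cluster is a short Helm install (following Kasten's documented chart location; the dashboard port-forward below assumes the default `gateway` service):

```shell
# Install K10 in its own namespace
helm repo add kasten https://charts.kasten.io/
helm repo update
helm install k10 kasten/k10 --namespace=kasten-io --create-namespace

# Access the K10 dashboard locally at http://127.0.0.1:8080/k10/#/
kubectl --namespace kasten-io port-forward service/gateway 8080:8000
```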

๐Ÿ’ก Policy-driven Automation ๐Ÿ’ก

K10 works with policies. Instead of manually backing up and restoring your applications, which means more effort and a higher risk of making mistakes, you can configure backup and restore tasks to run automatically with the settings you define in the policy.

๐Ÿ’ก Multi-Cluster Manager ๐Ÿ’ก

Now, what if you have multiple clusters across zones, regions, or even across cloud platforms and on-premise data centers? How do you consistently manage backups for tens or hundreds of clusters? 🤔 Well, K10 actually has a multi-cluster mode. In K10's multi-cluster dashboard, you have a view of all your clusters, as well as a way to create and configure global backup and restore policies that you can apply to multiple clusters.

๐Ÿ’ก Kubernetes Native API ๐Ÿ’ก

Now, if you have hundreds or thousands of applications across many clusters, you of course don't want to create policies in the UI. For that, K10 provides a Kubernetes-native way of scripting policies in YAML, so you can automate policy creation and configuration as part of your policy-as-code workflow. 👍
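A backup policy expressed as YAML could look roughly like this (a sketch based on K10's `Policy` custom resource; exact fields can vary between K10 versions, so check the API reference for yours):

```yaml
# Hypothetical daily backup policy for everything in the "mysql" namespace
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: mysql-backup
  namespace: kasten-io
spec:
  frequency: "@daily"        # run once a day
  retention:
    daily: 7                 # keep the last 7 daily snapshots
  actions:
    - action: backup
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: mysql
```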

K10 in Action - Hands-On Demo ๐Ÿ‘ฉ๐Ÿปโ€๐Ÿ’ป

In the video, I show you how K10 actually works in practice. In the hands-on demo, we create an automated backup policy for our MySQL application to protect its data and then restore it within seconds:
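If you want to follow along, a throwaway MySQL instance can be stood up with Helm (the Bitnami chart is one option; the demo in the video may use a slightly different setup):

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mysql bitnami/mysql --namespace mysql --create-namespace
```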


More awesome tools coming up next on this series, so stay tuned for it! ๐ŸŽฌ ๐Ÿ˜Š


Like, share and follow me ๐Ÿ˜ for more content:

Discussion (2)

Vinay Hegde

Excellent article with the video as always Nana!

Had a few doubts as follows:

  1. The Kasten K10 website mentions 10 nodes free forever. Does that mean a Kubernetes cluster with 10 worker nodes, or is it something else?

  2. Would maintaining the applications as YAML Helm charts in a Git repo be a better approach, since not only are they reproducible across K8s clusters (EKS, GKE or on-prem), but they also provide visibility via GitOps and reduce the dependency of managing one more tool?

Could you please share your insights here?

Michael Cade

Awesome stuff Nana as always.