Sharing files across Kubernetes clusters with Terraform - the easy way

A bit of Erento’s background

My journey with Erento started back in 2015. I joined a company that was transforming from a self-hosted monolith to a cloud-based microservices architecture. We also decided to adopt Kubernetes and went with one of the very first stable releases, 1.0.5: there was no resource limits control, no affinity, and even the simple Deployment resource was still far off. It was a very interesting journey and we learned a lot.

Journey to the clouds

In short: first we landed on AWS with a self-managed Kubernetes cluster, a separate etcd cluster, and CoreOS as the system backbone. Everything ran smoothly, but maintaining such a setup absorbed quite a lot of time, and our ops resources were very limited (it was just me). We watched the Google Cloud ecosystem grow, and when Google launched managed Kubernetes we decided it was time to move. It turned out to be a great decision: managed infrastructure saved us a lot of time and money.

There is always an issue that follows you

In our case, one of the issues that kept following us was managing customer-uploaded files. In our old datacenter we had NFS, with all of its downsides. It ran on a single server and was pretty static, but so was the whole infrastructure. When we did the redesign we wanted to pick something modern. For us the crucial points were:

  • filesystem scalability
  • redundancy
  • Kubernetes integration
  • ability to snapshot and restore the whole file system
  • managed as code (preferably Terraform)
  • legacy friendly (we were still managing some backend stuff on our good old monolith)
  • no need for a huge amount of data
  • no need for speed (files are not frequently accessed)

We ran a lot of tests with managed filesystems like AWS EFS and Google Cloud Filestore, but they lacked a snapshotting feature. We needed a way to quickly restore/recreate the whole cluster and refresh our dev environments.

GlusterFS to the Rescue

Finally, we found a tool that ticked most of our boxes — GlusterFS, backed by the cool Google Cloud-based GlusterFS setup made by rimiusz. The only thing missing was Terraform management — so we decided to write our own Terraform module.

Kubernetes — GlusterFS Integration

The basic idea is very simple. We run 3 nodes in a separate subnetwork, each with a static disk attached. We keep 3 copies of each file, so redundancy is very high. We chose this setup so that the whole cluster can be restored from a snapshot of just one disk.
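For context, at the Gluster level such a replica-3 setup corresponds to a volume created roughly like this (hostnames, volume name, and brick paths are illustrative, not the exact commands the module runs):

    # create a volume that keeps 3 copies of each file, one brick per node
    gluster volume create shared-volume replica 3 \
        gluster-01:/data/brick gluster-02:/data/brick gluster-03:/data/brick
    gluster volume start shared-volume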

Before creating the Terraform module for the GlusterFS solution, we also investigated a different approach: hosting GlusterFS on Kubernetes itself (as described here).

We decided not to host it on Kubernetes for a few reasons:

  • with our solution, multiple Kubernetes clusters can access the GlusterFS cluster, which gives us high availability and failover
  • separating the application layer from the static-file layer lets us maintain and upgrade Kubernetes clusters without any risk of impacting GlusterFS
  • a separate GlusterFS cluster is simpler to maintain

Open sourcing the Terraform module

After a period of testing we realised that we were really happy with the module; it is maintenance-free. Other companies within Russmedia Equity Partners had similar needs, so together with our CTO Konrad Cerny we decided to go all the way and share it back with the community: https://github.com/erento/terraform-google-glusterfs

FYI, we are very passionate about OpenSource here :)

The module can be configured to:

  • start from plain disk or snapshot
  • define the base image (preferably Debian based)
  • define the number and type of Gluster nodes
  • define the number of replicas for each file
  • define the data disks’ type and size
  • by default Gluster uses the last IPs of the subnet, i.e. 10.0.0.254, 10.0.0.253 — if those IPs are already taken, you can define ip_offset to use lower IPs
  • define security tags that will be allowed to access your cluster
  • other useful options can be found in variables.tf

So… Now, to create the cluster, all you need to do is write a short Terraform definition:

gluster_definion.tf
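A minimal sketch of such a definition could look like the following (the variable names here are illustrative only; check the module’s variables.tf for the exact names and defaults):

    module "glusterfs" {
      source = "github.com/erento/terraform-google-glusterfs"

      # Illustrative variables only; see variables.tf in the module
      # repository for the real names and defaults.
      node_count     = 3                  # number of Gluster nodes
      replicas       = 3                  # copies kept of each file
      machine_type   = "n1-standard-1"    # node machine type
      data_disk_type = "pd-standard"      # data disk type
      data_disk_size = 100                # data disk size in GB
      ip_offset      = 0                  # shift node IPs down from 10.0.0.254
      security_tags  = ["gluster-client"] # tags allowed to reach the cluster
    }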

And after the Terraform run, you will get ready-to-apply manifests with the Endpoints and the Service.
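The generated manifests should look roughly like the following (shown here as YAML with illustrative names; the IPs follow the subnet defaults mentioned above):

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster
    subsets:
      - addresses:
          - ip: 10.0.0.254   # Gluster node 1
        ports:
          - port: 1
      - addresses:
          - ip: 10.0.0.253   # Gluster node 2
        ports:
          - port: 1
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster
    spec:
      ports:
        - port: 1            # dummy port required by the API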

All you need to do is apply them to your Kubernetes cluster:

kubectl apply -f glusterfs-endpoints.json
kubectl apply -f files/glusterfs-svc.json

Usage

To consume the endpoints in your app, all you need to do is add the natively supported GlusterFS volume type and mount it in your deployment.
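A minimal sketch of such a deployment could look like this (container image, volume name, mount path, and Gluster volume path are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: nginx:stable
              volumeMounts:
                - name: customer-files
                  mountPath: /data/uploads       # where the app sees the shared files
          volumes:
            - name: customer-files
              glusterfs:
                endpoints: glusterfs-cluster     # name of the Endpoints object applied above
                path: shared-volume              # name of the Gluster volume
                readOnly: false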

What next?

We would like to run GlusterFS similarly to our Elasticsearch clusters — as a built-in Kubernetes setup (via StatefulSets). We also plan to create a Helm chart for GlusterFS.

Summary

It took us a while to find a solution that suits our needs almost perfectly. As usual, there is no one-size-fits-all option, but if this article helps you or users within your company, we will be very pleased. Erento is part of Russmedia Equity Partners, a group of companies that embrace knowledge and tech sharing. We regularly meet at internal conferences and organise calls to make it happen. It is seriously pushing us forward, and we hope to spread this kind of spirit all over the world. Please make sure to follow Russmedia Equity Partners on Medium.

Enjoy!

Written by Eryk Zalejski
