Introduction
Local storage volumes remain a popular choice when teams want direct control over where cluster data lives along with the performance of node-local disks.
In this article, we highlight how Vadim Tkachenko, CTO of Percona, deployed a Percona Kubernetes Operator using OpenEBS Local Persistent Volumes for enhanced database efficiency.
A few basics first:
Managing Kubernetes Volumes with OpenEBS
Local PVs are meant for applications that can tolerate reduced availability since Local Volumes can only be accessed from one node and do not include replication at the storage layer.
OpenEBS has become a popular choice for provisioning local persistent volumes (Local PVs) for resilient applications. Reference users include ByteDance (maker of TikTok), Flipkart (one of the world’s largest eCommerce providers), and thousands of others, many of whom have shared their experiences in the Adopters.md file in the OpenEBS community.
OpenEBS also extends Kubernetes storage with additional features such as:
- Storage device health monitoring
- Capacity management
- Velero backup & restore
Lifecycle Automation Using Percona
The Percona XtraDB Cluster is an open-source, enterprise-grade MySQL clustering solution that supports distributed applications through improved availability and data security. It supports streaming replication, which enables:
- Faster bulk data transfers
- Easier data processing
On Kubernetes, a Percona XtraDB Cluster is deployed through the Percona Kubernetes Operator, which automates the creation and management of cluster components.
In the steps that follow, we’ll deploy the operator for a Percona XtraDB Cluster, first using simple storage and then using OpenEBS Local PV Hostpath.
Deploying Operators with Simple Storage
Simple storage uses the local disks already attached to the nodes of a Kubernetes cluster. Such volumes can only be accessed from a single node within the cluster, and there are two ways to point a Percona Cluster deployment at them.
- A Percona Cluster deployed on simple storage can be defined by adding a hostPath volume specification to the Percona Cluster deployment’s YAML file (a sketch of where this block sits in the custom resource follows this list):
volumeSpec:
  hostPath:
    path: /data
    type: Directory
- The second approach is to use the emptyDir volume specification (similar to ephemeral EC2 instance storage) in the same place in the Percona Cluster deployment’s YAML file:
volumeSpec:
  emptyDir: {}
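For orientation, here is a minimal sketch of where the volumeSpec block sits inside the operator’s PerconaXtraDBCluster custom resource; the apiVersion, cluster name, and image tag below are illustrative and should be checked against the cr.yaml shipped with your operator version:

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1                                  # illustrative cluster name
spec:
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0     # illustrative image tag
    volumeSpec:
      hostPath:
        path: /data
        type: Directory
      # or, for ephemeral node storage:
      # emptyDir: {}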
With either approach, the pods that use the volume must be scheduled on the node where it is provisioned. This makes simple storage suitable for applications that handle high availability within the application layer itself, as Percona XtraDB Cluster does through its own replication.
Deploying Operators in Persistent Volumes using OpenEBS
Deploying a cluster on local persistent volumes removes the repetitive work of managing volumes by hand. Instead, the operator consumes dynamically created volumes, which OpenEBS can provision in a number of ways.
The following specifications are required to dynamically provision PVs for a Percona XtraDB Cluster using OpenEBS Local PV Hostpath.
The OpenEBS Local PV Hostpath
OpenEBS enables Local Persistent Volumes that allow developers to deploy high-performance applications, together with capabilities such as data protection and backup and restore; replication, as noted earlier, is left to the application rather than the storage layer. The Local PVs offered by OpenEBS are backed by a hostPath directory on the node and work with various filesystems, including Ext3, ZFS, and XFS.
The first step to provisioning the volumes involves defining the storage class.
One option is to back the class with local Non-Volatile Memory Express (NVMe) storage, which lets the host reach flash-based devices directly over the machine's Peripheral Component Interconnect Express (PCIe) bus. The localnvme storage class specification looks similar to this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localnvme
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /data/openebs
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Alternatively, a local SSD-backed storage class can be defined with the following specifications:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localssd
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /mnt/data/ebs
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
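Because both classes use volumeBindingMode: WaitForFirstConsumer, a claim against them stays Pending until a pod that uses it is scheduled; only then does OpenEBS carve out a hostPath directory on that pod's node. A quick way to sanity-check a class before deploying the operator is a throwaway claim and pod along these lines (all names here are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: localssd-check                 # illustrative claim name
spec:
  storageClassName: localssd
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: localssd-check-pod             # illustrative pod name
spec:
  containers:
  - name: probe
    image: busybox
    command: [ "sh", "-c", "echo ok > /data/probe && sleep 3600" ]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: localssd-check

Once the pod is Running, kubectl get pv should show a freshly bound volume under the localssd class; the claim and pod can then be deleted.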
The operators can now be deployed using the defined storage classes. For this article, we deployed the Percona Cluster using the localnvme storage class, with the volumeSpec shown below:
volumeSpec:
  persistentVolumeClaim:
    storageClassName: localnvme
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 200Gi
The cluster is now deployed using the localnvme storage class and stores its data under the /data/openebs directory defined as that class's BasePath.
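The operator translates this volumeSpec into one PersistentVolumeClaim per Percona pod. A claim generated from the specification above would look roughly like the following sketch; the name follows the datadir-<cluster>-pxc-<n> pattern visible in the kubectl output further below, and the pxc namespace is taken from that same output:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-cluster3-pxc-0        # name pattern taken from the output below
  namespace: pxc
spec:
  storageClassName: localnvme
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi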
To check the volumes that have been provisioned, use the command:
$ kubectl get pv
The output is similar to the following, showing the Local Persistent Volumes created from the defined storage classes.
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
pvc-325e11ff-9082-432c-b484-c0c7a3d1c949   200Gi      RWO            Delete           Bound    pxc/datadir-cluster2-pxc-2   localssd                2d18h
pvc-3f940d14-8363-450e-a238-a9ff09bee5bc   200Gi      RWO            Delete           Bound    pxc/datadir-cluster3-pxc-1   localnvme               43h
pvc-421cbf46-73a6-45f1-811e-1fca279fe22d   200Gi      RWO            Delete           Bound    pxc/datadir-cluster2-pxc-1   localssd                2d18h
pvc-53ee7345-bc53-4808-97d6-2acf728a57d7   200Gi      RWO            Delete           Bound    pxc/datadir-cluster3-pxc-0   localnvme               43h
pvc-b98eca4a-03b0-4b5f-a152-9b45a35767c6   200Gi      RWO            Delete           Bound    pxc/datadir-cluster2-pxc-0   localssd                2d18h
pvc-fbe499a9-ecae-4196-b29e-c4d35d69b381   200Gi      RWO            Delete           Bound    pxc/datadir-cluster3-pxc-2   localnvme               43h
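Each of these volumes is pinned to the node whose hostPath directory backs it, which is what restricts the consuming pod to that node. A Local PV provisioned by OpenEBS looks roughly like the sketch below; the node name is illustrative and the exact layout may vary between OpenEBS releases:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-53ee7345-bc53-4808-97d6-2acf728a57d7
spec:
  capacity:
    storage: 200Gi
  accessModes: [ "ReadWriteOnce" ]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: localnvme
  volumeMode: Filesystem
  local:
    path: /data/openebs/pvc-53ee7345-bc53-4808-97d6-2acf728a57d7   # a sub-directory under the BasePath
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                    # illustrative node name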
Advantages of Using OpenEBS for Dynamic Provisioning of Local Volumes
A few benefits of using OpenEBS for dynamic volume provisioning include:
- OpenEBS volume provisioning helps avoid vendor lock-in by providing a data abstraction layer that can easily be moved between different Kubernetes environments.
- Designed with a microservices architecture in mind, OpenEBS offers a natural storage solution for stateful Kubernetes applications. Its loosely coupled architecture makes it easy for multiple teams to run their applications, since each team can deploy its own dynamic volumes and storage classes. Users can also select different underlying OpenEBS engines, whether Local PV or replicated storage options such as OpenEBS Mayastor.
In this article, we explored how OpenEBS enables dynamic provisioning of PVs for a Percona XtraDB Cluster. While Percona makes it efficient to manage an application's lifecycle by leveraging Kubernetes operators, OpenEBS makes it possible to run stateful Kubernetes applications by dynamically provisioning Persistent Volumes (PVs).
MayaData is the originator of OpenEBS, the most popular and highest-performance containerized storage for Kubernetes. To discuss how OpenEBS can be used for your stateful workloads, please drop us a message here.
This article has been written based on Vadim Tkachenko’s helpful insights on his experience of using OpenEBS while deploying a Percona Kubernetes Operator.
References:
Deploying Percona Kubernetes Operators with OpenEBS Local Storage
OpenEBS for the Management of Kubernetes Storage Volumes
This article has already been published on https://blog.mayadata.io/deploying-percona-kubernetes-operators-using-openebs-local-storage and has been authorized by MayaData for a republish.