
Naresh Maharaj for Aerospike

Originally published at developer.aerospike.com

Block and Filesystem side-by-side with K8s and Aerospike

In this article we focus on side-by-side block and filesystem requests using Kubernetes. The driver for this is that it allows us to deploy Aerospike in its all-flash mode.

Storing database values on a raw block device with index information on file can bring significant cost reductions, especially when considering use cases for Aerospike’s All-Flash. To support such a workload, you can configure Aerospike to use NVMe Flash drives as the primary index store.

How does this operation work in practice?
Suppose a DevOps engineer has a host machine with one or more raw block devices and wants to use a filesystem formatted as ext4 or XFS for storage. The engineer could log into each host, create partitions, format the filesystem, and mount the device in the traditional way, but this seems less than pragmatic.
That's where side-by-side block and filesystem requests using Kubernetes come in.

This article describes the following process:

  • Spin up an AWS EKS Cluster with eksctl
  • Clarify our understanding of block storage devices
  • Introduce the Local Static Provisioner
  • Create a Persistent Volume (PV) using a raw device
    • 1. intended for block storage
    • 2. intended for storing files
  • Install httpd and review
    • 1. scheduling Pods to a host with a filesystem
    • 2. use of a PersistentVolumeClaim (PVC)
    • 3. node affinity with a PersistentVolume
  • Install Aerospike using the Kubernetes Operator
  • Insert some test data
  • Use Kubernetes Taints and Tolerations
    • 1. to reserve Hosts with filesystems
    • 2. schedule Pods to Hosts with filesystems
  • Clean everything up.

Prerequisites

Before we zoom into action, we need to have the following:

  • lots of energy and some basic knowledge of Kubernetes
  • jq (a cool JSON command-line parser)
  • eksctl (CLI tool for working with EKS clusters)
  • kubectl (CLI tool for running commands against Kubernetes clusters)
  • parted (CLI tool for creating and deleting partitions)
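As a quick sanity check (a minimal sketch; note that parted is only needed on the worker nodes themselves), confirm the tools are on your PATH:

# Report any missing prerequisite CLI tools
for tool in jq eksctl kubectl parted; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done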

Create an EKS Cluster

Setup


Create a file called my-cluster.yaml with the following contents. This specifies the configuration for the Kubernetes cluster itself. The ssh configuration is optional.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my1-eks-cluster
  region: us-east-1
nodeGroups:
  - name: ng-1
    labels: { role: database }
    instanceType: m5d.large
    desiredCapacity: 3
    volumeSize: 13
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key

Use the AWS eksctl tool to create our Kubernetes cluster. You may need to install eksctl if this is not present on your machine.
Using eksctl ...

eksctl create cluster -f my-cluster.yaml

The result on completion will be a 3-node Kubernetes cluster. Confirm successful creation as follows:

kubectl get nodes -L role
NAME                             STATUS   ROLES    AGE     VERSION                ROLE
ip-192-168-22-209.ec2.internal   Ready    <none>   2m36s   v1.22.12-eks-ba74326   database
ip-192-168-28-68.ec2.internal    Ready    <none>   2m38s   v1.22.12-eks-ba74326   database
ip-192-168-50-205.ec2.internal   Ready    <none>   2m33s   v1.22.12-eks-ba74326   database

Raw Block Storage

Block storage stores a sequence of bytes in fixed-size blocks (pages) on a storage device. Each block has a unique address that is used to reference it. Unlike a filesystem, block storage has no associated metadata such as format type, owner, or date, and it doesn't use conventional filesystem paths to access data. This reduction in overhead contributes to improved overall access speeds when using raw block devices.

The ability to store bytes in blocks gives applications the flexibility to decide how these blocks are accessed and managed, making block storage an ideal choice for low-latency databases such as Aerospike. From a developer's perspective, a block device is simply a large array of bytes, usually with some minimum granularity for reads and writes; in Aerospike this granularity is configurable and is referred to as the write-block-size. The Aerospike Kubernetes Operator builds on the storage infrastructure inside Kubernetes, where the need for data platforms to use raw block storage becomes ever more important.
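As an illustration only (the values below are examples, not part of this deployment), the aerospikeConfig section of an operator-managed cluster sets this granularity per namespace, roughly like so:

# Illustrative namespace config excerpt (example values)
aerospikeConfig:
  namespaces:
    - name: test
      replication-factor: 2
      storage-engine:
        type: device
        devices:
          - /test/dev/xvdf        # raw block device exposed to the Pod
        write-block-size: 1048576 # minimum read/write granularity, in bytes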


Local Static Provisioner

The local volume static provisioner manages the PersistentVolume (PV) lifecycle for pre-allocated disks by detecting and creating PVs for each local disk on the host, and cleaning up the disks when released. It doesn't support dynamic provisioning where storage volumes can be created on-demand. Without dynamic provisioning a manual step is required to create new storage volumes which are then mapped to PersistentVolume objects.

Why do I need it?

As a captain of a ship, your time is best spent at the helm and not constantly going down below deck. Similarly, the LocalStaticProvisioner maintains the low level details like mappings between paths of local volumes and PV objects and keeps these stable across reboots and device changes, so you can focus on orchestration efforts.

The PersistentVolume, often seen as a remote storage resource, decouples volumes from Pods. However, a distributed application with fast access patterns and its own replication logic can benefit from disks being accessed locally. Local disk usage strongly couples your application data to a specific node: if the Pod fails and is successfully rescheduled to that node, the data will still be available to the container. If the node is no longer accessible, its data may be unreachable, but applications with built-in replication logic ensure the data is copied (we like to say replicated) onto other nodes, which means the application can survive a node-level outage. Without the LocalStaticProvisioner, the operator would be responsible for partitioning, formatting and mounting devices before they could be used as Kubernetes PersistentVolumes, as well as for cleaning up each volume and preparing it for reuse, something that is now handled by the LocalStaticProvisioner when the PersistentVolumeClaim is released.
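For comparison, the manual per-node work the provisioner spares you from would look something like this (a sketch only; the device name and mount point are examples):

# Manual alternative: partition, format and mount a local disk by hand
sudo parted -a opt --script /dev/nvme1n1 mklabel gpt mkpart primary 0% 100%
sudo mkfs.xfs /dev/nvme1n1p1
sudo mkdir -p /mnt/data
sudo mount /dev/nvme1n1p1 /mnt/data
# ...and then hand-craft a PersistentVolume object that points at /mnt/data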

Persistent Volume Block Store

To use a device as raw block storage, you can edit the following YAML file as required: default_example_provisioner_generated.yaml.

The following section is the key part of this file. Most of the time you will not need to change it.

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config 
  namespace: default 
data:
  storageClassMap: |     
    fast-disks:
       hostDir: /mnt/fast-disks
       mountDir:  /mnt/fast-disks 
       blockCleanerCommand:
         - "/scripts/shred.sh"
         - "2"
       volumeMode: Filesystem
       fsType: ext4
       namePattern: "*"

To use a device as block storage for Aerospike, we need to replace the section above with the following. A pre-configured file is available here for your convenience.

In a moment we will deploy this file, so there is no need to do anything just yet.

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: aerospike
data:
  useNodeNameOnly: "true"
  storageClassMap: |
    local-ssd:
       hostDir: /mnt/disks
       mountDir:  /mnt/disks
       blockCleanerCommand:
         - "/scripts/shred.sh"
         - "2"
       volumeMode: Block

To learn more about how to configure and operate the LocalStaticProvisioner, see the Operation and Configuration guide.

Once your Kubernetes cluster is up and running, create the soft links to your local storage device(s) on each node.

kubectl get nodes -o wide
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-10-182.ec2.internal   Ready    <none>   20m   v1.22.12-eks-ba74326
ip-192-168-15-174.ec2.internal   Ready    <none>   20m   v1.22.12-eks-ba74326
ip-192-168-45-233.ec2.internal   Ready    <none>   21m   v1.22.12-eks-ba74326

For each node, add a permanent soft link to the block device. In our case, the device name is nvme1n1 as below. You will need to ssh into each of the Kubernetes worker nodes to do this.

lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0 69.9G  0 disk
nvme0n1       259:1    0   13G  0 disk
├─nvme0n1p1   259:2    0   13G  0 part /
└─nvme0n1p128 259:3    0    1M  0 part

sudo mkdir /mnt/disks -p
cd /mnt/disks/
sudo ln -sf /dev/nvme1n1 /mnt/disks/
ls -lrt /mnt/disks/
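If you would rather not repeat this by hand on every node, a loop along the following lines can do it over SSH, assuming your nodes accept connections as ec2-user (the Amazon Linux 2 default) with the key configured in my-cluster.yaml:

# Create the /mnt/disks soft link on every worker node (illustrative)
for ip in $(kubectl get nodes -o json \
  | jq -r '.items[].status.addresses[] | select(.type=="ExternalIP") | .address'); do
  ssh -o StrictHostKeyChecking=no ec2-user@"$ip" \
    'sudo mkdir -p /mnt/disks && sudo ln -sf /dev/nvme1n1 /mnt/disks/'
done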

Now apply the local static provisioner yaml file.

kubectl create ns aerospike
kubectl create -f https://raw.githubusercontent.com/aerospike/aerospike-kubernetes-operator/2.2.1/config/samples/storage/aerospike_local_volume_provisioner.yaml
kubectl create -f https://raw.githubusercontent.com/aerospike/aerospike-kubernetes-operator/2.2.1/config/samples/storage/local_storage_class.yaml

You should now see the following resources created.

kubectl get sc,pv -n aerospike
NAME                                        PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/gp2 (default)   kubernetes.io/aws-ebs          Delete          WaitForFirstConsumer   false                  39m
storageclass.storage.k8s.io/local-ssd       kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  73s

NAME                                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
persistentvolume/local-pv-45df0f3d   69Gi       RWO            Delete           Available           local-ssd               6m42s
persistentvolume/local-pv-72d40f98   69Gi       RWO            Delete           Available           local-ssd               6m42s
persistentvolume/local-pv-7637ecb    69Gi       RWO            Delete           Available           local-ssd               6m42s

Remove the resources for now.

kubectl delete -f https://raw.githubusercontent.com/aerospike/aerospike-kubernetes-operator/2.2.1/config/samples/storage/aerospike_local_volume_provisioner.yaml
kubectl delete -f https://raw.githubusercontent.com/aerospike/aerospike-kubernetes-operator/2.2.1/config/samples/storage/local_storage_class.yaml
kubectl get pv -o=json|jq .items[].metadata.name -r | grep ^local | while read -r line; do kubectl delete pv $line;done

Persistent Volume Filesystem

This brings us to the core of this article: how do we create an XFS filesystem from raw storage devices? In this section we partition the disk so that a single device can provide both a block volume and a filesystem, but we do this on only 2 of the 3 nodes. So, of the 3 nodes, only 2 will have a filesystem.

Host Addresses

We start by getting a list of the nodes and IP addresses:

kubectl get no -o wide
NAME                             STATUS   ROLES    AGE   VERSION                INTERNAL-IP      EXTERNAL-IP    OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
ip-192-168-23-238.ec2.internal   Ready    <none>   63m   v1.22.12-eks-ba74326   192.168.23.238   44.212.74.12   Amazon Linux 2   5.4.209-116.367.amzn2.x86_64   docker://20.10.17
ip-192-168-63-152.ec2.internal   Ready    <none>   63m   v1.22.12-eks-ba74326   192.168.63.152   3.234.239.82   Amazon Linux 2   5.4.209-116.367.amzn2.x86_64   docker://20.10.17
ip-192-168-7-181.ec2.internal    Ready    <none>   63m   v1.22.12-eks-ba74326   192.168.7.181    3.87.204.50    Amazon Linux 2   5.4.209-116.367.amzn2.x86_64   docker://20.10.17

Create Partitions

We are going to use the 'parted' utility to create our disk partitions. If it is not installed, use the following command to install it.

sudo yum install parted -y

Next, we partition our nvme1n1 disk to give us partitions nvme1n1p1 and nvme1n1p2:

sudo parted -a opt --script /dev/nvme1n1 mklabel gpt mkpart primary 0% 10% mkpart primary 10% 100%
lsblk

NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0 69.9G  0 disk
├─nvme1n1p1   259:4    0    7G  0 part
└─nvme1n1p2   259:5    0 62.9G  0 part
nvme0n1       259:1    0   13G  0 disk
├─nvme0n1p1   259:2    0   13G  0 part /
└─nvme0n1p128 259:3    0    1M  0 part

Discovery Directories

Let's create the soft links in our discovery directories on the 2 chosen hosts that will have a 7GB filesystem.

sudo mkdir /mnt/fs-disks -p
sudo mkdir /mnt/fast-disks -p
sudo ln -sf /dev/nvme1n1p1 /mnt/fs-disks/
sudo ln -sf /dev/nvme1n1p2 /mnt/fast-disks/
ls -lrt /mnt/fs-disks/ /mnt/fast-disks/

On the remaining host, which does not have a filesystem, do the following. In a real production system all disks would be identical.

sudo mkdir /mnt/fast-disks -p
sudo ln -sf /dev/nvme1n1 /mnt/fast-disks/

Storage Classes & Local Static Provisioner

Download the storage class file and make two copies of it so we can edit each as required (we will handle the local static provisioner files in a moment).

wget https://raw.githubusercontent.com/aerospike/aerospike-kubernetes-operator/2.2.1/config/samples/storage/local_storage_class.yaml
cp local_storage_class.yaml local_storage_class-fast-ssd.yaml
cp local_storage_class.yaml local_storage_class-fs-ssd.yaml

Storage Class

To create the storage classes [fast-ssd] and [fs-ssd], change the metadata.name in each of the edited storage class files respectively, making one fast-ssd and the other fs-ssd, and then deploy them as below.
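For reference, after editing, the [fs-ssd] copy should look roughly like this (a sketch based on the upstream sample; only metadata.name differs between the two copies):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fs-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete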

kubectl create -f local_storage_class-fast-ssd.yaml
kubectl create -f local_storage_class-fs-ssd.yaml
kubectl get sc

NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
fast-ssd        kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  9s
fs-ssd          kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  3m23s

Local Static Provisioner

We now want to ensure that the block storage and the filesystem storage can work side by side on the same host. To do this, we have edited both of the following files:

  • aerospike_local_volume_provisioner-fast-ssd.yaml
  • aerospike_local_volume_provisioner-fs-ssd.yaml

This ensures that the names of the following Kubernetes resources do not clash.

  • ServiceAccount
  • ClusterRole
  • ClusterRoleBinding
  • ConfigMap
  • DaemonSet

Following are direct links to the edited files for your convenience.
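The key difference between the two provisioner files is the ConfigMap data: the fs-ssd copy points its discovery directory at /mnt/fs-disks and asks for an XFS filesystem, roughly as follows (an excerpt sketch; see the linked files for the exact contents):

data:
  useNodeNameOnly: "true"
  storageClassMap: |
    fs-ssd:
       hostDir: /mnt/fs-disks
       mountDir:  /mnt/fs-disks
       blockCleanerCommand:
         - "/scripts/shred.sh"
         - "2"
       volumeMode: Filesystem
       fsType: xfs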

Create an aerospike namespace and deploy the storage yaml files.

kubectl create ns aerospike
kubectl create -f https://raw.githubusercontent.com/nareshmaharaj-consultant/localstaticprovisioner-k8s-config-files/main/aerospike_local_volume_provisioner-fs-ssd.yaml
kubectl create -f https://raw.githubusercontent.com/nareshmaharaj-consultant/localstaticprovisioner-k8s-config-files/main/aerospike_local_volume_provisioner-fast-ssd.yaml

Here is the result of deploying our changes. Notice the 7GB filesystem using storage class [fs-ssd].

kubectl get sc,pv
NAME                                        PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/fast-ssd        kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  17m
storageclass.storage.k8s.io/fs-ssd          kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  17m
storageclass.storage.k8s.io/gp2 (default)   kubernetes.io/aws-ebs          Delete          WaitForFirstConsumer   false                  126m

NAME                                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
persistentvolume/local-pv-42dc4d3f   62Gi       RWO            Delete           Available           fast-ssd                22s
persistentvolume/local-pv-ba8bc9e    69Gi       RWO            Delete           Available           fast-ssd                22s
persistentvolume/local-pv-d7501418   69Gi       RWO            Delete           Available           fast-ssd                22s
persistentvolume/local-pv-f5282eeb   7152Mi     RWO            Delete           Available           fs-ssd                  2s

Using the Filesystem

One question that might arise is: how do we know which node has the XFS filesystem, so that our Pod can be scheduled to the correct node? To answer this, we start by testing our new filesystem and seeing where it takes us. First we create a simple PersistentVolumeClaim referencing our [fs-ssd] filesystem storage class. Go ahead and create a file called pvc-fs.yaml with the following content.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fs-1g-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: fs-ssd
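Because the [fs-ssd] storage class uses volumeBindingMode: WaitForFirstConsumer, the claim will stay Pending until a Pod that uses it is scheduled; you can watch its status with:

kubectl get pvc fs-1g-pvc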

Create a Pod with the httpd server image and reference the PersistentVolumeClaim. Call your file pod-fs.yaml.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    app: fs-httpd
  name: fs-httpd
spec:
  containers:
  - image: httpd
    name: fs-httpd
    resources: {}
    volumeMounts:
      - mountPath: "/datafiles/"
        name: my-fs
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: my-fs
      persistentVolumeClaim:
        claimName: fs-1g-pvc
status: {}

Deploy both the PVC and the Pod:

kubectl create -f pvc-fs.yaml
kubectl create -f pod-fs.yaml

Run the following command to see where this pod was scheduled.

kubectl describe po fs-httpd

The output shows that the Pod was scheduled to the correct node, the one with the XFS filesystem: ip-192-168-28-160.ec2.internal/192.168.28.160. (Your node name and IP address will differ.)

This happened because our [fs-ssd] StorageClass was used in the PersistentVolumeClaim, the PersistentVolumeClaim was referenced by our Pod, and so the scheduler knew exactly where the Pod needed to be scheduled.
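If you want to see that mapping for yourself, jq can list which node each local PV is pinned to (a convenience one-liner, not required for the workflow):

# Print: PV name, storage class, pinned node
kubectl get pv -o json | jq -r '.items[]
  | select(.spec.nodeAffinity != null)
  | [.metadata.name, .spec.storageClassName,
     .spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]]
  | @tsv'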

We can go a bit further and prove this by adding a Taint to all nodes with a filesystem. Delete the Pod, add the following taints, and then recreate the Pod once more. (Learn more about Kubernetes taints & tolerations.)

(Note: If you are keeping the taint and later redeploy the LocalStaticProvisioner be sure to uncomment the tolerations section in each LocalStaticProvisioner yaml file.)

# change the node names below to match your cluster
kubectl delete -f pod-fs.yaml
kubectl taint node ip-192-168-28-160.ec2.internal storageType=filesystem:NoSchedule
kubectl taint node ip-192-168-63-152.ec2.internal storageType=filesystem:NoSchedule
kubectl create -f pod-fs.yaml

If we look at the events in the Pod, we see something very interesting.

"0/3 nodes are available: 2 node(s) had taint {storageType: filesystem}, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict."

kubectl describe po fs-httpd
Name:             fs-httpd
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=fs-httpd
Annotations:      kubernetes.io/psp: eks.privileged
Status:           Pending
...
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  35s   default-scheduler  0/3 nodes are available: 2 node(s) had taint {storageType: filesystem}, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict.

This means that 2 of our nodes rejected the Pod because it did not tolerate the taint, even though one of them held the matching PersistentVolume. The remaining node failed to meet the node affinity of the PersistentVolume bound to the PersistentVolumeClaim. To be clear, each of our PersistentVolumes is coupled to a node using node affinity rules. If you describe a volume you will see the following:

kubectl describe pv local-pv-9dbcb710
Name:              local-pv-9dbcb710
StorageClass:      fs-ssd
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [ip-192-168-28-160.ec2.internal]

Now add a toleration to the Pod, or delete the taint, and the Pod should be scheduled to the correct node. Below is our new Pod configuration with the toleration added.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    app: fs-httpd
  name: fs-httpd
spec:
  containers:
  - image: httpd
    name: fs-httpd
    resources: {}
    volumeMounts:
      - mountPath: "/datafiles/"
        name: my-fs
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: my-fs
      persistentVolumeClaim:
        claimName: fs-1g-pvc
  tolerations:
    - key: storageType
      operator: "Equal"
      value: filesystem
      effect: NoSchedule
status: {}

If we describe our Pod we can see what events occurred. I've only included the relevant information.

kubectl describe po fs-httpd
Name:             fs-httpd
Node:             ip-192-168-28-160.ec2.internal/192.168.28.160
Volumes:
    ClaimName:  fs-1g-pvc
Tolerations:                storageType=filesystem:NoSchedule
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  91s   default-scheduler  Successfully assigned default/fs-httpd to ip-192-168-28-160.ec2.internal
  Normal  Created    90s   kubelet            Created container fs-httpd
  Normal  Started    90s   kubelet            Started container fs-httpd

One last interesting detail is the mounted XFS filesystem /dev/nvme1n1p1 visible inside the container.

kubectl exec -it fs-httpd -- bash
root@fs-httpd:/usr/local/apache2# df -Th
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   13G  2.9G   11G  22% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/nvme1n1p1 xfs      7.0G   40M  7.0G   1% /datafiles
/dev/nvme0n1p1 xfs       13G  2.9G   11G  22% /etc/hosts
shm            tmpfs     64M     0   64M   0% /dev/shm
tmpfs          tmpfs    6.9G   12K  6.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs          tmpfs    3.8G     0  3.8G   0% /proc/acpi
tmpfs          tmpfs    3.8G     0  3.8G   0% /sys/firmware

Installing Aerospike - Block & Filesystem Storage

Aerospike Kubernetes Operator


Install the AKO (Aerospike Kubernetes Operator) and set up the database cluster. Refer to the following documentation for details on how to do this.

AKO Install Docs

You can confirm that your installation is successful by checking the pods in the operators namespace.

kubectl get pods -n operators
NAME                                                     READY   STATUS    RESTARTS   AGE
aerospike-operator-controller-manager-7946df5dd9-lmqvt   2/2     Running   0          2m44s
aerospike-operator-controller-manager-7946df5dd9-tg4sh   2/2     Running   0          2m44s

Create the Aerospike Database Cluster


In this section we create an Aerospike database cluster. For full details, refer to the following documentation.

Aerospike Cluster

The basic steps are as follows.

Get the code

Clone the Aerospike Kubernetes Operator Git repo and be sure to copy your feature/licence key file, if you have one, into the config/samples/secrets directory.

git clone https://github.com/aerospike/aerospike-kubernetes-operator.git
cd aerospike-kubernetes-operator/

Initialize Storage

kubectl apply -f config/samples/storage/eks_ssd_storage_class.yaml

As of Aerospike Database 6.0, erase each block storage device first to avoid critical errors such as:

Oct 26 2022 14:05:53 GMT: CRITICAL (drv_ssd): (drv_ssd.c:2216) /test/dev/xvdf: not an Aerospike device but not erased - check config or erase device

Use the following command to erase the device on each host. Remember that on 2 of the nodes the device will be /dev/nvme1n1p2 (demo only).

export diskPart=/dev/nvme1n1
sudo blkdiscard ${diskPart} && sudo blkdiscard -z --length 8MiB ${diskPart} &

Add Secrets


Secrets, for those not familiar with the terminology, allow sensitive data to be introduced into the Kubernetes environment without exposing it directly in your configuration. Examples might include private PKI keys or passwords.

kubectl  -n aerospike create secret generic aerospike-secret --from-file=config/samples/secrets
kubectl  -n aerospike create secret generic auth-secret --from-literal=password='admin123'

Create Aerospike Cluster


Before we create the cluster, delete the httpd Pod and PersistentVolumeClaim we created earlier:

kubectl delete -f pod-fs.yaml
kubectl delete pvc fs-1g-pvc

Make sure all the PersistentVolume(s) have a status of 'Available':

kubectl get pv -n aerospike
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-684418fc   62Gi       RWO            Delete           Available           fast-ssd                4h13m
local-pv-9dbcb710   7152Mi     RWO            Delete           Available           fs-ssd                  5m42s
local-pv-aa49ff2c   69Gi       RWO            Delete           Available           fast-ssd                4h13m
local-pv-ad278bd7   69Gi       RWO            Delete           Available           fast-ssd                4h13m

If some of the PVs are not in the 'Available' state, run the following patch to release the PersistentVolumeClaim (PVC).

kubectl get pv -o=json|jq .items[].metadata.name -r | grep ^local | while read -r line; do kubectl patch pv $line -p '{"spec":{"claimRef": null}}';done

Use the file all_flash_cluster_cr.yaml to configure the Aerospike database with the All-Flash recipe, so that the block and filesystem storage is used as a single unit of work. Note where the different storage classes and tolerations are used (a sketch of the storage section follows the list below).

  • storageClass: ssd (AWS gp2 ext4 filesystem, dynamic storage)
  • storageClass: fast-ssd (LocalStaticProvisioner block storage)
  • storageClass: fs-ssd (LocalStaticProvisioner XFS filesystem)
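To give a feel for how this fits together, the storage section of the custom resource combines the three classes roughly as follows (a heavily abridged sketch; the volume names, sizes, and mount paths here are illustrative, so use the values from the linked file):

# Abridged sketch of spec.storage in all_flash_cluster_cr.yaml (illustrative values)
storage:
  volumes:
    - name: workdir
      source:
        persistentVolume:
          storageClass: ssd        # AWS gp2-backed filesystem volume
          volumeMode: Filesystem
          size: 1Gi
      aerospike:
        path: /opt/aerospike
    - name: ns-device
      source:
        persistentVolume:
          storageClass: fast-ssd   # raw local block device for record data
          volumeMode: Block
          size: 5Gi
      aerospike:
        path: /test/dev/xvdf
    - name: ns-index
      source:
        persistentVolume:
          storageClass: fs-ssd     # local XFS filesystem for the All-Flash index
          volumeMode: Filesystem
          size: 5Gi
      aerospike:
        path: /aerospike/index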

Deploy the Aerospike Database

kubectl create -f all_flash_cluster_cr.yaml
kubectl get po -n aerospike -w

Verify Aerospike Database Running

The get pods command should show you two active Aerospike pods.

kubectl get po -n aerospike
NAME                                        READY   STATUS    RESTARTS   AGE
aerocluster-0-0                             1/1     Running   0          8m32s
aerocluster-0-1                             1/1     Running   0          8m32s

Insert test data with the benchmark tool

The Aerospike Benchmark tool measures the performance of an Aerospike cluster. It can mimic real-world workloads with configurable record structures, various access patterns, and UDF calls. Use the benchmark tool to insert some data easily.

Log in to one of the Aerospike pods (which will have been scheduled to a node with a filesystem - just checking...).

kubectl exec -it aerocluster-0-0 -n aerospike -- bash

Add some data using the simplest of commands:

asbench -Uadmin -Padmin123
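If you want a little more control over the inserted data, asbench accepts workload options; for example (the values below are illustrative, and exact flags can vary by version, so check asbench --help):

# Insert 100,000 records into namespace "test", set "testset" (insert-only workload)
asbench -Uadmin -Padmin123 -n test -s testset -k 100000 -w I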

To verify that we successfully added some data, use the aql tool: log in and run a query.

aql -Uadmin -Padmin123
select * from test

Cleaning up


WARNING: If this step is bypassed and the EKS cluster is deleted, all persistent volumes created will remain.

Delete Database Cluster

This is the important step and should always be run when using cloud storage classes.

kubectl delete -f all_flash_cluster_cr.yaml

Delete EKS Cluster

We can now delete the EKS cluster:

eksctl delete cluster -f my-cluster.yaml

Conclusion

We have seen that with little effort we can spin up a deployment with local disks for block storage, combined with filesystem storage. We confirmed that Pods were scheduled automatically to the correct nodes, and we learned that by using Kubernetes taints we can effectively reserve nodes for specific usage. You also had a sneak preview of Aerospike's All-Flash use case. One final point: we only looked at the Kubernetes Special Interest Group (SIG) version of the LocalStaticProvisioner. You may also wish to consider Rancher's version, which is listed below.

Review Rancher
https://github.com/rancher/local-path-provisioner
