Ishan Sharma

Expanding Persistent Volume for Statefulsets in Kubernetes

Expanding volumes in Kubernetes clusters is a routine maintenance activity and seemingly straightforward – or so I thought! Recently, a unique challenge arose when there was a request to increase the storage size for a Stateful Application running in a Kubernetes cluster.

Let's jump into the story to share the real-world experience of this request.

Problem Statement

In the realm of Kubernetes clusters, a StatefulSet application thrived. Its popularity soared to the point where it demanded more storage than the currently attached volume could provide. The team, recognizing the need to meet this demand, set out to expand the storage of the persistent volume connected to the StatefulSet. However, they quickly realized that this was easier said than done.

Kubernetes 1.24 supports volume expansion, so, like anyone encountering this task for the first time, I anticipated it would be as simple as increasing spec.volumeClaimTemplates.spec.resources.requests.storage in the StatefulSet. But it was not!

Contrary to initial expectations, modifying spec.volumeClaimTemplates.spec.resources.requests.storage alone does not trigger the expansion of volumes attached to StatefulSets. Manual intervention is necessary to navigate this intricate landscape. Let's learn how we can achieve it in this article.
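
In fact, on a plain StatefulSet that is not managed by an operator, attempting to patch the template directly is rejected by the API server outright. A sketch of what that looks like, where my-app is a hypothetical StatefulSet name and the exact list of allowed fields in the error message varies by Kubernetes version:

$ kubectl patch sts my-app --type merge --patch '{"spec": {"volumeClaimTemplates": [{"spec": {"resources": {"requests": {"storage": "5Gi"}}}}]}}'
The StatefulSet "my-app" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden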

Prerequisites For Persistent Volume Expansion

  • The underlying provider and CSI driver for the persistent volume must support volume expansion. Refer to the documentation of your CSI driver to check this capability. (Most managed Kubernetes offerings ship with in-built CSI drivers that support this natively and have volume expansion enabled.)

  • Volume expansion must be enabled on the respective storage class used by the persistent volume. This can be checked using the command below:

## Lists each StorageClass and whether it allows volume expansion
$ kubectl get storageclass -o custom-columns=NAME:.metadata.name,ALLOWVOLUMEEXPANSION:.allowVolumeExpansion

Output from the above command will look something like this:

NAME                    ALLOWVOLUMEEXPANSION
azurefile               true
azurefile-csi           true
azurefile-csi-premium   true
azurefile-premium       true
default                 true
managed                 true
managed-csi             true
managed-csi-premium     true
managed-premium         true
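
If a storage class you intend to use reports false here, expansion can usually be switched on with a one-line patch. A minimal sketch against the managed-csi class from the output above; note this only helps if the underlying CSI driver actually supports resizing:

$ kubectl patch storageclass managed-csi -p '{"allowVolumeExpansion": true}'
storageclass.storage.k8s.io/managed-csi patched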

Step-by-Step Guide With an Example

To make it more practical, I will use the Alertmanager application, which is installed with the kube-prometheus-stack Helm chart.

Reproduce the problem

  • Install the Helm chart.
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm upgrade --install kube-prom-stack prometheus-community/kube-prometheus-stack --namespace kube-prom-stack --create-namespace --values custom-values.yaml
Release "kube-prom-stack" does not exist. Installing it now.
NAME: kube-prom-stack
LAST DEPLOYED: Thu Nov 23 10:42:39 2023
NAMESPACE: kube-prom-stack
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace kube-prom-stack get pods -l "release=kube-prom-stack"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

Info: Application details are out of the scope of this article.

custom-values.yaml:

alertmanager:
  ingress:
    enabled: false

  podDisruptionBudget:
    enabled: false

  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Gi
  • Once the installation is complete, we can list the statefulsets.

Info: The kube-prom-stack namespace is set as the default namespace for the commands below.

$ kubectl get statefulsets
NAME                                                   READY   AGE
alertmanager-kube-prom-stack-kube-prome-alertmanager   1/1     11m
prometheus-kube-prom-stack-kube-prome-prometheus       1/1     11m
  • Check the details of the persistent volume claim created by the volumeClaimTemplates specification of the StatefulSet to verify that the correct values are applied.
## Get the pvc name from the statefulset volumeClaimTemplate
$ kubectl get sts alertmanager-kube-prom-stack-kube-prome-alertmanager -o jsonpath='{.spec.volumeClaimTemplates[0].metadata.name}' ; echo ""
alertmanager-kube-prom-stack-kube-prome-alertmanager-db

## Get the pvc size request from the statefulset volumeClaimTemplate
$ kubectl get sts alertmanager-kube-prom-stack-kube-prome-alertmanager -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources}' | jq
{
  "requests": {
    "storage": "2Gi"
  }
}

## Verify the size of the PVC from the statefulset volumeClaimTemplate
$ kubectl get pvc alertmanager-kube-prom-stack-kube-prome-alertmanager-db-alertmanager-kube-prom-stack-kube-prome-alertmanager-0 -o jsonpath='{.spec.resources.requests}' | jq
{
  "storage": "2Gi"
}

Storage is 2Gi, which matches our provided values.yaml.
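
To double-check what the pod itself observes, you can inspect the mounted filesystem as well. A sketch, assuming the chart mounts the Alertmanager storage at /alertmanager (adjust the container name and mount path if your setup differs):

$ kubectl exec alertmanager-kube-prom-stack-kube-prome-alertmanager-0 -c alertmanager -- df -h /alertmanager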

  • Extend the persistent volume claim storage request in the provided values.yaml.

  • Modified custom-values.yaml

alertmanager:
  ingress:
    enabled: false

  podDisruptionBudget:
    enabled: false

  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi ## Increased ##

The size is updated to 5Gi; let's apply the new values and monitor the changes.

  • Upgrade the Helm release with the new values.
$ helm upgrade --install kube-prom-stack prometheus-community/kube-prometheus-stack --namespace kube-prom-stack --values custom-values.yaml
Release "kube-prom-stack" has been upgraded. Happy Helming!
NAME: kube-prom-stack
LAST DEPLOYED: Thu Nov 23 10:55:25 2023
NAMESPACE: kube-prom-stack
STATUS: deployed
REVISION: 2
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace kube-prom-stack get pods -l "release=kube-prom-stack"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
  • Check the details of the persistent volume claim after updating the size.
## Get the pvc name from the statefulset volumeClaimTemplate
$ kubectl get sts alertmanager-kube-prom-stack-kube-prome-alertmanager -o jsonpath='{.spec.volumeClaimTemplates[0].metadata.name}' ; echo ""
alertmanager-kube-prom-stack-kube-prome-alertmanager-db

## Get the PVC size request from the statefulset volumeClaimTemplate (UPDATED)
$ kubectl get sts alertmanager-kube-prom-stack-kube-prome-alertmanager -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources}' | jq
{
  "requests": {
    "storage": "5Gi"
  }
}

## Verify the size of the PVC created from the statefulset volumeClaimTemplate (NOT UPDATED)
$ kubectl get pvc alertmanager-kube-prom-stack-kube-prome-alertmanager-db-alertmanager-kube-prom-stack-kube-prome-alertmanager-0 -o jsonpath='{.spec.resources.requests}' | jq
{
  "storage": "2Gi"
}

Even though the volumeClaimTemplate of the StatefulSet (alertmanager) has been updated, the persistent volume claim still uses the old size of 2Gi. This is because the template is only consulted when a PVC is first created; changes to it are never propagated to existing claims.
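
A quick way to spot this drift is to compare the template's request against what each claim actually requests. A small sketch reusing the label selector that the operator applies to these PVCs:

## Template request (updated)
$ kubectl get sts alertmanager-kube-prom-stack-kube-prome-alertmanager -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources.requests.storage}' ; echo ""
5Gi

## Actual PVC request (stale)
$ kubectl get pvc -l alertmanager=kube-prom-stack-kube-prome-alertmanager -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.resources.requests.storage}{"\n"}{end}'
alertmanager-kube-prom-stack-kube-prome-alertmanager-db-alertmanager-kube-prom-stack-kube-prome-alertmanager-0    2Gi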

Solution

The following steps are required to increase the size from 2Gi to 5Gi.

  • STEP 1: In this case the StatefulSet is managed via the operator, so reconciliation needs to be paused on the alertmanager custom resource as well.

Get the name of the alertmanager resource.

$ kubectl get alertmanager
NAME                                      VERSION   REPLICAS   READY   RECONCILED   AVAILABLE   AGE
kube-prom-stack-kube-prome-alertmanager   v0.26.0   1          1       True         True        61m

Patch the resource to pause reconciliation and record the new storage size, so that the operator does not revert the changes we are about to make.

$ kubectl patch alertmanager/kube-prom-stack-kube-prome-alertmanager  --patch '{"spec": {"paused": true, "storage": {"volumeClaimTemplate": {"spec": {"resources": {"requests": {"storage":"5Gi"}}}}}}}' --type merge
alertmanager.monitoring.coreos.com/kube-prom-stack-kube-prome-alertmanager patched
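
Before touching the PVCs, confirm that the pause actually took effect:

$ kubectl get alertmanager kube-prom-stack-kube-prome-alertmanager -o jsonpath='{.spec.paused}' ; echo ""
true
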
  • STEP 2: Patch the persistent volume claims associated with the statefulset.
for p in $(kubectl get pvc -l alertmanager=kube-prom-stack-kube-prome-alertmanager -o jsonpath='{range .items[*]}{.metadata.name} {end}'); do \
  kubectl patch pvc/${p} --patch '{"spec": {"resources": {"requests": {"storage":"5Gi"}}}}'; \
done
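
Depending on the CSI driver, the resize can take a little while; during that window a claim may carry a status condition such as Resizing or FileSystemResizePending. A sketch for inspecting those conditions:

$ kubectl get pvc -l alertmanager=kube-prom-stack-kube-prome-alertmanager -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[*].type}{"\n"}{end}'
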
  • STEP 3: Delete the StatefulSet using the orphan deletion strategy.

By passing --cascade=orphan to kubectl delete, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted.

$ kubectl delete sts alertmanager-kube-prom-stack-kube-prome-alertmanager --cascade=orphan
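
Because of the orphan strategy, the Alertmanager pod keeps running even though its StatefulSet is gone; output along these lines confirms nothing was disrupted:

$ kubectl get pods -l alertmanager=kube-prom-stack-kube-prome-alertmanager
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-kube-prom-stack-kube-prome-alertmanager-0   2/2     Running   0          63m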

To verify that the PVC has been extended:

## PVC is updated
$ kubectl get pvc alertmanager-kube-prom-stack-kube-prome-alertmanager-db-alertmanager-kube-prom-stack-kube-prome-alertmanager-0 -o jsonpath='{.spec.resources.requests}' | jq
{
  "storage": "5Gi"
}
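
Note that spec.resources.requests only records what we asked for. To confirm the volume and filesystem were actually grown, check status.capacity as well; it lags behind the request until the resize completes:

$ kubectl get pvc alertmanager-kube-prom-stack-kube-prome-alertmanager-db-alertmanager-kube-prom-stack-kube-prome-alertmanager-0 -o jsonpath='{.status.capacity}' | jq
{
  "storage": "5Gi"
}
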
  • STEP 4: Resume the operator so it takes back control of the Alertmanager provisioning and recreates the StatefulSet from the updated template. This step applies only to StatefulSets managed via operators.
kubectl patch alertmanager/kube-prom-stack-kube-prome-alertmanager --patch '{"spec": {"paused": false}}' --type merge
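
Once unpaused, the operator recreates the StatefulSet from the already-updated template and adopts the orphaned pod; after a short wait it should report ready again:

$ kubectl get sts alertmanager-kube-prom-stack-kube-prome-alertmanager
NAME                                                    READY   AGE
alertmanager-kube-prom-stack-kube-prome-alertmanager    1/1     40s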

Conclusion

In summary, the following steps need to be followed in this order to expand the persistent volumes of a StatefulSet:

  1. Patch the statefulset with the required (expanded) volumeClaimTemplates configuration.
  2. Patch the persistent volume claims associated with the statefulset with the required (expanded) size.
  3. Delete the statefulset with the cascade=orphan strategy.

For StatefulSets managed by an operator (as with Alertmanager above), pause the operator before these steps and resume it afterwards, as shown in the consolidated sketch below.
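
For convenience, here is the whole sequence as a generic shell sketch. STS_NAME, PVC_SELECTOR, and NEW_SIZE are hypothetical placeholders you must adapt, and the pause/resume patches apply only to operator-managed workloads like the Alertmanager example above:

STS_NAME="alertmanager-kube-prom-stack-kube-prome-alertmanager"      # hypothetical: your StatefulSet
PVC_SELECTOR="alertmanager=kube-prom-stack-kube-prome-alertmanager"  # hypothetical: label selector for its PVCs
NEW_SIZE="5Gi"                                                       # hypothetical: target size

# 1. (Operator-managed only) pause reconciliation and record the new size on the custom resource
kubectl patch alertmanager/kube-prom-stack-kube-prome-alertmanager --type merge \
  --patch "{\"spec\": {\"paused\": true, \"storage\": {\"volumeClaimTemplate\": {\"spec\": {\"resources\": {\"requests\": {\"storage\": \"${NEW_SIZE}\"}}}}}}}"

# 2. Patch every PVC owned by the StatefulSet to the new size
for p in $(kubectl get pvc -l "${PVC_SELECTOR}" -o jsonpath='{range .items[*]}{.metadata.name} {end}'); do
  kubectl patch pvc/"${p}" --patch "{\"spec\": {\"resources\": {\"requests\": {\"storage\": \"${NEW_SIZE}\"}}}}"
done

# 3. Delete the StatefulSet but leave its pods (and volumes) untouched
kubectl delete sts "${STS_NAME}" --cascade=orphan

# 4. (Operator-managed only) resume reconciliation so the StatefulSet is recreated with the new template
kubectl patch alertmanager/kube-prom-stack-kube-prome-alertmanager --type merge --patch '{"spec": {"paused": false}}'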

And that concludes our exploration into the dynamic world of scaling storage for stateful applications in Kubernetes.

Expanding persistent volumes in StatefulSets is a well-known restriction and is being tracked in the "Support Volume Expansion Through StatefulSets" and "StatefulSet: support resize PVC storage in K8s v1.11" GitHub issues in the official Kubernetes repositories.

Acknowledgments

Thank you for embarking on this Kubernetes adventure with me. Scaling storage in Kubernetes can be complex, but you've gained valuable insights to navigate this terrain. May your clusters be resilient, and your applications thrive. Happy reading, and until our next exploration!
