DEV Community

ANKUSH CHOUDHARY JOHAL

Posted on • Originally published at johal.in

How to Use Kubernetes 1.32 CSI Drivers and AWS EFS for Shared File Storage

Shared file storage is a critical requirement for many Kubernetes workloads, including content management systems, data processing pipelines, and collaborative applications. Unlike block storage, which can be attached to only one node at a time, a shared file system lets multiple pods across different nodes read and write the same volume simultaneously. Kubernetes 1.32 continues to support the Container Storage Interface (CSI) standard for integrating external storage systems, and AWS Elastic File System (EFS), a fully managed, scalable, NFS-based file storage service, pairs well with CSI drivers for shared storage use cases.

Prerequisites

Before you begin configuring the AWS EFS CSI driver for Kubernetes 1.32, ensure you have the following:

  • A running Kubernetes 1.32 cluster (self-managed or EKS) with nodes in the same AWS region as your EFS file system
  • AWS CLI configured with credentials that have permissions to create and manage EFS file systems, IAM roles, and EC2 security groups
  • kubectl installed and configured to connect to your cluster
  • Helm 3.x installed (optional, but recommended for simplified driver installation)

Step 1: Create an AWS EFS File System

First, create the EFS file system that will serve as your shared storage backend:

  1. Open the AWS Management Console and navigate to the EFS dashboard
  2. Click "Create file system", enter a name, select the VPC where your Kubernetes cluster runs, and choose the same region as your cluster
  3. Configure the performance mode (General Purpose or Max I/O) and throughput mode (Bursting or Provisioned) based on your workload needs
  4. Review and create the file system, then note the File System ID (e.g., fs-0123456789abcdef0)

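If you prefer the CLI, the same file system can be created with the AWS CLI; the creation token, tag, and region below are illustrative values, not requirements:

# Create an EFS file system (adjust region and tags to your environment)
aws efs create-file-system \
  --creation-token k8s-shared-fs \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --tags Key=Name,Value=k8s-shared-storage \
  --region us-east-1

# The response includes the FileSystemId needed in later steps
aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text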
Next, configure the EFS security group to allow inbound NFS traffic (port 2049) from the security groups associated with your Kubernetes cluster nodes. This ensures pods running on nodes can access the EFS file system.
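As a sketch, the inbound rule can be added with the AWS CLI; the two security group IDs below are placeholders for your EFS mount-target security group and your node security group:

# Allow NFS (TCP 2049) from the node security group into the EFS security group
# sg-efs0123 and sg-nodes456 are placeholders -- substitute your own IDs
aws ec2 authorize-security-group-ingress \
  --group-id sg-efs0123 \
  --protocol tcp \
  --port 2049 \
  --source-group sg-nodes456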

Step 2: Install the AWS EFS CSI Driver

To mount EFS file systems from a Kubernetes 1.32 cluster, you need the AWS EFS CSI driver installed. You can install the driver using Helm or by applying the official Kubernetes manifest.

Using Helm (recommended):

helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set image.tag=v1.7.1  # Check for the latest compatible version with K8s 1.32

Alternatively, apply the official manifest directly:

kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=v1.7.1"

Verify the driver installation by checking that the CSI controller and node pods are running in the kube-system namespace:

kubectl get pods -n kube-system | grep efs-csi
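You can also confirm that the driver registered itself with the API server; installing it creates a CSIDriver object named efs.csi.aws.com and an efs-csi-node DaemonSet:

kubectl get csidriver efs.csi.aws.com
kubectl get daemonset efs-csi-node -n kube-system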

Step 3: Configure IAM Permissions for the CSI Driver

The EFS CSI driver needs IAM permissions to describe EFS file systems and manage mount targets. For EKS clusters, you can attach the IAM role for service accounts (IRSA) to the driver's service account. For self-managed clusters, you can use IAM roles for EC2 instances or pod identity agents.

Create an IAM policy with the following permissions (save as efs-csi-policy.json). Note that the access point actions are needed if you use dynamic provisioning, since the driver creates an EFS access point per volume:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeFileSystems",
        "elasticfilesystem:DescribeMountTargets",
        "elasticfilesystem:DescribeMountTargetSecurityGroups",
        "elasticfilesystem:DescribeAccessPoints",
        "elasticfilesystem:CreateAccessPoint",
        "elasticfilesystem:DeleteAccessPoint",
        "elasticfilesystem:TagResource",
        "elasticfilesystem:CreateMountTarget",
        "elasticfilesystem:DeleteMountTarget",
        "elasticfilesystem:ModifyMountTargetSecurityGroups"
      ],
      "Resource": "*"
    }
  ]
}
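Register the policy file with IAM; the policy name here is simply the one used throughout this walkthrough:

aws iam create-policy \
  --policy-name efs-csi-policy \
  --policy-document file://efs-csi-policy.json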

Attach this policy to the IAM role associated with your CSI driver service account. For EKS, use the following command to create the role and bind it to the driver's service account via IRSA (replace <cluster-name> and <aws-account-id> with your own values):

eksctl create iamserviceaccount \
  --name efs-csi-controller-sa \
  --namespace kube-system \
  --cluster <cluster-name> \
  --attach-policy-arn arn:aws:iam::<aws-account-id>:policy/efs-csi-policy \
  --approve

Step 4: Create a StorageClass for EFS

Define a Kubernetes StorageClass that references the EFS CSI driver and your EFS file system. Create a file named efs-storageclass.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap  # Required for dynamic provisioning; the driver creates an EFS access point per volume
  fileSystemId: fs-0123456789abcdef0  # Replace with your EFS File System ID
  directoryPerms: "700"  # Optional: Permissions for each access point's root directory
  gidRangeStart: "1000"  # Optional: GID range for dynamic provisioning
  gidRangeEnd: "2000"
reclaimPolicy: Retain  # EFS volumes are not deleted when PVCs are removed
volumeBindingMode: Immediate

Apply the StorageClass:

kubectl apply -f efs-storageclass.yaml

Step 5: Create a PersistentVolumeClaim (PVC)

Create a PVC that requests storage from the EFS StorageClass. Save the following as efs-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany  # EFS supports shared access across multiple pods
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi  # EFS is elastic; the value is required by the Kubernetes API but not enforced as a limit

Apply the PVC:

kubectl apply -f efs-pvc.yaml

Verify the PVC is bound:

kubectl get pvc efs-pvc
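Output similar to the following indicates success (the volume name will differ in your cluster):

NAME      STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
efs-pvc   Bound    pvc-5d8a...   5Gi        RWX            efs-sc         30s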

Step 6: Deploy a Test Workload

Deploy a sample pod that uses the EFS PVC to verify shared storage works. Create a file named test-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: efs-test-pod
spec:
  containers:
  - name: test-container
    image: nginx:latest
    volumeMounts:
    - name: efs-storage
      mountPath: /usr/share/nginx/html
  volumes:
  - name: efs-storage
    persistentVolumeClaim:
      claimName: efs-pvc

Apply the pod:

kubectl apply -f test-pod.yaml

Exec into the pod and create a test file:

kubectl exec -it efs-test-pod -- bash
echo "Hello from EFS" > /usr/share/nginx/html/test.txt
exit

Deploy a second pod using the same PVC, exec into it, and verify the test file is accessible to confirm shared storage works across pods.
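A minimal second pod for this check might look like the following (the pod name and mount path are simply the ones used in this walkthrough). Save it as test-pod-2.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: efs-test-pod-2
spec:
  containers:
  - name: test-container
    image: nginx:latest
    volumeMounts:
    - name: efs-storage
      mountPath: /usr/share/nginx/html
  volumes:
  - name: efs-storage
    persistentVolumeClaim:
      claimName: efs-pvc

After applying it, read the file written by the first pod:

kubectl apply -f test-pod-2.yaml
kubectl exec efs-test-pod-2 -- cat /usr/share/nginx/html/test.txt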

Best Practices for EFS CSI on Kubernetes 1.32

  • Use IAM roles for service accounts (IRSA) instead of hardcoding AWS credentials in cluster nodes
  • Set appropriate directory permissions and GID ranges in the StorageClass to avoid permission issues
  • Monitor EFS throughput and IOPS using AWS CloudWatch to ensure performance meets workload requirements
  • Use the ReadWriteMany access mode only when necessary, as EFS has higher latency than block storage for single-pod workloads
  • Regularly update the EFS CSI driver to the latest stable version compatible with Kubernetes 1.32 to get bug fixes and security patches

Troubleshooting Common Issues

  • PVC Stuck in Pending: Check that the EFS CSI driver pods are running, the File System ID in the StorageClass is correct, and the node security groups allow NFS traffic to EFS.
  • Permission Denied Errors: Verify the GID/UID settings in the StorageClass match the user IDs used in your container images, and that the EFS file system permissions allow access from the node security groups.
  • Mount Failures: Check the CSI node pod logs for errors: kubectl logs -n kube-system -l app=efs-csi-node, and ensure the EFS file system is in the same region as your cluster nodes.
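When debugging any of the above, these read-only commands are a useful starting point (the label selectors assume the official driver manifests):

# Inspect PVC events for provisioning errors
kubectl describe pvc efs-pvc

# Controller logs show dynamic provisioning and access point errors
kubectl logs -n kube-system -l app=efs-csi-controller

# Node plugin logs show mount-time errors
kubectl logs -n kube-system -l app=efs-csi-node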

Conclusion

Using Kubernetes 1.32 CSI drivers with AWS EFS provides a scalable, fully managed shared file storage solution for workloads that require concurrent access from multiple pods. By following the steps above, you can quickly configure the EFS CSI driver, create shared storage volumes, and deploy workloads that leverage persistent shared file systems. Always refer to the official Kubernetes 1.32 and AWS EFS CSI driver documentation for the latest compatibility and feature updates.
