Javier Sepúlveda
Implementing AWS EKS with EFS for dynamic volume provisioning using Terraform. Kubernetes Series - Episode 5

Cloud people!

In the last episode of this series, we covered the steps to configure Karpenter using Helm charts within Kubernetes to scale the cluster to meet demand.

In this episode, the focus is on provisioning dynamic volumes with EFS using Helm and Terraform.

Requirements

Let's see how we can do this using Terraform and raw manifests.

Reference Architecture

The reference architecture operates within a single node.

Dynamic volume architecture

Let's see how we can do this using Terraform.

Step 1.

It is necessary to configure the providers for this deployment; you can see the provider configurations in the file versions.tf.

This is the link to all the Terraform code in the branch episode5.

GitHub: segoja7 / EKS (Deployments for EKS)
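
For reference, a minimal versions.tf could look like the sketch below; the version constraints are illustrative, check the branch episode5 for the exact ones used.

terraform {
  required_version = ">= 1.3"

  required_providers {
    # Illustrative constraints; see versions.tf in the repository for the real values
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.9"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.20"
    }
  }
}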

Step 2.

In this step, the EFS file system and the IAM role are deployed; the Helm chart needs the EFS ID and the role to create the StorageClass correctly.

Link to the Terraform EFS module used to deploy the resource:
Registry

module "efs" {
  source  = "terraform-aws-modules/efs/aws"
  version = "1.6.0"

  name                            = "efs-testing"
  encrypted                       = true
  performance_mode                = "generalPurpose"
  throughput_mode                 = "provisioned"
  provisioned_throughput_in_mibps = 25
  enable_backup_policy            = false
  create_backup_policy            = false
  attach_policy                   = true
  policy_statements = [
    {
      sid    = "connect"
      effect = "Allow"
      actions = [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientRootAccess",
        "elasticfilesystem:ClientWrite",
      ]
      principals = [
        {
          type        = "AWS"
          identifiers = ["*"]
        }
      ]
    }
  ]

  lifecycle_policy = {
    transition_to_ia = "AFTER_90_DAYS"
  }

  mount_targets = {
    for i in range(length(module.vpc.private_subnets)) :
    module.vpc.private_subnets[i] => {
      subnet_id = module.vpc.private_subnets[i]
#      security_groups = [module.security-group.security_group_id]
    }
  }
  security_group_description = "EFS security group"
  security_group_vpc_id      = module.vpc.vpc_id
  security_group_rules = {
    vpc = {
      # relying on the defaults provided for EFS/NFS (2049/TCP + ingress)
      description = "NFS ingress from VPC private subnets"
      cidr_blocks = module.vpc.private_subnets_cidr_blocks
    }
  }
}

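After applying, you can confirm that the file system and its mount targets exist with the AWS CLI; the file system ID below is only an example.

# Look up the file system ID by its name (example name from the module above)
aws efs describe-file-systems --query "FileSystems[?Name=='efs-testing'].FileSystemId" --output text

# List the mount targets created in the private subnets (example ID)
aws efs describe-mount-targets --file-system-id fs-0123456789abcdef0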

Additionally, an IAM role with the permissions to create the dynamic volumes is necessary.

resource "aws_iam_role" "efs_controller_role" {
  name = "role-efsdriver-${module.eks.cluster_name}"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Federated = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${local.cleaned_issuer_url}"
        },
        Action = "sts:AssumeRoleWithWebIdentity",
        Condition = {
          StringEquals = {
            "${local.cleaned_issuer_url}:sub" = "system:serviceaccount:kube-system:efs-csi-controller-sa"
            "${local.cleaned_issuer_url}:aud" = "sts.amazonaws.com"
          }
        }
      }
    ]
  })
  tags = local.tags
}

resource "aws_iam_role_policy_attachment" "efs_controller_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy"
  role       = aws_iam_role.efs_controller_role.name
}
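
The local local.cleaned_issuer_url is not shown here; it is assumed to be the cluster's OIDC issuer URL with the https:// prefix removed so it matches the IAM OIDC provider name. A minimal sketch, assuming the terraform-aws-modules/eks module:

locals {
  # Strip the protocol so the value matches the OIDC provider name used in IAM
  cleaned_issuer_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
}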

Step 3.

With the EFS file system and the role created, in this step the AWS module eks_blueprints_addons is configured. This module handles the Helm deployment and the creation of the EFS CSI controller. In this case, values.yaml receives as parameters the role_arn for the service account with the necessary permissions and the EFS ID that will be associated with the StorageClass.

In this case, the Helm chart was downloaded from the upstream repository and is referenced from a local path (./helm-charts/aws-efs-csi-driver).

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0" #ensure to update this to the latest/desired version

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  helm_releases = {
    efs-csi-driver = {
      name             = "efs-csi-driver"
      namespace        = "kube-system"
      create_namespace = true
      chart            = "./helm-charts/aws-efs-csi-driver"
      values = [
        templatefile("./helm-charts/aws-efs-csi-driver/values.yaml", {
          role_arn = aws_iam_role.efs_controller_role.arn,
          efs_id   = module.efs.id
        })
      ]
    }
  }
}
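
Inside values.yaml, the role is wired to the controller's service account through an IRSA annotation. A minimal sketch of that part, assuming the standard aws-efs-csi-driver chart layout (the exact keys can vary between chart versions):

controller:
  serviceAccount:
    create: true
    name: efs-csi-controller-sa
    annotations:
      # The role created in Step 2 is injected by templatefile()
      eks.amazonaws.com/role-arn: ${role_arn}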

In the past I had problems with the GID allocator, something related to this issue.

That is why I prefer to assign GID ranges in the StorageClass, taking into account that there is a quota of 1,000 access points per file system.

Snippet of values.yaml:

storageClasses:
- name: efs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  provisioner: efs.csi.aws.com
  parameters:
    provisioningMode: efs-ap
    fileSystemId: ${efs_id}
    directoryPerms: "700"
    gidRangeEnd: "2000"
    gidRangeStart: "1000"
  reclaimPolicy: Delete
  volumeBindingMode: Immediate

Step 4.

Great, at this point all the components are deployed.

Controllers of EFS.
EFS-CSI-Controller

SA of EFS.

SA EFS

SC of EFS.

SC EFS
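
If you prefer the CLI over screenshots, the same components can be verified with kubectl; the labels and names below follow the chart defaults and may differ in your cluster.

# EFS CSI controller pods
kubectl get pods -n kube-system -l app=efs-csi-controller

# Service account annotated with the IAM role
kubectl describe sa efs-csi-controller-sa -n kube-system

# StorageClass created from values.yaml
kubectl get storageclass efs-sc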

Step 5.

For testing purposes, the namespace is created manually.

kubectl create ns episode5

ns

Step 6.

Now that the namespace is created, we will create the app that will use the PVC.

StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-db
  namespace: episode5
spec:
  serviceName: mysql-svc
  replicas: 1
  selector:
    matchLabels:
      app: mysql-db
  template:
    metadata:
      labels:
        app: mysql-db
    spec:
      securityContext:
        runAsUser: 2000
        runAsGroup: 2000
        fsGroup: 2000
      containers:
      - name: mysql-db
        image: mysql:latest
        ports:
          - containerPort: 3306
        volumeMounts:
          - name: statefulset-dynamicstorage
            mountPath: /var/lib/mysql
        env:
          - name: MYSQL_ROOT_PASSWORD
            value: "segoja7secure!" # This is not recommended, use secrets!
          - name: MYSQL_USER
            value: "user" # This is not recommended, use secrets!
          - name: MYSQL_PASSWORD
            value: "episode2" # This is not recommended, use secrets!
      volumes:
      - name: statefulset-dynamicstorage
        persistentVolumeClaim:
          claimName: statefulset-dynamicstorage

While I was testing, I faced an error related to POSIX users; this is a special case for some applications that need their directory configured with the required permissions.

Please check these links for a better understanding.
Link1
Link2
Link3

POSIX user

    spec:
      securityContext:
        runAsUser: 2000
        runAsGroup: 2000
        fsGroup: 2000

In conclusion, the pod runs all its processes with user ID 2000.

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: statefulset-dynamicstorage
  namespace: episode5
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
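Assuming the two manifests are saved locally (file names are illustrative), they can be applied and the binding verified like this; with volumeBindingMode: Immediate the PVC should bind as soon as the controller creates the access point.

kubectl apply -f pvc.yaml -f statefulset.yaml

# The PVC should report STATUS Bound and a dynamically created PV
kubectl get pvc -n episode5
kubectl get pv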

Step 7.

Now you can perform a storage validation by creating a database.

Applying a port forward to connect to the database using k9s.

Applying port forward

Now you can access it from a client; in this case the client is HeidiSQL, but you can use any client.

table mysql
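
If you are not using k9s, the equivalent kubectl port forward plus some test data would look like the sketch below; the database and table names are just examples.

# Forward local port 3306 to the MySQL pod (StatefulSet ordinal 0)
kubectl port-forward -n episode5 mysql-db-0 3306:3306

-- From the client (HeidiSQL, mysql CLI, etc.)
CREATE DATABASE episode5;
USE episode5;
CREATE TABLE persistence_test (id INT PRIMARY KEY, note VARCHAR(100));
INSERT INTO persistence_test VALUES (1, 'stored on the EFS dynamic volume');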

Step 8.

Recreating the pods of the StatefulSet and enabling the port forward again.

Recreating pods of statefulset
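
To recreate the pod from the CLI, deleting it is enough; the StatefulSet controller brings up a new mysql-db-0 attached to the same PVC.

# Delete the pod and watch the StatefulSet recreate it
kubectl delete pod mysql-db-0 -n episode5
kubectl get pods -n episode5 -w

# Re-enable the port forward once the new pod is Running
kubectl port-forward -n episode5 mysql-db-0 3306:3306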

Connecting to the new pod.

new pod

By connecting back with the HeidiSQL client to the MySQL pod, you can see that the data is persistent: the database and table are stored on the EFS dynamic volume, which shows that the data persists beyond the lifecycle of the pod.

In this phase, a StatefulSet database pod was created using an EFS dynamically provisioned volume.

The following addons have been installed:
• efs-csi

If you have any questions, please leave them in the comments!

Successful!!
