
Javier Sepúlveda


Deploying an AWS EKS and Velero backup using Helm provider with Terraform. Kubernetes Series - Episode 3

Cloud people!

In the last episode, we covered the steps to set up the first application within Kubernetes while ensuring persistent storage for the application using EBS and some raw manifests.

In this episode, the focus is on deploying Velero within Kubernetes to back up and restore all objects in the cluster, or a filtered subset of them.

Requirements

Let's see how we can do this using Terraform and a new module that deploys Helm charts with Terraform.

Reference Architecture

The reference architecture runs within a single node.

velero architecture

Step 1.

Adding the eks-blueprints-addons module allows the creation of resources with the Helm or Kubernetes provider.

This is the link to all the Terraform code in the episode3 branch.

GitHub: segoja7 / EKS

Deployments for EKS

Remember that in episode 1 an EKS cluster was created from scratch, and that code has been reused in each episode.

This is the code to deploy Velero using the eks-blueprints-addons module.

It is necessary to add providers to connect to and create resources inside the EKS cluster.

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--profile", local.profile]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the awscli to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--profile", local.profile]
    }
  }
}
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0" #ensure to update this to the latest/desired version

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn


  enable_velero = true
  velero = {
    s3_backup_location = "arn:aws:s3:::bucket-s3-terraform-bucket/velero-test"
  }

  tags = {
    Environment = "dev"
  }
}

This module configures everything and you don't need to worry about anything, except for creating an S3 bucket and adding it to the code.
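
As a reference, the backup bucket itself can also be managed with Terraform. A minimal sketch, assuming the bucket name used above in s3_backup_location (the tags are illustrative):

resource "aws_s3_bucket" "velero_backups" {
  # Must match the bucket referenced in s3_backup_location above.
  bucket = "bucket-s3-terraform-bucket"

  tags = {
    Environment = "dev"
  }
}

# Keep the backup bucket private.
resource "aws_s3_bucket_public_access_block" "velero_backups" {
  bucket                  = aws_s3_bucket.velero_backups.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}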

I tested it and it really works!

But in this case more control is needed, so here is another example with more manual steps that is more configurable for different use cases.

Step 2.

The same setup as in Step 1, but deploying Velero through the helm_releases option.

In this case the Helm chart is stored locally inside the project, but it is also possible to use remote URLs for charts (see the sketch after the module block below).

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0" #ensure to update this to the latest/desired version

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  helm_releases = {
    velero = {
      name             = "velero"
      namespace        = "velero"
      create_namespace = true
      chart            = "./helm-charts/helm-charts-velero-5.2.0/velero"
      values           = [templatefile("./helm-charts/helm-charts-velero-5.2.0/velero/values.yaml", { ROLE = aws_iam_role.velero-backup-role.arn })]
    }
  }

  tags = {
    Environment = "dev"
  }
}
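
If you prefer not to store the chart locally, the same helm_releases entry can point to a remote chart repository instead. A sketch, assuming the official vmware-tanzu chart repository and that this module version accepts repository and chart_version attributes (check the module documentation for your version):

  helm_releases = {
    velero = {
      name             = "velero"
      namespace        = "velero"
      create_namespace = true
      repository       = "https://vmware-tanzu.github.io/helm-charts"
      chart            = "velero"
      chart_version    = "5.2.0"
      values           = [templatefile("./helm-charts/helm-charts-velero-5.2.0/velero/values.yaml", { ROLE = aws_iam_role.velero-backup-role.arn })]
    }
  }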

Step 3.

It is necessary to create a policy and a role to attach to the Velero service account.

This is the link to the official Velero documentation.

In the locals file, a local.cleaned_issuer_url was created to strip the https:// prefix from the EKS OIDC issuer URL.
It is used in the role's trust relationship.

locals {
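  # The issuer URL looks like https://oidc.eks.<region>.amazonaws.com/id/<ID>;
  # removing the https:// prefix leaves the format the IAM trust policy expects.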
  cleaned_issuer_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
}
resource "aws_iam_policy" "velero-backup" {
  name = "velero-backup-policy-${module.eks.cluster_name}"

  policy = jsonencode(
    {
      "Version" : "2012-10-17",
      "Statement" : [
        {
          "Effect" : "Allow",
          "Action" : [
            "ec2:DescribeVolumes",
            "ec2:DescribeSnapshots",
            "ec2:CreateTags",
            "ec2:CreateVolume",
            "ec2:CreateSnapshot",
            "ec2:DeleteSnapshot"
          ],
          "Resource" : "*"
        },
        {
          "Effect" : "Allow",
          "Action" : [
            "s3:GetObject",
            "s3:DeleteObject",
            "s3:PutObject",
            "s3:AbortMultipartUpload",
            "s3:ListMultipartUploadParts"
          ],
          "Resource" : [
            "arn:aws:s3:::bucket-s3-terraform-nequi/*"
          ]
        },
        {
          "Effect" : "Allow",
          "Action" : [
            "s3:ListBucket"
          ],
          "Resource" : [
            "arn:aws:s3:::bucket-s3-terraform-nequi/*",
            "arn:aws:s3:::bucket-s3-terraform-nequi"
          ]
        }
      ]
    }
  )
  tags = local.tags
}

resource "aws_iam_role" "velero-backup-role" {
  name = "velero-backup-role-${module.eks.cluster_name}"
  assume_role_policy = jsonencode(
    {
      "Version" : "2012-10-17",
      "Statement" : [
        {
          "Effect" : "Allow",
          "Principal" : {
            Federated = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${local.cleaned_issuer_url}"
          },
          "Action" : "sts:AssumeRoleWithWebIdentity",
          "Condition" : {
            "StringEquals" : {
              "${local.cleaned_issuer_url}:sub" = "system:serviceaccount:velero:velero-server"
              "${local.cleaned_issuer_url}:aud" = "sts.amazonaws.com"
            }
          }
        }
      ]
    }
  )
  tags = local.tags
}

resource "aws_iam_role_policy_attachment" "velero_policy_attachment" {
  policy_arn = aws_iam_policy.velero-backup.arn
  role       = aws_iam_role.velero-backup-role.name
}


Step 4.

In this step it is necessary to add the Helm chart locally; to do that, go to the repository and download the chart.

This is the link to Velero.

Inside the project, a folder was created with the Velero Helm chart.

helm charts locally

Step 5.

This step only references the information Velero needs, such as the role for permissions, the S3 bucket ARN, and the other settings in values.yaml.
The first part uses the path to the local Velero Helm chart; additionally, using the templatefile function it is possible to pass variables into the values.yaml file, in this case the ARN of the role for permissions.

Variable ROLE inside values.yaml:

serviceAccount:
  server:
    create: true
    name: velero-server
    annotations:
      eks.amazonaws.com/role-arn: ${ROLE}
    labels:

Using the ROLE variable and assigning the ARN value from the role:

  helm_releases = {
    velero = {
      name             = "velero"
      namespace        = "velero"
      create_namespace = true
      chart            = "./helm-charts/helm-charts-velero-5.2.0/velero"
      values           = [templatefile("./helm-charts/helm-charts-velero-5.2.0/velero/values.yaml", { ROLE = aws_iam_role.velero-backup-role.arn })]
    }
  }

Configuring values.yaml to deploy Velero.
There are many values, but in this case only the basic ones were configured.

You can check the values.yaml in the repository on the episode3 branch.

GitHub: segoja7 / EKS

Deployments for EKS
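
Extra values can also be appended without editing the chart's values.yaml, for example to define a scheduled backup. A minimal sketch, assuming the chart's schedules value (the exact structure may differ between chart versions, so verify it against the chart's values.yaml):

  helm_releases = {
    velero = {
      name             = "velero"
      namespace        = "velero"
      create_namespace = true
      chart            = "./helm-charts/helm-charts-velero-5.2.0/velero"
      values = [
        templatefile("./helm-charts/helm-charts-velero-5.2.0/velero/values.yaml", { ROLE = aws_iam_role.velero-backup-role.arn }),
        # Hypothetical daily backup of the test namespace, appended as a second values document.
        yamlencode({
          schedules = {
            "daily-test" = {
              schedule = "0 3 * * *"
              template = {
                includedNamespaces = ["test"]
                ttl                = "240h"
              }
            }
          }
        })
      ]
    }
  }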

Step 6.

Deploying Velero.

With everything configured, it is possible to deploy Velero.

deploying velero

Step 7.

Checking the configuration.

kubectl get all -n velero

kubectl get all -n velero

kubectl get backupstoragelocation -n velero

kubectl get backupstoragelocation -n velero

kubectl get volumesnapshotlocation -n velero

kubectl get volumesnapshotlocation -n velero

kubectl get sa velero-server  -n velero -o yaml

kubectl get sa velero-server  -n velero -o yaml

Step 8.

Testing backups.
For this demo there is a test namespace with a pod running nginx.
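
The namespace and pod were created outside the Terraform code, but since the kubernetes provider is already configured, an equivalent test workload could be declared in Terraform. A hedged sketch (not part of the repository):

# Hypothetical test namespace and nginx pod, managed by Terraform.
resource "kubernetes_namespace_v1" "test" {
  metadata {
    name = "test"
  }
}

resource "kubernetes_pod_v1" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace_v1.test.metadata[0].name
    labels = {
      app = "nginx"
    }
  }

  spec {
    container {
      name  = "nginx"
      image = "nginx:1.25"
    }
  }
}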

kubectl get all -n test

kubectl get all -n test

Creating a backup
Create a backup of this namespace using the Velero CLI.

velero backup create backup --include-namespaces test

velero backup create backup --include-namespaces test

Check the backup in the S3 bucket.

backup velero in s3 bucket

Describe the backup.

velero backup describe backup

velero backup describe backup

Delete the test namespace in order to restore it from the backup.

kubectl delete ns test

kubectl delete ns test

Restore the namespace from the backup.

velero restore create --from-backup backup

velero restore create --from-backup backup

Verify that the namespace is restored

kubectl get all -n test

kubectl get all -n test

In conclusion, Velero was installed successfully with least privilege, using a service account and IAM roles in AWS; additionally, backups were tested by creating and restoring them.

Successful!!

Top comments (2)

LUIZ CARLOS PeGo

How to perform backups of files generated by pg_dump?

Javier Sepúlveda • Edited

You can use hooks (velero.io/docs/v1.14/backup-hooks/) or a traditional backup with a cron job that runs pg_dump periodically and stores the dumped data in an S3 bucket, for example!