
My 3 Different Attempts at Creating an EKS K8s Cluster

For the past few years, I had just been using Kubernetes. Only recently did I have to do some experimental work in a sandboxed Kubernetes environment, and the first thing I needed to do was build the sandbox myself.

There was no hard requirement on how to build it, but since our team was already using AWS, EKS was the obvious choice, at least to me: rather than rolling my own, I almost always prefer a managed service so I don't have to worry about maintenance, some of the testing, and most of the security issues.

For me, trade-offs such as vendor lock-in and a marginally higher cost are acceptable.

The next consideration after deciding on EKS was whether to back the nodes with Fargate or with plain auto-scaling EC2 instances. Fargate was attractive to me because of its serverless vibe: I won't need the POC to be up all day long, so on-demand pricing is a good fit. But again, this was a soft requirement, because I would only need a few micro-to-small nodes, so the cost would be negligible either way.

[Edit] Eventually I went back to plain EC2 from Fargate, because:

  1. Fargate requires more steps to set up in some of the cluster-building options I list below.
  2. Fargate requires more steps to add additional controllers (such as the AWS Load Balancer Controller) to an established cluster.

Without further ado, here are the three ways I experimented with creating a new EKS cluster:

Experiment 1: Terraform with official AWS resource types

When using Terraform, my usual preference is to use as many official resources as possible before resorting to 3rd-party modules.

For EKS, I noticed the official AWS resource is https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster

I created a local module starting from this aws_eks_cluster resource:

resource "aws_eks_cluster" "fargate-eks-cluster" {
  name     = "fargate-eks-cluster-${var.env}"
  role_arn = aws_iam_role.cluster-role.arn

  # Refer to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster#basic-usage
  ...
}

Then I realized it depends on a few other resources, such as aws_iam_role and aws_iam_role_policy_attachment.

Fortunately, the links above include basic examples for all of them.
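Pieced together from those examples, the cluster's IAM dependencies look roughly like the sketch below. The resource and role names here are my own; AmazonEKSClusterPolicy is the AWS-managed policy the cluster role needs:

# Role the EKS control plane assumes (the naming is mine).
resource "aws_iam_role" "cluster-role" {
  name = "fargate-eks-cluster-role-${var.env}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

# Attach the AWS-managed policy that EKS clusters require.
resource "aws_iam_role_policy_attachment" "cluster-policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.cluster-role.name
}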

To try Fargate, I referred to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_fargate_profile and created three Fargate profiles: one for the default namespace, one for our working namespace, and one specifically for CoreDNS, as instructed.

resource "aws_eks_fargate_profile" "fargate-profile-default-ns" {
  cluster_name           = aws_eks_cluster.fargate-eks-cluster.name
  fargate_profile_name   = "fargate-profile-default-ns-${var.env}"
  pod_execution_role_arn = aws_iam_role.fargate-role.arn
  subnet_ids             = var.private_subnets

  selector {
    namespace = "default"
  }
}

resource "aws_eks_fargate_profile" "fargate-profile-coredns" {
  cluster_name           = aws_eks_cluster.fargate-eks-cluster.name
  fargate_profile_name   = "fargate-profile-coredns-${var.env}"
  pod_execution_role_arn = aws_iam_role.fargate-role.arn
  subnet_ids             = var.private_subnets

  selector {
    namespace = "kube-system"
    labels    = { "k8s-app" = "kube-dns" }
  }
}

resource "aws_eks_fargate_profile" "fargate-profile-muse" {
  cluster_name           = aws_eks_cluster.fargate-eks-cluster.name
  fargate_profile_name   = "fargate-profile-muse-${var.env}"
  pod_execution_role_arn = aws_iam_role.fargate-role.arn
  subnet_ids             = var.private_subnets

  selector {
    namespace = "muse"
  }
}

Then I created the shared aws_iam_role (the pod execution role) that all of them reference.
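That pod execution role is what Fargate uses to pull container images and ship logs. A minimal sketch, again with my own naming, could be:

# Shared pod execution role for all three Fargate profiles (naming is mine).
resource "aws_iam_role" "fargate-role" {
  name = "fargate-pod-execution-role-${var.env}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
    }]
  })
}

# AWS-managed policy that grants the permissions Fargate pods need.
resource "aws_iam_role_policy_attachment" "fargate-policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
  role       = aws_iam_role.fargate-role.name
}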

And those are all the resources I needed in this local module. I referenced the module from my root module and ran terraform apply.
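Wiring it up from the root module is then an ordinary module call. The local path and the VPC module reference below are hypothetical stand-ins for whatever your project uses:

# Hypothetical root-module usage of the local EKS module.
module "fargate_eks" {
  source = "./modules/fargate-eks"   # wherever the local module lives

  env             = "dev"
  private_subnets = module.vpc.private_subnets   # assumes an existing VPC module
}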

The Terraform steps looked successful, and after the cluster creation I was able to verify that kubectl commands do connect to the right API server.
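The connectivity check itself is the usual kubeconfig dance; the cluster name and region here are from my setup, so adjust to yours:

# Point kubectl at the new cluster, then sanity-check the connection.
aws eks update-kubeconfig --name fargate-eks-cluster-dev --region us-east-1
kubectl cluster-info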

However, when I tried to create K8s resources from my pre-existing manifest files, things didn't work as expected: the Overview tab in the AWS console didn't show any live node.

Experiment 2: With eksctl CLI commands

I initially followed the official Getting started with eksctl guide to create my cluster, with the --fargate option.
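For reference, the command from the guide boils down to a one-liner; the name and region below are placeholders:

# Creates the control plane, a default Fargate profile covering the
# "default" and "kube-system" namespaces, and the supporting VPC/IAM.
eksctl create cluster --name sandbox-cluster --region us-east-1 --fargate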

At the end of the eksctl create cluster --fargate command, a Fargate profile had been created and several configurations had been done under the hood. That already put me at least a couple of steps ahead of where I ended up in Experiment 1.

Unfortunately, the out-of-the-box Fargate profile apparently doesn't work for non-default namespaces, and the networking wasn't quite right.

Despite following the troubleshooting steps as well as creating custom Fargate profiles, I still couldn't get any of my test pods into Running status. I believe I wasn't far from getting everything working, but decided to try something else with the remainder of my time.
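At least creating a custom profile for a non-default namespace is also a one-liner; the cluster and profile names below are placeholders:

# Add a Fargate profile so pods in the "muse" namespace can be scheduled.
eksctl create fargateprofile --cluster sandbox-cluster --name fp-muse --namespace muse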

Experiment 3: Terraform with community modules

In search of a more turnkey solution, I looked into the community Terraform EKS module: https://github.com/terraform-aws-modules/terraform-aws-eks

As acknowledged in its own issue tracker, the complexity of this module has grown very high. I would usually avoid a heavy module like this and stay with plain resources, had I not failed in Experiment 1.

My main.tf file was just a combination of its working examples, especially https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/basic, except that the module source has to be

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
Enter fullscreen mode Exit fullscreen mode

instead of the ../.. used in the example.
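Fleshed out, my usage looked roughly like the sketch below. The exact input names depend on the module's major version (they changed in v18), so treat this as an assumption to check against whichever version you pin:

# Sketch against a v18+ release of terraform-aws-modules/eks/aws.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"   # pin; input names change between major versions

  cluster_name    = "sandbox-eks-${var.env}"
  cluster_version = "1.21"

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnets

  # Plain auto-scaling EC2 nodes, per the [Edit] note above.
  eks_managed_node_groups = {
    default = {
      min_size       = 1
      max_size       = 3
      desired_size   = 2
      instance_types = ["t3.small"]
    }
  }
}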

The end result turned out to be really good, and the generated cluster met all the needs of my sandbox (though I still had to install the AWS Load Balancer Controller separately for Ingress).

My Conclusion

As it turned out, my smoothest cluster-creation experience was using the community Terraform EKS module and following its examples. It works almost out of the box.

If Terraform is not your thing, eksctl would also save you a lot of initial legwork by creating the default underlying resources behind the scenes with its subcommands. Just keep in mind that you still need to tinker with the cluster after the creation is done.
