This one is simply the result of a need I had: getting a fully functional, flexible, and secure Amazon EKS cluster set up in under half an hour so I could start testing things as soon as possible. I didn't want to spend too much time developing IaC myself, as there are so many great sources out there that are worth supporting rather than reinventing the wheel. The force is there in the community, and as an AWS Community Builder I came across something that met my expectations, so I'm sharing my experience hoping you may find it helpful too.
It is meant to get you your EKS cluster in the time it takes to go buy yourself a coffee ☕️
This time I will start the other way around and go straight to the solution; context and other details can be found down below.
The only thing to reveal at this stage is that I’m leveraging Amazon EKS Blueprints for Terraform 🚀
MVP
While one can use the flexibility of the EKS Blueprints solution to set things up in many different ways depending on individual requirements, I've got a minimal/initial configuration I start with (sketched right after the list), and it consists of the following:
- the control plane with whitelisted public access,
- the data plane (spot EC2 instances) communicating with the control plane privately,
- all EKS-managed add-ons enabled and using the most recent versions,
- ArgoCD publicly accessible (whitelisted) through an ALB configured with a Route53 domain and an ACM certificate,
- a set of additional add-ons deployed with the use of ArgoCD and following the GitOps approach.
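To make this concrete, here's a minimal sketch of what such a cluster definition can look like, assuming the v4-era EKS Blueprints root module. The version pin, names, CIDR, and instance sizes are placeholders, and the actual code in the repo is more complete:

```hcl
# Sketch only: assumes the v4-era EKS Blueprints root module; adjust to your set-up.
module "eks_blueprints" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.32.1"

  cluster_name    = "my-cluster"
  cluster_version = "1.24"

  vpc_id             = module.vpc.vpc_id
  private_subnet_ids = module.vpc.private_subnets

  # Control plane: public endpoint restricted to a whitelisted CIDR,
  # plus a private endpoint for the data plane.
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"] # your IP range
  cluster_endpoint_private_access      = true

  # Data plane: spot EC2 instances talking to the control plane privately.
  managed_node_groups = {
    spot = {
      node_group_name = "managed-spot"
      capacity_type   = "SPOT"
      instance_types  = ["t3.large", "t3a.large"]
      min_size        = 1
      max_size        = 3
      desired_size    = 2
      subnet_ids      = module.vpc.private_subnets
    }
  }
}
```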
The following extra add-ons are enabled by default (see the sketch after the list):
- Cluster autoscaler
- AWS load balancer controller
- External DNS
- FluentBit
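In Terraform, enabling them is mostly a matter of flipping boolean flags. A hedged sketch, assuming the v4-era kubernetes-addons module of EKS Blueprints, where ArgoCD is told to manage the add-ons the GitOps way (the version pin is a placeholder):

```hcl
# Sketch only: assumes the v4-era EKS Blueprints kubernetes-addons module.
module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.32.1"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  # EKS-managed add-ons from the MVP list above.
  enable_amazon_eks_vpc_cni    = true
  enable_amazon_eks_coredns    = true
  enable_amazon_eks_kube_proxy = true

  # Deploy ArgoCD and let it manage the add-ons below (GitOps).
  enable_argocd         = true
  argocd_manage_add_ons = true

  # The extra add-ons enabled by default in this set-up.
  enable_cluster_autoscaler           = true
  enable_aws_load_balancer_controller = true
  enable_external_dns                 = true
  enable_aws_for_fluentbit            = true
}
```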
Here’s the code that sets everything up.
It's opinionated; however, I believe it's a perfect starting point: you get a fully functional Kubernetes cluster with GitOps support and can immediately start deploying and testing anything you want.
The Terraform code (/terraform) in this repo consists of three components:
- account (optional) — covers S3 bucket for storing logs, etc.
- core — covers networking
- k8s — covers EKS cluster configuration
In addition, there's a K8s configuration (/k8s) covering add-ons that is periodically read by ArgoCD to keep things set up as declared in Git — the GitOps way. If you decide to use that code, simply look for TODOs and provide values relevant to your set-up.
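For illustration, this is roughly how the /k8s folder can be registered with ArgoCD using the v4-era interface, extending the kubernetes_addons sketch above (the repo URL is a placeholder for your fork):

```hcl
# Sketch only: points ArgoCD at the Git folder it should keep in sync.
module "kubernetes_addons" {
  # ... inputs from the earlier sketch ...

  argocd_applications = {
    addons = {
      path               = "k8s"                                 # the /k8s folder
      repo_url           = "https://github.com/<you>/<repo>.git" # your fork
      add_on_application = true # marks it as the add-ons application
    }
  }
}
```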
Finally, after running Terraform and then going to get your well-deserved coffee…
… it’s there, up and running!
It needs a couple of minutes to deploy the add-ons automatically, including the AWS Load Balancer Controller and External DNS, which are responsible for exposing the ArgoCD UI publicly.
Then, you just have to retrieve ArgoCD's initial admin password…
$ kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 -d
n20x3mwZoapDv9JC
…and you can log in 😎
App of Apps
Now there's the first ArgoCD application, called add-ons, to which the enabled core K8s controllers belong.
To learn more about the App of Apps pattern in ArgoCD, check this link.
Logging
Apart from the cluster logs that were enabled, the logs from all the pods also get nicely delivered to CloudWatch and can be easily queried with Logs Insights. See the example below.
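For instance, a query like the one below narrows things down to error lines from the ArgoCD pods, and it can even be stored as IaC. This is a sketch: the log group name and the kubernetes.* metadata fields are assumptions that depend on how Fluent Bit is configured in your set-up:

```hcl
# Sketch only: saves a Logs Insights query; the log group name and the
# kubernetes.* fields depend on your Fluent Bit configuration.
resource "aws_cloudwatch_query_definition" "argocd_errors" {
  name            = "eks/argocd-errors"
  log_group_names = ["/aws/eks/my-cluster/workload"] # placeholder

  query_string = <<-EOT
    fields @timestamp, kubernetes.pod_name, log
    | filter kubernetes.namespace_name = "argocd"
    | filter log like /(?i)error/
    | sort @timestamp desc
    | limit 50
  EOT
}
```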
EKS Blueprints
Now, let’s get to the roots of the solution…
EKS Blueprints helps you compose complete EKS clusters that are fully bootstrapped with the operational software that is needed to deploy and operate workloads. With EKS Blueprints, you describe the configuration for the desired state of your EKS environment, such as the control plane, worker nodes, and Kubernetes add-ons, as an IaC blueprint.
Looks like a sponsored advertisement? Maybe, but it’s not!
What I was after personally is something that:
- follows best practices
- is flexible and extensible
- is being actively supported
First of all, EKS Blueprints turned out to be not just an open-source project supported by a handful of K8s and AWS enthusiasts. It's the result of cooperation between AWS representatives, their partners, and others, as an answer to customers' needs. That, I believe, has defined the direction and shaped the foundation of what the project represents. One of its pillars is that it follows AWS Well-Architected Framework best practices and therefore lets you focus more on the functional side of your set-up.
Secondly, it supports a wide and constantly growing range of Kubernetes add-ons, and it can be deployed either with Terraform or with AWS CDK, probably the two most popular IaC tools out there.
Then, it implements the so-called GitOps bridge, which takes care of configuring resources (e.g. IAM roles and service accounts) to satisfy the add-ons' functional requirements.
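In practice that means, for each add-on talking to AWS APIs, an IAM role that the add-on's service account can assume through the cluster's OIDC provider (IRSA). Below is a simplified sketch of the kind of trust relationship that gets wired up under the hood, with External DNS as the example; the variables stand in for your cluster's actual outputs and the role name is illustrative:

```hcl
# Sketch only: the IRSA trust relationship the GitOps bridge automates.
variable "oidc_provider_arn" {} # e.g. arn:aws:iam::<account>:oidc-provider/...
variable "oidc_provider" {}     # e.g. oidc.eks.<region>.amazonaws.com/id/<id>

data "aws_iam_policy_document" "irsa_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [var.oidc_provider_arn]
    }

    # Only the external-dns service account may assume this role.
    condition {
      test     = "StringEquals"
      variable = "${var.oidc_provider}:sub"
      values   = ["system:serviceaccount:external-dns:external-dns"]
    }
  }
}

resource "aws_iam_role" "external_dns" {
  name               = "external-dns-irsa" # illustrative
  assume_role_policy = data.aws_iam_policy_document.irsa_assume.json
}
```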
Lastly, the already mentioned growing community — people using it for real, professionals battle-testing it on real projects, in many different use cases, and for other purposes — made me realize it is not an ephemeral thing and that enough quality is there.
Caveats
Things one should be aware of when using EKS Blueprints…
There are multiple module calls happening behind the scenes, due to the fact that EKS Blueprints supports quite a wide range of controllers/add-ons, which ultimately makes Terraform initialization last a bit longer (~3 minutes).
When configuring private connectivity between the data plane and the API server endpoint, things sometimes don't work at the beginning. AWS recommends that if your endpoint does not resolve to a private IP address from within the VPC, you should enable public access and then disable it again.
Amazon EKS cluster endpoint access control
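In Terraform terms, that workaround boils down to temporarily flipping the endpoint flags on the cluster module (see the MVP sketch earlier), applying, and then reverting:

```hcl
  # Sketch only: temporarily allow public access so nodes can reach the API,
  # run terraform apply, then flip back to your target configuration.
  cluster_endpoint_public_access  = true
  cluster_endpoint_private_access = true
```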
Deleting K8s namespaces created with Terraform is not that straightforward, so check the link below to get yourself unblocked, just in case.
Troubleshoot terminated Amazon EKS namespaces
Beware! Terraform doesn't know about AWS resources provisioned by K8s controllers running on the cluster. Make sure you tidy up after running terraform destroy, or know what to do to make the controllers delete the relevant resources before destroying your infrastructure. The last thing you want is a couple of ALBs hanging around and costing you $16–$20 per month each.