Chathra Serasinghe
Well-architected EKS Cluster using EKS Blueprints

There are various approaches you can follow to deploy Kubernetes clusters in the AWS environment. However, choosing the correct set of tools and configuring them correctly is always tricky, because the Kubernetes ecosystem is growing rapidly and the things you used a few months ago may be obsolete now. As a result, correctly implementing and administering such complex clusters is becoming a nightmare.
On the other hand, customers always prefer to get things done quickly while still adhering to best practices. Therefore AWS has introduced codified reference architectures called EKS Blueprints, which help you create and manage well-architected EKS clusters with less effort and time.

Deploying EKS using EKS blueprints - Architecture

Image reference: AWS blog

You can think of add-ons as something like modules if you are coming from a Terraform background. These add-ons can be added to the cluster to enhance its capabilities. You can also easily grant access to the EKS cluster using Teams.
By default, EKS Blueprints supports two types of Teams:

ApplicationTeam ---> manages workloads
PlatformTeam ---> administers the cluster
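For illustration, here is a minimal sketch of how the two team types might be declared and attached to the blueprint builder. The team names and IAM user ARNs are placeholders of my own, not values from this post:

```typescript
import * as cdk from 'aws-cdk-lib';
import { ArnPrincipal } from 'aws-cdk-lib/aws-iam';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// An application team gets a namespace and RBAC bindings for its workloads.
const appTeam = new blueprints.ApplicationTeam({
    name: 'team-app',
    users: [new ArnPrincipal('arn:aws:iam::<your account no>:user/app-user')],
});

// A platform team is granted cluster-administration access.
const platformTeam = new blueprints.PlatformTeam({
    name: 'team-platform',
    users: [new ArnPrincipal('arn:aws:iam::<your account no>:user/admin-user')],
});

// Teams are registered on the blueprint builder, alongside add-ons.
blueprints.EksBlueprint.builder()
    .teams(appTeam, platformTeam)
    .build(app, 'eks-blueprint-teams');
```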


Let's get our hands dirty.
I am using CDK for this demonstration. However, Terraform can also be used to build a Kubernetes cluster with EKS Blueprints.


First, you have to make sure that you have installed the following.

1) Node.js

sudo apt install nodejs

2) CDK (latest, or at least 2.37.1)

npm install -g aws-cdk@2.37.1

Create a typescript CDK project

cdk init app --language typescript


Install the eks-blueprints NPM package

npm i @aws-quickstart/eks-blueprints


Then set your AWS credentials. Refer to the AWS docs if you don't know how to do it.

Replace your main cdk file with the following code.

Typically, it is located in the project's bin folder (i.e., bin/<root_project_directory>.ts).

#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();
const account = '<your account no>';
const region = '<region>';

const addOns: Array<blueprints.ClusterAddOn> = [
    new blueprints.addons.VpcCniAddOn(),
    new blueprints.addons.CoreDnsAddOn(),
    new blueprints.addons.KubeProxyAddOn(),
    new blueprints.addons.CertManagerAddOn(),
    //Adding CalicoOperatorAddOn to support network policies
    new blueprints.addons.CalicoOperatorAddOn(),
    //Adding MetricsServerAddOn to support metrics collection
    new blueprints.addons.MetricsServerAddOn(),
    //ClusterAutoScalerAddOn will add the resources required for cluster autoscaling
    new blueprints.addons.ClusterAutoScalerAddOn(),
    new blueprints.addons.AwsLoadBalancerControllerAddOn(),
];

blueprints.EksBlueprint.builder()
    .account(account)
    .region(region)
    .addOns(...addOns)
    .build(app, 'eks-blueprint');

CDK bootstrap and deploy

Run the cdk bootstrap command to bootstrap the environment. If you are running CDK for the first time, this command is required to create the CDK bootstrap environment in your AWS account.

Run the cdk deploy command to deploy the EKS cluster and its add-ons.
This process will take roughly 20-30 minutes.
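The two commands sketched out (the aws://account/region argument to cdk bootstrap is optional if your default credentials and region are already configured):

```shell
# One-time per account/region: provision the CDK bootstrap resources
cdk bootstrap aws://<your account no>/<region>

# Synthesize and deploy the EKS cluster stack and its add-ons
cdk deploy
```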

Accessing the Kubernetes Cluster

Once the deployment succeeds, you will get an output similar to the one shown below, which contains the update-kubeconfig command. Executing it updates the ~/.kube/config file and enables us to access the Kubernetes cluster.


Copy the update-kubeconfig command and run it in your terminal.
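As a sketch, the command looks roughly like the following; the exact cluster name and region come from your own deployment output:

```shell
# Merge credentials for the new cluster into ~/.kube/config
aws eks update-kubeconfig --name eks-blueprint --region <region>

# Confirm that kubectl now points at the new cluster
kubectl config current-context
```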


Verifying the Kubernetes resources

You can run the following verification steps to see what has been installed in the Kubernetes cluster.

  • List all namespaces


  • List all pods in the kube-system namespace
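The verification steps above correspond roughly to the following kubectl commands:

```shell
# List all namespaces
kubectl get namespaces

# List all pods in the kube-system namespace; the add-ons deployed above
# (CoreDNS, kube-proxy, metrics-server, cluster-autoscaler, etc.) run here
kubectl get pods -n kube-system
```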

You can also see the running EKS worker instances on the EC2 page in the AWS Management Console.


Verifying the provisioned AWS resources

Note: Even though CDK handles the majority of the hard work for you, it is always worthwhile to review what resources CDK has provisioned. On the CloudFormation page, you can see that the CDK code has created one main stack and two nested stacks.


You may wish to execute the following command to see what AWS resources have been provisioned through each stack.

aws cloudformation describe-stack-resources --stack-name <stack name> --region <region> --query 'StackResources[*].ResourceType' --no-cli-pager

Customize your cluster

By default, EKS Blueprints creates a managed node group for the EKS cluster. If you wish to customize the default configuration of the managed node group, you can do that via MngClusterProvider.
For example, in this scenario I want to increase the desired node count to 2. You can do that by defining the properties for MngClusterProvider as shown below.

import * as eks from 'aws-cdk-lib/aws-eks';

const props: blueprints.MngClusterProviderProps = {
    version: eks.KubernetesVersion.V1_21,
    minSize: 1,
    maxSize: 3,
    //increasing the desired worker node count to 2
    desiredSize: 2,
};
const clusterProvider = new blueprints.MngClusterProvider(props);

//Adding the cluster provider to the EksBlueprint builder.
blueprints.EksBlueprint.builder()
    .account(account)
    .region(region)
    .clusterProvider(clusterProvider)
    .addOns(...addOns)
    .build(app, 'eks-blueprint');

Once you deploy the changes, you will notice that the old instance has been terminated and a new managed node group with two instances has been created.
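A quick way to confirm the new desired size, assuming your kubeconfig already points at the cluster:

```shell
# After the update, two worker nodes should be listed in Ready state
kubectl get nodes
```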

This is not the end; rather, EKS Blueprints gives you a great start to build your EKS cluster faster, along with a plethora of add-ons and customizations to enrich it in various aspects.
