DEV Community



Use Cluster API to provision Kubernetes clusters anywhere!


Learn how to use Cluster API to provision multiple EKS clusters.

What is Cluster API?

Provisioning Kubernetes clusters is never an easy task. When there are 1000+ clusters, you definitely want a standardised approach to ease your life. If you share this concern, you've come to the right place: Cluster API is what you need!

Some of you might know tools like kOps or Kubespray. You can think of Cluster API as an alternative to them, but more powerful!

According to the official page, "Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters."

Here are some highlighted points of Cluster API:

  • Pure YAML-based, Kubernetes style. Super handy.
  • Supports all mainstream infrastructure providers. Provision your Kubernetes clusters in cloud and on-premise environments from the same place.
  • Managed Kubernetes services support: AWS EKS, Azure AKS and GCP GKE are all supported.
  • Bring your own infrastructure. Reuse existing infrastructure and focus on provisioning Kubernetes clusters.

It is awesome, right?

To demonstrate how to use Cluster API, I am going to show you how to use it to create AWS EKS clusters.


Before you continue, there are some concepts you need to understand first.

Infrastructure Provider
The Cluster API project defines a set of APIs in the form of Custom Resource Definitions (CRDs). Each provider has to implement these APIs to specify how its infrastructure is provisioned.

You can run the clusterctl config repositories command to get a list of supported providers and their repository configurations.

For AWS, its implementation is Cluster API Provider AWS (CAPA).
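As an illustration of these CRDs, a minimal Cluster object mostly just wires together provider-specific resources. The sketch below assumes CAPA's EKS types and uses placeholder names; check your provider's reference for the exact API versions:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster                 # placeholder name
spec:
  # The control plane is delegated to a provider-specific resource;
  # for EKS this is CAPA's AWSManagedControlPlane.
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane
    name: demo-cluster-control-plane
  infrastructureRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane
    name: demo-cluster-control-plane
```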

Management Cluster
A Kubernetes cluster that manages the lifecycle of Workload Clusters. In this cluster, you use Cluster API to further provision Workload Clusters in any infrastructure provider.

Creating the Management Cluster is a chicken-and-egg problem. There are two methods available:

  • Find an existing Kubernetes cluster and transform it into a Management Cluster.
  • The "Bootstrap & Pivot" method. Bootstrap a Kubernetes cluster as a temporary Management Cluster, use Cluster API inside it to provision the target Management Cluster, then move all the Cluster API resources from the temporary Management Cluster to the target one.

To show you the comprehensive usage of Cluster API, I am going to use the "Bootstrap & Pivot" method, which also teaches you how to migrate a Management Cluster.

Workload Cluster
A Kubernetes cluster whose lifecycle is managed by a Management Cluster.



Obviously, there are some tools you need to install first:

  • kubectl - Everyone should know it so well :)
  • Kind - A tool for running local Kubernetes clusters using Docker container "nodes".
  • clusterctl - CLI tool of Cluster API (CAPI)
  • clusterawsadm - CLI tool of Cluster API Provider AWS (CAPA)
brew install kubectl clusterctl kind

# clusterawsadm is distributed via GitHub releases; adjust the OS/arch suffix as needed
curl -L -o /usr/local/bin/clusterawsadm \
  https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest/download/clusterawsadm-darwin-amd64
chmod +x /usr/local/bin/clusterawsadm

Setup your AWS credential

I won't talk much about it, as there are multiple ways to set up your credentials.

However, there are two things to keep in mind:

  • Your credentials must have administrative permissions, since creating an EKS cluster involves creating IAM resources.
  • Use permanent credentials. They will be stored inside the Management Cluster, which uses them to monitor the status of Workload Clusters.
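As one common approach, you can export long-lived IAM user credentials as environment variables, which the clusterawsadm commands later in this tutorial will pick up. The values below are placeholders, not real keys:

```shell
# Placeholders only -- substitute the access key pair of an IAM user
# with administrative permissions.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretAccessKey"
export AWS_REGION="us-east-1"
```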


All the code used can be found in my GitHub repository.

Bootstrap a temporary Management Cluster

We will use Kind to create the temporary Management Cluster.

Run these commands to create the cluster.

kind create cluster --name kind
kubectl config use-context kind-kind

You can use alternative distributions, such as minikube, MicroK8s, k3s or k0s. It is up to your preference; the concept is pretty much the same.

Create AWS IAM resources

Create or update an AWS CloudFormation stack for bootstrapping Kubernetes Cluster API and Kubernetes AWS Identity and Access Management (IAM) permissions.

export AWS_REGION=

clusterawsadm bootstrap iam create-cloudformation-stack --region ${AWS_REGION}

(Optional) Create AWS network resources

EKS is based on several network infrastructure elements.

You can:

  • Let CAPA provision the network infrastructure for you. Good for people who want to experiment with Cluster API.
  • Bring your own network infrastructure. Basically, you have to create a VPC, NAT gateways, Internet gateways, route tables and subnets.

No worries, I will cover both approaches in this tutorial.
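For the bring-your-own-infrastructure route, the existing network is referenced from the control plane spec. Here is a hedged sketch (field names follow the CAPA AWSManagedControlPlane API as I understand it; all IDs and names are placeholders):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
  name: byoi-control-plane              # placeholder name
spec:
  region: us-east-1
  network:
    vpc:
      id: vpc-0123456789abcdef0         # placeholder: your existing VPC
    subnets:
      - id: subnet-0aaaaaaaaaaaaaaaa    # placeholders: your existing subnets
      - id: subnet-0bbbbbbbbbbbbbbbb
```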

Install CAPI and CAPA to your Management Cluster

Use clusterctl init to install the core Cluster API resources, along with the CAPA resources, in your Kind cluster.

Since managed node group creation is an experimental feature, you need to enable it before installation.

export AWS_REGION=

# Allow CAPA to create EKS managed node groups
export EXP_MACHINE_POOL=true
export EKSEnableIAM=true

# Create the base64 encoded AWS credentials using clusterawsadm.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile --region ${AWS_REGION})

clusterctl init --infrastructure aws

Now the Management Cluster will use the credentials in AWS_B64ENCODED_CREDENTIALS to provision EKS resources.

To enable experimental features in an existing Management Cluster, please visit here for the instructions.

Security Trick

Since the credential is already loaded into the CAPA controller, you can delete it by using clusterawsadm controller zero-credentials.

When your CAPA controller is restarted, or your Management Cluster is migrated to a new Kubernetes cluster, you need to restore the credentials using these commands:

export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
clusterawsadm controller update-credentials
clusterawsadm controller rollout-controller

Create the target Management Cluster in AWS EKS

Keep two things in mind:

  1. Choose an EKS-supported Kubernetes version.
    As of writing, the latest version that EKS supports is 1.21.

  2. Name your EKS cluster according to the EKS naming rules.
    You can use any of the following characters: Unicode letters, digits, hyphens and underscores.
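As a quick sanity check for rule 2, here is a hedged one-liner that validates a candidate name against a simplified (ASCII-only) version of that character set; it does not check length limits:

```shell
# Accept only ASCII letters, digits, hyphens and underscores (simplified rule)
name="byoi-mgnt-cl"
if printf '%s' "$name" | grep -Eq '^[A-Za-z0-9_-]+$'; then
  echo "valid"     # prints "valid" for this name
else
  echo "invalid"
fi
```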

I have prepared two yaml files for two options respectively:

  • I want Cluster API to manage everything for me -> use managed-management-cluster.yaml
  • I want to bring my own infrastructure -> use byoi-management-cluster.yaml

Note that you may need to modify some fields in the YAML file, such as the AWS account ID, VPC ID and subnet IDs.

Run kubectl apply -f byoi-management-cluster.yaml to provision your Management Cluster.

Now we need to get the kubeconfig file to access our newly created Management Cluster.

Cluster API provides a native way to export the kubeconfig file. However, there is a minor incompatibility, which we can fix with sed.

export CLUSTER_API_CLUSTER_NAME=byoi-mgnt-cl

kubectl --namespace=capi get secret ${CLUSTER_API_CLUSTER_NAME}-user-kubeconfig \
   -o jsonpath={.data.value} | base64 --decode \
   > ${CLUSTER_API_CLUSTER_NAME}.kubeconfig

sed -i '' -e 's/v1alpha1/v1beta1/' ${CLUSTER_API_CLUSTER_NAME}.kubeconfig


If you created your Management Cluster directly in your desired target Kubernetes cluster, you can stop here.

If you are using a temporary Management Cluster, it is now time to move the Cluster API resources from the temporary bootstrap cluster to the target EKS cluster.

Indeed, the Cluster API resources we are moving describe the target Management Cluster itself.

# Remember to enable experimental features
export EXP_MACHINE_POOL=true
export EKSEnableIAM=true

# Before you move the resources, you need to install Cluster API
# in the target Management Cluster first.
clusterctl init --infrastructure aws --kubeconfig ${CLUSTER_API_CLUSTER_NAME}.kubeconfig 

clusterctl move -n capi --to-kubeconfig=${CLUSTER_API_CLUSTER_NAME}.kubeconfig 

Clusterctl move

If you create Cluster API managed clusters across multiple namespaces, you need to run the clusterctl move command for each namespace.

For example:

clusterctl move -n ${OTHER_NAMESPACE} --to-kubeconfig=${CLUSTER_API_CLUSTER_NAME}.kubeconfig 

(Bonus) Create Workload Clusters

Remember that the goal of using Cluster API is provisioning, upgrading, and operating multiple Kubernetes clusters?

In my repo you can see there is a file called byoi-workload-cluster.yaml. Now try to provision an EKS Workload Cluster from the EKS Management Cluster by running kubectl apply -f byoi-workload-cluster.yaml.

Notice that the EKS version of the Workload Cluster is v1.20. After you provision the Workload Cluster, you can change the version to v1.21 and apply the file again. Upgrading your EKS cluster is that simple!
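Conceptually the upgrade is just a one-field change in the control plane manifest, something like this sketch (resource and field names assume CAPA's AWSManagedControlPlane; the name is a placeholder):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
  name: byoi-workload-cl-control-plane   # placeholder name
spec:
  version: v1.21   # was v1.20; bump the value and kubectl apply the file again
```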

Clean up resources

To delete the temporary Management Cluster, use kind delete cluster --name kind.

To delete a Cluster API provisioned cluster, use kubectl delete cluster {cluster name}.

To delete an EKS managed node group, use kubectl delete machinepool {node group name} (managed node groups are represented as MachinePool resources).


Congratulations! You have taken the first step toward Cluster API. Feel free to do more cool stuff, like provisioning clusters with other providers.

If you like my article, please give me some reactions, or leave me a message below. Thank you.

Reading materials

CAPI - Concepts
CAPA - Enabling EKS managed Machine Pools
CAPI - Reference
CAPI free course
