Michael Levan

Platform Engineering On Kubernetes Part 2: Cluster API

“Using Kubernetes to manage Kubernetes” - this is a fun way of thinking about Platform Engineering and Kubernetes. The whole idea is that it gives you the ability to manage Kubernetes operations, like creating and updating clusters or deploying and managing applications, from another Kubernetes cluster (wondering at this point if anyone has watched Inception).

When it comes to performing CRUD operations for a new Kubernetes cluster on an existing Kubernetes cluster, Cluster API shines.

In part 1 of the Platform Engineering On Kubernetes series, you learned all about how Controllers, CRDs, and Operators work. In part 2, you’ll learn about Cluster API and how it overlaps with Operators.

Prerequisites

To follow along with this blog post from a hands-on perspective, you should have the following:

  1. Read the first blog post in this series, as it ties in with this one. You can find it here.

Why Cluster API

There are many ideas behind why Platform Engineering exists. One of the primary reasons, at a high level, is the ability to have an underlying platform that makes utilizing tools and configurations easier for developers and engineers.

Going off of that idea, let’s think about a Kubernetes cluster. If you aren’t an infrastructure engineer or don’t have a background as a sysadmin, creating a production-grade or production-like Kubernetes cluster can be daunting. Even with a Managed Kubernetes Service, you still have to worry about scaling, networking, storage, writing the code to automate the creation and management of the cluster, and several other factors.

The truth is, if every engineer has to create their own clusters, or deploy to clusters that then require maintenance, upgrades, troubleshooting, and so on, the expectation becomes that every engineer and developer is a Kubernetes expert. That simply won’t work, especially considering that one of Platform Engineering’s biggest goals is the proper division of responsibilities (sometimes called “separation of concerns”) over which engineer or developer is responsible for which tool or platform.

This is where Cluster API comes in from a Platform Engineering perspective. It gives you a more straightforward, declarative method of creating and managing clusters.

Cluster API Breakdown

Creating any type of on-prem infrastructure, cloud infrastructure, or cloud service isn’t easy. There is an absurd number of bells and whistles on top of the various configurations you can use, third-party tools, and best practices.

When it comes to Kubernetes clusters, it’s no different. Although there are various Managed Kubernetes Services like AKS, EKS, and GKE available, creating and managing Kubernetes clusters still isn’t easy. Developers, engineers, and other technology professionals who aren’t experts in Kubernetes don’t have the easiest time getting a cluster up and running.

Cluster API is one method that you can use to get Kubernetes up and running in an efficient fashion. After the initial configuration, it’s pretty much just two commands.

clusterctl generate cluster capi-provider --kubernetes-version v1.27.0 > capi-filename.yaml
kubectl apply -f capi-filename.yaml

(More on these commands in the hands-on section below.)

The gist is that a Kubernetes manifest gets generated based on a handful of configurations that Platform Engineers set up so that other engineers and developers can easily deploy Kubernetes. The generated manifest then gets applied and, just like that, a Kubernetes cluster appears.
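
For a sense of what that manifest contains: it ties together a handful of linked objects. Below is a trimmed sketch, assuming the Azure provider and the name from the command above (the exact kinds and fields vary per Provider):

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-provider
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # Pod CIDR for the new cluster
  controlPlaneRef:                     # object that bootstraps the control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: capi-provider-control-plane
  infrastructureRef:                   # provider object that creates the cloud resources
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureCluster
    name: capi-provider

The Cluster object is the entry point; it references a control plane object and an infrastructure object, and applying them together is what kicks off provisioning.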

Using Cluster API

Now that you know a bit about Cluster API, let’s dive into the hands-on section of actually using it.

To use Cluster API, you need a Management Cluster. The Management Cluster is the cluster that you use to create other Kubernetes clusters (hence “creating Kubernetes with Kubernetes”). Because you need to choose at least one type of cluster, the hands-on portion of this section uses Azure Kubernetes Service (AKS). However, as you dive in and check out the other Providers, you’ll notice that the steps are very similar for each one.

If you don’t have an AKS cluster available, you can create one from the Terraform code here.

Once you create it, ensure that you run the following command to connect to it.

az aks get-credentials -n cluster_name -g resource_group_name
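
To confirm the kubeconfig now points at the Management Cluster (assuming you have kubectl installed locally), a quick sanity check:

kubectl config current-context
kubectl get nodes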

Install Cluster API

First, you’ll need to install clusterctl, the Cluster API command-line tool. There are a few methods that you can use to complete this, depending on your operating system.

Linux


curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.4.4/clusterctl-linux-amd64 -o clusterctl

sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl


Mac

brew install clusterctl

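Regardless of the installation method, you can verify that clusterctl is available:

clusterctl version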

Create The Management Cluster

Once clusterctl is installed, you’ll begin configuring the metadata for the Management Cluster.

Setting the CLUSTER_TOPOLOGY environment variable to true enables the feature gate for managed topologies (ClusterClass).

export CLUSTER_TOPOLOGY=true

Next, set the Azure credentials for the cluster that’s creating and managing other Kubernetes clusters. This way, the Management Cluster has access to Azure.

export AZURE_SUBSCRIPTION_ID=""
export AZURE_TENANT_ID=""
export AZURE_CLIENT_ID=""
export AZURE_CLIENT_SECRET=""

Base64 encode the variables.

export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "$AZURE_SUBSCRIPTION_ID" | base64 | tr -d '\n')"
export AZURE_TENANT_ID_B64="$(echo -n "$AZURE_TENANT_ID" | base64 | tr -d '\n')"
export AZURE_CLIENT_ID_B64="$(echo -n "$AZURE_CLIENT_ID" | base64 | tr -d '\n')"
export AZURE_CLIENT_SECRET_B64="$(echo -n "$AZURE_CLIENT_SECRET" | base64 | tr -d '\n')"

Set the configurations needed for the Azure Cluster Identity used by the AzureCluster.

export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
export CLUSTER_IDENTITY_NAME="cluster-identity"
export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"

Create the secret in the AKS cluster.

kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}" --namespace "${AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE}"
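
For context, the cluster template generated later wires this Secret up to an AzureClusterIdentity object that the AzureCluster references. A trimmed sketch of roughly what that looks like, assuming the environment variable values set above:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureClusterIdentity
metadata:
  name: cluster-identity               # CLUSTER_IDENTITY_NAME
  namespace: default
spec:
  type: ServicePrincipal
  tenantID: "<AZURE_TENANT_ID>"
  clientID: "<AZURE_CLIENT_ID>"
  clientSecret:                        # references the Secret created above
    name: cluster-identity-secret
    namespace: default
  allowedNamespaces: {}                # empty object = usable from any namespace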

Initialize the Management Cluster.

clusterctl init --infrastructure azure
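
Initialization installs the core Cluster API controllers plus the Azure provider (CAPZ). You can confirm they’re running before moving on:

kubectl get pods -n capi-system
kubectl get pods -n capz-system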

Generate The Cluster Configuration

Now that the Management Cluster is initialized, you can start setting up the configurations for the new Kubernetes cluster.

Cluster API has a few different bootstrap providers available. The most commonly used is kubeadm. Because of that, kubeadm will be used under the hood on the Azure Virtual Machines when creating the new cluster.
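
In the generated manifest, the kubeadm bootstrapping shows up as a KubeadmControlPlane object. A trimmed sketch of roughly what it looks like, assuming the cluster name used below (capi-azure):

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: capi-azure-control-plane
spec:
  replicas: 3                          # number of control plane VMs
  version: v1.27.0                     # Kubernetes version kubeadm bootstraps
  machineTemplate:
    infrastructureRef:                 # Azure VM template for control plane nodes
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AzureMachineTemplate
      name: capi-azure-control-plane
  kubeadmConfigSpec: {}                # kubeadm init/join overrides go here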

Set the environment variables for the cluster’s location, machine types, and resource group.

export AZURE_LOCATION="eastus"

export AZURE_CONTROL_PLANE_MACHINE_TYPE="Standard_D2s_v3"
export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"

export AZURE_RESOURCE_GROUP="devrelasaservice"

Next, generate the Kubernetes Manifest that will contain the configurations for the new Kubernetes cluster.

clusterctl generate cluster capi-azure --kubernetes-version v1.27.0 > capi-azurekubeadm.yaml

Apply the Kubernetes manifest.

kubectl apply -f capi-azurekubeadm.yaml

You should now see output in the terminal, and the new VMs running Kubernetes will appear in the Azure portal.
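
Provisioning takes a few minutes. From the Management Cluster, you can watch the progress:

kubectl get cluster capi-azure
kubectl get machines
clusterctl describe cluster capi-azure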

Configure The Cluster

After the cluster is created, you’ll have to set up a few configurations for networking and the Cloud Controller.

First, retrieve the kubeconfig for the new Kubernetes cluster so you can connect to it.

clusterctl get kubeconfig capi-azure > capi-azure.kubeconfig
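
At this point you can reach the new cluster, although its nodes will report NotReady until a CNI is installed (which happens below):

kubectl --kubeconfig=./capi-azure.kubeconfig get nodes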

Next, install the Azure Cloud Controller Manager on the new Kubernetes cluster via its Helm chart.

helm install --kubeconfig=./capi-azure.kubeconfig --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=capi-azure --set cloudControllerManager.clusterCIDR="192.168.0.0/16"

Set up a proper Container Network Interface (CNI) plugin so the new Kubernetes cluster is usable. As an example, you can use the Helm chart below to deploy Calico, a popular CNI.

helm repo add projectcalico https://docs.tigera.io/calico/charts --kubeconfig=./capi-azure.kubeconfig && \
helm install calico projectcalico/tigera-operator --kubeconfig=./capi-azure.kubeconfig -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico/values.yaml --namespace tigera-operator --create-namespace
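
Once the Calico Pods come up (the Tigera operator deploys them into the calico-system namespace), the nodes should flip to Ready:

kubectl --kubeconfig=./capi-azure.kubeconfig get pods -n calico-system
kubectl --kubeconfig=./capi-azure.kubeconfig get nodes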

Congrats! You’ve successfully deployed a Kubernetes cluster with Kubernetes.

If you’re interested in seeing how a Cluster API Provider can be created, check out Part 2 of the Kubernetes and Platform Engineering series on YouTube, which you can find here.
