
Michael Cade

Originally published at vzilla.co.uk

Using Terraform (IaC) to automate your Kubernetes Clusters and Apps

Introduction – Why

My goal for this project was to find a way to deploy a new Kubernetes cluster in AWS, Microsoft Azure or Google Cloud on demand. I am constantly spinning clusters up and down and running through scenarios, both for content creation and simply for learning.

I have written several blogs about creating managed Kubernetes clusters in the three big clouds, and more recently about some other options. But I wanted to go one step further and automate the creation and removal of these environments for demo purposes.

The goal is to make it simple not only to create these clusters using Terraform from HashiCorp, but also to have a very easy way to deploy Kasten K10 to each of the clusters.

The purpose is to cover those on-demand demo environments: deploy everything with one command, and then rip it all down again just as easily.

You can find the raw code here; I have been conscious of creating readme.md files throughout the repository, as maybe only certain areas will be of interest.

https://github.com/MichaelCade/tf_k8deploy

Walkthrough of one example

I figured it would be useful to walk through how to use at least one of these public cloud Terraform scripts. For this demo we are going to use the GKE option.

Prerequisites

On our workstation we first need a Google Cloud Platform account, we also need the gcloud SDK configured on our system, and finally we need kubectl. None of these are OS constrained, so if you are running Linux, Windows or macOS you are good to go here.

I have also documented the steps for Google Cloud Platform specifically here.

Before we get into the walkthrough we are also going to need Terraform installed on our system; once again this has you covered across Linux, Windows and macOS. This resource should help get you going with Terraform.

The first step is making sure you have gcloud configured correctly as per the section above and the link that dives into the step by step of authenticating to your GCP account. We then want to download the code with the following command; be sure to place it in your usual location for code.

git clone https://github.com/MichaelCade/tf_k8deploy.git

You will notice that this repository contains the deployment steps for AWS EKS, Microsoft AKS and Google GKE. For this walkthrough we are only interested in GKE, but you are more than welcome to explore the other folders; we will also cover the helm folder later in the walkthrough.

Let’s navigate to the GKE folder with the following command:

cd learn-terraform-provision-gke-cluster

I am using Visual Studio Code as my IDE, so now that we are in the folder you can run the following command to open it in VS Code:

code .

You can check through the .tf files in the folder now and start to see what is going to be created. If you are new to Terraform then I suggest you walk through the following to understand what is happening at each step before making specific changes to your deployment.

The one file that you need to update before anything will work is terraform.tfvars; it needs to contain your project ID and your region. You should change the region to wherever you would like everything to be deployed. You can get your project ID by running the following command:

gcloud config get-value project
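For reference, once updated your terraform.tfvars should look roughly like the sketch below. The values here are placeholders, so substitute your own project ID and preferred region:

# terraform.tfvars - placeholder values, replace with your own
project_id = "my-gcp-project-id"   # output of: gcloud config get-value project
region     = "europe-west2"        # the region you want everything deployed to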

Once you have updated the file above, we can get on with provisioning our cluster; simple stuff so far, even if you are new to Terraform. Back in your terminal, in the GKE folder, run the following command to download the required providers:

terraform init
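As a side note, terraform init works out what to download from the provider requirements declared in the configuration. A rough illustration of that block for the GKE example is below; the exact version constraints in the repository may differ:

# versions.tf (illustrative) - tells terraform init which providers to download
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"   # example constraint, check the repository for the real pin
    }
  }
  required_version = ">= 0.14"
}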

We can now go ahead and deploy our new GKE cluster, along with a dedicated VPC away from any existing infrastructure you have in Google Cloud Platform. Run the following and type in yes at the prompt if you are happy to proceed:

terraform apply

Once you have hit enter after saying "yes" it will start deploying your new GKE cluster. You can check beforehand exactly what this is going to deploy: the script creates a new GKE regional cluster, which is explained here. I do have a plan to also add a zonal cluster option to the scripts, or if someone has already created one then please share.
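To give a feel for what terraform apply is creating, here is a heavily trimmed sketch of a regional GKE cluster and node pool in Terraform. It is illustrative only; the repository code has more arguments, and the VPC and subnet resources referenced here are assumed to be defined elsewhere in the folder:

# Illustrative only - passing a region (rather than a single zone) as the location gives a regional cluster
resource "google_container_cluster" "primary" {
  name       = "${var.project_id}-gke"
  location   = var.region
  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name

  # Manage our own node pool below rather than keeping the default one
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "primary_nodes" {
  name       = "${google_container_cluster.primary.name}-node-pool"
  location   = var.region
  cluster    = google_container_cluster.primary.name
  node_count = 2                     # per zone, so 6 nodes across a three-zone region

  node_config {
    machine_type = "n1-standard-1"   # example machine type
  }
}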

After around 10 minutes, possibly less, you will have your GKE cluster. To connect, though, we need to download the kubectl configuration to our local machine. We can do this with the following command:

gcloud container clusters get-credentials $(terraform output -raw kubernetes_cluster_name) --region $(terraform output -raw region)
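That command works because the example exposes Terraform outputs named kubernetes_cluster_name and region; in the configuration they look roughly like this:

# outputs.tf (illustrative) - consumed by the gcloud get-credentials command above
output "kubernetes_cluster_name" {
  value       = google_container_cluster.primary.name
  description = "GKE cluster name"
}

output "region" {
  value       = var.region
  description = "GCP region"
}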

Once you have run the get-credentials command, you can confirm you have access by checking your nodes with the following command:

kubectl get nodes

From the above command you should see 6 nodes in a Ready state; this is because we have deployed 2 nodes in each of the three zones of the region. You can also check the context with the following command to confirm you are indeed using the correct configuration:

kubectl config get-contexts

This will be useful later when we come to remove the cluster and we want to also remove the context and cluster from our kubectl configuration.

At this stage we have a GKE cluster up and running, and even that might be useful to some people, but in the next section I want to add the ability to deploy an application using the helm provider so that I can demonstrate the functionality of the application.

Helm deployment of Kasten K10 and example

As I mentioned, the above gives us a fast way to deploy a new cluster without affecting our existing infrastructure. My use case here is to quickly spin up an environment for demos, but also to have a fast way to destroy the environment I created.

In this example I want to be able to deploy Kasten K10, a tool that provides the ability to protect your applications within Kubernetes.

In the git repository we downloaded earlier you should be able to navigate to the helm folder; within it you will see three additional folders, and your Kubernetes deployment will determine which folder you choose.

Before we continue, I will also highlight that in each of the Kubernetes deployment folders you will find a file similar to GKE_Instructions.md which walks through the steps, including the helm deployment of your application.

In the helm folder, and then in our case the Google GKE folder, you will see two files: kasten.tf and main.tf.

Now that we are in our terminal in this folder, we should again issue the following command to download the required providers:

terraform init

The next command, along with your approval of "yes", will firstly create a namespace called "kasten-io" and then deploy the latest release version of Kasten K10 from the helm chart. You can also add additional helm chart values to the kasten.tf file if you have specific requirements; you will see this is the case in the options for the other public cloud deployments.
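If you want a sense of what is happening under the hood, below is a trimmed-down sketch of that namespace and Helm release. It is illustrative rather than a copy of the repository's kasten.tf (the real file may use different value names and extra settings), and it assumes the kubernetes and helm providers are configured in main.tf:

# Illustrative sketch - create the kasten-io namespace, then install the K10 chart into it
resource "kubernetes_namespace" "kasten" {
  metadata {
    name = "kasten-io"
  }
}

resource "helm_release" "k10" {
  name       = "k10"
  repository = "https://charts.kasten.io"
  chart      = "k10"
  namespace  = kubernetes_namespace.kasten.metadata[0].name

  # Example chart values - expose the dashboard externally and enable token authentication,
  # matching the behaviour described later in this post
  set {
    name  = "externalGateway.create"
    value = "true"
  }

  set {
    name  = "auth.tokenAuth.enabled"
    value = "true"
  }
}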

terraform apply

You can follow the progress of the deployment by running the following command

kubectl get pods -n kasten-io -w

When all pods are up and running, you can run the following to get the public IP of the Kasten K10 dashboard, as we set this with our helm chart options within kasten.tf:

kubectl get svc -n kasten-io

You can then take this DNS name or IP address and add it to your browser to connect to your instance of Kasten K10:

https://<IP address>/k10/#

Because we don't want this to be accessible to the whole world, we must obtain the authentication token that we also defined in our helm chart variables. You can follow this post to get that secret information.

Deleting your cluster or application

Ok, so we are now at the stage where we might want to get rid of things, or at least roll back to before Kasten K10 was deployed. It is easy to get things removed or reverted: we can run the following command from the helm folder to remove the Kasten deployment and namespace. Again, this is going to prompt you for a yes to continue.

terraform destroy

Let's then say we are also done with the cluster; we can run the following command, but for this to work you need to navigate back to the first folder, where we ran terraform apply to create the cluster. This time I am going to share a faster way, without the requirement to type in yes, to make terraform apply or destroy do what you want.

terraform destroy --auto-approve

So far so good? But we still have the kubectl context for the cluster. We can run the following command to get the cluster and context name.

kubectl config get-contexts

Then it’s a case of running the following to delete the context

kubectl config delete-context <context-name>

And then the following to delete the cluster

kubectl config delete-cluster <cluster-name>

I do hope this was useful, and I am open to improving this workflow or adding additional features. My initial thought is that for better and faster demos I should also deploy a data service such as MySQL, add some data to it, and automate the creation of backup policies and so on.
