Create/Expose Test Kubernetes Cluster on EC2 with kind

WHY???! 😂

Well first off, there may have been better solutions for standing up a test cluster cheaply on EC2, but my thought process didn't lead me there. What was my thought process?

  • Need to test a Grafana dashboard on a test cluster…
  • Install Kind locally…
  • Kind doesn't work with the experimental Podman backend…
  • Let's install Kind on EC2…
  • I want to work seamlessly from my workstation…

At this point in the thought process I was too focused on forcing Kind to meet my expectations rather than asking "Maybe there is a better way"… In any case I got it to work, and wanted to record my process so that I wouldn't forget what I learned.

How I did it?

  • Install Kind
  • Configure EC2
  • Configure Local Environment

I had to jump through a couple of hoops when configuring the local environment, but other than that it was pretty straightforward.

Install Kind

In order to install Kind with exposed ports (so kubectl can connect from your local PC), a Kind configuration file is necessary to override the default settings during deployment.

Create Kind Configuration File

The following configuration file kind.yaml changes the bind address and port for the Kubernetes API server, so that it listens on all interfaces instead of only on localhost.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  apiServerPort: 7443

As a footnote, you should never use a setup like this in production, quite apart from the fact that Kind itself should never be used in production.

Deploy Kind

Deploying the cluster is as easy as:

kind create cluster --config kind.yaml
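Once the command completes, a quick sanity check from the EC2 instance itself confirms the cluster is up. This sketch assumes kind and kubectl are both installed on the instance:

```shell
# List the clusters kind knows about; the default cluster is named "kind".
kind get clusters

# Confirm the API server answers on the address and port we configured.
kubectl cluster-info --context kind-kind
```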

Configure EC2

Now that we have Kind deployed and bound to port 7443, we need to open that port in the node's security group.

Configure Security Group

Create an ingress rule similar to the screenshot below.

(Screenshot: security-group ingress rule allowing TCP 7443.)

The screenshot uses a placeholder source IP to hide my actual address. In practice this ingress rule should be limited to the IP from which you will be accessing the cluster.
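If you prefer the CLI to the console, the equivalent ingress rule can be sketched with the AWS CLI. Here `<your-sg-id>` and `<your-public-ip>` are placeholders for your security group ID and workstation IP, not real values:

```shell
# Open TCP 7443 (the kind API server port) to a single source IP.
# <your-sg-id> and <your-public-ip> are placeholders to fill in.
aws ec2 authorize-security-group-ingress \
    --group-id <your-sg-id> \
    --protocol tcp \
    --port 7443 \
    --cidr <your-public-ip>/32
```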

Configure Local Environment

Now that we have created our cluster and enabled access through our security group, we can configure our local environment to connect to the API.

The API certificates generated by Kind will not cover your EC2 instance's domain name or IP, so we will need to define a hostname that the certificate does support in your /etc/hosts file and use it for connecting to the API. Additionally, unless you have used an Elastic IP, you will need to take some precautions to make connecting to the API easy between each stop/start of the EC2 instance.

To help with this, we will define two functions to assign/update the current IP of your EC2 instance to the name kind-control-plane (which is supported by the certificates generated by Kind) in your /etc/hosts file.

Create Helper Aliases

Adds the current public IP of your EC2 instance to /etc/hosts under the name kind-control-plane:

add-dev-machine-ip () {
    # --output text strips the quotes the default JSON output would include
    sudo bash -c "echo $(aws ec2 describe-instances --instance-ids <your-instance-id> --query Reservations[0].Instances[0].PublicIpAddress --output text) kind-control-plane >> /etc/hosts"
}

Updates the entry made by the previous command with the current public IP of your EC2 instance:

update-dev-machine-ip () {
    sudo sed -i "s/^.* kind-control-plane.*$/$(aws ec2 describe-instances --instance-ids <your-instance-id> --query Reservations[0].Instances[0].PublicIpAddress --output text) kind-control-plane/" /etc/hosts
}

Realistically you will probably only run the first command once and then never need it... but I felt the need to make it a function anyway.
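The sed expression in update-dev-machine-ip can be exercised safely against a temporary file, which is handy for convincing yourself it rewrites the right line. The IPs below are made-up examples:

```shell
# Build a throwaway hosts file with an existing kind-control-plane entry.
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n1.2.3.4 kind-control-plane\n' > "$hosts_file"

# Stand-in for the IP the aws CLI would return.
new_ip="5.6.7.8"

# Same substitution the helper runs against /etc/hosts.
sed -i "s/^.* kind-control-plane.*$/${new_ip} kind-control-plane/" "$hosts_file"

grep kind-control-plane "$hosts_file"   # → 5.6.7.8 kind-control-plane
```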

Update /etc/hosts

Run the add-dev-machine-ip helper, then verify the entry (the EC2 IP is shown here as a placeholder):

add-dev-machine-ip

[rustysysdev@localhost ~]$ sudo cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
<ec2-public-ip> kind-control-plane

Configure Kubectl

Lastly, with our networking fixed we can configure kubectl to connect to our API server.

We will need to splice the ~/.kube/config generated by Kind on the EC2 instance with our local ~/.kube/config.

EC2 Instance

Output the ~/.kube/config generated by Kind.

cat ~/.kube/config

The output will look similar to the following.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: XXXXXXXXXXXXXX
    server: https://0.0.0.0:7443
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
  user:
    client-certificate-data: XXXXXXXXXXXXXX
    client-key-data: XXXXXXXXXXXXXX

Local PC

If you already have a local ~/.kube/config you will need to splice the above configuration into your local file.

In this example you would need to modify the following elements.

  • clusters:
  • contexts:
  • users:

Add to the clusters: element in your local file.

- cluster:
    certificate-authority-data: XXXXXXXXXXXXXX
    server: https://kind-control-plane:7443
  name: kind-kind

Be mindful to change the host portion of the server address to the kind-control-plane hostname we defined earlier.

Add to the contexts: element in your local file.

- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind

Add to the users: element in your local file.

- name: kind-kind
  user:
    client-certificate-data: XXXXXXXXXXXXXX
    client-key-data: XXXXXXXXXXXXXX

If you do not have a local ~/.kube/config file you can create one and copy/paste the file from your EC2 instance. Be mindful, of course, to change the server address to https://kind-control-plane:7443.
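If hand-splicing the YAML feels error-prone, kubectl can merge kubeconfig files for you. This sketch assumes you copied the EC2 file to ~/.kube/kind-ec2-config (a name chosen here for illustration) and already edited its server: line to https://kind-control-plane:7443:

```shell
# Merge both configs and flatten them into a single file.
KUBECONFIG=~/.kube/config:~/.kube/kind-ec2-config \
    kubectl config view --flatten > /tmp/merged-kubeconfig

# Replace the local config with the merged result (back it up first!).
cp ~/.kube/config ~/.kube/config.bak
mv /tmp/merged-kubeconfig ~/.kube/config
```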

Testing your setup

Switch to your kind-kind context.

kubectl config use-context kind-kind

Get node information.

kubectl get node

If everything is configured properly you should see output similar to the following.

NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   26h   v1.18.2


And there you have it, your own Kind cluster running in EC2 with local access. If anyone has any questions, please let me know!!

Top comments (2)

matheusfm

Hi, thank you for this post.

It's possible to connect to the remote kind cluster without editing /etc/hosts by using tls-server-name in kubeconfig:

      tls-server-name: kind-control-plane

yatin03

Yes, you can change server to ':7443' in the kubeconfig file