Kwunhok Chan

Setting up a multi-arch Amazon EKS cluster

Amazon EKS support for Arm-based instances powered by AWS Graviton is now generally available. The new general purpose (M6g), compute-optimized (C6g), and memory-optimized (R6g) instances running on AWS Graviton processors deliver up to 40% better price/performance than comparable current-generation (M5, C5, and R5) x86-based instances. In this post I will walk through how to set up a multi-architecture Amazon EKS cluster, that is, a cluster with worker nodes of both the x86 and Arm architectures, so that applications that support Arm can run on Graviton instances while the others stay on x86 instances.

Creating a cluster

First, we will create an Amazon EKS cluster using the eksctl command line utility. You need to be running version 0.26.0-rc1 or later for Arm support. For information on installing or upgrading eksctl, see Installing or upgrading eksctl.
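
You can confirm the version you have installed with:

eksctl version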

Here I am creating a cluster named eks-multi-arch with Kubernetes version 1.17.

eksctl create cluster --name eks-multi-arch \
  --version 1.17 \
  --without-nodegroup

Cluster provisioning takes several minutes. Note that I didn't pass --region to the create command, so eksctl uses the default region from my AWS CLI configuration (us-west-2 in this case). During cluster creation, you'll see several lines of output. The last line of output is similar to the following example line.

[✓]  EKS cluster "eks-multi-arch" in "us-west-2" region is ready

When your cluster is ready, test that your kubectl configuration is correct.

kubectl get svc

You should see output similar to the following.

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m
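
eksctl updates your kubeconfig automatically when it creates the cluster. If you ever need to regenerate the credentials, you can use the AWS CLI, for example:

aws eks update-kubeconfig --name eks-multi-arch --region us-west-2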

Adding nodes to the cluster

Here comes the interesting part: creating node groups for the cluster. Let's first create a managed node group for the x86 instances using the m5.large instance type.

eksctl create nodegroup \
  --cluster eks-multi-arch \
  --region us-west-2 \
  --name x86-mng \
  --node-type m5.large \
  --nodes 1 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed

Once the node group is ready, we can add a second node group for the Graviton instances using the m6g.large instance type.

eksctl create nodegroup \
  --cluster eks-multi-arch \
  --region us-west-2 \
  --name graviton-mng \
  --node-type m6g.large \
  --nodes 1 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed
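
If you want to confirm that both node groups were created, you can list them with eksctl:

eksctl get nodegroup --cluster eks-multi-arch --region us-west-2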

Finally, check that we have two nodes running, one from each node group.

kubectl get nodes --label-columns=kubernetes.io/arch

Using the --label-columns parameter, you can show the values of specific labels attached to your nodes. The kubernetes.io/arch label is populated by the kubelet when it starts. The output should show an ARCH column, where one node reports amd64 and the other arm64, as in the following example.

NAME                                           STATUS   ROLES    AGE     VERSION              ARCH
ip-192-168-11-190.us-west-2.compute.internal   Ready    <none>   3m34s   v1.17.9-eks-4c6976   arm64
ip-192-168-89-170.us-west-2.compute.internal   Ready    <none>   8m44s   v1.17.9-eks-4c6976   amd64
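
You can also filter for nodes of a particular architecture with a label selector, for example:

kubectl get nodes -l kubernetes.io/arch=arm64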

Deploying applications

Now that we have two nodes running in our cluster, we can start deploying containers. Let's first create a Deployment for nginx. Save the following content into a file named nginx.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

And create a Deployment based on the YAML file:

kubectl apply -f nginx.yaml

Check our deployment using the following command.

kubectl get pod -o wide

We have deployed 2 nginx pods, and Kubernetes scheduled them to run on different hosts.

NAME                               READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATES
nginx-deployment-59c9f8dff-4g2l4   1/1     Running   0          15s   192.168.78.75    ip-192-168-89-170.us-west-2.compute.internal   <none>           <none>
nginx-deployment-59c9f8dff-zvdhc   1/1     Running   0          15s   192.168.27.251   ip-192-168-11-190.us-west-2.compute.internal   <none>           <none>
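
This works across both architectures because the official nginx image is published as a multi-architecture manifest list, so each node pulls the variant that matches its CPU. You can inspect the manifest with the docker CLI (this may require enabling experimental CLI features on older Docker versions):

docker manifest inspect nginx:1.14.2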

What if nginx could only run on x86 instances? We can add a constraint to the deployment using a nodeSelector. Let's modify our existing YAML file to add one.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/arch: amd64

And update the deployment with the same command:

kubectl apply -f nginx.yaml
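
If you want to wait for the rollout to finish before checking the pods, you can run:

kubectl rollout status deployment/nginx-deployment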

Now if we check the pods again, we will see that Kubernetes scheduled both pods onto the same host, which is the only x86 instance we have.

NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATES
nginx-deployment-7755cd895c-hvgdp   1/1     Running   0          8s    192.168.64.153   ip-192-168-89-170.us-west-2.compute.internal   <none>           <none>
nginx-deployment-7755cd895c-wdnvf   1/1     Running   0          11s   192.168.74.107   ip-192-168-89-170.us-west-2.compute.internal   <none>           <none>

You could do the same if you want to deploy your applications only on Arm instances by changing the value of the kubernetes.io/arch label under nodeSelector to arm64, as shown below. For more information on how to assign pods to nodes, see the Kubernetes documentation.
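
For example, to pin the same Deployment to the Graviton nodes instead, the pod spec would end with:

      nodeSelector:
        kubernetes.io/arch: arm64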

Summary

In this post, I showed how easy it is to set up a multi-architecture Amazon EKS cluster and how you can assign pods to specific nodes based on their CPU architecture. Arm instances deliver significant cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores. However, do check the considerations listed in the AWS documentation before using Arm nodes in your cluster.
