<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yuri Bernstein</title>
    <description>The latest articles on DEV Community by Yuri Bernstein (@bernstein).</description>
    <link>https://dev.to/bernstein</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2618451%2Fa7cfb811-ee70-4582-ad37-25c2dd0eb3f5.jpeg</url>
      <title>DEV Community: Yuri Bernstein</title>
      <link>https://dev.to/bernstein</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bernstein"/>
    <language>en</language>
    <item>
      <title>Deploying K8S Cluster on AWS EC2 Instances</title>
      <dc:creator>Yuri Bernstein</dc:creator>
      <pubDate>Wed, 15 Jan 2025 16:38:08 +0000</pubDate>
      <link>https://dev.to/bernstein/deploying-k8s-cluster-on-aws-ec2-instances-f9k</link>
      <guid>https://dev.to/bernstein/deploying-k8s-cluster-on-aws-ec2-instances-f9k</guid>
      <description>&lt;p&gt;Recently, I needed to deploy a working k8s cluster on top of AWS EC2 Ubuntu based instances. As always, I jump on an opportunity to share with the community, hope anyone finds it useful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 0: Pre-requisites&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To make everything work, you will need an AWS IAM user with proper permissions and an access_key and secret_key. You’d also want to install and configure the aws-cli. Follow the official documentation:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
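&lt;p&gt;As a quick sketch of the aws-cli setup (the key values and region below are placeholders, not real credentials), you can wire in the IAM user’s keys non-interactively:&lt;/p&gt;

```shell
# Store the IAM user's credentials locally (writes ~/.aws/credentials and ~/.aws/config)
aws configure set aws_access_key_id AKIAEXAMPLEKEY        # placeholder
aws configure set aws_secret_access_key exampleSecretKey  # placeholder
aws configure set region us-east-2

# Sanity check: should print your account ID and user ARN
aws sts get-caller-identity
```

&lt;p&gt;Plain &lt;code&gt;aws configure&lt;/code&gt; works too if you prefer an interactive prompt.&lt;/p&gt;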

&lt;p&gt;&lt;strong&gt;Step 1: Create the Instances&lt;/strong&gt;&lt;br&gt;
The easiest way is to use Terraform to create the instances. Here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-2"
}

locals {
  instance_type = "t2.medium"
}

resource "aws_instance" "control_plane" {
  ami           = "ami-09040d770ffe2224f"
  instance_type = local.instance_type
  count         = 1
  key_name      = "myawesomekey"
  tags = {
    Name     = "control_plane"
    k8s_role = "control_plane"
  }
}

resource "aws_instance" "worker" {
  ami           = "ami-09040d770ffe2224f"
  instance_type = local.instance_type
  count         = 3
  key_name      = "myawesomekey"

  tags = {
    Name     = "worker_${count.index + 1}"
    k8s_role = "worker"
  }
}

output "public_ips" {
  value = {
    control_plane = aws_instance.control_plane.*.public_ip
    workers       = { for idx, ip in aws_instance.worker : "worker_${idx + 1}" =&amp;gt; ip.public_ip }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll want to adjust the region, ami, instance_type, count and key_name to match your setup; they’re all self-explanatory.&lt;/p&gt;
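&lt;p&gt;Note that AMI IDs are region-specific, so the ami above will only resolve in us-east-2. If you need an equivalent Ubuntu AMI for another region, a lookup along these lines can help (099720109477 is Canonical’s AWS account ID; the name filter is an assumption to adjust for your Ubuntu release):&lt;/p&gt;

```shell
# List recent Ubuntu 22.04 amd64 AMIs in the target region, newest last
aws ec2 describe-images \
  --region us-east-2 \
  --owners 099720109477 \
  --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*' \
  --query 'Images[].[CreationDate,ImageId]' \
  --output text | sort | tail -n 5
```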

&lt;p&gt;Save it as main.tf, open your terminal, and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;If you’ve never used Terraform and need to install and configure it, follow the official guide: &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you have your instances created, you’d need to set up the control plane and the worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Set Up Ansible&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Follow the official documentation to install and configure Ansible&lt;br&gt;
&lt;a href="https://docs.ansible.com/ansible/latest/installation_guide/installation_distros.html" rel="noopener noreferrer"&gt;https://docs.ansible.com/ansible/latest/installation_guide/installation_distros.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ansible is the best tool for this part.&lt;br&gt;
We’ll use the aws_ec2 Ansible inventory plugin, so we can target the instances by the tags Terraform created.&lt;br&gt;
Modify your ansible.cfg accordingly and make sure you have boto3 installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[inventory]
enable_plugins = aws_ec2

create aws_ec2.yaml with the following content:

plugin: aws_ec2
regions:
  - us-east-2
keyed_groups:
  - prefix: role
    key: tags.k8s_role
  - prefix: ''
    key: tags.k8s_role
    separator: ""    
filters:
  instance-state-name: running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
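&lt;p&gt;To make the keyed_groups naming concrete: the plugin builds each group name as prefix + separator + tag value. A tiny shell sketch of the two group names our k8s_role tags will produce:&lt;/p&gt;

```shell
# Mimic how the aws_ec2 inventory plugin derives group names from tags
tag_value="control_plane"

# prefix "role" with the default separator "_" yields the play target:
echo "role_${tag_value}"   # role_control_plane

# empty prefix and empty separator yield the bare tag value,
# which is what the playbook references via groups['control_plane']:
echo "${tag_value}"        # control_plane
```

&lt;p&gt;You can verify the real groups with &lt;code&gt;ansible-inventory -i aws_ec2.yaml --graph&lt;/code&gt; once the instances are up.&lt;/p&gt;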



&lt;p&gt;And here is the one playbook to rule them all:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--
- name: Deploy Kubernetes Cluster
  hosts: all
  become: yes
  tasks:
    - name: Update and install prerequisites
      apt:
        name:
          - apt-transport-https
          - curl
          - socat
          - conntrack
        update_cache: yes
        state: latest

    - name: Install Docker
      apt:
        name: docker.io
        state: latest

    - name: Start and enable Docker service
      systemd:
        name: docker
        enabled: yes
        state: started

    - name: Ensure the apt keyrings directory exists
      become: yes
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: '0755'

    - name: Add Kubernetes community GPG key
      become: yes
      ansible.builtin.get_url:
        url: https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key
        dest: /tmp/kubernetes.gpg
        mode: '0644'
      register: download_gpg

    - name: De-armor GPG key
      become: yes
      command: gpg --dearmor -o /etc/apt/keyrings/kubernetes.gpg /tmp/kubernetes.gpg
      when: download_gpg is changed


    - name: Add Kubernetes APT repository
      become: yes
      apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb /"
        state: present
        filename: kubernetes


    - name: Update package listings and install Kubernetes components
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
        update_cache: yes
        state: latest

    - name: Hold Kubernetes packages
      ansible.builtin.dpkg_selections:
        name: "{{ item }}"
        selection: hold
      loop:
        - kubelet
        - kubeadm
        - kubectl

    - name: Disable swap
      command: swapoff -a
      ignore_errors: true

    - name: Remove swap from fstab
      lineinfile:
        path: /etc/fstab
        regexp: '^.* swap .*'
        line: '# commented out by Ansible to disable swap'
        state: present

    - name: Configure sysctl settings for Kubernetes
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1
      notify: reload sysctl

    - name: Enable and start kubelet service
      systemd:
        name: kubelet
        enabled: yes
        state: started

    - name: Configure firewall for necessary Kubernetes ports
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      with_items:
        - 6443    # Kubernetes API server
        - 2379:2380 # etcd server client API
        - 10250   # Kubelet API
        - 10255   # Read-only Kubelet API (optional)
      when: inventory_hostname in groups['control_plane']

    - name: Open additional required ports for networking
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: "{{ 'udp' if item == 8472 else 'tcp' }}"
      with_items:
        - 8472    # Overlay Network (UDP, flannel VXLAN if using flannel)
      when: inventory_hostname in groups['control_plane']

    - name: Reload UFW
      command: ufw reload
      when: inventory_hostname in groups['control_plane']

  handlers:
    - name: reload sysctl
      command: sysctl --system

- name: Initialize Kubernetes control plane
  hosts: role_control_plane
  become: yes
  tasks:
    - name: Initialize the Kubernetes control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      register: kubeadm_init

    - name: Save kubeadm join command to local file
      local_action:
        module: shell
        cmd: "echo '{{ kubeadm_init.stdout }}' | grep -A 2 'kubeadm join' | tr -d '\\n' | sed 's/\\\\//g' &amp;gt; join_command.sh"
      run_once: true
      delegate_to: localhost
      become: no

    - name: Set up kubectl for root
      command: "{{ item }}"
      with_items:
        - mkdir -p /root/.kube
        - cp -i /etc/kubernetes/admin.conf /root/.kube/config
        - chown root:root /root/.kube/config

    - name: Install Flannel CNI plugin
      command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      when: inventory_hostname in groups['control_plane']

- name: Set up Kubernetes worker nodes
  hosts: role_worker
  tasks:
    - name: Ensure the join command script is present and correct
      local_action:
        module: stat
        path: "./join_command.sh"
      register: script_stat

    - name: Copy kubeadm join command to worker nodes
      copy:
        src: "./join_command.sh"
        dest: "/tmp/join_command.sh"
        mode: '0755'
      when: script_stat.stat.exists

    - name: Join cluster
      command: bash /tmp/join_command.sh
      args:
        executable: /bin/bash
      become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
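&lt;p&gt;One task worth unpacking is “Save kubeadm join command to local file”: it scrapes the join command out of the kubeadm init output and flattens it to a single line. Here is the same pipeline run locally against a made-up sample of what kubeadm prints:&lt;/p&gt;

```shell
# Made-up stand-in for the tail of "kubeadm init" output
sample='Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:deadbeef'

# Same as the playbook: grab the join line plus its continuation,
# join them into one line, and strip the trailing backslash
echo "$sample" | grep -A 2 'kubeadm join' | tr -d '\n' | sed 's/\\//g'
# prints the whole join command on a single line
```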



&lt;p&gt;&lt;strong&gt;Step 3: Run the playbook&lt;/strong&gt;&lt;br&gt;
Save it as playbook.yaml and run &lt;code&gt;ansible-playbook -i aws_ec2.yaml playbook.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: In case you don’t like Ansible&lt;/strong&gt;&lt;br&gt;
If you prefer running bash scripts on the instances directly, here is the script for the control_plane:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Step 1: Update and install prerequisites
sudo apt update &amp;amp;&amp;amp; sudo apt install -y apt-transport-https curl socat conntrack

# Step 2: Install Docker
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker

# Step 3: Configure UFW (Uncomplicated Firewall) to allow necessary Kubernetes ports
sudo ufw allow 6443/tcp    # Kubernetes API server
sudo ufw allow 2379:2380/tcp # etcd server client API
sudo ufw allow 10250/tcp   # Kubelet API
sudo ufw allow 10255/tcp   # Read-only Kubelet API (optional)
sudo ufw allow 10259/tcp   # kube-scheduler
sudo ufw allow 10257/tcp   # kube-controller-manager
sudo ufw allow 8472/udp    # Overlay Network (flannel VXLAN if using flannel)
sudo ufw reload
# Stop and disable the firewall for now, as additional ports may need to be opened during installation;
# you can start it again later, since the necessary operational ports are already configured.
sudo systemctl stop ufw
sudo systemctl disable ufw

# Step 4: Add the Kubernetes community GPG key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Step 5: Add the Kubernetes APT repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Step 6: Update package listings and install Kubernetes components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Step 7: Install crictl (required by kubelet)
VERSION="v1.28.0"  # Match this to your Kubernetes version
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm crictl-$VERSION-linux-amd64.tar.gz

# Step 8: Pull all necessary Kubernetes images required for kubeadm init
sudo kubeadm config images pull

# Step 9: Initialize the Kubernetes control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Step 10: Set up the kubectl configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Step 11: Deploy a pod network to the cluster
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Step 12: set environment variable for kubectl to work
export KUBECONFIG=/etc/kubernetes/admin.conf

# Step 13: Validate kubectl is working
kubectl get pods --all-namespaces
kubectl get nodes

# Step 14: Output the join token to join other nodes to this cluster
kubeadm token create --print-join-command

echo "Kubernetes has been successfully installed and initialized!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And another one for the worker node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash


# Step 1: Update and install prerequisites
sudo apt update &amp;amp;&amp;amp; sudo apt install -y apt-transport-https curl socat conntrack

# Step 2: Install Docker
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker

# Step 3: Add the Kubernetes signing key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Step 4: Add the Kubernetes APT repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Step 5: Update package listings and install Kubernetes components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Step 6: Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Step 7: Configure sysctl settings
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

# Step 8: Enable and start kubelet service
sudo systemctl enable kubelet
sudo systemctl start kubelet

# Optional: Ensure necessary ports are open (adjust as needed for your firewall settings)
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10255/tcp
sudo ufw allow 8472/udp
sudo ufw reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
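&lt;p&gt;If the swap-related sed in Step 6 looks opaque, here it is run against a made-up fstab: it comments out any line containing " swap " and leaves the rest untouched.&lt;/p&gt;

```shell
# Made-up fstab contents; only the swap entry should end up commented out
printf '%s\n' \
  'UUID=1234-5678 / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' \
  | sed '/ swap / s/^/#/'
# output:
# UUID=1234-5678 / ext4 defaults 0 1
# #/swap.img none swap sw 0 0
```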



&lt;p&gt;Make sure to capture the output of the &lt;code&gt;kubeadm token create --print-join-command&lt;/code&gt; command on the control plane and run it on the workers to join them to the cluster.&lt;/p&gt;

&lt;p&gt;You’re done!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
