Introduction
As easy as k3s is to use, I had a difficult time finding a simple guide for setting up a Server-Agent configuration across two separate servers. In this post, I hope to provide that guide without requiring you to download any extra dependencies or use any tools beyond the scripts provided by k3s.
I used AWS EC2 instances as my VMs, so the instructions are slightly AWS-specific, but they should work on any two Linux servers configured to communicate with each other.
VM Setup
Provision Servers
First, we'll need to provision two Linux servers. I used two t2.small instances, which is about as small as you'd want to go even when just running a sample app.
It's also worth assigning a permanent IP address to these servers via an Elastic IP if you plan on using them more than once. Instructions for doing so are in the AWS documentation.
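If you prefer the CLI to the console, here's a sketch of the provisioning step. The AMI ID, key pair name, security group ID, and instance ID below are placeholders for your own values:

```bash
# Launch two t2.small instances (all IDs/names here are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.small \
  --count 2 \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0

# Allocate an Elastic IP and attach it to one of the instances
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0
```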
Log onto each VM
Once you have your VMs up and running, pick one to be the Server and one to be the Agent. Get the private IP address of each by running `hostname -i`, and store both values in the `~/.bashrc` file on both the Server and the Agent like so:

```bash
export k3sserver=<value of hostname -i on k3s server>
export k3sagent=<value of hostname -i on k3s agent>
```
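For example, if `hostname -i` returned `10.0.1.10` on the Server and `10.0.1.20` on the Agent (hypothetical addresses), you could persist the values like so:

```bash
# Append the exports to ~/.bashrc on both machines, then reload it
echo 'export k3sserver=10.0.1.10' >> ~/.bashrc
echo 'export k3sagent=10.0.1.20' >> ~/.bashrc
source ~/.bashrc
```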
We will also use these values to configure the AWS Security Group.
Update AWS Security Group
Security group rules must be added to allow you to SSH into the VMs from your local machine, and to allow the Server and Agent to communicate with each other (SSH, the k3s API on port 6443, and the rest of the cluster traffic). The rules are listed in the table below.

| Type | Port Range | Source | Purpose |
|---|---|---|---|
| SSH | 22 | Your IP, from http://checkip.amazonaws.com/ | SSH from localhost to the Server and Agent |
| All Traffic | All | Value of `$k3sserver` | Traffic from the Server to the Agent |
| All Traffic | All | Value of `$k3sagent` | Traffic from the Agent to the Server |

These rules are added to the Security Group's Inbound Rules (see the AWS documentation for the steps).
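Equivalently, with the AWS CLI (the security group ID is a placeholder, and `$k3sserver`/`$k3sagent` come from the exports above):

```bash
SG_ID=sg-0123456789abcdef0   # placeholder: your security group's ID

# SSH from your local machine
MY_IP=$(curl -s http://checkip.amazonaws.com/)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr "$MY_IP/32"

# All traffic between the Server and the Agent
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions "IpProtocol=-1,IpRanges=[{CidrIp=$k3sserver/32}]"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions "IpProtocol=-1,IpRanges=[{CidrIp=$k3sagent/32}]"
```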
Configure SSH keys
On both the k3s Server and Agent, run the following:

```bash
cd ~/.ssh
ssh-keygen
```
Hit enter at each of the prompts. This should result in an `id_rsa` private key and an `id_rsa.pub` public key being created. Copy the contents of `id_rsa.pub` on the Server and paste them on a new line in `~/.ssh/authorized_keys` on the Agent. Then do the same in the other direction, copying the contents of `id_rsa.pub` on the Agent and pasting them on a new line in `~/.ssh/authorized_keys` on the Server.
You should now be able to ssh into the Agent from the Server, and vice versa.
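A quick way to check, assuming Amazon Linux's default `ec2-user` account:

```bash
# From the Server: should print the Agent's hostname without a password prompt
ssh ec2-user@$k3sagent hostname

# And from the Agent:
ssh ec2-user@$k3sserver hostname
```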
Configure the VMs
On the Server and Agent, run the following script to install and configure Docker:
```bash
sudo yum update -y
sudo amazon-linux-extras install -y docker
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
sudo service docker start
sudo usermod -a -G docker ec2-user
```
After the script finishes, log out and back in so the group change takes effect. Then run `sudo visudo` and append `:/usr/local/bin` to the `secure_path` value:

```
Defaults secure_path="/usr/bin:/usr/sbin:/bin:/sbin:/usr/local/bin"
```

Note that your `secure_path` may have more paths before `/usr/local/bin`. This is OK. Adding `ec2-user` to the `docker` group (above) is what lets you run Docker without preceding every command with `sudo`, while extending `secure_path` lets commands run under `sudo` find binaries installed in `/usr/local/bin`, which is where the k3s install script puts `k3s`.
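To confirm the group change took effect after logging back in, `docker` should now work without `sudo`:

```bash
docker ps                    # should list containers (even an empty set), not a permission error
docker run --rm hello-world  # optional smoke test that pulls a tiny image
```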
Server Configuration
On the Server, run:
```bash
curl -sfL https://get.k3s.io | sh -s - --docker
sudo chmod 755 /etc/rancher/k3s/k3s.yaml
```
Confirm the Server is ready with the following commands:

- `k3s kubectl get node`: should display 1 running node with roles `control-plane,master`
- `sudo service k3s status`: should show the service as running
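Before moving on, grab the cluster join token that the install script wrote on the Server; the Agent will need it in the next step:

```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```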
Agent Configuration
On the Agent, run the following command, setting `NODE_TOKEN` to the contents of the file `/var/lib/rancher/k3s/server/node-token` on the Server instance:

```bash
curl -sfL https://get.k3s.io | K3S_URL=https://$k3sserver:6443 K3S_TOKEN=$NODE_TOKEN sh -s - --docker
```
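If you'd rather not copy the token by hand, you can pull it over the SSH connection configured earlier. This is a sketch that assumes the default `ec2-user` account and that passwordless `sudo` works in a non-interactive session; if it doesn't, copy the token manually:

```bash
# On the Agent: fetch the join token from the Server, then install
export NODE_TOKEN=$(ssh ec2-user@$k3sserver sudo cat /var/lib/rancher/k3s/server/node-token)
curl -sfL https://get.k3s.io | K3S_URL=https://$k3sserver:6443 K3S_TOKEN=$NODE_TOKEN sh -s - --docker
```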
Confirm the Agent service is running with the following commands:
```bash
sudo service k3s-agent status
journalctl -f -u k3s-agent.service
```
Then, on the Server VM, run `kubectl get nodes`. The Agent node should now appear, but without a role. Add one with the following command:

```bash
kubectl label node <node name from kubectl get nodes> node-role.kubernetes.io/worker=worker
```

Running `kubectl get nodes` again should show the Agent node with the `worker` role. At this point, we've established that the Server and Agent are communicating with each other and k3s is ready for use!
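As an optional sanity check that workloads actually schedule across the cluster (nginx here is just an example image):

```bash
kubectl create deployment hello --image=nginx
kubectl get pods -o wide   # the NODE column shows where the pod landed
```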
Additional Reading
There are a ton of resources available for testing out k3s. If you're looking for a place to start, I recommend DigitalOcean's Getting Started with Containers and Kubernetes: A DigitalOcean Workshop Kit for running a simple Flask app.