Sergey Ziryanov

Build your Managed K8S in 5 minutes on old hardware

Hi! More and more cloud providers around the world are offering managed Kubernetes clusters, and cost is almost always a key factor when choosing a vendor. Young companies with negative profits but very big ambitions end up paying their last money for a cluster that, functionally, could be replaced by shared hosting for 5 dollars per month. Let's figure out how to get managed Kubernetes functionality for small projects quickly and very cheaply.

Why do companies need their own cluster?

Indeed, as I said, micro-companies don't need all the goodies of k8s: they don't need ultra-high uptime for their services, they don't need a fleet of nodes and ingresses to spread their traffic, and the desired scaling won't happen tomorrow. What they really need is the potential to move quickly to more powerful hardware that will satisfy their rapidly growing needs. Kubernetes lets you build the product infrastructure once and then easily migrate the ready-made specifications to another cluster, for example a highly available one, as soon as the need arises.

I think most programmers agree that scalability should be designed in from the beginning, but not everyone thinks about how to realize it from the DevOps point of view. Kubernetes sounds complicated and dangerous, but let me show you how to build your own cluster in 5 minutes for about 30 dollars a month. It will fully cover the needs of a small company, can easily be repurposed as a dev cluster, or can be discarded like a spent rocket stage as soon as you need an HA cluster with a crew of admins on board.

Step 0: Buy a server

In this article I will build a k8s cluster on a single dedicated server, partitioned into virtual machines, because it is cheap. This approach will give the company the ability to scale the product seamlessly by moving to any other cluster in 30 minutes. If you already need a highly available cluster, rent a few virtual machines instead and skip the first step.

I don't want to dwell on this step; the article isn't really about hardware. Here are the minimum requirements for each node, taken from the official documentation of the open-source solution we are going to use.

opensource requirements

In my case I managed to rent a Dedicated server for 30 US dollars per month:

CPU: Intel® Xeon® Quad Core 2×L5630
RAM: 6 × 4GB DDR3 DIMM 1333MHz (24GB total)
DISK: 500GB SSD 2.5" SATA3
OS: Ubuntu 22.04 LTS

screenfetch result

Step 1: Virtual Machines

If you still decide to rent several virtual machines rather than split one server into parts, skip this step.

In order to provide our cluster with full-fledged scaling between nodes (the same environment we will have when we move to a "grown-up" cluster), let's create virtual machines on our dedicated server.

I advise doing this with an open-source tool called Cockpit, which lets you administer the server through a web interface. We also need its add-on, cockpit-machines, which lets you create virtual machines quickly and flexibly. It runs on top of QEMU/KVM.

Connect via SSH to our dedicated server and execute the command:
apt-get install cockpit cockpit-machines

After the installation completes, open your browser and go to ip:9090.
The login and password for the Cockpit control panel are the same as for SSH, i.e. the credentials of your OS user.

Go to the Virtual Machines tab and click "Create VM". Specify the virtual machine name, installation image, disk size, and amount of RAM.

Great! Once the OS installation finishes, we have our master node. Repeat the process twice more for the two worker nodes.
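If you prefer the command line to the web wizard, the same VMs can be created with virt-install, which drives the same QEMU/KVM stack Cockpit uses. This is only a sketch: the ISO path, VM names, and resource sizes below are my assumptions, so adjust them to your hardware.

```shell
# CLI alternative to the Cockpit wizard (assumes libvirt and a downloaded ISO).
# ISO path and sizes are illustrative, not taken from the Cockpit setup above.
ISO=/var/lib/libvirt/images/ubuntu-22.04-live-server-amd64.iso

create_vm() {
  virt-install \
    --name "$1" \
    --memory 6144 \
    --vcpus 2 \
    --disk size=100 \
    --cdrom "$ISO" \
    --os-variant ubuntu22.04
}

# Uncomment to create all three nodes:
# create_vm master-node
# create_vm worker-node-1
# create_vm worker-node-2
```

With 24GB of RAM and a 500GB disk, 6GB and 100GB per VM leaves headroom for the host itself.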

We should end up with something like this:

Cockpit result

Then log into each of the VMs through the VNC console and install the SSH server.
apt-get install ssh

Done! We now have three working virtual machines that we will use for our cluster.

Step 2: Configuring the VMs

If you thought we were going to edit a bunch of configuration files on each virtual machine in this step, forget it. All we need to do is install a couple of packages and add a bit of sugar. Be careful, though: it's easy to get confused about which virtual machine you are on.

Let's SSH into the main dedicated server, the one inside which we just created the 3 virtual machines and whose IPv4 address faces the Internet. Execute the following commands:

apt-get install nano
nano /etc/hosts

Add 3 lines of the form IP_address VM_name to the very end.

In my example it looks like this:

192.168.122.61 master-node
192.168.122.172 worker-node-1
192.168.122.105 worker-node-2

Once we have typed the lines in, copy them (we will need them later), then press ctrl+x, y, and enter to save.
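To avoid copy-paste mistakes between machines, the mappings can also be kept in a small fragment file and appended on each node; a minimal sketch (the file name cluster-hosts.txt is just for illustration):

```shell
# Keep the node mappings in one fragment so every machine gets an identical copy.
cat > cluster-hosts.txt <<'EOF'
192.168.122.61 master-node
192.168.122.172 worker-node-1
192.168.122.105 worker-node-2
EOF

# On each node (as root), append the fragment only if it isn't already there:
# grep -q 'master-node' /etc/hosts || cat cluster-hosts.txt >> /etc/hosts
```

The grep guard keeps the append idempotent, so running it twice on the same node does no harm.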

Then, we connect to each virtual machine in turn to install additional packages on them.
ssh ubuntu@master-node
su root
nano /etc/hosts
Paste the 3 lines we copied earlier and again press ctrl+x, y, and enter to save.

Install the necessary packages on each node of our cluster:
apt-get install conntrack socat
Now we need to add our ubuntu user to the list of users with sudo access (it is safer to edit this file with visudo, which validates the syntax before saving; on Ubuntu, usermod -aG sudo ubuntu achieves the same result):
nano /etc/sudoers

After the lines:

# User privilege specification
root    ALL=(ALL:ALL) ALL

Add a line:

ubuntu  ALL=(ALL:ALL) ALL

Save (ctrl+x, y, enter).

Done! We have configured the master node; exit it with the exit command. Repeat the same process on the other two virtual machines, so that in the end these steps are done on ALL nodes of our cluster.
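The per-node routine above is easy to script from the main server once the host names are in /etc/hosts. A hypothetical helper (the SSH line is commented out so it prints a dry-run plan first; it assumes the ubuntu user already has sudo access):

```shell
# Run the same setup on every node in turn; the SSH line is commented out
# so this prints a plan instead of touching the machines.
NODES="master-node worker-node-1 worker-node-2"

setup_node() {
  # ssh "ubuntu@$1" "sudo apt-get install -y conntrack socat"
  echo "would configure $1"
}

for node in $NODES; do
  setup_node "$node"
done
```

Once the plan looks right, uncomment the ssh line and rerun; you will be prompted for each node's password.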

After we have done the same steps on all three virtual machines, connect to the master node again and enter the password.
ssh ubuntu@master-node

Step 3: Create a cluster

This step scares not only programmers but also inexperienced DevOps engineers. Creating your own cluster seems so difficult, but no: we will do it quickly and very easily with the help of an open-source project called KubeSphere.

KubeSphere is a distributed operating system for managing cloud-native applications that uses Kubernetes as its kernel, and it practically installs itself in a couple of commands.

It is an open-source solution with more than 13 thousand stars on GitHub and quite an impressive community, and it is actively used by Chinese companies that build large fault-tolerant systems.

Now, as the ubuntu user with sudo access, sitting in the home directory (/home/ubuntu) on master-node, we execute the following commands:

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
./kk create config -f config.toml

These commands download a tool called KubeKey (kk) that will install KubeSphere for us, and create a cluster config, which we will now edit.

Let's open the freshly created config.toml:
nano config.toml

I have underlined what we are interested in, but you can play around with the configuration if you like; KubeSphere is a powerful tool, and you may find settings you need for your cluster.

Cluster config
In the name value, specify the name of our cluster; for this example I will leave sample.
In the hosts list, specify our virtual servers:

  - {name: master, address: 192.168.122.61, internalAddress: 192.168.122.61, user: ubuntu, password: "password"}
  - {name: worker-1, address: 192.168.122.172, internalAddress: 192.168.122.172, user: ubuntu, password: "password"}
  - {name: worker-2, address: 192.168.122.105, internalAddress: 192.168.122.105, user: ubuntu, password: "password"}

And just below that, we assign them roles:

  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - worker-1
    - worker-2

We should end up with something like this (of course with the IP addresses and passwords of your virtual machines):

Config result

Save and enjoy: the cluster is configured. All that remains is to bring it up, and that is even easier; we just execute one command:
./kk create cluster -f config.toml --with-kubesphere
KubeKey will check the cluster nodes and, if everything is OK, ask you to confirm the installation. Type yes and press enter.

KubeKey check
Almost done! Go grab a coffee or some water; if the cluster hardware is very old and the Internet is slow, you can even hit the gym. In my case, though, the installation took about 5 minutes. As soon as it completes, you will see the following message:

All done

We could rush off to look at this gorgeous interface, but we'll be met with an error. Why? Because the IP printed in the message is only reachable inside our virtual machine network. You could set up an external bridge and forward traffic into the virtual machine, but I'm going to make this a lot easier.

Step 4: Final

Open the terminal and connect via SSH to the "main" dedicated server, the one where we created the 3 virtual machines. Execute the following commands:

apt-get install nginx
nano /etc/nginx/sites-enabled/default

We delete everything in the file and insert the following, where the upstream address is the internal IP of the master node (192.168.122.61 in my example):

server {
        listen 30880 default_server;
        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://192.168.122.61:30880;
        }
}

Save, check the configuration with nginx -t, restart nginx with systemctl restart nginx, and then open your public IP on port 30880 in the browser. Log in with the credentials shown in the post-install message (by default admin / P@88w0rd) and set your new password.
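If the console's built-in web terminal misbehaves through this proxy, it may need WebSocket upgrade headers; a hedged variant of the same server block (the upstream IP is the master node's address from my example):

```nginx
server {
        listen 30880 default_server;
        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                # WebSocket support for the in-browser terminal:
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_pass http://192.168.122.61:30880;
        }
}
```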

KubeSphere interface

Congratulations! Here is your own managed Kubernetes, where you can easily set things up, connect GitLab pipelines, and deploy a whole bunch of your precious YAMLs.

In the next posts, building on the results of this article, I will try to describe other processes every company needs: configuring pipelines, deployments, load balancing, and certificates.

Links to all resources:

KubeSphere - Website.
KubeSphere - documentation and requirements for nodes.
