<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergey Ziryanov</title>
    <description>The latest articles on DEV Community by Sergey Ziryanov (@twelvee).</description>
    <link>https://dev.to/twelvee</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1093470%2Fafa7a9fb-8e24-4376-a1c7-b59845a56ff5.jpeg</url>
      <title>DEV Community: Sergey Ziryanov</title>
      <link>https://dev.to/twelvee</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/twelvee"/>
    <language>en</language>
    <item>
      <title>Build your Managed K8S in 5 minutes on old hardware</title>
      <dc:creator>Sergey Ziryanov</dc:creator>
      <pubDate>Sat, 12 Aug 2023 09:37:46 +0000</pubDate>
      <link>https://dev.to/twelvee/build-your-managed-k8s-in-5-minutes-on-old-hardware-444k</link>
      <guid>https://dev.to/twelvee/build-your-managed-k8s-in-5-minutes-on-old-hardware-444k</guid>
      <description>&lt;p&gt;Hi! More and more cloud providers around the world are offering their services for Kubernetes managed cluster in their clouds. The cost of such services is almost always a key factor when choosing a vendor, and young companies with negative profits but very big ambitions are forced to give their last money for a cluster that could replace the usual Shared hosting for 5 dollars per month. Let's figure out how to get Managed Kubernetes functionality for small projects quickly and very cheaply.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do companies need their own cluster?
&lt;/h2&gt;

&lt;p&gt;Indeed, as I said, micro-companies don't need all the goodies of k8s: they don't need ultra-high uptime for their services, they don't need a bunch of nodes and ingresses to spread their traffic across, and the desired scale won't arrive tomorrow. What they really need is the potential to move quickly to more powerful hardware that will satisfy their rapidly growing needs, and Kubernetes lets you build the product infrastructure once and easily migrate the ready-made specifications to another cluster, for example a highly available one, as soon as the need arises.&lt;/p&gt;

&lt;p&gt;I think all programmers agree that scalability should be designed in from the beginning, but not everyone thinks about how to realize that scalability from the DevOps point of view. Kubernetes sounds complicated and dangerous, but let me show you how to build your own cluster in 5 minutes for 30 dollars a month. It will fully meet the needs of a small company, can easily be converted into a dev cluster later, and can be jettisoned like a spent rocket stage as soon as the need for an HA cluster with a crew of admins on board appears.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 0. Buy a server
&lt;/h2&gt;

&lt;p&gt;In this article I will build a k8s cluster on a single dedicated server, partitioned into virtual machines, because it is cheap. This approach gives the company the ability to seamlessly scale the product by moving to any other cluster in 30 minutes. If you already need a highly available cluster, rent a few virtual machines instead and skip the first step.&lt;/p&gt;

&lt;p&gt;I don't want to spend too long on this step; the article isn't really about hardware. Here are the minimum requirements for each node, taken from the official documentation of the open-source solution we are going to use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux4aj8f889ukwf90xz46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux4aj8f889ukwf90xz46.png" alt="opensource requirements"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case I managed to rent a dedicated server for 30 US dollars per month:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CPU: Intel® Xeon® Processor Quad Core 2xL5630.
RAM: DDR3 DIMM 4Gb 1333MHz * 6 (24gb RAM).
DISK: 500GB SSD 2.5 Sata3.
OS: Ubuntu 22.04 LTS.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwia7ir1nbx622g0010q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwia7ir1nbx622g0010q.png" alt="screenfetch result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Virtual Machines
&lt;/h2&gt;

&lt;p&gt;If you still decide to rent several virtual machines rather than split one server into parts, skip this step.&lt;/p&gt;

&lt;p&gt;In order to give our cluster proper scaling between nodes (this is the environment we will have when we move to a "grown-up" cluster), let's create virtual machines on our dedicated server.&lt;/p&gt;

&lt;p&gt;I advise doing this with an open-source tool called &lt;strong&gt;Cockpit&lt;/strong&gt;. The tool itself lets you administer the server through a web interface. We also need one of its add-ons, &lt;strong&gt;cockpit-machines&lt;/strong&gt;, which lets you create virtual machines quickly and flexibly. It runs on &lt;strong&gt;QEMU-KVM&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Connect via SSH to our dedicated server and execute the command:&lt;br&gt;
&lt;code&gt;apt-get install cockpit cockpit-machines&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After the installation is complete, open your browser and go to http://your-server-ip:9090.&lt;br&gt;
The Cockpit login and password are the same as for SSH, i.e. your OS user's credentials.&lt;/p&gt;
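&lt;p&gt;If the panel does not respond on port 9090, the web service may simply not be running yet. The Cockpit documentation suggests enabling its socket via systemd; a small sketch for Ubuntu:&lt;/p&gt;

```shell
# Enable and start the Cockpit web service (listens on port 9090)
systemctl enable --now cockpit.socket

# Confirm the socket is active
systemctl status cockpit.socket --no-pager
```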

&lt;p&gt;Go to the Virtual Machines tab and click "Create VM". Specify the virtual machine's name, image, disk size and amount of RAM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Great!&lt;/strong&gt; After the OS installation process is complete, we will have our master node. Do the same thing a couple more times for the two worker nodes.&lt;/p&gt;

&lt;p&gt;We should end up with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rtgd4cj78drmw5fn6wk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rtgd4cj78drmw5fn6wk.png" alt="Cockpit result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then log into each of the VMs through the VNC console and install the SSH server:&lt;br&gt;
&lt;code&gt;apt-get install ssh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Done!&lt;/strong&gt; We now have three working virtual machines that we will use for our cluster.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Configuring the VM
&lt;/h2&gt;

&lt;p&gt;If you thought we were going to edit a bunch of configuration files on each virtual machine in this step, forget it. All we need to do is install a couple of packages and add a bit of sugar. Be careful though: it's easy to get the virtual machines mixed up.&lt;/p&gt;

&lt;p&gt;Let's SSH into the main dedicated server, the one with the public IPv4 address, inside which we just created the 3 virtual machines, and execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install nano
nano /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add 3 lines of the form IP_address machine_name to the very end.&lt;/p&gt;

&lt;p&gt;In my example it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192.168.122.61 master-node
192.168.122.172 worker-node-1
192.168.122.105 worker-node-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we have typed these lines in, copy them (we will need them again later), then press Ctrl+X, Y and Enter to save.&lt;/p&gt;
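&lt;p&gt;If you prefer to script this step instead of editing with nano, the same three lines can be appended in one command; a sketch using my example IPs (substitute your own, and run as root):&lt;/p&gt;

```shell
# Append the node entries to the end of /etc/hosts
printf '%s\n' \
  '192.168.122.61 master-node' \
  '192.168.122.172 worker-node-1' \
  '192.168.122.105 worker-node-2' >> /etc/hosts
```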

&lt;p&gt;Then, we connect to each virtual machine in turn to install additional packages on them.&lt;br&gt;
&lt;code&gt;ssh ubuntu@master-node&lt;/code&gt;&lt;br&gt;
&lt;code&gt;su root&lt;/code&gt;&lt;br&gt;
&lt;code&gt;nano /etc/hosts&lt;/code&gt;&lt;br&gt;
Paste the 3 lines we copied earlier and save again with Ctrl+X, Y, Enter.&lt;/p&gt;

&lt;p&gt;Install the necessary packages for each node of our cluster.&lt;br&gt;
&lt;code&gt;apt-get install conntrack socat&lt;/code&gt;&lt;br&gt;
Now we need to add our ubuntu user to the list of users with sudo access:&lt;br&gt;
&lt;code&gt;nano /etc/sudoers&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After the lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# User privilege specification
root    ALL=(ALL:ALL) ALL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add a line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu  ALL=(ALL:ALL) ALL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save (ctrl+x, y, enter).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Done!&lt;/strong&gt; We have configured the master node; leave it with the exit command. Now repeat the same process on the &lt;strong&gt;other two virtual machines&lt;/strong&gt;: in the end these steps must be &lt;strong&gt;done on ALL nodes&lt;/strong&gt; of our cluster.&lt;/p&gt;
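&lt;p&gt;As a side note: on Ubuntu the same result can be achieved without editing /etc/sudoers by hand, which is also safer, since a typo in that file can lock you out of sudo entirely:&lt;/p&gt;

```shell
# Equivalent to the sudoers edit above: grant sudo rights via group membership
usermod -aG sudo ubuntu
```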

&lt;p&gt;Once the same steps have been done on all three virtual machines, connect to the master node again and enter the password.&lt;br&gt;
&lt;code&gt;ssh ubuntu@master-node&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create a cluster
&lt;/h2&gt;

&lt;p&gt;This step is feared not only by programmers but also by inexperienced DevOps engineers. Creating your own cluster seems difficult, but no: we will do it quickly and very easily, with the help of an open-source project called KubeSphere.&lt;/p&gt;

&lt;p&gt;KubeSphere is a distributed operating system for managing cloud-native applications, using Kubernetes as its kernel. Better still, it practically installs itself in a couple of commands.&lt;/p&gt;

&lt;p&gt;It is an open-source solution with more than 13 thousand stars on GitHub and quite an impressive community. It is also actively used by Chinese companies that &lt;a href="https://www.kubesphere.io/case/" rel="noopener noreferrer"&gt;build large fault-tolerant systems&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We are now the ubuntu user with sudo access, on the master-node, in our home directory (/home/ubuntu). Execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
./kk create config -f config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands download a tool called KubeKey (kk), which will install KubeSphere for us, and generate a cluster config that we will now edit.&lt;/p&gt;

&lt;p&gt;Let's open the freshly generated config.toml:&lt;br&gt;
&lt;code&gt;nano config.toml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I have underlined the parts we are interested in, but feel free to play around with the configuration. KubeSphere is a powerful tool, and you may find exactly the settings your cluster needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym3xz080gwe2nv89mfel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym3xz080gwe2nv89mfel.png" alt="Cluster config"&gt;&lt;/a&gt;&lt;br&gt;
In the &lt;code&gt;name&lt;/code&gt; value, specify the name of our cluster; for this example I will leave &lt;code&gt;sample&lt;/code&gt;.&lt;br&gt;
In the &lt;code&gt;hosts&lt;/code&gt; list, specify our virtual servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  - {name: master, address: 192.168.122.61, internalAddress: 192.168.122.61, user: ubuntu, password: "password"}
  - {name: worker-1, address: 192.168.122.172, internalAddress: 192.168.122.172, user: ubuntu, password: "password"}
  - {name: worker-2, address: 192.168.122.105, internalAddress: 192.168.122.105, user: ubuntu, password: "password"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And just below that, we assign them roles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - worker-1
    - worker-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should end up with something like this (of course with the IP addresses and passwords of your virtual machines):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvej70xwhr13dp6sd4ka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvej70xwhr13dp6sd4ka.png" alt="Config result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save and enjoy: our cluster is configured. All that remains is to bring it up, and that is even easier, just one command:&lt;br&gt;
&lt;code&gt;./kk create cluster -f config.toml --with-kubesphere&lt;/code&gt;&lt;br&gt;
KubeKey will check the cluster nodes and, if everything is OK, ask you to confirm the installation. Type yes and press Enter.&lt;/p&gt;
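&lt;p&gt;Once the installation finishes, a quick sanity check from the master node (KubeKey installs kubectl there) should show all three nodes in the Ready state:&lt;/p&gt;

```shell
# All three nodes should report Ready
kubectl get nodes -o wide

# KubeSphere system pods should be Running (this may take a few minutes)
kubectl get pods -n kubesphere-system
```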

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7p9e0uqdkz2r6meumzd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7p9e0uqdkz2r6meumzd.png" alt="KubeKey check"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Almost done!&lt;/strong&gt; Go grab a coffee or some water; if the cluster hardware is very old and the Internet is slow, you can even hit the gym. In my case the installation took about 5 minutes. As soon as it is complete, you will see the following message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3maifct1i0ymrqw30p1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3maifct1i0ymrqw30p1t.png" alt="All done"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now run off to look at this gorgeous interface, but we'll be met with an error. Why? Because the IP in that message is only reachable from within our virtual machine network. You could set up an external bridge and forward traffic to the virtual machine, but I'm going to make this a lot easier.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4 - Final
&lt;/h2&gt;

&lt;p&gt;Open a terminal and connect via SSH to the "main" dedicated server, the one where we created the 3 virtual machines. Execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install nginx
nano /etc/nginx/sites-enabled/default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We delete everything and insert the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
        listen 30880 default_server;
        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                # "master-node" resolves via the /etc/hosts entries we added earlier
                proxy_pass http://master-node:30880;
        }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
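&lt;p&gt;Before heading to the browser, validate the config and reload nginx:&lt;/p&gt;

```shell
# Check the configuration for syntax errors, then apply it
nginx -t
systemctl reload nginx
```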



&lt;p&gt;Save it, then open the browser at our public IP on port 30880. Enter the login and password from the post-installation message (by default admin:P@88w0rd) and set your new password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiu0xrxyiahve36ub3cv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiu0xrxyiahve36ub3cv.png" alt="KubeSphere interface"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! Here is your own managed Kubernetes, which you can easily set up, hook up GitLab pipelines to, and fill with a whole bunch of your precious YAMLs.&lt;/p&gt;

&lt;p&gt;In the next posts I will try to describe the other processes every company needs: configuring pipelines, deployments, load balancing and certificates, building on the results of this article.&lt;/p&gt;

&lt;p&gt;Links to all resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kubesphere.io/" rel="noopener noreferrer"&gt;KubeSphere - Website.&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.kubesphere.io/docs/v3.3/installing-on-linux/introduction/intro/" rel="noopener noreferrer"&gt;KubeSphere - documentation and requirements to nodes.&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Solving the deployment bottleneck and environment replication problems</title>
      <dc:creator>Sergey Ziryanov</dc:creator>
      <pubDate>Thu, 01 Jun 2023 10:31:04 +0000</pubDate>
      <link>https://dev.to/twelvee/solving-the-deployment-bottleneck-and-environment-replication-problems-3823</link>
      <guid>https://dev.to/twelvee/solving-the-deployment-bottleneck-and-environment-replication-problems-3823</guid>
      <description>&lt;p&gt;Hi, time goes by, k8s and docker have become an integral part of our working environment, but from what I observe many companies still think that the problem of deployment bottleneck can only be solved by a bunch of bash scripts or a separate chat room where everyone should be informed about new deployments.&lt;/p&gt;

&lt;p&gt;What is this deployment bottleneck problem? The easiest way to explain it is with an example, which probably looks familiar to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  An example showing what a bottleneck is
&lt;/h2&gt;

&lt;p&gt;Let’s introduce the company “Good Company”. There are only two people working there, named CTO and CEO. One of them used to work as a senior engineer in the company “Not a Very Good Company” and at one moment, when they met a man with money, the future CEO, they decided to create their own, good company. The CTO knows very well that their company will be successful and the product will be in maximum demand, so he initially develops a microservice architecture for their project. He’s also very familiar with all the new technologies and practices — docker, kubernetes, stdout logs, that sort of thing. And now, a few months later, the guys are ready to show the MVP to their users, which the CEO found during the development.&lt;/p&gt;

&lt;p&gt;Let’s pause at this point and summarize what we have.&lt;/p&gt;

&lt;p&gt;The development pipeline looks like one master branch in each of the 4 microservices that are rolled out in the k8s cluster. This ease of rolling out new features is what allowed them to finish the MVP in just a couple of months. It’s all cool here, the guys are awesome!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxovsrmezql2beqyfz4fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxovsrmezql2beqyfz4fi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project really started to attract new users, and now the company already has 4 backend and 2 frontend developers, 1 QA, a project manager, the CTO and the CEO. Everyone is very motivated, and now they can be called a real IT company! They already have 2 clusters: one for developers and one for users. Their development pipeline looks like this: there are 2 identical branches, master and develop; developers create a feature branch from develop, work their magic, then merge the changes into develop and roll them out to the dev cluster. There QA tests everything, and after an "ok" message in Telegram, develop gets merged into master and the most "powerful" programmer presses the "Deploy to production" button in some gitlab-ci job. Processes! The guys are doing great again!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiby4x0z693hovia6cvl4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiby4x0z693hovia6cvl4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now the company is growing even more: there are more users, who ask for a bunch of new features, and the good guys from Good Company are happy to implement them. Now they have as many as 8 backend engineers, 4 frontend engineers, 2 QA engineers, a project manager, and the same, though less skinny, CEO and CTO. Their clusters work fine, but here's the trouble: users are starting to complain more and more about bugs, and the developers, for some reason, are complaining more and more about their lives. They hold meetings trying to figure out what the problem is, when it would have been enough just to look at how their project is developed.&lt;/p&gt;

&lt;p&gt;8 backend developers work on 8 different tasks, their features are often returned by vigilant QA engineers, and the git history looks really bad. Merges into master produce huge conflicts because of critical bugfixes, which have to be resolved by the most unlucky team member (the one in the picture), rewriting new features along the way and creating even more bugs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1ekm5oddg2xe72irg6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1ekm5oddg2xe72irg6x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So our guys at Good Company have hit the deployment-bottleneck problem: too many people want to roll out their changes at the same time. They had run into it before, but "plugged" it by creating a develop branch and a not-very-scalable git-flow, without knowing what that would lead to.&lt;/p&gt;

&lt;h2&gt;
  
  
  But how to solve this problem?
&lt;/h2&gt;

&lt;p&gt;We have already described the problem; now we need to understand what causes it. Programmers are people (for now), and people make mistakes. When several features with potential bugs land in one branch at once, and you try to fix conflicts in code you are seeing for the first time in your life because a colleague wrote it, while another colleague fixes the bug QA returned to him and overwrites your conflict fixes, your whole git-flow turns into a mess.&lt;/p&gt;

&lt;p&gt;Wait, but you can deploy each feature's branch separately to the dev cluster. Yes, you can. But that won't remove the original problem that made you create the develop branch in the first place: you'll hit it again the moment two people try to roll out the same service with different tags to the same cluster. That is the deployment bottleneck.&lt;/p&gt;

&lt;p&gt;And that is where stage environments (or a chat room on Telegram, where developers “reserve” a free slot to test their feature on the dev cluster) come into play.&lt;/p&gt;

&lt;p&gt;In short, stage environments are a set of services you need for a specific feature, which you can fully test or show to a customer. Now let’s move on to practice.&lt;/p&gt;

&lt;p&gt;First I propose to &lt;strong&gt;set up the following git-flow&lt;/strong&gt;: we create a branch from master, develop the feature in it, then open a merge request into master and merge it if everything is ok. From master we can roll everything out to the dev cluster, and then, by cutting a tag, roll a release out to our users.&lt;/p&gt;

&lt;p&gt;Along with this git-flow we need to &lt;strong&gt;get each service's Docker image built for every branch, and the ability to deploy those images as separate, independent environments.&lt;/strong&gt; As a result we should get: prod runs images with release tags, dev runs images built from master (or simply latest), and every created branch gets its own Docker image.&lt;/p&gt;
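&lt;p&gt;As an illustration (this exact job is not part of the article's setup), a minimal GitLab CI sketch that produces this tagging scheme could look like the following; &lt;code&gt;CI_COMMIT_REF_SLUG&lt;/code&gt; and &lt;code&gt;CI_REGISTRY_IMAGE&lt;/code&gt; are standard GitLab CI variables:&lt;/p&gt;

```yaml
# Sketch: build one image per branch, tagged with the branch slug.
# Release images would be tagged with the git tag instead.
build-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
```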

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq4l8y79zknnql73lbcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq4l8y79zknnql73lbcw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The hardest part is over!
&lt;/h2&gt;

&lt;p&gt;How to roll out these per-branch images correctly and quickly is a complex problem that is solved in different ways, but I decided to gather all the sugar in one place and make a small open-source project that I will use in all my own projects. I named it k8sbox, and it lets you roll out your microservices across your cluster from a single TOML specification.&lt;/p&gt;

&lt;p&gt;After combining the terms and thinking a bit, I arrived at the simplest and most straightforward interface for this specification: we have an environment, which contains boxes, with our applications inside them.&lt;/p&gt;

&lt;p&gt;And this is what the TOML specification looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;id = "${TEST_ENV}" # It could be your ${CI_SLUG} for example
name = "test environment"
namespace = "test"
variables = "${PWD}/examples/environments/.env"

[[boxes]]
type = "helm"
chart = "${PWD}/examples/environments/box1/Chart.yaml"
values = "${PWD}/examples/environments/box1/values.yaml"
name = "first-box-2"
    [[boxes.applications]]
    name = "service-nginx-1"
    chart = "${PWD}/examples/environments/box1/templates/api-nginx-service.yaml"
    [[boxes.applications]]
    name = "deployment-nginx-1"
    chart = "${PWD}/examples/environments/box1/templates/api-nginx-deployment.yaml"

[[boxes]]
type = "helm"
chart = "${PWD}/examples/environments/box2/Chart.yaml"
values = "${PWD}/examples/environments/box2/values.yaml"
name = "second-box-2"
    [[boxes.applications]]
    name = "service-nginx-2"
    chart = "${PWD}/examples/environments/box2/templates/api-nginx-service.yaml"
    [[boxes.applications]]
    name = "deployment-nginx-2"
    chart = "${PWD}/examples/environments/box2/templates/api-nginx-deployment.yaml"

[[boxes]]
type = "helm"
chart = "${PWD}/examples/environments/ingress/Chart.yaml"
name = "third-box"
values = "${PWD}/examples/environments/ingress/values.yaml"
    [[boxes.applications]]
    name = "www-ingress-toml"
    chart = "${PWD}/examples/environments/ingress/templates/ingress.yaml"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All the documentation can be found via the links at the end of this article, but the format should be fairly self-explanatory; just take a look at the example above (it is available in full in the GitHub repository).&lt;/p&gt;

&lt;p&gt;With this spec in place, we can run the k8sbox tool and it will roll the environment out onto your k8s cluster. Something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k8sbox run -f environment.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If our QA finds a bug and we need to reload the environment, we just execute the same run command. It deletes all the previous charts and installs them again, and does it very quickly. Like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42f67699vyl5xqbhsyb8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42f67699vyl5xqbhsyb8.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also see at any time which environments are already rolled out to the cluster, and inspect them in detail, with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k8sbox get environment // list of saved environments
$ k8sbox describe environment {EnvironmentID} // describe the environment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And once your QA engineer gives the OK, we can easily clean our environment out of the cluster by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k8sbox delete -f environment.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All your services are rolled out on YOUR k8s cluster, which means you can configure any parameters you want for them. Another plus is the ready-made Docker images with k8sbox as the entrypoint, so you can easily integrate the tool into any of your CI pipelines.&lt;/p&gt;
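&lt;p&gt;For example, a hypothetical GitLab CI job using that image might look like this; the image tag and entrypoint behaviour here are my assumptions, so check the k8sbox docs for the exact usage:&lt;/p&gt;

```yaml
# Hypothetical sketch: deploy a per-branch environment with the k8sbox image
deploy-review:
  image: twelvee/k8sbox:latest
  script:
    - k8sbox run -f environment.toml
```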

&lt;p&gt;This tool lets you solve your bottleneck problem in a very simple way: split the development of new features and let developers do their magic in parallel, independently of each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.cncf.io/blog/2020/03/23/deployment-bottlenecks-and-how-to-tame-them/" rel="noopener noreferrer"&gt;An article about deployment bottleneck that advises less frequent deployment -_-&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/twelvee/k8sbox" rel="noopener noreferrer"&gt;Link to k8sbox repository&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.k8sbox.run" rel="noopener noreferrer"&gt;Link to k8sbox documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://hub.docker.com/r/twelvee/k8sbox" rel="noopener noreferrer"&gt;Link to dockerhub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
