Feature-branches: Vanilla Kubernetes + Bitbucket pipelines

Introduction

At Dedicatted, we always strive to deliver our best when addressing feature requests from our client teams, especially when it relates to the development process. In today’s example, we will demonstrate how we defined the most effective and functional solution for a client, enabling multiple teams to work independently on feature development in an environment where QA engineers can also test each feature independently, all managed seamlessly through Git.

We are embarking on a journey to explain how we achieved this and how the outcome successfully satisfied our client.

A couple of words about our application and supported platform: the platform serves 500M-1MM requests per month and is under active development. Tech stack: MariaDB, ElasticSearch, Redis, RabbitMQ/Beanstalkd, PHP, React, on-premise, AWS, Kubernetes.

Problem

Our development and QA teams faced a challenge in increasing efficiency and speed. With multiple features being developed in parallel, testing became a bottleneck.

Our git flow looked like this:

As mentioned, we faced a situation where there was a clear bottleneck, and we needed to provide a solution that would resolve this blocker, allowing teams to work independently without depending on each other. The goal was to give QA engineers a clear list of environments, releases, builds, and endpoints to test, while providing developers with defined delivery goals and the ability to deploy and test changes as quickly as possible.

On the other hand, our team was limited by both resources and security policies: we had only one available server within a private infrastructure in a data center. Cloud management solutions were not an option, so we needed a solution that could be deployed on that server to handle the setup and provide clear visibility into the technical workflow. This is when we first considered using Kubernetes.

To achieve this, we decided to use Kubernetes to manage the platform’s environment components in the most efficient way. With simple resource configuration deployments, we gained control over the deployed resources and successfully managed them.

For automating resource deployments, DNS configuration management, and basic environment setup, we chose Bitbucket Pipelines, as the client was already using Bitbucket.

We also established and implemented a new internal development policy for the client’s teams, containing a specific set of rules and guidelines for the new functionality. Essentially, whenever a developer receives a feature-development task in the main backend Git repository, they create a new branch following a specific naming convention that includes the task ID (e.g., "feature/dev-326," where "dev-326" refers to the Jira task ID).
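For example (the task ID here is illustrative), creating and publishing such a branch is simply:

git checkout -b feature/dev-326
git push -u origin feature/dev-326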

Once the branch is created, Bitbucket Pipelines automatically trigger, building backend containers and frontend assets, and uploading fresh artifacts into a private container registry hosted in an on-premises Kubernetes cluster (detailed cluster configuration below).

Simultaneously, the CI/CD job deploys a new namespace in Kubernetes, along with various platform-required resources, such as MariaDB, RabbitMQ, Redis, S3-compatible storage, volumes, Elasticsearch, and both backend and frontend applications. It also configures Cloudflare DNS and creates a new endpoint to point directly to the freshly prepared test platform environment in Kubernetes.

With this link, developers can track their code changes in real time. Each time they push to Git, the applications are dynamically redeployed with the new changes, allowing QA to test as needed in real-time. This solution also enables QA engineers to dynamically build any required environment to test or reproduce bugs and issues.

By deploying each feature to its own isolated environment, we aimed to streamline the testing process and reduce the time it takes to get features into production.

We evaluated multiple solutions before choosing Kubernetes (and some of them were funny in 2024):

  1. Docker Compose: Docker Compose would have been easier to set up for local testing, but it lacked the scalability and automation needed for our production-level infrastructure. Managing resources and networking for multiple branches would quickly become complex and unmanageable.

  2. Dedicated VMs: Another option was to spin up a dedicated VM for each feature branch. While this approach offered isolation, it was cost-inefficient and slow to provision. Managing many VMs would also lead to unnecessary complexity.

  3. Kubernetes: Ultimately, we chose Kubernetes for its resource management, scalability, and namespace isolation. Kubernetes’ integration with Helm also simplified the deployment process, making it easy to deploy each feature branch into its own isolated environment.

  4. FreeBSD Jails (note: our production infrastructure ran on on-premises FreeBSD instances): We also considered using FreeBSD Jails, a lightweight virtualization technology specific to FreeBSD. Jails provide strong isolation with minimal overhead, making them ideal for creating multiple isolated environments on a single host. However, there were several drawbacks:

Limited Ecosystem Support: Unlike Kubernetes, Jails do not have a rich ecosystem of orchestration tools, such as Helm, to manage application deployments and scale dynamically.

Configuration Complexity: Managing networking, security, and resource isolation at scale with Jails would require significant manual configuration compared to the automated and declarative approach offered by Kubernetes.

CI/CD Integration: Bitbucket Pipelines and other modern CI/CD tools do not natively support FreeBSD Jails, making automation less straightforward and requiring custom scripts for deployments.

In summary, Kubernetes provides a more robust, scalable, and flexible solution for managing multiple feature branches in a parallel development environment. It offers better isolation, a richer ecosystem, and stronger community support compared to other options.

Solution

It may seem straightforward, but let’s dive deeper.

Since we decided to use Kubernetes, we faced another challenge: limited resources with no possibility to change or upgrade them. We had to work with a single server that could host a basic, one-node Kubernetes cluster and still meet all our requirements.

Below, you’ll find a step-by-step guide for setting up this cluster on the designated machine, configuring it, deploying Bitbucket Pipelines (CI/CD jobs), and providing the necessary level of automation. This will help achieve the main goals and streamline the development process, making it faster and simpler for the client’s team.

1. Install Kubeadm on the instance

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl   # assumes the Kubernetes apt repository and a container runtime are already set up
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

Note #1: The Pod and Service CIDRs should be larger than /24 and must not overlap with each other.

If you have only one node, make sure to untaint it so it can be used for your workload.
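On a kubeadm cluster this means removing the control-plane taint, for example:

# allow workloads to schedule on the single control-plane node
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
# on older kubeadm versions the taint is named node-role.kubernetes.io/master- instead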

Once the cluster is deployed, test it with simple commands like kubectl get namespaces or kubectl get pods -A to verify that it's running correctly. Once confirmed, let's proceed to the next steps.

Note #2: We will also be using Prometheus Operator, Grafana, and Loki from the official HELM charts to achieve the required level of observability. This setup will cover application metrics, component metrics (such as MySQL, Redis, Elasticsearch, etc.), and logs from any necessary pods/containers. Additionally, we plan to transition to Redis and Elasticsearch operators soon. This part of the configuration is not included here, as we are primarily using the base templates.
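For reference, a minimal sketch of installing that observability stack from the official community charts (release names and the monitoring namespace are placeholders, not our actual values):

# Prometheus Operator and Grafana come bundled in kube-prometheus-stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
# Loki for log aggregation
helm install loki grafana/loki -n monitoring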

2. Install and configure Network Plugin
Network plugins are essential Kubernetes components that provide networking capabilities for pods (but you probably already know that). They handle tasks like IP address assignment, routing, and network policy enforcement. In our case, we needed to pick one of the usual options:

  1. Calico
  2. Flannel
  3. Cilium
  4. Anything else

In our case, we decided to use Calico, as we have extensive experience with its configuration and it offers several advantages. Calico stands out from other network plugins due to its strong BGP support, which enables seamless inter-cluster networking and granular network policy enforcement. Additionally, Calico’s global address space simplifies IP address management across multiple Kubernetes clusters, reducing complexity and improving scalability.

However, for feature branching, Flannel could also be sufficient if you plan to deploy a separate cluster specifically for that purpose.
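As a side note on the network-policy advantage: Calico enforces standard Kubernetes NetworkPolicy resources, so if you want each feature namespace to accept traffic only from itself and from the ingress controller, a minimal sketch looks like this (the namespace names are illustrative, not part of our actual setup):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: dev-326
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # pods from this same namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # and the ingress controller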

How to deploy Calico as the Network Plugin:

  • Download the Calico manifest file:
curl https://docs.projectcalico.org/manifests/calico.yaml -O

Now modify the downloaded calico.yaml to match your custom Pod CIDR. Open calico.yaml in an editor (for example, vim calico.yaml) and:

  1. Find the section that specifies the CIDR block for the pods (it is commented out by default; uncomment it). It should look like this:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
  2. Change it to your CIDR value, then save and close the file.
  3. Apply the manifest:
kubectl apply -f calico.yaml
  4. Restart the CoreDNS deployment:
kubectl rollout restart deployment coredns -n kube-system

3. Install Ingress controller
In our case, we of course decided to use the NGINX Ingress Controller in its default setup with minimal changes.

Install it via Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install my-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
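A quick sanity check after installation: the controller Service is of type LoadBalancer, so its EXTERNAL-IP will stay pending until MetalLB is configured in section 3.1 below:

kubectl get svc -n ingress-nginx   # look for the *-controller Service of type LoadBalancer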

Note #1: We are automating DNS endpoint attachment to the desired Kubernetes service through the externalDNS controller configuration, integrated with CloudFlare for DNS management. This is done using the Kubernetes Ingress resource. The Ingress configuration won’t be shared, as there’s nothing particularly unique about it.
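For completeness, here is a generic example of such an Ingress (not the client's actual configuration; the names, namespace, and host are placeholders, and externalDNS picks the hostname up from spec.rules):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev-326
  annotations:
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"  # optional CloudFlare-specific hint
spec:
  ingressClassName: nginx
  rules:
    - host: dev-326.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80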

Note #2: We also use basic authentication and VPN access for our resources. This is not included in the main guide, as each case is highly specific and often not reusable.

3.1 MetalLB
Another default decision we made was to use MetalLB for endpoints and traffic management.

Since we have a dedicated server from our main vendor, we decided to use MetalLB, which relies on standard routing protocols and is simple to install, configure, and manage.

Below is a guide for its installation, along with some important notes from our journey.

Note: If you are using kube-proxy in IPVS mode, starting from Kubernetes v1.14.2 you need to enable strict ARP mode.

Note: If you are using kube-router as the service proxy, you don't need to enable strict ARP, as it is enabled by default.

You can achieve this by editing the kube-proxy config in the current cluster:

kubectl edit configmap -n kube-system kube-proxy

and set:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

Basically, to install MetalLB, apply the manifest:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
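Note that this manifest only installs the controller; MetalLB will not hand out external IPs until you give it an address pool and an L2 advertisement. A minimal sketch (the address range is a placeholder; use a free range from your data-center subnet):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool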

Ok, good. That covers MetalLB; let's move ahead.

4. CI/CD Automation — Bitbucket

We have successfully built a Kubernetes environment consisting of a one-node cluster using kubeadm and MetalLB. We performed initial tests by manually deploying several YAML files for platform components, such as MariaDB, Elasticsearch, Redis, RabbitMQ, as well as default data backup and restore scripts. Additionally, we deployed the application components (front-end and back-end) using the latest version of the code from the Git repository.

Now, it’s time to automate the deployment process for new environments, triggered by the creation of a new branch that follows a specific naming convention.

Note #1:
Sharing HELM Chart configuration files is unnecessary as they contain no unique configurations.
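For orientation only, the relative paths used in the scripts below imply a chart layout roughly like this (reconstructed from the scripts, not the actual repository):

helm/
├── Dockerfile             # backend image built by build_feature.sh
├── deploy_feature.sh
├── DB/                    # MariaDB chart
├── ES/                    # Elasticsearch chart
├── apps/                  # front-end and back-end chart (values.yaml holds image and ingress host)
├── beanstalk/
├── beanstalkd-console/
├── kibana/
└── redis/                 # values.yaml for the Bitnami Redis chart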

CI/CD Job

Now, it’s time to prepare our first and primary CI/CD pipeline, which will handle the core logic.

The pipeline will primarily sync the required HELM charts from a defined repository and execute a script on a remote server via SSH, using predefined variables based on branch naming conventions, among other criteria.

The code will include comments with detailed descriptions to clarify any unknown fields:

image: base-image

definitions:
 steps:
   - step: &move-helm-chart-to-k8s-instance # Rsync helm charts and get the last available versions
       name: Deploy feature branches
       deployment: Feature
       script:
         - pipe: atlassian/rsync-deploy:0.4.4
           variables:
             USER: "$USER"
             SERVER: "$SERVER"
             SSH_PORT: "$SERVER_PORT"
             REMOTE_PATH: "$REMOTE_PATH_RSYNC"
             LOCAL_PATH: "helm/"
   - step: &deploy-feature-branch # Deploy, configure and setup new feature-branch environment for this specific env
       name: Deploy feature branches
       deployment: Feature
       size: 4x
       script: # please be aware of these variables; after deployment you'll get all the needed endpoints and passwords
         - BRANCH_NAME="${BITBUCKET_BRANCH}"
         - CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
         - echo "domain - ${CLEANED_BRANCH_NAME}.example.com"
         - echo "kibana domain - kibana.${CLEANED_BRANCH_NAME}.example.com"
         - apt-get update
         - apt-get install docker -y
         - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin docker.example.com
         - chmod +x build_feature.sh # script description below
         - bash build_feature.sh
         - pipe: atlassian/ssh-run:0.3.0 
           variables:
             SSH_USER: "$USER"
             SERVER: "$SERVER_FEATURE"
             PORT: "$SERVER_PORT"
             COMMAND: |
               export BRANCH_NAME="${BITBUCKET_BRANCH}"
               export CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
               export COMMIT_HASH="${BITBUCKET_COMMIT}"
                bash "$REMOTE_PATH_RSYNC"/helm/deploy_feature.sh
pipelines: # configuration of the main rule for this pipeline to run, in our case with specific naming convention
 branches:
   feature/*:
     - step: *move-helm-chart-to-k8s-instance
     - step: *deploy-feature-branch

As mentioned earlier, we are using Nexus as our primary artifacts and container registry. It is deployed on the same Kubernetes cluster that we are currently working on.

If you’re using Docker Hub or another registry, you’ll need to update the DNS for your Docker registry within the pipeline, specifically in this line:

echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin docker.example.com

Now, let’s clarify what is inside build_feature.sh and deploy_feature.sh scripts before moving ahead.

build_feature.sh:

In this step, we are passing all the necessary environment variables from the CI/CD pipeline. These variables include those needed for deploying environments, managing naming conventions, and configuring other settings. Below is the file structure along with an explanation:

#!/bin/bash
BRANCH_NAME="${BITBUCKET_BRANCH}"  # Get the branch name from Bitbucket
CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
CLEANED_BRANCH_NAME="${CLEANED_BRANCH_NAME,,}"
COMMIT_HASH="${BITBUCKET_COMMIT}"
NAMESPACE="${CLEANED_BRANCH_NAME}"
# Update the .env.fb file with branch-specific variables
sed -i "s|APP_URL=.*|APP_URL=https://${CLEANED_BRANCH_NAME}.example.com|" ".env.fb"
cp .env.fb .env
# Build and push the Docker image
docker build --build-arg WWWUSER=1000 --build-arg WWWGROUP=1000 --build-arg WWWDOMAIN=${CLEANED_BRANCH_NAME}.example.com --build-arg SSH_PRIVATE_KEY="$SSH_PRIVATE_KEY" --no-cache -t docker.example.com/app-${CLEANED_BRANCH_NAME}:${COMMIT_HASH} -f helm/Dockerfile .

docker push docker.example.com/app-${CLEANED_BRANCH_NAME}:${COMMIT_HASH}

Keep in mind that we are calling this script directly within the pipeline and providing the necessary inputs to proceed with the process. While the script is running, we update specific variables in our application configuration file (.env.fb) to prepare it for running in the Kubernetes cluster with a new DNS endpoint. After updating the configuration, we build the backend and upload it to Nexus.

deploy_feature.sh:

#!/bin/bash
# Variables from Bitbucket Pipeline
BRANCH_NAME=$BRANCH_NAME  # Set the feature-branch name from Bitbucket inputs
# Clean the branch name by removing 'feature/'
CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
COMMIT_HASH=$COMMIT_HASH
CLEANED_BRANCH_NAME="${CLEANED_BRANCH_NAME,,}"
# Namespace based on the cleaned branch name
NAMESPACE="${CLEANED_BRANCH_NAME}"
# Define the path to the Helm values file for the app where we need to change the image
APP_VALUES_FILE="../apps/values.yaml"
# New Docker image path based on Nexus registry and build information
DOCKER_IMAGE_REPOSITORY="docker.example.com/app-${CLEANED_BRANCH_NAME}"
DOCKER_IMAGE_TAG="${COMMIT_HASH}"
# Check if the namespace exists
NAMESPACE_EXIST=$(kubectl get namespace | grep -w "${NAMESPACE}" || true)
# Modify ingress host and Docker image in the Helm values file
update_helm_values() {
  echo "Updating ingress host and Docker image for app in ${APP_VALUES_FILE}"
  # Update ingress host based on branch name
  sed -i "s/host: .*/host: ${CLEANED_BRANCH_NAME}.example.com/" "$APP_VALUES_FILE"
  sed -i "s/WWWDOMAIN: .*/WWWDOMAIN: ${CLEANED_BRANCH_NAME}.example.com/" "$APP_VALUES_FILE"
  sed -i "s|repository: .*|repository: ${DOCKER_IMAGE_REPOSITORY}|" "$APP_VALUES_FILE"
  sed -i "s|tag: .*|tag: ${DOCKER_IMAGE_TAG}|" "$APP_VALUES_FILE"
  sed -i "s/host: .*/host: kibana-${CLEANED_BRANCH_NAME}.example.com/" "../kibana/values.yaml" 
  sed -i "s/host: .*/host: beanstalkd-${CLEANED_BRANCH_NAME}.example.com/" "../beanstalkd-console/values.yaml"
}
# Function to deploy all platform environment components (such as DB, Redis, Elasticsearch, Mailpit, Main platform front-end and back-end)
deploy_all_services() {
  echo "Deploying all services in namespace ${NAMESPACE}"
  kubectl create ns ${NAMESPACE}
  # Deploy the database
  helm upgrade --install db ../helm/DB -n "${NAMESPACE}"
  POD_NAME=$(kubectl get pods -n "${NAMESPACE}" -l app=mariadb -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME}" -n "${NAMESPACE}" --timeout=60s
  sleep 50;
  DUMP_FILE="/home/ubuntu/dev_dump.sql"
  kubectl cp "${DUMP_FILE}" "${NAMESPACE}/${POD_NAME}:/tmp/dump.sql"
  kubectl exec -n "${NAMESPACE}" "${POD_NAME}" -- /bin/bash -c "mysql -u root -proot_password < /tmp/dump.sql"
  # Deploy Beanstalkd
  helm upgrade --install beanstalk ../beanstalk -n "${NAMESPACE}"
  # Deploy Redis
  helm upgrade --install redis bitnami/redis -f ../redis/values.yaml -n "${NAMESPACE}"
  # Deploy Elasticsearch
  helm upgrade --install es ../ES -n "${NAMESPACE}"
  POD_NAME_APP=$(kubectl get pods -n "${NAMESPACE}" -l app=elasticsearch -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME_APP}" -n "${NAMESPACE}" --timeout=60s
  helm upgrade --install app ../apps -n "${NAMESPACE}" -f "$APP_VALUES_FILE"
  POD_NAME_APP=$(kubectl get pods -n "${NAMESPACE}" -l app=app -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME_APP}" -n "${NAMESPACE}" --timeout=60s
  helm install kibana ../kibana -n "${NAMESPACE}" -f ../kibana/values.yaml
  helm install beanstalkd-console ../beanstalkd-console -n "${NAMESPACE}" -f ../beanstalkd-console/values.yaml
}
# Function to update only the application image
update_app_image() {
  echo "Updating the application image in namespace ${NAMESPACE}"
  # Update only the application with the new image
  helm upgrade app ../apps -n "${NAMESPACE}" -f "$APP_VALUES_FILE"
  POD_NAME_APP=$(kubectl get pods -n "${NAMESPACE}" -l app=app -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME_APP}" -n "${NAMESPACE}" --timeout=60s
}
# If the namespace doesn't exist, create it and deploy all services
if [ -z "$NAMESPACE_EXIST" ]; then
  echo "Namespace ${NAMESPACE} does not exist. Creating namespace and deploying services."
  # Update the Helm values with the new ingress host and Docker image
  update_helm_values
  # Deploy all services
  deploy_all_services
else
  echo "Namespace ${NAMESPACE} already exists. Updating only the application image."
  # Update the Helm values with the new Docker image (ingress does not need to be changed)
  update_helm_values
  # Update only the application image
  update_app_image
fi
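Once a feature pipeline has finished, a quick way to sanity-check the resulting environment (the namespace equals the cleaned branch name, e.g. dev-326):

# list the Helm releases and workloads in the feature namespace
helm list -n dev-326
kubectl get pods,ingress -n dev-326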

After committing the pipeline described above, preparing the Kubernetes cluster, and setting up your Helm applications, make sure to double-check everything. Once this is done, you can begin working with feature-branch deployments in Kubernetes.

Below are some examples of running pipelines and outputs from Bitbucket.

Congratulations.

You now have a ready-to-use workflow.

However, there’s one final task to complete the flow: adding automation to remove Kubernetes resources (environments) once a feature branch is merged with the target branch.

At this stage, we found that Bitbucket Pipelines provides no built-in trigger for the "merge" event.

We will use a script for this purpose and run it as a cron job, which will execute every 30 minutes. Let’s call it branch-monitor.sh:

#!/bin/bash

# Variables
REPO_URL="https://api.bitbucket.org/2.0/repositories/your-workspace/your-repo"  # Bitbucket API URL
TARGET_BRANCH="main"  # The branch where feature branches are merged into (e.g., 'main')
USERNAME="your-username"  # Bitbucket username
PASSWORD="your-password"  # Bitbucket password or access token

# Function to get all branches with "feature/" prefix
get_feature_branches() {
 # Fetch all branches and filter for "feature/" branches using Bitbucket API
 BRANCHES=$(curl -u $USERNAME:$PASSWORD -s "$REPO_URL/refs/branches" | jq -r '.values[] | select(.name | startswith("feature/")) | .name')
 echo "$BRANCHES"
}

# Function to check if a branch has been merged
is_branch_merged() {
 BRANCH_TO_CHECK=$1
 # Check the merge status using Bitbucket API
 RESPONSE=$(curl -u $USERNAME:$PASSWORD -s "$REPO_URL/merge-base?include=$TARGET_BRANCH&exclude=$BRANCH_TO_CHECK")

 # Check if the response contains a valid merge status (merged)
 if [[ "$RESPONSE" == *"error"* ]]; then
   echo "Branch $BRANCH_TO_CHECK is NOT merged into $TARGET_BRANCH."
   return 1
 else
   echo "Branch $BRANCH_TO_CHECK is merged into $TARGET_BRANCH."
   return 0
 fi
}

# Function to delete Kubernetes namespace
delete_namespace() {
 NAMESPACE=$1
 echo "Deleting namespace $NAMESPACE in Kubernetes..."
 kubectl delete namespace $NAMESPACE

 if [ $? -eq 0 ]; then
   echo "Namespace $NAMESPACE deleted successfully."
 else
   echo "Failed to delete namespace $NAMESPACE."
 fi
}

# Main logic: Loop through all feature branches
BRANCHES=$(get_feature_branches)

for BRANCH in $BRANCHES; do
 NAMESPACE=${BRANCH#feature/}  # Remove 'feature/' from branch name to get the namespace name
  echo "Checking branch $BRANCH (Namespace: $NAMESPACE)..."

 # Check if the branch has been merged
 if is_branch_merged $BRANCH; then
   # If merged, delete the corresponding namespace
   delete_namespace $NAMESPACE
 else
   echo "No action taken for $NAMESPACE."
 fi
done
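To run the monitor every 30 minutes, a crontab entry along these lines is enough (the script path and log location are placeholders):

*/30 * * * * /opt/scripts/branch-monitor.sh >> /var/log/branch-monitor.log 2>&1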

Now, each time developers deploy a branch with a specific naming convention, such as “feature/dev-***”, a new environment will automatically be deployed in the Kubernetes cluster. Developers will have direct access through a separate private DNS endpoint, allowing them to test the newly added functionality. Additionally, once the branch is merged into “main,” the corresponding namespace in Kubernetes, along with all recently deployed resources, will be removed.

Conclusion

That’s it, dear Engineers!
We understand that there may be some complications when implementing our approach, but these stem from the specific requirements of the project. Some adjustments will therefore be necessary to make it effective in your own environment. If you need clarification on anything, our engineering team is available to answer your questions in the comments and assist with any issues.

Let’s move forward and summarize the goals we achieved with the implementation of the feature-branching flow and how it has impacted the development process:

1. New Development & Testing Approach:
Our solution enables the customer’s team to automate the preparation of testing environments. Developers and QA engineers can collaborate more effectively by testing and resolving issues in our Kubernetes test environment. Each feature/release candidate is deployed with fully automated CI/CD pipelines, creating separate environments with dedicated DNS endpoints and secure authentication. This allows for faster testing and enables the team to work on multiple features in parallel. Previously, the team had only one testing environment, which required manual code deployments and lacked automation.

2. Enhanced Quality:
Continuous testing and quality gates ensure that only high-quality code is released.

3. Business Metrics & Impact:
Key metrics, such as the time to deliver a feature to market and the time-to-test, have been reduced by a factor of four.

4. Clear Process and Documentation:
Developers now have a simple, step-by-step guide along with additional documentation describing the entire process, including how to monitor application metrics, logs, and more.

After finalizing our tests and resolving a few minor issues, the customer’s teams actively began working with the new solution. Over time, we identified a few non-critical issues with Kubernetes networking and MetalLB configuration, but these were quickly resolved.

Currently, our developers and QA teams work with 6 to 14 separate environments in parallel on a daily basis, and the number of active environments is expected to grow rapidly. They report that everything is working well and fully meets their needs. We are already planning to add additional resources to the cluster when needed and provide further automation to simplify environment management.

DEDICATTED Blog | Site | LinkedIn

Authors & BIOs:

George Levytskyy
George Levytskyy (Heorhii Levytskyy) — Head of DevOps and SRE at Dedicatted. With over 8 years of experience in DevOps, Cloud Architecture and cost optimization, network administration, cybersecurity, and high-load infrastructure design, he has successfully managed the delivery of more than 30 projects in recent years. Proficient in AWS Cloud, Kubernetes, GitOps practices, Python programming, and cybersecurity. In his free time, George enjoys playing padel, mentoring and lecturing, traveling, hiking, and gaming on his Steam Deck, especially during travel.

Bohdan Mukovozov
Bohdan Mukovozov is a skilled IT professional with over 4 years of experience in DevOps. Known for a strong analytical approach and a problem-solving mindset, Bohdan has successfully implemented solutions that enhance system efficiency, security, and user experience. Proficient in AWS cloud, CI/CD, and Kubernetes, he is passionate about leveraging technology to drive innovation and streamline processes. When not working, Bohdan enjoys keeping up with tech trends and playing video games.
