<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dedicatted</title>
    <description>The latest articles on DEV Community by Dedicatted (@dedicatted).</description>
    <link>https://dev.to/dedicatted</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3429638%2Fba80cf8c-b539-4543-bcdd-6a16d773db70.jpg</url>
      <title>DEV Community: Dedicatted</title>
      <link>https://dev.to/dedicatted</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dedicatted"/>
    <language>en</language>
    <item>
      <title>Secure ArgoCD Multi-Cluster Deployment in AWS EKS with IRSA</title>
      <dc:creator>Dedicatted</dc:creator>
      <pubDate>Tue, 26 Aug 2025 14:49:05 +0000</pubDate>
      <link>https://dev.to/dedicatted/secure-argocd-multi-cluster-deployment-in-aws-eks-with-irsa-36mj</link>
      <guid>https://dev.to/dedicatted/secure-argocd-multi-cluster-deployment-in-aws-eks-with-irsa-36mj</guid>
      <description>&lt;p&gt;Contents:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Problem&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;AWS Multi-Account infrastructure with 2 EKS clusters&lt;/li&gt;
&lt;li&gt;ArgoCD runs in cluster 1 and also manages cluster 2&lt;/li&gt;
&lt;li&gt;Need for secure access to the second cluster&lt;/li&gt;
&lt;li&gt;Diagram&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. Challenge&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No peering and no communication allowed between two clusters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4. Solution&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM OIDC provider configured for the cluster&lt;/li&gt;
&lt;li&gt;Implement access via IAM roles, IRSA and EKS IAM Access Entries&lt;/li&gt;
&lt;li&gt;IAM roles and policies for the first and second accounts&lt;/li&gt;
&lt;li&gt;Configure ArgoCD SA&lt;/li&gt;
&lt;li&gt;Configure ArgoCD helm values&lt;/li&gt;
&lt;li&gt;Verify access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5. Conclusion&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article, we'll explore how to set up multi-cluster access for ArgoCD via IAM Roles for Service Accounts (IRSA): in other words, how to securely enable ArgoCD to manage multiple EKS clusters across AWS accounts.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Why this matters: As organizations scale their Kubernetes workloads, GitOps needs to scale with them. ArgoCD + IRSA provides a secure, credential-free way to manage many clusters from a single control plane.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;Adopting a multi-account strategy in AWS - separating Production and Non-Production environments - is a widely recommended approach that strengthens security, simplifies resource management, and enables more granular access control. Our infrastructure followed this pattern, with each environment hosted in its own AWS account and running its own EKS cluster.&lt;/p&gt;

&lt;p&gt;Initially, we had a single EKS cluster where ArgoCD was installed and operating in in-cluster mode, deploying and managing workloads within the same cluster. However, to strengthen isolation and follow AWS security best practices, we migrated to a multi-account setup. Each environment runs its own EKS cluster in a separate AWS account — introducing the need for secure cross-cluster, cross-account deployment.&lt;/p&gt;

&lt;p&gt;This presented a new challenge: &lt;strong&gt;how to securely enable ArgoCD, residing in Cluster 1 (the internally managed management cluster), to manage deployments in Cluster 2 (e.g., Production)&lt;/strong&gt;. The need for cross-cluster, cross-account deployment introduced several security and configuration considerations, particularly around authentication, access control, and network reachability.&lt;/p&gt;

&lt;p&gt;Our goal was to implement this multi-cluster management securely, ensuring that ArgoCD could interact with the second cluster without compromising the isolation and integrity of either environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7yotn4kjxqktd2ihabw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7yotn4kjxqktd2ihabw.png" alt="how to securely enable ArgoCD, residing in Cluster 1 (e.g., Production (managed internal cluster)), to manage deployments in Cluster 2 (e.g., Production)." width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;This setup requires:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;IRSA enabled on your ArgoCD EKS cluster&lt;/li&gt;
&lt;li&gt;An IAM role (the "management role") for your ArgoCD EKS cluster with an appropriate trust policy and permission policies&lt;/li&gt;
&lt;li&gt;A role for each cluster being added to ArgoCD that is assumable by the ArgoCD management role&lt;/li&gt;
&lt;li&gt;An access entry within each EKS cluster added to ArgoCD that gives the cluster's role RBAC permissions to perform actions within the cluster&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1. Verify that IRSA is enabled on your main (management) EKS cluster
&lt;/h2&gt;

&lt;p&gt;Retrieve your cluster’s OIDC issuer ID and store it in a variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cluster_name=&amp;lt;my-cluster&amp;gt;
oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
echo $oidc_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Determine whether an IAM OIDC provider with your cluster’s issuer ID is already in your account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If output is returned, then you already have an IAM OIDC provider for your cluster and you can skip the next step. If no output is returned, then you must create an IAM OIDC provider for your cluster.&lt;/p&gt;
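&lt;p&gt;If you need to create the provider, eksctl can do it in one step (a sketch, assuming eksctl is installed and &lt;code&gt;$cluster_name&lt;/code&gt; is set as above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;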

&lt;h2&gt;
  
  
  Step 2. Create the ArgoCD Management role and ArgoCD Deployer role
&lt;/h2&gt;

&lt;p&gt;The role created for Argo CD (the "management role") will need to have a trust policy suitable for assumption by certain Argo CD Service Accounts and by itself.&lt;/p&gt;

&lt;p&gt;The service accounts that need to assume this role are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;argocd-application-controller&lt;/li&gt;
&lt;li&gt;argocd-applicationset-controller&lt;/li&gt;
&lt;li&gt;argocd-server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we create a role &lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT1_ID&amp;gt;:role/argocd-manager&lt;/code&gt; for this purpose, the following is an example trust policy suitable for this need. Ensure that the Argo CD cluster has an IAM OIDC provider configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foflejdbrdnfaadow9ons.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foflejdbrdnfaadow9ons.png" alt="ArgoCD Management role and ArgoCD Deployer role" width="800" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get the OIDC provider URL and the cluster certificate authority from EKS management in the AWS Management Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp2o3awkr2zubc1amov1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp2o3awkr2zubc1amov1.png" alt="OIDC provider URL and Cluster Certificate authority from EKS management in AWS Management Console" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Argo CD management role (&lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT1_ID&amp;gt;:role/argocd-manager&lt;/code&gt; in our example) additionally needs to be allowed to assume a role for each cluster added to Argo CD.&lt;/p&gt;

&lt;p&gt;As stated, each EKS cluster added to Argo CD should have its corresponding role. This role should not have any permission policies. Instead, it will be used to authenticate against the EKS cluster's API. The Argo CD management role assumes this role, and calls the AWS API to get an auth token. That token is used when connecting to the added cluster's API endpoint.&lt;/p&gt;

&lt;p&gt;To allow a cluster to be added to Argo CD, set the argocd-deployer role's trust policy to permit the Argo CD management role to assume it. Granting the management role assume-role permission above is not sufficient on its own; the assumption must also be permitted on the deployer role's side, via its trust policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y8xoqhk2aegiyvsa1jy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y8xoqhk2aegiyvsa1jy.png" alt="grant access for the argocd-deployer role so a cluster can be added to Argo CD" width="511" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And after that update the permission policy of the argocd-manager role to include the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf7vl5tbtdkkys3t439q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf7vl5tbtdkkys3t439q.png" alt="permission policy of the argocd-manager role" width="538" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3. Configure Cluster Credentials for the second cluster and modify argocd-cm
&lt;/h2&gt;

&lt;p&gt;Now that everything has been configured, add these values to your ArgoCD ConfigMap or (as in our case) update the ArgoCD Helm chart values to include this configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5i6q5dhtus8fjx15foi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5i6q5dhtus8fjx15foi1.png" alt="Configure Cluster Credentials for the second cluster and modify argocd-cm" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that the CA data has been stored securely in the SSM Parameter Store and is fetched via the &lt;code&gt;aws_ssm_parameter&lt;/code&gt; data source.&lt;/p&gt;
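&lt;p&gt;For reference, the equivalent declarative cluster secret follows Argo CD's documented &lt;code&gt;awsAuthConfig&lt;/code&gt; format. This is a sketch: the cluster name, API endpoint, role ARN, and CA data below are placeholders for your own values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: cluster-2
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: cluster-2
  server: https://CLUSTER_2_API_ENDPOINT
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "cluster-2",
        "roleArn": "arn:aws:iam::AWS_ACCOUNT2_ID:role/argocd-deployer"
      },
      "tlsClientConfig": {
        "caData": "CA_DATA_FROM_SSM"
      }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;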

&lt;h2&gt;
  
  
  Step 4. Add the EKS Access Entry for each cluster
&lt;/h2&gt;

&lt;p&gt;To finalize the connection, each EKS cluster added to Argo CD requires an access entry that gives the corresponding role RBAC permissions to perform actions within the cluster. You can create the access entries manually via EKS management in the AWS Management Console, or via Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdpmd2z2do4lckza9ja6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdpmd2z2do4lckza9ja6.png" alt="Access Entry within each EKS cluster" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create an access entry for each role, specifying the role ARN in the Principal field, i.e. &lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT1_ID&amp;gt;:role/argocd-manager&lt;/code&gt; for cluster 2, and &lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT2_ID&amp;gt;:role/argocd-deployer&lt;/code&gt; for cluster 1.&lt;/p&gt;

&lt;p&gt;Be sure to add AmazonEKSClusterAdminPolicy in the next window using the “Add policy” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyyl5a0wiugbk90ipyzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyyl5a0wiugbk90ipyzb.png" alt="AmazonEKSClusterAdminPolicy" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5. Finalizing &amp;amp; verification
&lt;/h2&gt;

&lt;p&gt;After changing the ArgoCD ConfigMap and triggering a Terraform run, make sure that everything has been updated within the ArgoCD Helm deployment. Verify that the ArgoCD service accounts have been annotated with the new IAM role ARN.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpw949kmk6dz27c3hmuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpw949kmk6dz27c3hmuu.png" alt="ArgoCD CM and triggering Terraform run" width="661" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once everything has been applied correctly, restart the ArgoCD deployments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout restart deployment argo-cd-argocd-server -n argocd
kubectl rollout restart deployment argo-cd-argocd-applicationset-controller -n argocd
kubectl rollout restart deployment argo-cd-argocd-application-controller -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify inter-cluster connectivity, open the ArgoCD UI and navigate to the Clusters panel. There you can confirm that the cluster credentials are configured correctly and check the connection status; if something goes wrong, you will get a concise error message to help with troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfx4rug5bwtox5a9v7uk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfx4rug5bwtox5a9v7uk.png" alt="inter-cluster connectivity" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the CLI, the &lt;code&gt;argocd cluster list&lt;/code&gt; output looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv47o4sds59jnza3cmvpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv47o4sds59jnza3cmvpf.png" alt="cluster list " width="800" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the connection status is Successful, you can now deploy your applications' resources to the second cluster! Reference the cluster in the destination configuration of your applications like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0v5pkjzznd6doxciyty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0v5pkjzznd6doxciyty.png" alt="cluster in a destination configuration of your application" width="683" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This concludes the tutorial! Be sure to check out the repository containing all of the source code shown in this article: explore the full repo &amp;amp; Terraform examples on &lt;a href="https://github.com/dedicatted/devops-tech/tree/main/argocd/Multi-cluster-management-IRSA" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Managing multiple Amazon EKS clusters with ArgoCD using IAM Roles for Service Accounts (IRSA) offers a secure, scalable, and cloud-native approach to continuous deployment in AWS environments. By leveraging IRSA, you ensure fine-grained access control without hardcoding AWS credentials, aligning with best practices for cloud security and governance. &lt;/p&gt;

&lt;p&gt;This setup enables ArgoCD to seamlessly authenticate with and deploy to multiple clusters, even across different AWS accounts, while keeping operations centralized and auditable. By combining ArgoCD with IRSA and IAM-based access control, teams can scale GitOps securely across environments, with centralized control, no static credentials, and full auditability. If you're building a secure GitOps platform on AWS, this architecture is a proven foundation.&lt;/p&gt;

&lt;p&gt;If you have any thoughts, questions or issues, feel free to share them in the comments. Let’s make our clusters secure and easy-to-manage!&lt;/p&gt;

&lt;p&gt;Scaling GitOps across AWS accounts? Let’s make it effortless. &lt;br&gt;
Talk to &lt;a href="https://bit.ly/3HnzDD2" rel="noopener noreferrer"&gt;Dedicatted&lt;/a&gt; — your cloud, our mission.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authors:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/georgethegreat-ua/" rel="noopener noreferrer"&gt;George Levytskyy&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/jedichchch/" rel="noopener noreferrer"&gt;Oleksandr Yudakov &lt;/a&gt;&lt;/p&gt;

</description>
      <category>octopusdeploy</category>
      <category>aws</category>
      <category>multicluster</category>
      <category>devops</category>
    </item>
    <item>
      <title>Feature-branches: Vanilla Kubernetes + Bitbucket pipelines</title>
      <dc:creator>Dedicatted</dc:creator>
      <pubDate>Tue, 12 Aug 2025 14:00:20 +0000</pubDate>
      <link>https://dev.to/dedicatted/feature-branches-vanilla-kubernetes-bitbucket-pipelines-3p5d</link>
      <guid>https://dev.to/dedicatted/feature-branches-vanilla-kubernetes-bitbucket-pipelines-3p5d</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://bit.ly/45NedII" rel="noopener noreferrer"&gt;Dedicatted&lt;/a&gt;, we always strive to deliver our best when addressing feature requests from our client teams, especially when it relates to the development process. In today’s example, we will demonstrate how we defined the most effective and functional solution for a client, enabling multiple teams to work independently on feature development in an environment where QA engineers can also test each feature independently, all managed seamlessly through Git.&lt;/p&gt;

&lt;p&gt;We are embarking on a journey to explain how we achieved this and how the outcome successfully satisfied our client.&lt;/p&gt;

&lt;p&gt;A couple of words about our application and the supported platform: it serves 500M-1MM requests on a monthly basis, so it is under active development. Tech stack: MariaDB, ElasticSearch, Redis, RabbitMQ/Beanstalkd, PHP, React, on-premise, AWS, Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;Our development and QA teams faced a challenge in increasing efficiency and speed. With multiple features being developed in parallel, testing became a bottleneck.&lt;/p&gt;

&lt;p&gt;Our git flow looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjvm4ms43jdvio5egcfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjvm4ms43jdvio5egcfa.png" alt=" " width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As mentioned, we faced a situation where there was a clear bottleneck, and we needed to provide a solution that would resolve this blocker, allowing teams to work independently without depending on each other. The goal was to give QA engineers a clear list of environments, releases, builds, and endpoints to test, while providing developers with defined delivery goals and the ability to deploy and test changes as quickly as possible.&lt;/p&gt;

&lt;p&gt;On the other hand, our team was limited by both resources and security policies: we had only one available server within a private infrastructure in a data center. Cloud management solutions were not an option, so we needed a solution that could be deployed on that server to handle the setup and provide clear visibility into the technical workflow. This is when we first considered using Kubernetes.&lt;/p&gt;

&lt;p&gt;To achieve this, we decided to use Kubernetes to manage the platform’s environment components in the most efficient way. With simple resource configuration deployments, we gained control over the deployed resources and successfully managed them.&lt;/p&gt;

&lt;p&gt;For automating resource deployments, DNS configuration management, and basic environment setup, we chose Bitbucket Pipelines, as the client was already using Bitbucket.&lt;/p&gt;

&lt;p&gt;We also established and implemented a new internal development policy for the client’s teams, containing a specific set of rules and guidelines for the new functionality. Essentially, whenever a developer receives a feature-development task in the main backend Git repository, they will create a new branch following a specific naming convention that includes the task ID (e.g., “feature-dev-326,” where “dev-326” refers to the Jira task ID).&lt;/p&gt;

&lt;p&gt;Once the branch is created, Bitbucket Pipelines automatically trigger, building backend containers and frontend assets, and uploading fresh artifacts into a private container registry hosted in an on-premises Kubernetes cluster (detailed cluster configuration below).&lt;/p&gt;

&lt;p&gt;Simultaneously, the CI/CD job deploys a new namespace in Kubernetes, along with various platform-required resources, such as MariaDB, RabbitMQ, Redis, S3-compatible storage, volumes, Elasticsearch, and both backend and frontend applications. It also configures Cloudflare DNS and creates a new endpoint to point directly to the freshly prepared test platform environment in Kubernetes.&lt;/p&gt;

&lt;p&gt;With this link, developers can track their code changes in real time. Each time they push to Git, the applications are dynamically redeployed with the new changes, allowing QA to test as needed in real-time. This solution also enables QA engineers to dynamically build any required environment to test or reproduce bugs and issues.&lt;/p&gt;

&lt;p&gt;By deploying each feature to its own isolated environment, we aimed to streamline the testing process and reduce the time it takes to get features into production.&lt;/p&gt;
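&lt;p&gt;The branch-triggered flow described above can be sketched in &lt;code&gt;bitbucket-pipelines.yml&lt;/code&gt; roughly like this. Note that the registry host, chart path, and step details are illustrative, not our exact configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipelines:
  branches:
    'feature-*':
      - step:
          name: Build and push artifacts
          script:
            - docker build -t registry.example.internal/backend:$BITBUCKET_BRANCH .
            - docker push registry.example.internal/backend:$BITBUCKET_BRANCH
      - step:
          name: Deploy feature environment
          script:
            # One namespace per feature branch for isolation
            - kubectl create namespace $BITBUCKET_BRANCH --dry-run=client -o yaml | kubectl apply -f -
            - helm upgrade --install $BITBUCKET_BRANCH ./chart --namespace $BITBUCKET_BRANCH --set image.tag=$BITBUCKET_BRANCH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;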

&lt;p&gt;We evaluated multiple solutions before choosing Kubernetes (and some of them were funny in 2024):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt;: Docker Compose would have been easier to set up for local testing, but it lacked the scalability and automation needed for our production-level infrastructure. Managing resources and networking for multiple branches would quickly become complex and unmanageable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dedicated VMs&lt;/strong&gt;: Another option was to spin up a dedicated VM for each feature branch. While this approach offered isolation, it was cost-inefficient and slow to provision. Managing many VMs would also lead to unnecessary complexity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;: Ultimately, we chose Kubernetes for its resource management, scalability, and namespace isolation. Kubernetes’ integration with Helm also simplified the deployment process, making it easy to deploy each feature branch into its own isolated environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;FreeBSD Jails&lt;/strong&gt; (note: our production infrastructure hosted on-premises instances with FreeBSD): We also considered using FreeBSD Jails, a lightweight virtualization technology specific to FreeBSD. Jails provide strong isolation with minimal overhead, making them ideal for creating multiple isolated environments on a single host. However, there were several drawbacks:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• &lt;strong&gt;Limited Ecosystem Support&lt;/strong&gt;: Unlike Kubernetes, Jails do not have a rich ecosystem of orchestration tools, such as Helm, to manage application deployments and scale dynamically.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Configuration Complexity&lt;/strong&gt;: Managing networking, security, and resource isolation at scale with Jails would require significant manual configuration compared to the automated and declarative approach offered by Kubernetes.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;CI/CD Integration&lt;/strong&gt;: Bitbucket Pipelines and other modern CI/CD tools do not natively support FreeBSD Jails, making automation less straightforward and requiring custom scripts for deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In summary, Kubernetes provides a more robust, scalable, and flexible solution for managing multiple feature branches in a parallel development environment.&lt;/strong&gt; It offers better isolation, a richer ecosystem, and stronger community support compared to other options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4kq4nwj41ap6xocn1ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4kq4nwj41ap6xocn1ty.png" alt=" " width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It may seem straightforward, but let’s dive deeper.&lt;/p&gt;

&lt;p&gt;Since we decided to use Kubernetes, we faced another challenge: limited resources with no possibility to change or upgrade them. We had to work with a single server that could host a basic, one-node Kubernetes cluster and still meet all our requirements.&lt;/p&gt;

&lt;p&gt;Below, you’ll find a step-by-step guide for setting up this cluster on the designated machine, configuring it, deploying Bitbucket Pipelines (CI/CD jobs), and providing the necessary level of automation. This will help achieve the main goals and streamline the development process, making it faster and simpler for the client’s team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/" rel="noopener noreferrer"&gt;Kubeadm&lt;/a&gt; on the instance&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note #1&lt;/strong&gt;: The Pod and Service CIDRs should be larger than /24 and must not overlap with each other.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;If you have only one node, make sure to untaint it so it can be used for your workload.&lt;/em&gt;&lt;/p&gt;
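&lt;p&gt;On current Kubernetes versions this is done by removing the control-plane taint from the node, for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl taint nodes --all node-role.kubernetes.io/control-plane-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;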

&lt;p&gt;Once the cluster is deployed, test it with simple commands like &lt;code&gt;kubectl get namespaces&lt;/code&gt; or &lt;code&gt;kubectl get pods -A&lt;/code&gt; to verify that it’s running correctly. Once confirmed, let’s proceed to the next steps.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note #2:&lt;/strong&gt; We will also be using Prometheus Operator, Grafana, and Loki from the official HELM charts to achieve the required level of observability. This setup will cover application metrics, component metrics (such as MySQL, Redis, Elasticsearch, etc.), and logs from any necessary pods/containers. Additionally, we plan to transition to Redis and Elasticsearch operators soon. This part of the configuration is not included here, as we are primarily using the base templates.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;2. Install and configure Network Plugin&lt;/strong&gt;&lt;br&gt;
Network plugins are essential components of Kubernetes that provide networking capabilities for pods. But you already know that. They handle tasks like IP address assignment, routing, and network policy enforcement. In our case, we needed to pick one.&lt;br&gt;
Take whichever you want:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.tigera.io/" rel="noopener noreferrer"&gt;Calico&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/flannel-io/flannel" rel="noopener noreferrer"&gt;Flannel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cilium.io/" rel="noopener noreferrer"&gt;Cilium&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anything else&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In our case, we decided to use Calico, as we have extensive experience with its configuration and it offers several advantages. Calico stands out from other network plugins due to its strong BGP support, which enables seamless inter-cluster networking and granular network policy enforcement. Additionally, Calico’s global address space simplifies IP address management across multiple Kubernetes clusters, reducing complexity and improving scalability.&lt;/p&gt;

&lt;p&gt;However, for feature branching, Flannel could be also sufficient if you plan to deploy a separate cluster specifically for that purpose.&lt;/p&gt;

&lt;p&gt;How to deploy Calico as the Network Plugin:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the Calico manifest file:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://docs.projectcalico.org/manifests/calico.yaml -O
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now modify the downloaded calico.yaml to match your custom Pod CIDR. Open the file in an editor (for example, vim calico.yaml) and:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find the section that specifies the CIDR block for the pods. It should look like this:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Change it to your CIDR value, then save &amp;amp; close the file&lt;/li&gt;
&lt;li&gt;Restart the CoreDNS deployment with:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout restart deployment coredns -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
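&lt;p&gt;Don’t forget that the edited manifest still has to be applied to the cluster: nothing in the steps above pushes it there. A quick sketch, including verification (pod names will differ per node):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Apply the modified Calico manifest
kubectl apply -f calico.yaml
# Wait for the calico-node pods to become Ready
kubectl get pods -n kube-system -l k8s-app=calico-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;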


&lt;p&gt;&lt;strong&gt;3. Install Ingress controller&lt;/strong&gt;&lt;br&gt;
In our case, we decided to use the NGINX Ingress Controller in its default setup, with minimal changes.&lt;/p&gt;

&lt;p&gt;Install the NGINX Ingress Controller via Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install my-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
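&lt;p&gt;After the install, it’s worth confirming the controller is up. On a bare-metal cluster the controller’s LoadBalancer Service will report a pending external IP until a load-balancer implementation (MetalLB in our case, see section 3.1) hands one out:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The controller pod should be Running
kubectl get pods -n ingress-nginx
# EXTERNAL-IP stays pending until MetalLB assigns an address
kubectl get svc -n ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;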



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note #1&lt;/strong&gt;: We are automating DNS endpoint attachment to the desired Kubernetes service through the externalDNS controller configuration, integrated with CloudFlare for DNS management. This is done using the Kubernetes Ingress resource. The Ingress configuration won’t be shared, as there’s nothing particularly unique about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note #2&lt;/strong&gt;: We also use basic authentication and VPN access for our resources. This is not included in the main guide, as each case is highly specific and often not reusable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;3.1 &lt;a href="https://metallb.universe.tf/" rel="noopener noreferrer"&gt;MetalLB LoadBalancer&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
Another default decision we made was to use MetalLB for endpoints and traffic management.&lt;/p&gt;

&lt;p&gt;Since we’ve got a dedicated server from our main vendor, we decided to use MetalLB, which uses standard routing protocols and is simple to install, configure, and manage.&lt;/p&gt;

&lt;p&gt;Below is a guide for its installation and some important notes we’ve got during our journey.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are using &lt;code&gt;kube-proxy&lt;/code&gt; in IPVS mode, starting from Kubernetes v1.14.2, you need to enable strict ARP mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are using &lt;code&gt;kube-router&lt;/code&gt; as the service proxy, you don’t need to enable strict ARP, as it is enabled by default.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can achieve this by editing the kube-proxy ConfigMap in the current cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit configmap -n kube-system kube-proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
strictARP: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
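&lt;p&gt;If you prefer a non-interactive change, the MetalLB documentation offers an equivalent pipe-through-sed one-liner:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;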



&lt;p&gt;Basically, to install MetalLB, apply the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
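&lt;p&gt;The manifest only installs the MetalLB components; you still have to tell MetalLB which addresses it may hand out. A minimal Layer-2 sketch (the pool name and address range below are assumptions, substitute a free range from your network):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250  # assumption: replace with a free range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save it to a file and apply it with kubectl apply -f once the MetalLB pods are ready.&lt;/p&gt;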



&lt;p&gt;Ok, good. That wraps up MetalLB, let’s move ahead.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. CI/CD Automation — Bitbucket
&lt;/h2&gt;

&lt;p&gt;We have successfully built a Kubernetes environment consisting of a one-node cluster using &lt;em&gt;kubeadm&lt;/em&gt; and &lt;em&gt;MetalLB&lt;/em&gt;. We performed initial tests by manually deploying several YAML files for platform components, such as MariaDB, Elasticsearch, Redis, RabbitMQ, as well as default data backup and restore scripts. Additionally, we deployed the application components (front-end and back-end) using the latest version of the code from the Git repository.&lt;/p&gt;

&lt;p&gt;Now, it’s time to automate the deployment process for new environments, triggered by the creation of a new branch that follows a specific naming convention.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note #1:&lt;br&gt;
 Sharing HELM Chart configuration files is unnecessary as they contain no unique configurations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  CI/CD Job
&lt;/h2&gt;

&lt;p&gt;Now, it’s time to prepare our first and primary CI/CD pipeline, which will handle the core logic.&lt;/p&gt;

&lt;p&gt;The pipeline will primarily sync the required HELM charts from a defined repository and execute a script on a remote server via SSH, using predefined variables based on branch naming conventions, among other criteria.&lt;/p&gt;

&lt;p&gt;The code will include comments with detailed descriptions to clarify any unknown fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: base-image

definitions:
 steps:
   - step: &amp;amp;move-helm-chart-to-k8s-instance # Rsync helm charts and get last available versions
       name: Deploy feature branches
       deployment: Feature
       script:
         - pipe: atlassian/rsync-deploy:0.4.4
           variables:
             USER: "$USER"
             SERVER: "$SERVER"
             SSH_PORT: "$SERVER_PORT"
             REMOTE_PATH: "$REMOTE_PATH_RSYNC"
             LOCAL_PATH: "helm/"
   - step: &amp;amp;deploy-feature-branch # Deploy, configure and setup new feature-branch environment for this specific env
       name: Deploy feature branches
       deployment: Feature
       size: 4x
        script: # note these variables; after deployment you'll get all the needed endpoints and passwords
         - BRANCH_NAME="${BITBUCKET_BRANCH}"
         - CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
         - echo "domain - ${CLEANED_BRANCH_NAME}.example.com"
         - echo "kibana domain - kibana.${CLEANED_BRANCH_NAME}.example.com"
         - apt-get update
         - apt-get install docker -y
         - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin docker.example.com
         - chmod +x build_feature.sh # script description below
         - bash build_feature.sh
         - pipe: atlassian/ssh-run:0.3.0 
           variables:
             SSH_USER: "$USER"
             SERVER: "$SERVER_FEATURE"
             PORT: "$SERVER_PORT"
             COMMAND: |
               export BRANCH_NAME="${BITBUCKET_BRANCH}"
               export CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
               export COMMIT_HASH="${BITBUCKET_COMMIT}"
                bash "$REMOTE_PATH_RSYNC"/helm/deploy_feature.sh
pipelines: # configuration of the main rule for this pipeline to run, in our case with specific naming convention
 branches:
   feature/*:
     - step: *move-helm-chart-to-k8s-instance
     - step: *deploy-feature-branch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
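&lt;p&gt;The naming convention used in the pipeline above is just two Bash parameter expansions: strip the feature/ prefix, then lowercase the rest. A standalone illustration with a hypothetical branch name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Hypothetical branch name, as Bitbucket would supply it in BITBUCKET_BRANCH
BRANCH_NAME="feature/DEV-42-New-Login"
# Strip the "feature/" prefix
CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
# Lowercase (Bash 4+ expansion, also used in build_feature.sh)
CLEANED_BRANCH_NAME="${CLEANED_BRANCH_NAME,,}"
echo "domain - ${CLEANED_BRANCH_NAME}.example.com"
# prints: domain - dev-42-new-login.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;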



&lt;p&gt;As mentioned earlier, we are using Nexus as our primary artifacts and container registry. It is deployed on the same Kubernetes cluster that we are currently working on.&lt;/p&gt;

&lt;p&gt;If you’re using Docker Hub or another registry, you’ll need to update the registry hostname within the pipeline, specifically in this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo “$DOCKER_PASSWORD” | docker login -u “$DOCKER_USERNAME”

— password-stdin docker.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let’s clarify what is inside &lt;code&gt;build_feature.sh&lt;/code&gt; and &lt;code&gt;deploy_feature.sh&lt;/code&gt; scripts before moving ahead.&lt;/p&gt;

&lt;h2&gt;
  
  
  build_feature.sh:
&lt;/h2&gt;

&lt;p&gt;In this step, we are passing all the necessary environment variables from the CI/CD pipeline. These variables include those needed for deploying environments, managing naming conventions, and configuring other settings. Below is the file structure along with an explanation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
BRANCH_NAME="${BITBUCKET_BRANCH}"  # Get the branch name from Bitbucket
CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
CLEANED_BRANCH_NAME="${CLEANED_BRANCH_NAME,,}"
COMMIT_HASH="${BITBUCKET_COMMIT}"
NAMESPACE="${CLEANED_BRANCH_NAME}"
# Update the .env.fb file with branch-specific variables
sed -i "s|APP_URL=.*|APP_URL=https://${CLEANED_BRANCH_NAME}.example.com|" ".env.fb"
cp .env.fb .env
# Build and push the Docker image
docker build --build-arg WWWUSER=1000 --build-arg WWWGROUP=1000 --build-arg WWWDOMAIN=${CLEANED_BRANCH_NAME}.example.com --build-arg SSH_PRIVATE_KEY="$SSH_PRIVATE_KEY" --no-cache -t docker.example.com/app-${CLEANED_BRANCH_NAME}:${COMMIT_HASH} -f helm/Dockerfile .

docker push docker.example.com/app-${CLEANED_BRANCH_NAME}:${COMMIT_HASH}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep in mind that we are calling this script directly within the pipeline and providing the necessary inputs to proceed with the process. While the script is running, we update specific variables in our application configuration file (.env.fb) to prepare it for running in the Kubernetes cluster with a new DNS endpoint. After updating the configuration, we build the backend and upload it to Nexus.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy_feature.sh:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Variables from Bitbucket Pipeline
BRANCH_NAME=$BRANCH_NAME  # Set the feature-branch name from Bitbucket inputs
# Clean the branch name by removing 'feature/'
CLEANED_BRANCH_NAME="${BRANCH_NAME#feature/}"
COMMIT_HASH=$COMMIT_HASH
CLEANED_BRANCH_NAME="${CLEANED_BRANCH_NAME,,}"
# Namespace based on the cleaned branch name
NAMESPACE="${CLEANED_BRANCH_NAME}"
# Define the path to the Helm values file for the app where we need to change the image
APP_VALUES_FILE="../apps/values.yaml"
# New Docker image path based on Nexus registry and build information
DOCKER_IMAGE_REPOSITORY="docker.example.com/app-${CLEANED_BRANCH_NAME}"
DOCKER_IMAGE_TAG="${COMMIT_HASH}"
# Check if the namespace exists
NAMESPACE_EXIST=$(kubectl get namespace | grep -w "${NAMESPACE}" || true)
# Modify ingress host and Docker image in the Helm values file
update_helm_values() {
  echo "Updating ingress host and Docker image for app in ${APP_VALUES_FILE}"
  # Update ingress host based on branch name
  sed -i "s/host: .*/host: ${CLEANED_BRANCH_NAME}.example.com/" "$APP_VALUES_FILE"
  sed -i "s/WWWDOMAIN: .*/WWWDOMAIN: ${CLEANED_BRANCH_NAME}.example.com/" "$APP_VALUES_FILE"
  sed -i "s|repository: .*|repository: ${DOCKER_IMAGE_REPOSITORY}|" "$APP_VALUES_FILE"
  sed -i "s|tag: .*|tag: ${DOCKER_IMAGE_TAG}|" "$APP_VALUES_FILE"
  sed -i "s/host: .*/host: kibana-${CLEANED_BRANCH_NAME}.example.com/" "../kibana/values.yaml" 
  sed -i "s/host: .*/host: beanstalkd-${CLEANED_BRANCH_NAME}.example.com/" "../beanstalkd-console/values.yaml"
}
# Function to deploy all platform environment components (such as DB, Redis, Elasticsearch, Mailpit, Main platform front-end and back-end)
deploy_all_services() {
  echo "Deploying all services in namespace ${NAMESPACE}"
  kubectl create ns ${NAMESPACE}
  # Deploy the database
  helm upgrade --install db ../helm/DB -n "${NAMESPACE}"
  POD_NAME=$(kubectl get pods -n "${NAMESPACE}" -l app=mariadb -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME}" -n "${NAMESPACE}" --timeout=60s
  sleep 50;
  DUMP_FILE="/home/ubuntu/dev_dump.sql"
  kubectl cp "${DUMP_FILE}" "${NAMESPACE}/${POD_NAME}:/tmp/dump.sql"
  kubectl exec -n "${NAMESPACE}" "${POD_NAME}" -- /bin/bash -c "mysql -u root -proot_password &amp;lt; /tmp/dump.sql"
  helm upgrade --install beanstalk ../beanstalk -n "${NAMESPACE}"
  # Deploy Redis
  helm upgrade --install redis bitnami/redis -f ../redis/values.yaml -n "${NAMESPACE}"
  # Deploy Elasticsearch
  helm upgrade --install es ../ES -n "${NAMESPACE}"
  POD_NAME_APP=$(kubectl get pods -n "${NAMESPACE}" -l app=elasticsearch -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME_APP}" -n "${NAMESPACE}" --timeout=60s
  # Deploy the application (front-end and back-end)
  helm upgrade --install app ../apps -n "${NAMESPACE}" -f "$APP_VALUES_FILE"
  POD_NAME_APP=$(kubectl get pods -n "${NAMESPACE}" -l app=app -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME_APP}" -n "${NAMESPACE}" --timeout=60s
  helm install kibana ../kibana -n "${NAMESPACE}" -f ../kibana/values.yaml
  helm install beanstalkd-console ../beanstalkd-console -n "${NAMESPACE}" -f ../beanstalkd-console/values.yaml
}
# Function to update only the application image
update_app_image() {
  echo "Updating the application image in namespace ${NAMESPACE}"
  # Update only the application with the new image
  helm upgrade app ../apps -n "${NAMESPACE}" -f "$APP_VALUES_FILE"
  POD_NAME_APP=$(kubectl get pods -n "${NAMESPACE}" -l app=app -o jsonpath="{.items[0].metadata.name}")
  kubectl wait --for=condition=ready pod/"${POD_NAME_APP}" -n "${NAMESPACE}" --timeout=60s
}
# If the namespace doesn't exist, create it and deploy all services
if [ -z "$NAMESPACE_EXIST" ]; then
  echo "Namespace ${NAMESPACE} does not exist. Creating namespace and deploying services."
  # Update the Helm values with the new ingress host and Docker image
  update_helm_values
  # Deploy all services
  deploy_all_services
else
  echo "Namespace ${NAMESPACE} already exists. Updating only the application image."
  # Update the Helm values with the new Docker image (ingress does not need to be changed)
  update_helm_values
  # Update only the application image
  update_app_image
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After committing the pipeline described above, preparing the Kubernetes cluster, and setting up your Helm applications, make sure to double-check everything. Once this is done, you can begin working with feature-branch deployments in Kubernetes.&lt;/p&gt;

&lt;p&gt;Below are some examples of running pipelines and outputs from Bitbucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4dj5gfgq63gsz4csoax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4dj5gfgq63gsz4csoax.png" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ak36o2elbkamws1xjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ak36o2elbkamws1xjy.png" alt=" " width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations.&lt;/p&gt;

&lt;p&gt;You now have a ready-to-use workflow.&lt;/p&gt;

&lt;p&gt;However, there’s one final task to complete the flow: adding automation to remove Kubernetes resources (environments) once a feature branch is merged with the target branch.&lt;/p&gt;

&lt;p&gt;At this stage, we found that Bitbucket Pipelines does not provide a "merge" trigger event.&lt;/p&gt;

&lt;p&gt;We will use a script for this purpose and run it as a cron job, which will execute every 30 minutes. Let’s call it &lt;strong&gt;branch-monitor.sh&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Variables
REPO_URL="https://api.bitbucket.org/2.0/repositories/your-workspace/your-repo"  # Bitbucket API URL
TARGET_BRANCH="main"  # The branch where feature branches are merged into (e.g., 'main')
USERNAME="your-username"  # Bitbucket username
PASSWORD="your-password"  # Bitbucket password or access token

# Function to get all branches with "feature/" prefix
get_feature_branches() {
 # Fetch all branches and filter for "feature/" branches using Bitbucket API
 BRANCHES=$(curl -u $USERNAME:$PASSWORD -s "$REPO_URL/refs/branches" | jq -r '.values[] | select(.name | startswith("feature/")) | .name')
 echo "$BRANCHES"
}

# Function to check if a branch has been merged
is_branch_merged() {
 BRANCH_TO_CHECK=$1
 # Check the merge status using Bitbucket API
 RESPONSE=$(curl -u $USERNAME:$PASSWORD -s "$REPO_URL/merge-base?include=$TARGET_BRANCH&amp;amp;exclude=$BRANCH_TO_CHECK")

 # Check if the response contains a valid merge status (merged)
 if [[ "$RESPONSE" == *"error"* ]]; then
   echo "Branch $BRANCH_TO_CHECK is NOT merged into $TARGET_BRANCH."
   return 1
 else
   echo "Branch $BRANCH_TO_CHECK is merged into $TARGET_BRANCH."
   return 0
 fi
}

# Function to delete Kubernetes namespace
delete_namespace() {
 NAMESPACE=$1
 echo "Deleting namespace $NAMESPACE in Kubernetes..."
 kubectl delete namespace $NAMESPACE

 if [ $? -eq 0 ]; then
   echo "Namespace $NAMESPACE deleted successfully."
 else
   echo "Failed to delete namespace $NAMESPACE."
 fi
}

# Main logic: Loop through all feature branches
BRANCHES=$(get_feature_branches)

for BRANCH in $BRANCHES; do
 NAMESPACE=${BRANCH#feature/}  # Remove 'feature/' from branch name to get the namespace name
  echo "Checking branch $BRANCH (Namespace: $NAMESPACE)..."

 # Check if the branch has been merged
 if is_branch_merged $BRANCH; then
   # If merged, delete the corresponding namespace
   delete_namespace $NAMESPACE
 else
   echo "No action taken for $NAMESPACE."
 fi
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
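&lt;p&gt;To run it every 30 minutes, register the script in the crontab of a host that has kubectl access to the cluster (the script path and log location below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# crontab -e
*/30 * * * * /opt/scripts/branch-monitor.sh &amp;gt;&amp;gt; /var/log/branch-monitor.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;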



&lt;p&gt;Now, each time developers deploy a branch with a specific naming convention, such as “feature/dev-***”, a new environment will automatically be deployed in the Kubernetes cluster. Developers will have direct access through a separate private DNS endpoint, allowing them to test the newly added functionality. Additionally, once the branch is merged into “main,” the corresponding namespace in Kubernetes, along with all recently deployed resources, will be removed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;That’s it, dear Engineers!&lt;br&gt;
We understand that there may be some complications during the implementation of our approach, but this is due to the specific requirements of the project. Therefore, personal adjustments will be necessary to make it effective in your working environments. If you need clarification on anything, our engineering team is available to answer your questions in the comments and assist with any issues.&lt;/p&gt;

&lt;p&gt;Let’s move forward and summarize the goals we achieved with the implementation of the feature-branching flow and how it has impacted the development process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. New Development &amp;amp; Testing Approach:&lt;/strong&gt;&lt;br&gt;
Our solution enables the customer’s team to automate the preparation of testing environments. Developers and QA engineers can collaborate more effectively by testing and resolving issues in our Kubernetes test environment. Each feature/release candidate is deployed with fully automated CI/CD pipelines, creating separate environments with dedicated DNS endpoints and secure authentication. This allows for faster testing and enables the team to work on multiple features in parallel. Previously, the team had only one testing environment, which required manual code deployments and lacked automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Enhanced Quality:&lt;/strong&gt;&lt;br&gt;
Continuous testing and quality gates ensure that only high-quality code is released.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Business Metrics &amp;amp; Impact:&lt;/strong&gt;&lt;br&gt;
Key metrics, such as the time to deliver a feature to market and the time-to-test, have been reduced by a factor of four.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Clear Process and Documentation:&lt;/strong&gt;&lt;br&gt;
Developers now have a simple, step-by-step guide along with additional documentation describing the entire process, including how to monitor application metrics, logs, and more.&lt;/p&gt;

&lt;p&gt;After finalizing our tests and resolving a few minor issues, the customer’s teams actively began working with the new solution. Over time, we identified a few non-critical issues with Kubernetes networking and MetalLB configuration, but these were quickly resolved.&lt;/p&gt;

&lt;p&gt;Currently, our developers and QA teams work with 6 to 14 separate environments in parallel on a daily basis, and the number of active environments is expected to grow rapidly. They report that everything is working well and fully meets their needs. We are already planning to add additional resources to the cluster when needed and provide further automation to simplify environment management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bit.ly/45NedII" rel="noopener noreferrer"&gt;DEDICATTED&lt;/a&gt; &lt;a href="https://dedicatted.com/insights" rel="noopener noreferrer"&gt;Blog&lt;/a&gt; | &lt;a href="https://dedicatted.com/" rel="noopener noreferrer"&gt;Site&lt;/a&gt; | &lt;a href="https://www.linkedin.com/company/ddcttd/?viewAsMember=true" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Authors &amp;amp; BIOs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;George Levytskyy&lt;/strong&gt;&lt;br&gt;
George Levytskyy (Heorhii Levytskyy) — Head of DevOps and SRE at Dedicatted. With over 8 years of experience in DevOps, Cloud Architecture and cost optimization, network administration, cybersecurity, and high-load infrastructure design, he has successfully managed the delivery of more than 30 projects in recent years. Proficient in AWS Cloud, Kubernetes, GitOps practices, Python programming, and cybersecurity. In his free time, George enjoys playing padel, mentoring and lecturing, traveling, hiking, and gaming on his Steam Deck, especially during travel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bohdan Mukovozov&lt;/strong&gt;&lt;br&gt;
Bohdan Mukovozov is a skilled IT professional with over 4 years of experience in DevOps. Known for a strong analytical approach and a problem-solving mindset, he has successfully implemented solutions that enhance system efficiency, security, and user experience. Proficient in AWS cloud, CI/CD, and Kubernetes, he’s passionate about leveraging technology to drive innovation and streamline processes. When not working, Bohdan enjoys keeping up with tech trends and playing video games.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>bitbucket</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
