<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alfred Valderrama</title>
    <description>The latest articles on DEV Community by Alfred Valderrama (@redopsbay).</description>
    <link>https://dev.to/redopsbay</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F786096%2F213bcb83-1b22-47b4-9792-9c9dbaea747e.jpg</url>
      <title>DEV Community: Alfred Valderrama</title>
      <link>https://dev.to/redopsbay</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/redopsbay"/>
    <language>en</language>
    <item>
      <title>CloudCycle - Set lifecycle for your cloud resources to avoid surprising costs</title>
      <dc:creator>Alfred Valderrama</dc:creator>
      <pubDate>Sat, 25 May 2024 14:48:41 +0000</pubDate>
      <link>https://dev.to/redopsbay/cloudcycle-set-lifecycle-for-your-cloud-resources-to-avoid-surprising-costs-5gpd</link>
      <guid>https://dev.to/redopsbay/cloudcycle-set-lifecycle-for-your-cloud-resources-to-avoid-surprising-costs-5gpd</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I recently forgot to turn off / terminate two EC2 instances (&lt;strong&gt;m5x.large&lt;/strong&gt; and &lt;strong&gt;t3.medium&lt;/strong&gt;) that had been running for about a month, and the notification of my AWS monthly bill shocked me! &lt;/p&gt;

&lt;p&gt;At first I thought my account had been hacked, but it turns out I had just left two AWS EC2 instances running. 🥲😔&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution (CloudCycle)
&lt;/h2&gt;

&lt;p&gt;So I came up with a basic solution and decided to share it. Now, even if I forget to turn off or terminate a running AWS resource, I don't have to worry about it anymore, as long as I have properly set the desired lifecycle of my cloud resources.&lt;/p&gt;

&lt;p&gt;I created a Lambda function that gets executed every &lt;strong&gt;15 minutes&lt;/strong&gt; to check whether the supported resources are due for termination. &lt;/p&gt;

&lt;p&gt;Now, if you or your team forget to terminate cloud resources, &lt;strong&gt;CloudCycle&lt;/strong&gt; will do the job for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools &amp;amp; Languages used
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Golang&lt;/strong&gt; - Lambda bills are based on execution duration. I first developed this in Python, then realized I needed to consider performance and execution time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; - Used only for deployment and examples.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g5em2k86yenw694t8sk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g5em2k86yenw694t8sk.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup utilizes an EventBridge schedule expression that fires every 15 minutes. On each run, the Lambda function validates whether the supported resources are eligible for termination.&lt;/p&gt;

&lt;p&gt;But why terminate instead of just turning resources off?&lt;/p&gt;

&lt;p&gt;Of course, there's a free tool available called &lt;strong&gt;Cloud-Custodian&lt;/strong&gt;. But sometimes turning resources off isn't enough: it leaves lots of unused resources behind, which can still incur a small cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample Code for EC2 Instance Service
&lt;/h2&gt;

&lt;p&gt;Below is the sample code for the EC2 instance service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package services

import (
    "context"
    "github.com/aws/aws-sdk-go-v2/service/ec2"
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/ec2/types"
    "github.com/redopsbay/cloudcycle/internal"
    "github.com/redopsbay/cloudcycle/internal/schedule"
    "fmt"
)

type EC2 struct {
    InstanceId         string
    CloudCycle         string
    MarkForTermination bool
}

type EC2Instances struct {
    Instances []EC2
}

func GetEC2Instances(ctx context.Context, client *ec2.Client) ([]types.Reservation, error) {
    filters := []types.Filter{{
        Name:   aws.String("tag-key"),
        Values: []string{internal.TagKey},
    },
    }

    DescribeInputs := ec2.DescribeInstancesInput{
        Filters: filters,
    }

    instances, err := client.DescribeInstances(ctx, &amp;amp;DescribeInputs)

    if err != nil {
        return []types.Reservation{}, err
    }

    return instances.Reservations, nil
}

func MarkInstancesForTermination(reservations []types.Reservation) (EC2Instances, error) {
    var instances EC2Instances

    for _, reservation := range reservations {
        for _, instance := range reservation.Instances {
            for _, tag := range instance.Tags {
                if *tag.Key == internal.TagKey {
                    Lifecycle, err := schedule.GetLifeCycle(instance.LaunchTime, *tag.Value)
                    if err != nil {
                        return EC2Instances{}, err
                    }

                    if schedule.ValidForTermination(Lifecycle) {
                        ec2instance := EC2{
                            InstanceId:         *instance.InstanceId,
                            CloudCycle:         *tag.Value,
                            MarkForTermination: true,
                        }
                        instances.Instances = append(instances.Instances, ec2instance)

                    } else {
                        ec2instance := EC2{
                            InstanceId:         *instance.InstanceId,
                            CloudCycle:         *tag.Value,
                            MarkForTermination: false,
                        }
                        instances.Instances = append(instances.Instances, ec2instance)
                    }
                }
            }
        }
    }
    return instances, nil
}

func StartEC2InstanceTermination(ctx context.Context, client *ec2.Client) error {
    var instanceIds []string

    reservations, err := GetEC2Instances(ctx, client)
    if err != nil {
        fmt.Println("Unable to get instances.")
        return err
    }

    instances, err := MarkInstancesForTermination(reservations)
    if err != nil {
        fmt.Println("Unable to mark instances for termination.")
        return err
    }

    for _, instance := range instances.Instances {
        if instance.MarkForTermination {
            instanceIds = append(instanceIds, instance.InstanceId)
            fmt.Printf("\nInstanceID: %s, ForTermination: %t, CloudCycle: %s\n",
                instance.InstanceId,
                instance.MarkForTermination,
                instance.CloudCycle)
        }
    }

    if len(instanceIds) == 0 {
        fmt.Println("No instances marked for termination.")
        return nil
    }

    TerminatedOutput, err := client.TerminateInstances(ctx, &amp;amp;ec2.TerminateInstancesInput{
        InstanceIds: instanceIds,
    })
    if err != nil {
        fmt.Println("Unable to terminate instances.")
        return err
    }

    // EC2 instance state codes: 0 pending, 16 running, 32 shutting-down,
    // 48 terminated, 64 stopping, 80 stopped.
    for _, state := range TerminatedOutput.TerminatingInstances {
        switch *state.CurrentState.Code {
        case 0:
            fmt.Printf("InstanceID: %s, State: Pending for Termination\n", *state.InstanceId)
        case 16:
            fmt.Printf("InstanceID: %s, State: Still running\n", *state.InstanceId)
        case 32:
            fmt.Printf("InstanceID: %s, State: Shutting down\n", *state.InstanceId)
        case 48:
            fmt.Printf("InstanceID: %s, State: Terminated\n", *state.InstanceId)
        case 64:
            fmt.Printf("InstanceID: %s, State: Stopping\n", *state.InstanceId)
        case 80:
            fmt.Printf("InstanceID: %s, State: Stopped\n", *state.InstanceId)
        default:
            fmt.Printf("InstanceID: %s, State: Unknown\n", *state.InstanceId)
        }
    }

    return nil

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;My objective with &lt;strong&gt;CloudCycle&lt;/strong&gt; is to automatically clean up supported resources based on a &lt;strong&gt;duration&lt;/strong&gt; or &lt;strong&gt;lifecycle&lt;/strong&gt; specified through resource tagging, and to support the commonly used resources that cause AWS bills to grow even when the resources are no longer needed.&lt;/p&gt;

&lt;p&gt;No more storytelling! &lt;/p&gt;

&lt;p&gt;For complete documentation and project link, you can proceed directly to my GitHub repo below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/redopsbay/cloudcycle"&gt;https://github.com/redopsbay/cloudcycle&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This repo is open for contributors!!! Some documents and cloud resources are currently &lt;strong&gt;WORK-IN-PROGRESS&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;p&gt;Just tag your supported resources with the &lt;strong&gt;CloudCycle&lt;/strong&gt; key and set your desired lifecycle as the value. Below are the supported durations.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Suffixes&lt;/th&gt;
&lt;th&gt;Detail&lt;/th&gt;
&lt;th&gt;Sample Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;m&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Minutes&lt;/td&gt;
&lt;td&gt;&lt;code&gt;60m&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;h&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Hours&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2h&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;d&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Days&lt;/td&gt;
&lt;td&gt;&lt;code&gt;7d&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;CloudCycle fetches all supported resources tagged with the &lt;strong&gt;&lt;em&gt;CloudCycle&lt;/em&gt;&lt;/strong&gt; key and simply compares the &lt;strong&gt;&lt;em&gt;current time&lt;/em&gt;&lt;/strong&gt; against the &lt;strong&gt;&lt;em&gt;launch time&lt;/em&gt;&lt;/strong&gt; plus the lifecycle from the specified &lt;code&gt;key/value&lt;/code&gt; resource tag, to decide whether each resource is valid for termination.&lt;/p&gt;

&lt;p&gt;Below is sample Terraform code.&lt;/p&gt;

&lt;p&gt;Set the EC2 lifecycle to 24 hours from its launch time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"virtualization-type"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;owners&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Canonical&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ubuntu&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.micro"&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;CloudCycle&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1d"&lt;/span&gt; &lt;span class="c1"&gt;// This ec2 instance will be terminated within 24 hours from it's launch date.&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Sample Terraform Deployment Usage
&lt;/h2&gt;

&lt;p&gt;For deployment, refer to the &lt;a href="https://github.com/redopsbay/cloudcycle/blob/master/deploy/README.md"&gt;Deployment Page&lt;/a&gt; in the GitHub repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;Sometimes it's better to let go than to stay strong. Leaving unused resources running will incur costs.&lt;/p&gt;

&lt;p&gt;And lastly, no nightmares, no poverty! 😂🤣&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/redopsbay/cloudcycle"&gt;https://github.com/redopsbay/cloudcycle&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>go</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Observability with OpenTelemetry</title>
      <dc:creator>Alfred Valderrama</dc:creator>
      <pubDate>Sat, 23 Dec 2023 01:47:39 +0000</pubDate>
      <link>https://dev.to/redopsbay/observability-with-opentelemetry-go0</link>
      <guid>https://dev.to/redopsbay/observability-with-opentelemetry-go0</guid>
      <description>&lt;p&gt;As the continuous growth and adoption of cloud native and distributed systems due to it's flexibility and high availability, It is also becoming a continuous increase of complexities, specially for IT Teams to properly operate and monitor these distributed systems.&lt;/p&gt;

&lt;p&gt;But what the hell is observability? Why is it important, and how can it actually help organizations?&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Observability?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt; is the ability to measure the internal states of a system by examining its outputs. A system is considered “&lt;strong&gt;observable&lt;/strong&gt;” if the current state can be estimated by only using information from outputs, namely sensor data. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can simply think of it as a &lt;strong&gt;KPI&lt;/strong&gt;. You'll never know whether the current state is abnormal or stable if you don't have historical data, or a state that can be estimated from the system's outputs (performance).&lt;/p&gt;

&lt;p&gt;For example, how can you tell whether an employee is performing well without them reporting their daily activities or tasks? Can you tell just by watching them work? Nah. &lt;/p&gt;

&lt;p&gt;The same goes for your applications: you cannot tell whether an application is performing well just by looking at CPU / memory utilization, without knowing its internal state.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But how can you make your application observable, so it reports output that lets you validate its &lt;strong&gt;Key Performance Indicators (KPIs)&lt;/strong&gt;?&lt;/p&gt;

&lt;h2&gt;
  
  
  Instrumentation
&lt;/h2&gt;

&lt;p&gt;This is where instrumentation comes in. Instrumentation is the practice of modifying your source code so that it produces output you can use to validate whether your application is currently performing well.&lt;/p&gt;

&lt;p&gt;But how can we instrument it? &lt;/p&gt;

&lt;h2&gt;
  
  
  OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;OpenTelemetry&lt;/strong&gt; lets us achieve exactly that: telling our application to give us the right output to validate its performance. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenTelemetry&lt;/strong&gt; or &lt;strong&gt;OTel&lt;/strong&gt; is an open source observability framework made up of a collection of tools, &lt;strong&gt;APIs&lt;/strong&gt;, and &lt;strong&gt;SDKs&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OTel&lt;/strong&gt; enables IT teams to instrument, generate, collect, and export telemetry data for analysis and to understand software performance and behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  Back to Basics
&lt;/h2&gt;

&lt;p&gt;Now that we have a basic picture of how observability works and what instrumentation brings to the table, our example will use the following technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Signoz (Open Source Observability Tool)&lt;/li&gt;
&lt;li&gt;Opentelemetry (Open Source Telemetry Instrumentation Framework)&lt;/li&gt;
&lt;li&gt;Golang&lt;/li&gt;
&lt;li&gt;Gin Gonic (REST API Framework for Go)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tracing with Golang
&lt;/h3&gt;

&lt;p&gt;First, imagine we have a microservices application made up of a &lt;code&gt;product-service&lt;/code&gt;, a &lt;code&gt;reviews-service&lt;/code&gt;, and a &lt;code&gt;ratings-service&lt;/code&gt;. They call each other to gather the information requested by a REST client.&lt;/p&gt;

&lt;p&gt;Let's assume we have already built our basic REST application, since starting from scratch would turn into a huge discussion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── Makefile
├── README.md
├── docker-compose.yaml
├── k8s
├── load-gen
│   ├── Dockerfile
│   ├── go.mod
│   ├── main.go
│   └── seed
│       ├── products.json
│       ├── ratings.json
│       └── reviews.json
├── product-service
│   ├── Dockerfile
│   ├── Makefile
│   ├── README.md
│   ├── controller
│   │   ├── CheckEnv.go
│   │   ├── DefaultResponseController.go
│   │   ├── FeatureProductController.go
│   │   ├── ProductController.go
│   │   ├── RatingsController.go
│   │   ├── ReviewsController.go
│   │   └── UserController.go
│   ├── crypt
│   │   └── passwdhash.go
│   ├── db
│   │   ├── database.go
│   │   └── setup.go
│   ├── go.mod
│   ├── go.sum
│   ├── logging
│   │   └── logging.go
│   ├── main.go
│   ├── models
│   │   └── models.go
│   ├── routes
│   │   └── routes.go
│   └── tracer
│       └── tracer.go
├── ratings-service
│   ├── Dockerfile
│   ├── Makefile
│   ├── controller
│   │   ├── RatingsController.go
│   │   └── RestAPIResponseController.go
│   ├── db
│   │   ├── database.go
│   │   └── setup.go
│   ├── docker-compose.yaml
│   ├── go.mod
│   ├── go.sum
│   ├── logging
│   │   └── logging.go
│   ├── main.go
│   ├── models
│   │   └── model.go
│   ├── routes
│   │   └── routes.go
│   └── tracer
│       └── tracer.go
└── reviews-service
    ├── Dockerfile
    ├── Makefile
    ├── controller
    │   ├── RestAPIResponseController.go
    │   └── ReviewsController.go
    ├── db
    │   ├── database.go
    │   └── setup.go
    ├── docker-compose.yaml
    ├── go.mod
    ├── go.sum
    ├── logging
    │   └── logging.go
    ├── main.go
    ├── models
    │   └── model.go
    ├── routes
    │   └── routes.go
    └── tracer
        └── tracer.go

25 directories, 57 files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's look at an example by creating &lt;code&gt;tracer.go&lt;/code&gt; under &lt;code&gt;product-service&lt;/code&gt;. This file initializes our global tracer: the &lt;strong&gt;trace provider&lt;/strong&gt;, the &lt;strong&gt;OTEL endpoint&lt;/strong&gt;, and our &lt;strong&gt;service name (product-service)&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package tracer 

import (
    "context"
    "log"
    "os"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
    "google.golang.org/grpc/credentials"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
    "strings"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/propagation"
    "go.opentelemetry.io/otel/sdk/resource"
    "go.opentelemetry.io/otel/attribute"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

var (
    collectorURL = os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT")
    insecure     = os.Getenv("INSECURE_MODE")
    ServiceName  = os.Getenv("SERVICE_NAME")
    Tracer = otel.Tracer("gin-server")
)

func InitTracer() (*sdktrace.TracerProvider, error) {

    var secureOption otlptracegrpc.Option

    if strings.ToLower(insecure) == "false" || insecure == "0" || strings.ToLower(insecure) == "f" {
        secureOption = otlptracegrpc.WithTLSCredentials(credentials.NewClientTLSFromCert(nil, ""))
    } else {
        secureOption = otlptracegrpc.WithInsecure()
    }

    exporter, err := otlptrace.New(
        context.Background(),
        otlptracegrpc.NewClient(
            secureOption,
            otlptracegrpc.WithEndpoint(collectorURL),
        ),
    )

    if err != nil {
        log.Fatalf("Failed to create exporter: %v", err)
    }

    resources, err := resource.New(
        context.Background(),
        resource.WithAttributes(
            attribute.String("service.name", ServiceName),
            attribute.String("library.language", "go"),
        ),
    )
    if err != nil {
        log.Fatalf("Failed to create resource: %v", err)
    }

    traceProvider := sdktrace.NewTracerProvider(
        sdktrace.WithSampler(sdktrace.AlwaysSample()),
        sdktrace.WithBatcher(exporter),
        sdktrace.WithResource(resources),
    )
    otel.SetTracerProvider(traceProvider)
    otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(propagation.TraceContext{}, propagation.Baggage{}))
    return traceProvider, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Injecting traces with Contexts
&lt;/h3&gt;

&lt;p&gt;With &lt;code&gt;tracer.go&lt;/code&gt; in place, let's start injecting the trace from our Gin context in &lt;code&gt;ProductController.go&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func GetProducts(c *gin.Context) {
    dbInstance, err := db.SetupDatabase()

        // Inject Gin Context
    otel.GetTextMapPropagator().Inject(c.Request.Context(), propagation.HeaderCarrier(c.Request.Header))

        // Start Tracing

    _, span := tracer.Tracer.Start(c.Request.Context(), "GetProducts")

        // End Span when function ends,
    defer span.End()


    if err != nil {
        ServerError(c)
    }

    sql := db.MySQLDB{DBhandler: dbInstance}

    products, err := sql.GetProducts(c)
    if err != nil {
        ServerError(c)
    }

    c.JSON(http.StatusOK, products)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's initialize our tracer in our &lt;code&gt;main.go&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "github.com/gin-gonic/gin"
    "productservice/routes"
    "productservice/tracer"
    "go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin"
    "log"
    "context"
)

func main() {

    tp, err := tracer.InitTracer()
    if err != nil {
        log.Fatal(err)
    }
    defer func() {
        if err := tp.Shutdown(context.Background()); err != nil {
            log.Printf("Error shutting down tracer provider: %v", err)
        }
    }()


    router := gin.New()
    router.Use(otelgin.Middleware(tracer.ServiceName))
    routes.SetupRoute(router)
    router.Run(":8000")
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9pv09vqix9acnj4an0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9pv09vqix9acnj4an0r.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap
&lt;/h2&gt;

&lt;p&gt;We don't have to go deeper into how to implement observability, since there are tons of ways to implement it and many different use cases. But one thing to note: never operate an application in &lt;strong&gt;PRODUCTION&lt;/strong&gt; without knowing its internal state. 👌😁 &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Repo: &lt;a href="https://github.com/redopsbay/sample-apps/tree/master/Observability/Opentelemetry" rel="noopener noreferrer"&gt;https://github.com/redopsbay/sample-apps/tree/master/Observability/Opentelemetry&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloudnative</category>
      <category>observability</category>
      <category>devops</category>
    </item>
    <item>
      <title>Production grade CI/CD Pipeline with GitLab and JIRA powered by mydevopsteam.io sorcery</title>
      <dc:creator>Alfred Valderrama</dc:creator>
      <pubDate>Thu, 18 Aug 2022 14:29:05 +0000</pubDate>
      <link>https://dev.to/redopsbay/production-grade-cicd-pipeline-with-gitlab-and-jira-powered-by-mydevopsteamio-sorcery-3471</link>
      <guid>https://dev.to/redopsbay/production-grade-cicd-pipeline-with-gitlab-and-jira-powered-by-mydevopsteamio-sorcery-3471</guid>
      <description>&lt;ol&gt;
&lt;li&gt;GitLab Project Setup&lt;/li&gt;
&lt;li&gt;JIRA Setup&lt;/li&gt;
&lt;li&gt;Setup GitLab runner on Kubernetes&lt;/li&gt;
&lt;li&gt;JIRA issue integration on GitLab.&lt;/li&gt;
&lt;li&gt;Setup Traefik Proxy (optional) for edge routing experience&lt;/li&gt;
&lt;li&gt;Containerized the sample django application&lt;/li&gt;
&lt;li&gt;Populate containerized application to helm chart&lt;/li&gt;
&lt;li&gt;GitLab CI/CD Pipeline setup via &lt;code&gt;.gitlab-ci.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Demonstration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hey there again! In this post, I will demonstrate how you can set up a production grade CI/CD pipeline for your production workloads with &lt;strong&gt;mydevopsteam.io&lt;/strong&gt; sorcery. 😅😅😅&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Most of the time, organizations struggle to implement a CI/CD pipeline, especially on critical systems or infrastructure. Some organizations have already implemented one, some are not familiar with it, but most attempts fail. That's why &lt;strong&gt;&lt;a href="https://mydevopsteam.io" rel="noopener noreferrer"&gt;MyDevOpsTeam&lt;/a&gt;&lt;/strong&gt; was created: to help &lt;strong&gt;SMEs&lt;/strong&gt; properly implement DevOps methodologies so they can stay focused on the business instead of the technicalities. In this demo we will explore how a CI/CD pipeline helps newbies learn how to implement it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;GitLab Account&lt;/li&gt;
&lt;li&gt;Atlassian Account with JIRA&lt;/li&gt;
&lt;li&gt;Existing Kubernetes Cluster&lt;/li&gt;
&lt;li&gt;kubectl (Kubernetes Client)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note: We will use Minikube as our Kubernetes cluster, because I forgot to terminate my EKS cluster running m5a.2xlarge worker nodes after my demonstration last June 2022 🤣. I only terminated it yesterday, after noticing something wasn't right before going to sleep. 🤣 🤣 🤣&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  GitLab Project Setup
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;GitLab is a DevOps platform with free and premium tiers. It offers a lot of features and plenty of integrations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Login to your account and Create a GitLab project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopc64waga9eiroyts9q9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopc64waga9eiroyts9q9.png" alt="Create Project"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e7ec4pbuv37mdfe9cxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e7ec4pbuv37mdfe9cxq.png" alt="Create Empty Project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and enter the necessary details for your project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3b0pmxd727zf8rok7pu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3b0pmxd727zf8rok7pu.png" alt="Devops Journey"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  JIRA Setup
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;JIRA is one of the leading project management and bug tracking tools. It also has free and premium tiers that let you create and track ongoing bugs/issues. One of the most common use cases of JIRA is integrating it into a CI/CD pipeline for automatic issue transitioning once a bug has been fixed and deployed to the production environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Assuming you already have a JIRA account, it's time to create your first JIRA project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eg3mdtxgtfvnv2byuib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eg3mdtxgtfvnv2byuib.png" alt="Create JIRA Project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And select Bug Tracking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcosg7nom1vq4f3kdz67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcosg7nom1vq4f3kdz67.png" alt="JIRA Bug Tracking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then enter your project details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ibr6jepxe6teknvrnh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ibr6jepxe6teknvrnh5.png" alt="JIRA Project Details"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Set up a GitLab runner on Kubernetes
&lt;/h1&gt;

&lt;p&gt;There are a few options for setting up a GitLab runner:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the GitLab runner directly on a single Linux instance.&lt;/li&gt;
&lt;li&gt;Run the GitLab runner in a Docker container via the Docker API.&lt;/li&gt;
&lt;li&gt;Run the GitLab runner as a Kubernetes pod.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  Advantage
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Running the GitLab runner on a single Linux instance is quick to set up and easy to manage.&lt;/li&gt;
&lt;li&gt;Running the runner as a Docker container lets you run GitLab jobs dynamically without worrying about software installation, and the container is deleted automatically after the job finishes.&lt;/li&gt;
&lt;li&gt;Running the runner as a Kubernetes pod gives you fully dynamic, on-demand jobs, and the pod is deleted automatically after the pipeline finishes.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Disadvantage
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Running a GitLab runner on an EC2 or other single Linux instance is problematic because of limitations like software installations during job runs, which can push the instance into a life-or-death scenario. 🤣&lt;/li&gt;
&lt;li&gt;Running a GitLab runner in a Docker container has a similar problem, since Docker also runs on a single instance. What if your job requires high memory/CPU capacity, or multiple jobs run simultaneously?&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Key Notes&lt;/strong&gt;: &lt;em&gt;My recommendation is to always implement dynamic pipelines and infrastructure, so you can plug and play your solutions on any platform like a template, and you won't have to worry about vendor lock-in.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Install the GitLab runner via Helm
&lt;/h2&gt;

&lt;p&gt;Assuming you have minikube or another Kubernetes cluster running, first create a &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: Make sure to get the GitLab runner registration token from your project's settings.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The GitLab Server URL (with protocol) that want to register the runner against
# ref: https://docs.gitlab.com/runner/commands/index.html#gitlab-runner-register
#
gitlabUrl: https://gitlab.com

# The Registration Token for adding new runners to the GitLab Server. This must
# be retrieved from your GitLab instance.
# ref: https://docs.gitlab.com/ee/ci/runners/index.html
#
runnerRegistrationToken: "&amp;lt;your-gitlab-registration-token&amp;gt;"

# For RBAC support:
rbac:
    create: true

# Run all containers with the privileged flag enabled
# This will allow the docker:dind image to run if you need to run Docker
# commands. Please read the docs before turning this on:
# ref: https://docs.gitlab.com/runner/executors/kubernetes.html#using-dockerdind
runners:
    privileged: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the Helm repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add gitlab https://charts.gitlab.io
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install the chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade --install gitlab-runner gitlab/gitlab-runner \
    -n gitlab --create-namespace \
    -f ./values.yaml --version 0.43.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few seconds, the GitLab runner should be available in your cluster and visible in your GitLab project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hbbhhp25x3q92to8pdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hbbhhp25x3q92to8pdu.png" alt="Gitlab Runner"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  JIRA issue integration on GitLab
&lt;/h1&gt;

&lt;p&gt;Integrating JIRA with GitLab is easy, although it can become problematic in the long run as your project grows. For now, we will only walk through a quick way to integrate GitLab and the JIRA software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generate JIRA API Token
&lt;/h2&gt;

&lt;p&gt;Proceed to &lt;a href="https://id.atlassian.com/manage-profile/security/api-tokens" rel="noopener noreferrer"&gt;https://id.atlassian.com/manage-profile/security/api-tokens&lt;/a&gt; and generate your JIRA token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fercnotwabz6ispfo9b84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fercnotwabz6ispfo9b84.png" alt="JIRA Token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you have your API token, proceed to your GitLab project -&amp;gt; Settings -&amp;gt; Integrations -&amp;gt; JIRA and enter the necessary details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8d8mmu0a3ck4jcgi6sp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8d8mmu0a3ck4jcgi6sp.png" alt="Gitlab JIRA"&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can now automatically transition bug issues or tickets assigned to us via commit messages. But let's test it first, so we can be sure it's actually working.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open a sample JIRA issue
&lt;/h2&gt;

&lt;p&gt;Open a sample issue in your JIRA project to verify that automatic transitioning works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhjry1wpvml7zic59qv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhjry1wpvml7zic59qv4.png" alt="Test Jira ISSUE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, update the issue status from 'To Do' to 'In Progress'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feheuhgyqbcoicxnm9s7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feheuhgyqbcoicxnm9s7m.png" alt="ToDo to Inprogress"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, edit some files in your repository and commit the changes. Don't forget to reference the issue key in your commit message. Refer to this guide on transitioning a JIRA issue from GitLab: &lt;a href="https://about.gitlab.com/blog/2021/04/12/gitlab-jira-integration-selfmanaged/" rel="noopener noreferrer"&gt;JIRA issue transitioning&lt;/a&gt;&lt;/p&gt;
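&lt;p&gt;For example, with a hypothetical issue key &lt;code&gt;TEST-1&lt;/code&gt;, a commit message that contains the issue key and one of the transition trigger words (such as &lt;code&gt;Fixes&lt;/code&gt;, &lt;code&gt;Resolves&lt;/code&gt;, or &lt;code&gt;Closes&lt;/code&gt;) will move the issue once the commit is pushed to GitLab:&lt;/p&gt;

```shell
# "TEST-1" is a placeholder issue key; replace it with your own.
# The trigger word "Fixes" tells the Jira integration to transition the issue.
git commit -am "Fixes TEST-1 - correct the product image upload path"
git push origin main
```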

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqooak40g5nyze5c7t0xi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqooak40g5nyze5c7t0xi.png" alt="Editing Files"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw15vhbi4p5ca5pvzxsf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw15vhbi4p5ca5pvzxsf7.png" alt="Sample commit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go back to your JIRA software and check the previously opened issue. As you can see, the issue automatically transitioned from '&lt;strong&gt;In Progress&lt;/strong&gt;' to '&lt;strong&gt;Done&lt;/strong&gt;'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93mrsqibzjg0sqb4p30d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93mrsqibzjg0sqb4p30d.png" alt="Image description"&gt;&lt;/a&gt; &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97t67acg60ik40wcvg1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97t67acg60ik40wcvg1m.png" alt="Test transitioning"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why is automatic ticket transitioning important?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;It keeps your project more manageable.&lt;/li&gt;
&lt;li&gt;Clean issue tracking.&lt;/li&gt;
&lt;li&gt;It allows you to easily generate release notes and associate each issue and commit with your release notes or &lt;strong&gt;CHANGELOG.md&lt;/strong&gt; file after a production deployment.&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;
  
  
  Set up Traefik Proxy (optional) for edge routing
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Traefik&lt;/strong&gt; is an open-source &lt;strong&gt;Edge Router&lt;/strong&gt; that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Install Traefik Proxy via Helm
&lt;/h2&gt;

&lt;p&gt;First, add the Traefik repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add traefik https://helm.traefik.io/traefik
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a &lt;code&gt;values.yaml&lt;/code&gt; file. To see all the available parameters, use the &lt;code&gt;helm show values traefik/traefik&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;values.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service:
  enabled: true
  type: LoadBalancer
  externalIPs:
    - &amp;lt;your-server-ip&amp;gt; # if you are using minikube on a single ec2 instance.

podDisruptionBudget:
  enabled: true
  maxUnavailable: 0
  #maxUnavailable: 33%
  minAvailable: 1
  #minAvailable: 25%

additionalArguments:
  - "--log.level=DEBUG"
  - "--accesslog=true"
  - "--accesslog.format=json"
  - "--tracing.jaeger=true"
  - "--tracing.jaeger.traceContextHeaderName=x-trace-id"

tracing:
  serviceName: traefik
  jaeger:
    gen128Bit: true
    samplingParam: 0
    traceContextHeaderName: x-trace-id

autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 60
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install Traefik Proxy via Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade --install traefik traefik/traefik \
    --namespace=traefik --create-namespace  \
    -f ./values.yaml --version 10.24.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few seconds, you should see Traefik running in your cluster.&lt;/p&gt;
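&lt;p&gt;You can verify this from the command line as well, assuming the &lt;code&gt;traefik&lt;/code&gt; namespace created by the install command above:&lt;/p&gt;

```shell
# The Traefik pod should be Running, and the LoadBalancer service
# should show the external IP configured in values.yaml.
kubectl get pods,svc -n traefik
```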

&lt;p&gt;Next, we need to expose the Traefik dashboard via &lt;code&gt;traefik-dashboard.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
  namespace: traefik
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`traefik.gitops.codes`) &amp;amp;&amp;amp; (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ./traefik-dashboard.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now visit your Traefik dashboard at &lt;a href="http://traefik.gitops.codes/dashboard/#/" rel="noopener noreferrer"&gt;http://traefik.gitops.codes/dashboard/#/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Containerize the sample Django application
&lt;/h1&gt;

&lt;p&gt;We already have a sample Django application, but you can use your own if you want. We will containerize it, starting by modifying &lt;code&gt;settings.py&lt;/code&gt; so that the credentials can be passed in as environment variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"""
Django settings for ecommerce project.

Generated by 'django-admin startproject' using Django 3.0.5.

For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/

For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""

import os

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
TEMPLATE_DIR = os.path.join(BASE_DIR,'templates')
STATIC_DIR=os.path.join(BASE_DIR,'static')


# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '#vw(03o=(9kbvg!&amp;amp;2d5i!2$_58x@_-3l4wujpow6(ym37jxnza'

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True

ALLOWED_HOSTS = ['*']


# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'ecom',
    'widget_tweaks',
    'mathfilters'

]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',

]

ROOT_URLCONF = 'ecommerce.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [TEMPLATE_DIR,],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'ecommerce.wsgi.application'


# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ['DATABASE_NAME'],
        'USER': os.environ['DATABASE_USER'],
        'PASSWORD': os.environ['DATABASE_PASSWORD'],
        'HOST': os.environ['DATABASE_HOST'],
        'PORT': os.environ['DATABASE_PORT']
    }
}

# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators

AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]


# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/

LANGUAGE_CODE = 'en-us'

TIME_ZONE = 'UTC'

USE_I18N = True

USE_L10N = True

USE_TZ = True


# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/

STATIC_URL = '/static/'

STATICFILES_DIRS=[STATIC_DIR,]

MEDIA_ROOT=os.path.join(BASE_DIR,'static')



LOGIN_REDIRECT_URL='/afterlogin'

#for contact us give your gmail id and password
EMAIL_BACKEND ='django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = ''
EMAIL_USE_TLS = True
EMAIL_PORT = 465
EMAIL_HOST_USER = 'from@gmail.com' # this email will be used to send emails
EMAIL_HOST_PASSWORD = 'xyz' # host email password required
# now sign in with your host gmail account in your browser
# open following link and turn it ON
# https://myaccount.google.com/lesssecureapps
# otherwise you will get SMTPAuthenticationError at /contactus
# this process is required because google blocks apps authentication by default
EMAIL_RECEIVING_USER = ['to@gmail.com'] # email on which you will receive messages sent from website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, save it and create a new file, &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.8.13-buster

WORKDIR /opt/ecommerce-app

ADD ./ /opt/ecommerce-app/
ADD ./docker-entrypoint.sh /

# install mysqlclient
RUN apt update -y \
    &amp;amp;&amp;amp; apt install libmariadb-dev -y

# Install pipenv and required python packages

RUN pip3 install pipenv &amp;amp;&amp;amp; \
    pipenv install &amp;amp;&amp;amp; chmod +x /docker-entrypoint.sh

EXPOSE 8000/tcp
EXPOSE 8000/udp

ENTRYPOINT ["/docker-entrypoint.sh"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, create a &lt;code&gt;docker-entrypoint.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

pipenv run python3 manage.py runserver 0.0.0.0:8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save it, then build the application image with Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t &amp;lt;your-container-registry-endpoint&amp;gt; . 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note: I assume you have basic familiarity with the structure of a Django application, so we can keep the story short.&lt;/strong&gt; 😁😁&lt;/p&gt;
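&lt;p&gt;Before deploying to Kubernetes, you can smoke-test the image locally by supplying the same database settings that the modified &lt;code&gt;settings.py&lt;/code&gt; reads from the environment. The values below are placeholders, and a reachable MySQL server is assumed:&lt;/p&gt;

```shell
# The -e flags supply the variables that settings.py reads via os.environ.
docker run --rm -p 8000:8000 \
  -e DATABASE_NAME=ecommerce-app \
  -e DATABASE_USER=ecommerce \
  -e DATABASE_PASSWORD=changeme \
  -e DATABASE_HOST=127.0.0.1 \
  -e DATABASE_PORT=3306 \
  <your-container-registry-endpoint>
```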

&lt;h1&gt;
  
  
  Populate the containerized application into a Helm chart
&lt;/h1&gt;

&lt;p&gt;Next, we will populate our application into a Helm chart. First, create a chart scaffold:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm create ecommerce-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, modify the generated chart, starting with &lt;code&gt;templates/deployment.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "ecommerce-app.fullname" . }}
  labels:
    {{- include "ecommerce-app.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "ecommerce-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "ecommerce-app.selectorLabels" . | nindent 8 }}
    spec:
      imagePullSecrets:
        - name: {{ include "ecommerce-app.fullname" . }}
      serviceAccountName: {{ include "ecommerce-app.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- toYaml .Values.deployment.env | nindent 12}}
          {{- if .Values.persistentVolume.enabled }}    
          volumeMounts:
            - name: {{ include "ecommerce-app.fullname" . }}-uploads
              mountPath: /opt/ecommerce-app/static/product_image
          {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.deployment.containerPort }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- if .Values.persistentVolume.enabled }}
      volumes:
        - name: {{ include "ecommerce-app.fullname" . }}-uploads
          persistentVolumeClaim:
            claimName: {{ include "ecommerce-app.fullname" . }}-pvc
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a &lt;strong&gt;PersistentVolume&lt;/strong&gt; and a &lt;strong&gt;PersistentVolumeClaim&lt;/strong&gt; so our uploaded files are not lost if the pod gets reprovisioned.&lt;br&gt;
&lt;code&gt;pv.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- if .Values.persistentVolume.enabled -}}
{{- $fullName := include "ecommerce-app.fullname" . -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "ecommerce-app.labels" . | nindent 4 }}
spec:
  capacity:
    storage: {{ .Values.persistentVolume.storageSize }}
  volumeMode: {{ .Values.volumeMode }}
  {{- with .Values.persistentVolume.accessModes }}
  accessModes:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  persistentVolumeReclaimPolicy: {{ .Values.persistentVolume.persistentVolumeReclaimPolicy }}
  storageClassName: {{ .Values.persistentVolume.storageClassName }}
  hostPath:
    path: {{ .Values.persistentVolume.hostPath.path }}
{{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;pvc.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- if .Values.persistentVolume.enabled -}}
{{- $fullName := include "ecommerce-app.fullname" . -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $fullName  }}-pvc
  labels:
    {{- include "ecommerce-app.labels" . | nindent 4 }}
spec:
  {{- with .Values.persistentVolume.accessModes }}
  accessModes:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  storageClassName: {{ .Values.persistentVolume.storageClassName }}
  resources:
    requests:
      storage: {{ .Values.persistentVolume.storageSize }}
{{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save it. Then create a &lt;code&gt;createuser-job.yaml&lt;/code&gt; and a &lt;code&gt;migratedata-job.yaml&lt;/code&gt; to automatically run the database migrations against MySQL during deployment and to create the Django admin user.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;createuser-job.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- if .Values.createUser.enabled -}}
{{- $superuser_username := .Values.createUser.username -}}
{{- $superuser_password := .Values.createUser.password -}}
{{- $superuser_email := .Values.createUser.userEmail -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "ecommerce-app.fullname" . }}-create-superuser-job
spec:
  backoffLimit: 1
  ttlSecondsAfterFinished: 10
  template:
    metadata:
      labels:
        identifier: ""
    spec:
      imagePullSecrets:
        - name: {{ include "ecommerce-app.fullname" . }}
      containers:
        - name: data-migration
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: ["/bin/sh","-c"]
          args:
            - &amp;gt; 
              /bin/bash &amp;lt;&amp;lt;EOF
                export DJANGO_SUPERUSER_USERNAME="{{ $superuser_username }}"
                export DJANGO_SUPERUSER_PASSWORD="{{ $superuser_password }}"
                pipenv run python3 manage.py createsuperuser --noinput --email "{{ $superuser_email }}"
              EOF
          env:
            {{- toYaml .Values.deployment.env | nindent 12 }}
      restartPolicy: Never
{{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;migratedata-job.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- if .Values.migrateData.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "ecommerce-app.fullname" . }}
spec:
  backoffLimit: 2
  ttlSecondsAfterFinished: 10
  template:
    metadata:
      labels:
        identifier: ""
    spec:
      imagePullSecrets:
        - name: {{ include "ecommerce-app.fullname" . }}
      containers:
        - name: data-migration
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: ["/bin/sh","-c"]
          args:
            - &amp;gt; 
              /bin/bash &amp;lt;&amp;lt;EOF
                pipenv run python3 manage.py makemigrations
                pipenv run python3 manage.py migrate
              EOF
          env:
            {{- toYaml .Values.deployment.env | nindent 12 }}
{{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create &lt;code&gt;ecommerce-app-ingressroute.yaml&lt;/code&gt; inside the &lt;strong&gt;templates&lt;/strong&gt; folder of your Helm chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- $targetHost := .Values.traefik.host -}}
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: {{ include "ecommerce-app.fullname" . }}
spec:
  {{- with .Values.traefik.entrypoints }}
  entryPoints:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  routes:
  - match: Host(`{{ $targetHost }}`)
    kind: Rule
    services:
    - name: {{ include "ecommerce-app.fullname" . }}
      port: {{ .Values.service.port }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, create the &lt;code&gt;values.yaml&lt;/code&gt; file that holds all of our values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Default values for ecommerce-app.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

image:
  repository: registry.gitlab.com/redopsbay/my-devops-journey
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: "latest"

dockerSecret: ""
nameOverride: ""
fullnameOverride: ""

deployment:
  env:
  - name: DATABASE_NAME
    value: "ecommerce-app"
  - name: DATABASE_USER
    value: "ecommerce"
  - name: DATABASE_PASSWORD
    value: "ecommerce-secure-password"
  - name: DATABASE_HOST
    value: "mysql-server.mysql.svc.cluster.local"
  - name: DATABASE_PORT
    value: "3306"
  containerPort: 8000

traefik:
  host: ecommerce-app.gitops.codes
  entrypoints:
    - web
    - websecure

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: "nginx"
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: ecommerce-app.gitops.codes
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  #limits:
  #  cpu: 200m
  #  memory: 200Mi
  #requests:
  #  cpu: 150m
  #  memory: 150Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 100
  targetCPUUtilizationPercentage: 50
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

# Specify values for Persistent Volume
persistentVolume:
  # Specify whether persistent volume should be enabled
  enabled: true
  # Specify storageClassName
  storageClassName: "ecommerce-app"
  # Specify volumeMode defaults to Filesystem
  volumeMode: Filesystem
  # Specify the accessModes for this pv
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageSize: 5Gi
  hostPath:
    path: /data/ecommerce-app/

createUser:
  enabled: false
  userEmail: "alfredvalderrama@gitops.codes"
  username: "admin"
  password: "admin"

migrateData:
  enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Your containerized application can now run on Kubernetes!!!&lt;/p&gt;

&lt;h1&gt;
  
  
  GitLab CI/CD Pipeline setup via &lt;strong&gt;.gitlab-ci.yml&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;We will now start setting up our pipeline. First, we have to determine what stages and how many environments our application will go through before reaching the live production environment. As a common standard, there are three environments: &lt;/p&gt;

&lt;p&gt;Common environment workloads:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;development&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;staging&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;production&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's now create a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. To keep the story short, I will include the complete source file here. Refer to the comments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Required Variables to properly setup each environment
variables:
  TARGET_IMAGE: "registry.gitlab.com/redopsbay/my-devops-journey"
  RAW_KUBECONFIG: ""
  DEV_HOST: "ecommerce-app-dev.gitops.codes"
  QA_HOST: "ecommerce-app-qa.gitops.codes"
  PROD_HOST: "ecommerce-app.gitops.codes"
  TARGET_IMAGE_TAG: "latest"
  CHART: "${CI_PROJECT_DIR}/.helm/ecommerce-app"
  HOST: ""
  TARGET_ENV: ""
  USER_EMAIL: ""
  DJANGO_USERNAME: ""
  DJANGO_PASSWORD: ""
  DJANGODB_SEED: ""

## My default container image, containing helm and kubectl binaries etc.

default:
  image: alfredvalderrama/container-tools:latest

stages:
  - bootstrap
  - install
  - build
  - dev
  - staging
  - production
  - deploy
  - linter

# Set the KUBECONFIG based on commit branch / environment etc.

'setEnv':
  stage: bootstrap
  rules:
    - changes:
      - ".gitlab-ci.yml"
      when: never
  before_script:
    - |
      case $CI_COMMIT_BRANCH in
        "dev")
          RAW_KUBECONFIG=${DEV_KUBECONFIG}
          TARGET_ENV="dev"
          export RELEASE_CYCLE="alpha"
          HOST="${DEV_HOST}"
        ;;
        "staging")
          RAW_KUBECONFIG=${QA_KUBECONFIG}
          TARGET_ENV="staging"
          export RELEASE_CYCLE="beta"
          HOST="${QA_HOST}"
        ;;
        "master" )
          RAW_KUBECONFIG=${PROD_KUBECONFIG}
          TARGET_ENV="prod"
          HOST="${PROD_HOST}"
          export RELEASE_CYCLE="stable"
        ;;
      esac

    - export TARGET_IMAGE_TAG="${RELEASE_CYCLE}-$(cat release.txt)"
    - mkdir -p ~/.kube || echo "Nothing to create..."
    - echo ${RAW_KUBECONFIG} | base64 -d &amp;gt; ~/.kube/config
    - chmod 0600 ~/.kube/config
  script:
    - echo "[bootstrap] Setting up required environment variables...."

## Build image only for installation

'Build Image For Provisioning':
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  stage: install
  rules:
    - changes:
        - ".gitlab-ci.yml"
      when: never
    - if: $PROVISION == 'true'
      when: manual
  tags:
    - dev
    - staging
    - sandbox
    - prod
  extends:
    - 'setEnv'
  script:
    - mkdir -p /kaniko/.docker
    - |
       cat &amp;lt;&amp;lt; EOF &amp;gt; /kaniko/.docker/config.json
        {
          "auths": {
            "registry.gitlab.com": {
              "username": "${REGISTRY_USER}",
              "password": "${REGISTRY_PASSWORD}"
            }
          }
        }
        EOF
    - |
        /kaniko/executor \
          --context "${CI_PROJECT_DIR}/" \
          --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
          --destination "${TARGET_IMAGE}:${TARGET_IMAGE_TAG}" \
          --cache=true

## Provision the environment. When provisioning the environment for the first time,
# Django needs to migrate the data and also create a superuser/administrator account.

'Provision':
  image: 
    name: alfredvalderrama/container-tools:latest
  stage: install
  rules:
    - changes:
        - ".gitlab-ci.yml"
      when: never
    - if: $PROVISION == 'true'
      when: manual
  needs:
    - "Build Image For Provisioning"
  tags:
    - dev
    - staging
    - sandbox
    - prod
  extends:
    - 'setEnv'
  script:
    - |
      helm upgrade --install ecommerce-app-${TARGET_ENV} ${CHART} \
            -n ${TARGET_ENV} \
            --create-namespace \
            --set createUser.enabled="true" \
            --set createUser.userEmail="${USER_EMAIL}" \
            --set createUser.username="${DJANGO_USERNAME}" \
            --set createUser.password="${DJANGO_PASSWORD}" \
            --set migrateData.enabled="${DJANGODB_SEED}" \
            --set traefik.host="${HOST}" \
            --set dockerSecret="${REGISTRY_CREDENTIAL}" \
            --set deployment.database_name="${DJANGO_DB}" \
            --set deployment.database_user="${DJANGO_DBUSER}" \
            --set deployment.database_password="${DJANGO_DBPASS}" \
            --set deployment.database_host="${DJANGO_DBHOST}" \
            --set image.tag="${TARGET_IMAGE_TAG}"

## Execute this job to validate a valid helm chart.

'Linter':
  image: 
    name: alfredvalderrama/container-tools:latest
  stage: linter
  rules:
    - changes:
        - ".gitlab-ci.yml"
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" &amp;amp;&amp;amp; $PROVISION != 'true'
      when: always
  extends:
    - 'setEnv'
  tags:
    - dev
    - staging
    - sandbox
    - prod
  script:
    - echo "[$CI_COMMIT_BRANCH] Displaying incoming changes when creating a django user..."
    - |
      helm upgrade --install ecommerce-app-${TARGET_ENV} ${CHART} \
            -n ${TARGET_ENV} \
            --create-namespace \
            --set createUser.enabled="true" \
            --set createUser.userEmail="${USER_EMAIL}" \
            --set createUser.username="${DJANGO_USERNAME}" \
            --set createUser.password="${DJANGO_PASSWORD}" \
            --set migrateData.enabled="${DJANGODB_SEED}" \
            --set traefik.host="${HOST}" \
            --set dockerSecret="${REGISTRY_CREDENTIAL}" \
            --set deployment.database_name="${DJANGO_DB}" \
            --set deployment.database_user="${DJANGO_DBUSER}" \
            --set deployment.database_password="${DJANGO_DBPASS}" \
            --set image.tag="${TARGET_IMAGE_TAG}" \
            --set deployment.database_host="${DJANGO_DBHOST}" --dry-run


    - echo "[$CI_COMMIT_BRANCH] Displaying incoming changes for normal deployment...."
    - |
      helm upgrade --install ecommerce-app-${TARGET_ENV} ${CHART} \
            -n ${TARGET_ENV} \
            --create-namespace \
            --set migrateData.enabled="${DJANGODB_SEED}" \
            --set traefik.host="${HOST}" \
            --set dockerSecret="${REGISTRY_CREDENTIAL}" \
            --set deployment.database_name="${DJANGO_DB}" \
            --set deployment.database_user="${DJANGO_DBUSER}" \
            --set deployment.database_password="${DJANGO_DBPASS}" \
            --set image.tag="${TARGET_IMAGE_TAG}" \
            --set deployment.database_host="${DJANGO_DBHOST}" --dry-run

## Build Docker image

'Build':
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  rules:
    - changes:
        - ".gitlab-ci.yml"
      when: never
    - if: $CI_COMMIT_BRANCH == "master" &amp;amp;&amp;amp; $PROVISION != 'true'
    - if: $CI_COMMIT_BRANCH == "dev" &amp;amp;&amp;amp; $PROVISION != 'true'
    - if: $CI_COMMIT_BRANCH == "staging" &amp;amp;&amp;amp; $PROVISION != 'true'
      when: always
  stage: build
  tags:
    - dev
    - staging
    - sandbox
    - prod
  extends:
    - 'setEnv'
  script:
    - mkdir -p /kaniko/.docker
    - |
        cat &amp;lt;&amp;lt; EOF &amp;gt; /kaniko/.docker/config.json
        {
          "auths": {
            "registry.gitlab.com": {
              "username": "${REGISTRY_USER}",
              "password": "${REGISTRY_PASSWORD}"
            }
          }
        }
        EOF
    - |
        /kaniko/executor \
          --context "${CI_PROJECT_DIR}/" \
          --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
          --destination "${TARGET_IMAGE}:${TARGET_IMAGE_TAG}" \
          --cache=true

## Deploy Dev Environment with Helm Chart

'Deploy Dev':
  image: alfredvalderrama/container-tools:latest
  stage: dev
  rules:
    - changes:
        - ".gitlab-ci.yml"
      when: never
    - if: $CI_COMMIT_BRANCH == "dev" &amp;amp;&amp;amp; $PROVISION != 'true'
      when: always
  extends:
    - 'setEnv'
  tags:
    - dev
    - staging
    - sandbox
    - prod
  script:
    - echo "[DEV] Deploying DEV application"
    - |
      helm upgrade --install ecommerce-app-${TARGET_ENV} ${CHART} \
            -n ${TARGET_ENV} \
            --create-namespace \
            --set migrateData.enabled="${DJANGODB_SEED}" \
            --set traefik.host="${HOST}" \
            --set dockerSecret="${REGISTRY_CREDENTIAL}" \
            --set deployment.database_name="${DJANGO_DB}" \
            --set deployment.database_user="${DJANGO_DBUSER}" \
            --set deployment.database_password="${DJANGO_DBPASS}" \
            --set deployment.database_host="${DJANGO_DBHOST}" \
            --set image.tag="${TARGET_IMAGE_TAG}"

    - kubectl rollout status deployment/ecommerce-app-${TARGET_ENV} -n ${TARGET_ENV}

## Deploy Staging Environment with Helm Chart

'Deploy Staging':
  image: alfredvalderrama/container-tools:latest
  stage: staging
  rules:
    - changes:
        - ".gitlab-ci.yml"
      when: never
    - if: $CI_COMMIT_BRANCH == "staging" &amp;amp;&amp;amp; $PROVISION != 'true'
      when: always
  extends:
    - 'setEnv'
  tags:
    - dev
    - staging
    - sandbox
    - prod
  script:
    - echo "[STAGING] Deploying STAGING application"
    - |
      helm upgrade --install ecommerce-app-${TARGET_ENV} ${CHART} \
            -n ${TARGET_ENV} \
            --create-namespace \
            --set migrateData.enabled="${DJANGODB_SEED}" \
            --set traefik.host="${HOST}" \
            --set dockerSecret="${REGISTRY_CREDENTIAL}" \
            --set deployment.database_name="${DJANGO_DB}" \
            --set deployment.database_user="${DJANGO_DBUSER}" \
            --set deployment.database_password="${DJANGO_DBPASS}" \
            --set deployment.database_host="${DJANGO_DBHOST}" \
            --set image.tag="${TARGET_IMAGE_TAG}"

    - kubectl rollout status deployment/ecommerce-app-${TARGET_ENV} -n ${TARGET_ENV}

## Deploy Production Environment with Helm Chart

'Deploy Production':
  image: alfredvalderrama/container-tools:latest
  stage: production
  rules:
    - changes:
        - ".gitlab-ci.yml"
      when: never
    - if: $CI_COMMIT_BRANCH == "master" &amp;amp;&amp;amp; $PROVISION != 'true'
      when: always
  extends:
    - 'setEnv'
  tags:
    - dev
    - staging
    - sandbox
    - prod
  script:
    - echo "[PROD] Deploying PROD application"
    - |
      helm upgrade --install ecommerce-app-${TARGET_ENV} ${CHART} \
            -n ${TARGET_ENV} \
            --create-namespace \
            --set migrateData.enabled="${DJANGODB_SEED}" \
            --set traefik.host="${HOST}" \
            --set dockerSecret="${REGISTRY_CREDENTIAL}" \
            --set deployment.database_name="${DJANGO_DB}" \
            --set deployment.database_user="${DJANGO_DBUSER}" \
            --set deployment.database_password="${DJANGO_DBPASS}" \
            --set deployment.database_host="${DJANGO_DBHOST}" \
            --set image.tag="${TARGET_IMAGE_TAG}"

    - kubectl rollout status deployment/ecommerce-app-${TARGET_ENV} -n ${TARGET_ENV}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow of this pipeline is simple.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, manual intervention is needed when you just want to provision an environment for the first time.&lt;/li&gt;
&lt;li&gt;The pipeline executes when a change is committed to the dev/staging/master branches. &lt;/li&gt;
&lt;li&gt;Next, the pipeline runs a linter when a merge request is detected.&lt;/li&gt;
&lt;li&gt;The image tag is determined via the &lt;code&gt;release.txt&lt;/code&gt; file, which contains the semantic version of the release. You could base the release version on Git tags instead, but for now I only want the &lt;code&gt;release.txt&lt;/code&gt; file.&lt;/li&gt;
&lt;/ol&gt;
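
&lt;p&gt;As a concrete sketch of how the tag is composed (the version &lt;code&gt;1.4.2&lt;/code&gt; is a hypothetical &lt;code&gt;release.txt&lt;/code&gt; content; &lt;code&gt;RELEASE_CYCLE&lt;/code&gt; is what the &lt;code&gt;setEnv&lt;/code&gt; job exports for the dev branch):&lt;/p&gt;

```shell
# Hypothetical release.txt content; RELEASE_CYCLE is set by the setEnv job
# per branch (alpha for dev, beta for staging, stable for master).
echo "1.4.2" > release.txt
RELEASE_CYCLE="alpha"
# Same expression the pipeline uses to build the image tag:
TARGET_IMAGE_TAG="${RELEASE_CYCLE}-$(cat release.txt)"
echo "${TARGET_IMAGE_TAG}"   # prints alpha-1.4.2
```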

&lt;p&gt;After successfully creating the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file, you have a complete production-grade CI/CD pipeline integrated with issue tracking!!!&lt;/p&gt;

&lt;h1&gt;
  
  
  Demonstration
&lt;/h1&gt;

&lt;p&gt;Finally, we have arrived at the demo part. I will make a change to my code, such as simply updating the &lt;code&gt;release.txt&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note: We will not go deeper into the Django app. The focus here is that we have a real working pipeline.&lt;/em&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgulg7kugbar7smdyuhxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgulg7kugbar7smdyuhxg.png" alt="Modify release.txt file"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the change has been committed, the pipeline has started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz2wkqgjjcm3y3i3zias.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz2wkqgjjcm3y3i3zias.png" alt="Pipeline started"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06fqqskrfq65ce31x9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06fqqskrfq65ce31x9l.png" alt="Build Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see on my &lt;strong&gt;minikube-server&lt;/strong&gt;, the &lt;strong&gt;gitlab-runner&lt;/strong&gt; is now executing the jobs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgpr2iappdlovcwryma9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgpr2iappdlovcwryma9.png" alt="Gitlab Runner execution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, as you can see, the deployment is running and replacing the old pods with the newly built Docker image. ❤❤&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauo074nv6pxb320kh7s1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauo074nv6pxb320kh7s1.png" alt="Deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all!!! The flow is the same for the staging and prod environments. And you can easily transition your &lt;strong&gt;JIRA Issues&lt;/strong&gt; by including the issue ID in your Git commit message, and GitLab will automatically finish the job.&lt;/p&gt;
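
&lt;p&gt;For example (the issue key &lt;code&gt;ECOM-42&lt;/code&gt; is hypothetical; this assumes the GitLab Jira integration is enabled for the project):&lt;/p&gt;

```shell
# ECOM-42 is a hypothetical Jira issue key. With the GitLab Jira
# integration enabled, mentioning the key links the commit to the issue,
# and a trigger phrase such as "Closes ECOM-42" can transition it.
git commit -m "ECOM-42 Bump release.txt for the new release"
# ...then push to the environment branch, e.g.: git push origin dev
```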

&lt;h2&gt;
  
  
  Some key sorcery for starting a DevOps career
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Make it simple!&lt;/li&gt;
&lt;li&gt;Carefully plan your pipeline from the start, so you don't have to recreate it from scratch.&lt;/li&gt;
&lt;li&gt;Make it portable &amp;amp; readable, and also reusable!&lt;/li&gt;
&lt;li&gt;Learn how to read other people's code, so you can understand an infrastructure just by looking at the architecture &amp;amp; codebase.&lt;/li&gt;
&lt;li&gt;Be the entire IT Department. So you are the &lt;strong&gt;Lebron James&lt;/strong&gt; of your team.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://gitlab.com/redopsbay/my-devops-journeys" rel="noopener noreferrer"&gt;Public Repository of this demonstration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/V_9Ts6yu_VM" rel="noopener noreferrer"&gt;MyDevOpsTeam Pipeline Demonstration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mydevopsteam.io" rel="noopener noreferrer"&gt;MyDevOpsTeam Solutions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>gitlab</category>
      <category>kubernetes</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Setup Multi-Cluster ServiceMesh with Istio on EKS</title>
      <dc:creator>Alfred Valderrama</dc:creator>
      <pubDate>Wed, 22 Jun 2022 12:53:53 +0000</pubDate>
      <link>https://dev.to/redopsbay/setup-multi-cluster-servicemesh-with-istio-on-eks-5d5</link>
      <guid>https://dev.to/redopsbay/setup-multi-cluster-servicemesh-with-istio-on-eks-5d5</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Provisioning&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. Istio Setup with Helm chart&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;3. Cross network gateway validation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hey! In this post, We will be exploring a technology called &lt;strong&gt;ServiceMesh&lt;/strong&gt; powered by &lt;strong&gt;Istio&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Short Intro for Non-Istio Users
&lt;/h2&gt;

&lt;p&gt;Istio is an open source service mesh that layers transparently onto existing distributed applications.&lt;/p&gt;

&lt;p&gt;Istio's powerful features provide a uniform and more efficient way to &lt;strong&gt;&lt;em&gt;secure, connect, and monitor services&lt;/em&gt;&lt;/strong&gt;. Its powerful control plane brings vital features, including:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Secure &lt;strong&gt;service-to-service&lt;/strong&gt; communication in a cluster with TLS encryption, strong identity-based authentication and authorization.&lt;/p&gt;

&lt;p&gt;Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic&lt;/p&gt;

&lt;p&gt;Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection&lt;/p&gt;

&lt;p&gt;A pluggable policy layer and configuration API supporting access controls, rate limits and quotas&lt;/p&gt;

&lt;p&gt;Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most of you are already familiar with Istio. Since Kubernetes federation is not yet generally available (its latest version is still in beta), Istio is the way to go when you want to distribute traffic across different clusters in a production-grade deployment.&lt;/p&gt;

&lt;p&gt;So, let's quickly go through the step-by-step procedure to implement a multi-cluster deployment with Istio.&lt;/p&gt;

&lt;p&gt;This tutorial relies heavily on AWS, Terraform, and Helm charts.&lt;/p&gt;
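
&lt;p&gt;Before diving in, it helps to confirm the toolchain is available locally. The commands below only print versions; any reasonably recent Terraform, AWS CLI, kubectl, and Helm should do.&lt;/p&gt;

```shell
# Sanity-check the local toolchain; exact versions are not critical,
# but all four tools must be installed and on the PATH.
terraform version
aws --version
kubectl version --client
helm version
```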

&lt;h2&gt;
  
  
  Provisioning
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Required resources&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EKS Clusters&lt;/li&gt;
&lt;li&gt;Security Group / Rule&lt;/li&gt;
&lt;li&gt;S3 Bucket for terraform states&lt;/li&gt;
&lt;li&gt;IAM Roles&lt;/li&gt;
&lt;li&gt;IAM Permissions&lt;/li&gt;
&lt;li&gt;AWS ALB (Application LoadBalancer)&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  IAM Role
&lt;/h2&gt;

&lt;p&gt;Create your IAM Role for your EKS Cluster.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;iam.tf&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_iam_role" "eks_iam_role" {
  name = "AmazonAwsEksRole"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Service" : "eks.amazonaws.com"
        },
        "Action" : "sts:AssumeRole"
      }
    ]
  })
  tags = local.default_tags
}

resource "aws_iam_policy_attachment" "eksClusterPolicyAttachmentDefault" {
  name       = "eksClusterPolicyAttachmentDefault"
  roles      = [aws_iam_role.eks_iam_role.name]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role" "eks_iam_node_role" {
  name = "AmazonAwsEksNodeRole"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Service" : "ec2.amazonaws.com"
        },
        "Action" : "sts:AssumeRole"
      }
    ]
  })

  tags = local.default_tags
  depends_on = [
    aws_iam_role.eks_iam_role,
    aws_iam_policy_attachment.eksClusterPolicyAttachmentDefault
  ]
}
resource "aws_iam_policy_attachment" "AmazonEKSWorkerNodePolicyAttachment" {
  name       = "AmazonEKSWorkerNodePolicyAttachment"
  roles      = [aws_iam_role.eks_iam_node_role.name]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_policy_attachment" "AmazonEC2ContainerRegistryReadOnlyAttachment" {
  name       = "AmazonEC2ContainerRegistryReadOnlyAttachment"
  roles      = [aws_iam_role.eks_iam_node_role.name]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_iam_policy_attachment" "AmazonEKSCNIPolicyAttachment" {
  name       = "AmazonEKSCNIPolicyAttachment"
  roles      = [aws_iam_role.eks_iam_node_role.name]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Security Group
&lt;/h2&gt;

&lt;p&gt;An AWS security group is required to enable your clusters to communicate.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;securitygroup.tf&lt;/code&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_security_group" "cluster_sg" {
  name        = "cluster-security-group"
  description = "Communication with Worker Nodes"
  vpc_id      = var.vpc_id
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }

  tags = local.default_tags
}


resource "aws_security_group" "cp_sg" {
  name        = "cp-sg"
  description = "CP and Nodegroup communication"
  vpc_id      = var.vpc_id
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow all"
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
  }

  tags = local.default_tags
}

resource "aws_security_group" "wrkr_node" {
  name        = "worker-sg"
  description = "Worker Node SG"
  vpc_id      = var.vpc_id
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow All"
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
  }

  ingress {
    description = "Self Communication"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    self        = true
  }

  tags = local.default_tags
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  EKS Clusters
&lt;/h2&gt;

&lt;p&gt;In this section, you will provision the two EKS clusters.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eks.tf&lt;/code&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

locals {
  default_tags = {
    Provisioner = "Terraform"
    Environment = "Testing"
  }
}

resource "aws_eks_cluster" "istio_service_mesh_primary_1" {
  name     = "istio-service-mesh-primary-1"
  role_arn = aws_iam_role.eks_iam_role.arn

  vpc_config {
    subnet_ids          = var.subnet_ids
    public_access_cidrs = ["0.0.0.0/0"]
    security_group_ids = [
      aws_security_group.cluster_sg.id,
      aws_security_group.cp_sg.id
    ]
  }
  version = "1.21"

  timeouts {
    create = "15m"
  }
  depends_on = [
    aws_iam_role.eks_iam_role,
    aws_iam_role.eks_iam_node_role,
    aws_security_group.cluster_sg,
    aws_security_group.cp_sg,
    aws_security_group.wrkr_node
  ]
}

resource "aws_eks_cluster" "istio_service_mesh_primary_2" {
  name     = "istio-service-mesh-primary-2"
  role_arn = aws_iam_role.eks_iam_role.arn

  vpc_config {
    subnet_ids          = var.subnet_ids
    public_access_cidrs = ["0.0.0.0/0"]
    security_group_ids = [
      aws_security_group.cluster_sg.id,
      aws_security_group.cp_sg.id
    ]
  }
  version = "1.21"

  timeouts {
    create = "15m"
  }
  tags = local.default_tags
  depends_on = [
    aws_iam_role.eks_iam_role,
    aws_iam_role.eks_iam_node_role,
    aws_iam_policy_attachment.eksClusterPolicyAttachmentDefault,
    aws_security_group.cluster_sg,
    aws_security_group.cp_sg,
    aws_security_group.wrkr_node
  ]
}


resource "aws_eks_addon" "eks_addon_vpc-cni" {
  cluster_name = aws_eks_cluster.istio_service_mesh_primary_1.name
  addon_name   = "vpc-cni"
  depends_on = [
    aws_iam_role.eks_iam_role,
    aws_iam_role.eks_iam_node_role,
    aws_security_group.cluster_sg,
    aws_security_group.cp_sg,
    aws_security_group.wrkr_node,
    aws_eks_cluster.istio_service_mesh_primary_1
  ]
}

resource "aws_eks_addon" "eks_addon_vpc-cni_2" {
  cluster_name = aws_eks_cluster.istio_service_mesh_primary_2.name
  addon_name   = "vpc-cni"
  depends_on = [
    aws_iam_role.eks_iam_role,
    aws_iam_role.eks_iam_node_role,
    aws_security_group.cluster_sg,
    aws_security_group.cp_sg,
    aws_security_group.wrkr_node,
    aws_eks_cluster.istio_service_mesh_primary_2
  ]
}

resource "aws_eks_node_group" "istio_service_mesh_primary_worker_group_1" {
  cluster_name    = aws_eks_cluster.istio_service_mesh_primary_1.name
  node_group_name = "istio-service-mesh-primary-worker-group-1"
  node_role_arn   = aws_iam_role.eks_iam_node_role.arn
  subnet_ids      = var.subnet_ids

  remote_access {
    ec2_ssh_key               = var.ssh_key
    source_security_group_ids = [aws_security_group.wrkr_node.id]
  }

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 2
  }
  instance_types = ["t3.medium"]

  update_config {
    max_unavailable = 1
  }
  depends_on = [
    aws_iam_role.eks_iam_role,
    aws_iam_role.eks_iam_node_role,
    aws_security_group.cluster_sg,
    aws_security_group.cp_sg,
    aws_security_group.wrkr_node,
    aws_eks_cluster.istio_service_mesh_primary_1,
    aws_eks_addon.eks_addon_vpc-cni
  ]

  timeouts {
    create = "15m"
  }
  tags = local.default_tags

}


resource "aws_eks_node_group" "istio_service_mesh_primary_worker_group_2" {
  cluster_name    = aws_eks_cluster.istio_service_mesh_primary_2.name
  node_group_name = "istio-service-mesh-primary-worker-group-2"
  node_role_arn   = aws_iam_role.eks_iam_node_role.arn
  subnet_ids      = var.subnet_ids
  remote_access {
    ec2_ssh_key               = var.ssh_key
    source_security_group_ids = [aws_security_group.wrkr_node.id]
  }

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 2
  }
  instance_types = ["t3.medium"]
  update_config {
    max_unavailable = 1
  }
  depends_on = [
    aws_eks_cluster.istio_service_mesh_primary_2
  ]

  timeouts {
    create = "15m"
  }
  tags = local.default_tags
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After creating the necessary tf configuration, it's now time to apply it.&lt;/p&gt;

&lt;p&gt;First, create a tf workspace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform workspace new istio-service-mesh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, verify that your tf configuration is valid.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform init
terraform workspace select istio-service-mesh
terraform fmt
terraform validate
terraform plan -out='plan.out'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, apply it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform apply 'plan.out'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It is now provisioning:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz56dygzm0zjofxl4bb13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz56dygzm0zjofxl4bb13.png" alt="Provisioning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After 20 minutes or more, your clusters are ready!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxjusditgngg6ml65slb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxjusditgngg6ml65slb.png" alt="Console Clusters"&gt;&lt;/a&gt;&lt;/p&gt;
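
&lt;p&gt;With both clusters up, pull a kubeconfig for each of them. One way is &lt;code&gt;aws eks update-kubeconfig&lt;/code&gt; (the region and context aliases below are assumptions; adjust them to your setup):&lt;/p&gt;

```shell
# Merge credentials for both clusters into ~/.kube/config.
# The region and --alias values are illustrative, not from the tutorial.
aws eks update-kubeconfig --name istio-service-mesh-primary-1 \
    --region ap-southeast-1 --alias cluster1
aws eks update-kubeconfig --name istio-service-mesh-primary-2 \
    --region ap-southeast-1 --alias cluster2
# Confirm both contexts are registered:
kubectl config get-contexts
```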




&lt;h2&gt;
  
  
  2. Istio Setup with Helm chart
&lt;/h2&gt;

&lt;p&gt;It's now time to install Istio on both clusters.&lt;/p&gt;

&lt;p&gt;Required Charts&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Istio Base helm chart&lt;/li&gt;
&lt;li&gt;Istiod helm chart&lt;/li&gt;
&lt;li&gt;Istio ingress gateway helm chart&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, add the Istio Helm repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Cluster1
&lt;/h2&gt;

&lt;p&gt;Install the Istio base Helm chart:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm upgrade --install istio-base istio/base \
    --create-namespace -n istio-system \
    --version 1.13.2 --wait


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now the Istio base chart is installed. Next up is the Istio control plane.&lt;br&gt;
&lt;strong&gt;Note: You must specify the meshID, clusterName, and network to uniquely identify your clusters when installing the Istio control plane.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm upgrade --install istiod istio/istiod -n istio-system --create-namespace \
    --wait --version 1.13.2 \
    --set global.meshID="cluster1" \
    --set global.multiCluster.clusterName="cluster1" \
    --set global.network="cluster1"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
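&lt;p&gt;If you prefer a values file over repeated &lt;code&gt;--set&lt;/code&gt; flags, the same settings can be sketched like this (same chart keys as above; the filename is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# istiod-cluster1-values.yaml (sketch of the --set flags above)
global:
  meshID: "cluster1"
  network: "cluster1"
  multiCluster:
    clusterName: "cluster1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then pass it with &lt;code&gt;helm upgrade --install istiod istio/istiod -n istio-system -f istiod-cluster1-values.yaml&lt;/code&gt;.&lt;/p&gt;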

&lt;p&gt;Now it's time to expose the cluster with an ingress, the so-called edge router, by installing the &lt;strong&gt;istio ingressgateway&lt;/strong&gt;. In my case, I prefer to use an ALB instead of the &lt;strong&gt;LoadBalancer&lt;/strong&gt; provisioned by Istio 😆.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl create namespace istio-ingress
kubectl label namespace istio-ingress istio-injection=enabled

helm upgrade --install istio-ingressgateway istio/gateway \
     -n istio-ingress --create-namespace \
     --version 1.13.2 --set service.type="NodePort"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, create an ingress resource and associate it with the &lt;strong&gt;istio-ingressgateway&lt;/strong&gt; NodePort service.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ingress.yaml&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: istio-alb-ingress
  namespace: istio-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/certificate-arn: "&amp;lt;your-certificate-arn&amp;gt;"
    alb.ingress.kubernetes.io/listen-ports: '[{ "HTTP": 80 }, { "HTTPS": 443 }]'
    alb.ingress.kubernetes.io/security-groups: &amp;lt;your-security-group-id&amp;gt;
    alb.ingress.kubernetes.io/scheme: internet-facing
    #alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/tags: Environment=Test,Provisioner=Kubernetes
  labels:
    app:  "Istio"
    ingress: "Istio"
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /healthz/ready
            backend:
              serviceName: istio-ingressgateway
              servicePort: 15021
          - path: /*
            backend:
              serviceName: istio-ingressgateway
              servicePort: 443


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
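&lt;p&gt;A heads-up: the &lt;code&gt;networking.k8s.io/v1beta1&lt;/code&gt; Ingress API was removed in Kubernetes 1.22. On newer clusters the same ingress would look roughly like this (a sketch; the &lt;code&gt;alb.ingress.kubernetes.io&lt;/code&gt; annotations are unchanged from above and abbreviated here):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-alb-ingress
  namespace: istio-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    # ... same alb.ingress.kubernetes.io annotations as above ...
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation   # v1 spelling of servicePort: use-annotation
          - path: /healthz/ready
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway
                port:
                  number: 15021
          - path: /
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway
                port:
                  number: 443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;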
&lt;h2&gt;
  
  
  Cluster2
&lt;/h2&gt;

&lt;p&gt;The same steps apply to &lt;strong&gt;cluster2&lt;/strong&gt;, but you must change the &lt;strong&gt;meshID&lt;/strong&gt;, &lt;strong&gt;clusterName&lt;/strong&gt;, and &lt;strong&gt;network&lt;/strong&gt; values on the Istio control plane chart.&lt;/p&gt;

&lt;p&gt;Install the Istio base chart:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm upgrade --install istio-base istio/base \
    --create-namespace -n istio-system \
    --version 1.13.2 --wait


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the Istio control plane:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm upgrade --install istiod istio/istiod \
    -n istio-system --create-namespace \
    --wait --version 1.13.2 \
    --set global.meshID="cluster2" \
    --set global.multiCluster.clusterName="cluster2" \
    --set global.network="cluster2"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On cluster2, We don't have to setup additional edge &lt;strong&gt;ingressgateway&lt;/strong&gt;. Since, the connection will be started from &lt;strong&gt;cluster1&lt;/strong&gt;. But, How can we distribute the traffic from &lt;strong&gt;cluster1&lt;/strong&gt; to &lt;strong&gt;cluster2&lt;/strong&gt; ?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Answer: By exposing cluster services 💡&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;On &lt;strong&gt;cluster1&lt;/strong&gt;, create an additional load balancer by installing another &lt;strong&gt;istio-ingressgateway&lt;/strong&gt; release.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm upgrade --install istio-crossnetworkgateway istio/gateway \
     -n istio-system --create-namespace --version 1.13.2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For &lt;strong&gt;cluster2&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm upgrade --install istio-crossnetworkgateway istio/gateway \
     -n istio-system --create-namespace --version 1.13.2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
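&lt;p&gt;One thing to watch: Istio's multi-primary, multi-network docs give the cross-network (east-west) gateway pods a &lt;code&gt;topology.istio.io/network&lt;/code&gt; label matching the cluster's network. If cross-cluster traffic doesn't flow, it may be worth adding that label through the gateway chart's values. A hedged sketch for cluster1 (assuming your chart version supports a top-level &lt;code&gt;labels&lt;/code&gt; value; use the cluster2 network value on the other cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# crossnetwork-values.yaml (sketch) -- pass with -f to the istio/gateway chart
labels:
  topology.istio.io/network: cluster1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;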

&lt;p&gt;Expose the services on both clusters by applying the following Gateway on each. &lt;br&gt;
&lt;code&gt;istio-exposeservice.yaml&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-crossnetworkgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Services are now exposed. But how does Istio identify or discover resources in the other cluster?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We need to enable Endpoint Discovery.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;On &lt;strong&gt;cluster1&lt;/strong&gt;, I assume your kubeconfig is pointed at the &lt;strong&gt;cluster1 context&lt;/strong&gt;. This way, we can create Istio secrets that give each cluster API access to the other so they can discover each other's resources.&lt;/p&gt;

&lt;p&gt;Create an Istio secret for &lt;strong&gt;cluster2&lt;/strong&gt; (it contains cluster1's credentials). This command should be run against the &lt;strong&gt;cluster1&lt;/strong&gt; context:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

istioctl x create-remote-secret --name=cluster1 &amp;gt; cluster2-secret.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On &lt;strong&gt;cluster2&lt;/strong&gt;, create an Istio secret for &lt;strong&gt;cluster1&lt;/strong&gt; (it contains cluster2's credentials). This command should be run against the &lt;strong&gt;cluster2&lt;/strong&gt; context:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

istioctl x create-remote-secret --name=cluster2 &amp;gt; cluster1-secret.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you view the file, it's just a &lt;strong&gt;kubeconfig&lt;/strong&gt; for the cluster it was generated from, enabling API access.&lt;/p&gt;
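&lt;p&gt;Roughly, the generated secret looks like this (a sketch with fields abbreviated; the exact output comes from &lt;code&gt;istioctl&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster1
  namespace: istio-system
  annotations:
    networking.istio.io/cluster: cluster1
  labels:
    istio/multiCluster: "true"
stringData:
  cluster1: |
    # a kubeconfig granting API access to cluster1's API server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;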

&lt;p&gt;Next, we should apply the secrets to both clusters.&lt;br&gt;
&lt;strong&gt;Cluster1&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kubectl apply -f cluster1-secret.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Cluster2&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kubectl apply -f cluster2-secret.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Last but not least, verify that your clusters already share a trust configuration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

diff \
   &amp;lt;(export KUBECONFIG=$(pwd)/kubeconfig_cluster1.yaml &amp;amp;&amp;amp; kubectl -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}') \
   &amp;lt;(export KUBECONFIG=$(pwd)/kubeconfig_cluster2.yaml &amp;amp;&amp;amp; kubectl -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}')


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
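&lt;p&gt;When the clusters share the same root CA, the &lt;code&gt;diff&lt;/code&gt; above prints nothing and exits 0; any output means the trust roots differ. The same check, sketched with the &lt;code&gt;kubectl&lt;/code&gt; output faked so it runs anywhere:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Sketch: cert1/cert2 stand in for the base64 root-cert.pem values
# fetched by the two kubectl commands above.
cert1="FAKE-ROOT-CERT"   # pretend output from cluster1
cert2="FAKE-ROOT-CERT"   # pretend output from cluster2
if [ "$cert1" = "$cert2" ]; then
  echo "root CAs match"
else
  echo "root CAs differ"
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;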

&lt;p&gt;If no certificate is found on either cluster, you can generate a self-signed root CA certificate.&lt;/p&gt;

&lt;p&gt;Kindly, visit for more info: &lt;a href="https://github.com/redopsbay/Istio-Multi-Cluster/tree/master/istio-tool" rel="noopener noreferrer"&gt;Generating self-signed root CA certificates&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Generate Certificates
&lt;/h2&gt;

&lt;p&gt;Istio provides basic security by default so that services are not accidentally exposed publicly. Istio automatically drops client connections whose TLS handshake doesn't meet the requirements, because it verifies &lt;strong&gt;service-to-service&lt;/strong&gt; communication using trust configurations.&lt;/p&gt;

&lt;p&gt;Create the root CA certificate:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd istio-tool
mkdir -p certs
pushd certs
make -f ../Makefile.selfsigned.mk root-ca


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Generate a &lt;strong&gt;cluster1&lt;/strong&gt; certificate.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

make -f ../Makefile.selfsigned.mk cluster1-cacerts


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Generate a &lt;em&gt;cluster2&lt;/em&gt; certificate.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

make -f ../Makefile.selfsigned.mk cluster2-cacerts


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, apply the certificates on both clusters.&lt;br&gt;
For &lt;strong&gt;Cluster1&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl create secret generic cacerts -n istio-system \
      --from-file=cluster1/ca-cert.pem \
      --from-file=cluster1/ca-key.pem \
      --from-file=cluster1/root-cert.pem \
      --from-file=cluster1/cert-chain.pem


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For &lt;strong&gt;Cluster2&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl create secret generic cacerts -n istio-system \
      --from-file=cluster2/ca-cert.pem \
      --from-file=cluster2/ca-key.pem \
      --from-file=cluster2/root-cert.pem \
      --from-file=cluster2/cert-chain.pem


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After applying all the necessary steps, &lt;strong&gt;cluster1&lt;/strong&gt; and &lt;strong&gt;cluster2&lt;/strong&gt; should now be able to distribute traffic between them.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Cross network gateway validation
&lt;/h2&gt;

&lt;p&gt;After applying the necessary steps, you of course need to verify that it's actually working. I've created a basic application called &lt;strong&gt;MetaPod&lt;/strong&gt; that extracts pod information or metadata and serves it over the web, so you can determine whether your traffic is actually being forwarded to the 2nd cluster.&lt;/p&gt;

&lt;p&gt;MetaPod sample deployment manifest.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Cluster1&lt;/strong&gt;, try deploying the test deployment.&lt;br&gt;
&lt;strong&gt;Note: You must change the hosts values to make it work on your end.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: metapod-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "metapod.example.com"
    tls:
      httpsRedirect: true # sends 301 redirect for http requests
      # note: tls.mode cannot be set on an HTTP server; only httpsRedirect is valid here
  - port:
      number: 443
      name: http-443
      protocol: HTTP  # plain HTTP, since the TLS certificate lives upstream at the LoadBalancer (ALB)
    hosts:
    - "metapod.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: metapod
spec:
  hosts:
  - "metapod.example.com"
  - "metapod.default.svc.cluster.local"
  gateways:
  - metapod-gateway
  http:
  - route:
    - destination:
        host: metapod.default.svc.cluster.local
        port:
          number: 80
    retries:
      attempts: 5
      perTryTimeout: 5s

---
apiVersion: v1
kind: Service
metadata:
  name: metapod
  labels:
    app: metapod
    service: metapod
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: metapod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metapod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metapod
      version: v1
  template:
    metadata:
      labels:
        app: metapod
        version: v1
    spec:
      containers:
      - image: docker.io/redopsbay/metapod:latest
        imagePullPolicy: IfNotPresent
        name: metapod
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: POD_CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: requests.cpu
        - name: POD_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: limits.cpu
        - name: POD_MEM_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: requests.memory
        - name: POD_MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: limits.memory
        - name: CLUSTER_NAME
          value: "Cluster1"
        - name: GIN_MODE
          value: "release"



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For &lt;strong&gt;Cluster2&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: metapod
spec:
  hosts:
  - "metapod.example.com"
  - "metapod.default.svc.cluster.local"
  gateways:
  - metapod-gateway
  http:
  - route:
    - destination:
        host: metapod.default.svc.cluster.local
        port:
          number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: metapod
  labels:
    app: metapod
    service: metapod
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: metapod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metapod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metapod
      version: v1
  template:
    metadata:
      labels:
        app: metapod
        version: v1
    spec:
      containers:
      - image: docker.io/redopsbay/metapod:latest
        imagePullPolicy: IfNotPresent
        name: metapod
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: POD_CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: requests.cpu
        - name: POD_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: limits.cpu
        - name: POD_MEM_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: requests.memory
        - name: POD_MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: metapod
              resource: limits.memory
        - name: CLUSTER_NAME
          value: "Cluster2"
        - name: GIN_MODE
          value: "release"



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a few seconds, try visiting the gateway host you registered.&lt;/p&gt;

&lt;p&gt;In my case, that's &lt;a href="https://metapod.example.com" rel="noopener noreferrer"&gt;https://metapod.example.com&lt;/a&gt;, and it should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferzmxb2d1m0tf56uot49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferzmxb2d1m0tf56uot49.png" alt="Cluster1.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see under &lt;strong&gt;CLUSTER NAME&lt;/strong&gt;, your traffic was forwarded to &lt;strong&gt;Cluster1&lt;/strong&gt;. If you keep refreshing the page, you'll notice that traffic is also being forwarded to &lt;strong&gt;Cluster2&lt;/strong&gt;. See below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj2ib6k3gcj9gewwyut3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj2ib6k3gcj9gewwyut3.png" alt="Cluster2.png"&gt;&lt;/a&gt;&lt;/p&gt;
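&lt;p&gt;You can also check the distribution from the command line by hitting the gateway host repeatedly and counting which cluster answered. Since that depends on your live setup, here's the counting step with the responses faked (the curl loop in the comment, and the exact page text it greps for, are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# In practice, something like:
#   for i in $(seq 1 10); do curl -s https://metapod.example.com | grep -o 'Cluster[12]'; done
# Here the responses are faked so the counting step is reproducible:
responses="Cluster1
Cluster2
Cluster1
Cluster2
Cluster2"
echo "$responses" | sort | uniq -c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A roughly even count per cluster means the cross-network gateway is balancing as expected.&lt;/p&gt;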

&lt;p&gt;Alright, that's it! You may encounter a lot of problems along the way, but it's worth trying.&lt;/p&gt;

&lt;p&gt;You can message me directly here or on my twitter account &lt;a href="https://twitter.com/redopsbay" rel="noopener noreferrer"&gt;https://twitter.com/redopsbay&lt;/a&gt; if you need help.&lt;/p&gt;

&lt;p&gt;I will try my best to help you out to fix it. 😅😅😅&lt;br&gt;
Hope you like it. Cheers!!!🍻🍻 Thanks!!!&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/redopsbay/Istio-Multi-Cluster" rel="noopener noreferrer"&gt;Istio Multi Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/redopsbay/MetaPod" rel="noopener noreferrer"&gt;MetaPod&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://istio.io/latest/docs" rel="noopener noreferrer"&gt;Istio Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/" rel="noopener noreferrer"&gt;Plug in CA Certificates&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/" rel="noopener noreferrer"&gt;Installing Multi Primary Multi Network Setup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://istio.io/latest/docs/ops/diagnostic-tools/multicluster/" rel="noopener noreferrer"&gt;Troubleshooting multi-cluster&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>servicemesh</category>
    </item>
  </channel>
</rss>
