<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ian Kiprotich</title>
    <description>The latest articles on DEV Community by Ian Kiprotich (@onai254).</description>
    <link>https://dev.to/onai254</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1043305%2Fa51e78b0-174a-4740-98df-8863dfcaf655.png</url>
      <title>DEV Community: Ian Kiprotich</title>
      <link>https://dev.to/onai254</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/onai254"/>
    <language>en</language>
    <item>
      <title>Exploring Longhorn: A Game-Changer for Kubernetes Storage</title>
      <dc:creator>Ian Kiprotich</dc:creator>
      <pubDate>Wed, 24 May 2023 06:43:35 +0000</pubDate>
      <link>https://dev.to/onai254/exploring-longhorn-a-game-changer-for-kubernetes-storage-3a0j</link>
      <guid>https://dev.to/onai254/exploring-longhorn-a-game-changer-for-kubernetes-storage-3a0j</guid>
      <description>&lt;h2&gt;
  
  
  Exploring Longhorn: A Game-Changer for Kubernetes Storage
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Welcome back to my blog, a dedicated space where we dive into the vast ocean of Kubernetes and its thriving ecosystem. If you’re a developer, a DevOps professional, or simply a tech enthusiast, you know that Kubernetes has fundamentally transformed the way we build, deploy, and manage applications. But the story doesn’t end there.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AGmStDMuya4LpZBmpwjiQ-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AGmStDMuya4LpZBmpwjiQ-Q.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, we shift our focus towards a crucial yet often challenging aspect of Kubernetes — storage. Ensuring persistent and reliable storage in a Kubernetes environment can sometimes feel like a complex puzzle. However, there’s a solution that can help us put these pieces together seamlessly: Longhorn, which runs in any environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://longhorn.io/kb/" rel="noopener noreferrer"&gt;Longhorn&lt;/a&gt;, an innovative open-source project by Rancher Labs, offers a reliable, lightweight, and user-friendly &lt;strong&gt;distributed block storage&lt;/strong&gt; system for Kubernetes. It’s changing the game by making it simpler to run stateful applications in a Kubernetes environment without the storage headaches.&lt;/p&gt;

&lt;p&gt;In this blog post, we will dive into the world of Longhorn, exploring its features, architecture, and why it might be the perfect fit for your Kubernetes storage needs. Whether you’re already navigating the Kubernetes landscape or just beginning your journey, this comprehensive guide to Longhorn will provide valuable insights into managing Kubernetes storage effectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AZuyWWhRILB4fQznndYWmQw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AZuyWWhRILB4fQznndYWmQw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Longhorn: Origins, Architecture, and Purpose
&lt;/h2&gt;

&lt;p&gt;Born out of Rancher Labs, Longhorn began as an open-source project in 2014. The goal was clear: to simplify, streamline, and democratize the management of persistent storage volumes in Kubernetes environments. Originally an internal project at Rancher Labs, it was donated to the Cloud Native Computing Foundation (CNCF) in 2020, cementing its position in the Kubernetes ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Longhorn?
&lt;/h3&gt;

&lt;p&gt;At its core, Longhorn is a lightweight, reliable, and easy-to-use distributed block storage system for Kubernetes. In essence, it transforms commodity hardware and cloud volumes into a reliable and distributed block storage solution, providing software-defined storage (SDS) for Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Does Longhorn Work?
&lt;/h3&gt;

&lt;p&gt;Longhorn operates by using containers to create a storage volume and replicates it synchronously across multiple nodes in a Kubernetes cluster. It ensures high availability of the data stored in these volumes by intelligently distributing data replicas across different nodes and disks. If a node or disk fails, Longhorn automatically switches workload to another replica to maintain availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Purpose of Longhorn
&lt;/h3&gt;

&lt;p&gt;The principal aim of Longhorn is to make it as easy as possible to manage persistent storage for Kubernetes workloads. Before tools like Longhorn, managing stateful workloads in Kubernetes was quite challenging. Developers and system administrators often had to rely on external storage systems, manually provision storage, or be restricted to specific cloud providers.&lt;/p&gt;

&lt;p&gt;With Longhorn, this changes. It allows users to create, manage, and automatically scale persistent volumes directly from the Kubernetes interface or through Kubernetes APIs. It also provides intuitive interfaces for volume backups, snapshots, and storage management — all out of the box.&lt;/p&gt;

&lt;p&gt;Longhorn is designed to be platform-agnostic. This means it can work across bare-metal, on-premises, and cloud-based Kubernetes installations, providing a consistent experience no matter where your cluster is hosted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features and Benefits of Using Longhorn in a Kubernetes Environment
&lt;/h2&gt;

&lt;p&gt;Longhorn provides a rich set of features designed to address various needs in a Kubernetes environment. These features make it an excellent choice for handling persistent storage in Kubernetes, simplifying the management of stateful applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Easy Installation and Operation
&lt;/h3&gt;

&lt;p&gt;Longhorn can be installed on any Kubernetes cluster with a single command, making it extremely straightforward to get started. It provides an intuitive graphical user interface that simplifies management tasks, such as creating and attaching volumes, taking and managing snapshots, and setting up backups.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Lightweight and Reliable
&lt;/h3&gt;

&lt;p&gt;Longhorn has a small footprint and doesn’t require any additional services to run, making it a lightweight solution. It ensures high availability by synchronously replicating volumes across different nodes in the cluster. If a node or disk fails, Longhorn automatically re-replicates data to other nodes, maintaining data safety.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Cloud-Native and Kubernetes-Native
&lt;/h3&gt;

&lt;p&gt;As a cloud-native solution, Longhorn runs on any Kubernetes cluster, regardless of whether it’s on-premises, in the cloud, or in a hybrid environment. Longhorn volumes are natively integrated into Kubernetes, which means you can manage them just like any other Kubernetes resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Disaster Recovery Volumes
&lt;/h3&gt;

&lt;p&gt;Longhorn supports cross-cluster disaster recovery volumes, enabling standby volumes to be quickly activated in case of a disaster. This feature is vital for business continuity and meets the high availability requirements of enterprise applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Incremental Snapshots and Backups
&lt;/h3&gt;

&lt;p&gt;Longhorn allows you to take incremental snapshots of your volumes, which can then be backed up to a secondary storage. This approach optimizes storage usage and speeds up backup and recovery operations.&lt;/p&gt;
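
&lt;p&gt;In newer Longhorn releases (v1.2 and later), recurring snapshots and backups can be declared with a RecurringJob custom resource. The following is a sketch; the name, schedule, and retention values are placeholders, so check the documentation for your Longhorn version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-backup
  namespace: longhorn-system
spec:
  cron: "0 2 * * *"   # run every day at 02:00
  task: "backup"      # use "snapshot" for snapshots only
  groups:
    - default         # applies to volumes in the default group
  retain: 7           # keep the last 7 backups
  concurrency: 2      # process at most 2 volumes at a time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;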

&lt;h3&gt;
  
  
  6. ReadWriteMany (RWX) Volumes Support
&lt;/h3&gt;

&lt;p&gt;Starting from version 1.1.0, Longhorn supports ReadWriteMany (RWX) volumes, allowing a volume to be read and written by many nodes at the same time — a key requirement for certain types of applications.&lt;/p&gt;
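
&lt;p&gt;As an illustration, a PVC requesting an RWX Longhorn volume could look like the following (the claim name and size are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteMany   # multiple nodes can mount this volume read-write
  resources:
    requests:
      storage: 5Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;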

&lt;h3&gt;
  
  
  7. Active Monitoring and Alerts
&lt;/h3&gt;

&lt;p&gt;Longhorn provides active monitoring and alerting through integration with Prometheus, a popular open-source monitoring solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Built-In Data Locality and Affinity Features
&lt;/h3&gt;

&lt;p&gt;Longhorn supports data locality to try and keep the data and the workload in the same node to improve performance. It also supports volume and node affinity/anti-affinity policies for advanced volume scheduling.&lt;/p&gt;
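
&lt;p&gt;These behaviors are typically configured per StorageClass. Here is a sketch, assuming Longhorn’s CSI provisioner name and parameter keys (verify the exact parameter set against your Longhorn version’s documentation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-local
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"        # replicas kept per volume
  dataLocality: "best-effort"  # try to keep a replica on the workload's node
  staleReplicaTimeout: "30"    # minutes before a failed replica is cleaned up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;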

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2668%2F1%2AyM3C5UokFjnoBhyuQpl9Hg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2668%2F1%2AyM3C5UokFjnoBhyuQpl9Hg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Longhorn Architecture and Operation
&lt;/h2&gt;

&lt;p&gt;The architecture of Longhorn is designed to be lightweight and fully integrated with Kubernetes. It consists of a set of components that work together to deliver a reliable, distributed block storage solution for your Kubernetes workloads. Here’s an overview of the key components:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Longhorn Manager:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Longhorn Manager runs on each node in the Kubernetes cluster. It’s responsible for orchestrating the other components, handling scheduling, detecting node failures, and maintaining the state of the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Longhorn Engine:
&lt;/h3&gt;

&lt;p&gt;The Longhorn Engine is the data plane component responsible for reading and writing data to the storage backend. Each Longhorn volume has a corresponding Longhorn Engine instance. The engine captures snapshots, creates backups, and replicates data across different nodes for high availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Longhorn UI:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Longhorn UI is a graphical user interface that allows users to easily manage volumes, take snapshots, set up backups, and monitor the state of the storage system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Longhorn CSI (Container Storage Interface) Driver:
&lt;/h3&gt;

&lt;p&gt;The CSI driver allows Kubernetes to use Longhorn as its storage provider. It translates Kubernetes volume operations into corresponding Longhorn operations, such as creating a volume, attaching/detaching a volume to/from a node, and mounting/unmounting a volume to/from a pod.&lt;/p&gt;

&lt;h3&gt;
  
  
  Longhorn Instance Manager:
&lt;/h3&gt;

&lt;p&gt;There are two types of instance managers, one for managing Longhorn Engine instances and another for managing Longhorn Replica instances. Each node in the cluster runs one instance manager pod for each type.&lt;/p&gt;

&lt;h3&gt;
  
  
  Longhorn Replicas:
&lt;/h3&gt;

&lt;p&gt;These are the copies of the data that are synchronously created by the Longhorn engine on different nodes for redundancy and high availability.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Longhorn Operates
&lt;/h2&gt;

&lt;p&gt;In Longhorn, when you create a persistent volume claim (PVC) in Kubernetes, the Longhorn CSI driver communicates this request to the Longhorn Manager. The Manager then provisions a new Longhorn volume and a corresponding Longhorn Engine.&lt;/p&gt;

&lt;p&gt;The Engine stores the actual data of the volume and manages the replication to the Longhorn Replicas spread across different nodes. This replication ensures that your data remains safe and available even if a node fails.&lt;/p&gt;

&lt;p&gt;Moreover, when you take a snapshot, the Engine creates a point-in-time copy of the data without affecting the running workload. These snapshots can be used for backups, which are incremental and can be stored in secondary storage such as AWS S3 or an NFS server.&lt;/p&gt;
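
&lt;p&gt;The backup target is configured as a Longhorn setting (under Setting in the UI). If you install with Helm, it can also be set through chart values; the sketch below uses placeholder bucket and secret names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# values.yaml fragment for the Longhorn Helm chart (names are placeholders)
defaultSettings:
  backupTarget: s3://my-longhorn-backups@us-east-1/
  backupTargetCredentialSecret: aws-s3-credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;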

&lt;p&gt;To sum it up, the architecture of Longhorn is built to operate seamlessly with Kubernetes, with a focus on simplicity, reliability, and ease of use. The system ensures your data is always safe, available, and can be managed effortlessly right from the Kubernetes interface or via Kubernetes APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Longhorn
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before installing Longhorn, ensure that your Kubernetes cluster meets the following requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubernetes v1.14 or higher&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At least 1 worker node&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm 3 installed (If you plan to use Helm for installation)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open-iscsi installed on all nodes (in most cloud Kubernetes services like GKE, EKS, AKS, this comes preinstalled)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
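
&lt;p&gt;To check these prerequisites (including open-iscsi) on your nodes, Longhorn provides an environment check script in its repository; the path below assumes the v1.1.0 tag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/v1.1.0/scripts/environment_check.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;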

&lt;h3&gt;
  
  
  Installation Steps
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Using kubectl
&lt;/h3&gt;

&lt;p&gt;First, download the Longhorn YAML file from the Longhorn releases page. For example, if you’re installing Longhorn 1.1.0, you would use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://raw.githubusercontent.com/longhorn/longhorn/v1.1.0/deploy/longhorn.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, apply the YAML file with kubectl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f longhorn.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Using Helm
&lt;/h3&gt;

&lt;p&gt;First, add the Longhorn Helm repository:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add longhorn https://charts.longhorn.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, update your Helm repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create the namespace that Longhorn will be installed into:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace longhorn-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, install Longhorn:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install longhorn longhorn/longhorn --namespace longhorn-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2364%2F1%2A6qUw50xA16vaktb98jRxYA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2364%2F1%2A6qUw50xA16vaktb98jRxYA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that Longhorn is installed, we can view the running pods:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n longhorn-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2098%2F1%2AcaXg_0BjNt1zz9A2aBjbYQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2098%2F1%2AcaXg_0BjNt1zz9A2aBjbYQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we can list the services to find the Longhorn UI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc -n longhorn-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2138%2F1%2AnNnDckxwxlUA9smS2ZRRGA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2138%2F1%2AnNnDckxwxlUA9smS2ZRRGA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we want to access the Longhorn UI. I will use port forwarding, but you can use any method that suits your setup, such as an Ingress or a LoadBalancer service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2360%2F1%2Aq7NfHgwEpx-4mtsAuli5PQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2360%2F1%2Aq7NfHgwEpx-4mtsAuli5PQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the port forward running, open http://localhost:8080 in your browser to view the UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2842%2F1%2Awu3OhenWe9eEamhLRvi7gQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2842%2F1%2Awu3OhenWe9eEamhLRvi7gQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the dashboard, we can view different aspects of the cluster’s storage. From the top bar, you can click on Backup to view completed backups or restore a previous one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2778%2F1%2ADxRcE2WFvQ_xFGcy36N1cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2778%2F1%2ADxRcE2WFvQ_xFGcy36N1cg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Volume in Longhorn
&lt;/h2&gt;

&lt;p&gt;You can create volumes either from the Longhorn UI or through a Kubernetes Persistent Volume Claim (PVC). From the UI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on the Volume tab at the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the Create button. This will open the Create Volume page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fill in the required details such as Name, Size, and Number of Replicas, then click on Create.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AOMlKDq_OAGV06eE7R2s1Qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AOMlKDq_OAGV06eE7R2s1Qw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Through a Kubernetes Persistent Volume Claim (PVC):&lt;/p&gt;

&lt;p&gt;You can also create a Longhorn volume by creating a PVC in Kubernetes. Here is an example of how to do this:&lt;/p&gt;

&lt;p&gt;Create a PersistentVolumeClaim YAML file. For example, you might name it longhorn-volume-pvc.yaml:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-vol-claim
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply the PVC manifest:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f longhorn-volume-pvc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
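
&lt;p&gt;To use the volume, reference the claim from a pod spec. A minimal example (the image is arbitrary; the claim name matches the PVC created above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
    - name: app
      image: nginx:stable
      volumeMounts:
        - name: data
          mountPath: /data   # the Longhorn volume is mounted here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: longhorn-vol-claim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;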

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AU7UsTuJaZ5XIPdVMbe6KUA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AU7UsTuJaZ5XIPdVMbe6KUA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now view the PVC from the dashboard and experiment with it; for example, you can create backups and restore them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Remarks
&lt;/h2&gt;

&lt;p&gt;As we bring our discussion on Longhorn to a close, it’s clear to see the many advantages this cloud-native, &lt;strong&gt;lightweight&lt;/strong&gt;, and &lt;strong&gt;easy-to-use&lt;/strong&gt; storage system brings to Kubernetes environments. From its straightforward installation to its cloud-agnostic nature, right down to its &lt;strong&gt;high-availability&lt;/strong&gt; and robust data &lt;strong&gt;protection features&lt;/strong&gt;, Longhorn is built to simplify storage management for Kubernetes workloads.&lt;/p&gt;

&lt;p&gt;With Longhorn, users can focus more on building their applications rather than worrying about the underlying storage layer. The features we’ve highlighted — such as incremental snapshot and backup, support for RWX volumes, and cross-cluster disaster recovery — are just a taste of what Longhorn can offer. Not to mention its intuitive user interface, which makes managing and troubleshooting your storage resources a breeze.&lt;/p&gt;

&lt;p&gt;That being said, I believe Longhorn is a robust storage solution that can cater to both small-scale and enterprise-level Kubernetes deployments. If you haven’t tried it yet, I encourage you to do so and explore how it can streamline your storage management.&lt;/p&gt;

&lt;p&gt;Remember, as with any technology, the best way to understand it is to get hands-on. Install Longhorn, create some volumes, and see how it improves your Kubernetes experience.&lt;/p&gt;

&lt;p&gt;Thank you for joining me on this deep dive into Longhorn. I hope it has given you valuable insights, and I would love to hear your experiences and thoughts on this remarkable storage solution for Kubernetes. Stay tuned to this blog for more insights into the ever-evolving world of Kubernetes and its ecosystem.&lt;/p&gt;

&lt;p&gt;Happy Kubernetes-ing!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>longhorn</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>How to Provision your EKS cluster using Terraform</title>
      <dc:creator>Ian Kiprotich</dc:creator>
      <pubDate>Mon, 13 Mar 2023 22:45:18 +0000</pubDate>
      <link>https://dev.to/onai254/how-to-provision-your-eks-cluster-using-terraform-4flo</link>
      <guid>https://dev.to/onai254/how-to-provision-your-eks-cluster-using-terraform-4flo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tUp7xqEk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbzMqWmz8Mbut9fSmWZavrQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tUp7xqEk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbzMqWmz8Mbut9fSmWZavrQ.png" alt="" width="591" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this article, we will provision our EKS cluster using Terraform and make it ready for application deployments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;An existing AWS account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An understanding of Terraform basics (I have a separate blog post that explains what Terraform is and how it is used)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic knowledge of kubectl&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS’s Elastic Kubernetes Service (EKS) is a comprehensive managed service that enables you to deploy, manage, and scale containerized applications on Kubernetes with ease.&lt;/p&gt;

&lt;p&gt;This tutorial will walk you through the process of deploying an EKS cluster using Terraform, a robust infrastructure automation tool. Afterwards, you will configure kubectl with Terraform output and confirm that your cluster is ready for use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Deploy With Terraform
&lt;/h2&gt;

&lt;p&gt;Why choose to use Terraform for deploying EKS clusters? While it’s possible to use AWS’s built-in provisioning methods (such as the UI, CLI, or CloudFormation), Terraform provides several benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unified Workflow&lt;/strong&gt; — If you already utilize Terraform for deploying AWS infrastructure, you can use the same process for deploying both EKS clusters and the applications that run on them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lifecycle Management&lt;/strong&gt; — Terraform handles the creation, updates, and removal of tracked resources, without requiring manual API inspection to identify those resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graph of Relationships&lt;/strong&gt; — Terraform identifies and observes the dependencies between resources. For instance, if an AWS Kubernetes cluster requires specific VPC and subnet configurations, Terraform won’t attempt to create the cluster unless the VPC and subnet have been provisioned first.&lt;/p&gt;

&lt;p&gt;First, clone this repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/onai254/terraform-eks-provision/tree/main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then open the files in your preferred editor and change into the repository directory in your terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j5d2FoIx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2866/1%2AH0SB3morDARcNrrOmXugpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j5d2FoIx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2866/1%2AH0SB3morDARcNrrOmXugpg.png" alt="" width="880" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The configuration for this deployment specifies a fresh VPC where the cluster will be provisioned. The public EKS module is used to build the necessary resources such as Auto Scaling Groups, security groups, and IAM Roles and Policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EeRGfkue--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2An0PJU1JR776cvXzoQd4CAQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EeRGfkue--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2An0PJU1JR776cvXzoQd4CAQ.png" alt="" width="630" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can inspect the module configuration by opening the main.tf file. The eks_managed_node_groups parameter is set to deploy two managed node groups, each with a desired size of three nodes.&lt;/p&gt;

&lt;p&gt;Let’s look at each component in the repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main.tf
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = var.region
}

data "aws_availability_zones" "available" {}

locals {
  cluster_name = var.Cluster_name
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.19.0"

  name = var.VPC_name

  cidr = "10.0.0.0/16"
  azs  = slice(data.aws_availability_zones.available.names, 0, 3)

  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.Cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.Cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.5.1"

  cluster_name    = var.Cluster_name
  cluster_version = "1.24"

  vpc_id                         = module.vpc.vpc_id
  subnet_ids                     = module.vpc.private_subnets
  cluster_endpoint_public_access = true

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"

  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t3.small"]

      min_size     = 2
      max_size     = 4
      desired_size = 3
    }

    two = {
      name = "node-group-2"

      instance_types = ["t3.small"]

      min_size     = 2
      max_size     = 4
      desired_size = 3
    }
  }
}

# https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons/ 
data "aws_iam_policy" "ebs_csi_policy" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
}

module "irsa-ebs-csi" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "4.7.0"

  create_role                   = true
  role_name                     = "AmazonEKSTFEBSCSIRole-${var.Cluster_name}"
  provider_url                  = module.eks.oidc_provider
  role_policy_arns              = [data.aws_iam_policy.ebs_csi_policy.arn]
  oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]
}

resource "aws_eks_addon" "ebs-csi" {
  cluster_name             = var.Cluster_name
  addon_name               = "aws-ebs-csi-driver"
  addon_version            = "v1.5.2-eksbuild.1"
  service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
  tags = {
    "eks_addon" = "ebs-csi"
    "terraform" = "true"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This Terraform code provisions an Amazon Elastic Kubernetes Service (EKS) cluster with an EBS CSI driver. The code defines a new VPC, with private and public subnets, and uses a public EKS module to create the required resources, including auto-scaling groups, security groups, and IAM roles and policies.&lt;/p&gt;

&lt;p&gt;The eks_managed_node_groups parameter configures the EKS cluster with two managed node groups, each with a desired size of three nodes. The code also creates an IAM role with the necessary permissions to use the EBS CSI driver, and attaches this role to a Kubernetes service account. Finally, the code provisions the EBS CSI driver as an EKS add-on using the previously created IAM role.&lt;/p&gt;

&lt;p&gt;Overall, this code enables the creation of a fully-managed EKS cluster with EBS CSI driver support using Terraform, providing a unified workflow and full lifecycle management capabilities.&lt;/p&gt;
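&lt;p&gt;With the driver installed, workloads can request EBS-backed volumes through a StorageClass. The following is a minimal sketch (the StorageClass and claim names here are illustrative; ebs.csi.aws.com is the driver's standard provisioner name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Any pod that mounts data-claim will then have an EBS volume provisioned on demand in the pod's availability zone.&lt;/p&gt;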

&lt;h2&gt;
  
  
  Output.tf
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane"
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = module.eks.cluster_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These are the output values that Terraform will show after successfully running the code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;“cluster_endpoint”: the endpoint for the EKS control plane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“cluster_security_group_id”: the security group IDs attached to the cluster control plane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“region”: the AWS region used in the deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“cluster_name”: the Kubernetes cluster name used in the deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These outputs are useful for accessing the deployed resources and verifying the deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform.tf&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.47.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "~&amp;gt; 3.4.3"
    }

    tls = {
      source  = "hashicorp/tls"
      version = "~&amp;gt; 4.0.4"
    }

    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~&amp;gt; 2.2.0"
    }
  }

  required_version = "~&amp;gt; 1.3"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the Terraform configuration block that declares the required providers and the required version of Terraform itself. In this specific example, it requires the AWS, Random, TLS, and Cloudinit providers at specific version constraints in order to run the Terraform code. Additionally, the ~&amp;gt; 1.3 constraint allows Terraform version 1.3 or any newer 1.x release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Variable.tf
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-east-2"
}

variable "Cluster_name" {
  description = "Cluster Name"
  type = string
  default = "Giovanni"
}

variable "VPC_name" {
  description = "VPC Name"
  type = string
  default = "VPC_giovanni"

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These are the variable definitions in the Terraform code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;region: A string variable that specifies the AWS region to use. It has a default value of "us-east-2" and a description explaining what it is used for.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cluster_name: A string variable that specifies the name of the Kubernetes cluster. It has a default value of "Giovanni" and a description explaining what it is used for.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPC_name: A string variable that specifies the name of the VPC. It has a default value of "VPC_giovanni" and a description explaining what it is used for.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These variables can be overridden when the Terraform module is used, allowing users to customize the infrastructure to their needs. The descriptions provided can help users understand what each variable is used for and make informed decisions when setting their values.&lt;/p&gt;
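&lt;p&gt;For example, the defaults can be overridden on the command line or through a .tfvars file (the values below are purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -var="region=eu-west-1" -var="Cluster_name=my-cluster"

# or, equivalently, in a terraform.tfvars file:
region       = "eu-west-1"
Cluster_name = "my-cluster"
VPC_name     = "VPC_my_cluster"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;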

&lt;h2&gt;
  
  
  Let's now initialize our environment
&lt;/h2&gt;

&lt;p&gt;Before we can provision the infrastructure, we first need to initialize the Terraform working directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cvtr1tYh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AxmC6A50PG6lWR-yOup1eFQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cvtr1tYh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AxmC6A50PG6lWR-yOup1eFQ.png" alt="" width="880" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;terraform init is a command used to initialize a new or existing Terraform working directory. It is typically the first command to run after writing or cloning a Terraform configuration, or when starting to work on a new module or environment.&lt;/p&gt;

&lt;p&gt;When you run terraform init, it will perform the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Download and install the required provider plugins and modules declared in the configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialize a backend if one is not already configured, which is used to store the state of the infrastructure that Terraform manages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check for updates to the installed provider plugins and modules, and display a message if updates are available.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Running terraform init is a necessary step before using any other Terraform command, and it should be run again whenever new providers or modules are added or updated in the configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform plan
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DSvuusiH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A6wfS0mJamt52eTQtsRa1Zw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DSvuusiH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A6wfS0mJamt52eTQtsRa1Zw.png" alt="" width="880" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;terraform plan will show you the execution plan for the infrastructure changes that will be applied when you run terraform apply. It will check your configuration files and compare them with the current state of your infrastructure, and then display a summary of the changes that Terraform will make to bring your infrastructure to the desired state.&lt;/p&gt;

&lt;p&gt;Before running terraform plan, you should make sure that you have initialized your Terraform working directory by running terraform init and that your AWS credentials are properly configured.&lt;/p&gt;

&lt;p&gt;Assuming your configuration files are in the current directory, you can run terraform plan by opening a terminal or command prompt, navigating to the directory where your configuration files are located, and running terraform plan from there.&lt;/p&gt;
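&lt;p&gt;A common refinement is to save the plan to a file so that apply executes exactly the plan you reviewed (the file name tfplan is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -out=tfplan
terraform apply tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;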

&lt;h2&gt;
  
  
  Terraform apply
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8QSHGpmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AmGOYgg8-o5tbZQ_ujD0tzQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8QSHGpmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AmGOYgg8-o5tbZQ_ujD0tzQ.png" alt="" width="880" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you run the terraform apply command, Terraform will compare the current state of your infrastructure, as defined by your Terraform configuration files (in this case, your main.tf file), with the desired state described in the configuration.&lt;/p&gt;

&lt;p&gt;Terraform will then determine what changes need to be made to the current infrastructure to bring it into the desired state, and prompt you to approve these changes before applying them.&lt;/p&gt;

&lt;p&gt;If you approve the changes, Terraform will make the necessary API calls to create, update, or delete resources as needed.&lt;/p&gt;

&lt;p&gt;Once the apply is complete, Terraform will output any relevant information, such as the IDs of the resources it created, and update the state file with the new state of the infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring your Kubectl
&lt;/h2&gt;

&lt;p&gt;Once your cluster has been created using Terraform, you’ll need to configure kubectl to interact with it. To do this, you'll need to open the outputs.tf file to review the output values. In particular, you'll use the region and cluster_name outputs from this file to configure kubectl.&lt;/p&gt;

&lt;p&gt;Run the following command to retrieve the access credentials for your cluster and configure kubectl.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks --region $(terraform output -raw region) update-kubeconfig \
    --name $(terraform output -raw cluster_name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Mxk5PPh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AsKch9QWUjSOz8DE3R774Eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Mxk5PPh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AsKch9QWUjSOz8DE3R774Eg.png" alt="" width="880" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now use your kubectl to manage your cluster and deploy Kubernetes configurations to it.&lt;/p&gt;

&lt;p&gt;You can then verify that the cluster is configured correctly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cluster-info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kDLsb2ye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A7A5qNNcZ8zduAYVwudgfWg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kDLsb2ye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A7A5qNNcZ8zduAYVwudgfWg.png" alt="" width="678" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the Kubernetes control plane location matches the cluster_endpoint value from the terraform apply output above.&lt;/p&gt;

&lt;p&gt;You can then verify that the worker nodes have joined the cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1MbcGj25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Au-7vZ7Zoxtqin8zR2PMQKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1MbcGj25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Au-7vZ7Zoxtqin8zR2PMQKw.png" alt="" width="880" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Destroy
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Destroy the resources you created in this tutorial to avoid incurring extra charges. Respond yes to the prompt to confirm the operation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PDSfXMBw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AvxWUEoTC0P9SzP9OAlTNeQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PDSfXMBw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AvxWUEoTC0P9SzP9OAlTNeQ.png" alt="" width="880" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That was easy: in just a few steps you have a cluster up and running. Check out my other blogs and follow me on Twitter and LinkedIn for more content. You can also open the EKS console in AWS to confirm that your cluster is running.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Deploy Prometheus Operator in Your Kubernetes Cluster EKS</title>
      <dc:creator>Ian Kiprotich</dc:creator>
      <pubDate>Mon, 13 Mar 2023 21:26:46 +0000</pubDate>
      <link>https://dev.to/onai254/how-to-deploy-prometheus-operator-in-your-kubernetes-cluster-eks-4p8n</link>
      <guid>https://dev.to/onai254/how-to-deploy-prometheus-operator-in-your-kubernetes-cluster-eks-4p8n</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pBwyXrt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ASeCh-y0mbiDUHzeJ7_WtSQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pBwyXrt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ASeCh-y0mbiDUHzeJ7_WtSQ.png" alt="" width="556" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this blog we will deploy the Prometheus Operator in our EKS cluster and view the different dashboards it ships with, such as Grafana and Prometheus&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Prerequisites&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A running EKS cluster&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Prometheus Operator&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's first define what Prometheus Operator is. It is an open-source project that simplifies the deployment and management of Prometheus and related monitoring components on Kubernetes.&lt;/p&gt;

&lt;p&gt;It simplifies the deployment and management of Prometheus-based monitoring by automating tasks such as setting up monitoring targets, configuring alerting rules, and scaling the Prometheus servers.&lt;/p&gt;

&lt;p&gt;Prometheus Operator is widely used in Kubernetes clusters to monitor applications and infrastructure. It is particularly useful for microservices-based architectures where applications are deployed as Kubernetes services or deployments.&lt;/p&gt;

&lt;p&gt;Simply put: the core benefit of Prometheus Operator is simple and scalable deployment of a full Prometheus monitoring stack.&lt;/p&gt;

&lt;p&gt;Traditionally, without Prometheus Operator customization and configuration of Prometheus is complex. With Prometheus Operator, K8s custom resources allow for easy customization of Prometheus, Alertmanager, and other components. Additionally, the Prometheus Operator Helm Chart includes all the dependencies required to stand up a full monitoring stack.&lt;/p&gt;

&lt;p&gt;Prometheus Operator allows users to easily configure and manage Prometheus and related monitoring tools, including Grafana dashboards, Alertmanager, and exporters.&lt;/p&gt;

&lt;p&gt;It provides a set of Kubernetes custom resources, such as Prometheus, Alertmanager, and ServiceMonitor, which can be used to configure and manage Prometheus instances in a Kubernetes environment.&lt;/p&gt;

&lt;p&gt;Here are some of the benefits of using the Prometheus Operator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified configuration:&lt;/strong&gt; With the Prometheus Operator, you can define and manage Prometheus instances using Kubernetes custom resources, which are easier to manage and maintain than traditional YAML manifests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated management:&lt;/strong&gt; The Prometheus Operator automates many of the tasks involved in deploying and managing Prometheus instances, such as scaling, rolling upgrades, and configuration changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced security:&lt;/strong&gt; The Prometheus Operator provides built-in security features, such as authentication and authorization, that can help you secure your Prometheus instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extensible monitoring:&lt;/strong&gt; The Prometheus Operator makes it easy to extend the monitoring capabilities of Prometheus by integrating with other monitoring tools and services, such as Grafana and Kubernetes metrics.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To use the Prometheus Operator, you need to install it in your Kubernetes cluster and then create custom resources to define and configure your Prometheus instances.&lt;/p&gt;

&lt;p&gt;Here are the basic steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install the Prometheus Operator: You can install the Prometheus Operator using the helm package manager or by deploying the Kubernetes manifests directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define a Prometheus resource: Use the Prometheus custom resource to define your Prometheus instance's configuration, such as the data retention period, alerting rules, and scrape targets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define an Alertmanager resource: Use the Alertmanager custom resource to define the configuration for your Alertmanager instance, such as the receivers and notification channels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define a ServiceMonitor resource: Use the ServiceMonitor custom resource to define the monitoring configuration for your applications running in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy and manage: The Prometheus Operator will create and manage your Prometheus instances and related components based on the custom resources you define.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By using the Prometheus Operator, you can simplify the deployment and management of Prometheus instances in Kubernetes and take advantage of Kubernetes’ native features for monitoring and scaling.&lt;/p&gt;
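&lt;p&gt;As a concrete illustration of step 2 above, a minimal Prometheus custom resource might look like the following sketch (the name, retention period, service account, and label selector are all illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: my-prometheus
spec:
  replicas: 2
  retention: 15d
  serviceAccountName: prometheus
  # pick up every ServiceMonitor carrying this label
  serviceMonitorSelector:
    matchLabels:
      team: my-team
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;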

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CU0txLkq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQDxoKWAQS4KfiiVXxiuR-A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CU0txLkq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQDxoKWAQS4KfiiVXxiuR-A.png" alt="" width="621" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Prometheus Operator CRD
&lt;/h2&gt;

&lt;p&gt;Prometheus Operator orchestrates Prometheus, Alertmanager and other monitoring resources by acting on a set of Kubernetes Custom Resource Definitions (CRDs).&lt;/p&gt;

&lt;p&gt;Understanding what each of these CRDs does will allow you to better optimize your monitoring stack. The supported CRDs are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prometheus-&lt;/strong&gt; Defines the desired state of a Prometheus Deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Alertmanager-&lt;/strong&gt; Defines the desired state of an Alertmanager Deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ThanosRuler-&lt;/strong&gt; Defines the desired state of a ThanosRuler Deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ServiceMonitor-&lt;/strong&gt; Declaratively specifies how groups of Kubernetes services should be monitored. Relevant Prometheus scrape configuration is automatically generated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PodMonitor-&lt;/strong&gt; Declaratively specifies how groups of Kubernetes pods should be monitored. Relevant Prometheus scrape configuration is automatically generated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Probe-&lt;/strong&gt; Declaratively specifies how ingresses or static targets should be monitored. Relevant Prometheus scrape configuration is automatically generated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PrometheusRule-&lt;/strong&gt; Defines the desired state of Prometheus alerting and/or recording rules. The relevant Prometheus rules file is automatically generated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AlertmanagerConfig-&lt;/strong&gt; Declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers, and setting inhibit rules.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Installing Prometheus Operator using Helm&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To kickstart your monitoring stack, the initial step would be to deploy Prometheus Operator and its corresponding Custom Resource Definitions (CRDs) in your Kubernetes cluster. In order to establish a complete monitoring stack, Prometheus necessitates the deployment of Grafana, node-exporter, and kube-state-metrics. The good news is that all of these essential components are included as dependency charts in the Prometheus Operator Helm Chart, which facilitates their automatic installation and integration with Prometheus Operator.&lt;/p&gt;

&lt;p&gt;Before installing the Prometheus Operator, make sure your cluster is up and running. The Operator needs more resources than a default local cluster provides, so if you are trying this out locally on minikube rather than on EKS, start the cluster with extra CPU and memory using the --cpus and --memory flags:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --cpus 4 --memory 4096
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command creates a cluster with 4 CPUs and 4096 MB of memory.&lt;/p&gt;
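&lt;p&gt;If you have not yet added the prometheus-community chart repository to Helm, add it and refresh the local index first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;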

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm install prom-operator-run prometheus-community/kube-prometheus-stack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Give it a minute or so to install all the Prometheus components. After that, run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fJ0znxk5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2510/1%2ASflLIMtaOZu7Ra9CM_GXAA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fJ0znxk5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2510/1%2ASflLIMtaOZu7Ra9CM_GXAA.png" alt="" width="880" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should be able to see the different components installed by Helm, from the StatefulSets down to the individual pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v83vtEv6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2506/1%2AHE5JikZCN7_s0OtAnAIovQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v83vtEv6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2506/1%2AHE5JikZCN7_s0OtAnAIovQ.png" alt="" width="880" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the above, we see the different pods created by the Prometheus Operator. The Operator also creates other resources in our cluster; you can view them all with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Configure port-forwarding for Grafana&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Since we have already installed the different Prometheus components using the Prometheus Operator, we can now view the Grafana dashboard in our cluster by port-forwarding, because the Grafana service is only exposed as a ClusterIP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IoCFx6rf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2786/1%2APUTuwB8ugYUY2T-TIhpTCA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IoCFx6rf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2786/1%2APUTuwB8ugYUY2T-TIhpTCA.png" alt="" width="880" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The command for accessing the Grafana UI interface is&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/prom-operator-run-grafana 3000:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Open the browser and go to localhost:3000. The Grafana login prompt is displayed; the default credentials are:&lt;/p&gt;

&lt;p&gt;Username = admin&lt;/p&gt;

&lt;p&gt;Password = prom-operator&lt;/p&gt;

&lt;p&gt;You can also change the password after logging in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y7NDH2pF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2824/1%2A6A4LnF6wFNEpPAUA_PvFXQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y7NDH2pF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2824/1%2A6A4LnF6wFNEpPAUA_PvFXQ.png" alt="" width="880" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Grafana up and running, we can view the different data scraped by Prometheus, edit the dashboards, and so on.&lt;/p&gt;

&lt;p&gt;You can also view the Prometheus UI by port-forwarding its service.&lt;/p&gt;
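&lt;p&gt;For example, the Operator creates a service named prometheus-operated in front of the Prometheus pods, so the UI can be reached on Prometheus's default port 9090:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/prometheus-operated 9090:9090
# then open localhost:9090 in the browser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;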

&lt;h2&gt;
  
  
  &lt;strong&gt;Important notes&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Prometheus Operator watches namespaces for the Prometheus/Alertmanager custom resources. In most scenarios it watches all namespaces, but you can configure it to watch a particular namespace as well. This can be changed by using the --namespaces= flag on the Prometheus Operator deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;spec.selector.matchLabels MUST match your app’s name (k8s-app: my-app in our example) for ServiceMonitor to find the corresponding endpoints during deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can access the Prometheus UI by port-forwarding to the Prometheus container or creating a service on top of it. In a production scenario, you should not expose Prometheus, since it should only act as a Grafana data source. However, if you do need to expose it, it is recommended to use an Ingress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure Prometheus instances are configured to store metric data in persistent volumes so that it is preserved in between restarts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
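&lt;p&gt;The matchLabels requirement from note 2 can be sketched as follows, assuming a Service labeled k8s-app: my-app that exposes a metrics port named web (all names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
spec:
  # must match the labels on the target Service
  selector:
    matchLabels:
      k8s-app: my-app
  endpoints:
    - port: web
      interval: 30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;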

&lt;p&gt;Next, we will upload an application in the Kubernetes cluster and be able to monitor it using Prometheus.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>prometheus</category>
      <category>microservices</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Understand Networking in Kubernetes</title>
      <dc:creator>Ian Kiprotich</dc:creator>
      <pubDate>Mon, 13 Mar 2023 21:12:51 +0000</pubDate>
      <link>https://dev.to/onai254/understand-networking-in-kubernetes-3lo7</link>
      <guid>https://dev.to/onai254/understand-networking-in-kubernetes-3lo7</guid>
      <description>&lt;p&gt;Networking in Kubernetes is an important topic to understand, as it enables applications running on a Kubernetes cluster to communicate with each other, as well as with external services.&lt;/p&gt;

&lt;p&gt;In a Kubernetes cluster, each pod has its own unique IP address that is routable within the cluster. This IP address is typically assigned by the Kubernetes network plugin, which is responsible for configuring the networking for the cluster. The network plugin is a key component of the Kubernetes networking architecture, and there are multiple options available, such as Calico, Flannel, and Weave Net.&lt;/p&gt;

&lt;p&gt;In addition to the network plugin, Kubernetes also includes a number of other components that are involved in networking, such as the kube-proxy, which is responsible for load balancing network traffic across multiple pods, and the Service object, which provides a stable IP address and DNS name for a set of pods.&lt;/p&gt;

&lt;p&gt;Understanding how networking works in Kubernetes is important for anyone working with Kubernetes, as it can have a significant impact on the performance and reliability of applications running on the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ingress
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OkZXDhb8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Av3LCY5CI3RuHZZof4Xidbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OkZXDhb8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Av3LCY5CI3RuHZZof4Xidbg.png" alt="Traffic — ingress controller — ingress service — service-pods" width="600" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes, an Ingress is an API object that provides a way to configure external access to services in a cluster. An Ingress resource typically defines a set of rules that specify how incoming traffic should be routed to the appropriate service, based on the incoming request’s host and path. It exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.&lt;/p&gt;

&lt;p&gt;When a request comes into the Kubernetes cluster, it first hits the Ingress controller, which is responsible for managing and routing traffic according to the rules defined in the Ingress resource. The Ingress controller uses a set of rules defined in the Ingress resource to determine where the traffic should be routed. In order for an Ingress to work, you need to run an Ingress controller, e.g. NGINX or the AWS Load Balancer Controller.&lt;/p&gt;

&lt;p&gt;Once the Ingress controller has determined the appropriate service to route the traffic to, it forwards the request to the service’s ClusterIP, which is an internal IP address assigned to the service by the Kubernetes network plugin. The service then uses its own rules to determine which pod(s) to forward the traffic to.&lt;/p&gt;

&lt;p&gt;The traffic then reaches the target pod, where it is handled by the application running inside the pod. The application’s response is then sent back through the same path, via the service and the Ingress controller, to the original request sender.&lt;/p&gt;

&lt;p&gt;It’s important to note that the Ingress resource is only responsible for routing traffic to services within the cluster, and does not handle any authentication or security. Therefore, it is often used in combination with other security measures such as TLS termination or Web Application Firewalls (WAFs) to provide a secure and reliable service.&lt;/p&gt;
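&lt;p&gt;A minimal Ingress resource illustrating the host- and path-based routing described above might look like this (the host, service name, and ingress class are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Requests for app.example.com/api are routed by the controller to the ClusterIP of api-service, which in turn forwards them to its pods.&lt;/p&gt;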

&lt;h2&gt;
  
  
  &lt;strong&gt;DNS — Domain Name System&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---1RZ756s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Azhc42a8-hOcHnvEBc3sxjA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---1RZ756s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Azhc42a8-hOcHnvEBc3sxjA.png" alt="" width="624" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Domain Name System (DNS) is used to provide a way for services and pods to discover and communicate with each other. It is a service that is responsible for translating or resolving a service name to its IP address. Each pod in a Kubernetes cluster is assigned a unique hostname, which is derived from the pod’s name and the namespace it is running in. By default, each hostname is resolvable within the cluster’s DNS namespace.&lt;/p&gt;

&lt;p&gt;When a service is created in Kubernetes, it is assigned a stable DNS name that is used to access the service from other pods and services within the cluster.&lt;/p&gt;

&lt;p&gt;The DNS name is typically in the format servicename.namespace.svc.cluster.local, where servicename is the name of the service, namespace is the Kubernetes namespace in which the service is running, and cluster.local is the default DNS domain for the cluster.&lt;/p&gt;

&lt;p&gt;When a pod wants to communicate with a service, it can simply use the service’s DNS name to connect to it. The Kubernetes DNS service will resolve the DNS name to the corresponding ClusterIP assigned to the service, and route the traffic to the appropriate pod.&lt;/p&gt;
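
&lt;p&gt;For example, from inside any pod in the cluster, a service can be reached by its DNS name alone (the service and namespace names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Resolve the service's DNS name to its ClusterIP
nslookup myservice.mynamespace.svc.cluster.local

# Within the same namespace, the short name is enough
curl http://myservice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;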

&lt;p&gt;In addition to resolving DNS names for services, Kubernetes also supports custom DNS configurations, which allow you to specify additional DNS servers or search domains that should be used when resolving DNS queries. This can be useful if you need to resolve DNS names outside of the Kubernetes cluster, such as for accessing external services or APIs.&lt;/p&gt;
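
&lt;p&gt;Such a custom configuration can be set per pod via the dnsConfig field; a minimal sketch, in which the extra nameserver address and search domain are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
  dnsPolicy: ClusterFirst
  dnsConfig:
    nameservers:
    - 10.0.0.10        # additional DNS server (hypothetical)
    searches:
    - example.internal # extra search domain (hypothetical)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;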

&lt;p&gt;Overall, DNS is a critical component of networking in Kubernetes, as it allows services and pods to discover and communicate with each other, and is an essential part of the infrastructure for deploying reliable and scalable applications on a Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;CoreDNS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;CoreDNS is a popular DNS server implementation used in Kubernetes for service discovery and DNS resolution. It is the default DNS server for Kubernetes and ensures pods and services have a Fully Qualified Domain Name (FQDN). CoreDNS is a flexible and extensible DNS server that is designed to be easily integrated into Kubernetes clusters and can be customized to support a wide range of use cases. Without CoreDNS the cluster's communication would cease to work.&lt;/p&gt;

&lt;p&gt;In Kubernetes, CoreDNS is typically deployed as a pod in the cluster and is responsible for resolving DNS queries for services and pods. CoreDNS uses the Kubernetes API to retrieve information about services and pods and automatically generates DNS records for each of them.&lt;/p&gt;

&lt;p&gt;One of the benefits of using CoreDNS in Kubernetes is that it is highly configurable, and can be extended to support custom plugins and DNS providers. For example, you can use CoreDNS plugins to add support for custom DNS zones or to integrate with external DNS providers.&lt;/p&gt;

&lt;p&gt;Another benefit of CoreDNS is that it provides better performance and scalability than the previous default DNS server in Kubernetes, kube-dns. CoreDNS is written in Go and is designed to be lightweight and efficient, which makes it well-suited for handling large volumes of DNS queries in high-traffic Kubernetes environments.&lt;/p&gt;

&lt;p&gt;To use CoreDNS in your Kubernetes cluster, you can deploy it as a pod using a Kubernetes manifest file or Helm chart. Once deployed, you can configure the CoreDNS server to meet your specific needs, such as by adding custom DNS providers, defining custom DNS zones, or integrating with other Kubernetes components such as Ingress or ExternalDNS.&lt;/p&gt;
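
&lt;p&gt;CoreDNS is configured through a Corefile, stored in the coredns ConfigMap in the kube-system namespace. A sketch of forwarding a custom zone to an external resolver (the zone name and resolver address are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Excerpt from a CoreDNS Corefile (kube-system/coredns ConfigMap)
example.internal:53 {
    forward . 10.0.0.53   # send queries for this zone to an external DNS server
}
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;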

&lt;p&gt;Overall, CoreDNS is a powerful and flexible DNS server implementation that is well-suited for use in Kubernetes clusters and provides a solid foundation for service discovery and DNS resolution in modern cloud-native applications.&lt;/p&gt;

&lt;p&gt;FQDN — a Fully Qualified Domain Name is a domain name that specifies its exact location in the DNS tree hierarchy (also known as an absolute domain name); providing FQDNs for services and pods is a core function of CoreDNS.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Probes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, probes are used to determine the health of a container running in a pod. Probes are a critical part of Kubernetes’ self-healing and auto-scaling capabilities, as they provide a way for the cluster to automatically detect and recover from unhealthy containers. Probes are used to detect the state of a container.&lt;/p&gt;

&lt;p&gt;There are three types of probes in Kubernetes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Liveness Probe&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This type of probe is used to determine if a container is still running and is healthy. The liveness probe sends periodic requests to the container, and if the container fails to respond or responds with an error, the probe will mark the container as unhealthy and trigger a restart.&lt;/p&gt;

&lt;p&gt;A liveness probe can catch a deadlock, where an application is running but unable to make progress, and restart the container; this can make the application more available despite such bugs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Readiness Probe&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This type of probe is used to determine if a container is ready to start receiving traffic. The readiness probe sends periodic requests to the container, and if the container responds with a success code, it will be marked as ready to receive traffic. If the container fails to respond or responds with an error, it will be marked as not ready, and will not receive any traffic until it becomes ready again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Startup Probe&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This type of probe is used to determine if a container is in the process of starting up. The startup probe sends periodic requests to the container, and if the container responds with a success code, it will be marked as ready to receive traffic. If the container fails to respond or responds with an error, the startup probe will keep sending requests until the container becomes ready, or until a configurable timeout is reached.&lt;/p&gt;

&lt;p&gt;Probes in Kubernetes are defined in the pod’s spec section, using YAML configuration. Each probe is defined with a set of parameters, such as the type of probe, the endpoint to probe, the probe timeout, the probe period, and the success and failure thresholds.&lt;/p&gt;
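
&lt;p&gt;A sketch of how all three probes look in a pod spec (the image, ports, paths, and timings are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp
    image: myapp:latest      # hypothetical image
    ports:
    - containerPort: 8080
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30   # allow up to 30 x 10s for slow startup
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 15      # restart the container if this fails
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5       # remove from Service endpoints if this fails
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;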

&lt;p&gt;Overall, probes are a powerful feature in Kubernetes that enable containers to be automatically monitored and restarted in the event of failures or unresponsiveness, which helps to improve the reliability and availability of applications running on Kubernetes clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Netfilter&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, Netfilter is used to implement network policies, which are used to control the flow of network traffic between pods in a cluster. Network policies are Kubernetes objects that define rules for how traffic is allowed to flow between pods, and they use Netfilter rules to enforce those policies.&lt;/p&gt;

&lt;p&gt;When a network policy is applied to a Kubernetes cluster, the Kubernetes API server communicates with the network plugin, which is responsible for configuring the networking rules in the underlying network infrastructure. The network plugin, in turn, generates Netfilter rules to enforce the network policy.&lt;/p&gt;

&lt;p&gt;The Netfilter rules generated by the network plugin are based on the selectors specified in the network policy. Selectors are used to identify which pods should be affected by the policy, and they can be based on a wide range of criteria, such as the pod’s labels, namespace, or IP address. The network plugin generates Netfilter rules that match the specified selectors, and then applies the action specified in the policy to the matching packets.&lt;/p&gt;

&lt;p&gt;For example, a network policy might be defined to allow traffic to flow only between pods that have a specific label. The network plugin would generate Netfilter rules to match packets between pods with that label, and then allow those packets to flow through the network. Similarly, a network policy might be defined to deny traffic between two specific pods, in which case the network plugin would generate Netfilter rules to drop packets between those pods.&lt;/p&gt;
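
&lt;p&gt;The label-based example above could be written as a NetworkPolicy like this (the labels and namespace are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-only
  namespace: mynamespace
spec:
  podSelector:
    matchLabels:
      app: myapp         # the policy applies to pods with this label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp     # only allow traffic from other myapp pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;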

&lt;p&gt;Overall, Netfilter is a critical component of the network policy implementation in Kubernetes, as it allows for granular control over the flow of network traffic between pods in a cluster, and provides a powerful mechanism for enforcing security and access control policies.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;IPTables&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;IPTables is a Linux-based firewall tool that allows users to configure and manage network filtering rules. In Kubernetes, IPTables are used to implement network policies, which control the flow of traffic between pods and services.&lt;/p&gt;

&lt;p&gt;When a network policy is created in Kubernetes, the kube-proxy component creates IPTables rules to enforce the policy. These rules are applied to network traffic as it passes through the node where the pod or service is located.&lt;/p&gt;

&lt;p&gt;The IPTables rules generated by Kubernetes are based on the network policy’s selectors and rules. Selectors identify which pods the policy applies to, while rules define what traffic should be allowed or denied. For example, a network policy could be created that only allows traffic to specific ports on pods with a certain label.&lt;/p&gt;

&lt;p&gt;The IPTables rules generated by Kubernetes are inserted into the kernel’s IPTables chains, which determine how network traffic is processed. These chains are evaluated in a specific order, with the first matching rule determining the action taken on the packet.&lt;/p&gt;

&lt;p&gt;Kubernetes also uses IPTables to implement Kubernetes Services, which provide a stable IP address and DNS name for accessing a set of pods. When a Service is created in Kubernetes, kube-proxy creates an IPTables rule to forward traffic to the appropriate pod based on the Service’s selector.&lt;/p&gt;

&lt;p&gt;Overall, IPTables are an important tool for implementing network policies and Services in Kubernetes, as it allows for fine-grained control over the flow of network traffic, and provides a reliable and scalable mechanism for load balancing and service discovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;IPVS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;IPVS (IP Virtual Server) is a Linux kernel module that provides network load-balancing capabilities. In Kubernetes, IPVS is used as an alternative to kube-proxy and IPTables for implementing Services.&lt;/p&gt;

&lt;p&gt;When kube-proxy runs in IPVS mode, it creates an IPVS virtual server for each Service’s virtual IP (VIP), such as its ClusterIP. The VIP is used as the target address for client traffic and is associated with a set of pods that provide the actual service.&lt;/p&gt;

&lt;p&gt;IPVS works by intercepting incoming traffic to the VIP and distributing it among the available pods using a load-balancing algorithm. There are several load-balancing algorithms available in IPVS, including round-robin, least-connection, and weighted least-connection.&lt;/p&gt;
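
&lt;p&gt;kube-proxy can be switched to IPVS mode and given a scheduling algorithm through its configuration; a minimal sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; alternatives include lc (least-connection)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;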

&lt;p&gt;IPVS also provides health checks to ensure that traffic is only sent to healthy pods. When a pod fails a health check, IPVS removes it from the list of available pods and redistributes traffic among the remaining healthy pods.&lt;/p&gt;

&lt;p&gt;IPVS has several advantages over kube-proxy and IPTables, including better scalability and performance, and more flexible load-balancing algorithms. IPVS can handle large numbers of connections and is optimized for high throughput and low latency. It also supports more advanced load-balancing features, such as session persistence and connection draining.&lt;/p&gt;

&lt;p&gt;However, IPVS requires additional configuration and setup compared to kube-proxy and IPTables, and may not be compatible with all network environments. IPVS also requires kernel support and may not be available on all Linux distributions.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Proxy&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A proxy is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubectl Proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kubectl Proxy is a command-line tool that enables a user to create a secure tunnel between their local machine and a Kubernetes API server. This allows the user to access the Kubernetes API server without the need for direct network access or complex authentication configurations. Kubectl Proxy is used for various purposes, such as accessing the Kubernetes Dashboard or using kubectl commands against a remote cluster.&lt;/p&gt;

&lt;p&gt;For example, suppose a user wants to access the Kubernetes Dashboard running on a remote cluster. They can use Kubectl Proxy to create a secure tunnel and then access the Dashboard through a local web browser.&lt;/p&gt;
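
&lt;p&gt;A sketch of that workflow (the Dashboard URL assumes the standard kubernetes-dashboard deployment):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Open a secure tunnel to the API server on localhost:8001
kubectl proxy --port=8001

# Then browse to the Dashboard through the proxy:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;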

&lt;p&gt;&lt;strong&gt;Kube-Proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, Kube-Proxy is a component that runs on each node in a Kubernetes cluster and is responsible for implementing Kubernetes Services. Kube-Proxy listens for changes to Services and then updates the local IPTables or IPVS rules accordingly. This ensures that traffic is correctly routed to the appropriate pods in the cluster.&lt;/p&gt;

&lt;p&gt;For example, suppose a Service is created in Kubernetes that maps to a set of pods with the label “app=myapp”. Kube-Proxy will create IPTables or IPVS rules that direct traffic to the appropriate pod based on the Service’s selector.&lt;/p&gt;
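
&lt;p&gt;The Service from that example might be declared as follows (the name and ports are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp       # kube-proxy routes traffic to pods matching this label
  ports:
  - port: 80         # port exposed on the Service's ClusterIP
    targetPort: 8080 # port the pods actually listen on
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;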

&lt;p&gt;Both Kubectl Proxy and Kube-Proxy have benefits and limitations. Kubectl Proxy is simple to set up and provides secure access to the Kubernetes API server, but it can be slow and may not be suitable for production environments.&lt;/p&gt;

&lt;p&gt;Kube-Proxy is reliable and scalable, but it can be complex to configure and may not be suitable for all network environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Envoy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to Kube-Proxy, another popular proxy used in Kubernetes is Envoy. Envoy is a high-performance proxy that provides advanced traffic management and load-balancing capabilities. Envoy can be used as a replacement for Kube-Proxy to implement Kubernetes Services or can be used as an independent component to provide advanced traffic management features.&lt;/p&gt;

&lt;p&gt;Envoy is used in many production environments and can provide benefits such as advanced load-balancing algorithms, circuit breaking, and distributed tracing.&lt;/p&gt;

&lt;p&gt;However, Envoy requires additional setup and configuration compared to Kube-Proxy, and may not be compatible with all network environments. Additionally, Envoy is generally used in more complex scenarios, such as multi-cluster or multi-cloud environments, and may be overkill for simpler use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Container Networking Interface&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Container Networking Interface (CNI) is a specification and set of tools for configuring networking in containerized environments, such as those provided by Kubernetes. The goal of CNI is to provide a common standard for network plugins so that container runtimes and orchestration systems can work with any networking solution that supports the CNI API.&lt;/p&gt;

&lt;p&gt;CNI defines a standard way for container runtimes, such as Docker or CRI-O, to call networking plugins to configure the network interfaces of containers. The plugins are responsible for creating and configuring network interfaces for the containers, as well as configuring the network namespace and routing tables.&lt;/p&gt;

&lt;p&gt;In Kubernetes, CNI is used by the kubelet to configure the network interfaces of pods. When a pod is created, the kubelet invokes the CNI plugin to configure the pod’s network. The CNI plugin then creates and configures the network interfaces for the pod, sets up any necessary routing rules, and adds the pod’s IP address to the appropriate network namespace.&lt;/p&gt;

&lt;p&gt;CNI plugins can be either built into the container runtime or provided as standalone binaries. There are many CNI plugins available, each with its own strengths and weaknesses. Some popular CNI plugins include Calico, Flannel, and Weave Net.&lt;/p&gt;

&lt;p&gt;The use of CNI provides several benefits in containerized environments. First, it allows for a common standard that can be used by multiple container runtimes and orchestration systems. This means that network plugins can be developed independently of the container runtime or orchestration system, which promotes flexibility and compatibility.&lt;/p&gt;

&lt;p&gt;Second, CNI provides a modular and extensible architecture that allows for easy integration with other networking solutions. This enables users to choose the best networking solution for their specific use case and avoid vendor lock-in.&lt;/p&gt;

&lt;p&gt;Finally, CNI provides a simple and flexible API for configuring container networking, which makes it easy for developers to create and deploy custom networking solutions tailored to their needs.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>How to Monitor your Application using Prometheus</title>
      <dc:creator>Ian Kiprotich</dc:creator>
      <pubDate>Mon, 13 Mar 2023 21:04:27 +0000</pubDate>
      <link>https://dev.to/onai254/how-to-monitor-your-application-using-prometheus-2blj</link>
      <guid>https://dev.to/onai254/how-to-monitor-your-application-using-prometheus-2blj</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In this Blog, we will be able to deploy our application in an EKS cluster and monitor it with Prometheus&lt;br&gt;
 In the &lt;a href="https://medium.com/dev-genius/step-by-step-guide-to-setting-up-prometheus-operator-in-your-kubernetes-cluster-7167a8228877"&gt;previous&lt;/a&gt; blog, we set up a Prometheus Operator to help us monitor our applications. Now, we’ll deploy our application and use Prometheus to monitor it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The prerequisites you need to achieve this goal are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You should have an EKS cluster running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your application yaml files&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before we deploy the application, let's cover the groundwork, starting with the ServiceMonitor resource.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;ServiceMonitor&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ServiceMonitor is a Kubernetes object that specifies how Prometheus should monitor a set of targets that belong to a Kubernetes service.&lt;/p&gt;

&lt;p&gt;For example, suppose we have a Kubernetes service called “myapp” that has several replicas running across different pods. We want to monitor these replicas using Prometheus. To do this, we can create a ServiceMonitor object in Kubernetes that specifies the labels and endpoints for the pods running the “myapp” service.&lt;/p&gt;

&lt;p&gt;Here’s an example ServiceMonitor YAML file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-monitor
  namespace: mynamespace
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: web
    interval: 30s
    path: /metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this example, we define a ServiceMonitor called “myapp-monitor” in the “mynamespace” namespace. We specify that we want to monitor the pods that have the label “app=myapp”. We also specify that we want to scrape the “web” port on these pods every 30 seconds and that the metrics can be found at the “/metrics” endpoint.&lt;/p&gt;

&lt;p&gt;Once we apply this ServiceMonitor YAML file to Kubernetes, Prometheus will automatically discover and monitor the targets that belong to the “myapp” service based on the label selector defined in the ServiceMonitor.&lt;/p&gt;

&lt;p&gt;Overall, ServiceMonitors provide an easy way to configure Prometheus to scrape metrics from multiple pods that belong to a single Kubernetes service, allowing for more efficient and scalable monitoring of Kubernetes-based applications.&lt;/p&gt;

&lt;p&gt;Since we have already configured the Prometheus Operator in the cluster and viewed the Grafana dashboard, let us now view the Prometheus dashboard and see all the metrics being scraped by Prometheus.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prometheus Dashboard
&lt;/h2&gt;

&lt;p&gt;To display the Prometheus dashboard, we need to port-forward the Prometheus service.&lt;/p&gt;

&lt;p&gt;First, get the service name and the port it is running on&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---KcB9Pxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2812/1%2AzKF1ppPvtek7wN9OP2d6EQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---KcB9Pxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2812/1%2AzKF1ppPvtek7wN9OP2d6EQ.png" alt="services running" width="880" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then expose the port outside using port-forward&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/prometheus-operated 9090 -n monitor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BeeS9Px7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2846/1%2AQsyHfYZhQUe7a0Ey4pXmtQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BeeS9Px7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2846/1%2AQsyHfYZhQUe7a0Ey4pXmtQ.png" alt="port-forward" width="880" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The address is displayed; open it in a browser to check out Prometheus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kBfmk8uj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2830/1%2AiijBFB3CbcVK-RTjyoUHQw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kBfmk8uj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2830/1%2AiijBFB3CbcVK-RTjyoUHQw.png" alt="Prometheus dashboard" width="880" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the dashboard, we can check on the service discovery that Prometheus has discovered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gv0OceN5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2714/1%2AzEItAOZvaeAX9LFiMqrgeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gv0OceN5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2714/1%2AzEItAOZvaeAX9LFiMqrgeg.png" alt="" width="880" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check on the targets that Prometheus is scraping with the /metrics endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_t3hpjKI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2714/1%2Ameb32sB8Q8mGY_mYtX-eEQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_t3hpjKI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2714/1%2Ameb32sB8Q8mGY_mYtX-eEQ.png" alt="" width="880" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our demo, we will deploy a microservices architecture and scrape its metrics using Prometheus. But before we deploy our application, it is important to know what exporters are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exporters
&lt;/h2&gt;

&lt;h2&gt;
  
  
  What are exporters in Prometheus?
&lt;/h2&gt;

&lt;p&gt;Exporters are software components that expose metrics in a format that can be understood by Prometheus. They act as intermediaries between Prometheus and the systems and applications that are being monitored, and they provide a standard way to collect metrics from a wide variety of sources.&lt;/p&gt;

&lt;p&gt;Exporters act as translators, turning application metrics into data that Prometheus can understand. An exporter exposes its own /metrics endpoint so that Prometheus can scrape from there. Exporters collect metrics data from applications and can run as a separate deployment in the cluster.&lt;/p&gt;

&lt;p&gt;In the context of Kubernetes, exporters are typically deployed as sidecar containers in the same pod as the application or system component they are monitoring. The exporter is responsible for collecting metrics from the system or application and exposing them over HTTP or some other protocol, so that Prometheus can scrape them.&lt;/p&gt;

&lt;p&gt;There are many different exporters available for Prometheus, covering a wide range of systems and applications. Some of the most common exporters include the Node Exporter (for collecting metrics from Linux servers), the Prometheus MySQL Exporter (for monitoring MySQL databases), and the Prometheus Blackbox Exporter (for monitoring network services).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to use exporters in Kubernetes with Prometheus&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To use exporters in Kubernetes with Prometheus, you will typically follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Identify the metrics you want to monitor, you will need to identify the metrics you want to monitor from your systems and applications. This will depend on the specific use case and the systems being monitored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the appropriate exporter: Once you have identified the metrics you want to monitor, you will need to install the appropriate exporter for each system or application. This can be done using either the Helm chart or YAML manifests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure Prometheus to scrape the exporter: Finally, you will need to configure Prometheus to scrape the metrics from the exporter. This is typically done by adding a scrape configuration to the prometheus.yaml file, specifying the URL of the exporter endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
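
&lt;p&gt;Step 3 might look like this in a static prometheus.yaml scrape configuration (the job name, target, and port are hypothetical; with the Prometheus Operator, a ServiceMonitor replaces this file-based configuration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scrape_configs:
  - job_name: myapp-exporter
    scrape_interval: 30s
    metrics_path: /metrics
    static_configs:
      - targets: ['myapp-exporter:9121']   # exporter service and port (hypothetical)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;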

&lt;p&gt;For example, if you want to monitor the CPU and memory usage of your cluster nodes, you can deploy the Node Exporter (typically as a DaemonSet, so one instance runs on each node) and configure Prometheus to scrape the metrics from the Node Exporter’s endpoint. Once this is done, Prometheus will start collecting the metrics and storing them in its time-series database, where they can be analyzed and visualized using Prometheus’s query language and dashboarding tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why are exporters useful in Prometheus?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Exporters are useful in Prometheus for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Support for diverse systems and applications&lt;/strong&gt;: Exporters provide a standard way to collect metrics from a wide variety of systems and applications, making it easy to monitor different components of a Kubernetes environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy deployment and configuration&lt;/strong&gt;: Exporters can be deployed as sidecar containers in Kubernetes pods, making it easy to configure and manage them alongside the systems and applications they are monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistent metric format:&lt;/strong&gt; Exporters expose metrics in a consistent format that can be understood by Prometheus, making it easy to write queries and build dashboards that work across different systems and applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High performance:&lt;/strong&gt; Exporters are typically designed to be highly performant and lightweight, so that they can collect metrics with minimal overhead and without impacting the performance of the systems they are monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, exporters are a powerful and flexible feature of Prometheus that makes it easy to monitor diverse systems and applications in Kubernetes environments. By using exporters, you can collect detailed metrics on the health and performance of your systems, and use this data to optimize and troubleshoot them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Hands-on Demo&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the previous demo, we deployed Prometheus in our cluster and exposed the ports for Grafana and Prometheus. Next, let's install the application: for this demo we will clone a Git repository for a microservices application developed by Google.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/GoogleCloudPlatform/microservices-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After deploying the application, we can view its different components&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BusWVQhV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2776/1%2AyHmPsp0gbGPZhqBj4VTMUw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BusWVQhV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2776/1%2AyHmPsp0gbGPZhqBj4VTMUw.png" alt="Demo app pods running" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also view the application via port-forwarding, or by using an Ingress or a load balancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---OreWRNM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2748/1%2ADN-QTdsvlpAzT-X5Zx-E_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---OreWRNM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2748/1%2ADN-QTdsvlpAzT-X5Zx-E_w.png" alt="Demo application Frontend" width="880" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we will install the exporter, starting with the Redis exporter. On the Prometheus exporters page, we can search for and find the Redis exporter. Exporters are also available as Docker images, which you can find on Docker Hub.&lt;/p&gt;

&lt;p&gt;Components you need when deploying an exporter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The exporter application itself, as a Docker image that exposes the /metrics endpoint&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Service so that Prometheus can connect to the exporter&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A ServiceMonitor to tell Prometheus that a new service has been created and that it should start scraping it&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
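&lt;p&gt;To make the ServiceMonitor component concrete, here is a minimal sketch; all names and labels are illustrative, and the Helm chart we install below generates an equivalent object for us:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-exporter
  labels:
    release: prometheus   # must match the Prometheus operator's selector
spec:
  selector:
    matchLabels:
      app: redis-exporter # the exporter's Service labels
  endpoints:
  - port: metrics
    path: /metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;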

&lt;p&gt;The simplest way to install the Redis exporter is to use a ready-made Helm chart that is already configured to install the exporter.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command adds the prometheus-community Helm repository, although we had already added it when installing Prometheus. Next, let's install the Redis exporter chart.&lt;/p&gt;
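&lt;p&gt;Since the repository was already added, it is worth refreshing the local chart index so the latest chart version is found:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
helm search repo prometheus-community/prometheus-redis-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;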

&lt;p&gt;But before we deploy the exporter there are some values that we need to change in the chart. Let's create a values.yaml file that we will pass to helm when installing the Redis exporter.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redisAddress: redis://redis-cart:6379
# the Redis service we will monitor and its port

serviceMonitor:
  additionalLabels:
    release: prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save the file, then run the helm install command, giving the exporter a release name and passing the values file with -f values.yaml.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install redis-exporter prometheus-community/prometheus-redis-exporter -f values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can then list the ServiceMonitors running in the cluster to confirm that the ServiceMonitor was installed correctly.&lt;/p&gt;
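&lt;p&gt;A quick check might look like this (the label value prometheus matches the release label set in values.yaml above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get servicemonitor
# the exporter's ServiceMonitor should carry the release=prometheus label
kubectl get servicemonitor -l release=prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;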

&lt;p&gt;Tomorrow we will pick up from there: checking the service discovery page and whether Prometheus discovered our ServiceMonitor.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>prometheus</category>
      <category>monitoring</category>
      <category>sre</category>
    </item>
    <item>
      <title>How to Deploy Istio to manage your Network in your Cluster</title>
      <dc:creator>Ian Kiprotich</dc:creator>
      <pubDate>Mon, 13 Mar 2023 20:58:49 +0000</pubDate>
      <link>https://dev.to/onai254/how-to-deploy-istio-to-manage-your-network-in-your-cluster-15a6</link>
      <guid>https://dev.to/onai254/how-to-deploy-istio-to-manage-your-network-in-your-cluster-15a6</guid>
      <description>&lt;p&gt;In our previous blog post, we discussed the installation of &lt;em&gt;istioctl&lt;/em&gt; and provided an overview of the various components of Istio. In this blog post, our focus will be on the installation of Istio in our cluster and the execution of our initial application on Istio.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisite&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In order to follow along with this demo you need the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A cluster, either minikube or one running in the cloud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Istioctl installed and configured&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your application yaml files&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Installing Istio
&lt;/h2&gt;

&lt;p&gt;In order to install Istio in our cluster we can run the command&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;istioctl install 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above command we are installing Istio with the demo profile; we touched on the different Istio profiles and how to use them in our previous blog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ukf0aL2X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2264/1%2AvAfYQjqoGbYlbUo2AYcQcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ukf0aL2X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2264/1%2AvAfYQjqoGbYlbUo2AYcQcw.png" alt="" width="880" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we installed Istio in our cluster it created its own istio-system namespace and installed some pods. We can now view the pods in the istio-system namespace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n istio-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T_wWZNyA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2276/1%2A_iOO9VW7kNzeVJcaXIBGFg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T_wWZNyA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2276/1%2A_iOO9VW7kNzeVJcaXIBGFg.png" alt="" width="880" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also verify all the components that Istio installed into our cluster by running the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;istioctl verify-install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VSYgRZjl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2208/1%2ADUgevNtJtbcn3EwbXadYIg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VSYgRZjl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2208/1%2ADUgevNtJtbcn3EwbXadYIg.png" alt="" width="880" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These components are just a subset of those that will be installed. To proceed with the application installation, we need to add a label so that Istio can discover each microservice in our application. Run the following command to add the label.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label namespace default istio-injection=enabled 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The command is used to modify the label of the default namespace in Kubernetes and enable automatic sidecar injection of the Istio Envoy proxy for all the pods running in the namespace.&lt;/p&gt;

&lt;p&gt;When executed, this command adds the label “istio-injection=enabled” to the default namespace’s metadata. This tells Istio to automatically inject the Envoy sidecar proxy into all pods that are created in the namespace. By default, Istio only injects the Envoy sidecar proxy into pods created in namespaces that carry the “istio-injection=enabled” label.&lt;/p&gt;

&lt;p&gt;Enabling automatic sidecar injection with this command simplifies the process of deploying microservices on the Istio service mesh. Once the default namespace is labeled, you can deploy your microservices as usual and Istio will automatically inject the Envoy sidecar proxy into each pod, allowing Istio to control and monitor network traffic between the microservices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--umAJEwav--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2402/1%2A_Rm4potBxII9m5Qkpk0jBg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--umAJEwav--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2402/1%2A_Rm4potBxII9m5Qkpk0jBg.png" alt="" width="880" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After we have labeled the default namespace we can deploy the application and see what happens. In this example, we will use Istio's demo application to understand more. After deploying the application, check the running pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CUtfWtEy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2076/1%2A7imWgeS9wAUIy-L8QeZofg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CUtfWtEy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2076/1%2A7imWgeS9wAUIy-L8QeZofg.png" alt="" width="880" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the provided example, we can see that there are pods currently active within the default namespace. Each of these pods contains two containers: the first container is responsible for executing the business logic of the pod, while the second container is an envoy proxy that has been automatically deployed on each pod. This automatic deployment was made possible by labeling the default namespace with “istio-injection=enabled”. Should additional pods be added to this namespace, the istio service discovery component will detect their presence and deploy a corresponding envoy container. We can also check this by describing the pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Kiali&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1E1h7c3e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AyCKed__hDwA0LEBFrbDOIw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1E1h7c3e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AyCKed__hDwA0LEBFrbDOIw.png" alt="" width="596" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we will talk about Kiali and other add-ons in Istio. Kiali is a web-based graphical user interface (GUI) that provides observability and visualization features for Istio service mesh. Istio is a powerful service mesh platform that can help manage and secure microservices-based applications in a distributed system. Kiali is a tool that can help you visualize and understand the traffic flow, service dependencies, and health status of your Istio service mesh.&lt;/p&gt;

&lt;p&gt;Kiali provides a number of useful features for Istio users, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service topology visualization:&lt;/strong&gt; Kiali can display a real-time topology map of the services in your Istio service mesh, including their dependencies and traffic flows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Metrics and telemetry&lt;/strong&gt;: Kiali can display detailed metrics and telemetry data for individual services and traffic flows, including traffic rates, latency, and error rates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Distributed tracing&lt;/strong&gt;: Kiali can integrate with distributed tracing systems like Jaeger to provide detailed tracing information for service calls and transactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service health monitoring&lt;/strong&gt;: Kiali can provide real-time health status for individual services and the overall service mesh, including alerts and notifications for service failures and other issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration validation:&lt;/strong&gt; Kiali can validate the configuration of your Istio service mesh, including routing rules, security policies, and other settings, to help ensure that your services are running as expected.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Overall, Kiali is a powerful tool for Istio users who need to monitor and manage complex microservices-based applications in a distributed system. By providing real-time visibility and insights into your Istio service mesh, Kiali can help you diagnose and resolve issues quickly, optimize your system performance, and ensure that your services are running smoothly and securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Kiali
&lt;/h2&gt;

&lt;p&gt;There are multiple methods available for installing Kiali: using Helm charts, or using the addons folder that is downloaded as part of the Istio installation. To continue, we will install Kiali into our cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply samples/addons/kiali.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8UDCyUaY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A9YwiPVsHXnGp6atiJa-QAw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8UDCyUaY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A9YwiPVsHXnGp6atiJa-QAw.png" alt="" width="880" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can verify that Kiali is installed by checking the pods running in the istio-system namespace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n istio-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vHSubHR3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AfPJ8gTy0U84U6wYoAJBiqA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vHSubHR3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AfPJ8gTy0U84U6wYoAJBiqA.png" alt="" width="880" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing Kiali in our cluster we can open the dashboard with the command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;istioctl dashboard kiali
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will automatically open the Kiali dashboard, served on localhost.&lt;/p&gt;
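&lt;p&gt;If you prefer not to use istioctl, the same dashboard can be reached by port-forwarding the Kiali service directly (Kiali serves on port 20001 by default):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/kiali -n istio-system 20001:20001
# then open http://localhost:20001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;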

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4nvkTq9y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2846/1%2AiCykDsew-bvC0W8lp-42hQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4nvkTq9y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2846/1%2AiCykDsew-bvC0W8lp-42hQ.png" alt="" width="880" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After successfully configuring Kiali within our cluster and accessing the Kiali dashboard, we can proceed to direct traffic towards the application and monitor it using the dashboard. To achieve this, we need to create a Gateway that will enable our service mesh to receive traffic from external sources outside the cluster.&lt;/p&gt;

&lt;p&gt;But first, let's get to know some terms and definitions in order to understand traffic management in Istio.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Gateway&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o-4diFoQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AGzo6TcN6k4bV4DbVTb7QYA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o-4diFoQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AGzo6TcN6k4bV4DbVTb7QYA.png" alt="" width="598" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Gateway resource is used to configure load-balancing of traffic entering the service mesh from outside the cluster. It defines a set of protocols, ports, and workload instances to which traffic should be directed.&lt;/p&gt;

&lt;p&gt;The Gateway acts as an entry point for external traffic to enter the service mesh, and it can be configured to perform various tasks, such as TLS termination, header manipulation, or authentication.&lt;/p&gt;

&lt;p&gt;With the help of the Gateway resource, Istio can control and secure the incoming traffic to the service mesh, providing an additional layer of control and visibility over network traffic.&lt;/p&gt;

&lt;p&gt;A Gateway resource in Istio consists of two main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A set of workload instances&lt;/strong&gt; — that represent the backend services that will handle incoming traffic, along with the protocol and port numbers they are listening on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A set of networking configuration&lt;/strong&gt; parameters that define how incoming traffic should be handled, such as TLS encryption settings, routing rules, and load-balancing algorithms.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s an example of how a Gateway is used in Istio:&lt;/p&gt;

&lt;p&gt;Let’s say we have a service mesh consisting of several microservices that are running on Kubernetes pods. These microservices are accessible only within the cluster, and we want to enable external users to access them from the internet. To achieve this, we can create a Gateway resource that defines the external IP address, port number, and TLS settings, and associate it with a set of workload instances representing the backend services.&lt;/p&gt;

&lt;p&gt;For example, let’s say we have a microservice called “product” that runs on Kubernetes pods and listens on port 8080. We can create a Gateway resource that defines an external IP address of 203.0.113.0 and port 80, and associate it with the “product” service. When a user sends a request to &lt;a href="http://203.0.113.0/product"&gt;http://203.0.113.0/product&lt;/a&gt;, the Gateway resource will receive the request and route it to the backend “product” service.&lt;/p&gt;

&lt;p&gt;We can also configure the Gateway resource to use TLS encryption to secure the traffic between the external user and the service mesh. For example, we can configure Istio to use a certificate authority to issue and manage SSL/TLS certificates, and configure the Gateway to use these certificates to encrypt and decrypt the incoming traffic.&lt;/p&gt;
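&lt;p&gt;As a sketch, a TLS-terminating server entry on a Gateway might look like the following; the hostname and the credentialName (a Kubernetes TLS secret in the ingress gateway's namespace) are assumptions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: SIMPLE                     # terminate TLS at the gateway
    credentialName: product-tls-cert # TLS secret holding cert and key
  hosts:
  - "product.example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;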

&lt;p&gt;In summary, a Gateway in Istio is a Kubernetes resource that enables external traffic to enter a service mesh, and it can be used to configure load balancing, routing, and security for incoming traffic.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "product"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The spec field has two main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;selector: A set of labels used to select the gateway workload instances that will handle incoming traffic. In this example, it selects the default ingress gateway using the label istio: ingressgateway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;servers: A list of servers that the gateway will listen on, including the port number, protocol, and hostnames that should be routed to this server. In this example, it defines a server that listens on port 80 using HTTP protocol.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To view the gateway in Kubernetes run the command&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get gateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Virtual Service&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6GFu7gfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AMa6GEEtLgofyywsIYuXGDg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6GFu7gfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AMa6GEEtLgofyywsIYuXGDg.png" alt="" width="551" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A VirtualService is a Kubernetes resource that defines a set of routing rules for incoming traffic to a service mesh. It allows you to control the routing of network traffic within the mesh, such as directing traffic to different versions of a service or sending traffic to a specific subset of a service.&lt;/p&gt;

&lt;p&gt;A VirtualService resource consists of two main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A set of routing rules that define how incoming traffic should be directed to one or more destination services. The routing rules can be based on various criteria such as HTTP headers, URI paths, or traffic weights.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A set of destination services that represent the backend services that will handle incoming traffic.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The VirtualService resource provides a powerful way to control traffic within a service mesh, allowing you to implement advanced traffic management features such as canary releases, A/B testing, blue-green deployments, and fault injection.&lt;/p&gt;

&lt;p&gt;Here’s an example of how a VirtualService can be used in Istio:&lt;/p&gt;

&lt;p&gt;Let’s say we have a service mesh consisting of several versions of a microservice called “product” running on Kubernetes pods. We want to direct incoming traffic to the latest version of the “product” service, while still allowing a small percentage of traffic to be sent to the previous version for canary testing. To achieve this, we can create a VirtualService resource that defines two subsets of the “product” service, representing the latest version and the previous version respectively, and set traffic weights to direct 99% of the traffic to the latest version and 1% of the traffic to the previous version.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product
spec:
  hosts:
  - product-service
  http:
  - route:
    - destination:
        host: product-service
        subset: v1
      weight: 99
    - destination:
        host: product-service
        subset: v2
      weight: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this example, we define a VirtualService resource called “product” that routes traffic to the “product-service”. It defines two routes, one for each subset of the “product-service”. The first route directs 99% of the traffic to the “v1” subset, while the second directs 1% of the traffic to the “v2” subset. This allows us to test the new version of the “product” service with a small percentage of traffic, while still directing most of the traffic to the stable version.&lt;/p&gt;
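&lt;p&gt;Note that the v1 and v2 subsets referenced above must be defined in a companion DestinationRule. A minimal sketch, where the version labels are assumptions that must match the pod labels of the corresponding deployments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: product
spec:
  host: product-service
  subsets:
  - name: v1
    labels:
      version: v1   # pod label on the stable deployment
  - name: v2
    labels:
      version: v2   # pod label on the canary deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;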

&lt;p&gt;In this next example, we will look at a service that is reached through several different URI paths, all routed to a single destination.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfor
spec:
  hosts:
  - bookinfor.app
  gateway:
  - bookinfor-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri
        prefix: /static
    - uri
        exact: /login
    - uri
        exact: /logout
    - uri
        prefix: api/vi/products

   - route:
    - destination:
        host: productpage
        port:
           number: 9080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above YAML file we are routing traffic from several entry points to one destination, the “productpage” service.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Destination Rule&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6GFu7gfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AMa6GEEtLgofyywsIYuXGDg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6GFu7gfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AMa6GEEtLgofyywsIYuXGDg.png" alt="" width="551" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DestinationRule is a Kubernetes resource that defines policies for routing traffic to a specific version of a service. It allows you to configure various aspects of how traffic is directed to a service, such as load-balancing, connection pool settings, TLS settings, and more.&lt;/p&gt;

&lt;p&gt;A DestinationRule resource consists of two main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A set of rules that define how traffic should be routed to the service. This can include subsets of the service, load balancing policies, circuit breaker settings, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A set of policies that define how traffic should be secured when communicating with the service. This can include mTLS (mutual TLS) settings, certificate authorities, and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s an example of how a DestinationRule can be used in Istio:&lt;/p&gt;

&lt;p&gt;Let’s say we have a service mesh consisting of several versions of a microservice called “product” running on Kubernetes pods. We want to configure the load balancing and connection pool settings for the “product” service. To achieve this, we can create a DestinationRule resource that defines the “product” service and sets the load balancing algorithm to round-robin and the maximum number of connections to 100.&lt;/p&gt;

&lt;p&gt;Here’s an example YAML file for a DestinationRule:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: product
spec:
  host: product-service
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    connectionPool:
      http:
        http1MaxPendingRequests: 100
        maxRequestsPerConnection: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this example, we define a DestinationRule resource called “product” that sets the load balancing algorithm for the “product-service” to round-robin using the simple property. We also set the maximum number of pending requests to 100 and the maximum number of requests per connection to 5.&lt;/p&gt;

&lt;p&gt;This allows us to fine-tune the load balancing and connection pool settings for the “product” service, providing better control and management of traffic in the service mesh.&lt;/p&gt;

&lt;p&gt;In summary, a DestinationRule in Istio is a Kubernetes resource that defines policies for routing traffic to a specific version of a service, allowing you to configure various aspects of how traffic is directed to a service, such as load-balancing, connection pool settings, TLS settings, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;TrafficPolicies&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Traffic Policies are rules and configurations that control the behavior of traffic within the service mesh. They provide fine-grained control over how traffic is routed, load balanced, secured, and monitored in the service mesh.&lt;/p&gt;

&lt;p&gt;Traffic policies can be defined for individual services or for groups of services, and they can be specified at various levels in the service mesh, such as virtual services, destination rules, gateways, and more. Some of the common traffic policies in Istio include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load balancing policies:&lt;/strong&gt; These policies control how traffic is distributed among different instances of a service. For example, you can specify a round-robin load-balancing algorithm or a weighted load-balancing algorithm based on the performance and availability of each instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connection pool settings:&lt;/strong&gt; These policies control how many connections are allowed to a service at any given time, and how long they can stay open. This helps prevent connection overload and optimizes resource utilization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traffic routing policies:&lt;/strong&gt; These policies control how traffic is routed to different versions of a service, based on various factors such as HTTP headers, cookies, or client IP addresses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security policies:&lt;/strong&gt; These policies control how traffic is secured within the service mesh. This can include mutual TLS (mTLS) authentication, certificate management, and access control policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traffic monitoring policies&lt;/strong&gt;: These policies control how traffic is monitored and logged in the service mesh. This can include configuring telemetry data collection, setting up tracing and logging, and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By using traffic policies in Istio, you can fine-tune and optimize the behavior of traffic in your service mesh, ensuring better performance, reliability, and security of your microservices-based applications.&lt;/p&gt;

&lt;p&gt;Let's talk briefly about the load balancing policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load Balancing Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Load balancing policies are a key aspect of traffic management in Istio. They determine how traffic is distributed among different instances of a service within the service mesh, ensuring optimal utilization of resources and better performance and availability of microservices-based applications.&lt;/p&gt;

&lt;p&gt;There are several load-balancing policies that can be used in Istio, depending on the specific needs of the application. Some of the common load-balancing policies include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Round-robin&lt;/strong&gt;: In this policy, traffic is evenly distributed among all instances of a service, one after the other, in a cyclic order. This ensures that each instance of the service receives an equal share of the traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Weighted&lt;/strong&gt;: In this policy, traffic is distributed based on the relative weights assigned to each instance of a service. Instances with higher weights receive more traffic than instances with lower weights, allowing you to control the distribution of traffic based on the performance and availability of each instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Locality-based:&lt;/strong&gt; In this policy, traffic is distributed based on the geographic location of the client and the instances of the service. This helps optimize the routing of traffic and reduce latency by directing traffic to the closest instance of the service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Least connections:&lt;/strong&gt; In this policy, traffic is directed to the instance with the least number of active connections, ensuring that each instance of the service is not overloaded with too many connections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Random:&lt;/strong&gt; In this policy, traffic is randomly distributed among all instances of a service, allowing you to achieve a more uniform distribution of traffic.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can specify the load-balancing policy for a service or a group of services in Istio by creating a DestinationRule resource and setting the appropriate algorithm in its traffic policy: the simple field covers round-robin, least-connection, and random load balancing, while weighted traffic distribution is configured separately through route weights in a VirtualService. Once the DestinationRule is created, Istio automatically applies the load-balancing policy to the specified service or group of services.&lt;/p&gt;
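&lt;p&gt;For example, a minimal DestinationRule that applies round-robin load balancing to a hypothetical service named reviews could look like the following sketch (the host name is illustrative; the field names follow the Istio networking API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN   # alternatives: LEAST_CONN, RANDOM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Applying a rule like this with kubectl apply -f is enough for Istio to start distributing traffic to the service's instances in round-robin order.&lt;/p&gt;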

</description>
      <category>kubernetes</category>
      <category>istio</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>How to Deploy ArgoCD in EKS cluster For Continuous Deployment</title>
      <dc:creator>Ian Kiprotich</dc:creator>
      <pubDate>Mon, 13 Mar 2023 20:32:57 +0000</pubDate>
      <link>https://dev.to/onai254/how-to-deploy-argocd-in-eks-cluster-for-continuous-deployment-2e2i</link>
      <guid>https://dev.to/onai254/how-to-deploy-argocd-in-eks-cluster-for-continuous-deployment-2e2i</guid>
      <description>&lt;p&gt;In this blog, we will deploy ArgoCD in an EKS cluster and use it for a continuous deployment process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Have an EKS cluster running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have your microservices' YAML configuration files in a Git repository&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is ArgoCD?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--veWjPr8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AXj91hAG4RlI_w1HhBIPcAw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--veWjPr8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AXj91hAG4RlI_w1HhBIPcAw.png" alt="" width="619" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Argo CD is a declarative continuous delivery tool for Kubernetes applications. It helps automate the deployment of applications to Kubernetes clusters by providing a GitOps approach to managing and deploying applications.&lt;/p&gt;

&lt;p&gt;With Argo CD, you can define the desired state of your application in a Git repository and have it automatically synced with your Kubernetes cluster. This allows you to easily manage your application deployments across multiple environments, such as development, staging, and production.&lt;/p&gt;

&lt;p&gt;Argo CD provides a web-based user interface as well as a command-line interface for managing your application deployments. It also integrates with other tools in the Kubernetes ecosystem, such as Helm charts, Kustomize, and Jsonnet.&lt;/p&gt;

&lt;p&gt;Overall, Argo CD helps simplify the deployment process for Kubernetes applications and provides a reliable, scalable, and secure way to manage your deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo overview steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install ArgoCD in the K8s cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure ArgoCD with the “Application” CRD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the ArgoCD CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test our setup by updating Deployment.yaml files&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's get started by first installing ArgoCD.&lt;/p&gt;

&lt;p&gt;First, create a namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XFkvD2XH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ARub7ZU_wp5S9mGowvrd25g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XFkvD2XH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ARub7ZU_wp5S9mGowvrd25g.png" alt="" width="709" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's apply the YAML configuration files for ArgoCD:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nvsAmEgN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AVgxppR96RFaMcTY_ljoFMw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nvsAmEgN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AVgxppR96RFaMcTY_ljoFMw.png" alt="" width="880" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can view the pods created in the ArgoCD namespace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Kdkd5s9G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AWSKtUR5ey_-iL43IZdtOAg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kdkd5s9G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AWSKtUR5ey_-iL43IZdtOAg.png" alt="" width="880" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we can access the ArgoCD UI by port-forwarding the argocd-server service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/argocd-server 8080:443 -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dKmP3wY0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2748/1%2A1smNocskR-AQIGCUEAIAlA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dKmP3wY0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2748/1%2A1smNocskR-AQIGCUEAIAlA.png" alt="Argocd Dashboard" width="880" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The username is admin&lt;/p&gt;

&lt;p&gt;The password is autogenerated and stored in the secret called argocd-initial-admin-secret in the argocd installation namespace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KHbsfxYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbRIk2rE82QKDicPIfrxwiQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KHbsfxYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbRIk2rE82QKDicPIfrxwiQ.png" alt="" width="880" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can decode the password&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo R215N3hvZ21LSEEwRGlIag== | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
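&lt;p&gt;As a convenience, the retrieval and decoding steps can also be combined into a single command (assuming the default secret name and namespace used above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;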

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PWQvihXl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbhadmXd9E3Bp3PD3cCewvQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PWQvihXl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbhadmXd9E3Bp3PD3cCewvQ.png" alt="" width="869" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oX2Mz8BP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2724/1%2Au631A_l4f4aPo-qooOFzYw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oX2Mz8BP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2724/1%2Au631A_l4f4aPo-qooOFzYw.png" alt="" width="880" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ArgoCD dashboard is empty because we have not configured it yet; that is the next step.&lt;/p&gt;

&lt;p&gt;Let's write a configuration for ArgoCD to connect it to the Git repository where the configuration files are hosted.&lt;/p&gt;

&lt;p&gt;Since we also have to connect ArgoCD to our cluster, we first need to get the cluster's API endpoint:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks describe-cluster --name &amp;lt;cluster-name&amp;gt; --query "cluster.endpoint"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alternatively, you can find the endpoint in the AWS Management Console:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the EKS console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select your cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under the “Configuration” tab, you should see the “Kubernetes endpoint” listed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, we have to install the ArgoCD CLI and connect it to our ArgoCD instance. The easiest way to install it is with Homebrew:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After installing it, you can log in to the CLI using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;argocd login 127.0.0.1:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Fill in the username and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kvKVf_Au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AKbkiLnY72h2KiF-D6pbHFA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kvKVf_Au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AKbkiLnY72h2KiF-D6pbHFA.png" alt="" width="850" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Register your Cluster to deploy your application&lt;/p&gt;

&lt;p&gt;This step registers a cluster’s credentials to Argo CD, and is only necessary when deploying to an external cluster. When deploying internally (to the same cluster that Argo CD is running in), &lt;a href="https://kubernetes.default.svc"&gt;https://kubernetes.default.svc&lt;/a&gt; should be used as the application’s K8s API server address.&lt;/p&gt;

&lt;p&gt;First, list all cluster contexts in your current kubeconfig:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config get-contexts -o name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hWz9Xeh_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AJ6FD2g2SXerO48NV8C6OMQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hWz9Xeh_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AJ6FD2g2SXerO48NV8C6OMQ.png" alt="" width="817" height="93"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add your cluster&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;argocd cluster add arn:aws:eks:us-east-2:758659350150:cluster/Giovanni
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command installs a ServiceAccount (argocd-manager) into the kube-system namespace of that kubectl context and binds it to an admin-level ClusterRole. Argo CD uses this service account token to perform its management tasks (i.e., deployment and monitoring).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xgbt1ywk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AzEFDmogRL72gsRH25EHyYg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xgbt1ywk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AzEFDmogRL72gsRH25EHyYg.png" alt="" width="860" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we will configure the Git repository &lt;a href="https://gitlab.com/onai254/giovanni.git"&gt;https://gitlab.com/onai254/giovanni.git&lt;/a&gt; to deploy the application. Before that, we need to set the current namespace to argocd by running the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config set-context --current --namespace=argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2xtBQTyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Ajj5Fj-yAnBf2Vrnr4ZhkKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2xtBQTyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Ajj5Fj-yAnBf2Vrnr4ZhkKw.png" alt="" width="847" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we will create the application using the following command&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;argocd app create myapp --repo https://github.com/argoproj/argocd-example-apps.git --path . --revision master --dest-server https://11E5FA4C5A83DEA0033E57F09D23DFC5.gr7.us-east-2.eks.amazonaws.com --dest-namespace default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
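&lt;p&gt;The same application can also be defined declaratively with the Application CRD mentioned in the demo overview. A sketch equivalent to the CLI command above (all values copied from that command) would be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: master
    path: .
  destination:
    server: https://11E5FA4C5A83DEA0033E57F09D23DFC5.gr7.us-east-2.eks.amazonaws.com
    namespace: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Applying this manifest with kubectl apply -n argocd -f has the same effect as the argocd app create command.&lt;/p&gt;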

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FUatN6D---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AO7E0YPyREEaA4_MYvoL2YQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FUatN6D---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AO7E0YPyREEaA4_MYvoL2YQ.png" alt="" width="844" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--un9VWz3S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2722/1%2AA1doQiNAz5clhrLH2D_ruw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--un9VWz3S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2722/1%2AA1doQiNAz5clhrLH2D_ruw.png" alt="" width="880" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, our application is deployed automatically by ArgoCD, which also watches the repository for any changes and syncs them to the cluster.&lt;/p&gt;

&lt;p&gt;You can also explore more of ArgoCD's functionality in the dashboard and inspect the different parts of the application.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>microservices</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
