<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sayan Moitra</title>
    <description>The latest articles on DEV Community by Sayan Moitra (@sayanmoitra).</description>
    <link>https://dev.to/sayanmoitra</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2933947%2Ff50a8627-e1fc-43b3-a38c-b164d8d76acd.png</url>
      <title>DEV Community: Sayan Moitra</title>
      <link>https://dev.to/sayanmoitra</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sayanmoitra"/>
    <language>en</language>
    <item>
      <title>Using Amazon EKS with Karmada: Building Multi-cluster Kubernetes Management</title>
      <dc:creator>Sayan Moitra</dc:creator>
      <pubDate>Sun, 30 Mar 2025 17:12:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/using-amazon-eks-with-karmada-building-multi-cluster-kubernetes-management-2dm1</link>
      <guid>https://dev.to/aws-builders/using-amazon-eks-with-karmada-building-multi-cluster-kubernetes-management-2dm1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As organizations scale their Kubernetes environments, managing multiple clusters across various regions becomes increasingly complex. Amazon Elastic Kubernetes Service (EKS) provides a managed Kubernetes service, but orchestrating workloads across multiple EKS clusters presents significant operational challenges. Enter Karmada (Kubernetes Armada) — an open-source CNCF project designed to solve multi-cluster management issues.&lt;/p&gt;

&lt;p&gt;In this article, we'll explore how to combine Amazon EKS with Karmada to build a robust, scalable multi-cluster management solution that provides high availability, disaster recovery, and improved resource utilization across your Kubernetes estate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Karmada
&lt;/h2&gt;

&lt;p&gt;Karmada provides a unified control plane for managing multiple Kubernetes clusters. Unlike many multi-cluster solutions, Karmada keeps the native Kubernetes API rather than introducing custom workload APIs, making it easier to integrate with existing Kubernetes tools and workflows.&lt;/p&gt;

&lt;p&gt;Key features of Karmada include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Centralized cluster management&lt;/strong&gt;: Manage multiple clusters from a single control plane&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native Kubernetes API compatibility&lt;/strong&gt;: Use familiar Kubernetes APIs and tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource propagation&lt;/strong&gt;: Automatically distribute resources across member clusters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failover and high availability&lt;/strong&gt;: Support application failover between clusters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Override policies&lt;/strong&gt;: Define cluster-specific configurations while maintaining a single source of truth&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Combine EKS with Karmada?
&lt;/h2&gt;

&lt;p&gt;Amazon EKS simplifies running Kubernetes on AWS by eliminating the need to install and operate your own Kubernetes control plane. However, many organizations require workloads to run across multiple regions or even multiple cloud providers. Combining EKS with Karmada offers several advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Global High Availability&lt;/strong&gt;: Deploy applications across multiple regions to ensure service continuity even if an entire region experiences an outage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regulatory Compliance&lt;/strong&gt;: Meet data residency requirements by deploying specific workloads to specific regions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Optimization&lt;/strong&gt;: Distribute workloads to the most cost-effective regions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Scale beyond the limits of a single Kubernetes cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Management&lt;/strong&gt;: Manage multiple clusters with consistent policies and configurations&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Implementation Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6jednda1aql7w0ebcdx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6jednda1aql7w0ebcdx.png" alt="Architecture Overview" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's walk through a reference architecture for implementing Karmada with EKS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Karmada Control Plane&lt;/strong&gt;: Deploy the Karmada control plane in a dedicated EKS cluster, preferably in your primary region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Member Clusters&lt;/strong&gt;: Set up multiple EKS clusters across different AWS regions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connectivity&lt;/strong&gt;: Establish secure connectivity between clusters using AWS Transit Gateway or similar networking services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: Configure IAM roles and service accounts for secure cluster access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Implement a unified monitoring solution across all clusters&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step-by-Step Setup Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI configured with appropriate permissions&lt;/li&gt;
&lt;li&gt;eksctl installed (used below to create the clusters)&lt;/li&gt;
&lt;li&gt;kubectl installed and configured&lt;/li&gt;
&lt;li&gt;Helm 3.x installed&lt;/li&gt;
&lt;/ul&gt;
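&lt;p&gt;A quick way to confirm the tooling is in place before creating anything (a minimal sketch; the tool list simply mirrors the commands used in this guide):&lt;/p&gt;

```shell
# Check that each CLI used in this guide is on the PATH.
missing=0
for tool in aws kubectl helm eksctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done
echo "tools missing: $missing"
```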

&lt;h3&gt;
  
  
  1. Create EKS Clusters
&lt;/h3&gt;

&lt;p&gt;First, create EKS clusters in your desired regions. Here, we'll create three: one for the Karmada control plane and two member clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a cluster in us-west-2 (for Karmada control plane)&lt;/span&gt;
eksctl create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; karmada-control &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="nt"&gt;--version&lt;/span&gt; 1.28 &lt;span class="nt"&gt;--nodegroup-name&lt;/span&gt; standard-workers &lt;span class="nt"&gt;--node-type&lt;/span&gt; m5.large &lt;span class="nt"&gt;--nodes&lt;/span&gt; 3

&lt;span class="c"&gt;# Create a member cluster in us-east-1&lt;/span&gt;
eksctl create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; eks-east &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--version&lt;/span&gt; 1.28 &lt;span class="nt"&gt;--nodegroup-name&lt;/span&gt; standard-workers &lt;span class="nt"&gt;--node-type&lt;/span&gt; m5.large &lt;span class="nt"&gt;--nodes&lt;/span&gt; 3

&lt;span class="c"&gt;# Create another member cluster in eu-west-1&lt;/span&gt;
eksctl create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; eks-europe &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-1 &lt;span class="nt"&gt;--version&lt;/span&gt; 1.28 &lt;span class="nt"&gt;--nodegroup-name&lt;/span&gt; standard-workers &lt;span class="nt"&gt;--node-type&lt;/span&gt; m5.large &lt;span class="nt"&gt;--nodes&lt;/span&gt; 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install Karmada Control Plane
&lt;/h3&gt;

&lt;p&gt;Next, install the Karmada control plane on the first cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Update kubeconfig to point to the control plane cluster&lt;/span&gt;
aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; karmada-control &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2

&lt;span class="c"&gt;# Install Karmada using Helm&lt;/span&gt;
helm repo add karmada https://raw.githubusercontent.com/karmada-io/karmada/master/charts
helm repo update
helm &lt;span class="nb"&gt;install &lt;/span&gt;karmada karmada/karmada &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; karmada-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Register Member Clusters
&lt;/h3&gt;

&lt;p&gt;Now, register your EKS clusters with Karmada. Note that karmadactl talks to the Karmada API server, so make sure your current kubeconfig context points at it rather than at the host EKS cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install karmadactl&lt;/span&gt;
curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://raw.githubusercontent.com/karmada-io/karmada/master/hack/install-cli.sh | &lt;span class="nb"&gt;sudo&lt;/span&gt; bash &lt;span class="nt"&gt;-s&lt;/span&gt; karmadactl

&lt;span class="c"&gt;# Get kubeconfig for member clusters&lt;/span&gt;
aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; eks-east &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/eks-east.kubeconfig
aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; eks-europe &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-1 &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/eks-europe.kubeconfig

&lt;span class="c"&gt;# Register clusters with Karmada&lt;/span&gt;
karmadactl &lt;span class="nb"&gt;join &lt;/span&gt;eks-east &lt;span class="nt"&gt;--cluster-kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/eks-east.kubeconfig
karmadactl &lt;span class="nb"&gt;join &lt;/span&gt;eks-europe &lt;span class="nt"&gt;--cluster-kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/eks-europe.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Verify Cluster Registration
&lt;/h3&gt;

&lt;p&gt;Check that your clusters are registered; run this against the Karmada API server's kubeconfig:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get clusters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your EKS clusters listed with their status as "Ready".&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying Applications Across Clusters
&lt;/h2&gt;

&lt;p&gt;With Karmada, you have multiple ways to deploy applications across clusters:&lt;/p&gt;

&lt;h3&gt;
  
  
  Using PropagationPolicy
&lt;/h3&gt;

&lt;p&gt;The most common way is to use a PropagationPolicy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.21&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policy.karmada.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PropagationPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-propagation&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resourceSelectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;placement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;clusterAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;clusterNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;eks-east&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;eks-europe&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this YAML to your Karmada control plane, and Karmada will propagate the nginx deployment to both the eks-east and eks-europe clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using OverridePolicy for Cluster-specific Configurations
&lt;/h3&gt;

&lt;p&gt;You might want different configurations for different clusters. For instance, you might need more replicas in a region with higher traffic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policy.karmada.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OverridePolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-override&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resourceSelectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;overrideRules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targetCluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;clusterNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;eks-europe&lt;/span&gt;
      &lt;span class="na"&gt;overriders&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;plaintext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/spec/replicas"&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy overrides the replica count to 5 in the eks-europe cluster, while eks-east keeps the value from the original Deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing Failover Between Regions
&lt;/h2&gt;

&lt;p&gt;One of the key benefits of a multi-cluster setup is the ability to implement cross-region failover. Here's how to set up a simple failover mechanism using Karmada:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policy.karmada.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PropagationPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-failover&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resourceSelectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;placement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;replicaScheduling&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicaSchedulingType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Divided&lt;/span&gt;
      &lt;span class="na"&gt;weightPreference&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;staticWeightList&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targetCluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;clusterNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;eks-east&lt;/span&gt;
            &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targetCluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;clusterNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;eks-europe&lt;/span&gt;
            &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="na"&gt;clusterTolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster.karmada.io/unreachable&lt;/span&gt;
        &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Exists&lt;/span&gt;
        &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
        &lt;span class="na"&gt;tolerationSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;300&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this policy, your workloads initially run in eks-east. If eks-east becomes unreachable for more than 300 seconds, Karmada will reschedule the workloads to eks-europe.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Your Multi-cluster Setup
&lt;/h2&gt;

&lt;p&gt;Monitoring a multi-cluster setup requires a unified approach. Consider implementing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus and Grafana&lt;/strong&gt;: Deploy Prometheus for metrics collection and Grafana for visualization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Karmada Dashboards&lt;/strong&gt;: Use Karmada-specific Grafana dashboards to monitor propagation status&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CloudWatch&lt;/strong&gt;: Integrate with CloudWatch for EKS-specific metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distributed Tracing&lt;/strong&gt;: Implement tools like Jaeger or AWS X-Ray for tracing requests across clusters&lt;/li&gt;
&lt;/ol&gt;
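&lt;p&gt;For the Prometheus option, one common pattern is to run a Prometheus instance per member cluster and federate them into a central instance alongside the Karmada control plane. A minimal sketch of the central scrape configuration (the endpoint hostnames and the match expression are placeholders for your environment):&lt;/p&gt;

```yaml
# Hypothetical federation config for the central Prometheus:
# it scrapes the /federate endpoint of a Prometheus in each member cluster.
scrape_configs:
  - job_name: "federate-eks-east"
    honor_labels: true
    metrics_path: "/federate"
    params:
      "match[]":
        - '{job="kubernetes-pods"}'        # adjust to the series you need
    static_configs:
      - targets: ["prometheus.eks-east.internal:9090"]    # placeholder address
        labels:
          cluster: eks-east                # tag every series with its source cluster
  - job_name: "federate-eks-europe"
    honor_labels: true
    metrics_path: "/federate"
    params:
      "match[]":
        - '{job="kubernetes-pods"}'
    static_configs:
      - targets: ["prometheus.eks-europe.internal:9090"]  # placeholder address
        labels:
          cluster: eks-europe
```

&lt;p&gt;The cluster label lets a single Grafana dashboard break every panel down per region.&lt;/p&gt;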

&lt;h2&gt;
  
  
  Cost Optimization Strategies
&lt;/h2&gt;

&lt;p&gt;Running multiple EKS clusters can increase costs. Here are strategies to optimize expenses:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Right-size your clusters&lt;/strong&gt;: Use appropriate instance types and autoscaling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spot Instances&lt;/strong&gt;: Configure node groups to use Spot instances for non-critical workloads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Autoscaler&lt;/strong&gt;: Implement the Kubernetes Cluster Autoscaler to adjust node count based on workload&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fargate Profiles&lt;/strong&gt;: Use AWS Fargate for serverless container execution when appropriate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workload Distribution&lt;/strong&gt;: Use Karmada to place workloads in lower-cost regions when possible&lt;/li&gt;
&lt;/ol&gt;
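&lt;p&gt;The last point can be expressed directly in a PropagationPolicy: weighted replica scheduling biases placement toward a cheaper region while keeping some footprint elsewhere. A sketch, assuming eks-east is the lower-cost region and batch-worker is a hypothetical Deployment:&lt;/p&gt;

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: batch-cost-aware
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: batch-worker              # hypothetical workload
  placement:
    replicaScheduling:
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - eks-east            # assumed lower-cost region: 3 of every 4 replicas
            weight: 3
          - targetCluster:
              clusterNames:
                - eks-europe
            weight: 1
```

&lt;p&gt;The weights are illustrative; tune them against your actual per-region pricing.&lt;/p&gt;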

&lt;h2&gt;
  
  
  Common Challenges and Solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Networking Complexity
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Challenge&lt;/strong&gt;: Establishing secure communication between clusters across regions.&lt;br&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Use AWS Transit Gateway for cross-region connectivity, and implement a service mesh such as Istio or AWS App Mesh for service-to-service communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Synchronization
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Challenge&lt;/strong&gt;: Keeping data synchronized across multiple regions.&lt;br&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Use region-specific persistent volumes for regional data, employ cross-region replication for shared data, or use managed services like Amazon Aurora Global Database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Certificate Management
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Challenge&lt;/strong&gt;: Managing TLS certificates across clusters.&lt;br&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Implement cert-manager with a central certificate authority, or use AWS Certificate Manager for certificates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Combining Amazon EKS with Karmada provides a powerful solution for multi-cluster Kubernetes management. This approach enables global high availability, regulatory compliance, and cost optimization while simplifying operations through a unified control plane.&lt;/p&gt;

&lt;p&gt;As organizations continue to expand their Kubernetes footprint, multi-cluster management becomes essential. By following the steps and best practices outlined in this article, you can build a robust, scalable Kubernetes platform that spans multiple regions and even multiple cloud providers.&lt;/p&gt;

&lt;p&gt;Remember that multi-cluster management is a journey, not a destination. Start with a simple setup, gain operational experience, and gradually expand your multi-cluster capabilities as your organization's needs evolve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://karmada.io/docs/" rel="noopener noreferrer"&gt;Karmada Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/" rel="noopener noreferrer"&gt;Amazon EKS Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/karmada-io/karmada" rel="noopener noreferrer"&gt;Karmada GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>karmada</category>
      <category>eks</category>
      <category>containers</category>
      <category>aws</category>
    </item>
    <item>
      <title>Securing Kubernetes with Kyverno: A Practical Guide to Deployment with Helm and ArgoCD in EKS</title>
      <dc:creator>Sayan Moitra</dc:creator>
      <pubDate>Sat, 08 Mar 2025 05:05:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/securing-kubernetes-with-kyverno-a-practical-guide-to-deployment-with-helm-and-argocd-in-eks-4glp</link>
      <guid>https://dev.to/aws-builders/securing-kubernetes-with-kyverno-a-practical-guide-to-deployment-with-helm-and-argocd-in-eks-4glp</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Kubernetes has revolutionized container orchestration, but as clusters grow in complexity, security becomes increasingly challenging. Enter &lt;strong&gt;Kyverno&lt;/strong&gt;, a policy engine designed specifically for Kubernetes. Unlike traditional policy engines, Kyverno doesn’t require learning a new language; it uses familiar Kubernetes-style resources to define policies.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll explore how to deploy Kyverno in Amazon EKS using Helm charts managed with ArgoCD, providing a GitOps approach to policy management. By the end, you’ll have a fully functional policy enforcement system that can validate, mutate, and generate resources across your EKS clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kyverno installation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozoeuxufyx5ov14dgpgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozoeuxufyx5ov14dgpgy.png" alt="https://cdn-images-1.medium.com/max/1024/1*xPnEhp7osbvnwoU7RzKaKg.png" width="800" height="355"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kyverno Official Doc — Installation guide&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ja940mk2unahk5dp32m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ja940mk2unahk5dp32m.png" alt="https://cdn-images-1.medium.com/max/1024/1*P1J6aQ4zhgZunMZB0D-fdw.png" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Kyverno?
&lt;/h3&gt;

&lt;p&gt;Kyverno (derived from the Greek word “govern”) is a policy engine built specifically for Kubernetes. It allows cluster administrators to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Validate&lt;/strong&gt; resources against policies before they’re admitted to the cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mutate&lt;/strong&gt; resources to ensure they conform to organizational standards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate&lt;/strong&gt; related resources automatically when certain resources are created&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean up&lt;/strong&gt; resources when their parents are deleted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All this is achieved using Kubernetes-native custom resources, making Kyverno intuitive for teams already familiar with Kubernetes.&lt;/p&gt;
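&lt;p&gt;To make that concrete, here is a minimal validation policy of the kind this setup will enforce: it rejects Pods that lack an app label. (A sketch for illustration; the policy name and label key are arbitrary.)&lt;/p&gt;

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: Enforce    # use Audit to report without blocking
  rules:
    - name: check-app-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'app' is required."
        pattern:
          metadata:
            labels:
              app: "?*"               # any non-empty value
```

&lt;p&gt;Once Kyverno is running, applying this with kubectl is all it takes to start enforcing the rule.&lt;/p&gt;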
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we begin, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An EKS cluster up and running&lt;/li&gt;
&lt;li&gt;kubectl configured to communicate with your cluster&lt;/li&gt;
&lt;li&gt;Helm v3 installed&lt;/li&gt;
&lt;li&gt;ArgoCD installed in your cluster&lt;/li&gt;
&lt;li&gt;A Git repository for storing your Kyverno configurations&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Deployment Architecture
&lt;/h3&gt;

&lt;p&gt;We’ll follow a GitOps approach with three main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt;: To package and template the Kyverno installation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ArgoCD&lt;/strong&gt;: To sync configurations from Git and manage the deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EKS&lt;/strong&gt;: Our Kubernetes environment on AWS&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Step 1: Setting Up the Git Repository Structure
&lt;/h3&gt;

&lt;p&gt;First, let’s set up our Git repository with the necessary configuration files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kyverno-gitops/
├── applications/
│   └── kyverno.yaml
└── helm-values/
    └── kyverno-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
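&lt;p&gt;Bootstrapping that layout locally is a couple of commands (file names match the tree above):&lt;/p&gt;

```shell
# Create the GitOps repository skeleton used in this guide.
mkdir -p kyverno-gitops/applications kyverno-gitops/helm-values
touch kyverno-gitops/applications/kyverno.yaml \
      kyverno-gitops/helm-values/kyverno-values.yaml
find kyverno-gitops -type f
```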



&lt;h3&gt;
  
  
  Step 2: Creating the ArgoCD Application
&lt;/h3&gt;

&lt;p&gt;Create the applications/kyverno.yaml file. One caveat: because the source here is a Helm chart repository, ArgoCD resolves valueFiles relative to the chart itself, so referencing a file in your Git repository as below requires ArgoCD's multiple-sources feature (or inlining the values under helm.values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://kyverno.github.io/kyverno/
    targetRevision: 2.7.2
    chart: kyverno
    helm:
      valueFiles:
      - ../../helm-values/kyverno-values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: kyverno
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Configuring Kyverno with Helm Values
&lt;/h3&gt;

&lt;p&gt;Now, let’s create the helm-values/kyverno-values.yaml file with our desired configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;replicaCount: 3

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 128Mi

serviceMonitor:
  enabled: true
  namespace: kyverno

extraArgs:
  - "--clientRateLimitQPS=25"
  - "--clientRateLimitBurst=50"

podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000

metricsService:
  create: true
  type: ClusterIP

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: kyverno
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Deploying with ArgoCD
&lt;/h3&gt;

&lt;p&gt;With our configuration files ready, let’s deploy using ArgoCD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Apply the ArgoCD application
kubectl apply -f applications/kyverno.yaml

# Check the status
argocd app get kyverno
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use the ArgoCD UI to monitor the deployment progress.&lt;/p&gt;
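
&lt;p&gt;If you prefer the CLI, argocd app wait blocks until the application reaches the desired state (this assumes you are already logged in to the ArgoCD API server):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Wait up to 5 minutes for the app to be both Synced and Healthy
argocd app wait kyverno --sync --health --timeout 300
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;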

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6beu2xhswkfwrexulgks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6beu2xhswkfwrexulgks.png" alt="https://cdn-images-1.medium.com/max/1024/1*b4uexXnn01k4DYkSRX0KmA.png" width="800" height="345"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;ArgoCD deployment&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffouimi78ppraixneh9gi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffouimi78ppraixneh9gi.png" alt="https://cdn-images-1.medium.com/max/1024/1*iyysU88dJyIKavWW-6xnrw.png" width="800" height="327"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kyverno policies&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 5: Verifying the Installation
&lt;/h3&gt;

&lt;p&gt;Once ArgoCD reports the application as “Healthy” and “Synced,” verify the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check if Kyverno pods are running
kubectl get pods -n kyverno

# Verify the CRDs are installed
kubectl get crds | grep kyverno
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the Kyverno pods running and several Kyverno-related CRDs installed.&lt;/p&gt;
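
&lt;p&gt;Rather than polling manually, you can wait for the pods to become Ready (the label selector below assumes the Helm release is named kyverno; check your pod labels if it differs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl wait pod -n kyverno \
  -l app.kubernetes.io/instance=kyverno \
  --for=condition=Ready --timeout=180s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;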

&lt;h3&gt;
  
  
  Step 6: Creating Your First Policy
&lt;/h3&gt;

&lt;p&gt;Let’s create a simple policy that requires all pods to have resource limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
  - name: validate-resource-limits
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Resource limits are required for all containers."
      pattern:
        spec:
          containers:
            - resources:
                limits:
                  memory: "?*"
                  cpu: "?*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this as policies/require-resource-limits.yaml in your Git repository.&lt;/p&gt;
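
&lt;p&gt;Before committing, you can exercise the policy offline with the Kyverno CLI: kyverno apply evaluates a policy against a resource manifest without touching the cluster (test-pod.yaml here is a hypothetical pod manifest of your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dry-run the policy against a local manifest
kyverno apply policies/require-resource-limits.yaml --resource test-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;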

&lt;h3&gt;
  
  
  Step 7: Managing Policies with ArgoCD
&lt;/h3&gt;

&lt;p&gt;Create another ArgoCD application to manage your policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno-policies
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/yourusername/kyverno-gitops.git
    targetRevision: main
    path: policies
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f applications/kyverno-policies.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8: Testing the Policy
&lt;/h3&gt;

&lt;p&gt;Let’s test our policy by trying to create a pod without resource limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should receive an error message indicating that the pod was rejected because it doesn’t specify resource limits.&lt;/p&gt;

&lt;p&gt;Now, let’s create a pod that complies with our policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "256Mi"
        cpu: "500m"
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pod should be created successfully.&lt;/p&gt;
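
&lt;p&gt;Once you are done experimenting, confirm the policy is active and remove the test pod so it doesn’t linger in the default namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get clusterpolicy require-resource-limits
kubectl delete pod test-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;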

&lt;h3&gt;
  
  
  Advanced Configuration: Multi-Cluster Policy Management
&lt;/h3&gt;

&lt;p&gt;For organizations with multiple EKS clusters, you can use ArgoCD’s App of Apps pattern to deploy Kyverno consistently across all clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno-management
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/yourusername/kyverno-gitops.git
    targetRevision: main
    path: clusters
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within your clusters directory, include a separate application for each cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clusters/prod-cluster.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/yourusername/kyverno-gitops.git
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://prod-cluster-api.example.com
    namespace: kyverno
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
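
&lt;p&gt;For ArgoCD to deploy to prod-cluster-api.example.com, that cluster must first be registered with ArgoCD. The context name below is a placeholder for an entry in your kubeconfig:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Register the production cluster with ArgoCD (replace the context name)
argocd cluster add prod-cluster-context --name prod-cluster

# Confirm it appears in the cluster list
argocd cluster list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;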



&lt;h3&gt;
  
  
  Performance Tuning for EKS
&lt;/h3&gt;

&lt;p&gt;When running Kyverno in a production EKS environment, consider these performance optimizations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set appropriate resource requests and limits&lt;/strong&gt; to ensure Kyverno pods have sufficient resources without over-provisioning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement pod topology spread constraints&lt;/strong&gt; to distribute Kyverno pods across availability zones:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: kyverno
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Configure Kyverno’s rate limits&lt;/strong&gt; to prevent it from overwhelming the Kubernetes API server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extraArgs:
  - "--clientRateLimitQPS=25"
  - "--clientRateLimitBurst=50"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Use the webhook failurePolicy wisely&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;webhooks:
  failurePolicy: Ignore # Use "Fail" in production for critical policies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Monitoring and Alerting
&lt;/h3&gt;

&lt;p&gt;To monitor Kyverno’s health and performance, enable the Prometheus ServiceMonitor, setting its namespace to wherever your Prometheus Operator discovers ServiceMonitors (monitoring here, rather than the kyverno namespace used in Step 3):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serviceMonitor:
  enabled: true
  namespace: monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
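
&lt;p&gt;To confirm metrics are actually being exposed, you can port-forward the metrics Service and inspect the endpoint directly (kyverno-svc-metrics and port 8000 are the defaults in the Kyverno chart, but verify the names in your release):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward -n kyverno svc/kyverno-svc-metrics 8000:8000 &amp;amp;
curl -s http://localhost:8000/metrics | grep kyverno_policy_results_total
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;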



&lt;p&gt;Create a simple Prometheus rule to alert on policy violations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kyverno-alerts
  namespace: monitoring
spec:
  groups:
  - name: kyverno.rules
    rules:
    - alert: KyvernoPolicyViolations
      expr: sum(increase(kyverno_policy_results_total{result="fail"}[15m])) &amp;gt; 10
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High number of Kyverno policy violations"
        description: "There have been more than 10 policy violations in the last 15 minutes."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this guide, we’ve covered how to deploy Kyverno in an EKS environment using Helm and ArgoCD. By leveraging GitOps principles, we’ve created a scalable, version-controlled system for managing Kubernetes policies.&lt;/p&gt;

&lt;p&gt;Kyverno provides a powerful yet easy-to-understand approach to policy enforcement in Kubernetes. Its native integration with Kubernetes resources makes it an excellent choice for teams looking to implement policy-as-code without learning complex domain-specific languages.&lt;/p&gt;

&lt;p&gt;As you continue your Kyverno journey, explore more advanced policies for security, compliance, and operational best practices. Remember that effective policy management is an iterative process — start small, test thoroughly, and gradually expand your policy coverage as your team becomes more comfortable with the tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kyverno.io/docs/" rel="noopener noreferrer"&gt;Kyverno Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://argo-cd.readthedocs.io/" rel="noopener noreferrer"&gt;ArgoCD Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/community/tree/master/wg-policy" rel="noopener noreferrer"&gt;Kubernetes Policy Working Group&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>kubernetes</category>
      <category>argocd</category>
      <category>kyverno</category>
      <category>aws</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
