<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nishant Barola</title>
    <description>The latest articles on DEV Community by Nishant Barola (@nbarola).</description>
    <link>https://dev.to/nbarola</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1149669%2F5f4e2159-f5cb-4d47-b625-63876e9647d5.png</url>
      <title>DEV Community: Nishant Barola</title>
      <link>https://dev.to/nbarola</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nbarola"/>
    <language>en</language>
    <item>
      <title>How to Simplify Multi-cluster Istio Service Mesh using Admiral?</title>
      <dc:creator>Nishant Barola</dc:creator>
      <pubDate>Thu, 31 Aug 2023 13:59:33 +0000</pubDate>
      <link>https://dev.to/infracloud/how-to-simplify-multi-cluster-istio-service-mesh-using-admiral-26me</link>
      <guid>https://dev.to/infracloud/how-to-simplify-multi-cluster-istio-service-mesh-using-admiral-26me</guid>
      <description>&lt;p&gt;In today's rapidly evolving technological landscape, organizations are increasingly embracing cloud-native architectures and leveraging the power of Kubernetes for application deployment and management. However, as enterprises grow and their infrastructure becomes more complex, a single Kubernetes cluster on a single cloud provider may no longer suffice,  potentially leading to limitations in redundancy, disaster recovery, vendor lock-in, performance optimization, geographical diversity, cost-efficient scaling, and security and compliance measures. This is where the concept of a multi Kubernetes cluster on multi-cloud, combined with a multi-cluster service mesh, emerges as a game-changer. This does sound complex, but let's walk through and understand each part in the coming sections.&lt;/p&gt;

&lt;p&gt;In this blog post, we will learn why a multi-cluster setup is needed, how Istio service mesh works across multiple clusters, and how Admiral simplifies multi-cluster Istio configuration. We will then set up end-to-end service communication between multi-cloud Kubernetes clusters on AWS and Azure.&lt;/p&gt;

&lt;h2&gt;Why use a multi-cluster setup?&lt;/h2&gt;

&lt;p&gt;A multi-cluster architecture offers advantages in many use cases, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fault isolation and failover&lt;/strong&gt;: Deploying multiple clusters helps you achieve fault isolation. If one cluster experiences an issue or goes down, traffic can be seamlessly redirected to other healthy clusters, ensuring high availability and minimizing service disruptions. This failover capability helps maintain overall system stability and reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Location-aware routing and failover&lt;/strong&gt;: Multi-cluster setups enable location-aware routing, which means that requests can be directed to the nearest cluster or service instance based on the user's location. This routing strategy reduces network latency and provides a better user experience. In case of a failure or degraded performance in a specific cluster, traffic can be automatically rerouted to an alternate cluster, ensuring continuous service availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Team or project isolation&lt;/strong&gt;: Each team or project within an organization may have its own specific requirements, configurations, and policies. You achieve isolation and independence across teams or projects by deploying separate clusters. This isolation enables teams to have control over their own set of clusters, making it easier to manage and scale their services without interfering with other teams' environments. It also provides enhanced security and reduces the risk of unintentional clashes among different projects.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Multi-cluster deployments give you a greater degree of isolation and availability, but they do introduce complexity. That’s where a &lt;a href="https://dev.to/blogs/service-mesh-101/"&gt;service mesh&lt;/a&gt; like Istio comes into the picture: it helps streamline the management of multi-cluster deployments.&lt;/p&gt;

&lt;h2&gt;Istio deployment models&lt;/h2&gt;

&lt;p&gt;There are various deployment models for &lt;a href="https://istio.io/latest/about/service-mesh/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt; service mesh, like &lt;a href="https://istio.io/latest/docs/setup/install/multicluster/multi-primary/" rel="noopener noreferrer"&gt;Multi-Primary&lt;/a&gt;, &lt;a href="https://istio.io/latest/docs/setup/install/multicluster/primary-remote/" rel="noopener noreferrer"&gt;Primary-Remote&lt;/a&gt;,  &lt;a href="https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/" rel="noopener noreferrer"&gt;Multi-Primary on different networks&lt;/a&gt;, and  &lt;a href="https://istio.io/latest/docs/setup/install/multicluster/primary-remote_multi-network/" rel="noopener noreferrer"&gt;Primary-Remote on different networks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In Multi-Primary, the Istio control plane runs on every cluster, whereas in Primary-Remote it runs on a single cluster. Service workloads have direct connectivity across clusters if the clusters are on the same network.&lt;/p&gt;

&lt;h2&gt;Advantages of Multi-Primary on different networks&lt;/h2&gt;

&lt;p&gt;Multi-Primary Istio on different networks provides the following benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved availability&lt;/strong&gt;: Deploying multiple primary control planes enhances availability. If one control plane becomes unavailable, it only affects the workloads within its managed clusters, minimizing disruptions and allowing other clusters to function normally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration isolation&lt;/strong&gt;: Configuration changes can be made independently in one cluster, zone, or region without affecting others, allowing controlled rollout across the mesh.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Selective service visibility&lt;/strong&gt;: Selective service visibility enables service-level isolation, restricting access to specific parts of the mesh.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-network communication&lt;/strong&gt;: Istio supports spanning a single service mesh across multiple networks, known as multi-network deployments. This configuration offers benefits such as overlapping IP or VIP ranges for service endpoints, crossing administrative boundaries, fault tolerance, scaling of network addresses, and compliance with network segmentation standards. Istio gateways facilitate secure communication between workloads in different networks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure communication&lt;/strong&gt;: Istio ensures secure communication by supporting cross-network communication only for workloads with an Istio proxy. Istio exposes services at the Gateway with TLS pass-through, enabling mutual TLS (mTLS) directly to the workload. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Multi-Primary on different networks Istio deployment model combines multiple clusters across distinct networks in a single mesh, enabling fault isolation, precise location-based routing, and controlled configuration changes. This approach enhances availability, scalability, and compliance, making it ideal for ensuring reliable and efficient multi-cluster setups with diverse requirements.&lt;/p&gt;

&lt;h3&gt;Understanding Multi-Primary on different networks architecture&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fistio.io%2Flatest%2Fdocs%2Fsetup%2Finstall%2Fmulticluster%2Fmulti-primary_multi-network%2Farch.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fistio.io%2Flatest%2Fdocs%2Fsetup%2Finstall%2Fmulticluster%2Fmulti-primary_multi-network%2Farch.svg" alt="Multi-Primary Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/arch.svg" rel="noopener noreferrer"&gt;(Image Source)&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In the Multi-Primary on different networks deployment model, the clusters within the Istio mesh are situated on separate networks. This means there is no direct connection between service workloads in different clusters. To enable communication across cluster boundaries, each Istio control plane watches the API servers of all clusters to obtain information about service endpoints. Service workloads in different clusters communicate indirectly via dedicated gateways for east-west traffic. It is crucial that the gateway in each cluster is reachable from the other cluster.&lt;/p&gt;

&lt;h2&gt;Challenges of Istio Multi-Primary&lt;/h2&gt;

&lt;p&gt;While Multi-Primary Istio has many benefits, you should also be aware of the following challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As per &lt;a href="https://istio.io/latest/docs/reference/glossary/#namespace-sameness" rel="noopener noreferrer"&gt;namespace sameness&lt;/a&gt;, by default traffic will be load-balanced across all clusters in the mesh that share the same FQDN (same service name and namespace). To change the default load balancing, you have to apply the appropriate Istio configuration, such as &lt;a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/" rel="noopener noreferrer"&gt;Destination Rules&lt;/a&gt;, in each cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the same service lives in different namespaces across the clusters and requires a single global identity, then you have to apply Istio configuration such as &lt;a href="https://istio.io/latest/docs/reference/config/networking/service-entry/" rel="noopener noreferrer"&gt;service entries&lt;/a&gt; and Destination Rules in each cluster to enable service discovery across clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With each cluster having its own control plane, services get their configuration from their respective control plane. Currently, Istio does not provide a way to manage configuration across multiple control planes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
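&lt;p&gt;To make the last two points concrete, here is a minimal sketch of the kind of ServiceEntry you would otherwise hand-write in each cluster to expose a remote service under a shared host. The name, host, gateway address, and ports are hypothetical placeholders, not output from any real cluster:&lt;/p&gt;

```yaml
# Hypothetical, hand-written ServiceEntry for cross-cluster discovery.
# Admiral automates creating and syncing resources like this per cluster.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: greeting-global            # placeholder name
  namespace: istio-system
spec:
  hosts:
    - greeting.global              # shared host reachable from every cluster
  location: MESH_INTERNAL          # endpoints are part of the mesh
  ports:
    - number: 80
      name: http
      protocol: HTTP
  resolution: DNS
  endpoints:
    # placeholder address of the remote cluster's east-west gateway
    - address: example-eastwest-gw.elb.amazonaws.com
      ports:
        http: 15443                # cross-network traffic port on the gateway
```

&lt;p&gt;Multiplying this across services and clusters, and keeping every copy in sync, is exactly the manual toil described above.&lt;/p&gt;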

&lt;p&gt;Admiral can help you overcome these challenges. For one, Admiral can be used to give a service a single global identity, and to sync its related Istio configuration across the multiple control planes.&lt;/p&gt;

&lt;p&gt;Let’s explore how Admiral helps you manage a multi-cluster service mesh seamlessly.&lt;/p&gt;

&lt;h2&gt;Overview of Admiral multi-cluster management&lt;/h2&gt;

&lt;p&gt;Admiral simplifies the configuration of an Istio service mesh that spans multiple clusters, enabling it to function as a single mesh. It facilitates high-level configuration management, automating communication between services across different clusters. Admiral enables the creation of globally unique service names, ensuring consistency and avoiding naming conflicts. Furthermore, it allows the specification of custom service names for specific environments or regions, providing flexibility in naming conventions based on particular requirements.&lt;/p&gt;

&lt;p&gt;Admiral further streamlines the management of cross-cluster services by introducing two custom resources: Dependency and GlobalTrafficPolicy. These resources are used to configure ServiceEntries, VirtualServices, and DestinationRules on each cluster involved in the cross-cluster communication.&lt;/p&gt;

&lt;h3&gt;Admiral architecture&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fc32a2a41e2e41876b00a5bd4f61fe54d354d067e%2Fde628%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fadmiral-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fc32a2a41e2e41876b00a5bd4f61fe54d354d067e%2Fde628%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fadmiral-architecture.png" alt="Admiral architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Admiral serves as a controller that monitors Kubernetes clusters for which it has credentials, stored as Secret objects in the namespace where Admiral is running. It uses these credentials to talk to each cluster’s API server and to distribute Istio configuration to every cluster, enabling seamless communication between services.&lt;/p&gt;

&lt;h3&gt;Admiral’s CRDs&lt;/h3&gt;

&lt;h4&gt;Dependency&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admiral.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dependency&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dependency&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admiral&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service1&lt;/span&gt;
  &lt;span class="na"&gt;identityLabel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;identity&lt;/span&gt;
  &lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;service2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Dependency resource is created only in the cluster where the Admiral controller is running. It tells Admiral to sync the destination services’ Istio configuration to the clusters where the source service is running. In the above example, service2’s Istio configuration will be synced to all the clusters where service1 is running. This resource is optional.&lt;/p&gt;

&lt;h4&gt;Global traffic policy&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admiral.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GlobalTrafficPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gtp-service-podinfo&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp-eks&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;admiral.io/env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stage&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;identity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;dnsPrefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;lbType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ap-south-1&lt;/span&gt;
          &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;westus&lt;/span&gt;
          &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A GlobalTrafficPolicy can be created in any of the clusters Admiral is watching. It uses the identity label to generate a globally unique DNS name for all the services carrying the matching label.&lt;/p&gt;

&lt;p&gt;When the dnsPrefix value is &lt;code&gt;default&lt;/code&gt;, or it matches the value of the &lt;code&gt;admiral.io/env&lt;/code&gt; annotation on a deployment, the service host is generated as &lt;code&gt;{admiral.io/env}.{identity-label}.global&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For the above example, it would be &lt;code&gt;stage.backend.global&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If the dnsPrefix value is anything other than &lt;code&gt;default&lt;/code&gt; and doesn’t match the &lt;code&gt;admiral.io/env&lt;/code&gt; annotation on a deployment, the service host is generated as &lt;code&gt;{dnsPrefix}.{admiral.io/env}.{identity-label}.global&lt;/code&gt; (for example, a dnsPrefix of &lt;code&gt;west&lt;/code&gt; would yield &lt;code&gt;west.stage.backend.global&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Admiral creates ServiceEntries in all the clusters, pointing at the endpoints of the workloads that match the labels.&lt;/p&gt;

&lt;p&gt;The lbType value controls the locality load balancing setting in the Istio Destination Rules used for routing. A value of 0 maps to Istio’s &lt;a href="https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/failover/" rel="noopener noreferrer"&gt;Locality failover&lt;/a&gt;, and a value of 1 maps to Istio’s &lt;a href="https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/distribute/" rel="noopener noreferrer"&gt;Locality weighted distribution&lt;/a&gt;.&lt;/p&gt;
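&lt;p&gt;As a rough illustration (not Admiral’s exact output), the locality settings these lbType values correspond to look like this in a DestinationRule. The resource name is hypothetical; the host and regions mirror the GlobalTrafficPolicy example above:&lt;/p&gt;

```yaml
# Sketch of the locality load balancing that the lbType values map to.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: gtp-backend-sketch         # hypothetical name
spec:
  host: stage.backend.global
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:                # lbType: 1 (weighted distribution)
          - from: "ap-south-1/*"
            to:
              "ap-south-1/*": 50
              "westus/*": 50
    outlierDetection:              # locality LB requires outlier detection
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
# With lbType: 0 (failover), `distribute` would instead be something like:
#   failover:
#     - from: ap-south-1
#       to: westus
```

&lt;p&gt;Admiral generates and syncs equivalent rules for you in each cluster, so you normally never write these by hand.&lt;/p&gt;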

&lt;h2&gt;Managing microservices on different clouds using Istio and Admiral&lt;/h2&gt;

&lt;p&gt;In the previous section, we learned how Admiral helps Istio in service discovery. Now, let us see how to manage microservices on different clouds using Istio and Admiral. We will start with installing Istio on Kubernetes clusters hosted on AWS and Azure. Then, we will install Admiral on one of the clusters, and register both clusters with Admiral, so Admiral will be able to watch both clusters. Next, we will deploy the podinfo application and then see how we can use Admiral for automatic Istio configuration and service discovery across clusters. Lastly, we will set up multi-cluster monitoring with Prometheus, Grafana, and Kiali.&lt;/p&gt;

&lt;h3&gt;Prerequisites&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Two Kubernetes clusters. In this blog, we will be using AWS (EKS) and Azure (AKS) Kubernetes clusters with publicly accessible API servers. To quickly spin up the clusters, you can use the respective guides: &lt;a href="https://eksctl.io/usage/creating-and-managing-clusters/" rel="noopener noreferrer"&gt;EKS&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli" rel="noopener noreferrer"&gt;AKS&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;kubectl&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mikefarah/yq#install" rel="noopener noreferrer"&gt;yq&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Git&lt;/li&gt;
&lt;li&gt;Azure CLI (az)&lt;/li&gt;
&lt;li&gt;eksctl&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/infracloudio/istio-admiral/" rel="noopener noreferrer"&gt;Resources Repository&lt;/a&gt; - This repository contains resources that you can use to follow along with this tutorial.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Clone the repository&lt;/h3&gt;

&lt;p&gt;Clone the &lt;a href="https://github.com/infracloudio/istio-admiral.git" rel="noopener noreferrer"&gt;resources repository&lt;/a&gt; and download the necessary files&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/infracloudio/istio-admiral.git

&lt;span class="nb"&gt;cd &lt;/span&gt;istio-admiral
&lt;span class="c"&gt;# Note: we will be executing all commands from the root of the directory&lt;/span&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;REPO_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;

&lt;span class="c"&gt;# download the 1.17.2 release of Istio &lt;/span&gt;
curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://istio.io/downloadIstio | &lt;span class="nv"&gt;ISTIO_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.17.2 sh -


&lt;span class="c"&gt;# rename istio directory for simplicity&lt;/span&gt;
&lt;span class="nb"&gt;mv &lt;/span&gt;istio-1.17.2 istio

&lt;span class="c"&gt;# Add the istioctl client to your path&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"export PATH=&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;/istio/bin:&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Multi-cluster mesh setup&lt;/h3&gt;

&lt;p&gt;Set up a multi-cluster Istio mesh across different cloud environments.&lt;/p&gt;

&lt;h4&gt;1. Generate common CA certificates&lt;/h4&gt;

&lt;p&gt;In order for both clusters to be part of a single mesh, we will generate a common root CA, then use it to issue intermediate certificates to the Istio CAs running in each cluster. With a common root CA, all clusters within the mesh can trust the same authority for issuing and validating certificates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# certs&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$REPO_HOME&lt;/span&gt;/certs
&lt;span class="nb"&gt;cd &lt;/span&gt;certs
make &lt;span class="nt"&gt;-f&lt;/span&gt; ../istio/tools/certs/Makefile.selfsigned.mk root-ca
make &lt;span class="nt"&gt;-f&lt;/span&gt; ../istio/tools/certs/Makefile.selfsigned.mk eks-cacerts
make &lt;span class="nt"&gt;-f&lt;/span&gt; ../istio/tools/certs/Makefile.selfsigned.mk aks-cacerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;2. Set the respective cluster contexts in environment variables&lt;/h4&gt;

&lt;p&gt;You can use &lt;code&gt;kubectl config get-contexts&lt;/code&gt; to get the required contexts of the created clusters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;eks-context&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;aks-context&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;3. Create the cacerts secret from the generated certificates in the istio-system namespace&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; create namespace istio-system

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; create secret generic cacerts &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks/ca-cert.pem &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks/ca-key.pem &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks/root-cert.pem &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks/cert-chain.pem

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; create namespace istio-system

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; create secret generic cacerts &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aks/ca-cert.pem &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aks/ca-key.pem &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aks/root-cert.pem &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aks/cert-chain.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;4. Install Istio and configure the gateway on EKS&lt;/h4&gt;

&lt;p&gt;Follow the steps below to install Istio on the EKS cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$REPO_HOME&lt;/span&gt;

&lt;span class="c"&gt;# add network label to istio-system namespace&lt;/span&gt;

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; label namespace istio-system topology.istio.io/network&lt;span class="o"&gt;=&lt;/span&gt;eks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install Istio on EKS, we will use the following configuration. In the global values, we set &lt;code&gt;meshID&lt;/code&gt; to &lt;code&gt;mesh1&lt;/code&gt;, which will be the same for both clusters since they form a single mesh. The &lt;code&gt;clusterName&lt;/code&gt; and &lt;code&gt;network&lt;/code&gt; values are both set to &lt;code&gt;eks&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;ISTIO_META_DNS_CAPTURE&lt;/code&gt; and &lt;code&gt;ISTIO_META_DNS_AUTO_ALLOCATE&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt;, Istio’s ServiceEntry addresses can be resolved without requiring custom DNS server configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;istio-eks.yaml

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: eks
      network: eks
  meshConfig:
    defaultConfig:
      proxyMetadata:
        &lt;span class="c"&gt;# Enable basic DNS proxying&lt;/span&gt;
        ISTIO_META_DNS_CAPTURE: &lt;span class="s2"&gt;"true"&lt;/span&gt;
        &lt;span class="c"&gt;# Enable automatic address allocation&lt;/span&gt;
        ISTIO_META_DNS_AUTO_ALLOCATE: &lt;span class="s2"&gt;"true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# To Install istio we will use istioctl install command.&lt;/span&gt;
istioctl &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; istio-eks.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Press y to proceed after the prompt


This will &lt;span class="nb"&gt;install &lt;/span&gt;the Istio 1.17.2 default profile with &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Istio core"&lt;/span&gt; &lt;span class="s2"&gt;"Istiod"&lt;/span&gt; &lt;span class="s2"&gt;"Ingress gateways"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; components into the cluster. Proceed? &lt;span class="o"&gt;(&lt;/span&gt;y/N&lt;span class="o"&gt;)&lt;/span&gt; y
✔ Istio core installed                                                                                                                                                                                     
✔ Istiod installed                                                                                                                                                                                         
✔ Ingress gateways installed                                                                                                                                                                               
✔ Installation &lt;span class="nb"&gt;complete                                                                                                 &lt;/span&gt;Making this installation the default &lt;span class="k"&gt;for &lt;/span&gt;injection and validation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install Istio east-west gateway&lt;/span&gt;
istio/samples/multicluster/gen-eastwest-gateway.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mesh&lt;/span&gt; mesh1 &lt;span class="nt"&gt;--cluster&lt;/span&gt; eks &lt;span class="nt"&gt;--network&lt;/span&gt; eks | &lt;span class="se"&gt;\&lt;/span&gt;
    istioctl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;istio-eastwestgateway&lt;/code&gt; service should get an external IP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get svc istio-eastwestgateway &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                                                           AGE
istio-eastwestgateway   LoadBalancer   10.100.181.251   a81362f21b84f471eb19689b5510d365-1262659891.ap-south-1.elb.amazonaws.com   15021:30962/TCP,15443:31614/TCP,15012:32334/TCP,15017:30431/TCP   25s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the clusters are on different networks, we need to expose all services (*.local and *.global) on the east-west gateway in both clusters. Although the east-west gateway is exposed as a public load balancer, services behind it can only be accessed by services with a trusted mTLS certificate and workload ID, just as if they were on the same network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;cross-network-gateway.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - &lt;span class="s2"&gt;"*.local"&lt;/span&gt;
        - &lt;span class="s2"&gt;"*.global"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the cross-network gateway&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; cross-network-gateway.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install Istio and configure gateway on AKS
&lt;/h4&gt;

&lt;p&gt;Repeat the same steps to install Istio on the AKS cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install on aks&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; label namespace istio-system topology.istio.io/network&lt;span class="o"&gt;=&lt;/span&gt;aks

istioctl &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; istio-aks.yaml

istio/samples/multicluster/gen-eastwest-gateway.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mesh&lt;/span&gt; mesh1 &lt;span class="nt"&gt;--cluster&lt;/span&gt; aks &lt;span class="nt"&gt;--network&lt;/span&gt; aks | &lt;span class="se"&gt;\&lt;/span&gt;
    istioctl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; -

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; cross-network-gateway.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The istio-eastwestgateway service should get an external IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get svc istio-eastwestgateway &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                                                           AGE
istio-eastwestgateway   LoadBalancer   10.10.81.191   20.245.234.103  15021:30962/TCP,15443:31614/TCP,15012:32334/TCP,15017:30431/TCP   20s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Enable endpoint discovery
&lt;/h4&gt;

&lt;p&gt;We will create remote secrets in both clusters to provide access to the other cluster’s API server, so that each control plane can gather information about service endpoints from the API server in every cluster. Istio then provides each proxy with a list of service endpoints to manage traffic within the mesh.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install a remote secret in aks that provides access to eks's API server.&lt;/span&gt;
istioctl x create-remote-secret &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks | &lt;span class="se"&gt;\&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;span class="c"&gt;# Install a remote secret in eks that provides access to aks's API server.&lt;/span&gt;
istioctl x create-remote-secret &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aks | &lt;span class="se"&gt;\&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
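&lt;p&gt;For reference, the secret that istioctl x create-remote-secret generates and applies has roughly the following shape (trimmed here; the embedded kubeconfig data is omitted). Istiod watches for the istio/multiCluster label to discover remote clusters:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-eks
  namespace: istio-system
  labels:
    istio/multiCluster: "true"        # istiod discovers remote clusters via this label
  annotations:
    networking.istio.io/cluster: eks  # the --name passed to create-remote-secret
type: Opaque
stringData:
  eks: |
    # kubeconfig granting access to the eks API server (omitted)
```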



&lt;h3&gt;
  
  
  Setup Admiral
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Install Admiral CRDs
&lt;/h4&gt;

&lt;p&gt;Create Admiral’s CRDs on EKS and AKS. We will use EKS as our main cluster, i.e., the cluster where the Admiral controller runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# on EKS&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/crds/depencency-crd.yaml
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/crds/globaltrafficpolicy-crd.yaml

&lt;span class="c"&gt;# on AKS&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/crds/globaltrafficpolicy-crd.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Create admiral-sync namespace and required role bindings
&lt;/h4&gt;

&lt;p&gt;Create the admiral-sync namespace and the role bindings required for permission to create Istio’s custom resources. Admiral creates and syncs all of Istio’s custom resources in the admiral-sync namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# on EKS&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/admiral-sync.yaml

&lt;span class="c"&gt;# on AKS&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/admiral-sync.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Install Admiral’s control plane on EKS
&lt;/h4&gt;

&lt;p&gt;In Admiral’s deployment manifest, make sure the container argument gateway_app is set to istio-eastwestgateway, so that Admiral uses the east-west gateway when creating Istio’s service entries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--gateway_app&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;istio-eastwestgateway&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/install-admiral.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Admiral control plane should be up and running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get pods &lt;span class="nt"&gt;-n&lt;/span&gt; admiral
NAME                      READY   STATUS    RESTARTS   AGE
admiral-b6c89bc44-bc5rd   1/1     Running   0          32s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4. Register clusters with Admiral
&lt;/h4&gt;

&lt;p&gt;To register the clusters with Admiral, we need to provide each cluster's kubeconfig file to Admiral as a secret. You can create a service account token and add it to the respective cluster’s kubeconfig file to grant access to the required Kubernetes resources. The cluster where Admiral is running should also be registered if it has other workloads that require service discovery.&lt;/p&gt;

&lt;p&gt;To prepare the EKS kubeconfig file with the token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create secret to generate the token&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/admiral-sa-secret.yaml

&lt;span class="c"&gt;# get the token&lt;/span&gt;
&lt;span class="nv"&gt;EKS_SA_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get secret admiral-sa-token &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.token}'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# get the kubeconfig&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; config view &lt;span class="nt"&gt;--minify&lt;/span&gt; &lt;span class="nt"&gt;--flatten&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; admiral-eks.yaml

&lt;span class="c"&gt;# Add the token to the kubeconfig file&lt;/span&gt;
&lt;span class="c"&gt;# If you have installed yq can use the below command or you can manually add the token to #the kubeconfig file under .users[0].user&lt;/span&gt;
yq &lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="s2"&gt;"del(.users[0].user) | .users[0].user = {&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;token&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$EKS_SA_TOKEN&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; admiral-eks.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
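&lt;p&gt;After the yq edit, the users section of admiral-eks.yaml should look roughly like this (token shortened; the user name is whatever your kubeconfig already contains):&lt;/p&gt;

```yaml
users:
- name: my-eks-user          # unchanged from the original kubeconfig (illustrative name)
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6...   # the value of $EKS_SA_TOKEN
```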



&lt;p&gt;To prepare the AKS kubeconfig file with the token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create secret to generate the token&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; admiral/admiral-sa-secret.yaml

&lt;span class="c"&gt;# get the token&lt;/span&gt;
&lt;span class="nv"&gt;AKS_SA_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get secret admiral-sa-token &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.token}'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# get the kubeconfig&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; config view &lt;span class="nt"&gt;--minify&lt;/span&gt; &lt;span class="nt"&gt;--flatten&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; admiral-aks.yaml

&lt;span class="c"&gt;# Add the token to the kubeconfig file&lt;/span&gt;
&lt;span class="c"&gt;# if you have yq can use the below command&lt;/span&gt;
yq &lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="s2"&gt;"del(.users[0].user) | .users[0].user = {&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;token&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$AKS_SA_TOKEN&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; admiral-aks.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To register remote clusters with Admiral, create a secret with the generated kubeconfig files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; create secret generic admiral-eks &lt;span class="nt"&gt;--from-file&lt;/span&gt; admiral-eks.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; admiral

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; create secret generic admiral-aks &lt;span class="nt"&gt;--from-file&lt;/span&gt; admiral-aks.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; admiral

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; label secret admiral-eks admiral/sync&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; admiral

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; label secret admiral-aks admiral/sync&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; admiral
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Demo with a podinfo application
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Deploy podinfo application
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fa15afaf8ef9f8403e75782caf50df5ec8dad3f57%2Fe0c89%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fpodinfo-application.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fa15afaf8ef9f8403e75782caf50df5ec8dad3f57%2Fe0c89%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fpodinfo-application.png" alt="Podinfo application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will deploy the frontend on EKS and the backend on both clusters, in the webapp-eks and webapp-aks namespaces respectively. Label both namespaces with istio-injection: enabled to auto-inject the Istio proxy into the applications. To distinguish the two backend services, the deployments are named backend-eks and backend-aks.&lt;/p&gt;

&lt;p&gt;The backend deployment manifest’s pod template will have the annotations and labels below for Admiral.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;admiral.io/env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stage&lt;/span&gt;
  &lt;span class="na"&gt;sidecar.istio.io/inject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;identity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
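&lt;p&gt;In context, a pared-down backend Deployment shows where these markers sit in the pod template (the container name and image here are illustrative; ports match the podinfo defaults used in this demo):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-eks
  namespace: webapp-eks
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      annotations:
        admiral.io/env: stage             # environment key Admiral uses when naming the global host
        sidecar.istio.io/inject: "true"
      labels:
        app: backend
        identity: backend                 # identity Admiral uses to group this workload across clusters
    spec:
      containers:
      - name: backend
        image: stefanprodan/podinfo       # illustrative podinfo image
        ports:
        - containerPort: 9898
```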



&lt;p&gt;Deploy the podinfo application&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; app/webapp-eks/common
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; app/webapp-eks/backend
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; app/webapp-eks/frontend


kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; app/webapp-aks/common
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; app/webapp-aks/backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Setup Admiral global traffic policy
&lt;/h4&gt;

&lt;p&gt;The global traffic policy must carry the same labels and annotations that were added to the applications. The global traffic policy below uses lbType: 1 with a 50-50 weight distribution between the two regions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;gtp-podinfo.yaml

apiVersion: admiral.io/v1alpha1
kind: GlobalTrafficPolicy
metadata:
  name: gtp-service-podinfo
  namespace: webapp-eks
  annotations:
    admiral.io/env: stage
  labels:
    identity: backend
spec:
  policy:
    - dnsPrefix: default
      lbType: 1
      target:
        - region: ap-south-1
          weight: 50
        - region: westus
          weight: 50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create global traffic policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; gtp-podinfo.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the service entry and destination rules created as a result.&lt;/p&gt;

&lt;p&gt;Service entry created on EKS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get serviceentry stage.backend.global-se &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync &lt;span class="nt"&gt;-o&lt;/span&gt; yaml 

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  annotations:
    app.kubernetes.io/created-by: admiral
    associated-gtp: gtp-service-podinfo
  labels:
    admiral.io/env: stage
    identity: backend
  name: stage.backend.global-se
  namespace: admiral-sync
spec:
  addresses:
  - 240.0.10.1
  endpoints:
  - address: backend.webapp-eks.svc.cluster.local
    locality: ap-south-1
    ports:
      http: 9898
  - address: 20.245.234.103
    locality: westus
    ports:
      http: 15443
  hosts:
  - stage.backend.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Service entry created on AKS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get serviceentry stage.backend.global-se &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync &lt;span class="nt"&gt;-o&lt;/span&gt; yaml 

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  annotations:
    app.kubernetes.io/created-by: admiral
    associated-gtp: gtp-service-podinfo
  labels:
    admiral.io/env: stage
    identity: backend
  name: stage.backend.global-se
  namespace: admiral-sync
spec:
  addresses:
  - 240.0.10.1
  endpoints:
  - address: backend.webapp-aks.svc.cluster.local
    locality: westus
    ports:
      http: 9898
  - address: a81362f21b84f471eb19689b5510d365-1262659891.ap-south-1.elb.amazonaws.com
    locality: ap-south-1
    ports:
      http: 15443
  hosts:
  - stage.backend.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, two endpoints have been added to each service entry: one points to the local service, and the other points to the remote cluster’s east-west gateway load balancer, so requests can be routed to both clusters. The service will be accessible at stage.backend.global.&lt;/p&gt;

&lt;p&gt;Destination rule created on EKS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get dr stage.backend.global-default-dr &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync &lt;span class="nt"&gt;-o&lt;/span&gt; yaml

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
    app.kubernetes.io/created-by: admiral
  name: stage.backend.global-default-dr
  namespace: admiral-sync
spec:
  host: stage.backend.global
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 1000
        maxRequestsPerConnection: 100
    loadBalancer:
      localityLbSetting:
        distribute:
        - from: ap-south-1/&lt;span class="k"&gt;*&lt;/span&gt;
          to:
            ap-south-1: 50
            westus: 50
      simple: LEAST_REQUEST
    outlierDetection:
      baseEjectionTime: 300s
      consecutive5xxErrors: 0
      consecutiveGatewayErrors: 50
      interval: 60s
      maxEjectionPercent: 100
    tls:
      mode: ISTIO_MUTUAL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Destination rule created on AKS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get dr stage.backend.global-default-dr &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync &lt;span class="nt"&gt;-o&lt;/span&gt; yaml 

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
    app.kubernetes.io/created-by: admiral
  name: stage.backend.global-default-dr
  namespace: admiral-sync
spec:
  host: stage.backend.global
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 1000
        maxRequestsPerConnection: 100
    loadBalancer:
      localityLbSetting:
        distribute:
        - from: westus/&lt;span class="k"&gt;*&lt;/span&gt;
          to:
            ap-south-1: 50
            westus: 50
      simple: LEAST_REQUEST
    outlierDetection:
      baseEjectionTime: 300s
      consecutive5xxErrors: 0
      consecutiveGatewayErrors: 50
      interval: 60s
      maxEjectionPercent: 100
    tls:
      mode: ISTIO_MUTUAL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the generated destination rules set locality load balancing to distribute traffic with a 50-50 weight split between the two regions, and set the TLS mode to &lt;a href="https://istio.io/latest/docs/reference/config/networking/gateway/#ServerTLSSettings-TLSmode" rel="noopener noreferrer"&gt;ISTIO_MUTUAL&lt;/a&gt;, enforcing mTLS communication.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Test cross-cluster communication
&lt;/h4&gt;

&lt;p&gt;Execute the command below to test cross-cluster communication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; webapp-eks &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;frontend &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[*].metadata.name}'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; webapp-eks&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; frontend &lt;span class="nt"&gt;--&lt;/span&gt; curl http://stage.backend.global
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Requests should be routed to the backend service in both clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; webapp-eks &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;frontend &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[*].metadata.name}'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; webapp-eks&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; frontend &lt;span class="nt"&gt;--&lt;/span&gt; curl http://stage.backend.global
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"hostname"&lt;/span&gt;: &lt;span class="s2"&gt;"backend-eks-6cb98b774f-r2vfl"&lt;/span&gt;,
  &lt;span class="s2"&gt;"version"&lt;/span&gt;: &lt;span class="s2"&gt;"6.3.6"&lt;/span&gt;,
  &lt;span class="s2"&gt;"revision"&lt;/span&gt;: &lt;span class="s2"&gt;"073f1ec5aff930bd3411d33534e91cbe23302324"&lt;/span&gt;,
  &lt;span class="s2"&gt;"color"&lt;/span&gt;: &lt;span class="s2"&gt;"#34577c"&lt;/span&gt;,
  &lt;span class="s2"&gt;"logo"&lt;/span&gt;: &lt;span class="s2"&gt;"https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif"&lt;/span&gt;,
  &lt;span class="s2"&gt;"message"&lt;/span&gt;: &lt;span class="s2"&gt;"greetings from podinfo v6.3.6"&lt;/span&gt;,
  &lt;span class="s2"&gt;"goos"&lt;/span&gt;: &lt;span class="s2"&gt;"linux"&lt;/span&gt;,
  &lt;span class="s2"&gt;"goarch"&lt;/span&gt;: &lt;span class="s2"&gt;"amd64"&lt;/span&gt;,
  &lt;span class="s2"&gt;"runtime"&lt;/span&gt;: &lt;span class="s2"&gt;"go1.20.4"&lt;/span&gt;,
  &lt;span class="s2"&gt;"num_goroutine"&lt;/span&gt;: &lt;span class="s2"&gt;"8"&lt;/span&gt;,
  &lt;span class="s2"&gt;"num_cpu"&lt;/span&gt;: &lt;span class="s2"&gt;"2"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; webapp-eks &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;frontend &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[*].metadata.name}'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; webapp-eks&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; frontend &lt;span class="nt"&gt;--&lt;/span&gt; curl http://stage.backend.global
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"hostname"&lt;/span&gt;: &lt;span class="s2"&gt;"backend-aks-58fd55b79c-wzp2n"&lt;/span&gt;,
  &lt;span class="s2"&gt;"version"&lt;/span&gt;: &lt;span class="s2"&gt;"6.3.6"&lt;/span&gt;,
  &lt;span class="s2"&gt;"revision"&lt;/span&gt;: &lt;span class="s2"&gt;"073f1ec5aff930bd3411d33534e91cbe23302324"&lt;/span&gt;,
  &lt;span class="s2"&gt;"color"&lt;/span&gt;: &lt;span class="s2"&gt;"#FFC0CB"&lt;/span&gt;,
  &lt;span class="s2"&gt;"logo"&lt;/span&gt;: &lt;span class="s2"&gt;"https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif"&lt;/span&gt;,
  &lt;span class="s2"&gt;"message"&lt;/span&gt;: &lt;span class="s2"&gt;"greetings from podinfo v6.3.6"&lt;/span&gt;,
  &lt;span class="s2"&gt;"goos"&lt;/span&gt;: &lt;span class="s2"&gt;"linux"&lt;/span&gt;,
  &lt;span class="s2"&gt;"goarch"&lt;/span&gt;: &lt;span class="s2"&gt;"amd64"&lt;/span&gt;,
  &lt;span class="s2"&gt;"runtime"&lt;/span&gt;: &lt;span class="s2"&gt;"go1.20.4"&lt;/span&gt;,
  &lt;span class="s2"&gt;"num_goroutine"&lt;/span&gt;: &lt;span class="s2"&gt;"8"&lt;/span&gt;,
  &lt;span class="s2"&gt;"num_cpu"&lt;/span&gt;: &lt;span class="s2"&gt;"2"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With requests routing across clusters in the mesh, we have successfully managed the podinfo application on different clouds using Istio and Admiral.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring the multi-cluster setup
&lt;/h3&gt;

&lt;p&gt;Observability plays an important role in service mesh by providing visibility into the behavior and health of the service mesh. Monitoring setups should be centralized to provide a single pane of glass for all your services running across clusters. We will use Prometheus to scrape the metrics from both the clusters, and Grafana and Kiali to visualize those metrics.&lt;/p&gt;

&lt;p&gt;To set up multi-cluster monitoring, follow the steps below:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Install Prometheus in the federation
&lt;/h4&gt;

&lt;p&gt;EKS Prometheus will be our primary Prometheus. An additional scrape job is added to the EKS Prometheus configuration to bring AKS Prometheus into the federation. It uses the global endpoint for Prometheus that Admiral will create for us once we apply the global traffic policies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;scrape_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;federate-aks-cluster'&lt;/span&gt;
      &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;15s&lt;/span&gt;
      &lt;span class="na"&gt;honor_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;metrics_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/federate'&lt;/span&gt;
      &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;match[]'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{job="kubernetes-pods"}'&lt;/span&gt;
      &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;aks.prometheus-aks.global'&lt;/span&gt;
          &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;aks-cluster'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;federate-local'&lt;/span&gt;
      &lt;span class="na"&gt;honor_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;metrics_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/federate'&lt;/span&gt;
      &lt;span class="na"&gt;metric_relabel_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;replacement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;eks-cluster'&lt;/span&gt;
        &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
      &lt;span class="na"&gt;kubernetes_sd_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod&lt;/span&gt;
        &lt;span class="na"&gt;namespaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;istio-system'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;match[]'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{__name__=~"istio_(.*)"}'&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{__name__=~"pilot(.*)"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The EKS Prometheus deployment template carries the label sidecar.istio.io/inject: "true" to inject the Istio sidecar, the label identity: prometheus-eks, and the annotations admiral.io/env: eks and sidecar.istio.io/inject: "true" for Admiral.&lt;/p&gt;

&lt;p&gt;The AKS Prometheus deployment template carries the label sidecar.istio.io/inject: "true" to inject the Istio sidecar, the label identity: prometheus-aks, and the annotations admiral.io/env: aks and sidecar.istio.io/inject: "true" for Admiral.&lt;/p&gt;
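&lt;p&gt;Concretely, the pod template metadata described above looks like the following sketch (EKS shown; AKS mirrors it with prometheus-aks and aks; the app label is illustrative):&lt;/p&gt;

```yaml
# Pod template metadata for the EKS Prometheus deployment.
template:
  metadata:
    labels:
      app: prometheus                    # illustrative selector label
      identity: prometheus-eks           # identity label Admiral matches on
      sidecar.istio.io/inject: "true"    # inject the Istio sidecar
    annotations:
      sidecar.istio.io/inject: "true"
      admiral.io/env: eks                # Admiral environment annotation
```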

&lt;p&gt;Install Prometheus on EKS and AKS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; monitoring/prometheus-eks.yaml
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; monitoring/prometheus-aks.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Create global traffic policy
&lt;/h4&gt;

&lt;p&gt;Create the global traffic policies for both AKS and EKS Prometheus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; gtp-prometheus-aks.yaml
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; gtp-prometheus-eks.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the service entry and destination rules created for host aks.prometheus-aks.global in AKS. The service entry has only the endpoint of AKS’s local Prometheus, so all requests on AKS will go to this local endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_AKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get se aks.prometheus-aks.global-se &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync &lt;span class="nt"&gt;-o&lt;/span&gt; yaml

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  annotations:
    app.kubernetes.io/created-by: admiral
    associated-gtp: gtp-service-prometheus
  labels:
    admiral.io/env: aks
    identity: prometheus-aks
  name: aks.prometheus-aks.global-se
  namespace: admiral-sync
spec:
  addresses:
  - 240.0.10.3
  endpoints:
  - address: prometheus.istio-system.svc.cluster.local
    locality: westus
    ports:
      http: 9090
  hosts:
  - aks.prometheus-aks.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No service entry is created for host aks.prometheus-aks.global on EKS, because no service with the matching labels and annotations runs there. We need that service entry in EKS so that EKS Prometheus can reach AKS Prometheus. For that, we will create Admiral’s dependency CRD. &lt;/p&gt;

&lt;h4&gt;
  
  
  3. Create dependency CRD on EKS
&lt;/h4&gt;

&lt;p&gt;In the Prometheus federation, the primary Prometheus runs on the EKS cluster. To sync Istio’s service discovery configuration for the AKS Prometheus added to the federation, we set EKS Prometheus as the source and AKS Prometheus as the destination.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;dependency-prometheus.yaml

apiVersion: admiral.io/v1alpha1
kind: Dependency
metadata:
  name: dependency
  namespace: admiral
spec:
  &lt;span class="nb"&gt;source&lt;/span&gt;: prometheus-eks
  identityLabel: identity
  destinations:
    - prometheus-aks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the dependency CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; dependency-prometheus.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating the dependency CRD, the service entry for host aks.prometheus-aks.global is created in EKS. It has an endpoint pointing to AKS’s east-west gateway load balancer, so whenever this host is used within EKS, requests go to AKS Prometheus via the east-west gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get se aks.prometheus-aks.global-se &lt;span class="nt"&gt;-n&lt;/span&gt; admiral-sync &lt;span class="nt"&gt;-o&lt;/span&gt; yaml

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  annotations:
    app.kubernetes.io/created-by: admiral
    associated-gtp: gtp-service-prometheus
  labels:
    admiral.io/env: aks
    identity: prometheus-aks
  name: aks.prometheus-aks.global-se
  namespace: admiral-sync
spec:
  addresses:
  - 240.0.10.3
  endpoints:
  - address: 20.245.234.103
    locality: westus
    ports:
      http: 15443
  hosts:
  - aks.prometheus-aks.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4. Install Kiali and Grafana on EKS
&lt;/h4&gt;

&lt;p&gt;The EKS Grafana deployment template has the label sidecar.istio.io/inject: "true" to inject the Istio sidecar, so that Grafana can resolve the host eks.prometheus-eks.global, which we created with the Prometheus EKS global traffic policy in the previous step.&lt;/p&gt;

&lt;p&gt;By default, a service with the same name in the same namespace is load balanced across all the clusters where it runs, so we needed a service entry that routes only to EKS Prometheus. &lt;/p&gt;
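&lt;p&gt;That service entry is generated by Admiral from the gtp-prometheus-eks policy and mirrors the AKS one shown earlier. A sketch of its shape (the locality value is assumed from the EKS region used in this setup):&lt;/p&gt;

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: eks.prometheus-eks.global-se
  namespace: admiral-sync
spec:
  hosts:
  - eks.prometheus-eks.global
  endpoints:
  - address: prometheus.istio-system.svc.cluster.local  # EKS-local Prometheus only
    locality: ap-south-1    # assumed EKS region
    ports:
      http: 9090
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
```

&lt;p&gt;Because the only endpoint is the EKS-local Prometheus service, Grafana queries against this host never cross the cluster boundary.&lt;/p&gt;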

&lt;p&gt;Install Grafana and Kiali on EKS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; monitoring/grafana-eks.yaml
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; monitoring/kiali-eks.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  5. Visualize graphs and dashboards
&lt;/h4&gt;

&lt;p&gt;To access the Grafana dashboard locally, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;istioctl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; dash grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the Grafana dashboard has opened, navigate to Istio &amp;gt; Istio workload dashboard. You should be able to view the metrics for both clusters; both namespaces, webapp-eks and webapp-aks, should be present in the Namespace dropdown. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fd15edf4e5ccc751712b2b68880bc02a926924239%2Fac717%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fistio-workload-dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fd15edf4e5ccc751712b2b68880bc02a926924239%2Fac717%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fistio-workload-dashboard.png" alt="Istio workload dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2F6820b4418f666cbe4421bb326ecea94b5288dd74%2F727b9%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fistio-workload-dashboard-metrics.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2F6820b4418f666cbe4421bb326ecea94b5288dd74%2F727b9%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fistio-workload-dashboard-metrics.png" alt="Istio workload dashboard metrics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above graphs show various metrics (success rate, request rate, latency, etc.) for service workloads. The frontend service appears as an inbound workload for both backends across clusters.&lt;/p&gt;

&lt;p&gt;To access the Kiali dashboard, use the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;istioctl &lt;span class="nt"&gt;--context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CTX_EKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; dash kiali
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the Kiali dashboard is open:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select Graph from the left pane.&lt;/li&gt;
&lt;li&gt;Select webapp-eks from the Namespace dropdown.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should be able to see traffic flowing from the frontend in the webapp-eks namespace to the backend in the same namespace, and to the backend in the webapp-aks namespace in the other cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fba8b284f415065fd3583af881df0ad83df5a8c83%2Ff9d7d%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fkiali-dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fba8b284f415065fd3583af881df0ad83df5a8c83%2Ff9d7d%2Fassets%2Fimg%2Fblog%2Fsimplify-multi-cluster-service-mesh-using-admiral%2Fkiali-dashboard.png" alt="Kiali dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: If you don't see the lock (mTLS) icon on the traffic, make sure the Security option is enabled in the Display dropdown.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog post, we covered the importance of a multi-cluster setup, different Istio deployment models, and the advantages of the multi-primary deployment model on different networks. We discussed the challenges associated with multi-primary and how Admiral simplifies multi-cluster Istio configuration management. &lt;/p&gt;

&lt;p&gt;Furthermore, we explored the architecture of Admiral, including its CRDs like dependency and global traffic policy, and showcased a demo on managing microservices on different clouds using Istio and Admiral. &lt;/p&gt;

&lt;p&gt;Lastly, we successfully set up observability using Prometheus federation, and used Grafana and Kiali to visualize application traffic flow, enabling efficient monitoring and troubleshooting in a multi-cluster environment. &lt;/p&gt;

&lt;p&gt;While this blog post showed how easy it is to leverage Admiral with Istio for automatic configuration and service discovery, real-world scenarios with hundreds of services, clusters, and users can be far more complex. Our experienced &lt;a href="https://dev.to/istio-consulting/"&gt;Istio consulting experts&lt;/a&gt; can provide valuable assistance. Our &lt;a href="https://dev.to/istio-support/"&gt;Istio support&lt;/a&gt; team specializes in configuring Istio for large-scale production deployments and excels at resolving emergency conflicts.&lt;/p&gt;

&lt;p&gt;For more assistance, please feel free to reach out and start a conversation with &lt;a href="https://www.linkedin.com/in/nishant-barola-442871a7/" rel="noopener noreferrer"&gt;Nishant Barola&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/dada-gore-37a750173/" rel="noopener noreferrer"&gt;Dada Gore&lt;/a&gt;, who have jointly written this detailed blog post.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/istio-ecosystem/admiral" rel="noopener noreferrer"&gt;Admiral (istio-ecosystem) repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/stefanprodan/podinfo" rel="noopener noreferrer"&gt;Podinfo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://istio.io/latest/docs/ops/deployment/deployment-models/" rel="noopener noreferrer"&gt;Istio deployment models documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/istio-ecosystem/admiral/blob/master/docs/Examples.md" rel="noopener noreferrer"&gt;Admiral examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://istio.io/latest/docs/ops/configuration/telemetry/monitoring-multicluster-prometheus/#production-prometheus-on-an-in-mesh-cluster" rel="noopener noreferrer"&gt;Monitoring multicluster Istio with Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>k8s</category>
      <category>devops</category>
      <category>istio</category>
      <category>servicemesh</category>
    </item>
  </channel>
</rss>
