<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adrien Trouillaud</title>
    <description>The latest articles on DEV Community by Adrien Trouillaud (@adrienjt).</description>
    <link>https://dev.to/adrienjt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F110865%2F9942cbe1-b902-4bcb-a73b-2ada86aa6d64.png</url>
      <title>DEV Community: Adrien Trouillaud</title>
      <link>https://dev.to/adrienjt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adrienjt"/>
    <language>en</language>
    <item>
      <title>Running Argo Workflows Across Multiple Kubernetes Clusters</title>
      <dc:creator>Adrien Trouillaud</dc:creator>
      <pubDate>Tue, 22 Jan 2019 15:06:16 +0000</pubDate>
      <link>https://dev.to/admiralty/running-argo-workflows-across-multiple-kubernetes-clusters-4jap</link>
      <guid>https://dev.to/admiralty/running-argo-workflows-across-multiple-kubernetes-clusters-4jap</guid>
      <description>&lt;p&gt;&lt;a href="https://admiralty.io/blog/running-argo-workflows-across-multiple-kubernetes-clusters/" rel="noopener noreferrer"&gt;Originally published in Admiralty's blog.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We recently &lt;a href="https://github.com/admiraltyio/multicluster-scheduler" rel="noopener noreferrer"&gt;open-sourced multicluster-scheduler&lt;/a&gt;, a system of Kubernetes controllers that intelligently schedules workloads across clusters. In this blog post, we will use it with &lt;a href="https://argoproj.github.io/argo" rel="noopener noreferrer"&gt;Argo&lt;/a&gt; to run multicluster workflows (pipelines, DAGs, ETLs) that better utilize resources and/or combine data from different regions or clouds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Most enterprises that use Kubernetes manage multiple clusters&lt;/strong&gt;. &lt;a href="https://dev.to/blog/introducing-multicluster-controller/"&gt;For various reasons&lt;/a&gt;, you may have one or several clusters per team, region, environment, or combination thereof. Your clusters may be hosted by different cloud providers—in a multicloud infrastructure—and/or on premises—in a hybrid infrastructure. The benefits of isolation, however, come at the expense of, &lt;a href="https://dev.to/blog/introducing-multicluster-controller/"&gt;among other things&lt;/a&gt;, &lt;strong&gt;reduced &lt;a href="https://en.wikipedia.org/wiki/Bin_packing_problem" rel="noopener noreferrer"&gt;bin-packing&lt;/a&gt; efficiency and data fragmentation&lt;/strong&gt;. Let's explore two scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario A: You need to run a large parallel workflow&lt;/strong&gt;, e.g., a machine learning training pipeline, which requires more resources than are available in your team's cluster. You could scale out to go fast, or limit parallelism to save money. Meanwhile, available resources in other teams' clusters sit idle. Multicluster-scheduler allows you to elect pods to be delegated to other clusters, where resources are available, from the comfort of your own cluster, with a single pod or pod template annotation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario B: You need to run a workflow that combines data from multiple clouds or regions&lt;/strong&gt;. Running some steps closer to their data sources is either more efficient or outright required. To optimize throughput or save on data egress charges, you may want to compress or aggregate the data before loading the results closer to you. Or, to respect privacy regulations and minimize your attack surface, you may want to anonymize the data as far upstream as possible. You could deploy remote services or functions and call them from your workflow, but that would be complicated. Multicluster-scheduler allows you to simply specify which cluster a pod should run in, again with a single pod or pod template annotation.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Multicluster Pod's Journey
&lt;/h2&gt;

&lt;p&gt;Here's a quick summary of a multicluster pod's journey.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When a pod is created with the &lt;code&gt;multicluster.admiralty.io/elect=""&lt;/code&gt; annotation, the multicluster-scheduler agent's &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="noopener noreferrer"&gt;mutating pod admission webhook&lt;/a&gt; replaces the pod's containers by a dummy &lt;a href="https://hub.docker.com/_/busybox" rel="noopener noreferrer"&gt;busybox&lt;/a&gt; that just waits to be killed. The original spec is saved for later as another annotation.&lt;/li&gt;
&lt;li&gt;We call the resulting pod a &lt;strong&gt;proxy pod&lt;/strong&gt;. The agent then sends an &lt;strong&gt;observation&lt;/strong&gt; of the proxy pod to the scheduler's cluster, which can be the same cluster. The agent also watches other pods, nodes, and node pools and sends observations of them to the scheduler's cluster to guide its decisions.&lt;/li&gt;
&lt;li&gt;The scheduler creates a &lt;strong&gt;delegate pod decision&lt;/strong&gt; in its own cluster. If the original pod was annotated with &lt;code&gt;multicluster.admiralty.io/clustername=foo&lt;/code&gt;, the delegate pod decision is targeted at cluster "foo". Otherwise, the scheduler targets the cluster that could accommodate the most replicas of our pod, based on current observations. More advanced scheduling options are &lt;a href="https://github.com/admiraltyio/multicluster-scheduler#roadmap" rel="noopener noreferrer"&gt;in the works&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The agent in the target cluster sees the decision and creates a &lt;strong&gt;delegate pod&lt;/strong&gt;. The delegate pod has the same spec as the original pod.&lt;/li&gt;
&lt;li&gt;An observation of the delegate pod is sent back to the scheduler's cluster. When the delegate pod is annotated, the same annotation is fed back to the proxy pod (e.g., so Argo can read step outputs), and when it succeeds or dies, a signal is sent to the proxy pod's container for it to succeed or die too.&lt;/li&gt;
&lt;/ol&gt;
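Step 1 above hinges on a single annotation. A minimal elected-pod manifest might look like the following sketch (the pod name, image, and requests are illustrative, not taken from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-elected-pod   # illustrative name
  annotations:
    multicluster.admiralty.io/elect: ""                # let the scheduler pick a target cluster
    # multicluster.admiralty.io/clustername: cluster2  # or pin the pod to a specific cluster
spec:
  containers:
  - name: main
    image: busybox
    command: [sleep, "10"]
    resources:
      requests:
        cpu: 100m  # resource requests guide the scheduler's placement decision
```

With the annotation in place, the webhook turns this pod into a proxy pod and the original spec travels to the target cluster as a delegate pod.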

&lt;p&gt;For more details, check out the &lt;a href="https://github.com/admiraltyio/multicluster-scheduler" rel="noopener noreferrer"&gt;README&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/admiraltyio" rel="noopener noreferrer"&gt;
        admiraltyio
      &lt;/a&gt; / &lt;a href="https://github.com/admiraltyio/admiralty" rel="noopener noreferrer"&gt;
        admiralty
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A system of Kubernetes controllers that intelligently schedules workloads across clusters.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Admiralty&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;&lt;em&gt;formerly multicluster-scheduler&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Admiralty is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.&lt;/p&gt;
&lt;p&gt;The documentation hosted at &lt;a href="https://admiralty.io/docs/" rel="nofollow noopener noreferrer"&gt;https://admiralty.io/docs/&lt;/a&gt; is sourced from this repository. The links below point to the local Markdown files. Use them if you're browsing this repo without Internet access; otherwise, &lt;strong&gt;the &lt;a href="https://admiralty.io/docs/" rel="nofollow noopener noreferrer"&gt;hosted version&lt;/a&gt; is easier to navigate&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/introduction.md" rel="noopener noreferrer"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/quick_start.md" rel="noopener noreferrer"&gt;Quick Start&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Concepts
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/concepts/topologies.md" rel="noopener noreferrer"&gt;Multi-Cluster Topologies&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/concepts/authentication.md" rel="noopener noreferrer"&gt;Cross-Cluster Authentication&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/concepts/scheduling.md" rel="noopener noreferrer"&gt;Multi-Cluster Scheduling&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Operator Guide
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/operator_guide/installation.md" rel="noopener noreferrer"&gt;Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/operator_guide/authentication.md" rel="noopener noreferrer"&gt;Configuring Authentication&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltydocs/operator_guide/scheduling.md" rel="noopener noreferrer"&gt;Configuring Scheduling&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltyCONTRIBUTING.md" rel="noopener noreferrer"&gt;Contributor Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltyCHANGELOG.md" rel="noopener noreferrer"&gt;Release Notes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;API Reference
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltycharts/multicluster-scheduler/README.md" rel="noopener noreferrer"&gt;Helm Chart&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/admiraltyio/admiraltyLICENSE" rel="noopener noreferrer"&gt;License&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/admiraltyio/admiralty" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h2&gt;
  
  
  Demonstration
&lt;/h2&gt;

&lt;p&gt;Let's see that in action. Create two clusters, e.g., with &lt;a href="https://kubernetes.io/docs/setup/minikube/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt; or your &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster" rel="noopener noreferrer"&gt;favorite&lt;/a&gt; &lt;a href="https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="noopener noreferrer"&gt;cloud&lt;/a&gt; &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html" rel="noopener noreferrer"&gt;provider&lt;/a&gt;. In this blog post, we'll assume their associated contexts in your kubeconfig are "cluster1" and "cluster2", but we'll use variables so you can use your own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CLUSTER1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster1 &lt;span class="c"&gt;# change me&lt;/span&gt;
&lt;span class="nv"&gt;CLUSTER2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster2 &lt;span class="c"&gt;# change me&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, following the &lt;a href="https://github.com/admiraltyio/multicluster-scheduler#installation" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt;, install the scheduler in cluster1 and the agent in both clusters.&lt;/p&gt;

&lt;p&gt;We also need Argo in one of the clusters. We'll use cluster1 in this guide, but feel free to change the variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;ARGO_CLUSTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$CLUSTER1&lt;/span&gt; &lt;span class="c"&gt;# change me&lt;/span&gt;
&lt;span class="nv"&gt;NON_ARGO_CLUSTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$CLUSTER2&lt;/span&gt; &lt;span class="c"&gt;# change me&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the Argo controller and UI (step 2 of the &lt;a href="https://argoproj.github.io/docs/argo/demo.html" rel="noopener noreferrer"&gt;Argo getting started guide&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_CLUSTER&lt;/span&gt; create ns argo
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_CLUSTER&lt;/span&gt; apply &lt;span class="nt"&gt;-n&lt;/span&gt; argo &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo/v2.2.1/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're not big fans of giving Argo pods admin privileges, as recommended in step 3 of the Argo getting started guide, so we'll use a minimal service account instead. Because pods will run in the two clusters, we need this service account in both:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;ARGO_POD_RBAC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://raw.githubusercontent.com/admiraltyio/multicluster-scheduler/master/config/samples/argo-workflows/_service-account.yaml
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_CLUSTER&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_POD_RBAC&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$NON_ARGO_CLUSTER&lt;/span&gt; apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_POD_RBAC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's also install the Argo CLI locally (it's optional) to submit and track workflows more conveniently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On Mac:&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;argoproj/tap/argo
&lt;span class="c"&gt;# On Linux:&lt;/span&gt;
curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/argo https://github.com/argoproj/argo/releases/download/v2.2.1/argo-linux-amd64
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/local/bin/argo
&lt;span class="c"&gt;# On Windows:&lt;/span&gt;
curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; argo https://github.com/argoproj/argo/releases/download/v2.2.1/argo-windows-amd64 &lt;span class="c"&gt;# and add to your PATH&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now turn any Argo workflow into a multicluster workflow by adding &lt;code&gt;multicluster.admiralty.io&lt;/code&gt; annotations to its pod templates. Also, don't forget to specify resource requests if you want the scheduler to decide where to run your pods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario A: Optimizing a Large Parallel Workflow
&lt;/h3&gt;

&lt;p&gt;A default GKE cluster has three nodes, each with 1 vCPU and 3.75GB of memory, of which 940m vCPU and 2.58GiB of memory are allocatable. The system pods, along with multicluster-scheduler and Argo, already request 1840m vCPU in cluster1 and 1740m vCPU in cluster2, leaving 980m vCPU available in cluster1 and 1080m in cluster2. We don't need to spend extra money on this experiment: we'll model a "large parallel workflow" as 10 parallel steps requiring 200m vCPU each (including 100m for the Argo sidecar).&lt;/p&gt;
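The capacity figures above follow from simple arithmetic; a quick sanity check (all values in millicores, taken from the text: 940m allocatable per node, three nodes per cluster):

```shell
# Allocatable CPU per cluster, minus what system pods, multicluster-scheduler,
# and Argo already request, gives the free capacity quoted in the text.
allocatable=$(( 940 * 3 ))                        # 2820m allocatable per cluster
echo "cluster1 free: $(( allocatable - 1840 ))m"  # -> 980m
echo "cluster2 free: $(( allocatable - 1740 ))m"  # -> 1080m
```

Ten steps at 200m each need 2000m, so neither cluster alone can run them all at once, but the two clusters combined (2060m free) nearly can.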

&lt;p&gt;First, let's run the following single-cluster workflow (also available in the &lt;a href="https://github.com/admiraltyio/multicluster-scheduler/tree/master/config/samples/argo-workflows" rel="noopener noreferrer"&gt;multicluster-scheduler samples directory&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Workflow&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generateName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;singlecluster-parallel-&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;singlecluster-parallel&lt;/span&gt;
  &lt;span class="na"&gt;templates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;singlecluster-parallel&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
        &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
        &lt;span class="na"&gt;withItems&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;5&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;6&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;7&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;9&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;10&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
    &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;sleep&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;10&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt; &lt;span class="c1"&gt;# Note: Argo sidecar adds another 100m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Submit it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;argo &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_CLUSTER&lt;/span&gt; submit &lt;span class="nt"&gt;--serviceaccount&lt;/span&gt; argo-workflow &lt;span class="nt"&gt;--watch&lt;/span&gt; https://raw.githubusercontent.com/admiraltyio/multicluster-scheduler/master/config/samples/argo-workflows/blog-scenario-a-singlecluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the final state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration:            1 minute 16 seconds

STEP                             PODNAME                                  DURATION  MESSAGE
 ✔ singlecluster-parallel-6rtkc
 └-·-✔ sleep(0:0)                singlecluster-parallel-6rtkc-839758060   11s
   ├-✔ sleep(1:1)                singlecluster-parallel-6rtkc-1823198064  12s
   ├-✔ sleep(2:2)                singlecluster-parallel-6rtkc-4064072188  11s
   ├-✔ sleep(3:3)                singlecluster-parallel-6rtkc-2040401880  27s
   ├-✔ sleep(4:4)                singlecluster-parallel-6rtkc-3078784476  27s
   ├-✔ sleep(5:5)                singlecluster-parallel-6rtkc-3529283624  27s
   ├-✔ sleep(6:6)                singlecluster-parallel-6rtkc-3081898924  43s
   ├-✔ sleep(7:7)                singlecluster-parallel-6rtkc-2914639584  43s
   ├-✔ sleep(8:9)                singlecluster-parallel-6rtkc-3024028329  43s
   └-✔ sleep(9:10)               singlecluster-parallel-6rtkc-3224503614  1m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It took the workflow 1 minute 16 seconds to run on cluster1 alone. Cluster1 could only run three steps concurrently, in four waves, which is less than ideal but expected: the 980m vCPU available are fragmented across three "bins" (nodes).&lt;/p&gt;
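The bin-packing effect is easy to illustrate with a hypothetical split of cluster1's 980m of free CPU across its three nodes (the real split depends on where the system pods landed):

```shell
# 980m of free CPU looks like almost 5 steps' worth, but node boundaries
# (bins) limit how many 200m steps actually fit.
step=200   # 100m for the step container + 100m for the Argo sidecar
total=0; fit=0
for f in 340 340 300; do       # free millicores per node (illustrative split)
  total=$(( total + f ))
  fit=$(( fit + f / step ))    # whole steps that fit on this node
done
echo "ignoring node boundaries: $(( total / step )) steps fit"   # -> 4
echo "respecting node boundaries: $fit steps fit"                # -> 3
```

This is why the workflow ran in four waves of at most three steps each.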

&lt;p&gt;Let's annotate our workflow's pod template with &lt;code&gt;multicluster.admiralty.io/elect=""&lt;/code&gt; to make it run on two clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Workflow&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generateName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;multicluster-parallel-&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;multicluster-parallel&lt;/span&gt;
  &lt;span class="na"&gt;templates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;multicluster-parallel&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
        &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
        &lt;span class="na"&gt;withItems&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;5&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;6&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;7&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;8&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;9&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
    &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;sleep&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;10&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt; &lt;span class="c1"&gt;# Note: Argo sidecar adds another 100m&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;multicluster.admiralty.io/elect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Submit it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;argo &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_CLUSTER&lt;/span&gt; submit &lt;span class="nt"&gt;--serviceaccount&lt;/span&gt; argo-workflow &lt;span class="nt"&gt;--watch&lt;/span&gt; https://raw.githubusercontent.com/admiraltyio/multicluster-scheduler/master/config/samples/argo-workflows/blog-scenario-a-multicluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the final state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration:            31 seconds

STEP                            PODNAME                                 DURATION  MESSAGE
 ✔ multicluster-parallel-lmw2d
 └-·-✔ sleep(0:0)               multicluster-parallel-lmw2d-1353848687  12s
   ├-✔ sleep(1:1)               multicluster-parallel-lmw2d-714502387   14s
   ├-✔ sleep(2:2)               multicluster-parallel-lmw2d-894725111   14s
   ├-✔ sleep(3:3)               multicluster-parallel-lmw2d-711387939   13s
   ├-✔ sleep(4:4)               multicluster-parallel-lmw2d-479610983   14s
   ├-✔ sleep(5:5)               multicluster-parallel-lmw2d-1696675651  13s
   ├-✔ sleep(6:6)               multicluster-parallel-lmw2d-1336174783  15s
   ├-✔ sleep(7:7)               multicluster-parallel-lmw2d-2767328819  29s
   ├-✔ sleep(8:9)               multicluster-parallel-lmw2d-3117624962  29s
   └-✔ sleep(9:10)              multicluster-parallel-lmw2d-2469206667  29s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It took the workflow only 31 seconds to run across cluster1 and cluster2. Seven steps were able to run concurrently at first, followed by the three remaining steps. Notice that some of the steps were run in cluster1 and the others in cluster2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_CLUSTER&lt;/span&gt; get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                                      READY     STATUS      RESTARTS   AGE
cluster1-default-multicluster-parallel-lmw2d-1336174783   0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-1696675651   0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-2767328819   0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-3117624962   0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-479610983    0/2       Completed   0          4m
multicluster-parallel-lmw2d-1336174783                    0/2       Completed   0          4m
multicluster-parallel-lmw2d-1353848687                    0/2       Completed   0          4m
multicluster-parallel-lmw2d-1696675651                    0/2       Completed   0          4m
multicluster-parallel-lmw2d-2469206667                    0/2       Completed   0          4m
multicluster-parallel-lmw2d-2767328819                    0/2       Completed   0          4m
multicluster-parallel-lmw2d-3117624962                    0/2       Completed   0          4m
multicluster-parallel-lmw2d-479610983                     0/2       Completed   0          4m
multicluster-parallel-lmw2d-711387939                     0/2       Completed   0          4m
multicluster-parallel-lmw2d-714502387                     0/2       Completed   0          4m
multicluster-parallel-lmw2d-894725111                     0/2       Completed   0          4m
... (and all the pods from the single-cluster workflow)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The five pods whose names are prefixed with "cluster1-default-" are delegate pods. The prefix indicates their origin. The other pods are the proxy pods.&lt;/p&gt;
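The naming convention is mechanical: a delegate's name is the proxy pod's name with an "origin-cluster"-"origin-namespace"- prefix prepended. A quick shell demonstration, using two names from the listing above:

```shell
# Stripping the origin prefix from a delegate pod's name recovers the
# matching proxy pod's name.
proxy=multicluster-parallel-lmw2d-1336174783
delegate=cluster1-default-multicluster-parallel-lmw2d-1336174783
test "${delegate#cluster1-default-}" = "$proxy" && echo "same workload"
```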

&lt;p&gt;In cluster2, there are only delegate pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$NON_ARGO_CLUSTER&lt;/span&gt; get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                                      READY     STATUS      RESTARTS   AGE
cluster1-default-multicluster-parallel-lmw2d-1353848687   0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-2469206667   0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-711387939    0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-714502387    0/2       Completed   0          4m
cluster1-default-multicluster-parallel-lmw2d-894725111    0/2       Completed   0          4m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Scenario B: Multicluster ETL
&lt;/h3&gt;

&lt;p&gt;We will model this scenario with a simple DAG workflow, where steps A and C can run in cluster1, but step B must run in cluster2; step C depends on steps A and B. Note the use of the &lt;code&gt;multicluster.admiralty.io/clustername&lt;/code&gt; pod template annotation to enforce a placement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#  A   B*&lt;/span&gt;
&lt;span class="c1"&gt;#   \ /&lt;/span&gt;
&lt;span class="c1"&gt;#    C&lt;/span&gt;
&lt;span class="c1"&gt;#&lt;/span&gt;
&lt;span class="c1"&gt;# * B must run in cluster2&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Workflow&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generateName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;multicluster-dag-&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;multicluster-dag&lt;/span&gt;
  &lt;span class="na"&gt;templates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;multicluster-dag&lt;/span&gt;
    &lt;span class="na"&gt;dag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A&lt;/span&gt;
        &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;B&lt;/span&gt;
        &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep-remote&lt;/span&gt;
        &lt;span class="na"&gt;arguments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clustername&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster2&lt;/span&gt; &lt;span class="c1"&gt;# change me&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;C&lt;/span&gt;
        &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;A&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;B&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep&lt;/span&gt;
    &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;sleep&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;10&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep-remote&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clustername&lt;/span&gt;
    &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;sleep&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;10&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;multicluster.admiralty.io/elect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
        &lt;span class="na"&gt;multicluster.admiralty.io/clustername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{inputs.parameters.clustername}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;NON_ARGO_CLUSTER&lt;/code&gt; is not equal to "cluster2" in your case, modify the workflow before submitting it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;argo &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$ARGO_CLUSTER&lt;/span&gt; submit &lt;span class="nt"&gt;--serviceaccount&lt;/span&gt; argo-workflow &lt;span class="nt"&gt;--watch&lt;/span&gt; https://raw.githubusercontent.com/admiraltyio/multicluster-scheduler/master/config/samples/argo-workflows/blog-scenario-b.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the final state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration:            26 seconds

STEP                       PODNAME                           DURATION  MESSAGE
 ✔ multicluster-dag-ftrwh
 ├-✔ A                     multicluster-dag-ftrwh-745251266  11s
 ├-✔ B                     multicluster-dag-ftrwh-728473647  12s
 └-✔ C                     multicluster-dag-ftrwh-711696028  12s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that step B was delegated to cluster2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nv"&gt;$NON_ARGO_CLUSTER&lt;/span&gt; get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                                READY     STATUS      RESTARTS   AGE
cluster1-default-multicluster-dag-ftrwh-728473647   0/2       Completed   0          2m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a real-world scenario, you would pipe data between steps using artifact repositories and/or step inputs and outputs.&lt;/p&gt;
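&lt;p&gt;As a minimal sketch (the template and file names are ours), two steps can share a file via Argo's artifact mechanism; in a multicluster setup, the configured artifact repository (e.g., an S3 bucket) must be reachable from all clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# "produce" writes a file and exports it as an artifact
- name: produce
  container:
    image: busybox
    command: [sh, -c, "echo hello | tee /tmp/data.txt"]
  outputs:
    artifacts:
    - name: data
      path: /tmp/data.txt
# "consume" declares the artifact as an input and reads it
- name: consume
  inputs:
    artifacts:
    - name: data
      path: /tmp/data.txt
  container:
    image: busybox
    command: [cat, /tmp/data.txt]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In a DAG, a downstream task would receive the artifact via &lt;code&gt;arguments.artifacts&lt;/code&gt; with &lt;code&gt;from: "{{tasks.produce.outputs.artifacts.data}}"&lt;/code&gt;.&lt;/p&gt;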

&lt;h2&gt;
  
  
  Discussion: Pod-Level Federation
&lt;/h2&gt;

&lt;p&gt;As we've demonstrated, multicluster-scheduler integrates nicely with Argo. We didn't have to modify the Argo source code, and simple annotations to the workflows' manifests were enough to make them run across clusters. This would not have been possible with a project like &lt;a href="https://github.com/kubernetes-sigs/federation-v2" rel="noopener noreferrer"&gt;Federation v2&lt;/a&gt;, which requires clients to use new, federated APIs, e.g., federated deployment templates, placements, and overrides. The main advantage of multicluster-scheduler is that it federates clusters at the pod level, &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="noopener noreferrer"&gt;"the smallest and simplest unit in the Kubernetes object model."&lt;/a&gt; The entire Kubernetes ecosystem revolves around pods. By choosing pods as multicluster-scheduler's unit, we're enabling a series of "for free", loosely coupled integrations. Multicluster &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noopener noreferrer"&gt;Horizontal Pod Autoscaler&lt;/a&gt; with custom and external metrics &lt;em&gt;should&lt;/em&gt; work out-of-the-box (we'll prove that soon), while integrations with &lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt; and &lt;a href="https://cloud.google.com/knative/" rel="noopener noreferrer"&gt;Knative&lt;/a&gt; are &lt;a href="https://github.com/admiraltyio/multicluster-scheduler#roadmap" rel="noopener noreferrer"&gt;in the works&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Multicluster-scheduler can run Argo workflows across Kubernetes clusters, delegating pods to where resources are available, or as specified by the user. It can make parallel workflows run faster without scaling out clusters, and it simplifies multi-region and multicloud ETL processes. This integration was made possible by multicluster-scheduler's architecture centered around pods, "the smallest and simplest unit in the Kubernetes object model." The way forward is exciting and includes, among other things, more integrations with the cloud native ecosystem, and advanced scheduling. We're curious to hear the thoughts and feedback of the community, and we welcome contributions!&lt;/p&gt;

&lt;h2&gt;
  
  
  Acknowledgements
&lt;/h2&gt;

&lt;p&gt;Many thanks to the &lt;a href="https://github.com/argoproj/argo/graphs/contributors" rel="noopener noreferrer"&gt;Argo authors&lt;/a&gt; for designing a great cloud-native workflow engine, and to the authors of &lt;a href="https://github.com/kubernetes-sigs/controller-runtime/graphs/contributors" rel="noopener noreferrer"&gt;controller-runtime&lt;/a&gt;, which powers a lot of the components of multicluster-scheduler.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Introducing Multicluster-Service-Account</title>
      <dc:creator>Adrien Trouillaud</dc:creator>
      <pubDate>Mon, 12 Nov 2018 10:05:38 +0000</pubDate>
      <link>https://dev.to/admiralty/introducing-multicluster-service-account-1g1n</link>
      <guid>https://dev.to/admiralty/introducing-multicluster-service-account-1g1n</guid>
      <description>&lt;p&gt;&lt;a href="https://admiralty.io/blog/introducing-multicluster-service-account/" rel="noopener noreferrer"&gt;Originally published in Admiralty's blog.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you operate multiple Kubernetes clusters, and need a pod in a cluster to call the Kubernetes API server of another cluster, we know your pain. Up until now, you either had to rely on heterogeneous identity providers outside Kubernetes, or repurpose remote service accounts the hard way.&lt;/p&gt;

&lt;p&gt;A Kubernetes cluster typically authenticates API clients using two or more &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="noopener noreferrer"&gt;authentication modules&lt;/a&gt;. One module authenticates Kubernetes &lt;em&gt;service accounts&lt;/em&gt;, defined in-cluster; the other modules authenticate &lt;em&gt;users&lt;/em&gt;, defined outside the cluster.&lt;/p&gt;

&lt;p&gt;User authentication depends on your Kubernetes distribution: it can simply be based on static files containing lists of passwords, tokens, and/or client certificates; it often uses OpenID Connect; it can also use other methods (LDAP, SAML, Kerberos, etc.) via custom webhooks and proxies. Even with the common OpenID Connect module, the backing identity provider depends on your distribution: Google Cloud IAM, Azure Active Directory, among others.&lt;/p&gt;

&lt;p&gt;Therefore, in a hybrid and/or multicloud architecture, Kubernetes user authentication can be heterogeneous. A pod in a cluster trying to call the API servers of other clusters as a "user" may have to log in to several identity providers. Even if you use a single identity provider across clusters, you're still responsible for mounting secrets inside pods—&lt;a href="https://github.com/kubernetes/client-go/tree/master/plugin/pkg/client/auth" rel="noopener noreferrer"&gt;client-go's authentication automation&lt;/a&gt; is designed for out-of-cluster, human-facing tools. To keep things simple, we need to agree on a common, automated multicluster identity provider for pods.&lt;/p&gt;

&lt;p&gt;Kubernetes may well be the identity provider we're looking for. It already provides identities to pods: each pod authenticates with its local Kubernetes API server using a &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="noopener noreferrer"&gt;service account&lt;/a&gt; in its namespace. Kubernetes even automates token generation and pod configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The service account controller ensures that at least a "default" service account exists within each namespace, though more can be created.&lt;/li&gt;
&lt;li&gt;The token controller generates a token for each service account and puts it in a secret in the same namespace.&lt;/li&gt;
&lt;li&gt;The service account admission controller automounts a service account secret in each pod, either for the "default" service account or the one specified in the pod spec.&lt;/li&gt;
&lt;/ol&gt;
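&lt;p&gt;To make this automation concrete, here is a minimal pod sketch (the pod name is ours). Without declaring any volume, the container can read the token that the admission controller mounted at the conventional path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: default # or any service account in the pod's namespace
  containers:
  - name: main
    image: busybox
    # the admission controller automounts the token secret at this path
    command: [cat, /var/run/secrets/kubernetes.io/serviceaccount/token]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;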

&lt;p&gt;Combined with &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noopener noreferrer"&gt;role-based access control (RBAC)&lt;/a&gt;, service accounts are a powerful identity and access management (IAM) solution for the Kubernetes API. However, although a service account's token can be used outside its namespace and cluster, the automated token generation and volume mounts only work in the service account's namespace; it is your responsibility to get an up-to-date token and configure your client accordingly.&lt;/p&gt;
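&lt;p&gt;For example, a minimal sketch (the "pod-lister" name is ours) granting a service account read-only access to pods in its namespace takes a Role and a RoleBinding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-lister
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: pod-lister
subjects:
- kind: ServiceAccount
  name: pod-lister
  namespace: default
roleRef:
  kind: Role
  name: pod-lister
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;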

&lt;p&gt;A few multicluster tools have repurposed kubeconfig files to store remote service account tokens, because they can also store remote API server URLs and root certificates, and they are supported by client-go. That's what we used at first &lt;a href="https://admiralty.io/blog/introducing-multicluster-controller/" rel="noopener noreferrer"&gt;when we open-sourced multicluster-controller&lt;/a&gt;, but that was a bit of a hack. Kubeconfig files are designed to be used out-of-cluster by Kubernetes clients like kubectl to authenticate actual users, but nothing prevents us from mounting them inside pods to authenticate remote service accounts. However, that solves only part of the problem: kubeconfig files must still be generated and mounted inside pods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/admiraltyio/multicluster-service-account" rel="noopener noreferrer"&gt;Today, we're open-sourcing multicluster-service-account&lt;/a&gt;&lt;/strong&gt;, leveraging service accounts as multicluster identities, with all the necessary automation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;a ServiceAccountImport custom resource definition (CRD) and controller to import remote service accounts (and their secrets);&lt;/li&gt;
&lt;li&gt;a dynamic admission webhook to automount service account import secrets inside annotated pods, the same way regular service accounts are automounted inside pods;&lt;/li&gt;
&lt;li&gt;a Go library of helper methods to generate client-go configurations from service account imports (as well as generic methods to fall back to kubeconfig contexts and regular service accounts).&lt;/li&gt;
&lt;/ol&gt;
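&lt;p&gt;For illustration, importing a "pod-lister" service account from cluster2 could look roughly like this (an illustrative sketch from memory; the README below is authoritative for the exact schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# illustrative only -- see the README for the actual ServiceAccountImport schema
apiVersion: multicluster.admiralty.io/v1alpha1
kind: ServiceAccountImport
metadata:
  name: cluster2-default-pod-lister
spec:
  clusterName: cluster2
  namespace: default
  name: pod-lister
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A pod annotated with the import's name then gets the corresponding remote token secret automounted by the admission webhook, mirroring how regular service account secrets are mounted.&lt;/p&gt;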


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/admiraltyio" rel="noopener noreferrer"&gt;
        admiraltyio
      &lt;/a&gt; / &lt;a href="https://github.com/admiraltyio/multicluster-service-account" rel="noopener noreferrer"&gt;
        multicluster-service-account
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Import and Automount Remote Kubernetes Service Accounts
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Multicluster-Service-Account&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;Multicluster-service-account makes it easy for pods in a cluster to call the Kubernetes APIs of other clusters. It imports remote service account tokens into local secrets, and automounts them inside annotated pods.&lt;/p&gt;
&lt;p&gt;Multicluster-service-account can be used to run any Kubernetes client from another cluster. It can also be used to build operators that control Kubernetes resources across multiple clusters, e.g., with &lt;a href="https://github.com/admiraltyio/multicluster-controller" rel="noopener noreferrer"&gt;multicluster-controller&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Why? Check out &lt;a href="https://admiralty.io/blog/introducing-multicluster-service-account" rel="nofollow noopener noreferrer"&gt;Admiralty's blog post introducing multicluster-service-account&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;How it Works&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Multicluster-service-account consists of:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A binary, &lt;code&gt;kubemcsa&lt;/code&gt;, to bootstrap clusters, allowing them to import service account secrets from one another
&lt;ul&gt;
&lt;li&gt;After &lt;a href="https://github.com/admiraltyio/multicluster-service-account#step-1-installation" rel="noopener noreferrer"&gt;installing&lt;/a&gt; multicluster-service-account in cluster1, allowing cluster1 to import service account secrets from cluster2 is as simple as running
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;kubemcsa bootstrap --target-context cluster1 --source-context cluster2&lt;/pre&gt;

&lt;/div&gt;
if you work with multiple contexts in a single default kubeconfig file, or
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;kubemcsa bootstrap --target-kubeconfig cluster1 --source-kubeconfig cluster2&lt;/pre&gt;

&lt;/div&gt;
if you work with multiple kubeconfig files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;a ServiceAccountImport custom…&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/admiraltyio/multicluster-service-account" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Check out the &lt;a href="https://github.com/admiraltyio/multicluster-service-account/blob/master/README.md" rel="noopener noreferrer"&gt;README&lt;/a&gt; for more details, including setup instructions. &lt;a href="https://github.com/admiraltyio/multicluster-service-account/blob/master/CONTRIBUTING.md" rel="noopener noreferrer"&gt;Contributions&lt;/a&gt;, &lt;a href="https://github.com/admiraltyio/multicluster-service-account/issues" rel="noopener noreferrer"&gt;feature requests and bug reports&lt;/a&gt; are always welcome.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: The problem space of multicluster-service-account is the authentication of Kubernetes clients only. For service-to-service multicluster authentication, we recommend &lt;a href="https://istio.io/docs/setup/kubernetes/multicluster-install/" rel="noopener noreferrer"&gt;Istio multicluster&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Introducing Multicluster-Controller</title>
      <dc:creator>Adrien Trouillaud</dc:creator>
      <pubDate>Tue, 30 Oct 2018 14:11:39 +0000</pubDate>
      <link>https://dev.to/admiralty/introducing-multicluster-controller-39f1</link>
      <guid>https://dev.to/admiralty/introducing-multicluster-controller-39f1</guid>
      <description>&lt;p&gt;&lt;a href="https://admiralty.io/blog/introducing-multicluster-controller/"&gt;&lt;em&gt;Originally published in Admiralty's blog.&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hybrid and multicloud architectures are becoming prevalent, either as a &lt;a href="https://thenewstack.io/why-enterprises-are-adopting-a-multicloud-strategy/"&gt;strategy&lt;/a&gt; or simply as a result of &lt;a href="https://www.reddit.com/r/devops/comments/91afzz/why_multicloud/e2x156y/"&gt;history and/or mergers and acquisitions&lt;/a&gt;. Luckily, to help reduce the inherent complexity, Kubernetes is standardizing the way clouds are operated: the same workflow can be used to manage resources in any cloud, whether public or private. However, managing workloads &lt;em&gt;across&lt;/em&gt; clouds is still a challenge. Technically, you &lt;em&gt;could&lt;/em&gt; create a single Kubernetes cluster encompassing your entire infrastructure, but that could invalidate some of the &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/#scope-of-a-single-cluster"&gt;assumptions&lt;/a&gt; made in the design of Kubernetes itself. Also, you would miss out on &lt;a href="https://kubernetes.io/docs/setup/pick-right-solution/"&gt;turn-key Kubernetes distributions&lt;/a&gt;. A more common approach is to operate multiple clusters.&lt;/p&gt;

&lt;p&gt;Clusters are isolated from each other by default, which helps with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fault isolation;&lt;/li&gt;
&lt;li&gt;trust boundaries;&lt;/li&gt;
&lt;li&gt;only paying for a top-tier service level in production;&lt;/li&gt;
&lt;li&gt;enforcing geographical regulations;&lt;/li&gt;
&lt;li&gt;etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, cluster boundaries get in the way when you'd like to manage the following globally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scheduling and autoscaling (ensuring high availability and low latency at the lowest cost);&lt;/li&gt;
&lt;li&gt;service discovery;&lt;/li&gt;
&lt;li&gt;storage;&lt;/li&gt;
&lt;li&gt;monitoring;&lt;/li&gt;
&lt;li&gt;backups and migrations;&lt;/li&gt;
&lt;li&gt;policy enforcement;&lt;/li&gt;
&lt;li&gt;etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We need tools to manage resources across clusters. Specific solutions exist. Notably, &lt;a href="https://github.com/kubernetes-sigs/federation-v2"&gt;federation-v2&lt;/a&gt; can sync workloads and route traffic across clusters. To do so, it uses the concepts of Templates, Placements, and Overrides, propagating resources with a push reconciler.&lt;/p&gt;

&lt;p&gt;While building a multicluster scheduler at &lt;a href="https://admiralty.io"&gt;Admiralty (stay tuned)&lt;/a&gt;, we needed a lower-level abstraction: namely the controller pattern (sometimes called the operator pattern), but for resources in multiple clusters. We needed a tool like the &lt;a href="https://github.com/operator-framework/operator-sdk"&gt;Operator SDK&lt;/a&gt; or &lt;a href="https://github.com/kubernetes-sigs/kubebuilder"&gt;Kubebuilder&lt;/a&gt; (see &lt;a href="https://admiralty.io/blog/kubernetes-custom-resource-controller-and-operator-development-tools/"&gt;comparison in a previous blog post&lt;/a&gt;), but supporting multiple clusters. Unfortunately, their designs don't allow that. Their APIs would have to change significantly. So, rather than submit a pull request, we decided to make our own tool. Luckily, we were able to leverage parts of &lt;a href="https://github.com/kubernetes-sigs/controller-runtime"&gt;controller-runtime&lt;/a&gt;, the library powering Kubebuilder and now also the Operator SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Today, we're open-sourcing &lt;a href="https://github.com/admiraltyio/multicluster-controller"&gt;multicluster-controller&lt;/a&gt;.&lt;/strong&gt; Check out the README for more details on how it works, including how it can be used with custom resources (using CRDs). We've also included a few examples. We hope that the community will find the project useful. (Anyone volunteering to build a multicluster Prometheus operator?)&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/admiraltyio"&gt;
        admiraltyio
      &lt;/a&gt; / &lt;a href="https://github.com/admiraltyio/multicluster-controller"&gt;
        multicluster-controller
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A Library for Building Hybrid and Multicloud Kubernetes Operators
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Multicluster-Controller&lt;/h1&gt;
&lt;p&gt;Multicluster-controller is a Go library for building Kubernetes controllers that need to watch resources in multiple clusters. It uses the best parts of &lt;a href="https://github.com/kubernetes-sigs/controller-runtime"&gt;controller-runtime&lt;/a&gt; (the library powering &lt;a href="https://github.com/kubernetes-sigs/kubebuilder"&gt;kubebuilder&lt;/a&gt; and now &lt;a href="https://github.com/operator-framework/operator-sdk"&gt;operator-sdk&lt;/a&gt;) and replaces its API (the &lt;code&gt;manager&lt;/code&gt;, &lt;code&gt;controller&lt;/code&gt;, &lt;code&gt;reconcile&lt;/code&gt;, and &lt;code&gt;handler&lt;/code&gt; packages) to support multicluster operations.&lt;/p&gt;
&lt;p&gt;Why? Check out &lt;a href="https://admiralty.io/blog/introducing-multicluster-controller/" rel="nofollow"&gt;Admiralty's blog post introducing multicluster-controller&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://raw.githubusercontent.com/admiraltyio/multicluster-controller/master/#how-it-works"&gt;How it Works&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://raw.githubusercontent.com/admiraltyio/multicluster-controller/master/#getting-started"&gt;Getting Started&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://raw.githubusercontent.com/admiraltyio/multicluster-controller/master/#configuration"&gt;Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://raw.githubusercontent.com/admiraltyio/multicluster-controller/master/#usage-with-custom-resources"&gt;Usage with Custom Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://raw.githubusercontent.com/admiraltyio/multicluster-controller/master/#api-reference"&gt;API Reference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
How it Works&lt;/h2&gt;
&lt;p&gt;Here is a minimal multicluster controller that watches pods in two clusters. On pod events, it simply logs the pod's cluster name, namespace, and name. In a way, the only thing controlled by this controller is the standard output, but it illustrates a basic scaffold:&lt;/p&gt;
&lt;div class="highlight highlight-source-go"&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;package&lt;/span&gt; main
&lt;span class="pl-k"&gt;import&lt;/span&gt; (
    &lt;span class="pl-s"&gt;"context"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"log"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"k8s.io/api/core/v1"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"k8s.io/sample-controller/pkg/signals"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"admiralty.io/multicluster-controller/pkg/cluster"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"admiralty.io/multicluster-controller/pkg/controller"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"admiralty.io/multicluster-controller/pkg/manager"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"admiralty.io/multicluster-controller/pkg/reconcile"&lt;/span&gt;
    &lt;span class="pl-s"&gt;"admiralty.io/multicluster-service-account/pkg/config"&lt;/span&gt;
)
&lt;span class="pl-k"&gt;func&lt;/span&gt; &lt;span class="pl-en"&gt;main&lt;/span&gt;() {
    &lt;span class="pl-s1"&gt;stopCh&lt;/span&gt; &lt;span class="pl-c1"&gt;:=&lt;/span&gt; &lt;span class="pl-s1"&gt;signals&lt;/span&gt;.&lt;span class="pl-en"&gt;SetupSignalHandler&lt;/span&gt;()
    &lt;span class="pl-s1"&gt;ctx&lt;/span&gt;, &lt;span class="pl-s1"&gt;cancel&lt;/span&gt;&lt;/pre&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/admiraltyio/multicluster-controller"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;em&gt;Warning:&lt;/em&gt; though we're already using multicluster-controller internally with great success, the project is still in its infancy and the API may break in future releases. Also, a few must-have features are still in the works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cross-cluster &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/"&gt;garbage collection&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;integration with &lt;a href="https://github.com/kubernetes/cluster-registry"&gt;cluster-registry&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;service accounts and RBAC generators;&lt;/li&gt;
&lt;li&gt;validating and mutating &lt;a href="https://book.kubebuilder.io/beyond_basics/what_is_a_webhook.html"&gt;webhooks&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://godoc.org/k8s.io/client-go/tools/leaderelection"&gt;leader election&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;more tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/admiraltyio/multicluster-controller/blob/master/CONTRIBUTING.md"&gt;Contributions&lt;/a&gt;, &lt;a href="https://github.com/admiraltyio/multicluster-controller/issues"&gt;feature requests and bug reports&lt;/a&gt; are welcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Acknowledgements
&lt;/h2&gt;

&lt;p&gt;Many thanks to all the Kubernetes authors, especially those of &lt;a href="https://github.com/kubernetes-sigs/controller-runtime/graphs/contributors"&gt;controller-runtime&lt;/a&gt;, &lt;a href="https://github.com/kubernetes/apimachinery/graphs/contributors"&gt;apimachinery&lt;/a&gt;, and &lt;a href="https://github.com/kubernetes/client-go/graphs/contributors"&gt;client-go&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
