<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nigel Brown</title>
    <description>The latest articles on DEV Community by Nigel Brown (@nbrownuk).</description>
    <link>https://dev.to/nbrownuk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F973589%2Ff1a03299-dadf-48fd-bf93-a6132bf2eb5c.jpeg</url>
      <title>DEV Community: Nigel Brown</title>
      <link>https://dev.to/nbrownuk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nbrownuk"/>
    <language>en</language>
    <item>
      <title>Using Frp to Publicly Expose Services in a Local Kubernetes Cluster</title>
      <dc:creator>Nigel Brown</dc:creator>
      <pubDate>Thu, 27 Jun 2024 09:04:15 +0000</pubDate>
      <link>https://dev.to/nbrownuk/using-frp-to-publicly-expose-services-in-a-local-kubernetes-cluster-di4</link>
      <guid>https://dev.to/nbrownuk/using-frp-to-publicly-expose-services-in-a-local-kubernetes-cluster-di4</guid>
      <description>&lt;p&gt;I recently had cause to test out &lt;a href="https://cert-manager.io/"&gt;cert-manager&lt;/a&gt; using the Kubernetes &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt;, but wanted to do this using a local cluster on my laptop, based on &lt;a href="https://kind.sigs.k8s.io/"&gt;kind&lt;/a&gt;. I wanted cert-manager to automatically acquire an X.509 certificate on behalf of an application service running in the cluster, using the &lt;a href="https://en.wikipedia.org/wiki/Automatic_Certificate_Management_Environment"&gt;ACME protocol&lt;/a&gt;. This isn’t straightforward to achieve, as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The cluster, hosting cert-manager as an ACME client, runs on a laptop on a private network behind a router, using NAT.&lt;/li&gt;
&lt;li&gt;The certificate authority (CA) issuing the X.509 certificate, which provides the ACME server component, needs to present a domain validation challenge to cert-manager, from the internet.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Essentially, the problem is that the cluster is on a private network, but needs to be addressable via a registered domain name, from the internet. How best to achieve this?&lt;/p&gt;

&lt;h2&gt;Options&lt;/h2&gt;

&lt;p&gt;There are probably a million and one ways to achieve this, all with varying degrees of complexity. We could use &lt;a href="https://www.howtogeek.com/428413/what-is-reverse-ssh-tunneling-and-how-to-use-it/"&gt;SSH reverse tunnelling&lt;/a&gt;, a commercial offering like &lt;a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"&gt;Cloudflare Tunnel&lt;/a&gt;, or one of the myriad &lt;a href="https://github.com/anderspitman/awesome-tunneling?tab=readme-ov-file#open-source-at-least-with-a-reasonably-permissive-license"&gt;open source tunnelling solutions&lt;/a&gt; available. I chose to use &lt;a href="https://github.com/fatedier/frp"&gt;frp&lt;/a&gt;, “a fast reverse proxy that allows you to expose a local server located behind a NAT or firewall to the internet”. With 82,000 GitHub stars, it seems popular!&lt;/p&gt;

&lt;h2&gt;Frp&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I06zcnDF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://windsock.io/images/frp-client-server.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I06zcnDF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://windsock.io/images/frp-client-server.png" alt="frp-client-server" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Frp uses a client/server model to establish a connection at either end of a tunnel: the server component sits at the end that is publicly exposed to the internet, and the client on the private network behind the router. Client traffic arriving at the frp server is routed through the tunnel to the frp client, according to the configuration provided for the client and server components.&lt;/p&gt;

&lt;h3&gt;Frp Server Configuration&lt;/h3&gt;

&lt;p&gt;For my scenario, I chose to host the frp server on a DigitalOcean &lt;a href="https://www.digitalocean.com/products/droplets"&gt;droplet&lt;/a&gt;, with a domain name set to resolve to the droplet’s public IP address. This is the domain name that will appear in the certificate’s &lt;a href="https://en.wikipedia.org/wiki/Subject_Alternative_Name"&gt;Subject Alternative Name&lt;/a&gt; (SAN). The configuration file for the server looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Server configuration file -&amp;gt; /home/frps/frps.toml

# Bind address and port for frp server and client communication
bindAddr = "0.0.0.0"
bindPort = 7000

# Token for authenticating with client
auth.token = "CH6JuHAJFDNoieah"

# Configuration for frp server dashboard (optional)
webServer.addr = "0.0.0.0"
webServer.port = 7500
webServer.user = "admin"
webServer.password = "NGe1EFQ7w0q0smJm"

# Ports for virtual hosts (applications running in Kubernetes)
vhostHTTPPort = 80
vhostHTTPSPort = 443

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this simple scenario, the configuration provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the interfaces and port number through which the frp client interacts with the server&lt;/li&gt;
&lt;li&gt;a token used by the client and the server for authenticating with each other&lt;/li&gt;
&lt;li&gt;access details for the server dashboard that shows active connections&lt;/li&gt;
&lt;li&gt;the ports the server will listen on for virtual host traffic (&lt;code&gt;80&lt;/code&gt; for HTTP and &lt;code&gt;443&lt;/code&gt; for HTTPS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because this setup is temporary, and to keep things relatively easy, the frp server can be run as a container rather than installing the binary on the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --restart always --name frps \
    -p 7000:7000 \
    -p 7500:7500 \
    -p 80:80 \
    -p 443:443 \
    -v /home/frps/frps.toml:/etc/frps.toml \
    ghcr.io/fatedier/frps:v0.58.1 -c /etc/frps.toml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A container image for the frp server is provided by the maintainer of frp, as a &lt;a href="https://github.com/users/fatedier/packages/container/package/frps"&gt;GitHub package&lt;/a&gt;. The Dockerfile from which the image is built can also be &lt;a href="https://raw.githubusercontent.com/fatedier/frp/dev/dockerfiles/Dockerfile-for-frps"&gt;found in the repo&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Kind Cluster&lt;/h3&gt;

&lt;p&gt;Before discussing the client setup, let’s just describe how the cluster is configured on the private network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network inspect -f "{{json .IPAM.Config}}" kind | jq '.[0]'
{
  "Subnet": "172.18.0.0/16",
  "Gateway": "172.18.0.1"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kind uses containers as Kubernetes nodes, which communicate using a (virtual) Docker network provisioned for the purpose, called ‘kind’. In this scenario, it uses the local subnet &lt;code&gt;172.18.0.0/16&lt;/code&gt;. Let’s keep this in the forefront of our minds for a moment, but turn to what’s running in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n envoy-gateway-system get pods,deployments,services
NAME                                            READY   STATUS    RESTARTS   AGE
pod/envoy-default-gw-3d45476e-b5474cb59-cdjps   2/2     Running   0          73m
pod/envoy-gateway-7f58b69497-xxjw5              1/1     Running   0          16h

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/envoy-default-gw-3d45476e   1/1     1            1           73m
deployment.apps/envoy-gateway               1/1     1            1           16h

NAME                                    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)               AGE
service/envoy-default-gw-3d45476e       LoadBalancer   10.96.160.26   172.18.0.6    80:30610/TCP          73m
service/envoy-gateway                   ClusterIP      10.96.19.152   &amp;lt;none&amp;gt;        18000/TCP,18001/TCP   16h
service/envoy-gateway-metrics-service   ClusterIP      10.96.89.244   &amp;lt;none&amp;gt;        19001/TCP             16h

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The target application is running in the default namespace in the cluster, and is exposed using &lt;a href="https://gateway.envoyproxy.io/"&gt;Envoy Gateway&lt;/a&gt;, acting as a &lt;a href="https://gateway-api.sigs.k8s.io/concepts/glossary/#gateway-controller"&gt;gateway controller&lt;/a&gt; for &lt;a href="https://gateway-api.sigs.k8s.io/concepts/api-overview/#resource-model"&gt;Gateway API objects&lt;/a&gt;. The Kubernetes Gateway API supersedes the Ingress API. Envoy Gateway provisions a deployment of &lt;a href="https://www.envoyproxy.io/"&gt;Envoy&lt;/a&gt;, which proxies HTTP/S requests for the application, using a Service of type LoadBalancer. The &lt;a href="https://github.com/kubernetes-sigs/cloud-provider-kind"&gt;Cloud Provider for Kind&lt;/a&gt; is also used to emulate the provisioning of a ‘cloud’ load balancer, which exposes the application beyond the cluster boundary with an IP address. The IP address is &lt;code&gt;172.18.0.6&lt;/code&gt;, which is on the subnet associated with the ‘kind’ network. Remember, this IP address is still inaccessible from the internet, because it’s on the private network, behind the router.&lt;/p&gt;
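
&lt;p&gt;To make this concrete, a minimal pair of Gateway API objects for exposing such an application might look like the following. This is a sketch only; the gateway class name, route name and backend service name are assumptions, not taken from the cluster above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical Gateway and HTTPRoute definitions (names are illustrative)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gw
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: default
spec:
  parentRefs:
  - name: gw
  hostnames:
  - "myhost.xyz"
  rules:
  - backendRefs:
    - name: app
      port: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;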

&lt;p&gt;If the frp client can route traffic received from the frp server, for the domain name, to this IP address on the ‘kind’ network, it should be possible to use cert-manager to request an X.509 certificate using the ACME protocol. Further, it’ll enable anonymous, internet-facing clients to consume applications running in the cluster on the private network, too.&lt;/p&gt;

&lt;h3&gt;Frp Client Configuration&lt;/h3&gt;

&lt;p&gt;Just as the frp server running on the droplet needs a configuration file, so does the frp client running on the laptop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Client configuration file -&amp;gt; /home/frpc/frpc.toml

# Address of the frp server (taken from the environment),
# along with its port
serverAddr = "{{ .Envs.FRP_SERVER_ADDR }}"
serverPort = 7000

# Token for authenticating with server
auth.token = "CH6JuHAJFDNoieah"

# Proxy definition for 'https' traffic, with the destination
# IP address taken from the environment
[[proxies]]
name = "https"
type = "https"
localIP = "{{ .Envs.FRP_PROXY_LOCAL_IP }}"
localPort = 443
customDomains = ["myhost.xyz"]

# Proxy definition for 'http' traffic, with the destination
# IP address taken from the environment
[[proxies]]
name = "http"
type = "http"
localIP = "{{ .Envs.FRP_PROXY_LOCAL_IP }}"
localPort = 80
customDomains = ["myhost.xyz"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration file content is reasonably self-explanatory, but there are a couple of things to point out:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For flexibility, the IP address of the frp server is configured using an environment variable rather than being hard-coded.&lt;/li&gt;
&lt;li&gt;The file contains proxy definitions for both HTTP and HTTPS traffic, for the domain &lt;code&gt;myhost.xyz&lt;/code&gt;. The destination IP address for this proxied traffic is also taken from the environment (and evaluates to &lt;code&gt;172.18.0.6&lt;/code&gt; in this particular scenario).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As the frp client is getting some of its configuration from the environment, the relevant environment variables need to be set. In this case, the frp server is running on a DigitalOcean droplet, which requires &lt;code&gt;doctl&lt;/code&gt; in order to interact with the DigitalOcean API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export FRP_SERVER_ADDR="$(doctl compute droplet list --tag-name kind-lab --format PublicIPv4 --no-header)"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We know the local target IP address already, but this may be different in subsequent test scenarios, so it’s best to query the cluster to retrieve the IP address and set the variable accordingly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export FRP_PROXY_LOCAL_IP="$(kubectl get gtw gw -o yaml | yq '.status.addresses.[] | select(.type == "IPAddress") | .value')"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just as the frp server was deployed as a container, so too can the frp client be (the Docker image for the client is &lt;a href="https://github.com/users/fatedier/packages/container/package/frpc"&gt;here&lt;/a&gt;, and the Dockerfile &lt;a href="https://raw.githubusercontent.com/fatedier/frp/dev/dockerfiles/Dockerfile-for-frpc"&gt;here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --restart always --name frpc \
    --network kind \
    -p 7000:7000 \
    -v /home/frpc/frpc.toml:/etc/frpc.toml \
    -e FRP_SERVER_ADDR \
    -e FRP_PROXY_LOCAL_IP \
    ghcr.io/fatedier/frpc:v0.58.1 -c /etc/frpc.toml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The frp client container is attached to the ‘kind’ network, so that the traffic it proxies can be routed to the IP address defined in the &lt;code&gt;FRP_PROXY_LOCAL_IP&lt;/code&gt; variable: &lt;code&gt;172.18.0.6&lt;/code&gt;. Once deployed, the frp server and client establish a tunnel that proxies HTTP/S requests to the exposed Service in the cluster.&lt;/p&gt;

&lt;p&gt;This enables cert-manager to initiate certificate requests for &lt;a href="https://cert-manager.io/docs/usage/gateway/"&gt;suitably configured&lt;/a&gt; Gateway API objects using the ACME protocol. But, it also allows a CA (for example, &lt;a href="https://letsencrypt.org/"&gt;Let’s Encrypt&lt;/a&gt;), to challenge cert-manager with an &lt;a href="https://letsencrypt.org/docs/challenge-types/#http-01-challenge"&gt;HTTP-01&lt;/a&gt; or &lt;a href="https://letsencrypt.org/docs/challenge-types/#dns-01-challenge"&gt;DNS-01&lt;/a&gt; challenge for proof of domain control. In turn, cert-manager is able to respond to the challenge, and then establish a Kubernetes secret with the TLS artifacts (X.509 certificate and private key). The secret can then be used to establish secure TLS-encrypted communication between clients and the target application in the cluster on the private network.&lt;/p&gt;
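
&lt;p&gt;As a sketch of what ‘suitably configured’ means here, cert-manager’s Gateway API support watches for an annotation on a Gateway object, and populates the Secret referenced by an HTTPS listener. The issuer and secret names below are illustrative assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical Gateway annotated for cert-manager
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gw
  namespace: default
  annotations:
    cert-manager.io/issuer: letsencrypt
spec:
  gatewayClassName: eg
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: myhost.xyz
    tls:
      mode: Terminate
      certificateRefs:
      # Secret created and renewed by cert-manager
      - name: myhost-xyz-tls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;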

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Not everyone wants to spin up a cloud-provided Kubernetes cluster for testing purposes; it can get expensive. Local development cluster tools, such as kind, are designed for just such requirements. But there will always be that one scenario where you need to access the local cluster from the internet, sometimes via an addressable domain name. Frp is just one of the many solutions available, but it’s a comprehensive one, with plenty of features that haven’t been discussed here. Just to be clear, you should read up on &lt;a href="https://github.com/fatedier/frp?tab=readme-ov-file#tls"&gt;securing the connection&lt;/a&gt; between the client and server, to ensure there’s no eavesdropping on the traffic flow.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>frp</category>
      <category>kind</category>
    </item>
    <item>
      <title>Bootstrapping Argo CD</title>
      <dc:creator>Nigel Brown</dc:creator>
      <pubDate>Mon, 19 Dec 2022 15:18:55 +0000</pubDate>
      <link>https://dev.to/nbrownuk/bootstrapping-argo-cd-jc0</link>
      <guid>https://dev.to/nbrownuk/bootstrapping-argo-cd-jc0</guid>
<description>&lt;p&gt;This article follows on from an &lt;a href="https://dev.to/nbrownuk/bootstrapping-gitops-agents-11dj"&gt;introductory article&lt;/a&gt; that discussed the chicken-or-egg paradox of bootstrapping GitOps agents into Kubernetes clusters. It looks at how this applies to &lt;a href="https://argo-cd.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;Argo CD&lt;/a&gt;, one of the prominent GitOps solutions used to automate application deployments to Kubernetes. Argo CD pioneers this evolving approach to application delivery, and is part of a family of tools that co-exist under the Argo umbrella. Collectively, the Argo toolset is a recently &lt;a href="https://www.cncf.io/announcements/2022/12/06/the-cloud-native-computing-foundation-announces-argo-has-graduated/" rel="noopener noreferrer"&gt;graduated project&lt;/a&gt; of the Cloud Native Computing Foundation (CNCF).&lt;/p&gt;

&lt;h2&gt;Installing Argo CD&lt;/h2&gt;

&lt;p&gt;When it comes to bootstrapping Argo CD into a cluster to act as a GitOps agent, there are detailed &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/" rel="noopener noreferrer"&gt;instructions for installation&lt;/a&gt; according to the preferred setup (i.e. multi-tenant, high availability and so on). Kubernetes configuration files (including its custom resource definitions) are maintained at Argo CD's &lt;a href="https://github.com/argoproj/argo-cd/tree/master/manifests" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, and these can be applied directly with &lt;code&gt;kubectl&lt;/code&gt;, or through a &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/" rel="noopener noreferrer"&gt;Kustomization&lt;/a&gt; definition. There's a &lt;a href="https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd" rel="noopener noreferrer"&gt;Helm chart&lt;/a&gt;, too, for anyone who prefers to use the chart packaging metaphor for applications. Using one of these techniques gets Argo CD running in a target cluster.&lt;/p&gt;

&lt;p&gt;So, we know how to install Argo CD, but, what's less clear is how or if Argo CD can manage itself according to &lt;a href="https://opengitops.dev/#principles" rel="noopener noreferrer"&gt;GitOps principles&lt;/a&gt;. The project's documentation is a little opaque in this regard. But, if you look hard enough, you'll find a &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#manage-argo-cd-using-argo-cd" rel="noopener noreferrer"&gt;reference&lt;/a&gt; to managing Argo CD with Argo CD. It suggests using a Kustomization to define how Argo CD is configured to run in a Kubernetes cluster, with the config stored in a Git repo, which is monitored by the installed instance of Argo CD. There's even a &lt;a href="https://cd.apps.argoproj.io/" rel="noopener noreferrer"&gt;live online example&lt;/a&gt; of Argo CD managing itself, alongside a bunch of other applications.&lt;/p&gt;
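
&lt;p&gt;As an illustration of that suggestion, a Kustomization for a self-managed Argo CD might simply layer local resources over the upstream install manifests. This is a sketch; the pinned version is an assumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch -&amp;gt; kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
# Upstream install manifests, pinned to an illustrative version
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.5.4/manifests/install.yaml
# Local additions, e.g. the namespace definition
- namespace.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;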

&lt;p&gt;A deployed agent, with its configuration held in versioned storage, configured to fetch and reconcile that same configuration, fulfils the GitOps principles. What's missing from this tantalising glimpse of self-management, however, is how this works in practice. Let's see if we can unpick this.&lt;/p&gt;

&lt;h2&gt;Applications in Argo CD&lt;/h2&gt;

&lt;p&gt;GitOps agents work with instances of their own &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noopener noreferrer"&gt;custom resources&lt;/a&gt; that enable them to manage applications according to the GitOps principles. For Argo CD, the main custom resource it extends the Kubernetes API with is the 'Application'. In defining an Application object, and having it applied to the cluster, we provide Argo CD with the information it needs to manage that application. Some of the information defined in the object is mandatory, and some is optional. For example, the source location of the application's remote Git or Helm repository that contains its configuration, and the coordinates of the target cluster where the application is to be deployed, are mandatory. Here's an example Application definition that could be used by Argo CD for managing itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-bootstrap/argocd&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/nbrownuk/gitops-bootstrap.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;allowEmpty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This object definition would need to be applied to a cluster running Argo CD, and would then be acted upon by Argo CD's application controller, resulting in periodic fetches of the configuration from the source repo. Any changes made to Argo CD's configuration in the repo (for example, a change in app version) would then be applied to the cluster, resulting in a rolling update of Argo CD. Hence, Argo CD is managing itself.&lt;/p&gt;

&lt;p&gt;But, let's stop to think for a moment; isn't the definition of the Application part of the overall configuration? We've manually applied the Application object definition to the cluster, but it's not stored in our 'source of truth' in the Git repo. If we had to re-create the cluster, would we remember the Git reference to use for the targetRevision; was it the HEAD, a branch, a specific commit? What was the sync policy for &lt;a href="https://assets.digitalocean.com/articles/comics/imperative-declarative-k8s.jpg" rel="noopener noreferrer"&gt;imperative changes&lt;/a&gt; made inside the cluster? This is one of the typical scenarios that inspired the GitOps philosophy - to take away the ambiguity of configuration, by using declarative configuration residing in versioned storage. This includes the configuration that drives the actions of the GitOps agent itself.&lt;/p&gt;

&lt;h2&gt;App of Apps Pattern&lt;/h2&gt;

&lt;p&gt;Using a pattern called '&lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern" rel="noopener noreferrer"&gt;app of apps&lt;/a&gt;', Argo CD can manage itself declaratively. The pattern has broader utility, too: a single 'root' Application definition serves as a pointer to a Git directory path containing one or more further Application definitions. Applying the 'root' definition results in the creation of all the subordinate Application objects in the cluster, which are then acted upon by Argo CD's application controller. One of the subordinate Application objects might be one that defines Argo CD itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── managedapps
│  ├── argocd
│  │  ├── kustomization.yaml
│  │  └── namespace.yaml
│  └── podinfo
│     └── kustomization.yaml
├── rootapp
│  ├── kustomization.yaml
│  └── rootapp.yaml
└── subapps
   ├── argocd.yaml
   ├── kustomization.yaml
   └── podinfo.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the 'rootapp' Application definition (contained in rootapp.yaml) references the path 'subapps', and each of the Application definitions at this location (argocd.yaml, podinfo.yaml) references the corresponding sub-directory under 'managedapps'.&lt;/p&gt;
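
&lt;p&gt;For illustration, the 'root' definition is itself just an Application whose source path points at the directory of subordinate definitions. A sketch, mirroring the tree above (the repo URL reuses the earlier example and is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch -&amp;gt; rootapp/rootapp.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rootapp
  namespace: argocd
spec:
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  project: default
  source:
    # Directory containing the subordinate Application definitions
    path: subapps
    repoURL: https://github.com/nbrownuk/gitops-bootstrap.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;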

&lt;p&gt;If there are just a few applications, this pattern works well. But, if there are a multitude of applications, then maintenance of the plethora of Application definitions can become burdensome. As an enhancement to this pattern, the Argo CD project introduced the &lt;a href="https://argocd-applicationset.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;ApplicationSet&lt;/a&gt; custom resource and controller. An ApplicationSet object defines one or more '&lt;a href="https://argocd-applicationset.readthedocs.io/en/stable/Generators/" rel="noopener noreferrer"&gt;generators&lt;/a&gt;', which generate key/value pairs called parameters. The definition will also contain a '&lt;a href="https://argocd-applicationset.readthedocs.io/en/stable/Template/" rel="noopener noreferrer"&gt;template&lt;/a&gt;', which the ApplicationSet controller will render with the corresponding parameters. In this way, a single ApplicationSet object can automatically spawn numerous Application resources, saving on the administrative overhead of maintaining lots of Application definitions by hand.&lt;/p&gt;
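
&lt;p&gt;As a sketch, a single ApplicationSet with a Git directory generator could spawn one Application per sub-directory of 'managedapps'. The repo URL and layout below are assumptions based on the earlier example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: one Application generated per directory under 'managedapps'
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: managedapps
  namespace: argocd
spec:
  generators:
  - git:
      repoURL: https://github.com/nbrownuk/gitops-bootstrap.git
      revision: HEAD
      directories:
      - path: managedapps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/nbrownuk/gitops-bootstrap.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;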

&lt;p&gt;An ApplicationSet definition can work just as well as the app of apps pattern for managing Argo CD itself. Maybe you'd want to lean towards the ApplicationSet solution, as it's seen as an evolution of the app of apps pattern.&lt;/p&gt;

&lt;h2&gt;Argo CD Autopilot&lt;/h2&gt;

&lt;p&gt;One of the problems with setting up the 'app of apps' or ApplicationSet configuration for Argo CD self-management is that there are a number of manual steps involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establish Git repo with app configuration&lt;/li&gt;
&lt;li&gt;Install Argo CD&lt;/li&gt;
&lt;li&gt;Create secret for trusted access to Git repo&lt;/li&gt;
&lt;li&gt;Apply 'root' Application or ApplicationSet object definition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these steps is particularly onerous, but where there are manual steps there is always the opportunity to introduce error and ambiguity. The more automated everything is, the better the chance of a successful outcome. This is where &lt;a href="https://argocd-autopilot.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;Argo CD Autopilot&lt;/a&gt; adds some value.&lt;/p&gt;

&lt;p&gt;Argo CD Autopilot is a command line tool for bootstrapping Argo CD and managed applications into Kubernetes clusters. It &lt;a href="https://codefresh.io/blog/launching-argo-cd-autopilot-opinionated-way-manage-applications-across-environments-using-gitops-scale/" rel="noopener noreferrer"&gt;originated&lt;/a&gt; from engineers at Codefresh as an ancillary project to Argo CD, and it seeks to take away some of the complexity of bootstrapping an Argo CD GitOps implementation, as well as to provide a sane repo structure for the configuration of the managed applications. It still uses ApplicationSet and Application objects under the covers, but hides the complexity behind its CLI. The price you pay for this simplified user experience is an opinionated approach to the repo structure, and to the way you might approach your deployment workflows.&lt;/p&gt;

&lt;p&gt;In terms of bootstrapping Argo CD itself, however, it's as simple as issuing a command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ argocd-autopilot repo bootstrap \
    --repo https://github.com/nbrownuk/argocd-bootstrap \
    --git-token "$(&amp;lt; ./github-token)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The net result is the creation of a Git repo (hence, the need to supply a &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#about-personal-access-tokens" rel="noopener noreferrer"&gt;personal access token&lt;/a&gt; with 'repo' privileges) containing a prescribed directory structure, with all of the necessary configuration elements to enable self-management of Argo CD. It also results in the deployment of Argo CD to the cluster addressed by your current context. From here you can use the CLI to create projects and applications for management by Argo CD, with the necessary configuration automatically created in the Git repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── apps
│  ├── podinfo
│  │  ├── base
│  │  │  └── kustomization.yaml
│  │  └── overlays
│  │     └── prod
│  │        ├── config.json
│  │        └── kustomization.yaml
│  └── README.md
├── bootstrap
│  ├── argo-cd
│  │  └── kustomization.yaml
│  ├── argo-cd.yaml
│  ├── cluster-resources
│  │  ├── in-cluster
│  │  │  ├── argocd-ns.yaml
│  │  │  └── README.md
│  │  └── in-cluster.json
│  ├── cluster-resources.yaml
│  └── root.yaml
└── projects
   ├── prod.yaml
   └── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The directory structure above is an example of a configured Git repo when using Argo CD Autopilot.&lt;/p&gt;

&lt;p&gt;It's early days for the Argo CD Autopilot project, with some features still missing. For example, its CLI doesn't yet support the use of Helm charts as the embodiment of application configuration, although it's possible to &lt;a href="https://github.com/argoproj-labs/argocd-autopilot/issues/38#issuecomment-1117961569" rel="noopener noreferrer"&gt;work around&lt;/a&gt; this using the limited support provided by Kustomize. Argo CD Autopilot isn't necessarily for everybody, especially given its opinionated approach. But, if you're looking for a convenient method for bootstrapping Argo CD, it does the job perfectly. It even allows you to recover from the loss of a cluster using the CLI, by bootstrapping with reference to the original Git repo as the source of truth. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>gitops</category>
      <category>argocd</category>
    </item>
    <item>
      <title>Bootstrapping Flux</title>
      <dc:creator>Nigel Brown</dc:creator>
      <pubDate>Mon, 12 Dec 2022 12:13:28 +0000</pubDate>
      <link>https://dev.to/nbrownuk/bootstrapping-flux-2m2n</link>
      <guid>https://dev.to/nbrownuk/bootstrapping-flux-2m2n</guid>
      <description>&lt;p&gt;Following on from an &lt;a href="https://dev.to/nbrownuk/bootstrapping-gitops-agents-11dj"&gt;introductory article&lt;/a&gt; on the merits of bootstrapping GitOps agents into Kubernetes clusters, this article looks into how this can be achieved with &lt;a href="https://fluxcd.io/"&gt;Flux&lt;/a&gt;. Flux is a collective name to describe several discrete Kubernetes controllers (also known as the GitOps Toolkit), that each perform a separate, specific GitOps-oriented function. Flux is a &lt;a href="https://www.cncf.io/announcements/2022/11/30/flux-graduates-from-cncf-incubator/"&gt;graduated&lt;/a&gt; Cloud Native Computing Foundation (CNCF) project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Flux
&lt;/h2&gt;

&lt;p&gt;Bootstrapping involves installation, and Flux can be installed into a Kubernetes cluster in a variety of ways. A set of YAML manifests is maintained in the project's &lt;a href="https://github.com/fluxcd/flux2/tree/main/manifests"&gt;GitHub repo&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kustomize build https://github.com/fluxcd/flux2/manifests/install?ref=v0.37.0 | \
    kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But, if Helm is your thing, then there's a &lt;a href="https://artifacthub.io/packages/helm/fluxcd-community/flux2"&gt;packaged Helm chart&lt;/a&gt; maintained for installing Flux, too. There's even a command line tool, unsurprisingly called &lt;a href="https://fluxcd.io/flux/cmd/"&gt;Flux&lt;/a&gt;, that can also be used to get Flux up and running in a cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ flux install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
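
&lt;p&gt;For instance, the Helm route might look something like this, using the chart published by the fluxcd-community project (chart and repo names correct at the time of writing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add fluxcd-community https://fluxcd-community.github.io/helm-charts
$ helm upgrade --install flux2 fluxcd-community/flux2 \
    --namespace flux-system \
    --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;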



&lt;p&gt;Performing any of these actions would provide us with a working Flux deployment in a cluster, consisting of the set of controllers that govern its GitOps actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n flux-system get po
NAME                                          READY   STATUS    RESTARTS   AGE
helm-controller-7b85c84687-dzdbr              1/1     Running   0          81s
image-automation-controller-f45c4b86b-sqqlf   1/1     Running   0          81s
image-reflector-controller-59c894c647-lzb76   1/1     Running   0          80s
kustomize-controller-d88d76876-dqw2m          1/1     Running   0          80s
notification-controller-55df97fbb9-jbrqb      1/1     Running   0          80s
source-controller-5bb5c7b9bd-dsd6w            1/1     Running   0          80s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But none of these methods provides us with a self-managed setup, using immutable, declarative configuration stored under version control.&lt;/p&gt;

&lt;p&gt;It turns out that the Flux project considers bootstrapping, self-management and recovery to be primary concerns of GitOps deployments based on Flux. And, as such, it provides an installation mechanism that speaks to each of these concerns, which is &lt;a href="https://fluxcd.io/flux/installation/#bootstrap"&gt;described prominently&lt;/a&gt; in the documentation. So, how does Flux go about bootstrapping?&lt;/p&gt;

&lt;h2&gt;
  
  
  Bootstrapping with the Flux CLI
&lt;/h2&gt;

&lt;p&gt;The Flux CLI is like a Swiss army knife, in that it provides a large number of useful features. As well as using it to install Flux's controllers into a cluster, you can use it to create the manifests for the custom resource objects that the controllers use to perform GitOps actions, for example. And a whole lot more, besides.&lt;/p&gt;

&lt;p&gt;Perhaps one of its most important functions, however, is its ability to perform a bootstrap of a GitOps environment, using a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ flux bootstrap github \
    --repository flux-bootstrap \
    --owner nbrownuk \
    --personal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a fairly vanilla example of the use of the &lt;code&gt;flux bootstrap&lt;/code&gt; command, but there are a ton of other flags available to nuance the outcome. The command performs a lot of work behind the scenes; it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creates a remote Git repo (GitHub, GitLab, AWS CodeCommit etc.)&lt;/li&gt;
&lt;li&gt;clones the repo&lt;/li&gt;
&lt;li&gt;generates the manifests for each of Flux's controllers&lt;/li&gt;
&lt;li&gt;commits the manifests and pushes them to the remote repo&lt;/li&gt;
&lt;li&gt;installs the controllers into the cluster using the generated manifests&lt;/li&gt;
&lt;li&gt;creates an ssh key pair for trusted communication between Flux's source controller and the remote repo (if you don't bring your own)&lt;/li&gt;
&lt;li&gt;creates a secret embodying the private key, and adds the public key to the remote Git repo as a 'deploy key'&lt;/li&gt;
&lt;li&gt;generates manifests (based on Flux's custom resources) for the purposes of syncing desired state from the remote repo to the cluster&lt;/li&gt;
&lt;li&gt;commits sync manifests and pushes them to the remote repo&lt;/li&gt;
&lt;li&gt;applies the sync manifests to the cluster, and waits for reconciliation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's a lot of heavy lifting done on our behalf, and results in a repo structure that contains all that's needed for Flux to manage itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
└── flux-system
   ├── gotk-components.yaml
   ├── gotk-sync.yaml
   └── kustomization.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 'gotk-components.yaml' manifest contains the custom resource definitions, and the Kubernetes object definitions for each of the controllers. And, the 'gotk-sync.yaml' manifest contains custom resource object definitions, providing Flux with the location of the repo, and the path within the repo that contains the configuration that is to be applied to the cluster. That's all Flux needs to manage itself. And, should you be unfortunate enough to lose your cluster, the bootstrap command can be run again to provision Flux to a new cluster, using the configuration in the repo.&lt;/p&gt;
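
&lt;p&gt;For reference, the definitions in 'gotk-sync.yaml' amount to a pair of custom resource objects along these lines (abbreviated, with field values reflecting the earlier bootstrap command, and API versions correct at the time of writing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: main
  secretRef:
    name: flux-system
  url: ssh://git@github.com/nbrownuk/flux-bootstrap
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./flux-system
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;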

&lt;h2&gt;
  
  
  Updating Flux
&lt;/h2&gt;

&lt;p&gt;As I write, even though Flux has gained graduate status with the CNCF, and has acquired &lt;a href="https://fluxcd.io/adopters/"&gt;a sizeable community of adopters&lt;/a&gt;, the project is still &lt;a href="https://fluxcd.io/roadmap/#flux-gitops-ga-q1-2023"&gt;working towards&lt;/a&gt; General Availability. This means that Flux gets updated on a regular basis, which raises the question: how does Flux get updated after bootstrap?&lt;/p&gt;

&lt;p&gt;Firstly, with each release, a new version of the Flux CLI is provided&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. And so, to achieve an update to Flux after an initial bootstrap, all that's required is a re-run of the bootstrap command with the updated version of the CLI. Execution of the command results in an update to the controller definitions in the 'gotk-components.yaml' file in the remote Git repo, which is subsequently fetched and applied to the cluster by Flux. And this results in a rolling update of each controller running in the cluster.&lt;/p&gt;

&lt;p&gt;It may be preferable to gate this change in the source Git repo using a pull request. The revised manifest for the new controller versions could be generated in advance using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ flux install --export &amp;gt; ./gotk-components.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using GitHub as the Git host provider, this could also be &lt;a href="https://github.com/fluxcd/flux2/tree/main/action#automate-flux-updates"&gt;automated&lt;/a&gt; using &lt;a href="https://docs.github.com/en/actions"&gt;GitHub Actions&lt;/a&gt;, which could easily be adapted for use in other CI/CD systems.&lt;/p&gt;
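
&lt;p&gt;A minimal workflow for this purpose might look something like the following sketch, which regenerates the component manifests with the new CLI version, and raises a pull request with the changes (the 'create-pull-request' action is just one of several ways of achieving the latter):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: update-flux
on:
  workflow_dispatch:
  schedule:
    - cron: '0 6 * * *'
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install the Flux CLI
        uses: fluxcd/flux2/action@main
      - name: Regenerate component manifests
        run: flux install --export &amp;gt; ./flux-system/gotk-components.yaml
      - name: Raise a pull request
        uses: peter-evans/create-pull-request@v4
        with:
          branch: update-flux
          title: Update Flux components
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;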

&lt;p&gt;Bootstrapping and updating in this manner is all well and good if we're happy to use a command line interface. Sometimes, however, DevOps teams prefer to bundle the installation of core services into the infrastructure layer. How do we manage this for Flux?&lt;/p&gt;

&lt;h2&gt;
  
  
  Bootstrapping with Terraform
&lt;/h2&gt;

&lt;p&gt;When provisioning an entire environment for running cloud-native applications, an interesting question arises. What delineates infrastructure from applications? This line can get quite blurred, and different people will have different answers as to where to draw the line. But, the question is an important one, because different tools are generally used for managing the different layers. &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; is pretty ubiquitous when it comes to managing infrastructure, and as we know, Flux is used to manage application deployments to Kubernetes.&lt;/p&gt;

&lt;p&gt;Flux sits on the boundary; you can't manage applications with Flux until it's provisioned, and you can't provision Flux until the Kubernetes cluster is provisioned. Many will choose to drop Flux into the infrastructure layer, and have Terraform handle its bootstrapping, after which, Flux is able to manage itself. Anticipating the need, and recognising the importance of catering for this scenario, the Flux project has provided a &lt;a href="https://registry.terraform.io/providers/fluxcd/flux/latest"&gt;Flux Terraform provider&lt;/a&gt; for provisioning Flux.&lt;/p&gt;

&lt;p&gt;Up to at least v0.21.0, the Flux provider consisted of two data sources: one for generating the YAML for the &lt;a href="https://registry.terraform.io/providers/fluxcd/flux/0.21.0/docs/data-sources/install"&gt;CRDs and deployment configuration&lt;/a&gt;, and one for generating the YAML for the &lt;a href="https://registry.terraform.io/providers/fluxcd/flux/0.21.0/docs/data-sources/sync"&gt;sync activity&lt;/a&gt;. The code used to achieve this is the same code used in the CLI for bootstrapping Flux. But, generating the Kubernetes configuration doesn't get Flux bootstrapped, and the user is required to make use of other Terraform providers to store the configuration in a Git source, and then to get it applied to a cluster.&lt;/p&gt;
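
&lt;p&gt;Use of the two data sources might look something like this (a sketch; the rendered YAML is exposed through each data source's 'content' attribute, which then needs committing and applying using other providers):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "flux_install" "main" {
  target_path = "flux-system"
}

data "flux_sync" "main" {
  target_path = "flux-system"
  url         = "ssh://git@github.com/nbrownuk/flux-bootstrap"
  branch      = "main"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;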

&lt;p&gt;The official Terraform &lt;a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest"&gt;Kubernetes provider&lt;/a&gt;, which is needed to apply the generated configuration to the cluster, comes with some &lt;a href="https://www.reddit.com/r/Terraform/comments/zam12h/kubernetes_provider_resources_v1_vs_nonv1_is_it/"&gt;drawbacks&lt;/a&gt;. And, further, the notion that the Flux provider doesn't itself handle the bootstrapping, might be considered a sub-optimal user experience. This has prompted the Flux project to &lt;a href="https://github.com/fluxcd/terraform-provider-flux/issues/301"&gt;overhaul the Flux provider&lt;/a&gt;, so that it performs the bootstrap process in its entirety. At the time of writing, a new &lt;a href="https://github.com/fluxcd/terraform-provider-flux/pull/332"&gt;bootstrap resource&lt;/a&gt; is in development for the Flux provider. With this important revision, the initial &lt;code&gt;terraform apply&lt;/code&gt; will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generate the necessary configuration,&lt;/li&gt;
&lt;li&gt;push the configuration to the Git repo,&lt;/li&gt;
&lt;li&gt;apply the configuration to the cluster, and wait for reconciliation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Subsequently, Terraform is responsible for managing the configuration in the repo, and Flux is responsible for fetching and applying the configuration from the repo. This provides a clear delineation of responsibility, and a much improved user experience.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;For the sake of clarity, the version of the CLI reflects the Flux release version, whereas the individual controllers have different, independent release versions. For example, in Flux v0.37.0, the source controller version is v0.32.1, whilst the helm controller version is v0.27.0. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>gitops</category>
      <category>flux</category>
    </item>
    <item>
      <title>Bootstrapping GitOps Agents</title>
      <dc:creator>Nigel Brown</dc:creator>
      <pubDate>Sun, 04 Dec 2022 11:47:54 +0000</pubDate>
      <link>https://dev.to/nbrownuk/bootstrapping-gitops-agents-11dj</link>
      <guid>https://dev.to/nbrownuk/bootstrapping-gitops-agents-11dj</guid>
      <description>&lt;p&gt;Since the term was &lt;a href="https://www.weave.works/blog/gitops-operations-by-pull-request"&gt;originally coined&lt;/a&gt; back in 2017 by Alexis Richardson (CEO of Weaveworks), the GitOps philosophy has struggled to find a coherent, consensual definition. This fostered some ambiguity, and occasionally some contradiction. For example, &lt;a href="https://www.gitops.tech/#push-based-vs-pull-based-deployments"&gt;push-based deployments&lt;/a&gt; were once categorised as a valid approach to GitOps deployments, &lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:6988163333044527104?utm_source=share&amp;amp;utm_medium=member_desktop"&gt;but now they're not&lt;/a&gt;. There was going to be a common &lt;a href="https://github.com/argoproj/gitops-engine/blob/master/specs/design.md"&gt;GitOps engine&lt;/a&gt; for the good of the community, and &lt;a href="https://fluxcd.io/flux/migration/faq-migration/#is-the-gitops-toolkit-related-to-the-gitops-engine"&gt;then there wasn't&lt;/a&gt;. Flux was in the vanguard of software tools that exemplified GitOps, and then it needed to be &lt;a href="https://fluxcd.io/flux/migration/faq-migration/#why-did-you-rewrite-flux"&gt;rewritten from the ground up&lt;/a&gt;. And, so on. &lt;/p&gt;

&lt;p&gt;It's encouraging, then, that the last 12 months or so have seen a gradual maturing of the concept, with a &lt;a href="https://opengitops.dev/"&gt;common understanding&lt;/a&gt; of what we mean by the term 'GitOps'. And the community has started working together across competing technologies, all under the umbrella of the Cloud Native Computing Foundation (CNCF). Some of the tooling has even acquired '&lt;a href="https://www.cncf.io/projects/#maturity-levels"&gt;graduated&lt;/a&gt;' project status with the CNCF. All is good in GitOps land.&lt;/p&gt;

&lt;p&gt;As the work to define standards has unfolded, and the development of software tools to meet those standards has ensued, one aspect of GitOps has consistently bugged me. It's an implementation detail that doesn't seem to get a lot of attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps Agents
&lt;/h2&gt;

&lt;p&gt;First, let's briefly outline what a GitOps agent is in the context of a Kubernetes cluster. A GitOps agent is a software application that (at least in part) implements the Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/architecture/controller/"&gt;controller pattern&lt;/a&gt;, and is responsible for reconciling the declared 'desired state' with the actual cluster state. That is, the Kubernetes configuration for an application, stored in a version control system (VCS), is periodically fetched and applied to the cluster, resulting in automated deployments. Change is initiated through the VCS, and explicitly not through direct, imperative change in the cluster. This allows application deployments to be managed using the inherent control features of the hosting VCS (i.e. pull or merge requests).&lt;/p&gt;

&lt;h2&gt;
  
  
  Chicken or the Egg
&lt;/h2&gt;

&lt;p&gt;So, GitOps agents allow us to automate application installs and updates in a Kubernetes cluster, according to the &lt;a href="https://opengitops.dev/#principles"&gt;GitOps principles&lt;/a&gt;. But aren't the agents themselves software applications? They might be controllers, but they're still just software applications, requiring installation, updates, reconfiguration, recovery, and so on. This raises the question: how do GitOps agents get deployed to a cluster? Can they be deployed to a cluster according to GitOps principles, like the apps they manage? Can they be used to manage themselves? Seemingly, this is an example of the classic &lt;a href="https://en.wikipedia.org/wiki/Chicken_or_the_egg"&gt;chicken or the egg&lt;/a&gt; paradox!&lt;/p&gt;

&lt;p&gt;If we have to manually install or update a GitOps agent, then its deployment is open to all of the issues that the GitOps principles seek to resolve: ambiguous state, configuration drift, unsolicited change, and much more. This effectively makes the state of the whole system as frail as if there were no GitOps agent at all. To that end, it's crucial that GitOps solutions address this problem head on, and provide a means for bootstrapping their agents, whilst allowing for their ongoing management, in line with the GitOps principles that underpin the whole approach.&lt;/p&gt;

&lt;p&gt;This is the first article in a brief series looking into the different approaches taken to the bootstrapping conundrum.&lt;/p&gt;

&lt;h2&gt;
  
  
  Up Next
&lt;/h2&gt;

&lt;p&gt;The two leading projects in the GitOps domain (arguably), Flux and ArgoCD, go about bootstrapping in different ways. First, we'll discuss how the Flux project &lt;a href="https://dev.to/nbrownuk/bootstrapping-flux-2m2n"&gt;goes about bootstrapping&lt;/a&gt; its numerous controllers, and then we'll lift the lid on ArgoCD, to see &lt;a href="https://windsock.io/bootstrapping-argocd/"&gt;how it gets things done&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>gitops</category>
    </item>
  </channel>
</rss>
