<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sacha Thommet</title>
    <description>The latest articles on DEV Community by Sacha Thommet (@depp57).</description>
    <link>https://dev.to/depp57</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1110622%2Fd55f109c-383d-4c3a-a9d8-02c3ca6b352f.jpeg</url>
      <title>DEV Community: Sacha Thommet</title>
      <link>https://dev.to/depp57</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/depp57"/>
    <language>en</language>
    <item>
      <title>Kubernetes homelab - Learning by doing, Part 6: Automation</title>
      <dc:creator>Sacha Thommet</dc:creator>
      <pubDate>Sat, 09 Nov 2024 13:06:17 +0000</pubDate>
      <link>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-6-automation-3j5k</link>
      <guid>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-6-automation-3j5k</guid>
      <description>&lt;p&gt;In this section, I'll dive into how I automated my Kubernetes cluster, using two tools: Ansible for machine configuration and ArgoCD for application deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why automate?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Reducing Human Error&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In IT operations, even small mistakes can lead to service outages or security vulnerabilities. Human errors like typos are inevitable, which is why automation matters: it ensures that each task is executed consistently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Minimizing Repetitive Tasks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automation reduces repetitive, time-consuming tasks like updates and patching, freeing teams to work on more strategic tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scalability and Reproducibility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automating infrastructure setup enables large-scale deployments with consistent configurations, regardless of the size of the environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Version Control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tools like Ansible use declarative files that can be tracked in Git, allowing quick rollbacks and maintaining an accessible record of changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Documentation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Declarative files are easy to understand — they describe WHAT the infrastructure should look like rather than HOW to build it, as in the imperative approach. Being versioned in Git, these files are accessible to the team, allowing them to quickly review the current state of the infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code (IaC) with Ansible
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.ansible.com" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; is an open-source tool that excels in &lt;strong&gt;infrastructure configuration&lt;/strong&gt;. With an agentless architecture (no services need to be installed on the managed machines), it communicates with machines over SSH.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn1qgpowosfwu5oi92en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn1qgpowosfwu5oi92en.png" alt="Image description" width="543" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;inventory&lt;/em&gt; file is where you list the machines that Ansible will manage. It contains the IP addresses or hostnames of each machine. Here's mine:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;inventory.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;k8s&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, the tasks performed by Ansible are defined in files called &lt;em&gt;playbooks&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;playbook.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure useful packages are present&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="c1"&gt;# don't gather information on nodes as I don't use them&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure all apt packages are updated to their latest version&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;      &lt;span class="c1"&gt;# run the equivalent of apt-get update&lt;/span&gt;
        &lt;span class="na"&gt;cache_valid_time&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;86400&lt;/span&gt; &lt;span class="c1"&gt;# in seconds: one day&lt;/span&gt;
        &lt;span class="na"&gt;upgrade&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;            &lt;span class="c1"&gt;# run apt-get upgrade&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TLP - Optimize Linux Laptop Battery Life - https://linrunner.de/tlp/&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tlp&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Enforce security&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure firewall is configured and running&lt;/span&gt;
      &lt;span class="na"&gt;block&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure firewall package is installed&lt;/span&gt;
          &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ufw&lt;/span&gt;
            &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow incoming HTTPS traffic&lt;/span&gt;
          &lt;span class="na"&gt;community.general.ufw&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;rule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;allow&lt;/span&gt;
            &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow incoming HTTPS traffic&lt;/span&gt;
            &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow everything from LAN&lt;/span&gt;
          &lt;span class="na"&gt;community.general.ufw&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;rule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;allow&lt;/span&gt;
            &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow everything from LAN&lt;/span&gt;
            &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;any&lt;/span&gt;
            &lt;span class="na"&gt;from_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.1.0/24&lt;/span&gt;

        &lt;span class="s"&gt;...&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure firewall is up and running&lt;/span&gt;
          &lt;span class="na"&gt;community.general.ufw&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;enabled&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure microk8s is up and running&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;istvano.microk8s'&lt;/span&gt;
      &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;microk8s_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.29/stable&lt;/span&gt;
        &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, I only need to run the following command to configure my nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# run only once to install the required role used by the playbook&lt;/span&gt;
ansible-galaxy role &lt;span class="nb"&gt;install &lt;/span&gt;istvano.microk8s

ansible-playbook playbook.yaml &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Ansible is idempotent
&lt;/h4&gt;

&lt;p&gt;Idempotence is originally a concept from mathematics. As defined by Wikipedia:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In Ansible, idempotence is a built-in feature of many modules (be careful: some, like &lt;code&gt;shell&lt;/code&gt; or &lt;code&gt;command&lt;/code&gt;, don't support it). This means that re-running a playbook produces the same final state without unwanted side effects.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://akshayavb99.medium.com/automation-with-ansible-ansibles-idempotence-2c97d3081e6c" rel="noopener noreferrer"&gt;this blog post&lt;/a&gt; for more explanation.&lt;/p&gt;
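&lt;p&gt;As a small illustration (the task names and package are made up, not from my playbook), the first task below is idempotent while the second is not:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Idempotent: a second run reports "ok" instead of "changed"
- name: Ensure nginx is installed
  ansible.builtin.apt:
    name: nginx
    state: present

# NOT idempotent: the command runs (and restarts nginx) on every play
- name: Restart nginx
  ansible.builtin.command: systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;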

&lt;h2&gt;
  
  
  GitOps with ArgoCD
&lt;/h2&gt;

&lt;p&gt;While Ansible manages the infrastructure, ArgoCD focuses on deploying and updating applications in a Kubernetes environment.&lt;br&gt;
ArgoCD automatically synchronizes applications with their configurations defined in a Git repository. When the repository is updated, ArgoCD adjusts the state of the applications in the Kubernetes cluster to match the repository's configuration. This provides a declarative and versioned approach for deploying and managing Kubernetes applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rxdp4fd3njxtmad74jb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rxdp4fd3njxtmad74jb.png" alt="Image description" width="787" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that this diagram explains the &lt;em&gt;pull-based approach&lt;/em&gt;, where ArgoCD regularly pulls the latest changes from the Git repository. You can also use the &lt;em&gt;push-based approach&lt;/em&gt; with ArgoCD, where Git notifies ArgoCD via a webhook that there is a change.&lt;/p&gt;
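&lt;p&gt;As a sketch, a minimal ArgoCD &lt;code&gt;Application&lt;/code&gt; manifest looks like the following (the repository URL, path and names are placeholders, not my actual setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab.git
    targetRevision: HEAD
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual changes to match Git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;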

&lt;p&gt;Additionally, ArgoCD comes with a web interface that provides a clear view of deployment statuses. The screenshot below illustrates this interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5cmyo0y4flwgyt49fp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5cmyo0y4flwgyt49fp1.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these tools, almost everything stays in my Git repository (which I plan to make public soon, once I’ve cleared the repo of hard-coded secrets!). Git is the &lt;strong&gt;single source of truth&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>learning</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes homelab - Learning by doing, Part 5: Monitoring</title>
      <dc:creator>Sacha Thommet</dc:creator>
      <pubDate>Fri, 08 Nov 2024 16:03:38 +0000</pubDate>
      <link>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-5-monitoring-8gm</link>
      <guid>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-5-monitoring-8gm</guid>
      <description>&lt;p&gt;Keeping a close eye on your Kubernetes cluster is essential to detect issues early. In this section, I'll explain how I monitor my cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  A ridiculously over-engineered setup for a homelab
&lt;/h2&gt;

&lt;p&gt;At this point, I know I'm over-engineering the cluster; I'm the only one using it, and my portfolio hosted on it gets minimal traffic, 5 visits per month at most (measured by &lt;a href="https://search.google.com/search-console/about" rel="noopener noreferrer"&gt;Google Search Console&lt;/a&gt;). So, if it goes down, no one will be impacted.&lt;/p&gt;

&lt;p&gt;The reason I am doing all this is to learn things. I LEARNED A LOT (and invested a lot of time too)…&lt;/p&gt;

&lt;p&gt;I chose to deploy &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; with &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; using the Kubernetes &lt;a href="https://prometheus-operator.dev/" rel="noopener noreferrer"&gt;Prometheus Operator&lt;/a&gt;, which greatly simplifies the deployment process. This stack is widely used in the professional world, so I wanted to explore it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus gathers metrics from nodes and pods.&lt;/li&gt;
&lt;li&gt;Grafana then visualizes that data.&lt;/li&gt;
&lt;li&gt;I could also set up &lt;a href="https://prometheus.io/docs/alerting/latest/alertmanager" rel="noopener noreferrer"&gt;AlertManager&lt;/a&gt; to configure alerts via email, webhooks, SMS or whatever, but I haven't gone that far yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prometheus operator
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://prometheus-operator.dev/docs/getting-started/installation" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; offers three ways to install the Prometheus Operator. I use a GitOps approach with ArgoCD to deploy everything to the cluster and chose the Helm chart for installation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Chart.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-subchart&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;60.3.0&lt;/span&gt;
&lt;span class="na"&gt;appVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;60.3.0"&lt;/span&gt;
&lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-prometheus-stack&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;60.3.0&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://prometheus-community.github.io/helm-charts&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;values.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kube-prometheus-stack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespaceOverride&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-stack&lt;/span&gt;
  &lt;span class="na"&gt;defaultRules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;alertmanager&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;etcd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;    &lt;span class="c1"&gt;# microk8s does not use etcd if HA is enabled&lt;/span&gt;
      &lt;span class="na"&gt;windows&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;alertmanager&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;prometheusSpec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;retention&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30d&lt;/span&gt;
  &lt;span class="na"&gt;grafana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;grafana.mydomain&lt;/span&gt;
      &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;certificate-prod-grafana&lt;/span&gt;
          &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;grafana.mydomain&lt;/span&gt;
    &lt;span class="na"&gt;sidecar&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;datasources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;alertmanager&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;kubeEtcd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="c1"&gt;# microk8s does not use etcd if HA is enabled&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And voila!&lt;/p&gt;

&lt;p&gt;Grafana is accessible through &lt;code&gt;grafana.mydomain&lt;/code&gt; with the default credentials &lt;code&gt;admin:prom-operator&lt;/code&gt; (change them!):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bb77p2ib15zpg9apisd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bb77p2ib15zpg9apisd.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;
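&lt;p&gt;To avoid keeping the default password, the chart lets you override it in &lt;em&gt;values.yaml&lt;/em&gt; (a sketch; in a real setup, prefer referencing a Kubernetes secret rather than committing a plain-text value to Git):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kube-prometheus-stack:
  grafana:
    adminPassword: a-strong-password  # placeholder; never commit real secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;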

&lt;p&gt;Multiple dashboards are also configured by default with the operator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F168wcn8mo7kvsqa4kbkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F168wcn8mo7kvsqa4kbkw.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now configure metric scraping for your pods and services using &lt;code&gt;PodMonitor&lt;/code&gt; and &lt;code&gt;ServiceMonitor&lt;/code&gt; CRDs &lt;a href="https://prometheus-operator.dev/docs/developer/getting-started" rel="noopener noreferrer"&gt;thanks to the operator&lt;/a&gt;.&lt;/p&gt;
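&lt;p&gt;For example, a hypothetical &lt;code&gt;ServiceMonitor&lt;/code&gt; that scrapes a service labelled &lt;code&gt;app: my-app&lt;/code&gt; on its &lt;code&gt;metrics&lt;/code&gt; port might look like this (depending on the chart's selector settings, a &lt;code&gt;release&lt;/code&gt; label may also be required):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: prometheus-stack
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics   # name of the port in the Service
      interval: 30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;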

</description>
      <category>learning</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes homelab - Learning by doing, Part 4: Storage</title>
      <dc:creator>Sacha Thommet</dc:creator>
      <pubDate>Thu, 31 Oct 2024 15:18:52 +0000</pubDate>
      <link>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-4-storage-27oi</link>
      <guid>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-4-storage-27oi</guid>
      <description>&lt;p&gt;Welcome to the fourth part of my Kubernetes homelab guide. I'll explain how I set up storage using Longhorn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Distributed storage
&lt;/h2&gt;

&lt;p&gt;Pods and their containers are ephemeral. Various factors can cause a pod to restart, such as being OOM-killed, application crashes, or scaling down.&lt;br&gt;
The problem is that when a pod restarts, its filesystem is wiped, meaning it does not retain data persistently.&lt;/p&gt;

&lt;p&gt;So, how can we persist data inside pods, and &lt;strong&gt;how can we share data across multiple pods on different nodes&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;With volume mounting and &lt;strong&gt;distributed storage&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;Distributed storage systems enable us to store data that can be made available clusterwide. Excellent! But dynamically apportioning storage across a multi-node cluster is a very complex job. So this is another area where Kubernetes typically outsources the job to plugins (e.g. Cloud providers like Azure or AWS, or systems like &lt;a href="https://rook.io/" rel="noopener noreferrer"&gt;Rook&lt;/a&gt; or &lt;a href="https://longhorn.io/" rel="noopener noreferrer"&gt;Longhorn&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;External storage systems like these connect to Kubernetes by way of the &lt;strong&gt;C&lt;/strong&gt;ontainer &lt;strong&gt;S&lt;/strong&gt;torage &lt;strong&gt;I&lt;/strong&gt;nterface (CSI). This provides a standard interface between different types of storage systems.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Longhorn?
&lt;/h2&gt;

&lt;p&gt;I’m already over-engineering by using a Kubernetes cluster to host a few private services, so the main reason I chose Longhorn was its ease of setup and management.&lt;/p&gt;

&lt;p&gt;In addition, its backup functionality is extremely useful and easy to use!&lt;br&gt;
Simply set up a backup target — an endpoint for Longhorn to access a backup store. This backup store can be an NFS, SMB/CIFS server, Azure Blob Storage, or any S3-compatible server holding Longhorn volume backups. Then, to restore your entire Kubernetes storage, just point Longhorn to the backup target.&lt;/p&gt;
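&lt;p&gt;As an illustrative sketch (the NFS export below is a placeholder), the backup target can be declared through Longhorn's &lt;code&gt;Setting&lt;/code&gt; resource; the same setting can also be changed from the Longhorn web UI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: backup-target
  namespace: longhorn-system
value: nfs://backup-server.lan:/var/lib/longhorn-backups  # placeholder NFS export
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;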
&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;The software is very easy to install, as always with containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/longhorn/longhorn/&amp;lt;version&amp;gt;/deploy/longhorn.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "hard" part is with requirements. Each node in the cluster where Longhorn is installed must fulfill multiple requirements. Luckily, the &lt;a href="https://longhorn.io/docs/1.7.2/deploy/install/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; is clear and straightforward.&lt;/p&gt;

&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;p&gt;Longhorn comes with its &lt;code&gt;StorageClass&lt;/code&gt;, which provides dynamic provisioning of persistent volumes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5lymk7c3s1w2ur5rjj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5lymk7c3s1w2ur5rjj9.png" alt="Image description" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden/server:latest&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden&lt;/span&gt;
      &lt;span class="s"&gt;...&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden-data&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden-data&lt;/span&gt;
      &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden-data&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden-data&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>learning</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes homelab - Learning by doing, Part 3: Networking</title>
      <dc:creator>Sacha Thommet</dc:creator>
      <pubDate>Thu, 31 Oct 2024 11:17:26 +0000</pubDate>
      <link>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-3-networking-3bla</link>
      <guid>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-3-networking-3bla</guid>
      <description>&lt;p&gt;In this part of the Kubernetes homelab, we’ll dive into the networking setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network configuration
&lt;/h2&gt;

&lt;p&gt;My networking setup is straightforward. All of the cluster nodes, along with a router that has a built-in firewall, sit on a single /24 private network. This is a standard home setup.&lt;/p&gt;

&lt;p&gt;I set up my router's DHCP to assign static IPs to servers 1 and 2 by mapping them to their respective MAC addresses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;server1: 192.168.1.11&lt;/li&gt;
&lt;li&gt;server2: 192.168.1.10&lt;/li&gt;
&lt;li&gt;router: 192.168.1.254&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2tffxi2p7wj750k0ir2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2tffxi2p7wj750k0ir2.jpg" alt="Image description" width="500" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes internal networking
&lt;/h2&gt;

&lt;p&gt;In order to expose your applications, you'll need an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="noopener noreferrer"&gt;Ingress Controller&lt;/a&gt;. It runs on every node in the cluster and listens on ports 80 and 443 (HTTP and HTTPS). I chose the &lt;a href="https://github.com/kubernetes/ingress-nginx" rel="noopener noreferrer"&gt;NGINX Ingress Controller&lt;/a&gt;, which is easy to install on MicroK8s:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;microk8s enable ingress&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, I configured the router to forward incoming requests on ports 80 and 443 to any one of the nodes, in my case &lt;code&gt;server2&lt;/code&gt;.&lt;br&gt;
All other ports are blocked by the router’s firewall, ensuring that only necessary traffic reaches the servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0h28kzleb8efjmx25ah0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0h28kzleb8efjmx25ah0.png" alt="Image description" width="800" height="1004"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;server2&lt;/code&gt; handles all ingress traffic and relies on the Calico network plugin to route requests to the pods on the corresponding nodes.&lt;/p&gt;

&lt;p&gt;I chose Calico for its support for &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;NetworkPolicies&lt;/a&gt;, but Kubernetes allows you to use other Container Network Interfaces (CNIs) that may better suit your setup.&lt;/p&gt;
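&lt;p&gt;To give an idea of what such a policy looks like, here is a hypothetical NetworkPolicy that only allows traffic into the portfolio pods from the ingress controller's namespace. The &lt;code&gt;portfolio&lt;/code&gt; namespace, the &lt;code&gt;app: portfolio&lt;/code&gt; label and the &lt;code&gt;ingress&lt;/code&gt; namespace name are assumptions for illustration, not taken from my actual cluster:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller-only   # hypothetical example policy
  namespace: portfolio                  # assumed application namespace
spec:
  podSelector:
    matchLabels:
      app: portfolio                    # assumed pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress  # assumed ingress controller namespace
```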

&lt;blockquote&gt;
&lt;p&gt;Note: &lt;br&gt;
This means that if &lt;code&gt;server2&lt;/code&gt; is unavailable for some reason, the cluster will not respond to any incoming requests. It is a &lt;a href="https://en.wikipedia.org/wiki/Single_point_of_failure" rel="noopener noreferrer"&gt;Single Point Of Failure&lt;/a&gt;.&lt;br&gt;
One solution would be to use an IP failover mechanism like &lt;a href="https://www.keepalived.org/" rel="noopener noreferrer"&gt;keepalived&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
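&lt;p&gt;As an untested sketch, keepalived could float a virtual IP between the two servers, so the router forwards ports 80 and 443 to that IP instead of to a single node. The interface name &lt;code&gt;eth0&lt;/code&gt; and the virtual IP &lt;code&gt;192.168.1.100&lt;/code&gt; are assumptions:&lt;/p&gt;

```conf
# /etc/keepalived/keepalived.conf (sketch, not a tested configuration)
vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the second node
    interface eth0            # assumed network interface name
    virtual_router_id 51
    priority 100              # use a lower priority on the BACKUP node
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24      # assumed free IP on the /24 network
    }
}
```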

&lt;p&gt;Finally, I also installed &lt;a href="https://cert-manager.io/" rel="noopener noreferrer"&gt;Cert Manager&lt;/a&gt;, to handle SSL certificate requests for my HTTPS routes and automatically manage renewals.&lt;/p&gt;

&lt;p&gt;Installing it is as simple as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/cert-manager/cert-manager/releases/download/&amp;lt;version&amp;gt;/cert-manager.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
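&lt;p&gt;Cert Manager then needs an issuer to request certificates from. A minimal ClusterIssuer sketch for Let's Encrypt with the HTTP-01 solver might look like this (the email address and the account-key secret name are placeholders, not my actual values):&lt;/p&gt;

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                  # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key    # placeholder secret for the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx                    # solved through the NGINX ingress controller
```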



&lt;p&gt;With this setup, I simply create an &lt;code&gt;Ingress&lt;/code&gt;, and the NGINX Ingress Controller together with Cert Manager take care of the rest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;portfolio-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;portfolio&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mydomain&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;portfolio&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mydomain&lt;/span&gt;
      &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;certificate-prod-portfolio&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>learning</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes homelab - Learning by doing, Part 2: Installation</title>
      <dc:creator>Sacha Thommet</dc:creator>
      <pubDate>Thu, 17 Oct 2024 13:35:26 +0000</pubDate>
      <link>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-2-installation-1h5h</link>
      <guid>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-2-installation-1h5h</guid>
      <description>&lt;p&gt;In this part, I'll walk you through the (few) installation steps for the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operating system
&lt;/h2&gt;

&lt;p&gt;I opted for &lt;strong&gt;Ubuntu Server&lt;/strong&gt; since I'm already familiar with Ubuntu, which I use on my desktop computer.&lt;/p&gt;

&lt;p&gt;Maybe in the future I will try other systems, like &lt;a href="https://www.talos.dev" rel="noopener noreferrer"&gt;Talos&lt;/a&gt;, which is designed for Kubernetes: secure, immutable, and minimal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Distribution
&lt;/h2&gt;

&lt;p&gt;Given that &lt;strong&gt;MicroK8s&lt;/strong&gt; is developed by Canonical, the same team behind Ubuntu, I naturally chose it.&lt;/p&gt;

&lt;p&gt;It supports all the features that I need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It can run on a single node without requiring &lt;strong&gt;h&lt;/strong&gt;igh &lt;strong&gt;a&lt;/strong&gt;vailability (HA).&lt;/li&gt;
&lt;li&gt;Yet it also supports HA, so I can add a third node in the future.&lt;/li&gt;
&lt;li&gt;Easy to install &amp;amp; to update.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Installing MicroK8s was super simple by following the official guide available &lt;a href="https://microk8s.io/docs/getting-started" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;As with any installation of this kind, the first step is to ensure your system meets the requirements.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu 16.04 LTS or newer (or another operating system that supports snapd; see the snapd documentation).&lt;/li&gt;
&lt;li&gt;MicroK8s runs in as little as 540MB of memory, but to accommodate workloads, Canonical recommends a system with at least 20GB of disk space and 4GB of memory.&lt;/li&gt;
&lt;li&gt;An internet connection&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Install
&lt;/h3&gt;

&lt;p&gt;Run these commands on every node, in my case &lt;code&gt;server1&lt;/code&gt; and &lt;code&gt;server2&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install microk8s&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;microk8s &lt;span class="nt"&gt;--classic&lt;/span&gt; &lt;span class="nt"&gt;--channel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;version&amp;gt;

&lt;span class="c"&gt;# Add your current user to the microk8s group&lt;/span&gt;
&lt;span class="c"&gt;# and gain access to the .kube directory (where some of the k8s configuration goes on) &lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-G&lt;/span&gt; microk8s &lt;span class="nv"&gt;$USER&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/.kube
&lt;span class="nb"&gt;chmod &lt;/span&gt;0700 ~/.kube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, create a cluster by adding &lt;code&gt;server2&lt;/code&gt; as a worker to the &lt;code&gt;server1&lt;/code&gt; master node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On the master node (server1)&lt;/span&gt;
microk8s add-node

&lt;span class="c"&gt;# Then from the node you wish to join to this cluster, run the command displayed by the command above, like:&lt;/span&gt;

&lt;span class="c"&gt;# Join as a worker, not running the Kubernetes control plane&lt;/span&gt;
microk8s &lt;span class="nb"&gt;join&lt;/span&gt; &amp;lt;server1_ip&amp;gt;:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05 &lt;span class="nt"&gt;--worker&lt;/span&gt;

&lt;span class="c"&gt;# Finally, from the master node, run to see the nodes in the cluster:&lt;/span&gt;
microk8s kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And voila, it's that simple!&lt;/p&gt;

</description>
      <category>learning</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes homelab - Learning by doing, Part 1: Hardware</title>
      <dc:creator>Sacha Thommet</dc:creator>
      <pubDate>Tue, 12 Mar 2024 14:15:27 +0000</pubDate>
      <link>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-1-hardware-5c7o</link>
      <guid>https://dev.to/depp57/kubernetes-homelab-learning-by-doing-part-1-hardware-5c7o</guid>
      <description>&lt;p&gt;I recently did a 6-month internship, which was extremely interesting and challenging for me!&lt;/p&gt;

&lt;p&gt;My role as a &lt;em&gt;Kubernetes consultant intern&lt;/em&gt; was to help make sure that the applications we were working on could be deployed reliably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; is a powerful &lt;a href="https://www.redhat.com/en/topics/automation/what-is-orchestration"&gt;orchestrator&lt;/a&gt; that will ease deployment and automatically manage your applications on a set of machines, called a cluster.&lt;/p&gt;

&lt;p&gt;With great power comes great complexity. Thus, learning Kubernetes is oftentimes considered to be cumbersome and complex, namely because of the number of new concepts you have to learn.&lt;/p&gt;




&lt;h2&gt;
  
  
  Hardware
&lt;/h2&gt;

&lt;p&gt;Originally, I wanted to build a three-node cluster (&lt;a href="https://thesecretlivesofdata.com/raft/"&gt;for &lt;strong&gt;H&lt;/strong&gt;igh &lt;strong&gt;A&lt;/strong&gt;vailability&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Here is a summary of the hardware that I have:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Laptop&lt;/th&gt;
&lt;th&gt;Intel NUC NUC10i3FNK&lt;/th&gt;
&lt;th&gt;Raspberry Pi 4B&lt;/th&gt;
&lt;th&gt;Raspberry Pi Zero&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;16GB DDR4 · 2133MHz&lt;/td&gt;
&lt;td&gt;32GB DDR4 · 3200MHz&lt;/td&gt;
&lt;td&gt;1GB DDR4 · 3200MHz&lt;/td&gt;
&lt;td&gt;512MB DDR2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;512GB SATA SSD&lt;/td&gt;
&lt;td&gt;1TB NVMe SSD&lt;/td&gt;
&lt;td&gt;64GB MicroSD&lt;/td&gt;
&lt;td&gt;16GB MicroSD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;i5-7200U · 2C/4T&lt;/td&gt;
&lt;td&gt;i3-10110U · 2C/4T&lt;/td&gt;
&lt;td&gt;2C/4T&lt;/td&gt;
&lt;td&gt;1C/1T&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPU&lt;/td&gt;
&lt;td&gt;Integrated&lt;/td&gt;
&lt;td&gt;Integrated&lt;/td&gt;
&lt;td&gt;Integrated&lt;/td&gt;
&lt;td&gt;Integrated&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Issues encountered
&lt;/h3&gt;

&lt;h4&gt;
  
  
  First issue: The Raspberry Pi Zero is incompatible
&lt;/h4&gt;

&lt;p&gt;Unfortunately, the Raspberry Pi Zero is incompatible with Kubernetes, as described &lt;a href="https://github.com/kubernetes/kubeadm/issues/253?ref=ikarus.sg#issuecomment-296738890"&gt;in this GitHub issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Raspberry Pi Zero's processor uses the armv6 architecture, for which Kubernetes dropped support in v1.6.&lt;/p&gt;

&lt;h4&gt;
  
  
  Second issue: The Raspberry Pi 4B does not have enough RAM
&lt;/h4&gt;

&lt;p&gt;With only 1GB of RAM, the Raspberry Pi 4B was not very useful: the Kubernetes control plane would use most of the free memory, leaving no room for workloads.&lt;/p&gt;




&lt;p&gt;These hardware issues forced me to settle for only two nodes instead of the intended three for my High Availability cluster build.&lt;/p&gt;

&lt;p&gt;As for the network, my ISP-provided modem includes a 500Mbps switch, a router, and a firewall:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n3h3bi17sq4fwf5v0xk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n3h3bi17sq4fwf5v0xk.png" alt="K8s cluster architecture" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Power consumption
&lt;/h2&gt;

&lt;p&gt;I measured the power consumption to estimate the monthly cost of running the homelab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd97vtxudufmnt7n9ep9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd97vtxudufmnt7n9ep9.jpeg" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My router accounts for 12W of the 54W total, while the two nodes consume 42W when idle.&lt;/p&gt;

&lt;p&gt;As of March 2024, one kWh costs 0.25€ in France. The nodes therefore consume &lt;code&gt;42*24*30/1000 ≈ 30kWh&lt;/code&gt; per month, so the operational cost of the homelab is about &lt;code&gt;30kWh * 0.25€ ≈ 7.50€&lt;/code&gt; per month.&lt;/p&gt;
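&lt;p&gt;The arithmetic can be double-checked with a quick one-liner, using the 42W idle draw and the 0.25€/kWh rate measured above (the exact figure lands a few cents above the rounded estimate):&lt;/p&gt;

```shell
# Monthly energy (kWh) and cost for a constant 42W draw at 0.25 EUR/kWh
awk 'BEGIN {
  kwh  = 42 * 24 * 30 / 1000      # watts * hours/day * days, converted to kWh
  cost = kwh * 0.25
  printf "%.2f kWh -> %.2f EUR\n", kwh, cost
}'
# prints "30.24 kWh -> 7.56 EUR"
```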

</description>
      <category>learning</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The Learning Adventure: Tales from a Junior Developer</title>
      <dc:creator>Sacha Thommet</dc:creator>
      <pubDate>Sun, 02 Jul 2023 20:41:37 +0000</pubDate>
      <link>https://dev.to/depp57/the-learning-adventure-tales-from-a-junior-developer-n0k</link>
      <guid>https://dev.to/depp57/the-learning-adventure-tales-from-a-junior-developer-n0k</guid>
      <description>&lt;p&gt;As a passionate junior developer, I want to share my learning journey in the world of software development. In this blog post, I'll discuss the strategies that have worked well for me and contributed to my growth.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Studies
&lt;/h3&gt;

&lt;p&gt;My master's degree in computer science provided a theoretical foundation that I found very useful.&lt;/p&gt;

&lt;p&gt;While you may not regularly utilize advanced algorithms, the process of developing logical reasoning skills through their study is invaluable.&lt;/p&gt;

&lt;p&gt;I have also gained soft skills that are difficult to acquire through self-learning. Skills such as effective communication, teamwork, and time management are really useful.&lt;/p&gt;

&lt;p&gt;But the most useful skill I've learned is &lt;strong&gt;the ability to learn new things and to do research by using the internet (&lt;a href="https://en.wikipedia.org/wiki/RTFM"&gt;RTFM&lt;/a&gt;) and seeking guidance from experienced individuals&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Online tutorials, documentation and more
&lt;/h3&gt;

&lt;p&gt;While studying gives you the fundamentals, you will need to learn more by yourself by looking for tutorials on the internet.&lt;/p&gt;

&lt;p&gt;The countless hours spent searching computer-related content on Google even resulted in an unexpected invitation to the intriguing Google Foobar recruitment easter egg.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Google sends the invitation based on your search history and problem-solving-related keyword searches. If you are a developer, you likely search for many programming problems on Google or Stack Overflow, and based on its search algorithms, Google shows you an invitation to Google Foobar.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OHVv_xB5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fnk8ieeb0dk58gflhoo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OHVv_xB5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fnk8ieeb0dk58gflhoo9.png" alt="Google Foobar" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Books
&lt;/h3&gt;

&lt;p&gt;So far, I have read two popular books, "Clean Code" and "Clean Architecture", both by Robert C. Martin. These books have been immensely helpful, providing valuable guidance and principles for writing clean and maintainable code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--igtis4PM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lcqm5azlool5r12ibh3e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--igtis4PM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lcqm5azlool5r12ibh3e.jpg" alt="Clean code and clean architecture books" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Despite facing criticism for various reasons, these two books present an &lt;strong&gt;opportunity to engage in critical thinking and reflection on various aspects of clean code.&lt;/strong&gt; I recommend exploring books by different authors to gain diverse perspectives and opinions.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Side projects
&lt;/h3&gt;

&lt;p&gt;And the most important one, which works great for me: &lt;strong&gt;learning by making mistakes&lt;/strong&gt;. You should find the answers to every “Why”. Good developers constantly wonder why issues are solved in one way and not another, which allows us to reflect on our processes and improve them as we gain experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PbIAT75m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp412tvep5m0towjtjeq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PbIAT75m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp412tvep5m0towjtjeq.jpg" alt="Learn by mistakes" width="602" height="579"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I enjoy working on personal projects, like managing my own virtual private server running on Linux. With this server, I learned so much about networking, system administration and Docker.&lt;/p&gt;

&lt;p&gt;Currently, I am focused on learning how to build a web application using a microservices architecture. As I strive to apply best programming practices, I find myself conducting extensive research (yes, again research!) to ensure I'm following industry standards. &lt;br&gt;
Some things may be badly done, over-engineered, or not well understood. However, I try to do my best as a junior, and it's totally normal to make mistakes!&lt;br&gt;
Also, I explain all my choices in the &lt;code&gt;readme.md&lt;/code&gt; file whenever I think it is interesting.&lt;/p&gt;

&lt;p&gt;In the future, I also plan to participate in Open Source projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Learn with others
&lt;/h3&gt;

&lt;p&gt;I've recently learned (a bit late) the significance of networking, engaging in discussions with colleagues, asking questions, sharing ideas, and attending technical events/conferences. These experiences have proven to be invaluable for learning new things and connecting with passionate individuals in the field.&lt;/p&gt;

&lt;p&gt;For instance, I recently started attending &lt;a href="https://www.meetup.com/fr-FR/paris-tech-meetups5/"&gt;tech meetups&lt;/a&gt; in France, and it's a great way to meet people and learn many things!&lt;/p&gt;

</description>
      <category>learning</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
