<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Korakrit Chariyasathian</title>
    <description>The latest articles on DEV Community by Korakrit Chariyasathian (@korakrit).</description>
    <link>https://dev.to/korakrit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F305562%2F06436b6f-8f4e-4264-abfa-ee8ce251d5fd.jpeg</url>
      <title>DEV Community: Korakrit Chariyasathian</title>
      <link>https://dev.to/korakrit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/korakrit"/>
    <language>en</language>
    <item>
      <title>On‑Premises Kubernetes Networking Architecture: A Comprehensive Architectural Synthesis</title>
      <dc:creator>Korakrit Chariyasathian</dc:creator>
      <pubDate>Mon, 05 Jan 2026 08:17:01 +0000</pubDate>
      <link>https://dev.to/korakrit/on-premises-kubernetes-networking-architecture-a-comprehensive-architectural-synthesis-529o</link>
      <guid>https://dev.to/korakrit/on-premises-kubernetes-networking-architecture-a-comprehensive-architectural-synthesis-529o</guid>
      <description>&lt;h2&gt;
  
  
  1. Context, Scope, and Design Objectives
&lt;/h2&gt;

&lt;p&gt;This document presents a rigorous architectural synthesis of an on‑premises Kubernetes networking design implemented atop a Hyper‑V virtualization substrate. The objective is to construct a networking architecture that compensates for the absence of a cloud provider while remaining conceptually and operationally aligned with cloud‑native paradigms.&lt;/p&gt;

&lt;p&gt;The design explicitly targets the following goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operation in a fully on‑premises environment without reliance on a managed cloud provider&lt;/li&gt;
&lt;li&gt;Functional support for &lt;code&gt;Service&lt;/code&gt; objects of type &lt;code&gt;LoadBalancer&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Clear and enforceable separation of concerns among compute, networking, and ingress responsibilities&lt;/li&gt;
&lt;li&gt;Immediate operational viability coupled with architectural compatibility for future autoscaling at both the pod and node levels&lt;/li&gt;
&lt;li&gt;Conceptual portability, such that the underlying mental model remains valid upon eventual migration to a public cloud environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core infrastructural components considered herein are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FRR Virtual Machine&lt;/strong&gt;: An Ubuntu‑based virtual appliance running FRRouting, responsible for fabric‑level routing and control‑plane signaling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Worker Nodes&lt;/strong&gt;: Linux virtual machines hosting workloads and participating in routing advertisement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MetalLB&lt;/strong&gt;: Deployed in BGP mode to provide load‑balancer semantics in the absence of a cloud‑native implementation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  2. Rationale for FRR and MetalLB in On‑Premises Kubernetes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1 Absence of a Cloud Provider and Its Consequences
&lt;/h3&gt;

&lt;p&gt;In contrast to managed Kubernetes offerings, an on‑premises Kubernetes deployment lacks the following capabilities by default:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A managed Layer‑4/Layer‑7 load balancer&lt;/li&gt;
&lt;li&gt;A managed routing plane capable of advertising service reachability&lt;/li&gt;
&lt;li&gt;Automatic integration between Kubernetes service abstractions and the surrounding network fabric&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consequently, a &lt;code&gt;Service&lt;/code&gt; of type &lt;code&gt;LoadBalancer&lt;/code&gt; is inert unless explicitly backed by external infrastructure: the object is accepted by the API server, but its external IP remains &lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt; indefinitely. The burden of implementing these capabilities therefore shifts to the platform architect.&lt;/p&gt;
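
&lt;p&gt;As a minimal illustration (the &lt;code&gt;app: web&lt;/code&gt; selector is a hypothetical example), the following Service would be accepted by the API server but would never receive an external IP on a bare on‑premises cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # inert without MetalLB or a cloud provider
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;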

&lt;h3&gt;
  
  
  2.2 MetalLB as a Functional Analog to Cloud Load Balancers
&lt;/h3&gt;

&lt;p&gt;MetalLB is introduced to fill this infrastructural void by providing two essential capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allocation of externally reachable virtual IP addresses for Kubernetes services&lt;/li&gt;
&lt;li&gt;Advertisement of reachability for those IPs to the upstream network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MetalLB supports two operational modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Layer‑2 (L2) Mode&lt;/strong&gt;, based on ARP/NDP announcements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Border Gateway Protocol (BGP) Mode&lt;/strong&gt;, based on explicit route advertisement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture deliberately adopts &lt;strong&gt;BGP mode&lt;/strong&gt;, for reasons grounded in scalability, determinism, and alignment with Kubernetes’ distributed systems model.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Justification for BGP Mode over L2 Mode (Engineering Perspective)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  3.1 Architectural Implications of L2 Mode
&lt;/h3&gt;

&lt;p&gt;In L2 mode, MetalLB assigns ownership of a LoadBalancer IP to a single node. That node responds to ARP requests on behalf of the service, and failover is achieved indirectly via ARP cache invalidation.&lt;/p&gt;

&lt;p&gt;From an architectural standpoint, this entails:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tight coupling between a network identity (the service IP) and a specific node&lt;/li&gt;
&lt;li&gt;Implicit, non‑contractual ownership semantics&lt;/li&gt;
&lt;li&gt;Dependence on shared mutable state in the form of ARP caches&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Such properties undermine predictability, complicate failure analysis, and conflict with Kubernetes’ expectation that nodes are ephemeral and interchangeable.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.2 Architectural Advantages of BGP Mode
&lt;/h3&gt;

&lt;p&gt;Under BGP mode, service reachability is expressed through explicit route advertisements. Nodes hosting service endpoints announce routes, and the routing fabric (FRR) selects viable forwarding paths. When a node becomes unavailable, its routes are withdrawn in a deterministic and protocol‑defined manner.&lt;/p&gt;

&lt;p&gt;This yields several architectural advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explicit contracts governing reachability and ownership&lt;/li&gt;
&lt;li&gt;Well‑defined state machines for convergence and failure handling&lt;/li&gt;
&lt;li&gt;Decoupling of node lifecycle events from ingress semantics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The conceptual correspondence between Kubernetes primitives and BGP behavior is direct:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Kubernetes Event&lt;/th&gt;
&lt;th&gt;BGP Effect&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pod scheduled&lt;/td&gt;
&lt;td&gt;Route advertised&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pod terminated&lt;/td&gt;
&lt;td&gt;Route withdrawn&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node added&lt;/td&gt;
&lt;td&gt;BGP peer established&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node removed&lt;/td&gt;
&lt;td&gt;BGP session torn down&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This symmetry renders BGP mode a natural fit for Kubernetes’ distributed control model.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Network Roles and Separation of Responsibilities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 FRR Virtual Machine as a Fabric Router
&lt;/h3&gt;

&lt;p&gt;The FRR virtual machine functions as a stable fabric‑level routing component. Its responsibilities are intentionally constrained and explicitly defined.&lt;/p&gt;

&lt;p&gt;It does &lt;strong&gt;not&lt;/strong&gt; act as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Internet gateway&lt;/li&gt;
&lt;li&gt;A NAT device&lt;/li&gt;
&lt;li&gt;A default gateway for Kubernetes nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead, FRR serves exclusively as a &lt;strong&gt;fabric router&lt;/strong&gt;, providing deterministic Layer‑3 reachability for Kubernetes Service and Pod traffic via BGP.&lt;/p&gt;

&lt;h4&gt;
  
  
  Network Interface Partitioning
&lt;/h4&gt;

&lt;p&gt;The FRR VM is provisioned with two network interfaces, each mapped to a distinct routing domain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;eth0&lt;/code&gt; (External – 192.168.1.0/24)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Management access&lt;/li&gt;
&lt;li&gt;Outbound Internet connectivity (package installation, updates)&lt;/li&gt;
&lt;li&gt;Connectivity to the broader on‑premises LAN&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;code&gt;eth1&lt;/code&gt; (Internal – 10.10.0.0/24)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes fabric&lt;/li&gt;
&lt;li&gt;BGP peering with Kubernetes nodes&lt;/li&gt;
&lt;li&gt;Transport for Pod and LoadBalancer traffic&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Critically, &lt;strong&gt;only internal fabric routes are advertised into BGP&lt;/strong&gt;, preserving strict separation between management traffic and cluster data paths.&lt;/p&gt;

&lt;h4&gt;
  
  
  Netplan Configuration (FRR VM)
&lt;/h4&gt;

&lt;p&gt;The following &lt;code&gt;netplan&lt;/code&gt; configuration illustrates the canonical interface setup on the FRR VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# /etc/netplan/50-cloud-init.yaml&lt;/span&gt;
&lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;ethernets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eth0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;dhcp4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.1.13/24&lt;/span&gt;
      &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
          &lt;span class="na"&gt;via&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.1.1&lt;/span&gt;
      &lt;span class="na"&gt;nameservers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;1.1.1.1&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8.8.8.8&lt;/span&gt;

    &lt;span class="na"&gt;eth1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;dhcp4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.10.0.10/24&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration enforces a &lt;strong&gt;single default route via the external interface&lt;/strong&gt;, ensuring that internal fabric traffic is never misrouted toward the management plane.&lt;/p&gt;

&lt;h4&gt;
  
  
  FRR (FRRouting) Configuration Overview
&lt;/h4&gt;

&lt;p&gt;Within FRR, the routing policy is deliberately minimalistic and explicit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;router bgp 65001
 bgp router-id 10.10.0.10
 no bgp default ipv4-unicast

 address-family ipv4 unicast
  redistribute connected route-map INTERNAL_ONLY
 exit-address-family
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Supporting policy objects restrict route advertisement to the Kubernetes fabric only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip prefix-list INTERNAL_NET seq 10 permit 10.10.0.0/24

route-map INTERNAL_ONLY permit 10
 match ip address prefix-list INTERNAL_NET
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;External subnets (e.g., 192.168.1.0/24) are never advertised to Kubernetes nodes&lt;/li&gt;
&lt;li&gt;FRR cannot be accidentally interpreted as a default or Internet gateway&lt;/li&gt;
&lt;/ul&gt;
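
&lt;p&gt;The effective advertisement policy can be inspected directly on the FRR VM with read‑only &lt;code&gt;vtysh&lt;/code&gt; commands, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prefixes FRR is originating or has learned over BGP
vtysh -c "show bgp ipv4 unicast"

# Per-neighbor session state and prefix counts
vtysh -c "show bgp ipv4 unicast summary"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;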

&lt;h3&gt;
  
  
  4.2 Kubernetes Nodes as Disposable Compute Elements
&lt;/h3&gt;

&lt;p&gt;Kubernetes nodes are treated strictly as ephemeral compute resources. They host workloads and participate in routing advertisements via MetalLB, but they do not retain long‑term ownership of ingress IPs.&lt;/p&gt;

&lt;p&gt;Each node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establishes an explicit BGP peering relationship with FRR&lt;/li&gt;
&lt;li&gt;Advertises LoadBalancer IPs only while hosting active service endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This design reinforces the principle that nodes remain &lt;strong&gt;stateless with respect to ingress&lt;/strong&gt;, enabling both horizontal pod autoscaling and future node‑level autoscaling without architectural refactoring.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. FRR Configuration: Conceptual Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  5.1 Governing Principles
&lt;/h3&gt;

&lt;p&gt;The FRR configuration adheres to three guiding principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The routing fabric must be stable and long‑lived&lt;/li&gt;
&lt;li&gt;Compute nodes must remain disposable&lt;/li&gt;
&lt;li&gt;External and internal routing domains must be strictly segregated&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5.2 Functional Responsibilities of FRR
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Operate a BGP process under a dedicated autonomous system (AS 65001)&lt;/li&gt;
&lt;li&gt;Accept peering relationships from Kubernetes nodes (AS 65002)&lt;/li&gt;
&lt;li&gt;Advertise only internal fabric connectivity&lt;/li&gt;
&lt;/ol&gt;
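
&lt;p&gt;Combined with the policy objects shown in Section 4.1, these responsibilities reduce to a compact &lt;code&gt;frr.conf&lt;/code&gt; stanza. The neighbor addresses below (10.10.0.11, 10.10.0.12) follow the node addressing used in the diagrams and must be adjusted per node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;router bgp 65001
 bgp router-id 10.10.0.10
 no bgp default ipv4-unicast
 neighbor 10.10.0.11 remote-as 65002
 neighbor 10.10.0.12 remote-as 65002

 address-family ipv4 unicast
  neighbor 10.10.0.11 activate
  neighbor 10.10.0.12 activate
  redistribute connected route-map INTERNAL_ONLY
 exit-address-family
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because &lt;code&gt;no bgp default ipv4-unicast&lt;/code&gt; is set, each neighbor must be explicitly activated in the IPv4 unicast address family before any routes are exchanged.&lt;/p&gt;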

&lt;h3&gt;
  
  
  5.3 On Explicit Peer Configuration
&lt;/h3&gt;

&lt;p&gt;The requirement to explicitly configure BGP neighbors when adding new nodes is intrinsic to BGP’s design. This characteristic reflects protocol intentionality rather than architectural deficiency.&lt;/p&gt;

&lt;p&gt;Crucially, this constraint does not impede future autoscaling; it merely indicates that automation of peer management has not yet been introduced.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. MetalLB Configuration: Conceptual Overview
&lt;/h2&gt;

&lt;p&gt;In BGP mode, MetalLB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allocates LoadBalancer IPs from a predefined pool&lt;/li&gt;
&lt;li&gt;Advertises those IPs to FRR via BGP&lt;/li&gt;
&lt;li&gt;Withdraws advertisements automatically upon failure or topology changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MetalLB deliberately refrains from managing FRR configuration or dynamically creating BGP neighbors. This boundary enforces a clean separation between Kubernetes‑level intent and infrastructure‑level policy.&lt;/p&gt;
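
&lt;p&gt;In MetalLB's declarative API, this intent reduces to three resources. The sketch below follows the fabric design described here (peer 10.10.0.10, AS 65001/65002); the address pool range is an illustrative assumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: frr-fabric
  namespace: metallb-system
spec:
  myASN: 65002        # AS of the Kubernetes nodes
  peerASN: 65001      # AS of the FRR fabric router
  peerAddress: 10.10.0.10
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lb-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.10.0.200-10.10.0.220   # example range; choose per fabric plan
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: lb-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - lb-pool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;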




&lt;h2&gt;
  
  
  7. Autoscaling Readiness and Evolutionary Path
&lt;/h2&gt;

&lt;h3&gt;
  
  
  7.1 Pod‑Level Autoscaling
&lt;/h3&gt;

&lt;p&gt;Horizontal Pod Autoscaling operates entirely within the Kubernetes control plane and is unaffected by the external routing architecture. The design fully supports HPA without modification.&lt;/p&gt;
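
&lt;p&gt;A minimal &lt;code&gt;autoscaling/v2&lt;/code&gt; sketch illustrates this; the target Deployment name &lt;code&gt;web&lt;/code&gt;, replica bounds, and CPU threshold are illustrative choices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;No change to FRR or MetalLB is required as replicas scale: route advertisements track endpoint placement automatically.&lt;/p&gt;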

&lt;h3&gt;
  
  
  7.2 Node‑Level Scaling: Present and Future
&lt;/h3&gt;

&lt;p&gt;At present:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nodes are provisioned manually&lt;/li&gt;
&lt;li&gt;BGP peers are added manually&lt;/li&gt;
&lt;li&gt;System behavior remains stable and predictable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a future automated environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node provisioning becomes orchestrated&lt;/li&gt;
&lt;li&gt;BGP peering is automated or abstracted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Importantly, the &lt;strong&gt;network architecture itself remains unchanged&lt;/strong&gt;; only the automation layer evolves.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. Architectural Continuity Across Cloud Migration
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;On‑Premises Component&lt;/th&gt;
&lt;th&gt;Cloud Analog&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;FRR VM&lt;/td&gt;
&lt;td&gt;Managed cloud router / VPC router&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MetalLB&lt;/td&gt;
&lt;td&gt;Managed cloud LoadBalancer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Internal fabric&lt;/td&gt;
&lt;td&gt;VPC subnet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BGP semantics&lt;/td&gt;
&lt;td&gt;Provider‑managed routing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The architectural mental model therefore persists intact across deployment environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  9. Architecture Diagram (Textual Representation)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        External LAN (192.168.1.0/24)
                 |
              [ eth0 ]
            +----------------+
            |     FRR VM     |
            |    AS 65001    |
            +----------------+
              [ eth1 ]
                 |
        Kubernetes Fabric (10.10.0.0/24)
        ---------------------------------
        |               |               |
    [ Node‑1 ]       [ Node‑2 ]     [ Node‑N ]
     AS 65002         AS 65002        AS 65002
        |               |               |
      Pods            Pods            Pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  10. Sequence Diagram: LoadBalancer Traffic Flow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9gaxbv52cmsnajsgq19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9gaxbv52cmsnajsgq19.png" alt="LoadBalancer Traffic Flow" width="656" height="328"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@startuml
top to bottom direction

component Client
component "FRR\nBGP Speaker" as FRR
component "Kubernetes\nNode" as Node
component "Pod\nApplication" as Pod

Client --&amp;gt; FRR : TCP SYN to LoadBalancer IP
FRR --&amp;gt; Node : BGP-selected\nnext hop
Node --&amp;gt; Pod : Service routing\n(kube-proxy / dataplane)
Pod --&amp;gt; Node : Response payload
Node --&amp;gt; FRR : Return traffic
FRR --&amp;gt; Client : Forward response
@enduml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  11. Concluding Observations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;BGP mode is selected on the basis of architectural rigor rather than convenience&lt;/li&gt;
&lt;li&gt;FRR serves as a stable routing substrate, not an application gateway&lt;/li&gt;
&lt;li&gt;Nodes remain ephemeral and autoscaling‑compatible&lt;/li&gt;
&lt;li&gt;LoadBalancer semantics closely mirror those of managed cloud environments&lt;/li&gt;
&lt;li&gt;The design is simultaneously operationally sound today and structurally extensible for the future&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture prioritizes clarity, determinism, and long‑term evolutionary capacity—attributes essential to robust Kubernetes infrastructure at scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  PlantUML Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n6el4v2zdxaos3xsmaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n6el4v2zdxaos3xsmaj.png" alt="PlantUML Diagram" width="800" height="628"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@startuml
skinparam backgroundColor #FFFFFF
skinparam shadowing false
skinparam componentStyle rectangle
skinparam defaultFontName Monospace

title Kubernetes on-prem with MetalLB (BGP) and FRR Fabric Router

'========================
' External Network
'========================
package "External LAN\n192.168.1.0/24\n(Management / Internet)" {

  node "Client\n192.168.1.2" as client

  node "is-kube-01\nControl Plane\neth0: 192.168.1.11" as kube01_ext
  node "is-kube-02\nWorker Node\neth0: 192.168.1.12" as kube02_ext

  node "FRR-VM\neth0: 192.168.1.13\n(No BGP here)" as frr_ext
}

'========================
' Internal Fabric
'========================
package "Kubernetes Internal Fabric\n10.10.0.0/24" {

  node "FRR-VM\nFabric Router\neth1: 10.10.0.10\nAS 65001" as frr_int

  node "is-kube-01\neth1: 10.10.0.11\nkubelet --node-ip\nAS 65002" as kube01_int
  node "is-kube-02\neth1: 10.10.0.12\nkubelet --node-ip\nAS 65002" as kube02_int

  component "MetalLB Speaker\n(on each node)" as metallb
}

'========================
' Kubernetes Objects
'========================
package "Kubernetes Cluster" {

  component "kube-apiserver\n(6443)" as apiserver
  component "kube-proxy\niptables / IPVS" as kubeproxy
  component "Pods\n(Containers)" as pods
}

'========================
' Management / Admin Path
'========================
client --&amp;gt; kube01_ext : SSH / HTTPS / kubectl
client --&amp;gt; kube02_ext : (optional admin)

kube01_ext --&amp;gt; apiserver : control-plane
apiserver --&amp;gt; kube01_ext
apiserver --&amp;gt; kube02_ext

'========================
' Routing Control Plane (BGP)
'========================
kube01_int --&amp;gt; frr_int : BGP (TCP 179)\nAdvertise LB IPs
kube02_int --&amp;gt; frr_int : BGP (TCP 179)\nAdvertise LB IPs

metallb --&amp;gt; kube01_int : Speaker binds\nInternalIP
metallb --&amp;gt; kube02_int

note right of frr_int
FRR role:
- BGP fabric router
- Learns LoadBalancer IPs
- No NAT
- No data forwarding
end note

'========================
' Data Plane (Service Traffic)
'========================
frr_int ..&amp;gt; kube01_int : Routing info only\n(NO packets)
frr_int ..&amp;gt; kube02_int : Routing info only\n(NO packets)

kube01_int --&amp;gt; kubeproxy
kube02_int --&amp;gt; kubeproxy
kubeproxy --&amp;gt; pods

note bottom of kubeproxy
Actual data path:
Client -&amp;gt; Node holding LB IP
kube-proxy forwards to Pod
FRR NOT in packet path
end note

'========================
' Separation of Concerns
'========================
note bottom
eth0 (192.168.1.x):
- Internet
- Admin
- OS routing / apt

eth1 (10.10.x.x):
- Kubernetes fabric
- BGP
- MetalLB
end note

@enduml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Author's Note
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;This document was prepared as a general guide and reference note only. The content was drafted with the assistance of an AI system and subsequently reviewed, refined, and curated by the author.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>architecture</category>
      <category>kubernetes</category>
      <category>networking</category>
    </item>
    <item>
      <title>Hyper-V Configuration Guide for Kubernetes and Supporting Network Services</title>
      <dc:creator>Korakrit Chariyasathian</dc:creator>
      <pubDate>Sun, 04 Jan 2026 13:22:13 +0000</pubDate>
      <link>https://dev.to/korakrit/hyper-v-configuration-guide-for-kubernetes-and-supporting-network-services-1l6a</link>
      <guid>https://dev.to/korakrit/hyper-v-configuration-guide-for-kubernetes-and-supporting-network-services-1l6a</guid>
      <description>&lt;p&gt;This document describes a &lt;strong&gt;baseline, production-ready approach&lt;/strong&gt; for configuring &lt;strong&gt;Hyper-V Virtual Machines&lt;/strong&gt; used with &lt;strong&gt;Kubernetes (Control Plane / Worker Nodes)&lt;/strong&gt; and an &lt;strong&gt;FRR (Free Range Routing) VM&lt;/strong&gt;. The focus is on correct network adapter design, virtual switch usage, and security-related settings to ensure long-term stability and predictable behavior.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The Hyper-V host provides at least two virtual switches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;vEthernet-External&lt;/strong&gt;: Used for Internet access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;K8S-Internal&lt;/strong&gt;: Used for intra-cluster (node-to-node) communication&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Every Kubernetes node (Control Plane and Worker):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Has &lt;strong&gt;two NICs&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;External NIC → Internet access&lt;/li&gt;
&lt;li&gt;Internal NIC → Cluster communication&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The FRR VM acts strictly as an &lt;strong&gt;L3 router&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Baseline: Create a New Virtual Machine (Applies to All VMs)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Create New Virtual Machine
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;strong&gt;Hyper-V Manager&lt;/strong&gt; → &lt;code&gt;New&lt;/code&gt; → &lt;code&gt;Virtual Machine&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Specify Name:

&lt;ul&gt;
&lt;li&gt;Control Plane: &lt;code&gt;is-kube-control&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Worker: &lt;code&gt;is-kube-worker-XX&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;FRR: &lt;code&gt;is-frrouting-vm&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Specify Generation:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generation 2 (Required)&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;UEFI-based firmware&lt;/li&gt;
&lt;li&gt;Secure Boot support&lt;/li&gt;
&lt;li&gt;Required for modern Linux kernels (Kubernetes / FRR)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
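
&lt;p&gt;The wizard steps above can equivalently be scripted. The following PowerShell sketch creates a worker node; the VM name, VHD path, and sizes follow this guide's recommendations and are examples only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;# Generation 2 VM attached to the external switch (example name/path)
New-VM -Name "is-kube-worker-01" -Generation 2 `
  -MemoryStartupBytes 4GB `
  -SwitchName "vEthernet-External" `
  -NewVHDPath "C:\Hyper-V\is-kube-worker-01.vhdx" -NewVHDSizeBytes 40GB

# Fixed memory, per the rationale above
Set-VMMemory -VMName "is-kube-worker-01" -DynamicMemoryEnabled $false

# Secure Boot with the Linux-compatible certificate template
Set-VMFirmware -VMName "is-kube-worker-01" -EnableSecureBoot On `
  -SecureBootTemplate "MicrosoftUEFICertificateAuthority"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;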




&lt;h3&gt;
  
  
  2. Assign Memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Disable Dynamic Memory&lt;/strong&gt; (recommended)&lt;/li&gt;
&lt;li&gt;Startup Memory:

&lt;ul&gt;
&lt;li&gt;Control Plane / Worker: &lt;strong&gt;≥ 4 GB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;FRR VM: &lt;strong&gt;≥ 2 GB&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Rationale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The kubelet derives allocatable memory from a fixed total; ballooning under Dynamic Memory skews eviction thresholds and scheduling decisions&lt;/li&gt;
&lt;li&gt;A fixed allocation avoids latency spikes, spurious pod evictions, and scheduling edge cases&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. Configure Networking (Initial Wizard Step)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Assign the first NIC to:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;vEthernet-External&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The second NIC will be added after VM creation&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. Connect Virtual Hard Disk
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a new virtual hard disk&lt;/li&gt;
&lt;li&gt;Format: &lt;strong&gt;VHDX (Dynamic)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Recommended size:

&lt;ul&gt;
&lt;li&gt;Control Plane / Worker: &lt;strong&gt;≥ 40 GB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;FRR VM: &lt;strong&gt;≥ 20 GB&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  
  
  5. Installation Options
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Install an operating system later&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Mount the ISO after VM creation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Operating System Baseline
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;OS: &lt;strong&gt;Ubuntu 24.04 LTS (Server)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Installation Mode: &lt;strong&gt;Minimal Install&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Initial State:

&lt;ul&gt;
&lt;li&gt;No additional packages installed&lt;/li&gt;
&lt;li&gt;Docker / containerd / Kubernetes components &lt;strong&gt;not installed yet&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Enabled services:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSH only&lt;/strong&gt; (for initial access and automation)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This approach minimizes the attack surface and ensures a clean baseline before installing Kubernetes or FRR components.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Netplan Baseline Configuration
&lt;/h3&gt;

&lt;p&gt;Static network configuration is used to ensure deterministic routing and traffic flow.&lt;/p&gt;

&lt;p&gt;Example: &lt;code&gt;/etc/netplan/50-cloud-init.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;ethernets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eth0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;dhcp4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.1.10/24&lt;/span&gt;
      &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
          &lt;span class="na"&gt;via&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.1.1&lt;/span&gt;
      &lt;span class="na"&gt;nameservers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;1.1.1.1&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8.8.8.8&lt;/span&gt;

    &lt;span class="na"&gt;eth1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;dhcp4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.10.0.10/24&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;eth0&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;External NIC (&lt;code&gt;vEthernet-External&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Default route and Internet access&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;eth1&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;Internal NIC (&lt;code&gt;K8S-Internal&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Node-to-node / Pod network traffic&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: No default route is configured on the internal NIC to avoid asymmetric routing issues.&lt;/p&gt;
&lt;/blockquote&gt;
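
&lt;p&gt;After applying the configuration, the single default route can be verified quickly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo netplan apply

# Exactly one default route is expected, via the external gateway:
ip route show default
# default via 192.168.1.1 dev eth0 proto static
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;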




&lt;h2&gt;
  
  
  Post-Creation VM Configuration (Before First Boot)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Firmware / Secure Boot
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;VM Settings → &lt;strong&gt;Firmware&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Secure Boot:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Enabled&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Template: &lt;code&gt;Microsoft UEFI Certificate Authority&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Compatible with Ubuntu, Debian, RHEL, Rocky Linux, and FRR-based systems.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  2. Processor / Hardware Acceleration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Virtual processors: &lt;strong&gt;≥ 2 vCPU&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Nested Virtualization:

&lt;ul&gt;
&lt;li&gt;Disable unless explicitly required (e.g., KinD, KVM-in-VM)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Default Hyper-V processor settings are sufficient. No additional hardware acceleration tuning is required.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  3. Memory Settings
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic Memory: &lt;strong&gt;Disabled&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Leave advanced memory mapping options at default values&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. Network Adapters – Advanced Features
&lt;/h3&gt;

&lt;h4&gt;
  
  
  External NIC (&lt;code&gt;vEthernet-External&lt;/code&gt;)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Enable:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MAC Address Spoofing&lt;/strong&gt; (Kubernetes nodes only)&lt;/li&gt;
&lt;li&gt;VRSS (Virtual Receive Side Scaling)&lt;/li&gt;
&lt;li&gt;RSC (Receive Segment Coalescing)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Disable:

&lt;ul&gt;
&lt;li&gt;DHCP Guard&lt;/li&gt;
&lt;li&gt;Router Guard&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Internal NIC (&lt;code&gt;K8S-Internal&lt;/code&gt;)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Add a second Network Adapter&lt;/li&gt;
&lt;li&gt;Switch: &lt;code&gt;K8S-Internal&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;MAC Address Spoofing:

&lt;ul&gt;
&lt;li&gt;Kubernetes Nodes: &lt;strong&gt;Recommended: On&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;FRR VM: &lt;strong&gt;Off&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  
  
  5. VLAN Configuration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;VLAN Mode: &lt;strong&gt;Untagged (Default)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  6. Integration Services
&lt;/h3&gt;

&lt;p&gt;Keep the default integration services enabled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Heartbeat&lt;/li&gt;
&lt;li&gt;Key-Value Pair Exchange&lt;/li&gt;
&lt;li&gt;Shutdown&lt;/li&gt;
&lt;li&gt;Time Synchronization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Optional:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Guest Service Interface (only if file copy from host is required)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;No integration services need to be disabled for Kubernetes or FRR.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Part 1: Control Plane Node Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Network Adapter Layout
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;NIC&lt;/th&gt;
&lt;th&gt;Hyper-V Switch&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NIC-1&lt;/td&gt;
&lt;td&gt;vEthernet-External&lt;/td&gt;
&lt;td&gt;Internet, image pulls, external APIs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NIC-2&lt;/td&gt;
&lt;td&gt;K8S-Internal&lt;/td&gt;
&lt;td&gt;Pod-to-Pod and Node-to-Node traffic&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  MAC Address Spoofing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;External NIC: &lt;strong&gt;On&lt;/strong&gt; (Required)&lt;/li&gt;
&lt;li&gt;Internal NIC:

&lt;ul&gt;
&lt;li&gt;Overlay CNI (Flannel VXLAN, Calico VXLAN): Optional&lt;/li&gt;
&lt;li&gt;Non-overlay / L2 CNI: On&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Best practice: Enable MAC Address Spoofing on both NICs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Resource Recommendation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CPU: ≥ 2 vCPU&lt;/li&gt;
&lt;li&gt;Memory: ≥ 4 GB&lt;/li&gt;
&lt;li&gt;Disk: ≥ 40 GB&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Part 2: Worker Node Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Network Adapter Layout
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;NIC&lt;/th&gt;
&lt;th&gt;Hyper-V Switch&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NIC-1&lt;/td&gt;
&lt;td&gt;vEthernet-External&lt;/td&gt;
&lt;td&gt;Internet and image pulls&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NIC-2&lt;/td&gt;
&lt;td&gt;K8S-Internal&lt;/td&gt;
&lt;td&gt;East–West cluster traffic&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  MAC Address Spoofing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;External NIC: &lt;strong&gt;On (Mandatory)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Internal NIC: Depends on CNI&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Recommended: Enable on both NICs to avoid future edge cases.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Resource Recommendation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CPU: ≥ 2 vCPU (scale with workload)&lt;/li&gt;
&lt;li&gt;Memory: 4–8 GB&lt;/li&gt;
&lt;li&gt;Disk: ≥ 40 GB&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Part 3: FRR VM Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Network Adapter Layout
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;NIC&lt;/th&gt;
&lt;th&gt;Hyper-V Switch&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NIC-1&lt;/td&gt;
&lt;td&gt;vEthernet-External&lt;/td&gt;
&lt;td&gt;Upstream / Internet routing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NIC-2&lt;/td&gt;
&lt;td&gt;K8S-Internal&lt;/td&gt;
&lt;td&gt;Internal routing / peering&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  MAC Address Spoofing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;All NICs: Off&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rationale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FRR operates strictly at Layer 3&lt;/li&gt;
&lt;li&gt;Always uses its own interface MAC addresses&lt;/li&gt;
&lt;li&gt;No container, bridge, or L2 forwarding traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Resource Recommendation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CPU: ≥ 2 vCPU&lt;/li&gt;
&lt;li&gt;Memory: ≥ 2 GB&lt;/li&gt;
&lt;li&gt;Disk: ≥ 20 GB&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Summary of Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes Nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable MAC Address Spoofing at least on the External NIC&lt;/li&gt;
&lt;li&gt;Prefer enabling it on both NICs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;FRR VM:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MAC Address Spoofing is not required&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Clearly separate External and Internal NIC roles&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Overlay CNIs reduce Layer-2 dependencies but do not eliminate the need for spoofing on egress NICs&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Additional Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Re-evaluate MAC Address Spoofing if the CNI or network design changes&lt;/li&gt;
&lt;li&gt;Enabling MAC Address Spoofing has no meaningful performance impact on Hyper-V&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Author's Note
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;This guide was written primarily as a set of personal design and implementation notes. Its purpose is to document not only &lt;em&gt;what&lt;/em&gt; was configured, but &lt;em&gt;why&lt;/em&gt; specific decisions were made, in order to preserve architectural intent and reduce ambiguity during future maintenance, scaling, or redesign efforts.&lt;/p&gt;

&lt;p&gt;The content of this document was drafted with the assistance of an AI system and subsequently reviewed and curated by the author.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>architecture</category>
      <category>kubernetes</category>
      <category>microsoft</category>
      <category>networking</category>
    </item>
    <item>
      <title>Installing Kubernetes and Worker Node Setup on Ubuntu 24.04</title>
      <dc:creator>Korakrit Chariyasathian</dc:creator>
      <pubDate>Thu, 25 Dec 2025 07:32:08 +0000</pubDate>
      <link>https://dev.to/korakrit/installing-kubernetes-and-worker-node-setup-on-ubuntu-2404-2oag</link>
      <guid>https://dev.to/korakrit/installing-kubernetes-and-worker-node-setup-on-ubuntu-2404-2oag</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes Setup on Ubuntu 24.04 (Kubernetes v1.33)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1. Prepare the Ubuntu VM
&lt;/h2&gt;

&lt;p&gt;Update system packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Disable swap (required by Kubernetes):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^/#/'&lt;/span&gt; /etc/fstab
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/swap.img/ s/^/#/'&lt;/span&gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
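&lt;p&gt;To confirm swap stays off:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;swapon --show            # empty output means no active swap
grep -i swap /etc/fstab  # any remaining swap entries should be commented out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;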






&lt;h2&gt;
  
  
  2. Enable Kernel Modules and Sysctl Settings
&lt;/h2&gt;

&lt;p&gt;Load required kernel modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay
&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure they load on boot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/modules-load.d/kubernetes.conf
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set required sysctl parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/sysctl.d/kubernetes.conf &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply sysctl changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
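&lt;p&gt;Optionally spot-check the values that matter for Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# expected:
# net.ipv4.ip_forward = 1
# net.bridge.bridge-nf-call-iptables = 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;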






&lt;h2&gt;
  
  
  3. Install and Configure Containerd (Container Runtime)
&lt;/h2&gt;

&lt;p&gt;Kubernetes v1.33 requires &lt;strong&gt;CRI v1&lt;/strong&gt;. Docker (dockershim) is not supported.&lt;/p&gt;

&lt;p&gt;Install containerd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate the default containerd configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable systemd cgroup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/SystemdCgroup = false/SystemdCgroup = true/'&lt;/span&gt; /etc/containerd/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart and enable containerd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
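&lt;p&gt;A quick sanity check that the cgroup change took effect and the service is healthy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;grep 'SystemdCgroup' /etc/containerd/config.toml  # should print: SystemdCgroup = true
systemctl is-active containerd                    # should print: active
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;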



&lt;p&gt;Note: The CRI plugin is enabled by default in the standard &lt;strong&gt;containerd&lt;/strong&gt; package configuration, so you generally do not need to add a CRI block manually. If you want to verify, confirm the CRI plugin section is present and not disabled, and that a current pause image is set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[plugins."io.containerd.grpc.v1.cri"]&lt;/span&gt;
  &lt;span class="py"&gt;sandbox_image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"registry.k8s.io/pause:3.10"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart containerd to apply any configuration changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  4. Add the Kubernetes APT Repository (v1.33)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl gpg
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
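&lt;p&gt;Optionally confirm the repository resolves and offers v1.33 packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-cache policy kubeadm  # the candidate should be a 1.33.x build from pkgs.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;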






&lt;h2&gt;
  
  
  5. Install Kubernetes Tools
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm kubectl
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
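&lt;p&gt;Verify the installed versions and that the packages are pinned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm version -o short  # e.g. v1.33.x
kubectl version --client
apt-mark showhold         # should list kubeadm, kubectl, kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;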



&lt;p&gt;(Optional but common) Ensure kubelet is enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  6. Install Helm
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify Helm and add repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm version
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
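&lt;p&gt;Confirm the repository was registered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo list  # prometheus-community should appear in the list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;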






&lt;h2&gt;
  
  
  7. Initialize the Kubernetes Control Plane
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.0/16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
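&lt;p&gt;On a dual-homed node, kubeadm advertises the address of the interface carrying the default route. To pin the API server to the internal network instead, pass the internal IP explicitly (the address below is an example; use your node's internal NIC address):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo kubeadm init \
  --pod-network-cidr=10.0.0.0/16 \
  --apiserver-advertise-address=10.10.0.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;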






&lt;h2&gt;
  
  
  8. Set Up kubeconfig
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  9. Verify Node Status
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The node (here named &lt;code&gt;is-kube&lt;/code&gt;) will show &lt;strong&gt;NotReady&lt;/strong&gt; until a CNI plugin is installed.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Install Cilium CNI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install cilium-cli
&lt;/h3&gt;

&lt;p&gt;Download the Cilium CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;--remote-name&lt;/span&gt; https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
&lt;span class="nb"&gt;sudo tar &lt;/span&gt;xzvf cilium-linux-amd64.tar.gz &lt;span class="nt"&gt;-C&lt;/span&gt; /usr/local/bin
&lt;span class="nb"&gt;rm &lt;/span&gt;cilium-linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
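&lt;p&gt;Check that the CLI is on the PATH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cilium version  # prints the cilium-cli version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;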



&lt;h3&gt;
  
  
  Install Cilium
&lt;/h3&gt;

&lt;p&gt;Install Cilium into the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cilium &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify Cluster Status
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
cilium status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Worker Node Setup
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1. Prepare the Ubuntu VM
&lt;/h2&gt;

&lt;p&gt;Update system packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^/#/'&lt;/span&gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Set Up Kernel Modules and Sysctl
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay
&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/modules-load.d/kubernetes.conf
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Install and configure containerd
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; containerd

&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null

&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/SystemdCgroup = false/SystemdCgroup = true/'&lt;/span&gt; /etc/containerd/config.toml

&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Add Kubernetes apt repo (v1.33) and install kubelet/kubeadm
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl gpg
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings

curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubeadm

&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Join the worker to the cluster
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1. Generate a Join Token on the Control Plane
&lt;/h2&gt;

&lt;p&gt;Run this on the control-plane node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm token create &lt;span class="nt"&gt;--print-join-command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It prints something like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo kubeadm join &amp;lt;CONTROL_PLANE_IP&amp;gt;:6443 --token &amp;lt;TOKEN&amp;gt; --discovery-token-ca-cert-hash sha256:&amp;lt;HASH&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
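&lt;p&gt;If the join command is lost, the CA certificate hash can be recomputed on the control plane from the standard kubeadm PKI path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2&amp;gt;/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;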

&lt;h2&gt;
  
  
  2. Run that join command on the worker
&lt;/h2&gt;

&lt;p&gt;Paste and run it on the worker node (with &lt;strong&gt;sudo&lt;/strong&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm &lt;span class="nb"&gt;join&lt;/span&gt; &amp;lt;CONTROL_PLANE_IP&amp;gt;:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; &amp;lt;TOKEN&amp;gt; &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&amp;lt;HASH&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Verify from the Control Plane
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
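&lt;p&gt;Optionally, label the new worker so its role shows in &lt;code&gt;kubectl get nodes&lt;/code&gt; (the node name is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl label node &amp;lt;WORKER_NAME&amp;gt; node-role.kubernetes.io/worker=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;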



</description>
      <category>ubuntu</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Installing Kubernetes Single-Node Setup on Ubuntu 24.04</title>
      <dc:creator>Korakrit Chariyasathian</dc:creator>
      <pubDate>Wed, 21 May 2025 15:47:17 +0000</pubDate>
      <link>https://dev.to/korakrit/installing-kubernetes-single-node-setup-on-ubuntu-2404-4f47</link>
      <guid>https://dev.to/korakrit/installing-kubernetes-single-node-setup-on-ubuntu-2404-4f47</guid>
      <description>&lt;h3&gt;
  
  
  1. Prepare the Ubuntu VM
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Update system packages:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Disable swap (Kubernetes requires this):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^/#/'&lt;/span&gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Enable Kernel Modules and Sysctl Settings
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay
&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/sysctl.d/kubernetes.conf &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Install Docker or Containerd (Container Runtime)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker.io
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;docker
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or install containerd (recommended: Kubernetes v1.33 requires a CRI runtime, and plain Docker needs the separate cri-dockerd shim):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd
containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable systemd cgroup&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/SystemdCgroup = false/SystemdCgroup = true/'&lt;/span&gt; /etc/containerd/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/containerd/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Search for the &lt;code&gt;[plugins."io.containerd.grpc.v1.cri"]&lt;/code&gt; section and set the sandbox (pause) image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sandbox_image &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"registry.k8s.io/pause:3.10"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Add the Kubernetes APT Repository
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
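&lt;p&gt;To see which patch releases the v1.33 repo currently offers (useful before installing and pinning):&lt;/p&gt;

```shell
# Lists candidate kubeadm versions available from the newly added repo
apt-cache madison kubeadm
```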



&lt;h3&gt;
  
  
  5. Install Kubernetes Tools: kubeadm, kubelet, kubectl
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm kubectl
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
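&lt;p&gt;A quick sanity check that the tools installed and that the hold is in place:&lt;/p&gt;

```shell
kubeadm version -o short
kubectl version --client
# Should list kubelet, kubeadm and kubectl as held back from upgrades
apt-mark showhold
```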



&lt;h3&gt;
  
  
  6. Install Helm
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, to pin a specific Helm version manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-zxvf&lt;/span&gt; helm-v3.17.3-linux-amd64.tar.gz
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;linux-amd64/helm /usr/local/bin/helm
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; helm-v3.17.3-linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm version

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. Initialize the Kubernetes Control Plane
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.0/16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
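&lt;p&gt;&lt;code&gt;kubeadm init&lt;/code&gt; finishes by printing a &lt;code&gt;kubeadm join&lt;/code&gt; command for worker nodes. If that output scrolls away, an equivalent one can be regenerated at any time:&lt;/p&gt;

```shell
# Prints a fresh join command, including a new bootstrap token and the CA cert hash
sudo kubeadm token create --print-join-command
```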



&lt;h3&gt;
  
  
  8. Set up kubeconfig
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
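&lt;p&gt;With the kubeconfig in place, the API server should now answer:&lt;/p&gt;

```shell
kubectl cluster-info
# Control-plane pods (etcd, apiserver, scheduler) should be Running;
# coredns stays Pending until the Pod network is installed
kubectl get pods -n kube-system
```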



&lt;h3&gt;
  
  
  9. Check Node Status
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Output: the node &lt;code&gt;is-kube&lt;/code&gt; shows as &lt;code&gt;NotReady&lt;/code&gt;, because no Pod network (CNI) has been installed yet&lt;/li&gt;
&lt;/ul&gt;
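&lt;p&gt;The reason for &lt;code&gt;NotReady&lt;/code&gt; is visible in the node conditions (substitute your own hostname for &lt;code&gt;is-kube&lt;/code&gt;):&lt;/p&gt;

```shell
# The Ready condition reports "container runtime network not ready ...
# cni plugin not initialized" until a CNI is installed
kubectl describe node is-kube
```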




&lt;h2&gt;
  
  
  Install Cilium CNI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Install cilium-cli
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;--remote-name&lt;/span&gt; https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
&lt;span class="nb"&gt;sudo tar &lt;/span&gt;xzvf cilium-linux-amd64.tar.gz &lt;span class="nt"&gt;-C&lt;/span&gt; /usr/local/bin
&lt;span class="nb"&gt;rm &lt;/span&gt;cilium-linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install Cilium
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cilium &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
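&lt;p&gt;The install command returns before the agents are actually up; cilium-cli can block until everything is ready:&lt;/p&gt;

```shell
# Waits until all Cilium components report OK instead of returning immediately
cilium status --wait
```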



&lt;h3&gt;
  
  
  3. Check Node Status and Cilium Status
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
cilium status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
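&lt;p&gt;For a deeper sanity check, cilium-cli ships an end-to-end connectivity test that deploys probe pods and exercises pod-to-pod, pod-to-service and egress paths. It takes several minutes and needs to pull images:&lt;/p&gt;

```shell
cilium connectivity test
```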



</description>
    </item>
    <item>
      <title>Grafana (MySQL) + Nginx reverse proxy</title>
      <dc:creator>Korakrit Chariyasathian</dc:creator>
      <pubDate>Thu, 21 May 2020 17:57:49 +0000</pubDate>
      <link>https://dev.to/korakrit/grafana-mysql-nginx-reverse-proxy-1nl1</link>
      <guid>https://dev.to/korakrit/grafana-mysql-nginx-reverse-proxy-1nl1</guid>
      <description>&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Clone the repository korakrit-c/grafana-with-nginx-on-docker&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/korakrit-c/grafana-with-nginx-on-docker.git
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure docker-compose is available, then start the stack&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new config file in "nginx-revers-storage/conf.d/"&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd nginx-revers-storage/conf.d/
nano grafana.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add this config&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream grafana_dashboard {
  server    grafana_dashboard:3000;
}
server {
    listen 80;
    listen [::]:80;
    server_name grafana.local;
    location / {
        proxy_pass http://grafana_dashboard;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reload nginx to pick up the new config&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec nginx_reverse nginx -s reload
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
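&lt;p&gt;Without editing /etc/hosts for grafana.local, the proxy can be exercised directly by overriding the Host header (assuming nginx publishes port 80 on localhost):&lt;/p&gt;

```shell
# A 200 or 302 here means nginx matched the server_name and reached the Grafana upstream
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: grafana.local' http://localhost/
```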

</description>
      <category>grafana</category>
      <category>nginx</category>
      <category>reverse</category>
      <category>proxy</category>
    </item>
    <item>
      <title>Installing Docker on Raspberry Pi 4</title>
      <dc:creator>Korakrit Chariyasathian</dc:creator>
      <pubDate>Tue, 19 May 2020 17:18:27 +0000</pubDate>
      <link>https://dev.to/korakrit/docker-on-raspberry-pi-4-4m9b</link>
      <guid>https://dev.to/korakrit/docker-on-raspberry-pi-4-4m9b</guid>
      <description>&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Update repository and upgrade packages&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get upgrade
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download the installer script from get.docker.com and run it to install Docker&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL get.docker.com -o get-docker.sh &amp;amp;&amp;amp; sh get-docker.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92ft7gtqphact33xjvge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92ft7gtqphact33xjvge.png" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add "pi" user to Docker Group&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker pi
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set up the Docker repo&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/apt/sources.list.d/docker.list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add this&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deb https://download.docker.com/linux/raspbian/ stretch stable
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Update repository and upgrade packages again&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get upgrade
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
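&lt;p&gt;After logging out and back in (so the docker group membership takes effect), verify the install:&lt;/p&gt;

```shell
docker version
# Pulls a tiny test image and prints a hello message if the daemon works end to end
docker run --rm hello-world
```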

</description>
      <category>docker</category>
      <category>raspberrypi</category>
      <category>pi</category>
    </item>
  </channel>
</rss>
