<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: César Sepúlveda Barra</title>
    <description>The latest articles on DEV Community by César Sepúlveda Barra (@csepulvedab).</description>
    <link>https://dev.to/csepulvedab</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3033680%2F63e29f3a-5f38-4c70-bd24-4787e669bf12.png</url>
      <title>DEV Community: César Sepúlveda Barra</title>
      <link>https://dev.to/csepulvedab</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/csepulvedab"/>
    <language>en</language>
    <item>
      <title>I Built a Kubernetes Operator That Programs My Cisco Router</title>
      <dc:creator>César Sepúlveda Barra</dc:creator>
      <pubDate>Tue, 24 Feb 2026 20:36:13 +0000</pubDate>
      <link>https://dev.to/csepulvedab/i-built-a-kubernetes-operator-that-programs-my-cisco-router-5b9e</link>
      <guid>https://dev.to/csepulvedab/i-built-a-kubernetes-operator-that-programs-my-cisco-router-5b9e</guid>
      <description>&lt;p&gt;I wrote a Kubernetes operator in Go that talks to a Cisco 4331 router via RESTCONF. It creates VLANs, DHCP pools, and ACLs on the router, all triggered by &lt;code&gt;kubectl apply&lt;/code&gt;. Pods get their IPs directly from the router's DHCP server, and inter-VLAN traffic is controlled by real ACLs running on real hardware. This is the full walkthrough.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5liji8w4w7sm7khpa5xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5liji8w4w7sm7khpa5xp.png" alt="mini eks cluster &amp;amp; cisco gear" width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;1. Why?&lt;/h2&gt;

&lt;p&gt;Kubernetes networking is, by default, flat. Every pod can reach every other pod. That's fine for many workloads, but in plenty of scenarios you actually want segmentation. You want the database on a different network than the web servers. You want firewall rules between them.&lt;/p&gt;

&lt;h3&gt;There are better tools for this&lt;/h3&gt;

&lt;p&gt;I want to be honest from the start: if you need network segmentation in Kubernetes today, &lt;strong&gt;use Cilium or Calico&lt;/strong&gt;. They provide NetworkPolicy enforcement, eBPF-based segmentation, encryption, and observability. They work in software, they scale, and thousands of companies run them in production. That's the right answer for most people.&lt;/p&gt;

&lt;p&gt;If you're deep in the Cisco world, &lt;strong&gt;ACI with Nexus switches&lt;/strong&gt; is the official enterprise play. It integrates natively with Kubernetes and gives you policy-driven microsegmentation, multi-tenant networking, and full visibility. But it requires Nexus 9000 hardware and APIC controllers, and that's a serious investment.&lt;/p&gt;

&lt;h3&gt;So why did I do this?&lt;/h3&gt;

&lt;p&gt;I had a Cisco 4331 router and a Catalyst switch on my desk. Not data center gear. Just mid-range networking equipment, the kind of thing you can pick up on eBay for a couple hundred bucks. The 4331 runs IOS-XE 16.12 and has &lt;strong&gt;RESTCONF&lt;/strong&gt; enabled: a REST API for managing the router configuration over HTTPS.&lt;/p&gt;

&lt;p&gt;I wanted to see if I could wire that into Kubernetes. Not through some vendor plugin, but through a custom operator that I wrote from scratch. The idea was simple: define VLANs and policies as Kubernetes CRDs, and let the operator program the router to make them real.&lt;/p&gt;

&lt;p&gt;A pod says "I belong to VLAN 10." The operator creates the subinterface on the router, sets up DHCP, writes the ACLs, and the pod gets an IP from the router. No CLI sessions. No separate workflow. Just YAML.&lt;/p&gt;

&lt;p&gt;This was never about competing with Cilium or replacing ACI. It was about proving that the operator pattern can extend Kubernetes to control physical network infrastructure, even with equipment that wasn't designed for this.&lt;/p&gt;




&lt;h2&gt;2. Physical Architecture&lt;/h2&gt;

&lt;p&gt;This runs on real hardware. No simulations, no GNS3, no virtual routers.&lt;/p&gt;

&lt;h3&gt;The Lab&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;IP&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Router&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cisco 4331 (IOS-XE 16.12)&lt;/td&gt;
&lt;td&gt;192.168.200.1&lt;/td&gt;
&lt;td&gt;Gateway, DHCP, ACLs, RESTCONF API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Switch&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cisco Catalyst&lt;/td&gt;
&lt;td&gt;192.168.200.2&lt;/td&gt;
&lt;td&gt;L2 switching, VLAN trunks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Node 1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Mini PC (8 vCPU, 32 GB)&lt;/td&gt;
&lt;td&gt;192.168.200.11&lt;/td&gt;
&lt;td&gt;K3s server + worker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Node 2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Mini PC (8 vCPU, 24 GB)&lt;/td&gt;
&lt;td&gt;192.168.200.12&lt;/td&gt;
&lt;td&gt;K3s server + worker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Node 3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Mini PC (4 vCPU, 16 GB)&lt;/td&gt;
&lt;td&gt;192.168.200.13&lt;/td&gt;
&lt;td&gt;K3s server + worker&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;Network Topology&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wxeibex2e2238y3tmaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wxeibex2e2238y3tmaa.png" alt="Network Topology" width="521" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Two NICs per Node&lt;/h3&gt;

&lt;p&gt;Each node has two network interfaces. This is key to the whole design.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NIC1 (&lt;code&gt;enp1s0&lt;/code&gt;)&lt;/strong&gt;: Management. Goes to an access port on the switch, VLAN 1, network &lt;code&gt;192.168.200.0/24&lt;/code&gt;. Carries all the K3s traffic: API server, etcd, Flannel overlay.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NIC2 (&lt;code&gt;enp2s0&lt;/code&gt;)&lt;/strong&gt;: Trunk. Goes to a trunk port on the switch that allows VLANs 10, 20, 30. This is the VLAN data plane. Pods attach to this interface via macvlan.&lt;/li&gt;
&lt;/ul&gt;
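&lt;p&gt;The "pods attach via macvlan" part is typically wired up through Multus. The operator generates one &lt;code&gt;NetworkAttachmentDefinition&lt;/code&gt; per VLAN; the sketch below is my reconstruction of what one could look like (the NAD name and the &lt;code&gt;enp2s0.10&lt;/code&gt; master interface are assumptions), using the CNI &lt;code&gt;dhcp&lt;/code&gt; IPAM plugin so that leases actually come from the router:&lt;/p&gt;

```yaml
# Hypothetical NAD for VLAN 10. The master is a VLAN subinterface
# (enp2s0.10) created on each node, so macvlan traffic leaves the
# trunk NIC already tagged with VLAN 10.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-10
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "enp2s0.10",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }
```

The &lt;code&gt;dhcp&lt;/code&gt; IPAM type requires a DHCP client daemon on every node, which is consistent with the DaemonSet described later in the article.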

&lt;h3&gt;Switch Ports&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ports connected to NIC2 on each node: &lt;strong&gt;trunk mode&lt;/strong&gt;, allowing tagged traffic for VLANs 10, 20, 30&lt;/li&gt;
&lt;li&gt;Ports connected to NIC1 on each node: &lt;strong&gt;access mode&lt;/strong&gt;, VLAN 1 (management)&lt;/li&gt;
&lt;li&gt;Uplink to router Gi0/0/0: &lt;strong&gt;trunk mode&lt;/strong&gt; (802.1Q), carrying all VLANs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Router on a Stick&lt;/h3&gt;

&lt;p&gt;The router's &lt;code&gt;GigabitEthernet0/0/0&lt;/code&gt; is a trunk. It has subinterfaces for each VLAN (&lt;code&gt;.10&lt;/code&gt;, &lt;code&gt;.20&lt;/code&gt;, &lt;code&gt;.30&lt;/code&gt;), each one with its own IP, DHCP pool, and ACL. All inter-VLAN traffic goes through the router, where ACLs decide what passes and what gets dropped.&lt;/p&gt;
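&lt;p&gt;In classic CLI terms, the per-VLAN state the operator manages looks roughly like this. This is a sketch assembled from the values in the article (the &lt;code&gt;in&lt;/code&gt; direction on the access-group is my assumption), not a dump from the router:&lt;/p&gt;

```
interface GigabitEthernet0/0/0.10
 encapsulation dot1Q 10
 ip address 172.16.10.1 255.255.255.0
 ip nat inside
 ip access-group VLAN10_ACL in
!
ip dhcp pool VLAN10_POOL
 network 172.16.10.0 255.255.255.0
 default-router 172.16.10.1
```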

&lt;h3&gt;Two Networks per Pod&lt;/h3&gt;

&lt;p&gt;Every pod ends up with two interfaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Pod
├── eth0:  10.42.x.x  (Flannel overlay: K8s API, DNS, Services)
└── net1: 172.16.x.x  (VLAN via macvlan: app data traffic)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes control traffic (API, DNS, service discovery) stays on the Flannel overlay. Application traffic between VLANs goes over the physical network, through the router, through real ACLs.&lt;/p&gt;




&lt;h2&gt;3. Step by Step&lt;/h2&gt;

&lt;p&gt;Starting from a completely clean cluster. No operator installed, no VLANs, no demo apps. Everything from scratch.&lt;/p&gt;

&lt;h3&gt;3.1 Install the Operator&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;cisco-restconf-operator ./charts/cisco-restconf-operator &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; image.tag&lt;span class="o"&gt;=&lt;/span&gt;v0.8.1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; router.username&lt;span class="o"&gt;=&lt;/span&gt;admin &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; router.password&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'1234'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME: cisco-restconf-operator
LAST DEPLOYED: Tue Feb 24 15:08:32 2026
NAMESPACE: default
STATUS: deployed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One command. That's it. Here's what it created:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three CRDs&lt;/strong&gt; registered in the Kubernetes API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crd | grep cisco
ciscorouterconfigs.cisco.io    2026-02-24T18:08:32Z
ciscovlans.cisco.io            2026-02-24T18:08:32Z
vlanpolicies.cisco.io          2026-02-24T18:08:32Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The operator pod&lt;/strong&gt; plus a &lt;strong&gt;DaemonSet&lt;/strong&gt; running on all three nodes (it creates the VLAN subinterfaces and runs the DHCP CNI daemon):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n cisco-operator-system
NAME                                                         READY   STATUS
cisco-restconf-operator-controller-manager-74bdd6c5c-x48f2   1/1     Running
cisco-restconf-operator-vlan-setup-6qgdw                     2/2     Running
cisco-restconf-operator-vlan-setup-7dj8f                     2/2     Running
cisco-restconf-operator-vlan-setup-gldg4                     2/2     Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Two secrets&lt;/strong&gt;: one with the router credentials, one with TLS certs for the mutating webhook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get secret -n cisco-operator-system | grep -E "router|webhook"
cisco-restconf-operator-webhook-tls   kubernetes.io/tls   2
cisco-router-credentials              Opaque              2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here's the interesting part. The Helm chart also creates a &lt;strong&gt;CiscoRouterConfig&lt;/strong&gt; object, and the operator immediately connects to the router via RESTCONF:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get ciscorouterconfigs -o wide
NAME      HOST            CONNECTED   MESSAGE                                  AGE
default   192.168.200.1   true        Connected to LAB-ROUTER (IOS-XE 16.12)   26s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at the status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;connected&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LAB-ROUTER&lt;/span&gt;
  &lt;span class="na"&gt;lastConnected&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2026-02-24T18:08:35Z"&lt;/span&gt;
  &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Connected to LAB-ROUTER (IOS-XE 16.12)&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;16.12"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The operator is alive, talking to the router, and reporting back its hostname and software version. It re-validates the connection every 5 minutes.&lt;/p&gt;
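&lt;p&gt;The article doesn't show the RESTCONF requests themselves, but the health check can be as simple as a GET against the device's YANG models. A minimal Python sketch that only builds the URLs and headers (no live call; the paths follow standard IOS-XE RESTCONF conventions, and the hostname leaf is my guess at what the operator queries):&lt;/p&gt;

```python
from urllib.parse import quote

BASE = "https://192.168.200.1/restconf/data"
HEADERS = {
    # RESTCONF requires the yang-data media types (RFC 8040).
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}

def hostname_url():
    # Health check: read the configured hostname from the native model.
    return BASE + "/Cisco-IOS-XE-native:native/hostname"

def subinterface_url(name):
    # RESTCONF list keys must be percent-encoded, including the "/"
    # in interface names: GigabitEthernet0/0/0.10 -> ...0%2F0%2F0.10
    return BASE + "/ietf-interfaces:interfaces/interface=" + quote(name, safe="")

print(hostname_url())
print(subinterface_url("GigabitEthernet0/0/0.10"))
```

An actual client would issue these with a session that has the router credentials from the &lt;code&gt;cisco-router-credentials&lt;/code&gt; secret and TLS verification configured for the router's certificate.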




&lt;h3&gt;3.2 Create VLANs&lt;/h3&gt;

&lt;p&gt;Three VLANs for the demo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 00-vlans.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cisco.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiscoVLAN&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vlan-10&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vlanId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.10.0/24"&lt;/span&gt;
  &lt;span class="na"&gt;gateway&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.10.1"&lt;/span&gt;
  &lt;span class="na"&gt;dhcpRange&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.10.100"&lt;/span&gt;
    &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.10.200"&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cisco.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiscoVLAN&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vlan-20&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vlanId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
  &lt;span class="na"&gt;cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.20.0/24"&lt;/span&gt;
  &lt;span class="na"&gt;gateway&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.20.1"&lt;/span&gt;
  &lt;span class="na"&gt;dhcpRange&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.20.100"&lt;/span&gt;
    &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.20.200"&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cisco.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiscoVLAN&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vlan-30&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vlanId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
  &lt;span class="na"&gt;cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.30.0/24"&lt;/span&gt;
  &lt;span class="na"&gt;gateway&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.30.1"&lt;/span&gt;
  &lt;span class="na"&gt;dhcpRange&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.30.100"&lt;/span&gt;
    &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.30.200"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
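&lt;p&gt;The spec fields validate naturally with Python's &lt;code&gt;ipaddress&lt;/code&gt; module. Here's the kind of sanity check an admission webhook could run before anything touches the router; this is my sketch of plausible checks, not the operator's actual code:&lt;/p&gt;

```python
import ipaddress

def validate_vlan_spec(cidr, gateway, dhcp_start, dhcp_end):
    """Return None if the CiscoVLAN spec is consistent, else an error string."""
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    start = ipaddress.ip_address(dhcp_start)
    end = ipaddress.ip_address(dhcp_end)
    if gw not in net:
        return "gateway outside CIDR"
    if start not in net or end not in net:
        return "DHCP range outside CIDR"
    if int(start) > int(end):
        return "DHCP range start after end"
    # The gateway IP must not be handed out as a lease.
    if int(gw) - int(start) >= 0 and int(end) - int(gw) >= 0:
        return "gateway inside DHCP range"
    return None

print(validate_vlan_spec("172.16.10.0/24", "172.16.10.1",
                         "172.16.10.100", "172.16.10.200"))
```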





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; 00-vlans.yaml
ciscovlan.cisco.io/vlan-10 created
ciscovlan.cisco.io/vlan-20 created
ciscovlan.cisco.io/vlan-30 created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few seconds later, all three are Active:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get ciscovlans -o wide
NAME      VLAN-ID   CIDR             GATEWAY       STATE    PODS   AGE
vlan-10   10        172.16.10.0/24   172.16.10.1   Active          18s
vlan-20   20        172.16.20.0/24   172.16.20.1   Active          18s
vlan-30   30        172.16.30.0/24   172.16.30.1   Active          18s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The status on each VLAN tells you exactly what got created on the router:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Active&lt;/span&gt;
  &lt;span class="na"&gt;routerSubinterface&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GigabitEthernet0/0/0.10&lt;/span&gt;
  &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VLAN 10 configured on GigabitEthernet0/0/0.10&lt;/span&gt;
  &lt;span class="na"&gt;lastReconciled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2026-02-24T18:09:13Z"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So what actually happened on the router? I queried it via RESTCONF to confirm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subinterfaces created&lt;/strong&gt;, each one with its own IP and &lt;code&gt;ip nat inside&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GigabitEthernet0/0/0       (trunk, no IP)
GigabitEthernet0/0/0.10    172.16.10.1/24
GigabitEthernet0/0/0.20    172.16.20.1/24
GigabitEthernet0/0/0.30    172.16.30.1/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DHCP pools&lt;/strong&gt;, one per VLAN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VLAN10_POOL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"network"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.10.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"default-router"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.10.1"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VLAN20_POOL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"network"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.20.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"default-router"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.20.1"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VLAN30_POOL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"network"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.30.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"default-router"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.30.1"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Base ACLs&lt;/strong&gt;. This is important. By default, each VLAN can talk to itself and to the internet, but is &lt;strong&gt;blocked&lt;/strong&gt; from reaching any other VLAN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VLAN10_ACL:
  10  permit ip   172.16.10.0 0.0.0.255 -&amp;gt; 172.16.10.0 0.0.0.255    (intra-VLAN ok)
  20  deny   ip   172.16.10.0 0.0.0.255 -&amp;gt; 172.16.20.0 0.0.0.255    (block VLAN 20)
  30  deny   ip   172.16.10.0 0.0.0.255 -&amp;gt; 172.16.30.0 0.0.0.255    (block VLAN 30)
1000  permit ip   any -&amp;gt; any                                          (internet ok)

VLAN20_ACL:
  10  permit ip   172.16.20.0 0.0.0.255 -&amp;gt; 172.16.20.0 0.0.0.255
  20  deny   ip   172.16.20.0 0.0.0.255 -&amp;gt; 172.16.10.0 0.0.0.255
  30  deny   ip   172.16.20.0 0.0.0.255 -&amp;gt; 172.16.30.0 0.0.0.255
1000  permit ip   any -&amp;gt; any

VLAN30_ACL:
  10  permit ip   172.16.30.0 0.0.0.255 -&amp;gt; 172.16.30.0 0.0.0.255
  20  deny   ip   172.16.30.0 0.0.0.255 -&amp;gt; 172.16.10.0 0.0.0.255
  30  deny   ip   172.16.30.0 0.0.0.255 -&amp;gt; 172.16.20.0 0.0.0.255
1000  permit ip   any -&amp;gt; any
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One &lt;code&gt;kubectl apply&lt;/code&gt;. The router now has subinterfaces, DHCP pools, and ACLs for three VLANs. I didn't type a single command on the router CLI.&lt;/p&gt;
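&lt;p&gt;The base ACLs follow a simple pattern: permit intra-VLAN, deny every other managed VLAN, then permit anything else. Generating them is mostly wildcard-mask arithmetic (the inverse of the netmask). A sketch of how that rule set could be derived, reproducing the tables above; the function names are mine, not the operator's:&lt;/p&gt;

```python
import ipaddress

def wildcard(cidr):
    # Cisco ACLs address networks as (network, wildcard mask);
    # ipaddress exposes the wildcard directly as .hostmask.
    net = ipaddress.ip_network(cidr)
    return str(net.network_address), str(net.hostmask)

def base_acl(vlan_id, cidrs):
    """Build (seq, action, src, src_wc, dst, dst_wc) rules for one VLAN."""
    src_net, src_wc = wildcard(cidrs[vlan_id])
    rules = [(10, "permit", src_net, src_wc, src_net, src_wc)]
    seq = 20
    for other in sorted(cidrs):
        if other != vlan_id:
            dst_net, dst_wc = wildcard(cidrs[other])
            rules.append((seq, "deny", src_net, src_wc, dst_net, dst_wc))
            seq += 10
    # Final catch-all so internet-bound traffic still passes.
    rules.append((1000, "permit", "any", "", "any", ""))
    return rules

cidrs = {10: "172.16.10.0/24", 20: "172.16.20.0/24", 30: "172.16.30.0/24"}
for rule in base_acl(10, cidrs):
    print(rule)
```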




&lt;h3&gt;3.3 Deploy Applications (Before Policy)&lt;/h3&gt;

&lt;p&gt;The demo uses three pods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;postgres&lt;/strong&gt; on VLAN 20: a PostgreSQL database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hello-world&lt;/strong&gt; on VLAN 10: a Flask app that tries to connect to PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;no-hello-world&lt;/strong&gt; on VLAN 30: the exact same Flask app, but on a different VLAN
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 04-apps.yaml (simplified)&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cisco.io/vlan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vlan-10&lt;/span&gt;          &lt;span class="c1"&gt;# &amp;lt;-- This is all you need&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.254:5000/hello-world:v1&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_HOST&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres-vlan.demo.svc.cluster.local"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pod only has &lt;code&gt;cisco.io/vlan: vlan-10&lt;/code&gt;. A &lt;strong&gt;mutating webhook&lt;/strong&gt; intercepts pod creation and injects the Multus network annotation automatically. The app uses DNS (&lt;code&gt;postgres-vlan.demo.svc.cluster.local&lt;/code&gt;) to find the database. The operator creates a headless Service that resolves to the database's DHCP-assigned VLAN IP.&lt;br&gt;
&lt;/p&gt;
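&lt;p&gt;The core of that webhook mutation is a tiny transformation: map the operator's &lt;code&gt;cisco.io/vlan&lt;/code&gt; annotation onto the standard Multus &lt;code&gt;k8s.v1.cni.cncf.io/networks&lt;/code&gt; annotation. A sketch of the logic in Python (my reconstruction; the real webhook is Go and responds with a JSONPatch in an AdmissionReview):&lt;/p&gt;

```python
def mutate_pod(pod):
    """Inject the Multus network request for pods that ask for a VLAN."""
    annotations = pod.get("metadata", {}).get("annotations", {})
    vlan = annotations.get("cisco.io/vlan")
    if vlan is None:
        return pod  # not a VLAN pod, leave it untouched
    # Assumes the NetworkAttachmentDefinition shares the VLAN's name.
    annotations["k8s.v1.cni.cncf.io/networks"] = vlan
    return pod

pod = {"metadata": {"annotations": {"cisco.io/vlan": "vlan-10"}}}
print(mutate_pod(pod)["metadata"]["annotations"])
```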

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; 04-apps.yaml
namespace/demo created
pod/postgres created
pod/hello-world created
pod/no-hello-world created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n demo -o wide
NAME             READY   STATUS    IP           NODE
hello-world      1/1     Running   10.42.1.97   node-2
no-hello-world   1/1     Running   10.42.1.98   node-2
postgres         1/1     Running   10.42.0.93   node-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The VLAN CRDs now show one active pod each:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get ciscovlans -o wide
NAME      VLAN-ID   CIDR             GATEWAY       STATE    PODS
vlan-10   10        172.16.10.0/24   172.16.10.1   Active   1
vlan-20   20        172.16.20.0/24   172.16.20.1   Active   1
vlan-30   30        172.16.30.0/24   172.16.30.1   Active   1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I port-forwarded both Flask apps to my laptop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward pod/hello-world &lt;span class="nt"&gt;-n&lt;/span&gt; demo 8080:8080 &amp;amp;
kubectl port-forward pod/no-hello-world &lt;span class="nt"&gt;-n&lt;/span&gt; demo 8081:8080 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;http://localhost:8080&lt;/code&gt; and &lt;code&gt;http://localhost:8081&lt;/code&gt;. Both apps have a dashboard that auto-refreshes every second, showing whether the database is reachable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Both show DB UNREACHABLE.&lt;/strong&gt; Red badge on both.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;hello-world&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(VLAN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;at&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;http://localhost:&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="err"&gt;/api/status&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_reachable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"connection failed: No route to host"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"net1_ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.10.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pod"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hello-world"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;no-hello-world&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(VLAN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;at&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;http://localhost:&lt;/span&gt;&lt;span class="mi"&gt;8081&lt;/span&gt;&lt;span class="err"&gt;/api/status&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_reachable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"connection failed: No route to host"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"net1_ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.30.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pod"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"no-hello-world"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the base ACL doing its job. VLANs 10 and 30 both have &lt;code&gt;deny&lt;/code&gt; rules blocking traffic to VLAN 20 where PostgreSQL lives. The segmentation is real, enforced on the router hardware, not in software.&lt;/p&gt;
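&lt;p&gt;That default-deny baseline can be sketched in a few lines of Go. This is illustrative code, not the operator's actual implementation: the &lt;code&gt;aclEntry&lt;/code&gt; type and the &lt;code&gt;172.16.&amp;lt;vlan&amp;gt;.0&lt;/code&gt; subnet convention are assumptions inferred from the ACL dumps in this post.&lt;/p&gt;

```go
package main

import "fmt"

// aclEntry models one access-list entry the way the router dumps
// in this post present it: sequence number, action, and a flow.
type aclEntry struct {
	Seq    int
	Action string // "permit" or "deny"
	Proto  string
	Src    string
	Dst    string
}

// baseEntries builds the default-deny baseline for one VLAN:
// permit traffic inside the VLAN, deny traffic to every other
// VLAN, and permit everything else (internet) at sequence 1000.
func baseEntries(vlan int, allVLANs []int) []aclEntry {
	subnet := func(v int) string { return fmt.Sprintf("172.16.%d.0", v) }
	entries := []aclEntry{
		{Seq: 20, Action: "permit", Proto: "ip", Src: subnet(vlan), Dst: subnet(vlan)},
	}
	seq := 30
	for _, other := range allVLANs {
		if other == vlan {
			continue
		}
		entries = append(entries, aclEntry{Seq: seq, Action: "deny", Proto: "ip",
			Src: subnet(vlan), Dst: subnet(other)})
		seq += 10
	}
	entries = append(entries, aclEntry{Seq: 1000, Action: "permit", Proto: "ip",
		Src: "any", Dst: "any"})
	return entries
}

func main() {
	for _, e := range baseEntries(10, []int{10, 20, 30}) {
		fmt.Printf("%4d  %-6s %-3s %s -> %s\n", e.Seq, e.Action, e.Proto, e.Src, e.Dst)
	}
}
```

&lt;p&gt;Run for VLAN 10 against the set {10, 20, 30}, this produces exactly the shape you see on the router: one intra-VLAN permit, one deny per other VLAN, and the catch-all permit at sequence 1000.&lt;/p&gt;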




&lt;h3&gt;
  
  
  3.4 Apply a VLANPolicy
&lt;/h3&gt;

&lt;p&gt;Now let's selectively open access. We want hello-world (VLAN 10) to reach PostgreSQL (VLAN 20) on TCP 5432, but keep no-hello-world (VLAN 30) blocked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 01-policy.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cisco.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VLANPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-to-db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vlan-10&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vlan-20&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
      &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;5432&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; 01-policy.yaml
vlanpolicy.cisco.io/app-to-db created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get vlanpolicies -o wide
NAME        SOURCE    DESTINATION   STATE     ACL-ENTRIES   AGE
app-to-db   vlan-10   vlan-20       Applied   10            9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The operator updated both ACLs on the router:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Applied&lt;/span&gt;
  &lt;span class="na"&gt;aclEntries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ACL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rules&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;applied:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;VLAN10_ACL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(5&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;entries),&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;VLAN20_ACL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(5&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;entries)"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what the ACLs look like now on the router. The operator inserted &lt;code&gt;permit&lt;/code&gt; rules &lt;strong&gt;before&lt;/strong&gt; the existing &lt;code&gt;deny&lt;/code&gt; rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VLAN10_ACL (after policy):
  10  permit tcp  172.16.10.0 -&amp;gt; 172.16.20.0 eq 5432    &amp;lt;&amp;lt; NEW: allow TCP 5432
  20  permit ip   172.16.10.0 -&amp;gt; 172.16.10.0             (intra-VLAN)
  30  deny   ip   172.16.10.0 -&amp;gt; 172.16.20.0             (block everything else to VLAN 20)
  40  deny   ip   172.16.10.0 -&amp;gt; 172.16.30.0             (block VLAN 30)
1000  permit ip   any -&amp;gt; any                              (internet)

VLAN20_ACL (after policy):
  10  permit tcp  172.16.20.0 -&amp;gt; 172.16.10.0 established &amp;lt;&amp;lt; NEW: return traffic only
  20  permit ip   172.16.20.0 -&amp;gt; 172.16.20.0
  30  deny   ip   172.16.20.0 -&amp;gt; 172.16.10.0
  40  deny   ip   172.16.20.0 -&amp;gt; 172.16.30.0
1000  permit ip   any -&amp;gt; any
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two things to notice:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;VLAN10_ACL&lt;/strong&gt; now has a &lt;code&gt;permit tcp ... eq 5432&lt;/code&gt; at sequence 10, before the deny. VLAN 10 can initiate TCP connections to VLAN 20 on the PostgreSQL port.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VLAN20_ACL&lt;/strong&gt; has an &lt;code&gt;established&lt;/code&gt; rule. This allows return traffic from connections that VLAN 10 started, but VLAN 20 cannot initiate new connections to VLAN 10.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;VLAN30_ACL didn't change at all.&lt;/strong&gt; No policy mentions VLAN 30, so it stays fully blocked.&lt;/p&gt;
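&lt;p&gt;The sequencing trick is worth a closer look. The operator doesn't rewrite the whole ACL; it just hands the new permits sequence numbers below the existing entries. Here's a minimal Go sketch of that idea (illustrative only; the real operator speaks RESTCONF, and the &lt;code&gt;aclEntry&lt;/code&gt; type here is an assumption):&lt;/p&gt;

```go
package main

import "fmt"

// aclEntry is a simplified view of one access-list entry.
type aclEntry struct {
	Seq    int
	Action string
	Rule   string
}

// insertPermits gives policy permits sequence numbers below the current
// minimum, so the router evaluates them before the existing deny entries.
// It assumes the baseline starts high enough (seq 20+) to leave a gap.
func insertPermits(existing []aclEntry, permits []string) []aclEntry {
	lowest := existing[0].Seq
	for _, e := range existing {
		if lowest > e.Seq {
			lowest = e.Seq
		}
	}
	// Space the new entries evenly in the gap below the lowest sequence.
	step := lowest / (len(permits) + 1)
	out := make([]aclEntry, 0, len(existing)+len(permits))
	for i, rule := range permits {
		out = append(out, aclEntry{Seq: step * (i + 1), Action: "permit", Rule: rule})
	}
	out = append(out, existing...)
	return out
}

func main() {
	acl := insertPermits(
		[]aclEntry{
			{20, "permit", "ip 172.16.10.0 to 172.16.10.0"},
			{30, "deny", "ip 172.16.10.0 to 172.16.20.0"},
		},
		[]string{"tcp 172.16.10.0 to 172.16.20.0 eq 5432"},
	)
	for _, e := range acl {
		fmt.Printf("%4d  %-6s %s\n", e.Seq, e.Action, e.Rule)
	}
}
```

&lt;p&gt;With the baseline starting at sequence 20, a single policy permit lands at sequence 10, which matches the VLAN10_ACL dump above.&lt;/p&gt;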




&lt;h3&gt;
  
  
  3.5 The Result
&lt;/h3&gt;

&lt;p&gt;Back to the browsers. The dashboards refresh every second.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;hello-world (VLAN 10) at &lt;code&gt;http://localhost:8080&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_reachable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"postgres-vlan.demo.svc.cluster.local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"latency_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;7.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"net1_ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.10.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pod"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hello-world"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Green badge. &lt;strong&gt;DB REACHABLE&lt;/strong&gt;, 7.2ms. It can also write and read data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;http://localhost:&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="err"&gt;/db&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SUCCESS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"inserted"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-24T18:14:04.158103"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"recent_messages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Hello from 172.16.10.2 at 18:14:04"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;no-hello-world (VLAN 30) at &lt;code&gt;http://localhost:8081&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_reachable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db_error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"connection failed: timeout expired"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"net1_ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.16.30.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pod"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"no-hello-world"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Red badge. &lt;strong&gt;DB UNREACHABLE.&lt;/strong&gt; Same Docker image. Same code. Same DB_HOST env var. The only difference is one line in the YAML: &lt;code&gt;cisco.io/vlan: vlan-30&lt;/code&gt; instead of &lt;code&gt;vlan-10&lt;/code&gt;. The router drops the traffic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;http://localhost:&lt;/span&gt;&lt;span class="mi"&gt;8081&lt;/span&gt;&lt;span class="err"&gt;/db&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"connection failed: No route to host"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
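&lt;p&gt;For reference, that one-line difference in the pod manifests might look like this. Only the &lt;code&gt;cisco.io/vlan&lt;/code&gt; key comes from the post; the rest of the snippet is an assumed minimal pod spec (shown here as an annotation, and the image name is made up):&lt;/p&gt;

```yaml
# no-hello-world.yaml -- identical to hello-world.yaml except the VLAN value
apiVersion: v1
kind: Pod
metadata:
  name: no-hello-world
  annotations:
    cisco.io/vlan: vlan-30   # hello-world uses vlan-10; this is the only change
spec:
  containers:
    - name: app
      image: demo/status-app   # assumed image name, for illustration
```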






&lt;h2&gt;
  
  
  4. What's Next
&lt;/h2&gt;

&lt;p&gt;The full source code is on &lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/csepulveda/cisco-kubernetes" rel="noopener noreferrer"&gt;github.com/csepulveda/cisco-kubernetes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me repeat what I said at the beginning: this is not production software. The code is tied to a specific router model (Cisco 4331), a specific IOS-XE version (16.12), and a specific physical setup. If you actually need network segmentation in Kubernetes, go with &lt;strong&gt;Cilium&lt;/strong&gt; or &lt;strong&gt;Calico&lt;/strong&gt;. If you're in the Cisco ecosystem and have the budget, &lt;strong&gt;ACI with Nexus&lt;/strong&gt; is what it's designed for. Those are real, tested, supported solutions.&lt;/p&gt;

&lt;p&gt;This project is something else. It's a lab experiment. A proof of concept. And the thing it proves is this: &lt;strong&gt;Kubernetes operators can manage anything that has an API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The operator pattern is a control loop. It watches for desired state (your CRDs) and makes actual state match. Most operators manage stuff inside the cluster: databases, message queues, certificates. But nothing says it has to stay inside. The reconciliation loop doesn't care whether it's creating a Pod or configuring a physical router 3 meters away.&lt;/p&gt;
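&lt;p&gt;That loop can be sketched in a few lines of Go: diff desired state against actual state and emit the actions that converge them. This is a toy model, not the operator's real reconciler (which goes through controller-runtime and RESTCONF); the &lt;code&gt;vlanSpec&lt;/code&gt; type is an assumption for illustration.&lt;/p&gt;

```go
package main

import "fmt"

// vlanSpec reduces desired/actual state for one VLAN to the essentials.
type vlanSpec struct {
	ID     int
	Subnet string
}

// reconcile is the heart of the operator pattern: compare desired state
// (from the CRDs) against actual state (from the device API) and return
// the actions needed to make actual match desired.
func reconcile(desired, actual map[int]vlanSpec) []string {
	var actions []string
	for id, spec := range desired {
		got, ok := actual[id]
		if !ok {
			actions = append(actions, fmt.Sprintf("create VLAN %d (%s)", id, spec.Subnet))
		} else if got != spec {
			actions = append(actions, fmt.Sprintf("update VLAN %d", id))
		}
	}
	for id := range actual {
		if _, ok := desired[id]; !ok {
			actions = append(actions, fmt.Sprintf("delete VLAN %d", id))
		}
	}
	return actions
}

func main() {
	desired := map[int]vlanSpec{10: {10, "172.16.10.0/24"}, 20: {20, "172.16.20.0/24"}}
	actual := map[int]vlanSpec{10: {10, "172.16.10.0/24"}, 30: {30, "172.16.30.0/24"}}
	for _, a := range reconcile(desired, actual) {
		fmt.Println(a)
	}
}
```

&lt;p&gt;Nothing in that loop cares whether "create VLAN 20" means calling the Kubernetes API or sending a RESTCONF PUT to a router. That's the whole point.&lt;/p&gt;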

&lt;p&gt;In this case it's a Cisco router. But it could be a firewall, a load balancer, a DNS provider, a cloud networking service, some legacy system with SOAP endpoints. If it has an API and it has state, you can write an operator for it.&lt;/p&gt;

&lt;p&gt;That's what I find exciting about this. I wired a mid-range Cisco 4331 (not a Nexus 9000, not an ACI fabric, just a regular router) into a Kubernetes control plane using nothing but RESTCONF and Go. No vendor SDK, no proprietary integration, no expensive hardware. Just the operator pattern doing what it does best: turning YAML into reality.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with Go, Kubebuilder, K3s, and a lot of patience debugging YANG models at 2am.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cisco</category>
      <category>kubernetes</category>
      <category>automation</category>
      <category>networking</category>
    </item>
    <item>
      <title>From Zero to EKS and Hybrid-Nodes —Part 3: Setting up the NLB, Ingress, and Services on a Hybrid EKS Infrastructure</title>
      <dc:creator>César Sepúlveda Barra</dc:creator>
      <pubDate>Mon, 21 Apr 2025 03:41:08 +0000</pubDate>
      <link>https://dev.to/csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-3-setting-up-the-nlb-ingress-and-services-on-a-hybrid-2opl</link>
      <guid>https://dev.to/csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-3-setting-up-the-nlb-ingress-and-services-on-a-hybrid-2opl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-2-the-eks-and-hybrid-nodes-configuration-300n"&gt;previous part&lt;/a&gt;, we set up the EKS cluster and deployed the first hybrid nodes. At that point, the network topology looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2f2plnfa2y81tvwosfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2f2plnfa2y81tvwosfl.png" alt="diagram - 1" width="700" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, in this third part, we’re going to set up the &lt;strong&gt;Network Load Balancer (NLB)&lt;/strong&gt;, configure &lt;strong&gt;Ingress&lt;/strong&gt;, and deploy services to expose our application running on this hybrid infrastructure. We’ll also observe and analyze the behavior of a Kubernetes Deployment in this mixed node environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Topology Changes:
&lt;/h2&gt;

&lt;p&gt;To enable communication between pods running on AWS-managed nodes and those on hybrid nodes, &lt;strong&gt;we need to configure specific routing rules on our VPN gateway/router.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a production environment, you could enable &lt;strong&gt;BGP in Cilium&lt;/strong&gt; (if your network supports it), but since this is a lab setup, we’ll handle routing manually via static routes on the gateway.&lt;/p&gt;
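&lt;p&gt;For the curious, the BGP alternative would be a &lt;code&gt;CiliumBGPPeeringPolicy&lt;/code&gt; along these lines. This is illustrative only: the node-selector label matches the one we use for hybrid nodes later in this post, but the ASNs and peer address are made up, and field names vary across Cilium versions.&lt;/p&gt;

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: hybrid-bgp
spec:
  nodeSelector:
    matchLabels:
      eks.amazonaws.com/compute-type: hybrid
  virtualRouters:
    - localASN: 65001            # made-up private ASN
      exportPodCIDR: true        # advertise each node's /25 pod CIDR
      neighbors:
        - peerAddress: 192.168.100.1/32   # assumed gateway address
          peerASN: 65000
```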

&lt;h2&gt;
  
  
  Disabling kube-proxy on Hybrid Nodes:
&lt;/h2&gt;

&lt;p&gt;Since we’re replacing kube-proxy functionality with Cilium, we’ll prevent kube-proxy from running on hybrid nodes using the following patch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch daemonset kube-proxy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'[
    {
      "op": "add",
      "path": "/spec/template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/2/values/-",
      "value": "hybrid"
    }
  ]'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Get EKS API Server Endpoint:
&lt;/h2&gt;

&lt;p&gt;This will be needed in the next step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks describe-cluster &lt;span class="nt"&gt;--name&lt;/span&gt; my-eks-cluster &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"cluster.endpoint"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Updating the Cilium Configuration:
&lt;/h2&gt;

&lt;p&gt;Next, we’ll update our &lt;code&gt;cilium-values.yaml&lt;/code&gt; file to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable kubeProxyReplacement.&lt;/li&gt;
&lt;li&gt;Enable Layer 2 (L2) announcements.&lt;/li&gt;
&lt;li&gt;Define the /25 subnets to be used for pod IPs.&lt;/li&gt;
&lt;li&gt;Set the Kubernetes API server endpoint and port.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks.amazonaws.com/compute-type&lt;/span&gt;
          &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
          &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;hybrid&lt;/span&gt;
&lt;span class="na"&gt;ipam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-pool&lt;/span&gt;
  &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;clusterPoolIPv4MaskSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;25&lt;/span&gt;
    &lt;span class="na"&gt;clusterPoolIPv4PodCIDRList&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.101.0/25&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.101.128/25&lt;/span&gt;
&lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks.amazonaws.com/compute-type&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;hybrid&lt;/span&gt;
  &lt;span class="na"&gt;unmanagedPodWatcher&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;envoy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;kubeProxyReplacement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;l2announcements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;leaseDuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2s&lt;/span&gt;
  &lt;span class="na"&gt;leaseRenewDeadline&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1s&lt;/span&gt;
  &lt;span class="na"&gt;leaseRetryPeriod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;200ms&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;externalIPs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;k8sClientRateLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;qps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
  &lt;span class="na"&gt;burst&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;150&lt;/span&gt;

&lt;span class="na"&gt;k8sServiceHost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;CLUSTER_HOST&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;k8sServicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;commands/02-cilium-updates
helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; cilium cilium/cilium &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;-f&lt;/span&gt; cilium-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verifying Cilium Agents:
&lt;/h2&gt;

&lt;p&gt;You should now see cilium-agent pods running on the hybrid nodes, and no kube-proxy pods there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                               READY   STATUS    RESTARTS   AGE   IP                NODE                         NOMINATED NODE   READINESS GATES
aws-node-k8qw5                     2/2     Running   0          31m   10.0.21.61        ip-10-0-21-61.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
cilium-lxphl                       1/1     Running   0          48s   192.168.100.102   mi-00a55e151790a4c81         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
cilium-operator-7658767979-f57v5   1/1     Running   0          52s   192.168.100.101   mi-014887c08038cb6c2         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
cilium-znwdm                       1/1     Running   0          49s   192.168.100.101   mi-014887c08038cb6c2         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
coredns-6b9575c64c-fxzv9           1/1     Running   0          35m   10.0.25.252       ip-10-0-21-61.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
eks-pod-identity-agent-k559t       1/1     Running   0          31m   10.0.21.61        ip-10-0-21-61.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-proxy-6j74v                   1/1     Running   0          15m   10.0.21.61        ip-10-0-21-61.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Static Routing:
&lt;/h2&gt;

&lt;p&gt;Now it’s time to configure static routes on the VPN router. First, check which hybrid node is responsible for each /25 subnet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ciliumnodes
NAME                   CILIUMINTERNALIP   INTERNALIP        AGE
mi-00a55e151790a4c81   192.168.101.51     192.168.100.102   8m14s
mi-014887c08038cb6c2   192.168.101.192    192.168.100.101   8m14s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the routes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ip route add 192.168.101.0/25 via 192.168.100.102
ip route add 192.168.101.128/25 via 192.168.100.101
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now our network will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb7gnmzn463otwg813hy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb7gnmzn463otwg813hy.png" alt="diagram - 2 static routing" width="720" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing the Load Balancer Controller:
&lt;/h2&gt;

&lt;p&gt;Add the following Terraform file to your project to install the AWS Load Balancer Controller via Helm, and create the required IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"aws-load-balancer-controller"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws-load-balancer-controller"&lt;/span&gt;
  &lt;span class="nx"&gt;chart&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws-load-balancer-controller"&lt;/span&gt;
  &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://aws.github.io/eks-charts"&lt;/span&gt;
  &lt;span class="nx"&gt;namespace&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kube-system"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.12.0"&lt;/span&gt;

  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount.create"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"true"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount.name"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_loadbalancer_iam&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;iam_role_name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount.annotations.eks&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;.amazonaws&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;.com/role-arn"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_loadbalancer_iam&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;iam_role_arn&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"clusterName"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vpcId"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_remote_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# Optional affinity rule&lt;/span&gt;
  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eks.amazonaws.com/capacityType"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Exists"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"eks_loadbalancer_iam"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5.54.0"&lt;/span&gt;

  &lt;span class="nx"&gt;role_name&lt;/span&gt;                              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"load-balancer-controller"&lt;/span&gt;
  &lt;span class="nx"&gt;attach_load_balancer_controller_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;oidc_providers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;ex&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;provider_arn&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;oidc_provider_arn&lt;/span&gt;
      &lt;span class="nx"&gt;namespace_service_accounts&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"kube-system:load-balancer-controller"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ../../EKS-HYBRID
&lt;span class="nb"&gt;cp&lt;/span&gt; ../load-balancer/load-balancer.tf &lt;span class="nb"&gt;.&lt;/span&gt;
tofu init
tofu apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get deployment/aws-load-balancer-controller &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   1/1     1            1           5m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying the Demo App:
&lt;/h2&gt;

&lt;p&gt;Now let’s deploy a simple app with 6 replicas to observe pod distribution across AWS and hybrid nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;topologySpreadConstraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;maxSkew&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;topologyKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/hostname&lt;/span&gt;
          &lt;span class="na"&gt;whenUnsatisfiable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ScheduleAnyway&lt;/span&gt;
          &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;containous/whoami&lt;/span&gt;
          &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;external"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;internet-facing&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/listen-ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[{"HTTP":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;80}]'&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/target-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/load-balancer-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nlb&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alb&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ../commands/03-create-service
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check pod distribution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nb"&gt;whoami&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                     READY   STATUS    RESTARTS   AGE    IP                NODE                         NOMINATED NODE   READINESS GATES
whoami-7cb8f48c8-bhrvw   1/1     Running   0          2m6s   192.168.101.187   mi-014887c08038cb6c2         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
whoami-7cb8f48c8-gd48g   1/1     Running   0          6s     10.0.21.186       ip-10-0-21-61.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
whoami-7cb8f48c8-hhs49   1/1     Running   0          2m9s   192.168.101.212   mi-014887c08038cb6c2         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
whoami-7cb8f48c8-n64gh   1/1     Running   0          2m9s   192.168.101.108   mi-00a55e151790a4c81         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
whoami-7cb8f48c8-q8bhj   1/1     Running   0          2m9s   192.168.101.56    mi-00a55e151790a4c81         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
whoami-7cb8f48c8-rr7kp   1/1     Running   0          9s     10.0.27.231       ip-10-0-21-61.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
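&lt;p&gt;If you'd rather tally the spread programmatically than eyeball it, a small Python sketch over the output above does the job. The node names and pod counts below are copied from the sample run; hybrid nodes registered through SSM show up as &lt;code&gt;mi-*&lt;/code&gt;, EC2 nodes as &lt;code&gt;ip-*&lt;/code&gt; (in practice you'd pipe live &lt;code&gt;kubectl get pods -o wide&lt;/code&gt; output into it):&lt;/p&gt;

```python
from collections import Counter

# Trimmed sample of `kubectl get pods -n whoami -o wide` (from the run above).
sample = """\
whoami-7cb8f48c8-bhrvw   1/1   Running   0   2m6s   192.168.101.187   mi-014887c08038cb6c2
whoami-7cb8f48c8-gd48g   1/1   Running   0   6s     10.0.21.186       ip-10-0-21-61.ec2.internal
whoami-7cb8f48c8-hhs49   1/1   Running   0   2m9s   192.168.101.212   mi-014887c08038cb6c2
whoami-7cb8f48c8-n64gh   1/1   Running   0   2m9s   192.168.101.108   mi-00a55e151790a4c81
whoami-7cb8f48c8-q8bhj   1/1   Running   0   2m9s   192.168.101.56    mi-00a55e151790a4c81
whoami-7cb8f48c8-rr7kp   1/1   Running   0   9s     10.0.27.231       ip-10-0-21-61.ec2.internal
"""

# The NODE column is the 7th whitespace-separated field on each line.
nodes = Counter(line.split()[6] for line in sample.strip().splitlines())
for node, count in sorted(nodes.items()):
    kind = "hybrid" if node.startswith("mi-") else "aws"
    print(f"{node:30} {kind:7} {count} pods")
```

&lt;p&gt;With &lt;code&gt;maxSkew: 1&lt;/code&gt; over &lt;code&gt;kubernetes.io/hostname&lt;/code&gt;, each of the three nodes ends up with two pods.&lt;/p&gt;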



&lt;h2&gt;
  
  
  Test Round-Robin Behavior:
&lt;/h2&gt;

&lt;p&gt;Get the Ingress address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ingress  &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nb"&gt;whoami 
&lt;/span&gt;NAME     CLASS   HOSTS   ADDRESS                                                               PORTS   AGE
&lt;span class="nb"&gt;whoami   &lt;/span&gt;alb     &lt;span class="k"&gt;*&lt;/span&gt;       k8s-whoami-whoami-xxxx-yyyy.us-east-1.elb.amazonaws.com   80      36s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq &lt;/span&gt;1 10&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 
  &lt;span class="k"&gt;do &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://&amp;lt;your-lb-dns&amp;gt;/api | jq .ip 
&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"192.168.101.212"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::b405:92ff:fe92:72f4"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"192.168.101.212"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::b405:92ff:fe92:72f4"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"10.0.21.186"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::6c8f:6aff:fe38:5e55"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"10.0.27.231"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::a8a6:26ff:feed:d649"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"192.168.101.108"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::28bc:50ff:fe48:d3d9"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"192.168.101.56"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::7869:ccff:fee5:b61b"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"192.168.101.187"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::14f6:88ff:fe17:f9ae"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"192.168.101.212"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::b405:92ff:fe92:72f4"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"10.0.21.186"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::6c8f:6aff:fe38:5e55"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"::1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"10.0.27.231"&lt;/span&gt;,
  &lt;span class="s2"&gt;"fe80::a8a6:26ff:feed:d649"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see requests being served by both AWS and hybrid nodes.&lt;/p&gt;
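&lt;p&gt;To confirm both sides are answering, you can classify each serving pod's IP by subnet. A minimal Python sketch, assuming the lab's &lt;code&gt;10.0.0.0/16&lt;/code&gt; VPC CIDR and &lt;code&gt;192.168.101.0/24&lt;/code&gt; on-prem VLAN (both inferred from the addresses in the output above):&lt;/p&gt;

```python
import ipaddress

# Assumed CIDRs, inferred from the pod IPs shown above.
AWS_NET    = ipaddress.ip_network("10.0.0.0/16")       # VPC pod addresses
ONPREM_NET = ipaddress.ip_network("192.168.101.0/24")  # hybrid-node VLAN

def classify(ip: str) -> str:
    """Label an IP as served from AWS, on-prem (hybrid), or elsewhere."""
    addr = ipaddress.ip_address(ip)
    if addr in AWS_NET:
        return "aws"
    if addr in ONPREM_NET:
        return "hybrid"
    return "other"

# Pod IPs extracted from the curl loop output above.
seen = ["192.168.101.212", "10.0.21.186", "10.0.27.231",
        "192.168.101.108", "192.168.101.56", "192.168.101.187"]
for ip in seen:
    print(f"{ip:16} -> {classify(ip)}")
```

&lt;p&gt;Seeing both labels in the tally is the proof that the ALB is forwarding to pods on both node types.&lt;/p&gt;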

&lt;h2&gt;
  
  
  Pricing Overview:
&lt;/h2&gt;

&lt;p&gt;Hybrid nodes do not incur EC2 costs, but you’ll still pay:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS control plane fees&lt;/li&gt;
&lt;li&gt;Data transfer&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hybrid node usage based on vCPU-hours&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A 32 vCPU hybrid node = 32 × 24 × 30 = 23,040 vCPU-hours/month&lt;br&gt;
At $0.02 per vCPU-hour, that’s &lt;strong&gt;~$460.80/month&lt;/strong&gt;&lt;br&gt;
The cheapest AWS instance with 32 vCPU and a GPU (e.g. g5g.8xlarge) costs &lt;strong&gt;~$987/month&lt;/strong&gt;, &lt;strong&gt;more than 2x the hybrid price&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
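&lt;p&gt;The arithmetic in the quote is easy to sanity-check. Note that the $0.02/vCPU-hour rate and the ~$987/month EC2 figure come from the text above, not from live pricing; check the current AWS price list before relying on them:&lt;/p&gt;

```python
# Cost sketch for a 32-vCPU hybrid node vs. a comparable EC2 instance.
VCPUS = 32
HOURS_PER_MONTH = 24 * 30            # 720 hours in a 30-day month
RATE_PER_VCPU_HOUR = 0.02            # USD, hybrid-node rate quoted above

vcpu_hours = VCPUS * HOURS_PER_MONTH                # 23,040 vCPU-hours
hybrid_cost = vcpu_hours * RATE_PER_VCPU_HOUR       # ~$460.80/month

ec2_cost = 987.0  # ~monthly cost of the cheapest 32-vCPU GPU instance, per the article
print(f"hybrid: ${hybrid_cost:,.2f}/mo  ec2: ${ec2_cost:,.2f}/mo  "
      f"ratio: {ec2_cost / hybrid_cost:.2f}x")
```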

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F803vajnyf0gp1mjbgj7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F803vajnyf0gp1mjbgj7r.png" alt="pricing" width="720" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;As you can see, it’s entirely possible to run a hybrid EKS environment combining AWS-managed and on-prem nodes. While this is not a common setup, it’s a powerful tool for organizations with existing infrastructure or specific compliance requirements.&lt;/p&gt;

&lt;p&gt;This lab setup uses static routing, &lt;code&gt;eks_managed_node_groups&lt;/code&gt;, and minimal redundancy. In a production scenario, you’d likely adopt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VPC CNI custom networking&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Karpenter&lt;/strong&gt; for dynamic scaling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BGP routing&lt;/strong&gt; for flexibility&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A real VPN appliance&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But as a proof of concept, it offers a strong foundation and a practical look into what’s required to implement EKS hybrid clusters successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvlvmah5olil61pj50qk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvlvmah5olil61pj50qk.png" alt="final - infra" width="720" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the files used in this lab can be found &lt;a href="https://github.com/csepulveda/modular-aws-resources" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Tools I Use Daily as a DevOps Engineer</title>
      <dc:creator>César Sepúlveda Barra</dc:creator>
      <pubDate>Fri, 11 Apr 2025 17:14:40 +0000</pubDate>
      <link>https://dev.to/csepulvedab/tools-i-use-daily-as-a-devops-engineer-123d</link>
      <guid>https://dev.to/csepulvedab/tools-i-use-daily-as-a-devops-engineer-123d</guid>
      <description>&lt;p&gt;This is a post I plan to update frequently. Here’s the initial list of top tools I use in my day-to-day work — whether I'm managing Kubernetes clusters, working with AWS, or just trying to stay efficient.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 1. K9s
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://k9scli.io/" rel="noopener noreferrer"&gt;K9s&lt;/a&gt; is a must-have for Kubernetes administrators. It's lightweight, fast, and lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check and explore Kubernetes resources&lt;/li&gt;
&lt;li&gt;Create port-forwardings&lt;/li&gt;
&lt;li&gt;Modify, delete, or view logs from pods&lt;/li&gt;
&lt;li&gt;And much more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Install&lt;/strong&gt;: &lt;code&gt;brew install k9s&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Demo&lt;/strong&gt;: &lt;a href="https://www.youtube.com/watch?v=AMUQzyPvO04" rel="noopener noreferrer"&gt;YouTube – K9s in Action&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flceforhoyl5ifzb3zs3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flceforhoyl5ifzb3zs3v.png" alt="k9s in action" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧭 2. kubectx &amp;amp; kubens
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/ahmetb/kubectx" rel="noopener noreferrer"&gt;kubectx&lt;/a&gt; and &lt;code&gt;kubens&lt;/code&gt; are simple tools to manage multiple Kubernetes contexts and namespaces efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install&lt;/strong&gt;: &lt;code&gt;brew install kubectx&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Docs&lt;/strong&gt;: &lt;a href="https://github.com/ahmetb/kubectx" rel="noopener noreferrer"&gt;GitHub – kubectx&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdx93lcvbhln4xlafnmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdx93lcvbhln4xlafnmh.png" alt="Kubens in action" width="648" height="277"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💣 3. aws-nuke
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/ekristen/aws-nuke" rel="noopener noreferrer"&gt;&lt;code&gt;aws-nuke&lt;/code&gt;&lt;/a&gt; helps you clean up AWS environments easily after testing or PoCs. Just define which IAM resources to keep — and nuke everything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install&lt;/strong&gt;: &lt;code&gt;brew install aws-nuke&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Docs&lt;/strong&gt;: &lt;a href="https://github.com/ekristen/aws-nuke" rel="noopener noreferrer"&gt;GitHub – aws-nuke&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5wdj95nzwhkz4gk1s3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5wdj95nzwhkz4gk1s3n.png" alt="aws-nuke in acction" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💸 4. eks-node-viewer
&lt;/h2&gt;

&lt;p&gt;A tool I use almost daily to check how many EKS nodes I'm running and estimate my spending.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew tap aws/tap &amp;amp;&amp;amp; brew install eks-node-viewer&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Docs&lt;/strong&gt;: &lt;a href="https://github.com/awslabs/eks-node-viewer" rel="noopener noreferrer"&gt;GitHub – eks-node-viewer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc69nz2tgucsmgtru0329.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc69nz2tgucsmgtru0329.png" alt="eks-node-viewer in action" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you found this list helpful, let me know! I’ll keep this post updated with new tools as I incorporate them into my workflow.&lt;/p&gt;

&lt;p&gt;Stay tuned — more tools coming soon!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From Zero to EKS and Hybrid-Nodes — Part 2: The EKS and Hybrid Nodes configuration.</title>
      <dc:creator>César Sepúlveda Barra</dc:creator>
      <pubDate>Fri, 11 Apr 2025 13:33:25 +0000</pubDate>
      <link>https://dev.to/csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-2-the-eks-and-hybrid-nodes-configuration-300n</link>
      <guid>https://dev.to/csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-2-the-eks-and-hybrid-nodes-configuration-300n</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://medium.com/@csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-1-the-vpc-and-vpn-configuration-af31c69b0d1f" rel="noopener noreferrer"&gt;previous part&lt;/a&gt;, we set up the VPC and the VPN Site-to-Site connection to prepare for our EKS cluster and hybrid nodes. The network diagram at that stage looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfs7vwsa4mfqnlqi4kxt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfs7vwsa4mfqnlqi4kxt.png" alt="initial infra" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this second part, we’ll configure the EKS cluster using &lt;strong&gt;OpenTofu&lt;/strong&gt;, including &lt;strong&gt;SSM Activation&lt;/strong&gt;, and manually set up two &lt;strong&gt;Hybrid nodes&lt;/strong&gt; using &lt;code&gt;nodeadm&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Setup the EKS cluster&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Cluster creation is straightforward. We’ll use the state file from the VPC creation to retrieve outputs, and we’ll create the cluster using the &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-eks" rel="noopener noreferrer"&gt;terraform-aws-modules/eks/aws&lt;/a&gt; module.&lt;/p&gt;

&lt;p&gt;Full code is available here: &lt;a href="https://github.com/csepulveda/modular-aws-resources/tree/main/EKS-HYBRID" rel="noopener noreferrer"&gt;https://github.com/csepulveda/modular-aws-resources/tree/main/EKS-HYBRID&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;################################################################################&lt;/span&gt;
&lt;span class="c1"&gt;# EKS Module&lt;/span&gt;
&lt;span class="c1"&gt;################################################################################&lt;/span&gt;
&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"eks"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/eks/aws"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"20.35.0"&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_version&lt;/span&gt;

  &lt;span class="nx"&gt;enable_cluster_creator_admin_permissions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_endpoint_public_access&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_addons&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;coredns&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;configuration_values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="nx"&gt;replicaCount&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;eks-pod-identity-agent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="nx"&gt;kube-proxy&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="nx"&gt;vpc-cni&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_remote_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_ids&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_remote_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_subnets&lt;/span&gt;
  &lt;span class="nx"&gt;control_plane_subnet_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_remote_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;control_plane_subnet_ids&lt;/span&gt;

  &lt;span class="nx"&gt;eks_managed_node_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;eks-base&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;ami_type&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AL2023_x86_64_STANDARD"&lt;/span&gt;
      &lt;span class="nx"&gt;instance_types&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"t3.small"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"t3a.small"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

      &lt;span class="nx"&gt;min_size&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nx"&gt;max_size&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nx"&gt;desired_size&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nx"&gt;capacity_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SPOT"&lt;/span&gt;
      &lt;span class="nx"&gt;network_interfaces&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="nx"&gt;delete_on_termination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;node_security_group_additional_rules&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;allow-all-80-traffic-from-loadbalancers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;elb_subnets&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cidr_block&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow all traffic from load balancers"&lt;/span&gt;
      &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
      &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
      &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TCP"&lt;/span&gt;
      &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;hybrid-all&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"192.168.100.0/23"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow all traffic from remote node/pod network"&lt;/span&gt;
      &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
      &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
      &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"all"&lt;/span&gt;
      &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_security_group_additional_rules&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;hybrid-all&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"192.168.100.0/23"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow all traffic from remote node/pod network"&lt;/span&gt;
      &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
      &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
      &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"all"&lt;/span&gt;
      &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;cluster_remote_network_config&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;remote_node_networks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;cidrs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"192.168.100.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;remote_pod_networks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;cidrs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"192.168.101.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;access_entries&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;hybrid-node-role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;principal_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_hybrid_node_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
      &lt;span class="nx"&gt;type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HYBRID_LINUX"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;


  &lt;span class="nx"&gt;node_security_group_tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"karpenter.sh/discovery"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;################################################################################&lt;/span&gt;
&lt;span class="c1"&gt;# Hybrid nodes Support&lt;/span&gt;
&lt;span class="c1"&gt;################################################################################&lt;/span&gt;
&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"eks_hybrid_node_role"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/eks/aws//modules/hybrid-node-role"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"20.35.0"&lt;/span&gt;

  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hybrid"&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ssm_activation"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hybrid-node"&lt;/span&gt;
  &lt;span class="nx"&gt;iam_role&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_hybrid_node_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;registration_limit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"local_file"&lt;/span&gt; &lt;span class="s2"&gt;"nodeConfig"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;content&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt;
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      cluster:
        name: ${module.eks.cluster_name}
        region: ${local.region}
      hybrid:
        ssm:
          activationId: ${aws_ssm_activation.this.id}
          activationCode: ${aws_ssm_activation.this.activation_code} 
&lt;/span&gt;&lt;span class="no"&gt;  EOT
&lt;/span&gt;  &lt;span class="nx"&gt;filename&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nodeConfig.yaml"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
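&lt;p&gt;The &lt;code&gt;data.terraform_remote_state.vpc&lt;/code&gt; references above assume a remote-state data source roughly like the following sketch (the backend type, bucket, and key below are placeholders; point them at wherever the VPC stack from Part 1 stores its state):&lt;/p&gt;

```terraform
# Sketch only: reads the outputs (vpc_id, private_subnets, etc.)
# exported by the VPC stack. The s3 backend, bucket, and key are
# assumptions; match them to your own state backend.
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state-bucket" # placeholder
    key    = "vpc/terraform.tfstate"     # placeholder
    region = "us-east-1"
  }
}
```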



&lt;h3&gt;
  
  
  &lt;strong&gt;Highlights of the configuration:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One EKS-managed node group for core services and ACK controllers (to be installed in Part 3).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The VPC CNI enabled via &lt;code&gt;cluster_addons&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ingress rules allowing traffic from the on-premises networks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remote node and pod networks defined via &lt;code&gt;cluster_remote_network_config&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An IAM role for hybrid nodes, granted cluster access via &lt;code&gt;access_entries&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Hybrid Nodes and SSM Activation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An IAM role using the hybrid node module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An SSM activation that allows hybrid nodes to join the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;code&gt;nodeConfig.yaml&lt;/code&gt; file containing the activation credentials, generated automatically during &lt;code&gt;tofu apply&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tofu apply
....
Apply &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; Resources: 50 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This generates a &lt;code&gt;nodeConfig.yaml&lt;/code&gt; file, which is critical for authenticating on-prem nodes via &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html" rel="noopener noreferrer"&gt;AWS Systems Manager&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Update your kubeconfig after creation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
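&lt;p&gt;Before moving on, you can also confirm that nodes register through the SSM activation. This is a sketch using a standard AWS CLI call; it assumes your CLI is configured for the same account and region:&lt;/p&gt;

```shell
# List managed instances registered through SSM hybrid activations.
# Once a node runs `nodeadm init`, it should appear here with an mi-* ID.
aws ssm describe-instance-information \
  --region us-east-1 \
  --query 'InstanceInformationList[].{Id:InstanceId,Ping:PingStatus}'
```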



&lt;p&gt;Now your infrastructure should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2jq0jyezl6ff8mm4302.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2jq0jyezl6ff8mm4302.png" alt="eks on infra" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Setting Up the Hybrid Nodes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;SSH into the nodes (e.g., 192.168.100.101 and 192.168.100.102). These are configured with two networks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;192.168.100.0/24 for nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;192.168.101.0/24 for pods.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Transfer the node config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp nodeConfig.yaml cesar@192.168.100.101:/tmp/
scp nodeConfig.yaml cesar@192.168.100.102:/tmp/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install and configure nodeadm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-OL&lt;/span&gt; &lt;span class="s1"&gt;'https://hybrid-assets.eks.amazonaws.com/releases/latest/bin/linux/arm64/nodeadm'&lt;/span&gt;
&lt;span class="nb"&gt;chmod &lt;/span&gt;a+x nodeadm
&lt;span class="nb"&gt;mv &lt;/span&gt;nodeadm /usr/local/bin/.

nodeadm &lt;span class="nb"&gt;install &lt;/span&gt;1.32 &lt;span class="nt"&gt;--credential-provider&lt;/span&gt; ssm
nodeadm init &lt;span class="nt"&gt;--config-source&lt;/span&gt; file:///tmp/nodeConfig.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Repeat for both nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F330aghueuzzii1eebivj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F330aghueuzzii1eebivj.png" alt="node instalation" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, check node registration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                          STATUS     ROLES    AGE     VERSION               INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                    CONTAINER-RUNTIME
ip-10-0-19-137.ec2.internal   Ready      &amp;lt;none&amp;gt;   16m     v1.32.1-eks-5d632ec   10.0.19.137       &amp;lt;none&amp;gt;        Amazon Linux 2023.7.20250331   6.1.131-143.221.amzn2023.x86_64   containerd://1.7.27
mi-03c1eb4c6173151d6          NotReady   &amp;lt;none&amp;gt;   3m4s    v1.32.1-eks-5d632ec   192.168.100.102   &amp;lt;none&amp;gt;        Ubuntu 22.04.5 LTS             5.15.0-136-generic                containerd://1.7.24
mi-091f1dddb980a80ff          NotReady   &amp;lt;none&amp;gt;   3m51s   v1.32.1-eks-5d632ec   192.168.100.101   &amp;lt;none&amp;gt;        Ubuntu 22.04.5 LTS             5.15.0-136-generic                containerd://1.7.24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The hybrid nodes appear but are &lt;code&gt;NotReady&lt;/code&gt; — this is expected until we install a CNI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Disclaimer time&lt;/em&gt;:&lt;/strong&gt; Although the documentation claims compatibility with Ubuntu 22.04 and 24.04, I couldn’t get 24.04 working: the &lt;code&gt;nf_conntrack&lt;/code&gt; module is missing when &lt;code&gt;kube-proxy&lt;/code&gt; attempts to apply its iptables rules. If you manage to fix this, please share!&lt;/p&gt;
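&lt;p&gt;If you want to probe this on your own image, a quick check looks like this (a sketch; run it on the node itself, with root privileges for the &lt;code&gt;modprobe&lt;/code&gt; step):&lt;/p&gt;

```shell
# Check whether the conntrack kernel module is loaded, and try to load it.
# If modprobe fails, kube-proxy will be unable to program its iptables rules.
lsmod | grep nf_conntrack || sudo modprobe nf_conntrack
```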

&lt;h2&gt;
  
  
  &lt;strong&gt;Setting Up the Network with Cilium&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We’ll use &lt;strong&gt;Cilium&lt;/strong&gt; as the CNI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cilium-values.yaml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks.amazonaws.com/compute-type&lt;/span&gt;
          &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
          &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;hybrid&lt;/span&gt;
&lt;span class="na"&gt;ipam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-pool&lt;/span&gt;
  &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;clusterPoolIPv4MaskSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;24&lt;/span&gt;
    &lt;span class="na"&gt;clusterPoolIPv4PodCIDRList&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.101.0/24&lt;/span&gt;
&lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks.amazonaws.com/compute-type&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;hybrid&lt;/span&gt;
  &lt;span class="na"&gt;unmanagedPodWatcher&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;envoy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This values file pins both the Cilium agent pods and the Cilium operator to the hybrid nodes via node affinity, and sets &lt;code&gt;clusterPoolIPv4PodCIDRList&lt;/code&gt; to the pod subnet 192.168.101.0/24, matching the &lt;code&gt;remote_pod_networks&lt;/code&gt; CIDR declared in the cluster configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Install Cilium:&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add cilium https://helm.cilium.io/

helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; cilium cilium/cilium &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--version&lt;/span&gt; 1.17.2 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--values&lt;/span&gt; cilium-values.yaml

...
Release &lt;span class="s2"&gt;"cilium"&lt;/span&gt; does not exist. Installing it now.
NAME: cilium
LAST DEPLOYED: Fri Apr 11 08:35:31 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.17.2.

For any further &lt;span class="nb"&gt;help&lt;/span&gt;, visit https://docs.cilium.io/en/v1.17/gettinghelp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
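&lt;p&gt;You can watch the rollout before checking the nodes again. These are standard &lt;code&gt;kubectl&lt;/code&gt; and Cilium CLI checks, not specific to this setup (the second command assumes you have the &lt;code&gt;cilium&lt;/code&gt; CLI installed):&lt;/p&gt;

```shell
# Wait until the Cilium agent DaemonSet is ready on the hybrid nodes.
kubectl -n kube-system rollout status daemonset/cilium

# Summarize agent and operator health.
cilium status --wait
```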



&lt;p&gt;After a few minutes, the nodes will be ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                          STATUS   ROLES    AGE   VERSION               INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                    CONTAINER-RUNTIME
ip-10-0-19-137.ec2.internal   Ready    &amp;lt;none&amp;gt;   56m   v1.32.1-eks-5d632ec   10.0.19.137       &amp;lt;none&amp;gt;        Amazon Linux 2023.7.20250331   6.1.131-143.221.amzn2023.x86_64   containerd://1.7.27
mi-03c1eb4c6173151d6          Ready    &amp;lt;none&amp;gt;   42m   v1.32.1-eks-5d632ec   192.168.100.102   &amp;lt;none&amp;gt;        Ubuntu 22.04.5 LTS             5.15.0-136-generic                containerd://1.7.24
mi-091f1dddb980a80ff          Ready    &amp;lt;none&amp;gt;   43m   v1.32.1-eks-5d632ec   192.168.100.101   &amp;lt;none&amp;gt;        Ubuntu 22.04.5 LTS             5.15.0-136-generic                containerd://1.7.24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your hybrid nodes are now fully integrated with EKS. 🎊&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoeouf9622ifqh6e2vva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoeouf9622ifqh6e2vva.png" alt="Eks nodes" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now your infrastructure should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jdrbt731hgok97klm7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jdrbt731hgok97klm7e.png" alt="Infra after nodes" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What’s Next?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the final part of this series, we’ll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Deploy the &lt;strong&gt;Network Load Balancer (NLB) Controller&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up an &lt;strong&gt;Ingress&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy a sample &lt;strong&gt;service&lt;/strong&gt; running entirely on hybrid/on-prem nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>eks</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>From Zero to EKS and Hybrid-Nodes — Part 1: The VPC and VPN configuration.</title>
      <dc:creator>César Sepúlveda Barra</dc:creator>
      <pubDate>Wed, 09 Apr 2025 14:07:49 +0000</pubDate>
      <link>https://dev.to/csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-1-the-vpc-and-vpn-configuration-1f11</link>
      <guid>https://dev.to/csepulvedab/from-zero-to-eks-and-hybrid-nodes-part-1-the-vpc-and-vpn-configuration-1f11</guid>
<description>&lt;p&gt;In this post, I’ll walk you through how to create an Amazon EKS cluster running Kubernetes 1.32 and set up a Site-to-Site VPN to connect an on-prem virtual machine to your EKS cluster using Linux hybrid nodes. To complete the setup, we’ll deploy a Network Load Balancer (NLB) and an NGINX Ingress controller to manage traffic across both cloud-based and on-prem applications.&lt;/p&gt;

&lt;p&gt;The goal is to demonstrate how we can extend our on-prem infrastructure into EKS, enabling a seamless hybrid environment. This approach is especially useful in scenarios where you want to leverage local GPU machines for LLM workloads, or manage CDN nodes running on data center hardware while orchestrating everything through Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6uzq9lcw04w27igdrzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6uzq9lcw04w27igdrzx.png" alt="Initial state infra"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create VPC
&lt;/h2&gt;

&lt;p&gt;We’ll start by creating the VPC, which is quite straightforward using OpenTofu (an open-source fork of Terraform). Below is the configuration using the &lt;code&gt;terraform-aws-modules/vpc/aws&lt;/code&gt; module:&lt;/p&gt;

&lt;p&gt;The code is here: &lt;a href="https://github.com/csepulveda/modular-aws-resources/tree/main/VPC" rel="noopener noreferrer"&gt;https://github.com/csepulveda/modular-aws-resources/tree/main/VPC&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/vpc/aws"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5.19.0"&lt;/span&gt;

  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="c1"&gt;#my-eks-cluster&lt;/span&gt;
  &lt;span class="nx"&gt;cidr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr&lt;/span&gt; &lt;span class="c1"&gt;#10.0.0.0/16&lt;/span&gt;

  &lt;span class="nx"&gt;azs&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azs&lt;/span&gt; &lt;span class="c1"&gt;#["us-east-1a", "us-east-1b", "us-east-1c"]&lt;/span&gt;
  &lt;span class="nx"&gt;private_subnets&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azs&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cidrsubnet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
  &lt;span class="nx"&gt;public_subnets&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azs&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cidrsubnet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
  &lt;span class="nx"&gt;intra_subnets&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azs&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cidrsubnet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;

  &lt;span class="nx"&gt;enable_nat_gateway&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;single_nat_gateway&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;public_subnet_tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/role/elb"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;private_subnet_tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/role/internal-elb"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="s2"&gt;"karpenter.sh/discovery"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To apply the changes, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tofu apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once applied, you should see an output similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5hr3zfrks785a46i7pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5hr3zfrks785a46i7pz.png" alt="VPC creation output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup creates a basic but functional VPC with three subnet types, each serving a specific purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Private Subnets&lt;/strong&gt;: Used for deploying nodes and pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnets&lt;/strong&gt;: Used for exposing services via Network Load Balancers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intra Subnets&lt;/strong&gt;: Dedicated to the EKS control plane.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next steps, we’ll manually configure the routing and VPN components. I’m choosing not to automate those with Terraform so I can explain each concept and step in detail.&lt;/p&gt;
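&lt;p&gt;As a quick sanity check of the subnet math, the cidrsubnet() calls above can be mirrored with a small shell helper (a sketch that only handles this specific 10.0.0.0/16 base; cidrsubnet16 is a hypothetical name, not part of the module):&lt;/p&gt;

```shell
# Mirrors Terraform's cidrsubnet("10.0.0.0/16", newbits, netnum) for newbits up to 8:
# the prefix length grows by newbits and the subnet index lands in the third octet.
cidrsubnet16() {
  local newbits=$1 netnum=$2
  local step=1 i=$newbits
  # step = 2^(8 - newbits): size of each subnet in third-octet units
  while [ "$i" -lt 8 ]; do step=$((step * 2)); i=$((i + 1)); done
  echo "10.0.$(( netnum * step )).0/$(( 16 + newbits ))"
}

cidrsubnet16 4 0    # first private subnet: 10.0.0.0/20
cidrsubnet16 8 48   # first public subnet:  10.0.48.0/24
cidrsubnet16 8 52   # first intra subnet:   10.0.52.0/24
```

&lt;p&gt;So the private subnets come out as 10.0.0.0/20, 10.0.16.0/20, and 10.0.32.0/20, the public ones as 10.0.48.0/24 through 10.0.50.0/24, and the intra ones as 10.0.52.0/24 through 10.0.54.0/24.&lt;/p&gt;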

&lt;h2&gt;
  
  
  Site-to-Site VPN
&lt;/h2&gt;

&lt;p&gt;As shown in the infrastructure diagram, the local network CIDR is 192.168.100.0/23 and the VPC CIDR is 10.0.0.0/16. To connect the two networks, we will create the VPN components on the AWS side and then add a route to each subnet route table in our new VPC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Customer Gateway
&lt;/h3&gt;

&lt;p&gt;First, we create the Customer Gateway. This is a simple step: we only need to define a name (&lt;strong&gt;Name tag — optional&lt;/strong&gt;) and the public IP address (&lt;strong&gt;IP address&lt;/strong&gt;) of our local network router (in my case, the router sits in a DMZ behind a NAT).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name tag&lt;/strong&gt;: eks-kybrid-customer-gateway&lt;br&gt;
&lt;strong&gt;IP Address&lt;/strong&gt;: [Your public IPv4]&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/koOA2mcF70o"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;
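&lt;p&gt;If you prefer the AWS CLI over the console, the same Customer Gateway can be created with one command (a sketch; 65000 is a placeholder private ASN, required by the API but unused with static routing):&lt;/p&gt;

```shell
# Create the Customer Gateway pointing at your on-prem public IP.
aws ec2 create-customer-gateway \
    --type ipsec.1 \
    --public-ip 186.10.xx.xx \
    --bgp-asn 65000 \
    --tag-specifications 'ResourceType=customer-gateway,Tags=[{Key=Name,Value=eks-kybrid-customer-gateway}]'
```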

&lt;h3&gt;
  
  
  Create the Virtual Private Gateway
&lt;/h3&gt;

&lt;p&gt;To create the Virtual Private Gateway, you only need to assign a name tag. This component will act as the gateway between your AWS VPC and the on-premises network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name tag&lt;/strong&gt;: eks-kybrid-private-gateway&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/inLONUUKCIQ"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;
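&lt;p&gt;The CLI equivalent is a single call (a sketch; only the type and a Name tag are needed):&lt;/p&gt;

```shell
# Create the Virtual Private Gateway that will terminate the VPN on the AWS side.
aws ec2 create-vpn-gateway \
    --type ipsec.1 \
    --tag-specifications 'ResourceType=vpn-gateway,Tags=[{Key=Name,Value=eks-kybrid-private-gateway}]'
```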

&lt;h3&gt;
  
  
  Create the Site-to-Site VPN Connection
&lt;/h3&gt;

&lt;p&gt;Once the Customer Gateway and Virtual Private Gateway are created, we can proceed to establish the Site-to-Site VPN connection.&lt;/p&gt;

&lt;p&gt;In this step, we will configure the following settings:&lt;br&gt;
&lt;strong&gt;Name tag&lt;/strong&gt;: eks-kybrid-vpn&lt;br&gt;
&lt;strong&gt;Target gateway type&lt;/strong&gt;: Virtual Private Gateway (select the one created earlier)&lt;br&gt;
&lt;strong&gt;Customer gateway&lt;/strong&gt;: Existing (select the previously created Customer Gateway)&lt;br&gt;
&lt;strong&gt;Routing options&lt;/strong&gt;: Static&lt;br&gt;
&lt;strong&gt;Static IP prefixes&lt;/strong&gt;: 192.168.100.0/23 (your on-prem network)&lt;br&gt;
&lt;strong&gt;Local IPv4 network CIDR&lt;/strong&gt;: 192.168.100.0/23 (your on-prem network)&lt;br&gt;
&lt;strong&gt;Remote IPv4 network CIDR&lt;/strong&gt;: 10.0.0.0/16 (the VPC CIDR)&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/arDmfVvbG1c"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This configuration defines the routing paths for traffic between your on-premises network and the EKS cluster running in the cloud.&lt;/p&gt;
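&lt;p&gt;As a CLI sketch, the same connection takes two calls (the cgw-/vgw-/vpn- IDs are placeholders for the resources created above):&lt;/p&gt;

```shell
# Create the static Site-to-Site VPN between the two gateways.
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --options StaticRoutesOnly=true,LocalIpv4NetworkCidr=192.168.100.0/23,RemoteIpv4NetworkCidr=10.0.0.0/16

# Add the static route for the on-prem prefix.
aws ec2 create-vpn-connection-route \
    --vpn-connection-id vpn-0123456789abcdef0 \
    --destination-cidr-block 192.168.100.0/23
```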

&lt;h3&gt;
  
  
  Attach the Virtual Private Gateway to the VPC
&lt;/h3&gt;

&lt;p&gt;In this step, we will attach the previously created private gateway to our EKS VPC. This connection allows the VPC to establish communication with the on-premises network through the Site-to-Site VPN.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/0L-NljCSoz8"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;
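&lt;p&gt;The CLI equivalent (the vgw-/vpc- IDs are placeholders):&lt;/p&gt;

```shell
# Attach the Virtual Private Gateway to the EKS VPC.
aws ec2 attach-vpn-gateway \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
```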

&lt;h3&gt;
  
  
  Set Up Routes in the VPC
&lt;/h3&gt;

&lt;p&gt;To enable communication between resources in our VPC and the on-premises network, we need to update the routing tables. This configuration ensures that traffic destined for the local network is directed through the private gateway.&lt;/p&gt;

&lt;p&gt;We will update each of the relevant route tables (&lt;strong&gt;private&lt;/strong&gt;, &lt;strong&gt;public&lt;/strong&gt;, and &lt;strong&gt;intra&lt;/strong&gt;) by adding a route to the local CIDR block 192.168.100.0/23 with the &lt;strong&gt;Virtual Private Gateway&lt;/strong&gt; as the target.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/crkIi0tOTeI"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;
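&lt;p&gt;Via the CLI, this is one create-route call per route table (a sketch; the rtb-/vgw- IDs are placeholders, and the command is repeated for the private, public, and intra tables):&lt;/p&gt;

```shell
# Route on-prem traffic through the Virtual Private Gateway.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 192.168.100.0/23 \
    --gateway-id vgw-0123456789abcdef0
```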

&lt;h2&gt;
  
  
  Configure the Client
&lt;/h2&gt;

&lt;p&gt;Now that everything is set up on the AWS side, it’s time to configure the VPN client on your local environment.&lt;/p&gt;

&lt;p&gt;In my case, the client is an &lt;strong&gt;Ubuntu 24.04 virtual machine&lt;/strong&gt; located in a &lt;strong&gt;DMZ&lt;/strong&gt;, meaning any request sent to my public IP is forwarded to this machine. We’ll be using &lt;strong&gt;Strongswan&lt;/strong&gt; as the VPN client to connect to AWS’s Site-to-Site VPN.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Download the VPN Configuration
&lt;/h3&gt;

&lt;p&gt;Go to the &lt;strong&gt;VPC Dashboard → Site-to-Site VPN Connections&lt;/strong&gt;, select your VPN connection, and click &lt;strong&gt;Download Configuration&lt;/strong&gt;. Choose &lt;strong&gt;Strongswan&lt;/strong&gt; as the vendor.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/LqmMgF-LmZY"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Install Strongswan
&lt;/h3&gt;

&lt;p&gt;On your VPN client machine, install Strongswan using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ssh"&gt;&lt;code&gt;&lt;span class="k"&gt;apt&lt;/span&gt; update
&lt;span class="k"&gt;apt&lt;/span&gt; install strongswan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Configure the VPN Tunnels
&lt;/h3&gt;

&lt;p&gt;Use the provided AWS configuration as a base. Edit the file &lt;code&gt;/etc/ipsec.conf&lt;/code&gt; with your VPN tunnel definitions, updating values where needed (e.g., IP addresses, leftupdown hook). Here’s an example configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;config setup
        uniqueids = no

conn Tunnel1
 auto=start
 left=%defaultroute
 leftid=186.10.xx.xx
 right=34.194.25.197
 type=tunnel
 leftauth=psk
 rightauth=psk
 keyexchange=ikev1
 ike=aes128-sha1-modp1024
 ikelifetime=8h
 esp=aes128-sha1-modp1024
 lifetime=1h
 keyingtries=%forever
 leftsubnet=0.0.0.0/0
 rightsubnet=0.0.0.0/0
 dpddelay=10s
 dpdtimeout=30s
 dpdaction=restart
 mark=100
 leftupdown="/etc/ipsec.d/aws-updown.sh -ln Tunnel1 -ll 169.254.41.174/30 -lr 169.254.41.173/30 -m 100 -r 10.0.0.0/16"

conn Tunnel2
 auto=start
 left=%defaultroute
 leftid=186.10.xx.xx
 right=100.27.149.167
 type=tunnel
 leftauth=psk
 rightauth=psk
 keyexchange=ikev1
 ike=aes128-sha1-modp1024
 ikelifetime=8h
 esp=aes128-sha1-modp1024
 lifetime=1h
 keyingtries=%forever
 leftsubnet=0.0.0.0/0
 rightsubnet=0.0.0.0/0
 dpddelay=10s
 dpdtimeout=30s
 dpdaction=restart
 mark=200
 leftupdown="/etc/ipsec.d/aws-updown.sh -ln Tunnel2 -ll 169.254.125.226/30 -lr 169.254.125.225/30 -m 200 -r 10.0.0.0/16"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Update the updown Script
&lt;/h3&gt;

&lt;p&gt;The file &lt;code&gt;/etc/ipsec.d/aws-updown.sh&lt;/code&gt; manages the VTI interface setup for each tunnel. If your VPN client is behind a NAT (like mine), you need to manually set the src IP in the routing section.&lt;/p&gt;

&lt;p&gt;Locate the add_route() function and append a src option with your machine’s internal IP (192.168.100.100 in this case) to the ip route add line, so traffic originated on this host uses its LAN address inside the tunnel:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ip route add ${i} dev ${TUNNEL_NAME} metric ${TUNNEL_MARK} src 192.168.100.100&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here’s the complete updown script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 1 &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
 case&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;
  &lt;span class="nt"&gt;-ln&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nt"&gt;--link-name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_PHY_INTERFACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PLUTO_INTERFACE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nb"&gt;shift&lt;/span&gt;
   &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="nt"&gt;-ll&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nt"&gt;--link-local&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_LOCAL_ADDRESS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_LOCAL_ENDPOINT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PLUTO_ME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nb"&gt;shift&lt;/span&gt;
   &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="nt"&gt;-lr&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nt"&gt;--link-remote&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_REMOTE_ADDRESS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_REMOTE_ENDPOINT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PLUTO_PEER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nb"&gt;shift&lt;/span&gt;
   &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="nt"&gt;-m&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nt"&gt;--mark&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_MARK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nb"&gt;shift&lt;/span&gt;
   &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="nt"&gt;-r&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="nt"&gt;--static-route&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="nv"&gt;TUNNEL_STATIC_ROUTE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nb"&gt;shift&lt;/span&gt;
   &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: Unknown argument &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
   &lt;span class="p"&gt;;;&lt;/span&gt;
 &lt;span class="k"&gt;esac&lt;/span&gt;
 &lt;span class="nb"&gt;shift
&lt;/span&gt;&lt;span class="k"&gt;done

&lt;/span&gt;command_exists&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2 2&amp;gt;&amp;amp;2
&lt;span class="o"&gt;}&lt;/span&gt;

create_interface&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
 ip &lt;span class="nb"&gt;link &lt;/span&gt;add &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nb"&gt;type &lt;/span&gt;vti &lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_LOCAL_ENDPOINT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; remote &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_REMOTE_ENDPOINT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; key &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_MARK&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
 ip addr add &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_LOCAL_ADDRESS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; remote &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_REMOTE_ADDRESS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; dev &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
 ip &lt;span class="nb"&gt;link set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; up mtu 1419
&lt;span class="o"&gt;}&lt;/span&gt;

configure_sysctl&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
 sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.ip_forward&lt;span class="o"&gt;=&lt;/span&gt;1
 sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.rp_filter&lt;span class="o"&gt;=&lt;/span&gt;2
 sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.disable_policy&lt;span class="o"&gt;=&lt;/span&gt;1
 sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_PHY_INTERFACE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.disable_xfrm&lt;span class="o"&gt;=&lt;/span&gt;1
 sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_PHY_INTERFACE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.disable_policy&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="o"&gt;}&lt;/span&gt;

add_route&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;','&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-ra&lt;/span&gt; route &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_STATIC_ROUTE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
     &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;route&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
     &lt;/span&gt;ip route add &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; dev &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; metric &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_MARK&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; src 192.168.100.100
 &lt;span class="k"&gt;done
 &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; mangle &lt;span class="nt"&gt;-A&lt;/span&gt; FORWARD &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--tcp-flags&lt;/span&gt; SYN,RST SYN &lt;span class="nt"&gt;-j&lt;/span&gt; TCPMSS &lt;span class="nt"&gt;--clamp-mss-to-pmtu&lt;/span&gt;
 iptables &lt;span class="nt"&gt;-t&lt;/span&gt; mangle &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; esp &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_REMOTE_ENDPOINT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_LOCAL_ENDPOINT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-j&lt;/span&gt; MARK &lt;span class="nt"&gt;--set-xmark&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_MARK&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
 ip route flush table 220
&lt;span class="o"&gt;}&lt;/span&gt;

cleanup&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;','&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-ra&lt;/span&gt; route &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_STATIC_ROUTE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;route&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
            &lt;/span&gt;ip route del &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; dev &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; metric &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_MARK&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;done
 &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; mangle &lt;span class="nt"&gt;-D&lt;/span&gt; FORWARD &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--tcp-flags&lt;/span&gt; SYN,RST SYN &lt;span class="nt"&gt;-j&lt;/span&gt; TCPMSS &lt;span class="nt"&gt;--clamp-mss-to-pmtu&lt;/span&gt;
 iptables &lt;span class="nt"&gt;-t&lt;/span&gt; mangle &lt;span class="nt"&gt;-D&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; esp &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_REMOTE_ENDPOINT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_LOCAL_ENDPOINT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-j&lt;/span&gt; MARK &lt;span class="nt"&gt;--set-xmark&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_MARK&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
 ip route flush cache
&lt;span class="o"&gt;}&lt;/span&gt;

delete_interface&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
 ip &lt;span class="nb"&gt;link set&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; down
 ip &lt;span class="nb"&gt;link &lt;/span&gt;del &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TUNNEL_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# main execution starts here&lt;/span&gt;

command_exists ip &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: ip command is required to execute the script, check if you are running as root, mostly to do with path, /sbin/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2 2&amp;gt;&amp;amp;2
command_exists iptables &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: iptables command is required to execute the script, check if you are running as root, mostly to do with path, /sbin/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2 2&amp;gt;&amp;amp;2
command_exists sysctl &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: sysctl command is required to execute the script, check if you are running as root, mostly to do with path, /sbin/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2 2&amp;gt;&amp;amp;2

&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PLUTO_VERB&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in
 &lt;/span&gt;up-client&lt;span class="p"&gt;)&lt;/span&gt;
  create_interface
  configure_sysctl
  add_route
  &lt;span class="p"&gt;;;&lt;/span&gt;
 down-client&lt;span class="p"&gt;)&lt;/span&gt;
  cleanup
  delete_interface
  &lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Add Shared Secrets
&lt;/h3&gt;

&lt;p&gt;Open &lt;code&gt;/etc/ipsec.secrets&lt;/code&gt; and add the shared secrets provided in the configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This file holds shared secrets or RSA private keys for authentication.

# RSA private key for this host, authenticating it to any other host
# which knows the public part.

186.10.xx.xx 34.194.25.197 : PSK "kPcU.tJ_sA33J7Z.I4f4gxxxxx"
186.10.xx.xx 100.27.149.167 : PSK "YtaQSGhvMKV4aLQx.4wxxxxxxx"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Start the VPN Service
&lt;/h3&gt;

&lt;p&gt;Restart the service and check the tunnel status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart ipsec
ipsec status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following video demonstrates the complete setup step by step:&lt;br&gt;
  &lt;iframe src="https://www.youtube.com/embed/s3teyQI9zlg"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If everything is correctly configured, both tunnels should show as &lt;strong&gt;“up”&lt;/strong&gt; in the &lt;strong&gt;AWS Console → Site-to-Site VPN Connections&lt;/strong&gt; page.&lt;/p&gt;
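&lt;p&gt;You can also verify from the client side before checking the console (the 10.0.0.10 target is a placeholder for any private IP in the VPC; the Tunnel1/Tunnel2 interface names come from the -ln argument in ipsec.conf):&lt;/p&gt;

```shell
# Both connections should report ESTABLISHED.
sudo ipsec statusall

# The updown script should have created one VTI interface per tunnel.
ip addr show Tunnel1
ip addr show Tunnel2

# End-to-end check across the VPN (placeholder VPC address).
ping -c 3 10.0.0.10
```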

&lt;p&gt;At this stage, the network connectivity between the EKS cluster and the on-premises environment is fully configured:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuql0v3q2lmb7sl0fgcfh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuql0v3q2lmb7sl0fgcfh.png" alt="Resulting infra"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Part 2&lt;/strong&gt;, we will provision the EKS cluster using &lt;strong&gt;OpenTofu&lt;/strong&gt;, set up the SSM activator, and register the on-premises nodes using &lt;strong&gt;nodeadm&lt;/strong&gt; with the &lt;strong&gt;Cilium CNI&lt;/strong&gt; driver.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Part 3&lt;/strong&gt;, we will install the &lt;strong&gt;AWS Load Balancer Controller&lt;/strong&gt;, configure &lt;strong&gt;NGINX Ingress&lt;/strong&gt;, and deploy our application across both &lt;strong&gt;EC2&lt;/strong&gt; and &lt;strong&gt;hybrid (on-prem)&lt;/strong&gt; nodes.&lt;/p&gt;

</description>
      <category>eks</category>
      <category>kubernetes</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
