<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aliebrahimy</title>
    <description>The latest articles on DEV Community by Aliebrahimy (@aliebrahimy).</description>
    <link>https://dev.to/aliebrahimy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2904788%2F42b91c3b-dd53-47b5-afb6-917921a9df8f.jpeg</url>
      <title>DEV Community: Aliebrahimy</title>
      <link>https://dev.to/aliebrahimy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aliebrahimy"/>
    <language>en</language>
    <item>
      <title>Implementing Kubernetes Infrastructure with eBPF: Integrating LoxiLB and Cilium API Gateway</title>
      <dc:creator>Aliebrahimy</dc:creator>
      <pubDate>Sat, 01 Mar 2025 06:43:59 +0000</pubDate>
      <link>https://dev.to/aliebrahimy/implementing-kubernetes-infrastructure-with-ebpf-integrating-loxilb-and-cilium-api-gateway-g17</link>
      <guid>https://dev.to/aliebrahimy/implementing-kubernetes-infrastructure-with-ebpf-integrating-loxilb-and-cilium-api-gateway-g17</guid>
      <description>&lt;h3&gt;
  
  
  Introduction:
&lt;/h3&gt;

&lt;p&gt;In the modern world of microservices architecture, security, scalability, and network performance are of paramount importance. To achieve these goals, various tools exist for managing inbound traffic, load balancing, and enforcing security policies. Combining two powerful tools—Cilium API Gateway and LoxiLB—can provide a comprehensive and scalable solution for traffic management and security in Kubernetes clusters. This combination leverages eBPF (Extended Berkeley Packet Filter) as its core technology, enabling efficient and sophisticated operations at the kernel level.&lt;/p&gt;

&lt;h3&gt;
  
  
  eBPF and Its Advantages:
&lt;/h3&gt;

&lt;p&gt;Extended Berkeley Packet Filter (eBPF) is an advanced technology in the Linux kernel that allows executing custom code directly in the kernel space without requiring direct modifications to the kernel or additional modules. This capability has wide applications in security, monitoring, observability, and network optimization.&lt;/p&gt;

&lt;p&gt;One of the key reasons for eBPF’s high performance is its ability to execute programs directly in the kernel. In traditional methods, network data processing required passing through multiple layers in the packet processing path, such as iptables, Netfilter, and other user-space operations. However, eBPF enables filtering and monitoring traffic directly at key points in the network stack (such as NIC or lower kernel layers), effectively bypassing unnecessary processing chains and reducing latency while increasing packet processing speed.&lt;/p&gt;

&lt;p&gt;Additionally, eBPF uses a Just-In-Time (JIT) compiler that translates eBPF bytecode into optimized machine code for efficient execution on hardware. This results in enhanced performance and reduced processing overhead. Unlike traditional methods that require frequent data transfers between user space and the kernel, eBPF executes code directly within the kernel, minimizing expensive system calls (syscalls).&lt;/p&gt;

&lt;p&gt;Overall, eBPF improves network performance by eliminating redundant processing steps, reducing context switching between kernel and user space, and executing optimized code in the kernel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cilium API Gateway:
&lt;/h3&gt;

&lt;p&gt;Most microservices architectures require exposing certain services externally and securely routing traffic into the cluster. Kubernetes traditionally uses Ingress for this, but Ingress has well-known limitations: it covers only HTTP/HTTPS, and most advanced behavior depends on controller-specific annotations.&lt;/p&gt;

&lt;p&gt;The Kubernetes Gateway API addresses these limitations, and Cilium now implements it through its API Gateway feature.&lt;/p&gt;

&lt;p&gt;Cilium is an eBPF-powered networking tool that enhances security, scalability, and observability in Kubernetes environments. Using Cilium API Gateway, you can precisely manage inbound traffic based on HTTP methods, URLs, headers, and security policies. This API Gateway enables implementing complex security policies, such as transparent traffic encryption and TLS termination, with ease. Additionally, it supports advanced features like traffic splitting and weighting, allowing effective traffic distribution across services.&lt;/p&gt;

&lt;p&gt;By leveraging eBPF, Cilium enables network operations to be executed directly at the kernel level without modifying application code or requiring additional proxies. This improves performance, provides better traffic visibility, and enforces security and routing policies more efficiently.&lt;/p&gt;
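&lt;p&gt;As a minimal sketch of what this looks like in practice (resource names and backend services below are illustrative, not part of this setup), a Gateway bound to Cilium’s GatewayClass plus an HTTPRoute that matches on a path prefix and splits traffic by weight might be written as:&lt;/p&gt;

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway
spec:
  gatewayClassName: cilium          # GatewayClass installed by Cilium
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
    - name: demo-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: backend-v1          # 90% of matched traffic
          port: 8080
          weight: 90
        - name: backend-v2          # 10% canary
          port: 8080
          weight: 10
```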

&lt;h3&gt;
  
  
  LoxiLB:
&lt;/h3&gt;

&lt;p&gt;LoxiLB is a software-based load balancing solution that utilizes eBPF for scalable and efficient traffic management. It allows rapid and optimized distribution of inbound traffic across Kubernetes nodes. Since LoxiLB operates at the kernel level, it significantly reduces latency in traffic distribution. Moreover, it supports multiple protocols, including HTTP, HTTPS, TCP, UDP, and GRPC, enabling seamless management of various types of traffic without complex configurations.&lt;/p&gt;

&lt;p&gt;LoxiLB also supports health checks and scalability features, ensuring intelligent traffic routing to healthy nodes while automatically scaling when necessary.&lt;/p&gt;
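&lt;p&gt;Because kube-loxilb implements the standard Kubernetes LoadBalancer interface, exposing a workload through LoxiLB is an ordinary Service definition; a sketch (the service name and annotation values here are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  annotations:
    loxilb.io/liveness: "yes"      # enable endpoint health checks
    loxilb.io/lbmode: "fullnat"    # load-balancing mode
spec:
  loadBalancerClass: loxilb.io/loxilb
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```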

&lt;h3&gt;
  
  
  High Availability (HA) in LoxiLB:
&lt;/h3&gt;

&lt;p&gt;LoxiLB provides robust load balancing capabilities in cloud and Kubernetes environments with support for High Availability (HA) to enhance service stability and availability. Implementing HA in LoxiLB allows seamless failover in case of node or component failures, ensuring uninterrupted service operation.&lt;/p&gt;

&lt;p&gt;LoxiLB can be deployed either in-cluster or externally, depending on architectural requirements. This document explores various HA deployment scenarios, including Active-Backup and Active-Active models using BGP and ECMP mechanisms.&lt;/p&gt;

&lt;h3&gt;
  
  
  High Availability Scenarios in LoxiLB:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Flat L2 Network (Active-Backup):
&lt;/h4&gt;

&lt;p&gt;In this setup, all Kubernetes nodes reside within the same subnet, and LoxiLB runs as a DaemonSet on master nodes. This model is ideal for environments where services and clients share the same network.&lt;/p&gt;

&lt;h4&gt;
  
  
  L3 Network with BGP (Active-Backup):
&lt;/h4&gt;

&lt;p&gt;In this scenario, LoxiLB assigns IP addresses from an external subnet and manages communication between nodes using BGP. This is suitable for cloud environments where clients and services exist in separate networks.&lt;/p&gt;
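&lt;p&gt;In kube-loxilb this scenario corresponds to the BGP flags that appear (commented out) in the deployment manifest later in this guide; an illustrative fragment, with placeholder peer addresses and ASNs:&lt;/p&gt;

```yaml
args:
  - --setBGP=64512                    # local ASN announced by LoxiLB
  - --extBGPPeers=50.50.50.1:65101    # external peer as IP:remote-ASN
  - --setRoles                        # let kube-loxilb elect active/backup roles
```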

&lt;h4&gt;
  
  
  L3 Network with BGP ECMP (Active-Active):
&lt;/h4&gt;

&lt;p&gt;This model ensures uniform traffic distribution across multiple active nodes using ECMP. While it offers superior performance, it requires network support for ECMP routing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Active-Backup with Connection Synchronization:
&lt;/h4&gt;

&lt;p&gt;This approach maintains long-lived connections even during node failures. In this setup, connection states are synchronized between LoxiLB nodes, ensuring seamless failover without losing active connections.&lt;/p&gt;

&lt;h4&gt;
  
  
  Active-Backup with Fast Failure Detection (BFD):
&lt;/h4&gt;

&lt;p&gt;LoxiLB uses Bidirectional Forwarding Detection (BFD) to rapidly detect network failures and redirect traffic to healthier nodes.&lt;/p&gt;

&lt;p&gt;LoxiLB provides diverse HA solutions, enhancing service reliability in Kubernetes environments. Depending on infrastructure needs and network type, either Active-Backup or Active-Active models can be chosen to maximize service availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating Cilium API Gateway and LoxiLB:
&lt;/h3&gt;

&lt;p&gt;Integrating Cilium API Gateway and LoxiLB in a Kubernetes cluster allows precise and efficient management of inbound traffic while ensuring security and scalability. These two tools, leveraging eBPF, execute complex routing, security, and load balancing operations directly at the kernel level, reducing network latency and improving performance.&lt;/p&gt;

&lt;p&gt;This integration is particularly beneficial for large clusters requiring secure and sophisticated traffic management. It enables leveraging Kubernetes’ full security and scalability potential without additional or complex tools.&lt;/p&gt;

&lt;p&gt;By utilizing this integration, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Effectively and securely manage inbound traffic.&lt;/li&gt;
&lt;li&gt;Easily implement traffic encryption and TLS termination.&lt;/li&gt;
&lt;li&gt;Scale traffic distribution efficiently across services.&lt;/li&gt;
&lt;li&gt;Gain deep traffic observability and quickly identify issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These capabilities make Cilium API Gateway and LoxiLB an ideal solution for complex Kubernetes architectures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Comparing LoxiLB with MetalLB, NGINX, and HAProxy in Kubernetes:
&lt;/h3&gt;

&lt;p&gt;This section compares LoxiLB with MetalLB as a Kubernetes service load balancer and also examines LoxiLB in comparison with NGINX and HAProxy for Kubernetes Ingress management. The focus is on performance for modern cloud-native workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://dev.to/nikhilmalik/l4-l7-performance-comparing-loxilb-metallb-nginx-haproxy-1eh0"&gt;L4-L7 Performance: Comparing LoxiLB, MetalLB, NGINX, HAProxy&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Performance Tuning:
&lt;/h3&gt;

&lt;p&gt;Below are additional optimization settings used across all solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Set maximum queue size:&lt;/strong&gt; &lt;code&gt;sysctl net.core.netdev_max_backlog=10000&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable multiple queues and adjust MTU:&lt;/strong&gt; The test environment used Vagrant with libvirt. For best performance, the number of driver queues should match the number of CPUs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disable TX XPS (for LoxiLB only):&lt;/strong&gt; This setting should be applied to all nodes running LoxiLB.&lt;/li&gt;
&lt;/ul&gt;
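&lt;p&gt;The tuning steps above can be sketched as shell commands; the interface name &lt;code&gt;eth0&lt;/code&gt; and the MTU value are assumptions to adjust for your environment (run as root on the relevant nodes):&lt;/p&gt;

```shell
# Sketch: apply the tuning listed above. IFACE and the MTU are assumptions.
IFACE="${IFACE:-eth0}"

tune() {
  # 1. Raise the kernel ingress backlog queue.
  sysctl -w net.core.netdev_max_backlog=10000

  # 2. Match NIC queues to the CPU count and raise the MTU.
  ethtool -L "$IFACE" combined "$(nproc)"
  ip link set dev "$IFACE" mtu 9000

  # 3. Disable TX XPS (LoxiLB nodes only): clear the CPU mask of every TX queue.
  for q in /sys/class/net/"$IFACE"/queues/tx-*/xps_cpus; do
    echo 0 | tee "$q"
  done
}
```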

&lt;h3&gt;
  
  
  Performance Criteria
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;LoxiLB (eBPF-Based)&lt;/th&gt;
&lt;th&gt;IPTables-Based&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Throughput&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Higher under heavy load&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Connection Management&lt;/td&gt;
&lt;td&gt;Scalable to millions of connections&lt;/td&gt;
&lt;td&gt;Limited by iptables rule and conntrack scaling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Consumption&lt;/td&gt;
&lt;td&gt;Efficient (eBPF-Based)&lt;/td&gt;
&lt;td&gt;Requires more resources&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Key Differences Between LoxiLB and MetalLB
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: LoxiLB leverages eBPF for in-kernel packet processing and minimal CPU usage, whereas MetalLB relies on IPTables/IPVS for packet routing, resulting in higher latency and limited scalability under heavy traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: LoxiLB manages higher workloads due to its optimized architecture, while MetalLB struggles in high-scale environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Features&lt;/strong&gt;: LoxiLB supports advanced features like direct server return (DSR), Proxy Protocol, and network observability, whereas MetalLB provides basic Layer 2 and Layer 3 load balancing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Performance Testing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Throughput&lt;/strong&gt;: Traffic from a separate VM acting as a client was routed through the load balancer to a NodePort and then to a workload. LoxiLB demonstrated superior throughput in all tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requests Per Second (RPS)&lt;/strong&gt;: Performance was measured using &lt;code&gt;go-wrk&lt;/code&gt; to simulate concurrent request handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ingress Comparison in Kubernetes
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Introduction
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NGINX&lt;/strong&gt;: A well-known Ingress controller with rich Layer 7 features like SSL termination, HTTP routing, and caching.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HAProxy&lt;/strong&gt;: Known for strong load balancing and high performance, offering precise Layer 4 and Layer 7 traffic control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoxiLB&lt;/strong&gt;: Combines Layer 4 and Layer 7 capabilities with eBPF-based performance and native Kubernetes integration.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;LoxiLB&lt;/th&gt;
&lt;th&gt;NGINX&lt;/th&gt;
&lt;th&gt;HAProxy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Throughput&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSL Termination&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Connection Management&lt;/td&gt;
&lt;td&gt;Scalable to millions&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Key Differences Between LoxiLB, NGINX, and HAProxy:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: LoxiLB delivers higher throughput and lower latency under heavy load than NGINX and HAProxy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: LoxiLB scales seamlessly for modern containerized workloads; HAProxy also scales well but may require additional tuning, while NGINX is the least optimized of the three in this regard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Features&lt;/strong&gt;: NGINX excels at advanced HTTP routing and SSL management; HAProxy offers robust Layer 4 and Layer 7 capabilities but is less Kubernetes-native; LoxiLB provides Layer 7 features while maintaining high performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Performance Testing&lt;/strong&gt;&lt;br&gt;
In RPS (Requests Per Second) and latency tests, LoxiLB outperformed both NGINX and HAProxy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
When evaluating networking solutions for Kubernetes, the choice depends on workload-specific requirements and scalability needs. LoxiLB consistently outperforms competitors in raw performance and scalability, making it a strong option for high-load environments. However, for traditional use cases with a focus on Layer 7 features, NGINX and HAProxy remain solid choices. For simpler setups, MetalLB may be sufficient but might struggle to meet future demands.&lt;/p&gt;
&lt;h2&gt;
  
  
  Kubernetes Cluster Setup Guide with RKE2, Cilium API Gateway, and LoxiLB
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Check the Kernel Version for eBPF Support
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uname -r
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Ensure your kernel version is 5.10 or higher for eBPF support.&lt;/p&gt;
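&lt;p&gt;The version requirement can be enforced in a script; a minimal sketch using &lt;code&gt;sort -V&lt;/code&gt; (GNU coreutils assumed):&lt;/p&gt;

```shell
# Sketch: abort early when the running kernel predates 5.10, the minimum
# assumed above for full eBPF support.
kernel_at_least() {
  # succeeds when version $2 is greater than or equal to $1
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

current="$(uname -r | cut -d- -f1)"   # e.g. 5.14.0-362.el9 -> 5.14.0
if kernel_at_least "5.10" "$current"; then
  echo "kernel $current: OK for eBPF"
else
  echo "kernel $current: too old, 5.10 or newer required"
  exit 1
fi
```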
&lt;h3&gt;
  
  
  2. Configure NetworkManager
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl is-active NetworkManager
sudo mkdir -p /etc/NetworkManager/conf.d
sudo tee /etc/NetworkManager/conf.d/cilium-cni.conf &amp;lt;&amp;lt;EOF
[keyfile]
unmanaged-devices=interface-name:cilium_net;interface-name:cilium_host;interface-name:cilium_vxlan;interface-name:cilium_geneve
EOF
sudo systemctl restart NetworkManager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  3. Disable SELinux
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;getenforce
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4. Install Required Packages
&lt;/h3&gt;
&lt;h4&gt;
  
  
  If SELinux is in Enforcing mode:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install -y iptables libnetfilter_conntrack libnfnetlink libnftnl policycoreutils-python-utils rke2-selinux

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  If SELinux is in Permissive or Disabled mode:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install -y iptables libnetfilter_conntrack libnfnetlink libnftnl policycoreutils-python-utils

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  5. Install RKE2
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Download and install the RKE2 archive
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir /root/rke2-artifacts &amp;amp;&amp;amp; cd /root/rke2-artifacts/
curl -OLs https://github.com/rancher/rke2/releases/download/v1.32.1%2Brke2r1/rke2-images-core.linux-amd64.tar.gz
curl -OLs https://github.com/rancher/rke2/releases/download/v1.32.1%2Brke2r1/rke2.linux-amd64.tar.gz
curl -OLs https://github.com/rancher/rke2/releases/download/v1.32.1%2Brke2r1/sha256sum-amd64.txt
curl -sfL https://get.rke2.io --output install.sh
INSTALL_RKE2_ARTIFACT_PATH=/root/rke2-artifacts sh install.sh

mkdir -p /etc/rancher/rke2/
vim /etc/rancher/rke2/config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Config file contents:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;write-kubeconfig-mode: "0644"
advertise-address: 192.168.100.100
node-name: kuber-master-1
tls-san:
  - 192.168.100.100
cni: none
cluster-cidr: 10.100.0.0/16
service-cidr: 10.110.0.0/16
cluster-dns: 10.110.0.10
etcd-arg: "--quota-backend-bytes 2048000000"
etcd-snapshot-schedule-cron: "0 3 * * *"
etcd-snapshot-retention: 10
disable:
  - rke2-ingress-nginx
disable-kube-proxy: true
kube-apiserver-arg:
  - '--default-not-ready-toleration-seconds=30'
  - '--default-unreachable-toleration-seconds=30'
kube-controller-manager-arg:
  - '--node-monitor-period=4s'
kubelet-arg:
  - '--node-status-update-frequency=4s'
  - '--max-pods=100'
egress-selector-mode: disabled
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  6. Setup RKE2 Master
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir --p /var/lib/rancher/rke2/agent/images/
mv rke2-images-core.linux-amd64.tar.gz /var/lib/rancher/rke2/agent/images/
systemctl enable rke2-server.service
systemctl start rke2-server.service
journalctl -u rke2-server -f

echo 'PATH=$PATH:/var/lib/rancher/rke2/bin' &amp;gt;&amp;gt; ~/.bashrc
source ~/.bashrc
mkdir ~/.kube
cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  7. Install Cilium-cli
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Download and install Cilium
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir /opt/cilium &amp;amp;&amp;amp; cd /opt/cilium
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_grpcroutes.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml
kubectl apply -f gateway.networking.k8s.io_gatewayclasses.yaml
kubectl apply -f gateway.networking.k8s.io_gateways.yaml
kubectl apply -f gateway.networking.k8s.io_httproutes.yaml
kubectl apply -f gateway.networking.k8s.io_referencegrants.yaml
kubectl apply -f gateway.networking.k8s.io_grpcroutes.yaml
kubectl apply -f gateway.networking.k8s.io_tlsroutes.yaml

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  8. Install Cilium with desired configuration
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim cilium.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Config file contents:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeProxyReplacement: "true"
k8sServiceHost: "192.168.58.105"
k8sServicePort: "6443"
hubble:
  enabled: true
  metrics:
    enabled:
    - dns:query;ignoreAAAA
    - drop
    - tcp
    - flow
    - icmp
    - http
    dashboards:
      enabled: true
  relay:
    enabled: true
    prometheus:
      enabled: true
  ui:
    enabled: true
    baseUrl: "/"
ingressController:
  enabled: false
envoyConfig:
  enabled: true
  secretsNamespace:
    create: true
    name: cilium-secrets
debug:
  enabled: true
rbac:
  create: true
gatewayAPI:
  enabled: true
  enableProxyProtocol: false
  enableAppProtocol: false
  enableAlpn: false
  xffNumTrustedHops: 0
  externalTrafficPolicy: Cluster
  gatewayClass:
    create: auto
  secretsNamespace:
    create: true
    name: cilium-secrets
    sync: true
version: 1.17.1
operator:
  prometheus:
    enabled: true
  dashboards:
    enabled: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cilium install -f cilium.yaml
cilium status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  9. Setup Worker Nodes
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Perform steps 1 through 5 above, then create /etc/rancher/rke2/config.yaml on each worker with the following contents:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server: https://192.168.100.100:9345
token: XXXXXXXXXX
node-name: kuber-worker-1
kubelet-arg:
  - '--node-status-update-frequency=4s'
  - '--max-pods=100'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  10. Retrieve Token from the Master Node
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /var/lib/rancher/rke2/server/node-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl disable rke2-server &amp;amp;&amp;amp; systemctl mask rke2-server
systemctl enable --now rke2-agent.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  11. Install LoxiLB on External Load Balancer Servers
&lt;/h3&gt;
&lt;h3&gt;
  
  
  12. Install Docker
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io

sudo systemctl enable docker
sudo systemctl start docker

# Create the docker group if it doesn't exist:
sudo groupadd docker

# Add your user to the docker group:
sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  13. Run LoxiLB
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#llb1
 docker run -u root --cap-add SYS_ADMIN   --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=host  --name loxilb ghcr.io/loxilb-io/loxilb:latest  --cluster=192.168.58.111 --self=0 --ka=192.168.58.111:192.168.58.110

#llb2
 docker run -u root --cap-add SYS_ADMIN   --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=host --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.58.110 --self=1 --ka=192.168.58.110:192.168.58.111
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  14. Deploy LoxiLB Controller on Kubernetes Cluster
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim kube-loxilb.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Config file contents:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-loxilb
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-loxilb
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - watch
      - list
      - patch
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - watch
      - list
      - patch
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - namespaces
      - services/status
    verbs:
      - get
      - watch
      - list
      - patch
      - update
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - gatewayclasses
      - gatewayclasses/status
      - gateways
      - gateways/status
      - tcproutes
      - udproutes
    verbs: ["get", "watch", "list", "patch", "update"]
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - authentication.k8s.io
    resources:
      - tokenreviews
    verbs:
      - create
  - apiGroups:
      - authorization.k8s.io
    resources:
      - subjectaccessreviews
    verbs:
      - create
  - apiGroups:
      - bgppeer.loxilb.io
    resources:
      - bgppeerservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - bgppolicydefinedsets.loxilb.io
    resources:
      - bgppolicydefinedsetsservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - bgppolicydefinition.loxilb.io
    resources:
      - bgppolicydefinitionservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - bgppolicyapply.loxilb.io
    resources:
      - bgppolicyapplyservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - loxiurl.loxilb.io
    resources:
      - loxiurls
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - egress.loxilb.io
    resources:
      - egresses
    verbs: ["get", "watch", "list", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-loxilb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-loxilb
subjects:
  - kind: ServiceAccount
    name: kube-loxilb
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-loxilb
  namespace: kube-system
  labels:
    app: kube-loxilb-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-loxilb-app
  template:
    metadata:
      labels:
        app: kube-loxilb-app
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
      priorityClassName: system-node-critical
      serviceAccountName: kube-loxilb
      terminationGracePeriodSeconds: 0
      containers:
      - name: kube-loxilb
        image: ghcr.io/loxilb-io/kube-loxilb:latest
        imagePullPolicy: Always
        command:
        - /bin/kube-loxilb
        args:
        - --loxiURL=http://192.168.57.7:11111,http://192.168.57.8:11111
        - --externalCIDR=192.168.57.100/32
        #- --cidrPools=defaultPool=192.168.57.100/32
        #- --monitor
        #- --setBGP=64512
        #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102
        #- --setRoles
        - --setLBMode=2
        #- --config=/opt/loxilb/agent/kube-loxilb.conf
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Set --externalCIDR to the VIP address of the load balancers
&lt;/h4&gt;
&lt;h4&gt;
  
  
  Set loxiURL to the address of the load balancers
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;args:
       - --loxiURL=http://192.168.57.7:11111,http://192.168.57.8:11111
       - --externalCIDR=192.168.57.100/32
       - --setLBMode=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@Oracle-Linux-Template manifests]# kubectl apply -f kube-loxilb.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  15. Integrate Cilium &amp;amp; LoxiLB using Webhook for LoadBalancer
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Deploy the following mutating-webhook for automatic LoadBalancer service creation:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim Loxilb-webhook.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: loxilb-webhook
  namespace: default
  labels:
    app: loxilb-webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: loxilb-webhook
  template:
    metadata:
      labels:
        app: loxilb-webhook
    spec:
      initContainers:
        - name: generate-certs
          image: docker.io/aebrahimy/loxilb-webhook-init:v5  
          volumeMounts:
            - name: webhook-tls
              mountPath: "/tls/"
          env:
            - name: MUTATE_CONFIG
              value: mutating-webhook-configuration
            - name: VALIDATE_CONFIG
              value: validating-webhook-configuration
            - name: WEBHOOK_SERVICE
              value: loxilb-webhook
            - name: WEBHOOK_NAMESPACE
              value: default
      containers:
        - name: webhook
          image: docker.io/aebrahimy/loxilb-webhook:v10
          ports:
            - containerPort: 443
          volumeMounts:
            - name: webhook-tls
              mountPath: "/tls/"
              readOnly: true
      volumes:
        - name: webhook-tls
          emptyDir: {}  

---
apiVersion: v1
kind: Service
metadata:
  name: loxilb-webhook
  namespace: default
spec:
  ports:
    - port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: loxilb-webhook

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: webhook-manager
rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["create", "get", "list", "patch", "update", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webhook-manager-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: webhook-manager
  apiGroup: rbac.authorization.k8s.io

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  17. Webhook Components:
&lt;/h3&gt;
&lt;h5&gt;
  
  
  Deployment
&lt;/h5&gt;
&lt;h6&gt;
  
  
  - initContainer
&lt;/h6&gt;
&lt;h6&gt;
  
  
  - Generates security certificates and creates the MutatingWebhookConfiguration
&lt;/h6&gt;
&lt;h6&gt;
  
  
  - Main Container
&lt;/h6&gt;
&lt;h6&gt;
  
  
  - Runs the Webhook and listens on port 443
&lt;/h6&gt;
&lt;h5&gt;
  
  
  Service
&lt;/h5&gt;
&lt;h6&gt;
  
  
  - A service is created to expose the Webhook on port 443.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  RBAC (Access Control)
&lt;/h5&gt;
&lt;h6&gt;
  
  
  - ClusterRole allows management of MutatingWebhookConfiguration
&lt;/h6&gt;
&lt;h6&gt;
  
  
  - ClusterRoleBinding assigns the role to the default ServiceAccount.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  Webhook Deployment Result:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  - When a LoadBalancer service is created, the Webhook modifies it for LoxiLB.
&lt;/h6&gt;
&lt;h6&gt;
  
  
  - This improves integration between Cilium and LoxiLB, reducing manual configurations.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  It is recommended to deploy webhook resources in the kube-system namespace for security.
&lt;/h5&gt;
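Conceptually, the mutation step is an AdmissionReview response carrying a base64-encoded JSONPatch. The sketch below shows the shape of such a response; the patched fields (`loadBalancerClass`, the `loxilb.io` annotations) are illustrative assumptions about what the webhook might set, not the actual implementation inside the images above:

```python
# Sketch of a mutating-webhook response that steers a LoadBalancer
# Service toward LoxiLB. The patch contents are assumptions for
# illustration, not the real webhook's code.
import base64
import json

def mutate_service(service: dict) -> dict:
    """Build an AdmissionReview response patching LoadBalancer Services."""
    patch = []
    if service.get("spec", {}).get("type") == "LoadBalancer":
        patch.append({
            "op": "add",
            "path": "/spec/loadBalancerClass",
            "value": "loxilb.io/loxilb",
        })
        patch.append({
            "op": "add",
            "path": "/metadata/annotations",
            "value": {"loxilb.io/lbmode": "onearm", "loxilb.io/liveness": "yes"},
        })
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }

review = mutate_service({"spec": {"type": "LoadBalancer"}, "metadata": {}})
decoded = json.loads(base64.b64decode(review["response"]["patch"]))
print(decoded[0]["path"])  # /spec/loadBalancerClass
```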
&lt;h3&gt;
  
  
  18. GatewayClass &amp;amp; Gateway:
&lt;/h3&gt;
&lt;h4&gt;
  
  
  - These CRDs define how traffic enters the cluster.
&lt;/h4&gt;
&lt;h4&gt;
  
  
  - With Cilium already deployed and Gateway API support enabled, a GatewayClass is created automatically.
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@Oracle-Linux-Template manifests]# kubectl get gatewayclasses.gateway.networking.k8s.io
NAME     CONTROLLER                     ACCEPTED   AGE
cilium   io.cilium/gateway-controller   True       2d4h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;A GatewayClass describes a type of Gateway that can be deployed; in other words, it is a template. This allows infrastructure providers to offer different kinds of Gateways, and users to select the one they need.&lt;/p&gt;

&lt;p&gt;For example, an infrastructure provider might create two GatewayClasses named "internet" and "private" for different purposes, possibly with different features: one for proxying services facing the internet and another for internal private applications.&lt;/p&gt;

&lt;p&gt;In our case, we will deploy the Cilium Gateway API (&lt;code&gt;io.cilium/gateway-controller&lt;/code&gt;).&lt;/p&gt;


&lt;h2&gt;
  
  
  HTTP Routing
&lt;/h2&gt;

&lt;p&gt;Now, let's deploy an application and configure Gateway API's &lt;code&gt;HTTPRoute&lt;/code&gt; to route HTTP traffic to the cluster. We will use the sample &lt;code&gt;bookinfo&lt;/code&gt; application.&lt;/p&gt;

&lt;p&gt;This demo consists of multiple deployments and services provided by the Istio project:&lt;/p&gt;

&lt;p&gt;🔍 Details&lt;br&gt;&lt;br&gt;
⭐ Reviews&lt;br&gt;&lt;br&gt;
✍ Ratings&lt;br&gt;&lt;br&gt;
📕 Product Page  &lt;/p&gt;

&lt;p&gt;We will use some of these services as the foundation for our Gateway API.&lt;/p&gt;


&lt;h2&gt;
  
  
  Deploying the Application
&lt;/h2&gt;

&lt;p&gt;Now, let's deploy the sample application in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@server:~#  kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Checking Deployed Services
&lt;/h3&gt;

&lt;p&gt;Note that these services are only available internally (&lt;code&gt;ClusterIP&lt;/code&gt;) and cannot be accessed from outside the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@server:~# kubectl get svc
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.96.41.65    &amp;lt;none&amp;gt;        9080/TCP   2m3s
kubernetes    ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP    11m
productpage   ClusterIP   10.96.147.97   &amp;lt;none&amp;gt;        9080/TCP   2m3s
ratings       ClusterIP   10.96.105.89   &amp;lt;none&amp;gt;        9080/TCP   2m3s
reviews       ClusterIP   10.96.149.14   &amp;lt;none&amp;gt;        9080/TCP   2m3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying Gateway and HTTPRoutes
&lt;/h2&gt;

&lt;p&gt;Before deploying the &lt;code&gt;Gateway&lt;/code&gt; and &lt;code&gt;HTTPRoutes&lt;/code&gt;, let's review the configuration we will use. We will go through it section by section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: Same
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the &lt;code&gt;Gateway&lt;/code&gt; section, the &lt;code&gt;gatewayClassName&lt;/code&gt; field is set to &lt;code&gt;cilium&lt;/code&gt;, referring to the previously configured &lt;code&gt;Cilium&lt;/code&gt; &lt;code&gt;GatewayClass&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;Gateway&lt;/code&gt; listens on port &lt;code&gt;80&lt;/code&gt; for incoming HTTP traffic from outside the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;code&gt;allowedRoutes&lt;/code&gt; field specifies the namespaces that can attach &lt;code&gt;Routes&lt;/code&gt; to this &lt;code&gt;Gateway&lt;/code&gt;.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting it to &lt;code&gt;Same&lt;/code&gt; means only &lt;code&gt;Routes&lt;/code&gt; within the same namespace can be used by this &lt;code&gt;Gateway&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If set to &lt;code&gt;All&lt;/code&gt;, the &lt;code&gt;Gateway&lt;/code&gt; can be used by &lt;code&gt;Routes&lt;/code&gt; from any namespace, allowing a single &lt;code&gt;Gateway&lt;/code&gt; to be shared across multiple namespaces managed by different teams.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The configuration includes specific annotations to ensure that the created &lt;code&gt;LoadBalancer&lt;/code&gt; service uses &lt;code&gt;LoxiLB&lt;/code&gt;. This ensures that external &lt;code&gt;LoadBalancers&lt;/code&gt; are automatically configured to allow access to the deployed &lt;code&gt;Gateway&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying the manifests, the &lt;code&gt;Gateway&lt;/code&gt; service and the required &lt;code&gt;LoadBalancer&lt;/code&gt; for external access will be automatically created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@Oracle-Linux-Template manifests]# kubectl get gateway
NAME                 CLASS    ADDRESS              PROGRAMMED   AGE
my-example-gateway   cilium   llb-192.168.57.100   True         3h55m
my-gateway           cilium   llb-192.168.57.100   True         5h56m
tls-gateway          cilium   llb-192.168.57.100   True         5h13m
[root@Oracle-Linux-Template manifests]# kubectl get svc
NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP          PORT(S)          AGE
cilium-gateway-my-example-gateway   LoadBalancer   10.110.146.122   llb-192.168.57.100   8080:32094/TCP   3h55m
cilium-gateway-my-gateway           LoadBalancer   10.110.148.120   llb-192.168.57.100   80:30790/TCP     5h56m
cilium-gateway-tls-gateway          LoadBalancer   10.110.233.230   llb-192.168.57.100   443:31522/TCP    5h13m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The external &lt;code&gt;LoadBalancers&lt;/code&gt; will also be configured automatically to route traffic to the specified endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@Loxi-LB1 ~]# sudo docker exec -it loxilb loxicmd get lb -o wide
|     EXT IP     | SEC IPS | SOURCES | HOST | PORT | PROTO |                        NAME                         | MARK | SEL |  MODE  | ENDPOINT  | EPORT | WEIGHT | STATE  |   COUNTERS    |
|----------------|---------|---------|------|------|-------|-----------------------------------------------------|------|-----|--------|-----------|-------|--------|--------|---------------|
| 192.168.57.100 |         |         |      |   80 | tcp   | default_cilium-gateway-my-gateway:llb-inst0         |    0 | rr  | onearm | 10.0.9.12 | 30790 |      1 | active | 11:912        |
|                |         |         |      |      |       |                                                     |      |     |        | 10.0.9.13 | 30790 |      1 | active | 0:0           |
|                |         |         |      |      |       |                                                     |      |     |        | 10.0.9.16 | 30790 |      1 | active | 0:0           |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Reviewing the HTTPRoute Manifest
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;HTTPRoute&lt;/code&gt; resource is part of the &lt;code&gt;Gateway API&lt;/code&gt; and defines how HTTP requests are routed from a &lt;code&gt;Gateway&lt;/code&gt; listener to Kubernetes services.&lt;/p&gt;

&lt;p&gt;It contains rules that direct traffic based on specific conditions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The first rule&lt;/strong&gt; defines a basic &lt;code&gt;L7&lt;/code&gt; proxy path:

&lt;ul&gt;
&lt;li&gt;HTTP traffic with a path starting with &lt;code&gt;/details&lt;/code&gt; is routed to the &lt;code&gt;details&lt;/code&gt; service on port &lt;code&gt;9080&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
- matches:
  - path:
      type: PathPrefix
      value: /details
  backendRefs:
  - name: details
    port: 9080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The second rule&lt;/strong&gt; defines more specific matching criteria:

&lt;ul&gt;
&lt;li&gt;If the HTTP request contains:

&lt;ul&gt;
&lt;li&gt;An HTTP header named &lt;code&gt;magic&lt;/code&gt; with the value &lt;code&gt;foo&lt;/code&gt;, and
&lt;/li&gt;
&lt;li&gt;The HTTP method is &lt;code&gt;GET&lt;/code&gt;, and
&lt;/li&gt;
&lt;li&gt;A query parameter named &lt;code&gt;great&lt;/code&gt; with the value &lt;code&gt;example&lt;/code&gt;,
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Then the traffic is routed to the &lt;code&gt;productpage&lt;/code&gt; service on port &lt;code&gt;9080&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
  - matches:
   - headers:
      - type: Exact
        name: magic
        value: foo
      queryParams:
      - type: Exact
        name: great
        value: example
      path:
        type: PathPrefix
        value: /
      method: GET
    backendRefs:
    - name: productpage
      port: 9080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
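All criteria inside a single match entry are ANDed together: a request reaches `productpage` only if every condition holds. A hedged Python sketch of that predicate, with field names mirroring the manifest above:

```python
# Simulate the second rule's combined match conditions: path prefix,
# HTTP method, exact header, and exact query parameter must all hold.
def rule_matches(request: dict) -> bool:
    return (
        request["path"].startswith("/")
        and request["method"] == "GET"
        and request.get("headers", {}).get("magic") == "foo"
        and request.get("query", {}).get("great") == "example"
    )

req = {"path": "/", "method": "GET",
       "headers": {"magic": "foo"}, "query": {"great": "example"}}
print(rule_matches(req))                          # True
print(rule_matches({**req, "method": "POST"}))    # False
```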



&lt;p&gt;As you can see, you can implement complex and consistent L7 routing rules.&lt;br&gt;&lt;br&gt;
With the traditional &lt;code&gt;Ingress&lt;/code&gt; API, achieving similar routing functionality often required using annotations, which led to inconsistencies between different &lt;code&gt;Ingress&lt;/code&gt; controllers.&lt;/p&gt;

&lt;p&gt;One major advantage of the new Gateway APIs is that they are fundamentally split into separate sections:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One for defining the &lt;code&gt;Gateway&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;One for defining &lt;code&gt;Routes&lt;/code&gt; to backend services.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By separating these functionalities, operators can modify and swap &lt;code&gt;Gateways&lt;/code&gt; while keeping the routing configuration unchanged.&lt;/p&gt;

&lt;p&gt;In other words:&lt;br&gt;&lt;br&gt;
If you decide to use a different &lt;code&gt;Gateway API&lt;/code&gt; controller, you can reuse the same manifest without modification.&lt;/p&gt;
&lt;h2&gt;
  
  
  Testing Connectivity from LoadBalancer VIP
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
curl --fail -s http://192.168.57.100/details/1 | jq                                                                                               
{
  "id": 1,
  "author": "William Shakespeare",
  "year": 1595,
  "type": "paperback",
  "pages": 200,
  "publisher": "PublisherA",
  "language": "English",
  "ISBN-10": "1234567890",
  "ISBN-13": "123-1234567890"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  TLS Termination
&lt;/h3&gt;

&lt;p&gt;While HTTP traffic routing is straightforward, securing traffic with HTTPS and TLS certificates is essential.&lt;br&gt;&lt;br&gt;
In this section, we will first deploy the TLS certificate.&lt;/p&gt;

&lt;p&gt;For demonstration purposes, we will use a self-signed TLS certificate issued by a mock Certificate Authority (CA).&lt;br&gt;&lt;br&gt;
One of the simplest ways to do this is by using the &lt;code&gt;mkcert&lt;/code&gt; tool.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Generating the TLS Certificate&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;First, we generate a TLS certificate that validates the following domains:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;bookinfo.sadadco.ir&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hipstershop.sadadco.ir&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@server:~# mkcert '*.sadadco.ir'
Created a new local CA 💥
Note: the local CA is not installed in the system trust store.
Run "mkcert -install" for certificates to be trusted automatically ⚠️

Created a new certificate valid for the following names 📜
 - "*.sadadco.ir"

Reminder: X.509 wildcards only go one level deep, so this won't match a.b.sadadco.ir ℹ️

The certificate is at "./_wildcard.sadadco.ir.pem" and the key at "./_wildcard.sadadco.ir-key.pem" ✅

It will expire on 9 June 2025 🗓
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;These domains represent the hostnames used in this &lt;code&gt;Gateway&lt;/code&gt; example.&lt;/p&gt;

&lt;p&gt;The generated certificate files are:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;_wildcard.sadadco.ir.pem&lt;/code&gt; (certificate)
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_wildcard.sadadco.ir-key.pem&lt;/code&gt; (private key)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will use these files for our &lt;code&gt;Gateway&lt;/code&gt; service.&lt;/p&gt;
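As the mkcert reminder notes, an X.509 wildcard covers exactly one DNS label. A minimal sketch of that matching rule (an approximation of RFC 6125 behavior, not a full implementation):

```python
# "*" matches exactly one DNS label, so "*.sadadco.ir" covers
# "bookinfo.sadadco.ir" but not "a.b.sadadco.ir".
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # the wildcard cannot span multiple labels
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.sadadco.ir", "bookinfo.sadadco.ir"))   # True
print(wildcard_matches("*.sadadco.ir", "a.b.sadadco.ir"))        # False
```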
&lt;h3&gt;
  
  
  &lt;strong&gt;Creating a TLS Secret in Kubernetes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now, we create a &lt;code&gt;TLS Secret&lt;/code&gt; in Kubernetes using the certificate and private key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@server:~# kubectl create secret tls demo-cert --key=_wildcard.sadadco.ir-key.pem --cert=_wildcard.sadadco.ir.pem
secret/demo-cert created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying a Gateway for HTTPS Traffic
&lt;/h2&gt;

&lt;p&gt;With the TLS secret in place, we can now deploy a new &lt;code&gt;Gateway&lt;/code&gt; for handling HTTPS traffic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: tls-gateway
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
  listeners:
  - name: https-1
    protocol: HTTPS
    port: 443
    hostname: "bookinfo.sadadco.ir"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
  - name: https-2
    protocol: HTTPS
    port: 443
    hostname: "hipstershop.sadadco.ir"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-1
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "bookinfo.sadadco.ir"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-2
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "hipstershop.sadadco.ir"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: productpage
      port: 9080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Traffic Splitting
&lt;/h2&gt;

&lt;p&gt;In this scenario, we will use the &lt;code&gt;Gateway API&lt;/code&gt; to distribute incoming traffic across multiple backends while assigning different weights to each.&lt;/p&gt;

&lt;p&gt;First, we deploy &lt;code&gt;Echo Servers&lt;/code&gt;, which respond to &lt;code&gt;cURL&lt;/code&gt; requests by displaying the pod name and node name.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Deploying Echo Servers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We deploy the &lt;code&gt;Echo Servers&lt;/code&gt; using YAML manifests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@server:~# kubectl apply -f https://raw.githubusercontent.com/nvibert/gateway-api-traffic-splitting/main/echo-servers.yml
service/echo-1 created
deployment.apps/echo-1 created
service/echo-2 created
deployment.apps/echo-2 created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Deploying the Gateway and HTTPRoute&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now, we proceed with deploying the &lt;code&gt;Gateway&lt;/code&gt; and &lt;code&gt;HTTPRoute&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
We apply the YAML manifests for both &lt;code&gt;Gateway&lt;/code&gt; and &lt;code&gt;HTTPRoute&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@Oracle-Linux-Template manifests]# cat gateway.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-example-gateway
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
  listeners:
  - protocol: HTTP
    port: 8080
    name: web-gw-echo
    allowedRoutes:
      namespaces:
        from: Same
---
[root@Oracle-Linux-Template manifests]# cat httpRoute.yml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route-1
spec:
  parentRefs:
  - name: my-example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - kind: Service
      name: echo-1
      port: 8080
      weight: 99
    - kind: Service
      name: echo-2
      port: 8090
      weight: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
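The `weight` fields above request a 99:1 split between `echo-1` and `echo-2`. A quick simulation of proportional selection illustrates the expected ratio; this models the distribution only, not Cilium's actual scheduling:

```python
# Simulate weighted backend selection: with weights 99 and 1,
# roughly 99% of requests should land on echo-1.
import random

random.seed(0)  # deterministic for reproducibility
backends = ["echo-1", "echo-2"]
weights = [99, 1]
picks = random.choices(backends, weights=weights, k=10_000)
share = picks.count("echo-1") / len(picks)
print(round(share, 2))  # close to 0.99
```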



&lt;h3&gt;
  
  
  Modifying HTTP Request Headers
&lt;/h3&gt;

&lt;p&gt;With &lt;code&gt;Cilium Gateway API&lt;/code&gt;, we can add, remove, or modify incoming HTTP request headers dynamically.&lt;/p&gt;

&lt;p&gt;For testing this capability, we will use the same &lt;code&gt;Echo Servers&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
First, let's create a new &lt;code&gt;HTTPRoute&lt;/code&gt; that adds a custom header to incoming requests.&lt;/p&gt;


&lt;h2&gt;
  
  
  Creating a New HTTPRoute
&lt;/h2&gt;

&lt;p&gt;The following YAML file defines an &lt;code&gt;HTTPRoute&lt;/code&gt; that adds a header named &lt;code&gt;my-cilium-header-name&lt;/code&gt; with the value &lt;code&gt;my-cilium-header-value&lt;/code&gt; to any request that matches the &lt;code&gt;/cilium-add-a-request-header&lt;/code&gt; path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: header-http-echo
spec:
  parentRefs:
  - name: cilium-gw
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cilium-add-a-request-header
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: my-cilium-header-name
          value: my-cilium-header-value
    backendRefs:
      - name: echo-1
        port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Removing a Header
&lt;/h2&gt;

&lt;p&gt;To remove a specific header from a request, we can use the &lt;code&gt;remove&lt;/code&gt; field.&lt;br&gt;&lt;br&gt;
For example, the following configuration removes the &lt;code&gt;x-request-id&lt;/code&gt; header from incoming requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- type: RequestHeaderModifier
  requestHeaderModifier:
    remove: ["x-request-id"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Modifying HTTP Response Headers
&lt;/h2&gt;

&lt;p&gt;Similar to modifying request headers, response header modification can be useful for various use cases.&lt;br&gt;&lt;br&gt;
For example:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teams can add or remove cookies for a specific service, allowing returning users to be identified.
&lt;/li&gt;
&lt;li&gt;A frontend application can determine whether it is connected to a stable or beta backend version and adjust the UI or processing accordingly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the time of writing, this feature is part of the &lt;strong&gt;"experimental"&lt;/strong&gt; channel in the &lt;code&gt;Gateway API&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
Therefore, before using it, we must install the &lt;strong&gt;experimental CRDs&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@server:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_gatewayclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_gateways.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_httproutes.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_referencegrants.yaml
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io configured
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating an HTTPRoute to Modify Response Headers
&lt;/h2&gt;

&lt;p&gt;Now, let's create a new &lt;code&gt;HTTPRoute&lt;/code&gt; that modifies response headers for requests matching the &lt;code&gt;/multiple&lt;/code&gt; path. In this example, three new headers are added to the response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: response-header-modifier
spec:
  parentRefs:
  - name: cilium-gw
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /multiple
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        add:
        - name: X-Header-Add-1
          value: header-add-1
        - name: X-Header-Add-2
          value: header-add-2
        - name: X-Header-Add-3
          value: header-add-3
    backendRefs:
    - name: echo-1
      port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Proxy Protocol v2 with LoxiLB
&lt;/h3&gt;

&lt;p&gt;In network architectures involving load balancers, client connections are routed through these intermediaries before reaching backend servers. While load balancers efficiently distribute traffic, they often obscure the original client IP address and connection details, as backend servers perceive the load balancer as the source of requests.&lt;/p&gt;

&lt;p&gt;This lack of transparency introduces several challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Accurate logging:&lt;/strong&gt; Backend servers cannot log the original client information, leading to incomplete or misleading logs.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting:&lt;/strong&gt; Identifying and resolving client-related issues becomes difficult without connection metadata.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access control:&lt;/strong&gt; Implementing IP-based rules or geo-location policies is impossible without knowing the client's original IP.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Proxy Protocol v2 overcomes these limitations by embedding client connection metadata into the communication stream. This allows backend servers to process requests while preserving the original client information, enhancing transparency, accuracy, and control in modern network environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Proxy Protocol Works with LoxiLB
&lt;/h3&gt;

&lt;p&gt;When a client establishes a TCP connection (after completing the 3-way handshake), LoxiLB prepends a Proxy Protocol v2 header containing client metadata to the data stream, then forwards the request to the backend servers.&lt;/p&gt;

&lt;p&gt;The metadata within the Proxy Protocol v2 header includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client information:&lt;/strong&gt; Original source/destination IP addresses and ports.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protocol details:&lt;/strong&gt; TCP/UDP.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Address family:&lt;/strong&gt; IPv4/IPv6.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Additional fields:&lt;/strong&gt; Checksum, data length, or custom information.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By encoding this metadata, LoxiLB ensures transparency, improves visibility, and enables precise logging and troubleshooting in complex environments.&lt;/p&gt;

&lt;p&gt;LoxiLB leverages eBPF technology to dynamically generate Proxy Protocol headers with minimal performance overhead. However, before enabling Proxy Protocol v2, ensure that server-side applications support this feature to prevent compatibility issues.&lt;/p&gt;
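To make the layout concrete, here is a sketch of building and parsing a Proxy Protocol v2 header for TCP over IPv4: a fixed 12-byte signature, a version/command byte, a family/protocol byte, a 2-byte length, then addresses and ports. The addresses and ports used are illustrative:

```python
# Build and parse a Proxy Protocol v2 header (TCP over IPv4).
import socket
import struct

SIG = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte v2 signature

def build_ppv2_tcp4(src_ip, src_port, dst_ip, dst_port) -> bytes:
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    # 0x21 = version 2, PROXY command; 0x11 = TCP over IPv4
    return SIG + bytes([0x21, 0x11]) + struct.pack("!H", len(addrs)) + addrs

def parse_ppv2(data: bytes):
    assert data[:12] == SIG, "not a PPv2 header"
    length = struct.unpack("!H", data[14:16])[0]
    src_ip = socket.inet_ntoa(data[16:20])
    src_port = struct.unpack("!H", data[24:26])[0]
    return src_ip, src_port, 16 + length  # payload starts after the header

hdr = build_ppv2_tcp4("192.168.57.1", 51724, "192.168.57.100", 80)
print(parse_ppv2(hdr))  # ('192.168.57.1', 51724, 28)
```

This mirrors what a backend application must do when the feature is enabled: strip the header, record the client metadata, and process the remaining stream normally.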

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp6bec23q8tp0ahyv69o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp6bec23q8tp0ahyv69o.png" alt="Image description" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Enabling Proxy Protocol in LoxiLB
&lt;/h3&gt;

&lt;p&gt;To create a LoadBalancer service with LoxiLB and enable Proxy Protocol v2 headers in traffic, add the following annotations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: tcp-lb-fullnat
  annotations:
    loxilb.io/liveness: "yes"
    loxilb.io/lbmode: "fullnat"
    loxilb.io/useproxyprotov2: "yes"
spec:
  externalTrafficPolicy: Local
  loadBalancerClass: loxilb.io/loxilb
  selector:
    what: tcp-fullnat-test
  ports:
    - port: 57002
      targetPort: 80
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enabling Proxy Protocol in Cilium API Gateway
&lt;/h3&gt;

&lt;p&gt;If Proxy Protocol headers are added to traffic, the receiving application must be able to recognize and handle them. Cilium supports this starting with v1.15.&lt;/p&gt;

&lt;p&gt;To enable this capability in Cilium during installation, modify the cilium.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gatewayAPI:
   enabled: true
   enableProxyProtocol: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After enabling it, create the gateway configuration as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-example-gateway
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "fullnat"
      loxilb.io/useproxyprotov2: "yes"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
  listeners:
  - protocol: HTTP
    port: 8080
    name: web-gw-echo
    allowedRoutes:
      namespaces:
        from: Same
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying this configuration, the load balancers will be updated as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@Loxi-LB1 ~]# sudo docker exec -it loxilb loxicmd get lb -o wide
|     EXT IP     | SEC IPS | SOURCES | HOST | PORT | PROTO |                        NAME                         | MARK | SEL |     MODE     | ENDPOINT  | EPORT | WEIGHT | STATE  | COUNTERS |
|----------------|---------|---------|------|------|-------|-----------------------------------------------------|------|-----|--------------|-----------|-------|--------|--------|----------|
| 192.168.57.100 |         |         |      |   80 | tcp   | default_cilium-gateway-my-gateway:llb-inst0         |    0 | rr  | fullnat:ppv2 | 10.0.9.12 | 31932 |      1 | active | 9:1024   |
|                |         |         |      |      |       |                                                     |      |     |              | 10.0.9.13 | 31932 |      1 | active | 0:0      |
|                |         |         |      |      |       |                                                     |      |     |              | 10.0.9.16 | 31932 |      1 | active | 0:0      |
| 192.168.57.100 |         |         |      |  443 | tcp   | default_cilium-gateway-tls-gateway:llb-inst0        |    0 | rr  | onearm       | 10.0.9.12 | 31522 |      1 | active | 0:0      |
|                |         |         |      |      |       |                                                     |      |     |              | 10.0.9.13 | 31522 |      1 | active | 0:0      |
|                |         |         |      |      |       |                                                     |      |     |              | 10.0.9.16 | 31522 |      1 | active | 0:0      |
| 192.168.57.100 |         |         |      | 8080 | tcp   | default_cilium-gateway-my-example-gateway:llb-inst0 |    0 | rr  | fullnat:ppv2 | 10.0.9.12 | 32007 |      1 | active | 12:980   |
|                |         |         |      |      |       |                                                     |      |     |              | 10.0.9.13 | 32007 |      1 | active | 12:980   |
|                |         |         |      |      |       |                                                     |      |     |              | 10.0.9.16 | 32007 |      1 | active | 6:490    |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the original client address now arrives via Proxy Protocol, the Cilium API Gateway can pass it on in the x-forwarded-for header of proxied requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(09:58:34)──&amp;gt; curl --fail -s http://192.168.57.100:8080/echo | jq | grep x-forwarded-for
      "x-forwarded-for": "192.168.57.1",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
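
&lt;p&gt;On the application side, the original client is conventionally the left-most entry in x-forwarded-for, since each intermediate proxy appends its own upstream address. A minimal sketch of extracting it (header parsing only; how you obtain the header value depends on your framework):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def client_ip_from_xff(xff_header):
    """Return the left-most (original client) address from an X-Forwarded-For value."""
    # The header looks like "client, proxy1, proxy2"; the first entry is the client.
    return xff_header.split(",")[0].strip()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For example, client_ip_from_xff("192.168.57.1, 10.0.9.12") yields 192.168.57.1. Keep in mind that x-forwarded-for is only trustworthy when the gateway is the sole entry point to the application, since clients can otherwise inject their own values.&lt;/p&gt;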



&lt;p&gt;The address reported in the header matches the client machine's own interface, confirming that the original source IP is preserved end to end:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(09:59:24)──&amp;gt; ip -br a | grep 57.1
vboxnet1         UP             192.168.57.1/24 fe80::800:27ff:fe00:1/6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>loxilb</category>
      <category>cilium</category>
      <category>rke2</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
