<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abdullrahman Eissa</title>
    <description>The latest articles on DEV Community by Abdullrahman Eissa (@sysalchemist).</description>
    <link>https://dev.to/sysalchemist</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2478898%2Fd5c02b9e-78f5-4f03-8afd-10ef0f0f14b2.jpg</url>
      <title>DEV Community: Abdullrahman Eissa</title>
      <link>https://dev.to/sysalchemist</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sysalchemist"/>
    <language>en</language>
    <item>
      <title>How I Built a Custom Kubernetes Ingress in Rust/Go That Outperforms Nginx by 11%</title>
      <dc:creator>Abdullrahman Eissa</dc:creator>
      <pubDate>Sun, 17 May 2026 02:57:01 +0000</pubDate>
      <link>https://dev.to/sysalchemist/how-i-built-a-custom-kubernetes-ingress-in-rustgo-that-outperforms-nginx-by-11-4iag</link>
      <guid>https://dev.to/sysalchemist/how-i-built-a-custom-kubernetes-ingress-in-rustgo-that-outperforms-nginx-by-11-4iag</guid>
      <description>&lt;h2&gt;
  
  
  How I Built a Custom Kubernetes Ingress in Rust/Go That Outperforms Nginx by 11%
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Silex: Re-engineering K8s Edge Routing for the Cloud-Native Era
&lt;/h3&gt;




&lt;h3&gt;
  
  
  The Pain Point: The Heavy Burden of Legacy Ingress
&lt;/h3&gt;

&lt;p&gt;In modern cloud-native architectures, we often accept unnecessary complexity as a default. When deploying a standard enterprise ingress controller such as the Nginx Ingress Controller into a lightweight cluster, several architectural inefficiencies immediately become apparent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image Bloat and Cold Starts:&lt;/strong&gt; The container footprint for standard controllers often exceeds 280MB. In our testing on a bare-metal environment, this resulted in cold start pull times stretching to over 4.5 minutes. For dynamic, auto-scaling environments, this latency is unacceptable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Pollution:&lt;/strong&gt; Traditional ingress setups pollute the global cluster scope with validating webhook configurations. If a namespace is forcefully deleted, these webhooks leave behind orphaned states that block the creation of future network resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Overhead:&lt;/strong&gt; Legacy architectures carry legacy codebases. Running a multi-process, configuration-reloading proxy inside a resource-constrained node wastes CPU cycles and memory that should belong to the actual backend applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To solve this, I designed &lt;strong&gt;Silex&lt;/strong&gt;: a dedicated Kubernetes Ingress Controller built from scratch with a strict separation of concerns, utilizing Go for configuration orchestration and Rust for raw data plane execution.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Architecture: Separating the Brain from the Muscle
&lt;/h3&gt;

&lt;p&gt;Silex splits the traditional controller architecture into two distinct, isolated layers: the Control Plane and the Data Plane.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Control Plane (Go Operator)
&lt;/h4&gt;

&lt;p&gt;The control plane is written in Go, leveraging the official Kubernetes client-go libraries. Its sole responsibility is to act as the cluster's "brain." It watches the Kubernetes API for Ingress resources and corresponding EndpointSlices. By utilizing Go's robust concurrency patterns and native integration with the Kubernetes ecosystem, it monitors cluster state changes with minimal footprint, completely decoupled from the data path.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Data Plane (Rust Ingress Engine)
&lt;/h4&gt;

&lt;p&gt;The data plane is written in pure Rust using Tokio for asynchronous network I/O. It acts as the "muscle," binding directly to host ports 80 and 443. Instead of relying on heavy configuration reloads or complex parsing libraries, the engine keeps routing rules inside an ultra-fast, concurrent hash map using the &lt;code&gt;dashmap&lt;/code&gt; crate. This allows the proxy to achieve thread-safe, non-blocking lookups and route client traffic in fractions of a millisecond.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Debugging Story: Hunting a Silent 1.8 Million Request Drop
&lt;/h3&gt;

&lt;p&gt;Building infrastructure from scratch means encountering unique engineering hurdles. During our initial benchmark, Silex registered a staggering 1.8 million read errors with zero successful responses. The engine was accepting connections at a rate of 55,000 per second without crashing, but dropping the sockets instantly.&lt;/p&gt;

&lt;p&gt;The investigation revealed a subtle architectural friction between the control plane's payload structure and the data plane's raw stream matching. To maximize routing speeds, the Rust data plane avoids heavy reflection or generic JSON deserialization. Instead, it scans the raw byte buffers using direct string pattern matching:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;h_idx&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i_idx&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="nf"&gt;.find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;host&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="nf"&gt;.find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;ip&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The bug was purely syntax-driven: the testing payload contained a single space after the colon (&lt;code&gt;"host": "silex.local"&lt;/code&gt;), which failed the strict, space-less match condition inside the Rust binary. Because the error-handling logic was designed to exit the task silently rather than panic, the thread dropped the TCP connection immediately, without ever writing an HTTP response. Correcting the payload format to align with the low-level byte-matching logic resolved the issue instantly and unlocked the engine's true performance.&lt;/p&gt;
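&lt;p&gt;To make the failure mode concrete, here is a minimal, self-contained sketch of the same strict-marker scan (simplified from the snippet above, not the engine's actual parser; the payloads are invented). A single space after the colon is enough to miss the marker entirely:&lt;/p&gt;

```rust
// Strict, space-less scan: looks for the exact byte sequence "host":" and
// returns the value up to the closing quote, plus a success flag.
fn find_host_strict(req: String) -> (String, bool) {
    let marker = "\"host\":\"";
    if let Some(idx) = req.find(marker) {
        let rest = req.split_at(idx + marker.len()).1;
        if let Some(end) = rest.find('"') {
            return (rest.split_at(end).0.to_string(), true);
        }
    }
    // No marker found: the caller gives up silently, mirroring the bug.
    (String::new(), false)
}

fn main() {
    let compact = String::from("{\"host\":\"silex.local\",\"ip\":\"10.42.0.7\"}");
    let spaced = String::from("{\"host\": \"silex.local\"}");

    let (host, ok) = find_host_strict(compact);
    println!("compact payload matched: {} ({})", ok, host);

    // A single space after the colon silently breaks the lookup.
    let (_, ok) = find_host_strict(spaced);
    println!("spaced payload matched: {}", ok);
}
```

&lt;p&gt;Because the real task exits silently on a failed match instead of writing an HTTP error, the client only ever sees a dropped connection, which is exactly what the 1.8 million read errors in the initial benchmark looked like.&lt;/p&gt;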




&lt;h3&gt;
  
  
  The Benchmark: Head-to-Head Against Nginx
&lt;/h3&gt;

&lt;p&gt;The benchmark was conducted on an identical K3s cluster, routing traffic to a standard HTTP echo-server backend under a concurrent load of 400 connections spread across 4 processing threads for 30 seconds using &lt;code&gt;wrk&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Nginx Ingress Controller&lt;/th&gt;
&lt;th&gt;Silex (Rust/Go Architecture)&lt;/th&gt;
&lt;th&gt;Performance Delta&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Requests per Second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2,274.13 req/sec&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2,536.39 req/sec&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Silex outperformed by 11.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total Processed Requests&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;68,293 requests&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;76,161 requests&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Silex served 7,868 more requests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maximum Latency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1.97 seconds&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.98 seconds&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Identical (Backend saturated)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Socket Errors (Timeouts)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;286&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;302&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Saturation transferred to backend&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Image/Binary Size / Cold Start&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~280MB / ~4.5 minutes&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.7MB / Instantaneous&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Silex eliminates cold starts&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;During the Nginx test, the controller experienced 286 socket timeouts as the architecture struggled under the connection pool limit. Silex handled the connection limits with ease; the 302 timeouts observed during its run were verified to be backend drops, where the single-instance Node.js echo-server's event loop became completely saturated by the sheer speed of the Rust data plane.&lt;/p&gt;




&lt;h3&gt;
  
  
  Future Roadmap: Moving to Production-Ready
&lt;/h3&gt;

&lt;p&gt;While Silex successfully demonstrated superior raw routing capabilities and an almost non-existent resource footprint compared to Nginx, transforming it into an enterprise-grade product requires implementing production safeguards:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Native TLS/HTTPS Termination:&lt;/strong&gt; Integrating native SSL/TLS decryption into the Rust data plane using &lt;code&gt;rustls&lt;/code&gt;, mapped dynamically to cert-manager secrets synced via the Go operator.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Active Health Checking:&lt;/strong&gt; Implementing a continuous loop where the Go operator validates backend endpoint availability, dynamically flushing dead IPs from Rust's memory map without traffic interruption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus Metrics Integration:&lt;/strong&gt; Exposing a low-overhead &lt;code&gt;/metrics&lt;/code&gt; endpoint from the Tokio worker threads to natively track active connections, latency percentiles, and error rates.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Open Source &amp;amp; Contributions
&lt;/h3&gt;

&lt;p&gt;If you want to try Silex, contribute to the Go Operator, or optimize the Rust Data Plane, check out the repository on GitHub.&lt;/p&gt;

&lt;p&gt;Contributions, bug reports, and performance feedback are welcome.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/AbdullrahmanEissa/Silex" rel="noopener noreferrer"&gt;github.com/AbdullrahmanEissa/Silex&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Licensing:&lt;/strong&gt; Open-Source (MIT License)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Join the project to help build a lighter, faster, and cloud-native alternative to legacy ingress architectures.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>nginx</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
