<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mustafa Veysi Soyvural</title>
    <description>The latest articles on DEV Community by Mustafa Veysi Soyvural (@veysi).</description>
    <link>https://dev.to/veysi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3845019%2Fa01e5487-d4ee-4af0-aa62-f7cbc2be3ded.png</url>
      <title>DEV Community: Mustafa Veysi Soyvural</title>
      <link>https://dev.to/veysi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/veysi"/>
    <language>en</language>
    <item>
      <title>Consistent Hashing in Go: From the Math to Production-Grade Code</title>
      <dc:creator>Mustafa Veysi Soyvural</dc:creator>
      <pubDate>Thu, 02 Apr 2026 21:54:56 +0000</pubDate>
      <link>https://dev.to/veysi/i-built-consistent-hashing-from-scratch-in-go-heres-what-i-learned-24pj</link>
      <guid>https://dev.to/veysi/i-built-consistent-hashing-from-scratch-in-go-heres-what-i-learned-24pj</guid>
      <description>&lt;p&gt;When you scale a cache cluster from 5 to 6 servers using modulo hashing, 83% of your keys remap to different servers. All at once. Every one of those remapped keys hits your database simultaneously — the thundering herd problem that has taken down production systems at every scale.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before:  hash("user:1001") % 5 = 3  → Server C
After:   hash("user:1001") % 6 = 1  → Server A  ← cache miss
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consistent hashing solves this. It's the algorithm behind DynamoDB's partitioning, Cassandra's token ring, Discord's chat server routing, and Netflix's CDN distribution. This post walks through how it works, why it works, and the implementation details that separate a textbook explanation from production-ready code — backed by benchmarks and chaos testing.&lt;/p&gt;

&lt;p&gt;The full implementation is &lt;a href="https://github.com/soyvural/consistent-hashing" rel="noopener noreferrer"&gt;available on GitHub&lt;/a&gt;: ~800 lines of Go, zero external dependencies.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Algorithm
&lt;/h2&gt;

&lt;p&gt;Consistent hashing was introduced by Karger et al. in 1997 to solve cache distribution for the early web. The original paper defined four formal properties that any consistent hash function must satisfy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Balance&lt;/strong&gt; — Keys distribute evenly across all nodes. No single node should be disproportionately loaded.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monotonicity&lt;/strong&gt; — When a new node joins, keys only move &lt;em&gt;to&lt;/em&gt; the new node. Keys never reshuffle between existing nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spread&lt;/strong&gt; — Across different client views of the cluster, a key maps to a small number of distinct nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load&lt;/strong&gt; — No node receives more than its fair share of keys, regardless of which subset of nodes a client sees.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't aspirational goals — they're the mathematical contract. If your implementation violates any of them, you don't have consistent hashing; you have a hash ring with bugs.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Ring Works
&lt;/h3&gt;

&lt;p&gt;Instead of &lt;code&gt;hash(key) % N&lt;/code&gt;, both nodes and keys are placed on a circular ring of size 2^32. To find which node owns a key:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Hash the key to a position on the ring&lt;/li&gt;
&lt;li&gt;Walk clockwise until you hit a node&lt;/li&gt;
&lt;li&gt;That node owns the key&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The critical property: adding or removing a node only affects the keys in the arc between the changed node and its predecessor. Everything else stays put.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Strategy          │ Keys Remapped (5→6 nodes, 100K keys)
──────────────────┼─────────────────────────────────────
Modulo (hash%N)   │  83,803  (83.8%)
Consistent Hash   │  15,723  (15.7%)  ← ~K/N theoretical bound
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's monotonicity in action. The theoretical bound is K/N (at most 20,000 of 100K keys across 5 nodes); the expected count when adding a sixth node is K/6 ≈ 16,667. The measured 15,723 falls within expected variance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Virtual Nodes: From Theory to Practice
&lt;/h2&gt;

&lt;p&gt;The raw ring algorithm satisfies monotonicity but fails badly on balance. With one position per physical node on a 5-node ring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Node A:  32,014 keys
Node B:  27,412 keys
Node C:  18,805 keys
Node D:  19,914 keys
Node E:   1,855 keys  ← 17x less than Node A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A 17:1 imbalance. In production, Node E sits idle while Node A melts.&lt;/p&gt;

&lt;p&gt;The fix is virtual nodes: each physical node gets multiple positions on the ring. &lt;code&gt;Node-A#0&lt;/code&gt; through &lt;code&gt;Node-A#149&lt;/code&gt; are each hashed independently to different ring positions. A lookup that lands on any of Node A's virtual nodes routes to the physical Node A.&lt;/p&gt;

&lt;p&gt;Here's how balance improves as you add virtual nodes (measured across 100K keys, 5 physical nodes):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Virtual Nodes&lt;/th&gt;
&lt;th&gt;Std Deviation&lt;/th&gt;
&lt;th&gt;Worst Node Ratio&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;11,353&lt;/td&gt;
&lt;td&gt;3.20x average&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;4,601&lt;/td&gt;
&lt;td&gt;1.83x average&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;150&lt;/td&gt;
&lt;td&gt;2,824&lt;/td&gt;
&lt;td&gt;1.47x average&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;td&gt;976&lt;/td&gt;
&lt;td&gt;1.17x average&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Diminishing returns kick in hard after 150. Going from 1 to 150 vnodes cuts standard deviation by 75%. Going from 150 to 500 cuts it by another 65%, but at the cost of 3.3x more ring entries, more memory, and slower binary searches. Around 150 is a sweet spot for most deployments — in the same range as the per-node vnode counts production systems such as Cassandra ship with.&lt;/p&gt;




&lt;h2&gt;
  
  
  Proving Correctness Under Chaos
&lt;/h2&gt;

&lt;p&gt;Benchmarks show the happy path. Chaos testing proves the guarantees hold when things break.&lt;/p&gt;

&lt;h3&gt;
  
  
  Failure Isolation
&lt;/h3&gt;

&lt;p&gt;When a node dies, only its keys are affected. This is a direct consequence of monotonicity — surviving nodes' arc boundaries don't change.&lt;/p&gt;

&lt;p&gt;Testing with 10,000 keys across five Redis nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-1     2,197 keys
redis-2     1,731 keys
redis-3     1,559 keys  ← killed
redis-4     1,730 keys
redis-5     2,783 keys

After redis-3 failure:
  Cache hit rate:  84.4%
  Cache misses:    1,559  ← exactly redis-3's key count
  Keys remapped from surviving nodes: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zero remapping for surviving nodes. The blast radius is perfectly contained.&lt;/p&gt;

&lt;h3&gt;
  
  
  Catastrophic Scenarios
&lt;/h3&gt;

&lt;p&gt;Three tests verified the formal properties under extreme conditions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mass failure (50% node loss):&lt;/strong&gt; Removed 5 of 10 nodes simultaneously. Zero keys remapped between surviving nodes. All misses traced to removed nodes only — monotonicity holds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rapid churn (20 add/remove cycles):&lt;/strong&gt; Each operation remapped within 1.5x of the K/N theoretical bound. No cumulative drift — balance recovers after each operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent chaos:&lt;/strong&gt; 2.4 million reads during simultaneous node additions and removals. Zero data races. 100% correctness on every read that targeted a live node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't unit tests — they're property-based checks that the implementation upholds Karger's guarantees under adversarial conditions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Implementation Deep-Dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Hash Function Selection
&lt;/h3&gt;

&lt;p&gt;The implementation supports pluggable hash functions: FNV-1a, MD5, and CRC32. The choice matters more than you'd expect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FNV-1a&lt;/strong&gt; is the default, and for good reason. Consistent hashing doesn't need collision resistance — there's no adversary trying to forge hash collisions on your cache keys. What it needs is uniform distribution and speed. FNV-1a delivers both: non-cryptographic, fast, and well-distributed across the 32-bit space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MD5&lt;/strong&gt; works but wastes cycles on cryptographic properties you don't need. You're paying for collision resistance that buys you nothing in a hash ring context. Use it only if you need compatibility with an existing system that already uses MD5 hashes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CRC32&lt;/strong&gt; is the fastest option but has known distribution weaknesses with certain input patterns. Fine for benchmarking, risky for production.&lt;/p&gt;

&lt;p&gt;If you need maximum throughput, consider xxHash — it's faster than FNV-1a with comparable distribution quality. The implementation's pluggable interface makes swapping trivial.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collision Handling in Virtual Node Space
&lt;/h3&gt;

&lt;p&gt;With 150 virtual nodes per physical node across 5 nodes, you're placing 750 points in a 2^32 space. Collisions are rare but must be handled. The birthday approximation n^2/(2m) gives 750^2 / (2 · 2^32) ≈ 6.5e-5 — roughly 1 in 15,000 deployments will see at least one collision.&lt;/p&gt;

&lt;p&gt;The implementation detects and skips duplicate positions rather than overwriting. Overwriting would silently transfer ownership of a key range from one node to another — a data correctness bug that produces no error and no log entry, and only manifests under load.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concurrency: RWMutex, Not Mutex
&lt;/h3&gt;

&lt;p&gt;The hash ring has a heavily skewed read/write ratio. Key lookups (reads) happen on every cache operation — potentially millions per second. Node additions and removals (writes) happen during deployments and failures — maybe a few times per day.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sync.RWMutex&lt;/code&gt; allows concurrent readers with exclusive writers. A plain &lt;code&gt;sync.Mutex&lt;/code&gt; would serialize every lookup behind every other lookup, destroying throughput for no safety benefit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ring Lookup: Sorted Array + Binary Search
&lt;/h3&gt;

&lt;p&gt;The ring is stored as a sorted array of virtual node positions. Key lookup uses &lt;code&gt;sort.Search&lt;/code&gt; (binary search) to find the first node position clockwise from the key's hash — O(log n) where n is the total number of virtual nodes.&lt;/p&gt;

&lt;p&gt;An alternative is a self-balancing BST (red-black tree, etc.), which gives O(log n) insertion and deletion without re-sorting. But Go's &lt;code&gt;sort.Search&lt;/code&gt; on a slice is cache-friendly and fast in practice. Re-sorting 750 elements on the rare node change is negligible compared to the millions of lookups that benefit from contiguous memory layout.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error Handling on Empty Rings
&lt;/h3&gt;

&lt;p&gt;An empty ring — no nodes registered — must return an explicit error, not a zero value. A zero-value return silently routes all keys to... nothing. No crash, no log, no alert. Data just disappears.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nodes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ErrEmptyRing&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a system boundary. Fail loudly.&lt;/p&gt;




&lt;h2&gt;
  
  
  When NOT to Use Consistent Hashing
&lt;/h2&gt;

&lt;p&gt;Consistent hashing is not universally optimal. Knowing when to reach for something else is as important as knowing the algorithm itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Jump Consistent Hash
&lt;/h3&gt;

&lt;p&gt;Google's Jump Consistent Hash (Lamping &amp;amp; Veach, 2014) uses O(1) space and O(ln n) time with near-perfect balance — no virtual nodes needed. The catch: nodes must be identified by sequential integers (0, 1, 2, ...), not names or addresses. You can't remove node 3 without renumbering nodes 4+.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use jump hash when:&lt;/strong&gt; nodes are numbered slots (e.g., sharded database partitions) and you only add/remove from the tail. &lt;strong&gt;Use ring-based consistent hashing when:&lt;/strong&gt; nodes have identities (IP addresses, hostnames) and any node can fail independently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bounded-Load Consistent Hashing
&lt;/h3&gt;

&lt;p&gt;Google's bounded-load variant (Mirrokni et al., 2018) caps each node's load at &lt;code&gt;ceil(average_load * (1 + epsilon))&lt;/code&gt;. When a node would exceed its cap, the algorithm continues clockwise to the next under-capacity node.&lt;/p&gt;

&lt;p&gt;This solves the "celebrity problem" — when a small number of keys receive massively disproportionate traffic. Standard consistent hashing distributes &lt;em&gt;keys&lt;/em&gt; evenly but not &lt;em&gt;load&lt;/em&gt; if key popularity is skewed. A single viral cache key can overwhelm its assigned node while others idle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use bounded-load when:&lt;/strong&gt; key access patterns are highly skewed (social media feeds, trending content, celebrity profiles).&lt;/p&gt;

&lt;h3&gt;
  
  
  Rendezvous Hashing (Highest Random Weight)
&lt;/h3&gt;

&lt;p&gt;Each key computes a weight for every node; the highest-weight node wins. O(n) per lookup, but n is typically small (tens of nodes, not thousands). No ring, no virtual nodes, no rebalancing logic. Elegant for small clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use rendezvous when:&lt;/strong&gt; your node count is small (&amp;lt; 50) and you value simplicity over lookup speed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Plain Modulo with Planned Migration
&lt;/h3&gt;

&lt;p&gt;If you can tolerate a maintenance window, &lt;code&gt;hash % N&lt;/code&gt; with a full data migration on resize is simpler, faster per lookup, and has zero overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use modulo when:&lt;/strong&gt; the cluster is static, or downtime during scaling is acceptable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where Consistent Hashing Runs in Production
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt; — The 2007 Dynamo paper popularized consistent hashing for key-value partitioning. Each node owns a range of the ring; virtual nodes handle heterogeneous hardware.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apache Cassandra&lt;/strong&gt; — Token ring partitioning assigns each node a set of token ranges. Virtual nodes (vnodes) were added in Cassandra 1.2 for automatic rebalancing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discord&lt;/strong&gt; — Routes users to chat servers using consistent hashing, ensuring session stickiness during server scaling events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Netflix&lt;/strong&gt; — CDN edge servers use consistent hashing to route content requests, minimizing cache misses during server pool changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memcached clients&lt;/strong&gt; — libmemcached and most client libraries use consistent hashing by default for key-to-server routing.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Code &amp;amp; Resources
&lt;/h2&gt;

&lt;p&gt;The full implementation: &lt;a href="https://github.com/soyvural/consistent-hashing" rel="noopener noreferrer"&gt;github.com/soyvural/consistent-hashing&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;~800 lines of Go. Zero external dependencies. Pluggable hash functions (FNV-1a, MD5, CRC32). Includes the benchmarks and chaos tests described above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/soyvural/consistent-hashing.git
&lt;span class="nb"&gt;cd &lt;/span&gt;consistent-hashing
go run cmd/demo/main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Further Reading
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Karger et al. (1997)&lt;/strong&gt; — &lt;a href="https://www.cs.princeton.edu/courses/archive/fall09/cos518/papers/chash.pdf" rel="noopener noreferrer"&gt;&lt;em&gt;Consistent Hashing and Random Trees&lt;/em&gt;&lt;/a&gt;. The original paper that defined the formal properties.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeCandia et al. (2007)&lt;/strong&gt; — &lt;a href="https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf" rel="noopener noreferrer"&gt;&lt;em&gt;Dynamo: Amazon's Highly Available Key-Value Store&lt;/em&gt;&lt;/a&gt;. The paper that brought consistent hashing to production-scale systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lamping &amp;amp; Veach (2014)&lt;/strong&gt; — &lt;a href="https://arxiv.org/abs/1406.2294" rel="noopener noreferrer"&gt;&lt;em&gt;A Fast, Minimal Memory, Consistent Hash Algorithm&lt;/em&gt;&lt;/a&gt;. Jump consistent hash — the O(1) space alternative.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mirrokni et al. (2018)&lt;/strong&gt; — &lt;a href="https://arxiv.org/abs/1608.01350" rel="noopener noreferrer"&gt;&lt;em&gt;Consistent Hashing with Bounded Loads&lt;/em&gt;&lt;/a&gt;. Google's solution to the hot-key problem.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>backend</category>
      <category>go</category>
      <category>systemdesign</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>I Built a Read-Only kubectl So AI Agents Can't Break My Cluster</title>
      <dc:creator>Mustafa Veysi Soyvural</dc:creator>
      <pubDate>Sun, 29 Mar 2026 14:22:37 +0000</pubDate>
      <link>https://dev.to/veysi/kubectl-ro-read-only-kubernetes-access-for-ai-agents-and-humans-1okg</link>
      <guid>https://dev.to/veysi/kubectl-ro-read-only-kubernetes-access-for-ai-agents-and-humans-1okg</guid>
      <description>&lt;p&gt;Last month I gave Claude access to one of our staging clusters. Within minutes it tried to &lt;code&gt;kubectl exec&lt;/code&gt; into a pod and ran &lt;code&gt;kubectl get secret -o yaml&lt;/code&gt;. Nothing bad happened — but it made me think: &lt;strong&gt;what if it had been production?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So I built &lt;a href="https://github.com/soyvural/kubectl-ro" rel="noopener noreferrer"&gt;kubectl-ro&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;It's a thin wrapper around kubectl that only allows read-only commands. You use it exactly like kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl-ro get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system        &lt;span class="c"&gt;# works&lt;/span&gt;
kubectl-ro logs deployment/my-app          &lt;span class="c"&gt;# works&lt;/span&gt;
kubectl-ro delete pod nginx                &lt;span class="c"&gt;# nope&lt;/span&gt;
&lt;span class="c"&gt;# ✘ BLOCKED: 'delete' is a mutating command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. If the command would change anything in your cluster, it gets blocked before kubectl ever sees it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not just use RBAC?
&lt;/h2&gt;

&lt;p&gt;You absolutely should use RBAC. But RBAC is server-side — it requires cluster admin setup, service accounts, role bindings. &lt;code&gt;kubectl-ro&lt;/code&gt; is client-side. You install it, point your AI agent at it, and you're done. No cluster changes needed.&lt;/p&gt;

&lt;p&gt;Think of it as a seatbelt, not a replacement for airbags.&lt;/p&gt;

&lt;h2&gt;
  
  
  It also protects secrets
&lt;/h2&gt;

&lt;p&gt;This was the part that surprised me most. Even "read-only" kubectl can leak sensitive data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret db-creds &lt;span class="nt"&gt;-o&lt;/span&gt; yaml    &lt;span class="c"&gt;# prints base64-encoded passwords&lt;/span&gt;
kubectl describe secret db-creds       &lt;span class="c"&gt;# reveals key names and sizes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;kubectl-ro&lt;/code&gt; blocks these. You can list secrets (names and types), but you can't extract values. In MCP mode, secret values are replaced with &lt;code&gt;[REDACTED]&lt;/code&gt; automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  It works as an MCP server too
&lt;/h2&gt;

&lt;p&gt;This is the part I'm most excited about. Run &lt;code&gt;kubectl-ro serve&lt;/code&gt; and it becomes an &lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;MCP server&lt;/a&gt; with 20 read-only tools that any AI agent can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"kubectl-ro"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"kubectl-ro"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"serve"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your AI can &lt;code&gt;list_pods&lt;/code&gt;, &lt;code&gt;get_pod_logs&lt;/code&gt;, &lt;code&gt;list_deployments&lt;/code&gt;, &lt;code&gt;get_events&lt;/code&gt; — all the things you'd want it to see, nothing it shouldn't touch.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the policy works
&lt;/h2&gt;

&lt;p&gt;There's no config file. The policy is baked into the binary on purpose — you can't accidentally misconfigure it.&lt;/p&gt;

&lt;p&gt;The logic is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mutating commands&lt;/strong&gt; (delete, apply, create, exec, scale, drain...) → always blocked&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read commands&lt;/strong&gt; (get, describe, logs, top, events...) → allowed, with secrets checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unknown commands&lt;/strong&gt; → blocked by default (fail-safe)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also rejects arguments containing control characters, which closes off a class of injection attacks where a confused or manipulated LLM emits hostile bytes inside otherwise-valid commands.&lt;/p&gt;

&lt;h2&gt;
  
  
  Every action is logged
&lt;/h2&gt;

&lt;p&gt;Everything goes to &lt;code&gt;~/.kubectl-ro/audit.log&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"2026-03-29T13:04:36Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"get pods"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"allowed"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"2026-03-29T13:04:36Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"delete pod x"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"blocked"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"reason"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"'delete' is a mutating command"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if something weird happens, you can see exactly what was attempted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;install &lt;/span&gt;github.com/soyvural/kubectl-ro@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or grab a binary from the &lt;a href="https://github.com/soyvural/kubectl-ro/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can test the policy without running anything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl-ro &lt;span class="nt"&gt;--check&lt;/span&gt; get pods           &lt;span class="c"&gt;# prints: OK&lt;/span&gt;
kubectl-ro &lt;span class="nt"&gt;--check&lt;/span&gt; delete pod nginx   &lt;span class="c"&gt;# prints: BLOCKED&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Put it on your PATH and it works as a kubectl plugin too: &lt;code&gt;kubectl ro get pods&lt;/code&gt;.&lt;/p&gt;




&lt;p&gt;The repo is at &lt;a href="https://github.com/soyvural/kubectl-ro" rel="noopener noreferrer"&gt;github.com/soyvural/kubectl-ro&lt;/a&gt;. It's MIT licensed, written in Go, and has zero external runtime dependencies.&lt;/p&gt;

&lt;p&gt;If you're giving AI agents access to your clusters, I'd love to hear how you're handling the safety side. What's your approach?&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ai</category>
      <category>go</category>
      <category>security</category>
    </item>
    <item>
      <title>I Benchmarked fasthttp vs net/http — The Results Surprised Me</title>
      <dc:creator>Mustafa Veysi Soyvural</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:25:26 +0000</pubDate>
      <link>https://dev.to/veysi/i-benchmarked-fasthttp-vs-nethttp-the-results-surprised-me-1ho6</link>
      <guid>https://dev.to/veysi/i-benchmarked-fasthttp-vs-nethttp-the-results-surprised-me-1ho6</guid>
      <description>&lt;p&gt;I kept hearing that &lt;code&gt;fasthttp&lt;/code&gt; was faster than Go's standard &lt;code&gt;net/http&lt;/code&gt;. So I decided to stop guessing and just measure it.&lt;/p&gt;

&lt;p&gt;The result? &lt;strong&gt;5.6x faster. Zero heap allocations.&lt;/strong&gt; And the benchmark is only two files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keeping It Fair
&lt;/h2&gt;

&lt;p&gt;The trick to an honest benchmark: remove everything except what you're testing.&lt;/p&gt;

&lt;p&gt;Both servers use the same in-memory listener — no TCP, no network noise. Same endpoints, same response bodies. The only difference is the HTTP stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// net/http&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;SimpleHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello, World!"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// fasthttp&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;SimpleFastHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;fasthttp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RequestCtx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello, World!"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same job. Very different performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Apple M2 Pro, Go 1.23.4&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Benchmark&lt;/th&gt;
&lt;th&gt;net/http&lt;/th&gt;
&lt;th&gt;fasthttp&lt;/th&gt;
&lt;th&gt;Speedup&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Simple&lt;/td&gt;
&lt;td&gt;14,449 ns/op, 63 allocs&lt;/td&gt;
&lt;td&gt;2,563 ns/op, &lt;strong&gt;0 allocs&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5.6x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JSON&lt;/td&gt;
&lt;td&gt;15,561 ns/op, 72 allocs&lt;/td&gt;
&lt;td&gt;3,136 ns/op, 6 allocs&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5.0x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Parallel&lt;/td&gt;
&lt;td&gt;3,541 ns/op, 63 allocs&lt;/td&gt;
&lt;td&gt;1,437 ns/op, &lt;strong&gt;0 allocs&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2.5x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Zero allocations on simple requests. That's not a typo.&lt;/p&gt;

&lt;p&gt;The parallel gap is smaller (2.5x) because &lt;code&gt;net/http&lt;/code&gt; already handles concurrency well. But even there — 63 allocations vs. 0.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes fasthttp So Fast?
&lt;/h2&gt;

&lt;p&gt;It comes down to one idea: &lt;strong&gt;don't allocate, reuse.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;net/http&lt;/code&gt; creates fresh &lt;code&gt;Request&lt;/code&gt; and &lt;code&gt;ResponseWriter&lt;/code&gt; objects for every single request. fasthttp pools them with &lt;code&gt;sync.Pool&lt;/code&gt; and resets them between requests. That's why you see 63 allocs drop to 0.&lt;/p&gt;

&lt;p&gt;It also skips things most apps don't need: header map cloning, default chunked encoding, HTTP/2 support. Less work per request means less time per request.&lt;/p&gt;

&lt;h2&gt;
  
  
  So Should You Switch?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use fasthttp when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're building a proxy, gateway, or high-throughput service&lt;/li&gt;
&lt;li&gt;Allocation pressure and GC pauses are a real problem&lt;/li&gt;
&lt;li&gt;You need every microsecond on hot paths&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stick with net/http when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need HTTP/2&lt;/li&gt;
&lt;li&gt;You rely on the Go middleware ecosystem&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;context.Context&lt;/code&gt; propagation matters to you&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For most apps, &lt;code&gt;net/http&lt;/code&gt; is the right choice. But when performance is the priority, fasthttp earns its name.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run It Yourself
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/soyvural/fasthttp-vs-nethttp.git
&lt;span class="nb"&gt;cd &lt;/span&gt;fasthttp-vs-nethttp
go &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-bench&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-benchmem&lt;/span&gt; &lt;span class="nt"&gt;-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two files. No dependencies beyond fasthttp. Clone it, run it, see your own numbers.&lt;/p&gt;




&lt;p&gt;Full source: &lt;a href="https://github.com/soyvural/fasthttp-vs-nethttp" rel="noopener noreferrer"&gt;soyvural/fasthttp-vs-nethttp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Got different results on your machine? I'd love to see them in the comments.&lt;/p&gt;

</description>
      <category>go</category>
      <category>performance</category>
      <category>fasthttp</category>
      <category>webdev</category>
    </item>
    <item>
      <title>From OpenAPI Spec to MCP Tools — Automatically</title>
      <dc:creator>Mustafa Veysi Soyvural</dc:creator>
      <pubDate>Fri, 27 Mar 2026 01:35:37 +0000</pubDate>
      <link>https://dev.to/veysi/i-built-a-tool-that-turns-any-api-into-something-your-mcp-client-can-use-45np</link>
      <guid>https://dev.to/veysi/i-built-a-tool-that-turns-any-api-into-something-your-mcp-client-can-use-45np</guid>
      <description>&lt;p&gt;Hey folks 👋&lt;/p&gt;

&lt;p&gt;I've been deep in the AI tooling world lately, and I kept running into the same annoying problem: &lt;strong&gt;connecting LLMs to existing APIs is way harder than it should be.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every time I wanted an LLM to call a REST API, I had to write a custom MCP server from scratch. Define the tools. Map the parameters. Handle auth. Wire up the HTTP calls. Over and over again.&lt;/p&gt;

&lt;p&gt;So I built something to fix that.&lt;/p&gt;




&lt;h2&gt;
  
  
  Meet mcp-server-openapi
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/soyvural/mcp-server-openapi" rel="noopener noreferrer"&gt;mcp-server-openapi&lt;/a&gt; is an open-source Go CLI that &lt;strong&gt;automatically converts your OpenAPI spec into MCP tools&lt;/strong&gt;. If your API has an OpenAPI doc (and let's be honest, most do), you can expose it to any MCP client — Claude Desktop, Cursor, VS Code, you name it — in about 30 seconds.&lt;/p&gt;

&lt;p&gt;No boilerplate. No hand-wiring. Just point it at your spec and go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mcp-server-openapi &lt;span class="nt"&gt;--spec&lt;/span&gt; ./your-api.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Your API endpoints are now tools that any MCP client can call.&lt;/p&gt;




&lt;h2&gt;
  
  
  Wait, What's MCP?
&lt;/h2&gt;

&lt;p&gt;If you're new to this — &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt; is an open standard that lets AI assistants interact with external tools and data sources. Think of it as a &lt;strong&gt;USB-C port for AI&lt;/strong&gt;: one standard connection that works everywhere.&lt;/p&gt;

&lt;p&gt;MCP servers expose "tools" (basically functions) that an MCP client can discover and call. The problem is, building these servers by hand gets tedious fast — especially when your API already describes itself perfectly through OpenAPI.&lt;/p&gt;

&lt;p&gt;That's the gap &lt;code&gt;mcp-server-openapi&lt;/code&gt; fills.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Processing Pipeline
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  OpenAPI Spec        mcp-server-openapi           MCP Client
  ------------    --------------------------    ---------------

  +---------+     +-----+  +------+  +-----+   +-----------+
  |  YAML/  |----&amp;gt;|Parse|-&amp;gt;|Filter|-&amp;gt;|Schema|--&amp;gt;| Discovers |
  |  JSON   |     |     |  |      |  | Gen  |   |   Tools   |
  +---------+     +-----+  +------+  +-----+   +-----+-----+
                                                      |
                  +--------------------------+         |
                  |  When tool is called:    |&amp;lt;--------+
                  |                          |
                  |  1. Map args -&amp;gt; HTTP req |
                  |  2. Inject auth headers  |
                  |  3. Execute request      |---&amp;gt; Your API
                  |  4. Return response      |&amp;lt;--- Response
                  +--------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  How It Works Step by Step
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Parses your spec&lt;/strong&gt; using &lt;a href="https://github.com/getkin/kin-openapi" rel="noopener noreferrer"&gt;kin-openapi&lt;/a&gt; — the gold standard for OpenAPI parsing in Go. Full &lt;code&gt;$ref&lt;/code&gt; resolution, &lt;code&gt;oneOf/anyOf/allOf&lt;/code&gt; — the works.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filters operations&lt;/strong&gt; — only operations tagged with &lt;code&gt;mcp&lt;/code&gt; are exposed (you choose exactly what the AI can see)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generates JSON Schema&lt;/strong&gt; for each tool's input — path params, query params, headers, and body are all mapped automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serves tools&lt;/strong&gt; via stdio or Streamable HTTP using the &lt;a href="https://github.com/mark3labs/mcp-go" rel="noopener noreferrer"&gt;mcp-go&lt;/a&gt; SDK&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Executes HTTP requests&lt;/strong&gt; when an MCP client calls a tool, mapping responses back cleanly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key insight: &lt;strong&gt;your OpenAPI spec already has everything needed to generate MCP tools&lt;/strong&gt; — parameter types, descriptions, required fields, request bodies. Why write it twice?&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Start (5 Minutes)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Install
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;install &lt;/span&gt;github.com/soyvural/mcp-server-openapi/cmd/mcp-server-openapi@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Tag Your Endpoints
&lt;/h3&gt;

&lt;p&gt;Add the &lt;code&gt;mcp&lt;/code&gt; tag to any operation you want to expose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;/v1/forecast&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mcp&lt;/span&gt;              &lt;span class="c1"&gt;# &amp;lt;-- This is the magic switch&lt;/span&gt;
      &lt;span class="na"&gt;operationId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getForecast&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get weather forecast&lt;/span&gt;
      &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latitude&lt;/span&gt;
          &lt;span class="na"&gt;in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;query&lt;/span&gt;
          &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;number&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longitude&lt;/span&gt;
          &lt;span class="na"&gt;in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;query&lt;/span&gt;
          &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;number&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Configure Your MCP Client
&lt;/h3&gt;

&lt;p&gt;Add this to your MCP client config (works with any MCP-compatible app — Claude Desktop, Cursor, etc.):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"my-api"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp-server-openapi"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"--spec"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/your/openapi.yaml"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart your MCP client, and your API tools show up automatically. Done. 🎉&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It With a Real API — Zero Config
&lt;/h2&gt;

&lt;p&gt;The repo includes a ready-to-use &lt;a href="https://open-meteo.com/" rel="noopener noreferrer"&gt;Open-Meteo&lt;/a&gt; weather example. No API key needed — it just works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;install &lt;/span&gt;github.com/soyvural/mcp-server-openapi/cmd/mcp-server-openapi@latest
mcp-server-openapi &lt;span class="nt"&gt;--spec&lt;/span&gt; examples/weather/weather.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ask your MCP client things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What's the weather in Berlin right now?"&lt;/li&gt;
&lt;li&gt;"Give me a 5-day forecast for New York in Fahrenheit"&lt;/li&gt;
&lt;li&gt;"What's the elevation of Denver, Colorado?"&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The x-mcp Extensions (My Favorite Part)
&lt;/h2&gt;

&lt;p&gt;You can fine-tune how your API appears to the MCP client using OpenAPI vendor extensions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;/users/{id}&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;mcp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;operationId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getUserById&lt;/span&gt;
    &lt;span class="na"&gt;x-mcp-tool-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get_user&lt;/span&gt;            &lt;span class="c1"&gt;# Friendlier name&lt;/span&gt;
    &lt;span class="na"&gt;x-mcp-description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;                 &lt;span class="c1"&gt;# AI-optimized description&lt;/span&gt;
      &lt;span class="s"&gt;Fetch detailed user info including&lt;/span&gt;
      &lt;span class="s"&gt;profile, settings, and activity.&lt;/span&gt;

&lt;span class="na"&gt;/internal/health&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;mcp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;x-mcp-hidden&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;                   &lt;span class="c1"&gt;# Hide from AI&lt;/span&gt;

&lt;span class="na"&gt;/debug/stats&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;internal&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;                     &lt;span class="c1"&gt;# No mcp tag, but...&lt;/span&gt;
    &lt;span class="na"&gt;x-mcp-hidden&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;                  &lt;span class="c1"&gt;# Force visible anyway&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what each extension does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-------------------+----------+--------------------------------+
| Extension         | Type     | What It Does                   |
+-------------------+----------+--------------------------------+
| x-mcp-tool-name   | string   | Override the generated name    |
| x-mcp-description | string   | Write AI-friendly description  |
| x-mcp-hidden      | boolean  | Show/hide regardless of tags   |
+-------------------+----------+--------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Think of &lt;code&gt;x-mcp-description&lt;/code&gt; as &lt;strong&gt;writing instructions specifically for the AI&lt;/strong&gt;. Your OpenAPI &lt;code&gt;summary&lt;/code&gt; might say "Get user by ID" (fine for docs), but the MCP description can add context that helps the LLM make smarter decisions about when and how to use the tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  Authentication Built In
&lt;/h2&gt;

&lt;p&gt;Most real APIs need auth. We've got you covered:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bearer token:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ghp_..."&lt;/span&gt;
mcp-server-openapi &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--spec&lt;/span&gt; ./github-api.yaml &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-type&lt;/span&gt; bearer &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-token-env&lt;/span&gt; GITHUB_TOKEN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;API key (header or query):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;MY_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sk_..."&lt;/span&gt;
mcp-server-openapi &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--spec&lt;/span&gt; ./api.yaml &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-type&lt;/span&gt; api-key &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-key-env&lt;/span&gt; MY_KEY &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-key-name&lt;/span&gt; X-API-Key &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;--auth-key-in&lt;/span&gt; header
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Credentials stay in environment variables — never in config files or CLI args. 🔒&lt;/p&gt;




&lt;h2&gt;
  
  
  Error Handling That Makes Sense
&lt;/h2&gt;

&lt;p&gt;When things go wrong (and they will), the error mapping is predictable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+----------------+---------------------------------------------+
| What Happened  | What the MCP Client Sees                    |
+----------------+---------------------------------------------+
| 2xx response   | The actual response body (success!)         |
| 400-499 error  | Error with status code + response body      |
| 500-599 error  | "Upstream server error" + status code       |
| Timeout        | "Request timed out"                         |
| Can't connect  | "Failed to connect to upstream"             |
+----------------+---------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All errors include enough context for the AI to understand what went wrong and communicate it to the user. No mysterious failures.&lt;/p&gt;




&lt;h2&gt;
  
  
  Docker Support
&lt;/h2&gt;

&lt;p&gt;If you prefer containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="se"&gt;\\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/specs:/specs &lt;span class="se"&gt;\\&lt;/span&gt;
  mcp-server-openapi &lt;span class="nt"&gt;--spec&lt;/span&gt; /specs/my-api.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works great for teams who want a standardized MCP setup across environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Built This
&lt;/h2&gt;

&lt;p&gt;Honestly? Laziness. The good kind.&lt;/p&gt;

&lt;p&gt;I was working on a project where I needed my AI tools to interact with about a dozen internal APIs. Writing a custom MCP server for each one felt insane. The specs were already there. The parameter types were already documented. The auth was already defined.&lt;/p&gt;

&lt;p&gt;I just needed something to &lt;strong&gt;connect the dots&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now, adding a new API to my MCP setup takes me less than a minute: tag the endpoints, point the server at the spec, done.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Coming Next
&lt;/h2&gt;

&lt;p&gt;The project is MIT-licensed and open for contributions. Here's what's on the roadmap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔐 OAuth2 client credentials flow&lt;/li&gt;
&lt;li&gt;📄 Config file support (YAML)&lt;/li&gt;
&lt;li&gt;🔄 SSE transport&lt;/li&gt;
&lt;li&gt;⚠️ &lt;code&gt;x-mcp-confirm&lt;/code&gt; extension for destructive operations&lt;/li&gt;
&lt;li&gt;🔥 Spec hot-reload&lt;/li&gt;
&lt;li&gt;⏱️ Rate limiting and retry policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any of that sounds interesting, PRs are welcome!&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;install &lt;/span&gt;github.com/soyvural/mcp-server-openapi/cmd/mcp-server-openapi@latest
mcp-server-openapi &lt;span class="nt"&gt;--spec&lt;/span&gt; examples/weather/weather.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⭐ &lt;strong&gt;&lt;a href="https://github.com/soyvural/mcp-server-openapi" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'd love to hear what APIs you connect with this. Drop a comment or open an issue — always happy to chat.&lt;/p&gt;

&lt;p&gt;Happy building! 🚀&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>llm</category>
      <category>openapi</category>
      <category>go</category>
    </item>
    <item>
      <title>connpool: A Zero-Alloc TCP Connection Pool for Go</title>
      <dc:creator>Mustafa Veysi Soyvural</dc:creator>
      <pubDate>Thu, 26 Mar 2026 17:11:11 +0000</pubDate>
      <link>https://dev.to/veysi/connpool-a-zero-alloc-tcp-connection-pool-for-go-2hbj</link>
      <guid>https://dev.to/veysi/connpool-a-zero-alloc-tcp-connection-pool-for-go-2hbj</guid>
      <description>&lt;h2&gt;
  
  
  The Story
&lt;/h2&gt;

&lt;p&gt;Years ago, when I first needed TCP connection pooling in Go, I found &lt;a href="https://github.com/fatih/pool" rel="noopener noreferrer"&gt;fatih/pool&lt;/a&gt;. It was elegant, simple, and it taught me a lot about channel-based pooling design. It was a real inspiration.&lt;/p&gt;

&lt;p&gt;But as I worked on high-throughput systems at scale, I kept running into gaps: no health checks, no idle eviction, no lifetime management, no metrics. I'd bolt these on as wrappers, and eventually realized I was maintaining a full pool implementation anyway.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;connpool&lt;/strong&gt; — taking the channel-based foundation that &lt;code&gt;fatih/pool&lt;/code&gt; popularized and adding the features that production systems actually need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing connpool
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/soyvural/connpool" rel="noopener noreferrer"&gt;&lt;strong&gt;github.com/soyvural/connpool&lt;/strong&gt;&lt;/a&gt; — a production-grade TCP connection pool for Go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go get github.com/soyvural/connpool@v1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zero dependencies. ~370 lines of code. Zero-alloc fast path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;cfg&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;connpool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;MinSize&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;     &lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;MaxSize&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;     &lt;span class="m"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Increment&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;IdleTimeout&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;MaxLifetime&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Minute&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Ping&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetReadDeadline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Millisecond&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;netErr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;netErr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Timeout&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetReadDeadline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="c"&gt;// timeout = alive&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="c"&gt;// dead&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetReadDeadline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;connpool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DialTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"localhost:6379"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;connpool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"redis-pool"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c"&gt;// returns to pool&lt;/span&gt;

&lt;span class="c"&gt;// If the connection is broken:&lt;/span&gt;
&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MarkUnusable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c"&gt;// destroys instead of returning&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What Makes It Different
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Health Check on Get()
&lt;/h3&gt;

&lt;p&gt;Every &lt;code&gt;Get()&lt;/code&gt; call validates the connection before returning it. The check order is optimized — cheapest first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Lifetime check (time comparison) → Idle check (time comparison) → Ping (network I/O)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time comparisons cost nanoseconds. The expensive network ping only runs if the connection passes the time-based checks. If a connection fails the health check, it's discarded and the pool retries (up to 3 times) before growing.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Max Lifetime with Jitter
&lt;/h3&gt;

&lt;p&gt;Connections have a maximum lifetime, but here's the trick: each connection gets a &lt;strong&gt;10% random jitter&lt;/strong&gt; on its expiration.&lt;/p&gt;

&lt;p&gt;Why? Without jitter, if you create 20 connections at startup, they all expire at the exact same moment — causing a &lt;strong&gt;thundering herd&lt;/strong&gt; of reconnections. Jitter spreads the expiration across time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;maxLifetimeWithJitter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Duration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxLifetime&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;jitter&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rand&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Int64N&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int64&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxLifetime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxLifetime&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;jitter&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the same pattern used by pgx and Vitess.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Background Evictor
&lt;/h3&gt;

&lt;p&gt;Without a background evictor, stale connections only get cleaned up when someone calls &lt;code&gt;Get()&lt;/code&gt;. If the pool is idle (no traffic), dead connections accumulate silently.&lt;/p&gt;

&lt;p&gt;The evictor runs every 30 seconds (configurable), drains the channel, health-checks each connection, discards the stale ones, and replenishes to &lt;code&gt;MinSize&lt;/code&gt;.&lt;/p&gt;
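&lt;p&gt;The drain-and-refill cycle can be sketched like this (a simplified model: plain ints stand in for connections, and the replenish-to-MinSize step is elided):&lt;/p&gt;

```go
package main

import "fmt"

// evict drains the idle channel without blocking, health-checks each
// connection, and returns the healthy ones to the pool. In the real
// pool this runs on a time.Ticker; this sketch elides the ticker and
// the replenish step. Ints stand in for connections.
func evict(idle chan int, healthy func(int) bool) (kept, dropped int) {
	var survivors []int
	for {
		select {
		case c := <-idle:
			if healthy(c) {
				survivors = append(survivors, c) // hold aside so we don't re-check it
			} else {
				dropped++ // stale: discard instead of returning
			}
		default: // channel drained
			for _, c := range survivors {
				idle <- c // refill with the healthy survivors
			}
			return len(survivors), dropped
		}
	}
}

func main() {
	idle := make(chan int, 8)
	for i := 1; i <= 5; i++ {
		idle <- i
	}
	// Pretend the even-numbered connections have gone stale.
	kept, dropped := evict(idle, func(c int) bool { return c%2 == 1 })
	fmt.Println(kept, dropped) // 3 2
}
```

&lt;p&gt;Draining into a slice before refilling matters: putting survivors straight back while still receiving would re-check connections forever.&lt;/p&gt;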

&lt;h3&gt;
  
  
  4. Zero-Alloc Fast Path
&lt;/h3&gt;

&lt;p&gt;The pool uses a &lt;strong&gt;channel&lt;/strong&gt; internally (inspired by &lt;code&gt;fatih/pool&lt;/code&gt;'s original design), not a mutex+slice. This gives us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Natural blocking semantics (when pool is at max, &lt;code&gt;Get()&lt;/code&gt; blocks on channel receive)&lt;/li&gt;
&lt;li&gt;Non-blocking fast path via &lt;code&gt;select&lt;/code&gt;/&lt;code&gt;default&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero heap allocations&lt;/strong&gt; on the hot path
&lt;/li&gt;
&lt;/ul&gt;
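&lt;p&gt;In sketch form, the select/default shape looks like this (names are illustrative, not connpool's actual internals; ints stand in for connections):&lt;/p&gt;

```go
package main

import "fmt"

// get tries the non-blocking fast path first: receiving from a buffered
// channel of idle connections allocates nothing on the heap. Only an
// empty pool falls through to dialing.
func get(idle chan int, dial func() int) int {
	select {
	case c := <-idle:
		return c // fast path: reuse an idle connection
	default:
		return dial() // pool empty: create a new connection
	}
}

// put mirrors get: a full pool rejects the return, signalling the
// caller to close the surplus connection instead.
func put(idle chan int, c int) bool {
	select {
	case idle <- c:
		return true
	default:
		return false // pool at capacity
	}
}

func main() {
	idle := make(chan int, 2)
	dial := func() int { return 99 }
	fmt.Println(get(idle, dial)) // empty pool: falls through to dial (99)
	put(idle, 7)
	fmt.Println(get(idle, dial)) // fast path: reuses the idle conn (7)
}
```

&lt;p&gt;Dropping the default case turns the same receive into the blocking behaviour described above: when the pool is at max, the caller simply waits on the channel.&lt;/p&gt;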

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;BenchmarkGetPut_Sequential    8,556,068    139.9 ns/op    0 B/op    0 allocs/op
BenchmarkGetPut_Parallel      5,075,728    245.1 ns/op    0 B/op    0 allocs/op
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Context-Aware Get()
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Get()&lt;/code&gt; respects context deadlines and cancellation. If the pool is exhausted and a connection doesn't become available before the deadline, you get &lt;code&gt;context.DeadlineExceeded&lt;/code&gt; — not a hang.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Comprehensive Metrics
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;stats&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stats&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"size=%d active=%d available=%d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Size&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Active&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Available&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"idle_closed=%d lifetime_closed=%d ping_failed=%d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IdleClosed&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LifetimeClosed&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PingFailed&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"wait_count=%d wait_time=%v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitCount&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitTime&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every metric you need for alerting and debugging — without pulling in a metrics library.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarks
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;goos: darwin
goarch: arm64
cpu: Apple M2 Pro
BenchmarkGetPut_Sequential    8,556,068    139.9 ns/op    0 B/op    0 allocs/op
BenchmarkGetPut_Parallel      5,075,728    245.1 ns/op    0 B/op    0 allocs/op
BenchmarkGetPut_WithPing        216,386   5571  ns/op  81 B/op    2 allocs/op
BenchmarkGetPut_Contended     2,405,068    478.7 ns/op    0 B/op    0 allocs/op
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fast path (sequential, no ping) runs at &lt;strong&gt;139ns with zero allocations&lt;/strong&gt;. Even under heavy contention (8 workers fighting over 2 connections), it stays under 500ns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design Decisions
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Decision&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Channel over mutex+slice&lt;/td&gt;
&lt;td&gt;Natural blocking, zero-alloc &lt;code&gt;select&lt;/code&gt;/&lt;code&gt;default&lt;/code&gt; fast path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lifetime jitter&lt;/td&gt;
&lt;td&gt;Prevents thundering herd reconnection storms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Health check ordering&lt;/td&gt;
&lt;td&gt;Cheapest checks first (time → time → network)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Background evictor&lt;/td&gt;
&lt;td&gt;Don't wait for &lt;code&gt;Get()&lt;/code&gt; to clean dead connections&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;conn.Close()&lt;/code&gt; returns to pool&lt;/td&gt;
&lt;td&gt;Familiar &lt;code&gt;net.Conn&lt;/code&gt; interface — no new API to learn&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Inspired By the Best
&lt;/h2&gt;

&lt;p&gt;This pool draws ideas from several production-grade systems:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Where I Learned It&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Channel-based pool&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/fatih/pool" rel="noopener noreferrer"&gt;fatih/pool&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Idle timeout + max lifetime&lt;/td&gt;
&lt;td&gt;pgx, Vitess&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Health check on borrow&lt;/td&gt;
&lt;td&gt;go-redis, Apache Commons Pool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Background evictor&lt;/td&gt;
&lt;td&gt;pgx, Apache Commons Pool, Vitess&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context-aware Get&lt;/td&gt;
&lt;td&gt;Vitess, pgx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MarkUnusable&lt;/td&gt;
&lt;td&gt;fatih/pool&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go get github.com/soyvural/connpool@v1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The repo includes three working examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;tcp-echo&lt;/strong&gt; — basic pool usage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;redis-proxy&lt;/strong&gt; — pooling with health checks against Redis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;load-balancer&lt;/strong&gt; — round-robin across multiple backend pools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/soyvural/connpool" rel="noopener noreferrer"&gt;&lt;strong&gt;GitHub: soyvural/connpool&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;If you're working with TCP connections in Go and need something more robust than a basic pool, give connpool a try. Star the repo if you find it useful — it helps others discover it.&lt;/p&gt;

&lt;p&gt;I'd love feedback on the API design, especially if you have use cases I haven't considered. Drop a comment or open an issue!&lt;/p&gt;

</description>
      <category>go</category>
      <category>opensource</category>
      <category>networking</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
