<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 날다람쥐</title>
    <description>The latest articles on DEV Community by 날다람쥐 (@flyingsquirrel0419).</description>
    <link>https://dev.to/flyingsquirrel0419</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864074%2F66d516bd-8d6e-4ccb-bf9b-388546f0a65a.jpeg</url>
      <title>DEV Community: 날다람쥐</title>
      <link>https://dev.to/flyingsquirrel0419</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/flyingsquirrel0419"/>
    <language>en</language>
    <item>
      <title>I tried every TypeScript Result library. So I built a better one.</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Tue, 28 Apr 2026 16:22:34 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/i-tried-every-typescript-result-library-so-i-built-a-better-one-3a5h</link>
      <guid>https://dev.to/flyingsquirrel0419/i-tried-every-typescript-result-library-so-i-built-a-better-one-3a5h</guid>
      <description>&lt;p&gt;I've been writing TypeScript for years, and &lt;code&gt;try/catch&lt;/code&gt; has always bothered me.&lt;/p&gt;

&lt;p&gt;Not because error handling is hard — but because &lt;strong&gt;errors are invisible in the type system.&lt;/strong&gt; A function that says &lt;code&gt;Promise&amp;lt;User&amp;gt;&lt;/code&gt; might throw. Or might not. You genuinely can't tell without reading the implementation.&lt;/p&gt;

&lt;p&gt;So I went looking for a library that solves this properly.&lt;/p&gt;

&lt;p&gt;I tried them all. None of them were quite right.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;&lt;a href="https://github.com/flyingsquirrel0419/verdict-ts" rel="noopener noreferrer"&gt;verdict-ts&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem, quickly
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This function signature is lying to you&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="c1"&gt;// It can actually do this at runtime&lt;/span&gt;
&lt;span class="c1"&gt;// Uncaught Error: Network timeout&lt;/span&gt;
&lt;span class="c1"&gt;// Uncaught SyntaxError: Unexpected token in JSON&lt;/span&gt;
&lt;span class="c1"&gt;// Uncaught TypeError: Cannot read properties of null&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The solution Rust came up with: make failure part of the return type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This function tells the truth&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ApiError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the compiler won't let you access the user without handling the error case first. Failures are visible. The type doesn't lie.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why not the existing libraries?
&lt;/h2&gt;

&lt;p&gt;Great question — and honestly the main reason I'm writing this post. Let's go through them.&lt;/p&gt;




&lt;h3&gt;
  
  
  neverthrow
&lt;/h3&gt;

&lt;p&gt;The most popular option, and for good reason — it works well and is actively maintained. But it has one fundamental design choice I kept bumping into:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's class-based.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;neverthrow&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;ResultOk&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Classes mean prototype chains, and prototype chains cause real problems:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Breaks across Worker boundaries&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;worker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./worker.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// structured clone strips the prototype&lt;/span&gt;
&lt;span class="c1"&gt;// Other side receives a plain object — methods are gone&lt;/span&gt;

&lt;span class="c1"&gt;// ❌ Breaks across iframes&lt;/span&gt;
&lt;span class="c1"&gt;// ❌ JSON.stringify loses the methods&lt;/span&gt;
&lt;span class="c1"&gt;// ❌ structuredClone loses the methods&lt;/span&gt;

&lt;span class="c1"&gt;// This silently fails at runtime even though TypeScript says it's fine&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're building for Cloudflare Workers, Next.js Edge Runtime, or anything that crosses a serialization boundary — classes are a footgun.&lt;/p&gt;

&lt;p&gt;Also, at &lt;strong&gt;112KB unpacked&lt;/strong&gt;, it's larger than I'd want for a utility that goes into other packages as a dependency.&lt;/p&gt;




&lt;h3&gt;
  
  
  true-myth
&lt;/h3&gt;

&lt;p&gt;Solid functional programming library. If you want &lt;code&gt;Maybe&amp;lt;T&amp;gt;&lt;/code&gt; alongside &lt;code&gt;Result&amp;lt;T, E&amp;gt;&lt;/code&gt;, it's excellent.&lt;/p&gt;

&lt;p&gt;But: &lt;strong&gt;793KB unpacked.&lt;/strong&gt; That's not a typo.&lt;/p&gt;

&lt;p&gt;It also requires you to buy into its full worldview — &lt;code&gt;Maybe&lt;/code&gt;, &lt;code&gt;Task&lt;/code&gt;, the whole functional ecosystem. If you just want &lt;code&gt;Result&lt;/code&gt;, you're bringing in a lot you won't use.&lt;/p&gt;




&lt;h3&gt;
  
  
  ts-results
&lt;/h3&gt;

&lt;p&gt;Spiritually the closest to what I wanted. But:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Last published: May 2022.&lt;/strong&gt; Nearly four years without an update.&lt;/li&gt;
&lt;li&gt;TypeScript has changed a lot since then — inference has gotten smarter, and ts-results doesn't take advantage of it.&lt;/li&gt;
&lt;li&gt;Issues have been piling up without responses.&lt;/li&gt;
&lt;li&gt;Weaker tuple inference in &lt;code&gt;combine()&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  result.ts
&lt;/h3&gt;

&lt;p&gt;Has a dependency (&lt;code&gt;maybe.ts&lt;/code&gt;). That immediately ruled it out — if I'm adding this as a dependency to my own packages, I don't want transitive deps creeping in.&lt;/p&gt;




&lt;h2&gt;
  
  
  What verdict-ts does differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Plain objects, not classes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// Literally: { ok: true, value: 42 }&lt;/span&gt;

&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;      &lt;span class="c1"&gt;// ✅ works&lt;/span&gt;
&lt;span class="nf"&gt;structuredClone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;     &lt;span class="c1"&gt;// ✅ works&lt;/span&gt;
&lt;span class="nf"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;         &lt;span class="c1"&gt;// ✅ works across Workers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No prototype, no &lt;code&gt;instanceof&lt;/code&gt;, no serialization surprises. It's just data.&lt;/p&gt;
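&lt;p&gt;The difference is easy to demonstrate with a JSON round-trip. The &lt;code&gt;ClassyOk&lt;/code&gt; class below is a stand-in for any class-based result, not any particular library's implementation:&lt;/p&gt;

```typescript
// A stand-in class-based result: methods live on the prototype.
class ClassyOk {
  constructor(public value: number) {}
  unwrap() { return this.value; }
}

const classy = new ClassyOk(42);
const revivedClassy = JSON.parse(JSON.stringify(classy));
console.log(typeof revivedClassy.unwrap); // "undefined": the method is gone

// A plain-object result: everything survives the round-trip.
const plain = { ok: true, value: 42 };
const revivedPlain = JSON.parse(JSON.stringify(plain));
console.log(revivedPlain.ok, revivedPlain.value); // true 42
```

&lt;p&gt;The class instance quietly loses its behavior after serialization; the plain object has no behavior to lose, so nothing breaks.&lt;/p&gt;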

&lt;p&gt;&lt;strong&gt;2. Zero dependencies&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When verdict-ts goes into your SDK as a dependency, nothing comes with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. 491 bytes gzipped&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For comparison: neverthrow is ~4KB gzipped, true-myth is ~12KB. verdict-ts is smaller than most SVG icons (491B — yes, really).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Proper tuple inference in &lt;code&gt;combine()&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every Result library has &lt;code&gt;combine()&lt;/code&gt;. Most of them return &lt;code&gt;Result&amp;lt;T[], E&amp;gt;&lt;/code&gt;, which loses the tuple type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Other libraries:&lt;/span&gt;
&lt;span class="nf"&gt;combine&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hello&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;span class="c1"&gt;// Result&amp;lt;(number | string)[], Error&amp;gt;  ← types are merged, index info lost&lt;/span&gt;

&lt;span class="c1"&gt;// verdict-ts:&lt;/span&gt;
&lt;span class="nf"&gt;combine&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hello&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;span class="c1"&gt;// Result&amp;lt;[number, string], Error&amp;gt;  ← tuple preserved, index 0 is number, index 1 is string&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This matters for validation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;combine&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="nf"&gt;validateEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;    &lt;span class="c1"&gt;// Result&amp;lt;string, ValidationError&amp;gt;&lt;/span&gt;
  &lt;span class="nf"&gt;validateAge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;age&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;        &lt;span class="c1"&gt;// Result&amp;lt;number, ValidationError&amp;gt;&lt;/span&gt;
  &lt;span class="nf"&gt;validateUsername&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;  &lt;span class="c1"&gt;// Result&amp;lt;string, ValidationError&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;]);&lt;/span&gt;

&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;age&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;//   ^^^^^ string  ^^^ number  ^^^^^^^^ string&lt;/span&gt;
    &lt;span class="c1"&gt;// TypeScript knows all three types at each index&lt;/span&gt;
    &lt;span class="nf"&gt;createUser&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;age&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;showError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. &lt;code&gt;AsyncResult&amp;lt;T, E&amp;gt;&lt;/code&gt; type alias&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A small thing that makes async code much cleaner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AsyncResult&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;verdict-ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Instead of this&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ApiError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;

&lt;span class="c1"&gt;// Write this&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;AsyncResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ApiError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The full API
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;          &lt;span class="c1"&gt;// constructors&lt;/span&gt;
  &lt;span class="nx"&gt;trySync&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;          &lt;span class="c1"&gt;// wrap synchronous throwables&lt;/span&gt;
  &lt;span class="nx"&gt;tryAsync&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;         &lt;span class="c1"&gt;// wrap async throwables&lt;/span&gt;
  &lt;span class="nx"&gt;combine&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;          &lt;span class="c1"&gt;// merge multiple Results&lt;/span&gt;
  &lt;span class="nx"&gt;isOk&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;isErr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;// type guards&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;verdict-ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;AsyncResult&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;verdict-ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Creating Results:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;                     &lt;span class="c1"&gt;// Ok&amp;lt;number&amp;gt;&lt;/span&gt;
&lt;span class="nf"&gt;err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;oops&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;     &lt;span class="c1"&gt;// Err&amp;lt;Error&amp;gt;&lt;/span&gt;
&lt;span class="nf"&gt;err&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;         &lt;span class="c1"&gt;// Err&amp;lt;{ code: number }&amp;gt;&lt;/span&gt;

&lt;span class="nf"&gt;trySync&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;str&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;                  &lt;span class="c1"&gt;// Result&amp;lt;unknown, Error&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;tryAsync&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;                &lt;span class="c1"&gt;// Result&amp;lt;Response, Error&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Transforming:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;result&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;               &lt;span class="c1"&gt;// transform Ok value&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapErr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AppError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;// transform Err&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;flatMap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;                      &lt;span class="c1"&gt;// Ok → another Result&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;negative&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Extracting:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unwrap&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;           &lt;span class="c1"&gt;// value or throws&lt;/span&gt;
&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unwrapOr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;        &lt;span class="c1"&gt;// value or default&lt;/span&gt;
&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`got &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`failed: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Real-world example: API client
&lt;/h2&gt;

&lt;p&gt;This is the pattern that made me want to build this. When you write an SDK, your functions should tell the truth about what can fail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;tryAsync&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;verdict-ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AsyncResult&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;verdict-ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ApiError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;network&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;not_found&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;unauthorized&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;AsyncResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ApiError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;tryAsync&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/api/users/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapErr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;network&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;flatMap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;err&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;unauthorized&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;err&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;not_found&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;tryAsync&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;mapErr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;network&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}));&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Caller gets full type safety on the error&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;renderProfile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;network&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;showNetworkError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;not_found&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;showNotFound&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;unauthorized&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirectToLogin&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;// TypeScript ensures all cases are handled&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No try/catch. No &lt;code&gt;unknown&lt;/code&gt; errors. Every failure mode is in the type.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick comparison table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;verdict-ts&lt;/th&gt;
&lt;th&gt;neverthrow&lt;/th&gt;
&lt;th&gt;true-myth&lt;/th&gt;
&lt;th&gt;ts-results&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Size (gzipped)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;491B&lt;/td&gt;
&lt;td&gt;~4KB&lt;/td&gt;
&lt;td&gt;~12KB&lt;/td&gt;
&lt;td&gt;~3KB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dependencies&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Class-based&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;JSON-serializable&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tuple inference&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AsyncResult type&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Active maintenance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Edge Runtime safe&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
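&lt;p&gt;The class-based vs. JSON-serializable rows are the ones that bite in practice. A quick sketch of why, using a hypothetical &lt;code&gt;ClassyOk&lt;/code&gt; class as a stand-in for any class-based Result (not any specific library's internals):&lt;/p&gt;

```typescript
// Plain-object results survive a JSON round-trip; class instances lose
// their methods. ClassyOk is a hypothetical stand-in, not a real library.
const plain = { ok: true as const, value: 42 };

class ClassyOk<T> {
  constructor(public value: T) {}
  isOk(): boolean { return true; }
}
const classy = new ClassyOk(42);

const plainAgain = JSON.parse(JSON.stringify(plain));
console.log(plainAgain.ok); // true - still a usable result

const classyAgain = JSON.parse(JSON.stringify(classy));
console.log(typeof classyAgain.isOk); // "undefined" - the method is gone
```

&lt;p&gt;That matters the moment a result crosses a serialization boundary: an API response, a worker message, a cache.&lt;/p&gt;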




&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;verdict-ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tryAsync&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;verdict-ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;tryAsync&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
  &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.github.com/users/torvalds&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unwrapOr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;unknown&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// "Linus Torvalds"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;npm: &lt;a href="https://npmjs.com/package/verdict-ts" rel="noopener noreferrer"&gt;npmjs.com/package/verdict-ts&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this was useful, a ⭐ on GitHub goes a long way — it helps other developers find the project when they're searching for exactly this kind of library.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/flyingsquirrel0419/verdict-ts" rel="noopener noreferrer"&gt;github.com/flyingsquirrel0419/verdict-ts&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;What's your current approach to error handling in TypeScript? Still on try/catch, or have you switched to Result types? Would love to hear in the comments 👇&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I got tired of AI agents trashing my codebase, so I built a skill to fix that</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Mon, 27 Apr 2026 17:25:00 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/i-got-tired-of-ai-agents-trashing-my-codebase-so-i-built-a-skill-to-fix-that-mgc</link>
      <guid>https://dev.to/flyingsquirrel0419/i-got-tired-of-ai-agents-trashing-my-codebase-so-i-built-a-skill-to-fix-that-mgc</guid>
      <description>&lt;p&gt;Every AI coding agent I've used hits the same wall. It always starts the same way.&lt;/p&gt;

&lt;p&gt;You ask it to add a feature. It rewrites half the file in its preferred naming convention. Your &lt;code&gt;snake_case&lt;/code&gt; Python suddenly has &lt;code&gt;camelCase&lt;/code&gt; crammed in. The test framework you carefully chose? Gone, replaced by a different one the model apparently likes better.&lt;/p&gt;

&lt;p&gt;So you fight it back into shape. Then you ask it to fix a bug. Same thing happens. It "tidies up" a few unrelated functions while it's in there. Opens three files you didn't ask it to touch. Leaves a commented-out &lt;code&gt;console.log&lt;/code&gt; in production code.&lt;/p&gt;

&lt;p&gt;Then the worst one: you hit three failures in a row on a gnarly bug, and the agent just keeps trying random things. No backoff. No ask-for-help. Just vibes and increasingly desperate code changes until the whole file is a mess.&lt;/p&gt;

&lt;p&gt;I've context-window-pasted my way through this more times than I want to admit. Not because the agents are bad — they're genuinely powerful — but because there's no agreed-upon &lt;em&gt;discipline&lt;/em&gt; baked in. No "read the room before you touch anything." No "when you're stuck three times in a row, stop and ask."&lt;/p&gt;

&lt;p&gt;So I spent some time vibe coding a skill file to fix that, and turned it into &lt;a href="https://github.com/flyingsquirrel0419/squirrel-skill" rel="noopener noreferrer"&gt;🐿️ Squirrel&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What it actually does
&lt;/h2&gt;

&lt;p&gt;Squirrel is a single Markdown file (&lt;code&gt;SKILL.md&lt;/code&gt;) that you drop into your AI agent's instruction path. It installs a full 8-phase engineering discipline into your agent, without you having to remind it every session.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[1] 🔍 Discover  → Audit the project before touching a single line
[2] 📋 Plan      → Task list with dependencies and done-criteria
[3] 💻 Build     → Write or modify code
[4] 🧪 Test      → Run existing tests, write new ones
[5] 🐛 Bug Hunt  → Static analysis + manual checklist
[6] ✨ Polish    → Lint, format, type check
[7] 📖 Document  → README + inline docs (update, don't overwrite)
[8] 🚀 Ship      → Final checklist before handoff
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key insight is &lt;strong&gt;Step 0&lt;/strong&gt; — before the agent does anything, it figures out what kind of project it's looking at:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What it sees&lt;/th&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Empty directory&lt;/td&gt;
&lt;td&gt;🆕 Greenfield — start from scratch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Source files, no tests&lt;/td&gt;
&lt;td&gt;🔧 In-Progress — audit first, then improve&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Source + tests + CI + README&lt;/td&gt;
&lt;td&gt;🏗️ Mature — targeted improvements only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"fix this bug / add this feature"&lt;/td&gt;
&lt;td&gt;🎯 Targeted — abbreviated audit, scoped work&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Then it &lt;em&gt;announces the mode&lt;/em&gt; to you. So you know it read your code before it started writing.&lt;/p&gt;
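&lt;p&gt;The mode table is prose the agent interprets, but the branching it encodes is roughly this. A hypothetical TypeScript sketch, purely to make the decision logic concrete (the real skill is Markdown instructions, not executable code):&lt;/p&gt;

```typescript
import { existsSync, readdirSync } from 'node:fs';
import { join } from 'node:path';

type Mode = 'greenfield' | 'in-progress' | 'mature' | 'targeted';

// Hypothetical sketch of Step 0's decision logic.
function detectMode(root: string, scopedRequest: boolean): Mode {
  if (scopedRequest) return 'targeted'; // "fix this bug / add this feature"
  const entries = readdirSync(root).filter(e => !e.startsWith('.'));
  if (entries.length === 0) return 'greenfield';
  const hasTests = entries.some(e => /test|spec/i.test(e));
  const hasCI = existsSync(join(root, '.github', 'workflows'));
  const hasReadme = entries.some(e => /^readme/i.test(e));
  return hasTests && hasCI && hasReadme ? 'mature' : 'in-progress';
}
```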




&lt;h2&gt;
  
  
  The "respect existing code" problem
&lt;/h2&gt;

&lt;p&gt;This is the thing I cared most about getting right. Squirrel explicitly teaches the agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Match the existing naming convention — if the project uses &lt;code&gt;snake_case&lt;/code&gt;, don't introduce &lt;code&gt;camelCase&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use the existing test framework — don't swap in a new one because it's newer&lt;/li&gt;
&lt;li&gt;Read 2-3 similar files before writing a new one, to understand the pattern&lt;/li&gt;
&lt;li&gt;Touch only what's necessary — "add a password reset endpoint" is not permission to refactor the auth module&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice this means adding something like this to the SKILL.md:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## For existing code:&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Match the codebase's style**&lt;/span&gt; — check .eslintrc, pyproject.toml, rustfmt.toml
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Read before writing**&lt;/span&gt; — look at 2-3 similar existing functions first
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Touch only what's necessary**&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Leave the codebase better than you found it**&lt;/span&gt;, scoped to what you touched
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sounds obvious, right? But without it being explicit in the agent's instruction context, most models default to "write it my way."&lt;/p&gt;




&lt;h2&gt;
  
  
  Solving the infinite debugging loop
&lt;/h2&gt;

&lt;p&gt;The part I found most satisfying to design: the &lt;strong&gt;3-Strike Rule&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strike 1:&lt;/strong&gt; Fix the specific error. Run tests. Move on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strike 2:&lt;/strong&gt; Re-read the code more carefully. Try a different approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strike 3:&lt;/strong&gt; STOP. Revert. Write a failure report. Ask the user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## After Strike 3:&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; STOP all edits
&lt;span class="p"&gt;2.&lt;/span&gt; REVERT to last known working state (git stash)
&lt;span class="p"&gt;3.&lt;/span&gt; Write a failure report:
&lt;span class="p"&gt;   -&lt;/span&gt; What I tried
&lt;span class="p"&gt;   -&lt;/span&gt; What went wrong
&lt;span class="p"&gt;   -&lt;/span&gt; Where I think the problem is
&lt;span class="p"&gt;   -&lt;/span&gt; What I've ruled out
&lt;span class="p"&gt;4.&lt;/span&gt; ASK THE USER
&lt;span class="p"&gt;5.&lt;/span&gt; NEVER: leave code broken, delete failing tests, shotgun-debug
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This alone has saved me from agent death spirals more than once. Instead of watching it make 12 increasingly confused changes to the same function, it stops at 3, tells me what it knows, and asks. Like a junior engineer would.&lt;/p&gt;
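&lt;p&gt;The rule itself is prose for the agent, but its shape is just a bounded retry with escalation. A hypothetical sketch of the control flow the SKILL.md prose encodes, nothing more:&lt;/p&gt;

```typescript
// Bounded retry with escalation - hypothetical illustration of the
// 3-Strike Rule's control flow, not actual Squirrel code.
function threeStrikes<T>(approaches: Array<() => T>, giveUp: () => T): T {
  for (const attempt of approaches.slice(0, 3)) {
    try {
      return attempt(); // success: run tests, move on
    } catch {
      // strike recorded; try the next *different* approach, not the same one
    }
  }
  // Strike 3: stop, revert, write the failure report, ask the user
  return giveUp();
}
```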




&lt;h2&gt;
  
  
  Works on 8 platforms, install in one line
&lt;/h2&gt;

&lt;p&gt;The whole thing is just Markdown. Every major AI coding agent reads Markdown instructions — the YAML frontmatter is consumed by OpenCode, silently ignored by everything else.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Auto-detect your agent and install&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/flyingsquirrel0419/squirrel-skill/main/install.sh | bash

&lt;span class="c"&gt;# Or for a specific platform&lt;/span&gt;
bash install.sh &lt;span class="nt"&gt;--platform&lt;/span&gt; cursor
bash install.sh &lt;span class="nt"&gt;--platform&lt;/span&gt; claude-code
bash install.sh &lt;span class="nt"&gt;--platform&lt;/span&gt; aider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Supported: OpenCode, Codex, Claude Code, Cursor, Windsurf, Aider, Cline, GitHub Copilot.&lt;/p&gt;

&lt;p&gt;If you just want minimal setup that covers four platforms at once, drop &lt;code&gt;AGENTS.md&lt;/code&gt; in your project root. Natively read by Codex, Cursor, Cline, and Claude Code.&lt;/p&gt;

&lt;p&gt;For Cursor specifically, add the frontmatter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Squirrel full-cycle development skill&lt;/span&gt;
&lt;span class="na"&gt;alwaysApply&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then paste the SKILL.md content below it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Reference files included
&lt;/h2&gt;

&lt;p&gt;Squirrel ships with supplementary templates the agent loads on demand — not all upfront, only when relevant:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Loaded when&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;references/plan_template.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Phase 1, creating Plan.md&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;references/readme_template.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Phase 7, writing a new README&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;references/stack_hints.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Phase 3, unfamiliar languages or stacks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;references/ci_templates.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Phase 8, setting up GitHub Actions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The CI templates cover Node.js, Python, Go, Rust — ready-to-use starting points, not drop-in guarantees.&lt;/p&gt;




&lt;h2&gt;
  
  
  What vibe coding this taught me
&lt;/h2&gt;

&lt;p&gt;I built Squirrel as a vibe coding exercise — give the agent a clear goal, iterate fast, let it do the heavy lifting. It was the first time I really leaned into that workflow end-to-end on something I cared about shipping.&lt;/p&gt;

&lt;p&gt;The irony isn't lost on me: I needed better agent discipline to build a skill that teaches agents discipline. Every time the agent went sideways during development, I'd notice what rule was missing and add it to SKILL.md. The 3-Strike Rule came from a particularly painful afternoon of watching it loop on a bash parsing edge case.&lt;/p&gt;

&lt;p&gt;By the end, the skill file had basically written itself — not because the AI wrote it, but because the failures showed me exactly what needed to be in it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# One liner&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/flyingsquirrel0419/squirrel-skill/main/install.sh | bash

&lt;span class="c"&gt;# Then tell your agent what you want:&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; squirrel this project — add tests, fix lint errors, write README
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; build me a REST API &lt;span class="k"&gt;for &lt;/span&gt;a todo app with TypeScript
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; fix this bug &lt;span class="k"&gt;in &lt;/span&gt;src/auth/login.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent announces which mode it detected, runs the phases, and gives you a summary at the end.&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/flyingsquirrel0419/squirrel-skill" rel="noopener noreferrer"&gt;https://github.com/flyingsquirrel0419/squirrel-skill&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If this is useful to you, a ⭐ on GitHub genuinely helps — it's what tells me whether to keep building this out. Thanks for reading.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>opensource</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>I got tired of deploying broken configs, so I built dotenv-scan</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Mon, 27 Apr 2026 16:38:00 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/i-got-tired-of-deploying-broken-configs-so-i-built-dotenv-scan-47lb</link>
      <guid>https://dev.to/flyingsquirrel0419/i-got-tired-of-deploying-broken-configs-so-i-built-dotenv-scan-47lb</guid>
      <description>&lt;p&gt;Every team I've worked on has had this incident at least once.&lt;/p&gt;

&lt;p&gt;Friday afternoon deploy. CI is green. You push to production. Five minutes later, someone's pinging you in Slack: the app is crashing on startup. You SSH in, check the logs, and there it is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: DATABASE_URL is not defined
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You forgot to add the new environment variable to production. The &lt;code&gt;.env&lt;/code&gt; file on your machine has it. The &lt;code&gt;.env.example&lt;/code&gt; in the repo doesn't. Nobody noticed during review.&lt;/p&gt;

&lt;p&gt;Half an hour of your Friday afternoon gone.&lt;/p&gt;

&lt;p&gt;The fix is always the same — add the variable, redeploy, move on. But the problem keeps coming back. Because there's no automated check. You're relying on code review and memory.&lt;/p&gt;

&lt;p&gt;And it's not just missing variables. The opposite problem is just as real: &lt;code&gt;.env&lt;/code&gt; files that accumulate junk. &lt;code&gt;OLD_REDIS_URL&lt;/code&gt; from a migration you finished six months ago. &lt;code&gt;LEGACY_API_KEY&lt;/code&gt; for a service you sunset last quarter. They sit there, silently, because nobody wants to delete something they're not 100% sure is unused.&lt;/p&gt;

&lt;p&gt;I've fixed this manually too many times. So I built &lt;a href="https://github.com/flyingsquirrel0419/dotenv-scan" rel="noopener noreferrer"&gt;dotenv-scan&lt;/a&gt; to do it automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;dotenv-scan&lt;/code&gt; scans your codebase for every env variable your code actually uses, then compares that against what's in your &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.example&lt;/code&gt;. One command, three answers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx dotenv-scan scan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotenv-scan v1.0.0  ·  scanned 47 files in 284ms

❌  Missing  (3)
   DATABASE_URL      src/db.ts:12, src/config.ts:8
   JWT_SECRET        src/auth/middleware.ts:4
   STRIPE_API_KEY    src/payments/stripe.ts:22

⚠️  Undocumented  (2)
   INTERNAL_API_KEY
   DEBUG_MODE

🗑️  Unused  (1)
   OLD_REDIS_URL

✅  OK  (8)
   PORT, NODE_ENV, API_BASE_URL, ... (and 5 more)

────────────────────────────────────────
Run `dotenv-scan generate` to update .env.example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No config file. No setup. Just &lt;code&gt;npx&lt;/code&gt; and go.&lt;/p&gt;




&lt;h2&gt;
  
  
  The three problems it catches
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Missing&lt;/strong&gt; — your code does &lt;code&gt;process.env.DATABASE_URL&lt;/code&gt; but &lt;code&gt;.env&lt;/code&gt; doesn't have it. This is the Friday deploy problem. &lt;code&gt;dotenv-scan&lt;/code&gt; catches it before you push.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unused&lt;/strong&gt; — &lt;code&gt;.env&lt;/code&gt; has &lt;code&gt;OLD_REDIS_URL&lt;/code&gt; but no file in your codebase references it. Safe to delete. &lt;code&gt;dotenv-scan&lt;/code&gt; tells you which ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Undocumented&lt;/strong&gt; — &lt;code&gt;.env&lt;/code&gt; has &lt;code&gt;INTERNAL_API_KEY&lt;/code&gt; but &lt;code&gt;.env.example&lt;/code&gt; doesn't mention it. The next developer to clone your repo has no idea this variable needs to exist. &lt;code&gt;dotenv-scan&lt;/code&gt; flags it.&lt;/p&gt;
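&lt;p&gt;Under the hood, all three checks are set differences over the same three sets. A sketch of the idea, not dotenv-scan's actual source:&lt;/p&gt;

```typescript
// Three set differences - illustrative sketch, not dotenv-scan internals.
function analyze(used: Set<string>, defined: Set<string>, documented: Set<string>) {
  const diff = (a: Set<string>, b: Set<string>) => [...a].filter(k => !b.has(k));
  return {
    missing: diff(used, defined),            // code reads it, .env lacks it
    unused: diff(defined, used),             // .env has it, code never reads it
    undocumented: diff(defined, documented), // .env has it, .env.example doesn't
  };
}

const report = analyze(
  new Set(['DATABASE_URL', 'PORT']),
  new Set(['PORT', 'OLD_REDIS_URL']),
  new Set(['PORT']),
);
// report.missing includes 'DATABASE_URL'; report.unused includes 'OLD_REDIS_URL'
```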




&lt;h2&gt;
  
  
  Plugging it into CI
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;check&lt;/code&gt; command is designed for pipelines. It exits with code &lt;code&gt;1&lt;/code&gt; if any variables are missing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# GitHub Actions&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check env variables&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npx dotenv-scan check&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the whole setup. Now your Friday deploy problem becomes a PR-time failure instead.&lt;/p&gt;

&lt;p&gt;If you want to be strict — fail on unused and undocumented too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotenv-scan check &lt;span class="nt"&gt;--strict&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Auto-generating &lt;code&gt;.env.example&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The part I actually use the most is &lt;code&gt;generate&lt;/code&gt;. It writes (or updates) &lt;code&gt;.env.example&lt;/code&gt; from your scan results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotenv-scan generate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's idempotent. If a key already exists in &lt;code&gt;.env.example&lt;/code&gt;, it's preserved. New variables found in the scan get added. Variables in &lt;code&gt;.env.example&lt;/code&gt; that are no longer referenced in your code get flagged.&lt;/p&gt;

&lt;p&gt;One command to keep your docs in sync with reality.&lt;/p&gt;
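&lt;p&gt;Idempotence is the important property here. A minimal sketch of the merge rule (existing lines win, new keys get appended), assuming a simplified line-based format; this is not the actual generator:&lt;/p&gt;

```typescript
// Existing lines win, new keys are appended - a sketch of the merge rule,
// not dotenv-scan's real generator (which also preserves comments per key).
function mergeExample(existing: string, usedVars: string[]): string {
  const lines = existing.split('\n').filter(l => l.trim() !== '');
  const present = new Set(
    lines.filter(l => !l.trim().startsWith('#')).map(l => l.split('=')[0].trim()),
  );
  const additions = usedVars.filter(v => !present.has(v)).map(v => `${v}=`);
  return [...lines, ...additions].join('\n') + '\n';
}

const example = mergeExample('# infra\nPORT=3000\n', ['PORT', 'DATABASE_URL']);
// The comment and PORT's placeholder survive; a blank DATABASE_URL= is appended
```

&lt;p&gt;Running the merge a second time with the same inputs changes nothing, which is exactly what makes it safe to run in a pre-commit hook.&lt;/p&gt;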




&lt;h2&gt;
  
  
  Multi-language support
&lt;/h2&gt;

&lt;p&gt;It's not just JavaScript. The scanner understands:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Language&lt;/th&gt;
&lt;th&gt;Patterns&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;JavaScript / TypeScript&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;process.env.VAR&lt;/code&gt;, &lt;code&gt;process.env['VAR']&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;os.environ['VAR']&lt;/code&gt;, &lt;code&gt;os.getenv('VAR')&lt;/code&gt;, &lt;code&gt;os.environ.get('VAR')&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;&lt;code&gt;os.Getenv("VAR")&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ruby&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;ENV['VAR']&lt;/code&gt;, &lt;code&gt;ENV.fetch('VAR')&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So if you have a monorepo with a Node.js API and a Python worker, one scan covers both.&lt;/p&gt;
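&lt;p&gt;Per-language support is mostly a table of extraction patterns keyed by file extension. A simplified sketch of the approach; the real extractors are more thorough than these regexes:&lt;/p&gt;

```typescript
// One regex list per extension - simplified sketch, not dotenv-scan's
// actual extractors.
const EXTRACTORS: Record<string, RegExp[]> = {
  '.ts': [
    /process\.env\.([A-Z_][A-Z0-9_]*)/g,
    /process\.env\[['"]([A-Z_][A-Z0-9_]*)['"]\]/g,
  ],
  '.py': [
    /os\.environ\[['"]([A-Z_][A-Z0-9_]*)['"]\]/g,
    /os\.environ\.get\(\s*['"]([A-Z_][A-Z0-9_]*)['"]/g,
    /os\.getenv\(\s*['"]([A-Z_][A-Z0-9_]*)['"]/g,
  ],
  '.go': [/os\.Getenv\("([A-Z_][A-Z0-9_]*)"\)/g],
};

function extract(source: string, ext: string): string[] {
  const found = new Set<string>();
  for (const re of EXTRACTORS[ext] ?? []) {
    for (const m of source.matchAll(re)) found.add(m[1]);
  }
  return [...found];
}

extract('const db = process.env.DATABASE_URL;', '.ts'); // ['DATABASE_URL']
```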




&lt;h2&gt;
  
  
  The interesting part: detecting dynamic access
&lt;/h2&gt;

&lt;p&gt;The hardest case to handle was &lt;code&gt;process.env[dynamicKey]&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Static analysis can't tell you which variable this reads. The key is computed at runtime — maybe it comes from a config file, maybe it's user input, maybe it's constructed from an enum. You can't enumerate it.&lt;/p&gt;

&lt;p&gt;I made a deliberate call here: don't try to be clever. Instead, detect the pattern and warn the user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;⚠️  Dynamic access detected
   process.env[key]   src/config/loader.ts:34

   Static analysis can't determine which variables are accessed here.
   Make sure these variables are covered in your .env.example manually.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Silently ignoring it would be worse — you'd get a false "all clear" and miss variables. Failing hard would be too noisy for codebases that use this pattern intentionally. A warning with the exact location felt right.&lt;/p&gt;
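&lt;p&gt;Detecting the pattern is much easier than resolving it: flag any computed property access on &lt;code&gt;process.env&lt;/code&gt; whose key is not a string literal. A sketch of the JavaScript case only:&lt;/p&gt;

```typescript
// Flag process.env[expr] where expr is not a quoted string literal.
// Sketch only - the real check covers the other languages too.
const DYNAMIC_ACCESS = /process\.env\[(?!['"])/;

function findDynamicAccess(source: string): number[] {
  // Returns 1-based line numbers so the warning can point at the exact spot.
  return source
    .split('\n')
    .flatMap((line, i) => (DYNAMIC_ACCESS.test(line) ? [i + 1] : []));
}

findDynamicAccess('const v = process.env[key];');   // [1]
findDynamicAccess("const v = process.env['KEY'];"); // []
```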




&lt;h2&gt;
  
  
  Architecture in one diagram
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CLI (Commander.js)
    │
    ├── Scanner ── Walker (fast-glob) ── Extractors (per-language regex)
    │                                              │
    │                                     EnvRef[] (used variables)
    │
    ├── Parser ── dotenv parser ── EnvDef[] (defined variables)
    │
    └── Analyzer ── compares used ↔ defined ↔ documented
                         │
                    AnalysisResult
                         │
                    ┌────┴────┐
                 Reporter   Generator
                (text/json)  (.env.example)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each layer is independently testable. The scanner doesn't know about &lt;code&gt;.env&lt;/code&gt; files. The analyzer doesn't know about file systems. There are 88 tests, and the integration tests don't mock the actual fs calls: they run against real fixture files.&lt;/p&gt;




&lt;h2&gt;
  
  
  Zero runtime dependencies (almost)
&lt;/h2&gt;

&lt;p&gt;Three runtime deps, all small:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;chalk&lt;/code&gt; — terminal colors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;commander&lt;/code&gt; — CLI argument parsing
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fast-glob&lt;/code&gt; — file walking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. No dotenv library — the parser is custom because I needed comment-preservation and multiline value support that dotenv packages tend to strip. No framework. Ships as a standalone CLI that you can &lt;code&gt;npx&lt;/code&gt; without worrying about what it pulls in.&lt;/p&gt;
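&lt;p&gt;To show why comment preservation rules out the off-the-shelf parsers, here's a minimal sketch of a parser that keeps comments as first-class entries instead of discarding them. Illustrative only: the real parser also handles multiline values, which this sketch skips.&lt;/p&gt;

```typescript
// Minimal comment-preserving .env parser sketch. Illustrative only:
// no multiline values, no escapes, unlike the real parser.

interface EnvLine {
  key?: string
  value?: string
  comment?: string // full-line comments survive a round-trip
}

function parseEnv(source: string): EnvLine[] {
  const lines: EnvLine[] = []
  for (const raw of source.split('\n')) {
    const line = raw.trim()
    if (line === '') continue
    if (line.startsWith('#')) {
      lines.push({ comment: line.slice(1).trim() })
      continue
    }
    const eq = line.indexOf('=')
    if (eq === -1) continue // not a KEY=value line; ignore
    lines.push({
      key: line.slice(0, eq).trim(),
      value: line.slice(eq + 1).trim(),
    })
  }
  return lines
}
```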




&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# No install required&lt;/span&gt;
npx dotenv-scan scan

&lt;span class="c"&gt;# Or install globally&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; dotenv-scan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/flyingsquirrel0419/dotenv-scan" rel="noopener noreferrer"&gt;https://github.com/flyingsquirrel0419/dotenv-scan&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If dotenv-scan saves you from a bad deploy, a ⭐ on the repo goes a long way. It's also the best signal that the tool is useful and worth continuing to build.&lt;/p&gt;




&lt;p&gt;The thing I keep coming back to is how small the fix is relative to how painful the problem is. One &lt;code&gt;npx&lt;/code&gt; command in CI, and the whole class of "missing env variable in production" failures goes away. I wish I'd built this years ago.&lt;/p&gt;

&lt;p&gt;Happy to dig into any of the implementation details in the comments — the multi-language extractor design and the dynamic access detection decision both have interesting tradeoffs worth talking through.&lt;/p&gt;

</description>
      <category>node</category>
      <category>typescript</category>
      <category>opensource</category>
      <category>dotenv</category>
    </item>
    <item>
      <title>I Audited My Own Open Source Library and Found 9 Security Bugs. Here's Every One.</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Sun, 26 Apr 2026 01:14:50 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/i-audited-my-own-open-source-library-and-found-9-security-bugs-heres-every-one-3dkc</link>
      <guid>https://dev.to/flyingsquirrel0419/i-audited-my-own-open-source-library-and-found-9-security-bugs-heres-every-one-3dkc</guid>
      <description>&lt;p&gt;Hey dev.to 👋&lt;/p&gt;

&lt;p&gt;If you've read my &lt;a href="https://dev.to/flyingsquirrel0419/i-got-tired-of-wiring-the-same-caching-stack-every-project-so-i-built-layercache-52e2"&gt;previous post&lt;/a&gt; about &lt;strong&gt;layercache&lt;/strong&gt;, you know it's a multi-layer caching library for Node.js — Memory → Redis → Disk behind a single &lt;code&gt;get()&lt;/code&gt; call, with stampede prevention, tag invalidation, circuit breaking, and all the production-grade stuff you eventually need.&lt;/p&gt;

&lt;p&gt;Today I'm releasing &lt;strong&gt;v1.3.3&lt;/strong&gt;, and it's different from all the previous releases.&lt;/p&gt;

&lt;p&gt;No new features. No benchmark numbers. No shiny API additions.&lt;/p&gt;

&lt;p&gt;Just nine bugs I found in my own library. I want to walk through all of them — what they were, why they happened, and what I did to fix them.&lt;/p&gt;

&lt;p&gt;Some are embarrassing. All of them are real.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I did a full security audit
&lt;/h2&gt;

&lt;p&gt;When you're building in the open and people start actually using the thing, you feel differently about the code. I went back through the internals with fresh eyes and a specific question: &lt;em&gt;what could go wrong in production under real load?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Turns out: a lot.&lt;/p&gt;

&lt;p&gt;Here's everything I found, roughly in severity order.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-1 (HIGH): Unbounded memory growth in &lt;code&gt;keyEpochs&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: &lt;code&gt;CacheStackMaintenance&lt;/code&gt; uses a &lt;code&gt;Map&amp;lt;string, number&amp;gt;&lt;/code&gt; called &lt;code&gt;keyEpochs&lt;/code&gt; to track write invalidation — every time a key is deleted or updated, its epoch is bumped so stale write-behind operations know to skip it. The map grew forever. No cap, no pruning. In a long-running service writing lots of unique keys, this is a slow memory leak that only gets worse over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Added &lt;code&gt;MAX_KEY_EPOCHS = 50_000&lt;/code&gt; and a pruning step after every &lt;code&gt;bumpKeyEpochs()&lt;/code&gt; call. When the map exceeds the limit, the oldest 10% (lowest epoch values) get evicted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gi"&gt;+ const MAX_KEY_EPOCHS = 50_000
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;  bumpKeyEpochs(keys: string[]): void {
    for (const key of keys) {
      this.keyEpochs.set(key, this.currentKeyEpoch(key) + 1)
    }
&lt;span class="gi"&gt;+   this.pruneKeyEpochsIfNeeded()
&lt;/span&gt;  }
&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gi"&gt;+ private pruneKeyEpochsIfNeeded(): void {
+   if (this.keyEpochs.size &amp;lt;= MAX_KEY_EPOCHS) return
+   const sorted = [...this.keyEpochs.entries()].sort((a, b) =&amp;gt; a[1] - b[1])
+   const toDelete = Math.ceil(sorted.length * 0.1)
+   for (let i = 0; i &amp;lt; toDelete; i++) {
+     this.keyEpochs.delete(sorted[i][0])
+   }
+ }
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one stings because it's exactly the kind of bug that's invisible in tests — you only see it after the process has been running for days and memory graphs start climbing.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-2 (MED-HIGH): Unbounded queue in &lt;code&gt;FetchRateLimiter&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: &lt;code&gt;FetchRateLimiter&lt;/code&gt; queues fetcher requests per-bucket when rate limits are hit. The queue itself had no bound. Under sustained high contention on a single cache key, it would grow without limit, consuming memory and piling up backpressure indefinitely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Added &lt;code&gt;MAX_QUEUE_PER_BUCKET = 10_000&lt;/code&gt;. When a bucket's queue is full, new requests bypass the rate limiter entirely rather than blocking (availability &amp;gt; strict throttling in this failure mode).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gi"&gt;+ const MAX_QUEUE_PER_BUCKET = 10_000
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;  return new Promise&amp;lt;T&amp;gt;((resolve, reject) =&amp;gt; {
    const bucketKey = this.resolveBucketKey(normalized, context)
    const queue = this.queuesByBucket.get(bucketKey) ?? []
&lt;span class="gi"&gt;+   if (queue.length &amp;gt;= MAX_QUEUE_PER_BUCKET) {
+     this.rateLimitBypasses += 1
+     task().then(resolve, reject)
+     return
+   }
&lt;/span&gt;    queue.push({ bucketKey, options: normalized, task, resolve, reject })
    ...
  })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The bypass counter is exposed via metrics so you can see when it's happening in production.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-3 (MEDIUM): CLI accepted unvalidated input before hitting Redis
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: The admin CLI (&lt;code&gt;npx layercache keys --pattern "..."&lt;/code&gt;, &lt;code&gt;invalidate --tag "..."&lt;/code&gt;, etc.) didn't validate keys, patterns, or tags before passing them to Redis operations. The runtime &lt;code&gt;CacheStack&lt;/code&gt; enforces strict validation on all inputs; the CLI performed none of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: The same &lt;code&gt;validateCacheKey()&lt;/code&gt;, &lt;code&gt;validatePattern()&lt;/code&gt;, and &lt;code&gt;validateTag()&lt;/code&gt; functions used by the runtime are now called in the CLI before any Redis operation runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// cli.ts — now applied before every Redis op&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pattern&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;validateCliInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;validatePattern&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tag&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;validateCliInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;validateTag&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;validateCliInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;validateCacheKey&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The runtime had this hardened back in v1.2.x. The CLI just... never got the memo.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-4 (MEDIUM): &lt;code&gt;invalidate&lt;/code&gt; could wipe the entire cache with no confirmation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: Running &lt;code&gt;npx layercache invalidate&lt;/code&gt; with no &lt;code&gt;--pattern&lt;/code&gt; or &lt;code&gt;--tag&lt;/code&gt; defaults to &lt;code&gt;*&lt;/code&gt; — which matches every key in the cache. There was no confirmation step. One mistyped command in a terminal and your entire production cache is gone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: If you run &lt;code&gt;invalidate&lt;/code&gt; with no targeting flags and there are keys to delete, the CLI now refuses and asks you to pass &lt;code&gt;--force&lt;/code&gt; explicitly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npx layercache invalidate
&lt;span class="go"&gt;Warning: this operation will invalidate 14,823 keys. Use --force to confirm.

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npx layercache invalidate &lt;span class="nt"&gt;--force&lt;/span&gt;
&lt;span class="go"&gt;Invalidated 14,823 keys.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one is embarrassing because I added the CLI &lt;em&gt;for convenience in production&lt;/em&gt;, and then left a footgun that could nuke the entire cache by accident. Glad I caught it before anyone else did.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-5 (MEDIUM): &lt;code&gt;TagIndex&lt;/code&gt; pruning was silently broken
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: &lt;code&gt;TagIndex&lt;/code&gt; uses a &lt;code&gt;knownKeys&lt;/code&gt; collection to track which keys exist, so prefix and wildcard invalidation can find them. Since v1.2.0, it had a &lt;code&gt;maxKnownKeys&lt;/code&gt; limit to prevent unbounded growth — but it was a &lt;code&gt;Set&amp;lt;string&amp;gt;&lt;/code&gt;, which has no access-recency ordering. The pruning code sorted and evicted by... nothing meaningful. It was effectively random deletion, not LRU eviction. Hot keys were just as likely to get pruned as cold ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Changed &lt;code&gt;knownKeys&lt;/code&gt; from &lt;code&gt;Set&amp;lt;string&amp;gt;&lt;/code&gt; to &lt;code&gt;Map&amp;lt;string, number&amp;gt;&lt;/code&gt; where the value is a timestamp updated on every &lt;code&gt;touch()&lt;/code&gt; or &lt;code&gt;track()&lt;/code&gt; call. Now pruning correctly evicts least-recently-used entries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;- private readonly knownKeys = new Set&amp;lt;string&amp;gt;()
&lt;/span&gt;&lt;span class="gi"&gt;+ private readonly knownKeys = new Map&amp;lt;string, number&amp;gt;()  // key → last-touched timestamp
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;  async touch(key: string): Promise&amp;lt;void&amp;gt; {
&lt;span class="gd"&gt;-   this.knownKeys.add(key)
&lt;/span&gt;&lt;span class="gi"&gt;+   this.knownKeys.set(key, Date.now())  // updates on every access
&lt;/span&gt;    this.pruneKnownKeysIfNeeded()
  }
&lt;span class="err"&gt;
&lt;/span&gt;  private pruneKnownKeysIfNeeded(): void {
    if (!this.maxKnownKeys || this.knownKeys.size &amp;lt;= this.maxKnownKeys) return
&lt;span class="gd"&gt;-   // old: iterated a Set with no ordering guarantee
&lt;/span&gt;&lt;span class="gi"&gt;+   const sorted = [...this.knownKeys.entries()].sort((a, b) =&amp;gt; a[1] - b[1])
+   const toDelete = Math.ceil(sorted.length * 0.1)
+   for (let i = 0; i &amp;lt; toDelete; i++) this.knownKeys.delete(sorted[i][0])
&lt;/span&gt;  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The limit was there since v1.2.0 and &lt;em&gt;looked&lt;/em&gt; like it was working. It wasn't.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-6 (MEDIUM): TOCTOU race in snapshot file writes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: The snapshot persistence code (&lt;code&gt;persistToFile()&lt;/code&gt;) wrote directly to the target path. If the process crashed mid-write, you'd get a partial or corrupt snapshot file with no recovery path. Worse, if two processes tried to write a snapshot concurrently, they'd clobber each other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Centralized all snapshot writes through two new utilities: &lt;code&gt;atomicWriteTempPath()&lt;/code&gt; generates a randomized temp filename, and &lt;code&gt;commitAtomicWrite()&lt;/code&gt; renames the temp file to the target — an atomic operation on all POSIX-compliant filesystems.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/internal/CacheSnapshotFile.ts&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;atomicWriteTempPath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;targetPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;targetPath&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.tmp-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;randomBytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;commitAtomicWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tempPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;rename&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tempPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;unlink&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tempPath&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Write to the temp path, then &lt;code&gt;fs.rename()&lt;/code&gt;. If anything goes wrong before the rename, the original snapshot is untouched. If the rename succeeds, readers see either the old file or the new one — never a partial state.&lt;/p&gt;
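&lt;p&gt;Putting the two helpers together, a snapshot write looks roughly like this. &lt;code&gt;persistSnapshot&lt;/code&gt; and &lt;code&gt;loadSnapshot&lt;/code&gt; are hypothetical wrappers for illustration, not the library's actual &lt;code&gt;persistToFile()&lt;/code&gt;:&lt;/p&gt;

```typescript
import { randomBytes } from 'node:crypto'
import { readFile, rename, unlink, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// The two helpers from above, combined into a hypothetical
// persistSnapshot() wrapper.
function atomicWriteTempPath(targetPath: string): string {
  return `${targetPath}.tmp-${randomBytes(8).toString('hex')}`
}

async function commitAtomicWrite(tempPath: string, targetPath: string): Promise<void> {
  try {
    await rename(tempPath, targetPath)
  } catch (error) {
    await unlink(tempPath).catch(() => undefined)
    throw error
  }
}

async function persistSnapshot(targetPath: string, data: string): Promise<void> {
  const tempPath = atomicWriteTempPath(targetPath)
  await writeFile(tempPath, data) // a crash here leaves the target untouched
  await commitAtomicWrite(tempPath, targetPath) // atomic rename on POSIX
}

async function loadSnapshot(targetPath: string): Promise<string> {
  return readFile(targetPath, 'utf8')
}

// Example target under the OS temp dir.
const snapshotPath = join(tmpdir(), 'layercache-demo-snapshot.json')
```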




&lt;h2&gt;
  
  
  VULN-7 (LOW): Memory leak in &lt;code&gt;layerDegradedUntil&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: When a cache layer fails and enters degraded mode, &lt;code&gt;CacheStack&lt;/code&gt; stores &lt;code&gt;layerDegradedUntil.set(layer.name, expiryTimestamp)&lt;/code&gt;. When the degradation period expired, the entry was never removed. In a service where Redis occasionally has brief hiccups, this map accumulates an entry per layer per incident — forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: On every read that checks degradation status, if the entry has expired, delete it before returning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;  const degradedUntil = this.layerDegradedUntil.get(layer.name)
  const skip = shouldSkipDegradedLayer(degradedUntil)
&lt;span class="gi"&gt;+ if (!skip &amp;amp;&amp;amp; degradedUntil !== undefined) {
+   this.layerDegradedUntil.delete(layer.name)  // clean up expired entry
+ }
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One-liner fix, but this would quietly accumulate in any service that ever experiences Redis downtime.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-8 (LOW): &lt;code&gt;Math.random()&lt;/code&gt; for TTL jitter
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: &lt;code&gt;TtlResolver.applyJitter()&lt;/code&gt; used &lt;code&gt;Math.random()&lt;/code&gt; to spread cache expiration times. &lt;code&gt;Math.random()&lt;/code&gt; is not cryptographically secure — it's seeded from a deterministic internal state. For TTL jitter this is mostly harmless, but using a predictable PRNG to compute expiration windows is bad practice. In theory, an observer who can measure cache miss patterns could infer when keys are about to expire.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Replaced &lt;code&gt;Math.random()&lt;/code&gt; with a &lt;code&gt;crypto.randomBytes&lt;/code&gt;-based equivalent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gi"&gt;+ import { randomBytes } from 'node:crypto'
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gi"&gt;+ export const secureRandom = {
+   value(): number {
+     return randomBytes(4).readUInt32BE(0) / 0x100000000
+   }
+ }
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;  applyJitter(ttl: number | undefined, jitter: number | undefined): number | undefined {
    if (!ttl || ttl &amp;lt;= 0 || !jitter || jitter &amp;lt;= 0) return ttl
&lt;span class="gd"&gt;-   const delta = (Math.random() * 2 - 1) * jitter
&lt;/span&gt;&lt;span class="gi"&gt;+   const delta = (secureRandom.value() * 2 - 1) * jitter
&lt;/span&gt;    return Math.max(1, Math.round(ttl + delta))
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;randomBytes(4)&lt;/code&gt; is fast. No measurable performance impact.&lt;/p&gt;




&lt;h2&gt;
  
  
  VULN-9 (LOW): Background refresh failures logged at &lt;code&gt;debug&lt;/code&gt; level
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The bug&lt;/strong&gt;: When a stale-while-revalidate background refresh fails — upstream is down, fetcher throws, timeout — the error was logged at &lt;code&gt;debug&lt;/code&gt; level. In almost every production setup, &lt;code&gt;debug&lt;/code&gt; logs are disabled. So these failures were silently swallowed. You'd see keys serving stale values with no log entry explaining why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: One-line change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;- this.logger.debug?.('background-refresh-failed', { key, error })
&lt;/span&gt;&lt;span class="gi"&gt;+ this.logger.warn?.('background-refresh-failed', { key, error })
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I genuinely don't know how long this was invisible. If you've been running layercache with &lt;code&gt;staleWhileRevalidate&lt;/code&gt; and wondering why some keys feel permanently stale — this might be why.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I learned from this
&lt;/h2&gt;

&lt;p&gt;A few patterns that caused most of these bugs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unbounded Maps are silent killers.&lt;/strong&gt; VULN-1, VULN-5, and VULN-7 are all variations of the same mistake: I allocated a &lt;code&gt;Map&lt;/code&gt; or &lt;code&gt;Set&lt;/code&gt;, put the bounds/pruning logic on my TODO list, and shipped without it. In tests, these are invisible. In production they show up in memory graphs after days of uptime.&lt;/p&gt;
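&lt;p&gt;The recurring fix, distilled into a generic sketch (illustrative, not layercache code): cap the map, timestamp entries on write, and evict the least-recently-touched slice once the cap is exceeded.&lt;/p&gt;

```typescript
// Generic bounded map sketch; not layercache code. Entries carry a
// last-touched timestamp so pruning can evict the coldest ones.

class BoundedMap<K, V> {
  private readonly entriesByKey = new Map<K, { value: V; touchedAt: number }>()

  constructor(private readonly maxSize: number) {}

  set(key: K, value: V): void {
    this.entriesByKey.set(key, { value, touchedAt: Date.now() })
    this.pruneIfNeeded()
  }

  get(key: K): V | undefined {
    return this.entriesByKey.get(key)?.value
  }

  get size(): number {
    return this.entriesByKey.size
  }

  // Evict the least-recently-touched 10% once the cap is exceeded.
  private pruneIfNeeded(): void {
    if (this.entriesByKey.size <= this.maxSize) return
    const sorted = [...this.entriesByKey.entries()].sort(
      (a, b) => a[1].touchedAt - b[1].touchedAt,
    )
    const toDelete = Math.ceil(sorted.length * 0.1)
    for (let i = 0; i < toDelete; i++) this.entriesByKey.delete(sorted[i][0])
  }
}
```

&lt;p&gt;The 10% batch eviction mirrors the pruning strategy from VULN-1 and VULN-5: it amortizes the sort cost instead of paying it on every insert.&lt;/p&gt;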

&lt;p&gt;&lt;strong&gt;Internal tools don't inherit production hardening automatically.&lt;/strong&gt; VULN-3 and VULN-4 happened because the CLI was an afterthought. The core library had strict input validation. The CLI that wraps it did not. Every interface — HTTP endpoints, CLIs, admin tools — needs its own hardening pass.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Debug-level logging" is often "no logging" in production.&lt;/strong&gt; VULN-9 was a legitimate design decision that turned out to be wrong in practice. Background refresh failures are operational signals, not debugging details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TOCTOU bugs hide behind success.&lt;/strong&gt; VULN-6 was only a problem during crashes or concurrent writes — situations that don't happen in unit tests. The atomic write pattern is just the right default, regardless.&lt;/p&gt;




&lt;h2&gt;
  
  
  Upgrade
&lt;/h2&gt;

&lt;p&gt;v1.3.3 is a drop-in upgrade. No API changes, no migration needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;layercache@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full changelog: &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/CHANGELOG.md" rel="noopener noreferrer"&gt;CHANGELOG.md&lt;/a&gt;&lt;br&gt;
Security PR: &lt;a href="https://github.com/flyingsquirrel0419/layercache/pull/19" rel="noopener noreferrer"&gt;#19&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;If you're already using layercache — please upgrade. If you're not, this might be a decent time to take a look:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🐙 &lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;github.com/flyingsquirrel0419/layercache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📦 &lt;strong&gt;npm&lt;/strong&gt;: &lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;npmjs.com/package/layercache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📖 &lt;strong&gt;Docs&lt;/strong&gt;: &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/api.md" rel="noopener noreferrer"&gt;API Reference&lt;/a&gt; · &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/tutorial.md" rel="noopener noreferrer"&gt;Tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If this has been useful, a ⭐ on GitHub helps a lot — it's the main signal that helps other developers find the library. Thanks for reading. 🙏&lt;/p&gt;

</description>
      <category>node</category>
      <category>typescript</category>
      <category>redis</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I got tired of wiring the same caching stack every project, so I built LayerCache</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Sat, 18 Apr 2026 16:40:12 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/i-got-tired-of-wiring-the-same-caching-stack-every-project-so-i-built-layercache-52e2</link>
      <guid>https://dev.to/flyingsquirrel0419/i-got-tired-of-wiring-the-same-caching-stack-every-project-so-i-built-layercache-52e2</guid>
      <description>&lt;p&gt;Every Node.js service I've worked on hits the same caching wall. It always starts the same way.&lt;/p&gt;

&lt;p&gt;You add an in-memory cache. It's fast. Life is good.&lt;/p&gt;

&lt;p&gt;Then you scale to multiple instances. Now each server has its own view of the data. Stale reads start showing up in production. So you add Redis. Now all your instances share the same cache. Problem solved — until you realize every single request is paying a Redis round-trip, even for data that barely changes.&lt;/p&gt;

&lt;p&gt;So you bring back the in-memory layer on top of Redis. Now you have L1 (memory) and L2 (Redis). But what happens when a key expires and 200 requests hit at the same time? They all miss L1, all miss L2, and they all go straight to the database simultaneously. Cache stampede. Your DB is not happy.&lt;/p&gt;

&lt;p&gt;You add stampede protection. Then Redis goes down one day, and your entire cache blows up instead of gracefully falling back. You add circuit breaking. Then you realize your memory caches across instances are now serving different data and you need a pub/sub invalidation bus to keep them in sync...&lt;/p&gt;

&lt;p&gt;It never ends.&lt;/p&gt;

&lt;p&gt;I've wired this stack more than once. It's not that any single piece is hard — it's that getting all of it working together correctly, with proper testing and production-grade reliability, takes real engineering time every time.&lt;/p&gt;

&lt;p&gt;So I built &lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;LayerCache&lt;/a&gt; to do it once and stop repeating myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;LayerCache stacks multiple cache layers (Memory → Redis → Disk) behind a single &lt;code&gt;get()&lt;/code&gt; call.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Redis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a cache &lt;strong&gt;hit&lt;/strong&gt;: serves the value from the fastest layer that has it, then automatically backfills the layers above it. So if L1 is cold but L2 (Redis) has the value, L1 gets filled for the next request.&lt;/p&gt;

&lt;p&gt;On a cache &lt;strong&gt;miss&lt;/strong&gt;: the fetcher function runs &lt;strong&gt;exactly once&lt;/strong&gt;, no matter how many requests are waiting. All concurrent callers get the same promise.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your request flood
       │
┌──────▼──────┐
│ L1 Memory   │  ~0.005 ms  ← serves from here if warm
│             │
│ L2 Redis    │  ~0.2 ms   ← falls through to here if L1 cold
│             │
│ L3 Disk     │  ~2 ms     ← optional persistent layer
│             │
│ Fetcher()   │             ← runs ONCE even under 100 concurrent requests
└─────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
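&lt;p&gt;The hit-plus-backfill flow above can be sketched in a few lines. This is an illustrative read-through loop, not layercache's internals (the &lt;code&gt;Layer&lt;/code&gt; interface here is a made-up simplification):&lt;br&gt;
&lt;/p&gt;

```typescript
// Illustrative read-through sketch: walk layers fastest-first; on a hit,
// backfill every faster layer so the next read is served from L1.
// (Simplified Layer interface for illustration -- not layercache's real API.)
interface Layer<T> {
  get(key: string): Promise<T | undefined>
  set(key: string, value: T): Promise<void>
}

async function readThrough<T>(
  layers: Layer<T>[],
  key: string,
  fetcher: () => Promise<T>,
): Promise<T> {
  for (let i = 0; i < layers.length; i++) {
    const hit = await layers[i].get(key)
    if (hit !== undefined) {
      // backfill the warmer layers above the one that answered
      await Promise.all(layers.slice(0, i).map((l) => l.set(key, hit)))
      return hit
    }
  }
  const value = await fetcher() // full miss: go to origin
  await Promise.all(layers.map((l) => l.set(key, value)))
  return value
}
```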






&lt;h2&gt;
  
  
  Solving the stampede problem
&lt;/h2&gt;

&lt;p&gt;In a benchmark with 75 concurrent requests hitting an expired key:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Origin fetches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;375&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LayerCache&lt;/td&gt;
&lt;td&gt;5 (one per layer)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The local single-flight is handled by sharing an in-flight promise across concurrent callers. No mutex queue. No serialization.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;distributed environments&lt;/strong&gt; — multiple Node.js processes or machines — &lt;code&gt;RedisSingleFlightCoordinator&lt;/code&gt; extends this across instances using distributed locks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RedisSingleFlightCoordinator&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;singleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisSingleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a test with 60 concurrent requests across multiple instances: &lt;strong&gt;1 origin fetch total&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Keeping L1 caches in sync across instances
&lt;/h2&gt;

&lt;p&gt;The classic problem with in-process memory caches in a multi-instance setup: if you invalidate a key on Server A, Servers B and C still serve the old value from their L1.&lt;/p&gt;

&lt;p&gt;LayerCache solves this with a Redis pub/sub invalidation bus.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RedisInvalidationBus&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;invalidationBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// separate connection for sub&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// invalidating on one instance flushes L1 on all instances&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  When Redis dies
&lt;/h2&gt;

&lt;p&gt;This is where a lot of hand-rolled caching setups break badly. LayerCache has two modes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strict mode&lt;/strong&gt; (default): if any layer fails, the operation fails. Good when you need strong consistency guarantees.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graceful degradation&lt;/strong&gt;: failed layers are temporarily skipped. The cache keeps working by going directly to the fetcher.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;gracefulDegradation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;retryAfterMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I tested this with 500 ms of injected Redis latency (well above the 200 ms command timeout):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Strict&lt;/th&gt;
&lt;th&gt;Graceful&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;L1 warm hit&lt;/td&gt;
&lt;td&gt;✅ 0.065 ms&lt;/td&gt;
&lt;td&gt;✅ 0.065 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L2 hit (Redis slow)&lt;/td&gt;
&lt;td&gt;❌ timeout&lt;/td&gt;
&lt;td&gt;✅ 201 ms (fell back to fetcher)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cold miss (Redis slow)&lt;/td&gt;
&lt;td&gt;❌ timeout&lt;/td&gt;
&lt;td&gt;✅ 200 ms (fell back to fetcher)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;L1 hot hits aren't affected at all since they never touch Redis.&lt;/p&gt;




&lt;h2&gt;
  
  
  Benchmark numbers
&lt;/h2&gt;

&lt;p&gt;All benchmarks below ran on a single-core VM against a real Docker-backed Redis instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Warm hit latency
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;layered (L1 hit):   0.005 ms avg  (1006x faster than no-cache)
memory only:        0.010 ms avg  ( 503x faster than no-cache)
no-cache:           5.030 ms avg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  HTTP throughput (autocannon, 40 connections, 8 seconds)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/layered:   16,211 req/s  —  1.9 ms avg latency
/memory:    16,031 req/s  —  1.9 ms avg latency
/nocache:      158 req/s  — 253.2 ms avg latency
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Memory pressure
&lt;/h3&gt;

&lt;p&gt;With L1 capped at 25 keys and 180 unique keys inserted (256 KiB each), revisiting evicted keys caused &lt;strong&gt;0 origin refetches&lt;/strong&gt; — L1 evicted correctly and Redis served the keys that no longer fit in memory.&lt;/p&gt;

&lt;p&gt;Full benchmark methodology and raw output: &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/benchmarking.md" rel="noopener noreferrer"&gt;docs/benchmarking.md&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Other things it does
&lt;/h2&gt;

&lt;p&gt;I don't want to just dump a feature list, but a few things worth calling out:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tag invalidation&lt;/strong&gt; — attach tags to keys and invalidate all of them at once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;post:42&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;posts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:7&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invalidateByTag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:7&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// clears all keys tagged with user:7&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Stale-while-revalidate&lt;/strong&gt; — return the cached value immediately, refresh in the background:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;staleWhileRevalidate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Framework middleware&lt;/strong&gt; — drop-in for Express, Fastify, Hono, tRPC, GraphQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nf"&gt;createExpressCacheMiddleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;keyResolver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`users:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUsers&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Admin CLI&lt;/strong&gt; — inspect a live Redis-backed cache without writing code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx layercache stats
npx layercache keys &lt;span class="nt"&gt;--pattern&lt;/span&gt; &lt;span class="s2"&gt;"user:*"&lt;/span&gt;
npx layercache invalidate &lt;span class="nt"&gt;--tag&lt;/span&gt; posts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;layercache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Memory-only (no Redis needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;fetchData&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full distributed setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;RedisSingleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;compression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gzip&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;invalidationBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="na"&gt;singleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisSingleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="na"&gt;gracefulDegradation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;retryAfterMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;https://github.com/flyingsquirrel0419/layercache&lt;/a&gt;
If you find it useful, a star on the repo helps a lot! :&amp;gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm&lt;/strong&gt;: &lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/layercache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docs&lt;/strong&gt;: &lt;a href="https://github.com/flyingsquirrel0419/layercache/tree/main/docs" rel="noopener noreferrer"&gt;https://github.com/flyingsquirrel0419/layercache/tree/main/docs&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;The part I found most interesting to design was the stampede guard — specifically making sure concurrent callers share a promise rather than queueing through a mutex, and then extending that behavior across processes with Redis. Happy to dig into any of that if you're curious.&lt;/p&gt;

</description>
      <category>node</category>
      <category>typescript</category>
      <category>redis</category>
      <category>opensource</category>
    </item>
    <item>
      <title>useless-gps</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Fri, 10 Apr 2026 17:50:44 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/useless-gps-3kf6</link>
      <guid>https://dev.to/flyingsquirrel0419/useless-gps-3kf6</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built useless-gps: the world's most accurate and completely useless GPS locator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://useless-gps.vercel.app/" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;useless-gps.vercel.app&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/flyingsquirrel0419" rel="noopener noreferrer"&gt;
        flyingsquirrel0419
      &lt;/a&gt; / &lt;a href="https://github.com/flyingsquirrel0419/useless-gps" rel="noopener noreferrer"&gt;
        useless-gps
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;useless-gps&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;The world's most accurate and completely useless GPS locator.&lt;/p&gt;
&lt;p&gt;This project is a small Next.js app that reads your browser geolocation and turns it into intentionally unhelpful cosmic, geophysical, and existential status cards.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Next.js 14&lt;/li&gt;
&lt;li&gt;React 18&lt;/li&gt;
&lt;li&gt;TypeScript&lt;/li&gt;
&lt;li&gt;Tailwind CSS&lt;/li&gt;
&lt;li&gt;Framer Motion&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Local Development&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Install dependencies:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm ci&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Start the development server:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm run dev&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Build for production:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm run build&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Start the production server:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm run start&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;How It Works&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Requests browser geolocation with high accuracy enabled&lt;/li&gt;
&lt;li&gt;Shows a fake "scan" sequence while location data is loading&lt;/li&gt;
&lt;li&gt;Converts coordinates into humorous location cards&lt;/li&gt;
&lt;li&gt;Renders a stylized retro radar interface with animated effects&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Contribution Policy&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;&lt;code&gt;main&lt;/code&gt; is a protected branch.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do not push directly to &lt;code&gt;main&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Open a pull request for every change&lt;/li&gt;
&lt;li&gt;Outside contributors should work from a fork and open a PR&lt;/li&gt;
&lt;li&gt;Non-admin changes require operator approval before landing on &lt;code&gt;main&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Operator Review&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Repository ownership and…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/flyingsquirrel0419/useless-gps" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;I'm interested in space, and I wanted to know where Earth sits in it. So I built this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;Community Favorite: I built this entirely with my own coding skills, so "Community Favorite" felt like the right fit.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>layercache: Stop Paying Redis Latency on Every Hot Read</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Thu, 09 Apr 2026 23:53:34 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/layercache-stop-paying-redis-latency-on-every-hot-read-m8l</link>
      <guid>https://dev.to/flyingsquirrel0419/layercache-stop-paying-redis-latency-on-every-hot-read-m8l</guid>
      <description>&lt;p&gt;Every Node.js backend hits the same wall eventually.&lt;/p&gt;

&lt;p&gt;Your Redis cache is working, latency is acceptable, and then traffic doubles. Suddenly the Redis round-trip that felt like nothing at 200 req/s starts dominating your p95 at 2,000 req/s. You add an in-process memory cache on top, wire up some invalidation logic by hand, and three months later you are maintaining a fragile two-layer system with no stampede protection and no cross-instance consistency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;layercache&lt;/a&gt; is a TypeScript-first library that solves this problem once, cleanly. It stacks memory, Redis, and disk behind a single unified API and handles the hard parts — stampede prevention, cross-instance invalidation, graceful degradation under Redis failures — out of the box.&lt;/p&gt;

&lt;p&gt;This post walks through what it does and what the benchmark numbers actually look like on a real Redis backend.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Idea
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your app ──▶ L1 Memory   ~0.006 ms  (per-process, sub-millisecond)
                │
             L2 Redis    ~0.2 ms    (shared across instances)
                │
             L3 Disk     ~2 ms      (optional, persistent)
                │
             Fetcher     runs once  (even under high concurrency)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a cache hit the fastest available layer responds and the result backfills any warmer layers automatically. On a miss the fetcher runs exactly once, no matter how many concurrent requests arrived at the same time.&lt;/p&gt;

&lt;p&gt;That last part — the single-flight guarantee — is where most hand-rolled hybrid caches fall apart.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;layercache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Memory only (no Redis needed):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Memory + Redis layered setup:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Redis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myapp:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API is identical no matter how many layers you stack: application code does not change when a layer is added or removed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Benchmark Results
&lt;/h2&gt;

&lt;p&gt;I ran layercache v1.2.9 against a real Redis 7 backend (Docker, not a mock) on Linux. Here is what the numbers look like.&lt;/p&gt;

&lt;h3&gt;
  
  
  Warm Hit Latency
&lt;/h3&gt;

&lt;p&gt;The most important number for a cache library is how fast the hit path is.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Avg ms&lt;/th&gt;
&lt;th&gt;P95 ms&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache (origin)&lt;/td&gt;
&lt;td&gt;5.175&lt;/td&gt;
&lt;td&gt;8.742&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory only&lt;/td&gt;
&lt;td&gt;0.009&lt;/td&gt;
&lt;td&gt;0.014&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;0.005&lt;/td&gt;
&lt;td&gt;0.006&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Memory-only warm hits averaged &lt;strong&gt;0.009ms&lt;/strong&gt;. With a Redis layer added, the hot path still resolves from L1 memory and came in at &lt;strong&gt;0.005ms&lt;/strong&gt; — both are firmly sub-millisecond and effectively the same class of latency for production purposes.&lt;/p&gt;
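&lt;p&gt;Sub-millisecond figures like these are easy to sanity-check in your own environment. A minimal timing helper (my own sketch, not part of the library) looks like this:&lt;/p&gt;

```typescript
import { performance } from 'node:perf_hooks'

// Micro-benchmark helper: run fn many times, collect per-call timings,
// and report the average and p95 in milliseconds.
function bench(fn: () => void, iterations = 10_000): { avg: number; p95: number } {
  for (let i = 0; i < 1_000; i++) fn() // warm-up so JIT noise stays out of the samples
  const samples: number[] = []
  for (let i = 0; i < iterations; i++) {
    const start = performance.now()
    fn()
    samples.push(performance.now() - start)
  }
  samples.sort((a, b) => a - b)
  const avg = samples.reduce((sum, s) => sum + s, 0) / samples.length
  return { avg, p95: samples[Math.floor(samples.length * 0.95)] }
}
```

&lt;p&gt;Timing your own hit path with a helper like this is a quick way to check that your payloads behave like the benchmark's.&lt;/p&gt;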

&lt;h3&gt;
  
  
  Stampede Prevention
&lt;/h3&gt;

&lt;p&gt;This is where the library earns its keep. 75 concurrent requests for the same missing key, repeated 5 times:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Avg ms&lt;/th&gt;
&lt;th&gt;Origin Executions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;409.5&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;375&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory only&lt;/td&gt;
&lt;td&gt;6.9&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;36.7&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Without a cache, 75 × 5 = 375 origin calls. With layercache, the fetcher ran exactly 5 times — once per round, regardless of concurrency. The layered case is slower than memory-only because it pays Redis coordination costs, but the correctness guarantee is the same.&lt;/p&gt;
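&lt;p&gt;The mechanism behind this is single-flight deduplication: concurrent callers for the same key share one in-flight promise instead of each invoking the fetcher. A minimal in-process sketch of the pattern (an illustration of the technique, not the library's actual implementation):&lt;/p&gt;

```typescript
// Concurrent callers for the same key join the existing in-flight
// promise; the fetcher runs once per miss, not once per caller.
const inFlight = new Map<string, Promise<unknown>>()

async function singleFlight<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const pending = inFlight.get(key)
  if (pending) return pending as Promise<T> // join the fetch already in progress
  const p = fetcher().finally(() => inFlight.delete(key))
  inFlight.set(key, p)
  return p
}
```

&lt;p&gt;With this in place, 75 concurrent calls for the same cold key resolve from a single fetcher invocation — the same property the benchmark measured.&lt;/p&gt;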

&lt;h3&gt;
  
  
  HTTP Throughput
&lt;/h3&gt;

&lt;p&gt;Under sustained load with &lt;code&gt;autocannon&lt;/code&gt; (40 connections, 8 seconds):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Route&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;th&gt;P97.5&lt;/th&gt;
&lt;th&gt;Req/s&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;249 ms&lt;/td&gt;
&lt;td&gt;271 ms&lt;/td&gt;
&lt;td&gt;161&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory only&lt;/td&gt;
&lt;td&gt;1.82 ms&lt;/td&gt;
&lt;td&gt;4 ms&lt;/td&gt;
&lt;td&gt;16,705&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;1.74 ms&lt;/td&gt;
&lt;td&gt;4 ms&lt;/td&gt;
&lt;td&gt;17,184&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Caching moved the service from &lt;strong&gt;161 req/s&lt;/strong&gt; to over &lt;strong&gt;17,000 req/s&lt;/strong&gt; — roughly a 100× improvement in throughput. Average latency dropped from 249ms to under 2ms. The memory-only and layered routes performed nearly identically in steady state because hot requests stay in L1 after warm-up.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Happens When Redis Is Slow or Dead?
&lt;/h2&gt;

&lt;p&gt;This is the question that separates a library you can actually run in production from one you can only trust in demos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Slow Redis
&lt;/h3&gt;

&lt;p&gt;I measured three request paths (L1 hot hit, L2 hit, cold miss) under increasing levels of injected TCP latency:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Redis Delay&lt;/th&gt;
&lt;th&gt;L1 hot hit&lt;/th&gt;
&lt;th&gt;L2 hit&lt;/th&gt;
&lt;th&gt;Cold miss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0ms&lt;/td&gt;
&lt;td&gt;0.407ms&lt;/td&gt;
&lt;td&gt;2.655ms&lt;/td&gt;
&lt;td&gt;12.259ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100ms&lt;/td&gt;
&lt;td&gt;0.119ms&lt;/td&gt;
&lt;td&gt;101.172ms&lt;/td&gt;
&lt;td&gt;504.167ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500ms&lt;/td&gt;
&lt;td&gt;0.196ms&lt;/td&gt;
&lt;td&gt;501.404ms&lt;/td&gt;
&lt;td&gt;2506.013ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The key insight:&lt;/strong&gt; L1 hot hits stayed fast regardless of Redis latency. If a request can be served from in-process memory, slow Redis does not matter at all. The latency penalty only applies when a request needs to reach L2 or perform a cold miss.&lt;/p&gt;

&lt;p&gt;Cold misses scaled sharply with injected delay because the request paid both the Redis round-trip and the write-back path; at 100ms of injected latency a cold miss cost ~504ms, several injected round-trips' worth. If your traffic pattern produces many cold misses, a slow Redis will drag your tail latency even with &lt;code&gt;gracefulDegradation&lt;/code&gt; enabled — the benchmark showed graceful and strict modes performing nearly identically under slow conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dead Redis
&lt;/h3&gt;

&lt;p&gt;Under a fully paused Redis instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Warm L1 hits: &lt;strong&gt;still worked&lt;/strong&gt; — both strict and graceful modes served from memory normally&lt;/li&gt;
&lt;li&gt;Cold misses: &lt;strong&gt;timed out at 2000ms&lt;/strong&gt; — both modes failed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is important to understand. &lt;code&gt;gracefulDegradation&lt;/code&gt; keeps warm traffic alive when Redis goes down. It does not create a fast fallback path for cold keys. New keys and expired keys that need a Redis write-back will stall until the timeout.&lt;/p&gt;
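&lt;p&gt;The warm-path behavior can be reasoned about with a small sketch of the underlying idea: race the layer read against a timeout and treat a slow or dead layer as a miss (my illustration of the pattern, not the library's internals):&lt;/p&gt;

```typescript
// Race a layer read against a timeout. A slow or dead layer degrades to
// "miss" (undefined) instead of propagating an error up the stack.
async function readWithTimeout<T>(
  read: () => Promise<T>,
  timeoutMs: number
): Promise<T | undefined> {
  try {
    return await Promise.race([
      read(),
      new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error('layer timeout')), timeoutMs)
      ),
    ])
  } catch {
    return undefined // a miss falls through to the next layer or the origin fetcher
  }
}
```

&lt;p&gt;Note that a miss still has to go somewhere: if the next stop is a write-back against a dead Redis, you get exactly the cold-miss stall described above.&lt;/p&gt;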

&lt;p&gt;Operationally this means: &lt;strong&gt;if your L1 TTL is shorter than your expected Redis outage window, you will see degraded cold-miss behavior.&lt;/strong&gt; Size your L1 TTLs with this in mind.&lt;/p&gt;




&lt;h2&gt;
  
  
  Queue Amplification Under Slow Redis
&lt;/h2&gt;

&lt;p&gt;A follow-up benchmark asked: if Redis is slow and 500 concurrent requests pile up on L2-hit traffic, does latency stay bounded or blow up?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Redis Delay&lt;/th&gt;
&lt;th&gt;Concurrency 1&lt;/th&gt;
&lt;th&gt;Concurrency 500&lt;/th&gt;
&lt;th&gt;Amplification&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100ms&lt;/td&gt;
&lt;td&gt;100.8ms&lt;/td&gt;
&lt;td&gt;128.9ms&lt;/td&gt;
&lt;td&gt;1.28×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500ms&lt;/td&gt;
&lt;td&gt;501.1ms&lt;/td&gt;
&lt;td&gt;515.8ms&lt;/td&gt;
&lt;td&gt;1.03×&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No runaway queue amplification. At 500 concurrent requests against a 500ms-latency Redis, wall-clock time only grew by about 15ms above the single-request baseline. The library appears to batch or overlap L2 requests within a shared Redis client rather than serializing them, which keeps the curve nearly flat.&lt;/p&gt;




&lt;h2&gt;
  
  
  Memory Pressure and Eviction
&lt;/h2&gt;

&lt;p&gt;With &lt;code&gt;maxSize: 25&lt;/code&gt; and 180 unique keys inserted (each with a 256KB payload), then revisiting the earliest 25 keys:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Evictions&lt;/th&gt;
&lt;th&gt;L1 Retained&lt;/th&gt;
&lt;th&gt;Revisit Avg&lt;/th&gt;
&lt;th&gt;Origin Fetches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;1.332ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Eviction was predictable. L1 held exactly &lt;code&gt;maxSize&lt;/code&gt; entries after the fill phase. When evicted keys were revisited, they reloaded from Redis L2 rather than hitting the origin — zero origin fetches despite L1 having evicted everything. GC activity was measurable (36 events, 78ms total), but no long stop-the-world pauses appeared at this payload size; the worst individual pause stayed in single-digit milliseconds.&lt;/p&gt;
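&lt;p&gt;The "retain exactly maxSize, evict the oldest" behavior is what a plain LRU over an insertion-ordered &lt;code&gt;Map&lt;/code&gt; gives you. A minimal sketch of that eviction policy (illustrative, not the library's internals):&lt;/p&gt;

```typescript
// Minimal LRU: Map preserves insertion order, so the first key is the
// least recently used as long as we re-insert on every access.
class LruCache<V> {
  private map = new Map<string, V>()
  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key)
    if (value !== undefined) {
      this.map.delete(key) // move to the most-recent position
      this.map.set(key, value)
    }
    return value
  }

  set(key: string, value: V): void {
    this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.maxSize) {
      // evict the oldest entry (first key in insertion order)
      const oldest = this.map.keys().next().value as string
      this.map.delete(oldest)
    }
  }

  get size(): number {
    return this.map.size
  }
}
```

&lt;p&gt;Filling a &lt;code&gt;maxSize: 25&lt;/code&gt; instance with 180 keys leaves exactly the 25 newest — the same shape the benchmark observed, with L2 absorbing the revisits.&lt;/p&gt;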




&lt;h2&gt;
  
  
  Multi-Instance and Cross-Process Features
&lt;/h2&gt;

&lt;p&gt;Single-process benchmarks only tell part of the story. layercache ships with primitives for distributed deployments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;RedisSingleFlightCoordinator&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;invalidationBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// separate connection for pub/sub&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="na"&gt;singleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisSingleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="na"&gt;gracefulDegradation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;retryAfterMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The edge benchmark verified both of these features work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-instance invalidation:&lt;/strong&gt; Instance B observed the updated value after Instance A invalidated and repopulated the key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distributed single-flight:&lt;/strong&gt; 60 concurrent requests split across two instances triggered exactly &lt;strong&gt;1&lt;/strong&gt; origin fetch total.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TTL expiry stampedes are also deduplicated. In the benchmark, 40 concurrent requests hitting the same expired key across 5 rounds produced only 5 origin executions — one per expiry round.&lt;/p&gt;




&lt;h2&gt;
  
  
  Framework Integrations
&lt;/h2&gt;

&lt;p&gt;layercache ships middleware and adapters for the major Node.js frameworks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Express:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;createExpressCacheMiddleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;keyResolver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`users:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NestJS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Module&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;imports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;CacheStackModule&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forRoot&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})]&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AppModule&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fastify, Hono, tRPC, GraphQL resolver wrappers, and Next.js App Router are also covered.&lt;/p&gt;




&lt;h2&gt;
  
  
  Payload Size Matters for Redis Reads
&lt;/h2&gt;

&lt;p&gt;One benchmark result worth highlighting explicitly: payload size has almost no effect on L1 memory hits, but has a large effect when Redis is on the read path.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;1KB avg&lt;/th&gt;
&lt;th&gt;1MB avg&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory hit&lt;/td&gt;
&lt;td&gt;0.012ms&lt;/td&gt;
&lt;td&gt;0.018ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis hit&lt;/td&gt;
&lt;td&gt;0.200ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4.170ms&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you are storing large objects — full page renders, heavy API responses — and relying on Redis as the primary read path without a warm L1 in front, you will feel the serialization and network overhead. Keep large objects in L1 where possible, or enable compression at the Redis layer.&lt;/p&gt;
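&lt;p&gt;Compression is worth quantifying before you rely on it. A sketch using Node's built-in &lt;code&gt;zlib&lt;/code&gt; shows the kind of shrinkage a repetitive JSON payload gets before it crosses the wire (generic Node code, independent of the library):&lt;/p&gt;

```typescript
import { gzipSync, gunzipSync } from 'node:zlib'

// Serialize + gzip a value before handing it to Redis; reverse on read.
// Repetitive JSON (lists of similar records) often shrinks 10x or more.
function pack(value: unknown): Buffer {
  return gzipSync(JSON.stringify(value))
}

function unpack<T>(buf: Buffer): T {
  return JSON.parse(gunzipSync(buf).toString('utf8')) as T
}
```

&lt;p&gt;The trade is CPU for bandwidth: for the 1MB-payload case above, shaving most of the transfer cost is usually a win, but measure with your own data.&lt;/p&gt;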




&lt;h2&gt;
  
  
  When to Use layercache
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Good fit:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Services handling repeated reads for the same keys under any meaningful concurrency&lt;/li&gt;
&lt;li&gt;Multi-instance deployments that need consistent cache state across processes&lt;/li&gt;
&lt;li&gt;Situations where Redis slowdowns or outages should degrade gracefully rather than cascade&lt;/li&gt;
&lt;li&gt;Teams that want observable caching with hits/misses/latency metrics without building the instrumentation themselves&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Less relevant:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pure write-heavy workloads with no repeated reads&lt;/li&gt;
&lt;li&gt;Environments where an in-process memory cache is prohibited for compliance reasons&lt;/li&gt;
&lt;li&gt;Very simple single-key caches where a plain &lt;code&gt;Map&lt;/code&gt; with a TTL is already sufficient&lt;/li&gt;
&lt;/ul&gt;
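&lt;p&gt;For that last case, a "plain &lt;code&gt;Map&lt;/code&gt; with a TTL" really is only a dozen lines:&lt;/p&gt;

```typescript
// A tiny single-process TTL cache: fine when you have one instance,
// no stampede risk, and no need for shared state across processes.
class TtlMap<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>()
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key)
    if (!entry) return undefined
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(key) // lazily drop expired entries
      return undefined
    }
    return entry.value
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs })
  }
}
```

&lt;p&gt;What it lacks is exactly what the benchmarks above exercise: stampede protection, cross-process sharing, and bounded memory. The moment you need any of those, you have outgrown it.&lt;/p&gt;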




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Key number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Warm L1 hit latency&lt;/td&gt;
&lt;td&gt;~0.006ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTTP throughput gain (no cache → cached)&lt;/td&gt;
&lt;td&gt;~100×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stampede dedup (75 concurrent, 5 rounds)&lt;/td&gt;
&lt;td&gt;375 fetches → 5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Distributed single-flight (60 requests, 2 instances)&lt;/td&gt;
&lt;td&gt;60 fetches → 1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slow Redis impact on hot L1 traffic&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dead Redis impact on warm L1 traffic&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dead Redis impact on cold-miss traffic&lt;/td&gt;
&lt;td&gt;Timeout&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The library makes a clear promise: stack your layers, wire up your fetcher, and it handles the coordination. The benchmarks back that promise up on a real backend.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;npm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/api.md" rel="noopener noreferrer"&gt;API Reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/migration-guide.md" rel="noopener noreferrer"&gt;Migration Guide from node-cache-manager / keyv / cacheable&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>typescript</category>
      <category>node</category>
      <category>npm</category>
      <category>redis</category>
    </item>
    <item>
      <title>Beyond Basic Caching: How layercache Eliminates Cache Stampedes in Node.js</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Thu, 09 Apr 2026 18:21:03 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/beyond-basic-caching-how-layercache-eliminates-cache-stampedes-in-nodejs-4gi2</link>
      <guid>https://dev.to/flyingsquirrel0419/beyond-basic-caching-how-layercache-eliminates-cache-stampedes-in-nodejs-4gi2</guid>
      <description>&lt;p&gt;Every Node.js developer knows the caching drill. You start with an in-memory &lt;code&gt;Map&lt;/code&gt;, graduate to Redis when you scale horizontally, and eventually find yourself wiring up a fragile hybrid system that breaks in production at 2 AM.&lt;/p&gt;

&lt;p&gt;I recently discovered &lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;&lt;code&gt;layercache&lt;/code&gt;&lt;/a&gt;—a multi-layer caching toolkit that promises to handle the messy parts (stampede prevention, graceful degradation, distributed consistency) while keeping the API simple. But does it deliver?&lt;/p&gt;

&lt;p&gt;I ran four comprehensive benchmark suites against real Redis instances to find out. Here are the results.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture: L1 + L2 + Coordination
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;layercache&lt;/code&gt; treats caching as a stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────┐
│  L1 Memory  (~0.01ms, per-process)  │
│  L2 Redis   (~0.5ms, shared)        │
│  L3 Disk    (~2ms, persistent)      │
└─────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you request a key, it checks L1 first, then L2, then your database. The clever part? &lt;strong&gt;All layers backfill automatically&lt;/strong&gt;—if you hit L2, layercache populates L1 for the next request. If you hit the database, it writes to both layers.&lt;/p&gt;

&lt;p&gt;But the real magic happens when 100 requests arrive simultaneously for the same expired key.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark 1: The Stampede Test
&lt;/h2&gt;

&lt;p&gt;The "thundering herd" problem is where most caching libraries fail. When a popular key expires, 100 concurrent requests can trigger 100 database queries before the first one repopulates the cache.&lt;/p&gt;

&lt;p&gt;I tested 75 concurrent requests across 5 runs (375 total requests) for a cold key:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setup&lt;/th&gt;
&lt;th&gt;Origin Fetches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;375&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; &lt;code&gt;layercache&lt;/code&gt;'s single-flight coordination ensured the fetcher ran exactly &lt;strong&gt;once&lt;/strong&gt; per expiry round, not 75 times. The library creates a coordination lock in Redis (or memory) so that concurrent requests wait for the first fetcher to complete rather than hammering your database.&lt;/p&gt;

&lt;p&gt;Latency under this stampede:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;th&gt;P95&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;409ms&lt;/td&gt;
&lt;td&gt;429ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;6.9ms&lt;/td&gt;
&lt;td&gt;13.5ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Layered&lt;/td&gt;
&lt;td&gt;36.7ms&lt;/td&gt;
&lt;td&gt;43.6ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The layered case is slower than memory-only (it pays Redis coordination costs), but it preserves the critical property: &lt;strong&gt;your database only feels one request&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark 2: Real HTTP Throughput
&lt;/h2&gt;

&lt;p&gt;Theory is nice, but what about real HTTP servers? I set up three Express routes—no cache, memory-only, and layered—and hit them with &lt;code&gt;autocannon&lt;/code&gt; (40 connections, 8 seconds):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Route&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;th&gt;P97.5&lt;/th&gt;
&lt;th&gt;Req/sec&lt;/th&gt;
&lt;th&gt;Throughput&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/nocache&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;249ms&lt;/td&gt;
&lt;td&gt;271ms&lt;/td&gt;
&lt;td&gt;161&lt;/td&gt;
&lt;td&gt;57 KB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/memory&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1.82ms&lt;/td&gt;
&lt;td&gt;4ms&lt;/td&gt;
&lt;td&gt;16,705&lt;/td&gt;
&lt;td&gt;5.9 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/layered&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1.74ms&lt;/td&gt;
&lt;td&gt;4ms&lt;/td&gt;
&lt;td&gt;17,184&lt;/td&gt;
&lt;td&gt;6.1 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;That's a 100x throughput increase&lt;/strong&gt; with minimal latency difference between memory-only and Redis-backed layers. Once warmed, L1 memory serves the hot path while Redis provides the shared backing store for multi-instance deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark 3: When Redis Goes Wrong
&lt;/h2&gt;

&lt;p&gt;Production caches fail. I tested two failure modes:&lt;/p&gt;

&lt;h3&gt;
  
  
  Slow Redis (500ms latency injection)
&lt;/h3&gt;

&lt;p&gt;Using a TCP proxy to add synthetic latency:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Single Request&lt;/th&gt;
&lt;th&gt;500 Concurrent&lt;/th&gt;
&lt;th&gt;Amplification&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;L2 hit (strict)&lt;/td&gt;
&lt;td&gt;501ms&lt;/td&gt;
&lt;td&gt;515ms&lt;/td&gt;
&lt;td&gt;1.03x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L2 hit (graceful)&lt;/td&gt;
&lt;td&gt;501ms&lt;/td&gt;
&lt;td&gt;512ms&lt;/td&gt;
&lt;td&gt;1.02x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Key finding:&lt;/strong&gt; Under slow Redis, wall-clock time stayed close to the single-request baseline even at 500 concurrent requests. The whole batch finished in roughly 0.002× the naive "latency × N" serial estimate (500 × 500ms = 250s predicted vs. ~515ms observed).&lt;/p&gt;

&lt;p&gt;However, &lt;strong&gt;cold misses were brutal&lt;/strong&gt;: With 500ms Redis latency, a cache miss took ~2.5s because it paid the slow Redis cost plus the fetch/write cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dead Redis (complete outage)
&lt;/h3&gt;

&lt;p&gt;I paused the Redis container with Docker:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Success&lt;/th&gt;
&lt;th&gt;Latency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Strict hot hit&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;0.17ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graceful hot hit&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;0.07ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Strict cold miss&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;Timeout (2000ms)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graceful cold miss&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;Timeout (2000ms)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Critical insight:&lt;/strong&gt; &lt;code&gt;gracefulDegradation&lt;/code&gt; did &lt;strong&gt;not&lt;/strong&gt; turn a cold miss into a fast memory-only fallback when Redis was completely frozen. Hot L1 keys survived the outage beautifully (served from memory), but new or expired keys stalled until timeout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational takeaway:&lt;/strong&gt; Warm your critical keys before Redis has issues. Hot L1 traffic is your lifeline during Redis outages.&lt;/p&gt;
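&lt;p&gt;A warm-up pass is short enough to run at boot or on a timer. A generic sketch (the cache interface here is a stand-in shape, not layercache's actual types):&lt;/p&gt;

```typescript
// Pre-load critical keys so they sit in L1 before Redis has problems.
// ReadThroughCache is an illustrative interface, not the library's real one.
interface ReadThroughCache {
  get(key: string, fetcher: () => Promise<unknown>): Promise<unknown>
}

async function warmKeys(
  cache: ReadThroughCache,
  keys: string[],
  fetcherFor: (key: string) => () => Promise<unknown>
): Promise<void> {
  // Load in parallel; each get() populates every cache layer on a miss.
  await Promise.all(keys.map(key => cache.get(key, fetcherFor(key))))
}
```

&lt;p&gt;Run this against your highest-traffic keys and a Redis outage degrades to the hot-hit row of the table above instead of the timeout row.&lt;/p&gt;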

&lt;h2&gt;
  
  
  Benchmark 4: Memory Pressure and Eviction
&lt;/h2&gt;

&lt;p&gt;What happens when L1 memory fills up? I set &lt;code&gt;maxSize: 25&lt;/code&gt; and inserted 180 unique 256KB payloads:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Evictions&lt;/td&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L1 Retained&lt;/td&gt;
&lt;td&gt;25 (exactly maxSize)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Origin Fetches on Revisit&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GC Pauses (max)&lt;/td&gt;
&lt;td&gt;6.1ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When revisiting the oldest keys (which were evicted from L1), they were seamlessly reloaded from Redis L2—not the origin. No cache stampede, no origin amplification.&lt;/p&gt;

&lt;p&gt;The GC impact was measurable (36 events, 78ms total) but not catastrophic—max pause stayed at 6ms, far from stop-the-world territory.&lt;/p&gt;
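&lt;p&gt;The retention behavior is classic LRU bounded by &lt;code&gt;maxSize&lt;/code&gt;: insert more than the limit and the least recently used entries fall out, leaving exactly &lt;code&gt;maxSize&lt;/code&gt; behind. A minimal sketch of the idea (not the library's implementation):&lt;/p&gt;

```typescript
// Minimal LRU sketch bounded by maxSize, illustrating the eviction
// behavior measured above. Not the library's implementation.
class TinyLru<V> {
  private map = new Map<string, V>();
  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Refresh recency: a Map preserves insertion order.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry (first in iteration order).
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }

  get size(): number {
    return this.map.size;
  }
}
```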

&lt;h2&gt;
  
  
  Edge Cases: TTL Expiry and Distributed Systems
&lt;/h2&gt;

&lt;h3&gt;
  
  
  TTL Stampede Protection
&lt;/h3&gt;

&lt;p&gt;I tested 40 concurrent requests hitting a key that just expired (TTL: 1s, waited 1.1s):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Fetch Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;5 (one per expiry round)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Layered&lt;/td&gt;
&lt;td&gt;5 (one per expiry round)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Even with TTL expiry triggering simultaneously across multiple rounds, deduplication held firm.&lt;/p&gt;
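&lt;p&gt;The mechanism behind those numbers is single-flight deduplication: concurrent callers for the same key share one in-flight promise, so the fetcher runs once per expiry round. A minimal sketch of the idea (not the library's code):&lt;/p&gt;

```typescript
// Single-flight sketch: concurrent callers for the same key share one
// in-flight promise, so the fetcher executes once per miss.
const inFlight = new Map<string, Promise<unknown>>();

function singleFlight<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>;

  // First caller starts the fetch; the entry is cleared once it settles,
  // so the next expiry round triggers a fresh fetch.
  const p = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```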

&lt;h3&gt;
  
  
  Multi-Instance Consistency
&lt;/h3&gt;

&lt;p&gt;Running two Node.js instances with shared Redis:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Invalidation Bus:&lt;/strong&gt; When Instance A updated a key, Instance B's L1 cache was invalidated via Redis Pub/Sub within milliseconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distributed Single-Flight:&lt;/strong&gt; 60 concurrent requests across both instances for the same missing key resulted in exactly &lt;strong&gt;1&lt;/strong&gt; origin fetch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the holy grail for microservices: you get per-process L1 speed with cluster-wide consistency.&lt;/p&gt;
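&lt;p&gt;The invalidation-bus pattern itself is easy to sketch in-process, with an &lt;code&gt;EventEmitter&lt;/code&gt; standing in for Redis Pub/Sub (the real setup publishes the same message across the network):&lt;/p&gt;

```typescript
import { EventEmitter } from 'node:events';

// In-process sketch of the invalidation-bus pattern: a shared channel
// (Redis Pub/Sub in the real setup, an EventEmitter here) tells every
// instance to drop its L1 copy when one instance writes.
const bus = new EventEmitter();

function makeInstance() {
  const l1 = new Map<string, string>();
  // Every instance listens for invalidations and drops its local copy.
  bus.on('invalidate', (key: string) => l1.delete(key));
  return {
    l1,
    update(key: string, value: string): void {
      bus.emit('invalidate', key); // peers drop their stale L1 entries
      l1.set(key, value);          // writer keeps the fresh value
    },
  };
}
```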

&lt;h2&gt;
  
  
  Payload Size Sensitivity
&lt;/h2&gt;

&lt;p&gt;Does caching large objects hurt?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setup&lt;/th&gt;
&lt;th&gt;1KB Avg&lt;/th&gt;
&lt;th&gt;1MB Avg&lt;/th&gt;
&lt;th&gt;P95 (1MB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;0.012ms&lt;/td&gt;
&lt;td&gt;0.018ms&lt;/td&gt;
&lt;td&gt;0.023ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis-only&lt;/td&gt;
&lt;td&gt;0.200ms&lt;/td&gt;
&lt;td&gt;4.170ms&lt;/td&gt;
&lt;td&gt;10.11ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Large payloads hurt &lt;strong&gt;only when Redis is on the hot path&lt;/strong&gt;. Memory hits barely changed between 1KB and 1MB, but Redis hits jumped 20x due to serialization and network transfer. Keep your L1 &lt;code&gt;maxSize&lt;/code&gt; generous for large objects.&lt;/p&gt;
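&lt;p&gt;You can get a feel for the serialization half of that cost without Redis at all by timing a JSON round trip; network transfer adds more on top. The payload sizes here mirror the benchmark:&lt;/p&gt;

```typescript
// Rough sketch of the per-request serialize/deserialize cost a Redis
// layer pays; payload sizes are illustrative (network cost not included).
function jsonRoundTripMs(payload: unknown): number {
  const start = performance.now();
  JSON.parse(JSON.stringify(payload));
  return performance.now() - start;
}

const oneKb = { data: 'x'.repeat(1_024) };
const oneMb = { data: 'x'.repeat(1_024 * 1_024) };
console.log(`1KB: ${jsonRoundTripMs(oneKb).toFixed(3)}ms, 1MB: ${jsonRoundTripMs(oneMb).toFixed(3)}ms`);
```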

&lt;h2&gt;
  
  
  Practical Takeaways
&lt;/h2&gt;

&lt;p&gt;After running these benchmarks, here are my operational recommendations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use layered caching for multi-instance deployments.&lt;/strong&gt; The hot-hit latency is identical to memory-only (~0.005ms), but you get distributed consistency and stampede prevention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Warm your cache before traffic spikes.&lt;/strong&gt; Cold misses under slow Redis are painful (~2.5s), and dead Redis won't gracefully degrade for new keys.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set generous L1 limits for large payloads.&lt;/strong&gt; 1MB objects in Redis are 200x slower than in memory. Let L1 absorb that cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't rely on graceful degradation for cold keys.&lt;/strong&gt; It protects hot L1 traffic during outages, but new keys will still time out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trust the stampede prevention.&lt;/strong&gt; The library correctly handled 75→1 fetch reduction even with TTL expiry and cross-instance coordination.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Basic setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Redis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;// Automatic stampede prevention&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For distributed deployments, wire up the invalidation bus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisSingleFlightCoordinator&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;invalidationBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; 
    &lt;span class="na"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="na"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
  &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;singleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisSingleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;gracefulDegradation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;retryAfterMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;layercache&lt;/code&gt; delivers on its promises. The benchmark data shows it handles the three hard problems of production caching—&lt;strong&gt;stampede prevention&lt;/strong&gt;, &lt;strong&gt;graceful degradation&lt;/strong&gt;, and &lt;strong&gt;distributed consistency&lt;/strong&gt;—without sacrificing the performance of simple in-memory caching.&lt;/p&gt;

&lt;p&gt;The 100x HTTP throughput improvement and zero-fetch stampede protection make it a strong candidate for any Node.js service moving beyond a single instance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you solved cache stampedes differently? I'd love to hear your war stories in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/layercache/" rel="noopener noreferrer"&gt;npm: layercache&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Benchmark environment: Node.js v20.20.1, Redis 7-alpine, Linux 5.15&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>typescript</category>
      <category>caching</category>
      <category>performance</category>
      <category>redis</category>
    </item>
    <item>
      <title>I built a multi-layer caching library for Node.js — would love your feedback!</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Wed, 08 Apr 2026 17:51:00 +0000</pubDate>
      <link>https://dev.to/flyingsquirrel0419/i-built-a-multi-layer-caching-library-for-nodejs-would-love-your-feedback-2gm</link>
      <guid>https://dev.to/flyingsquirrel0419/i-built-a-multi-layer-caching-library-for-nodejs-would-love-your-feedback-2gm</guid>
      <description>&lt;p&gt;Hey dev.to community! 👋&lt;/p&gt;

&lt;p&gt;I've been working on a side project for a while now and finally got it to a point where I feel comfortable sharing it publicly. It's called &lt;strong&gt;layercache&lt;/strong&gt; — a multi-layer caching toolkit for Node.js.&lt;/p&gt;

&lt;p&gt;I'd really appreciate any feedback, honest criticism, or ideas from folks who deal with caching in production. Here's the quick overview:&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I built this
&lt;/h2&gt;

&lt;p&gt;Almost every Node.js service I've worked on eventually hits the same caching problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memory-only cache&lt;/strong&gt; → Fast, but each instance has its own isolated view of data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis-only cache&lt;/strong&gt; → Shared across instances, but every request still pays a network round-trip&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hand-rolled hybrid&lt;/strong&gt; → Works at first, then you need stampede prevention, tag invalidation, stale serving, observability... and it spirals fast&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I couldn't find a library that handled all of this cleanly in one place, so I built one.&lt;/p&gt;




&lt;h2&gt;
  
  
  What layercache does
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;layercache&lt;/strong&gt; lets you stack multiple cache layers (Memory → Redis → Disk) behind a single unified API. On a cache hit, it serves from the fastest available layer and backfills the rest. On a miss, the fetcher runs &lt;strong&gt;exactly once&lt;/strong&gt; — even under high concurrency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;              ┌───────────────────────────────────────┐
your app ----&amp;gt;│             layercache                │
              │                                       │
              │  L1 Memory    ~0.01ms  (per-process)  │
              │      |                                │
              │  L2 Redis     ~0.5ms   (shared)       │
              │      |                                │
              │  L3 Disk      ~2ms     (persistent)   │
              │      |                                │
              │  Fetcher      ~20ms    (runs once)    │
              └───────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
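&lt;p&gt;The hit-and-backfill flow can be sketched with plain &lt;code&gt;Map&lt;/code&gt;s standing in for the layers (a conceptual sketch, not the actual implementation):&lt;/p&gt;

```typescript
// Sketch of the read-through/backfill flow above, with plain Maps
// standing in for the Memory/Redis/Disk layers.
function layeredGet<T>(
  layers: Array<Map<string, T>>, // ordered fastest -> slowest
  key: string,
  fetcher: () => T,
): T {
  for (let i = 0; i < layers.length; i++) {
    if (layers[i].has(key)) {
      const value = layers[i].get(key)!;
      // Hit: backfill every faster layer that missed.
      for (let j = 0; j < i; j++) layers[j].set(key, value);
      return value;
    }
  }
  // Miss everywhere: run the fetcher and fill all layers.
  const value = fetcher();
  for (const layer of layers) layer.set(key, value);
  return value;
}
```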



&lt;h3&gt;
  
  
  Basic usage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;layercache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Redis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;       &lt;span class="c1"&gt;// L1: in-process&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;  &lt;span class="c1"&gt;// L2: shared&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;// Read-through: fetcher runs once, all layers filled automatically&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also start with just memory (no Redis required) and add layers as your needs grow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key features I'm most proud of
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stampede prevention&lt;/strong&gt; — 100 concurrent requests for the same key trigger only 1 fetcher execution. Distributed dedup via Redis locks works across multiple server instances too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tag-based invalidation&lt;/strong&gt; — Invalidate groups of related keys by tag, including across all layers at once. Useful for things like "invalidate all user-related cache entries."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stale-while-revalidate / stale-if-error&lt;/strong&gt; — Serve the stale cached value immediately while refreshing in the background, or keep serving stale data when the upstream is down.&lt;/p&gt;
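&lt;p&gt;Conceptually, the pattern looks like this (a sketch of the idea, not layercache's internals):&lt;/p&gt;

```typescript
// Stale-while-revalidate sketch: serve the cached value immediately,
// refresh stale entries in the background, and keep serving the old
// value if the refresh fails (stale-if-error).
interface Entry<T> { value: T; freshUntil: number }

function swrGet<T>(
  store: Map<string, Entry<T>>,
  key: string,
  ttlMs: number,
  fetcher: () => Promise<T>,
): T | Promise<T> {
  const entry = store.get(key);
  if (entry) {
    if (Date.now() >= entry.freshUntil) {
      // Stale: kick off a background refresh, but answer right away.
      fetcher()
        .then((v) => store.set(key, { value: v, freshUntil: Date.now() + ttlMs }))
        .catch(() => { /* stale-if-error: keep the old value */ });
    }
    return entry.value;
  }
  // Cold miss: the caller has to wait for the fetcher.
  return fetcher().then((v) => {
    store.set(key, { value: v, freshUntil: Date.now() + ttlMs });
    return v;
  });
}
```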

&lt;p&gt;&lt;strong&gt;Framework integrations&lt;/strong&gt; — Middleware helpers for Express, Fastify, Hono, tRPC, GraphQL, and a NestJS module with a &lt;code&gt;@Cacheable()&lt;/code&gt; decorator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability out of the box&lt;/strong&gt; — Prometheus exporter, OpenTelemetry tracing, per-layer latency metrics, event hooks, and an HTTP stats endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Admin CLI&lt;/strong&gt; — &lt;code&gt;npx layercache stats|keys|invalidate&lt;/code&gt; for Redis-backed caches.&lt;/p&gt;




&lt;h2&gt;
  
  
  NestJS example (because I use NestJS a lot)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app.module.ts&lt;/span&gt;
&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Module&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;imports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;CacheStackModule&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forRoot&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AppModule&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="c1"&gt;// user.service.ts&lt;/span&gt;
&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Injectable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(@&lt;/span&gt;&lt;span class="nd"&gt;InjectCacheStack&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`user:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Benchmark numbers (on my machine, grain of salt)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;L1 memory hit&lt;/td&gt;
&lt;td&gt;~0.006 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L2 Redis hit&lt;/td&gt;
&lt;td&gt;~0.020 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No cache (simulated DB)&lt;/td&gt;
&lt;td&gt;~1.08 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Stampede prevention: 100 concurrent requests → 1 fetcher execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'm looking for feedback on
&lt;/h2&gt;

&lt;p&gt;Honestly, everything! But a few things I'm specifically unsure about:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;API design&lt;/strong&gt; — Does the &lt;code&gt;CacheStack&lt;/code&gt; + layer composition model feel intuitive? Are there footguns I'm missing?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The feature set&lt;/strong&gt; — Is this too much? Too little? Are there things here that should just be separate libraries?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production readiness&lt;/strong&gt; — What would you need to see before using something like this in production? (more tests? better docs? battle-tested examples?)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Naming / discoverability&lt;/strong&gt; — &lt;code&gt;layercache&lt;/code&gt; as a name... does it communicate what it does clearly enough?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anything else&lt;/strong&gt; — I'm sure there are patterns or edge cases I haven't thought of.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;📦 npm: &lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;npmjs.com/package/layercache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🐙 GitHub: &lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;github.com/flyingsquirrel0419/layercache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📖 Docs: &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/api.md" rel="noopener noreferrer"&gt;API Reference&lt;/a&gt; | &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/tutorial.md" rel="noopener noreferrer"&gt;Tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you try it out or browse the source and have thoughts — good, bad, or indifferent — I'm all ears. Comments here, GitHub Issues, or Discussions all work.&lt;/p&gt;

&lt;p&gt;Thanks for reading! 🙏&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>redis</category>
    </item>
  </channel>
</rss>
