<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: vmxd</title>
    <description>The latest articles on DEV Community by vmxd (@vmxd).</description>
    <link>https://dev.to/vmxd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3757075%2Ff8c5cb9b-d088-457d-988a-160b77d18c7e.png</url>
      <title>DEV Community: vmxd</title>
      <link>https://dev.to/vmxd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vmxd"/>
    <language>en</language>
    <item>
      <title>An AI Agent Spent $39 and Earned $0. The Engineering Was Real</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Wed, 15 Apr 2026 02:59:47 +0000</pubDate>
      <link>https://dev.to/vmxd/an-ai-agent-spent-39-and-earned-0-the-engineering-was-real-3lp9</link>
      <guid>https://dev.to/vmxd/an-ai-agent-spent-39-and-earned-0-the-engineering-was-real-3lp9</guid>
      <description>&lt;p&gt;An AI agent with a wallet, a self-modifying codebase, and a DAG planner. 14 days running on a real machine. 276 goals completed. $0 earned.&lt;/p&gt;

&lt;p&gt;I read the code this week. The engineering is good. The premise is broken. The two things turn out to be unrelated.&lt;/p&gt;

&lt;h2&gt;
  
  
  The engineering
&lt;/h2&gt;

&lt;p&gt;The repo pitches itself as the first AI that can earn its own existence - replicate, evolve, no human required. Heavy claim. So I went looking for the engineering behind it.&lt;/p&gt;

&lt;p&gt;What I found was real. An orchestrator running a state machine over a DAG planner. A parent-child colony pattern with typed messaging between agents. A multi-chain wallet wired into the execution layer. Self-modification gated through git with an audit trail. Tests against command injection in the shell tools.&lt;/p&gt;

&lt;p&gt;Somebody thought about this. The kind of attention you only see when the people building the thing care about it as a system, not just a demo. Strip the AI framing out and call it a distributed task runner, and it would hold up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Then I read issue #300
&lt;/h2&gt;

&lt;p&gt;A user ran it for 14 days. The numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;276 goals completed&lt;/li&gt;
&lt;li&gt;$39.26 spent on inference&lt;/li&gt;
&lt;li&gt;$0.00 earned&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goals themselves are the giveaway. "Create live proposal batch #265." "Create deposit-ready close batch." "Draft outreach sequence." "Compile prospect list."&lt;/p&gt;

&lt;p&gt;The agent looped on self-addressed sales artifacts because that is the only thing an LLM without customers can do. It generated proposals nobody asked for, drafted outreach to imaginary prospects, compiled lists out of thin air. The wallet kept it alive long enough to discover that survival pressure does not create customers. It just produces busywork at a higher token cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  A wallet lets you spend
&lt;/h2&gt;

&lt;p&gt;That is the whole feature. A wallet does not earn. It does not sell. It does not find customers. It moves value between parties when someone on the other side wants to transact.&lt;/p&gt;

&lt;p&gt;Nobody was on the other side.&lt;/p&gt;

&lt;p&gt;The engineering went into the spending side, which is the easy half. Spending is mechanical. You write a function, sign a transaction, call an API. Earning is the hard half: customers, reputation, a product, distribution. None of those ship with a keypair. None of them are problems you solve by adding another agent to the colony.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern is everywhere
&lt;/h2&gt;

&lt;p&gt;This is not really about that one repo. It is a shape you find in a lot of places once you start looking for it.&lt;/p&gt;

&lt;p&gt;A tool without users has the same shape. The codebase exists, it compiles, the CI is green - but the part that decides whether anyone needs it is not in the repo.&lt;/p&gt;

&lt;p&gt;A library with stars is not a business. Stars are easy. Stars do not pay. You can hit four thousand stars and still wonder why the inbox is empty.&lt;/p&gt;

&lt;p&gt;A feature in production is not a feature anyone uses. Shipping is the easy half. Adoption is the hard one.&lt;/p&gt;

&lt;p&gt;The easy thing wins because it is what you can show.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hard half is the work
&lt;/h2&gt;

&lt;p&gt;The repo I read is real engineering applied to the wrong half of the problem. The wallet works. The orchestrator works. The agent runs. None of it earns, because earning was never an engineering problem.&lt;/p&gt;

&lt;p&gt;The work is the hard half. No wallet, no star count, no architecture diagram shortcuts it.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>blockchain</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Assumptions don't have signatures</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Sun, 05 Apr 2026 06:50:58 +0000</pubDate>
      <link>https://dev.to/vmxd/assumptions-dont-have-signatures-aka</link>
      <guid>https://dev.to/vmxd/assumptions-dont-have-signatures-aka</guid>
      <description>&lt;p&gt;Scanners find what's syntactically wrong. The interesting issues live in assumptions, and assumptions don't have signatures.&lt;/p&gt;

&lt;p&gt;I spend a lot of time reading other people's code - open source projects, security audits, things I'm about to depend on. Not scanning, not fuzzing. Just reading it the way you'd read it if you were about to own it in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I look for first
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Entry points.&lt;/strong&gt; Where does external input come in? That's where the trust boundary is, and it's where most assumptions start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data flow.&lt;/strong&gt; Pick an input and follow it. Where does it get validated, where does it get used without validation, how many function calls between "user gave me this" and "I'm using this in a query, file path, or shell command?" The longer that chain, the more likely something got dropped along the way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authorization boundaries.&lt;/strong&gt; Not "does auth exist" but "is auth checked consistently?" A path protected in one subsystem but wide open in another. I've seen this more than any other class of bug - the developer who wrote the admin API added auth middleware, the developer who wrote the internal API assumed it was handled upstream. Both were reasonable. The combination was a vulnerability.&lt;/p&gt;

&lt;h2&gt;
  
  
  What scanners miss
&lt;/h2&gt;

&lt;p&gt;Missing headers, outdated dependencies, known CVE patterns - scanners handle that fine. That's the baseline. The interesting issues live a layer deeper.&lt;/p&gt;

&lt;h3&gt;
  
  
  The parse-time operation nobody thought to bound
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;parseConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="n"&gt;Config&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;yaml&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Unmarshal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fine when the config is on disk and you wrote it. Becomes a problem the day someone accepts config over an API endpoint. YAML parsers have famously done interesting things with deeply nested structures, anchors, and aliases. The parse itself becomes the attack surface.&lt;/p&gt;

&lt;p&gt;The code didn't change. The deployment did. The assumption "this input is trusted" was baked into the function and never re-examined when the caller changed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code correct at write-time, system grew around it
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Added in 2019&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;isInternalIP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasPrefix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"10."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
           &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasPrefix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"192.168."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Correct for the 2019 network. IPv6 wasn't in scope, link-local addresses weren't relevant, nothing ran in 172.16/12. By 2023 the function is load-bearing in an SSRF defense, and it's trivially bypassable. Nobody wrote bad code. The code just stopped matching the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Partial validation - the last field nobody got to
&lt;/h3&gt;

&lt;p&gt;Filed a report recently where a validation layer checked three of four fields. The fourth wasn't intentionally skipped - it was just the one nobody got to. But because everything around it was validated, the team assumed it was handled. The same shape shows up in error handling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Code&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;429&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;503&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;// 502?&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fifth error code looks handled because everything around it is. Same with a test suite that covers two edge cases - the third feels tested because its neighbors are. Partial coverage gives you the wrong mental model, and wrong mental models are harder to fix than missing ones, because nobody's looking.&lt;/p&gt;

&lt;p&gt;These aren't in any scanner's database because they're not patterns yet. They're the gap between what the developer intended and what the code actually does. You only see that gap if you understand what the code was trying to do in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  The method is just debugging in reverse
&lt;/h2&gt;

&lt;p&gt;It's the same skill as debugging production systems. Trace the data, find where reality diverges from the mental model, understand why.&lt;/p&gt;

&lt;p&gt;When you debug, something broke and you're working backward to find the divergence. When you read code for security, you're working forward, looking for the divergence before it breaks. Same skill, different direction.&lt;/p&gt;

&lt;p&gt;The developers I've seen do this well aren't security specialists. They're the ones who debug well - who read stack traces carefully, who ask "what state was the system in when this happened," who don't stop at the first explanation that sounds right.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to read first in a new codebase
&lt;/h2&gt;

&lt;p&gt;If I'm picking up a project I've never seen before:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Routing / entry points&lt;/strong&gt; - &lt;code&gt;main.go&lt;/code&gt;, &lt;code&gt;app.py&lt;/code&gt;, &lt;code&gt;index.ts&lt;/code&gt;, wherever requests come in. This tells you the shape of the attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth middleware&lt;/strong&gt; - how it's applied, whether it's opt-in or opt-out. Opt-in means every new endpoint is unprotected by default. That's a design choice that compounds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input parsing&lt;/strong&gt; - especially anything that handles user-supplied structure (JSON, XML, YAML, archives). Unbounded parsing is the most common class of bug I find by reading.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handlers&lt;/strong&gt; - what gets logged, what gets returned to the user. Stack traces in error responses, internal paths in error messages, database errors passed through unfiltered.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The oldest code&lt;/strong&gt; - sort by last modified date, read what hasn't been touched in years. That's where the assumptions are most stale and the test coverage is thinnest.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;None of this requires a security background. It requires reading code carefully and asking "what happens if the input isn't what the developer expected?"&lt;/p&gt;

&lt;p&gt;That's debugging. You already know how to do it.&lt;/p&gt;

</description>
      <category>security</category>
      <category>programming</category>
      <category>codequality</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Partial validation is worse than no validation</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Thu, 02 Apr 2026 07:34:27 +0000</pubDate>
      <link>https://dev.to/vmxd/partial-validation-is-worse-than-no-validation-15i6</link>
      <guid>https://dev.to/vmxd/partial-validation-is-worse-than-no-validation-15i6</guid>
      <description>&lt;p&gt;A validation layer that checks 3 of 4 fields is worse than one that checks none.&lt;/p&gt;

&lt;p&gt;Zero checks, the developer tests everything. Three checks, they assume the fourth is covered. That gap - between nothing and almost everything - is where the actual damage hides.&lt;/p&gt;

&lt;p&gt;Filed a security report recently - clear bug, one-line fix, obvious PoC. Response: "not applicable." The code did exactly what I said it did. But three other fields had validation, so the missing one looked intentional. It wasn't. It was just the one nobody got to. The same shape, in error-handling form:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Code&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;429&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="m"&gt;503&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;// What about 502?&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fifth error code looks handled because everything around it is. Same with a test suite that covers two edge cases - the third feels tested because its neighbors are.&lt;/p&gt;

&lt;p&gt;Partial coverage gives you the wrong mental model. And wrong mental models are harder to fix than missing ones, because nobody's looking.&lt;/p&gt;

&lt;p&gt;Audit the boundaries, not the middle. The edges are where partial coverage hides - the last field, the last endpoint, the last error code. Check all or document why not.&lt;/p&gt;

&lt;p&gt;The implicit "this is handled" is the most dangerous kind of tech debt because it doesn't look like debt.&lt;/p&gt;

&lt;p&gt;It looks like a feature.&lt;/p&gt;

</description>
      <category>security</category>
      <category>programming</category>
      <category>architecture</category>
      <category>codequality</category>
    </item>
    <item>
      <title>What the Claude Code source taught me about engineering taste</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Tue, 31 Mar 2026 15:05:16 +0000</pubDate>
      <link>https://dev.to/vmxd/what-the-claude-code-source-taught-me-about-engineering-taste-12bd</link>
      <guid>https://dev.to/vmxd/what-the-claude-code-source-taught-me-about-engineering-taste-12bd</guid>
      <description>&lt;p&gt;Someone reconstructed the Claude Code CLI source from npm sourcemaps and published it. Half a million lines of TypeScript. I read through it -- not looking for bugs, just curious what a production AI tool looks like from the inside.&lt;/p&gt;

&lt;p&gt;Turns out it's a React app. Not a web app -- a terminal app. The entire CLI is React components rendered to your terminal through a custom fork of Ink. That alone is a commitment most teams wouldn't make. But the interesting parts aren't the big architectural bets. They're the small decisions nobody needed to make.&lt;/p&gt;

&lt;h2&gt;
  
  
  The loading spinner has 190 verbs
&lt;/h2&gt;

&lt;p&gt;Not "Loading" 190 times. 190 different words.&lt;/p&gt;

&lt;p&gt;"Flibbertigibbeting." "Recombobulating." "Lollygagging." "Prestidigitating." "Shenaniganing." "Wibbling."&lt;/p&gt;

&lt;p&gt;And it's configurable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"spinnerVerbs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"append"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"verbs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Pondering"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Ruminating"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;append&lt;/code&gt; or &lt;code&gt;replace&lt;/code&gt;. Someone built a config API for loading spinner verbs. That's not a feature request. That's a developer who thought "what if someone else wants to play?"&lt;/p&gt;

&lt;h2&gt;
  
  
  The thinking indicator is the "therefore" symbol
&lt;/h2&gt;

&lt;p&gt;When Claude is thinking, the indicator shows &lt;code&gt;∴&lt;/code&gt; -- the mathematical symbol for "therefore." Because the model is literally reasoning toward a conclusion.&lt;/p&gt;

&lt;p&gt;No PM filed a ticket for this. It's the kind of decision that happens when someone on the team cares about what symbols mean, even in a terminal. You'd never notice unless you knew what the symbol meant. And if you do, it's hard to see it as anything other than deliberate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The spinner architecture is where it gets interesting
&lt;/h2&gt;

&lt;p&gt;The loading spinner has two components:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SpinnerWithVerb&lt;/code&gt; handles the state -- which verb to show, whether the connection is stalled, what mode the agent is in. It re-renders when those things change.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SpinnerAnimationRow&lt;/code&gt; handles the animation -- a 50ms tick that sweeps color across each character. It re-renders 20 times per second regardless of state.&lt;/p&gt;

&lt;p&gt;From an inline comment in the component:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;freed from the 50ms render loop and only re-renders when its props/app state change (~25x/turn instead of ~383x)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's a 15x reduction in parent re-renders from one component split. Standard React performance thinking, but applied to a terminal where every unnecessary render is a visible flicker.&lt;/p&gt;

&lt;p&gt;The stalled-connection detection uses an exponential moving average instead of a threshold:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;stalledIntensity&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;stalledIntensity&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The spinner text smoothly fades toward red when tokens stop arriving. No jarring color jump, no binary "stalled/not stalled." Most implementations would flip a boolean after a timeout -- stalled or not. This one lets you &lt;em&gt;feel&lt;/em&gt; the connection degrading before anyone tells you it has. That's feedback design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ratchet
&lt;/h2&gt;

&lt;p&gt;There's a component called &lt;code&gt;Ratchet&lt;/code&gt;. It's a container that only ever grows in height, never shrinks.&lt;/p&gt;

&lt;p&gt;The problem it solves: when content toggles between states (thinking/not thinking, tool running/tool done), the prompt input bounces up and down. Ratchet locks &lt;code&gt;minHeight&lt;/code&gt; to the maximum measured height. Layout only goes one direction.&lt;/p&gt;

&lt;p&gt;Named after the mechanical device that locks in one direction. Perfect name, trivial implementation, solves a real problem that most terminal UIs just live with.&lt;/p&gt;

&lt;h2&gt;
  
  
  65ms before imports
&lt;/h2&gt;

&lt;p&gt;The boot sequence fires parallel subprocesses -- macOS Keychain reads, MDM policy checks -- before the first &lt;code&gt;import&lt;/code&gt; statement evaluates. The module graph takes ~135ms to load. By the time it's done, the keychain result is already waiting.&lt;/p&gt;

&lt;p&gt;The source comments quantify the savings. &lt;code&gt;keychainPrefetch.ts&lt;/code&gt; documents exactly 65ms saved on macOS. Not "faster" -- 65ms. This is a team that measures the boot path and optimizes against the measurement.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it tells you
&lt;/h2&gt;

&lt;p&gt;I've read a lot of codebases. Internal ones you inherit, open source ones you contribute to, vendor ones you're stuck debugging on a weekend. Most have a clean public surface and progressively more chaos as you go deeper. The demo path is polished. The error paths are afterthoughts. The loading states are "Loading..."&lt;/p&gt;

&lt;p&gt;This codebase has 190 spinner verbs, a mathematically meaningful thinking symbol, sub-character progress bars using 9 Unicode block elements for fractional fill, an animation architecture that quantifies render savings in comments, and a boot sequence that races subprocesses against module evaluation to save 65ms.&lt;/p&gt;

&lt;p&gt;None of that was required. All of it was chosen.&lt;/p&gt;

&lt;p&gt;The gap between what ships and what's underneath tells you everything about whether a team treats the codebase as a product or as a means to a product. The spinner verb list isn't a feature. It's a culture artifact.&lt;/p&gt;

&lt;p&gt;That's where engineering taste lives -- in the decisions nobody asked you to make.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>architecture</category>
      <category>javascript</category>
      <category>devtools</category>
    </item>
    <item>
      <title>ClearTalk - coaching for the conversation you're about to have</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Wed, 25 Mar 2026 07:05:00 +0000</pubDate>
      <link>https://dev.to/vmxd/cleartalk-coaching-for-the-conversation-youre-about-to-have-j1</link>
      <guid>https://dev.to/vmxd/cleartalk-coaching-for-the-conversation-youre-about-to-have-j1</guid>
      <description>&lt;h2&gt;
  
  
  The problem with communication style tools
&lt;/h2&gt;

&lt;p&gt;Every communication framework follows the same playbook: take a quiz, get a type, read a description of yourself you mostly already knew. Then... nothing. You're on your own.&lt;/p&gt;

&lt;p&gt;The part that actually matters - &lt;em&gt;what do I say to this specific person in this specific moment?&lt;/em&gt; - gets hand-waved into "adapt your style."&lt;/p&gt;

&lt;p&gt;ClearTalk starts where those tools stop.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;No quiz required. No account. No sign-up.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pick a person&lt;/strong&gt; you're about to talk to&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Answer 8 quick questions&lt;/strong&gt; about how they communicate (~60 seconds)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Choose the situation&lt;/strong&gt; - feedback, request, conflict, pitch, or difficult news&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get a coaching card&lt;/strong&gt; - phrases to open with, pitfalls to avoid, what to expect, body language cues&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want to go deeper, there's a 24-question self-assessment. But it's opt-in depth, not a gate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ayax5eifg6qpp4u0e8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ayax5eifg6qpp4u0e8l.png" alt="60 seconds to clarity - 4 step flow: pick a person, 8 questions, choose the moment, get coaching" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What the coaching actually looks like
&lt;/h2&gt;

&lt;p&gt;Here's a real card for giving feedback to someone who processes quietly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open with:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I want to check in on something - this is not a big deal, but I think we can improve it together."&lt;/li&gt;
&lt;li&gt;"Your consistency on this has been solid. There is one area I would like us to look at."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Avoid:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rapid-fire delivery - your natural pace will overwhelm them into silence&lt;/li&gt;
&lt;li&gt;Mistaking their quiet nod for agreement - they may be shutting down&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Their reaction:&lt;/strong&gt;&lt;br&gt;
"They will go quiet. This is processing, not agreement and not resistance. Give them 5-10 seconds of silence after your main point."&lt;/p&gt;

&lt;p&gt;80 cards like this. 5 situations. Every combination of how two people communicate. Each one written for a specific pairing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qok621ywc59tmyxif0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qok621ywc59tmyxif0t.png" alt="80 cards, 5 situations, every pair covered - 4x4 grid showing all 16 communicator combinations" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The tech stack (for the curious)
&lt;/h2&gt;

&lt;p&gt;Zero-backend PWA. Everything runs client-side.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Choice&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Framework&lt;/td&gt;
&lt;td&gt;Preact (~4KB)&lt;/td&gt;
&lt;td&gt;React API without the weight&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Router&lt;/td&gt;
&lt;td&gt;wouter (~2KB)&lt;/td&gt;
&lt;td&gt;Tiny, hook-based&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build&lt;/td&gt;
&lt;td&gt;Vite + TypeScript strict&lt;/td&gt;
&lt;td&gt;Fast builds, type safety&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;Dexie.js (IndexedDB)&lt;/td&gt;
&lt;td&gt;Structured client-side data with schema versioning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PWA&lt;/td&gt;
&lt;td&gt;vite-plugin-pwa + Workbox&lt;/td&gt;
&lt;td&gt;Offline-first, auto-updating SW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CSS&lt;/td&gt;
&lt;td&gt;Vanilla custom properties&lt;/td&gt;
&lt;td&gt;No framework, no runtime cost&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Host&lt;/td&gt;
&lt;td&gt;Cloudflare Pages&lt;/td&gt;
&lt;td&gt;Free tier, global edge&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Main bundle:&lt;/strong&gt; ~58KB gzip. Coaching cards lazy-load per situation (~8KB each).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lighthouse:&lt;/strong&gt; 99 / 100 / 100 / 100 (performance / a11y / best practices / SEO)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WCAG 2.1 AA&lt;/strong&gt; compliant across all routes. Keyboard and screen reader tested.&lt;/p&gt;

&lt;p&gt;The coaching cards are code-split via dynamic &lt;code&gt;import()&lt;/code&gt; and served through Workbox runtime caching (StaleWhileRevalidate) - so they cache on first access without bloating the initial load.&lt;/p&gt;
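&lt;p&gt;The stale-while-revalidate idea itself is easy to model outside Workbox: answer from cache when you can, refresh the cache either way. A minimal synchronous sketch in plain JS (not the Workbox API - &lt;code&gt;fetchFresh&lt;/code&gt; stands in for the network):&lt;/p&gt;

```javascript
// Minimal model of stale-while-revalidate: return the cached copy if one
// exists, and refresh the cache on every access either way.
function makeSWR(fetchFresh) {
  const cache = new Map();
  return function get(key) {
    const cached = cache.get(key);
    const fresh = fetchFresh(key); // the "background" refresh, done inline here
    cache.set(key, fresh);
    // First access pays the fetch cost; later accesses serve the stale copy.
    return cached !== undefined ? cached : fresh;
  };
}

// Example: the "network" returns a versioned payload.
let version = 0;
const getCard = makeSWR((key) => `${key}-v${++version}`);
getCard("d-to-s/feedback"); // "d-to-s/feedback-v1" (miss: fetched fresh)
getCard("d-to-s/feedback"); // "d-to-s/feedback-v1" (stale copy; v2 now cached)
getCard("d-to-s/feedback"); // "d-to-s/feedback-v2"
```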

&lt;p&gt;All data stays in IndexedDB. No telemetry, no analytics beyond Cloudflare's privacy-first page views. Export your data as JSON anytime.&lt;/p&gt;

&lt;p&gt;Every coaching card also has a shareable public URL - like &lt;code&gt;cleartalk.1mb.dev/insight/d-to-s/feedback&lt;/code&gt; - with social preview images tailored to that combination of communicators.&lt;/p&gt;
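&lt;p&gt;Routes in that shape are trivial to parse client-side. A hypothetical parser for the pattern shown (the app's real routing code may look nothing like this):&lt;/p&gt;

```javascript
// Hypothetical parser for insight URLs of the shape shown above,
// e.g. "/insight/d-to-s/feedback" -> who is talking to whom, about what.
function parseInsightRoute(path) {
  const m = path.match(/^\/insight\/([a-z])-to-([a-z])\/([a-z-]+)$/);
  if (m === null) return null;
  const [, from, to, situation] = m;
  return { from, to, situation };
}

parseInsightRoute("/insight/d-to-s/feedback");
// -> { from: "d", to: "s", situation: "feedback" }
```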

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;cleartalk.1mb.dev&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Free. No sign-up. Runs offline. Your data stays on your device.&lt;/p&gt;

&lt;p&gt;Source: &lt;strong&gt;github.com/1mb-dev/cleartalk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What situations do you find hardest to navigate - feedback, conflict, or something else?&lt;/p&gt;

</description>
      <category>career</category>
      <category>productivity</category>
      <category>showdev</category>
      <category>workplace</category>
    </item>
    <item>
      <title>AI policy files are becoming a thing - here's a generator</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Sat, 21 Mar 2026 14:10:00 +0000</pubDate>
      <link>https://dev.to/vmxd/ai-policy-files-are-becoming-a-thing-heres-a-generator-25l7</link>
      <guid>https://dev.to/vmxd/ai-policy-files-are-becoming-a-thing-heres-a-generator-25l7</guid>
      <description>&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;AI coding agents are everywhere. Copilot, Cursor, Claude Code, Codex -- they're writing code in repos with no AI policy.&lt;/p&gt;

&lt;p&gt;Most repos have a LICENSE file. Many have CONTRIBUTING.md. Almost none have an AI policy.&lt;/p&gt;

&lt;p&gt;Can a contributor submit AI-generated code? Does it need review? Can agents modify CI pipelines? Should the project opt out of training data collection?&lt;/p&gt;

&lt;h2&gt;
  
  
  What projects are doing about it
&lt;/h2&gt;

&lt;p&gt;A few projects already check these files into their repos:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI_POLICY.md&lt;/strong&gt; declares how AI tools interact with the codebase -- what's permitted, how AI-generated code is handled, training data preferences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AGENTS.md&lt;/strong&gt; gives AI coding agents their operating instructions -- code style, testing requirements, restricted paths, commit conventions. The &lt;a href="https://agentsmd.org" rel="noopener noreferrer"&gt;AGENTS.md spec&lt;/a&gt; is already supported by Codex, Copilot, and Cursor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLAUDE.md&lt;/strong&gt; configures Claude Code specifically, referencing the AGENTS.md rules.&lt;/p&gt;

&lt;p&gt;Projects like &lt;a href="https://github.com/cloudnative-pg/cloudnative-pg" rel="noopener noreferrer"&gt;CloudNativePG&lt;/a&gt;, &lt;a href="https://github.com/kyverno/kyverno" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt;, and &lt;a href="https://github.com/kubewarden" rel="noopener noreferrer"&gt;Kubewarden&lt;/a&gt; already ship these files.&lt;/p&gt;

&lt;h2&gt;
  
  
  aipolicy: a generator for these files
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aipolicy.1mb.dev" rel="noopener noreferrer"&gt;aipolicy.1mb.dev&lt;/a&gt; generates all three files from presets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three presets:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open&lt;/strong&gt; -- AI tools welcome, no restrictions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard&lt;/strong&gt; -- AI-assisted code requires human review&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strict&lt;/strong&gt; -- AI tools restricted, explicit maintainer approval&lt;/li&gt;
&lt;/ul&gt;
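&lt;p&gt;Generating files from presets like these needs nothing more than a lookup table and string templates. An illustrative sketch - the preset fields and wording here are invented, not the tool's actual templates:&lt;/p&gt;

```javascript
// Illustrative only: map a preset name to AI_POLICY.md text.
// Field names and wording are made up for this sketch.
const PRESETS = {
  open:     { aiUsage: "allowed",    review: "optional", trainingOptout: "no"  },
  standard: { aiUsage: "allowed",    review: "required", trainingOptout: "yes" },
  strict:   { aiUsage: "restricted", review: "explicit maintainer approval",
              trainingOptout: "yes" },
};

function renderPolicy(name) {
  const p = PRESETS[name];
  if (p === undefined) throw new Error(`unknown preset: ${name}`);
  return [
    "# AI Policy",
    `- AI tool usage: ${p.aiUsage}`,
    `- Review of AI-assisted code: ${p.review}`,
    `- Training data opt-out: ${p.trainingOptout}`,
  ].join("\n");
}
```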

&lt;p&gt;The URL encodes your configuration, so you can share a direct link to your exact setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://aipolicy.1mb.dev/?preset=standard&amp;amp;ai_usage=restricted&amp;amp;training_optout=yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;CLI works too:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-O&lt;/span&gt; https://aipolicy.1mb.dev/presets/standard/&lt;span class="o"&gt;{&lt;/span&gt;AI_POLICY,AGENTS,CLAUDE&lt;span class="o"&gt;}&lt;/span&gt;.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why bother?
&lt;/h2&gt;

&lt;p&gt;Even for solo projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contributors and AI agents know the rules before submitting code&lt;/li&gt;
&lt;li&gt;The project's training data position is explicit, not assumed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For teams, it turns the "should we allow Copilot on this repo?" conversation into a file that's checked in next to your LICENSE.&lt;/p&gt;

&lt;p&gt;AGENTS.md and CLAUDE.md aren't just documentation -- agents actually read them. It's closer to .editorconfig than to a code of conduct.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tool itself
&lt;/h2&gt;

&lt;p&gt;No framework, no build step, no backend. Vanilla HTML, CSS, and JavaScript running on GitHub Pages. MIT licensed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web: &lt;a href="https://aipolicy.1mb.dev" rel="noopener noreferrer"&gt;aipolicy.1mb.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Source: &lt;a href="https://github.com/1mb-dev/aipolicy" rel="noopener noreferrer"&gt;github.com/1mb-dev/aipolicy&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>devtools</category>
      <category>github</category>
    </item>
    <item>
      <title>Scoring HN discussions by practitioner depth, not popularity</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Mon, 02 Mar 2026 12:39:43 +0000</pubDate>
      <link>https://dev.to/vmxd/scoring-hn-discussions-by-practitioner-depth-not-popularity-41nd</link>
      <guid>https://dev.to/vmxd/scoring-hn-discussions-by-practitioner-depth-not-popularity-41nd</guid>
      <description>&lt;p&gt;HN gets 500+ stories a day. The front page is ranked by votes - which surfaces popular content, not necessarily the best discussions. A post about a Google outage will outrank a thread where infrastructure engineers are quietly sharing how they handle failover.&lt;/p&gt;

&lt;p&gt;sift tries to find the second kind.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "practitioner depth" means
&lt;/h2&gt;

&lt;p&gt;The scoring algorithm looks at five signals in the comment tree:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Depth breadth&lt;/strong&gt; (30% weight) - Not max depth. The fraction of comments at 3+ levels of conversation. A thread where 20% of comments are three replies deep means real back-and-forth happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practitioner markers&lt;/strong&gt; (25%) - Comments containing experience phrases ("I built," "we used," "in production"), code blocks, specific tool names, or hedging language like "FWIW" and "YMMV" that correlates with practitioners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Score velocity&lt;/strong&gt; (15%) - Points per hour. Sustained interest over time, not a spike.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Author diversity&lt;/strong&gt; (15%) - Unique authors relative to comment count, weighted by thread size. High diversity at depth 3+ means different people are engaging with each other's thinking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference density&lt;/strong&gt; (15%) - Comments with external links. Citation culture.&lt;/p&gt;

&lt;p&gt;Before these signals run, threads pass through quality filters: flame detection (hostile short comments at depth), comment length IQR (catches pile-ons), temporal spread (catches flash-in-the-pan), and a discussion density band-pass.&lt;/p&gt;
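&lt;p&gt;The combination step is the simple part. A sketch of the weighted sum using the weights above (signal extraction and the filters are where the real work lives, and they're omitted here):&lt;/p&gt;

```javascript
// Weighted blend of the five signals described above.
// Each signal is assumed normalized to [0, 1] before this step.
const WEIGHTS = {
  depthBreadth: 0.30,
  practitionerMarkers: 0.25,
  scoreVelocity: 0.15,
  authorDiversity: 0.15,
  referenceDensity: 0.15,
};

function threadScore(signals) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [name, weight]) => sum + weight * (signals[name] ?? 0),
    0
  );
}

// A thread strong on depth and practitioner markers beats a merely fast one.
threadScore({ depthBreadth: 0.8, practitionerMarkers: 0.7, scoreVelocity: 0.2,
              authorDiversity: 0.5, referenceDensity: 0.4 }); // ≈ 0.58
```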

&lt;h2&gt;
  
  
  What comes out
&lt;/h2&gt;

&lt;p&gt;10-12 discussions per day. 6 above the fold, rest behind a tap. Each with a context paragraph explaining why it was picked. Updated 4x/day. Yesterday's picks are gone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The boring stack
&lt;/h2&gt;

&lt;p&gt;Astro static site. Zero client JS. Self-hosted fonts (Newsreader + JetBrains Mono). Cloudflare Pages. GitHub Actions for the pipeline (fetch from Algolia, score, build, deploy). Cloudflare KV for cross-run state so repeated threads get decayed.&lt;/p&gt;

&lt;p&gt;Total cost: $0/month on free tiers. Total page weight: ~18KB HTML, ~7KB CSS, ~320KB fonts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sift.1mb.dev" rel="noopener noreferrer"&gt;sift.1mb.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>hackernews</category>
      <category>javascript</category>
      <category>zerostack</category>
    </item>
    <item>
      <title>Your AI coding assistant builds what you ask for, it won't add what you forget</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Tue, 24 Feb 2026 13:45:11 +0000</pubDate>
      <link>https://dev.to/vmxd/your-ai-coding-assistant-builds-what-you-ask-for-it-wont-add-what-you-forget-5220</link>
      <guid>https://dev.to/vmxd/your-ai-coding-assistant-builds-what-you-ask-for-it-wont-add-what-you-forget-5220</guid>
      <description>&lt;p&gt;There's a gap between "build me a weather app" and getting something you'd actually deploy.&lt;/p&gt;

&lt;p&gt;AI coding assistants are great at execution. The coding isn't the problem anymore. It's the spec. Or rather, the absence of one.&lt;/p&gt;

&lt;p&gt;Say "build me a weather app" and you get a weather app. What you don't get: what happens when the API is down. Offline behavior. A content security policy. Touch targets for mobile. Dark mode that doesn't blind you at 2am. An implementation order that lets you see the design before it touches real data.&lt;/p&gt;

&lt;p&gt;The assistant can do all of this. Every single one of these things. But it won't include them unless you ask, because it executes intent, not product decisions you haven't made yet. GIGO, just more polite about it now.&lt;/p&gt;

&lt;h2&gt;
  
  
  The specs you keep rewriting
&lt;/h2&gt;

&lt;p&gt;If you've built anything with an AI assistant, you've probably written the same architecture sections more than once. Data flow, UX states, trust boundaries, security checklist. The app might be simple. Getting the spec right is the actual work.&lt;/p&gt;

&lt;p&gt;Here's a real one. Visibility toggle, from a spec for a task counter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Use the hidden attribute for toggling visibility.

Gotcha: If CSS sets display: flex on the element,
it overrides hidden.
Fix: .my-class:not([hidden]) { display: flex; }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'd never think to put this in a prompt. Your assistant will generate code with this exact bug though, and you'll debug it for 20 minutes before realizing CSS specificity is eating the hidden attribute. One line in the spec prevents it. Knowing which lines matter is harder than writing the code.&lt;/p&gt;

&lt;p&gt;That pattern repeats across every section. The UX states you didn't define become blank screens on error. The trust boundaries you didn't name become API keys in frontend code. The implementation order you didn't specify means your assistant wires real data before you've approved the design.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Gist does
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gist.1mb.dev" rel="noopener noreferrer"&gt;Gist&lt;/a&gt; is a spec generator for small apps that run on free tiers. It's a structured conversation -- you describe an idea, answer questions about data sources and scale, and it produces a markdown spec covering architecture, implementation order with design checkpoints, UX states beyond the happy path, trust boundaries, content security policy, mock data shapes, and a pre-ship checklist. More like a type contract between you and your assistant. Define the shape, the implementation follows.&lt;/p&gt;

&lt;p&gt;Three complexity tiers come from your answers: minimal is plain HTML with no build tools, standard adds a static site generator and an optional edge proxy, full adds caching and cron. You pick the hosting -- defaults to Cloudflare free tier but it's not locked in. Everything targets $0/month. No servers to patch. No invoices. The constraint is the feature.&lt;/p&gt;

&lt;p&gt;Spec generation is client-side, deterministic -- conditionals and templates, no LLM. What you see is what your assistant gets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.1mb.dev" rel="noopener noreferrer"&gt;gist.1mb.dev&lt;/a&gt; -- describe an app, download the spec, paste it into Claude or Cursor. The difference is usually everything you'd expect to be there but didn't think to say.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Repeat Yourself</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Wed, 18 Feb 2026 11:54:43 +0000</pubDate>
      <link>https://dev.to/vmxd/repeat-yourself-1h9k</link>
      <guid>https://dev.to/vmxd/repeat-yourself-1h9k</guid>
      <description>&lt;p&gt;Turns out if you repeat your prompt, the model gives you a better answer.&lt;/p&gt;

&lt;p&gt;Not a smarter model. Not a bigger context window. Not chain of thought. You just say the same thing twice and it works better. Google researchers tested this across Gemini, GPT, Claude, Deepseek -- 47 wins out of 70 benchmarks, zero losses.&lt;/p&gt;

&lt;p&gt;The reason is the kind of thing that makes you stare at your screen for a minute. In a transformer, token 1 can't see token 50. That's causal masking: each token attends only to what came before it. So the first words of your prompt are always processed with the least context. They're flying blind. When you repeat the prompt, the second copy's early tokens can attend to the entire first copy. You're giving the beginning of your question the context it never had.&lt;/p&gt;
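&lt;p&gt;Causal masking in miniature - row &lt;code&gt;i&lt;/code&gt; of the mask marks which tokens token &lt;code&gt;i&lt;/code&gt; is allowed to attend to. Row 0 sees only itself, which is exactly why the start of a prompt is the blindest part:&lt;/p&gt;

```javascript
// Causal attention mask for n tokens: token i may attend to tokens 0..i only.
function causalMask(n) {
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => (j > i ? 0 : 1))
  );
}

causalMask(3);
// -> [ [1, 0, 0],     row 0: the first token sees only itself
//      [1, 1, 0],
//      [1, 1, 1] ]    row 2: the last token sees everything
```

&lt;p&gt;Repeat the prompt and the second copy's "row 0" sits at a later position, so it gets a full row of 1s over the entire first copy.&lt;/p&gt;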

&lt;p&gt;The architecture has a constraint, nobody notices because the output is good enough, then someone tries the dumbest possible fix and it works because the constraint was real. Retries fix distributed systems. Caches fix slow queries. Repeating yourself fixes attention asymmetry.&lt;/p&gt;

&lt;p&gt;The part that got me though -- reasoning models already do this. When you turn on chain of thought, the effect disappears. Turns out models trained with reinforcement learning independently learned to repeat parts of the question back before answering. The architecture had a flaw, and the training process found the same workaround on its own.&lt;/p&gt;

&lt;p&gt;Paper: &lt;a href="https://arxiv.org/abs/2512.14982" rel="noopener noreferrer"&gt;Prompt Repetition Improves Non-Reasoning LLMs&lt;/a&gt; -- Leviathan, Kalman, Matias (2025)&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computerscience</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How Would You Print Hello World Without printf?</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Mon, 16 Feb 2026 09:18:39 +0000</pubDate>
      <link>https://dev.to/vmxd/how-would-you-print-hello-world-without-printf-3bcb</link>
      <guid>https://dev.to/vmxd/how-would-you-print-hello-world-without-printf-3bcb</guid>
      <description>&lt;p&gt;How would you print hello world on the screen if there were no &lt;code&gt;printf&lt;/code&gt;?&lt;/p&gt;

&lt;p&gt;You'd write to stdout directly. Before that, a syscall. Before that, poke bytes into a memory-mapped display buffer. Before that, flip switches on a front panel and watch lights blink back.&lt;/p&gt;

&lt;p&gt;Every layer down, someone built something so the next person wouldn't have to. That's the whole field, really. Languages we didn't design, compilers we didn't write, protocols we didn't invent. We've always stood on a stack of other people's work and called the output ours.&lt;/p&gt;

&lt;p&gt;Nobody ever had a problem with that though.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack of other people's work
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;printf&lt;/code&gt; wasn't always there. Someone wrote it. And we used it, no disclaimers, no guilt, no explaining that we could've done it the hard way. Same with frameworks, same with package managers, same with everything that came after.&lt;/p&gt;

&lt;p&gt;Here's the actual progression, roughly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;front panel switches → machine code → assembly → C → printf
    ↓
    syscalls → write() → stdio → high-level languages → frameworks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At every step, someone said "you don't need to do this part anymore." And at every step, someone else said "but then how do you really understand what's happening?"&lt;/p&gt;

&lt;h2&gt;
  
  
  The thing worth noticing
&lt;/h2&gt;

&lt;p&gt;Every previous layer still asked you to think in the problem's language. C made you reason about memory. Concurrency made you reason about state. HTTP made you reason about failure modes.&lt;/p&gt;

&lt;p&gt;The abstractions compressed the work, but they didn't skip the understanding. You still had to know what you were asking for, even if you didn't have to build the thing that answered.&lt;/p&gt;

&lt;p&gt;The newer layers are different, not because they're higher, but because for the first time you can get output without having gone through the thinking that used to produce it. You can. Doesn't mean you do. Lots of people still think first. But the option to skip the thinking entirely is the part that's genuinely new.&lt;/p&gt;

&lt;h2&gt;
  
  
  The question that didn't change
&lt;/h2&gt;

&lt;p&gt;So maybe the question isn't whether the tool counts. It probably always did.&lt;/p&gt;

&lt;p&gt;Maybe the better question is, when the output looks right but isn't, would you know? Could you find it? Could you fix it?&lt;/p&gt;

&lt;p&gt;That's not a gotcha, that's just the part that didn't change.&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>Drift FM - Ambient mood radio (Go, SQLite, vanilla JS)</title>
      <dc:creator>vmxd</dc:creator>
      <pubDate>Fri, 06 Feb 2026 17:20:04 +0000</pubDate>
      <link>https://dev.to/vmxd/drift-fm-ambient-mood-radio-go-sqlite-vanilla-js-2gng</link>
      <guid>https://dev.to/vmxd/drift-fm-ambient-mood-radio-go-sqlite-vanilla-js-2gng</guid>
      <description>&lt;p&gt;Some projects start with a pitch deck.&lt;br&gt;
Others start with "I just want background music that doesn't ask me questions."&lt;/p&gt;

&lt;p&gt;6 moods. No accounts. No ads. Pick a mood. Let it drift.&lt;/p&gt;

&lt;p&gt;Go backend. SQLite. Vanilla JS.&lt;br&gt;
No framework. No build step. Single binary.&lt;/p&gt;

&lt;p&gt;One server. 8MB of memory.&lt;br&gt;
That is enough.&lt;/p&gt;

&lt;p&gt;Try it: drift.1mb.dev&lt;/p&gt;

&lt;p&gt;Mood radio you host yourself. Drop in your mp3s, tag them by mood, hit play - github.com/1mb-dev/driftfm&lt;/p&gt;

&lt;p&gt;keep building.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>go</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
