<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zimtzimt</title>
    <description>The latest articles on DEV Community by Zimtzimt (@fabianzimber).</description>
    <link>https://dev.to/fabianzimber</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3691889%2Fed166df7-3452-4dc7-8d62-71446bc65bc8.jpeg</url>
      <title>DEV Community: Zimtzimt</title>
      <link>https://dev.to/fabianzimber</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fabianzimber"/>
    <language>en</language>
    <item>
      <title>Here’s How I Built a Global 'Sync' for My Team’s AI Agents</title>
      <dc:creator>Zimtzimt</dc:creator>
      <pubDate>Thu, 16 Apr 2026 07:44:13 +0000</pubDate>
      <link>https://dev.to/fabianzimber/heres-how-i-built-a-global-sync-for-my-teams-ai-agents-1f3m</link>
      <guid>https://dev.to/fabianzimber/heres-how-i-built-a-global-sync-for-my-teams-ai-agents-1f3m</guid>
      <description>&lt;h2&gt;
  
  
  The AI Ecosystem is Moving Too Fast for Manual Config.
&lt;/h2&gt;

&lt;p&gt;We are currently living through a "Cambrian Explosion" of AI tools. Every single day, a new Model Context Protocol (MCP) server is released, a new specialized "skill" is shared, or a better way to structure &lt;code&gt;CLAUDE.md&lt;/code&gt; files is discovered.&lt;/p&gt;

&lt;p&gt;If you’re working solo, you can (barely) keep up. But the moment you bring a team into the mix? Everything breaks.&lt;/p&gt;

&lt;p&gt;I watched my team waste hours every week manually configuring their &lt;code&gt;~/.claude/&lt;/code&gt; files, hunting down the latest GitHub MCP server URLs, and asking each other, "Wait, where is the most up-to-date version of our coding standards?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I realized that 'Time-to-Agent' is the new 'Time-to-Hello-World'.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;If it takes an engineer an hour to set up their local agent environment to match the team’s standards, we’ve already lost. Out of that frustration, I built &lt;strong&gt;Myosotis&lt;/strong&gt;—a team-wide "Forget-Me-Not" for AI configuration. It turns AI agent setups from manual chores into &lt;strong&gt;global, shared infrastructure.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 The Core Problem: The Configuration Gap
&lt;/h2&gt;

&lt;p&gt;The problem isn't the AI; it's the glue. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding Friction:&lt;/strong&gt; New hires spend half their first day just getting Claude or Codex to understand the project’s specific constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skill Fragmentation:&lt;/strong&gt; One dev writes a brilliant automation script for the database; the other four devs don't even know it exists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Plugin Chasing" Loop:&lt;/strong&gt; "Did you install the new Postgres MCP?" "No, which one are we using?" &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I wanted a way to push a button and have the &lt;em&gt;entire team's&lt;/em&gt; agents instantly gain a new superpower.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ How I Built the "Team Brain" (The Myosotis Setup)
&lt;/h2&gt;

&lt;p&gt;I built a framework that brings the "Source of Truth" back to the center while still respecting the local nature of the developer's machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Global Sync, Local Freedom (The Bootstrap)
&lt;/h3&gt;

&lt;p&gt;Myosotis uses a platform-agnostic bootstrap setup. When an engineer joins (or a new MCP is released), they run one command. It synchronizes our &lt;strong&gt;Canonical MCP Profiles&lt;/strong&gt; and &lt;strong&gt;Shared Skills&lt;/strong&gt; directly into their &lt;code&gt;~/.claude/&lt;/code&gt; or &lt;code&gt;~/.codex/&lt;/code&gt; directories.&lt;/p&gt;

&lt;p&gt;The beauty? It's additive. The script sets the "Team Baseline" (like our read-only DB tools and standard GitHub skills), but the developer can still add their own experimental MCPs on top without breaking the sync.&lt;/p&gt;
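&lt;p&gt;A minimal sketch of that additive idea, assuming the MCP profiles are a JSON-style map; the function and entry names here are illustrative, not Myosotis's actual API: managed team entries overwrite stale local copies, while personal additions survive a re-sync.&lt;/p&gt;

```python
# Hedged sketch of an "additive" sync: the team baseline wins for
# managed entries, but the developer's own experimental entries are
# kept. All names and URLs here are made up for illustration.
import json

def merge_mcp_profiles(team_baseline, local_config):
    """Return the local config with the team baseline layered on top."""
    merged = dict(local_config)   # start from the developer's file
    merged.update(team_baseline)  # managed entries overwrite local copies
    return merged

team = {"github": {"url": "https://example.com/mcp/github"}}   # managed baseline
local = {
    "github": {"url": "https://stale.example"},                # outdated local copy
    "my-experiment": {"url": "http://localhost:9000"},         # personal add-on
}

print(json.dumps(merge_mcp_profiles(team, local), sort_keys=True))
```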

&lt;h3&gt;
  
  
  2. Managed Anchors (The Agent's Instruction Manual)
&lt;/h3&gt;

&lt;p&gt;I moved the team's knowledge into a set of project templates. Every repository now has managed "anchors" like &lt;code&gt;AGENTS.md&lt;/code&gt; and &lt;code&gt;CLAUDE.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;These aren't just empty docs; they are the protocol. They tell any agent that drops into the repo: "Here are the team skills you have access to, here are the MCP servers you should use, and here is how we write code here." &lt;/p&gt;

&lt;h3&gt;
  
  
  3. The Myosotis Control Surface
&lt;/h3&gt;

&lt;p&gt;To make managing this even easier, the repo includes a Next.js application that serves as a control surface. Instead of editing raw JSON files, you can manage your MCP profiles, skill libraries, and instruction layers in one place and sync them back to your source of truth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y0q8hygelnbzfog46tb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y0q8hygelnbzfog46tb.png" alt="App Screenshot" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  📉 The Result: Zero-Friction AI Onboarding
&lt;/h2&gt;

&lt;p&gt;The impact was immediate. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding an agent to a new repo&lt;/strong&gt; now takes seconds, not hours.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pushing a new team capability&lt;/strong&gt; (like a new specialized frontend refactoring skill) is now a simple &lt;code&gt;git pull&lt;/code&gt; away for the whole team.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt; is guaranteed. If we decide as a team to use a specific MCP for documentation, it's deployed to everyone simultaneously.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💡 Stop Managing Agents, Start Orchestrating Them
&lt;/h2&gt;

&lt;p&gt;If your team is serious about using AI, you have to stop treating your agent's config like a personal &lt;code&gt;.bashrc&lt;/code&gt; file. In an era where plugins and skills release "minutely," you need a distribution system, not a manual setup guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Treat your agents like members of the team.&lt;/strong&gt; Give them a shared syllabus, a shared utility belt, and a shared headquarters. &lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I've open-sourced Myosotis so you can stop the config madness and start scaling your team's AI intelligence. Check out the link below and let's build the future of AI-native engineering together.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shiftbloom-studio" rel="noopener noreferrer"&gt;
        shiftbloom-studio
      &lt;/a&gt; / &lt;a href="https://github.com/shiftbloom-studio/myosotis" rel="noopener noreferrer"&gt;
        myosotis
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Team-shared Agent Harness Builder to configure MCP Servers, Skills and AGENTS.md / CLAUDE.md for your team with cross-agent setups.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Myosotis&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/shiftbloom-studio/myosotis/blob/HEAD/docs/media/cover.jpg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fshiftbloom-studio%2Fmyosotis%2FHEAD%2Fdocs%2Fmedia%2Fcover.jpg" alt="Myosotis Filter Cover"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/shiftbloom-studio/myosotis/blob/HEAD/LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/034b21af87be4e24ec69a76d307ece19226bd321fd1dfe682d410004cbefe212/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d4170616368652d2d322e302d626c61636b2e737667" alt="License: Apache 2.0"&gt;&lt;/a&gt;
&lt;a href="https://nextjs.org/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/07b84653d4f3f2fb4b2fd8d534aa486f3371fc4802bc34b10e5b6245ff7d3e46/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4e6578742e6a732d31362d626c61636b" alt="Next.js"&gt;&lt;/a&gt;
&lt;a href="https://www.typescriptlang.org/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/62ae429822a17363673752eeffad380e3d87388a0b888e493695948560da3597/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f547970655363726970742d352d333137384336" alt="TypeScript"&gt;&lt;/a&gt;
&lt;a href="https://github.com/shiftbloom-studio/myosotis/blob/HEAD/compose/docker-compose.yml" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fa9b18873ce0b42401514bcee3ce3ce00169b5e0a62e3221559a2e3e1f9c2030/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4465706c6f792d446f636b65725f436f6d706f73652d323439364544" alt="Docker Compose"&gt;&lt;/a&gt;
&lt;a href="https://github.com/shiftbloom-studio/myosotis/tree/HEAD/infra/terraform/aws" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/240c1eda70242b592a1bbeefa2761d6492893762d0f64af603fd6e7c5e06ffa7/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f496e6672612d4157532d464639393030" alt="AWS"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Myosotis is an open-source control surface for AI-native workspace setup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This is not another agent harness.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Its purpose is to solve &lt;strong&gt;Configuration Fragmentation&lt;/strong&gt; and &lt;strong&gt;Team-Wide Onboarding&lt;/strong&gt; in an era where AI skills, MCP servers, and best practices are releasing minutely. It provides a unified, syncable baseline for every developer on a team—ensuring every agent has the same tools and domain context while preserving local autonomy.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Why Myosotis?&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Instead of hiding your agent setups behind a database or a proprietary internal tool, Myosotis keeps the source of truth in &lt;strong&gt;plain files&lt;/strong&gt; you can review, diff, fork, and ship.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Instant Team Onboarding&lt;/strong&gt;: Turn hours of manual agent configuration into a 60-second bootstrap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collective Intelligence&lt;/strong&gt;: Shared domain skills (backend, frontend, devops, pr-review) that the whole team uses and improves.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File-Native&lt;/strong&gt;: MCP configs, skills, and instructions remain regular repository assets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid Execution&lt;/strong&gt;: A shared stack on AWS for long-running…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/shiftbloom-studio/myosotis" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>mcp</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Open Hallucination Index (OHI): turning “Plausible AI” into “Verifiable AI” (LLMs as Gardeners of the Graph) 🌱🧠</title>
      <dc:creator>Zimtzimt</dc:creator>
      <pubDate>Wed, 07 Jan 2026 10:52:16 +0000</pubDate>
      <link>https://dev.to/fabianzimber/open-hallucination-index-ohi-turning-plausible-ai-into-verifiable-ai-llms-as-gardeners-of-2i15</link>
      <guid>https://dev.to/fabianzimber/open-hallucination-index-ohi-turning-plausible-ai-into-verifiable-ai-llms-as-gardeners-of-2i15</guid>
      <description>&lt;p&gt;I’m genuinely obsessed with this phase of life where you can learn &lt;em&gt;so much&lt;/em&gt; so fast — and suddenly you notice: &lt;strong&gt;oh wow, there’s real potential here&lt;/strong&gt;. ✨&lt;/p&gt;

&lt;p&gt;And yes, AI is controversial. A lot of critique is valid and not “debatable away”.&lt;br&gt;
But if you zoom in on the &lt;em&gt;technical&lt;/em&gt; side — the architecture, the retrieval mechanics, the verification problem — it’s an insanely interesting space.&lt;/p&gt;

&lt;p&gt;So… last night I went full hyperfocus and wrote a research paper + reference implementation for something I call:&lt;/p&gt;
&lt;h1&gt;
  
  
  Open Hallucination Index (OHI)
&lt;/h1&gt;

&lt;p&gt;OHI is a &lt;strong&gt;sovereign architectural framework&lt;/strong&gt; that tries to move us from &lt;strong&gt;“Generative AI”&lt;/strong&gt; to &lt;strong&gt;“Verifiable AI”&lt;/strong&gt; by adding a deterministic &lt;strong&gt;Trust Layer&lt;/strong&gt; &lt;em&gt;after&lt;/em&gt; generation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Motto: &lt;strong&gt;“LLMs Hallucinate – We Verify.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  The core problem: the epistemic deficit
&lt;/h2&gt;

&lt;p&gt;LLMs are optimized for &lt;strong&gt;plausibility&lt;/strong&gt;, not &lt;strong&gt;veracity&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
They can &lt;em&gt;sound&lt;/em&gt; true while being completely ungrounded — what my paper frames as &lt;em&gt;stochastic fabulation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;On the human side, there’s also a psychological trap: people tend to grant fluent systems a kind of &lt;strong&gt;epistemic authority&lt;/strong&gt;, which fuels &lt;strong&gt;automation bias&lt;/strong&gt; (trusting the output even when it clashes with evidence).&lt;/p&gt;

&lt;p&gt;So the goal here is not “make models perfect” — it’s:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;make truth-checking systematic, auditable, and configurable.&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Why “naive RAG” doesn’t solve it
&lt;/h2&gt;

&lt;p&gt;Retrieval-Augmented Generation is a good direction, but the naive version still fails:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vector similarity can fetch irrelevant chunks&lt;/li&gt;
&lt;li&gt;the generator can still override retrieved context with parametric memory&lt;/li&gt;
&lt;li&gt;“semantic proximity” is not “logical truth”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the paper, I argue verification must be &lt;strong&gt;extrinsic&lt;/strong&gt; and &lt;strong&gt;deterministic&lt;/strong&gt; — not another stochastic step inside the generation loop.&lt;/p&gt;


&lt;h2&gt;
  
  
  OHI in one breath: a post-generation Trust Layer
&lt;/h2&gt;

&lt;p&gt;Instead of trusting an answer as a blob of text, OHI verifies it at &lt;strong&gt;claim-level granularity&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  1) Atomic Claim Decomposition (granularity is everything)
&lt;/h3&gt;

&lt;p&gt;A paragraph can contain 9 correct facts and 1 subtle fabrication — binary “true/false” labels won’t cut it.&lt;/p&gt;

&lt;p&gt;OHI decomposes a response into &lt;strong&gt;atomic claims&lt;/strong&gt;, represented like a tuple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;(S, P, O)&lt;/strong&gt; → subject / predicate / object&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is aligned with ideas used in fine-grained factuality metrics like &lt;strong&gt;FActScore&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;T (text)  →  A = {c1, c2, ... cn} (atomic claims)

FActScore = (1 / |A|) * Σ 𝟙(claim is supported)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
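&lt;p&gt;A minimal Python rendering of that formula; the (S, P, O) claims and their support labels below are invented for illustration, not output of the real OHI pipeline:&lt;/p&gt;

```python
# FActScore sketch: the fraction of atomic claims that are supported
# by evidence. Each claim is a (subject, predicate, object, supported)
# tuple; the example data is made up for illustration.
def factscore(claims):
    """Return the fraction of supported claims, 0.0 for an empty list."""
    if not claims:
        return 0.0
    supported = sum(1 for (_, _, _, ok) in claims if ok)
    return supported / len(claims)

atomic_claims = [
    ("Paris", "is_capital_of", "France", True),
    ("Paris", "population", "12 million", False),   # subtle fabrication
]
print(factscore(atomic_claims))   # 0.5
```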



&lt;h3&gt;
  
  
  2) Hybrid evidence retrieval (Graph + Vector + MCP)
&lt;/h3&gt;

&lt;p&gt;For each atomic claim, OHI uses &lt;strong&gt;multiple verification oracles&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Neo4j graph matching&lt;/strong&gt; (deterministic structure: “does this relation/path exist?”)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Qdrant vector retrieval&lt;/strong&gt; (semantic candidates, “fuzzy” evidence)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP evidence&lt;/strong&gt; (live, standardized tool access — e.g., Wikipedia / Context7 docs)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3) Classify each claim + compute a Trust Score
&lt;/h3&gt;

&lt;p&gt;Each claim becomes one of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Supported&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Contradicted&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unverifiable&lt;/strong&gt; (no decisive evidence)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then the system aggregates per-claim signals into a final &lt;strong&gt;OHI Trust Score (0.0–1.0)&lt;/strong&gt; and returns a &lt;strong&gt;visual overlay&lt;/strong&gt; (green/red/gray) per sentence/claim.&lt;/p&gt;
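&lt;p&gt;The three labels can be sketched as a tiny decision rule. This is an illustrative stand-in driven by boolean evidence flags, not OHI's real classifier:&lt;/p&gt;

```python
# Illustrative three-way labeling: decisive contradiction outranks
# support, and claims with no decisive evidence stay "Unverifiable".
# (Stand-in logic, not OHI's actual decision procedure.)
def classify_claim(has_support, has_contradiction):
    if has_contradiction:
        return "Contradicted"   # red overlay
    if has_support:
        return "Supported"      # green overlay
    return "Unverifiable"       # gray overlay

print(classify_claim(True, False))   # Supported
```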

&lt;p&gt;A mental model I like from the paper:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;like &lt;em&gt;nutritional labels&lt;/em&gt; for epistemic quality — not vibes, but a visible score.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The system architecture (sovereign by design)
&lt;/h2&gt;

&lt;p&gt;OHI is designed for &lt;strong&gt;local sovereignty&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no external OpenAI/Anthropic calls needed for verification&lt;/li&gt;
&lt;li&gt;local control over &lt;strong&gt;Ground Truth&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;avoids the “fox guarding the henhouse” scenario&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;High-level components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;vLLM&lt;/strong&gt; hosting &lt;strong&gt;Qwen2.5&lt;/strong&gt; (paper discusses 7B + 32B variants; reference setup uses a quantized 7B AWQ)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neo4j&lt;/strong&gt; as ontology store (index-free adjacency for multi-hop traversal)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Qdrant&lt;/strong&gt; as semantic store (embeddings for initial retrieval)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt; as cache (reduce repeated lookup cost)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP servers&lt;/strong&gt; as standardized “truth adapters” (Wikipedia, Context7)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the paper I also highlight &lt;strong&gt;vLLM’s PagedAttention&lt;/strong&gt; as a throughput enabler for parallel verification workloads.&lt;/p&gt;




&lt;h2&gt;
  
  
  The algorithmic heart: Hybrid Verification Oracle
&lt;/h2&gt;

&lt;p&gt;The paper formalizes the scoring logic as a hybrid oracle + weighted scorer.&lt;/p&gt;

&lt;p&gt;Simplified version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For each claim c:
  S_graph = 1.0 if exact graph match else 0.0
  S_vec   = semantic similarity from vector retrieval
  S_mcp   = evidence signal from MCP tools

Trust(c) = α*S_graph + β*S_vec + γ*S_mcp
OHI Score = average Trust(c) across all claims
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Default weights in the paper:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;α = 0.6&lt;/strong&gt; graph exact match (deterministic truth)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;β = 0.3&lt;/strong&gt; vector semantic match (plausibility)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;γ = 0.1&lt;/strong&gt; MCP evidence (contextual grounding)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The graph can effectively act as a “veto” against misleading semantic matches.&lt;/p&gt;
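&lt;p&gt;Putting the default weights together, a hedged sketch of the scorer; the per-claim (S_graph, S_vec, S_mcp) signal values are made-up inputs, not real retrievals:&lt;/p&gt;

```python
# Sketch of the hybrid scorer above with the paper's default weights.
# The per-claim oracle signals are illustrative inputs.
ALPHA, BETA, GAMMA = 0.6, 0.3, 0.1   # graph / vector / MCP

def trust(s_graph, s_vec, s_mcp):
    """Weighted per-claim trust from the three oracle signals."""
    return ALPHA * s_graph + BETA * s_vec + GAMMA * s_mcp

def ohi_score(per_claim_signals):
    """Average per-claim trust across all claims in a response."""
    scores = [trust(g, v, m) for (g, v, m) in per_claim_signals]
    return sum(scores) / len(scores) if scores else 0.0

# One exact graph match vs. one claim that only "sounds" right:
print(round(ohi_score([(1.0, 0.9, 1.0), (0.0, 0.8, 0.0)]), 3))   # 0.605
```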




&lt;h2&gt;
  
  
  Two tiny code peeks (from the reference implementation)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) MCP over SSE: a clean “truth sidecar” session
&lt;/h3&gt;

&lt;p&gt;OHI uses MCP to standardize tool access via JSON-RPC 2.0, and the Wikipedia adapter keeps it lightweight with SSE transport.&lt;br&gt;&lt;br&gt;
Here’s the fallback session context manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;contextlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asynccontextmanager&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ClientSession&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp.client.sse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sse_client&lt;/span&gt;

&lt;span class="nd"&gt;@asynccontextmanager&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_session_fallback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Create a new MCP session (non-pooled, for fallback).&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;sse_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_mcp_url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;as &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;ClientSession&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the paper, I also discuss session pooling to reduce repeated SSE handshake overhead.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Streaming Wikipedia dump ingestion (memory-efficient)
&lt;/h3&gt;

&lt;p&gt;Your “Ground Truth” is only as good as your ingestion pipeline.&lt;br&gt;&lt;br&gt;
To build the Neo4j graph from a massive Wikipedia XML dump, the importer uses &lt;strong&gt;streaming iterparse&lt;/strong&gt; + aggressive element clearing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;xml.etree.ElementTree&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;iterparse&lt;/span&gt;

&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;iterparse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_handle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;start&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;elem&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;tag&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;strip_namespace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;elem&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;start&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;tag&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;page&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;in_page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
        &lt;span class="n"&gt;current_page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;in_page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;elem&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="k"&gt;continue&lt;/span&gt;

        &lt;span class="c1"&gt;# ... collect title, ids, text, etc. ...
&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tag&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;page&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;in_page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;

            &lt;span class="c1"&gt;# resume support: skip until checkpoint
&lt;/span&gt;            &lt;span class="n"&gt;page_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;current_page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;page_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;page_id&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;start_after_page_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;current_page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
                &lt;span class="n"&gt;elem&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="k"&gt;continue&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;current_page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="nf"&gt;yield &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="n"&gt;current_page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="n"&gt;current_page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;page_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="n"&gt;current_page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;revision_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="n"&gt;current_page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="n"&gt;current_page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

        &lt;span class="c1"&gt;# Clear element to free memory
&lt;/span&gt;        &lt;span class="n"&gt;elem&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s how you survive multi-GB XML without melting RAM.&lt;/p&gt;




&lt;h2&gt;
  
  
  Performance reality check (what the paper reports)
&lt;/h2&gt;

&lt;p&gt;The system is fast on DB queries, but the bottlenecks are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;claim decomposition&lt;/strong&gt; (LLM inference): ~&lt;strong&gt;200–500ms per text segment&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;orchestration overhead in Python (GIL contention and I/O costs accumulate under concurrency)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The paper frames a clear direction: &lt;strong&gt;porting core orchestration to Rust&lt;/strong&gt; for near-real-time constraints.&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimum “sovereign mode” hardware (paper summary)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;~16GB RAM + NVIDIA GPU&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;~8GB VRAM&lt;/strong&gt; for Qwen2.5-7B-AWQ&lt;/li&gt;
&lt;li&gt;Neo4j: &lt;strong&gt;4–16GB RAM&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Qdrant: &lt;strong&gt;~4GB RAM per 1M vectors&lt;/strong&gt; (depends on setup)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  “LLMs as Gardeners of the Graph” 🌿
&lt;/h2&gt;

&lt;p&gt;This analogy is my favorite part of the future-facing discussion.&lt;/p&gt;

&lt;p&gt;Imagine the “Graph” as a floating monolith of structured knowledge:&lt;br&gt;
nodes = entities/facts, edges = relations.&lt;/p&gt;

&lt;p&gt;LLMs aren’t “truth” — they’re &lt;strong&gt;gardeners&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;they shape, prune, and recombine knowledge into fluent language&lt;/li&gt;
&lt;li&gt;but they can also cross-breed the wrong things and produce believable nonsense&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The future vision in the paper is &lt;em&gt;recursive&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLMs help &lt;strong&gt;grow&lt;/strong&gt; the graph (extract triples from new text into Neo4j)&lt;/li&gt;
&lt;li&gt;OHI then &lt;strong&gt;audits&lt;/strong&gt; the same models against the structure they helped build&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A self-correcting loop… with one big philosophical question:&lt;br&gt;
&lt;strong&gt;who owns the graph?&lt;/strong&gt; 👀&lt;/p&gt;
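&lt;p&gt;As a rough illustration of the “grow” step (my own sketch, not code from the paper): if the extractor LLM is prompted to emit one &lt;code&gt;subject|relation|object&lt;/code&gt; triple per line, feeding Neo4j reduces to parsing plus parameterized &lt;code&gt;MERGE&lt;/code&gt; statements:&lt;/p&gt;

```typescript
// Hypothetical sketch: turn LLM output into parameterized Cypher MERGEs.
// Assumes the model was prompted to emit one "subject|relation|object" per line.

type Triple = { subject: string; relation: string; object: string };

// Parse raw model output into triples, skipping malformed lines.
function parseTriples(raw: string): Triple[] {
  return raw
    .split("\n")
    .map((line) => line.split("|").map((part) => part.trim()))
    .filter((parts) => parts.length === 3 && parts.every((p) => p.length > 0))
    .map(([subject, relation, object]) => ({ subject, relation, object }));
}

// Build one parameterized Cypher statement per triple. MERGE keeps ingestion
// idempotent, so re-running the "gardener" over the same text doesn't
// duplicate nodes or relations.
function toCypher(t: Triple): { query: string; params: Record<string, string> } {
  return {
    query:
      "MERGE (s:Entity {name: $subject}) " +
      "MERGE (o:Entity {name: $object}) " +
      "MERGE (s)-[:REL {type: $relation}]->(o)",
    params: { subject: t.subject, relation: t.relation, object: t.object },
  };
}

const triples = parseTriples("Rust|created_by|Graydon Hoare\nbad line");
console.log(triples.length); // 1 — the malformed line is dropped
```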




&lt;h2&gt;
  
  
  Try it / read it / fork it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Research Paper (PDF): &lt;a href="https://publuu.com/flip-book/1040591/2304964" rel="noopener noreferrer"&gt;https://publuu.com/flip-book/1040591/2304964&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Demo: &lt;a href="https://openhallucination.xyz" rel="noopener noreferrer"&gt;https://openhallucination.xyz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Open Source Docker Image + API: &lt;a href="https://github.com/shiftbloom-studio/open-hallucination-index-api" rel="noopener noreferrer"&gt;https://github.com/shiftbloom-studio/open-hallucination-index-api&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Open Source Demo Website: &lt;a href="https://github.com/shiftbloom-studio/open-hallucination-index" rel="noopener noreferrer"&gt;https://github.com/shiftbloom-studio/open-hallucination-index&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;






&lt;p&gt;If you’re into epistemology, verification, GraphRAG, or just building systems that &lt;em&gt;refuse&lt;/em&gt; to hand-wave truth…&lt;br&gt;&lt;br&gt;
I’d love feedback, critiques, and “this will break when…” comments. That’s the good stuff. 😄🧠&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>opensource</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Building Circadian UI: time-aware theming for React + Next.js (Open Source)</title>
      <dc:creator>Zimtzimt</dc:creator>
      <pubDate>Sun, 04 Jan 2026 14:50:38 +0000</pubDate>
      <link>https://dev.to/fabianzimber/building-circadian-ui-time-aware-theming-for-react-nextjs-open-source-14nk</link>
      <guid>https://dev.to/fabianzimber/building-circadian-ui-time-aware-theming-for-react-nextjs-open-source-14nk</guid>
      <description>&lt;p&gt;I’m starting a new open-source NPM package called Circadian UI 🌗&lt;/p&gt;

&lt;p&gt;The idea is pretty simple — but I kept thinking: &lt;br&gt;
why does this not exist as a clean, reusable default yet?&lt;br&gt;
Most apps either ship a static theme or a manual Dark Mode toggle. But real usage isn’t static: morning vs late-night usage feels totally different, and so do our eyes, attention, and tolerance for contrast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Circadian UI&lt;/strong&gt; aims to be a small “default upgrade” for most React/Next.js projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-aware theming&lt;/strong&gt; (Dawn / Day / Dusk / Night)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailwind-friendly tokens&lt;/strong&gt; (CSS variables, HSL-based)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WCAG contrast-aware by design&lt;/strong&gt; (not a late patch)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next.js-ready + SSR-safe&lt;/strong&gt; (no theme flash, clean hydration story)&lt;/li&gt;
&lt;li&gt;User control matters: opt-in, overrides, persistence, prefers-contrast / prefers-color-scheme support&lt;/li&gt;
&lt;/ul&gt;
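&lt;p&gt;To make the time-aware part concrete, here’s a minimal sketch of the idea (illustrative names only — this is not the actual Circadian UI API, which isn’t published yet): map the local hour to a phase, then expose Tailwind-consumable HSL tokens as CSS variables:&lt;/p&gt;

```typescript
// Hypothetical sketch of the core idea behind time-aware theming.
// Phase boundaries and token values are illustrative, not the library's.

type Phase = "dawn" | "day" | "dusk" | "night";

// Map the local hour (0-23) to a circadian phase.
function phaseForHour(hour: number): Phase {
  if (hour >= 5 && hour < 8) return "dawn";
  if (hour >= 8 && hour < 17) return "day";
  if (hour >= 17 && hour < 21) return "dusk";
  return "night";
}

// Tailwind-friendly tokens: CSS variables holding raw HSL triples,
// so utilities like `bg-[hsl(var(--ci-bg))]` can consume them.
const tokens: Record<Phase, Record<string, string>> = {
  dawn:  { "--ci-bg": "30 60% 96%",  "--ci-fg": "25 30% 15%" },
  day:   { "--ci-bg": "0 0% 100%",   "--ci-fg": "220 15% 10%" },
  dusk:  { "--ci-bg": "260 25% 14%", "--ci-fg": "35 80% 90%" },
  night: { "--ci-bg": "230 30% 8%",  "--ci-fg": "220 15% 85%" },
};

console.log(phaseForHour(6));  // "dawn"
console.log(phaseForHour(23)); // "night"
```

&lt;p&gt;In a real implementation the phase boundaries would follow local sunrise/sunset or user overrides rather than fixed hours — that’s exactly the scheduling edge-case territory listed below.&lt;/p&gt;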

&lt;p&gt;The goal isn’t “another theme library”.&lt;br&gt;
It’s more like: make the UI feel calm and right at any time of day — automatically — without being creepy or complex.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I want to take this
&lt;/h2&gt;

&lt;p&gt;I’m building this in public and I’d love to turn it into a genuinely solid open-source package: clean API, great docs, good tests, and real-world ergonomics.&lt;/p&gt;

&lt;p&gt;If you’re into React/Next.js DX, design tokens, accessibility, or just building polished tooling:&lt;br&gt;
&lt;em&gt;I’m very open to collaborating 🤝&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Stuff I’d love help or feedback on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;edge cases for phase scheduling and overrides&lt;/li&gt;
&lt;li&gt;a clean SSR + hydration strategy for Next.js App Router&lt;/li&gt;
&lt;li&gt;Tailwind integration patterns that feel “drop-in”&lt;/li&gt;
&lt;li&gt;contrast enforcement heuristics that stay visually consistent&lt;/li&gt;
&lt;li&gt;demo app ideas that prove the value fast&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ll post the repo + first preview/demos soon — and if you want to co-build, feel free to reach out or drop a comment.&lt;/p&gt;

&lt;p&gt;Let’s build something that makes people go: “&lt;em&gt;wait… how was this not a standard already?&lt;/em&gt;” ✨&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>css</category>
      <category>react</category>
      <category>design</category>
    </item>
    <item>
      <title>Serve the Cake First, Add the Icing Only When Safe</title>
      <dc:creator>Zimtzimt</dc:creator>
      <pubDate>Sat, 03 Jan 2026 23:08:09 +0000</pubDate>
      <link>https://dev.to/fabianzimber/birthday-cake-loading-serve-the-cake-first-add-the-icing-only-when-safe-progressive-enhancement-m6e</link>
      <guid>https://dev.to/fabianzimber/birthday-cake-loading-serve-the-cake-first-add-the-icing-only-when-safe-progressive-enhancement-m6e</guid>
      <description>&lt;h1&gt;
  
  
  Building Rich Experiences That Don’t Punish Real Users: Introducing Birthday-Cake Loading
&lt;/h1&gt;

&lt;p&gt;I’ve been working on a promotional site for a fantasy game project I’m developing. The hero section was meant to feel magical: floating particle embers, subtle ambient voice narration, bell-like hover sounds, and smooth animated transitions between sections. On my desktop with a fast connection, it was exactly what I wanted—immersive and atmospheric.&lt;/p&gt;

&lt;p&gt;Then I opened it on my phone over a mediocre 3G signal.&lt;/p&gt;

&lt;p&gt;The experience fell apart. Long blank screen, audio starting late and stuttering, particles causing visible jank, battery drain, and the whole page feeling sluggish. Even after applying the usual optimizations—lazy-loading components, reducing particle count, compressing assets—the core problem remained: the rich version was simply too heavy for many real-world conditions.&lt;/p&gt;

&lt;p&gt;Removing the effects entirely wasn’t an option; they were central to the feel I was going for. Media queries to hide them on mobile felt like a blunt instrument. What I really needed was a way to serve a lean, instantly usable baseline to everyone, then progressively add the richer layers only when the device and network could genuinely support them—without me having to write custom detection logic for every feature.&lt;/p&gt;

&lt;p&gt;That’s when the “birthday cake” metaphor clicked: deliver the solid, edible cake first (the baseline experience that works everywhere), and add the fancy icing, sprinkles, and decorations only when there’s budget for it.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Core Idea: Capability-Based Tiering
&lt;/h3&gt;

&lt;p&gt;Birthday-Cake Loading (BCL) is a small runtime that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Collects best-effort signals (device memory, CPU cores, network type/speed, Save-Data header, prefers-reduced-motion, etc.).&lt;/li&gt;
&lt;li&gt;Derives a conservative tier: &lt;code&gt;base&lt;/code&gt; → &lt;code&gt;lite&lt;/code&gt; → &lt;code&gt;rich&lt;/code&gt; → &lt;code&gt;ultra&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Exposes feature flags (motion, audio, rich images, smooth scrolling, etc.).&lt;/li&gt;
&lt;li&gt;Provides declarative components to gate content based on those flags or tiers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The tiering is intentionally defensive: if there’s any doubt, it stays in a lower tier. This avoids the common pitfall of optimistic enhancements that end up hurting users on constrained devices.&lt;/p&gt;
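&lt;p&gt;A minimal sketch of that conservative derivation (illustrative only — the real BCL heuristics weigh more signals): explicit user constraints always win, and any missing signal caps the tier:&lt;/p&gt;

```typescript
// Hypothetical sketch of conservative tier derivation. Shows the
// "any doubt => lower tier" principle, not BCL's actual heuristics.

type Tier = "base" | "lite" | "rich" | "ultra";

interface Signals {
  deviceMemoryGB?: number;        // navigator.deviceMemory (often missing)
  cpuCores?: number;              // navigator.hardwareConcurrency
  effectiveType?: string;         // connection.effectiveType: "slow-2g".."4g"
  saveData?: boolean;             // connection.saveData / Save-Data header
  prefersReducedMotion?: boolean; // media query
}

function deriveTier(s: Signals): Tier {
  // Explicit user constraints always win.
  if (s.saveData || s.prefersReducedMotion) return "base";
  // Missing signals are treated as worst-case, which caps the tier.
  const mem = s.deviceMemoryGB ?? 0;
  const cores = s.cpuCores ?? 0;
  const net = s.effectiveType ?? "unknown";
  if (net === "slow-2g" || net === "2g" || mem < 2) return "base";
  if (net === "3g" || mem < 4 || cores < 4) return "lite";
  if (mem >= 8 && cores >= 8 && net === "4g") return "ultra";
  return "rich";
}

console.log(deriveTier({ saveData: true, deviceMemoryGB: 16 })); // "base"
console.log(deriveTier({}));                                     // "base"
```

&lt;p&gt;Note how an empty signal set lands on &lt;code&gt;base&lt;/code&gt;: absence of evidence is treated as evidence of constraint.&lt;/p&gt;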

&lt;h3&gt;
  
  
  How It Works in Practice
&lt;/h3&gt;

&lt;p&gt;Here’s a simplified example from my game site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;CakeProvider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;CakeLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;CakeUpgrade&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;CakeWatch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// optional jank guard&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@shiftbloom-studio/birthday-cake-loading&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;HeroSection&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;CakeLayer&lt;/span&gt; &lt;span class="na"&gt;feature&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"motion"&lt;/span&gt; &lt;span class="na"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;StaticHero&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ParticleEmberHero&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt; &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Only mounts if motion is allowed */&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;CakeLayer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;AmbientAudio&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;CakeUpgrade&lt;/span&gt;
      &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"idle"&lt;/span&gt; &lt;span class="c1"&gt;// wait for idle time&lt;/span&gt;
      &lt;span class="na"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./AmbientNarration&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SilentVersion&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;FullAudioExperience&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;CakeUpgrade&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;CakeProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;CakeWatch&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt; &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Opt-in runtime jank detection */&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;HeroSection&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;AmbientAudio&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* rest of the page */&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;CakeProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On a low-end phone with Save-Data enabled → static hero, no audio, instant paint.&lt;/li&gt;
&lt;li&gt;On a high-end desktop → particles, narration, smooth upgrades after idle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Core Web Vitals improved noticeably, and the site finally felt fast across the board.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd9vwnj1aar0igksnnty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd9vwnj1aar0igksnnty.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Next.js Integration
&lt;/h3&gt;

&lt;p&gt;Since the game site uses Next.js App Router, I added server helpers to read Client Hints and bootstrap the tier on the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/layout.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next/headers&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getServerCakeBootstrapFromHeaders&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@shiftbloom-studio/birthday-cake-loading/server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;RootLayout&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;children&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bootstrap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getServerCakeBootstrapFromHeaders&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt; &lt;span class="na"&gt;lang&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;CakeProvider&lt;/span&gt; &lt;span class="na"&gt;bootstrap&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;bootstrap&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;CakeProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures the initial HTML already reflects the expected tier, avoiding a flash of incorrect content.&lt;/p&gt;
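&lt;p&gt;For intuition, here’s roughly what such a helper could derive from Client Hint headers (a sketch with hypothetical names — the post doesn’t show the helper’s internals): &lt;code&gt;Save-Data&lt;/code&gt;, &lt;code&gt;Device-Memory&lt;/code&gt;, and &lt;code&gt;ECT&lt;/code&gt; map straight onto a starting tier:&lt;/p&gt;

```typescript
// Hypothetical sketch of a server-side bootstrap from Client Hint headers.
// Names and thresholds are illustrative, not the library's actual internals.

type Tier = "base" | "lite" | "rich" | "ultra";

interface CakeBootstrap { tier: Tier; saveData: boolean }

// `get` mirrors the shape of Next.js' ReadonlyHeaders.get().
function bootstrapFromHeaders(get: (name: string) => string | null): CakeBootstrap {
  const saveData = (get("save-data") ?? "").toLowerCase() === "on";
  const memGB = Number(get("device-memory") ?? "0"); // Device-Memory hint, in GB
  const ect = get("ect") ?? "unknown";               // ECT hint: "slow-2g".."4g"

  let tier: Tier = "base"; // conservative default when hints are absent
  if (!saveData && memGB >= 4 && ect === "4g") tier = "rich";
  else if (!saveData && memGB >= 2 && (ect === "3g" || ect === "4g")) tier = "lite";
  return { tier, saveData };
}

const demo = bootstrapFromHeaders((n) => (n === "save-data" ? "on" : null));
console.log(demo.tier); // "base" — Save-Data pins the tier down
```

&lt;p&gt;The client can then upgrade from this server-derived starting point once richer runtime signals are available, without the HTML ever having promised more than the device asked for.&lt;/p&gt;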

&lt;h3&gt;
  
  
  A Note on the Development Process
&lt;/h3&gt;

&lt;p&gt;I used several AI assistants (GPT, Grok, Gemini) extensively during research and early prototyping. They were invaluable for quickly surveying browser APIs for device signals, comparing tiering strategies, and stress-testing edge cases. The speed of iteration was genuinely higher than working entirely solo. That said, every architectural decision, API shape, and line of production code was mine—I treated the AIs as knowledgeable pair programmers rather than code generators. The result feels like a very human library because it is.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Not Just Use Existing Solutions?
&lt;/h3&gt;

&lt;p&gt;I looked at libraries for feature detection, reduced-motion hooks, and lazy loading. None quite offered the full progressive-enhancement loop I needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative gating tied to a unified tier.&lt;/li&gt;
&lt;li&gt;Conservative defaults.&lt;/li&gt;
&lt;li&gt;Server-side bootstrap for Next.js.&lt;/li&gt;
&lt;li&gt;Opt-in runtime jank guard (&lt;code&gt;CakeWatch&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;BCL fills that gap without pulling in heavy dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8jiunn1u86jgmaa8xu0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8jiunn1u86jgmaa8xu0.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Current State &amp;amp; What’s Next
&lt;/h3&gt;

&lt;p&gt;The library is still young (v0.2.x as of this writing), but the core is stable and already powering my game site. It’s Apache-2.0 licensed, fully typed, and tree-shakeable.&lt;/p&gt;

&lt;p&gt;If you’re building anything with rich media—games, portfolios, marketing sites, dashboards with heavy animations—I’d love to hear whether this approach resonates. Issues, PRs, and war stories are all welcome.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/shiftbloom-studio/birthday-cake-loading" rel="noopener noreferrer"&gt;https://github.com/shiftbloom-studio/birthday-cake-loading&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;npm: &lt;a href="https://www.npmjs.com/package/@shiftbloom-studio/birthday-cake-loading" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/@shiftbloom-studio/birthday-cake-loading&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Live demo in the repo (examples/next-demo)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading. 🎂&lt;/p&gt;

&lt;p&gt;#react #nextjs #performance #accessibility #webdev #javascript #opensource #progressiveenhancement&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6zyq3w1hwx4cre9gyd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6zyq3w1hwx4cre9gyd3.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
