<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kumar Satvik</title>
    <description>The latest articles on DEV Community by Kumar Satvik (@krsatvik1).</description>
    <link>https://dev.to/krsatvik1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3328643%2F51c97fd7-29d3-4918-8d6a-15d2f7e6a097.jpeg</url>
      <title>DEV Community: Kumar Satvik</title>
      <link>https://dev.to/krsatvik1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/krsatvik1"/>
    <language>en</language>
    <item>
      <title>RefVault: a local-first design reference vault, powered by Gemma 4 26B MoE</title>
      <dc:creator>Kumar Satvik</dc:creator>
      <pubDate>Sun, 10 May 2026 04:57:58 +0000</pubDate>
      <link>https://dev.to/krsatvik1/refvault-a-local-first-design-reference-vault-powered-by-gemma-4-26b-moe-49fo</link>
      <guid>https://dev.to/krsatvik1/refvault-a-local-first-design-reference-vault-powered-by-gemma-4-26b-moe-49fo</guid>
      <description>&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RefVault&lt;/strong&gt; is a native macOS app that turns your screenshot folder into a searchable design reference library. Out of the box it watches &lt;code&gt;~/Desktop&lt;/code&gt; — exactly where macOS drops every &lt;code&gt;Cmd-Shift-4&lt;/code&gt; screenshot, so there's nothing for you to set up. Drop a screenshot the way you already do — from Pinterest, Dribbble, a competitor's landing page, anywhere — and Gemma 4 26B reads it locally, pulls out palette, typography, mood, layout, tags, and the URL on screen, then files it. When you need it weeks later, you search by sentence (&lt;code&gt;"minimal pricing serif"&lt;/code&gt;, &lt;code&gt;"i want some illustration references"&lt;/code&gt;) and the right screenshot is right there.&lt;/p&gt;

&lt;p&gt;I'm a designer. I bookmark pages, save Pinterest pins, and screenshot UI references constantly — and by the time I actually need a reference for a project, none of it is where I left it. Bookmarks in the wrong browser, Pinterest boards reorganized, screenshots buried on the Desktop with names like &lt;code&gt;Screenshot 2026-05-08 at 4.24.26 AM.png&lt;/code&gt;. RefVault is the thing I built so I'd stop losing them.&lt;/p&gt;

&lt;p&gt;Everything runs on the Mac. Nothing leaves the machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/releases/latest" rel="noopener noreferrer"&gt;↓ Download RefVault for macOS&lt;/a&gt;&lt;/strong&gt;  ·  &lt;strong&gt;&lt;a href="https://github.com/Krsatvik1/RefVault#readme" rel="noopener noreferrer"&gt;README&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A single walkthrough showing the whole loop end-to-end — taking a screenshot, RefVault auto-indexing it, the Dynamic-Island-style save toast, searching the library by sentence, and dragging a result out into another app:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1190854781" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;A few stills:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Save toast&lt;/th&gt;
&lt;th&gt;Already in library&lt;/th&gt;
&lt;th&gt;Drop to import&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9tyobkjugrt5ykutq90.png" width="800" height="355"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwld42l9yfnilea9jpbt.png" width="800" height="355"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2j3wp8eqky9lc7tqgjc.png" width="800" height="518"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/Krsatvik1/RefVault" rel="noopener noreferrer"&gt;github.com/Krsatvik1/RefVault&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download the signed &lt;code&gt;.app&lt;/code&gt; (free, ad-hoc signed — no Developer ID): &lt;a href="https://github.com/Krsatvik1/RefVault/releases/latest" rel="noopener noreferrer"&gt;latest release&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Built in Swift / SwiftUI as a single SwiftPM-wrapped &lt;code&gt;.app&lt;/code&gt;. The app bundles its own Ollama runtime and downloads Gemma 4 26B on first run, so end users don't need to install anything — drag the app into &lt;code&gt;/Applications&lt;/code&gt;, click "Open Anyway" once for Gatekeeper (since I don't have an Apple Developer account), and it works.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The release artifact ships as a &lt;strong&gt;&lt;code&gt;.zip&lt;/code&gt;, not a &lt;code&gt;.dmg&lt;/code&gt;&lt;/strong&gt;. macOS Sequoia (15+) added a Gatekeeper check on disk images themselves that flags ad-hoc-signed &lt;code&gt;.dmg&lt;/code&gt;s with a separate "Apple could not verify…" prompt at mount time, on top of the &lt;code&gt;.app&lt;/code&gt;'s own unidentified-developer prompt. A &lt;code&gt;.zip&lt;/code&gt; isn't subject to that check, so users only deal with one Privacy &amp;amp; Security override (for the &lt;code&gt;.app&lt;/code&gt;) instead of two. Safari auto-extracts the download; users see &lt;code&gt;RefVault.app&lt;/code&gt; and drag it straight into &lt;code&gt;/Applications&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How I Used Gemma 4
&lt;/h2&gt;

&lt;p&gt;I picked &lt;strong&gt;Gemma 4 26B MoE&lt;/strong&gt; — the Mixture-of-Experts variant Google describes as "designed for high-throughput, advanced reasoning." That framing fits RefVault almost word-for-word: indexing is high-throughput by nature (one image at a time, but a steady stream as the user takes screenshots), and the per-image work is genuinely a reasoning task — read the image, identify the design archetype, infer mood and typography, distinguish browser chrome from page content. The MoE architecture also means RefVault gets reasoning quality close to the 31B Dense variant while staying inside the 24 GB unified-memory budget of a base-config M-series Mac.&lt;/p&gt;

&lt;p&gt;Earlier builds of RefVault used the smaller E4B variant for speed, but it got palette and typography wrong often enough that the library became noisy. When you search for "minimal pricing serif" and the screenshot is mistagged as "sans-serif", the whole product breaks. The 26B MoE produces tags I trust on the first read.&lt;/p&gt;

&lt;p&gt;A few engineering decisions that made Gemma 4 work well for this:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The indexing pipeline — granular + parallel
&lt;/h3&gt;

&lt;p&gt;Every screenshot first runs through a &lt;strong&gt;relevance gate&lt;/strong&gt; (&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/relevance.txt" rel="noopener noreferrer"&gt;&lt;code&gt;relevance.txt&lt;/code&gt;&lt;/a&gt;) — one short Gemma call that decides whether the image is a design reference. If Gemma says it's a chat window, an error dialog, a code editor, or a random photo, RefVault drops the screenshot before any extraction runs and nothing else gets called.&lt;/p&gt;
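&lt;p&gt;For the curious, the gate is a single plain vision call against Ollama's standard &lt;code&gt;/api/generate&lt;/code&gt; endpoint. A minimal Python sketch (RefVault itself is Swift; the model tag and prompt text here are placeholders, not the shipped ones):&lt;/p&gt;

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

# Placeholder prompt; the real one ships in relevance.txt.
RELEVANCE_PROMPT = "Is this screenshot a design reference? Answer YES or NO."

def build_relevance_request(image_bytes, model="gemma-4-26b"):
    # Build the JSON payload for Ollama's /api/generate vision endpoint.
    return {
        "model": model,                    # placeholder model tag
        "prompt": RELEVANCE_PROMPT,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

def is_design_reference(image_bytes):
    # One short call; a NO answer drops the screenshot before any extraction.
    body = json.dumps(build_relevance_request(image_bytes)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"]
    return answer.strip().upper().startswith("YES")
```

&lt;p&gt;Separating payload construction from the network call keeps the gate testable without a running model.&lt;/p&gt;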

&lt;p&gt;If it passes, the agent fires &lt;strong&gt;seven granular calls in parallel&lt;/strong&gt;, one per axis — each its own short focused prompt that does exactly one thing:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;axis&lt;/th&gt;
&lt;th&gt;prompt&lt;/th&gt;
&lt;th&gt;extracts&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;style&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/metadata_style.txt" rel="noopener noreferrer"&gt;&lt;code&gt;metadata_style.txt&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;one of &lt;code&gt;minimal&lt;/code&gt;, &lt;code&gt;brutalist&lt;/code&gt;, &lt;code&gt;editorial&lt;/code&gt;, &lt;code&gt;playful&lt;/code&gt;, …&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;typography&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/metadata_typography.txt" rel="noopener noreferrer"&gt;&lt;code&gt;metadata_typography.txt&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;per-slot type (headings / bodies / others)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mood&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/metadata_mood.txt" rel="noopener noreferrer"&gt;&lt;code&gt;metadata_mood.txt&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2–3 adjectives&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;layout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/metadata_layout.txt" rel="noopener noreferrer"&gt;&lt;code&gt;metadata_layout.txt&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;one of &lt;code&gt;hero&lt;/code&gt;, &lt;code&gt;pricing&lt;/code&gt;, &lt;code&gt;dashboard&lt;/code&gt;, &lt;code&gt;landing&lt;/code&gt;, …&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;tags&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/metadata_tags.txt" rel="noopener noreferrer"&gt;&lt;code&gt;metadata_tags.txt&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;5–15 single-word tags&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;color&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/colors.txt" rel="noopener noreferrer"&gt;&lt;code&gt;colors.txt&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;primary / secondary / accent / full palette as hex&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;url&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/Krsatvik1/RefVault/blob/main/Sources/RefVault/Resources/prompts/url.txt" rel="noopener noreferrer"&gt;&lt;code&gt;url.txt&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;the URL on screen — only when the relevance gate flagged the image as a browser shot&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So a typical indexing path is &lt;strong&gt;1 relevance call + up to 7 metadata calls&lt;/strong&gt; (six always run; URL is conditional). The metadata calls fan out concurrently against the model Ollama already has loaded and warm, so total wall-clock time barely grows past a single call's worth.&lt;/p&gt;
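&lt;p&gt;The fan-out shape, sketched in Python (RefVault itself is Swift; &lt;code&gt;ask_gemma&lt;/code&gt; is a hypothetical stand-in for one focused Ollama request per axis):&lt;/p&gt;

```python
import asyncio

ALWAYS_RUN = ["style", "typography", "mood", "layout", "tags", "color"]

async def ask_gemma(axis, image):
    # Stand-in for one short, focused Ollama vision call per axis.
    await asyncio.sleep(0)
    return f"{axis}-result"

async def extract_metadata(image, is_browser_shot=False):
    # Six axes always run; the URL prompt only fires for browser shots.
    axes = ALWAYS_RUN + (["url"] if is_browser_shot else [])
    results = await asyncio.gather(*(ask_gemma(a, image) for a in axes))
    return dict(zip(axes, results))
```

&lt;p&gt;Because every axis is its own awaitable, the conditional URL call is just one extra element in the list, and the total latency is roughly the slowest single call rather than the sum.&lt;/p&gt;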

&lt;p&gt;Across screenshots, indexing is &lt;strong&gt;serial&lt;/strong&gt; — a single queue runs them one at a time. If five screenshots land in the watched folder at once, the first goes through the relevance + parallel-extraction pipeline end-to-end, then the second, and so on. This keeps Ollama's GPU memory usage predictable on M-series Macs (the 26B model takes ~16 GB at runtime, so trying to run two indexes concurrently would thrash) and lets the in-app toast show clean per-image progress.&lt;/p&gt;
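&lt;p&gt;That serial ordering can be sketched as a single-worker queue (hedged Python again, with &lt;code&gt;index_one&lt;/code&gt; standing in for the whole per-image pipeline):&lt;/p&gt;

```python
import asyncio

async def index_one(path, log):
    # Stand-in for the full per-image pipeline (relevance gate + extraction).
    log.append(("start", path))
    await asyncio.sleep(0)
    log.append(("done", path))

async def drain(paths):
    # One worker, one FIFO queue: images index strictly one at a time,
    # even when several screenshots land in the watched folder at once.
    queue = asyncio.Queue()
    for p in paths:
        queue.put_nowait(p)
    log = []
    while not queue.empty():
        await index_one(await queue.get(), log)
    return log
```

&lt;p&gt;Within one image the seven calls still run in parallel; the queue only serializes across images, which is what keeps peak memory flat.&lt;/p&gt;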

&lt;p&gt;Why split it up at all? Each prompt is small and specific — when each call asks for exactly one thing, the model can't quietly give up on a hard sub-task (mood and typography are the usual culprits) while padding out the easy ones. I A/B'd this against a single combined prompt inside an in-app Debug view:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u0niq9jqif8h3fgll05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u0niq9jqif8h3fgll05.png" alt="Granular vs combined prompt benchmark" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The granular-parallel pipeline produced consistently sharper per-field outputs at comparable wall-clock time on the M4. When forced to answer all axes in one response, the model leans on the easy ones and gets sloppy on the hard ones — separating them keeps each answer crisp.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Why 26B MoE, not E4B
&lt;/h3&gt;

&lt;p&gt;Same image, same prompts, two model variants:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5fxnt05bs6cfkh1kv9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5fxnt05bs6cfkh1kv9t.png" alt="E4B vs 26B MoE comparison" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 26B MoE output recognizes "high-end, editorial" mood and richer layout language ("modern, sophisticated, large-scale-typography, monochromatic, asymmetric, minimalist") where the E4B variant returns thinner, generic tags. Indexing happens once in the background, so model quality matters more than raw speed for this use case — and the MoE design means the quality jump comes without a proportional jump in active-parameter cost during inference.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Search uses the same model — no embeddings
&lt;/h3&gt;

&lt;p&gt;I considered adding a separate embedding model for semantic search and decided against it for a practical reason: shipping one extra model means another download on first run (the user already waits for ~15 GB of Gemma 4 26B), another set of weights resident in RAM, and another moving piece to keep version-aligned with Gemma. Reusing the same model the user already has on disk keeps the install one-shot and the runtime memory budget single-tenant.&lt;/p&gt;

&lt;p&gt;Instead, the user's sentence goes through one short Gemma prompt that rewrites it into a structured filter:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="s2"&gt;"i want some illustration references"&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="err"&gt;→&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tags_any"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"illustration"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="err"&gt;→&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;SQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;against&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;local&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;SQLite&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;library&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One Gemma call per query, no extra model to sync, search stays offline and snappy.&lt;/p&gt;
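&lt;p&gt;To make the filter step concrete, here is a hedged Python sketch of turning that structured filter into a parameterized SQLite query. The &lt;code&gt;tags_any&lt;/code&gt; field and &lt;code&gt;screenshots&lt;/code&gt; table are illustrative, not RefVault's real schema; binding parameters instead of interpolating matters because the tag values come out of a model:&lt;/p&gt;

```python
import json

def filter_to_sql(filter_json):
    # Translate a Gemma-produced filter into SQL + bound parameters.
    f = json.loads(filter_json)
    clauses, params = [], []
    for tag in f.get("tags_any", []):
        clauses.append("tags LIKE ?")      # bound, never string-interpolated
        params.append(f"%{tag}%")
    where = " OR ".join(clauses) if clauses else "1"
    return f"SELECT path FROM screenshots WHERE {where}", params
```

&lt;p&gt;The returned pair plugs straight into &lt;code&gt;sqlite3&lt;/code&gt;-style &lt;code&gt;execute(sql, params)&lt;/code&gt;, so model output never touches the SQL string itself.&lt;/p&gt;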

&lt;h3&gt;
  
  
  Performance on a MacBook Air M4 (24 GB RAM)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Indexing:&lt;/strong&gt; ~60–100 seconds per screenshot (one-time, in the background)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search:&lt;/strong&gt; ~20 seconds per query (one Gemma call to parse, then a local SQLite hit)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both numbers vary with the chip and the model variant; a faster Mac or a smaller variant brings them down.&lt;/p&gt;




&lt;p&gt;Full prompts, code, and the build pipeline are in the repo. Built solo in Swift / SwiftUI / Ollama for the Gemma 4 Challenge.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>gemmachallenge</category>
      <category>gemma</category>
    </item>
  </channel>
</rss>
