Likhit Kumar V P
I Cut My Context-Switch Recovery From 23 Minutes to 5 Seconds

The most expensive thing in software isn't bugs. It's forgetting.

A developer sits down at 9 AM. Fresh coffee. Terminal open. And then: What was I doing?

They run git status. Then git stash list. Grep for TODOs. Scroll shell history. Check which files are open in VS Code. Pull up GitHub for open PRs. Open Slack to see if anyone mentioned their branch.

Ten minutes pass before they write a single character of code.

This isn't a personal failing. It's a structural one. UC Irvine researchers found it takes 23 minutes and 15 seconds to fully regain deep focus after a single interruption. Let that sink in. An 8-hour workday produces 4.8 hours of actual output. The rest is recovery.

So I built a tool to collapse that recovery time to 5 seconds.

$ snapctx

────────────────────────────────────────────────────────

SnapContext 8:45 AM · nemotron-3-nano-30b

────────────────────────────────────────────────────────

◉ IN PROGRESS branch: feat/notifications

confidence: ●●●●○ 80%

3 unstaged · 2 staged · 1 untracked · 1 stash · 2 TODOs

────────────────────────────────────────────────────────
You were building the notification routing system.
NotificationRouter.ts is created but not yet wired
into app.ts. Two TODOs remain: "wire websocket handler"
and "add error middleware". Your next structural step
is connecting the router to the entry point.
────────────────────────────────────────────────────────

4.5s


One command. 5 seconds. Plain English.

"But doesn't Copilot already -" No.

Copilot writes code. ChatGPT answers questions. Neither one tells you what you were doing when you left.

SnapContext doesn't write code. It doesn't review code. It doesn't even read code. It's a bookmark for your brain - it reads your working tree structure, not your implementation.

  • Never reads file contents - only file names and line-count stats
  • Never sends code to AI - the prompt is pure structure
  • Secrets aren't redacted, they're EXCLUDED - if a shell command contains an API key, the entire line is dropped
  • Works fully offline - snapctx --provider ollama uses a local model, zero network calls

Your code never leaves your machine. Period.
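The exclude-don't-redact policy is simple to sketch. A rough illustration of dropping whole shell-history lines, with illustrative patterns and function names (not SnapContext's actual identifiers):

```typescript
// Patterns that mark a line as secret-bearing (illustrative, not exhaustive).
const SECRET_PATTERNS: RegExp[] = [
  /\b[A-Za-z_]*(?:KEY|TOKEN|SECRET|PASSWORD)[A-Za-z_]*\s*=/i, // env-var assignments
  /\bsk-[A-Za-z0-9-]{10,}/,                                   // API-key-shaped strings
  /\bBearer\s+\S+/i,                                          // auth headers
];

// Exclude, don't redact: a matching line is removed entirely,
// so no fragment of it can reach the prompt.
export function excludeSecretLines(history: string[]): string[] {
  return history.filter((line) => !SECRET_PATTERNS.some((p) => p.test(line)));
}
```

Redaction can leak through clever prompts; a line that never enters the pipeline can't.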

What it actually reads

10 collectors run in parallel. Each gets 5 seconds. One failure never crashes the rest (Promise.allSettled, not Promise.all).
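The fan-out pattern looks roughly like this (the 5-second budget and `Promise.allSettled` choice are from the article; the helper names are mine):

```typescript
type CollectorResult = { name: string; ok: boolean; data?: unknown };

// Race a collector against its time budget.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("collector timed out")), ms)
    ),
  ]);
}

export async function runCollectors(
  collectors: Record<string, () => Promise<unknown>>,
  budgetMs = 5000
): Promise<CollectorResult[]> {
  const names = Object.keys(collectors);
  // allSettled: one rejected or hung collector never sinks the others.
  const settled = await Promise.allSettled(
    names.map((n) => withTimeout(collectors[n](), budgetMs))
  );
  return settled.map((r, i) =>
    r.status === "fulfilled"
      ? { name: names[i], ok: true, data: r.value }
      : { name: names[i], ok: false }
  );
}
```

With `Promise.all`, the first rejection would discard every other collector's result; `allSettled` keeps the eight that worked even if two die.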

| Signal | What SnapContext sees | What it NEVER sees |
| --- | --- | --- |
| Git working tree | "3 files changed in src/auth/" | The actual code changes |
| Git stash | "stash@{0}: WIP on feat/auth (4 files)" | The stashed content |
| TODOs | "TODO: wire websocket handler" (from diff) | Surrounding code |
| Shell history | "ran npm test 12 times in 30 min" | Your secrets or env vars |
| IDE open files | "auth.ts, middleware.ts open in VS Code" | File contents |
| GitHub PRs | "PR #42: 2 approvals, CI passing" | Code diffs |
| Browser tabs | "localhost:3000, jwt.io, Stack Overflow" | Page content |
| AI sessions | "Claude Code session: 'auth refactor'" | Conversation content |
| PM tickets | "LINEAR-123: Implement OAuth" | Description body |

That's the data diet. Structure, not substance. Enough for an AI to say "you were building OAuth middleware and got stuck on the token refresh flow."

The part that prevents hallucination

Here's my favorite engineering decision: the confidence gate.

SnapContext has an engine called FIE (Feature Inference Engine) that scores every signal. It detects your state:

| State | What triggered it |
| --- | --- |
| in-progress | You have uncommitted changes |
| blocked | Zero changes but heavy shell activity (debugging loop) |
| stashed | Stashes exist, clean tree (you shelved work) |
| context-switch | Reflog shows you just switched branches |
| conflict | Merge conflicts detected |
| clean | Nothing going on |
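As a rough mental model, the table above is a priority-ordered rule list. A sketch, with illustrative field names and a made-up activity threshold (FIE's real scoring is richer than this):

```typescript
interface Signals {
  uncommittedChanges: number;
  stashes: number;
  shellCommandsLastHour: number;
  recentBranchSwitch: boolean;
  mergeConflicts: boolean;
}

type State =
  | "conflict"
  | "in-progress"
  | "context-switch"
  | "blocked"
  | "stashed"
  | "clean";

// First matching rule wins; conflicts outrank everything.
export function inferState(s: Signals): State {
  if (s.mergeConflicts) return "conflict";
  if (s.uncommittedChanges > 0) return "in-progress";
  if (s.recentBranchSwitch) return "context-switch";
  // No changes but heavy shell activity looks like a debugging loop.
  if (s.shellCommandsLastHour >= 20) return "blocked";
  if (s.stashes > 0) return "stashed";
  return "clean";
}
```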

If the confidence score drops below 25% and the state is clean, SnapContext refuses to call the AI. It says "nothing in progress" and exits. No hallucinated context. No invented narrative about what you "might" have been doing.
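The gate itself fits in a few lines. The 25% threshold is from the article; the shape of the briefing object is my guess:

```typescript
interface Briefing {
  state: string;
  confidence: number; // normalized to [0, 1]
}

// Below 25% confidence on a clean tree: refuse to invent a narrative.
export function shouldCallAI(b: Briefing): boolean {
  return !(b.confidence < 0.25 && b.state === "clean");
}
```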

In an era where every AI tool is racing to generate more, this one knows when to shut up.

It remembers what you don't

Every briefing gets saved to a local SQLite database (Node 23's built-in node:sqlite — zero extra dependencies).


$ snapctx history

Date      State       Module    Conf    Branch
just now  ◉ progress  history   ●●●○○   feat/sqlite
2h ago    ◉ progress  fie       ●●●●○   feat/sqlite
1d ago    ✓ clean     —         ○○○○○   main


See what changed since your last briefing — no AI call, instant:


$ snapctx diff

State in-progress → in-progress
Module fie → history
Unstaged files 3 → 7 (+4)
TODOs 1 → 0 (-1)

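Because both briefings are already structured rows, the diff is pure data comparison. A sketch of what an AI-free diff between two snapshots might look like (field names are illustrative):

```typescript
interface Snapshot {
  state: string;
  module: string;
  unstaged: number;
  todos: number;
}

// Compare the two latest snapshots field by field; no model call needed.
export function diffSnapshots(prev: Snapshot, curr: Snapshot): string[] {
  const lines: string[] = [];
  if (prev.state !== curr.state) lines.push(`State ${prev.state} → ${curr.state}`);
  if (prev.module !== curr.module) lines.push(`Module ${prev.module} → ${curr.module}`);
  for (const key of ["unstaged", "todos"] as const) {
    const delta = curr[key] - prev[key];
    if (delta !== 0) {
      const sign = delta > 0 ? "+" : "";
      lines.push(`${key} ${prev[key]} → ${curr[key]} (${sign}${delta})`);
    }
  }
  return lines;
}
```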

Generate a standup summary from today's activity:


$ snapctx eod

5 commits · 12 files · +486 · -6

Today you shipped the browser tab collector and ticket
tracking system. Five commits across three features...


Or open the web dashboard:


$ snapctx web
● http://localhost:7700


History browser, live streaming briefings via WebSocket, diff viewer, dark theme. All served from node:http — no React, no build step, no node_modules explosion.

Zero dependencies. On purpose.

The entire dependency list: Node.js built-ins. That's it.

  • node:sqlite for history
  • node:http for the web server
  • node:crypto for WebSocket handshake (raw RFC 6455)
  • node:child_process for git and shell commands

No Express. No better-sqlite3. No ws. No React. No Webpack.
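The "raw RFC 6455" part sounds scarier than it is: the whole WebSocket handshake boils down to one hash. This is the standard computation of the `Sec-WebSocket-Accept` header using only `node:crypto` (the GUID is fixed by the RFC, not invented here):

```typescript
import { createHash } from "node:crypto";

// Magic GUID defined by RFC 6455, section 1.3.
const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

// The server proves it speaks WebSocket by echoing back
// base64(SHA-1(client key + GUID)) in the 101 response.
export function webSocketAccept(secWebSocketKey: string): string {
  return createHash("sha1").update(secWebSocketKey + WS_GUID).digest("base64");
}
```

After that response, frames go over the same TCP socket, which is why `node:http` plus `node:crypto` is enough to stream briefings without the `ws` package.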

Free. Actually free.

Every default provider is genuinely free, no credit card, no trial that expires:

| Provider | Model | What you need |
| --- | --- | --- |
| openrouter | Nemotron 30B | Free key from openrouter.ai/keys |
| ollama | Llama 3.2 | Local install, fully offline |
| groq | Llama 3.3 70B | Free key from console.groq.com |
| huggingface | Mistral 7B | Free token from huggingface.co |

The auto mode tries each in order. If you want to go fully air-gapped:


ollama serve && ollama pull llama3.2
snapctx --provider ollama


No internet required. No telemetry. No analytics. Just your git repo and a local model.
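Under the hood, "tries each in order" is just a fallback loop. A sketch of what auto mode plausibly does (the provider interface here is my assumption, not SnapContext's actual API):

```typescript
type Provider = {
  name: string;
  generate: (prompt: string) => Promise<string>;
};

// Walk the provider list; first one that answers wins.
export async function firstWorkingProvider(
  providers: Provider[],
  prompt: string
): Promise<{ name: string; briefing: string }> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return { name: p.name, briefing: await p.generate(prompt) };
    } catch (e) {
      errors.push(`${p.name}: ${(e as Error).message}`); // move on to the next
    }
  }
  throw new Error(`all providers failed:\n${errors.join("\n")}`);
}
```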

Try it in 60 seconds


git clone https://github.com/Likhit-Kumar/SnapContext.git

cd SnapContext && npm install

# Free API key (no credit card): https://openrouter.ai/keys

export OPENROUTER_API_KEY=sk-or-v1-...

alias snapctx="bash $(pwd)/snapctx.sh"

# Run from any git project

cd ~/your-project && snapctx


GitHub: github.com/Likhit-Kumar/SnapContext

Star it if you've ever lost 20 minutes just remembering what you were doing.


Zero dependencies. MIT licensed. Try it.

Top comments (1)

Henry Godnick

The 23 minute stat from UC Irvine always hits hard. I've been using Claude Code heavily and the token costs add up fast when you're constantly losing context and re-prompting. Found a little menu bar app recently that tracks token usage in real time across providers and it was eye opening to see how much I was spending just on "where was I?" type prompts. Bookmarking SnapContext, this could pair really well with that workflow.