<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Amariah Kamau</title>
    <description>The latest articles on DEV Community by Amariah Kamau (@amariahak).</description>
    <link>https://dev.to/amariahak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3394455%2F4a3101b1-b5da-4c38-8738-7d8d8966b543.png</url>
      <title>DEV Community: Amariah Kamau</title>
      <link>https://dev.to/amariahak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/amariahak"/>
    <language>en</language>
    <item>
      <title>How I reduced AI codebase context from 100K to 5K tokens using a graph-based RAG</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Thu, 30 Apr 2026 18:43:26 +0000</pubDate>
      <link>https://dev.to/amariahak/how-i-reduced-ai-codebase-context-from-100k-to-5k-tokens-using-a-graph-based-rag-4dbd</link>
      <guid>https://dev.to/amariahak/how-i-reduced-ai-codebase-context-from-100k-to-5k-tokens-using-a-graph-based-rag-4dbd</guid>
      <description>&lt;p&gt;Most AI coding tools lie to you about context.&lt;/p&gt;

&lt;p&gt;They say "I understand your codebase." What they actually do is dump as many files as possible into the context window and hope the model figures it out. That works on a 3-file project. It falls apart on anything real.&lt;/p&gt;

&lt;p&gt;Here's the problem I kept hitting: a medium-sized production codebase hits ~100K tokens when you try to feed it to an LLM. That's expensive, slow, and surprisingly lossy — models start hallucinating relationships between files that don't exist, missing the ones that do.&lt;br&gt;
So I built a different approach inside Atlarix. Here's exactly how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Parse the codebase into a typed graph (Round-Trip Engineering)
&lt;/h2&gt;

&lt;p&gt;When you open a project in Atlarix, it runs a Tree-sitter parser over every file to build an AST. Instead of storing raw text, it extracts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every function, class, interface, and type&lt;/li&gt;
&lt;li&gt;Import/export relationships between files&lt;/li&gt;
&lt;li&gt;Call relationships between functions&lt;/li&gt;
&lt;li&gt;File-level dependency edges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gets stored as a typed node/edge graph in SQLite — what we call the Blueprint. One-time cost. Persistent across sessions.&lt;/p&gt;
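
&lt;p&gt;The post doesn't show Atlarix's schema, so here's a minimal sketch of the idea: a typed node/edge graph persisted in SQLite. Table and column names are illustrative assumptions, not the real Blueprint schema:&lt;/p&gt;

```python
import sqlite3

# Illustrative schema: typed nodes (functions, classes, files) and typed
# edges (imports, calls, contains). Not Atlarix's actual Blueprint schema.
SCHEMA = """
CREATE TABLE IF NOT EXISTS nodes (
    id   INTEGER PRIMARY KEY,
    kind TEXT NOT NULL,   -- 'function' | 'class' | 'interface' | 'file' ...
    name TEXT NOT NULL,
    path TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS edges (
    src  INTEGER NOT NULL REFERENCES nodes(id),
    dst  INTEGER NOT NULL REFERENCES nodes(id),
    kind TEXT NOT NULL    -- 'imports' | 'calls' | 'contains'
);
"""

def open_blueprint(db_path=":memory:"):
    conn = sqlite3.connect(db_path)
    conn.executescript(SCHEMA)
    return conn

def add_node(conn, kind, name, path):
    cur = conn.execute(
        "INSERT INTO nodes (kind, name, path) VALUES (?, ?, ?)", (kind, name, path)
    )
    return cur.lastrowid

def add_edge(conn, src, dst, kind):
    conn.execute("INSERT INTO edges (src, dst, kind) VALUES (?, ?, ?)", (src, dst, kind))

# Example: auth.login() calls db.get_user()
conn = open_blueprint()
login = add_node(conn, "function", "login", "src/auth.py")
get_user = add_node(conn, "function", "get_user", "src/db.py")
add_edge(conn, login, get_user, "calls")
```

&lt;p&gt;Once the graph exists, a question like "what does login call?" is a join over the edges table instead of a file scan.&lt;/p&gt;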

&lt;h2&gt;
  
  
  Step 2: Query the graph, not the files
&lt;/h2&gt;

&lt;p&gt;When you ask Atlarix something — "why is this API returning null?" or "add authentication to the user route" — it doesn't re-read your files. It queries the Blueprint graph.&lt;br&gt;
We use BM25 scoring to rank nodes by relevance to the query. Only the top-scoring nodes get passed to the LLM. Everything else stays in SQLite.&lt;br&gt;
Result: instead of ~100K tokens, the average query uses ~5K tokens. That's a 95% reduction.&lt;/p&gt;
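
&lt;p&gt;The ranking step can be sketched from scratch. This is plain Okapi BM25 over node summaries; Atlarix's actual tokenizer and parameters aren't described in the post, so treat this purely as an illustration of the scoring idea:&lt;/p&gt;

```python
import math
from collections import Counter

def bm25_rank(docs, query, k1=1.5, b=0.75):
    """Return document indices ranked by Okapi BM25 relevance to the query."""
    tokenized = [doc.lower().split() for doc in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    # Document frequency per term
    df = Counter()
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for i, d in enumerate(tokenized):
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append((score, i))
    return [i for _, i in sorted(scores, reverse=True)]

# Toy "node summaries" standing in for Blueprint nodes
nodes = [
    "function getUser queries the users table",
    "function renderChart draws the dashboard",
    "function login in the auth service validates credentials and returns a user token",
]
ranking = bm25_rank(nodes, "auth login user")
```

&lt;p&gt;Only the top of that ranking gets handed to the LLM; everything else stays on disk.&lt;/p&gt;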

&lt;h2&gt;
  
  
  Step 3: Hierarchical context for complex queries
&lt;/h2&gt;

&lt;p&gt;For simple queries, BM25 node retrieval is enough. For complex ones — multi-file refactors, architecture questions — we use a three-layer hierarchy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mermaid diagram&lt;/strong&gt; — a persistent high-level map of the whole codebase, always in context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BM25-scored Blueprint nodes&lt;/strong&gt; — targeted retrieval for the specific query&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast-model compression&lt;/strong&gt; — if context hits 70% capacity, a fast model compresses the least relevant nodes before passing to the main model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps context tight regardless of project size.&lt;/p&gt;
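
&lt;p&gt;The 70%-capacity rule can be sketched roughly like this. A truncating stub stands in for the fast compression model, and the token estimate is deliberately crude:&lt;/p&gt;

```python
# Sketch of the 70%-capacity rule: keep top-scoring nodes verbatim, hand the
# least relevant ones to a summarizer. The real system calls a fast LLM;
# here a truncating stub stands in for it.
def summarize_stub(text, max_words=8):
    return " ".join(text.split()[:max_words]) + " ..."

def pack_context(scored_nodes, budget_tokens, threshold=0.7):
    """scored_nodes: list of (relevance_score, node_text) pairs."""
    used = 0
    kept, compressed = [], []
    for score, text in sorted(scored_nodes, reverse=True):
        cost = len(text.split())  # crude stand-in for a token count
        if used + cost <= budget_tokens * threshold:
            kept.append(text)
            used += cost
        else:
            compressed.append(summarize_stub(text))
    return kept, compressed

kept, compressed = pack_context(
    [(0.9, "def login(): check password"), (0.2, "def render_chart(): draw axes")],
    budget_tokens=6,
)
```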

&lt;h2&gt;
  
  
  Step 4: Provider-agnostic by design
&lt;/h2&gt;

&lt;p&gt;The Blueprint RAG layer sits beneath every AI provider. Whether you're using GPT-4o, Claude, Gemini, Groq, or a local Ollama model — the same 5K token context gets served. You're not paying for the model to re-read your whole codebase on every message.&lt;/p&gt;
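
&lt;p&gt;"Provider-agnostic" mostly means the retrieval layer doesn't know or care who consumes its output. A minimal sketch of that boundary (the names and the Protocol here are illustrative, not Atlarix's API):&lt;/p&gt;

```python
from typing import Protocol

class ChatProvider(Protocol):
    """Anything that can answer a prompt: GPT-4o, Claude, Gemini, a local Ollama model."""
    def complete(self, system: str, user: str) -> str: ...

def ask(provider: ChatProvider, retrieved_context: str, question: str) -> str:
    # The same ~5K-token retrieved context is served no matter which provider answers.
    system = "You are a coding assistant. Context:\n" + retrieved_context
    return provider.complete(system, question)

class EchoProvider:
    """Stand-in provider so the boundary can be exercised without an API key."""
    def complete(self, system: str, user: str) -> str:
        return f"[{len(system)} context chars] {user}"

answer = ask(EchoProvider(), "def login(): ...", "why does login return null?")
```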

&lt;h2&gt;
  
  
  Why this matters beyond token cost
&lt;/h2&gt;

&lt;p&gt;The token reduction is the headline number. But the real win is accuracy.&lt;br&gt;
When you give an LLM 100K tokens, its attention is spread thin across all of it. The file you actually care about is competing with 200 other files for attention. With 5K targeted tokens, the model works with exactly what's relevant. Responses are more precise, edits land in the right place, and hallucinated file paths basically disappear.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Atlarix v7 added parallel agents (Research, Architect, Builder, Reviewer, Debugger) on top of this foundation. Each agent uses the same Blueprint RAG layer — so even autonomous multi-agent builds stay within tight context budgets.&lt;br&gt;
We also just shipped Windows support today — so Atlarix now runs on Mac, Windows, and Linux.&lt;br&gt;
If you're building something where AI context management is a bottleneck, I'd love to compare notes. Try it at atlarix.dev or ask me anything in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>buildinpublic</category>
      <category>programming</category>
    </item>
    <item>
      <title>What two hackathons taught me about agent architecture — and how it's reshaping Atlarix</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Fri, 10 Apr 2026 18:05:45 +0000</pubDate>
      <link>https://dev.to/amariahak/what-two-hackathons-taught-me-about-agent-architecture-and-how-its-reshaping-atlarix-36ch</link>
      <guid>https://dev.to/amariahak/what-two-hackathons-taught-me-about-agent-architecture-and-how-its-reshaping-atlarix-36ch</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe97kbtca8dlmbpuxe61s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe97kbtca8dlmbpuxe61s.png" alt=" " width="800" height="509"&gt;&lt;/a&gt;I recently competed in two hackathons back to back: the Amazon Nova AI Hackathon (virtual, won the Bonus Blog Post Prize) and the Lua x Antler "Building Agent-First Businesses" event in Nairobi (physical). Both cracked open how I was thinking about agents in ways I didn't expect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Amazon Nova lesson: a model is only as powerful as its capabilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The hackathon required building with Amazon Nova. My instinct going in was that model choice was the primary variable — pick the best model, get the best results. Wrong.&lt;/p&gt;

&lt;p&gt;What actually moved the needle was what I wrapped around the model: the tools it could call, the context it had access to, how I structured the agent loop. Nova performed at a high level not because of raw benchmark numbers but because I gave it enough surface area to work with.&lt;/p&gt;

&lt;p&gt;This directly influenced how I think about Atlarix's Compass tier system. Compass routes users to Fast / Balanced / Thinking model tiers via OpenRouter — but the tier is almost secondary to the tooling layer underneath. 57 tools, RTE codebase parsing into a SQLite knowledge graph, multi-agent coordination. That's the actual product. The model slot is interchangeable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Nairobi lesson: our definition of "agent" was too narrow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the Lua x Antler event, I saw builders shipping agent-first products that weren't just "chat plus tools." These were autonomous systems designed around specific domains — deeply customized, operator-light, solving massive real-world use cases across the continent. The technical depth in that room reset my benchmark.&lt;br&gt;
It pushed me to think harder about what "agent mode" actually means in Atlarix. Right now we have Ask / Plan / Build / Debug / Review. That's a good start. But the next version needs to think more seriously about customization depth — how specialized can an agent get for a specific codebase, workflow, or domain?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2h07n6s6cdwrbbl407k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2h07n6s6cdwrbbl407k.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb6gz694xglpbjnkbx2u.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb6gz694xglpbjnkbx2u.jpeg" alt=" " width="576" height="1280"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5av6qwwyw4rf75urzn0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5av6qwwyw4rf75urzn0.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
What's next for Atlarix&lt;br&gt;
We're building the next version informed by both of these. The core thesis stays: Your Architecture. Your Model. Your Sovereignty. But we're pushing further into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deeper agent specialization per workflow&lt;/li&gt;
&lt;li&gt;First-class support for African AI models as they come online&lt;/li&gt;
&lt;li&gt;Keeping data local and sovereign by default&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building with agents or thinking about model-agnostic tooling, I'd love to compare notes. Atlarix is open for early access at atlarix.dev.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>devtools</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>The Private Sovereign AI IDE — Why I built Atlarix differently</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Sun, 29 Mar 2026 19:13:08 +0000</pubDate>
      <link>https://dev.to/amariahak/the-private-sovereign-ai-ide-why-i-built-atlarix-differently-1839</link>
      <guid>https://dev.to/amariahak/the-private-sovereign-ai-ide-why-i-built-atlarix-differently-1839</guid>
      <description>&lt;p&gt;There's an assumption baked into every major AI coding tool right now.&lt;br&gt;&lt;br&gt;
That you have a $20/month subscription to an American AI company. That you have a fast, stable connection. That the models worth using are the ones coming out of San Francisco.&lt;/p&gt;

&lt;p&gt;I'm building software from Nairobi. That assumption doesn't hold.&lt;/p&gt;

&lt;p&gt;So I built Atlarix differently — and after a year of building, &lt;strong&gt;v5.1.0&lt;/strong&gt; is the version that finally says out loud what it was always trying to be: &lt;strong&gt;The Private Sovereign AI IDE&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What that actually means
&lt;/h2&gt;

&lt;p&gt;Sovereign doesn't mean offline-only or anti-cloud. It means &lt;strong&gt;you decide&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
You decide which model runs your code.&lt;br&gt;&lt;br&gt;
You decide what context gets sent.&lt;br&gt;&lt;br&gt;
You decide whether your keys live on your machine or in a provider's system.&lt;/p&gt;

&lt;p&gt;Atlarix is built around three ideas that most AI coding tools don't share:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Blueprint — your codebase as a living graph
&lt;/h3&gt;

&lt;p&gt;Instead of sending files per query, Atlarix parses your codebase into a persistent node/edge graph stored locally in SQLite. That graph becomes the AI's long-term memory. It doesn't reset between sessions. It understands your architecture the way a colleague would — not by re-reading everything each time, but because it already knows.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Any model — genuinely
&lt;/h3&gt;

&lt;p&gt;Cloud providers via BYOK. Local models via Ollama and LM Studio. And now &lt;strong&gt;Compass&lt;/strong&gt; — three built-in model tiers (Fast, Balanced, Thinking) that work the moment you open the app. No API key. No configuration. Just open and build.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The tool carries the intelligence
&lt;/h3&gt;

&lt;p&gt;This is the core thesis. Atlarix feeds the model exactly what it needs, when it needs it, scoped by your &lt;code&gt;.atlarixignore&lt;/code&gt; and Blueprint context tiers. A lighter model with good context beats a frontier model flying blind. That's not a consolation prize — that's the architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Work modes that mean something
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ask&lt;/strong&gt; — read-only. No code changes, no terminal. Research, explore, understand.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan&lt;/strong&gt; — map out what needs to happen before anything gets written.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt; — full tools, explicit approval queue before execution.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug&lt;/strong&gt; — focused on what's broken and why.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review&lt;/strong&gt; — read your code the way a reviewer would.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each mode binds to a Blueprint context tier. The AI isn't just switching personality — it's switching what it knows and what it's allowed to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why sovereignty matters right now
&lt;/h2&gt;

&lt;p&gt;AI coding tools are becoming infrastructure. The question of which model runs your code, who sees your prompts, and whether you can switch providers without rebuilding your workflow — these are not small questions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Atlarix's answer&lt;/strong&gt;: your architecture, your model, your sovereignty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v5.1.0 is live.&lt;/strong&gt; Free to start. Native desktop — Mac, Windows, Linux.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://atlarix.dev" rel="noopener noreferrer"&gt;atlarix.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I built this from Nairobi. If you try it, tell me honestly what's missing. That feedback is worth more than any press right now.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>ai</category>
      <category>devtools</category>
      <category>programming</category>
    </item>
    <item>
      <title>Atlarix v3.7 is live — now Apple Notarized</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Fri, 06 Mar 2026 15:12:43 +0000</pubDate>
      <link>https://dev.to/amariahak/atlarix-v37-is-live-now-apple-notarized-o8a</link>
      <guid>https://dev.to/amariahak/atlarix-v37-is-live-now-apple-notarized-o8a</guid>
      <description>&lt;p&gt;After months of building, testing, and waiting on &lt;br&gt;
Apple's notarization process — Atlarix v3.7 is &lt;br&gt;
officially live and Apple Verified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Atlarix?&lt;/strong&gt;&lt;br&gt;
An AI coding copilot that actually understands your entire codebase. Not just the open file. The whole thing — visually.&lt;/p&gt;

&lt;p&gt;Here's what makes it different:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blueprint + RTE Parsing&lt;/strong&gt;&lt;br&gt;
Atlarix parses your project once using Round-Trip Engineering. It builds a live architecture diagram (Blueprint) and uses it as a RAG knowledge base. Every AI response comes from the graph — not from scanning 100K tokens of raw files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8 AI Providers + Local Models&lt;/strong&gt;&lt;br&gt;
Claude, GPT-4, Gemini, Groq, Mistral, xAI, OpenRouter, Together AI. Plus Ollama and LM Studio for fully offline, private coding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent System&lt;/strong&gt;&lt;br&gt;
Research, Architect, Builder, Reviewer — each with dedicated tools and structured delegation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v3.7 specifically:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apple Notarized and code-signed (Mac)&lt;/li&gt;
&lt;li&gt;Sentry error monitoring&lt;/li&gt;
&lt;li&gt;PostHog optional analytics (consent-based)&lt;/li&gt;
&lt;li&gt;Two-stage auto router&lt;/li&gt;
&lt;li&gt;Blueprint enricher&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This wouldn't have been possible without every person who downloaded and used Atlarix before it was polished. To all our active users — to many more achievements 🥳&lt;/p&gt;

&lt;p&gt;Free to try → &lt;a href="https://atlarix.dev" rel="noopener noreferrer"&gt;https://atlarix.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;#devtools #buildinpublic #AI #MacOS&lt;/p&gt;

</description>
      <category>buildinpublic</category>
      <category>devtools</category>
      <category>ai</category>
    </item>
    <item>
      <title>I Got Tired of Re-Explaining My Codebase to AI Every Single Session</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Wed, 18 Feb 2026 19:57:09 +0000</pubDate>
      <link>https://dev.to/amariahak/i-got-tired-of-re-explaining-my-codebase-to-ai-every-single-session-10dk</link>
      <guid>https://dev.to/amariahak/i-got-tired-of-re-explaining-my-codebase-to-ai-every-single-session-10dk</guid>
      <description>&lt;p&gt;There's a specific frustration every developer using AI coding tools knows.&lt;/p&gt;

&lt;p&gt;You open a project you've been on for months. You ask the AI something reasonable — "how does our auth flow connect to the user service?" — and it either makes something up, asks you to paste files, or tells you it doesn't have access to your codebase.&lt;/p&gt;

&lt;p&gt;So you paste the files. You explain the structure. You give it context. It helps. You close the laptop.&lt;/p&gt;

&lt;p&gt;Next session: same thing. From scratch. Every time.&lt;/p&gt;

&lt;p&gt;That's what pushed me to build &lt;a href="https://atlarix.dev" rel="noopener noreferrer"&gt;Atlarix&lt;/a&gt;. Not to make another AI chat interface, but to fix this specific problem — the AI having no real, persistent understanding of your project.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Root Issue: Code Isn't a List of Files
&lt;/h2&gt;

&lt;p&gt;Every tool that tries to "understand your codebase" by dumping raw files into context is solving the wrong problem.&lt;/p&gt;

&lt;p&gt;Your codebase isn't a flat list of files. It's a graph. Functions call functions. API routes hit services. Services write to databases. Webhooks trigger workers. A class inherits from a base that three other classes also inherit from.&lt;/p&gt;

&lt;p&gt;When a senior engineer who's been on a project for a year answers a question, they're not re-reading files. They're querying a mental graph they've built up. The question I kept asking myself was: what if the AI had that graph too?&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built: Parse Once, Query Forever
&lt;/h2&gt;

&lt;p&gt;Here's what Atlarix actually does when you open a project.&lt;/p&gt;

&lt;p&gt;It runs a parser across your codebase — TypeScript and Python right now, more coming — and extracts every meaningful node: API endpoints, functions, classes, database operations, webhooks, scheduled jobs, third-party calls. Each node gets typed and tagged. Then it builds a graph from those nodes and their relationships, and caches it as a &lt;strong&gt;Blueprint&lt;/strong&gt; at &lt;code&gt;~/.atlarix/blueprints/{projectHash}/&lt;/code&gt;.&lt;/p&gt;
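
&lt;p&gt;The post leaves &lt;code&gt;{projectHash}&lt;/code&gt; abstract. One plausible way to fill that slot, purely as an illustration (Atlarix's real scheme isn't shown), is a stable hash of the project's absolute path:&lt;/p&gt;

```python
import hashlib
from pathlib import Path

def blueprint_dir(project_root: str) -> Path:
    # Hypothetical: derive the post's {projectHash} slot from a stable hash
    # of the project's absolute path. Not Atlarix's actual scheme.
    digest = hashlib.sha256(str(Path(project_root).resolve()).encode()).hexdigest()[:16]
    return Path.home() / ".atlarix" / "blueprints" / digest

cache = blueprint_dir("/tmp/myproject")
```

&lt;p&gt;A path-derived hash means the same project always maps to the same cache, so the graph survives across sessions.&lt;/p&gt;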

&lt;p&gt;Full parse on most projects: under 30 seconds.&lt;/p&gt;

&lt;p&gt;After that, when you ask the AI something, it doesn't scan files. It queries the graph. Finds the relevant nodes. Injects only those into context. We're talking ~5K tokens instead of 100K. A file watcher updates affected nodes on save, so the graph stays current without you doing anything.&lt;/p&gt;

&lt;p&gt;The practical difference:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before: Ask question → AI scans everything → 100K tokens → slow, expensive, confused
After:  Ask question → Query graph → inject relevant nodes → 5K tokens → fast, accurate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the core of what we call RTE + RAG — Round-Trip Engineering to build the graph, Retrieval-Augmented Generation to query it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Part I'm Most Proud Of: Project Memory That Actually Persists
&lt;/h2&gt;

&lt;p&gt;The RAG system solves the "understanding" problem. But there's a second problem — remembering &lt;em&gt;decisions&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Why did you choose this database? Why is that service structured that way? What's the rule about how auth tokens are handled?&lt;/p&gt;

&lt;p&gt;Atlarix writes to &lt;code&gt;.atlarix/memory.md&lt;/code&gt; automatically during context compaction (when a conversation gets long). When you start a new session, the AI reads it first. It's just a markdown file — you can edit it manually, version control it, whatever you want.&lt;/p&gt;

&lt;p&gt;There's also &lt;code&gt;.atlarix/spec.md&lt;/code&gt; — a model-created task breakdown the AI generates for complex features. Both are inspired by how Claude Code handles project memory. Simple idea, makes a huge quality-of-life difference in practice.&lt;/p&gt;
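
&lt;p&gt;Since &lt;code&gt;memory.md&lt;/code&gt; is just markdown, the mechanism is easy to picture. Here's a sketch of the append-on-compaction idea; the real trigger logic and file layout are Atlarix internals not shown in the post:&lt;/p&gt;

```python
from datetime import date
from pathlib import Path
import tempfile

def record_decision(project_root: str, decision: str) -> None:
    """Append a dated decision to .atlarix/memory.md.
    Sketch only: Atlarix does this automatically during compaction;
    here we call it by hand."""
    memo = Path(project_root) / ".atlarix" / "memory.md"
    memo.parent.mkdir(parents=True, exist_ok=True)
    if not memo.exists():
        memo.write_text("# Project Memory\n\n")
    with memo.open("a") as f:
        f.write(f"- {date.today().isoformat()}: {decision}\n")

root = tempfile.mkdtemp()
record_decision(root, "Chose SQLite for the Blueprint store over JSON caching")
memory_text = (Path(root) / ".atlarix" / "memory.md").read_text()
```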




&lt;h2&gt;
  
  
  Building Visually: The Blueprint Canvas
&lt;/h2&gt;

&lt;p&gt;The graph isn't just for the AI to query. It's also the foundation of a visual architecture designer.&lt;/p&gt;

&lt;p&gt;In Blueprint mode, you get a React Flow canvas. You can design your system visually — drag in containers (Auth API, Worker Service, DB Layer), add beacons inside them (specific routes, functions, handlers), draw edges between them. It looks like an actual system diagram because it's meant to be one.&lt;/p&gt;

&lt;p&gt;The workflow I use for new features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Design the architecture in Blueprint&lt;/li&gt;
&lt;li&gt;Click "Generate Plan" — the AI compares Blueprint (desired) to Live (actual code), generates &lt;code&gt;ATLARIX_PLAN.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;AI implements one task per message, waits for review before the next&lt;/li&gt;
&lt;li&gt;Approve, iterate, ship&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's architecture-first development. Design before you code. The AI implements what you designed, not what it guesses you want.&lt;/p&gt;
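
&lt;p&gt;The "Generate Plan" step is, at its core, a diff between two sets of nodes. A toy sketch of that comparison (the real &lt;code&gt;ATLARIX_PLAN.md&lt;/code&gt; format isn't shown in the post):&lt;/p&gt;

```python
def generate_plan(desired: set[str], actual: set[str]) -> str:
    """Compare the Blueprint's desired nodes with what exists in code
    and emit a task list. Illustrative format, not Atlarix's."""
    to_build = sorted(desired - actual)
    to_remove = sorted(actual - desired)
    lines = ["# Plan"]
    lines += [f"- [ ] Implement {name}" for name in to_build]
    lines += [f"- [ ] Remove or re-map {name}" for name in to_remove]
    return "\n".join(lines)

plan = generate_plan(
    desired={"POST /auth/login", "GET /users/:id", "worker:send_email"},
    actual={"GET /users/:id"},
)
```

&lt;p&gt;Nodes present in both sets drop out of the plan entirely, which is what keeps the AI focused on the delta rather than the whole system.&lt;/p&gt;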




&lt;h2&gt;
  
  
  The Agent System (and Why I Separated Permissions from Modes)
&lt;/h2&gt;

&lt;p&gt;One decision I made early that I think was right: separate &lt;em&gt;agent mode&lt;/em&gt; from &lt;em&gt;permission mode&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modes&lt;/strong&gt; are about delegation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Direct&lt;/strong&gt; — just you and the model, no agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guided&lt;/strong&gt; — orchestrator delegates flat to specialists (Research, Architect, Builder, Reviewer)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autonomous&lt;/strong&gt; — agents can spawn sub-agents for complex multi-step work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Permissions&lt;/strong&gt; are about what the AI can touch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ask&lt;/strong&gt; — read-only tools only&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt; — can write files and run commands&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are independent. You can run Autonomous mode in Ask permission (agents can plan and research but can't write anything) or Direct mode in Build permission (full write access, no delegation). Mix them any way you want.&lt;/p&gt;

&lt;p&gt;The reason this matters: a Reviewer shouldn't be writing code. A Research agent shouldn't be executing commands. The separation of concerns in the tool sets prevents the agents from stepping on each other's work.&lt;/p&gt;
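
&lt;p&gt;Because the two axes are independent, the tool gate can be expressed separately from delegation. A sketch with made-up tool names (not Atlarix's actual tool set):&lt;/p&gt;

```python
from enum import Enum

class Mode(Enum):            # delegation axis
    DIRECT = "direct"
    GUIDED = "guided"
    AUTONOMOUS = "autonomous"

class Permission(Enum):      # capability axis
    ASK = "ask"              # read-only tools
    BUILD = "build"          # may also write files / run commands

READ_TOOLS = {"read_file", "search", "list_symbols"}
WRITE_TOOLS = {"write_file", "run_command", "create_directory"}

def allowed_tools(mode: Mode, permission: Permission) -> set[str]:
    # Permission gates what any agent can touch; it applies regardless
    # of how deep the delegation goes, which is why mode is accepted
    # but never consulted here.
    tools = set(READ_TOOLS)
    if permission is Permission.BUILD:
        tools |= WRITE_TOOLS
    return tools

# Autonomous planning that can't write anything:
research_only = allowed_tools(Mode.AUTONOMOUS, Permission.ASK)
```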




&lt;h2&gt;
  
  
  What v3.0 Changed
&lt;/h2&gt;

&lt;p&gt;The big v3.0 changes were less glamorous than the earlier features but genuinely important for daily use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workspace storage and path resolution were reworked and now just work correctly&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;.atlarix/&lt;/code&gt; folder system is solid — &lt;code&gt;memory.md&lt;/code&gt; and &lt;code&gt;spec.md&lt;/code&gt; reliably persist across sessions&lt;/li&gt;
&lt;li&gt;Relative paths in &lt;code&gt;create_directory&lt;/code&gt; and file tools resolve correctly to the workspace root&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes the most important release is the one that makes the existing stuff reliable.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;A few honest things I'd change if I were starting over:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with SQLite sooner.&lt;/strong&gt; The Blueprint is currently JSON-cached. Full ANTLR4 parsing with SQLite persistence is on the roadmap (PIVOT), and I should have committed to that architecture earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fewer providers at launch.&lt;/strong&gt; Supporting 8 cloud providers + AWS Bedrock + Ollama + LM Studio sounds like a feature. In practice, it's a lot of surface area to maintain. I'd have launched with 3 and expanded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The permission UI took longer than expected.&lt;/strong&gt; Getting the approve/reject flow right — where the AI proposes every file change and you see a diff before anything runs — was worth it, but it was the part I most underestimated.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Website:&lt;/strong&gt; &lt;a href="https://atlarix.dev" rel="noopener noreferrer"&gt;https://atlarix.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've hit the same wall with AI coding tools — re-explaining your project every session, blowing token budgets on raw context, wishing the AI actually understood your architecture — I'd love to hear if this solves it for you.&lt;/p&gt;

&lt;p&gt;And if you've built something similar or tackled the codebase-as-graph problem differently, genuinely curious how you approached it. Drop it in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Cursor vs Windsurf vs Atlarix: Which AI Coding Assistant Should You Choose in 2026?</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Tue, 10 Feb 2026 17:41:25 +0000</pubDate>
      <link>https://dev.to/amariahak/cursor-vs-windsurf-vs-atlarix-which-ai-coding-assistant-should-you-choose-in-2026-50k4</link>
      <guid>https://dev.to/amariahak/cursor-vs-windsurf-vs-atlarix-which-ai-coding-assistant-should-you-choose-in-2026-50k4</guid>
      <description>&lt;h1&gt;
  
  
  Cursor vs Windsurf vs Atlarix: Which AI Coding Assistant Should You Choose in 2026?
&lt;/h1&gt;

&lt;p&gt;The AI coding assistant market exploded in 2024-2025. If you're a developer trying to pick between Cursor, Windsurf, and newer alternatives like Atlarix, this comparison breaks down what each tool does best—and where they fall short.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Comparison Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Cursor&lt;/th&gt;
&lt;th&gt;Windsurf&lt;/th&gt;
&lt;th&gt;Atlarix&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Core Strength&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Chat + autocomplete&lt;/td&gt;
&lt;td&gt;Agentic flows&lt;/td&gt;
&lt;td&gt;Visual architecture + AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;$10-15/mo&lt;/td&gt;
&lt;td&gt;Free (BYOK) / $19/mo Pro&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IDE&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VS Code fork&lt;/td&gt;
&lt;td&gt;Custom&lt;/td&gt;
&lt;td&gt;Standalone desktop app&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Architecture View&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅ Visual blueprints&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Agent System&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Basic&lt;/td&gt;
&lt;td&gt;Advanced&lt;/td&gt;
&lt;td&gt;3-tier (Research/Architect/Builder/Reviewer)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Local Models&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;✅ Full Ollama + LM Studio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multi-Provider&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;OpenAI + Anthropic&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;8+ providers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;BYOK Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅ Bring your own API keys&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Cursor: The Chat-First Powerhouse
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Excellent autocomplete (Tab to accept)&lt;/li&gt;
&lt;li&gt;@-mentions for codebase context&lt;/li&gt;
&lt;li&gt;Smooth VS Code integration&lt;/li&gt;
&lt;li&gt;Fast inline edits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where it struggles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No architecture visualization&lt;/li&gt;
&lt;li&gt;Expensive at $20/mo (no BYOK option)&lt;/li&gt;
&lt;li&gt;Limited to OpenAI/Anthropic models&lt;/li&gt;
&lt;li&gt;Can't see the "big picture" of your project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Developers who live in VS Code and want premium autocomplete with AI chat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Windsurf: The Agentic Coder
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-step agentic workflows ("Cascade" mode)&lt;/li&gt;
&lt;li&gt;Can plan and execute complex tasks&lt;/li&gt;
&lt;li&gt;Lower price point than Cursor&lt;/li&gt;
&lt;li&gt;Good at refactoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where it struggles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom IDE (learning curve)&lt;/li&gt;
&lt;li&gt;No visual architecture tools&lt;/li&gt;
&lt;li&gt;Limited model choices&lt;/li&gt;
&lt;li&gt;Still scanning your entire codebase each time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Developers comfortable with a new IDE who want autonomous coding agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Atlarix: The Blueprint-First Alternative
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What makes it different:&lt;/strong&gt;&lt;br&gt;
Atlarix takes a fundamentally different approach: you design your app's architecture visually, and the AI uses that blueprint as persistent memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Visual Blueprint Mode&lt;/strong&gt; - Design your app structure with rooms (services) and functions (APIs, webhooks, features)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3-Tier Agent System&lt;/strong&gt; - Research agent finds patterns, Architect plans changes, Builder implements, Reviewer checks quality&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Generation&lt;/strong&gt; - Generate stub code from blueprints, AI fills in logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Memory (v2.2)&lt;/strong&gt; - AI "remembers" your architecture across sessions instead of re-scanning&lt;/li&gt;
&lt;/ol&gt;
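&lt;p&gt;To make those roles concrete, here's a toy sketch of that kind of staged pipeline. This is my illustration of the pattern, &lt;em&gt;not&lt;/em&gt; Atlarix's actual implementation — each stage function is a stand-in for an LLM call:&lt;/p&gt;

```python
# Toy sketch of a research -> architect -> builder -> reviewer pipeline.
# Each stage stands in for an LLM call; in a real system each would prompt
# a model with the previous stage's output plus the blueprint.

def research(task, blueprint):
    # Find blueprint rooms relevant to the task.
    return {"task": task,
            "patterns": [r for r in blueprint["rooms"] if r["name"] in task]}

def architect(findings):
    # Plan which rooms/functions to change.
    return {"plan": [p["name"] for p in findings["patterns"]]}

def builder(plan):
    # "Implement" the plan (here: just record the edits).
    return {"edits": [f"edit {name}" for name in plan["plan"]]}

def reviewer(result):
    # Quality gate: reject empty changesets.
    return {"approved": len(result["edits"]) > 0, **result}

def run_pipeline(task, blueprint):
    return reviewer(builder(architect(research(task, blueprint))))

blueprint = {"rooms": [{"name": "auth"}, {"name": "billing"}]}
print(run_pipeline("add refresh tokens to auth", blueprint))
# {'approved': True, 'edits': ['edit auth']}
```

&lt;p&gt;The point of the staging is that each role gets a narrow job and a small input, instead of one prompt trying to do everything at once.&lt;/p&gt;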

&lt;p&gt;&lt;strong&gt;What it does well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See your entire architecture at a glance (Mermaid diagrams)&lt;/li&gt;
&lt;li&gt;BYOK - use your own API keys from 8+ providers&lt;/li&gt;
&lt;li&gt;Full local model support (Ollama, LM Studio)&lt;/li&gt;
&lt;li&gt;Free tier with full features (1 workspace)&lt;/li&gt;
&lt;li&gt;Works as standalone app (not tied to specific IDE)&lt;/li&gt;
&lt;li&gt;Activity stream shows exactly what agents are doing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where it struggles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Newer product (smaller community)&lt;/li&gt;
&lt;li&gt;Blueprint system has learning curve&lt;/li&gt;
&lt;li&gt;Not integrated into existing IDE (separate app)&lt;/li&gt;
&lt;li&gt;v2.2 Blueprint Intelligence still in development&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Developers working on complex multi-service architectures who want visual clarity + AI assistance. Also great for privacy-focused devs using local models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Use Cursor if:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You want the smoothest autocomplete experience&lt;/li&gt;
&lt;li&gt;You're happy paying $20/mo&lt;/li&gt;
&lt;li&gt;You primarily use OpenAI/Anthropic models&lt;/li&gt;
&lt;li&gt;You live in VS Code and don't want to switch&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Windsurf if:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You want autonomous agents to handle multi-step tasks&lt;/li&gt;
&lt;li&gt;You're comfortable learning a new IDE&lt;/li&gt;
&lt;li&gt;You want a middle-ground price point&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Atlarix if:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You're building complex apps with multiple services&lt;/li&gt;
&lt;li&gt;You want to see and design architecture visually&lt;/li&gt;
&lt;li&gt;You prefer BYOK or local models (privacy/cost control)&lt;/li&gt;
&lt;li&gt;You like the idea of AI having "persistent memory" of your project structure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Cost Breakdown
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Monthly cost comparison (assuming 100k tokens/day usage):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cursor:&lt;/strong&gt; $20/mo (flat rate, no BYOK)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Windsurf:&lt;/strong&gt; ~$10-15/mo (varies by plan)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Atlarix:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free tier: $0 (BYOK - you pay your provider directly)&lt;/li&gt;
&lt;li&gt;With OpenAI API (BYOK): ~$5-10/mo depending on usage&lt;/li&gt;
&lt;li&gt;Pro plan: $19/mo (unlimited workspaces)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're a heavy user with your own API keys, Atlarix can save you significant money.&lt;/p&gt;
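&lt;p&gt;To make the BYOK math concrete, here's a back-of-the-envelope calculation. The per-million-token prices below are placeholders — check your provider's current rate card — but the arithmetic shows why pay-per-token often beats a flat subscription at this usage level:&lt;/p&gt;

```python
# Back-of-the-envelope BYOK cost estimate.
# Prices are illustrative placeholders (USD per 1M tokens), not quotes.

def monthly_cost(tokens_per_day, price_per_million, days=30):
    return tokens_per_day * days * price_per_million / 1_000_000

usage = 100_000  # tokens/day, as in the comparison above

# Hypothetical blended input+output price points:
for label, price in [("budget model @ $0.50/M", 0.50),
                     ("mid-tier model @ $3/M", 3.00)]:
    print(f"{label}: ${monthly_cost(usage, price):.2f}/mo")

# budget model @ $0.50/M: $1.50/mo
# mid-tier model @ $3/M: $9.00/mo
```

&lt;p&gt;Even the mid-tier estimate comes in under a $20/mo flat rate — which is where the "~$5-10/mo depending on usage" figure above comes from.&lt;/p&gt;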

&lt;h2&gt;
  
  
  My Take
&lt;/h2&gt;

&lt;p&gt;All three are solid tools. Your choice depends on your workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cursor&lt;/strong&gt; = Best in-editor experience, premium price&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Windsurf&lt;/strong&gt; = Best autonomous agents, custom IDE&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Atlarix&lt;/strong&gt; = Best for architecture-first development, most flexible pricing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd suggest trying all three (they all have free tiers or trials). Personally, I use Atlarix for planning/architecture and Cursor for autocomplete - they complement each other well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try Them Out
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cursor:&lt;/strong&gt; cursor.com (2-week trial)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Windsurf:&lt;/strong&gt; codeium.com/windsurf (free tier)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Atlarix:&lt;/strong&gt; atlarix.dev (free tier with BYOK)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's your experience with these tools? Drop a comment below - I'd love to hear which workflow works best for you.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Disclosure: I'm building Atlarix, but I've used Cursor and Windsurf extensively and genuinely respect what they're doing. This comparison is based on real hands-on experience with all three tools.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
      <category>coding</category>
    </item>
    <item>
      <title>Why I Built a Local-First AI Coding Agent Instead of Another Chat Assistant</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Sat, 17 Jan 2026 13:19:54 +0000</pubDate>
      <link>https://dev.to/amariahak/why-i-built-a-local-first-ai-coding-agent-instead-of-another-chat-assistant-3bb8</link>
      <guid>https://dev.to/amariahak/why-i-built-a-local-first-ai-coding-agent-instead-of-another-chat-assistant-3bb8</guid>
      <description>&lt;p&gt;Most AI coding tools today stop at suggestions.&lt;/p&gt;

&lt;p&gt;They generate code, explain snippets, maybe autocomplete — and then hand everything back to you. The moment something breaks, you’re copy-pasting logs, errors, and screenshots into a chat box.&lt;/p&gt;

&lt;p&gt;I wanted something different.&lt;/p&gt;

&lt;p&gt;I wanted an AI that lives &lt;strong&gt;inside my actual development environment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That’s why I built &lt;strong&gt;Atlarix&lt;/strong&gt; — a &lt;strong&gt;desktop-native, privacy-first AI coding agent&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem with cloud-first AI assistants
&lt;/h2&gt;

&lt;p&gt;Most assistants today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run in the cloud
&lt;/li&gt;
&lt;li&gt;Don’t see your real file system
&lt;/li&gt;
&lt;li&gt;Don’t run your code
&lt;/li&gt;
&lt;li&gt;Can’t observe build errors directly
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They’re helpful, but they’re disconnected from reality.&lt;/p&gt;

&lt;p&gt;As developers, our workflow isn’t text — it’s terminals, servers, logs, broken builds, and iteration loops.&lt;/p&gt;




&lt;h2&gt;
  
  
  What “agentic” actually means in practice
&lt;/h2&gt;

&lt;p&gt;Atlarix isn’t a chat assistant. It’s an agent.&lt;/p&gt;

&lt;p&gt;That means it can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run terminal commands
&lt;/li&gt;
&lt;li&gt;Read compiler and runtime errors
&lt;/li&gt;
&lt;li&gt;Spin up real dev servers (Vite, Next.js, Python, Go)
&lt;/li&gt;
&lt;li&gt;Fix issues in loops without you pasting stack traces
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It sees what you see — and acts on it.&lt;/p&gt;
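&lt;p&gt;That run-read-fix loop is the core of the idea, and it's easy to sketch. Again, this is a generic illustration rather than Atlarix's code — &lt;code&gt;propose_fix&lt;/code&gt; is a stand-in for the model call that would produce a patch:&lt;/p&gt;

```python
import subprocess
import sys

# Generic run-and-fix loop: execute a command, and if it fails, hand the
# stderr to a fixer (in a real agent, an LLM call) and retry.

def propose_fix(stderr_text):
    # Placeholder for "ask the model for a patch given this error".
    return f"suggested patch for: {stderr_text.strip().splitlines()[-1]}"

def run_until_ok(cmd, max_attempts=3):
    attempts = []
    for _ in range(max_attempts):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode == 0:
            return {"ok": True, "attempts": attempts}
        attempts.append(propose_fix(proc.stderr or proc.stdout or "unknown error"))
        # A real agent would apply the patch here before retrying.
    return {"ok": False, "attempts": attempts}

# A command that succeeds immediately needs no fixes:
print(run_until_ok([sys.executable, "-c", "print('hi')"]))
# {'ok': True, 'attempts': []}
```

&lt;p&gt;The difference from a chat assistant is that nothing in this loop requires a human to copy-paste the stack trace.&lt;/p&gt;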




&lt;h2&gt;
  
  
  Local-first by default
&lt;/h2&gt;

&lt;p&gt;Privacy wasn’t an afterthought.&lt;/p&gt;

&lt;p&gt;Atlarix runs on your machine and uses &lt;strong&gt;BYOK (Bring Your Own Keys)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI / Anthropic
&lt;/li&gt;
&lt;li&gt;OpenRouter
&lt;/li&gt;
&lt;li&gt;Or fully local models via &lt;strong&gt;Ollama&lt;/strong&gt; / &lt;strong&gt;LM Studio&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your code stays yours.&lt;/p&gt;
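&lt;p&gt;Part of what makes BYOK practical is that most of these providers — and Ollama running locally — speak the same OpenAI-style chat-completions format, so switching from cloud to local is mostly a base-URL change. A minimal stdlib sketch (nothing is actually sent here; model names are examples):&lt;/p&gt;

```python
import json

# The same chat-completions payload works against a cloud provider or a
# local Ollama server -- only the base URL, key, and model name change.

def build_request(base_url, api_key, model, prompt):
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

cloud = build_request("https://api.openai.com/v1", "sk-...", "gpt-4o-mini", "hello")
local = build_request("http://localhost:11434/v1", "ollama", "llama3", "hello")

# Same shape either way; with the local one, the prompt never leaves your machine.
print(cloud["url"])
print(local["url"])
```

&lt;p&gt;(Ollama accepts any placeholder key — it just needs the header to be present.)&lt;/p&gt;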




&lt;h2&gt;
  
  
  Why a desktop app (not a VS Code extension)
&lt;/h2&gt;

&lt;p&gt;I didn’t want to be constrained by editor sandboxes.&lt;/p&gt;

&lt;p&gt;A native desktop app can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Control the terminal
&lt;/li&gt;
&lt;li&gt;Access the file system
&lt;/li&gt;
&lt;li&gt;Orchestrate multiple tools at once
&lt;/li&gt;
&lt;li&gt;Run longer agentic workflows safely
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That freedom matters if you want AI to &lt;em&gt;do real work&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Shipping v1.0
&lt;/h2&gt;

&lt;p&gt;Atlarix &lt;strong&gt;v1.0 is live&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I’m less interested in hype and more interested in feedback — especially from developers who actually ship and care about privacy, control, and workflow realism.&lt;/p&gt;

&lt;p&gt;You can explore it here:&lt;br&gt;&lt;br&gt;
👉 &lt;a href="https://www.atlarix.dev/" rel="noopener noreferrer"&gt;https://www.atlarix.dev/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you try it, I’d love to hear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What feels magical
&lt;/li&gt;
&lt;li&gt;What feels off
&lt;/li&gt;
&lt;li&gt;What breaks immediately
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s how better tools get built.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Code smart. Stay authentic.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
I’ll see you in the next one 🚀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developertools</category>
      <category>buildinpublic</category>
      <category>productivity</category>
    </item>
    <item>
      <title>🟩 commit-checker v0.6.1 – Track GitHub streaks, TIL logs, and unlock CLI achievements</title>
      <dc:creator>Amariah Kamau</dc:creator>
      <pubDate>Mon, 28 Jul 2025 14:11:01 +0000</pubDate>
      <link>https://dev.to/amariahak/commit-checker-v061-track-github-streaks-til-logs-and-unlock-cli-achievements-5bnf</link>
      <guid>https://dev.to/amariahak/commit-checker-v061-track-github-streaks-til-logs-and-unlock-cli-achievements-5bnf</guid>
      <description>&lt;p&gt;If you’ve ever wanted to keep your GitHub streak alive &lt;em&gt;without&lt;/em&gt; needing internet or tokens, or if you’ve just wanted to make your terminal feel a little more fun — this tool might be for you.&lt;/p&gt;

&lt;p&gt;I just released &lt;strong&gt;commit-checker v0.6.1&lt;/strong&gt;, a fully offline terminal app that lets you:&lt;/p&gt;

&lt;p&gt;✅ Track your GitHub commit streak across all local repos&lt;br&gt;&lt;br&gt;
📚 Save “Today I Learned” entries from the terminal&lt;br&gt;&lt;br&gt;
📈 View detailed commit trends and language stats&lt;br&gt;&lt;br&gt;
🎮 Unlock ASCII achievements and gamified levels&lt;br&gt;&lt;br&gt;
💻 Works on macOS, Linux, and Windows — no tokens, no login&lt;/p&gt;



&lt;p&gt;🔧 What’s New in v0.6.1&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean uninstall now fully nukes all files + config
&lt;/li&gt;
&lt;li&gt;Smarter wizard input validation (no more broken setups)
&lt;/li&gt;
&lt;li&gt;Fixed duplicate repo bugs and improved path detection
&lt;/li&gt;
&lt;li&gt;Now supports all features even when installed via &lt;code&gt;curl&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;🎬 Demo Video (1min 54s)&lt;/p&gt;

&lt;p&gt;I made a short setup + usage demo : &lt;a href="https://youtu.be/1b3yN4mYpGY" rel="noopener noreferrer"&gt;https://youtu.be/1b3yN4mYpGY&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;🚀 Quick Install&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://raw.githubusercontent.com/AmariahAK/commit-checker/main/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📁 GitHub&lt;br&gt;
All the code, issues, and feature roadmap are on GitHub:&lt;br&gt;
🔗 &lt;a href="https://github.com/AmariahAK/commit-checker" rel="noopener noreferrer"&gt;https://github.com/AmariahAK/commit-checker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🙌 Why I Made This&lt;br&gt;
I built commit-checker to help me stay consistent with my GitHub commits and to make the terminal a bit more fun.&lt;/p&gt;

&lt;p&gt;Instead of needing GitHub tokens or relying on the website’s activity chart, it tracks commits directly from your local repos.&lt;/p&gt;
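&lt;p&gt;The core idea — no tokens, just local git history — can be sketched in a few lines. This is my illustration of the approach, not commit-checker's actual code: it shells out to &lt;code&gt;git log&lt;/code&gt; and counts today's commits per repo.&lt;/p&gt;

```python
import datetime
import subprocess

# Token-free streak tracking sketch: ask each local repo's git history
# how many commits landed today (current branch only, for simplicity).

def commits_today(repo_path):
    today = datetime.date.today().isoformat()  # e.g. "2025-07-28"
    try:
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--since", f"{today} 00:00", "--oneline"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        return 0  # git not installed
    if out.returncode != 0:
        return 0  # not a git repository
    return len(out.stdout.splitlines())

def streak_alive(repo_paths):
    # The streak survives if any tracked repo got a commit today.
    return any(commits_today(p) > 0 for p in repo_paths)
```

&lt;p&gt;Everything above runs offline — which is exactly why no token or login is needed.&lt;/p&gt;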

&lt;p&gt;It also includes a “TIL” feature to save quick learnings, and I plan to expand it with export options, themes, and plugin support.&lt;/p&gt;

&lt;p&gt;💭 I’d Love Feedback&lt;br&gt;
Try it out, let me know what you think, and feel free to open issues or collab if you’re into CLI tools or gamified productivity.&lt;br&gt;
Let’s build cool stuff offline! 🔥&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>cli</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
