<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gowri shankar</title>
    <description>The latest articles on DEV Community by Gowri shankar (@gowrishankar-dev).</description>
    <link>https://dev.to/gowrishankar-dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3808053%2F21c10318-73ef-45fe-8b4f-5a24e7ad3cda.png</url>
      <title>DEV Community: Gowri shankar</title>
      <link>https://dev.to/gowrishankar-dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gowrishankar-dev"/>
    <language>en</language>
    <item>
      <title>I built a local AI coding system that actually understands your codebase — here's what I learned</title>
      <dc:creator>Gowri shankar</dc:creator>
      <pubDate>Fri, 10 Apr 2026 02:07:48 +0000</pubDate>
      <link>https://dev.to/gowrishankar-dev/i-built-a-local-ai-coding-system-that-actually-understands-your-codebase-heres-what-i-learned-2ap6</link>
      <guid>https://dev.to/gowrishankar-dev/i-built-a-local-ai-coding-system-that-actually-understands-your-codebase-heres-what-i-learned-2ap6</guid>
      <description>&lt;p&gt;I'm Gowri Shankar, a DevOps engineer from Hyderabad. I just open-sourced a project I've been building for the past few weeks, and I want to share it honestly — what it does, how I built it, and what I learned.&lt;br&gt;
🔗 GitHub: github.com/gowrishankar-infra/leanai&lt;/p&gt;

&lt;p&gt;🤔 The Problem&lt;br&gt;
Every AI coding tool I've used has the same frustration: it sees my code for the first time, every time.&lt;br&gt;
I paste a snippet, explain the context, get an answer, close the tab — and next session, start from zero. Claude doesn't know my project structure. GPT doesn't remember what we discussed yesterday. Copilot suggests function names that don't exist in my codebase.&lt;br&gt;
I wanted an AI that permanently understands my project. So I built one.&lt;/p&gt;

&lt;p&gt;🧠 What LeanAI Does&lt;br&gt;
LeanAI is a fully local AI coding assistant. It runs Qwen2.5 Coder (7B and 32B) on your machine. No cloud, no API keys, no subscriptions, no data leaving your computer. Here's what makes it different from existing tools.&lt;/p&gt;

&lt;p&gt;📂 It knows your entire codebase&lt;br&gt;
Run /brain . and LeanAI scans your project with full AST analysis:&lt;br&gt;
[Brain] Scanned 91 files in 5674ms&lt;br&gt;
Functions: 1,689&lt;br&gt;
Classes: 320&lt;br&gt;
Dependency edges: 9,775&lt;br&gt;
When I ask "what does the engine file do?", it describes MY actual engine with MY real classes — not a generic example about what an engine file might look like.&lt;/p&gt;

&lt;p&gt;⚡ Sub-2ms autocomplete from your project&lt;br&gt;
Type /complete gen and in 0.8ms it returns completions from YOUR codebase:&lt;br&gt;
◆ GenerationConfig              core/engine.py&lt;br&gt;
ƒ generate()                    core/engine_v3.py&lt;br&gt;
ƒ generate_changelog()          brain/git_intel.py&lt;br&gt;
ƒ generate_batch()              core/engine_v3.py&lt;br&gt;
No model call needed. It searches the brain's index of 2,899 functions directly.&lt;/p&gt;

&lt;p&gt;🔍 Semantic git bisect&lt;/p&gt;

&lt;p&gt;This one doesn't exist anywhere else.&lt;/p&gt;

&lt;p&gt;Instead of binary search for bugs, LeanAI reads each commit semantically and predicts which one introduced a bug:&lt;br&gt;
/bisect authentication stopped working&lt;/p&gt;
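&lt;p&gt;LeanAI uses the local model to score each commit, but the shape of the idea fits in a few lines. Below is a toy stand-in where a crude keyword-overlap heuristic plays the model's role; every function name and commit in it is invented for illustration, not taken from LeanAI's source.&lt;/p&gt;

```python
# Toy stand-in for semantic bisect: LeanAI asks a local model to score each
# commit against the bug report; here a keyword-overlap heuristic plays the
# model's role. All names and commit data below are hypothetical.
def suspicion(bug_report, commit_text):
    """Fraction of bug-report words that also appear in the commit text."""
    bug_words = set(bug_report.lower().split())
    commit_words = set(commit_text.lower().split())
    overlap = bug_words.intersection(commit_words)
    return len(overlap) / max(len(bug_words), 1)

def semantic_bisect(bug_report, commits):
    """Rank (sha, message-plus-diff) pairs by suspicion, highest first."""
    scored = [(suspicion(bug_report, text), sha) for sha, text in commits]
    return sorted(scored, reverse=True)

commits = [
    ("a1c2e3d", "update readme badges"),
    ("b7b3f51", "vs code extension plus path separator fix in authentication flow"),
    ("c9d8e7f", "bump dependencies"),
]
ranked = semantic_bisect("authentication stopped working", commits)
print(ranked[0])  # most suspicious commit first
```

&lt;p&gt;The real system replaces the keyword heuristic with model reasoning over each diff, which is what produces the percentages and explanations in the output below.&lt;/p&gt;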

&lt;p&gt;Most likely culprit:&lt;br&gt;
  b7b3f51 — VS Code extension + path separator fix&lt;br&gt;
  Suspicion: 45%&lt;br&gt;
  Reasoning: includes path changes that could affect auth flow&lt;br&gt;
It analyzed 20 commits, scored each one, and explained its reasoning.&lt;/p&gt;

&lt;p&gt;🛡️ Adversarial code verification&lt;br&gt;
Instead of just running tests, LeanAI generates edge-case inputs designed to break your code:&lt;br&gt;
/fuzz def sort(arr): return sorted(arr)&lt;/p&gt;

&lt;p&gt;Tested: 12 | Passed: 9 | Failed: 3&lt;/p&gt;

&lt;p&gt;Failures:&lt;br&gt;
  ✗ None → TypeError&lt;br&gt;
  ✗ [1, 'a', 2.0] → TypeError&lt;br&gt;
  ✗ [1, None, 3] → TypeError&lt;/p&gt;
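&lt;p&gt;The core of this kind of adversarial probing (feed a function hostile inputs, collect what blows up) fits in a few lines. This is a simplified sketch of the idea, not LeanAI's actual /fuzz implementation, which generates its cases dynamically:&lt;/p&gt;

```python
# Simplified sketch of adversarial input probing. The fixed EDGE_CASES list
# is an illustration; a real fuzzer would generate inputs per function.
EDGE_CASES = [None, [], [1, "a", 2.0], [1, None, 3], [float("nan")] * 3]

def fuzz(fn, cases=EDGE_CASES):
    """Run fn on hostile inputs, collecting (input, exception name) failures."""
    failures = []
    for case in cases:
        try:
            fn(case)
        except Exception as exc:
            failures.append((case, type(exc).__name__))
    return failures

def sort(arr):
    return sorted(arr)

for bad_input, error in fuzz(sort):
    print(f"✗ {bad_input!r} → {error}")
```

&lt;p&gt;Running this on the sort example reproduces the same three TypeError failures: None, a mixed-type list, and a list containing None.&lt;/p&gt;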

&lt;p&gt;Suggested fixes:&lt;br&gt;
  → Add None check&lt;br&gt;
  → Add type validation&lt;br&gt;
Found 3 bugs in under 1 second.&lt;/p&gt;

&lt;p&gt;💾 It never forgets&lt;br&gt;
Every conversation is stored in persistent session memory. Session 1's decisions are searchable in session 10. It tracks how your understanding evolves across sessions — from "setting up a database" to "optimizing cache invalidation" — and predicts what you'll need next.&lt;/p&gt;

&lt;p&gt;📈 It gets smarter from your code&lt;br&gt;
Every interaction auto-collects training data. When you have enough examples, QLoRA fine-tuning makes the model learn YOUR coding patterns. No other tool does this.&lt;/p&gt;

&lt;p&gt;🤝 The Honest Part&lt;br&gt;
I built this using Claude. Claude wrote most of the code. I made every architectural decision, debugged every Windows/CUDA issue, tested everything on my machine, and directed every phase of development.&lt;br&gt;
I think this is how software gets built in 2026. 92% of developers use AI coding tools. The value isn't in typing code — it's in knowing what to build, how to architect it, and when something is wrong. I'm not hiding Claude's involvement because I don't think it diminishes the work.&lt;/p&gt;

&lt;p&gt;⚠️ What It's NOT&lt;br&gt;
I want to be upfront about the limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🐢 &lt;strong&gt;It's slow:&lt;/strong&gt; 25-90 seconds per response on CPU. Cloud AI gives you 2-5 seconds.&lt;/li&gt;
&lt;li&gt;🧠 &lt;strong&gt;Not as smart as GPT-4/Claude:&lt;/strong&gt; never will be at this model size. The value is project awareness.&lt;/li&gt;
&lt;li&gt;🔧 &lt;strong&gt;It's rough:&lt;/strong&gt; this is v1. There are bugs. The UI is basic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📊 The Numbers&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated systems: 29&lt;/li&gt;
&lt;li&gt;Tests (all passing): 500+&lt;/li&gt;
&lt;li&gt;Lines of Python: 27,000+&lt;/li&gt;
&lt;li&gt;CLI commands: 45+&lt;/li&gt;
&lt;li&gt;API endpoints: 32&lt;/li&gt;
&lt;li&gt;Interfaces: 3 (CLI, Web UI, VS Code)&lt;/li&gt;
&lt;li&gt;Models: 2 (7B fast, 32B quality)&lt;/li&gt;
&lt;li&gt;Monthly cost: $0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🏆 Features No Competitor Has&lt;br&gt;
I compared LeanAI against every major open-source AI coding tool I could find — Aider (39K stars), Continue (20K stars), Tabby (20K stars), Forge, OpenClaw (70K stars). None of them has ALL of these:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sub-2ms autocomplete from AST brain index&lt;/li&gt;
&lt;li&gt;Semantic git bisect with AI suspicion scoring&lt;/li&gt;
&lt;li&gt;Adversarial code fuzzing with fix suggestions&lt;/li&gt;
&lt;li&gt;Cross-session evolution tracking&lt;/li&gt;
&lt;li&gt;Predictive pre-generation&lt;/li&gt;
&lt;li&gt;Continuous fine-tuning pipeline&lt;/li&gt;
&lt;li&gt;Full AST dependency graph (9,775 edges)&lt;/li&gt;
&lt;li&gt;TDD auto-fix loop&lt;/li&gt;
&lt;li&gt;3-pass reasoning engine&lt;/li&gt;
&lt;li&gt;4-pass writing engine&lt;/li&gt;
&lt;li&gt;Multi-model auto-switching by complexity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Individual features exist in other tools. Nobody has integrated them all in one offline system.&lt;/p&gt;

&lt;p&gt;🛠️ Tech Stack&lt;br&gt;
Models:    Qwen2.5 Coder 7B + 32B (GGUF, via llama-cpp-python)&lt;br&gt;
Memory:    ChromaDB + sentence-transformers&lt;br&gt;
Server:    FastAPI + uvicorn&lt;br&gt;
Brain:     Custom AST parser with dependency graph&lt;br&gt;
Language:  Python&lt;br&gt;
License:   AGPL-3.0&lt;br&gt;
Hardware:  i7-11800H, 32GB RAM, RTX 3050 Ti&lt;/p&gt;
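&lt;p&gt;To make the "custom AST parser" line concrete: here's a minimal sketch of an AST-backed symbol index with prefix completion, using only Python's stdlib ast module. This is an illustration of the idea, not LeanAI's actual code, and every name in it is invented.&lt;/p&gt;

```python
# Minimal sketch of an AST-backed "brain" index with prefix completion.
# Illustrative only; LeanAI's real brain also tracks dependency edges.
import ast

def index_source(source, path):
    """Collect functions and classes from one file's source text."""
    symbols = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            symbols.append(("function", node.name, path))
        elif isinstance(node, ast.ClassDef):
            symbols.append(("class", node.name, path))
    return symbols

def complete(index, prefix):
    """Case-insensitive prefix search over the index. No model call needed."""
    p = prefix.lower()
    return [entry for entry in index if entry[1].lower().startswith(p)]

index = index_source(
    "class GenerationConfig: pass\n"
    "def generate(cfg): ...\n"
    "def generate_batch(cfgs): ...\n",
    "core/engine.py",
)
print(complete(index, "gen"))  # all three symbols match
```

&lt;p&gt;The real brain layers dependency edges, caching, and persistence on top, but answering /complete from a pre-built in-memory index rather than a model call is why the lookup can be sub-millisecond.&lt;/p&gt;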

&lt;p&gt;💡 What I Learned&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI is a coding partner, not a replacement.
Claude wrote the code, but it couldn't have built LeanAI without me deciding what to build, in what order, and catching when things broke.&lt;/li&gt;
&lt;li&gt;Local AI is viable on consumer hardware.
My laptop runs a 32B parameter model. It's slow, but it works. When Qwen3 and Llama 4 drop, the infrastructure I built is ready.&lt;/li&gt;
&lt;li&gt;Project awareness is an unsolved problem.
Every AI tool treats your codebase as a stranger. Building a "brain" that maps functions, tracks dependencies, and remembers conversations is the hard part — not the model inference.&lt;/li&gt;
&lt;li&gt;Testing everything matters.
500+ tests across 18 files. Every system tested independently. This saved me dozens of times when one change broke something else.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🚀 Try It&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/gowrishankar-infra/leanai.git
cd leanai
pip install -r requirements.txt
python main.py
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Then run /brain . to scan your project and start asking questions.&lt;br&gt;
🔗 GitHub: github.com/gowrishankar-infra/leanai&lt;br&gt;
⭐ Star it if you think local AI that understands your codebase is worth building.&lt;/p&gt;




&lt;h2&gt;
  
  
  UPDATE: Qwen3-Coder-30B Now Running Locally (April 2026)
&lt;/h2&gt;

&lt;p&gt;Since the original post, LeanAI has shipped major upgrades:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qwen3-Coder-30B-A3B&lt;/strong&gt; — response times went from 5-7 minutes to ~2 minutes on the same hardware. Mixture-of-Experts architecture: 30B total params, only 3B active per token.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6 novel features&lt;/strong&gt; no cloud AI has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code-Grounded Verification (fact-checks AI claims against your AST)&lt;/li&gt;
&lt;li&gt;Cascade Inference (7B drafts → 32B reviews, 3x faster)&lt;/li&gt;
&lt;li&gt;Mixture of Agents (multi-perspective code reviews)&lt;/li&gt;
&lt;li&gt;ReAct (model looks up real code before answering)&lt;/li&gt;
&lt;li&gt;Multi-language brain (20+ language parsers)&lt;/li&gt;
&lt;li&gt;KV Cache optimization (15-25% faster)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now at 40 technologies, 31,000 lines, 99 files.&lt;/p&gt;




&lt;p&gt;I'd love feedback, bug reports, or honest criticism. I know it's not perfect — that's why I'm sharing it.&lt;br&gt;
— Gowri Shankar&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>python</category>
      <category>coding</category>
    </item>
    <item>
      <title>What I shipped this week on khaga.dev — Fix It buttons, health scores, and 24/7 alerts</title>
      <dc:creator>Gowri shankar</dc:creator>
      <pubDate>Mon, 16 Mar 2026 13:46:49 +0000</pubDate>
      <link>https://dev.to/gowrishankar-dev/what-i-shipped-this-week-on-khagadev-fix-it-buttons-health-scores-and-247-alerts-5dg6</link>
      <guid>https://dev.to/gowrishankar-dev/what-i-shipped-this-week-on-khagadev-fix-it-buttons-health-scores-and-247-alerts-5dg6</guid>
      <description>&lt;h1&gt;
  
  
  What I shipped this week on khaga.dev — Fix It buttons, health scores, and 24/7 alerts
&lt;/h1&gt;

&lt;p&gt;Two weeks ago I launched &lt;a href="https://khaga.dev" rel="noopener noreferrer"&gt;khaga.dev&lt;/a&gt; — a free AI tool that diagnoses AWS, GCP, Azure, and Kubernetes infrastructure in seconds using Claude AI.&lt;/p&gt;

&lt;p&gt;This week I shipped 5 major features based on early feedback. Here's what's new.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ Fix It button
&lt;/h2&gt;

&lt;p&gt;The most requested feature. Every finding now has a &lt;strong&gt;Fix It&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Click it and you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The exact shell command to fix the issue&lt;/li&gt;
&lt;li&gt;A context-aware safety checklist (different for kubectl vs aws vs terraform)&lt;/li&gt;
&lt;li&gt;One-click copy to clipboard&lt;/li&gt;
&lt;li&gt;"Copy &amp;amp; Open Remediate" to run it directly in Khaga&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example for a CRITICAL finding on a Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Fix: &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/FROM ubuntu:latest/FROM ubuntu:22.04/'&lt;/span&gt; Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No more Googling at 2am.&lt;/p&gt;




&lt;h2&gt;
  
  
  📊 Infrastructure Health Score
&lt;/h2&gt;

&lt;p&gt;The dashboard now shows a 0-100 health score per cloud provider on login.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🟢 70-100 = Healthy&lt;/li&gt;
&lt;li&gt;🟡 40-69 = Degraded
&lt;/li&gt;
&lt;li&gt;🔴 0-39 = Critical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Scored based on the severity and frequency of findings across your last 500 diagnoses. Click any score card to jump straight to that provider's diagnosis.&lt;/p&gt;
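&lt;p&gt;Khaga's exact formula isn't public, so the weights below are purely invented, but a severity-weighted score in this spirit might look like:&lt;/p&gt;

```python
# Hypothetical sketch of a 0-100 health score from weighted findings.
# Khaga's real formula isn't published; these weights are invented.
WEIGHTS = {"CRITICAL": 25, "HIGH": 10, "MEDIUM": 4, "LOW": 1}

def health_score(findings):
    """Start at 100, subtract a penalty per finding severity, floor at 0."""
    penalty = sum(WEIGHTS.get(severity, 0) for severity in findings)
    return max(0, 100 - penalty)

def band(score):
    """Map a score to the dashboard's traffic-light bands."""
    if score >= 70:
        return "🟢 Healthy"
    if score >= 40:
        return "🟡 Degraded"
    return "🔴 Critical"

score = health_score(["CRITICAL", "HIGH", "LOW"])
print(score, band(score))  # 64 🟡 Degraded
```

&lt;p&gt;Whatever the real weights are, the useful property is the same: one number per provider that degrades smoothly as severe findings accumulate.&lt;/p&gt;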




&lt;h2&gt;
  
  
  🌍 Multi-region AWS scan
&lt;/h2&gt;

&lt;p&gt;Previously Khaga only scanned one AWS region. Most people have resources spread across multiple regions and were missing issues.&lt;/p&gt;

&lt;p&gt;Now there's a "Scan all regions" checkbox on the AWS panel. Check it and Khaga scans all 6 major regions in parallel:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;us-east-1, us-west-2, eu-west-1, ap-south-1, ap-southeast-1, ap-northeast-1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;All findings get tagged with their region.&lt;/p&gt;
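&lt;p&gt;The fan-out itself is simple. Here's an illustrative sketch with a stand-in scanner (scan_region below is hypothetical, not Khaga's real provider code) running all six regions in parallel and tagging results:&lt;/p&gt;

```python
# Illustrative parallel fan-out across regions. scan_region is a stand-in
# for whatever per-region provider calls the real scanner makes.
from concurrent.futures import ThreadPoolExecutor

REGIONS = ["us-east-1", "us-west-2", "eu-west-1",
           "ap-south-1", "ap-southeast-1", "ap-northeast-1"]

def scan_region(region):
    """Stand-in scanner: return findings tagged with their region."""
    return [{"region": region, "finding": "example"}]

def scan_all_regions(regions=REGIONS):
    """Scan every region concurrently and flatten the tagged findings."""
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        nested = list(pool.map(scan_region, regions))
    return [finding for region_findings in nested for finding in region_findings]

findings = scan_all_regions()
print(sorted({f["region"] for f in findings}))
```

&lt;p&gt;Threads are a reasonable fit here because region scans are I/O-bound API calls, so the six scans overlap instead of running back to back.&lt;/p&gt;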




&lt;h2&gt;
  
  
  🔔 24/7 Automated Alerts
&lt;/h2&gt;

&lt;p&gt;Khaga now runs in the background and alerts you when critical issues are detected — without you having to manually trigger a diagnosis.&lt;/p&gt;

&lt;p&gt;Set your scan frequency in Settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disabled&lt;/li&gt;
&lt;li&gt;Every hour&lt;/li&gt;
&lt;li&gt;Daily&lt;/li&gt;
&lt;li&gt;Weekly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alerts arrive via &lt;strong&gt;Slack&lt;/strong&gt; (Block Kit formatted) or &lt;strong&gt;email&lt;/strong&gt; (HTML template via Resend) with a "View in Khaga" button.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ Streaming responses
&lt;/h2&gt;

&lt;p&gt;Predictive analysis and Compliance used to show a spinner for 20-30 seconds. Users were abandoning before results loaded.&lt;/p&gt;

&lt;p&gt;Now results stream in — the UI shows an animated progress indicator within 1-2 seconds and snaps to the final result when done.&lt;/p&gt;
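&lt;p&gt;For the curious: streaming like this is typically done with server-sent events, and the wire framing is plain text. The sketch below shows just the framing; the Flask wiring in the comment is my assumption about how a Flask app would return it, not a confirmed detail of khaga.dev.&lt;/p&gt;

```python
# Sketch of server-sent-events framing for streamed results. The Flask
# usage in the comment below is an assumption, not khaga.dev's actual code.
import json

def sse_stream(chunks):
    """Wrap an iterator of dicts in SSE 'data:' frames, ending with [DONE]."""
    for chunk in chunks:
        yield f"data: {json.dumps(chunk)}\n\n"
    yield "data: [DONE]\n\n"

# In a Flask app this generator would typically be returned as, e.g.:
#   return Response(sse_stream(analysis_chunks), mimetype="text/event-stream")

frames = list(sse_stream([{"progress": 50}, {"result": "ok"}]))
print(frames[0])
```

&lt;p&gt;Because each frame flushes as soon as it's produced, the browser can show progress within a second or two instead of waiting 20-30 seconds for the full payload.&lt;/p&gt;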




&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions integration (scan on every deploy)&lt;/li&gt;
&lt;li&gt;Slack bot (&lt;code&gt;/khaga scan aws&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Team accounts&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;All of this is &lt;strong&gt;free&lt;/strong&gt; at &lt;a href="https://khaga.dev" rel="noopener noreferrer"&gt;khaga.dev&lt;/a&gt;. No credit card, no setup beyond adding your cloud credentials.&lt;/p&gt;

&lt;p&gt;If you run AWS/K8s/GCP without a dedicated SRE team — this is built for you.&lt;/p&gt;

&lt;p&gt;What feature would make you actually use this daily? Drop it in the comments.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with Flask, PostgreSQL, Claude AI (Anthropic), deployed on Railway.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#devops #aws #kubernetes #showdev #webdev&lt;/code&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I built a free AI tool that diagnoses AWS/Kubernetes infrastructure in seconds</title>
      <dc:creator>Gowri shankar</dc:creator>
      <pubDate>Thu, 05 Mar 2026 13:56:01 +0000</pubDate>
      <link>https://dev.to/gowrishankar-dev/i-built-a-free-ai-tool-that-diagnoses-awskubernetes-infrastructure-in-seconds-5cj7</link>
      <guid>https://dev.to/gowrishankar-dev/i-built-a-free-ai-tool-that-diagnoses-awskubernetes-infrastructure-in-seconds-5cj7</guid>
      <description>&lt;p&gt;I got tired of the same thing every DevOps engineer knows too well — something breaks in prod, alerts fire, and you spend the next 3 hours jumping between CloudWatch, kubectl logs, Azure Monitor, and 4 other dashboards trying to figure out what actually happened.&lt;br&gt;
So I built Khaga.&lt;br&gt;
You point it at your AWS, GCP, Azure, or Kubernetes setup and it gives you root cause analysis in plain English — what broke, why it broke, and the exact commands to fix it. No more guessing, no more tab switching.&lt;br&gt;
It also does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform plan security review&lt;/li&gt;
&lt;li&gt;Dockerfile analysis&lt;/li&gt;
&lt;li&gt;CI/CD log parsing&lt;/li&gt;
&lt;li&gt;Helm chart review&lt;/li&gt;
&lt;li&gt;SOC2 and ISO27001 compliance estimates&lt;/li&gt;
&lt;li&gt;Predictive diagnosis — what's likely to break next&lt;/li&gt;
&lt;li&gt;Cross-cloud correlation across all providers simultaneously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The compliance feature is something I'm particularly proud of. SOC2 assessments are normally out of reach for small teams. Khaga gives you a preliminary assessment free with honest AI disclaimers — it tells you what it can't assess, not just what looks good.&lt;br&gt;
Everything is free right now at khaga.dev; just sign in with Google.&lt;br&gt;
I'd genuinely love feedback from people who manage infrastructure day to day. What's missing? What would make this actually useful in your workflow?&lt;br&gt;
Built with Flask, Claude AI, PostgreSQL. Happy to answer questions about the architecture in the comments.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>infrastructure</category>
    </item>
  </channel>
</rss>
