<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abhi</title>
    <description>The latest articles on DEV Community by Abhi (@abhimanyubhagwati).</description>
    <link>https://dev.to/abhimanyubhagwati</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3805774%2F71114991-dfb2-4796-99e0-ab6e83eb940c.jpeg</url>
      <title>DEV Community: Abhi</title>
      <link>https://dev.to/abhimanyubhagwati</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abhimanyubhagwati"/>
    <language>en</language>
    <item>
      <title>I Got Tired of Googling kubectl Commands at 2 AM. So I Built a Local AI Agent That Does DevOps Safely. { pip install orbit-cli }</title>
      <dc:creator>Abhi</dc:creator>
      <pubDate>Wed, 04 Mar 2026 12:27:06 +0000</pubDate>
      <link>https://dev.to/abhimanyubhagwati/i-got-tired-of-googling-kubectl-commands-at-2-am-so-i-built-a-local-ai-agent-that-does-devops-39l</link>
      <guid>https://dev.to/abhimanyubhagwati/i-got-tired-of-googling-kubectl-commands-at-2-am-so-i-built-a-local-ai-agent-that-does-devops-39l</guid>
      <description>&lt;h2&gt;
  
  
  The Problem Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;I deployed a couple of applications recently. Simple stuff — containerize, push to a registry, get it running on Kubernetes. Should take an afternoon, right?&lt;/p&gt;

&lt;p&gt;It took me three days.&lt;/p&gt;

&lt;p&gt;I'm still learning Docker and Kubernetes. Every step was a Google search. "How to write a multi-stage Dockerfile." "Why is my pod in CrashLoopBackOff." "What's the difference between kubectl apply and kubectl create." I'd find a Stack Overflow answer, copy the command, run it, get a different error, go back to Google.&lt;/p&gt;

&lt;p&gt;And every time I needed to demo a quick POC to my team, the same cycle repeated. It was painful.&lt;/p&gt;

&lt;p&gt;So I had an idea — what if I built a tool that already knows all of this? Something that can take a goal like "deploy this app to Kubernetes" and actually figure out the steps, run them safely, and fix things when they break?&lt;/p&gt;

&lt;p&gt;Three weeks later, built on nights and weekends around my day job, that tool exists. It's called &lt;strong&gt;Orbit&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Conversation That Started It
&lt;/h2&gt;

&lt;p&gt;My friend Sidd works in DevOps. We were talking one evening about what makes DevOps tooling painful, and he said something that stuck with me:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The problem isn't that the commands are hard. It's that you have to hold 15 things in your head at once — what namespace you're in, what branch you're on, whether you're pointing at prod or staging, what the last error was."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That clicked. The real problem isn't knowledge — it's &lt;strong&gt;context&lt;/strong&gt;. A tool that could see your entire environment (git state, running containers, K8s cluster, system info) and factor all of that into its decisions would be genuinely useful.&lt;/p&gt;

&lt;p&gt;Sidd kept pushing me on what DevOps folks actually need. Not another chatbot wrapper. Something that understands risk. Something that won't let you accidentally delete production. Something that runs locally so your infrastructure details stay on your machine.&lt;/p&gt;

&lt;p&gt;That became the design spec for Orbit.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Everything Runs Locally (and Why That Matters)
&lt;/h2&gt;

&lt;p&gt;The first design decision was: &lt;strong&gt;nothing leaves your machine.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I use Ollama as the LLM backend. Every model runs locally. Your kubectl configs, your Docker setup, your git history, your environment variables — none of it gets sent to OpenAI or Anthropic or anyone else.&lt;/p&gt;

&lt;p&gt;This isn't just a privacy thing (though it is). It's a practical thing. If you're working with production infrastructure, you don't want your cluster details, namespaces, pod names, and error logs flowing through a third-party API. Period.&lt;/p&gt;

&lt;p&gt;Ollama has gotten surprisingly good. Models like Qwen 2.5 at 7B parameters can generate structured JSON reliably, follow system prompts, and reason about shell commands. Not GPT-4 level, but good enough for DevOps task planning — and it runs on my MacBook in seconds.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Engineering: How Orbit Actually Works
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Agent Loop
&lt;/h3&gt;

&lt;p&gt;When you run &lt;code&gt;orbit do "find why my pods are crashing and fix it"&lt;/code&gt;, here's what actually happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Goal → Scan Environment → Decompose into Subtasks → Select Models
→ Allocate Context Budget → Generate Plan → [For each step:]
  Classify Risk → Confirm with User → Execute → Observe Result
  → Success? Next step. Failed? Replan.
→ Summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't a simple "send prompt to LLM, run the output" pipeline. Each stage is its own component with its own logic.&lt;/p&gt;
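&lt;p&gt;As a rough illustration, the loop above can be sketched in Python. Everything here is stand-in scaffolding — the callables are hypothetical placeholders for components, not Orbit's actual API:&lt;/p&gt;

```python
def run_agent(goal, scan, plan, classify, confirm, execute, replan, max_replans=3):
    """Skeleton of the loop above; each callable stands in for a real component."""
    env = scan()                        # Scan Environment
    steps = plan(goal, env)             # Decompose + Generate Plan
    results = []
    i = 0
    while len(steps) > i:
        step = steps[i]
        tier = classify(step)           # Classify Risk
        if tier != "safe" and not confirm(step, tier):
            break                       # user declined
        ok, output = execute(step)      # Execute + Observe Result
        results.append((step, ok, output))
        if ok:
            i += 1                      # Success? Next step.
        elif max_replans > 0:
            steps[i:] = replan(goal, steps[i:], output)  # Failed? Replan.
            max_replans -= 1
        else:
            break                       # replan budget exhausted
    return results                      # Summary
```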

&lt;p&gt;&lt;strong&gt;Environment Scanning&lt;/strong&gt; runs 5 collectors in parallel using asyncio: git state, Docker containers, Kubernetes cluster, system info, and filesystem structure. Each collector is fault-tolerant — if you don't have kubectl installed, the K8s collector returns empty instead of crashing. Everything has a 5-second timeout. Results are cached with a 5-second TTL so the agent loop doesn't re-scan on every step.&lt;/p&gt;
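&lt;p&gt;The fault-tolerant parallel scan can be sketched like this — a minimal version with two fake collectors (the real ones shell out to git, docker, kubectl, and so on):&lt;/p&gt;

```python
import asyncio
import time

# Two stand-in collectors; one simulates a machine without kubectl.
async def collect_git():
    return {"branch": "feature/orbit"}

async def collect_k8s():
    raise FileNotFoundError("kubectl not installed")

async def safe_collect(coro, timeout=5.0):
    """Fault tolerance: any exception or timeout becomes an empty result."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except Exception:
        return {}

_cache = {"data": None, "ts": 0.0}

async def scan_environment(ttl=5.0):
    """Run collectors in parallel; cache the merged snapshot for `ttl` seconds."""
    if _cache["data"] is not None and ttl > time.monotonic() - _cache["ts"]:
        return _cache["data"]
    git, k8s = await asyncio.gather(safe_collect(collect_git()),
                                    safe_collect(collect_k8s()))
    _cache.update(data={"git": git, "k8s": k8s}, ts=time.monotonic())
    return _cache["data"]
```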

&lt;p&gt;&lt;strong&gt;Task Decomposition&lt;/strong&gt; takes your goal and breaks it into subtasks, each tagged with a capability requirement: &lt;code&gt;fast_shell&lt;/code&gt; for simple commands, &lt;code&gt;code_gen&lt;/code&gt; for generating scripts, &lt;code&gt;reasoning&lt;/code&gt; for complex analysis. This matters because...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Model Routing&lt;/strong&gt; picks the best locally available model for each capability. Not every model is good at everything. A small, fast model can handle &lt;code&gt;ls&lt;/code&gt; and &lt;code&gt;grep&lt;/code&gt;, but you want your beefiest model for debugging a cascade failure. The router scans the output of &lt;code&gt;ollama list&lt;/code&gt;, maps each model to capabilities based on known benchmarks, and assigns models to subtasks. No LLM call needed — it's a deterministic lookup.&lt;/p&gt;
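&lt;p&gt;A deterministic lookup like that fits in a few lines. The capability table below is hypothetical, just to show the shape — Orbit's real mappings come from benchmarks:&lt;/p&gt;

```python
# Hypothetical capability table for illustration only.
MODEL_CAPS = {
    "qwen2.5:7b":  {"fast_shell", "code_gen", "reasoning"},
    "llama3.2:3b": {"fast_shell"},
}
# Preference order per capability: smallest adequate model first.
PRIORITY = {
    "fast_shell": ["llama3.2:3b", "qwen2.5:7b"],
    "code_gen":   ["qwen2.5:7b"],
    "reasoning":  ["qwen2.5:7b"],
}

def route(capability, installed):
    """Deterministic lookup: first preferred model that is installed and capable."""
    for model in PRIORITY.get(capability, []):
        if model in installed and capability in MODEL_CAPS.get(model, set()):
            return model
    return None  # caller falls back to a default model
```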

&lt;p&gt;&lt;strong&gt;Context Budget Allocation&lt;/strong&gt; is token-aware. Each context slot (git info, Docker status, etc.) has a relevance score and estimated token count. The allocator greedily fills the context window by relevance, truncating the last slot if needed. Three truncation strategies: head (for logs), tail (for diffs), and summary (first half + "[truncated]" + last half).&lt;/p&gt;
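&lt;p&gt;A minimal sketch of the greedy allocator and the three truncation strategies, approximating tokens as whitespace-separated words (the real allocator uses proper token counts):&lt;/p&gt;

```python
def truncate(text, limit, strategy="head"):
    """Trim to roughly `limit` tokens (crudely approximated as words here)."""
    words = text.split()
    if limit >= len(words):
        return text
    if strategy == "head":    # keep the start (e.g. the first error in a log)
        return " ".join(words[:limit])
    if strategy == "tail":    # keep the end (e.g. the most recent diff hunks)
        return " ".join(words[-limit:])
    half = limit // 2         # "summary": first half + marker + last half
    return " ".join(words[:half]) + " [truncated] " + " ".join(words[-half:])

def allocate(slots, budget):
    """Greedily pack (name, relevance, text) slots; truncate the overflow slot."""
    chosen = []
    for name, relevance, text in sorted(slots, key=lambda s: -s[1]):
        tokens = len(text.split())
        if budget >= tokens:
            chosen.append((name, text))
            budget -= tokens
        elif budget > 0:
            chosen.append((name, truncate(text, budget)))
            budget = 0
    return chosen
```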




&lt;h3&gt;
  
  
  The Safety System (The Part I'm Most Proud Of)
&lt;/h3&gt;

&lt;p&gt;Here's the thing about AI agents that run shell commands: they can destroy things. &lt;code&gt;rm -rf /&lt;/code&gt;. &lt;code&gt;kubectl delete namespace production&lt;/code&gt;. &lt;code&gt;git push --force&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Most AI tools handle this by asking the LLM "is this command safe?" That's insane. You're asking the same system that generated the command to evaluate whether it's dangerous. That's like asking the intern who wrote the script whether it's safe to run in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orbit's safety classifier is regex-only. Zero LLM calls.&lt;/strong&gt; 173 hand-written regex patterns that classify every command into four tiers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier&lt;/th&gt;
&lt;th&gt;What Happens&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Safe&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Runs silently&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;cat&lt;/code&gt;, &lt;code&gt;kubectl get&lt;/code&gt;, &lt;code&gt;docker ps&lt;/code&gt;, &lt;code&gt;git log&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Caution&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Single confirmation prompt&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;git push&lt;/code&gt;, &lt;code&gt;docker build&lt;/code&gt;, &lt;code&gt;kubectl apply&lt;/code&gt;, &lt;code&gt;pip install&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Destructive&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Impact analysis + double confirmation&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;rm&lt;/code&gt;, &lt;code&gt;kubectl delete&lt;/code&gt;, &lt;code&gt;git reset --hard&lt;/code&gt;, &lt;code&gt;git push --force&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Nuclear&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Type "i am sure" + 3-second cooldown&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;rm -rf /&lt;/code&gt;, &lt;code&gt;DROP TABLE&lt;/code&gt;, &lt;code&gt;terraform destroy&lt;/code&gt;, any destructive command in production&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The critical design rule: &lt;strong&gt;unrecognized commands default to &lt;code&gt;caution&lt;/code&gt;, never &lt;code&gt;safe&lt;/code&gt;.&lt;/strong&gt; If the classifier doesn't recognize your command, it assumes risk.&lt;/p&gt;
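&lt;p&gt;Structurally, the classifier is an ordered list of (tier, pattern) rules where the first match wins. Here's a tiny slice in that spirit — these few patterns are illustrative, not Orbit's actual 173:&lt;/p&gt;

```python
import re

# A tiny illustrative slice of the tier table; first match wins.
RULES = [
    ("nuclear",     re.compile(r"^rm\s+-rf\s+/\s*$")),
    ("nuclear",     re.compile(r"^kubectl\s+delete\s+namespace\b")),
    ("destructive", re.compile(r"^kubectl\s+delete\b")),
    ("destructive", re.compile(r"^rm\b")),
    ("caution",     re.compile(r"^git\s+push\b")),
    ("safe",        re.compile(r"^(ls|cat|docker\s+ps|kubectl\s+get)\b")),
]

def classify(command):
    """First matching pattern wins; unrecognized commands are never 'safe'."""
    for tier, pattern in RULES:
        if pattern.search(command.strip()):
            return tier
    return "caution"  # the critical default
```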

&lt;p&gt;But the really clever part is &lt;strong&gt;production detection&lt;/strong&gt;. Orbit checks your git branch, K8s namespace, and K8s context for production indicators (main, master, release/*, prod, production, live). If it detects production, any escalatable command gets bumped to nuclear automatically.&lt;/p&gt;

&lt;p&gt;So &lt;code&gt;git push&lt;/code&gt; from a feature branch? Caution (single confirm). &lt;code&gt;git push&lt;/code&gt; when you're ON main? Nuclear. Type "i am sure" and wait 3 seconds. Because that push is going to production.&lt;/p&gt;

&lt;p&gt;This saved me during development. I was testing with my actual git repo and almost pushed garbage to main. The escalation caught it.&lt;/p&gt;
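&lt;p&gt;The escalation logic boils down to something like this sketch — the marker list and function names are mine, but the shape matches what the post describes:&lt;/p&gt;

```python
PRODUCTION_MARKERS = {"main", "master", "prod", "production", "live"}

def is_production(branch="", namespace="", context=""):
    """Any production indicator across git branch, K8s namespace, or K8s context."""
    return any(
        value in PRODUCTION_MARKERS or value.startswith("release/")
        for value in (branch, namespace, context)
    )

def effective_tier(tier, branch="", namespace="", context=""):
    """Escalatable tiers jump straight to nuclear when pointed at production."""
    if tier in {"caution", "destructive"} and is_production(branch, namespace, context):
        return "nuclear"
    return tier
```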




&lt;h3&gt;
  
  
  Auto-Replanning: When Things Go Wrong
&lt;/h3&gt;

&lt;p&gt;Real DevOps isn't linear. Commands fail. Pods crash. Builds break. An agent that can only execute a static plan is useless.&lt;/p&gt;

&lt;p&gt;Orbit's observer watches every command result. If a step fails and there's replan budget remaining, it feeds the error back to the LLM with context about what already succeeded, and gets a new plan for the remaining steps. No re-running successful steps. The replanner addresses the specific error.&lt;/p&gt;

&lt;p&gt;But replanning has hard limits. The agent budget enforces: max 15 steps, max 3 replans per step, max 25 total LLM calls. When the budget is exhausted, Orbit exits gracefully with a summary of what it accomplished and what failed. No infinite loops. No runaway token consumption.&lt;/p&gt;
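&lt;p&gt;Budget enforcement is just bookkeeping. A minimal sketch using the limits above (field names are mine, not Orbit's):&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Limits from the post: 15 steps, 3 replans per step, 25 total LLM calls.
@dataclass
class AgentBudget:
    max_steps: int = 15
    max_replans_per_step: int = 3
    max_llm_calls: int = 25
    steps_taken: int = 0
    llm_calls: int = 0
    replans: dict = field(default_factory=dict)

    def can_step(self):
        return self.max_steps > self.steps_taken and self.max_llm_calls > self.llm_calls

    def can_replan(self, step_id):
        return (self.max_replans_per_step > self.replans.get(step_id, 0)
                and self.max_llm_calls > self.llm_calls)

    def record_replan(self, step_id):
        self.replans[step_id] = self.replans.get(step_id, 0) + 1
        self.llm_calls += 1
```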

&lt;h3&gt;
  
  
  Rollback Plans
&lt;/h3&gt;

&lt;p&gt;Every destructive step gets a rollback plan. &lt;code&gt;git reset --hard&lt;/code&gt;? Rollback via &lt;code&gt;git reflog&lt;/code&gt;. &lt;code&gt;kubectl apply -f deploy.yaml&lt;/code&gt;? Rollback is &lt;code&gt;kubectl delete -f deploy.yaml&lt;/code&gt;. &lt;code&gt;docker compose down&lt;/code&gt;? Rollback is &lt;code&gt;docker compose up -d&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Some things can't be rolled back (&lt;code&gt;rm&lt;/code&gt;, &lt;code&gt;kubectl delete pod&lt;/code&gt;). Orbit tells you that explicitly: "File deletion is irreversible. Check backups."&lt;/p&gt;
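&lt;p&gt;The rollback pairs above suggest a simple mapping. Here's an illustrative sketch — the table entries are examples from this post, and the function name is hypothetical:&lt;/p&gt;

```python
import re

# Illustrative command/rollback pairs from the examples above.
ROLLBACKS = [
    (r"^kubectl\s+apply\s+-f\s+(\S+)", "kubectl delete -f {0}"),
    (r"^docker\s+compose\s+down\b",    "docker compose up -d"),
    (r"^git\s+reset\s+--hard\b",       "git reflog  # find the old sha, reset to it"),
]
IRREVERSIBLE = [r"^rm\b", r"^kubectl\s+delete\s+pod\b"]

def rollback_for(command):
    """Return a rollback command, an irreversibility warning, or None."""
    for pattern in IRREVERSIBLE:
        if re.search(pattern, command):
            return "IRREVERSIBLE: check backups"
    for pattern, template in ROLLBACKS:
        match = re.search(pattern, command)
        if match:
            return template.format(*match.groups())
    return None
```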




&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;300 tests. All passing. 3.09 seconds.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I didn't write 300 tests to pad a number. Each test validates a specific behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;129 safety classifier tests&lt;/strong&gt; — every regex disambiguation edge case. &lt;code&gt;sed&lt;/code&gt; vs &lt;code&gt;sed -i&lt;/code&gt;. &lt;code&gt;rm -rf ./dir&lt;/code&gt; (destructive) vs &lt;code&gt;rm -rf /tmp&lt;/code&gt; (nuclear). &lt;code&gt;git stash list&lt;/code&gt; (safe) vs &lt;code&gt;git stash pop&lt;/code&gt; (caution). Production escalation for every git branch variant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;53 agent tests&lt;/strong&gt; — real subprocess execution with timeout/kill, streaming output, observer decisions, planner model selection fallback chains, budget enforcement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;43 context tests&lt;/strong&gt; — parallel scanner with fault tolerance, cache TTL, context budget truncation strategies (head/tail/summary), allocation edge cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;18 router tests&lt;/strong&gt; — model capability matching, priority lookup, decomposer with LLM fallback.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;57 more&lt;/strong&gt; across schemas, config, CLI, memory, and LLM provider interfaces.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every Pydantic model validates. Every JSON schema generates correctly. Every error path returns a safe fallback instead of crashing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned Building This
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Claude Code Made This Possible
&lt;/h3&gt;

&lt;p&gt;I need to be honest about this: I couldn't have built Orbit in three weeks of nights and weekends without AI coding tools. Claude Code was a massive multiplier.&lt;/p&gt;

&lt;p&gt;The pattern was: I'd think through what I wanted (the safety classifier design, the context budget allocator, the observer decision logic), then use Claude Code to help me write it, iterate on edge cases, and generate test coverage. The architecture and design decisions were mine (with Sidd's input). The implementation velocity came from having a coding partner that doesn't sleep.&lt;/p&gt;

&lt;p&gt;This is the reality of building software in 2025. The ideas, the architecture, the "what should this do and why" — that's still deeply human. The "write me a regex that matches kubectl delete with a negative lookahead for namespace" — that's where AI shines.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Regex Safety Classifier Was The Hardest Part
&lt;/h3&gt;

&lt;p&gt;Not the LLM integration. Not the async context scanning. The 173 regex patterns.&lt;/p&gt;

&lt;p&gt;Every pattern has to be precise. &lt;code&gt;sed&lt;/code&gt; without &lt;code&gt;-i&lt;/code&gt; is safe (just prints to stdout). &lt;code&gt;sed -i&lt;/code&gt; modifies files in place — that's caution. The safe pattern uses &lt;code&gt;(?!.*-i)&lt;/code&gt; negative lookahead to exclude the &lt;code&gt;-i&lt;/code&gt; variant. Get that wrong and you're either blocking harmless commands or letting dangerous ones through silently.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl delete pod&lt;/code&gt; is destructive. &lt;code&gt;kubectl delete namespace&lt;/code&gt; is nuclear. &lt;code&gt;kubectl delete pods --all&lt;/code&gt; is nuclear. Three different patterns, ordered so nuclear matches first. First match wins.&lt;/p&gt;
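&lt;p&gt;Both disambiguations are easy to demonstrate. These patterns are sketches in the same spirit — Orbit's exact regexes differ, and long flags like &lt;code&gt;--in-place&lt;/code&gt; would need their own handling:&lt;/p&gt;

```python
import re

# sed without -i only prints; sed -i edits files in place. The negative
# lookahead keeps the in-place variant out of the "safe" pattern.
SAFE_SED = re.compile(r"^sed\b(?!.*\s-i\b)")

# Order matters: the nuclear kubectl patterns must come before the broader
# destructive one, because the first match wins.
KUBECTL_RULES = [
    ("nuclear",     re.compile(r"^kubectl\s+delete\s+(namespace\b|pods?\s+--all\b)")),
    ("destructive", re.compile(r"^kubectl\s+delete\b")),
]

def kubectl_tier(command):
    for tier, pattern in KUBECTL_RULES:
        if pattern.search(command):
            return tier
    return "caution"
```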

&lt;p&gt;I spent an entire weekend just on the safety patterns. Writing them, testing them, finding edge cases, fixing them, finding more edge cases. It's the kind of work that's tedious but existentially important when your tool runs shell commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structured Output Is The Right Way To Talk To LLMs
&lt;/h3&gt;

&lt;p&gt;Every single LLM call in Orbit returns structured output. Pydantic model → JSON schema → Ollama's &lt;code&gt;format=&lt;/code&gt; parameter. The model returns JSON that matches the schema, Pydantic validates it, and the code gets typed data.&lt;/p&gt;

&lt;p&gt;No parsing free text. No "extract the command from between the backticks." No regex on LLM output. If the JSON doesn't validate, retry once, then return a safe fallback (empty plan, single general subtask, etc.).&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;temperature=0&lt;/code&gt;, Qwen 2.5 generates valid JSON matching the schema 95%+ of the time. The retry catches most of the rest. The fallback catches the remainder. Three layers of defense.&lt;/p&gt;
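&lt;p&gt;The three layers look roughly like this, assuming Pydantic v2. The LLM call is stubbed out as a plain callable so the layering is visible; the commented Ollama call is an assumption about the client API, not Orbit's verbatim code:&lt;/p&gt;

```python
from pydantic import BaseModel, ValidationError

class Plan(BaseModel):
    steps: list[str]

def get_plan(call_llm, retries=1):
    """Three layers: schema-constrained call, one retry, then a safe fallback."""
    for _ in range(retries + 1):
        raw = call_llm(schema=Plan.model_json_schema())
        try:
            return Plan.model_validate_json(raw)   # layer one: validation
        except ValidationError:
            continue                               # layer two: retry once
    return Plan(steps=[])                          # layer three: safe fallback

# Against a real backend this would look roughly like (assumption; check the
# ollama client docs for the exact signature):
#   ollama.chat(model="qwen2.5:7b", messages=msgs,
#               format=Plan.model_json_schema(), options={"temperature": 0})
```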




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;orbit-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll need Python 3.11+ and Ollama running with at least one model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull qwen2.5:7b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;orbit &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="s2"&gt;"check disk usage and find the largest directories"&lt;/span&gt;
orbit sense                    &lt;span class="c"&gt;# see what Orbit sees in your environment&lt;/span&gt;
orbit wtf                      &lt;span class="c"&gt;# debug the last failed command&lt;/span&gt;
orbit ask &lt;span class="s2"&gt;"why is my pod in CrashLoopBackOff?"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code is on GitHub: &lt;a href="https://github.com/abhimanyubhagwati/orbit-cli" rel="noopener noreferrer"&gt;github.com/abhimanyubhagwati/orbit-cli&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is my second open source release after TraceForge (a testing tool for AI agents). Building tools at night alongside a day job isn't easy, but it's the most fun I have writing code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you try it — drop a comment below and tell me what command you ran first. And if it does something unexpected, open an issue. That feedback is exactly what makes this better.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Orbit is Apache 2.0 licensed. Built by Abhimanyu Bhagwati.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>mcp</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
