<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Robles.H.</title>
    <description>The latest articles on DEV Community by Robles.H. (@roblesh).</description>
    <link>https://dev.to/roblesh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3788086%2Ff77a29f7-3a9f-44d1-8fe1-0228a49431be.jpg</url>
      <title>DEV Community: Robles.H.</title>
      <link>https://dev.to/roblesh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/roblesh"/>
    <language>en</language>
    <item>
      <title>I built a CLI that runs your CI locally and fixes failures with Claude Code</title>
      <dc:creator>Robles.H.</dc:creator>
      <pubDate>Mon, 04 May 2026 16:52:28 +0000</pubDate>
      <link>https://dev.to/roblesh/i-built-a-cli-that-runs-your-ci-locally-and-fixes-failures-with-claude-code-5bai</link>
      <guid>https://dev.to/roblesh/i-built-a-cli-that-runs-your-ci-locally-and-fixes-failures-with-claude-code-5bai</guid>
      <description>&lt;h1&gt;
  
  
  I built a CLI that runs your CI locally and fixes failures with Claude Code
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Stitch is an open-source CLI that parses your existing GitLab CI / GitHub Actions / Bitbucket Pipelines config, runs the jobs locally in parallel, and when something fails, hands the error to Claude Code or Codex CLI to fix. Re-verifies, commits, pushes. No API keys. MIT.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The loop I wanted to kill
&lt;/h2&gt;

&lt;p&gt;Every push to a branch triggers cloud CI. Cloud CI takes 8–15 minutes. When it fails (and ~30% of pushes fail something), I open the logs in a browser, fix locally, push again, wait another 12 minutes.&lt;/p&gt;

&lt;p&gt;On a normal day, that's 30–60 minutes of waiting. On a bad day with a flaky job, it's an afternoon.&lt;/p&gt;

&lt;p&gt;I tried the existing tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;act&lt;/code&gt;&lt;/strong&gt; runs GitHub Actions locally, but only GitHub, and it stops at "failed."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gitar / Nx Cloud&lt;/strong&gt; are SaaS platforms that intercept failures in remote CI. Each fix attempt costs a full remote CI cycle, and they want your pipelines on their platform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dagger + AI&lt;/strong&gt; runs locally but expects you to rewrite every pipeline in their SDK.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of them solved my actual problem: keep my existing config, run it on hardware I already own, and let the AI agent CLI I already pay for handle the fix loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Stitch does
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stitch run claude
  |
  |- parses .gitlab-ci.yml / .github/workflows/&lt;span class="k"&gt;*&lt;/span&gt;.yml / bitbucket-pipelines.yml
  |- filters &lt;span class="nb"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;skips deploy, publish, docker-build&lt;span class="o"&gt;)&lt;/span&gt;
  |- runs each job locally &lt;span class="o"&gt;(&lt;/span&gt;subprocess with &lt;span class="nb"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  |
  |- job passes? next job
  |- job fails?
  |    |- spawns the AI agent CLI with the error log
  |    |- agent investigates and edits files
  |    |- re-runs the job to verify the fix
  |    |- repeat up to &lt;span class="nt"&gt;--max-attempts&lt;/span&gt;
  |
  |- reports results with a live TUI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things that ended up mattering more than I expected:&lt;/p&gt;

&lt;h3&gt;
  
  
  Parallel execution by default
&lt;/h3&gt;

&lt;p&gt;Wall-clock time is &lt;code&gt;max(job_i)&lt;/code&gt; instead of &lt;code&gt;sum(job_i)&lt;/code&gt;. A 4-job pipeline takes as long as the slowest job, not all four added together.&lt;/p&gt;
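&lt;p&gt;A toy sketch of the math — plain &lt;code&gt;Promise.all&lt;/code&gt; over fake job timers, not Stitch's actual scheduler:&lt;/p&gt;

```javascript
// Illustrative only: running jobs concurrently makes wall-clock time
// roughly max(job durations) rather than their sum.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runJobsInParallel(durationsMs) {
  const start = Date.now();
  await Promise.all(durationsMs.map((ms) => sleep(ms)));
  return Date.now() - start;
}

// Four jobs of 50/100/150/200ms finish together in roughly the time
// of the slowest job, not the ~500ms the sum would cost.
```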

&lt;h3&gt;
  
  
  Batch fixes
&lt;/h3&gt;

&lt;p&gt;When multiple jobs fail together, Stitch sends all the errors to the agent in a single call. One missing import that breaks lint + typecheck + tests gets fixed once, not three times. Fewer tokens, fewer round-trips.&lt;/p&gt;
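&lt;p&gt;The batching idea, sketched with hypothetical names (not Stitch's internals): merge every failure log into one prompt so a shared root cause gets fixed once:&lt;/p&gt;

```javascript
// Hypothetical sketch: instead of one agent call per failed job,
// build a single prompt that carries every failure log.
function buildBatchPrompt(failures) {
  const sections = failures.map(({ job, log }) => `## Job: ${job}\n${log}`);
  return (
    "The following CI jobs failed. If they share a root cause, fix it once.\n\n" +
    sections.join("\n\n")
  );
}
```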

&lt;h3&gt;
  
  
  Re-verify, don't re-report
&lt;/h3&gt;

&lt;p&gt;After the agent edits, Stitch re-runs the previously failed jobs to prove the fix actually worked. Up to &lt;code&gt;--max-attempts&lt;/code&gt; per job. The fix loop closes only when the job is green.&lt;/p&gt;
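&lt;p&gt;The loop above can be sketched like this (function names are illustrative, not Stitch's actual code):&lt;/p&gt;

```javascript
// Sketch of the verify loop: re-run a failed job after each agent fix,
// up to maxAttempts, and only report success on a green re-run.
async function fixUntilGreen(runJob, askAgentToFix, maxAttempts) {
  let result = await runJob();
  let attempts = 0;
  while (!result.passed && maxAttempts > attempts) {
    await askAgentToFix(result.log); // agent edits files
    result = await runJob();         // re-verify, don't re-report
    attempts++;
  }
  return { passed: result.passed, attempts };
}
```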

&lt;h3&gt;
  
  
  Native Claude Code skill
&lt;/h3&gt;

&lt;p&gt;Stitch ships with a Claude Code skill that auto-triggers at four moments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Before every push.&lt;/strong&gt; Ask Claude to push, commit, or open a PR — Stitch runs first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End of a task.&lt;/strong&gt; When Claude finishes implementing a feature, it runs Stitch as the last step.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Before marking a todo complete.&lt;/strong&gt; If a TodoWrite item touches code, Claude runs Stitch first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context switch.&lt;/strong&gt; If you pivot, Claude validates the previous change first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Install once, never invoke again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pluggable agent CLI
&lt;/h3&gt;

&lt;p&gt;Works with Claude Code or Codex CLI today. Uses whichever subscription you already have (Claude Pro, ChatGPT Plus). No Anthropic or OpenAI API key required. Cost of an extra fix: $0.&lt;/p&gt;

&lt;h3&gt;
  
  
  Watch mode
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stitch run claude &lt;span class="nt"&gt;--watch&lt;/span&gt; &lt;span class="nt"&gt;--jobs&lt;/span&gt; lint,test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Re-runs on every save (debounced). Auto-commits the green result. Walk away from your terminal, come back to a clean branch.&lt;/p&gt;
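&lt;p&gt;The debouncing is what keeps a burst of saves from triggering a pile of runs; a minimal sketch (illustrative, not the real watcher):&lt;/p&gt;

```javascript
// Collapse a burst of save events into a single re-run after the
// filesystem goes quiet for waitMs milliseconds.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```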

&lt;h3&gt;
  
  
  History that survives
&lt;/h3&gt;

&lt;p&gt;Every run goes to &lt;code&gt;.stitch/history.jsonl&lt;/code&gt;, which is safe to commit. Streaks compact: 100 consecutive green runs collapse into a single line. Fixes never compact — every successful fix keeps its own entry with the commit SHA. And because the history lives in the repo, it syncs across machines with everything else.&lt;/p&gt;
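&lt;p&gt;The compaction rule can be sketched as follows (hypothetical entry shapes — the real &lt;code&gt;history.jsonl&lt;/code&gt; format may differ):&lt;/p&gt;

```javascript
// Collapse a streak of consecutive green runs into one summary entry,
// but keep every fix entry intact (fixes are never compacted).
function compactHistory(entries) {
  const out = [];
  let streak = 0;
  for (const e of entries) {
    if (e.type === "green") {
      streak++;
    } else {
      if (streak > 0) out.push({ type: "streak", count: streak });
      streak = 0;
      out.push(e);
    }
  }
  if (streak > 0) out.push({ type: "streak", count: streak });
  return out;
}
```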

&lt;h2&gt;
  
  
  Quick start
&lt;/h2&gt;

&lt;p&gt;Prerequisite: an agent CLI installed and logged in. Either Claude Code or OpenAI Codex CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; @anthropic-ai/claude-code   &lt;span class="c"&gt;# or @openai/codex&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx stitch-agent doctor              &lt;span class="c"&gt;# check setup&lt;/span&gt;
npx stitch-agent run claude          &lt;span class="c"&gt;# run + fix&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or install globally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; stitch-agent
stitch run claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install the Claude Code skill:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;npm root &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/stitch-agent/skills/stitch"&lt;/span&gt; ~/.claude/skills/stitch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Where it goes next
&lt;/h2&gt;

&lt;p&gt;What's not in v1: CircleCI, Jenkins, and Buildkite parsers. They're already the most-requested features and are on the near-term roadmap. PRs are very welcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Site:&lt;/strong&gt; &lt;a href="https://stitch-agent.dev" rel="noopener noreferrer"&gt;stitch-agent.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/X24LABS/stitch-agent" rel="noopener noreferrer"&gt;github.com/X24LABS/stitch-agent&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live on Product Hunt today:&lt;/strong&gt; &lt;a href="https://www.producthunt.com/products/stitch-agent?launch=stitch-agent" rel="noopener noreferrer"&gt;https://www.producthunt.com/products/stitch-agent?launch=stitch-agent&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have a CI config that you think would break the parser, drop it in an issue. I want to find the edge cases before users do.&lt;/p&gt;

&lt;p&gt;MIT, no funding, no telemetry, no SaaS layer. Just a CLI that closes the loop.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>opensource</category>
      <category>cli</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>Your API docs are built for humans. Who reads them now?</title>
      <dc:creator>Robles.H.</dc:creator>
      <pubDate>Tue, 24 Feb 2026 02:03:55 +0000</pubDate>
      <link>https://dev.to/roblesh/i-built-swagent-convert-your-openapi-spec-to-llmstxt-for-ai-ready-apis-4h51</link>
      <guid>https://dev.to/roblesh/i-built-swagent-convert-your-openapi-spec-to-llmstxt-for-ai-ready-apis-4h51</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Our docs are built for humans. Who reads them now?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We're in the &lt;strong&gt;Agent Era&lt;/strong&gt;. AI-first development isn't coming — it's here.&lt;/p&gt;

&lt;p&gt;LLM agents consume your API documentation too. Every time an AI assistant helps a developer integrate your API, it reads your docs. But Swagger UI, Redoc, and traditional OpenAPI docs weren't built for that.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real cost of human-first docs
&lt;/h2&gt;

&lt;p&gt;Traditional API docs are a disaster for AI agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Thousands of wasted tokens&lt;/strong&gt; on navigation chrome, sidebars, and UI elements the LLM has to skip&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verbose JSON schemas&lt;/strong&gt; with deeply nested &lt;code&gt;$ref&lt;/code&gt; definitions that explode token counts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duplicated descriptions&lt;/strong&gt; repeated across endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTML noise&lt;/strong&gt; that buries the actual API semantics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your AI-powered integrations are paying the cost — in latency, in token spend, and in accuracy.&lt;/p&gt;

&lt;h2&gt;
  
  
  What if you could just say: &lt;em&gt;"Learn my API"&lt;/em&gt;?
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Tell your AI agent: "Learn https://api.alloverapps.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the vision. The agent fetches &lt;code&gt;llms.txt&lt;/code&gt; — a compact, token-optimized representation of your entire API — and understands it immediately. No bloat. No parsing overhead. Just clean, structured API knowledge ready for the Agent Era.&lt;/p&gt;

&lt;p&gt;That's what &lt;a href="https://swagent.dev" rel="noopener noreferrer"&gt;swagent&lt;/a&gt; enables.&lt;/p&gt;

&lt;h2&gt;
  
  
  How swagent works
&lt;/h2&gt;

&lt;p&gt;swagent converts your OpenAPI spec into &lt;strong&gt;3 outputs simultaneously&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📄 &lt;strong&gt;llms.txt&lt;/strong&gt; — compact format for LLM consumption (~75% token reduction)&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;Markdown&lt;/strong&gt; — human-readable API reference&lt;/li&gt;
&lt;li&gt;🌐 &lt;strong&gt;HTML&lt;/strong&gt; — shareable landing page for your API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A 65KB OpenAPI spec becomes ~16KB. That's the difference between an agent that nails your API in one shot and one that hallucinates endpoints halfway through.&lt;/p&gt;

&lt;h2&gt;
  
  
  The compact notation
&lt;/h2&gt;

&lt;p&gt;Instead of verbose JSON schemas, swagent uses a notation designed for machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /pets
  Summary: List all pets
  Params: limit:number, status:available|pending|sold
  Response: [{id*, name*, tag}]

POST /pets
  Summary: Create a pet
  Body: {name*, tag}
  Response: {id*, name*, tag}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;*&lt;/code&gt; = required · &lt;code&gt;:type&lt;/code&gt; for non-strings · &lt;code&gt;|&lt;/code&gt; for enums · &lt;code&gt;[{...}]&lt;/code&gt; for arrays&lt;/p&gt;
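&lt;p&gt;For a feel of how such notation can be produced, here's an illustrative renderer for the object shorthand (not swagent's actual implementation):&lt;/p&gt;

```javascript
// Render a flat field list into the compact object notation:
// required fields get `*`, non-string types get `:type`.
function renderCompact(fields) {
  const parts = fields.map(({ name, type, required }) => {
    const star = required ? "*" : "";
    const t = type && type !== "string" ? `:${type}` : "";
    return `${name}${star}${t}`;
  });
  return `{${parts.join(", ")}}`;
}
```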

&lt;h2&gt;
  
  
  Quick start
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;swagent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Fastify&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Fastify&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fastify&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;swagent&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;swagent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Fastify&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;swagent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="c1"&gt;// Your API now exposes /llms.txt, /docs.md and a landing page&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also works with: &lt;strong&gt;Express, Hono, Elysia, Koa, NestJS, Nitro/Nuxt&lt;/strong&gt; and as a &lt;strong&gt;CLI tool&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Built on the llms.txt standard
&lt;/h2&gt;

&lt;p&gt;swagent aligns with the &lt;a href="https://llmstxt.org" rel="noopener noreferrer"&gt;llms.txt standard&lt;/a&gt; — the emerging convention for machine-readable content optimized for AI consumption. The pattern is simple: alongside your human docs, expose a version built for machines.&lt;/p&gt;

&lt;p&gt;The Agent Era is already here. Your API docs should be ready for it.&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;🌐 Live playground: &lt;a href="https://swagent.dev" rel="noopener noreferrer"&gt;swagent.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📦 npm: &lt;a href="https://www.npmjs.com/package/swagent" rel="noopener noreferrer"&gt;npmjs.com/package/swagent&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Are you building AI agents that consume APIs? How are you feeding API specs to your LLMs today?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>openapi</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
