<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community</title>
    <description>The most recent home feed on DEV Community.</description>
    <link>https://dev.to</link>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/"/>
    <language>en</language>
    <item>
      <title>GSD: Zero to Productive in Claude Code Without the Faff</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:46:42 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/gsd-zero-to-productive-in-claude-code-without-the-faff-1pm1</link>
      <guid>https://dev.to/stevengonsalvez/gsd-zero-to-productive-in-claude-code-without-the-faff-1pm1</guid>
      <description>&lt;h2&gt;
  
  
  The Context Rot Problem, Sorted
&lt;/h2&gt;

&lt;p&gt;Ever had a Claude Code session go long and watched the model just... forget what it was doing three steps ago? Context rot. The window fills up, quality tanks, you're repeating yourself, and the whole thing turns into a shambles. Everyone hits it eventually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gsd-build/get-shit-done" rel="noopener noreferrer"&gt;GSD (Get Shit Done)&lt;/a&gt; fixes this by splitting your work into phases: discuss, plan, execute, verify. Each phase keeps its own focused context instead of dumping everything into one massive window. The model knows what to pay attention to &lt;em&gt;right now&lt;/em&gt; rather than trying to hold your entire session in its head.&lt;/p&gt;

&lt;p&gt;48K stars on GitHub, which is mental for a prompting framework. 69 commands, 24 agents running researchers and verifiers in parallel, and it works across 12+ runtimes. Not just Claude Code. Cursor, Gemini, Copilot, Cline, OpenCode, the lot.&lt;/p&gt;

&lt;p&gt;The bit I reckon matters most: you write what you want in a structured spec, GSD breaks it into atomic tasks, each task gets its own clean context, and you get proper git commits per task. No more "one massive diff that nobody wants to review" at the end of a session.&lt;/p&gt;
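
&lt;p&gt;To make the per-task-commit bit concrete, here's the shape of history you end up with after a phase completes. The commit messages below are illustrative, not GSD's actual format; the inspection is stock git:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# one commit per atomic task, so each diff is reviewable on its own
# (messages are illustrative, not GSD's generated format)
git log --oneline -4
# a1b2c3d task: add token refresh to the auth middleware
# d4e5f6a task: wire refresh into the session store
# 9f8e7d6 task: cover the refresh path with unit tests
# 3c2b1a0 plan: auth token refresh (spec and task breakdown)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;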

&lt;p&gt;Built by &lt;a href="https://github.com/gsd-build" rel="noopener noreferrer"&gt;TACHES&lt;/a&gt;. The &lt;a href="https://github.com/gsd-build/gsd-2" rel="noopener noreferrer"&gt;GSD-2 fork&lt;/a&gt; takes it further with even more agents. If you've been faffing about with raw Claude Code prompts and wondering why long sessions go wonky, start here. Nick the bits that work for you, bin the rest.&lt;/p&gt;

</description>
      <category>gsdgetshitdone</category>
      <category>claudecodecontextrot</category>
      <category>metapromptingframework</category>
      <category>aicodingworkflow</category>
    </item>
    <item>
      <title>ntfy + PingMe: Get Pinged When Your Agent Finishes</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:46:35 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/ntfy-pingme-get-pinged-when-your-agent-finishes-32ln</link>
      <guid>https://dev.to/stevengonsalvez/ntfy-pingme-get-pinged-when-your-agent-finishes-32ln</guid>
      <description>&lt;p&gt;You kick off a Claude Code task that's going to take twenty minutes. You tab away. You forget. Forty minutes later you remember, and the agent's been sat there waiting for input since minute twelve. Proper waste of time, and I kept doing it until I wired up notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/binwiederhier/ntfy" rel="noopener noreferrer"&gt;ntfy&lt;/a&gt; is the one I reach for. Self-hostable, no signup, no faff. One curl and your phone buzzes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"Agent finished the migration"&lt;/span&gt; ntfy.sh/my-agent-alerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
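
&lt;p&gt;ntfy also takes optional headers for title, priority, and tags, which is handy for telling "finished" apart from "stuck waiting for input" at a glance:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# high-priority ping with a title and a warning tag
curl -H "Title: Claude Code needs input" \
  -H "Priority: high" \
  -H "Tags: warning" \
  -d "Session paused on the migration task" \
  ntfy.sh/my-agent-alerts
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;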



&lt;p&gt;Someone built an &lt;a href="https://github.com/cyanheads/ntfy-mcp-server" rel="noopener noreferrer"&gt;ntfy MCP server&lt;/a&gt; if you want the MCP route, though I'd recommend calling it through &lt;a href="https://dev.to/tools-tips/mcporter"&gt;mcporter&lt;/a&gt; rather than loading the full tool schema into your context (see &lt;a href="https://dev.to/blog/death-of-mcp"&gt;why MCP is a context tax&lt;/a&gt;). Even simpler: &lt;a href="https://github.com/cfngc4594/agent-notify" rel="noopener noreferrer"&gt;agent-notify&lt;/a&gt; hooks ntfy into Claude Code's lifecycle events directly, so you get pinged on completion without any MCP wiring at all. That's what I run. Mint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kha7iq/pingme" rel="noopener noreferrer"&gt;PingMe&lt;/a&gt; does something different. Single binary, env var config, blasts notifications to Slack, Telegram, Discord, Teams, Pushover, Mastodon, email, and about ten more. If your team lives across five different chat apps (and whose doesn't), PingMe covers them all from one command. There's a &lt;a href="https://github.com/kha7iq/pingme-action" rel="noopener noreferrer"&gt;GitHub Action&lt;/a&gt; too, which is handy for CI pipelines.&lt;/p&gt;

&lt;p&gt;I use ntfy because agent-notify means zero wiring on my end. The agent finishes, I get a ping. PingMe's the better shout if you need to fan out alerts to the whole team across platforms. Pick whichever matches your setup, but stop checking your terminal every three minutes like a muppet. That was me. Don't be me.&lt;/p&gt;

</description>
      <category>ntfy</category>
      <category>pingme</category>
      <category>agentnotification</category>
      <category>pushalertaiagent</category>
    </item>
    <item>
      <title>Claude Flow: The Multi-Agent Swarm Orchestrator Before It Got a New Name</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:46:29 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/claude-flow-the-multi-agent-swarm-orchestrator-before-it-got-a-new-name-4kd4</link>
      <guid>https://dev.to/stevengonsalvez/claude-flow-the-multi-agent-swarm-orchestrator-before-it-got-a-new-name-4kd4</guid>
      <description>&lt;h2&gt;
  
  
  What Claude Flow Was
&lt;/h2&gt;

&lt;p&gt;Right, so this tool landed in mid-2025 and it properly turned heads. Claude Flow was built by Reuven Cohen (GitHub handle &lt;code&gt;ruvnet&lt;/code&gt;), and the pitch was straightforward: take Anthropic's Claude, strap a multi-agent orchestration layer on top of it, and let coordinated swarms of AI agents tackle software development tasks. Not one agent fumbling through your codebase on its own. Dozens of them, working in parallel, with a queen agent calling the shots.&lt;/p&gt;

&lt;p&gt;The repo went live on GitHub around June 2025 under &lt;code&gt;ruvnet/claude-flow&lt;/code&gt;, and the thing that set it apart from other "run multiple agents" tools at the time was SPARC.&lt;/p&gt;

&lt;h2&gt;
  
  
  SPARC: The Methodology Baked Into the Tool
&lt;/h2&gt;

&lt;p&gt;SPARC stood for Specification, Pseudocode, Architecture, Refinement, Completion. It was a structured, test-driven approach to AI development that came packaged directly into Claude Flow. You didn't have to invent a workflow or figure out how to prompt your agents. You ran &lt;code&gt;npx claude-flow sparc run&lt;/code&gt; and the tool walked your swarm through each phase.&lt;/p&gt;

&lt;p&gt;Ten specialised modes. The agents knew what phase they were in, what their job was at each stage, and how to hand off work to the next phase. If you'd been struggling with the "I told the agent to build a feature and it went off the rails" problem, SPARC was the answer Claude Flow offered. Structure the work, structure the agents, structure the output.&lt;/p&gt;
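
&lt;p&gt;For the historical record, invoking a phase looked roughly like this. The mode name is illustrative; the tool shipped its own list of modes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# run one SPARC phase against a task; the mode decides which specialised agents engage
# (mode name is illustrative, not a guaranteed entry in the shipped list)
npx claude-flow sparc run tdd "add rate limiting to the public API"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;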

&lt;h2&gt;
  
  
  How the Swarm Actually Worked
&lt;/h2&gt;

&lt;p&gt;This is where it got interesting. Claude Flow used what they called a "hive mind" architecture. A queen agent sat at the top, coordinating sub-agents below. Each sub-agent had a specialisation. Some did research. Some wrote code. Some reviewed it. The queen figured out who should do what and when.&lt;/p&gt;

&lt;p&gt;By the time v2.7 rolled around, the platform had:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;60+ specialised agents running in coordinated swarms&lt;/li&gt;
&lt;li&gt;AgentDB memory powered by SQLite with semantic queries&lt;/li&gt;
&lt;li&gt;Neural memory enhancement for cross-session recall&lt;/li&gt;
&lt;li&gt;Claude Code integration via MCP (Model Context Protocol)&lt;/li&gt;
&lt;li&gt;Parallel agent execution across tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AgentDB bit was particularly clever. v2.7.x introduced 150x faster semantic queries with 56% less memory usage. So your agents could actually remember what they'd done across sessions without the whole thing grinding to a halt.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Happened Next
&lt;/h2&gt;

&lt;p&gt;Claude Flow eventually became &lt;a href="https://dev.to/byte-sized-banter/week-11-claude-flow-becomes-ruflo"&gt;Ruflo&lt;/a&gt; in early 2026. Not just a rename, a full architectural shift to Rust/WASM. But that's a separate story.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;SPARC to Skills&lt;/strong&gt;: The original SPARC methodology was rigid by design. Five phases, sequential, with explicit gates between them. This works brilliantly for well-defined tasks but fights against the reality that most development is messy and iterative. Ruflo's shift to skills-based orchestration reflects a pattern seen across the whole AI tooling space in 2025/2026: structured phases give way to composable capabilities. HumanLayer's RPI methodology made the same move. GitHub's spec-driven development is another flavour of it. The industry is converging on "give agents the right tools and context, then let them figure out the order" over "prescribe the exact sequence." SPARC was the right answer for mid-2025. Skills are the right answer for where we are now.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use It
&lt;/h2&gt;

&lt;p&gt;If you're running Claude Code and you want to throw multiple agents at a problem in parallel with proper coordination, Ruflo (the current incarnation) is one of the most mature options out there. 29,000+ stars on GitHub. Active development. Real architectural investment under the hood.&lt;/p&gt;

&lt;p&gt;The tradeoff is complexity. This is not a "pip install and go" tool. It's an enterprise-grade platform with consensus algorithms (Raft, Byzantine, Gossip), distributed swarm intelligence, and a WASM runtime. If you need five agents working on different files simultaneously with shared memory and coordination, it's brilliant. If you need one agent to fix a bug, it's massive overkill.&lt;/p&gt;

&lt;p&gt;For simpler multi-agent needs, have a look at &lt;a href="https://dev.to/tools-tips/claude-squad"&gt;Claude Squad&lt;/a&gt; which takes a much lighter approach to running parallel agents.&lt;/p&gt;

</description>
      <category>claudeflow</category>
      <category>ruflo</category>
      <category>ruvnetclaudeflow</category>
      <category>sparcmethodology</category>
    </item>
    <item>
      <title>Obsidian Skills: Let Your Agent Manage Your Second Brain</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:46:22 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/obsidian-skills-let-your-agent-manage-your-second-brain-4fel</link>
      <guid>https://dev.to/stevengonsalvez/obsidian-skills-let-your-agent-manage-your-second-brain-4fel</guid>
      <description>&lt;h2&gt;
  
  
  What It Is
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/kepano/obsidian-skills" rel="noopener noreferrer"&gt;Obsidian Skills&lt;/a&gt; by &lt;a href="https://github.com/kepano" rel="noopener noreferrer"&gt;Steph Ango&lt;/a&gt; (Obsidian's CEO) gives Claude Code full access to your Obsidian vault. Search notes, create new ones, update existing ones, manage links and tags. 22,000 stars.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
npx skills add kepano/obsidian-skills
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your agent can now search your vault semantically, create notes from conversation context, and link related ideas together. It treats your vault as a knowledge base it can both read and write to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Rate It
&lt;/h2&gt;

&lt;p&gt;I keep research notes, meeting summaries, and tool evaluations in Obsidian. Before this skill, pulling that context into an agent conversation meant copy-pasting from one app to another. Now I just say "check my notes on browser automation" and the agent searches my vault, finds the relevant pages, and uses them as context.&lt;/p&gt;

&lt;p&gt;The write side is useful too. Agent finishes researching a topic? "Write a summary to my vault under technology/browser-tools." Done. No manual note-taking after the conversation ends.&lt;/p&gt;

&lt;p&gt;The fact that Obsidian's own CEO built this tells you where the ecosystem is heading. Your notes app is becoming agent-accessible infrastructure, not just a place you type into.&lt;/p&gt;

</description>
      <category>obsidianskills</category>
      <category>claudecodeobsidian</category>
      <category>stephangoobsidianskill</category>
      <category>agentvaultaccess</category>
    </item>
    <item>
      <title>My Security Agent Stack: How Zerocool Guards the Perimeter</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:46:15 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/my-security-agent-stack-how-zerocool-guards-the-perimeter-4n7l</link>
      <guid>https://dev.to/stevengonsalvez/my-security-agent-stack-how-zerocool-guards-the-perimeter-4n7l</guid>
      <description>&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;My &lt;a href="https://getwololo.dev" rel="noopener noreferrer"&gt;wololo&lt;/a&gt; setup has a security agent called Zerocool. It runs recon, scans for vulns, and reviews code for security issues before anything ships. Not a single tool. A stack of tools coordinated by one agent.&lt;/p&gt;

&lt;p&gt;Here's what's in the stack and why.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recon Layer
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/jasonxtn/Argus" rel="noopener noreferrer"&gt;Argus&lt;/a&gt;&lt;/strong&gt; (jasonxtn) for quick information gathering. Python-based, clean TUI, covers networks, web apps, and security environments. Good for the "what am I looking at" phase when Zerocool first touches a target. Not the heaviest toolkit but it's fast and the interface is proper nice for an agent to parse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/h4r5h1t/webcopilot" rel="noopener noreferrer"&gt;WebCopilot&lt;/a&gt;&lt;/strong&gt; for attack surface mapping. Enumerates subdomains (assetfinder, sublister, subfinder, amass, findomain, gobuster), filters live hosts via dnsx, crawls endpoints, then uses gf patterns to extract params vulnerable to XSS, LFI, SSRF, SQLi, open redirect, and RCE. Scans them with dalfox, kxss, sqlmap. It's the automated "find every door and window" tool. Point it at a domain and it maps the whole surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pentesting Layer
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/KeygraphHQ/shannon" rel="noopener noreferrer"&gt;Shannon&lt;/a&gt;&lt;/strong&gt; is the star of the stack. 37,000 stars. Autonomous white-box AI pentester by &lt;a href="https://keygraph.io" rel="noopener noreferrer"&gt;Keygraph&lt;/a&gt;. It reads your source code, identifies attack vectors, and then actually executes real exploits. Injection, auth bypass, SSRF, XSS. Reports only proven vulnerabilities with copy-paste PoCs. Not theoretical risk assessments. Proof.&lt;/p&gt;

&lt;p&gt;96% on the XBOW benchmark. Handles 2FA, TOTP, SSO, browser automation, parallel exploitation. The Lite version is AGPL-3.0. Pro adds SAST, SCA, secrets scanning, and CI/CD integration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/vxcontrol/pentagi" rel="noopener noreferrer"&gt;PentAGI&lt;/a&gt;&lt;/strong&gt; for fully autonomous scanning. 14,600 stars. Sandboxed Docker execution with 20+ security tools baked in (nmap, metasploit, sqlmap). Knowledge graph via Neo4j. Team of specialist AI agents for research, dev, and infra. Multi-LLM support. &lt;code&gt;docker compose up&lt;/code&gt; and it's running.&lt;/p&gt;

&lt;p&gt;I use Shannon for targeted white-box testing on code I control. PentAGI for broader autonomous scanning where I want the agent to find things I haven't thought of.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Security Layer
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/ghostsecurity/skills" rel="noopener noreferrer"&gt;Ghost Security Skills&lt;/a&gt;&lt;/strong&gt; for AI-native code analysis inside Claude Code. Four skills: &lt;code&gt;ghost:repo-context&lt;/code&gt; (understand the codebase), &lt;code&gt;ghost:scan-deps&lt;/code&gt; (dependency vulnerabilities), &lt;code&gt;ghost:scan-secrets&lt;/code&gt; (leaked credentials), &lt;code&gt;ghost:scan-code&lt;/code&gt; (code-level security issues). Install and your agent can security-review a PR before it merges.&lt;/p&gt;

&lt;p&gt;This is the layer that runs on every commit. Shannon and PentAGI run on schedules or before releases. Ghost Security runs continuously.&lt;/p&gt;
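
&lt;p&gt;One way to wire it up is to drop the skills where Claude Code discovers them. A sketch, assuming the standard &lt;code&gt;~/.claude/skills&lt;/code&gt; directory; the repo may ship its own installer, so check its README first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# clone the skills into Claude Code's personal skills directory (path is an assumption),
# then ask the agent to run ghost:scan-code or ghost:scan-secrets on a diff
git clone https://github.com/ghostsecurity/skills ~/.claude/skills/ghost
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;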

&lt;h2&gt;
  
  
  How Zerocool Uses Them
&lt;/h2&gt;

&lt;p&gt;The agent picks tools based on what phase of the security review it's in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Recon&lt;/strong&gt;: Argus + WebCopilot map the target&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static analysis&lt;/strong&gt;: Ghost Security scans the code and dependencies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic testing&lt;/strong&gt;: Shannon runs white-box exploits against the running app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autonomous sweep&lt;/strong&gt;: PentAGI does a broad scan for anything the targeted tools missed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Report&lt;/strong&gt;: Zerocool compiles findings, deduplicates, and files issues&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of this is ethical/authorised testing on my own infrastructure. If you're pointing any of these at targets you don't own, that's on you.&lt;/p&gt;

</description>
      <category>securityagentstack</category>
      <category>shannonpentesting</category>
      <category>pentagiautonomousscanning</category>
      <category>argusrecon</category>
    </item>
    <item>
      <title>Scrapling: Scrape Anything Without Getting Blocked</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:46:09 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/scrapling-scrape-anything-without-getting-blocked-h6f</link>
      <guid>https://dev.to/stevengonsalvez/scrapling-scrape-anything-without-getting-blocked-h6f</guid>
      <description>&lt;h2&gt;
  
  
  Why Not Just Use Requests
&lt;/h2&gt;

&lt;p&gt;Because the site blocks you. Or serves you a Cloudflare challenge page. Or fingerprints your TLS stack and returns garbage. Or redesigns their HTML and all your selectors break overnight.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/D4Vinci/Scrapling" rel="noopener noreferrer"&gt;Scrapling&lt;/a&gt; handles all of this. 35,000 stars. Three fetcher tiers depending on how hostile the target is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Fetcher&lt;/code&gt;&lt;/strong&gt; (HTTP level): fastest, uses &lt;code&gt;httpx&lt;/code&gt; with browser-grade TLS fingerprinting. Good for APIs and sites without bot detection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;StealthyFetcher&lt;/code&gt;&lt;/strong&gt; (real browser): spins up a Playwright browser with anti-detection patches. Handles JavaScript rendering, Cloudflare Turnstile, and most bot checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;PlayWrightFetcher&lt;/code&gt;&lt;/strong&gt; (full control): same browser engine but gives you direct Playwright API access for complex flows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pick the lightest tier that works. Escalate only when you need to. Most sites fold to &lt;code&gt;Fetcher&lt;/code&gt; with the right TLS config.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Selector Trick
&lt;/h2&gt;

&lt;p&gt;The bit that sold me: adaptive selectors. You write a selector once and Scrapling generates multiple fallback strategies (text matching, attribute similarity, structural position). When the site changes their class names or restructures the DOM, your scraper keeps working because it falls back to a selector that still matches.&lt;/p&gt;

&lt;p&gt;That's the difference between a scraper you maintain weekly and one you maintain monthly.&lt;/p&gt;

&lt;h2&gt;
  
  
  For Agents
&lt;/h2&gt;

&lt;p&gt;I use it as a fallback in my &lt;a href="https://dev.to/tools-tips/agents-in-a-box"&gt;/research skill&lt;/a&gt; when &lt;a href="https://dev.to/tools-tips/markdown-new-jina-reader"&gt;markdown.new&lt;/a&gt; or &lt;a href="https://dev.to/tools-tips/markdown-new-jina-reader"&gt;Jina Reader&lt;/a&gt; can't get through. Cloudflare blocks them, anti-bot walls go up, paywall gates slam shut. Scrapling's &lt;code&gt;StealthyFetcher&lt;/code&gt; punches through most of it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;scrapling
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;scrapling&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StealthyFetcher&lt;/span&gt;
&lt;span class="n"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StealthyFetcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://blocked-site.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;css&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;article&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a deeper comparison with every other scraping and browser tool, see the &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part1-low-level"&gt;Browser Tools series&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>scrapling</category>
      <category>pythonwebscraping</category>
      <category>cloudflarebypassscraper</category>
      <category>adaptiveselectors</category>
    </item>
    <item>
      <title>Clawdbot: The Chrome Extension That Lets Agents Drive Your Browser</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:46:02 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/clawdbot-the-chrome-extension-that-lets-agents-drive-your-browser-c2f</link>
      <guid>https://dev.to/stevengonsalvez/clawdbot-the-chrome-extension-that-lets-agents-drive-your-browser-c2f</guid>
      <description>&lt;h2&gt;
  
  
  The Relay Pattern
&lt;/h2&gt;

&lt;p&gt;Here's a pattern that makes a lot of sense once you see it. Instead of your agent controlling a headless browser it spun up on its own (Puppeteer style), what if the agent could control your actual browser? The one you're already using. With your cookies, your sessions, your extensions, your everything.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;Clawdbot&lt;/a&gt;, created by Peter Steinberger, does exactly this. It's a Chrome extension paired with a gateway server. The extension sits in your browser. The gateway sits between your agent and the extension. Your agent sends commands to the gateway, the gateway relays them to the extension, and the extension executes them in your real browser session.&lt;/p&gt;

&lt;p&gt;The "relay" bit is the clever part. It can route control locally (agent on your machine, browser on your machine) or remotely (agent running somewhere else, still controlling your browser). Same protocol either way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;Most browser automation tools create a parallel universe. They open a fresh browser with no state, no logins, no context. Then your agent has to log in, navigate to the right page, and reconstruct all the context that already exists in the browser tab you've got open.&lt;/p&gt;

&lt;p&gt;Clawdbot skips all that. Your agent operates inside your existing session. It sees what you see. It can interact with pages you're already authenticated on. No credential passing, no cookie juggling, no "wait, why is it asking me to log in again?"&lt;/p&gt;

&lt;h2&gt;
  
  
  The Name Drama
&lt;/h2&gt;

&lt;p&gt;Right, so about that name. It launched as Clawdbot. Then Anthropic's lawyers got involved because, well, "Clawd" is a bit on the nose. It got renamed to Moltbot, then settled on OpenClaw. By March 2026 it had racked up 247k stars on GitHub, making it one of the fastest growing repos in the space. Wild ride for a Chrome extension.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Install the Chrome extension from the repo, spin up the gateway server, and point your agent at it. The &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw repo&lt;/a&gt; has the setup instructions. It's straightforward if you've ever installed a Chrome extension from source before.&lt;/p&gt;
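
&lt;p&gt;Roughly, that boils down to the sketch below. The only real command is the clone; the rest is left as comments because the gateway start command changes between releases, so follow the README for that bit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# grab the source
git clone https://github.com/openclaw/openclaw
# load the extension folder via chrome://extensions -&gt; "Load unpacked"
# start the gateway server per the repo README, then point your agent at its URL
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;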

</description>
      <category>clawdbot</category>
      <category>openclaw</category>
      <category>chromeextensionaiagent</category>
      <category>browserrelayai</category>
    </item>
    <item>
      <title>Google Stitch: AI-Native UI Design That Actually Understands Your Design System</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:45:56 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/google-stitch-ai-native-ui-design-that-actually-understands-your-design-system-58f2</link>
      <guid>https://dev.to/stevengonsalvez/google-stitch-ai-native-ui-design-that-actually-understands-your-design-system-58f2</guid>
      <description>&lt;h2&gt;
  
  
  The Problem With AI-Generated UI
&lt;/h2&gt;

&lt;p&gt;Every AI coding tool can spit out a UI. You ask for a dashboard, you get a dashboard. You ask for a landing page, you get a landing page. The problem is it looks like every other AI-generated dashboard and landing page. Generic colours, default spacing, system fonts. It works, technically, but it looks like nobody who cares about design touched it. And the moment you try to get two screens to look like they belong to the same product, you're back to manually tweaking CSS until something vaguely coheres.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stitch.withgoogle.com" rel="noopener noreferrer"&gt;Google Stitch&lt;/a&gt; takes a different approach. It's an AI-native design tool from Google Labs, built on Gemini, that generates high-fidelity UI from natural language prompts and exports real HTML/CSS. It launched at Google I/O in May 2025 as a fairly basic experiment. The March 2026 v2 update turned it into something properly useful: infinite canvas, multi-screen generation (up to five connected screens at once), interactive prototyping, and a voice canvas for talking through your design ideas. Figma's stock dropped 10% the day the update shipped, which tells you roughly how seriously the market took it.&lt;/p&gt;

&lt;h2&gt;
  
  
  DESIGN.md: The Bit That Actually Matters
&lt;/h2&gt;

&lt;p&gt;The flashy generation stuff is nice, but the feature that got me paying attention is DESIGN.md. It's a plain markdown file that encodes your entire design system. Colour palette with semantic tokens (primary, surface, accent), typography (font families, sizes, weights, line heights), spacing scale, grid conventions, border radius, shadows. Everything your design system defines, written in a format that both humans and language models can parse without breaking a sweat.&lt;/p&gt;

&lt;p&gt;When you prompt Stitch, it passes your DESIGN.md as context to Gemini. The model treats those values as hard constraints, not suggestions. Every generated UI follows your system. Brand colours, spacing scale, typography, all consistent across screens. It's the difference between "generate me a settings page" and "generate me a settings page that looks like it belongs in our product."&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Developers Should Care
&lt;/h2&gt;

&lt;p&gt;Here's why DESIGN.md is more interesting than yet another design tool. It's portable. Completely tool-agnostic. Drop it in your repo root and any AI coding agent that reads context files will pick it up. Claude Code reads it. Cursor reads it. Copilot reads it. Your agent generates UI that respects your design system without you having to re-explain your brand colours in every prompt.&lt;/p&gt;

&lt;p&gt;You can extract a DESIGN.md from any existing URL. Stitch scrapes the design tokens from a live site and produces the file for you. Got a client's marketing site and need to build an internal tool that matches? Point Stitch at their URL, grab the DESIGN.md, and your coding agent generates components that feel like they belong.&lt;/p&gt;

&lt;p&gt;Think of it as the README.md of design systems. Designers define the system, developers commit the file, agents consume it. No Figma plugin, no design token pipeline to maintain, no arguing about whether that blue is &lt;code&gt;#2563EB&lt;/code&gt; or &lt;code&gt;#3B82F6&lt;/code&gt;. It's in the file. Full spec is at &lt;a href="https://stitch.withgoogle.com/docs/design-md/overview/" rel="noopener noreferrer"&gt;stitch.withgoogle.com/docs/design-md/overview&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here's what a minimal one looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Colors&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Primary: #1A73E8
&lt;span class="p"&gt;-&lt;/span&gt; Primary Dark: #1557B0
&lt;span class="p"&gt;-&lt;/span&gt; Background: #FFFFFF
&lt;span class="p"&gt;-&lt;/span&gt; Surface: #F8F9FA
&lt;span class="p"&gt;-&lt;/span&gt; Error: #EA4335
&lt;span class="p"&gt;-&lt;/span&gt; Text Primary: #202124
&lt;span class="p"&gt;-&lt;/span&gt; Text Secondary: #5F6368

&lt;span class="gu"&gt;## Typography&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Font Family: Inter, sans-serif
&lt;span class="p"&gt;-&lt;/span&gt; Heading 1: 32px, 700 weight
&lt;span class="p"&gt;-&lt;/span&gt; Body: 16px, 400 weight
&lt;span class="p"&gt;-&lt;/span&gt; Caption: 12px, 400 weight

&lt;span class="gu"&gt;## Spacing&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Base unit: 8px
&lt;span class="p"&gt;-&lt;/span&gt; Values: 4, 8, 16, 24, 32, 48px

&lt;span class="gu"&gt;## Components&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Button border radius: 8px
&lt;span class="p"&gt;-&lt;/span&gt; Card shadow: 0 1px 3px rgba(0,0,0,0.12)
&lt;span class="p"&gt;-&lt;/span&gt; Input border: 1px solid #DADCE0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Your agent reads this and every button, card, and heading it generates uses those exact values.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Free at &lt;a href="https://stitch.withgoogle.com" rel="noopener noreferrer"&gt;stitch.withgoogle.com&lt;/a&gt; with a Google account. The workflow is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Generate a DESIGN.md&lt;/strong&gt; from an existing site URL, or create one from scratch in Stitch's editor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drop it in your repo root&lt;/strong&gt; (or &lt;code&gt;.stitch/DESIGN.md&lt;/code&gt; if you prefer keeping things tidy).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt your AI coding agent&lt;/strong&gt; to build UI. It picks up the design system automatically.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For quick iteration, Stitch's canvas lets you generate multiple connected screens and prototype interactions between them before you write a line of code. When you're happy, export the HTML/CSS and hand it off to your component framework of choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  When It Falls Short
&lt;/h2&gt;

&lt;p&gt;The generated code is clean HTML/CSS but not component-ised. You'll want to refactor the output into React, Vue, or whatever you're running. Complex interactions beyond simple navigation need building by hand after export. And the five-screen limit means larger apps get done in batches.&lt;/p&gt;

&lt;p&gt;The DESIGN.md format isn't standardised either. It's Google's convention and other tools happen to work with it because markdown is universal. No formal spec. If Google bins Stitch tomorrow, the file is still useful, but the tooling around it disappears.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Question
&lt;/h2&gt;

&lt;p&gt;Are we watching the design-to-development workflow get properly rewired here?&lt;/p&gt;

&lt;p&gt;For years, the pipeline has been: designer creates in &lt;a href="https://www.figma.com" rel="noopener noreferrer"&gt;Figma&lt;/a&gt;, developer squints at the spec, developer approximates it in code, designer files a ticket saying the padding is wrong, developer adjusts by 4 pixels, repeat until someone gives up or the sprint ends. The tooling between design and code has always been niche, expensive, and full of cognitive load. Figma-to-code plugins. Style dictionaries. Design token pipelines. Handoff ceremonies. None of it flows naturally.&lt;/p&gt;

&lt;p&gt;DESIGN.md is interesting because it turns design into an intermediate DSL that makes the whole thing a commodity for developers. A markdown file. In your repo. That your agent reads. No Figma plugin, no handoff, no "inspect mode." The design system is code-adjacent from the start.&lt;/p&gt;

&lt;p&gt;Now, I've been told many times by design organisations that those extra 10 pixels on a button contribute to 1% of revenue for large companies. Maybe they do. At scale, design systems matter enormously. The pixel-level precision, the A/B testing of border radius changes, the obsessive consistency across 200 screens. That's real work with real business impact, and I'm not dismissing it.&lt;/p&gt;

&lt;p&gt;But for the 95% of teams that aren't operating at that scale? The ones shipping MVPs, building internal tools, prototyping features? The Figma-to-dev pipeline is overhead they can't afford. DESIGN.md gives them "good enough consistency" at near-zero cost. Write the file once, every agent respects it, move on.&lt;/p&gt;

&lt;p&gt;The question is whether this eventually scales up to replace the enterprise design workflow too, or whether there's a permanent split: DESIGN.md for speed, Figma for precision. I reckon both survive, but the percentage of work that needs Figma-level precision shrinks every time the AI-generated output gets a bit better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; DESIGN.md might be the most consequential thing about &lt;a href="https://stitch.withgoogle.com" rel="noopener noreferrer"&gt;Stitch&lt;/a&gt;. Not because the design tool is revolutionary, but because a plain markdown file as the bridge between design and code is such a stupidly simple idea that it makes you wonder why we spent a decade building elaborate pipelines instead.&lt;/p&gt;

</description>
      <category>googlestitch</category>
      <category>designmd</category>
      <category>aiuigeneration</category>
      <category>googlelabsstitch</category>
    </item>
    <item>
      <title>UI/UX Pro Max: Stop Your AI Making Everything Look the Same</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:45:49 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/uiux-pro-max-stop-your-ai-making-everything-look-the-same-a3o</link>
      <guid>https://dev.to/stevengonsalvez/uiux-pro-max-stop-your-ai-making-everything-look-the-same-a3o</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Ask any AI coding tool to "build me a dashboard" and you get the same thing every time. Inter font. Purple-to-blue gradient. Cards with rounded corners. Drop shadows everywhere. It looks like every other AI-generated dashboard because the model defaults to what it's seen most often in training data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nextlevelbuilder/ui-ux-pro-max-skill" rel="noopener noreferrer"&gt;UI/UX Pro Max&lt;/a&gt; is a skill that gives your agent actual design taste. 60,000+ stars. 50+ distinct UI styles, 97 colour palettes, 57 font pairings, 25 chart types, and design system generation. Install it and your agent stops defaulting to the same generic SaaS template.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx uipro-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It installs as a Claude Code skill (works with Cursor and others too). When you ask for UI, the agent picks from a curated library of styles and applies consistent design tokens instead of improvising. The difference is immediately visible. Layouts that look like a human designer touched them, not a model trained on Tailwind UI screenshots.&lt;/p&gt;

&lt;p&gt;Pairs well with &lt;a href="https://dev.to/tools-tips/impeccable"&gt;Impeccable&lt;/a&gt; for the anti-pattern side (what NOT to do) and &lt;a href="https://dev.to/tools-tips/google-stitch"&gt;Google Stitch&lt;/a&gt; for the DESIGN.md system. Stack all three and your agent-generated UIs stop looking like AI slop.&lt;/p&gt;

</description>
      <category>uiuxpromax</category>
      <category>aidesignskill</category>
      <category>agentuistyles</category>
      <category>colourpaletteai</category>
    </item>
    <item>
      <title>PinchTab: 12MB Binary That Replaces Playwright for AI Agents</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:45:42 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/pinchtab-12mb-binary-that-replaces-playwright-for-ai-agents-2cob</link>
      <guid>https://dev.to/stevengonsalvez/pinchtab-12mb-binary-that-replaces-playwright-for-ai-agents-2cob</guid>
      <description>&lt;p&gt;Playwright is brilliant for CI testing. But when you're giving an AI agent browser access, it's like handing someone a fire hose when they asked for a glass of water. The agent doesn't need the full DOM. It needs to know what's on screen and how to click things.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pinchtab/pinchtab" rel="noopener noreferrer"&gt;PinchTab&lt;/a&gt; gets this right. It's a 12MB Go binary, zero dependencies, that starts an HTTP server and gives your agent REST endpoints to control Chrome. The trick is it serves the Accessibility Tree instead of raw HTML. That's roughly 800 tokens per page instead of the 4,500 to 12,000 you'd get from Playwright dumping the full DOM. For agents burning through context windows, that's a proper big deal.&lt;/p&gt;

&lt;p&gt;Elements get stable refs like &lt;code&gt;e5&lt;/code&gt; instead of fragile XPath selectors or pixel coordinates. Your agent says "click e5" and it clicks. Deterministic. No guessing, no "click at coordinates 340,220 and hope the layout hasn't shifted."&lt;/p&gt;
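
&lt;p&gt;I haven't memorised the exact routes, so the paths and port below are illustrative of the pattern rather than PinchTab's documented API (the repo lists the real endpoints), but this is the shape of the interaction:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# endpoint paths and port are placeholders, not PinchTab's documented routes
# 1. fetch the accessibility-tree snapshot (compact, with stable refs like e5)
curl -s localhost:9222/snapshot
# 2. act on a ref instead of an XPath or a pixel coordinate
curl -s -X POST localhost:9222/click -d '{"ref": "e5"}'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;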

&lt;p&gt;Stealth mode is baked in. It masks &lt;code&gt;navigator.webdriver&lt;/code&gt; and spoofs Canvas/WebGL fingerprints, which matters if you're automating sites that actively block headless browsers.&lt;/p&gt;

&lt;p&gt;Any agent that can make HTTP calls can use it. Claude Code, Cursor, whatever. No MCP server needed, just plain REST. There's multi-instance orchestration with a dashboard if you're running several browsers at once.&lt;/p&gt;

&lt;p&gt;One thing to flag: there was still an &lt;a href="https://github.com/advisories/GHSA-rw8p-c6hf-q3pg" rel="noopener noreferrer"&gt;open SSRF vulnerability&lt;/a&gt; (CVE-2026-30834) the last time I checked. Worth looking into before you deploy it anywhere public-facing. For local agent use it's fine, but I wouldn't put it on a server without patching that first.&lt;/p&gt;

&lt;p&gt;Runs on macOS, Linux, and Docker. MIT licensed. I reckon this or something like it is where agent browser tooling ends up, because feeding 12,000 tokens of DOM soup to an LLM for every page visit was always a bit mental.&lt;/p&gt;

</description>
      <category>pinchtab</category>
      <category>accessibilitytreebrowseragent</category>
      <category>gobrowserautomation</category>
      <category>restapibrowsercontrol</category>
    </item>
    <item>
      <title>Deepgram: $200 Free STT That Makes Voice Coding Actually Work</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:45:36 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/deepgram-200-free-stt-that-makes-voice-coding-actually-work-16g4</link>
      <guid>https://dev.to/stevengonsalvez/deepgram-200-free-stt-that-makes-voice-coding-actually-work-16g4</guid>
      <description>&lt;h2&gt;
  
  
  Why This Matters for Coding Agents
&lt;/h2&gt;

&lt;p&gt;Voice input for coding used to be a gimmick. Whisper was slow. Commercial options cost a fortune. The latency between speaking and text appearing was long enough to break your train of thought.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://deepgram.com" rel="noopener noreferrer"&gt;Deepgram&lt;/a&gt; changed the maths. Their Nova-3 model does real-time streaming transcription fast enough that the text appears as you speak, not after. And the free tier gives you $200 in credit, which is roughly 12,000 minutes of transcription. That's a lot of talking before you pay a penny.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vibe Coding Angle
&lt;/h2&gt;

&lt;p&gt;Wire Deepgram into any STT tool (&lt;a href="https://dev.to/tools-tips/voice-coding"&gt;justspeaktoit&lt;/a&gt;, a custom script, whatever) and suddenly voice is a real input method for your coding agents. "Refactor the auth middleware to use the new token format" spoken out loud, transcribed in under 200ms, piped into Claude Code. No typing. No context switch.&lt;/p&gt;

&lt;p&gt;The accuracy on technical speech is surprisingly good. It handles "refactor," "middleware," "useState," "async await" without flinching. Not perfect on obscure library names, but proper solid on the vocabulary you actually use while coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Sign up at deepgram.com, grab your API key&lt;/span&gt;
&lt;span class="c"&gt;# $200 free credit, no card required&lt;/span&gt;

&lt;span class="c"&gt;# Quick test with curl&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s2"&gt;"https://api.deepgram.com/v1/listen"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Token YOUR_KEY"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: audio/wav"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data-binary&lt;/span&gt; @audio.wav
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or just install &lt;a href="https://github.com/crmitchelmore/justspeaktoit" rel="noopener noreferrer"&gt;justspeaktoit&lt;/a&gt; which wraps Deepgram with a macOS menu bar app. Press hotkey, speak, text appears. Sorted.&lt;/p&gt;

</description>
      <category>deepgram</category>
      <category>nova3stt</category>
      <category>speechtotextcoding</category>
      <category>voicecodingapi</category>
    </item>
    <item>
      <title>Entire CLI: Git Blame for the AI Era</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:45:29 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/entire-cli-git-blame-for-the-ai-era-52gj</link>
      <guid>https://dev.to/stevengonsalvez/entire-cli-git-blame-for-the-ai-era-52gj</guid>
      <description>&lt;h2&gt;
  
  
  Who actually wrote this code?
&lt;/h2&gt;

&lt;p&gt;Git tells you what changed. &lt;a href="https://entire.io" rel="noopener noreferrer"&gt;Entire&lt;/a&gt; tells you why, and who. Or &lt;em&gt;what&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Here's the problem. You're running Claude Code, Codex, Gemini CLI, whatever. The agent writes a hundred lines, you tweak ten, commit, push. Six months later someone's debugging that function and git blame says your name. But you didn't write most of it. You don't even remember what prompt produced it. The reasoning, the agent's decisions, the tool calls, all gone.&lt;/p&gt;

&lt;p&gt;Entire hooks into your git push and captures the full AI session. Prompts, responses, tool calls, files touched, token usage. Everything gets stored on a hidden branch (&lt;code&gt;entire/checkpoints/v1&lt;/code&gt;) so your main history stays clean. Each commit gets a 12-character Checkpoint ID linking back to the session on their dashboard.&lt;/p&gt;
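
&lt;p&gt;Because the checkpoint branch is ordinary git, you can poke at it with stock commands. A sketch, assuming the branch exists locally (fetch it from your remote if it doesn't); what's stored on it is Entire's own layout:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# list the captured checkpoints without touching your main history
git log --oneline entire/checkpoints/v1
# inspect what the most recent checkpoint recorded
git show --stat entire/checkpoints/v1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;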

&lt;p&gt;The line-level attribution is the bit that matters. Not just "AI helped with this file" but actual percentage breakdowns of which lines were agent-written versus human-written. For audits, for onboarding, for debugging at 2am when you need to understand intent behind code you didn't write. Proper useful.&lt;/p&gt;

&lt;p&gt;Thomas Dohmke (ex-GitHub CEO) started this with a $60M seed. Agent-agnostic by design, works with Claude Code, Codex, Gemini CLI, Cursor, the lot. I reckon every team shipping AI-assisted code daily is going to need something like this eventually.&lt;/p&gt;

</description>
      <category>entirecli</category>
      <category>aicodeattribution</category>
      <category>gitblameai</category>
      <category>linelevelaitracking</category>
    </item>
  </channel>
</rss>
