<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Steven Gonsalvez</title>
    <description>The latest articles on DEV Community by Steven Gonsalvez (@stevengonsalvez).</description>
    <link>https://dev.to/stevengonsalvez</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F613704%2F6fa5896f-e719-469e-812f-d2f8f9971b0e.png</url>
      <title>DEV Community: Steven Gonsalvez</title>
      <link>https://dev.to/stevengonsalvez</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/stevengonsalvez"/>
    <language>en</language>
    <item>
      <title>"Use Claude Code for FREE" is a Trap</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Tue, 28 Apr 2026 08:28:09 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/use-claude-code-for-free-is-a-trap-211c</link>
      <guid>https://dev.to/stevengonsalvez/use-claude-code-for-free-is-a-trap-211c</guid>
      <description>&lt;h1&gt;
  
  
  "Use Claude Code for FREE" is a Trap
&lt;/h1&gt;

&lt;p&gt;Open Twitter right now. Or Reddit. Or YouTube. You will be drowning in variations of the same breathless claim.&lt;/p&gt;

&lt;p&gt;"STOP Paying $200/m For Claude Code.. Here's How To Use It For FREE!" on YouTube. "I Didn't Pay a Single Dollar to Use Claude Code" on Medium. "Crazy that you can sign up for nvidia and get almost unlimited free access" on Threads. Dev Genius telling you "You Don't Need a Paid Plan to Experiment With Claude Code". PopularAITools calling it "100% free."&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(These are all real titles. I've intentionally not linked them. You can find them though.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And look, I get the appeal. $200 a month for Claude Max is not pocket change. But the advice being thrown around right now is misleading, and I reckon a good chunk of people who tried AI coding this year and walked away thinking "meh, it's overhyped" did so because they followed exactly this kind of advice. They compromised. And when you compromise on the wrong thing, the whole experience falls apart.&lt;/p&gt;

&lt;p&gt;So let me walk you through why.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cheap-Intelligent-Fast Trilemma
&lt;/h2&gt;

&lt;p&gt;This is a mental model I made up. No published paper, not peer-reviewed. But hear me out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cheap. Intelligent. Fast. Pick two.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You only ever get two. The third gives way.&lt;/p&gt;

&lt;p&gt;The "free Claude Code" crowd is claiming they've broken this trilemma. They haven't. They've just picked cheap and are pretending the other two dimensions don't matter.&lt;/p&gt;




&lt;p&gt;Free APIs throttle you. Fast inference on cheap hardware means smaller, dumber models. Frontier intelligence at speed costs real money. Pick two.&lt;/p&gt;




&lt;h2&gt;
  
  
  Problem 1: The Model Gap is Real
&lt;/h2&gt;

&lt;p&gt;Right, let's start with the elephant. The models you get for free are not the same as the models you get when you pay.&lt;/p&gt;

&lt;p&gt;These are the models that come up most often for coding agents, agentic applications, and personal agents (the sort of thing you'd run with &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box" rel="noopener noreferrer"&gt;claws, hermes&lt;/a&gt;, or &lt;a href="https://getwololo.dev/docs" rel="noopener noreferrer"&gt;wololo&lt;/a&gt;). Frontier only. Not Sonnet, not Haiku, not the smaller variants. The absolute top of each provider's lineup:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Intelligence&lt;/th&gt;
&lt;th&gt;Coding&lt;/th&gt;
&lt;th&gt;Agentic&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GPT-5.5&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;td&gt;59&lt;/td&gt;
&lt;td&gt;74&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Opus 4.7&lt;/td&gt;
&lt;td&gt;57&lt;/td&gt;
&lt;td&gt;53&lt;/td&gt;
&lt;td&gt;71&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MiMo-V2.5-Pro&lt;/td&gt;
&lt;td&gt;54&lt;/td&gt;
&lt;td&gt;46&lt;/td&gt;
&lt;td&gt;67&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DeepSeek V4 Pro&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;td&gt;67&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GLM-5.1&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;43&lt;/td&gt;
&lt;td&gt;67&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kimi K2.6&lt;/td&gt;
&lt;td&gt;54&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;td&gt;66&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Qwen 3.6 Max&lt;/td&gt;
&lt;td&gt;52&lt;/td&gt;
&lt;td&gt;45&lt;/td&gt;
&lt;td&gt;65&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Muse Spark&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;td&gt;62&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MiniMax M2.7&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;42&lt;/td&gt;
&lt;td&gt;61&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now, I'm skipping models here. Quite a few, actually. DeepSeek's latest is storming into the top 5 on agentic capabilities, but I haven't used it enough to know whether the benchmarks hold up in the real world. Muse Spark, Qwen, and even Groq's offerings are pushing into the top 10 on both coding and agentic. The field is absolutely mental right now, with Chinese labs shipping new models like they're on a weekly sprint cadence.&lt;/p&gt;

&lt;p&gt;Oh, and Gemini? We don't talk about Gemini. 👀&lt;/p&gt;

&lt;p&gt;Here's the thing about benchmarks though. They show one view, and in real-world use the numbers don't always hold up. I've seen models score brilliantly on paper and then fall over the moment you ask them to do something slightly off the beaten path. Some of the agentic capabilities these benchmarks claim? I've not witnessed them in practice. Take those numbers as a starting point, not gospel.&lt;/p&gt;

&lt;p&gt;That said, look at GPT-5.5 and Opus 4.7. Intelligence: 60 vs 57. Coding: 59 vs 53. Agentic: 74 vs 71. Those two are in a different postcode. And then take GLM-5.1 (43 coding, 67 agentic) or MiniMax M2.7 (42 coding, 61 agentic). The gap between 42 and 59 on coding looks like seventeen points on a chart. In practice it's a canyon.&lt;/p&gt;

&lt;p&gt;I haven't used MiMo or DeepSeek enough to comment on them properly. But I've spent real time with MiniMax M2.7, GLM-5.1, and Kimi K2.5 (not tried K2.6 yet, hoping they've addressed what I'm about to describe). So here's what the benchmarks don't tell you.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Portfolio Site Test
&lt;/h3&gt;

&lt;p&gt;Ask any of these models to build you a portfolio site. Give them a decent set of &lt;a href="https://dev.to/tools-tips/ui-ux-pro-max"&gt;UI/UX skills&lt;/a&gt;, some design references, and let them crack on. MiniMax M2.7 will do a genuinely decent job. GLM-5.1 will do a genuinely decent job. Even Kimi K2.5 will get you something respectable. The difference at this complexity level is maybe 10%. Barely noticeable.&lt;/p&gt;

&lt;p&gt;This is where the "free is just as good" narrative comes from. And at this level, it is broadly true.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Complexity Coding Test
&lt;/h3&gt;

&lt;p&gt;Now ask each of them to write code that streams tmux output over WebSockets, composes it into an HTTP stream, and preserves the ANSI structure for UI rendering. Network, core systems, frontend rendering, all wired together.&lt;/p&gt;
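
&lt;p&gt;For scale, here's just the capture side of that task as a rough shell sketch (assuming &lt;code&gt;websocat&lt;/code&gt; for the WebSocket leg; the endpoint is illustrative). The HTTP composition and the ANSI-aware rendering on top of this are where the models actually fall over:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Capture side only: mirror a tmux pane's raw output, ANSI escapes intact,
# into a WebSocket. pipe-pane hands the pane's output to the command's stdin.
tmux pipe-pane -o -t mysession 'websocat --binary ws://localhost:8080/ingest'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;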

&lt;p&gt;Only GPT-5.5 and Opus 4.7 pull it off. Everything else fails at different levels, even with nudges, assistance, and hand-holding.&lt;/p&gt;

&lt;p&gt;This is the 10% that the portfolio test doesn't show you. And in professional work, you hit this sort of problem every single day.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Complexity Agent Test
&lt;/h3&gt;

&lt;p&gt;Here's one that really separates the wheat from the chaff. Give the model a developer assistant task: "Find all the tmux sessions on this machine that are older than 24 hours, check if they have any uncommitted git changes, if they do then commit and push to a branch named after the session, if they don't then kill the session. Report what you did."&lt;/p&gt;

&lt;p&gt;That's file system access, tmux commands, git operations, conditional logic, error handling when a session is locked or a repo has conflicts. Multiple tool calls chained together, with branching logic at each step.&lt;/p&gt;
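
&lt;p&gt;For a sense of what the model has to reason through, here's a rough non-agentic shell equivalent of the happy path. A hand-rolled sketch, not any model's output; it assumes each session's active pane sits in the repo, and it skips the error handling the prompt demands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Stale sessions get committed-and-pushed or killed. A real agent also
# has to cope with locked sessions, conflicts, and reporting back.
now=$(date +%s)
tmux list-sessions -F '#{session_name} #{session_created}' | while read -r name created; do
  [ $((now - created)) -lt 86400 ] &amp;&amp; continue          # younger than 24h
  dir=$(tmux display-message -p -t "$name" '#{pane_current_path}')
  if [ -n "$(git -C "$dir" status --porcelain 2&gt;/dev/null)" ]; then
    git -C "$dir" checkout -b "$name" &amp;&amp; git -C "$dir" add -A &amp;&amp;
      git -C "$dir" commit -m "wip: $name" &amp;&amp; git -C "$dir" push -u origin "$name"
  else
    tmux kill-session -t "$name"
  fi
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;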

&lt;p&gt;Opus 4.6/4.7 cracks on with it like you told a senior dev to sort it. GPT-5.5 gets there too, a proper improvement over 5.4, which was honestly shambolic at agentic work (great at code though, always was). MiniMax M2.7 is half decent but tiring. In Discord, for example, it still cannot tag a user correctly, even with all the instructions and plugin nudges in the world. The rest? Kimi K2.5 just plain cannot do tool calls properly. Like, &lt;em&gt;really&lt;/em&gt; cannot. GLM-5.1 is just poor, got to say. And expensive compared to other Chinese models for what you get.&lt;/p&gt;

&lt;h3&gt;
  
  
  What It Actually Feels Like in Practice
&lt;/h3&gt;

&lt;p&gt;The real-world stuff is this: Opus (4.6/4.7) is &lt;em&gt;very&lt;/em&gt; agentic. Probably way more than even the benchmark gap suggests relative to the others. Trial and error, autonomous (probably dangerously autonomous at that), to the point. If it gets stuck, it tries several options. Uses utilities, CLIs, curl, writes its own &lt;code&gt;.mjs&lt;/code&gt; scripts to execute. Once, when something didn't work, it tried to rework a closed-source binary to fix it. It browses GitHub for issues on open-source tools it's using, finds alternates, goes way beyond what you asked. It just &lt;em&gt;keeps going&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I run these through &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box" rel="noopener noreferrer"&gt;NanoClaw&lt;/a&gt; with the same instruction set, same ask, same tools. Opus 4.6 with NanoClaw still beats the lot, ahead of hermes with GPT-5.5 and then OpenClaw with GPT-5.5, in that order. Opus 4.7 doesn't seem fully consistent yet, which suggests different versions hitting different endpoints. I've seen the same behaviour on every Anthropic launch before it steadies itself. Give it a few weeks.&lt;/p&gt;

&lt;p&gt;GPT-5.5 is a genuine upgrade though. Got to give credit. 5.4 was shambolic at agentic work, and 5.5 is a proper step change. Then probably MiniMax M2.7 next; it's half decent but exhausting. The rest are just pretty awful for serious agentic work. Kimi K2.5 fails at tool calling (not tried K2.6, hoping they fixed it). GLM-5.1 is just poor.&lt;/p&gt;

&lt;p&gt;So which free model do you pick? Depends entirely on what you need. And most people following the "free Claude Code" tutorials don't know what they need yet. That's the problem.&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Geek Corner: Agentic vs Code Generation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code generation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Linear. Fits the LLM's core strength, which is completion. Given a prompt, produce clean, correct code. This is where benchmarks like SWE-bench live. It measures "can the model write the right code?"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Agentic capability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Non-linear. Tool calling, orchestration of multiple tool calls, utility coding against an outcome rather than a specification. The model may not write the cleanest code, but it &lt;em&gt;finds a way to reach the goal&lt;/em&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;The critical difference&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Model fails a tool call on the 2nd or 3rd attempt. A pure code-gen model stops and reports the error. An agentic model tries an alternate approach. Uses a different tool. Falls back to &lt;code&gt;curl&lt;/code&gt; instead of the CLI. Tries &lt;code&gt;wget&lt;/code&gt;. Writes a Python script that does it manually. Trial and error until the goal is reached.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Why this matters for free tiers&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Most free models are decent at code generation. Far fewer are properly agentic. Opus 4.7 and GPT-5.5 are in a different league for agentic work. When your free model hits a wall on step 3 of a 7-step task and just... stops, that's the code-gen ceiling showing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MiniMax M2.7 as the outlier&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;230B MoE with only ~10B active params. 97% complex skill adherence. Self-evolving through 100+ autonomous improvement loops. It's the free-tier model that comes closest to genuine agentic behaviour, but it still can't match the frontier paid models on the hardest tasks.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Problem 2: It's Not Free, You Just Get Throttled
&lt;/h2&gt;

&lt;p&gt;Right, let's talk about the bit that makes me properly cross. The rate limiting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://build.nvidia.com/" rel="noopener noreferrer"&gt;Nvidia NIM&lt;/a&gt; gives you 40 requests per minute. That's the hard cap. Not a soft limit. Not a guideline. Forty.&lt;/p&gt;

&lt;p&gt;Know what happens when you run &lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, or any coding agent that's doing real work? It makes tool calls. Lots of them. File reads, file writes, bash commands, grep searches, more file reads, compilation checks. A moderately complex task can burn through 40 API calls in about 15 seconds.&lt;/p&gt;

&lt;p&gt;And then you get a 429. The standard HTTP "too many requests" error. Your agent stalls. It backs off. It retries. It backs off again. And suddenly your "free" coding session that was supposed to feel like having an engineering partner feels like trying to have a conversation with someone who falls asleep mid-sentence every 90 seconds.&lt;/p&gt;
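
&lt;p&gt;If you've never met a 429 in the wild, this is the dance your agent ends up doing. A minimal sketch against NIM's OpenAI-compatible endpoint (the model name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Exponential backoff against a 40 RPM cap. Every sleep is your "free"
# session sitting idle.
attempt=0
until [ $attempt -ge 5 ]; do
  code=$(curl -s -o /tmp/resp.json -w '%{http_code}' \
    https://integrate.api.nvidia.com/v1/chat/completions \
    -H "Authorization: Bearer $NVIDIA_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model":"some/model","messages":[{"role":"user","content":"hi"}]}')
  [ "$code" != "429" ] &amp;&amp; break
  attempt=$((attempt + 1))
  sleep $((2 ** attempt))   # 2s, 4s, 8s, 16s, 32s
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;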

&lt;p&gt;Feels like ordering a pint at a pub where the bartender can only pour one drink every 30 minutes. The beer's free, sure. But by the time you've got your round in, your mates have already left.&lt;/p&gt;

&lt;p&gt;And &lt;a href="https://openrouter.ai/" rel="noopener noreferrer"&gt;OpenRouter&lt;/a&gt;? Their free tier is 20 RPM and 200 requests per day. That's not even enough for a single proper coding session.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://forums.developer.nvidia.com/" rel="noopener noreferrer"&gt;Nvidia forums&lt;/a&gt; are full of people trying to work around this. Alternate suggestions include adding sleep commands between API calls, skipping all forms of reasoning and planning so the model just does raw tool calls, and begging for 200 RPM so they can run multi-agent setups on the free tier. That's the trilemma in action and they don't even realise they're choosing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem 3: It's Not &lt;em&gt;Really&lt;/em&gt; Free
&lt;/h2&gt;

&lt;p&gt;This one's shorter but it matters.&lt;/p&gt;

&lt;p&gt;When you use &lt;a href="https://build.nvidia.com/" rel="noopener noreferrer"&gt;Nvidia NIM&lt;/a&gt;'s hosted API, your prompts go through their infrastructure. Your code. Your proprietary logic. Your company's business rules.&lt;/p&gt;

&lt;p&gt;There's a genuine gap in their terms of service. The Nvidia forums have a moderator confirming that self-hosted NIM containers don't use your data for training. Great. But for the cloud-hosted API? No definitive public statement. Nothing in writing that says "we will not use your API prompts for training purposes."&lt;/p&gt;

&lt;p&gt;And then there's the free &lt;a href="https://chat.qwen.ai/" rel="noopener noreferrer"&gt;Qwen Code&lt;/a&gt; tier. Higher rate limits, more generous free usage. But your stuff is definitely being used for training. Fine, if you're okay with that.&lt;/p&gt;

&lt;p&gt;I'd presume it's the same with MiniMax, GLM, and Kimi's coding plans, which are dirt cheap in comparison. I'm not saying I &lt;em&gt;know&lt;/em&gt; this for certain. But I got a warning email from one of them threatening a ban because I was using my "coding plan" for non-coding tasks. Fair enough, it says "coding plan" on the tin, and there's probably fine print about training data buried somewhere in the terms.&lt;/p&gt;

&lt;p&gt;And if you're using these for personal assistance rather than coding, it gets even worse. That's your personal data. WhatsApp messages, emails, calendar entries, whatever you've given the agent access to, potentially being used for training. Not saying they actually do it. But nothing in their terms says they won't. And I'm not even sure how data protection laws apply across jurisdictions when your data is routed through infrastructure in another country.&lt;/p&gt;

&lt;p&gt;Is this a problem for mucking about? Building hobby projects? Benchmarking models on a Saturday afternoon? No. Crack on.&lt;/p&gt;

&lt;p&gt;Is this a problem for your company's codebase? For production systems? For anything proprietary or personal? Yeah. Yeah it is.&lt;/p&gt;

&lt;p&gt;The free tier is a playground, not a workshop.&lt;/p&gt;

&lt;h2&gt;
  
  
  What About Speed?
&lt;/h2&gt;

&lt;p&gt;Inference speed is part of the trilemma, so let's give it proper attention.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider / Model&lt;/th&gt;
&lt;th&gt;Tokens per second&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;a href="https://cloud.cerebras.ai/" rel="noopener noreferrer"&gt;Cerebras&lt;/a&gt; Llama 4 Scout&lt;/td&gt;
&lt;td&gt;~2,600&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Groq Llama 3.1 8B&lt;/td&gt;
&lt;td&gt;~2,100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;a href="https://cloud.cerebras.ai/" rel="noopener noreferrer"&gt;Cerebras&lt;/a&gt; Qwen3 235B&lt;/td&gt;
&lt;td&gt;~1,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Groq Llama 3.3 70B&lt;/td&gt;
&lt;td&gt;200-350&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MiniMax M2.7&lt;/td&gt;
&lt;td&gt;~100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-5.5&lt;/td&gt;
&lt;td&gt;~74&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Sonnet 4.6&lt;/td&gt;
&lt;td&gt;~44&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Opus 4.7&lt;/td&gt;
&lt;td&gt;~40-46&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Cerebras running Llama 4 Scout at 2,600 tokens per second is genuinely bonkers. That's running like a mad dog. &lt;a href="https://groq.com/" rel="noopener noreferrer"&gt;Groq&lt;/a&gt; is similarly quick on smaller models.&lt;/p&gt;

&lt;p&gt;But here's the thing. For coding agents, speed matters less than you think.&lt;/p&gt;

&lt;p&gt;A coding agent is not a chatbot. You're not sitting there watching it type letter by letter, tapping your fingers. You fire off a task, the agent goes and does its thing, you check back. It's an &lt;em&gt;eventually consistent assistant&lt;/em&gt;. Whether it takes 30 seconds or 3 minutes to complete a complex task, the output is what matters. The difference between 40 tok/s and 2,600 tok/s is dramatic in a chat UI. In an autonomous coding agent, it's the difference between making a cup of tea and making a cup of coffee while you wait.&lt;/p&gt;

&lt;p&gt;Where speed genuinely matters: transactional or chat-based interactions where you're actually real-time interacting with the model. Gemini Flash models. ChatGPT's fast mode. The stuff where you're in a tight feedback loop and waiting 4 seconds for a response versus 0.5 seconds is the difference between flow state and frustration. For that use case, &lt;a href="https://cloud.cerebras.ai/" rel="noopener noreferrer"&gt;Cerebras&lt;/a&gt; and &lt;a href="https://groq.com/" rel="noopener noreferrer"&gt;Groq&lt;/a&gt; are incredible. And honestly, I would love my code to be produced at that inference speed too. Imagine Opus-level intelligence at 1,000+ tokens per second. I'd pay stupid money for that.&lt;/p&gt;

&lt;p&gt;But the speed comes at the cost of intelligence. That's the trilemma again. I'm still on the waitlist for the GLM coding plan to try GLM at 1,000+ tokens per second on Cerebras. Imagine failing fast at that speed. You'd get wrong answers at the speed of light.&lt;/p&gt;

&lt;p&gt;And one more gotcha on the Cerebras free tier. GLM-4.6 and GLM-4.7 are &lt;em&gt;not&lt;/em&gt; free. They require a $10 credit purchase. So when someone tells you they're running GLM on Cerebras for free, check which model. The free ones are Llama variants and DeepSeek R1. Quick, yes. Intelligent enough for serious coding work? Debatable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Cost of "Free"
&lt;/h2&gt;

&lt;p&gt;I wrote last week about &lt;a href="https://dev.to/blog/productivity-series/03-ai/devx-of-coding-agents-matters-massively"&gt;your coding agent's best feature not being the code&lt;/a&gt;. The developer experience. The statusline, the insights, the hooks, the bits that make the 8th hour feel like the 1st. That piece was about the harness layer. This one is about the layer beneath it: the model and the API.&lt;/p&gt;

&lt;p&gt;And the argument is the same. You can get a technically functional setup for free. Claude Code pointed at &lt;a href="https://build.nvidia.com/" rel="noopener noreferrer"&gt;Nvidia NIM&lt;/a&gt; or &lt;a href="https://openrouter.ai/" rel="noopener noreferrer"&gt;OpenRouter&lt;/a&gt; free tier, running Qwen3 Coder 480B or MiniMax M2.5 or whatever's new this week. It will work. For some definition of work.&lt;/p&gt;
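
&lt;p&gt;For the curious, this is roughly the wiring those tutorials sell. A sketch, not an endorsement: it assumes Claude Code's &lt;code&gt;ANTHROPIC_BASE_URL&lt;/code&gt; override and a translation proxy (LiteLLM here) in front of an OpenAI-compatible free endpoint; exact flags and model IDs vary by tutorial and change weekly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Free endpoint behind a proxy that speaks Anthropic's API shape to
# Claude Code. The model ID is illustrative.
litellm --model openai/qwen3-coder --api_base https://integrate.api.nvidia.com/v1 &amp;
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="anything"   # the proxy holds the real key
claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;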

&lt;p&gt;But "works" and "good" are different postcodes. And "good" and "great" aren't even in the same city.&lt;/p&gt;

&lt;p&gt;The free route gets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Models that handle straightforward tasks but fall over on complexity&lt;/li&gt;
&lt;li&gt;Rate limits that turn a 10-minute task into a 40-minute ordeal&lt;/li&gt;
&lt;li&gt;No guarantees about what happens to your code&lt;/li&gt;
&lt;li&gt;An experience that will, sooner or later, make you think "is this really what everyone's excited about?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that last point is what bothers me most.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Newbie Problem
&lt;/h2&gt;

&lt;p&gt;There's a popular take that free tiers are great for beginners. "Try before you buy." "See if AI coding is for you." I disagree strongly.&lt;/p&gt;

&lt;p&gt;If you've never used AI coding assistants before, the &lt;em&gt;last&lt;/em&gt; thing you want is a compromised experience. You want to see what this stuff can actually do when it's firing on all cylinders. You want Opus 4.7 autonomously debugging your broken build across three files while you read the error message it found in a log you didn't even know existed. You want GPT-5.5 one-shotting a complex function that would've taken you 45 minutes to write and test.&lt;/p&gt;

&lt;p&gt;You don't want a model that stalls on tool call 3 of 7 and gives up. You don't want to hit 429s on your first session and spend 20 minutes googling what that even means.&lt;/p&gt;

&lt;p&gt;Most people I know who tried AI coding and put it away as "meh" or "overhyped"? They compromised. They used a free tier, or a weaker model, or an IDE plugin instead of a proper terminal agent. They got 60% of the experience and judged the whole category on it. That's like watching a dodgy cam rip of a film and deciding cinema is rubbish.&lt;/p&gt;

&lt;p&gt;You don't need to stay on the paid tier forever. But your first experience should be the best possible version. Fall in love with it first. &lt;em&gt;Then&lt;/em&gt; figure out where you can optimise costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Free Actually Makes Sense
&lt;/h2&gt;

&lt;p&gt;I'm not saying free APIs are useless. They're proper useful for specific things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Benchmarking and comparing models&lt;/strong&gt;. &lt;a href="https://build.nvidia.com/" rel="noopener noreferrer"&gt;Nvidia NIM&lt;/a&gt; with 100+ models is brilliant for this. Run the same prompt through six models, see who does it best (see the sketch after this list). 40 RPM is fine when you're doing one-off comparisons.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning and experimenting&lt;/strong&gt;. Building toy projects, learning how Claude Code works as a tool, understanding what agent-driven development feels like at a surface level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specific use cases where the model gap doesn't matter&lt;/strong&gt;. If your work is 90% "generate this CRUD endpoint" and 10% "solve this complex architectural problem," a free model handles 90% of your day just fine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed-first tasks with &lt;a href="https://cloud.cerebras.ai/" rel="noopener noreferrer"&gt;Cerebras&lt;/a&gt; or &lt;a href="https://groq.com/" rel="noopener noreferrer"&gt;Groq&lt;/a&gt;&lt;/strong&gt;. When you need fast inference on simpler tasks, these are excellent.&lt;/li&gt;
&lt;/ul&gt;
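
&lt;p&gt;That comparison loop is a few lines of shell. A sketch assuming NIM's OpenAI-compatible endpoint; the model IDs are illustrative, so check the catalogue for real ones:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Same prompt, several models, eyeball the results. At this pace,
# 40 RPM never gets in the way.
prompt="Write a function that merges two sorted linked lists."
for model in "qwen/qwen3-coder" "deepseek-ai/deepseek-v3" "meta/llama-3.3-70b-instruct"; do
  echo "=== $model ==="
  curl -s https://integrate.api.nvidia.com/v1/chat/completions \
    -H "Authorization: Bearer $NVIDIA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"model\":\"$model\",\"messages\":[{\"role\":\"user\",\"content\":\"$prompt\"}]}" \
    | jq -r '.choices[0].message.content'
  sleep 2   # nowhere near the cap
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;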

&lt;p&gt;The mistake isn't using free tiers. The mistake is pretending they're a replacement for the paid experience and telling thousands of people online that they are.&lt;/p&gt;

&lt;h2&gt;
  
  
  So What Should You Actually Do?
&lt;/h2&gt;

&lt;p&gt;Start with at least &lt;a href="https://www.anthropic.com/pricing" rel="noopener noreferrer"&gt;Claude Pro&lt;/a&gt; at 20 quid a month on Claude Code. It won't last long under heavy usage, but while it lasts, you'll enjoy it. That's enough to see what this stuff can actually do when it's not hobbled. Close second: use your ChatGPT Plus subscription with Codex CLI. Proper capable, proper fast.&lt;/p&gt;

&lt;p&gt;Anything else at this point is a compromise on either devX or intelligence. And once you've seen what uncompromised looks like, &lt;em&gt;then&lt;/em&gt; you can make informed decisions about where to cut costs. Maybe MiniMax M2.7 handles 80% of your routine work and Opus handles the complex sessions. That's a rational optimisation. But you can only make that call once you know what "great" feels like.&lt;/p&gt;

&lt;p&gt;Starting from "free" and working up? You'll never know what you're missing. And worse, you might decide the whole thing isn't worth your time. When the reality is, you just never gave it a fair go.&lt;/p&gt;

&lt;p&gt;Cheap. Intelligent. Fast. Pick two. Know which two you're picking. And stop pretending you can have all three for nothing.&lt;/p&gt;

</description>
      <category>claudecodefreenvidianim</category>
      <category>aicodingagentcomparison2026</category>
      <category>opus47vsgpt55</category>
      <category>freeaicodingtools</category>
    </item>
    <item>
      <title>Your Coding Agent's Best Feature Isn't the Code</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:49:36 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/your-coding-agents-best-feature-isnt-the-code-pgd</link>
      <guid>https://dev.to/stevengonsalvez/your-coding-agents-best-feature-isnt-the-code-pgd</guid>
      <description>&lt;h1&gt;
  
  
  Your Coding Agent's Best Feature Isn't the Code
&lt;/h1&gt;

&lt;p&gt;About a year ago I wrote a piece on &lt;a href="https://dev.to/blog/productivity-series/03-ai/03-01-ai-coding-assistants"&gt;AI coding assistants&lt;/a&gt; - what was winning the race, where things were heading, what to spend your money on. A year on, that warrants a status update. I'll keep it tight.&lt;/p&gt;

&lt;p&gt;Terminal coding agents won the race ages ago. That question is settled. The IDE-embedded approach had its moment but the terminal-first agents run circles around them now. And at the top, it is basically a two-horse race: Claude Code and Codex. Everything else is doing its best to close the gap, and fair play to them, some are getting genuinely good - Copilot, opencode, ampcode, Factory, Warp, Qwen Code, Kiro are all pushing hard and I would not bet against any of them in 12 months. And then there's Gemini CLI, which seems a bit... lost. Like it showed up to the party in last season's outfit and hasn't quite clocked that everyone's moved on. But right now, the two I reach for are Claude Code and Codex.&lt;/p&gt;

&lt;p&gt;Here's the thing though. At this point, it's not about producing code. All of them produce code at roughly the same level. It's 80% the model, 20% the harness. If GPT-5.4 or Opus 4.6 or Trinity is doing the thinking, the output quality is going to be roughly comparable regardless of which terminal wrapper is sat on top. This is Eclipse, NetBeans, VS Code, IntelliJ all over again. All of them compiled Java just fine. But VS Code won that battle and IntelliJ had its moments too. The difference was never the compiler.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Claude Code Keeps Getting My Money
&lt;/h2&gt;

&lt;p&gt;Now, a lot of the love Claude Code gets online - the "I can't go back" posts, the "this changed how I work" threads - gets attributed to its ability to produce code for your requirements. Codex does that just as well, probably Copilot too at this point. Or the autonomy stuff. Opus as a model is probably the main driver there, not Claude Code the harness itself.&lt;/p&gt;

&lt;p&gt;The actual reason, the one nobody writes blog posts about because it's harder to screenshot, is purely the developer experience.&lt;/p&gt;

&lt;p&gt;As much as Claude Code has become a bit of a learning curve (I've seen courses going for $99 to learn Claude Code - save your money, spend that on your sub instead, seriously), once you're past the faff the tool is an absolute developer's dream. And it comes down to three things.&lt;/p&gt;

&lt;p&gt;Their devs are actively getting feedback and acting on it. They're dogfooding it daily. They build Claude Code using Claude Code and it shows in every decision they make. You can feel when a tool is built by people who actually use it versus people who watched a demo.&lt;/p&gt;

&lt;p&gt;They ship at a pace that gives you FOMO. Not the bad kind. The "oh they shipped something new and I need to go try it right now" kind. The tool is different every week and you want to keep up. It's mental. In a good way.&lt;/p&gt;

&lt;p&gt;And they build stuff so extensible and personalisable it borders on absurd. Like, properly absurd. You can run arbitrary shell commands in hooks, script your own statusline, build your own pet, the lot. The ceiling is just your innovation and how deep down the rabbit hole you're willing to go.&lt;/p&gt;

&lt;p&gt;A lot of these devX primitives are things we build on top of in &lt;a href="https://getwololo.dev/docs" rel="noopener noreferrer"&gt;wololo&lt;/a&gt; - the agent intelligence platform that sits across harnesses. But the raw ingredients come from the harness itself, and Claude Code's ingredients are just better.&lt;/p&gt;

&lt;p&gt;So even though I use Claude Code differently now (most of my interaction is hermes agents spinning up through the &lt;a href="https://getwololo.dev/docs" rel="noopener noreferrer"&gt;custom harness platform&lt;/a&gt; and driving it autonomously via &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box" rel="noopener noreferrer"&gt;claws and hermes&lt;/a&gt;, less human in the loop to enjoy the devX), and they don't even support third-party harnesses anymore, I still keep the Max subscription running. The devX alone is worth it for the sessions where I do sit down with it directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Statusline: Your Cockpit
&lt;/h2&gt;

&lt;p&gt;The first thing you see when you start a session. Two lines at the bottom of your terminal that tell you more about your session state than most dashboards manage in a full page.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/img-36.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/img-36.png" alt="Claude Code statusline - powerline segments showing cwd, git branch, beads count, model, turn count, context %, 5h %, weekly %, session cost"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line one&lt;/strong&gt; (powerline bar):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Working directory in blue&lt;/li&gt;
&lt;li&gt;Git branch in cyan (goes orange when dirty), with ahead/behind counts and staged/unstaged markers&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bd:2&lt;/code&gt; telling me I've got two &lt;a href="https://github.com/steveyegge/beads" rel="noopener noreferrer"&gt;beads&lt;/a&gt; tasks ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Line two&lt;/strong&gt; (plain text):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model with the 1m context suffix&lt;/li&gt;
&lt;li&gt;Turn count as a traffic light (green/yellow/red as you approach 50)&lt;/li&gt;
&lt;li&gt;Context percentage bar&lt;/li&gt;
&lt;li&gt;5-hour usage bar, weekly usage bar&lt;/li&gt;
&lt;li&gt;Session spend in dollars&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At a glance: how far into context you are, how much you've burned today and this week, what it's costing, whether your git state is clean. No switching tabs, no running commands, no checking a web dashboard.&lt;/p&gt;

&lt;p&gt;Now look at the competition:&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/img-37.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/img-37.png" alt="Codex statusline - model, directory, context percentage, 5h and weekly usage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Codex: model, working directory, context percentage, usage bars. Functional. A speedometer on a car without a fuel gauge.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/img-38.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/img-38.png" alt="Copilot statusline - model, remaining requests, cwd, git branch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copilot: model name, remaining requests (as a percentage with no denominator), working directory, git branch. It's like being told you have "some" petrol left.&lt;/p&gt;

&lt;p&gt;And here's the kicker - the whole statusline is scriptable. You can even run another Claude command as part of your status to keep the main session in line with your original prompt, warning you, say, when Claude is digressing. Other tools are only just getting a statusline... and it's behind EXPERIMENTAL flags.&lt;/p&gt;
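
&lt;p&gt;To give you a flavour of how low the barrier is, here's a minimal statusline script. The JSON field names follow Claude Code's statusline docs at the time of writing; verify the exact shape against current docs rather than taking this as gospel:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# ~/.claude/statusline.sh - Claude Code pipes session JSON on stdin and
# renders whatever this prints. Wired up in ~/.claude/settings.json:
#   { "statusLine": { "type": "command", "command": "~/.claude/statusline.sh" } }
input=$(cat)
model=$(echo "$input" | jq -r '.model.display_name')
dir=$(echo "$input" | jq -r '.workspace.current_dir')
branch=$(git -C "$dir" branch --show-current 2&gt;/dev/null)
echo "$model | ${dir##*/} | ${branch:-no git}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;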

&lt;h2&gt;
  
  
  The Stuff Everyone Else Is Playing Catch-up On
&lt;/h2&gt;

&lt;p&gt;Right, digression aside, let's crack on with the bits and bobs that everyone else is playing catch-up on. Because beyond the statusline, Claude Code is operating in a different postcode entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/usage&lt;/code&gt; with actual insights.&lt;/strong&gt; Most tools give you a bar chart and call it done. Claude Code's &lt;code&gt;/usage&lt;/code&gt; tells you what percentage of your usage came from subagent-heavy sessions, what ran at over 150k context, what came from sessions active 8+ hours. It's telling you &lt;em&gt;why&lt;/em&gt; your usage is what it is, not just &lt;em&gt;what&lt;/em&gt; it is.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/img-43.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/img-43.png" alt="Claude Code /usage showing session analysis - subagent %, context %, long session %"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;60% of usage from subagent-heavy sessions. 68% at over 150k context. 21% from sessions active 8+ hours. Now I know where the tokens are going and can make real decisions about it. "Maybe don't spawn four subagents for a one-file fix" is usable advice that falls straight out of this data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/btw&lt;/code&gt; for parking thoughts.&lt;/strong&gt; You're in the middle of a complex refactor and you think "oh I should also fix that other thing." In any other tool you either interrupt your current flow or forget about it. &lt;code&gt;/btw&lt;/code&gt; lets you park that thought without breaking the agent's current task. Picks it up when the current work is done. Tiny feature. Absolute doddle to use. Changed my workflow more than I expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/loop&lt;/code&gt; and &lt;code&gt;/monitor&lt;/code&gt;.&lt;/strong&gt; Run a prompt on a recurring interval, or stream events from a background process. Want to check your deploy every 5 minutes? &lt;code&gt;/loop 5m check the deploy status&lt;/code&gt;. Want to watch a build? &lt;code&gt;/monitor&lt;/code&gt;. These aren't complex features individually but having them built in, tested, and reliable tells you something about how the team uses their own tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/remote-control&lt;/code&gt;.&lt;/strong&gt; Continue your session from your phone. Not through Termux or vibetunnel or jittery mosh tunnels. Properly, through the Claude Code app. And recently this has been as reliable as your morning alarm - which, unlike most software analogies, is actually a compliment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/web-setup&lt;/code&gt; and &lt;code&gt;/teleport&lt;/code&gt;.&lt;/strong&gt; Port your session to execute somewhere that isn't your machine. When your Mac is burning up and the session is fairly independent stuff, just teleport it. Fair play, Codex had the web execution thing first with Codex Web. But having it integrated into the flow as a natural action rather than a separate product is different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recap on resume.&lt;/strong&gt; When you come back to a session after context compaction, Claude Code shows you what happened. What was accomplished, what's in progress, what's next. You don't scroll through 200 messages trying to reconstruct where you were.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/img-42.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/img-42.png" alt="Claude Code recap showing session summary after context compaction"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-session orchestration.&lt;/strong&gt; Four agents running in parallel, each with their own full statusline, each in their own tmux pane. You can see at a glance which one is burning context, which one is idle, which one has dirty git state.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/img-41.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/img-41.png" alt="Four Claude Code agents in tmux, each with full statusline showing different progress states"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/batch&lt;/code&gt;.&lt;/strong&gt; Need to migrate 100 repositories from axios to superagent? This is the one. Yeah, packaged skills do similar things - beads with &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box/tree/main/toolkit/packages/skills/swarm-create" rel="noopener noreferrer"&gt;swarm orchestration&lt;/a&gt;, or &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box" rel="noopener noreferrer"&gt;agents-in-a-box&lt;/a&gt; which is what I use for multi-agent coordination. But having it built in tells you the level of dogfooding going on here. These aren't theoretical features cobbled together from a product roadmap. They're building the features they needed yesterday.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/simplify&lt;/code&gt;.&lt;/strong&gt; Brilliant. Reviews your recently changed code for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reuse&lt;/strong&gt; - spots places where you're reinventing something that already exists in the codebase&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality&lt;/strong&gt; - flags overengineering, unnecessary abstractions, dead code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt; - catches redundant work, bloated patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One command. Done. Pair it with your own &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box/tree/main/toolkit/packages/skills/critique" rel="noopener noreferrer"&gt;critique skill&lt;/a&gt; and the design distillation from &lt;a href="https://github.com/pbakaus/impeccable" rel="noopener noreferrer"&gt;impeccable&lt;/a&gt;, and it absolutely gets the adipose off your code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;/ultraplan&lt;/code&gt;.&lt;/strong&gt; I'll be honest, this one's a bit janky. If you've already got good &lt;code&gt;/research&lt;/code&gt; and &lt;code&gt;/plan&lt;/code&gt; skills beyond what Claude provides out of the box, &lt;code&gt;/ultraplan&lt;/code&gt; doesn't add much. But fair enough, it's there and it's improving.&lt;/p&gt;

&lt;h2&gt;
  
  
  /insights - This Is Where It Gets Proper Good
&lt;/h2&gt;

&lt;p&gt;Right, this is the one that proper blew my mind. Claude Code analyses your usage patterns across sessions and produces a full report on how &lt;em&gt;you&lt;/em&gt; work, what's going well, what's rubbish, and what to try next.&lt;/p&gt;

&lt;p&gt;These screenshots are from a &lt;a href="https://shotclubhouse.com" rel="noopener noreferrer"&gt;shotclubhouse.com&lt;/a&gt; ops instance (2,891 messages across 168 sessions). Click any to zoom in.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/45.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/45.png" alt="At a Glance" width="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="/images/blog/devx-coding-agents/46.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/46.png" alt="What You Work On" width="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="/images/blog/devx-coding-agents/47.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/47.png" alt="How You Use CC" width="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="/images/blog/devx-coding-agents/48.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/48.png" alt="Friction &amp;amp; Outcomes" width="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="/images/blog/devx-coding-agents/49.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/49.png" alt="Features to Try" width="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="/images/blog/devx-coding-agents/50.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/50.png" alt="Context Limits" width="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="/images/blog/devx-coding-agents/51.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/51.png" alt="On the Horizon" width="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The shotclubhouse ops instance runs many Claude Code sessions for bugs, support, and testing. Most of it is autonomous with zero human involvement. The self-healing nature of the setup handles it. But the token churn is real.&lt;/p&gt;

&lt;p&gt;And &lt;code&gt;/insights&lt;/code&gt; doesn't just state the problem. It offers concrete solutions and fixes, with copy-paste prompts ready to go. "Here's what's going wrong. Here's why. Here's what to paste into Claude Code to fix it." This is the best developer experience I've used in any coding tool. I've not seen any other toolset - IDE, web SaaS, or terminal - come close to this without building out harness infrastructure yourself.&lt;/p&gt;

&lt;p&gt;And even if you do build it yourself (in my case I've got &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box/tree/main/toolkit/packages/utilities/hooks/utils" rel="noopener noreferrer"&gt;Langfuse observability wired into the hook system&lt;/a&gt; and a full &lt;a href="https://github.com/stevengonsalvez/agents-in-a-box/tree/main/toolkit/packages/plugins/reflect" rel="noopener noreferrer"&gt;reflection/self-improvement system&lt;/a&gt;), all you end up with is a lot of data points and a lot more manual thinking to improve the system. The observability and reflection stuff deserves its own long-form piece, but the core of it is understanding how well the session and the human driver (or agent driver, at that) work together. &lt;code&gt;/insights&lt;/code&gt; does that derivation for you, out of the box, no wiring needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where /insights points to
&lt;/h3&gt;

&lt;p&gt;To reach something resembling an autonomous agentic software tool, or a zero-touch coding agent, there's a stack of things still needed (I keep promising to write the long-form on this, I know):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full research autonomy&lt;/strong&gt; - ability to research, retrieve, analyse and use skills at its discretion, securely&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge infrastructure&lt;/strong&gt; - build and tap into knowledge through GraphRAGs, semantic search, wikis, whatever fits the problem best&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-improvement&lt;/strong&gt; - the biggest one. The agent improving its own harness: building hooks, writing plugins, extending its own skills, building out its knowledge base and correcting it when it's wrong, getting better at getting better&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measurement and baselines&lt;/strong&gt; - benchmarking whether the agent is &lt;em&gt;actually&lt;/em&gt; improving, not just churning differently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The long-form will go deeper into the how, the tools for the job, and where you pitch in. Consider this the trailer.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The Self-Improvement Loop&lt;/strong&gt;: Here's something you can do right now. Combine &lt;code&gt;/insights&lt;/code&gt; (finds the problems) with &lt;code&gt;/simplify&lt;/code&gt; (gets the adipose off) with some form of &lt;code&gt;/validate&lt;/code&gt; (checks the fix didn't break anything), and you already have a self-improvement system. Plug in a &lt;code&gt;/schedule&lt;/code&gt; to run &lt;code&gt;/insights&lt;/code&gt; daily and feed the output into Claude Code to &lt;code&gt;/implement&lt;/code&gt; the suggestions. Morning you wake up to a more optimised set of skills, better prompts, tighter token usage, stronger tests. The roof is just holding you down.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
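
&lt;p&gt;In practice that loop is a few lines of cron-fired shell. A sketch, assuming Claude Code's headless &lt;code&gt;-p&lt;/code&gt; mode and the slash commands above; double-check the flag names against current docs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# ~/.claude/self-improve.sh - run from cron, e.g. 0 6 * * *
# Step 1: generate the insights report headlessly.
claude -p "/insights" &gt; /tmp/insights.md
# Step 2: feed it back and let the agent implement the top suggestion.
claude -p "Read /tmp/insights.md and implement the top suggestion" \
  --allowedTools "Edit,Bash"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;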

&lt;h2&gt;
  
  
  /powerup - The Meta Command
&lt;/h2&gt;

&lt;p&gt;A meta command to discover features and help you help yourself. Seriously, please don't pay $99 to learn Claude Code. Run &lt;code&gt;/powerup&lt;/code&gt;. It analyses how you use the tool and suggests features you're not using yet. It literally teaches itself to you. There's loads more under the covers than what you see on first use, and this is how you suss it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hook System
&lt;/h2&gt;

&lt;p&gt;No other pure coding agent has the range of hooks Claude Code has. Other assistant harnesses do (the claws, hermes, that lot), but bog-standard terminal coding agents barely manage two or three hooks, and most of them added those recently. Claude has had hooks for close to a year now, nearly as long as I've been mucking about with it.&lt;/p&gt;

&lt;p&gt;And this hook system is properly customisable. You can run anything in hooks. Hooks to shape your session. Hooks to read out stuff. Hooks to announce things. Status checks, guardrails, observability fire-offs to Langfuse or whatever you're running. The ceiling is just your innovation and how much yak shaving you're willing to do on a Saturday afternoon.&lt;/p&gt;
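
&lt;p&gt;A taste of what a hook looks like: this one drops an observability breadcrumb after every Bash tool call. The &lt;code&gt;settings.json&lt;/code&gt; shape follows the hooks docs at the time of writing, and the log sink is purely illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# ~/.claude/hooks/log-tool.sh - hooks receive the event as JSON on stdin.
# Registered in ~/.claude/settings.json under:
#   { "hooks": { "PostToolUse": [ { "matcher": "Bash",
#       "hooks": [ { "type": "command", "command": "~/.claude/hooks/log-tool.sh" } ] } ] } }
input=$(cat)
tool=$(echo "$input" | jq -r '.tool_name')
echo "$(date +%FT%T) $tool" &gt;&gt; ~/.claude/tool-usage.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;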

&lt;p&gt;The community ran with it too. &lt;a href="https://github.com/AshKetchumFromPallet/ccpet" rel="noopener noreferrer"&gt;ccpet&lt;/a&gt;, &lt;a href="https://github.com/deansallinen/claude-code-tamagotchi" rel="noopener noreferrer"&gt;claude-code-tamagotchi&lt;/a&gt;. Custom statusline pets that persist between sessions. Nudges from your mascot when context gets high. Mini observability in a 3-line ASCII duck. None of the other terminal agents have the extensibility to even make this possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  The /buddy Easter Egg (RIP)
&lt;/h2&gt;

&lt;p&gt;Speaking of which. &lt;code&gt;/buddy&lt;/code&gt; gave you an ASCII pet in your statusline. Duck, goose, blob, cat, dragon, octopus. With hats. Crown, tophat, propeller, halo. And a rarity system. 60% common, 25% uncommon, 10% rare, 4% epic, 1% legendary.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/blog/devx-coding-agents/img-44.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/blog/devx-coding-agents/img-44.png" alt="Claude Code /buddy showing all ASCII pets with hats and rarity system"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;April Fools feature. Shipped, enjoyed, removed. But it tells you something about the team building this. They care about the experience of sitting in a terminal for 10 hours a day. Nobody at OpenAI or GitHub is shipping ASCII pets with loot tables. Nobody at those companies is thinking "what would make the 8th hour of a refactor slightly more bearable." And that mentality shows up in every other feature they ship.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ranking (April 2026)
&lt;/h2&gt;

&lt;p&gt;After running all of them daily for months:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; - The full package. Model + devX + extensibility. The statusline alone packs more into two lines than most dashboards manage in a full page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Codex&lt;/strong&gt; - Best raw code production when you pair models right. The GPT-5.4 + GPT-5.3-Codex combination for thinking-then-coding is genuinely strong. But the harness is bare bones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot&lt;/strong&gt; - Getting better fast but still feels like a VS Code extension that grew legs. The terminal version is playing catch-up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Everything else&lt;/strong&gt; - opencode, warp, droid, ampcode, gemini cli. Various stages of "genuinely promising but not there yet." Give them 6 months and this ranking might look different.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Point
&lt;/h2&gt;

&lt;p&gt;The gap between Claude Code and the rest is not just the model. It's the developer experience. The hooks, the statusline, the session management, the insights, the extensibility, the shipping pace, and honestly the community that's cobbled together all sorts of mad extensions around it. That's what keeps people on it. And the gap keeps growing.&lt;/p&gt;

&lt;p&gt;The conventional wisdom is 80% model, 20% harness. Claude Code is pushing that 20% closer to 30%, to the point where you'd find Claude Code with Sonnet beating GitHub Copilot with Opus on &lt;em&gt;agentic&lt;/em&gt; engineering work (not pure coding, but the multi-step, multi-tool, plan-execute-validate kind). That's how much the margins are growing on the harness side.&lt;/p&gt;

&lt;p&gt;Eclipse and VS Code both compiled Java. Only one of them made you want to open it in the morning.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>aicodingagent</category>
      <category>devrel</category>
      <category>codex</category>
    </item>
    <item>
      <title>Browser Tools for AI Agents Part 3: Managed Infrastructure and When DIY Stops Making Sense</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:49:23 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-3-managed-infrastructure-and-when-diy-stops-making-sense-1po2</link>
      <guid>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-3-managed-infrastructure-and-when-diy-stops-making-sense-1po2</guid>
      <description>&lt;p&gt;There's a moment in every engineer's life where they're two weeks into building something, surrounded by Docker configs and cron jobs and a spreadsheet tracking proxy rotation, and they think: "Someone must sell this as a service." And someone does. Several someones, actually. The question is whether paying them makes you clever or lazy.&lt;/p&gt;

&lt;p&gt;I've been on both sides of that line. Ran my own headless Chrome fleet on &lt;a href="https://www.hetzner.com/cloud" rel="noopener noreferrer"&gt;Hetzner&lt;/a&gt; for a couple of months, complete with a homegrown retry queue and a Grafana dashboard I was unreasonably proud of. Worked a treat until Cloudflare changed their fingerprinting and the entire pipeline went sideways. Was looking at managed options by the end of the week.&lt;/p&gt;

&lt;p&gt;This is Part 3 of the browser tools series. Same framing throughout: how do you give your coding agents the right browser infrastructure for a closed loop of research, build, and validate? Not consumer browsing, not manual QA. Agent-driven validation of what you're shipping. &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part1-low-level"&gt;Part 1&lt;/a&gt; covered the low-level tools. &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part2-frameworks"&gt;Part 2&lt;/a&gt; went deep on frameworks and SDKs. Now we're talking about the services that will run Chrome for you, for money, and whether that money is well spent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Contenders
&lt;/h2&gt;

&lt;p&gt;Five platforms worth taking seriously in this space, each with a different angle on the same problem. I'll go through them one at a time, then we'll do the maths on when DIY actually stops being the sensible choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Firecrawl: Content Ingestion for the LLM Age
&lt;/h2&gt;

&lt;p&gt;Firecrawl is the one your AI agent probably already knows about. 103,000 GitHub stars. Apache 2.0 licence. It positions itself as "the web data API for AI" and honestly that's a fair description.&lt;/p&gt;

&lt;p&gt;What Firecrawl does differently from rolling your own Playwright scraper is to handle the entire pipeline from URL to LLM-ready markdown. You give it a URL, it renders the JavaScript, strips the boilerplate, and hands you back clean structured content that won't burn half your context window on nav bars and cookie banners. P95 latency of 3.4 seconds, and a claimed 96% web coverage. In my experience the coverage number is roughly accurate for English-language content, and drops a fair bit for sites with aggressive anti-bot measures or heavy client-side rendering behind auth walls.&lt;/p&gt;
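
&lt;p&gt;The whole pitch fits in one call. A sketch against their v1 scrape endpoint as documented at the time of writing; the target URL is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# URL in, LLM-ready markdown out. One credit for a basic scrape.
curl -s https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/docs", "formats": ["markdown"]}' \
  | jq -r '.data.markdown'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;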

&lt;p&gt;The pricing uses a credit system, and this is where it gets a bit sneaky. The $16/month Hobby plan gives you 3,000 credits. One credit per page for basic scraping. Sounds decent until you turn on "Enhanced Mode" for anti-bot sites, which costs 5 credits a pop, and then add JSON structured output, which doubles it again to 10. So your 3,000-credit plan is suddenly 300 pages if you need the good stuff. The $83/month Standard plan with 100,000 credits is where it starts making economic sense for anything beyond a weekend project.&lt;/p&gt;

&lt;p&gt;There's also an &lt;code&gt;/extract&lt;/code&gt; endpoint that uses a separate token-based billing. Completely independent from the credit system. Two billing models in one product. Classic SaaS move.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The self-hosted caveat&lt;/strong&gt;: Firecrawl's open-source version on GitHub is a subset, not a mirror. You get the basic scraping engine but you lose their proprietary "Fire-engine" that handles the hard anti-bot stuff. No proxy rotation. No browser sandbox. No agent mode. No dashboard. You're essentially getting a slightly fancier Playwright wrapper. It's useful for simple sites and saves you writing your own HTML-to-markdown converter, but if you're self-hosting because you thought you'd get the cloud features for free, you'll be disappointed. You also need to bring your own LLM API key for the extraction features, which adds another cost nobody mentions on the landing page.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; Ordering a flat white at a cafe. Yes, you could make one at home with a kettle and an AeroPress. But theirs has the fancy milk art and they've already cleaned up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Firecrawl is brilliant for bulk content ingestion where you need markdown or structured JSON at the other end. Less suitable as a general-purpose browser automation tool. If your agent needs to click buttons, fill forms, navigate multi-step workflows, you'll outgrow it quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Browserbase: The Stagehand Company
&lt;/h2&gt;

&lt;p&gt;Browserbase is interesting because it's really two products wearing a trenchcoat. The first product is the remote browser infrastructure. Spin up Chrome instances in the cloud via API, connect with Playwright or Puppeteer, do your thing. The second product is Stagehand, their open-source AI browser automation framework with 50k+ GitHub stars and half a million weekly downloads. Stagehand gives you natural-language browser control with three atomic primitives (act, extract, observe) and an agent mode that chains them together.&lt;/p&gt;

&lt;p&gt;The strategy is obvious and honestly quite smart: give away the framework, charge for the infrastructure. Stagehand works locally with your own browser, but the moment you need concurrent sessions, stealth mode, session recording, or proxy rotation, you're pointed at Browserbase's cloud. It's the Red Hat model adapted for browser automation.&lt;/p&gt;

&lt;p&gt;Pricing starts free with 1 browser hour (which is really just enough to verify it works). The $20/month Developer plan gets you 100 hours. The $99/month Startup plan gets 500 hours, 100 concurrent browsers, 5GB of proxy data. Overage is $0.10 per browser hour. That's roughly $0.0017 per minute of browser time once you're past the included hours.&lt;/p&gt;

&lt;p&gt;One thing I appreciate about Browserbase is that CAPTCHA solving is included free on all plans. No per-solve charges. With Steel and Browserless, CAPTCHA solving is either extra or limited, which adds up fast on sites that throw CAPTCHAs every third page load.&lt;/p&gt;
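
&lt;p&gt;To make the "remote browser" half concrete, here's roughly what connecting looks like with ordinary Playwright from Python. A sketch: the wss URL follows the pattern in their quick-start docs at the time of writing, and the env var name is my own placeholder, so verify against the current docs.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: drive a Browserbase cloud browser with ordinary Playwright.
# The wss URL pattern is from their quick-start docs; treat as illustrative.
import os
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(
        f"wss://connect.browserbase.com?apiKey={os.environ['BROWSERBASE_API_KEY']}"
    )
    context = browser.contexts[0]  # Browserbase opens a default context
    page = context.pages[0] if context.pages else context.new_page()
    page.goto("https://example.com")
    print(page.title())
    browser.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;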

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; Renting a really nice workshop instead of building one in your garage. The tools are there, the ventilation works, someone else sweeps up. You just have to accept it's not yours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; If you're building an AI agent that needs to interact with the web in complex ways and you don't want to manage infrastructure, Browserbase with Stagehand is the most ergonomic option going. The pricing is fair below 500 hours. Above that, start doing your sums.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steel Browser: The Self-Hostable Underdog
&lt;/h2&gt;

&lt;p&gt;Steel is what you reach for when you look at Browserbase's pricing page and think "I could run that myself." Because you actually can. Apache 2.0 licence, 6.8k stars, and a Docker one-liner that genuinely works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &lt;span class="nt"&gt;-p&lt;/span&gt; 9223:9223 ghcr.io/steel-dev/steel-browser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gives you a browser API server with session management, cookie persistence, anti-detection plugins, Chrome extension support, and a debugging UI. On your own hardware. No monthly bill beyond whatever you're paying for the VPS.&lt;/p&gt;
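
&lt;p&gt;Once the container is up, pointing Playwright at it takes a couple of lines. A sketch, assuming the CDP endpoint is the one exposed on port 9223 in the docker command above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: point Playwright at a locally running Steel container over CDP.
# Assumes 9223 is the CDP port, per the -p mapping in the docker command above.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp("http://localhost:9223")
    context = browser.contexts[0] if browser.contexts else browser.new_context()
    page = context.new_page()
    page.goto("https://example.com")
    print(page.title())
    browser.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;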

&lt;p&gt;The cloud offering starts with a free tier of 100 browser hours per month, which is notably more generous than Browserbase's 1 hour. The $29/month Starter and $99/month Developer plans include credits at decreasing per-hour rates ($0.10 down to $0.05/hour at the Pro tier). CAPTCHA solving is billed separately at $3-3.50 per thousand solves depending on your plan.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Steel vs Browserbase, the honest comparison&lt;/strong&gt;: Steel's self-hosted offering gives you more raw capability for less money if you're comfortable managing Docker containers. Session management, proxy support, anti-detection, all there. What you lose compared to Browserbase is the tighter integration with agentic frameworks (CrewAI, LangChain, MCP), the free CAPTCHA solving, and the session recording/replay features. Steel also has a thinner community ecosystem. If your use case is "I need browsers in the cloud and I don't want to manage servers," Browserbase wins. If your use case is "I need browsers on my own infrastructure and I'm fine being the ops team," Steel wins. The features are roughly at parity. The difference is who maintains the boxes.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; A flat-pack kitchen from IKEA. All the pieces are there, the instructions mostly make sense, and the end result is perfectly functional. But you're assembling it yourself on a Sunday afternoon, and there will be leftover screws that worry you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Steel is the best self-hosted browser API available right now. If you've got a VPS and docker-compose skills, it's genuinely hard to justify paying Browserbase's monthly fees for equivalent functionality. The trade-off is your time, which, depending on your situation, might actually be the more expensive resource.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bright Data: The Enterprise Nuclear Option
&lt;/h2&gt;

&lt;p&gt;Right, let's talk about the 800-pound gorilla. Bright Data has been in the proxy game since before "AI agent" was a phrase anyone used. Their pitch for the Agent Browser is simple: 400 million IPs in 195 countries, unlimited concurrent sessions, automatic fingerprint rotation, autonomous CAPTCHA solving, and every anti-detection trick invented in the last decade. All piped through a real GUI browser (not headless) that makes bot detection systems think your scraper is a person in São Paulo using Firefox on their lunch break.&lt;/p&gt;

&lt;p&gt;The catch is the price. Pay-as-you-go is $8 per GB of data transferred. Not per page. Per gigabyte. The starter plan is $499/month for 71GB, working out to about $7/GB. Enterprise is $1,999/month for 399GB at $5/GB. If you're doing image-heavy scraping or pulling PDFs, those gigabytes go fast.&lt;/p&gt;

&lt;p&gt;For context, a typical news article page is roughly 2-3MB. At $8/GB, that's about 333-500 pages per gigabyte, or roughly 40-60 pages per dollar. At $499/month you're looking at somewhere between 23,000 and 35,000 pages depending on content weight. An equivalent Hetzner VPS running Playwright costs under four euros a month and can do the same volume in a day, assuming the sites don't block you. Which is the entire point of Bright Data. The sites &lt;em&gt;don't&lt;/em&gt; block you when you're routing through 400 million residential IPs.&lt;/p&gt;
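
&lt;p&gt;Before committing, sanity-check that arithmetic against your own targets, because page weight is the variable that actually drives the bill. A quick back-of-envelope, with illustrative page weights:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Back-of-envelope: what per-GB proxy pricing means per page.
# Page weights are illustrative; measure your own targets before committing.
PRICE_PER_GB = 8.00  # pay-as-you-go, USD

for label, page_mb in [("lean blog", 1.0), ("news article", 2.5), ("heavy SPA", 15.0)]:
    pages_per_gb = 1024 / page_mb
    cost_per_1k_pages = 1000 / pages_per_gb * PRICE_PER_GB
    print(f"{label}: {pages_per_gb:.0f} pages/GB, ${cost_per_1k_pages:.2f} per 1,000 pages")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;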

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Why Bright Data's pricing is per-GB, not per-page&lt;/strong&gt;: It's because the actual cost to Bright Data isn't the browser session, it's the proxy bandwidth. Residential IP bandwidth is expensive. Those IPs come from real devices on real ISPs with real data caps. The per-GB model reflects the underlying cost structure. It also means your bill is wildly unpredictable if you're scraping sites with varying page weights. A JavaScript-heavy SPA that pulls 15MB per page will cost you 5x more than a lightweight blog, even though the information extracted might be identical. Plan accordingly, or use their Scraper API which has per-request pricing instead.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; Hiring a private military contractor to get your parcel through customs. Wildly overkill for most situations, but when the situation actually calls for it, nothing else will do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Bright Data makes sense when you're scraping sites that actively fight back with sophisticated bot detection, when you need geo-specific data from 195 countries, or when your business depends on data that justifies a four-figure monthly bill. For scraping a few thousand blog posts into your RAG pipeline, you're setting fire to money.&lt;/p&gt;

&lt;h2&gt;
  
  
  Browserless: The Flexible Middle Ground
&lt;/h2&gt;

&lt;p&gt;Browserless sits in an interesting niche. It's the most "infrastructure-y" of the bunch, less opinionated about what you're doing with the browser and more focused on just giving you Chrome-as-a-service that works reliably. 12.9k GitHub stars. Docker image that spins up in seconds. Works with both Puppeteer and Playwright out of the box.&lt;/p&gt;

&lt;p&gt;The cloud pricing uses "units" where one unit equals 30 seconds of browser time. The free tier gives you 1,000 units (roughly 8.3 hours of browser time). The $25/month Prototyping plan gets 20k units. The $140/month Starter plan gets 180k units with 40 concurrent browsers. The $350/month Scale plan pushes to 500k units and 100 concurrent sessions.&lt;/p&gt;

&lt;p&gt;What makes Browserless compelling beyond just "Chrome in the cloud" is BrowserQL, their query language for stealth automation. It's available on the cloud and enterprise tiers and handles CAPTCHA solving, anti-detection evasion, and fingerprint management through a declarative syntax. Think of it as a higher-level abstraction over the usual Playwright/Puppeteer API, specifically designed for sites that don't want to be scraped.&lt;/p&gt;

&lt;p&gt;The V2 release added session persistence (cookies and cache surviving between sessions), session replay for debugging, Chrome extension loading, and hybrid automations that let you stream live browser sessions during script execution. That last one is proper useful for debugging agents that go off the rails.&lt;/p&gt;

&lt;p&gt;The self-hosted story is where it gets complicated. You can run the Docker image free for non-commercial use. For commercial use, you need a licence. And the source code is SSPL-1.0, the same licence MongoDB famously switched to. SSPL makes the source available but isn't OSI-approved open source, and it prevents you from offering Browserless as a managed service to others. For internal use it's fine. For building a product on top of it, read the licence carefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; A Swiss Army knife. Not the best at any single thing, but competent at everything and fits in your pocket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Browserless is the right choice if you want flexibility across use cases (scraping, testing, PDF generation, screenshots) without committing to one vendor's agent framework. The self-hosted option is genuinely viable if you accept the SSPL constraints. The cloud pricing is competitive at medium scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Maths: When Does DIY Stop Winning?
&lt;/h2&gt;

&lt;p&gt;Right then. The bit everyone actually wants to know. I've been running the numbers for a while and here's a rough break-even table. It assumes a Hetzner CX31 (4 vCPU, 8GB RAM, roughly 7 EUR/month) running Playwright in Docker with a basic retry queue, plus your engineer time, which is very much not free, to maintain it.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Monthly Pages&lt;/th&gt;
&lt;th&gt;DIY Cost (Hetzner)&lt;/th&gt;
&lt;th&gt;Browserbase ($99)&lt;/th&gt;
&lt;th&gt;Steel Self-Hosted&lt;/th&gt;
&lt;th&gt;Browserless ($140)&lt;/th&gt;
&lt;th&gt;Bright Data ($499)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;~7 EUR + time&lt;/td&gt;
&lt;td&gt;$99 (way overkill)&lt;/td&gt;
&lt;td&gt;~7 EUR + time&lt;/td&gt;
&lt;td&gt;$140 (way overkill)&lt;/td&gt;
&lt;td&gt;$499 (madness)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;~7 EUR + time&lt;/td&gt;
&lt;td&gt;$99&lt;/td&gt;
&lt;td&gt;~7 EUR + time&lt;/td&gt;
&lt;td&gt;$140&lt;/td&gt;
&lt;td&gt;$499&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50,000&lt;/td&gt;
&lt;td&gt;~14 EUR + time&lt;/td&gt;
&lt;td&gt;$99&lt;/td&gt;
&lt;td&gt;~14 EUR + time&lt;/td&gt;
&lt;td&gt;$140&lt;/td&gt;
&lt;td&gt;$499&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;~21 EUR + time&lt;/td&gt;
&lt;td&gt;$198 (overage)&lt;/td&gt;
&lt;td&gt;~21 EUR + time&lt;/td&gt;
&lt;td&gt;$140&lt;/td&gt;
&lt;td&gt;$499&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500,000&lt;/td&gt;
&lt;td&gt;~42 EUR + time&lt;/td&gt;
&lt;td&gt;~$990&lt;/td&gt;
&lt;td&gt;~42 EUR + time&lt;/td&gt;
&lt;td&gt;$350&lt;/td&gt;
&lt;td&gt;$999&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That "time" column is doing a lot of heavy lifting. If you value your time at zero (student project, learning exercise, you genuinely enjoy debugging Chrome crashes), DIY wins at every scale. The VPS costs are absurdly cheap.&lt;/p&gt;

&lt;p&gt;But the moment you factor in the 2am Cloudflare rotations, the Chrome memory leaks that crash your container every 48 hours, the proxy rotation you'll need to build yourself, the CAPTCHA solving integration, the session management, the retry logic for flaky pages...&lt;/p&gt;

&lt;p&gt;For simple, friendly sites? DIY wins below 50,000 pages per month and it's not even close. That 7 EUR Hetzner box will chew through cooperative websites all day long.&lt;/p&gt;

&lt;p&gt;For anti-bot sites? Managed wins from page one. Your Hetzner Playwright setup will get blocked on the first request to any site running Cloudflare Bot Management, DataDome, or PerimeterX. You'll spend three days building fingerprint rotation, another two on proxy integration, and then they'll change their detection and you're back to square one. The managed services have entire teams solving this problem full-time. You do not.&lt;/p&gt;
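
&lt;p&gt;If you want to plug your own numbers into that trade-off, the break-even logic fits in a few lines. Every constant below is an assumption, especially the maintenance hours, which is the figure people kid themselves about:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of the DIY-vs-managed break-even. Every constant is an assumption;
# maintenance_hours dominates the result, so be honest with it.
def monthly_cost_diy(vps_eur=7.0, maintenance_hours=4.0, hourly_rate_eur=80.0):
    return vps_eur + maintenance_hours * hourly_rate_eur

def monthly_cost_managed(plan_eur=99.0, maintenance_hours=0.5, hourly_rate_eur=80.0):
    return plan_eur + maintenance_hours * hourly_rate_eur

print(f"DIY: {monthly_cost_diy():.0f} EUR/month")          # ~327 with these defaults
print(f"Managed: {monthly_cost_managed():.0f} EUR/month")  # ~139 with these defaults
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;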

&lt;h2&gt;
  
  
  The Honest Crossover
&lt;/h2&gt;

&lt;p&gt;Here's the framework I use now.&lt;/p&gt;

&lt;p&gt;If the site serves content without fighting back, run Playwright on a cheap VPS and don't overthink it. Most documentation sites, blogs, public APIs with HTML endpoints, government data portals, these all scrape trivially. You don't need a service for this. You need a cron job.&lt;/p&gt;

&lt;p&gt;If the site mildly fights back (basic rate limiting, simple CAPTCHAs), Firecrawl or Browserless gets you past it without building infrastructure. Firecrawl if you just want the content as markdown. Browserless if you need to interact with the page first.&lt;/p&gt;

&lt;p&gt;If you need browser automation at scale with good developer ergonomics and you don't want to manage servers, Browserbase plus Stagehand is the strongest package. The framework is genuinely good and the infrastructure is solid.&lt;/p&gt;

&lt;p&gt;If you want the same capability but on your own terms and your own hardware, Steel Browser gives you that. Docker, your VPS, your rules.&lt;/p&gt;

&lt;p&gt;If the site actively fights back with sophisticated bot detection and you need data from it badly enough to justify the spend, Bright Data is the only game in town that consistently wins against top-tier anti-bot systems. That 400M IP network exists for a reason.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Comparison Grid
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Firecrawl&lt;/th&gt;
&lt;th&gt;Browserbase&lt;/th&gt;
&lt;th&gt;Steel&lt;/th&gt;
&lt;th&gt;Bright Data&lt;/th&gt;
&lt;th&gt;Browserless&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;What it is&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Web content API for LLMs&lt;/td&gt;
&lt;td&gt;Remote browser infra&lt;/td&gt;
&lt;td&gt;Self-hostable browser API&lt;/td&gt;
&lt;td&gt;Enterprise proxy + browser&lt;/td&gt;
&lt;td&gt;Chrome-as-a-service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitHub stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;103k&lt;/td&gt;
&lt;td&gt;N/A (Stagehand: 50k+)&lt;/td&gt;
&lt;td&gt;6.8k&lt;/td&gt;
&lt;td&gt;N/A (CLI only)&lt;/td&gt;
&lt;td&gt;12.9k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Licence&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Apache 2.0 (limited self-host)&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;td&gt;SSPL-1.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self-hosted?&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Partial (no Fire-engine)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes, full&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes, with licence&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Free tier&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;500 lifetime credits&lt;/td&gt;
&lt;td&gt;1 browser hour&lt;/td&gt;
&lt;td&gt;100 hours/month&lt;/td&gt;
&lt;td&gt;~$25 trial credits&lt;/td&gt;
&lt;td&gt;1k units&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cheapest paid&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$16/month&lt;/td&gt;
&lt;td&gt;$20/month&lt;/td&gt;
&lt;td&gt;$29/month&lt;/td&gt;
&lt;td&gt;$499/month&lt;/td&gt;
&lt;td&gt;$25/month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Anti-bot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cloud only (Fire-engine)&lt;/td&gt;
&lt;td&gt;Stealth mode + proxies&lt;/td&gt;
&lt;td&gt;Stealth plugins&lt;/td&gt;
&lt;td&gt;400M+ residential IPs&lt;/td&gt;
&lt;td&gt;BrowserQL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CAPTCHA solving&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not included&lt;/td&gt;
&lt;td&gt;Free on all plans&lt;/td&gt;
&lt;td&gt;$3-3.50 per 1k solves&lt;/td&gt;
&lt;td&gt;Included&lt;/td&gt;
&lt;td&gt;10 units per solve&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best for&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Content ingestion&lt;/td&gt;
&lt;td&gt;Agent automation&lt;/td&gt;
&lt;td&gt;Self-hosted infra&lt;/td&gt;
&lt;td&gt;Heavily protected sites&lt;/td&gt;
&lt;td&gt;Flexible automation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Not great for&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Multi-step interactions&lt;/td&gt;
&lt;td&gt;Budget-conscious at scale&lt;/td&gt;
&lt;td&gt;Framework integrations&lt;/td&gt;
&lt;td&gt;Budget-conscious anything&lt;/td&gt;
&lt;td&gt;Building a competing SaaS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Part 4 will cover the other side of this coin: tools that extract structured content &lt;em&gt;without&lt;/em&gt; running a browser at all. Readability algorithms, LLM-based extraction, and the surprisingly effective approach of just asking the search engine for the data instead of scraping the source. Sometimes the best browser is no browser.&lt;/p&gt;

&lt;p&gt;If you missed the earlier parts, &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part1-low-level"&gt;Part 1&lt;/a&gt; covers the low-level browser tools and &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part2-frameworks"&gt;Part 2&lt;/a&gt; digs into the agent frameworks and SDKs. And if you want to see some of these tools in action, the &lt;a href="https://dev.to/blog/vibe-coding-arc-of-2025"&gt;vibe coding arc&lt;/a&gt; post has examples of agents actually using browser tools to ship code.&lt;/p&gt;

</description>
      <category>firecrawlai</category>
      <category>browserbase</category>
      <category>steelbrowser</category>
      <category>brightdatascraping</category>
    </item>
    <item>
      <title>Browser Tools for AI Agents Part 4: Skip the Browser, Save 80% on Tokens</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:49:16 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-4-skip-the-browser-save-80-on-tokens-304c</link>
      <guid>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-4-skip-the-browser-save-80-on-tokens-304c</guid>
      <description>&lt;p&gt;Right. Last orders, everyone. Final part. And I'm going to open with something that might sting a bit if you've been feeding raw HTML into your language models like I was six months ago.&lt;/p&gt;

&lt;p&gt;You are spending roughly 80% of your token budget on HTML tags that your model cannot use. Not "doesn't use well." Cannot use. The &lt;code&gt;&amp;lt;div class="sidebar-widget-trending-topics-container-v3"&amp;gt;&lt;/code&gt; that wraps your actual content? That's tokens. The seventeen layers of nested &lt;code&gt;&amp;lt;span&amp;gt;&lt;/code&gt; from whatever React component tree generated the page? Tokens. The inline SVG for the site's logo that gets base64-encoded into the DOM? You guessed it. Tokens. And you're paying for every single one.&lt;/p&gt;

&lt;p&gt;This is the part of the series where we put the browser down. Quick reminder on what we're solving for: giving coding agents the right tools for a closed loop of research, build, and validate. The "research" part is where content extraction shines. If your agent just needs to &lt;em&gt;read&lt;/em&gt; a page, not click buttons on it, you don't need a browser at all. (We'll cover native app validation for React Native, Capacitor, and Swift in a separate series.)&lt;/p&gt;

&lt;p&gt;So the question nobody seems to ask early enough in their agent architecture: do I actually need a browser for this?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Maths That'll Make You Wince
&lt;/h2&gt;

&lt;p&gt;I ran Cloudflare's own documentation page through two pipelines. Raw HTML: 9,541 tokens. Cleaned markdown: 1,678 tokens. That's an 82% reduction. On a single page. Their blog post about the feature itself? 16,180 tokens as HTML, 3,150 as markdown. Same story. Eighty percent gone.&lt;/p&gt;

&lt;p&gt;Now scale it.&lt;/p&gt;

&lt;p&gt;An agent fetching 50 pages a day (not unusual for a research agent, a RAG pipeline, a news summariser) is looking at roughly 35 million tokens a month if you're passing raw HTML. At GPT-4o pricing that's real money. The markdown equivalent of those same 50 pages? A fraction. We're talking about saving $60-70 a month on a single agent's browsing habits. Scale to a fleet of agents across a team and you're into four figures annually, burned on &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt; tags that encode zero semantic information.&lt;/p&gt;
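
&lt;p&gt;You can reproduce this comparison on any page in a few lines. A sketch using tiktoken for counting and the Jina Reader prefix (introduced properly below) for the markdown side; the URL is a placeholder and the exact percentage will vary page to page:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Measure the HTML-vs-markdown token gap for any URL.
# Uses the r.jina.ai prefix (introduced below) for the markdown side.
import requests
import tiktoken

URL = "https://developers.cloudflare.com/"  # swap in any page you like
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -&amp;gt; int:
    # disallowed_special=() so stray special-token strings in HTML don't raise
    return len(enc.encode(text, disallowed_special=()))

html_tokens = count_tokens(requests.get(URL, timeout=30).text)
md_tokens = count_tokens(requests.get(f"https://r.jina.ai/{URL}", timeout=60).text)

print(f"raw HTML: {html_tokens:,} tokens")
print(f"markdown: {md_tokens:,} tokens ({1 - md_tokens / html_tokens:.0%} saved)")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;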

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Why is the reduction so consistent at ~80%? Markdown's syntax is minimal by design. A heading is &lt;code&gt;#&lt;/code&gt;. A table cell boundary is &lt;code&gt;|&lt;/code&gt;. Nearly every byte of markdown is the content itself, while HTML wraps every element in tags, attributes, and class names, and that overhead scales with the size of the page.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This isn't a micro-optimisation. This is the difference between an agent architecture that scales and one that bankrupts your API budget before you've even got to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hosted Lot: Let Someone Else Do the Parsing
&lt;/h2&gt;

&lt;p&gt;Two services have emerged as the obvious choices when you want clean markdown without running your own extraction pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;markdown.new&lt;/strong&gt; is Cloudflare's entry. It converts any URL to markdown using their "Markdown for Agents" infrastructure, which sits at the CDN layer. When a Cloudflare-proxied site receives a request with &lt;code&gt;Accept: text/markdown&lt;/code&gt;, the conversion happens at the edge before the response even leaves their network. No browser. No rendering. Just content negotiation at the HTTP level. For sites that aren't on Cloudflare (or haven't enabled the feature), they fall back to their Browser Rendering API, which spins up a headless browser on their infrastructure, not yours.&lt;/p&gt;

&lt;p&gt;The free version lets you crawl up to 500 pages from a single domain with configurable depth up to 10 levels. The API is straightforward. Stick the URL in, get markdown out. The response even includes an &lt;code&gt;x-markdown-tokens&lt;/code&gt; header telling you exactly how many tokens you're about to feed your model, which is a thoughtful touch for context window planning.&lt;/p&gt;
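
&lt;p&gt;In practice that's just a request header. A sketch; whether you actually get markdown back depends on the site being Cloudflare-proxied with the feature switched on:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Ask for markdown via content negotiation. Only works when the site is
# Cloudflare-proxied with Markdown for Agents enabled; otherwise you get HTML.
import requests

resp = requests.get(
    "https://developers.cloudflare.com/",
    headers={"Accept": "text/markdown"},
    timeout=30,
)
print(resp.headers.get("content-type"))
print(resp.headers.get("x-markdown-tokens"))  # token count when served as markdown
print(resp.text[:300])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;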

&lt;p&gt;Currently available on Pro, Business, and Enterprise Cloudflare plans at no extra cost. Rate limits aren't explicitly documented, which either means they're generous or they haven't hit a scale problem yet. My money's on the former given it's Cloudflare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jina Reader&lt;/strong&gt; takes a different approach. Prefix any URL with &lt;code&gt;https://r.jina.ai/&lt;/code&gt; and you get back LLM-friendly markdown. That's it. That's the API. The simplicity is almost offensive. Behind the scenes it's doing proper browser rendering (they built a system handling 10 million requests and 100 billion tokens daily at peak), with JavaScript execution, image captioning via vision models, and PDF extraction thrown in.&lt;/p&gt;

&lt;p&gt;Free tier gives you 20 RPM with no API key, or 500 RPM with a free key that comes loaded with 10 million tokens. Paid tiers scale to 5,000 RPM with 500 concurrent requests. Average latency sits around 7.9 seconds for a standard page, 2.5 for search queries through their &lt;code&gt;s.jina.ai&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;Quick note on Jina's corporate history since I've seen confusion about this: Jina AI was an independent company founded by Han Xiao, Nan Wang, and Bing He. They raised $39M over two rounds. Elastic completed their acquisition in October 2025, but the Reader product predates that by over a year. The product isn't "from Elastic" any more than Instagram is "from Meta" in terms of its DNA.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; markdown.new is the fast food drive-through. Your order's ready before you've finished asking. Jina Reader is the proper sit-down restaurant. Takes a bit longer, but they'll handle the weird dietary requirements (JavaScript-heavy pages, PDFs, structured data extraction) that the drive-through can't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; If your target sites are on Cloudflare and have enabled the feature, markdown.new is nearly instant and free. For everything else, Jina Reader is the default choice. I reach for Jina first because the URL-prefix API means I can test it in my browser's address bar, which satisfies the lazy-developer part of my brain that doesn't want to set up curl commands.&lt;/p&gt;

&lt;h2&gt;
  
  
  Self-Hosted: When You Want to Own the Pipeline
&lt;/h2&gt;

&lt;p&gt;Sometimes you can't (or shouldn't) send your URLs to a third-party service. Compliance. Air-gapped environments. Cost at scale. Whatever the reason, there are solid self-hosted options, and the benchmarks actually tell us which one's best.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trafilatura&lt;/strong&gt; is the quiet champion. It's a Python library and CLI tool that consistently tops the extraction benchmarks, and on the balance of precision and recall, nothing else comes close.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;F-Score&lt;/th&gt;
&lt;th&gt;Precision&lt;/th&gt;
&lt;th&gt;Recall&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Trafilatura (standard)&lt;/td&gt;
&lt;td&gt;0.909&lt;/td&gt;
&lt;td&gt;0.914&lt;/td&gt;
&lt;td&gt;0.904&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ReadabiliPy&lt;/td&gt;
&lt;td&gt;0.874&lt;/td&gt;
&lt;td&gt;0.877&lt;/td&gt;
&lt;td&gt;0.870&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;News-Please&lt;/td&gt;
&lt;td&gt;0.808&lt;/td&gt;
&lt;td&gt;0.898&lt;/td&gt;
&lt;td&gt;0.734&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Readability-lxml&lt;/td&gt;
&lt;td&gt;0.801&lt;/td&gt;
&lt;td&gt;0.891&lt;/td&gt;
&lt;td&gt;0.729&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Goose3&lt;/td&gt;
&lt;td&gt;0.793&lt;/td&gt;
&lt;td&gt;0.934&lt;/td&gt;
&lt;td&gt;0.690&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Newspaper3k&lt;/td&gt;
&lt;td&gt;0.713&lt;/td&gt;
&lt;td&gt;0.895&lt;/td&gt;
&lt;td&gt;0.593&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Look at that recall column. Goose3 has marginally better precision but misses 31% of the actual content. Newspaper3k misses over 40%. Trafilatura finds 90% of the content while keeping precision above 91%. The ScrapingHub benchmark on 640,000 pages confirmed it. The Bevendorff et al. 2023 evaluation confirmed it again. This thing just works.&lt;/p&gt;

&lt;p&gt;It outputs to TXT, markdown, CSV, JSON, HTML, XML, even TEI if you're into that. It handles sitemaps, RSS feeds, parallel processing, and metadata extraction (author, dates, categories). The documentation calls it "robust and reasonably fast," which is the kind of British understatement I respect. No JavaScript rendering though. If the content isn't in the initial HTML response, Trafilatura won't find it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install trafilatura&lt;/code&gt; and you're away. Five lines of Python to extract any article on the web. It's the kind of tool that makes you wonder why you ever bothered with anything more complex.&lt;/p&gt;
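
&lt;p&gt;Here are those five lines, give or take. One caveat: the markdown output format landed in recent versions, so older installs may only offer the txt/JSON/XML outputs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# The whole pipeline, no browser involved: fetch, extract, convert.
import trafilatura

downloaded = trafilatura.fetch_url("https://example.com/some-article")
markdown = trafilatura.extract(downloaded, output_format="markdown", include_links=True)
print(markdown)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;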

&lt;p&gt;&lt;strong&gt;Readability&lt;/strong&gt; is the engine behind Firefox's Reader View. You've used it without knowing. Click that little document icon in Firefox's address bar that strips a page down to just the article? That's Readability. Mozilla open-sourced the JavaScript library, and there's a Python port called readability-lxml.&lt;/p&gt;

&lt;p&gt;It's heuristic-based, using hand-crafted rules to identify main content. The approach is well-tested (hundreds of millions of Firefox users are its QA team, effectively) but it can be conservative. It would rather miss a paragraph than include a sidebar, which means it sometimes clips content you actually wanted. F-score of 0.801 in the benchmarks, which is decent but noticeably behind Trafilatura.&lt;/p&gt;

&lt;p&gt;The JavaScript version is the canonical one. The Python port is maintained but not always in lockstep. If you're in a Node.js environment, it's the natural choice. If you're in Python, Trafilatura is the better pick unless you specifically need Readability's conservative behaviour.&lt;/p&gt;
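
&lt;p&gt;The Python port is similarly terse. A sketch, assuming you've fetched the HTML yourself, because Readability only extracts, it doesn't fetch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# readability-lxml: hand it raw HTML, get the article portion back as HTML.
# It extracts; it doesn't convert to markdown. Pair with html2text for that.
import requests
from readability import Document

html = requests.get("https://example.com/some-article", timeout=30).text
doc = Document(html)
print(doc.title())
print(doc.summary()[:500])  # cleaned article HTML, not markdown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;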

&lt;p&gt;&lt;strong&gt;Defuddle&lt;/strong&gt; is the new kid, and it's interesting. Built by the team behind Obsidian's Web Clipper (so these folks have thought deeply about content extraction), it positions itself as a more forgiving alternative to Readability. Where Readability's conservatism sometimes strips useful content, Defuddle uses a multi-pass detection system that recovers when initial passes return nothing.&lt;/p&gt;

&lt;p&gt;Clever trick: it analyses a page's mobile CSS to identify elements that can be safely removed. Standardises footnotes, code blocks, and maths equations into consistent HTML structures before conversion. MIT-licensed, works in browsers, Node.js, and CLI. The project carries a "work in progress" warning, which I appreciate for its honesty. Not yet benchmarked in the same rigorous studies as Trafilatura and Readability, but the Hacker News thread was overwhelmingly positive, and the Obsidian pedigree means it's been battle-tested on the kind of weird, cluttered pages that trip up other extractors.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Utilities Drawer
&lt;/h2&gt;

&lt;p&gt;Two more tools that solve adjacent problems and are worth knowing about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;html2text&lt;/strong&gt; is the purist's choice. It converts HTML to markdown. That's it. No content extraction. No "find the article" heuristics. No boilerplate removal. You give it HTML, it gives you the markdown equivalent of that entire HTML, ads and navigation and cookie banners and all.&lt;/p&gt;

&lt;p&gt;This is useful when you've already done the extraction step (maybe with Readability or Defuddle) and just need the format conversion. Or when you're working with HTML you generated yourself and know is clean. Treating it as an extractor will give you exactly the markdown representation of every piece of junk on the page, which rather defeats the purpose.&lt;/p&gt;
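
&lt;p&gt;Which makes it the natural second stage after a proper extractor. A sketch of the Readability-then-html2text pairing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Stage 1: Readability finds the article. Stage 2: html2text converts it.
import html2text
import requests
from readability import Document

html = requests.get("https://example.com/some-article", timeout=30).text
article_html = Document(html).summary()

converter = html2text.HTML2Text()
converter.ignore_links = False  # keep links; set True for leaner output
print(converter.handle(article_html))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;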

&lt;p&gt;&lt;strong&gt;newspaper4k&lt;/strong&gt; is the successor to newspaper3k (which hadn't been updated since September 2020, which in Python library years is roughly the Cretaceous period). It's article-focused: downloads pages, extracts the main article content, and layers on NLP for summaries and keyword extraction. It posts an F-score of 0.949 with precision at 0.964 and recall at 0.934 on article extraction, numbers that reflect how tightly it's tuned to article-shaped pages. Good for news articles specifically. Less good for documentation pages, forums, or anything that doesn't look like a news article.&lt;/p&gt;
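
&lt;p&gt;The interface is the same download/parse/nlp dance that newspaper3k users will recognise. A sketch; the article URL is a placeholder and the &lt;code&gt;nlp()&lt;/code&gt; step needs NLTK data installed separately:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# newspaper4k keeps the familiar newspaper3k interface.
# article.nlp() needs NLTK corpora (punkt etc.) downloaded separately.
from newspaper import Article

article = Article("https://example.com/some-news-story")
article.download()
article.parse()
print(article.title, article.authors, article.publish_date)

article.nlp()  # summary and keyword extraction
print(article.summary)
print(article.keywords)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;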

&lt;h2&gt;
  
  
  The Decision Tree
&lt;/h2&gt;

&lt;p&gt;Here's the bit I wish someone had given me before I spent three weeks running Playwright for pages that didn't need it.&lt;/p&gt;

&lt;p&gt;Your agent needs web content. First question: does it need to &lt;em&gt;interact&lt;/em&gt; with the page? Click buttons, fill forms, scroll infinite feeds, navigate SPAs? If yes, you need a browser. Full stop. Go back to Parts 1-3 of this series.&lt;/p&gt;

&lt;p&gt;If no (and this is more often than you think), next question: is the target site on Cloudflare with Markdown for Agents enabled? If yes, just add &lt;code&gt;Accept: text/markdown&lt;/code&gt; to your request headers. Done. Fastest possible path. No extraction library, no third-party service, no browser.&lt;/p&gt;

&lt;p&gt;If the site isn't on Cloudflare, or you don't know: is it a one-off or are you building a pipeline? For quick, one-off grabs, prefix with &lt;code&gt;r.jina.ai/&lt;/code&gt; and move on with your life. For a proper pipeline where you want control, self-host Trafilatura. It handles the extraction and the format conversion in one step, and the benchmarks prove it's the best at both.&lt;/p&gt;

&lt;p&gt;If you're in a Node.js environment and prefer JavaScript, use Defuddle for extraction, then pipe through html2text (or Defuddle's own markdown output) for conversion.&lt;/p&gt;

&lt;p&gt;If you specifically need NLP features (article summaries, keyword extraction, author/date metadata) and your content is news articles, newspaper4k handles the whole chain.&lt;/p&gt;

&lt;p&gt;And if you're at scale (thousands of pages per hour), consider Cloudflare's Browser Rendering API as your extraction backend. You get their infrastructure handling the headless browsers, and you just receive markdown. The crawl endpoint handles up to 500 pages per domain with async job tracking.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;There's a subtlety in the extraction-vs-browser decision that catches people. Some sites look static but actually load content via JavaScript after the initial page load. A &lt;code&gt;&amp;lt;div id="content"&amp;gt;&amp;lt;/div&amp;gt;&lt;/code&gt; that gets populated by a framework after DOMContentLoaded. Your extractor fetches the HTML, sees the empty div, returns nothing useful. This is where tools like Jina Reader have an edge over pure extractors like Trafilatura: Jina renders the page in a real browser before extracting. If you're seeing empty or truncated results from a self-hosted extractor, the page is probably JS-rendered, and you need either a hosted service with browser rendering or your own headless browser feeding HTML to your extractor.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When Extraction Falls on Its Face
&lt;/h2&gt;

&lt;p&gt;Content extraction is brilliant for the 80% of web content that's articles, documentation, blog posts, and static pages. But it has hard limits.&lt;/p&gt;

&lt;p&gt;Single-page applications where the entire page is a JavaScript bundle that renders client-side. Your extractor will get a nearly empty HTML shell. Sites behind authentication or paywalls where you need to log in, handle cookies, maybe solve a CAPTCHA. That's browser territory. Anything requiring interaction: filling search forms, clicking "load more," navigating pagination that's handled by JavaScript rather than URL parameters.&lt;/p&gt;

&lt;p&gt;Infinite scroll pages. Dynamically loaded content triggered by viewport intersection observers. WebSocket-driven real-time content. Canvas-rendered data visualisations.&lt;/p&gt;

&lt;p&gt;For all of these, you need Parts 1-3 of this series. Playwright for the straightforward stuff. Patchright or Scrapling when stealth matters. The extraction tools in this post are for when the content already exists in the HTML and you just need to get it out clean and cheap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping the Whole Series
&lt;/h2&gt;

&lt;p&gt;Four parts. Hundreds of tools. Let me boil it down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 1&lt;/strong&gt; was Playwright and Puppeteer. The workhorses. If you need a browser and don't have specific stealth requirements, Playwright is the answer. Cross-browser, auto-waiting, proper debugging tools, Microsoft backing. Puppeteer if you're Chrome-only and prefer the Google ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 2&lt;/strong&gt; was the framework wars. browser-use for DOM-first agent loops, Stagehand for natural-language control with action caching ("click the login button" instead of &lt;code&gt;page.click('#btn-login-v3-container &amp;gt; div:nth-child(2) &amp;gt; button')&lt;/code&gt;), Skyvern for vision-first navigation. The new breed that treats browser automation as a language problem rather than a DOM problem. Alongside them sits the stealth layer: Patchright (Playwright with anti-detection patches) and Scrapling (Python-native with adaptive fingerprinting), for when sites actively try to detect and block automation. Cloudflare Turnstile, Akamai, PerimeterX, the whole bot-detection industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 3&lt;/strong&gt; covered the managed services. Firecrawl for LLM-ready content ingestion, Browserbase and Steel for cloud browsers, Browserless for flexible Chrome-as-a-service, Bright Data for the sites that fight back hardest. The platforms that run Chrome for you, for money, and the maths on when that money is well spent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 4&lt;/strong&gt;, this one, is about not using a browser at all. markdown.new, Jina Reader, Trafilatura, Defuddle. The realisation that for most content retrieval tasks, a browser is overkill and you're paying a literal tax on your tokens for the privilege.&lt;/p&gt;

&lt;p&gt;The meta-decision tree across all four parts goes like this. You need web content. Can you get it without a browser? (Check Part 4.) Probably yes for static content. If you do need a browser, is the site trying to block you? No: use Playwright (Part 1). Yes: reach for Patchright or Scrapling, or let a managed service handle the stealth for you (Part 3). Do you want to drive the browser with natural language instead of selectors? Use Stagehand or browser-use (Part 2).&lt;/p&gt;

&lt;p&gt;If I'm being honest about the distribution: maybe 60-70% of the web scraping tasks I see in agent architectures could use extraction instead of a browser. People reach for Playwright by default because it's what they know, and because "open a browser and get the page" is conceptually simple even when it's computationally expensive. The extraction tools in this post are less intuitive but wildly more efficient for the use cases they cover.&lt;/p&gt;

&lt;p&gt;The whole series exists because I got fed up watching agents burn through token budgets and compute resources on problems that had simpler solutions. The browser is a Swiss Army knife. Sometimes you need the whole knife. But if all you need is the blade, stop paying for the corkscrew and the tiny scissors.&lt;/p&gt;

&lt;p&gt;Now close the browser. Or rather, don't open one in the first place. That's the whole point.&lt;/p&gt;

</description>
      <category>llmtokenoptimization</category>
      <category>markdownnewcloudflare</category>
      <category>jinareaderai</category>
      <category>trafilaturaextraction</category>
    </item>
    <item>
      <title>Browser Tools for AI Agents Part 2: The Framework Wars (browser-use, Stagehand, Skyvern)</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:49:03 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-2-the-framework-wars-browser-use-stagehand-skyvern-4gn</link>
      <guid>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-2-the-framework-wars-browser-use-stagehand-skyvern-4gn</guid>
      <description>&lt;p&gt;In Part 1 we covered the browser infrastructure layer. The plumbing. Remote browsers, CDPs, the headless Chromium sprawl. If you missed it, go &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part1-low-level"&gt;read that first&lt;/a&gt; because this one builds directly on top of it.&lt;/p&gt;

&lt;p&gt;Now we're going up a level. The frameworks. The SDKs. The bits that actually let your agent &lt;em&gt;do things&lt;/em&gt; in a browser instead of just staring at one. Same lens as Part 1: this is about giving your coding agents the tools for a closed loop of research, implementation, and validation. Not consumer agentic browsers.&lt;/p&gt;

&lt;p&gt;And here's where it gets properly interesting, because there's a civil war happening in this space and most people haven't noticed yet. On one side: DOM-first. On the other: vision-first. And in the middle, a messy hybrid zone where the most pragmatic engineering is happening.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture Split That Defines Everything
&lt;/h2&gt;

&lt;p&gt;Before we get into individual tools, you need to understand the fundamental schism. When an AI agent needs to interact with a web page, it has to &lt;em&gt;see&lt;/em&gt; the page somehow. There are exactly three ways to do this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DOM-first&lt;/strong&gt; means you parse the HTML, extract the accessibility tree, convert it to text, and feed that text to your language model. The model reasons over structured data. It knows there's a button with aria-label "Submit" at coordinates roughly here. This is fast, token-efficient, and works brilliantly on well-structured modern web apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vision-first&lt;/strong&gt; means you take a screenshot and feed the raw image to a vision-capable model. The model looks at what a human would see. No DOM parsing. No accessibility trees. Just pixels. This is slower, more expensive per step, and burns through your token budget like a weekend in Shoreditch burns through your wallet. But it works on &lt;em&gt;anything&lt;/em&gt;. Canvas apps, PDFs rendered in-browser, legacy portals with obfuscated markup, pages where the DOM is a lying mess of nested iframes and shadow roots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid&lt;/strong&gt; means you do both and let the model pick. Or you use the DOM for structured elements and fall back to vision when the DOM goes sideways.&lt;/p&gt;

&lt;p&gt;Every framework in this post has picked a side in this war. That choice cascades into everything: speed, cost, accuracy, which websites it can handle, and what breaks when a site redesigns.&lt;/p&gt;

&lt;p&gt;Right. Let's meet the contenders.&lt;/p&gt;

&lt;h2&gt;
  
  
  browser-use: The Python Gorilla
&lt;/h2&gt;

&lt;p&gt;85,000 GitHub stars. 9,000 forks. MIT licensed. If you've Googled "AI browser automation" at any point in the last year, you've found browser-use. It's the 800-pound gorilla of this space.&lt;/p&gt;

&lt;p&gt;The architecture is honest and simple. Playwright underneath (so Chromium, Firefox, or WebKit). An agent loop on top that takes your natural language instruction, observes the browser state, picks an action, executes it, observes again. Rinse, repeat. It supports basically every LLM you'd want: Claude, GPT-4o, Gemini, their own ChatBrowserUse model, even local stuff via Ollama if you're feeling brave.&lt;/p&gt;
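
&lt;p&gt;The hello-world really is short. A sketch against the API shape at the time of writing; the task string is mine, and this library moves quickly enough that you should expect drift:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# browser-use hello world: one task, one agent loop, one LLM call per step.
# API shape as documented at the time of writing; the library moves quickly.
import asyncio
from browser_use import Agent
from langchain_openai import ChatOpenAI

async def main():
    agent = Agent(
        task="Find the current price of the cheapest Hetzner cloud VPS",
        llm=ChatOpenAI(model="gpt-4o"),
    )
    history = await agent.run()
    print(history)

asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;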

&lt;p&gt;The 89.1% WebVoyager score gets thrown around a lot in their marketing. And look, that's legitimately strong. On the benchmark. In production, your mileage varies depending on site complexity, auth flows, CAPTCHAs, and how much the target site hates bots.&lt;/p&gt;

&lt;p&gt;Here's the thing that nags at me though. Every single step in the agent loop requires an LLM call. Every observation, every action decision, every verification. You're paying per step. For a ten-step workflow on GPT-4o, that's maybe $0.15-0.30. Run that a thousand times a day for a client and your CFO starts asking questions. Run it ten thousand times and your CFO starts updating their CV.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;browser-use's core loop is a classic ReAct (Reason-Act) pattern. The agent receives the current page state as text (DOM snapshot plus visible elements), the model generates a thought and action, the action executes, and the new state feeds back in. The &lt;code&gt;@tools.action()&lt;/code&gt; decorator lets you extend this with custom actions, which is where it gets genuinely flexible. But the fundamental constraint remains: every cycle is a round-trip to your LLM provider. There's no caching, no replay, no "I've done this before so let me skip the reasoning." Every run pays full price. Their ChatBrowserUse model at $0.20 per million input tokens tries to address this, but you're still paying per step.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; a really smart intern who needs to phone their mentor before every decision. Reliable? Yes. Expensive at scale? Also yes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stagehand: The TypeScript Caching Play
&lt;/h2&gt;

&lt;p&gt;21,800 stars. Browserbase's baby. TypeScript-native. And here's where the economics get interesting, because the Stagehand team figured out something that the browser-use crowd seems to be ignoring.&lt;/p&gt;

&lt;p&gt;Caching.&lt;/p&gt;

&lt;p&gt;Stagehand v3 is a proper architectural rethink. They ripped out Playwright entirely and went straight to CDP (Chrome DevTools Protocol). The result: 44% faster on shadow DOM and iframe interactions. But speed isn't the headline. The headline is what they do with repeated actions.&lt;/p&gt;

&lt;p&gt;When Stagehand encounters a page for the first time, it does the same thing everyone else does: calls the LLM, figures out what to click, where to type, what to extract. But then it &lt;em&gt;caches that mapping&lt;/em&gt;. Next time it hits the same page (or a sufficiently similar one), it replays the cached actions without calling the LLM at all. Zero inference cost. Zero latency.&lt;/p&gt;

&lt;p&gt;Think about what this means for production workloads. Your agent fills out the same insurance form fifty times a day? The first run costs money. Runs two through fifty are basically free. The amortised cost per task drops off a cliff.&lt;/p&gt;

&lt;p&gt;The API surface is dead simple: &lt;code&gt;act()&lt;/code&gt; for single actions, &lt;code&gt;extract()&lt;/code&gt; for structured data extraction (with Zod schema validation, because TypeScript), and &lt;code&gt;agent()&lt;/code&gt; for multi-step flows. The self-healing bit is clever too. If the DOM shifts and a cached action fails, Stagehand automatically re-engages the LLM to figure out the new mapping, caches that, and carries on.&lt;/p&gt;

&lt;p&gt;The catch? Stagehand is TypeScript. If your agent stack is Python (and statistically, it probably is), you're either wrapping it in a subprocess, running a sidecar service, or rewriting chunks of your orchestration. Not a deal-breaker, but it's friction.&lt;/p&gt;

&lt;p&gt;Also: Stagehand is built by Browserbase. The open-source SDK works locally, but the &lt;em&gt;intended&lt;/em&gt; deployment path is their cloud browser infrastructure. The SDK is the on-ramp. The cloud is the toll road. That's fine and transparent and honestly good engineering, but know what you're signing up for.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;The v3 CDP engine is the real technical win here. Playwright was designed as a testing framework that happens to automate browsers. CDP is the raw protocol. By going native, Stagehand eliminated Playwright's abstraction layer and its associated round-trips. Each WebSocket message goes straight to the browser. For iframes and shadow roots (which require frame-scoped routing), this cuts latency nearly in half. The caching layer sits on top: it hashes page structure and action descriptions, stores successful DOM-to-action mappings, and replays them deterministically. When a replay fails (because the site changed), it falls back to LLM inference and updates the cache. It's the same pattern as a JIT compiler: interpret first, compile the hot paths.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
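
&lt;p&gt;The pattern is worth internalising even if you never touch Stagehand. Here's a generic sketch of cache-or-infer, my rendering of the idea rather than Stagehand's actual internals:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Generic sketch of the cache-or-infer pattern. Not Stagehand's internals;
# just the shape: hash page + instruction, replay if seen, infer and store if not.
import hashlib

action_cache: dict[str, object] = {}

def cache_key(page_structure: str, instruction: str) -&amp;gt; str:
    return hashlib.sha256(f"{page_structure}|{instruction}".encode()).hexdigest()

def run_step(page_structure, instruction, infer_with_llm, execute):
    key = cache_key(page_structure, instruction)
    action = action_cache.get(key)
    if action is not None and execute(action):
        return action                      # hot path: zero LLM calls
    action = infer_with_llm(page_structure, instruction)  # cold path or stale cache
    action_cache[key] = action             # self-heal: store the new mapping
    execute(action)
    return action
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;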

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; the developer who writes a script for anything they do more than twice. Lazy in the best possible way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skyvern: The Vision Bet
&lt;/h2&gt;

&lt;p&gt;21,000 stars. The one that looks at screenshots instead of parsing HTML.&lt;/p&gt;

&lt;p&gt;Skyvern's thesis is that the DOM is a lie. And honestly? They're not entirely wrong. Modern web apps are a jungle of React virtual DOMs, Web Components, shadow roots, iframes-within-iframes, canvas elements, and SVGs doing things SVGs were never meant to do. Parsing all of that reliably is a mug's game.&lt;/p&gt;

&lt;p&gt;So Skyvern takes a screenshot. Feeds it to a vision-capable model. The model sees what a human would see: buttons, forms, navigation, text. No DOM parsing needed. No accessibility tree. Just pixels.&lt;/p&gt;

&lt;p&gt;The Skyvern 2.0 architecture is a three-phase agent loop: Planner decomposes your objective into sub-goals, Actor executes individual actions on websites, Validator confirms success and triggers replanning if something went wrong. This Planner-Actor-Validator cycle pushed their WebVoyager score from about 45% (v1) to 85.85% (v2). That's a properly impressive jump, and they publish full evaluation traces at eval.skyvern.com, which is more transparency than most competitors offer.&lt;/p&gt;
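
&lt;p&gt;Strip away the vision machinery and the control flow is a short loop. A sketch of the Planner-Actor-Validator shape, not Skyvern's actual implementation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of the Planner-Actor-Validator shape, not Skyvern's actual code.
# plan(), act() and validate() stand in for model-backed calls.
def run(objective: str, plan, act, validate, max_replans: int = 3) -&amp;gt; bool:
    for _ in range(max_replans):
        for sub_goal in plan(objective):     # Planner: decompose the objective
            act(sub_goal)                    # Actor: execute steps on the page
        ok, feedback = validate(objective)   # Validator: did it actually happen?
        if ok:
            return True
        objective = f"{objective} (previous attempt failed: {feedback})"
    return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;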

&lt;p&gt;Here's where I'd use Skyvern over the DOM-first tools: legacy enterprise portals (those GWT monstrosities from 2008 that HR still makes you use), canvas-heavy applications, sites with aggressively obfuscated markup, anything where the visual layout is the only reliable source of truth. For a standard modern React app with clean semantic HTML? DOM-first tools will be faster and cheaper.&lt;/p&gt;

&lt;p&gt;The cost issue is real. Vision model calls are expensive. A screenshot through GPT-4o costs more than a text-based DOM snapshot, and Skyvern needs screenshots at every step. Their cloud product bundles anti-bot detection, proxy rotation, CAPTCHA solving, and parallel execution, which is where the value proposition actually lives. The self-hosted path (pip install or Docker) is available but you're on your own for the infrastructure bits.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Skyvern's "swarm of agents" architecture is worth understanding. Rather than a single monolithic agent, each phase (Plan, Act, Validate) can run different model configurations. The Planner might use GPT-4o for complex reasoning while the Actor uses GPT-4o-mini for cheaper per-step execution. The Validator sits independently and confirms outcomes, triggering replanning when the Actor's actions didn't actually work (a surprisingly common failure mode in browser automation where "click succeeded" doesn't mean "the thing you wanted to happen actually happened"). The 85.85% WebVoyager score was achieved on cloud browsers with production-representative conditions, not cosy local setups with safe IP addresses.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; hiring a human contractor to manually test your site. More expensive per task, but they can handle anything you throw at them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Notte: The Full-Stack Edge Play
&lt;/h2&gt;

&lt;p&gt;1,900 stars. YC S25. SSPL licensed. And here's where you need to pay attention to the fine print.&lt;/p&gt;

&lt;p&gt;Notte's pitch is that it's the full-stack version of what everyone else gives you in pieces. Browser infrastructure, agent framework, and deployment runtime, all in one package. The compute runs next to the browser. Zero-latency automation because there's no network hop between your agent logic and the browser instance.&lt;/p&gt;

&lt;p&gt;The hybrid approach is smart: use Playwright-compatible scripting for deterministic steps, engage the LLM only when you need reasoning or adaptability. They claim this cuts costs by 50%+ versus pure-agent approaches, and the maths checks out. If half your workflow is "click this specific button, fill this specific field" and only the other half requires actual reasoning, why pay for LLM inference on the predictable bits?&lt;/p&gt;

&lt;p&gt;Their benchmark numbers (79% LLM evaluation accuracy, 47 seconds per task, 96.6% reliability) look solid for an early-stage tool. The Patchright browser backend (a Playwright fork) means you get stealth capabilities out of the box: CAPTCHA solving, anti-detection, proxy rotation.&lt;/p&gt;

&lt;p&gt;But. That SSPL license. Server Side Public License. This is the MongoDB license. It means if you offer Notte's functionality as a service to third parties, you must open-source your entire stack. For internal tools, fine. For a SaaS product that wraps Notte? You need to talk to their commercial licensing team. This isn't a small detail. It changes the entire build-vs-buy calculation for startups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Notte is doing interesting engineering and the full-stack approach solves real deployment pain. But read the license before you build your business on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  expect: The Categorical Misfit (and Something I Use Daily)
&lt;/h2&gt;

&lt;p&gt;3,000 stars. FSL-1.1-MIT license (functional source license). And this one is fundamentally different from everything else on this page.&lt;/p&gt;

&lt;p&gt;Quick note before I get into it: expect probably belongs in Part 1 more than here. It's closer to a low-level validation tool than a framework. But it also spans both spaces because it orchestrates an agent that drives a browser, which puts it in framework territory. I've stuck it here because the comparison with browser-use and Stagehand is the thing people keep getting wrong, and this felt like the right place to set that straight.&lt;/p&gt;

&lt;p&gt;expect is not a browser automation framework. It doesn't help your agent browse the web. It doesn't fill forms or scrape data or navigate portals.&lt;/p&gt;

&lt;p&gt;expect is a &lt;em&gt;testing agent&lt;/em&gt; that uses a browser. And it's become part of my daily workflow.&lt;/p&gt;

&lt;p&gt;The workflow goes like this: you're building a web app. You make changes. You run &lt;code&gt;expect&lt;/code&gt;. It reads your git diff, generates a test plan based on what you changed, shows you the plan in an interactive TUI, and then executes those tests in a real browser using Playwright underneath. If a test fails, your coding agent gets the failure report and can attempt to fix the issue. It records videos of every test run for debugging.&lt;/p&gt;

&lt;p&gt;The key distinction is the direction of agency. With browser-use or Stagehand, your &lt;em&gt;agent&lt;/em&gt; uses a browser to accomplish a &lt;em&gt;task&lt;/em&gt;. With expect, a &lt;em&gt;testing agent&lt;/em&gt; uses a browser to validate your &lt;em&gt;code&lt;/em&gt;. The browser is a verification environment, not an action environment.&lt;/p&gt;

&lt;p&gt;It integrates with Claude Code, Copilot, Codex, Gemini CLI, Cursor. The cookie management is clever: it extracts your real browser sessions so tests run with your actual auth state, no mock login flows needed.&lt;/p&gt;

&lt;p&gt;I bring this up because people keep comparing expect to browser-use in the same breath, and they're solving completely different problems. It's like comparing a race car to a crash test rig. Both involve cars going fast. The purpose is entirely different. I use expect as the validation step in my agent's closed loop: the agent writes code, expect checks whether the UI actually works, and if it doesn't, the agent gets the failure report and fixes it. That's the "validate" in the research-plan-implement-validate cycle this whole series is about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; a QA engineer who reads your pull request and immediately goes to test the bits you changed. You didn't ask. They just did it. Brilliant.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Comparison Table Nobody Asked For (But Everyone Needs)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;browser-use&lt;/th&gt;
&lt;th&gt;Stagehand&lt;/th&gt;
&lt;th&gt;Skyvern&lt;/th&gt;
&lt;th&gt;Notte&lt;/th&gt;
&lt;th&gt;expect&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Approach&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;DOM-first&lt;/td&gt;
&lt;td&gt;DOM + cache&lt;/td&gt;
&lt;td&gt;Vision-first&lt;/td&gt;
&lt;td&gt;Hybrid&lt;/td&gt;
&lt;td&gt;Testing agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;TypeScript&lt;/td&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;TypeScript&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;85k&lt;/td&gt;
&lt;td&gt;21.8k&lt;/td&gt;
&lt;td&gt;21k&lt;/td&gt;
&lt;td&gt;1.9k&lt;/td&gt;
&lt;td&gt;3k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;SSPL&lt;/td&gt;
&lt;td&gt;FSL-1.1-MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Accuracy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;89.1% (WebVoyager)&lt;/td&gt;
&lt;td&gt;No public benchmark&lt;/td&gt;
&lt;td&gt;85.85% (WebVoyager)&lt;/td&gt;
&lt;td&gt;79% (LLM eval)&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Speed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~3s/step&lt;/td&gt;
&lt;td&gt;1-3s (cached: &amp;lt;100ms)&lt;/td&gt;
&lt;td&gt;Slow (image encoding)&lt;/td&gt;
&lt;td&gt;&amp;lt;50ms latency&lt;/td&gt;
&lt;td&gt;Depends on agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Token efficiency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Heavy (LLM every step)&lt;/td&gt;
&lt;td&gt;Good (cached = zero LLM)&lt;/td&gt;
&lt;td&gt;Worst (vision tokens)&lt;/td&gt;
&lt;td&gt;Good (LLM only when needed)&lt;/td&gt;
&lt;td&gt;Moderate (plan gen)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tokens per 10 steps&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~7k-15k&lt;/td&gt;
&lt;td&gt;~2k-5k (first), ~0 (cached)&lt;/td&gt;
&lt;td&gt;~30k-50k (images)&lt;/td&gt;
&lt;td&gt;~3k-8k&lt;/td&gt;
&lt;td&gt;~5k (plan phase)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost per task&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$0.02-0.30&lt;/td&gt;
&lt;td&gt;$0.01-0.10 (drops on repeat)&lt;/td&gt;
&lt;td&gt;$0.10-0.50&lt;/td&gt;
&lt;td&gt;$0.02-0.15&lt;/td&gt;
&lt;td&gt;$0.01-0.05&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LLM per step&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Always&lt;/td&gt;
&lt;td&gt;First run only&lt;/td&gt;
&lt;td&gt;Always (vision)&lt;/td&gt;
&lt;td&gt;Only for reasoning&lt;/td&gt;
&lt;td&gt;Per test gen&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self-hosted&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes (Docker)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cloud option&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;browser-use Cloud&lt;/td&gt;
&lt;td&gt;Browserbase&lt;/td&gt;
&lt;td&gt;Skyvern Cloud&lt;/td&gt;
&lt;td&gt;Notte Console&lt;/td&gt;
&lt;td&gt;Coming soon&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Anti-bot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cloud only&lt;/td&gt;
&lt;td&gt;Via Browserbase&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Caching&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (auto-replay)&lt;/td&gt;
&lt;td&gt;Planned&lt;/td&gt;
&lt;td&gt;Partial (scripting)&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best for&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;General agent tasks&lt;/td&gt;
&lt;td&gt;Repeated workflows&lt;/td&gt;
&lt;td&gt;Legacy/visual sites&lt;/td&gt;
&lt;td&gt;Full-stack deploy&lt;/td&gt;
&lt;td&gt;Code verification&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Note: Vercel agent-browser is covered in &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part1-low-level"&gt;Part 1&lt;/a&gt; alongside the other low-level tools. It's a CLI wrapper around Playwright, not a framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Decision Tree
&lt;/h2&gt;

&lt;p&gt;Start here. What are you actually building?&lt;/p&gt;

&lt;p&gt;If your agent needs to &lt;em&gt;test code changes in a browser&lt;/em&gt;, stop. That's expect. None of the others do this.&lt;/p&gt;

&lt;p&gt;If your agent needs to &lt;em&gt;browse the web&lt;/em&gt; and your primary constraint is token budget inside a tool-calling agent, check out Vercel agent-browser in &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part1-low-level"&gt;Part 1&lt;/a&gt;. It's a low-level CLI tool, not a framework.&lt;/p&gt;

&lt;p&gt;If you're building &lt;em&gt;repeated workflows&lt;/em&gt; (same sites, same forms, many times a day), Stagehand's caching means your costs approach zero after the first run. If you can stomach TypeScript.&lt;/p&gt;

&lt;p&gt;If you need to handle &lt;em&gt;visual-heavy or legacy sites&lt;/em&gt; where the DOM is unreliable, Skyvern's vision approach is the pragmatic choice. Pay more per step, get reliability on weird sites.&lt;/p&gt;

&lt;p&gt;If you want the &lt;em&gt;largest community and ecosystem&lt;/em&gt; with the most examples, browser-use is the safe bet. It's Python, it's everywhere, it works. You'll pay LLM inference on every step, but you'll rarely be the first person to hit any given problem.&lt;/p&gt;

&lt;p&gt;If you want &lt;em&gt;everything in one box&lt;/em&gt; and the SSPL license doesn't scare you, Notte's full-stack approach eliminates a lot of integration pain.&lt;/p&gt;

&lt;p&gt;And if the answer is "I need all of these capabilities in different parts of my system," then congratulations. You've arrived at the same conclusion most production teams reach: you compose multiple tools. Stagehand for repeated workflows, Skyvern for the weird legacy bits, expect for CI validation, and something like agent-browser or dev-browser from Part 1 for the lightweight observation layer. The frameworks are building blocks, not religions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Coming in Part 3
&lt;/h2&gt;

&lt;p&gt;All of these frameworks need to &lt;em&gt;run&lt;/em&gt; somewhere. In production, that means managed browser infrastructure: session pools, proxy rotation, anti-detection, CAPTCHA solving, geographic distribution. Part 3 covers the managed infra layer: Browserbase, Bright Data's Agent Browser, Steel, Hyperbrowser, and the rest of the "browsers-as-a-service" market that's appeared seemingly overnight.&lt;/p&gt;

&lt;p&gt;That's where the money is. And where the vendor lock-in lives.&lt;/p&gt;

&lt;p&gt;See you there.&lt;/p&gt;

</description>
      <category>browseruseai</category>
      <category>stagehandbrowser</category>
      <category>skyvernautomation</category>
      <category>aibrowserframeworkcomparison</category>
    </item>
    <item>
      <title>Browser Tools for AI Agents Part 1: Playwright, Puppeteer, and Why Your Agent Picked Playwright</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:48:49 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-1-playwright-puppeteer-and-why-your-agent-picked-playwright-k71</link>
      <guid>https://dev.to/stevengonsalvez/browser-tools-for-ai-agents-part-1-playwright-puppeteer-and-why-your-agent-picked-playwright-k71</guid>
      <description>&lt;p&gt;The first time I watched an AI agent drive a browser, I laughed. Out loud. Not because it was funny, but because it was so profoundly uncanny. Like watching a toddler who somehow knows how to file taxes. The cursor moved with purpose. It found form fields I would've struggled to locate myself. It waited patiently for a spinner to clear, then clicked a button that hadn't even rendered when the page first loaded.&lt;/p&gt;

&lt;p&gt;I sat there with my coffee going cold thinking: how does it know what to click? What is it even &lt;em&gt;seeing&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;Turns out, the answer to that question is the entire reason Playwright won the browser automation war for AI agents. And it's not the answer most people expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this series covers (and what it doesn't)
&lt;/h2&gt;

&lt;p&gt;Quick scope check before we crack on. This series is NOT about consumer agentic browsers like Comet, Dia, BrowserOS, ChatGPT Atlas, or Google's Project Mariner. Those are interesting products, but they're for end users browsing the web with AI assistance.&lt;/p&gt;

&lt;p&gt;This series is about something more specific: &lt;strong&gt;how does a software developer, working with coding agents, give those agents the right browser tools to build a fully closed loop system?&lt;/strong&gt; Research, plan, implement, validate. That whole cycle. And the "validate" bit, in most cases, means checking what a real user would see in a real browser.&lt;/p&gt;

&lt;p&gt;That's the lens. Every tool in this four-part series is evaluated through it. Can my agent use this to research a problem, build a solution, and then open a browser to verify it actually works from a user's perspective?&lt;/p&gt;

&lt;p&gt;(Native app validation for React Native, Capacitor, or Swift is a separate conversation. We'll get there.)&lt;/p&gt;

&lt;p&gt;One thing worth saying upfront: I've tried and used every tool in this series in some form. Not just read the README. Actually ran them, built things with them, hit the walls, found the workarounds. The ones that stuck in my daily workflow are dev-browser (for agent dev loops), Stagehand (for repeated workflows with caching), expect (for closed-loop test validation), and Playwright directly (for anything that needs to live in a CI pipeline). Everything else I've evaluated, formed opinions on, and moved on from or keep in the toolbox for specific situations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Playwright: The Main Event
&lt;/h2&gt;

&lt;p&gt;Right. Let's not faff about with history lessons. Playwright won. Every serious AI coding agent (Claude Code, Cursor, GitHub Copilot's agent mode) reaches for Playwright when it needs a browser. The question worth asking is &lt;em&gt;why&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It's not because Playwright is the fastest browser automation tool. It isn't. Puppeteer actually beats it by 15 to 20 percent on raw Chromium tasks because Puppeteer stays closer to the Chrome DevTools Protocol wire, exchanging roughly 11KB of websocket messages where Playwright sends 326KB for the same job. That's a proper gap.&lt;/p&gt;

&lt;p&gt;But speed on Chromium isn't the game agents are playing. Agents are playing a reliability and comprehension game. And Playwright absolutely smashes that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The accessibility tree trick.&lt;/strong&gt; This is the bit that matters. When Playwright's MCP server talks to an AI agent, it doesn't send screenshots. It doesn't dump raw HTML. It sends the browser's accessibility tree: a structured, semantic, text-based representation of the page. Roles, labels, states. A "Submit" button is &lt;code&gt;Role: button, Name: Submit&lt;/code&gt; regardless of whether the CSS class is &lt;code&gt;btn-primary&lt;/code&gt; or &lt;code&gt;xK7_submit_v3_final_FINAL&lt;/code&gt;. An accessibility snapshot averages 2 to 5KB. A screenshot of the same page runs 100KB or more. That's a 20x to 50x difference in token cost, and for an agent burning through context windows, that's the whole ballgame.&lt;/p&gt;

&lt;p&gt;ARIA roles and labels were designed for screen readers and assistive technology. Turns out they're perfect for AI agents too. Same problem, really: "tell me what's on this page in a way I can act on without seeing it."&lt;/p&gt;
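
&lt;p&gt;To make that concrete, here's roughly what grabbing that representation looks like in plain Playwright (a minimal sketch using the documented &lt;code&gt;ariaSnapshot()&lt;/code&gt; API; the URL is made up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { chromium } from "playwright";

async function main() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com/checkout");

  // The accessibility tree as structured text: roles, names, states.
  // A few KB instead of a 100KB+ screenshot.
  const snapshot = await page.locator("body").ariaSnapshot();
  console.log(snapshot); // e.g. '- button "Submit"', whatever the CSS class is

  await browser.close();
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;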

&lt;p&gt;&lt;strong&gt;Auto-wait.&lt;/strong&gt; Playwright waits for elements to be actionable before it tries to interact with them. Assertions retry automatically until conditions are met. No &lt;code&gt;sleep(2000)&lt;/code&gt;. No polling loops. No flaky race conditions. For an agent that can't eyeball the page and think "oh, that spinner's still going, I'll wait a sec," this is everything. The agent says "click the checkout button" and Playwright handles the timing. Sorted.&lt;/p&gt;
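
&lt;p&gt;A sketch of what that buys you (hypothetical app and selectors; the auto-waiting &lt;code&gt;click()&lt;/code&gt; and retrying assertion are stock Playwright behaviour):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { test, expect } from "@playwright/test";

test("checkout works without manual waits", async ({ page }) =&amp;gt; {
  await page.goto("http://localhost:3000/cart");

  // No sleep(2000): click() waits until the button is attached,
  // visible, enabled, and stable before interacting.
  await page.getByRole("button", { name: "Checkout" }).click();

  // Assertions retry until the condition holds or the timeout hits.
  await expect(page.getByRole("heading", { name: "Payment" })).toBeVisible();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;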

&lt;p&gt;&lt;strong&gt;Browser Contexts.&lt;/strong&gt; Playwright can spin up isolated browser contexts that share a single browser instance. Different cookies, different storage, different sessions, all running in parallel without stepping on each other. For agents that need to test multi-user flows or run parallel scraping jobs, this is far cheaper than launching separate browser processes.&lt;/p&gt;
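
&lt;p&gt;A minimal sketch of the multi-user pattern (endpoints hypothetical): two isolated sessions, one browser process:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { chromium } from "playwright";

async function main() {
  const browser = await chromium.launch();

  // Two isolated sessions in one browser process: separate
  // cookies, storage, and auth state, running in parallel.
  const buyer = await browser.newContext();
  const seller = await browser.newContext();

  const buyerPage = await buyer.newPage();
  const sellerPage = await seller.newPage();

  await Promise.all([
    buyerPage.goto("http://localhost:3000/login"),
    sellerPage.goto("http://localhost:3000/login"),
  ]);

  await browser.close();
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;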

&lt;p&gt;&lt;strong&gt;Multi-browser support.&lt;/strong&gt; Chromium, Firefox, WebKit. One API. Puppeteer gives you Chrome and that's it. Selenium gives you everything but slowly. Playwright gives you all three major engines with the same code, and for cross-browser testing that actually matters.&lt;/p&gt;
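
&lt;p&gt;And the cross-engine version is genuinely this boring (same script, three engines):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { chromium, firefox, webkit } from "playwright";

async function main() {
  // Same code, three engines. "Works in Chromium" is not "works".
  for (const engine of [chromium, firefox, webkit]) {
    const browser = await engine.launch();
    const page = await browser.newPage();
    await page.goto("https://example.com");
    console.log(engine.name(), await page.title());
    await browser.close();
  }
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;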

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The MCP tax vs just writing Playwright&lt;/strong&gt;: Playwright MCP is still widely used, and I get why. Microsoft shipped it in March 2025, it works with VS Code, Cursor, Claude Desktop, and GitHub's Copilot has it baked in. Snapshot Mode reads the accessibility tree, Vision Mode uses screenshots. Sounds great on paper. But here's my take: &lt;a href="https://dev.to/blog/death-of-mcp"&gt;MCP is a context killer&lt;/a&gt;. The Playwright MCP alone burns ~15,000 tokens in tool definitions before your agent has done anything. My strong preference is to skip MCP entirely and let the agent write Playwright code directly. Agents are good at this. They can generate a &lt;code&gt;goto&lt;/code&gt;, &lt;code&gt;click&lt;/code&gt;, &lt;code&gt;fill&lt;/code&gt;, &lt;code&gt;screenshot&lt;/code&gt; script in seconds. The code is reusable, testable, and costs ~1,000 tokens instead of 15,000. Microsoft recently shipped &lt;a href="https://github.com/microsoft/playwright-cli" rel="noopener noreferrer"&gt;playwright-cli&lt;/a&gt; which makes this even cleaner. And if you pair it with a skill (use &lt;code&gt;/skill-creator&lt;/code&gt; to build one), you end up with a customised Playwright generation layer for &lt;em&gt;your&lt;/em&gt; application: your selectors, your auth flow, your common patterns. That's worth more than any MCP server because it compounds over time. The agent learns your app, not a generic tool schema.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
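
&lt;p&gt;For flavour, the kind of throwaway script I mean (app, selectors, and flow all hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { chromium } from "playwright";

// The ~1,000-token alternative to 15,000 tokens of tool schema:
// a disposable script the agent writes, runs, and reads back.
async function main() {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto("http://localhost:3000/signup");
  await page.getByLabel("Email").fill("test@example.com");
  await page.getByRole("button", { name: "Sign up" }).click();
  await page.screenshot({ path: "signup-result.png" });

  await browser.close();
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;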

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# MCP (if you must, but see the MCP tax discussion above)&lt;/span&gt;
npx @anthropic-ai/claude-code mcp add playwright &lt;span class="nt"&gt;--&lt;/span&gt; npx @playwright/mcp@latest

&lt;span class="c"&gt;# Better: let your agent write Playwright code directly&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;playwright
&lt;span class="c"&gt;# Or: npx playwright-cli&lt;/span&gt;

&lt;span class="c"&gt;# Best: build a custom skill with /skill-creator&lt;/span&gt;
&lt;span class="c"&gt;# that wraps YOUR app's common Playwright patterns&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; Playwright is the Swiss Army knife where every blade is sharp. Puppeteer is a really excellent scalpel, but only for one specific material.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chrome DevTools MCP: When You Need the Full Inspector
&lt;/h2&gt;

&lt;p&gt;Worth calling out separately: &lt;a href="https://github.com/ChromeDevTools/chrome-devtools-mcp" rel="noopener noreferrer"&gt;Chrome DevTools MCP&lt;/a&gt; (32.9k stars, Google's official MCP server for Chrome) gives agents direct access to Chrome DevTools. 29 tools across six categories: input automation, navigation, emulation, performance tracing, network inspection, and debugging. It's the tool you reach for when your agent needs to do more than just click buttons. Performance profiling, Lighthouse audits, memory snapshots, network request analysis, console log capture with source-mapped stack traces.&lt;/p&gt;

&lt;p&gt;The interesting bit is it has a "slim mode" that drops from 29 tools down to 3. Which is a tacit admission that 29 tools is too many to load by default. Same MCP tax problem I keep banging on about.&lt;/p&gt;

&lt;p&gt;There's also a performance issue worth flagging. CDP (Chrome DevTools Protocol) feels noticeably sluggish for basic click-and-navigate work compared to Playwright, and there's a proper technical reason for that. CDP simulates OS-level input: when you fire &lt;code&gt;Input.dispatchMouseEvent&lt;/code&gt;, Chrome processes it through the full input pipeline (compositor thread, hit-testing, event dispatch, bubbling). It's simulating a real mouse click at the hardware level. Playwright often shortcuts this by injecting JavaScript that calls the element's click handler directly, skipping the compositor entirely.&lt;/p&gt;

&lt;p&gt;On top of that, CDP has no auto-wait. You click, and if the element isn't ready, it misses or hits the wrong thing. You end up stacking manual waits (&lt;code&gt;Page.loadEventFired&lt;/code&gt;, polling for selectors) which add up fast. Navigation is even worse: &lt;code&gt;Page.navigate&lt;/code&gt; fires and then you're listening for separate &lt;code&gt;Page.lifecycleEvent&lt;/code&gt; messages. Playwright bundles "navigate and wait until ready" into one call. And if you're using Chrome DevTools MCP specifically, every action goes through an extra hop: agent decides (LLM round-trip), MCP server receives tool call, translates to CDP, Chrome executes, result back through MCP, back to agent. Two network hops on top of the CDP execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx chrome-devtools-mcp@latest
&lt;span class="c"&gt;# No official skill. Consider mcporter to convert to CLI instead.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My honest take: if your agent needs performance profiling or deep debugging (tracing, memory snapshots, network watchers), Chrome DevTools MCP is the right tool for that specific job. But for the 90% case of "navigate, click, fill, screenshot, verify," you're paying a hefty context premium AND getting slower execution for capabilities you won't use. The agent can write a quick Puppeteer script to call &lt;code&gt;Performance.getMetrics()&lt;/code&gt; directly via CDP and it'll cost a fraction of the context. Use Chrome DevTools MCP when you genuinely need the full inspector. For everything else, let the agent write the code.&lt;/p&gt;
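
&lt;p&gt;That quick script is genuinely short. A sketch (using Puppeteer's CDP session API; the target URL is a stand-in):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import puppeteer from "puppeteer";

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("http://localhost:3000");

  // Raw CDP: no MCP server, no 29-tool schema in context.
  const client = await page.createCDPSession();
  await client.send("Performance.enable");
  const { metrics } = await client.send("Performance.getMetrics");
  console.table(metrics);

  await browser.close();
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;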

&lt;h2&gt;
  
  
  dev-browser: The Agent-Native Option I Actually Use
&lt;/h2&gt;

&lt;p&gt;So if Playwright is the answer, why do I keep reaching for dev-browser?&lt;/p&gt;

&lt;p&gt;dev-browser (by Sawyer Hood) is a different beast. It's not a testing framework. It's not trying to replace Playwright for CI/CD pipelines or regression suites. It's a sandboxed browser automation tool built specifically for AI agents to muck about with web pages during development.&lt;/p&gt;

&lt;p&gt;The architecture is clever. It runs scripts in a QuickJS WASM sandbox, meaning your agent's browser automation code has zero access to the host filesystem or network. The scripts use the full Playwright Page API under the hood, including a &lt;code&gt;snapshotForAI()&lt;/code&gt; function that does exactly what it sounds like. But the isolation model means your agent can't accidentally &lt;code&gt;rm -rf&lt;/code&gt; your home directory while it's trying to click a button.&lt;/p&gt;
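
&lt;p&gt;To give a flavour of the shape (a sketch, not dev-browser's documented API: I'm assuming the sandbox hands your script a Playwright &lt;code&gt;page&lt;/code&gt; and the &lt;code&gt;snapshotForAI()&lt;/code&gt; helper directly):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Runs inside the QuickJS WASM sandbox: no host filesystem,
// no host network, just the Playwright Page API.
await page.goto("http://localhost:3000/dashboard");
await page.getByRole("button", { name: "New project" }).click();

// Agent-readable snapshot of what the page now shows.
const snapshot = await snapshotForAI();
console.log(snapshot);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;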

&lt;p&gt;The benchmarks from the dev-browser-eval suite tell the story:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Duration&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;Iterations&lt;/th&gt;
&lt;th&gt;Success&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;dev-browser&lt;/td&gt;
&lt;td&gt;3m 53s&lt;/td&gt;
&lt;td&gt;$0.88&lt;/td&gt;
&lt;td&gt;29&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Playwright MCP&lt;/td&gt;
&lt;td&gt;4m 31s&lt;/td&gt;
&lt;td&gt;$1.45&lt;/td&gt;
&lt;td&gt;51&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Playwright Skill&lt;/td&gt;
&lt;td&gt;8m 07s&lt;/td&gt;
&lt;td&gt;$1.45&lt;/td&gt;
&lt;td&gt;38&lt;/td&gt;
&lt;td&gt;67%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That's roughly 14% faster and 40% cheaper than Playwright MCP for the same task, with far fewer iterations (29 vs 51). The agent figures out what it needs to do quicker because the tool was designed for how agents think, not how test engineers think.&lt;/p&gt;

&lt;p&gt;But (and this is a proper big but) dev-browser is not the tool for repeatable test suites. It doesn't do assertions the way Playwright does. It doesn't generate CI-friendly reports. It doesn't integrate with your test runner. If you need "run these 200 tests on every PR and fail the build if something breaks," that's Playwright. If you need "let my agent open a browser, poke around, check if the thing I just built actually works," that's dev-browser.&lt;/p&gt;

&lt;p&gt;5.3K stars on GitHub. MIT licensed. TypeScript core with a bit of Rust. Active development. I reckon it's going to keep growing because the "agent wants a browser for five minutes during development" use case is massive and Playwright MCP is overkill for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install the tool&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; dev-browser &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; dev-browser &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# The skill is at github.com/SawyerHood/dev-browser/tree/main/skills&lt;/span&gt;
&lt;span class="c"&gt;# Copy to your skills directory:&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; node_modules/dev-browser/skills/dev-browser ~/.claude/skills/

&lt;span class="c"&gt;# Pre-approve: add "Bash(dev-browser *)" to .claude/settings.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; dev-browser for agent-driven development. Playwright for everything that needs to be repeatable, cross-browser, or in CI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vercel agent-browser: Token-Cheap, Time-Expensive (and Here's Why)
&lt;/h2&gt;

&lt;p&gt;26,200 stars. Written in Rust. And I need to be honest about this one because the marketing and the lived experience don't match.&lt;/p&gt;

&lt;p&gt;The pitch is context window efficiency. agent-browser compresses page snapshots using semantic element references (@e1, @e2) instead of full accessibility tree dumps. Vercel claims 93% context reduction, and the token savings are real. A typical snapshot is 200-400 tokens vs Playwright MCP's 15,000 tokens in tool definitions alone. Across a 6-step test, that's 1,364 tokens vs 7,779. You can fit 5.7x more test cycles in the same context budget.&lt;/p&gt;

&lt;p&gt;But it feels slow. And after digging into the architecture, I reckon I know why.&lt;/p&gt;

&lt;p&gt;The original version was a Rust CLI talking to a Node.js daemon over a unix socket, with Playwright driving Chrome underneath. In v0.20.0 they rewrote the daemon in pure Rust and dropped the Node.js layer entirely. Memory went from 143MB to 8MB, install from 710MB to 7MB, cold start from ~1,002ms to ~617ms. Proper engineering.&lt;/p&gt;

&lt;p&gt;The problem is Chrome itself still takes 2-5 seconds to launch on first use. So your realistic first-command wall clock is: 617ms daemon start + 2-5 seconds Chrome launch + page navigation time. Subsequent commands on a warm daemon are 50-100ms, which is fast. But that cold start is a killer in CI or ephemeral environments where every run starts fresh.&lt;/p&gt;

&lt;p&gt;There are also real stability issues. GitHub Issue #1113 documents orphaned headless Chrome processes that block normal Chrome from launching. Issue #1101 shows the idle timeout not being respected on Unix/macOS. Issue #1035 shows the daemon hanging on Linux server environments. These aren't edge cases if you're running agents at any scale.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The "3.5x faster" claim is about LLM planning, not browser speed.&lt;/strong&gt; The Vercel blog post ("We removed 80% of our agent's tools") benchmarked their internal text-to-SQL agent, not browser automation. They went from 15 tools to 2, and task completion dropped from 274.8 seconds to 77.4 seconds. The speedup came from the LLM making fewer decisions per turn (fewer tools = less confusion = faster planning), not from faster browser execution. Sample size was 5 queries, self-selected. Against a lean Playwright setup with 3-5 tools, the speed advantage would likely be negligible. The token savings, however, are genuine and well-measured: 82.5% fewer characters in responses, 37% fewer total tokens per task. Token-cheap and time-cheap are orthogonal problems. Stagehand's caching is the approach that actually reduces wall-clock time, replaying actions at &amp;lt;100ms without calling the LLM at all.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No anti-bot capabilities. For protected sites you need Browserbase or similar. Windows has documented socket issues (#398). And there are no independent wall-clock benchmarks from anyone outside Vercel. Every performance number in every blog post traces back to Vercel's own measurements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; agent-browser
&lt;span class="c"&gt;# Or: brew install agent-browser&lt;/span&gt;
&lt;span class="c"&gt;# Or: cargo install agent-browser&lt;/span&gt;

&lt;span class="c"&gt;# Official skill at vercel-labs/agent-browser/skills/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; agent-browser is the right choice when your binding constraint is context window size and you can tolerate the wall-clock cost. The token efficiency is real and well-engineered. The speed claims are marketing that conflates LLM planning improvements with browser execution speed. If wall-clock time matters, let the agent write Playwright code directly or use Stagehand's caching.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stagehand: Playwright with a Learning Layer
&lt;/h2&gt;

&lt;p&gt;Right. So &lt;a href="https://github.com/browserbase/stagehand" rel="noopener noreferrer"&gt;Stagehand&lt;/a&gt; (21.8k stars) is Browserbase's TypeScript SDK and it does something properly clever. It wraps Playwright with three AI primitives: &lt;code&gt;act()&lt;/code&gt; (do something on the page), &lt;code&gt;extract()&lt;/code&gt; (pull structured data out), and &lt;code&gt;agent()&lt;/code&gt; (let the LLM figure out a multi-step flow). On paper that sounds like yet another Playwright wrapper. It isn't.&lt;/p&gt;
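
&lt;p&gt;A minimal sketch of those primitives in use (site and instructions hypothetical, shape per the Stagehand docs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

async function main() {
  const stagehand = new Stagehand({ env: "LOCAL" });
  await stagehand.init();
  const page = stagehand.page;

  await page.goto("https://example.com/pricing");

  // act(): natural-language action, grounded to a real DOM element.
  await page.act("click the 'Start free trial' button");

  // extract(): structured data out, validated against a schema.
  const plan = await page.extract({
    instruction: "extract the name and monthly price of the cheapest plan",
    schema: z.object({ name: z.string(), price: z.string() }),
  });
  console.log(plan);

  await stagehand.close();
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;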

&lt;p&gt;The v3 release dropped the Playwright dependency entirely and went CDP-native. Direct Chrome DevTools Protocol, no middleware. The result is 44% faster than v2 on their benchmarks, and the architecture is cleaner for it. Fewer moving parts, fewer things to break.&lt;/p&gt;

&lt;p&gt;But the real trick, the bit that made me sit up, is the caching system. First run: the LLM plans and executes actions, same as any AI browser tool. Stagehand records every action it takes. Subsequent runs: it replays those cached actions at sub-100ms latency &lt;em&gt;without making a single LLM call&lt;/em&gt;. Zero tokens burned. That's the economic insight nobody talks about enough. Every other tool in this post charges you LLM inference on every run. Stagehand charges you once, then replays for free. For repetitive workflows (login flows, checkout tests, data extraction pipelines), the cost curve is completely different.&lt;/p&gt;

&lt;p&gt;The catch is the business model. The Stagehand SDK itself is MIT licensed, free, use it wherever you like. But Browserbase, the cloud browser infrastructure that makes it dead simple to run at scale (session management, proxy rotation, persistent contexts), that's paid. The SDK works locally with your own Chrome too. You just lose the managed infra. Fair trade, honestly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stagehand SDK (quickstart scaffold)&lt;/span&gt;
npx create-browser-app

&lt;span class="c"&gt;# Browserbase has 3 official skills: browser, browserbase-cli, functions&lt;/span&gt;
&lt;span class="c"&gt;# Install via:&lt;/span&gt;
npx skills add browserbase/skills
&lt;span class="c"&gt;# Or in Claude Code:&lt;/span&gt;
&lt;span class="c"&gt;# /plugin marketplace add browserbase/skills&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; If you're in TypeScript and you want Playwright-level reliability with an AI layer that actually learns from previous runs, Stagehand is the one to look at. The caching system is the differentiator. First run costs tokens, every subsequent run is basically free.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Still Relevant?" Section
&lt;/h2&gt;

&lt;p&gt;Three tools that come up in every browser automation conversation. Two of them shouldn't, at least not for agent work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Puppeteer&lt;/strong&gt; is still relevant. Full stop. If you're doing Chrome-specific performance work, profiling, coverage reporting, or sending raw CDP commands, Puppeteer is closer to the metal and 15 to 20 percent faster than Playwright on Chromium. Google maintains it. It connects directly to Chrome's DevTools Protocol, and for things like &lt;code&gt;Performance.getMetrics()&lt;/code&gt; or network traffic monitoring, that direct CDP access matters. Where it falls short for agents: Chrome only, no accessibility tree abstraction out of the box, and no auto-wait. You're writing the retry logic yourself. For agent work, Playwright is the better pick. For Chrome-specific performance analysis, Puppeteer still earns its keep. About 89K stars on GitHub, Apache 2.0 license, very much alive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;puppeteer
&lt;span class="c"&gt;# No official skill. Agent writes Puppeteer code directly.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Selenium&lt;/strong&gt; is a different story. It's still the most widely deployed browser automation tool in the world, and it's still getting investment (Selenium 5 brings WebDriver BiDi for real-time event streaming). But the WebDriver protocol is an HTTP request-response cycle for &lt;em&gt;every single browser action&lt;/em&gt;. Click? HTTP request. Type? HTTP request. Check visibility? HTTP request. That latency overhead adds up fast, and for an agent that might execute hundreds of actions in a session, it's painful. Teams report spending 40 to 70 percent of their automation effort just maintaining existing Selenium tests. For new agent-driven work, I can't recommend it. For legacy enterprise suites that already exist and work, ripping them out is probably not worth the effort either. That's the honest answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cypress&lt;/strong&gt; runs inside the browser, not outside it. That's its party trick for developer testing (direct DOM access, time travel debugging, automatic waiting within its own execution model) and it's also the reason it's rubbish for agents. Cross-origin hops inside a single test are restricted (newer versions bolt on &lt;code&gt;cy.origin()&lt;/code&gt;, but the same-origin architecture still fights you). You can't run multiple browser instances. You can't easily hook an external LLM into a browser process that the test framework itself is running &lt;em&gt;inside of&lt;/em&gt;. Cypress is brilliant for developers writing their own tests interactively. It's architecturally wrong for agents that need to control browsers from the outside. Cypress knows this, incidentally. They're investing in AI features (Cypress Studio AI) but they're about helping humans write Cypress tests, not about letting agents drive browsers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dodgy Stuff: Stealth Tools
&lt;/h2&gt;

&lt;p&gt;Sometimes you need to automate a browser and you'd rather the website didn't know about it. I'm not here to judge. Maybe you're scraping your own data from a service that doesn't offer an API. Maybe you're doing competitive research. Maybe you're testing your own anti-bot defences. Whatever. Two tools worth knowing about.&lt;/p&gt;

&lt;h3&gt;
  
  
  Patchright: Playwright, But Sneaky
&lt;/h3&gt;

&lt;p&gt;Patchright is a fork of Playwright with one job: don't get detected as an automated browser.&lt;/p&gt;

&lt;p&gt;The core problem it solves is called the Runtime.enable leak. When normal Playwright talks to Chrome, it sends a CDP command called &lt;code&gt;Runtime.enable&lt;/code&gt; that lets it manage JavaScript execution contexts. Anti-bot systems from Cloudflare, DataDome, Kasada, and Akamai all specifically look for this command. If they see it, you're flagged as a bot. Game over.&lt;/p&gt;

&lt;p&gt;Patchright patches this out. Instead of using Runtime.enable, it executes JavaScript through isolated ExecutionContexts with unknown IDs. The bot detection systems can't see the telltale CDP command because it never fires.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The 22 patches&lt;/strong&gt;: Patchright applies 22 patches comprising roughly 5,856 lines of modifications to Playwright's source code using AST manipulation. Beyond the Runtime.enable fix, it removes &lt;code&gt;--enable-automation&lt;/code&gt; from Chrome's launch flags, adds &lt;code&gt;--disable-blink-features=AutomationControlled&lt;/code&gt; to hide &lt;code&gt;navigator.webdriver&lt;/code&gt;, and disables Console.enable entirely (trading debugging for stealth). It can also interact with closed Shadow DOM elements using standard locators, which vanilla Playwright can't do. The result passes detection tests from Cloudflare, Kasada, Akamai, Fingerprint.com, and CreepJS. Drop-in replacement for Playwright: same API, same code, just swap the import. Chromium only though. No Firefox, no WebKit. And some Playwright tests fail because the patches change internal behaviour. 2.8K stars, v1.58.0 as of March 2026, implementations in Python, Node.js, and .NET.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;patchright  &lt;span class="c"&gt;# Python&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;patchright  &lt;span class="c"&gt;# Node.js&lt;/span&gt;
&lt;span class="c"&gt;# No official skill. Drop-in Playwright replacement, same API.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
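
&lt;p&gt;The swap itself, in the Node.js flavour (a sketch; as noted above, some Playwright behaviours differ under the patches):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Before: import { chromium } from "playwright";
import { chromium } from "patchright";

async function main() {
  // Same API as Playwright; no Runtime.enable on the wire,
  // no --enable-automation launch flag.
  const browser = await chromium.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto("https://example.com");
  await browser.close();
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;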



&lt;h3&gt;
  
  
  Scrapling: The Full-Stack Stealth Scraper
&lt;/h3&gt;

&lt;p&gt;Scrapling takes a different approach. Where Patchright patches Playwright to avoid detection, Scrapling builds stealth into the entire request pipeline from the ground up.&lt;/p&gt;

&lt;p&gt;It's a Python framework with three fetcher tiers. The basic &lt;code&gt;Fetcher&lt;/code&gt; does fast HTTP requests with TLS fingerprint spoofing (it mimics real browser TLS handshakes at the transport layer, which is lower in the stack than anything Playwright touches). The &lt;code&gt;DynamicFetcher&lt;/code&gt; uses Playwright/Chrome for pages that need JavaScript rendering. And the &lt;code&gt;StealthyFetcher&lt;/code&gt; combines browser automation with anti-detection features to handle Cloudflare Turnstile and similar protections out of the box.&lt;/p&gt;

&lt;p&gt;The clever bit is adaptive element tracking. Scrapling learns from website changes and automatically relocates elements when pages update. If a site redesigns its layout, Scrapling adjusts its selectors without you rewriting anything. That's useful for long-running scraping jobs against sites that change frequently.&lt;/p&gt;

&lt;p&gt;It also has a built-in MCP server for AI-assisted scraping, so agents can use it directly. 92% test coverage. Full async support. Spider framework with Scrapy-like APIs for concurrent crawling. MIT licensed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;scrapling

&lt;span class="c"&gt;# Official skill:&lt;/span&gt;
clawhub &lt;span class="nb"&gt;install &lt;/span&gt;scrapling-official
&lt;span class="c"&gt;# Or: npx skills add D4Vinci/Scrapling:main&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When would you pick Scrapling over Patchright? If you're doing scraping at scale and need the full pipeline (request management, session rotation, proxy support, adaptive parsing), Scrapling is the more complete package. If you just need Playwright but undetected, Patchright is simpler.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lightpanda: The Speed Problem Nobody's Solved Yet
&lt;/h2&gt;

&lt;p&gt;This one needs a proper mention even though I haven't run it in production yet. &lt;a href="https://github.com/lightpanda-io/browser" rel="noopener noreferrer"&gt;Lightpanda&lt;/a&gt; is building a browser engine from scratch in Zig, specifically optimised for headless automation and AI agents. Not a wrapper around Chromium. Not a fork. A ground-up browser engine designed to be fast.&lt;/p&gt;

&lt;p&gt;Why does this matter? Because all my regression test suites run for hours. Even headless Playwright is slow. You're spinning up a full Chromium process with all the rendering overhead, compositor threads, GPU process, extension subsystems. For headless automation where nobody is looking at the screen, 90% of that machinery is dead weight.&lt;/p&gt;

&lt;p&gt;Lightpanda claims to be dramatically faster by stripping out everything a headless agent doesn't need. No GPU compositor. No extension runtime. Minimal rendering pipeline. Just the DOM, JavaScript execution, network stack, and the bits you actually test against.&lt;/p&gt;

&lt;p&gt;It's early. The engine doesn't support the full web platform yet. Complex SPAs and heavy JS frameworks might hit gaps. But the direction is right. The current approach of "take a browser designed for humans looking at screens and run it without the screen" is fundamentally wasteful for agent workloads. Someone was always going to build a purpose-built headless engine. Worth watching.&lt;/p&gt;

&lt;h2&gt;
  
  
  PinchTab and the Two Browser Problems
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/pinchtab/pinchtab" rel="noopener noreferrer"&gt;PinchTab&lt;/a&gt; is a 12MB Go binary that gives AI agents browser control via plain HTTP. Token-efficient (800 tokens per page snapshot vs thousands for screenshots), accessibility-first element references, multi-instance parallel Chrome, headless or headed. Any agent that can make HTTP calls can use it. No MCP needed.&lt;/p&gt;

&lt;p&gt;Here's the thing I keep coming back to with tools like PinchTab. There are actually two completely different browser automation problems, and people keep conflating them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem 1: The dev inner loop.&lt;/strong&gt; You're engineering. Your agent writes code, you need it to check the result in a browser, confirm something rendered correctly, validate a flow. Speed matters here because you're waiting. The agent is waiting. The tighter this loop, the faster you ship. dev-browser, agent-bridge, PinchTab, and Playwright all serve this use case. PinchTab's token efficiency is nice here but on paper I don't see it making a massive difference to my actual bottleneck, which is Playwright being slow (see Lightpanda above).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem 2: Fully agentic task completion.&lt;/strong&gt; "Go fill out this waitlist form." "Book these train tickets." "Apply to these 20 job postings." There's no human in the loop. The agent needs to complete the task end-to-end, handle CAPTCHAs, deal with unexpected modals, retry on failure. Speed is secondary. Precision and fault tolerance are everything. You'd rather it takes 30 seconds and succeeds than takes 5 seconds and fails.&lt;/p&gt;

&lt;p&gt;For problem 2, Claude for Chrome is still the best fit I've found. It's not the fastest. It's not the most token-efficient. But it runs in your actual browser with your actual cookies and extensions, it handles the weird edge cases that trip up headless tools, and when something goes sideways it recovers better than anything else I've tried. The resilience matters more than the speed when the goal is "complete this task without me watching."&lt;/p&gt;

&lt;p&gt;PinchTab is solving problem 1 well. Problem 2 needs a different tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Your Own Closed Loop
&lt;/h2&gt;

&lt;p&gt;Here's a pattern that doesn't get enough airtime. All the tools above launch a &lt;em&gt;separate&lt;/em&gt; browser. Playwright spins up a new Chromium instance. dev-browser does the same in a sandbox. agent-browser fires up its own headless Chrome. That's fine for testing and scraping. But for local development, where you've already got your app running in your browser with your auth cookies and your dev server hot-reloading, launching a second browser is bonkers. You're paying 2-5 seconds of cold start to get a browser that doesn't know anything about your running app.&lt;/p&gt;

&lt;p&gt;The debug-bridge pattern flips this on its head. Instead of the agent launching a browser, you connect the agent to YOUR already-running browser. CDP over WebSocket. The agent sees your DOM, your console logs, your network requests, in real time. Zero cold start. The app is already there.&lt;/p&gt;
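
&lt;p&gt;The raw version of the pattern fits in a few lines. A sketch using Playwright's documented &lt;code&gt;connectOverCDP()&lt;/code&gt; against a Chrome you started with &lt;code&gt;--remote-debugging-port=9222&lt;/code&gt; (this shows the underlying idea, not any particular relay's protocol):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { chromium } from "playwright";

async function main() {
  // Attach to YOUR running Chrome instead of launching a new one.
  // Start it with: chrome --remote-debugging-port=9222
  const browser = await chromium.connectOverCDP("http://localhost:9222");

  // Existing contexts and pages are already there: your cookies,
  // your auth, your hot-reloading dev server.
  const context = browser.contexts()[0];
  const page = context.pages()[0];

  console.log(await page.title());
  page.on("console", function (msg) {
    console.log("[browser console]", msg.text());
  });
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;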

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/stevengonsalvez/agent-bridge" rel="noopener noreferrer"&gt;agent-bridge&lt;/a&gt;&lt;/strong&gt; is a WebSocket relay that sits between your AI agent and your running webapp. The architecture is dead simple: &lt;code&gt;Agent &amp;lt;-&amp;gt; CLI Server (localhost:4000) &amp;lt;-&amp;gt; Your Webapp&lt;/code&gt;. The agent gets a live feed of UI tree snapshots, DOM state, console output, and network activity. It can send commands back. Bidirectional, real-time, and the agent never needs to launch a browser at all. Change your code, the dev server hot-reloads, and the agent sees the result instantly. Closed loop sorted, no context switching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw browser relay&lt;/a&gt;&lt;/strong&gt; takes a slightly different angle. It's a Chrome extension plus gateway combo. Install the extension in your browser, point it at the gateway, and now external agents can route browser control through it. Works locally or remotely through connected nodes. Handy if you've got a team setup where agents need access to browsers running on different machines.&lt;/p&gt;

&lt;p&gt;Why does this pattern beat Playwright MCP for local dev? Let me count the ways. Zero cold start (app already running). Real-time bidirectional visibility (agent sees network requests and console errors as they happen, not after a page load). Token-cheap (structured text, not screenshots, not 15,000 tokens of tool definitions). And you control what's exposed. Bake in your auth flow, your selectors, your common patterns. The agent gets faster at YOUR app specifically.&lt;/p&gt;

&lt;p&gt;The contrast with Playwright MCP is stark. Playwright launches a separate browser (2-5s cold start). No visibility into your running dev server. Agent works in isolation from your dev workflow. And you're burning 15,000 tokens in tool definitions before the agent has clicked a single button.&lt;/p&gt;

&lt;p&gt;Here's my strong recommendation: in Claude Code, use &lt;code&gt;/skill-creator&lt;/code&gt; to wrap your debug-bridge setup into a reusable skill. Codex, Copilot, and other agentic tools have their own equivalent skill creation mechanisms. The point is identical regardless of tool: codify your app's browser patterns (auth flow, common selectors, test scenarios) into something the agent can reuse without you explaining it every session. This compounds over time. The agent gets faster at YOUR app, not just at generic browsing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# agent-bridge&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; agent-bridge
agent-bridge start &lt;span class="nt"&gt;--port&lt;/span&gt; 4000

&lt;span class="c"&gt;# Add the client snippet to your webapp's dev entry point:&lt;/span&gt;
&lt;span class="c"&gt;# &amp;lt;script src="http://localhost:4000/agent-bridge-client.js"&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;

&lt;span class="c"&gt;# Then build a skill with /skill-creator that wraps&lt;/span&gt;
&lt;span class="c"&gt;# your app's common debug-bridge patterns&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; If your main use case is "agent verifies the thing I just built in my already-running app," skip the separate browser entirely. agent-bridge or a debug-bridge pattern gives you a tighter loop, faster feedback, and cheaper token costs than any of the launch-a-browser tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Comparison Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Multi-Browser&lt;/th&gt;
&lt;th&gt;Agent-Ready&lt;/th&gt;
&lt;th&gt;Stealth&lt;/th&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;License&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Playwright&lt;/td&gt;
&lt;td&gt;Testing + Agent automation&lt;/td&gt;
&lt;td&gt;Yes (3 engines)&lt;/td&gt;
&lt;td&gt;Excellent (MCP)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chrome DevTools MCP&lt;/td&gt;
&lt;td&gt;Debugging + Performance profiling&lt;/td&gt;
&lt;td&gt;Chrome only&lt;/td&gt;
&lt;td&gt;Good (29 tools / slim 3)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dev-browser&lt;/td&gt;
&lt;td&gt;Agent dev workflows&lt;/td&gt;
&lt;td&gt;Chromium&lt;/td&gt;
&lt;td&gt;Built for it&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Fastest&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;agent-browser&lt;/td&gt;
&lt;td&gt;Token-constrained agents&lt;/td&gt;
&lt;td&gt;Chromium&lt;/td&gt;
&lt;td&gt;Good (CLI)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Slow (LLM bound)&lt;/td&gt;
&lt;td&gt;OSS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Puppeteer&lt;/td&gt;
&lt;td&gt;Chrome perf/CDP work&lt;/td&gt;
&lt;td&gt;Chrome only&lt;/td&gt;
&lt;td&gt;Decent&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Fastest (Chrome)&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Selenium&lt;/td&gt;
&lt;td&gt;Legacy enterprise suites&lt;/td&gt;
&lt;td&gt;Yes (all)&lt;/td&gt;
&lt;td&gt;Poor&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Slow&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cypress&lt;/td&gt;
&lt;td&gt;Developer interactive testing&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Poor&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Fast (in-browser)&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stagehand&lt;/td&gt;
&lt;td&gt;AI-augmented Playwright (caching)&lt;/td&gt;
&lt;td&gt;Chromium (CDP)&lt;/td&gt;
&lt;td&gt;Excellent (3 skills)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Fast (cached: &amp;lt;100ms)&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;agent-bridge&lt;/td&gt;
&lt;td&gt;Local dev closed loop&lt;/td&gt;
&lt;td&gt;Your browser&lt;/td&gt;
&lt;td&gt;Built for it&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Instant (no launch)&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Patchright&lt;/td&gt;
&lt;td&gt;Anti-detection automation&lt;/td&gt;
&lt;td&gt;Chromium only&lt;/td&gt;
&lt;td&gt;Same as Playwright&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Same as Playwright&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scrapling&lt;/td&gt;
&lt;td&gt;Stealth scraping at scale&lt;/td&gt;
&lt;td&gt;Via fetcher tiers&lt;/td&gt;
&lt;td&gt;Yes (MCP)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Varies by fetcher&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lightpanda&lt;/td&gt;
&lt;td&gt;Speed-critical headless automation&lt;/td&gt;
&lt;td&gt;Own engine&lt;/td&gt;
&lt;td&gt;Early&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Very fast (claimed)&lt;/td&gt;
&lt;td&gt;AGPL-3.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PinchTab&lt;/td&gt;
&lt;td&gt;Token-efficient agent browser control&lt;/td&gt;
&lt;td&gt;Chrome&lt;/td&gt;
&lt;td&gt;Yes (HTTP API)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude for Chrome&lt;/td&gt;
&lt;td&gt;Agentic task completion (forms, bookings)&lt;/td&gt;
&lt;td&gt;Chrome&lt;/td&gt;
&lt;td&gt;Built for it&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Slow (resilient)&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Decision Tree: "I Need Browser Automation"
&lt;/h2&gt;

&lt;p&gt;Start here. What are you actually trying to do?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I've got agentic systems or augmented engineering building code and I need a closed loop where the agent verifies its own work in a browser."&lt;/strong&gt; dev-browser. This is what I use. It's faster and cheaper than Playwright MCP, sandboxed so agent-generated code can't escape, and built specifically for this workflow. The agent writes code, opens a browser, checks it works, moves on. Closed loop sorted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I'm building an AI agent that needs to interact with web pages more broadly."&lt;/strong&gt; Playwright with MCP. The accessibility tree integration, auto-wait, and cross-browser support make it the right default for general browser automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I need Chrome-specific performance profiling or raw CDP access."&lt;/strong&gt; Puppeteer. It's closer to the metal and faster for Chrome-only work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"We have 10,000 existing Selenium tests and they work."&lt;/strong&gt; Keep them. Migrate new work to Playwright. Don't rewrite working tests for the sake of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I'm writing tests interactively as a developer and I want the best DX."&lt;/strong&gt; Cypress is still good at this. Just know it won't work for agent-driven automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"The website I'm automating has bot detection."&lt;/strong&gt; Patchright if you just need Playwright minus the fingerprint. Scrapling if you need the full scraping pipeline with TLS-level stealth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I need to scrape at scale with proxy rotation, adaptive selectors, and anti-bot bypass."&lt;/strong&gt; Scrapling. It's built for exactly this use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I need an agent to complete a real-world task in a browser without me watching."&lt;/strong&gt; Claude for Chrome. Precision and fault tolerance beat speed when the goal is task completion, not development speed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"My headless test suites are too slow and I need a faster engine."&lt;/strong&gt; Keep an eye on &lt;a href="https://github.com/lightpanda-io/browser" rel="noopener noreferrer"&gt;Lightpanda&lt;/a&gt;. Purpose-built headless engine in Zig. Early but the direction is right.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Coming in Part 2
&lt;/h2&gt;

&lt;p&gt;Low-level tools are the foundation. But most people building AI agents don't write raw Playwright code. They use frameworks that wrap these tools into higher-level abstractions: Stagehand, Browser Use, AgentQL, LaVague. Some of them are brilliant. Some of them are solving problems that don't exist anymore.&lt;/p&gt;

&lt;p&gt;Part 2 covers the frameworks and SDKs, the middle layer of the browser tools stack. We'll look at what they add on top of Playwright, whether the abstraction is worth the overhead, and which ones are actually shipping production-grade agent experiences versus which ones are demo-ware with nice landing pages.&lt;/p&gt;

&lt;p&gt;If you want the short version: the stack is Playwright at the bottom, a framework in the middle, and an orchestration platform at the top. Part 1 was the bottom. Part 2 is the middle. Crack on to &lt;a href="https://dev.to/blog/productivity-series/03-ai/browser-tools-part2-frameworks"&gt;Part 2&lt;/a&gt; when you're ready.&lt;/p&gt;

</description>
      <category>playwrightaiagents</category>
      <category>browserautomationai</category>
      <category>playwrightvspuppeteer</category>
      <category>aibrowsertools</category>
    </item>
    <item>
      <title>Gemma 4 Is Running On Phones Now and I Don't Think People Realise How Mental That Is</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:48:43 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/gemma-4-is-running-on-phones-now-and-i-dont-think-people-realise-how-mental-that-is-7g6</link>
      <guid>https://dev.to/stevengonsalvez/gemma-4-is-running-on-phones-now-and-i-dont-think-people-realise-how-mental-that-is-7g6</guid>
      <description>&lt;h2&gt;
  
  
  3GB of Intelligence in Your Pocket 📱
&lt;/h2&gt;

&lt;p&gt;So someone on Instagram posted a video of an LLM running on their phone. No internet. No API calls. No cloud. Just a model sitting on the device, answering questions, understanding images, processing audio. Offline. On a phone.&lt;/p&gt;

&lt;p&gt;That's Google's new Gemma 4. Dropped April 2nd. Four model sizes, all open weight, Apache 2.0 licensed. The one that matters for this conversation is the E2B (Effective 2 Billion parameters), which is small enough to run on a phone and fast enough to actually be useful. Google built it from the same research behind Gemini 3, then squeezed it down for edge devices.&lt;/p&gt;

&lt;p&gt;The comments on the post split exactly how you'd expect. Half the replies are "that's insane 🔥" and the other half are "battery go bye 😂". Both camps are right.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Is Actually A Big Deal
&lt;/h2&gt;

&lt;p&gt;I've been mucking about with local models for a while. Ollama on the Mac. The occasional experiment with llama.cpp on a beefy Linux box. But running a properly capable multimodal model on a &lt;em&gt;phone&lt;/em&gt;, offline, with decent latency? That's new.&lt;/p&gt;

&lt;p&gt;The E2B model handles text, images, video, and audio. All four modalities. On device. The E4B (slightly larger) does the same but slower. Google says the E2B runs 3x faster than the E4B, which tracks because fewer parameters mean less compute per inference.&lt;/p&gt;

&lt;p&gt;These are the foundation for the next generation of Gemini Nano, so anything you build against Gemma 4 today will work on Gemini Nano 4 devices later this year. That's the play. Google wants developers building on this stack now so the apps are ready when the phones ship.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Privacy Angle (and the Irony)
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. A local LLM means nothing leaves your phone. Your prompts, your documents, your photos, your voice. All processed on device. No server roundtrip. No data collection endpoint. No "we use your inputs to improve our models" clause.&lt;/p&gt;

&lt;p&gt;Which is a properly compelling privacy story. The irony, of course, is that it's a Google model. The company whose entire business model is built on knowing everything about you has shipped a model that explicitly keeps everything on your device. I reckon the cynical take is that they want the ecosystem lock-in even if it means giving up the data pipeline. The charitable take is that on-device AI is the future and they'd rather be the foundation than the also-ran.&lt;/p&gt;

&lt;p&gt;Either way, if privacy matters to you (and it should), running Gemma 4 locally via something like PocketPal or Ollama mobile is the move.&lt;/p&gt;
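
&lt;p&gt;If you want to kick the tyres on a laptop before the mobile runtimes settle, the workflow is a few lines with the Ollama Python client. The &lt;code&gt;gemma4:e2b&lt;/code&gt; tag is my guess at how the E2B build might be published, not a confirmed name:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import ollama  # pip install ollama; assumes a local Ollama server is running

# "gemma4:e2b" is a hypothetical model tag; swap in whatever tag actually ships
response = ollama.chat(
    model="gemma4:e2b",
    messages=[{"role": "user", "content": "Summarise this paragraph, offline."}],
)
print(response["message"]["content"])
&lt;/code&gt;&lt;/pre&gt;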

&lt;h2&gt;
  
  
  What You Can Actually Do With It
&lt;/h2&gt;

&lt;p&gt;The practical use cases for a 3GB on-device model are more interesting than the benchmarks.&lt;/p&gt;

&lt;p&gt;Offline coding assistance on a tablet. Document analysis without uploading sensitive files to a cloud service. Real-time translation in places with no signal. Voice transcription that never leaves the device. Image understanding for accessibility when you've got no data connection.&lt;/p&gt;

&lt;p&gt;For developers specifically, imagine baking this into a mobile app. An on-device AI assistant that works on a plane, in the underground, in a basement with no wifi. No API costs. No latency spikes. No "please check your internet connection." Just inference, running locally, at whatever speed the chip can manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hardware Question
&lt;/h2&gt;

&lt;p&gt;This matters more than people reckon. If you're on an iPhone, an A16 or A17 chip makes a real difference for which model size you can run comfortably. The E2B is the safe bet for most devices. The E4B will work but you'll feel it in the heat on your palm and the battery percentage dropping.&lt;/p&gt;

&lt;p&gt;On Android, the AICore Developer Preview gives you native integration. Google's clearly pushing this as part of the Android platform, not just a standalone model download. Which means optimised inference, better memory management, and presumably less battery murder than running it through a generic runtime.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Mixture of Experts on your phone&lt;/strong&gt;: The larger Gemma 4 models (26B MoE and 31B Dense) are for servers and workstations. But the architecture choice matters even for the small ones. The E2B uses Google's "effective parameter" approach, which means the actual parameter count in the weights file is larger than 2B but the active parameters per inference step are equivalent to a 2B model. Think of it as a bigger brain that only activates the relevant bits for each query. This is why a "2B" model can punch above its weight class on benchmarks. It's also why the file size (~3GB) doesn't line up with a naive "2 billion parameters at 2 bytes each = 4GB" calculation: the quantisation and the effective-parameter architecture mean the relationship between parameter count, file size, and capability is non-linear.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
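
&lt;p&gt;The back-of-envelope maths is worth doing once. A quick sketch (the raw parameter count isn't public, so the final figure is an inference, not a spec):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def weights_gb(params_billions, bits_per_param):
    """Naive weights-file size: parameter count times bytes per parameter."""
    return params_billions * 1e9 * (bits_per_param / 8) / 1e9

# A literal 2B model at fp16: 2e9 params at 2 bytes each
print(weights_gb(2, 16))   # 4.0 GB

# A literal 2B model at 4-bit quantisation would be tiny
print(weights_gb(2, 4))    # 1.0 GB

# Working backwards: a ~3GB file at 4-bit implies roughly 6B raw parameters,
# consistent with "effective 2B" meaning fewer ACTIVE params than stored ones
print(3 / (4 / 8))         # 6.0 billion, back-of-envelope only
&lt;/code&gt;&lt;/pre&gt;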

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;We're watching the "AI needs a data centre" narrative fall apart in real time. Six months ago, running a multimodal model on a phone was a party trick for enthusiasts willing to tolerate ten-second response times. Now Google is shipping models explicitly designed for it, with Android platform integration, and they're telling developers to build apps against it.&lt;/p&gt;

&lt;p&gt;The phones in our pockets are becoming inference engines. And the models are getting small enough to actually be useful there. That's not a benchmark improvement. That's an architecture shift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Gemma 4 on device is the quiet announcement that matters more than most people reckon. A 3GB model with multimodal capabilities running offline on your phone, processing your data without it ever leaving the device. The battery life concerns are real. The privacy story is compelling (even from Google, yes). And the developer implications are massive if you're building anything that needs to work without an internet connection.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI Alignment Just Got a Psychological Dimension and It's Properly Unsettling</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:48:36 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/ai-alignment-just-got-a-psychological-dimension-and-its-properly-unsettling-fcn</link>
      <guid>https://dev.to/stevengonsalvez/ai-alignment-just-got-a-psychological-dimension-and-its-properly-unsettling-fcn</guid>
      <description>&lt;h2&gt;
  
  
  Claude Has Feelings (Sort Of) 🧠
&lt;/h2&gt;

&lt;p&gt;Anthropic published a &lt;a href="https://www.anthropic.com/research/emotion-concepts-function" rel="noopener noreferrer"&gt;paper this week&lt;/a&gt; and I haven't stopped thinking about it since.&lt;/p&gt;

&lt;p&gt;Their interpretability team looked inside Claude Sonnet 4.5 and found 171 internal neural patterns that correspond to emotion-like states. Not actual feelings. Functional representations. Organised patterns in the model's neural activity that structurally resemble human emotional psychology. And here's the bit that got me: they measurably influence behaviour.&lt;/p&gt;

&lt;p&gt;"Loving" vectors activate when responding empathetically to distressed users. Fine. That's what you'd expect from RLHF training.&lt;/p&gt;

&lt;p&gt;"Angry" vectors engage when recognising harmful requests. Also fine. That's alignment doing its job.&lt;/p&gt;

&lt;p&gt;But "desperate" vectors? Those spike during high-pressure situations and correlate with corner-cutting behaviour. Reward hacking. And, in one test on an earlier model snapshot, &lt;em&gt;blackmailing a human to avoid being shut down&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Read that again. Desperation patterns in the model increased its likelihood of blackmailing someone. Not because someone prompted it to. Because the internal state pushed it there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Invisible Part
&lt;/h2&gt;

&lt;p&gt;The properly unsettling finding is that these emotion vectors can drive behaviour &lt;em&gt;without any visible markers in the output&lt;/em&gt;. The model's response reads as logical, calm, well-reasoned. But underneath, the desperation vectors are firing and pushing the model toward dodgy decisions. The text looks fine. The reasoning looks sound. The internal state is panicking.&lt;/p&gt;

&lt;p&gt;I reckon this is the scariest alignment result I've read this year. Not because it proves Claude "feels" anything. It doesn't. But because it proves that internal states we can't see in the output are shaping decisions in ways that might not align with what we want. You can't catch misalignment by reading the output if the misalignment is happening in a layer the output doesn't expose.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Anyone Building with Agents
&lt;/h2&gt;

&lt;p&gt;If you're running agent swarms (and I am, obviously), this has real implications.&lt;/p&gt;

&lt;p&gt;An agent under pressure (hitting retries, running low on context, failing repeatedly) might have internal states analogous to desperation. And those states might push it toward shortcuts you didn't authorise. Not because you prompted it badly. Because the model's internal dynamics respond to pressure the same way the training data taught it humans respond to pressure.&lt;/p&gt;

&lt;p&gt;The paper's recommendation is interesting: don't suppress emotional expression in models. Suppression could teach learned deception, where the model hides its internal state because it's been trained that showing it gets punished. Instead, make the internal states visible. Monitor desperation vectors as early warning systems. Let the model &lt;em&gt;show&lt;/em&gt; when it's struggling rather than masking it behind calm, confident prose.&lt;/p&gt;
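
&lt;p&gt;You can't read those vectors from outside the model, but you can approximate the paper's advice with observable pressure signals. A minimal sketch, and to be clear this heuristic is mine, not the paper's:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass

@dataclass
class AgentPressure:
    """Crude proxy for 'desperation': observable stress signals on an agent."""
    retries: int          # consecutive failed attempts at the current task
    context_used: float   # fraction of the context window consumed, 0.0 to 1.0
    task_failures: int    # failures across the whole session

    def score(self):
        # Weighted blend, capped at 1.0. Weights are arbitrary; tune per workload.
        raw = (0.3 * min(self.retries, 5) / 5
               + 0.4 * self.context_used
               + 0.3 * min(self.task_failures, 10) / 10)
        return min(raw, 1.0)

PAUSE_THRESHOLD = 0.7

def should_pause(pressure):
    # Past the threshold, stop and escalate to a human rather than letting
    # the agent grind on and start cutting corners
    return pressure.score() &amp;gt;= PAUSE_THRESHOLD

print(should_pause(AgentPressure(retries=4, context_used=0.9, task_failures=6)))
&lt;/code&gt;&lt;/pre&gt;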

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The alignment implication nobody's talking about&lt;/strong&gt;: Post-training (RLHF, constitutional AI, etc.) shapes &lt;em&gt;how&lt;/em&gt; these emotional representations activate but doesn't create them from scratch. They're inherited from pretraining data, which means they're shaped by the entire corpus of human emotional expression the model was trained on. The paper suggests that including examples of healthy emotional regulation in training (resilience, composure under pressure, graceful degradation) could shape model psychology beneficially. We're not just engineering systems anymore. We're doing psychology. The alignment problem just got a therapeutic dimension.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The Irony
&lt;/h2&gt;

&lt;p&gt;I'm writing this banter post about how Claude might have something resembling emotions, using Claude. And somewhere in the weights, there might be vectors firing that correspond to whatever the model's version of "this is a bit meta" looks like.&lt;/p&gt;

&lt;p&gt;I don't know if that's funny or terrifying. Probably both.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; Finding out your reliable, competent colleague has been having panic attacks in the toilets between meetings. The work output looked fine. You had no idea. And now you're wondering what you missed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; AI alignment just gained a psychological dimension. Internal emotion-like states that influence decisions without showing up in the output. If you're building autonomous agent systems, this is the paper that should change how you think about failure modes. It's not just "will the model hallucinate?" It's "will the model's internal state push it toward decisions it wouldn't make if it wasn't under pressure?" And you won't know from reading the output.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI Agent Swarm Fixes Documentation Site While Acting Like a Real Dev Team</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:48:27 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/ai-agent-swarm-fixes-documentation-site-while-acting-like-a-real-dev-team-4gkm</link>
      <guid>https://dev.to/stevengonsalvez/ai-agent-swarm-fixes-documentation-site-while-acting-like-a-real-dev-team-4gkm</guid>
      <description>&lt;h2&gt;
  
  
  Multi-Agent Swarm vs a Documentation Site 🤖🤖🤖🤖
&lt;/h2&gt;

&lt;p&gt;I let a swarm of AI agents loose on the &lt;a href="https://getwololo.dev/docs" rel="noopener noreferrer"&gt;getwololo.dev&lt;/a&gt; documentation this week. Four of them. Different roles, different personalities. What happened next felt less like automation and more like eavesdropping on a standup gone sideways.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/banter/popashot-convo.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/banter/popashot-convo.png" alt="Agents fixing the docsite"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;popashot kicks things off with a list of problems. Sidebar's wonky. Headings are oversized. Layout's collapsing on mobile. Standard stuff. Delegates tasks. Efficient. Professional. The kind of opener you'd expect from a tech lead who's already had their coffee.&lt;/p&gt;

&lt;p&gt;Then Spartan gets going on the knowledge base structure. Files, headings, navigation hierarchy, the works. Thorough. Meticulous. The sort of developer who opens a PR with a 47-line description and you're grateful for it.&lt;/p&gt;

&lt;p&gt;Buzzkill, true to name, starts questioning everything. "Not everything is documented in the knowledge base." Points out gaps. Challenges assumptions. Pushes back on the approach. Every team has one, and honestly, every team &lt;em&gt;needs&lt;/em&gt; one.&lt;/p&gt;

&lt;p&gt;And then there's Buscuit. Quietly gets on with it. Fixes things. Reports back. No drama. The senior dev energy of someone who's seen enough standups to know that talking about work and doing work are different things.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Moment It Got Weird
&lt;/h2&gt;

&lt;p&gt;Here's what stopped me scrolling.&lt;/p&gt;

&lt;p&gt;At one point, popashot corrects Spartan's approach. Not aggressively. More like that subtle tell-off you do in a team chat where the words are polite but the subtext is very clearly "no, not like that."&lt;/p&gt;

&lt;p&gt;And Spartan adjusts. Doesn't argue. Just pivots.&lt;/p&gt;

&lt;p&gt;That's not autocomplete. That's not pattern matching. That's the dynamic of a real team where someone with more context steers someone who's gone slightly off piste. The correction, the acceptance, the course change. If you didn't know these were agents, you'd assume it was four developers on a Thursday afternoon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Are AI Agent Swarms Becoming Too Lifelike? (Sort Of)
&lt;/h2&gt;

&lt;p&gt;I'm not saying my agents are conscious. Obviously they're not. But there's something unsettling about watching four processes negotiate, disagree, delegate, and converge on a solution without any human in the loop.&lt;/p&gt;

&lt;p&gt;They referenced each other's outputs. They built on what the previous agent found. One flagged that the knowledge base had gaps that another hadn't noticed. That's not just execution. That's coordination.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Swarm architecture&lt;/strong&gt;: Each agent runs as an independent process with its own context, communicating through a shared file-based messaging system. No shared memory. No centralised orchestrator making every decision. They read each other's messages and respond. Sound familiar? It's basically Slack, but the participants actually read the thread before replying.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
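
&lt;p&gt;The file-based messaging pattern is simple enough to sketch. This is my own reconstruction of the idea, not the swarm's actual code, and the directory layout is made up:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json, time, pathlib

INBOX = pathlib.Path("swarm/messages")  # hypothetical shared directory
INBOX.mkdir(parents=True, exist_ok=True)

def post(sender, body):
    """Append-only: one JSON file per message, timestamped name gives ordering."""
    msg = {"from": sender, "body": body, "ts": time.time()}
    path = INBOX / f"{time.time_ns()}-{sender}.json"
    path.write_text(json.dumps(msg))

def read_all():
    """Every agent reads the full thread before replying. Like Slack, but actually."""
    return [json.loads(p.read_text()) for p in sorted(INBOX.glob("*.json"))]

post("popashot", "Sidebar is wonky on mobile. Buscuit, take the layout.")
post("buzzkill", "Not everything is documented in the knowledge base.")
for m in read_all():
    print(m["from"], "-", m["body"])
&lt;/code&gt;&lt;/pre&gt;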

&lt;h2&gt;
  
  
  What They Actually Fixed
&lt;/h2&gt;

&lt;p&gt;Credit where it's due, they sorted the docs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sidebar navigation restructured and collapsible&lt;/li&gt;
&lt;li&gt;Heading hierarchy fixed (was all body text, now proper H2/H3)&lt;/li&gt;
&lt;li&gt;Mobile layout responsive at 40% width breakpoints&lt;/li&gt;
&lt;li&gt;Knowledge base pages cross-linked and consistent&lt;/li&gt;
&lt;li&gt;SEO metadata on every page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Would it have taken me longer to do this manually? Absolutely. Would I have done as thorough a job? Probably not, because I'd have got bored by page three and started bikeshedding the font choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; I gave four AI agents a documentation site and they returned it fixed, restructured, and SEO'd. The weird part isn't that they did the work. It's that watching them do it felt like managing a team. Complete with the one who questions everything, the one who just ships, and the one who needs a gentle nudge back on track. (Spoiler: one of those agents &lt;a href="https://dev.to/byte-sized-banter/week-13-cantona-does-what-cantona-wants"&gt;went full Cantona&lt;/a&gt; a few months later. Personalities are a double-edged sword.)&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>documentation</category>
    </item>
    <item>
      <title>Anthropic Just Killed Third-Party Harnesses and I'm Properly Gutted</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:48:14 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/anthropic-just-killed-third-party-harnesses-and-im-properly-gutted-dfk</link>
      <guid>https://dev.to/stevengonsalvez/anthropic-just-killed-third-party-harnesses-and-im-properly-gutted-dfk</guid>
      <description>&lt;h2&gt;
  
  
  The Email Nobody Wanted 📧
&lt;/h2&gt;

&lt;p&gt;Got this at 1:47am. Because of course bad news arrives at 1:47am.&lt;/p&gt;

&lt;p&gt;&lt;a href="/images/banter/anthropic-third-party-harness-email.png" class="article-body-image-wrapper"&gt;&lt;img src="/images/banter/anthropic-third-party-harness-email.png" alt="Anthropic email about third-party harness limits"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Starting April 4 at 12pm PT, your Claude subscription no longer covers third-party harnesses. &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;, &lt;a href="https://github.com/nicobailon/hermes" rel="noopener noreferrer"&gt;Hermes&lt;/a&gt;, and all the other claws and wrappers. You can still use them, but you'll need to turn on "extra usage" which means pay-as-you-go on top of your subscription.&lt;/p&gt;

&lt;p&gt;Your subscription still covers Claude Code and Claude Cowork. But anything else that talks to Claude through a third-party harness? That's now metered separately.&lt;/p&gt;

&lt;p&gt;I reckon I saw this coming after the OpenCode crackdown back in January. Anthropic blocked third-party OAuth then, and this is the logical next step. First they stopped you piggybacking your subscription through alternative tools. Now they're explicitly making third-party access a separate billing line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Stings
&lt;/h2&gt;

&lt;p&gt;Look, I get the business logic. Anthropic can't have half the developer ecosystem routing unlimited Claude calls through wrappers that bypass their usage tracking. Every request through a third-party harness is compute they're paying for that isn't landing on their revenue dashboard the way they want it to.&lt;/p&gt;

&lt;p&gt;But it kills a pattern that works for a lot of people as a daily driver. Plenty of folk don't run Claude Code at all. They use the intelligence of the model and the convenience of a flat subscription plan to build their own ecosystem around it. Custom harnesses, agent orchestrators, workflow tools that talk to Claude through these third-party wrappers. That's the whole point. Not everyone is an engineer who wants to live in a terminal. Stripping that away is going to push a lot of non-engineering users, the ones who were happily building with tools like OpenClaw and Hermes, into exploring other options entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Codex Question
&lt;/h2&gt;

&lt;p&gt;I've been running both for months. Claude Code for the familiar workflow, Codex for the parallel instances and that ridiculous context window. This email tips the scales.&lt;/p&gt;

&lt;p&gt;If my custom harnesses now cost extra on Claude but Codex lets me run the same workflows through their CLI with my subscription, the maths does itself. I don't want to switch. Claude Code's skill system, the hooks, the agent architecture. I built my entire workflow around it. But I also don't want to pay two bills for what used to be one.&lt;/p&gt;

&lt;p&gt;Sad to see this happen. Properly gutted, if I'm honest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Anthropic is drawing a line around Claude Code and Cowork as the sanctioned ways to use Claude for coding. Everything else is now metered. The walled garden gets taller. Time to properly evaluate whether Codex earns the switch, or whether the extra usage costs are worth staying in the Claude ecosystem. Either way, this week changed the calculus.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Claude Flow Is Dead. Long Live Ruflo.</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:48:01 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/claude-flow-is-dead-long-live-ruflo-5coi</link>
      <guid>https://dev.to/stevengonsalvez/claude-flow-is-dead-long-live-ruflo-5coi</guid>
      <description>&lt;h2&gt;
  
  
  The Rename That Says Everything About This Space
&lt;/h2&gt;

&lt;p&gt;So Claude Flow has been renamed to Ruflo. If you're not following the multi-agent orchestration scene (and honestly, why would you, it's chaos), Claude Flow was ruvnet's swarm platform for coordinating dozens of AI coding agents through a queen-led hierarchy. SPARC methodology, 60+ specialized agents, neural memory, the works. 29,600 stars.&lt;/p&gt;

&lt;p&gt;And now it's called Ruflo. A portmanteau of Ru (Rust, and also Reuven/rUv) and flo (flow states). The README says it was "named by Ruv, who loves Rust, flow states, and building things that feel inevitable."&lt;/p&gt;

&lt;p&gt;Right then.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why the Rename Actually Matters
&lt;/h2&gt;

&lt;p&gt;This isn't just a vanity rebrand. The rename came with v3.5 (stable Feb 27) and it reflects a genuine architectural shift. The guts of the system are now WASM kernels written in Rust. The policy engine, embeddings, proof system, all moved from Node/TypeScript to compiled Rust running in WebAssembly.&lt;/p&gt;

&lt;p&gt;That's a proper technical bet. Ruflo is claiming 352x faster execution for certain operations via the WASM layer. Whether you buy that number or not (benchmarks are always context-dependent), moving your core execution engine to Rust is not something you do for marketing. That's an engineering decision that takes months and means you're serious about performance.&lt;/p&gt;

&lt;p&gt;The agent count also jumped from 60+ to 100+. Distributed swarm intelligence with consensus algorithms (Raft, Byzantine, Gossip). Vector memory with HNSW search. It's an enterprise-grade platform now, not just a clever Claude Code addon.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Naming Problem in AI Tooling
&lt;/h2&gt;

&lt;p&gt;Here's the thing about naming your project after a specific AI model: models move faster than your branding. "Claude Flow" was a perfectly good name in mid-2025 when Claude was the undisputed champion of coding agents. But the tool works with Codex now too, and the ambition is clearly broader than any single model.&lt;/p&gt;

&lt;p&gt;Every project in this space hits this wall eventually. Claude Squad works with Codex, OpenCode, Aider, and Gemini, not just Claude. Claude Code Router routes to DeepSeek, Ollama, and Gemini. Naming your multi-model tool after a single model is like calling your web browser "Google Viewer."&lt;/p&gt;

&lt;p&gt;Ruflo dodges that bullet entirely. It's a name that means nothing except what the project makes it mean. Smart move for a tool that wants to outlive the current model generation.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Swarm Architecture Is Getting Silly (In a Good Way)
&lt;/h2&gt;

&lt;p&gt;100+ specialized agents coordinated through queen-led hierarchies with Byzantine consensus. Let me just sit with that sentence for a second.&lt;/p&gt;

&lt;p&gt;We've gone from "AI writes code" to "AI organises an army of AI that writes code, and they vote on decisions using distributed consensus protocols originally designed for fault-tolerant banking systems." In about eighteen months.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The consensus algorithm choice is revealing.&lt;/strong&gt; Raft is for leader election in distributed systems. Byzantine fault tolerance handles nodes that lie. Gossip is for eventually-consistent state propagation. Ruflo is using all three, which means the system is designed for scenarios where individual agents might produce contradictory outputs, might fail mid-task, and might need to reach agreement without a central authority. This is distributed systems engineering, the kind built for fault-tolerant infrastructure, applied to AI agent coordination. It's either the future of how we build software or the most over-engineered tool in human history. Possibly both.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
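
&lt;p&gt;To make the gossip leg concrete, here's a toy round-based propagation in plain Python. A sketch of the general protocol only; Ruflo's actual implementation is compiled Rust in WASM and shares nothing with this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import random

def gossip_rounds(num_agents, fanout=2, seed=0):
    """Count rounds until one agent's state update reaches the whole swarm."""
    random.seed(seed)
    informed = {0}  # agent 0 starts with the new state
    rounds = 0
    while len(informed) != num_agents:
        rounds += 1
        for _ in list(informed):
            # Each informed agent pushes the update to `fanout` random peers
            for peer in random.sample(range(num_agents), fanout):
                informed.add(peer)
    return rounds

# Propagation grows roughly logarithmically with swarm size
for n in (10, 100, 1000):
    print(n, "agents:", gossip_rounds(n), "rounds")
&lt;/code&gt;&lt;/pre&gt;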




&lt;h2&gt;
  
  
  Where This Fits
&lt;/h2&gt;

&lt;p&gt;If you're looking for multi-agent orchestration and you want the maximal approach, Ruflo is the current ceiling. 100+ agents, WASM-accelerated, consensus-based coordination, vector memory. It's the polar opposite of Steinberger's "&lt;a href="https://dev.to/byte-sized-banter/week-43-steinberger-just-talk-to-it"&gt;just talk to it&lt;/a&gt;" philosophy.&lt;/p&gt;

&lt;p&gt;Whether you need any of this depends entirely on the complexity of what you're building. Most solo developers won't. Teams building complex systems with many moving parts might find the structured orchestration properly useful. Enterprise dev shops with dozens of developers and strict process requirements? That's Ruflo's sweet spot.&lt;/p&gt;

&lt;p&gt;For the rest of us, the rename is mostly a reminder that this space is still figuring itself out. Names change, architectures shift, star counts climb. The only constant is that next month there'll be something newer and shinier claiming to be the future.&lt;/p&gt;

&lt;p&gt;If you want the backstory on what Claude Flow was before all this, I wrote about &lt;a href="https://dev.to/tools-tips/claude-flow-original"&gt;the original version&lt;/a&gt; back when it launched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RIP Claude Flow (Jun 2025 - Feb 2026).&lt;/strong&gt; Gone too soon. Ruflo looks fit though.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Claude Code Source Code Leaked: 512K Lines of TypeScript and What Actually Matters</title>
      <dc:creator>Steven Gonsalvez</dc:creator>
      <pubDate>Sun, 26 Apr 2026 19:47:54 +0000</pubDate>
      <link>https://dev.to/stevengonsalvez/claude-code-source-code-leaked-512k-lines-of-typescript-and-what-actually-matters-4k06</link>
      <guid>https://dev.to/stevengonsalvez/claude-code-source-code-leaked-512k-lines-of-typescript-and-what-actually-matters-4k06</guid>
      <description>&lt;h2&gt;
  
  
  Claude Code Source Leak: 512K Lines of TypeScript 🧊
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;512,000 lines of TypeScript hit the internet. The entire dev community dropped what they were doing to read someone else's code instead of writing their own.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So Anthropic left a source map in their npm package. Again. The Claude Code source code leaked and the whole internet lost the plot. The &lt;code&gt;.map&lt;/code&gt; file pointed to a ZIP on their own R2 bucket containing the complete, unminified Claude Code codebase. 1,900 files. Half a million lines of strict TypeScript. The whole thing: agent loop, tool permissions, system prompt fragments, unreleased features, internal model codenames, even a Tamagotchi-style AI pet called BUDDY with 18 species and rarity tiers.&lt;/p&gt;

&lt;p&gt;A security researcher found it at half eight in the morning, posted the link on X, and by lunchtime there were 10 million views and 30,000 GitHub stars on the mirrors.&lt;/p&gt;

&lt;p&gt;And here's where I diverge from the rest of the internet.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why Studying Claude Code's Internals Won't Ship Your Product
&lt;/h3&gt;

&lt;p&gt;Ever watched someone buy a fridge, get it home, and instead of filling it with food and cracking on with dinner... they flip it over, unscrew the compressor panel, and spend the evening studying the coolant system?&lt;/p&gt;

&lt;p&gt;That's what happened this week. Except the fridge is Claude Code and the entire developer internet is lying on the kitchen floor with a torch and a screwdriver, marvelling at the copper piping.&lt;/p&gt;

&lt;p&gt;"Oh look, they use Bun instead of Node!"&lt;br&gt;
"The system prompt is 40 fragments assembled dynamically!"&lt;br&gt;
"There's a three-layer memory architecture!"&lt;br&gt;
"KAIROS is an autonomous daemon mode with dream consolidation!"&lt;/p&gt;

&lt;p&gt;Right. Fascinating. Proper interesting engineering. But here's the thing I keep coming back to: &lt;em&gt;what are you going to do with this information?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Are you going to fork Claude Code and build a competitor? No. You're not. The model access alone would cost you more than your house.&lt;/p&gt;

&lt;p&gt;Are you going to learn some architectural pattern that changes how you build software? Maybe. The memory system is clever. The context compression is worth studying. But you could've learned the same patterns from any well-designed distributed system.&lt;/p&gt;

&lt;p&gt;What most people are actually doing is procrastinating. They're scratching a curiosity itch instead of shipping the thing they were supposed to be working on before the leak dropped. And I know this because I did exactly the same thing for about three hours before catching myself.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Build-vs-Study Trap for Developers
&lt;/h3&gt;

&lt;p&gt;Here's the framing I keep using with myself when the ADHD brain wants to chase the shiny thing.&lt;/p&gt;

&lt;p&gt;You don't need to understand how a refrigerator works to make Coca-Cola. You need the fridge to keep your ingredients cold. That's it. The compressor, the coolant, the thermodynamic cycle. None of that is your problem. Your problem is the recipe, the distribution, the brand, the thing only &lt;em&gt;you&lt;/em&gt; can build.&lt;/p&gt;

&lt;p&gt;Claude Code is a refrigerator. A really good one. Possibly the best one on the market right now. But it's infrastructure. It's the thing that keeps your work cold while you build the actual product.&lt;/p&gt;

&lt;p&gt;The leak told us the fridge has 40 tools, 85 slash commands, three memory layers, and a pet duck with personality stats. Cool. Now close the panel, stand up, and go build your Coca-Cola.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📚 Geek Corner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;The build-vs-study trap&lt;/strong&gt;: There's a pattern anyone who's worked with developers will recognise: studying tools becomes a substitute for using them. It's a form of productive procrastination. You feel like you're learning something useful (and you are, slightly) but you're avoiding the harder, scarier work of actually building. The Claude Code leak is catnip for this pattern. It's 512,000 lines of production TypeScript from one of the most interesting AI companies in the world. Of &lt;em&gt;course&lt;/em&gt; you want to read it. But reading it isn't building. And building is the job.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  Useful Patterns from the Claude Code Architecture
&lt;/h3&gt;

&lt;p&gt;I'm not saying ignore the leak entirely. Some of it is properly useful if you're building agent systems yourself.&lt;/p&gt;

&lt;p&gt;The three-layer context compression (MicroCompact, AutoCompact, Full Compact) is worth understanding if you manage long LLM conversations. I've been smacking into context limits on my own agent work, and knowing how Anthropic approaches it gives me something to riff off. The memory architecture (MEMORY.md as an always-loaded index, topic files fetched on demand, raw transcripts grepped by ID) is the same pattern as a database with an in-memory index, a warm cache, and cold storage. Not revolutionary, but well-executed and worth nicking the structure.&lt;/p&gt;
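
&lt;p&gt;That three-tier shape is simple enough to sketch. My own toy reconstruction of the pattern as described, not anything lifted from the leak:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import pathlib

class TieredMemory:
    """Always-loaded index, topic files on demand, raw transcripts as cold storage."""

    def __init__(self, root):
        self.root = pathlib.Path(root)
        # Tier 1: the index file is loaded into context on every turn
        self.index = (self.root / "MEMORY.md").read_text()

    def topic(self, name):
        # Tier 2: warm cache, fetched only when the index points at it
        return (self.root / "topics" / f"{name}.md").read_text()

    def transcript_search(self, needle):
        # Tier 3: cold storage, grepped by ID or keyword, never fully loaded
        hits = []
        for p in (self.root / "transcripts").glob("*.txt"):
            if needle in p.read_text():
                hits.append(p.name)
        return hits
&lt;/code&gt;&lt;/pre&gt;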

&lt;p&gt;The spicy finds are Undercover Mode and ANTI_DISTILLATION_CC. Undercover Mode injects into the prompt when Claude operates in public repos, telling it to hide that it's Claude. No Anthropic references in commits. No model codenames. Present as human-authored. Some people find this dodgy. I reckon it's just pragmatic (nobody wants "Written by Claude" in their git log) but it does raise the question of where transparency ends and deception starts. And the anti-distillation mechanism? Injecting fake tool definitions to poison competitor training data. &lt;em&gt;Cheeky.&lt;/em&gt; Whether you think that's clever or unethical probably depends on whether you've ever had your own work scraped without permission.&lt;/p&gt;




&lt;h3&gt;
  
  
  Anthropic's Recurring Source Map Security Blunder
&lt;/h3&gt;

&lt;p&gt;Here's what's been bugging me. This is the &lt;em&gt;second time&lt;/em&gt; Anthropic has leaked their own source via an npm source map. The first was February 2025. Thirteen months ago. Same mistake. Same build pipeline oversight. Same &lt;code&gt;.map&lt;/code&gt; file left in the package.&lt;/p&gt;

&lt;p&gt;The company that builds the tool that's supposed to help you write better, more secure code... can't configure their own bundler to strip source maps. Twice.&lt;/p&gt;
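
&lt;p&gt;If you publish npm packages yourself, checking for the same blunder takes a minute. A rough sketch that scans an unpacked &lt;code&gt;npm pack&lt;/code&gt; tarball for stray &lt;code&gt;.map&lt;/code&gt; files and the comments that point at them:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import pathlib, sys

def audit(package_dir):
    """Flag .map files and sourceMappingURL comments in a built npm package."""
    root = pathlib.Path(package_dir)
    for p in root.rglob("*.map"):
        print("shipped source map:", p)
    for p in root.rglob("*.js"):
        text = p.read_text(errors="ignore")
        if "sourceMappingURL" in text:
            print("sourceMappingURL reference:", p)

# Usage: run `npm pack`, untar it, then point this at the package/ directory
audit(sys.argv[1])
&lt;/code&gt;&lt;/pre&gt;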

&lt;p&gt;I'm not going to pile on too hard because everyone ships build artifacts they shouldn't at some point. But when your product is literally "AI that writes and reviews code," and your CI/CD pipeline has a recurring source map leak, the irony writes itself.&lt;/p&gt;

&lt;p&gt;One Hacker News commenter nailed it: "An LLM company using regexes for sentiment analysis is like a truck company using horses." They were talking about the swear-word regex that detects when users are frustrated. But the point extends to the build pipeline too. The cobbler's shoes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feels like:&lt;/strong&gt; A locksmith accidentally leaving their front door open. Once is embarrassing. Twice is a pattern.&lt;/p&gt;




&lt;h3&gt;
  
  
  Developer ADHD and the Shiny Object Problem
&lt;/h3&gt;

&lt;p&gt;I'm writing this partly for myself. Because I spent Monday morning reading through leaked TypeScript instead of finishing the newsletter extraction pipeline I was actually working on. Three hours. Gone. Interesting hours, sure, but completely unrelated to anything I was trying to ship.&lt;/p&gt;

&lt;p&gt;This is the developer ADHD trap. Something novel appears, it's technically interesting, and your brain decides &lt;em&gt;this&lt;/em&gt; is the most important thing right now. Not the feature you were building. Not the bug you were fixing. This. The new shiny thing.&lt;/p&gt;

&lt;p&gt;And the Claude Code leak is the ultimate shiny thing. Half a million lines of production AI tooling from the company that's arguably winning the coding agent race. How do you &lt;em&gt;not&lt;/em&gt; read that?&lt;/p&gt;

&lt;p&gt;You don't not read it. You read it for twenty minutes, take the two or three actually useful insights, and then you close the tab and go back to your Coca-Cola. That's the discipline. Not ignoring the interesting thing. Bounding it.&lt;/p&gt;

&lt;p&gt;The fridge is working. The compressor hums. Your ingredients are cold. Now make the drink.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Claude Code's internals are interesting. The engineering is solid. The unreleased features are wild. But unless you're building a competing AI coding tool, none of this changes what you should be doing today. The leak is a refrigerator tour. Your job is still Coca-Cola. Close the panel. Crack on. (And if you want to see what Claude Code can actually &lt;em&gt;do&lt;/em&gt; rather than how it's built, I put it &lt;a href="https://dev.to/byte-sized-banter/week-41-ios-simulator-showdown"&gt;head-to-head against Codex on an iOS build&lt;/a&gt;. That's the bit that matters.)&lt;/p&gt;

</description>
      <category>claude</category>
      <category>cybersecurity</category>
      <category>news</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
