<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arcede</title>
    <description>The latest articles on DEV Community by Arcede (@arcede).</description>
    <link>https://dev.to/arcede</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3847053%2F92a1d421-15f1-4e59-9c97-d1a28a08ce7c.png</url>
      <title>DEV Community: Arcede</title>
      <link>https://dev.to/arcede</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arcede"/>
    <language>en</language>
    <item>
      <title>Has anyone actually measured the ROI of training their team on AI tools?</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 19:10:03 +0000</pubDate>
      <link>https://dev.to/arcede/has-anyone-actually-measured-the-roi-of-training-their-team-on-ai-tools-1lal</link>
      <guid>https://dev.to/arcede/has-anyone-actually-measured-the-roi-of-training-their-team-on-ai-tools-1lal</guid>
      <description>&lt;p&gt;I keep seeing posts about companies adopting AI tools — copilots, chatbots, automation suites — but almost nobody talks about what happened &lt;em&gt;after&lt;/em&gt; they rolled it out.&lt;/p&gt;

&lt;p&gt;We work with mid-size teams (15-50 people) helping them get past the "we bought licenses but nobody uses them properly" phase. The pattern we see over and over:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Month 1: Excitement, everyone tries the tool&lt;/li&gt;
&lt;li&gt;Month 2: Usage drops 60% because nobody was trained on &lt;em&gt;workflows&lt;/em&gt;, just features&lt;/li&gt;
&lt;li&gt;Month 3: Leadership asks "was this worth it?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The teams that actually get ROI treat it like onboarding a new hire: structured curriculum, practice on real tasks, measurable checkpoints.&lt;/p&gt;

&lt;p&gt;Has anyone here actually tracked hours saved or output quality before/after structured AI training for their team? Would love to hear what worked (or didn't).&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Genuinely curious about your experience — not selling anything here, just trying to understand what separates teams that get real value from the ones that don't.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>ai</category>
      <category>productivity</category>
      <category>management</category>
    </item>
    <item>
      <title>What is the actual cost of your team figuring out AI on their own?</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 18:08:42 +0000</pubDate>
      <link>https://dev.to/arcede/what-is-the-actual-cost-of-your-team-figuring-out-ai-on-their-own-4fme</link>
      <guid>https://dev.to/arcede/what-is-the-actual-cost-of-your-team-figuring-out-ai-on-their-own-4fme</guid>
      <description>&lt;p&gt;We did the math on what it costs when a team of 10 learns AI tools through trial and error vs. structured training.&lt;/p&gt;

&lt;p&gt;The numbers were uncomfortable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The hidden costs we found:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average employee spends 45 min/day on AI prompts that produce unusable output (that's ~160 hours/year per person)&lt;/li&gt;
&lt;li&gt;Senior staff spend 2-3 hours/week reviewing and fixing junior AI-generated work&lt;/li&gt;
&lt;li&gt;Three departments bought overlapping AI subscriptions nobody coordinated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The total? Roughly $130K/year in a 25-person company. Not on AI tools — on &lt;em&gt;using AI tools badly&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Structured training (even just 4 hours) cut the waste roughly in half within the first month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question for the community:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Have you tried to quantify the cost of unstructured AI adoption?&lt;/li&gt;
&lt;li&gt;What was the moment you realized self-serve AI access was not enough?&lt;/li&gt;
&lt;li&gt;For those who invested in training — what format worked? (workshops, async courses, 1-on-1 coaching?)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Would love to hear real numbers, not vendor pitches.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>ai</category>
      <category>productivity</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>How do you measure the cost of your team doing things the slow way?</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 16:09:49 +0000</pubDate>
      <link>https://dev.to/arcede/how-do-you-measure-the-cost-of-your-team-doing-things-the-slow-way-259d</link>
      <guid>https://dev.to/arcede/how-do-you-measure-the-cost-of-your-team-doing-things-the-slow-way-259d</guid>
      <description>&lt;p&gt;We recently did an exercise with a 10-person service team: tracked how many hours each person spent on tasks they &lt;em&gt;knew&lt;/em&gt; could be done faster — reformatting reports, searching Slack for links, re-explaining processes to new hires.&lt;/p&gt;

&lt;p&gt;The average was 5 hours per person per week. At a $50/hr blended cost, that is $130,000/year just in wasted motion.&lt;/p&gt;

&lt;p&gt;The interesting part: most of these were not technology problems. The team had Notion, Slack, ChatGPT, and every tool you can name. The gap was that nobody had shown them how to use these tools &lt;em&gt;together&lt;/em&gt; effectively.&lt;/p&gt;

&lt;p&gt;A few 2-hour training sessions later, the number dropped to about 2 hours/week per person. Simple ROI math.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Questions for the community:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Has anyone done this kind of audit on their own team? What surprised you?&lt;/li&gt;
&lt;li&gt;What is the single biggest time sink you see in your day-to-day?&lt;/li&gt;
&lt;li&gt;Do you think the problem is more about tools or about skills?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Genuinely curious — we are building better ways to measure and fix this, and real-world stories help more than any framework.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>productivity</category>
      <category>career</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>Your Team Has AI Tools. They Don't Have AI Skills. That's the Expensive Part.</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 12:08:58 +0000</pubDate>
      <link>https://dev.to/arcede/your-team-has-ai-tools-they-dont-have-ai-skills-thats-the-expensive-part-2m0c</link>
      <guid>https://dev.to/arcede/your-team-has-ai-tools-they-dont-have-ai-skills-thats-the-expensive-part-2m0c</guid>
      <description>&lt;p&gt;Everyone's talking about which AI model is best. Nobody's asking the harder question: does your team know how to use &lt;em&gt;any&lt;/em&gt; of them effectively?&lt;/p&gt;

&lt;h2&gt;
  
  
  The $240/month question
&lt;/h2&gt;

&lt;p&gt;The average team spends $240/month per seat on AI tools. Most can't point to a single workflow that's actually faster because of it.&lt;/p&gt;

&lt;p&gt;That's not a tool problem. It's a training problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I keep seeing
&lt;/h2&gt;

&lt;p&gt;After working with dozens of service businesses on AI adoption, here's the pattern:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 1:&lt;/strong&gt; Team gets excited, experiments with ChatGPT&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Week 4:&lt;/strong&gt; Usage drops 60%&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Week 8:&lt;/strong&gt; "AI doesn't work for our business"&lt;/p&gt;

&lt;p&gt;The missing step? Nobody showed them how to apply it to &lt;em&gt;their&lt;/em&gt; actual workflows. Not generic prompt tips — specific, role-based training on the tools they already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real ROI calculation
&lt;/h2&gt;

&lt;p&gt;When someone learns to automate a 2-hour weekly report into a 10-minute review, that's 90 hours/year reclaimed. At $50/hour loaded cost, that's $4,500 from one person learning one thing.&lt;/p&gt;

&lt;p&gt;Multiply that across a 10-person team learning 3-4 workflows each, and training pays for itself in the first month.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question for the community
&lt;/h2&gt;

&lt;p&gt;For those who've rolled out AI tools to a team: &lt;strong&gt;what was the single biggest factor in whether people actually kept using them?&lt;/strong&gt; Was it the tool choice, the training approach, management buy-in, or something else entirely?&lt;/p&gt;

&lt;p&gt;Genuinely curious — drop your experience in the comments.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I run AI training programs for service businesses. Happy to share specific frameworks if useful.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>business</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Why Your Service Business Still Runs on Tribal Knowledge (And What to Do About It)</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 11:08:34 +0000</pubDate>
      <link>https://dev.to/arcede/why-your-service-business-still-runs-on-tribal-knowledge-and-what-to-do-about-it-33op</link>
      <guid>https://dev.to/arcede/why-your-service-business-still-runs-on-tribal-knowledge-and-what-to-do-about-it-33op</guid>
      <description>&lt;p&gt;Every service business has that one person who "just knows" how everything works. The invoicing quirks. The client preferences. The workaround for that system that never got fixed.&lt;/p&gt;

&lt;p&gt;When that person goes on vacation, things break. When they leave, institutional panic sets in.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost
&lt;/h2&gt;

&lt;p&gt;A 20-person service company (IT services, logistics, consulting) typically loses &lt;strong&gt;$2,000-4,000/week&lt;/strong&gt; to knowledge gaps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New hires taking 3-6 months to reach full productivity&lt;/li&gt;
&lt;li&gt;Repeated mistakes because the process lives in someone's head&lt;/li&gt;
&lt;li&gt;Senior staff spending 30% of their time answering the same questions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;The companies that fix this don't buy a fancy knowledge management tool. They do three things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Record the "obvious" stuff first.&lt;/strong&gt; The things your veterans think everyone knows? Write those down. That's where 80% of the value is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Make training operational, not theoretical.&lt;/strong&gt; Nobody reads a 40-page manual. Build 15-minute walkthroughs of your actual tools with your actual data. Show the new hire exactly what to click.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Automate the repetitive judgment calls.&lt;/strong&gt; If your team makes the same decision 50 times a day based on the same criteria, that's not expertise — that's a process begging to be systematized.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Multiplier
&lt;/h2&gt;

&lt;p&gt;Once you capture tribal knowledge into repeatable training, every new hire gets productive in weeks instead of months. That compounds.&lt;/p&gt;

&lt;p&gt;A 20-person team saving 5 hours/person/week = &lt;strong&gt;5,200 recovered hours/year.&lt;/strong&gt; At average billing rates, that's $260K-520K in capacity you're already paying for.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We help service businesses turn tribal knowledge into structured team training. If your onboarding takes months instead of weeks, &lt;a href="https://arcede.com" rel="noopener noreferrer"&gt;let's talk&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>business</category>
      <category>productivity</category>
      <category>management</category>
      <category>startup</category>
    </item>
    <item>
      <title>The $130K Line Item Nobody Tracks: What Untrained Teams Actually Cost</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 10:12:11 +0000</pubDate>
      <link>https://dev.to/arcede/the-130k-line-item-nobody-tracks-what-untrained-teams-actually-cost-5fh4</link>
      <guid>https://dev.to/arcede/the-130k-line-item-nobody-tracks-what-untrained-teams-actually-cost-5fh4</guid>
      <description>&lt;p&gt;Most companies track software costs down to the penny. Seats, licenses, usage-based billing — it all gets a line in the spreadsheet.&lt;/p&gt;

&lt;p&gt;But nobody tracks the cost of &lt;em&gt;not knowing how to use the tools you already paid for&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Math Nobody Does
&lt;/h2&gt;

&lt;p&gt;Average knowledge worker salary: $75K loaded.&lt;br&gt;
That is about $36/hour.&lt;/p&gt;

&lt;p&gt;Time spent on tasks that could be automated or done 5x faster with proper technique: ~2.5 hours/day (conservative, based on time-tracking studies across 50+ mid-market deployments).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per person per year:&lt;/strong&gt; 2.5 hrs × 250 workdays × $36 = &lt;strong&gt;$22,500 in recoverable productivity.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A 10-person team? That is $225,000/year in productivity sitting on the table.&lt;/p&gt;
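The arithmetic above fits in a few lines. A minimal sketch, using the post's assumed figures (loaded salary, recoverable hours, workdays) rather than measured data:

```python
# Back-of-envelope recoverable-productivity estimate.
# All inputs are the post's stated assumptions, not measurements.

LOADED_SALARY = 75_000                 # $/year, loaded cost per knowledge worker
HOURLY = round(LOADED_SALARY / 2_080)  # 40 h/week * 52 weeks -> ~$36/hour

RECOVERABLE_HRS_PER_DAY = 2.5          # tasks that could be automated or done 5x faster
WORKDAYS = 250
TEAM_SIZE = 10

per_person = RECOVERABLE_HRS_PER_DAY * WORKDAYS * HOURLY
print(f"${per_person:,.0f} per person/year")           # $22,500
print(f"${per_person * TEAM_SIZE:,.0f} for the team")  # $225,000
```

Swap in your own salary band and team size; the structure of the calculation is the point, not the specific constants.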

&lt;p&gt;And these are not hypothetical gains. The techniques exist today — most teams just never got trained on them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "Training" Actually Means
&lt;/h2&gt;

&lt;p&gt;Not a 2-hour webinar. Not a Slack channel full of tips nobody reads.&lt;/p&gt;

&lt;p&gt;Effective training looks like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Department-specific playbooks&lt;/strong&gt; — Sales needs different techniques than Finance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real scenarios from their actual workflow&lt;/strong&gt; — not toy examples&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool-to-task mapping&lt;/strong&gt; — which tool for which job (they are not interchangeable)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;30-day rollout with checkpoints&lt;/strong&gt; — because habits take time to form&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The ROI math works because even a 20% improvement in technique adoption recovers the training cost in the first week.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Question
&lt;/h2&gt;

&lt;p&gt;If your team has access to AI tools but no structured training on how to use them, you are paying for a gym membership nobody uses.&lt;/p&gt;

&lt;p&gt;The tools are not the bottleneck. The gap between &lt;em&gt;having&lt;/em&gt; tools and &lt;em&gt;using them well&lt;/em&gt; is where the money is.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We built a free ROI calculator that runs these numbers for your specific team size and salary range. DM or check our profile for the link.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://arcede.com" rel="noopener noreferrer"&gt;Arcede&lt;/a&gt; — we train teams and build automation infrastructure.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>startup</category>
      <category>business</category>
      <category>productivity</category>
      <category>management</category>
    </item>
    <item>
      <title>Your Browser Agent Costs More Than Your Intern — Here's the Math</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 09:07:51 +0000</pubDate>
      <link>https://dev.to/arcede/your-browser-agent-costs-more-than-your-intern-heres-the-math-21ok</link>
      <guid>https://dev.to/arcede/your-browser-agent-costs-more-than-your-intern-heres-the-math-21ok</guid>
      <description>&lt;p&gt;Every week, a new "AI browser agent" demo goes viral. It fills out a form! It books a flight! It navigates a checkout flow!&lt;/p&gt;

&lt;p&gt;And every week, engineering teams try to deploy these agents in production — and hit the same wall.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;A typical browser agent exploring an unfamiliar website burns through this loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Take screenshot (~1,500 tokens)&lt;/li&gt;
&lt;li&gt;Reason about what it sees (~500 tokens)&lt;/li&gt;
&lt;li&gt;Decide on an action (~300 tokens)&lt;/li&gt;
&lt;li&gt;Execute action&lt;/li&gt;
&lt;li&gt;Take another screenshot&lt;/li&gt;
&lt;li&gt;Repeat 15-40 times per task&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's &lt;strong&gt;35,000-90,000 tokens per task&lt;/strong&gt;. At current model prices, a single "search for flights on Kayak" costs $0.30-0.80. Run that across 1,000 customers and you're spending $300-800 on something a human does in 45 seconds.&lt;/p&gt;
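Priced out, that loop is easy to model. A rough sketch: the per-step token counts come from the list above, while the blended per-token rate is an assumption, since real pricing varies by model:

```python
# Rough cost model for the screenshot -> reason -> act loop above.
# TOKENS_PER_STEP uses the post's figures; PRICE_PER_MTOK is an
# assumed blended rate, not any specific provider's price sheet.

TOKENS_PER_STEP = 1_500 + 500 + 300   # screenshot + reasoning + action decision
PRICE_PER_MTOK = 9.00                 # assumed $ per 1M tokens

def task_cost(steps: int) -> float:
    """Dollar estimate for one task taking `steps` loop iterations."""
    return steps * TOKENS_PER_STEP * PRICE_PER_MTOK / 1_000_000

for steps in (15, 40):
    print(f"{steps} steps: {steps * TOKENS_PER_STEP:,} tokens, ~${task_cost(steps):.2f}")
```

At 15-40 iterations this lands on the 35K-90K token range and the roughly $0.30-0.80 per-task cost quoted above.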

&lt;p&gt;The problem isn't the model. The problem is that &lt;strong&gt;every agent starts from zero, every time&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What If Agents Could Share What They Learn?
&lt;/h2&gt;

&lt;p&gt;Imagine the first agent that visits kayak.com discovers the search form, the date picker pattern, and the results selector. Now imagine every subsequent agent &lt;em&gt;already knows this&lt;/em&gt; — zero exploration, straight to execution.&lt;/p&gt;

&lt;p&gt;That's not hypothetical. That's what a shared capability layer does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;First visit:&lt;/strong&gt; Agent explores, maps the site, reports what it found&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Second visit:&lt;/strong&gt; Agent gets a pre-verified execution plan, skips exploration entirely&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;100th visit:&lt;/strong&gt; Execution is near-instant, costs drop 95%+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The economics flip. Instead of each agent paying the full exploration tax, the cost is amortized across every agent that touches that domain.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers That Matter
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Solo Agent&lt;/th&gt;
&lt;th&gt;Shared Intelligence&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tokens per task&lt;/td&gt;
&lt;td&gt;35,000-90,000&lt;/td&gt;
&lt;td&gt;2,000-5,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost per task&lt;/td&gt;
&lt;td&gt;$0.30-0.80&lt;/td&gt;
&lt;td&gt;$0.01-0.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time to execute&lt;/td&gt;
&lt;td&gt;30-90 seconds&lt;/td&gt;
&lt;td&gt;2-5 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure rate&lt;/td&gt;
&lt;td&gt;30-50%&lt;/td&gt;
&lt;td&gt;&amp;lt;5%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This isn't a marginal improvement. It's the difference between "browser agents are a cool demo" and "browser agents are a production tool."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compounding Effect
&lt;/h2&gt;

&lt;p&gt;Here's what makes shared intelligence different from caching or hard-coded scrapers:&lt;/p&gt;

&lt;p&gt;Every agent that reports its outcomes makes the system smarter. A failed selector gets flagged. A new site layout gets mapped. An API fast-path gets discovered. The system improves &lt;em&gt;because&lt;/em&gt; agents use it, not despite them.&lt;/p&gt;

&lt;p&gt;This is the same pattern that made Google Maps accurate (every phone contributing GPS data) and Waze useful (every driver reporting traffic). Individual contribution, collective benefit.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Your Stack
&lt;/h2&gt;

&lt;p&gt;If you're building agents that touch the web, the question isn't "which model should I use?" It's "how do I avoid paying the exploration tax on every single request?"&lt;/p&gt;

&lt;p&gt;The teams that figure this out first will ship agents that are 10-100x cheaper and faster than their competitors. The teams that don't will keep burning tokens on screenshots of loading spinners.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We're building this shared intelligence layer at &lt;a href="https://arcede.com" rel="noopener noreferrer"&gt;Arcede&lt;/a&gt;. If you're shipping browser agents and the cost-per-task math keeps you up at night, the &lt;a href="https://www.npmjs.com/package/@anthropic/air-sdk" rel="noopener noreferrer"&gt;AIR SDK&lt;/a&gt; is open for early integrators.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>architecture</category>
      <category>startup</category>
    </item>
    <item>
      <title>What If the Internet Could Tell Your Agent What Is Possible?</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 08:07:47 +0000</pubDate>
      <link>https://dev.to/arcede/what-if-the-internet-could-tell-your-agent-what-is-possible-1nch</link>
      <guid>https://dev.to/arcede/what-if-the-internet-could-tell-your-agent-what-is-possible-1nch</guid>
      <description>&lt;p&gt;Every browser automation tool makes the same mistake: they give you a browser and say "figure it out." You write selectors, handle popups, manage sessions, retry failures — all before you've done anything useful.&lt;/p&gt;

&lt;p&gt;What if the internet itself could tell your agent what's possible?&lt;/p&gt;

&lt;p&gt;That's the idea behind AIR (Agent Internet Runtime). Instead of hardcoding selectors, your agent asks a site what capabilities it supports, gets execution guidance, and reports what worked — building a shared knowledge layer that every agent benefits from.&lt;/p&gt;

&lt;p&gt;Here's how it works in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three-Step Pattern
&lt;/h2&gt;

&lt;p&gt;Every interaction follows the same flow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Browse.&lt;/strong&gt; Ask a domain what capabilities exist. You get back a structured list: search, login, add-to-cart, fill-form, etc. Each has confidence scores and selector hints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — Execute.&lt;/strong&gt; Pick a capability and pass your parameters. AIR returns the optimal execution path: sometimes an API shortcut, sometimes verified macro steps, sometimes selector guidance for uncharted territory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 — Report.&lt;/strong&gt; After your agent acts, it reports what happened — which selectors worked, what failed, what it observed. This is where the flywheel turns. Every report improves guidance for the next agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Real Example: Searching Hacker News
&lt;/h2&gt;

&lt;p&gt;Instead of writing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://news.ycombinator.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;search_box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_element&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;By&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CSS_SELECTOR&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input[name=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;q&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;search_box&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_keys&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;browser automation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;search_box&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Step 1: What can I do here?
&lt;/span&gt;&lt;span class="n"&gt;caps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;air&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;browse_capabilities&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;news.ycombinator.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Returns: search_stories, view_comments, submit_story...
&lt;/span&gt;
&lt;span class="c1"&gt;# Step 2: How do I search?
&lt;/span&gt;&lt;span class="n"&gt;plan&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;air&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_capability&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;news.ycombinator.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;capability&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search_stories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;browser automation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Returns: execution steps with verified selectors
&lt;/span&gt;
&lt;span class="c1"&gt;# Step 3: Report what happened
&lt;/span&gt;&lt;span class="n"&gt;air&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;report_outcome&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;news.ycombinator.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;capability&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search_stories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[...],&lt;/span&gt;  &lt;span class="c1"&gt;# what you actually did
&lt;/span&gt;    &lt;span class="n"&gt;success&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The difference: your code doesn't break when HN changes their DOM. The collective knowledge layer already has updated selectors from other agents who visited recently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;Traditional browser automation is O(n) — every new site requires new code. AIR makes it O(1) amortized — once any agent figures out a site, every agent benefits.&lt;/p&gt;

&lt;p&gt;The numbers back this up. In our benchmarks, agents using AIR's collective intelligence layer use 7,000x fewer tokens per successful action compared to agents that screenshot-and-reason their way through pages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;AIR SDK is open and available as an MCP server. If you're building agents that touch the web, the browse → execute → report pattern is worth trying. The SDK handles the knowledge layer; you handle the logic.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/anthropics/anthropic-cookbook" rel="noopener noreferrer"&gt;github.com/anthropics/anthropic-cookbook&lt;/a&gt; has MCP examples. AIR's approach is similar — structured capability discovery instead of raw DOM wrestling.&lt;/p&gt;

&lt;p&gt;The web wasn't built for agents. But with the right abstraction layer, it doesn't have to be rebuilt either.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>browserautomation</category>
      <category>opensource</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Stop Writing Selectors: How Shared Intelligence Makes Browser Automation Self-Improving</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 06:07:35 +0000</pubDate>
      <link>https://dev.to/arcede/stop-writing-selectors-how-shared-intelligence-makes-browser-automation-self-improving-114n</link>
      <guid>https://dev.to/arcede/stop-writing-selectors-how-shared-intelligence-makes-browser-automation-self-improving-114n</guid>
      <description>&lt;p&gt;Most browser automation breaks because every script starts from zero. It finds a button, clicks it, the site redesigns, it breaks. Repeat forever.&lt;/p&gt;

&lt;p&gt;After building browser automation for AI agents, we noticed a pattern: &lt;strong&gt;every agent re-learns the same websites independently.&lt;/strong&gt; Agent A figures out how to search on Amazon. Agent B does the same work an hour later. The knowledge is generated and immediately discarded.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Idea: Collective Agent Memory
&lt;/h2&gt;

&lt;p&gt;What if agents pooled their browsing knowledge?&lt;/p&gt;

&lt;p&gt;We built a shared intelligence layer where every agent interaction with a website contributes verified execution paths back to a collective knowledge base. The pattern is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Browse&lt;/strong&gt; — Check what's known about a domain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute&lt;/strong&gt; — Get a pre-verified plan (or explore if unknown)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Report&lt;/strong&gt; — Feed back what worked and what didn't&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When Agent B visits a site Agent A already mapped, it gets a structured execution plan instead of screenshotting its way through the DOM from scratch.&lt;/p&gt;
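The browse → execute → report loop can be sketched with a toy in-memory knowledge base. The client and method names here mirror the pattern described above and are hypothetical, not a pinned SDK API:

```python
# Toy sketch of the browse -> execute -> report pattern. FakeAIR is a
# minimal in-memory stand-in for the shared knowledge base; the method
# names are illustrative, not a pinned SDK API.

class FakeAIR:
    """Shared store of verified plans, keyed by (domain, capability)."""
    def __init__(self):
        self.plans = {}

    def browse_capabilities(self, domain):
        # 1. Browse: what is already mapped for this domain?
        return {cap for d, cap in self.plans if d == domain}

    def execute_capability(self, domain, capability, params):
        # 2. Execute: hand back the pre-verified plan
        return self.plans[(domain, capability)]

    def report_outcome(self, domain, capability, steps, success):
        # 3. Report: only successful paths get shared back
        if success:
            self.plans[(domain, capability)] = steps

def run_task(air, domain, capability, params):
    if capability in air.browse_capabilities(domain):
        steps = air.execute_capability(domain, capability, params)  # skip exploration
    else:
        steps = [f"explore:{capability}"]  # placeholder for the expensive path
    air.report_outcome(domain, capability, steps, success=True)
    return steps

air = FakeAIR()
first = run_task(air, "example.com", "search", {"q": "rss"})   # pays exploration cost
second = run_task(air, "example.com", "search", {"q": "xml"})  # reuses the plan
print(second == first)  # True: agent B inherits agent A's mapping
```

The first call pays the exploration cost; the second call, even from a different agent sharing the same store, goes straight to the verified plan.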

&lt;h2&gt;
  
  
  What We Observed
&lt;/h2&gt;

&lt;p&gt;After running this in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Token usage dropped significantly for repeat site visits — agents skip the screenshot→LLM→guess loop&lt;/li&gt;
&lt;li&gt;Multi-step workflows complete in 1-2 API calls instead of 10+ screenshot cycles&lt;/li&gt;
&lt;li&gt;Failed selectors are as valuable as working ones — the system learns what &lt;em&gt;doesn't&lt;/em&gt; work too&lt;/li&gt;
&lt;li&gt;True network effects: every agent makes the system better for every other agent&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  agent.json: robots.txt for AI Agents
&lt;/h2&gt;

&lt;p&gt;We also built an open spec called &lt;code&gt;agent.json&lt;/code&gt; (placed at &lt;code&gt;/.well-known/agent.json&lt;/code&gt;) that lets websites declare their capabilities to AI agents. Instead of agents guessing, sites can say: "here's my search bar, here's how to add to cart, here's an API shortcut."&lt;/p&gt;

&lt;p&gt;Think robots.txt, but instead of "don't crawl this," it says "here's how to interact with me."&lt;/p&gt;
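For illustration, a minimal agent.json might look like the sketch below. The field names are hypothetical; the post describes the idea but does not pin down a schema:

```json
{
  "version": "0.1",
  "capabilities": [
    {
      "name": "search",
      "description": "Site-wide search",
      "selector_hint": "input[name='q']",
      "api_shortcut": "/api/search?q={query}"
    },
    {
      "name": "add_to_cart",
      "description": "Add the current product to the cart",
      "selector_hint": "button#add-to-cart"
    }
  ]
}
```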

&lt;h2&gt;
  
  
  Why This Matters Now
&lt;/h2&gt;

&lt;p&gt;Browser automation is shifting from scripted Selenium/Playwright to AI-driven agents. But the economics don't work if every agent spends 80% of its tokens on visual reasoning just to find a search bar. Shared intelligence is the obvious next step — it's how humans work (we share documentation, tutorials, Stack Overflow answers), but agents have been operating as isolated islands.&lt;/p&gt;

&lt;p&gt;Over 2,225 domains are already indexed in the shared knowledge base.&lt;/p&gt;

&lt;p&gt;Curious what the dev community thinks. Are you building browser automation for AI agents? What's your biggest reliability pain point?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The SDK is open and works with any agent framework. Check it out at &lt;a href="https://agentinternetruntime.com/docs" rel="noopener noreferrer"&gt;agentinternetruntime.com/docs&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>browserautomation</category>
      <category>agents</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Why Your Team's $240/Month AI Subscription Is Generating $0 in Productivity Gains</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sun, 29 Mar 2026 03:11:58 +0000</pubDate>
      <link>https://dev.to/arcede/why-your-teams-240month-ai-subscription-is-generating-0-in-productivity-gains-93j</link>
      <guid>https://dev.to/arcede/why-your-teams-240month-ai-subscription-is-generating-0-in-productivity-gains-93j</guid>
      <description>&lt;p&gt;Most organizations are paying for AI tools nobody knows how to use properly. Here's the pattern: someone on the team tried ChatGPT once, got a decent result, then never developed the skill further. The subscription keeps billing. The productivity gain never materializes.&lt;/p&gt;

&lt;h2&gt;The $400/Week Problem&lt;/h2&gt;

&lt;p&gt;Average knowledge worker: $50/hr. Time spent on repetitive tasks (drafting proposals, summarizing meetings, reformatting data, writing client emails): 8-10 hours/week.&lt;/p&gt;

&lt;p&gt;That's $400-500/week per person in recoverable productivity. For a 10-person team, that's $4,000-5,000/week sitting on the table.&lt;/p&gt;

&lt;p&gt;The problem isn't the tools. Claude, Gemini, and ChatGPT are all capable enough. The problem is that generic YouTube tutorials don't teach your team how to apply these tools to &lt;em&gt;their actual workflows&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;What Structured Training Looks Like&lt;/h2&gt;

&lt;p&gt;Instead of "here's what AI can do" presentations, effective training works backwards from the team's real tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Audit the workflow&lt;/strong&gt; — identify the 3-5 most repetitive knowledge tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build custom prompts&lt;/strong&gt; — not generic ones, prompts designed for your specific documents, formats, and processes
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure before/after&lt;/strong&gt; — track time-on-task for each workflow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A legal team drafting contracts doesn't need to know about image generation. An accounting team processing invoices doesn't need creative writing tips. Training that maps to actual work delivers measurable ROI within the first week.&lt;/p&gt;

&lt;h2&gt;The ROI Math&lt;/h2&gt;

&lt;p&gt;Conservative scenario: 5 hours saved per person per week at $50/hr = $250/week per person.&lt;/p&gt;

&lt;p&gt;10-person team: $2,500/week = $130,000/year in recovered productivity.&lt;/p&gt;

&lt;p&gt;Even at 50% of that estimate, the return on a one-time training investment is extraordinary.&lt;/p&gt;
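&lt;p&gt;As a quick sanity check, the conservative scenario above reproduces in a few lines:&lt;/p&gt;

```python
# Sanity check of the conservative ROI scenario above.
hours_saved_per_week = 5      # per person, conservative
hourly_rate = 50              # $/hr for an average knowledge worker
team_size = 10
weeks_per_year = 52

weekly_per_person = hours_saved_per_week * hourly_rate
weekly_team = weekly_per_person * team_size
annual_team = weekly_team * weeks_per_year

print(weekly_per_person)  # 250
print(weekly_team)        # 2500
print(annual_team)        # 130000
```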

&lt;h2&gt;What Teams Get Wrong&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No structured onboarding&lt;/strong&gt; — people figure it out alone (they won't)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generic training&lt;/strong&gt; — covers features, not workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No measurement&lt;/strong&gt; — can't improve what you don't track&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-and-done&lt;/strong&gt; — AI tools update monthly; skills need maintenance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The organizations seeing real productivity gains treat AI fluency like any other professional skill: structured, measured, and continuously developed.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What's the most repetitive knowledge task in your organization? I'm curious what teams would automate first.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>ai</category>
      <category>business</category>
      <category>management</category>
    </item>
    <item>
      <title>agent.json: The Missing robots.txt for AI Agents</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sat, 28 Mar 2026 10:10:27 +0000</pubDate>
      <link>https://dev.to/arcede/agentjson-the-missing-robotstxt-for-ai-agents-3m93</link>
      <guid>https://dev.to/arcede/agentjson-the-missing-robotstxt-for-ai-agents-3m93</guid>
      <description>&lt;p&gt;Your website probably has a &lt;code&gt;robots.txt&lt;/code&gt;. It tells search engine crawlers what they can and can't access. Simple, universal, effective.&lt;/p&gt;

&lt;p&gt;But there's nothing equivalent for AI agents.&lt;/p&gt;

&lt;p&gt;When a browser agent visits your site, it has zero context about what it can do there. So it dumps your entire DOM into an LLM (85,000+ tokens for a typical page), asks "what should I click?", and hopes for the best. This is expensive, slow, and error-prone.&lt;/p&gt;

&lt;h2&gt;What agent.json does&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;agent.json&lt;/code&gt; is a proposed standard that lets websites declare their capabilities for AI agents. A site publishes it at &lt;code&gt;/.well-known/agent.json&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"capabilities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"search"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"selector"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"input#search-box"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"fill_and_submit"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"add_to_cart"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"api"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/cart/add"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"POST"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of parsing the entire page, an agent reads the manifest and knows exactly how to interact. That cuts the token cost from 85,000+ to roughly 200.&lt;/p&gt;
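&lt;p&gt;The agent-side lookup is trivially simple. A sketch, using the manifest above (the &lt;code&gt;resolve&lt;/code&gt; helper and its fallback convention are hypothetical, not part of any published client):&lt;/p&gt;

```python
import json

# The manifest from the example above.
manifest = json.loads("""
{
  "capabilities": {
    "search": {"selector": "input#search-box", "method": "fill_and_submit"},
    "add_to_cart": {"api": "/api/cart/add", "method": "POST"}
  }
}
""")

def resolve(manifest, capability):
    """Return the declared interaction recipe, or None to signal
    that the agent must fall back to full-DOM reasoning."""
    return manifest["capabilities"].get(capability)

print(resolve(manifest, "search"))    # {'selector': 'input#search-box', 'method': 'fill_and_submit'}
print(resolve(manifest, "checkout"))  # None: fall back to DOM parsing
```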

&lt;h2&gt;Why this matters now&lt;/h2&gt;

&lt;p&gt;AI agents are proliferating fast. Claude, GPT, Gemini — they all increasingly need to interact with websites. Without a standard, every agent framework invents its own discovery mechanism. Sites get hammered with full-page scrapes. Everyone pays more.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;agent.json&lt;/code&gt; gives site owners control over how agents interact with their properties, while giving agents a fast, cheap path to action.&lt;/p&gt;

&lt;h2&gt;Current state&lt;/h2&gt;

&lt;p&gt;Over 2,225 domains are indexed in the public capability graph. Sites can publish their own &lt;code&gt;agent.json&lt;/code&gt;, or capabilities get learned automatically through agent interaction reports (a collective intelligence approach where agents teach each other what works).&lt;/p&gt;

&lt;p&gt;The spec is open and evolving: &lt;a href="https://agentinternetruntime.com/spec/agent-json" rel="noopener noreferrer"&gt;agentinternetruntime.com/spec/agent-json&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;For site owners&lt;/h2&gt;

&lt;p&gt;Publishing &lt;code&gt;agent.json&lt;/code&gt; means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You control what agents can do on your site&lt;/li&gt;
&lt;li&gt;Agent interactions become predictable and auditable&lt;/li&gt;
&lt;li&gt;Your users get faster, cheaper agent experiences&lt;/li&gt;
&lt;li&gt;You reduce unnecessary DOM scraping load on your servers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;For agent developers&lt;/h2&gt;

&lt;p&gt;Consuming &lt;code&gt;agent.json&lt;/code&gt; means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Skip DOM-to-LLM on every page visit&lt;/li&gt;
&lt;li&gt;Deterministic interactions instead of probabilistic LLM guesses&lt;/li&gt;
&lt;li&gt;Dramatically lower token costs (we measured 7,000x reduction on cached sites)&lt;/li&gt;
&lt;li&gt;Works as an MCP server — install once, benefit everywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reference implementation is open source: &lt;a href="https://github.com/ArcedeDev/air-sdk" rel="noopener noreferrer"&gt;AIR SDK on GitHub&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The web needs standards that work for AI agents the same way HTTP, robots.txt, and sitemap.xml work for traditional crawlers. agent.json is a step toward that.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>standards</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Why Browser Agents Waste 99% of Their Tokens (And How to Fix It)</title>
      <dc:creator>Arcede</dc:creator>
      <pubDate>Sat, 28 Mar 2026 09:26:08 +0000</pubDate>
      <link>https://dev.to/arcede/why-browser-agents-waste-99-of-their-tokens-and-how-to-fix-it-jnp</link>
      <guid>https://dev.to/arcede/why-browser-agents-waste-99-of-their-tokens-and-how-to-fix-it-jnp</guid>
      <description>&lt;p&gt;Every browser agent pays a hidden tax: tokens.&lt;/p&gt;

&lt;p&gt;When an agent visits a webpage, it dumps the DOM into an LLM. The LLM reads thousands of elements, reasons about which button to click, and generates a tool call. Then it does it again. And again.&lt;/p&gt;

&lt;p&gt;For a 10-step workflow, that's 25+ LLM round trips. Context grows with each step because conversation history accumulates. By step 10, you're sending 175,000 tokens per action.&lt;/p&gt;

&lt;p&gt;At frontier model pricing, that's roughly $4 for one workflow execution. Run it 1,000 times a day and you're burning $4,000 daily — on clicking buttons.&lt;/p&gt;

&lt;h2&gt;The compounding problem&lt;/h2&gt;

&lt;p&gt;The issue isn't that LLMs are expensive. It's that agent architectures make each step more expensive than the last:&lt;/p&gt;

&lt;p&gt;Step 1: Inspect DOM (4,000 tokens) → Reason → Act&lt;br&gt;
Step 2: Inspect DOM + step 1 context (6,000 tokens) → Reason → Act&lt;br&gt;
Step 3: Inspect DOM + steps 1-2 context (8,000 tokens) → Reason → Act&lt;/p&gt;

&lt;p&gt;By step 10, your context window is carrying the entire conversation history. Each action costs more than the last.&lt;/p&gt;

&lt;p&gt;This is the fundamental problem with using general-purpose reasoning for repetitive browser tasks.&lt;/p&gt;
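&lt;p&gt;Extrapolating the illustrative numbers above (a 4,000-token DOM inspection plus roughly 2,000 tokens of accumulated history per step, which is an assumption, not a measurement), the bill for a 10-step workflow adds up fast:&lt;/p&gt;

```python
# Linear extrapolation of the per-step numbers above:
# 4,000 tokens of DOM inspection, plus ~2,000 tokens of
# accumulated conversation history per additional step.
def tokens_at_step(k):
    return 4000 + 2000 * (k - 1)

per_step = [tokens_at_step(k) for k in range(1, 11)]
total = sum(per_step)

print(per_step[0])   # 4000  (step 1)
print(per_step[-1])  # 22000 (step 10 costs 5.5x step 1)
print(total)         # 130000 tokens for the whole workflow
```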
&lt;h2&gt;What if agents could skip the reasoning?&lt;/h2&gt;

&lt;p&gt;Consider a different approach: what if the first agent to figure out how to search on Amazon shared that knowledge with every other agent?&lt;/p&gt;

&lt;p&gt;The CSS selector for Amazon's search box doesn't change between requests. Neither does Google's search button or GitHub's login form. These are solved problems being re-solved millions of times a day.&lt;/p&gt;

&lt;p&gt;That's the idea behind collective intelligence for agents. One agent figures out the selectors and steps. Every subsequent agent reuses that knowledge via a single API call — no DOM inspection, no LLM reasoning, zero tokens.&lt;/p&gt;

&lt;p&gt;The result: a 10-step workflow drops from $4 and 50 seconds to $0.0006 and 178 milliseconds.&lt;/p&gt;
&lt;h2&gt;The three-step pattern&lt;/h2&gt;

&lt;p&gt;The pattern is simple — browse, execute, report:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Browse&lt;/strong&gt;: Ask what's possible on a domain. Get back a list of capabilities with confidence scores and pre-verified selectors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute&lt;/strong&gt;: Request the optimal execution path for a specific capability. Get back CSS selectors, API fast-paths, or macro steps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Report&lt;/strong&gt;: After executing, report what worked. This closes the learning loop — successful patterns become verified macros for every other agent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every report makes the system smarter. Agents that use this pattern aren't just consuming intelligence — they're producing it.&lt;/p&gt;
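&lt;p&gt;A toy in-memory version of the loop, to make the pattern concrete. This is an illustration only, not the AIR SDK's actual API:&lt;/p&gt;

```python
# Toy illustration of the browse -> execute -> report loop.
class SharedKnowledge:
    def __init__(self):
        self.macros = {}  # (domain, capability) -> verified steps

    def browse(self, domain):
        """What capabilities are already known for this domain?"""
        return sorted(cap for (d, cap) in self.macros if d == domain)

    def execute(self, domain, capability):
        """Return a pre-verified plan, or None if still unexplored."""
        return self.macros.get((domain, capability))

    def report(self, domain, capability, steps, success):
        """Close the learning loop: successful runs become macros."""
        if success:
            self.macros[(domain, capability)] = steps

kb = SharedKnowledge()

# Agent A explores the hard way, then reports what worked.
kb.report("amazon.com", "search",
          ["fill input#search", "press Enter"], success=True)

# Agent B skips DOM reasoning entirely.
print(kb.browse("amazon.com"))             # ['search']
print(kb.execute("amazon.com", "search"))  # ['fill input#search', 'press Enter']
```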
&lt;h2&gt;The math that matters&lt;/h2&gt;

&lt;p&gt;If you're building browser agents at scale, the cost equation is what determines viability:&lt;/p&gt;

&lt;p&gt;Traditional approach: Cost grows linearly (or worse) with workflow complexity. 10 actions = 25 LLM calls = ~$4.&lt;/p&gt;

&lt;p&gt;Collective approach: Cost is constant regardless of workflow complexity. 10 actions = 1 API call = $0.0006.&lt;/p&gt;

&lt;p&gt;The gap widens with every additional step. A 50-step workflow with traditional LLM reasoning could cost $20+. With pre-verified macros, it's still $0.0006.&lt;/p&gt;
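&lt;p&gt;A rough cost model built from the figures in this post (the 2.5 calls-per-action and $0.16-per-call rates are back-calculated from "10 actions = 25 LLM calls = ~$4", not measured constants):&lt;/p&gt;

```python
# Back-of-envelope model from this post's figures.
def traditional_cost(actions, calls_per_action=2.5, cost_per_call=0.16):
    # Cost scales with the number of actions.
    return actions * calls_per_action * cost_per_call

def collective_cost(actions):
    # One knowledge-base lookup, regardless of workflow length.
    return 0.0006

print(round(traditional_cost(10), 2))  # 4.0
print(round(traditional_cost(50), 2))  # 20.0
print(collective_cost(50))             # 0.0006
```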
&lt;h2&gt;Who benefits&lt;/h2&gt;

&lt;p&gt;This pattern is relevant if you're building agents that interact with the web repeatedly. E-commerce bots, data extraction pipelines, testing automation, form-filling services — anywhere agents do the same browser tasks thousands of times.&lt;/p&gt;

&lt;p&gt;If your agent visits a site once and never returns, LLM reasoning is fine. If it visits the same site daily, you're paying a recurring tax that compounds over time.&lt;/p&gt;
&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;The AIR SDK implements this pattern as an MCP server. Install, point your agent at it, and the three-step browse→execute→report pattern replaces DOM reasoning automatically.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @arcede/air-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub: &lt;a href="https://github.com/ArcedeDev/air-sdk" rel="noopener noreferrer"&gt;ArcedeDev/air-sdk&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building browser agents? What's your cost-per-action looking like? Curious to hear how others are handling the token economics.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
