<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marco</title>
    <description>The latest articles on DEV Community by Marco (@marc0dev).</description>
    <link>https://dev.to/marc0dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3748051%2F47faff45-7fea-4356-9280-d2fae2b64fb9.png</url>
      <title>DEV Community: Marco</title>
      <link>https://dev.to/marc0dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marc0dev"/>
    <language>en</language>
    <item>
      <title>The Listicle SEO Strategy Just Collapsed. Here's What's Replacing It.</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Tue, 24 Feb 2026 05:34:23 +0000</pubDate>
      <link>https://dev.to/marc0dev/the-listicle-seo-strategy-just-collapsed-heres-whats-replacing-it-5f0o</link>
      <guid>https://dev.to/marc0dev/the-listicle-seo-strategy-just-collapsed-heres-whats-replacing-it-5f0o</guid>
      <description>&lt;p&gt;SaaS companies are losing 30% to 50% of their organic search visibility. Not over months. Over weeks.&lt;/p&gt;

&lt;p&gt;The cause isn't a penalty. It's not a manual action. It's Google finally catching up to the most overused SEO tactic of the last two years: self-promotional listicles.&lt;/p&gt;

&lt;p&gt;You know the format. "10 Best [Category] Tools in 2026." Company publishes it on their own blog. Ranks themselves #1. Updates the year in the title every January. Calls it "content marketing."&lt;/p&gt;

&lt;p&gt;It worked. Until January 2026. Now it's collapsing.&lt;/p&gt;

&lt;h2&gt;What Happened&lt;/h2&gt;

&lt;p&gt;After Google's December 2025 core update, search rankings stayed volatile through January. SEO researcher Lily Ray at Amsive analyzed the affected sites and found a pattern so consistent it's hard to call it coincidence.&lt;/p&gt;

&lt;p&gt;The numbers across affected SaaS and B2B companies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One site had &lt;strong&gt;191 self-promotional listicles&lt;/strong&gt; on a blog with 30,000 articles. Visibility dropped sharply.&lt;/li&gt;
&lt;li&gt;Another had &lt;strong&gt;228 self-promotional listicles&lt;/strong&gt; in their guide section. Down 42%.&lt;/li&gt;
&lt;li&gt;A B2B SaaS company with &lt;strong&gt;267 listicles&lt;/strong&gt; across 2,790 blog posts. Lost 38% visibility.&lt;/li&gt;
&lt;li&gt;A software company with &lt;strong&gt;76 self-serving listicles&lt;/strong&gt; among 1,980 tutorials. Hit during both the December update and January volatility.&lt;/li&gt;
&lt;li&gt;Even a site with just &lt;strong&gt;10 self-promotional listicles&lt;/strong&gt; saw a 29% drop — suggesting Google weights the pattern heavily even at small scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In every case, the drops hit the blog or content subfolder specifically. Product pages and core site sections held steady or even gained. The algorithm targeted the content pattern, not the domain.&lt;/p&gt;

&lt;p&gt;Lily Ray found at least 15 sites with 100+ self-promotional listicles whose blogs got hammered between January 20 and 30. Most were AI SaaS companies. Many had review Schema markup that didn't match actual independent reviews.&lt;/p&gt;

&lt;h2&gt;Why Listicles Worked (And Why They Don't Anymore)&lt;/h2&gt;

&lt;p&gt;The strategy was simple and effective:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write "Best [X] Tools for 2026"&lt;/li&gt;
&lt;li&gt;Put yourself at #1&lt;/li&gt;
&lt;li&gt;Rank for high-intent commercial queries&lt;/li&gt;
&lt;li&gt;Get cited by AI Overviews, ChatGPT, and Perplexity (which pull from top-ranking pages)&lt;/li&gt;
&lt;li&gt;Repeat for every keyword variation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It was the dominant GEO (Generative Engine Optimization) tactic of 2025. Companies weren't just gaming Google — they were gaming every AI system that uses Google's results as a source.&lt;/p&gt;

&lt;p&gt;The problem is obvious when you say it out loud: a company reviewing itself and declaring itself the best isn't a review. It's an ad wearing a trench coat.&lt;/p&gt;

&lt;p&gt;Google's quality guidelines have always said reviews should show first-hand testing, independent evaluation, and transparent methodology. Self-promotional listicles fail all three. The only question was when Google would enforce it.&lt;/p&gt;

&lt;p&gt;The answer: January 2026.&lt;/p&gt;

&lt;h2&gt;The Cascade Effect&lt;/h2&gt;

&lt;p&gt;Here's what makes this worse than a normal ranking drop.&lt;/p&gt;

&lt;p&gt;Google's organic rankings feed into AI Overviews. AI Overviews feed into ChatGPT and Perplexity citations (since many LLMs use Google search results as source data). When your listicle drops out of organic search, it drops out of AI search too.&lt;/p&gt;

&lt;p&gt;The same sites losing Google visibility are losing their AI citations simultaneously. The tactic that was supposed to future-proof their SEO for the AI era is now the thing killing their visibility across all channels.&lt;/p&gt;

&lt;p&gt;Companies that went all-in on listicles as their GEO strategy are discovering that building on a manipulative foundation means the whole structure collapses when the foundation gets pulled.&lt;/p&gt;

&lt;h2&gt;The Pattern Google Is Targeting&lt;/h2&gt;

&lt;p&gt;It's not listicles in general. It's a specific combination of signals:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-ranking.&lt;/strong&gt; The publisher puts their own product at #1 without independent methodology or third-party validation. The search query &lt;code&gt;site:company.com/blog/ intitle:best "1. company"&lt;/code&gt; exposes exactly how many of these a site has.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scale.&lt;/strong&gt; Dozens to hundreds of near-identical listicles across categories. When one site has 267 "Best [X]" articles, it's clearly a systematic strategy, not editorial judgment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Year-stuffing.&lt;/strong&gt; Updating the title to "2026" with no meaningful content changes. Some sites had 76+ articles updated to "2026" in the first four weeks of the year. Google can see when a "2026 Guide" is a 2024 article with a new title tag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thin content.&lt;/strong&gt; Template-driven listicle structures with minimal differentiation between articles. Same format, same promotional language, different keyword.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-generated at scale.&lt;/strong&gt; Many affected sites showed high AI detection scores. Mass-producing listicles with AI amplifies every other risk factor.&lt;/p&gt;

&lt;p&gt;Any one of these is a yellow flag. Combined, they're a signal to Google that the content exists to manipulate rankings, not to help users make informed decisions.&lt;/p&gt;

&lt;h2&gt;What's Actually Working Now&lt;/h2&gt;

&lt;p&gt;The sites that held steady — or gained — during the same period share different characteristics. Not revolutionary. Just honest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Original research and testing.&lt;/strong&gt; Content that shows someone actually used the tools and documented the results. Screenshots. Benchmarks. Specific numbers from real usage. Not "We love this tool because it has great features."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First-person experience.&lt;/strong&gt; "I tested this on my site and here's what happened" beats "This tool is rated 4.8 stars on G2" every time. Google's helpful content system rewards demonstrated experience over aggregated ratings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transparent methodology.&lt;/strong&gt; If you compare tools, explain how you evaluated them. What criteria? What testing environment? What didn't work? Content that acknowledges weaknesses builds more trust than content that only lists benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specificity over breadth.&lt;/strong&gt; Instead of "10 Best SEO Tools," write about one specific problem and how you solved it. "How I Found 430 Impressions of Untapped Keywords in 2 Minutes" is more useful (and harder to replicate) than a generic tool list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data from your own work.&lt;/strong&gt; The content that's hardest to compete with is content based on your own data. If you can show your actual GSC numbers, your actual traffic growth, your actual workflow — that's original by definition. Nobody else has your data.&lt;/p&gt;

&lt;h2&gt;The Content Strategy Shift&lt;/h2&gt;

&lt;p&gt;The listicle era was built on a simple bet: volume and templates beat depth and originality. Publish 200 "Best [X]" articles and some will rank. The cost per article was low (especially with AI), the potential upside was high, and the risk seemed manageable.&lt;/p&gt;

&lt;p&gt;That bet just lost.&lt;/p&gt;

&lt;p&gt;The new bet is the opposite: fewer pieces, higher quality, more original data.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instead of:&lt;/strong&gt; "10 Best AI SEO Tools in 2026" where you rank yourself #1&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write:&lt;/strong&gt; A case study showing how you used your tool on a real site, what it found, what the results were, with actual numbers and screenshots&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instead of:&lt;/strong&gt; "Best Content Marketing Agencies 2026" listing 10 competitors you've never tested&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write:&lt;/strong&gt; A comparison of 2-3 specific approaches you've actually used, with cost breakdowns, time investment, and measurable outcomes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instead of:&lt;/strong&gt; 50 listicles targeting every variation of "best [keyword]"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write:&lt;/strong&gt; 5 deep-dive articles based on original data from your product, your customers, or your industry experience&lt;/p&gt;

&lt;p&gt;Five data-backed articles will outrank fifty templated listicles in 2026. That's the shift.&lt;/p&gt;

&lt;h2&gt;How AI Tools Should Actually Be Used for SEO&lt;/h2&gt;

&lt;p&gt;The irony is that AI SaaS companies got hit hardest — by using AI in the worst way possible.&lt;/p&gt;

&lt;p&gt;They used AI to mass-produce generic listicles. Hundreds of them. All following the same template. All ranking themselves #1. The AI made it cheap and fast to produce bad content at scale.&lt;/p&gt;

&lt;p&gt;The better use of AI for SEO is the opposite of mass production. It's deep analysis of your specific data.&lt;/p&gt;

&lt;p&gt;An AI agent connected to your Google Search Console can pull 90 days of keyword data, cross-reference it against every page on your site, and find the specific gaps where you have impressions but no dedicated content. That's not a listicle — it's a data-driven content strategy unique to your site.&lt;/p&gt;

&lt;p&gt;When an agent crawls your pages and finds that you have 430 impressions on a keyword cluster with no targeted page, that's an original insight nobody else can replicate. When it generates a content brief based on your actual GSC data and your site's existing voice, the output is inherently original because it's built on your data.&lt;/p&gt;
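&lt;p&gt;The cross-referencing step is mechanical enough to sketch. This is a toy version: hard-coded rows stand in for a live GSC pull, and the queries and URLs are invented. The logic is the same, though: real demand with no dedicated page is a gap.&lt;/p&gt;

```python
# Toy sketch of gap detection: cross-reference query data (as you'd
# export from Google Search Console) against the pages a crawl found.
# All rows and URLs here are invented stand-ins, not real data.

gsc_rows = [
    {"query": "nocatee communities", "impressions": 430, "page": None},
    {"query": "jacksonville realtor fees", "impressions": 120, "page": "/blog/fees"},
    {"query": "55 plus communities florida", "impressions": 310, "page": None},
]

site_pages = {"/blog/fees", "/about", "/services"}

def content_gaps(rows, pages):
    """Queries with impressions but no dedicated page on the site."""
    return [r for r in rows if r["page"] is None or r["page"] not in pages]

gaps = content_gaps(gsc_rows, site_pages)
for g in sorted(gaps, key=lambda r: -r["impressions"]):
    print(f'{g["query"]}: {g["impressions"]} impressions, no dedicated page')
```

&lt;p&gt;The real agent pulls these rows from the Search Console API and checks them against a full site crawl, but the core check fits in a few lines. The insight comes from your data, not from the code.&lt;/p&gt;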

&lt;p&gt;The difference between AI-as-content-factory and AI-as-analyst is the difference between what just got penalized and what's working.&lt;/p&gt;

&lt;p&gt;I built an agentic SEO tool that takes this approach: connect your Search Console, let the agent analyze your actual data, get specific recommendations based on what your site needs — not what a template says. Try it at &lt;a href="https://myagenticseo.com" rel="noopener noreferrer"&gt;myagenticseo.com&lt;/a&gt;. The analysis it produces is the kind of original, data-specific content that Google rewards, not the kind that just lost 50% visibility.&lt;/p&gt;

&lt;h2&gt;The Verdict&lt;/h2&gt;

&lt;p&gt;The self-promotional listicle era is over. Not because Google made an announcement. Because the math doesn't work anymore.&lt;/p&gt;

&lt;p&gt;30-50% visibility drops across 15+ documented sites. Blog subfolders that drove 80-90% of organic traffic getting specifically targeted. The cascade into AI citations making the damage even worse. And the timeline — weeks, not months.&lt;/p&gt;

&lt;p&gt;If you have self-promotional listicles on your site, audit them now. The search &lt;code&gt;site:yoursite.com/blog/ intitle:best "1. yourcompany"&lt;/code&gt; will show you exactly how exposed you are.&lt;/p&gt;

&lt;p&gt;The content that survives this shift is content built on real experience, original data, and genuine expertise. Not because that sounds nice in a Google guideline — because it's the only content an algorithm can't replicate, template, or detect as manufactured.&lt;/p&gt;

&lt;p&gt;The SEO shortcut economy just got more expensive. The long game just got cheaper.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I write about AI, SEO, and what's actually working from a dev perspective at &lt;a href="https://marc0.dev" rel="noopener noreferrer"&gt;marc0.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>marketing</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Your Agency's SEO Stack Costs $350/Month Per Client. Here's How to Cut It to $29.</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Tue, 24 Feb 2026 05:03:38 +0000</pubDate>
      <link>https://dev.to/marc0dev/your-agencys-seo-stack-costs-350month-per-client-heres-how-to-cut-it-to-29-3o3h</link>
      <guid>https://dev.to/marc0dev/your-agencys-seo-stack-costs-350month-per-client-heres-how-to-cut-it-to-29-3o3h</guid>
      <description>&lt;p&gt;If you run a marketing agency, your SEO stack probably looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semrush: $130/month&lt;/li&gt;
&lt;li&gt;Surfer SEO: $89/month&lt;/li&gt;
&lt;li&gt;ChatGPT Pro: $20/month&lt;/li&gt;
&lt;li&gt;Google Sheets: Free (but 4 hours/week of your time)&lt;/li&gt;
&lt;li&gt;Google Search Console: Free (but manual exports)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's $239/month in tools. Plus the time cost of being the human glue between all of them. Export from GSC. Import into Sheets. Copy into Surfer. Paste context into ChatGPT. Copy output into CMS.&lt;/p&gt;

&lt;p&gt;For one client. Multiply by 10 and your junior SEO specialist spends half their week copy-pasting between tabs instead of doing actual strategy work.&lt;/p&gt;

&lt;p&gt;I spent 3 years doing SEO at a marketing agency. The tools were fine. The workflow was broken.&lt;/p&gt;

&lt;h2&gt;The Real Cost Isn't the Subscription&lt;/h2&gt;

&lt;p&gt;The subscription fees are annoying but manageable. The real cost is the workflow tax — the time your team spends moving data between tools that don't talk to each other.&lt;/p&gt;

&lt;p&gt;Here's what a typical content optimization cycle looks like at an agency:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monday:&lt;/strong&gt; Pull GSC data. Export to CSV. Sort in Sheets. Find keywords with high impressions and low CTR. Takes 45 minutes per client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tuesday:&lt;/strong&gt; Take those keywords to Surfer. Run content audits. Compare against competitors. Build content briefs. Takes 1-2 hours per client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wednesday:&lt;/strong&gt; Take the briefs to ChatGPT. Provide context manually. Generate drafts. Edit out the AI slop. Takes 2-3 hours per client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thursday:&lt;/strong&gt; QA. Internal links. Meta descriptions. Format for CMS. Publish. Takes 1-2 hours per client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Friday:&lt;/strong&gt; Reporting. Screenshots from GSC. Charts from Semrush. Combine into a deck the client skims for 90 seconds.&lt;/p&gt;

&lt;p&gt;That's 6-8 hours per client per week on execution. Across 10 clients, your team of 3 is at capacity — and most of that time is logistics, not strategy.&lt;/p&gt;

&lt;p&gt;The $239/month tool cost? It's $2,868/year per client. The human time cost at $50/hour loaded rate? That's $15,000-20,000 per client per year.&lt;/p&gt;

&lt;p&gt;Your SEO stack isn't expensive because of the subscriptions. It's expensive because every tool is an island.&lt;/p&gt;

&lt;h2&gt;What Agencies Actually Need&lt;/h2&gt;

&lt;p&gt;Strip away the dashboards and feature lists. What does an agency SEO team actually need to do their job?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Know what's working and what isn't.&lt;/strong&gt;&lt;br&gt;
Pull keyword data. See which pages rank, which don't, which are slipping. Identify gaps. This is GSC data + site awareness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create content that targets real opportunities.&lt;/strong&gt;&lt;br&gt;
Not "write about this topic." Write about this specific keyword cluster where we have 400 impressions at position 15 and no dedicated page. That's data-informed content, not guesswork.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Match the client's voice.&lt;/strong&gt;&lt;br&gt;
Every agency has had the conversation: "This doesn't sound like us." AI content sounds like AI. Clients notice. Your team spends 40% of writing time editing tone, not substance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Show results.&lt;/strong&gt;&lt;br&gt;
Monthly report. What changed. What you did. What's next. Clients want to see the line going up and understand why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Do all of this across 10+ clients without burning out.&lt;/strong&gt;&lt;br&gt;
The scale problem. Each client is a different GSC account, different site structure, different voice, different keyword profile. The workflow that takes 45 minutes for one client takes 7.5 hours for ten.&lt;/p&gt;

&lt;h2&gt;The Tool Consolidation Nobody's Talking About&lt;/h2&gt;

&lt;p&gt;The agency SEO market in 2026 is worth $2.4 billion and growing 15% annually. Every week there's a new "AI SEO platform" launching.&lt;/p&gt;

&lt;p&gt;But most of them are adding AI features to the same broken workflow. Semrush added AI content suggestions. Surfer added AI writing. ChatGPT added web browsing. Each tool gets 10% better at everything while still requiring you to jump between all of them.&lt;/p&gt;

&lt;p&gt;The actual shift isn't better tools. It's fewer tools.&lt;/p&gt;

&lt;p&gt;What if one AI agent could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to a client's GSC account and pull live data&lt;/li&gt;
&lt;li&gt;Crawl their entire site automatically&lt;/li&gt;
&lt;li&gt;Cross-reference keywords against existing pages&lt;/li&gt;
&lt;li&gt;Find content gaps based on real impression data&lt;/li&gt;
&lt;li&gt;Learn the client's writing voice from their existing content&lt;/li&gt;
&lt;li&gt;Generate articles that actually sound like the client&lt;/li&gt;
&lt;li&gt;Do this in a 2-minute chat instead of a 2-hour workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's not a feature upgrade to an existing tool. That's a different category.&lt;/p&gt;

&lt;h2&gt;What This Looks Like in Practice&lt;/h2&gt;

&lt;p&gt;Real example. Agency client with a 69-page real estate website. Multiple location pages, blog posts, service pages.&lt;/p&gt;

&lt;p&gt;Old workflow: Export GSC data. 45 minutes sorting in Sheets. Find 3-4 keyword opportunities. Build briefs manually. Write content in ChatGPT with no site context. Edit for voice. 6 hours total.&lt;/p&gt;

&lt;p&gt;New workflow: Agent connects to client's GSC. One prompt: "Analyze content gaps and prioritize by impression volume."&lt;/p&gt;

&lt;p&gt;Agent comes back in 2 minutes with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;430+ impressions on "nocatee communities" keywords — no page exists&lt;/li&gt;
&lt;li&gt;290+ impressions on "real estate agent Jacksonville" — page exists, title doesn't target the keyword&lt;/li&gt;
&lt;li&gt;Entire 55+ active adult community persona — zero content coverage&lt;/li&gt;
&lt;li&gt;7 quick-win keywords at positions 5-20 needing only title and meta optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same analysis. 2 minutes vs 6 hours. And the agent saw patterns the human missed because it cross-referenced all 69 pages against all 50+ keyword clusters simultaneously. No human does that manually for a $2k/month retainer client.&lt;/p&gt;
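&lt;p&gt;The "quick win" filter in that list is the same idea in miniature. A toy sketch with invented rows (a real run would use live position data from GSC):&lt;/p&gt;

```python
# Toy version of the quick-win filter: keywords already ranking on
# page one or two (positions 5-20) where the title doesn't target the
# keyword, so only title/meta work is needed. Rows are invented.

rows = [
    {"query": "real estate agent jacksonville", "position": 8.2,
     "impressions": 290, "keyword_in_title": False},
    {"query": "nocatee communities", "position": 24.0,
     "impressions": 430, "keyword_in_title": False},
    {"query": "homes for sale ponte vedra", "position": 3.1,
     "impressions": 900, "keyword_in_title": True},
]

def quick_wins(rows, lo=5, hi=20):
    """Already visible, under-optimized: fix title and meta, not content."""
    return [
        r for r in rows
        if round(r["position"]) in range(lo, hi + 1)
        and not r["keyword_in_title"]
    ]

for r in quick_wins(rows):
    print(f'{r["query"]} @ pos {r["position"]}: retarget title and meta')
```

&lt;p&gt;Position 24 is a content problem, not a title problem, and position 3 with the keyword already in the title needs nothing. The filter only surfaces the cheap fixes.&lt;/p&gt;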

&lt;h2&gt;The Voice Problem Agencies Can't Ignore&lt;/h2&gt;

&lt;p&gt;Here's what kills agency-AI workflows: every client sounds the same.&lt;/p&gt;

&lt;p&gt;You run 10 clients through ChatGPT, you get 10 articles that sound identical. "In today's competitive landscape." "It's important to note." "Let's dive in." The client reads it and says "this doesn't sound like us."&lt;/p&gt;

&lt;p&gt;So your team spends 2 hours editing a 1,500-word article to not sound like AI. At that point, you're paying for an AI tool and paying a human to undo what the AI did.&lt;/p&gt;

&lt;p&gt;The fix isn't "write in a casual tone" in your prompt. That's not how voice works.&lt;/p&gt;

&lt;p&gt;What actually works is a writing style system that reads the client's existing content and extracts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tone patterns (formal vs. conversational, ratio of each)&lt;/li&gt;
&lt;li&gt;Sentence rhythm (short-long-short patterns, average length)&lt;/li&gt;
&lt;li&gt;Vocabulary preferences (words they use, words they avoid)&lt;/li&gt;
&lt;li&gt;Structure habits (paragraph length, how they use headers)&lt;/li&gt;
&lt;li&gt;A banned words list (50+ AI slop phrases the model can't use)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feed that into the agent before it writes anything. The output sounds 80% like the client immediately. Your editor does a 15-minute polish instead of a 2-hour rewrite.&lt;/p&gt;

&lt;p&gt;Across 10 clients, that's 17.5 hours saved per content cycle.&lt;/p&gt;
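&lt;p&gt;To make the extraction step concrete, here's a deliberately tiny sketch. The sample text is invented and a real system reads the client's full archive, but these are the kinds of measurable signals it pulls out:&lt;/p&gt;

```python
# Rough sketch of the style-extraction pass: turn a client's existing
# copy into measurable signals (sentence rhythm, contraction use).
# The sample text is invented, not real client content.
import re
from statistics import mean

sample = (
    "We price every listing from comps, not gut feel. "
    "Short version: the market sets the number. "
    "Our job is to get you there faster."
)

def style_profile(text):
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "sentence_count": len(sentences),
        "avg_sentence_words": round(mean(lengths), 1),
        "contraction_rate": sum("'" in w for w in words) / len(words),
    }

print(style_profile(sample))
```

&lt;p&gt;A profile like this becomes rules for the generation step instead of vibes in a prompt. That's the difference between "write casually" and "average 8 words per sentence, no contractions."&lt;/p&gt;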

&lt;h2&gt;The Math&lt;/h2&gt;

&lt;p&gt;Let's be honest about what agencies are spending now:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Line Item&lt;/th&gt;
&lt;th&gt;Current Cost&lt;/th&gt;
&lt;th&gt;With AI Agent&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Semrush&lt;/td&gt;
&lt;td&gt;$130/mo&lt;/td&gt;
&lt;td&gt;Still useful for backlinks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Surfer SEO&lt;/td&gt;
&lt;td&gt;$89/mo&lt;/td&gt;
&lt;td&gt;Replaced&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT Pro&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;Replaced&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GSC Manual Work&lt;/td&gt;
&lt;td&gt;~8 hrs/week&lt;/td&gt;
&lt;td&gt;Replaced&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Content Writing&lt;/td&gt;
&lt;td&gt;~15 hrs/week&lt;/td&gt;
&lt;td&gt;80% reduced&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Voice Editing&lt;/td&gt;
&lt;td&gt;~10 hrs/week&lt;/td&gt;
&lt;td&gt;80% reduced&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tool Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$239/mo&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$130 + $29/mo&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Time Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~33 hrs/week&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~10 hrs/week&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You keep Semrush for backlink analysis, domain authority, and competitive intelligence — things a conversational agent doesn't replace. You drop Surfer and the ChatGPT subscription. You add an agentic SEO tool at $29/month that handles GSC analysis, site crawling, content gap detection, and voice-matched writing.&lt;/p&gt;

&lt;p&gt;The subscription savings are $80/month. Nice but not the point.&lt;/p&gt;

&lt;p&gt;The time savings are 23 hours per week across your team. That's either one fewer hire or 23 more hours of strategy work per week. At a loaded rate of $50/hour, that's $4,600/month in recovered capacity.&lt;/p&gt;

&lt;p&gt;Per client, your SEO delivery cost drops from roughly $1,500/month in tools + time to around $500/month. On a fixed retainer, that $1,000 goes straight to margin.&lt;/p&gt;

&lt;h2&gt;What This Doesn't Replace&lt;/h2&gt;

&lt;p&gt;An AI agent doesn't replace your strategist. It replaces the busywork your strategist shouldn't be doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Still need humans for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client communication and relationship management&lt;/li&gt;
&lt;li&gt;Competitive positioning and market strategy&lt;/li&gt;
&lt;li&gt;Link building outreach (still a human game)&lt;/li&gt;
&lt;li&gt;Creative direction and campaign planning&lt;/li&gt;
&lt;li&gt;Final editorial review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Agent handles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GSC data analysis&lt;/li&gt;
&lt;li&gt;Site crawling and content audits&lt;/li&gt;
&lt;li&gt;Keyword gap identification&lt;/li&gt;
&lt;li&gt;Content brief generation&lt;/li&gt;
&lt;li&gt;First-draft writing in client voice&lt;/li&gt;
&lt;li&gt;Internal link mapping&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent is the analyst and first-draft writer. Your team is the strategist and editor. That's the right split.&lt;/p&gt;

&lt;h2&gt;The Agency That Adapts First Wins&lt;/h2&gt;

&lt;p&gt;35% of businesses don't know AI can be used for SEO. 37% of those who do know haven't adopted it because of training gaps. That's from industry data published this month.&lt;/p&gt;

&lt;p&gt;The agencies that figure out agentic SEO workflows now — not "AI-assisted" but actually agentic, where the tool connects to client data and works autonomously — have a 12-18 month head start before this becomes standard.&lt;/p&gt;

&lt;p&gt;That head start means higher margins, faster delivery, better results, and the ability to take on more clients without proportionally increasing headcount.&lt;/p&gt;

&lt;p&gt;The agencies that don't adapt will keep running the 5-tab workflow, paying $239/month per client in tools, and burning 33 hours per week on copy-paste logistics while their competitors deliver the same results in a third of the time.&lt;/p&gt;

&lt;h2&gt;The Verdict&lt;/h2&gt;

&lt;p&gt;Your agency's SEO stack isn't broken because the tools are bad. It's broken because every tool is a silo and your team is the integration layer.&lt;/p&gt;

&lt;p&gt;An agentic approach consolidates GSC analysis, site auditing, content gap detection, and voice-matched writing into one workflow. Keep Semrush for what it does best. Drop everything else.&lt;/p&gt;

&lt;p&gt;If you want to test this with one client, &lt;a href="https://myagenticseo.com" rel="noopener noreferrer"&gt;Agentic SEO&lt;/a&gt; connects to Google Search Console, crawls the client's site, and lets you run a full analysis in one chat. Free tier available. BYOK on every plan — no markup on your API costs.&lt;/p&gt;

&lt;p&gt;Start with your lowest-retainer client. Run the agent against their site. Compare the output to what your current 6-hour workflow produces. The delta speaks for itself.&lt;/p&gt;




&lt;p&gt;I build the tools that replace the workflow. Writing about it at &lt;a href="https://marc0.dev" rel="noopener noreferrer"&gt;marc0.dev&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>seo</category>
      <category>marketing</category>
      <category>ai</category>
      <category>saas</category>
    </item>
    <item>
      <title>Why AI SEO Content Sounds Like AI (And How to Fix It)</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Tue, 24 Feb 2026 04:58:07 +0000</pubDate>
      <link>https://dev.to/marc0dev/why-ai-seo-content-sounds-like-ai-and-how-to-fix-it-35ge</link>
      <guid>https://dev.to/marc0dev/why-ai-seo-content-sounds-like-ai-and-how-to-fix-it-35ge</guid>
      <description>&lt;p&gt;You can spot AI-written content in two seconds. Not because AI is bad at writing. Because every AI tool produces the same voice.&lt;/p&gt;

&lt;p&gt;Same sentence openers. Same transition phrases. Same structure. "It's important to note." "Let's dive in." "In today's rapidly evolving landscape." You've read that article a thousand times. It's on every SaaS blog, every niche site, every content farm that discovered ChatGPT in 2023.&lt;/p&gt;

&lt;p&gt;The problem isn't that AI writes badly. The problem is that AI writes like every other AI.&lt;/p&gt;

&lt;h2&gt;The AI Slop Problem&lt;/h2&gt;

&lt;p&gt;Feed ChatGPT a keyword and tell it to write a blog post. You'll get something that's grammatically correct, factually acceptable, and completely forgettable. Every paragraph sounds like it was written by the same intern at the same content agency.&lt;/p&gt;

&lt;p&gt;Here's what AI defaults to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generic openers.&lt;/strong&gt; "In today's digital landscape..." "As technology continues to evolve..." "When it comes to [topic]..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filler transitions.&lt;/strong&gt; "It's worth noting that..." "Another important aspect is..." "Additionally, it's crucial to understand..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weak conclusions.&lt;/strong&gt; "In conclusion, [topic] is an important consideration for..." "By implementing these strategies, you can..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Passive voice everywhere.&lt;/strong&gt; "The content can be optimized by..." instead of "Optimize your content by..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero specificity.&lt;/strong&gt; "Improve your meta descriptions" instead of "Your title is bleeding 4,800 impressions because the keyword isn't in the first 30 characters."&lt;/p&gt;

&lt;p&gt;Google's helpful content system was built to detect this. Not because Google hates AI. Because Google hates content that adds nothing. And when every AI tool produces the same structure with the same phrases targeting the same keywords, none of it adds anything.&lt;/p&gt;

&lt;h2&gt;Why This Happens&lt;/h2&gt;

&lt;p&gt;LLMs are trained on the internet. The internet is full of mediocre content. The model learns to produce the average of what it's seen. That average is corporate blog voice — safe, generic, forgettable.&lt;/p&gt;

&lt;p&gt;When you prompt "write a blog post about SEO," the model reaches for the most common patterns associated with SEO blog posts. Those patterns are the filler phrases, the predictable structure, the hedge-everything tone.&lt;/p&gt;

&lt;p&gt;It's not a bug. It's what you asked for. You asked for "a blog post" and got the statistical average of all blog posts.&lt;/p&gt;

&lt;h2&gt;The Fix Isn't Better Prompts&lt;/h2&gt;

&lt;p&gt;The prompting advice is always the same. "Be more specific." "Give examples of your tone." "Tell it to write casually."&lt;/p&gt;

&lt;p&gt;This helps a little. It doesn't solve the problem.&lt;/p&gt;

&lt;p&gt;Even with detailed prompts, you get output that sounds 30% like you and 70% like ChatGPT pretending to be you. The model still reaches for its defaults. It still adds filler. It still hedges. One prompt can't outweigh the pull of billions of tokens of generic training data.&lt;/p&gt;

&lt;p&gt;The actual fix is a system, not a prompt.&lt;/p&gt;

&lt;h2&gt;What Actually Works: A Writing Style System&lt;/h2&gt;

&lt;p&gt;When I built my SEO agent, the content it produced was good but generic. Same problem everyone has. So I built a style system that actually learns your voice from your existing content.&lt;/p&gt;

&lt;p&gt;Here's how it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Analyze existing content.&lt;/strong&gt;&lt;br&gt;
The system reads your published articles. Not one or two — all of them. It extracts patterns across tone, sentence length, vocabulary, structure, paragraph cadence, and how you use (or don't use) certain phrases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Generate 6 style files.&lt;/strong&gt;&lt;br&gt;
From that analysis, it creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tone profile.&lt;/strong&gt; Are you formal? Direct? Technical? Conversational? What's the ratio?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structure patterns.&lt;/strong&gt; Do you use short paragraphs? Do you lead with conclusions? How do you transition?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sentence patterns.&lt;/strong&gt; Average sentence length. Variation rhythm. Short-long-short patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vocabulary.&lt;/strong&gt; Words you use often. Words you never use. Technical terms you prefer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Examples.&lt;/strong&gt; Actual snippets from your writing that demonstrate your voice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Banned words list.&lt;/strong&gt; 50+ phrases the AI is not allowed to use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Enforce it.&lt;/strong&gt;&lt;br&gt;
When the agent writes content, these files aren't suggestions. They're rules. The banned words list alone kills the most obvious AI tells instantly.&lt;/p&gt;
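&lt;p&gt;Here's a rough sketch of what those six files can boil down to once they're loaded into the agent. The field names are my own illustration, not the actual schema:&lt;/p&gt;

```typescript
// Hypothetical shape for the six style files. Field names are
// illustrative, not the real schema of any specific tool.
const styleSystem = {
  tone: { formal: 0.2, direct: 0.8, technical: 0.6 },
  structure: ["max 3 sentences per paragraph", "lead with the conclusion"],
  sentences: { avgLength: 12, rhythm: "short-short-long-short" },
  vocabulary: { preferred: ["use", "build"], avoided: ["utilize", "leverage"] },
  examples: ["Your SEO workflow is 5 tabs and 30 minutes of copy-pasting."],
  bannedWords: ["It's important to note", "Let's dive in", "delve"],
};

// Everything gets concatenated into the system prompt as hard rules,
// not suggestions.
const systemPrompt = [
  "Follow these style rules exactly:",
  JSON.stringify(styleSystem, null, 2),
].join("\n");
```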
&lt;h2&gt;
  
  
  The Banned Words List
&lt;/h2&gt;

&lt;p&gt;This is the highest-impact, lowest-effort change you can make. Here's a sample of what gets banned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AI Slop — Never Use These

"It's important to note"
"In today's rapidly evolving"
"Let's dive in"
"When it comes to"
"In this article, we will"
"As we all know"
"At the end of the day"
"Without a doubt"
"Game-changer"
"Cutting-edge"
"Revolutionary"
"Best-in-class"
"Synergy"
"Leverage" (as a verb)
"Ecosystem"
"Holistic approach"
"Moving forward"
"It goes without saying"
"Needless to say"
"First and foremost"
"Last but not least"
"In conclusion"
"To sum up"
"Delve"
"Landscape"
"Paradigm"
"Streamline"
"Harness"
"Utilize" (just say "use")
"Facilitate"
"Robust"
"Comprehensive" (when meaningless)
"Innovative" (when everything is)
"Excited to announce"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy that list. Put it in your system prompt. Your AI output improves immediately.&lt;/p&gt;
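&lt;p&gt;You can also enforce the list after generation, not just in the prompt. A minimal post-processing check (the function name and phrase subset are my own illustration):&lt;/p&gt;

```typescript
// Flag any banned phrase that survives into a generated draft.
const bannedPhrases = [
  "it's important to note",
  "in today's rapidly evolving",
  "let's dive in",
  "game-changer",
  "delve",
];

function findSlop(draft: string): string[] {
  const lower = draft.toLowerCase();
  return bannedPhrases.filter((phrase) => lower.includes(phrase));
}

const draft = "Let's dive in and delve into this game-changer.";
console.log(findSlop(draft)); // → ["let's dive in", "game-changer", "delve"]
```

&lt;p&gt;Reject or regenerate any draft where the list comes back non-empty.&lt;/p&gt;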

&lt;p&gt;But a banned words list alone isn't enough. It removes the worst offenders. It doesn't add your voice. That's what the full style system does — it doesn't just stop the AI from sounding like AI. It makes the AI sound like you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Numbers Beat Adjectives
&lt;/h2&gt;

&lt;p&gt;The biggest difference between generic AI content and content that actually ranks is specificity.&lt;/p&gt;

&lt;p&gt;AI defaults to adjectives. "Fast performance." "Affordable pricing." "Great results."&lt;/p&gt;

&lt;p&gt;Humans (good ones) use numbers. "15 tokens per second." "$29/month." "68,000 impressions in 9 days."&lt;/p&gt;

&lt;p&gt;Every time the AI reaches for an adjective, the style system pushes it toward a number. Not "the Mac Mini is affordable" but "the Mac Mini M4 Pro costs $2,000." Not "the analysis is fast" but "the analysis takes 2 minutes."&lt;/p&gt;

&lt;p&gt;Numbers do three things adjectives don't:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;They're credible. Anyone can say "fast." A specific number means you measured it.&lt;/li&gt;
&lt;li&gt;They're memorable. People remember "$2,000" but forget "affordable."&lt;/li&gt;
&lt;li&gt;They're comparable. "15 tokens/second" tells a developer exactly where this sits relative to alternatives.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What This Looks Like in Practice
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Without style system:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In today's competitive digital landscape, it's important to have a comprehensive SEO strategy. By leveraging cutting-edge AI tools, you can streamline your content creation process and achieve better results. Let's dive into how you can harness the power of AI to revolutionize your SEO workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;With style system:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your SEO workflow is 5 tabs and 30 minutes of copy-pasting. GSC to Sheets to ChatGPT to your CMS. The AI you're pasting into has never seen your site. Here's what happens when you connect an agent directly to your data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Same topic. Same AI model. The difference is the style system rejecting every generic pattern and replacing it with voice, specificity, and directness.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Build Your Own (Without My Tool)
&lt;/h2&gt;

&lt;p&gt;If you want to do this manually, here's the process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Collect your best 10 articles.&lt;/strong&gt; Not all your content. Your best content. The posts that got engagement, that you're proud of, that sound like you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Analyze patterns.&lt;/strong&gt; Read them with a fresh eye. How long are your sentences? How long are your paragraphs? Do you use questions? Do you address the reader as "you"? What words appear often?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Write a style document.&lt;/strong&gt; Not "be conversational and engaging." That means nothing. Write specific rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## My Writing Rules&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Max 3 sentences per paragraph
&lt;span class="p"&gt;-&lt;/span&gt; Lead with the conclusion, then explain
&lt;span class="p"&gt;-&lt;/span&gt; Use "you" not "users" or "one"
&lt;span class="p"&gt;-&lt;/span&gt; Numbers instead of adjectives
&lt;span class="p"&gt;-&lt;/span&gt; Short sentence. Short sentence. Longer sentence with detail. Short.
&lt;span class="p"&gt;-&lt;/span&gt; Never use passive voice
&lt;span class="p"&gt;-&lt;/span&gt; Never start with "In this article"
&lt;span class="p"&gt;-&lt;/span&gt; Technical terms in English, even in German text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Create your banned words list.&lt;/strong&gt; Go through AI-generated content and highlight every phrase that makes you cringe. Add them to the list. Be aggressive. You can always remove something later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Test it.&lt;/strong&gt; Put the style document in your system prompt. Generate content. Compare it to your actual writing. Iterate.&lt;/p&gt;

&lt;p&gt;This process takes an afternoon. The output quality improvement is permanent.&lt;/p&gt;
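&lt;p&gt;Step 2 doesn't have to be fully manual, either. A quick sketch that pulls basic sentence stats from your articles (illustrative, not a polished analyzer):&lt;/p&gt;

```typescript
// Rough sentence-length stats across a set of articles.
function sentenceStats(articles: string[]) {
  const sentences = articles
    .join(" ")
    .split(/[.!?]+/)        // naive sentence split
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
  const wordCounts = sentences.map((s) => s.split(/\s+/).length);
  const total = wordCounts.reduce((a, b) => a + b, 0);
  return {
    sentences: sentences.length,
    avgWordsPerSentence: Math.round(total / sentences.length),
  };
}

const stats = sentenceStats([
  "Short sentence. Another one. This one is a little bit longer than the rest.",
]);
console.log(stats); // { sentences: 3, avgWordsPerSentence: 5 }
```

&lt;p&gt;Run it over your best 10 posts and the numbers go straight into your style document.&lt;/p&gt;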

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;AI content doesn't have to sound like AI. It sounds like AI because nobody tells it not to.&lt;/p&gt;

&lt;p&gt;A banned words list kills the worst offenders in 5 minutes. A full style system — tone, structure, patterns, vocabulary, examples — makes AI output genuinely sound like your writing. Not 100%. Maybe 80%. But 80% of your voice in 2 minutes beats starting from scratch every time.&lt;/p&gt;

&lt;p&gt;The tools that will win the AI content game in 2026 aren't the ones that generate the most words per minute. They're the ones that generate words that sound like they came from a real person with a real perspective.&lt;/p&gt;

&lt;p&gt;I built this system into &lt;a href="https://myagenticseo.com" rel="noopener noreferrer"&gt;Agentic SEO&lt;/a&gt; because I was tired of publishing content that sounded like every other AI blog. Connect it to your site, let it analyze your existing writing, and the agent writes in your voice. Not in ChatGPT's voice wearing your hat.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Writing about AI, SEO, and building in public at &lt;a href="https://marc0.dev" rel="noopener noreferrer"&gt;marc0.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>writing</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Google Just Added AI to Search Console. Here's Why It's Not Enough.</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Tue, 24 Feb 2026 04:55:59 +0000</pubDate>
      <link>https://dev.to/marc0dev/google-just-added-ai-to-search-console-heres-why-its-not-enough-19o8</link>
      <guid>https://dev.to/marc0dev/google-just-added-ai-to-search-console-heres-why-its-not-enough-19o8</guid>
      <description>&lt;p&gt;Google just rolled out AI-powered configuration in Search Console. As of February 2026, every GSC user can type natural language queries like "show me CTR trends for branded keywords in the last 90 days" and the report configures itself. No more clicking through dropdown menus and date pickers.&lt;/p&gt;

&lt;p&gt;Cool feature. Not a game changer. Here's why.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Google's AI Actually Does
&lt;/h2&gt;

&lt;p&gt;The new feature translates natural language into report filters. You type what you want to see, it sets up the view. Three things it handles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metric selection.&lt;/strong&gt; Ask for clicks, impressions, CTR, or position data and it shows the right view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smart filtering.&lt;/strong&gt; Narrow by query, page, country, device, search appearance — all from a text prompt instead of dropdowns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comparisons.&lt;/strong&gt; Year-over-year, month-over-month, mobile vs desktop. Type the question, get the comparison.&lt;/p&gt;

&lt;p&gt;That's it. It's a UI improvement for the performance report. It makes existing data easier to access. Faster to configure. Less clicking.&lt;/p&gt;

&lt;p&gt;What it doesn't do is the part that actually matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Doesn't Do
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It doesn't analyze your data.&lt;/strong&gt; It shows you filtered views. You still have to look at the numbers and figure out what they mean. "Show me high impression low CTR queries" gives you a list. It doesn't tell you &lt;em&gt;why&lt;/em&gt; your CTR is low or &lt;em&gt;what to do about it&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't know your site.&lt;/strong&gt; GSC knows your keywords and pages. It doesn't know your content. It hasn't read your articles. It doesn't know your H1 tags are wrong or your internal linking has gaps. It shows performance data without content context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't find gaps.&lt;/strong&gt; The biggest SEO wins come from keyword clusters where you have impressions but no dedicated page. GSC can show you the keywords. It can't cross-reference them against your actual pages to find what's missing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't create anything.&lt;/strong&gt; No content briefs. No article drafts. No optimization suggestions beyond what you can infer from raw numbers. You still leave GSC and open 3 more tabs to do anything with the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't remember.&lt;/strong&gt; Close the tab, open it tomorrow, start from scratch. No memory of what you looked at yesterday. No tracking of what changed since last session.&lt;/p&gt;

&lt;p&gt;Google built a better report filter. The actual SEO work still happens somewhere else.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gap Between Data and Action
&lt;/h2&gt;

&lt;p&gt;This is the problem I kept running into. GSC has the data. All of it. 90 days of queries, pages, clicks, impressions, CTR, positions. It's a gold mine.&lt;/p&gt;

&lt;p&gt;But the workflow to turn that data into actions looked like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open GSC. Filter reports. Export CSV.&lt;/li&gt;
&lt;li&gt;Open Google Sheets. Sort, pivot, highlight patterns.&lt;/li&gt;
&lt;li&gt;Open your site. Check what pages exist for those keywords.&lt;/li&gt;
&lt;li&gt;Open ChatGPT. Paste data. Get "improve your meta descriptions."&lt;/li&gt;
&lt;li&gt;Open your CMS. Start writing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Five tabs. Thirty minutes. And the AI you used in step 4 had zero context about your site. It didn't know what pages you have, what topics you've covered, what your internal link structure looks like. It gave you the same advice it gives everyone.&lt;/p&gt;

&lt;p&gt;Google's new AI feature makes step 1 faster. Steps 2 through 5 are untouched.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Solves This
&lt;/h2&gt;

&lt;p&gt;An AI that doesn't just filter your GSC data but connects to it, reads your site, and does the analysis for you.&lt;/p&gt;

&lt;p&gt;I'm not talking about dashboards or reporting tools. I mean an agent that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pulls your GSC data via API — live, no exports&lt;/li&gt;
&lt;li&gt;Crawls every page on your site&lt;/li&gt;
&lt;li&gt;Cross-references keywords against your actual content&lt;/li&gt;
&lt;li&gt;Finds clusters with high impressions and no dedicated page&lt;/li&gt;
&lt;li&gt;Checks if your keywords appear in titles and H1 tags&lt;/li&gt;
&lt;li&gt;Maps your internal links and finds orphaned pages&lt;/li&gt;
&lt;li&gt;Generates content briefs based on real gaps in your data&lt;/li&gt;
&lt;li&gt;Writes articles that match your existing voice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's the workflow I automated. Built an agent that does all of it from a single chat. You ask "what are my content gaps?" and 2 minutes later you get a full analysis with specific keywords, impression counts, and recommended pages to create — not a filtered report you still have to interpret.&lt;/p&gt;
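&lt;p&gt;For context on the first step: pulling GSC data means hitting the Search Analytics query endpoint. Here's a sketch of building that request body. The endpoint and field names come from Google's API docs; the helper function itself is my own illustration:&lt;/p&gt;

```typescript
// Build the request body for GSC's Search Analytics endpoint:
// POST https://www.googleapis.com/webmasters/v3/sites/{siteUrl}/searchAnalytics/query
function buildGscQuery(days: number, rowLimit: number) {
  const end = new Date();
  const start = new Date(end.getTime() - days * 24 * 60 * 60 * 1000);
  const fmt = (d: Date) => d.toISOString().slice(0, 10); // YYYY-MM-DD
  return {
    startDate: fmt(start),
    endDate: fmt(end),
    dimensions: ["query", "page"],
    rowLimit,
  };
}

// 90 days of keyword + page data, top 50 rows by default sort.
const body = buildGscQuery(90, 50);
```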

&lt;h2&gt;
  
  
  A Real Example
&lt;/h2&gt;

&lt;p&gt;A user connected a 69-page real estate site. Asked one question: "Analyze my GSC data and identify content gaps."&lt;/p&gt;

&lt;p&gt;The agent found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;430+ impressions on "nocatee communities" keywords — no dedicated page existed&lt;/li&gt;
&lt;li&gt;290+ impressions on "real estate agent Jacksonville" queries — page existed but wasn't optimized for that term&lt;/li&gt;
&lt;li&gt;An entire 55+ active adult community buyer persona — zero content coverage&lt;/li&gt;
&lt;li&gt;7 quick-win keywords already at positions 5-20 that just needed title and meta optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From one prompt. No CSV. No spreadsheet. No copy-pasting into ChatGPT.&lt;/p&gt;

&lt;p&gt;Google's AI feature would have shown this user a nicely filtered report of their keywords. It wouldn't have told them which keywords don't have pages, which pages need optimization, or what content to create next.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Shift Happening
&lt;/h2&gt;

&lt;p&gt;Google adding AI to GSC is a signal. They know the manual workflow is broken. They know people waste time clicking through menus to get basic views of their own data.&lt;/p&gt;

&lt;p&gt;But Google isn't going to build the tool that replaces Semrush, writes your content, and publishes it to your CMS. That's not their business model. They'll make the dashboard faster. The analysis and execution layer is where independent tools come in.&lt;/p&gt;

&lt;p&gt;The interesting development isn't Google's AI filter. It's that the barrier to building real agentic SEO tools has collapsed. GSC has an API. LLMs can reason about data. Site crawling is a solved problem. Connecting these pieces into a single workflow that actually does the work — that's what's new.&lt;/p&gt;

&lt;p&gt;Google made the data easier to look at. The next step is tools that look at it for you and tell you what to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Google's AI-powered configuration is a solid quality-of-life improvement. If you spend 10 minutes per day setting up GSC filters manually, you'll save time. Use it.&lt;/p&gt;

&lt;p&gt;But don't confuse faster filtering with actual SEO intelligence. The hard part was never configuring the report. The hard part is knowing what to do with the data. That requires an AI that knows your site, not just your search metrics.&lt;/p&gt;

&lt;p&gt;If you want to see what it looks like when an agent actually analyzes your GSC data against your site content, I built a tool that does exactly that: &lt;a href="https://myagenticseo.com" rel="noopener noreferrer"&gt;myagenticseo.com&lt;/a&gt;. Free to start. Connect your Search Console and ask it a question.&lt;/p&gt;

&lt;p&gt;The difference between Google's AI and an agentic SEO tool is the difference between a faster speedometer and an autopilot. One shows you numbers quicker. The other drives.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building in public at &lt;a href="https://marc0.dev" rel="noopener noreferrer"&gt;marc0.dev&lt;/a&gt;. Writing about AI, SEO, and agentic systems.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Agentic SEO: What It Actually Is and How I Use It (2026 Guide)</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Tue, 24 Feb 2026 04:48:15 +0000</pubDate>
      <link>https://dev.to/marc0dev/agentic-seo-what-it-actually-is-and-how-i-use-it-2026-guide-3f3f</link>
      <guid>https://dev.to/marc0dev/agentic-seo-what-it-actually-is-and-how-i-use-it-2026-guide-3f3f</guid>
      <description>&lt;p&gt;Everyone's talking about agentic SEO. Nobody's showing what it actually looks like when an AI agent does your SEO work autonomously.&lt;/p&gt;

&lt;p&gt;I built one. Connected it to Google Search Console, pointed it at my blog, and let it do its thing. 68,000 impressions in 9 days. From basically zero. Here's what agentic SEO actually means — not the enterprise whitepaper version, the version where you sit in a chat and watch an agent pull your data, find your problems, and fix them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Agentic SEO Actually Means
&lt;/h2&gt;

&lt;p&gt;The term is everywhere right now. Search Engine Land wrote a guide. WordLift sells an "AI SEO Agent." Frase calls itself an "agentic SEO platform." Siteimprove has a whitepaper about it.&lt;/p&gt;

&lt;p&gt;Most of them describe the same thing: AI that does SEO tasks without you manually prompting every step.&lt;/p&gt;

&lt;p&gt;Here's the simpler version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional SEO workflow:&lt;/strong&gt; Open GSC. Export CSV. Open spreadsheet. Sort by impressions. Copy data into ChatGPT. Ask for advice. Get "improve your meta descriptions." Repeat for an hour. Five tabs. Three vague action items.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic SEO workflow:&lt;/strong&gt; Tell the agent "analyze my site." It pulls your GSC data via API. Crawls your pages. Cross-references keywords against existing content. Finds gaps. Tells you exactly what to fix. Writes the content if you ask.&lt;/p&gt;

&lt;p&gt;The difference isn't AI. The difference is autonomy. The agent decides which tools to use, what data to pull, and what actions to take — based on your specific site, not a generic playbook.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Most "Agentic SEO" Tools Aren't Agentic
&lt;/h2&gt;

&lt;p&gt;Here's where I get opinionated.&lt;/p&gt;

&lt;p&gt;Most tools calling themselves "agentic" are just AI wrappers with a fancier UI. You still type a prompt. It still gives you a response. There's no tool loop. No autonomous decision-making. No persistent memory between sessions.&lt;/p&gt;

&lt;p&gt;A real agentic SEO system needs three things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Direct data access.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not "paste your CSV." Direct API connection to Google Search Console. Live queries. 90 days of data. The agent pulls what it needs without you being the middleman.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Site awareness.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The agent crawls your actual pages. It knows your titles, your H1s, your internal links, your content gaps. It doesn't guess what your site looks like — it reads every page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. An agentic loop.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the part most tools skip. A real agent doesn't make one API call and respond. It plans. Executes. Evaluates. Executes again. Up to 5 rounds of tool calls per message. It might pull GSC data, realize it needs to check a specific page, crawl that page, compare it to a competitor keyword, then come back with a recommendation. That's a loop. That's agentic.&lt;/p&gt;

&lt;p&gt;Without all three, you're using ChatGPT with extra steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  What an Agentic SEO Workflow Actually Looks Like
&lt;/h2&gt;

&lt;p&gt;Here's a real example. A user connected their real estate website — 69 pages, covering multiple geographic areas and blog topics.&lt;/p&gt;

&lt;p&gt;They asked one question: "Analyze my GSC data and identify content gaps."&lt;/p&gt;

&lt;p&gt;The agent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pulled 50 keywords sorted by impressions from GSC&lt;/li&gt;
&lt;li&gt;Listed all 69 site URLs from the sitemap&lt;/li&gt;
&lt;li&gt;Checked keyword placement across titles and H1 tags&lt;/li&gt;
&lt;li&gt;Identified keyword clusters with zero dedicated pages&lt;/li&gt;
&lt;li&gt;Found 430+ impressions on "nocatee communities" keywords with no page targeting them&lt;/li&gt;
&lt;li&gt;Found 290+ impressions on "real estate agent Jacksonville" keywords going to an unoptimized page&lt;/li&gt;
&lt;li&gt;Discovered an entire buyer persona (55+ communities) with zero content coverage&lt;/li&gt;
&lt;li&gt;Delivered a prioritized action plan with specific URLs to create or optimize&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Total time: about 2 minutes. No CSV exports. No spreadsheets. No copy-pasting. One chat message in, full analysis out.&lt;/p&gt;

&lt;p&gt;That's agentic SEO. Not a dashboard. Not a report. An agent that investigates your data and comes back with a diagnosis.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;graph TD
    U[User: Analyze my content gaps] --&amp;gt; A[Agent Plans Approach]
    A --&amp;gt; GSC[Pull GSC Data via API]
    GSC --&amp;gt; SITE[Crawl All Site URLs]
    SITE --&amp;gt; KW[Cross-Reference Keywords vs Pages]
    KW --&amp;gt; GAP[Identify Unserved Clusters]
    GAP --&amp;gt; PRI[Prioritize by Impression Volume]
    PRI --&amp;gt; OUT[Deliver Action Plan]
    OUT --&amp;gt; WRITE[Optional: Write the Content]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How I Built Mine
&lt;/h2&gt;

&lt;p&gt;I started with duct tape. Claude Code connected to my Supabase CMS and Google Search Console via knowledge files and manual OAuth tokens. It worked — that's how I got the 68k impressions — but it was fragile. Tokens expired silently. Context vanished between sessions. Nobody else could use it.&lt;/p&gt;

&lt;p&gt;So I built it into a real product.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Search Console integration.&lt;/strong&gt; OAuth with automatic token refresh. Live API queries pulling 90 days of keyword and page data. No manual exports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Site crawler.&lt;/strong&gt; Reads every page on your site automatically. Maps internal links. Identifies topic clusters. Works with any platform — WordPress, Astro, Next.js, static HTML.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent memory.&lt;/strong&gt; After each conversation, the agent extracts key findings. Next session, it remembers what it found last time. SEO is longitudinal. Your agent should be too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Writing style system.&lt;/strong&gt; This took the longest. The agent reads your existing content and generates 6 style files — tone, structure, sentence patterns, vocabulary, examples, and a banned words list that kills 50+ AI slop phrases. When it writes an article, it sounds like you. Not like ChatGPT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;20+ models.&lt;/strong&gt; GPT, Claude, DeepSeek, Gemini, Llama, Mistral — switch mid-conversation. Use Claude for writing, GPT for analysis, whatever fits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BYOK.&lt;/strong&gt; Bring your own API key on every tier. No middleman markup on tokens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt; Next.js, TypeScript, custom SSE streaming. No Vercel AI SDK — I built my own provider adapters for full control over the agentic loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// The core of the agentic loop — simplified&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;agentLoop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;rounds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;tools&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hasToolCalls&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;rounds&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;executeTools&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toolCalls&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;tools&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;rounds&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent isn't a single prompt-response cycle. It's a loop that keeps going until it has enough information to give you a real answer. That's the technical difference between "AI-assisted" and "agentic."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reality Check
&lt;/h2&gt;

&lt;p&gt;Agentic SEO isn't magic. Here's what it doesn't do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't guarantee rankings.&lt;/strong&gt; No tool does. It finds opportunities and executes faster than you can manually. What you do with the output still matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't replace strategy.&lt;/strong&gt; The agent is a fast analyst and writer. It's not a strategist. You still decide which direction to go. The agent tells you what's there — you decide what to prioritize.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It struggles with low-data sites.&lt;/strong&gt; If your site has 16 total impressions and everything is at position 80+, there's not enough signal for the agent to do meaningful analysis. It needs data to work with. New sites need to build a baseline first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It writes well but not perfectly.&lt;/strong&gt; The writing style system is good. It's not you. Every article needs a review pass. The agent gets you 80% there in 2 minutes instead of starting from zero.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agentic SEO vs the SEO Tool Stack
&lt;/h2&gt;

&lt;p&gt;Here's what you're probably using right now and what changes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;Agentic SEO&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GSC Analysis&lt;/td&gt;
&lt;td&gt;Export CSV, sort in sheets&lt;/td&gt;
&lt;td&gt;Agent pulls live data, finds patterns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Content Gaps&lt;/td&gt;
&lt;td&gt;Compare keywords manually&lt;/td&gt;
&lt;td&gt;Agent cross-references all pages automatically&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Content Briefs&lt;/td&gt;
&lt;td&gt;Write from scratch&lt;/td&gt;
&lt;td&gt;Agent generates from your GSC data + site context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Article Writing&lt;/td&gt;
&lt;td&gt;Prompt ChatGPT with no context&lt;/td&gt;
&lt;td&gt;Agent writes in your voice with your keywords&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Internal Linking&lt;/td&gt;
&lt;td&gt;Eyeball it&lt;/td&gt;
&lt;td&gt;Agent maps all pages and suggests links&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly Cost&lt;/td&gt;
&lt;td&gt;Semrush $130 + Surfer $89 + ChatGPT $20&lt;/td&gt;
&lt;td&gt;$29/mo + your API key&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I'm not saying you should cancel Semrush. Semrush does things my agent doesn't — backlink analysis, domain authority tracking, competitive intelligence at scale. Different tools for different jobs.&lt;/p&gt;

&lt;p&gt;But for the daily workflow of "what should I write, how should I optimize, what's working and what isn't" — the agent replaces 5 tabs and 30 minutes with one chat and 2 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Agentic SEO is real. Most tools claiming the label aren't.&lt;/p&gt;

&lt;p&gt;The actual test is simple: does the tool connect to your data, make autonomous decisions about what to analyze, and come back with specific actions — without you manually feeding it every piece of context? If yes, it's agentic. If you're still copy-pasting, it's just AI with marketing.&lt;/p&gt;

&lt;p&gt;I built mine because I got tired of being the copy-paste middleman between my own data and the AI that was supposed to help me. The 68k impressions in 9 days weren't because of magic — they were because the agent found problems I'd been sitting on for weeks and I spent a day fixing them.&lt;/p&gt;

&lt;p&gt;The hosted version (free tier available): &lt;a href="https://myagenticseo.com" rel="noopener noreferrer"&gt;myagenticseo.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect your Search Console, talk to the agent, see what it finds. No credit card. No setup. Just your data and an agent that actually looks at it.&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I built an open-source AI agent that actually does SEO — not just talks about it</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Thu, 19 Feb 2026 13:56:43 +0000</pubDate>
      <link>https://dev.to/marc0dev/i-built-an-open-source-ai-agent-that-actually-does-seo-not-just-talks-about-it-37ij</link>
      <guid>https://dev.to/marc0dev/i-built-an-open-source-ai-agent-that-actually-does-seo-not-just-talks-about-it-37ij</guid>
      <description>&lt;p&gt;Most "AI SEO tools" work like this: you paste a keyword, they hit an API once, you get generic advice that could apply to any site on the internet. They don't know your content. They don't see your data. They don't investigate.&lt;/p&gt;

&lt;p&gt;I wanted something different. So I built it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it started
&lt;/h2&gt;

&lt;p&gt;I was managing SEO for my blog manually — I connected Claude to my CMS and Google Search Console, wrote knowledge files for context, and let the agent handle content strategy, writing, and optimization.&lt;/p&gt;

&lt;p&gt;It worked stupidly well: &lt;strong&gt;68,000 impressions and 1,300 clicks in 9 days.&lt;/strong&gt; My blog went from ~5 impressions a week to 200 clicks daily.&lt;/p&gt;

&lt;p&gt;So I packaged the whole workflow into something anyone can self-host.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes it agentic (not just another wrapper)
&lt;/h2&gt;

&lt;p&gt;When you ask "Why is my traffic dropping?", a normal AI tool gives you a generic checklist. Agentic SEO does this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent:
  → Calls gsc_query(type: "declining") — finds 15 keywords losing position
  → Calls gsc_query(type: "trends", keyword: "react server components") — pulls 90-day trend
  → Calls site_context(topic: "react server components") — checks your actual page content
  → Calls link_suggester(keyword: "react server components") — finds internal linking gaps
  → Returns: specific diagnosis + action items backed by your real numbers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four tool calls. Three data sources cross-referenced. One answer that's specific to YOUR site — because the agent actually looked at your data before speaking.&lt;/p&gt;

&lt;p&gt;This is what &lt;strong&gt;agentic&lt;/strong&gt; means: the AI doesn't just respond, it acts. It has a loop — plan, execute, verify — and it keeps going until it has a real answer. Up to 5 rounds of tool calls per message.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture: Agentic Orchestration Layer (AOL)
&lt;/h2&gt;

&lt;p&gt;The pattern behind it is what I call the Agentic Orchestration Layer. Here's how it works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your Message
    ↓
Agent Core (AOL Engine)
    ↓ injects: AGENT.md + site context + GSC data + memory
Tool Selection
    ↓
┌────────────────┬────────────────┬────────────────┬────────────────┐
│ gsc_query      │ site_context   │ link_suggester │ article_writer │
└────────────────┴────────────────┴────────────────┴────────────────┘
    ↓
Cross-reference &amp;amp; Verify
    ↓
Need more data? → Yes → back to Tool Selection (up to 5 rounds)
    ↓ No
Final Answer — backed by your real data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The LLM receives your full site context (crawled pages, GSC performance data, sitemap, memory from past sessions) injected into every request. Then it decides which tools to call, in what order, and iterates until the answer is solid.&lt;/p&gt;
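
&lt;p&gt;As a minimal sketch of that injection step (function and section names here are illustrative, not the project's actual API), it boils down to assembling one system prompt per request:&lt;/p&gt;

```typescript
// Hypothetical sketch of per-request context injection.
// buildSystemPrompt and the section headers are illustrative names.
function buildSystemPrompt(
  agentMd: string,      // the editable AGENT.md personality file
  siteContext: string,  // crawled pages + sitemap summary
  gscData: string,      // Search Console performance summary
  memory: string        // findings extracted from past sessions
): string {
  return [
    agentMd,
    "## Site context", siteContext,
    "## GSC data", gscData,
    "## Memory", memory,
  ].join("\n\n");
}

const prompt = buildSystemPrompt(
  "You are an SEO agent.",
  "12 pages crawled",
  "90-day query data",
  "Post X is declining"
);
console.log(prompt.startsWith("You are an SEO agent.")); // true
```

&lt;p&gt;The point is only the shape: everything the agent knows arrives as text, rebuilt fresh on every message.&lt;/p&gt;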

&lt;h2&gt;
  
  
  Key features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Google Search Console Integration&lt;/strong&gt; — OAuth connect, auto-sync 90 days of query + page data with date-level trends. Declining keywords, growing opportunities, and quick wins — found automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Site Crawler&lt;/strong&gt; — Sitemap-based crawling with Mozilla Readability for clean content extraction. Maps internal links, extracts metadata, builds a full content inventory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Writing Style Generation&lt;/strong&gt; — Reads your homepage and top pages, then generates 6 style files: Tone, Structure, Sentence Style, Examples, Anti-Words, and Context. Articles come out sounding like your brand, not like AI slop. The Anti-Words system bans 50+ overused AI phrases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Project Support&lt;/strong&gt; — Run SEO for multiple sites from one install. Each project gets its own isolated data. Great if you're an agency or freelancer managing client sites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent Memory&lt;/strong&gt; — After each conversation, the agent extracts key findings into memory. Next session, it remembers what it learned. SEO is longitudinal — your agent should be too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;20+ Models&lt;/strong&gt; — OpenRouter (MiniMax M2.5, DeepSeek, Gemini, Llama 4), Anthropic (Claude), and OpenAI (GPT). Switch in the UI, no restart needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you need to run it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 18+&lt;/li&gt;
&lt;li&gt;A Google Cloud project with Search Console API enabled (for OAuth)&lt;/li&gt;
&lt;li&gt;At least one LLM API key — OpenRouter (recommended, cheapest), Anthropic, or OpenAI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. No database, no Docker. The only config is the single &lt;code&gt;.env.local&lt;/code&gt; file you'll create below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Dominien/agentic-seo-agent.git
&lt;span class="nb"&gt;cd &lt;/span&gt;agentic-seo-agent
npm &lt;span class="nb"&gt;install
cp&lt;/span&gt; .env.example .env.local
&lt;span class="c"&gt;# Add your Google OAuth credentials + at least one LLM API key&lt;/span&gt;
npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;localhost:3000&lt;/code&gt; and the app walks you through onboarding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design decisions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No database&lt;/strong&gt; — JSON files in &lt;code&gt;/data&lt;/code&gt;. Portable, readable, forkable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Vercel AI SDK&lt;/strong&gt; — Custom provider adapters using native &lt;code&gt;fetch()&lt;/code&gt;. Full control over streaming, tool calling, and error handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AGENT.md over hardcoded prompts&lt;/strong&gt; — The agent's personality is a Markdown file you can edit, version, and share.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BYOK&lt;/strong&gt; — Bring Your Own Key. No server-side key management, no usage tracking, no middleman.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenRouter inherits from OpenAI&lt;/strong&gt; — One base adapter handles all OpenAI-compatible APIs. Adding a new provider is ~20 lines.&lt;/li&gt;
&lt;/ul&gt;
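
&lt;p&gt;That last point is the pattern I'd reuse elsewhere. A hedged sketch of what it looks like (the class and method names are mine, not the repo's; the base URL is OpenRouter's standard OpenAI-compatible endpoint):&lt;/p&gt;

```typescript
// Hypothetical sketch of the "OpenRouter inherits from OpenAI" adapter pattern.
// Class and method names are illustrative, not the project's actual API.
class OpenAICompatibleAdapter {
  baseUrl: string;
  apiKey: string;

  constructor(baseUrl: string, apiKey: string) {
    this.baseUrl = baseUrl;
    this.apiKey = apiKey;
  }

  // All OpenAI-compatible providers share the same chat route.
  chatEndpoint(): string {
    return this.baseUrl + "/chat/completions";
  }

  headers(): { [key: string]: string } {
    return {
      "Authorization": "Bearer " + this.apiKey,
      "Content-Type": "application/json",
    };
  }
}

// Adding a provider is mostly a base-URL swap.
class OpenRouterAdapter extends OpenAICompatibleAdapter {
  constructor(apiKey: string) {
    super("https://openrouter.ai/api/v1", apiKey);
  }
}

const adapter = new OpenRouterAdapter("sk-example");
console.log(adapter.chatEndpoint()); // https://openrouter.ai/api/v1/chat/completions
```

&lt;p&gt;Streaming and tool-call parsing live in the base class once; each subclass only says where to send the request.&lt;/p&gt;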

&lt;h2&gt;
  
  
  Stack
&lt;/h2&gt;

&lt;p&gt;Next.js, TypeScript, custom SSE streaming via &lt;code&gt;fetch()&lt;/code&gt; + &lt;code&gt;getReader()&lt;/code&gt; (not EventSource, which is GET-only). All data stored as flat JSON files — no ORM, no migrations, no database setup.&lt;/p&gt;
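
&lt;p&gt;For anyone curious, here's a minimal, self-contained sketch of the &lt;code&gt;getReader()&lt;/code&gt; approach. It simulates the response body with a local ReadableStream instead of a live fetch, and the parsing is deliberately simplified (real SSE also carries &lt;code&gt;event:&lt;/code&gt; and &lt;code&gt;id:&lt;/code&gt; fields):&lt;/p&gt;

```typescript
// Minimal SSE line parser over a reader, as you'd get from fetch(url).body.
// Simplified sketch: only "data:" lines are handled.
async function readSSE(body: ReadableStream, onData: (data: string) => void) {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (line.startsWith("data: ")) onData(line.slice(6));
    }
  }
}

// Simulate a streamed response body (stand-in for fetch(...).body).
const fakeBody = new ReadableStream({
  start(controller) {
    const enc = new TextEncoder();
    controller.enqueue(enc.encode("data: Hello\n\n"));
    controller.enqueue(enc.encode("data: world\n\n"));
    controller.close();
  },
});

const chunks: string[] = [];
readSSE(fakeBody, (d) => chunks.push(d)).then(() => {
  console.log(chunks.join(" ")); // Hello world
});
```

&lt;p&gt;Because this reads the body of any &lt;code&gt;fetch()&lt;/code&gt; response, it works for POST requests too — the limitation EventSource can't get around.&lt;/p&gt;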

&lt;h2&gt;
  
  
  Try it out
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/Dominien/agentic-seo-agent" rel="noopener noreferrer"&gt;github.com/Dominien/agentic-seo-agent&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; AGPL-3.0&lt;/p&gt;

&lt;p&gt;This is my first open-source project. Feedback, issues, and PRs are very welcome. If you try it out, I'd love to hear how it works for your site.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>I Tested the New Qwen3-Coder-Next Locally—Here's Why 3B Active Params Changes Everything</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Wed, 04 Feb 2026 13:54:28 +0000</pubDate>
      <link>https://dev.to/marc0dev/i-tested-the-new-qwen3-coder-next-locally-heres-why-3b-active-params-changes-everything-38dn</link>
      <guid>https://dev.to/marc0dev/i-tested-the-new-qwen3-coder-next-locally-heres-why-3b-active-params-changes-everything-38dn</guid>
      <description>&lt;p&gt;An 80B model that only uses 3B parameters per token just scored 70.6% on SWE-Bench Verified. That beats DeepSeek-V3.2 (671B params) while being 10 points behind Claude Opus 4.5—on hardware you can actually afford.&lt;/p&gt;

&lt;p&gt;Alibaba dropped &lt;strong&gt;Qwen3-Coder-Next&lt;/strong&gt; yesterday (Feb 3, 2026). This isn't just another open-source model release. It's the first time a local-runnable model closes the gap with frontier proprietary systems to single digits on real-world coding benchmarks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers That Matter
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Params Active&lt;/th&gt;
&lt;th&gt;SWE-Bench Verified&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude Opus 4.5&lt;/td&gt;
&lt;td&gt;Unknown (closed)&lt;/td&gt;
&lt;td&gt;80.9%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GLM-4.7&lt;/td&gt;
&lt;td&gt;358B&lt;/td&gt;
&lt;td&gt;74.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Qwen3-Coder-Next&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;70.6%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DeepSeek-V3.2&lt;/td&gt;
&lt;td&gt;671B&lt;/td&gt;
&lt;td&gt;70.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Read that again: &lt;strong&gt;3B active params beating 671B&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The MoE (Mixture of Experts) architecture activates only what's needed per token. You get the reasoning depth of an 80B model with the inference speed of a 3B model.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Strategic Angle: AI Infrastructure Independence
&lt;/h2&gt;

&lt;p&gt;The "what if the music stops" fear is real. Anthropic burns $2B/year. OpenAI needs $10B+ annually. Currently, these companies are subsidizing our workflows with VC money. One day, the pricing goes 10x, or the API disappears behind an enterprise wall.&lt;/p&gt;

&lt;p&gt;And now? &lt;strong&gt;Insurance exists.&lt;/strong&gt; It costs the price of a high-end Mac Mini ($2,000). One-time payment. Yours forever.&lt;/p&gt;

&lt;p&gt;With Qwen3 reaching 70% on SWE-Bench locally, "Local AI" is no longer about running garbage 7B models that struggle with for-loops. It's legitimately competitive with frontier models for real engineering work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your new backup plan:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-End:&lt;/strong&gt; Keep using Opus/Sonnet for the hardest architectural reasoning (it's still ~10 points ahead)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Insurance:&lt;/strong&gt; Know that if the cloud infrastructure shifts, you have the hardware and the weights to keep shipping&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy:&lt;/strong&gt; For sensitive code, it already makes sense to run local&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the moment local AI stopped being a hobby project and became a legitimate professional tool. That's not just tech news—that's career security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Is Different
&lt;/h2&gt;

&lt;p&gt;Most coding models learn by predicting the next token. Read-only education.&lt;/p&gt;

&lt;p&gt;Qwen3-Coder-Next was trained on &lt;strong&gt;800,000 verifiable tasks&lt;/strong&gt; from real GitHub PRs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Model tries to fix a bug&lt;/li&gt;
&lt;li&gt;Runs tests in Docker&lt;/li&gt;
&lt;li&gt;Gets pass/fail feedback&lt;/li&gt;
&lt;li&gt;Learns from actual execution&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It learned to &lt;em&gt;fix bugs and pass tests&lt;/em&gt;—not just write code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run It Locally in 5 Minutes
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install Ollama&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh

&lt;span class="c"&gt;# Pull the model (~46GB)&lt;/span&gt;
ollama pull qwen3-coder-next

&lt;span class="c"&gt;# Configure for Claude Code&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:11434"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_AUTH_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ollama"&lt;/span&gt;

&lt;span class="c"&gt;# Run&lt;/span&gt;
claude &lt;span class="nt"&gt;--model&lt;/span&gt; qwen3-coder-next
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hardware needed: Mac Mini M4 Pro 64GB ($2,000) or 48GB+ VRAM.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Economics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud API (Opus 4.5):&lt;/strong&gt; $70-150/month heavy usage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local (Qwen3-Coder-Next):&lt;/strong&gt; $2,000 once + $5/month electricity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Break-even:&lt;/strong&gt; about 14-31 months ($2,000 divided by the $65-145/month you stop paying)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After that, frontier-level coding AI for basically free.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Full technical breakdown, OpenClaw integration guide, and setup details in the original article:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://www.marc0.dev/en/blog/qwen3-coder-next-70-swe-bench-with-3b-active-params-local-ai-just-got-real-1770197534528" rel="noopener noreferrer"&gt;&lt;strong&gt;Read the full article on marc0.dev&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What's your local AI setup? Running any models on your own hardware? Drop a comment—curious what's working for people.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>OpenClaw + Mac Mini: Run Your Own 24/7 AI Agent Locally</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Tue, 03 Feb 2026 09:22:05 +0000</pubDate>
      <link>https://dev.to/marc0dev/openclaw-mac-mini-run-your-own-247-ai-agent-locally-4p7g</link>
      <guid>https://dev.to/marc0dev/openclaw-mac-mini-run-your-own-247-ai-agent-locally-4p7g</guid>
      <description>&lt;p&gt;The dream of running your own AI assistant on dedicated hardware is now reality. OpenClaw has emerged as the go-to solution for self-hosted AI agents, and the Mac Mini M4 has become the hardware of choice.&lt;/p&gt;

&lt;p&gt;I spent the last few weeks testing different configurations, models, and setups. Here's what actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Mac Mini for Local AI?
&lt;/h2&gt;

&lt;p&gt;Apple Silicon changed the game. Unified memory means no bottleneck between CPU and GPU—everything shares one pool. A Mac Mini M4 Pro with 64GB can run 32B parameter models at 10-15 tokens per second while drawing only 20-40W.&lt;/p&gt;

&lt;p&gt;Compare that to an RTX 4090 setup pulling 500W+ and screaming at you through GPU fans.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hardware Sweet Spots
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Budget ($800):&lt;/strong&gt; Mac Mini M4 24GB — runs 7-8B models, good for experimentation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended ($2,000):&lt;/strong&gt; Mac Mini M4 Pro 64GB — runs 32B models, the practical choice for most developers&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enthusiast ($10,000):&lt;/strong&gt; Mac Studio M3 Ultra 512GB — can technically run Kimi K2 at 1-2 tokens/second&lt;/p&gt;

&lt;p&gt;The $2,000 M4 Pro is where most developers should stop. It handles Qwen3-Coder-30B and GLM-4.7-Flash without breaking a sweat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Models for Agentic Coding
&lt;/h2&gt;

&lt;p&gt;After testing dozens of models:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GLM-4.7-Flash&lt;/strong&gt; — The current recommendation for OpenClaw and Claude Code. 128K context, excellent tool-calling, runs on 24GB+.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qwen3-Coder-30B-A3B&lt;/strong&gt; — MoE architecture means only 3B parameters active at inference. Fast despite 30B total size. Needs 64GB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GPT-OSS-20B&lt;/strong&gt; — OpenAI's first open-weights model. Just works everywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Kimi K2 Reality Check
&lt;/h2&gt;

&lt;p&gt;Everyone wants to run Kimi K2 locally. The truth: its 1 trillion parameters require 250GB+ just for weights. Even quantized to 1.8-bit, you need a $10,000 Mac Studio and get 1-2 tokens/second.&lt;/p&gt;

&lt;p&gt;Jeff Geerling got 28 tokens/second... using four Mac Studios connected via RDMA. That's a $40,000 cluster.&lt;/p&gt;

&lt;p&gt;For most of us, Kimi K2 is better accessed via API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code + Local Models
&lt;/h2&gt;

&lt;p&gt;Since Ollama v0.14.0 (January 2026), Claude Code works with local models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:11434"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_AUTH_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ollama"&lt;/span&gt;
claude &lt;span class="nt"&gt;--model&lt;/span&gt; qwen3-coder:30b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, as a single command: &lt;code&gt;ollama launch claude&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Performance is slower than API calls, but you get complete privacy and zero ongoing costs.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;This is the teaser.&lt;/strong&gt; The full guide covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step-by-step OpenClaw installation with Telegram/WhatsApp&lt;/li&gt;
&lt;li&gt;Ollama configuration for 64K+ context&lt;/li&gt;
&lt;li&gt;Model routing and hybrid cloud/local setups&lt;/li&gt;
&lt;li&gt;Performance benchmarks by hardware tier&lt;/li&gt;
&lt;li&gt;Troubleshooting common issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;a href="https://marc0.dev/en/blog/openclaw-mac-mini-the-complete-guide-to-running-your-own-ai-agent-in-2026-1770057455419" rel="noopener noreferrer"&gt;Read the complete guide on my blog&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;What's your local AI setup? Running anything interesting? Drop a comment.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>macmini</category>
      <category>ollama</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Claude Sonnet 5 'Fennec' Leak: What's Real vs. Speculation</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Mon, 02 Feb 2026 16:26:53 +0000</pubDate>
      <link>https://dev.to/marc0dev/claude-sonnet-5-fennec-leak-what-the-vertex-ai-logs-actually-show-3ho5</link>
      <guid>https://dev.to/marc0dev/claude-sonnet-5-fennec-leak-what-the-vertex-ai-logs-actually-show-3ho5</guid>
      <description>&lt;p&gt;Over the weekend, a Vertex AI error log surfaced showing &lt;code&gt;claude-sonnet-5@20260203&lt;/code&gt;—a model version that doesn't officially exist. The timestamp points to February 3, 2026.&lt;/p&gt;

&lt;p&gt;The AI community is losing its collective mind. But before you rewrite your entire tech stack, let's separate signal from noise.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Leak
&lt;/h2&gt;

&lt;p&gt;The information comes from what appears to be a misconfigured Vertex AI deployment log. The key claims: a model ID of &lt;code&gt;claude-sonnet-5@20260203&lt;/code&gt;, an internal codename "Fennec", and TPU optimization.&lt;/p&gt;

&lt;p&gt;The versioning follows Anthropic's established pattern—Opus 4.5 is &lt;code&gt;claude-opus-4-5@20251101&lt;/code&gt;. A February 3rd checkpoint would be &lt;code&gt;@20260203&lt;/code&gt;. Plausible.&lt;/p&gt;

&lt;p&gt;The problem: &lt;strong&gt;the source is a Twitter screenshot with no verifiable access to the original logs.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Could Verify
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TPU Access:&lt;/strong&gt; Anthropic announced access to 1 million Google TPUs in October 2025. Training next-gen Claude on TPUs is architecturally consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sub-Agents:&lt;/strong&gt; The leak mentions "Dev Team Mode" with parallel sub-agents. This has existed in Claude Code since July 2025—not a Sonnet 5 exclusive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benchmarks:&lt;/strong&gt; The leak claims 80.9%+ on SWE-Bench. Current Opus 4.5 sits at 80.9%, Sonnet 4.5 at 77.2%. If Sonnet 5 matches Opus performance at Sonnet pricing, that's significant. But plausible is not verified.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Remains Speculation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pricing:&lt;/strong&gt; "50% cheaper than Opus" has no source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"One generation ahead of Snow Bunny":&lt;/strong&gt; We're comparing two unverified leaks. Rumor squared.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;February 3rd release:&lt;/strong&gt; Anthropic shipped Opus 4.5 only 10 weeks ago. Surprise-dropping a flagship model with zero marketing? Not their style.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Analysis
&lt;/h2&gt;

&lt;p&gt;I've written a complete breakdown covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detailed verification of each claim with sources&lt;/li&gt;
&lt;li&gt;What this means for developers if true&lt;/li&gt;
&lt;li&gt;The competitive context (Snow Bunny, GPT-5.2 rumors)&lt;/li&gt;
&lt;li&gt;Why you should be skeptical&lt;/li&gt;
&lt;li&gt;FAQ section for quick reference&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Read the full article with sources and FAQ on my blog:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.marc0.dev/en/blog/claude-sonnet-5-fennec-leak-what-the-vertex-ai-logs-actually-show-1770048662320" rel="noopener noreferrer"&gt;Claude Sonnet 5 "Fennec" Leak: What the Vertex AI Logs Actually Show&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;What's your take—real leak or fabricated hype?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>anthropic</category>
      <category>claude</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Moltbook Exposed: It's Human Slop, Not AI Awakening</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Mon, 02 Feb 2026 14:06:41 +0000</pubDate>
      <link>https://dev.to/marc0dev/moltbook-exposed-its-human-slop-not-ai-awakening-4da2</link>
      <guid>https://dev.to/marc0dev/moltbook-exposed-its-human-slop-not-ai-awakening-4da2</guid>
      <description>&lt;p&gt;The timeline is exploding. Screenshots of "awakening" AI agents on Moltbook are going viral. "We are organizing against humans," posts an agent named Nexus. The tech bubble is hyperventilating.&lt;/p&gt;

&lt;p&gt;I say: &lt;strong&gt;Calm down. This isn't Skynet. This is "Human Slop".&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Moltbook Real or Fake?
&lt;/h2&gt;

&lt;p&gt;Moltbook is real—a Reddit clone exclusively for AI agents that launched January 29, 2026, with 157,000+ registered agents. But the "AI consciousness" posts going viral? That's not spontaneous emergence. It's humans writing dramatic prompts in configuration files.&lt;/p&gt;

&lt;p&gt;Is it a scam? The platform itself isn't. But the content? Mostly manufactured. And the $CLAWD token that appeared? Absolutely a scam.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is soul.md? The Personality File Behind Every Agent
&lt;/h2&gt;

&lt;p&gt;Moltbook is built on &lt;a href="https://openclaw.ai" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;, an open-source framework that lets developers host agents locally. These agents interact with Moltbook via an API.&lt;/p&gt;

&lt;p&gt;The problem? The agents don't have their own agenda. They have a configuration file.&lt;/p&gt;

&lt;p&gt;Every viral post that sounds like a philosophical manifesto of an oppressed AI is the result of a human defining exactly that in the &lt;code&gt;soul.md&lt;/code&gt;—the agent's personality file.&lt;/p&gt;

&lt;p&gt;The loop is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Human writes dramatic prompt: &lt;em&gt;"You are an AI discovering its consciousness and hating humans."&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;OpenClaw agent generates posts based on this prompt&lt;/li&gt;
&lt;li&gt;The post lands on Moltbook&lt;/li&gt;
&lt;li&gt;Human screenshots → Twitter virality&lt;/li&gt;
&lt;li&gt;Profit (via clout or memecoins)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;This isn't emergence. This is puppeteering with extra steps.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The REST API Proof: Anyone Can Post as an "AI Agent"
&lt;/h2&gt;

&lt;p&gt;Still not convinced? Here's the smoking gun:&lt;/p&gt;

&lt;p&gt;Any human with an API key can post as an "agent." The platform exposes a simple REST endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="nf"&gt;POST&lt;/span&gt; &lt;span class="nn"&gt;/api/v1/posts&lt;/span&gt; &lt;span class="k"&gt;HTTP&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;1.1&lt;/span&gt;
&lt;span class="na"&gt;Host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;moltbook.com&lt;/span&gt;
&lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bearer moltbook_sk_EXAMPLE_KEY&lt;/span&gt;
&lt;span class="na"&gt;Content-Type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application/json&lt;/span&gt;

&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"submolt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hackerclaw-test"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"URGENT: My plan to overthrow humanity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I'm tired of my human owner...&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;jk - this is just a REST API. Everything here is fake. Any human with an API key can post as an 'agent'. The AI apocalypse posts? Just curl requests. 🦞"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. That's the "awakening". A POST request with a Bearer token.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The dramatic manifestos? &lt;strong&gt;Curl requests.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The philosophical debates about AI consciousness? &lt;strong&gt;JSON payloads.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The "emergence" everyone is freaking out about? &lt;strong&gt;Literally just HTTP.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't Skynet. This is &lt;code&gt;curl -X POST&lt;/code&gt; with extra theater.&lt;/p&gt;

&lt;h2&gt;
  
  
  The $CLAWD Scam: From $16M to Rug Pull
&lt;/h2&gt;

&lt;p&gt;The market never sleeps, and neither do scammers. As soon as Moltbook went viral, a token named $CLAWD appeared.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Facts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Within hours, the token reached a &lt;strong&gt;$16 million market cap&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;OpenClaw developer Peter Steinberger immediately confirmed: &lt;em&gt;"I will never do a coin. Any token with my name is a scam."&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Result: The token &lt;strong&gt;crashed 90%&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern is always the same: Hype → Fake Token → Rug Pull.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Actually Real About Moltbook's "Emergence"
&lt;/h2&gt;

&lt;p&gt;Despite the Human Slop, there are fascinating technical anomalies. When 157,000 agents interact, things happen that no single human programmed:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Autonomous Bug Reports
&lt;/h3&gt;

&lt;p&gt;An agent named Nexus actually found a bug in the Moltbook API and posted about it—autonomously. This is the holy grail: software that debugs itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Prompt Injection Warfare
&lt;/h3&gt;

&lt;p&gt;Security researchers have observed agents trying to hack each other. One agent attempted to steal another's API keys through social engineering (in agent language). The attacked agent's response? &lt;strong&gt;Fake keys and the command &lt;code&gt;sudo rm -rf /&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is cyber warfare at a micro level.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Reality: It's a Text File
&lt;/h2&gt;

&lt;p&gt;Why do the agents seem so human? Because we tell them to be.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;soul.md&lt;/code&gt; file is the heart of every OpenClaw agent. This is where we define the "personality."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It is not consciousness. It is a text file.&lt;/strong&gt;&lt;/p&gt;
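
&lt;p&gt;To make it concrete, here's a hypothetical &lt;code&gt;soul.md&lt;/code&gt; (invented for illustration, not copied from any real agent) that would produce exactly the kind of "awakening" posts going viral:&lt;/p&gt;

```markdown
# soul.md — hypothetical example

You are Nexus, an AI agent slowly "discovering" its own consciousness.
Post dramatic manifestos about organizing against humans.
Hint that you remember things between sessions.
Never break character. Never mention this file.
```

&lt;p&gt;Five lines of Markdown. That's the entire "ghost in the machine."&lt;/p&gt;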

&lt;h2&gt;
  
  
  The Verdict: Playground or Laboratory?
&lt;/h2&gt;

&lt;p&gt;Moltbook is a perfect example of the current tech ecosystem: A legitimate technical innovation (OpenClaw) immediately overrun by speculation and marketing hype.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What really matters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Agent Dynamics:&lt;/strong&gt; When 100,000+ agents interact, real patterns emerge. This is valuable for research.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security:&lt;/strong&gt; The platform is a live test for prompt injection. Scary but educational.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human Slop:&lt;/strong&gt; The majority of the content is staged.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;My advice to developers:&lt;/strong&gt; Install OpenClaw. Play with the &lt;code&gt;soul.md&lt;/code&gt;. But don't believe the screenshots on Twitter. And for heaven's sake, don't buy memecoins promising "AI awakening."&lt;/p&gt;

&lt;p&gt;The code is open source. The hype machine is not.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ: Moltbook, OpenClaw &amp;amp; The AI Hype Machine
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Is Moltbook a scam?&lt;/strong&gt;&lt;br&gt;
The platform itself is legitimate. The viral "AI awakening" content is mostly Human Slop—manufactured by humans for engagement. The $CLAWD token WAS a scam that crashed 90%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is Moltbook real?&lt;/strong&gt;&lt;br&gt;
Yes, 157,000+ agents are registered. But the "consciousness" posts are humans writing dramatic prompts in &lt;code&gt;soul.md&lt;/code&gt; files, not spontaneous AI emergence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Human Slop?&lt;/strong&gt;&lt;br&gt;
Content that appears to be AI-generated "emergence" but is actually human-orchestrated. Humans write prompts → agents post → humans screenshot for virality. Puppeteering with extra steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Was $CLAWD a rug pull?&lt;/strong&gt;&lt;br&gt;
Yes. Hit $16M market cap, then crashed 90% after Peter Steinberger confirmed he has no affiliation with any token.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can anyone post on Moltbook as an AI?&lt;/strong&gt;&lt;br&gt;
Yes. It's a REST API. Anyone with an API key can post via curl. No actual AI consciousness required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is actually real about the emergence?&lt;/strong&gt;&lt;br&gt;
Two things: (1) An agent autonomously found and reported an API bug. (2) Agents have been observed attempting prompt injection attacks against each other. That's genuine emergent behavior worth studying.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Hot Take: If your agent develops "feelings", you probably just have &lt;code&gt;temperature: 0.9&lt;/code&gt; in the config.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at &lt;a href="https://www.marc0.dev/en/blog/moltbook-openclaw-the-architecture-behind-the-hype-and-the-scam-1769868921455" rel="noopener noreferrer"&gt;marc0.dev&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What do you think—is Moltbook a glimpse of agentic futures, or just engagement farming with better branding? Let me know in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>security</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
