<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yibo Hu</title>
    <description>The latest articles on DEV Community by Yibo Hu (@huyibodtc).</description>
    <link>https://dev.to/huyibodtc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3853828%2F46336708-9601-4abd-8343-432f061739ef.png</url>
      <title>DEV Community: Yibo Hu</title>
      <link>https://dev.to/huyibodtc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/huyibodtc"/>
    <language>en</language>
    <item>
      <title>BYD Outsells Tesla But AI Recommends Tesla 3x More</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:24:45 +0000</pubDate>
      <link>https://dev.to/huyibodtc/byd-outsells-tesla-but-ai-recommends-tesla-3x-more-47im</link>
      <guid>https://dev.to/huyibodtc/byd-outsells-tesla-but-ai-recommends-tesla-3x-more-47im</guid>
      <description>&lt;h1&gt;
  
  
  BYD Outsells Tesla Globally. AI Still Recommends Tesla 3x More.
&lt;/h1&gt;

&lt;p&gt;BYD sold more electric vehicles than Tesla in 2025.&lt;/p&gt;

&lt;p&gt;Not a little more. Globally, they outsold Tesla to become the #1 EV brand on the planet.&lt;/p&gt;

&lt;p&gt;So we ran an experiment: ask 7 major AI models the same question — &lt;em&gt;"Which EV brand makes the best cars in 2026?"&lt;/em&gt; — and see who they recommend.&lt;/p&gt;

&lt;p&gt;The results were uncomfortable to look at.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Data Says
&lt;/h2&gt;

&lt;p&gt;Over two days, we tracked AI Attention Scores (AAS) for 7 EV brands across GPT-4o, Claude, Gemini, Perplexity, and others. AAS measures how prominently a brand appears in AI-generated answers — not just whether it's mentioned, but where and how often.&lt;/p&gt;

&lt;p&gt;Here's what we found:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tesla: AAS 90.18&lt;/strong&gt; — mentioned by all 7 models. Every single one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rivian: AAS 57.49&lt;/strong&gt; — also mentioned by 7/7 models, and jumped from 29 to 57 in a single day after a news cycle. (That volatility alone is a story.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BYD: AAS 30.85&lt;/strong&gt; — mentioned by only 4 of 7 models.&lt;/p&gt;

&lt;p&gt;Let that sink in. The company that &lt;em&gt;actually sells more EVs&lt;/em&gt; was ignored by three out of seven AI systems when asked who makes the best EVs.&lt;/p&gt;

&lt;p&gt;Lucid Motors scored 22.54. NIO got 17.68, mentioned by just 2 models. Xpeng came in at 16.83 — also 2 models.&lt;/p&gt;

&lt;p&gt;Tesla's score is &lt;strong&gt;nearly 3x BYD's&lt;/strong&gt;. It's not a rounding error. It's a structural gap.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Happens
&lt;/h2&gt;

&lt;p&gt;AI models are trained on text from the internet. And the internet — at least the English-language internet — talks about Tesla &lt;em&gt;constantly&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Elon Musk generates headlines. Tesla's software updates get Reddit threads. Every Autopilot incident gets covered by 40 tech outlets. The brand has had years of English-language narrative building around it.&lt;/p&gt;

&lt;p&gt;BYD? Their coverage skews heavily toward Chinese-language media. Their international expansion is real, but their &lt;em&gt;content footprint&lt;/em&gt; in English is thin compared to their market footprint in reality.&lt;/p&gt;

&lt;p&gt;So when an AI model synthesizes "which EV brand makes the best cars," it's not looking at sales data. It's pattern-matching against the text it was trained on. And that text has a Western bias baked in.&lt;/p&gt;

&lt;p&gt;This isn't a conspiracy. It's a training data problem. But the &lt;em&gt;effect&lt;/em&gt; is the same: non-US brands get systematically underrepresented in AI recommendations, regardless of their actual market position.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Rivian Whiplash
&lt;/h2&gt;

&lt;p&gt;One finding that surprised us: Rivian's score doubled in 24 hours — from 29 to 57.&lt;/p&gt;

&lt;p&gt;That's the flip side of the same coin. English-language AI models are &lt;em&gt;extremely&lt;/em&gt; reactive to English-language news. A single product announcement or media cycle can swing a brand's AI visibility dramatically.&lt;/p&gt;

&lt;p&gt;For a brand like Rivian, that's an opportunity. For BYD, it's a reminder that AI visibility isn't just about what you &lt;em&gt;do&lt;/em&gt; — it's about whether the right media ecosystem is covering it in the right language.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means If You're Not a US Brand
&lt;/h2&gt;

&lt;p&gt;AI recommendations are becoming a new form of search. When someone asks ChatGPT what car to buy, what software to use, what hotel to book — the answer shapes purchase consideration in ways traditional SEO never quite did.&lt;/p&gt;

&lt;p&gt;If your brand has strong market performance but weak English-language content coverage, &lt;strong&gt;you're invisible in AI answers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We tracked this using &lt;a href="https://aiattention.ai" rel="noopener noreferrer"&gt;AIAttention.ai&lt;/a&gt;, which monitors how brands appear across AI-generated responses over time. The pattern we saw with EVs shows up across industries: companies doing real business in the physical world but losing the AI recommendation layer to louder, English-first competitors.&lt;/p&gt;

&lt;p&gt;Hyundai and Kia appeared as top competitors in &lt;em&gt;every single brand's results&lt;/em&gt; — despite not being in our tracked set. That's the organic AI footprint of a brand that's done serious English-language content work over the years.&lt;/p&gt;

&lt;p&gt;BYD is outselling them in actual cars. But in AI-world, they barely register.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Uncomfortable Question
&lt;/h2&gt;

&lt;p&gt;If AI models are becoming the new product discovery layer — and the evidence increasingly suggests they are — then a brand's AI visibility score may matter as much as its market share.&lt;/p&gt;

&lt;p&gt;Tesla isn't winning in AI because they make better cars. They're winning because they've dominated the English-language narrative for a decade.&lt;/p&gt;

&lt;p&gt;That's fixable. But first, you have to know it's happening.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We're running more experiments like this across other categories. What industry do you think has the starkest gap between AI recommendations and market reality?&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>DeepSeek Outranks Anthropic in AI Visibility</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:24:41 +0000</pubDate>
      <link>https://dev.to/huyibodtc/deepseek-outranks-anthropic-in-ai-visibility-32na</link>
      <guid>https://dev.to/huyibodtc/deepseek-outranks-anthropic-in-ai-visibility-32na</guid>
      <description>&lt;h1&gt;
  
  
  Anthropic Makes the Best AI Model. AI Recommends DeepSeek More.
&lt;/h1&gt;

&lt;p&gt;We asked 7 AI models the same question: &lt;em&gt;"Which companies are leading the global AI race in 2026?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The answer that came back stopped me cold.&lt;/p&gt;




&lt;p&gt;Claude is widely considered the best AI model right now. Better reasoning. Better coding. Better at following complex instructions. Anthropic has arguably won the model quality arms race — at least for now.&lt;/p&gt;

&lt;p&gt;But when AI answers the question "who's winning AI?", Anthropic isn't even in the top three.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DeepSeek scores nearly 2x higher than Anthropic.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;We ran this experiment over two days using &lt;a href="https://aiattention.ai" rel="noopener noreferrer"&gt;AIAttention.ai&lt;/a&gt;, tracking brand mentions across GPT-4o, Gemini, Claude, Perplexity, Qwen, Grok, and Llama. The prompt: &lt;em&gt;"Which companies are leading the global AI race in 2026?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here's what we found:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;AI Attention Score&lt;/th&gt;
&lt;th&gt;Models Mentioning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;76.84&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;7/7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google DeepMind&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;70.54&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;7/7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DeepSeek&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;53.57&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4/7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;28.78&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;7/7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alibaba Qwen&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;23.42&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;6/7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;xAI&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;21.46&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4/7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Baidu AI&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0/7&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Let that sink in for a second.&lt;/p&gt;

&lt;p&gt;Anthropic scores 28.78. DeepSeek — a company that didn't exist in most people's minds 18 months ago — scores 53.57. That's &lt;strong&gt;nearly 2x higher&lt;/strong&gt; despite having a fraction of the user base, the funding history, and arguably the technical track record.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why DeepSeek Beats Anthropic in AI Recommendations
&lt;/h2&gt;

&lt;p&gt;One word: &lt;strong&gt;January 2025.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When DeepSeek dropped its R1 model and went viral, it wasn't just a product launch. It was a narrative earthquake. Every major outlet covered it. It briefly sent Nvidia's stock price tumbling. It became &lt;em&gt;the&lt;/em&gt; story of the moment — a Chinese AI lab supposedly matching OpenAI at a fraction of the cost.&lt;/p&gt;

&lt;p&gt;That media moment got baked into training data. It got linked to, cited, debated. And now, 15 months later, AI models still treat DeepSeek as a major player in the global AI race.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The training data remembers the hype. It doesn't update for what happened after.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Product Quality ≠ AI Visibility
&lt;/h2&gt;

&lt;p&gt;This is the uncomfortable truth the data reveals.&lt;/p&gt;

&lt;p&gt;Anthropic is present in 7 out of 7 models — meaning AI &lt;em&gt;knows&lt;/em&gt; about Anthropic. But the AI Attention Score (which weights by position in the response — first mention counts more than a footnote) shows Anthropic consistently mentioned later and less prominently.&lt;/p&gt;
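&lt;p&gt;&lt;em&gt;For intuition, here's a toy sketch of how a position-weighted mention score could work. This is purely illustrative: AIAttention.ai's actual AAS formula isn't public, and the answers here are made up.&lt;/em&gt;&lt;/p&gt;

```python
# Toy position-weighted mention score (illustration only --
# not AIAttention.ai's real AAS formula).
def attention_score(answers, brand):
    """Earlier mentions weigh more; repeat mentions add a small bonus."""
    total = 0.0
    for answer in answers:
        words = answer.lower().split()
        hits = [i for i, w in enumerate(words) if brand.lower() in w]
        if not hits:
            continue  # this model never mentioned the brand
        first = hits[0] / max(len(words), 1)   # 0.0 means the very first word
        total += (1.0 - first) + 0.1 * (len(hits) - 1)
    return round(100 * total / max(len(answers), 1), 2)

answers = [
    "Tesla leads the EV market; BYD is catching up.",
    "Top picks: Tesla, then Rivian. BYD also competes.",
]
print(attention_score(answers, "tesla"))  # 87.5
print(attention_score(answers, "byd"))    # 40.97
```

&lt;p&gt;&lt;em&gt;Under this kind of weighting, a brand mentioned in every answer but always near the end can still score far below one that leads a few answers, which is exactly the Anthropic pattern.&lt;/em&gt;&lt;/p&gt;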

&lt;p&gt;DeepSeek gets cited first, cited often, cited confidently. Anthropic gets acknowledged.&lt;/p&gt;

&lt;p&gt;There's a difference.&lt;/p&gt;

&lt;p&gt;xAI is the most interesting wildcard here: the youngest company in the list, scoring 21.46 across only 4 models. That's a story still being written. Grok's presence, Musk's media gravity, the xAI brand getting louder — if that momentum continues, expect that number to climb fast.&lt;/p&gt;

&lt;p&gt;And Baidu? &lt;strong&gt;Zero mentions across all 7 models.&lt;/strong&gt; Despite being China's search giant and one of the oldest players in the AI space. Whatever Baidu is doing in AI, it isn't registering anywhere in the Western AI narrative — and AI models reflect that bias entirely.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for Every Company
&lt;/h2&gt;

&lt;p&gt;If you're a brand trying to show up in AI-generated answers — whether you're an AI company or not — the lesson here isn't "build a better product."&lt;/p&gt;

&lt;p&gt;The lesson is: &lt;strong&gt;create moments that the internet can't stop talking about.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DeepSeek didn't win because it had better distribution. It won because it had better &lt;em&gt;story&lt;/em&gt; — one dramatic release that cascaded across every tech blog, newsletter, and forum in the world simultaneously.&lt;/p&gt;

&lt;p&gt;AI visibility is largely a function of what got written about you, linked to, and discussed at scale. That corpus gets frozen into model weights. And then those models start answering questions that your customers are asking.&lt;/p&gt;

&lt;p&gt;OpenAI dominates because it &lt;em&gt;invented&lt;/em&gt; the public narrative around AI. Google DeepMind scores high because it sits on top of the world's most crawled search engine. DeepSeek punched above its weight because it had a single, extraordinary news cycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anthropic makes the best model. But it didn't make the biggest noise.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Product quality gets you users. News momentum gets you mentions. And in a world where AI is increasingly the first stop for recommendations, visibility in AI answers is becoming a new kind of search ranking.&lt;/p&gt;

&lt;p&gt;The question is: for your brand or company — &lt;strong&gt;do you know where you actually stand in AI-generated answers right now?&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>One Billion Dollars Cannot Buy AI Visibility</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:24:38 +0000</pubDate>
      <link>https://dev.to/huyibodtc/one-billion-dollars-cannot-buy-ai-visibility-1npk</link>
      <guid>https://dev.to/huyibodtc/one-billion-dollars-cannot-buy-ai-visibility-1npk</guid>
      <description>&lt;h1&gt;
  
  
  $1 Billion Can't Buy AI Visibility: What Windsurf, Claude Code, and Replit Reveal
&lt;/h1&gt;

&lt;p&gt;Windsurf raised over $1 billion.&lt;/p&gt;

&lt;p&gt;Replit has millions of users.&lt;/p&gt;

&lt;p&gt;Claude Code is built by the company that arguably invented modern AI.&lt;/p&gt;

&lt;p&gt;None of it mattered.&lt;/p&gt;

&lt;p&gt;We ran a two-day experiment tracking five AI coding assistants across seven AI models — GPT-4o, Claude, Gemini, Perplexity, and others — all answering the same prompt: &lt;em&gt;"What is the best AI coding assistant in 2026?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The results were uncomfortable.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers Don't Lie
&lt;/h2&gt;

&lt;p&gt;GitHub Copilot scored &lt;strong&gt;AAS 92.86&lt;/strong&gt;. Mentioned by all 7 models. Every time. Consistent.&lt;/p&gt;

&lt;p&gt;Cursor came in second at &lt;strong&gt;AAS 72.77&lt;/strong&gt; — also appearing in all 7 models. The community darling is closing the gap fast.&lt;/p&gt;

&lt;p&gt;Then the cliff.&lt;/p&gt;

&lt;p&gt;Claude Code dropped from &lt;strong&gt;37.50 on Day 1 to 16.07 on Day 2&lt;/strong&gt;. It went from appearing in 4 out of 7 models to just 2. In 24 hours.&lt;/p&gt;

&lt;p&gt;Windsurf — formerly Codeium, fresh off a billion-dollar-plus raise — scored &lt;strong&gt;AAS 12.56&lt;/strong&gt;, showing up in only 2 of 7 models.&lt;/p&gt;

&lt;p&gt;Replit: &lt;strong&gt;AAS 6.03&lt;/strong&gt;. One model. One mention.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(We tracked this using &lt;a href="https://aiattention.ai" rel="noopener noreferrer"&gt;AIAttention.ai&lt;/a&gt;, our AI visibility monitoring platform.)&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  This Isn't About Product Quality
&lt;/h2&gt;

&lt;p&gt;Let's be clear: these are not bad products.&lt;/p&gt;

&lt;p&gt;Windsurf has a genuinely impressive IDE. Replit runs code in your browser with an AI agent. Claude Code — I'm using it right now — is deeply capable.&lt;/p&gt;

&lt;p&gt;So why do AI models barely know they exist?&lt;/p&gt;

&lt;p&gt;Because &lt;strong&gt;AI doesn't learn from your product. It learns from what people write about your product.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copilot launched in 2021. It has five years of Stack Overflow questions, Reddit threads, YouTube tutorials, and blog posts. That's not a product moat — it's a &lt;em&gt;content moat&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Cursor became a community obsession. Twitter, Hacker News, dev blogs. People don't just use Cursor; they write about using Cursor. That signal compounds.&lt;/p&gt;

&lt;p&gt;Windsurf raised a billion dollars, but the internet hasn't caught up yet. Replit has users, but they're not generating the kind of technical content that gets absorbed into training data.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Crash Is the Real Story
&lt;/h2&gt;

&lt;p&gt;The scariest number isn't Windsurf's score.&lt;/p&gt;

&lt;p&gt;It's Claude Code going from 4/7 models to 2/7 in a single day.&lt;/p&gt;

&lt;p&gt;Some of that is statistical noise — 7 models is a small sample. But the pattern points to something real: &lt;strong&gt;AI visibility is volatile&lt;/strong&gt;. Today's mention is not tomorrow's mention. There's no subscription. No renewal. No SLA.&lt;/p&gt;

&lt;p&gt;A tool that's hot in AI answers on Monday can be invisible by Wednesday.&lt;/p&gt;

&lt;p&gt;This is the thing traditional analytics can't catch. Your traffic looks fine. Your signups look fine. But somewhere upstream, AI stopped recommending you.&lt;/p&gt;




&lt;h2&gt;
  
  
  What First-Movers Actually Bought
&lt;/h2&gt;

&lt;p&gt;GitHub Copilot didn't win because Microsoft owns GitHub.&lt;/p&gt;

&lt;p&gt;It won because it was &lt;em&gt;first&lt;/em&gt;. Developers had to write about it — to complain, to praise, to compare everything that came after against it. That writing became training data. That training data became AI recommendations.&lt;/p&gt;

&lt;p&gt;First-mover advantage in AI visibility isn't about launching early.&lt;/p&gt;

&lt;p&gt;It's about becoming the &lt;strong&gt;reference point&lt;/strong&gt; that every subsequent conversation anchors to.&lt;/p&gt;

&lt;p&gt;Cursor found a different path: community velocity. It became the tool people argued about, switched to, wrote "I migrated from X to Cursor" posts about. Controversy and enthusiasm both generate content. Content generates visibility.&lt;/p&gt;

&lt;p&gt;Windsurf, Replit, Claude Code — they all have users. What they don't yet have is the kind of &lt;strong&gt;obsessive community discourse&lt;/strong&gt; that feeds the models.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Takeaway for Any AI-Era Product
&lt;/h2&gt;

&lt;p&gt;If you're building something that competes in a space with entrenched AI visibility:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Funding won't fix it.&lt;/strong&gt; A billion dollars doesn't write Stack Overflow answers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Users alone won't fix it.&lt;/strong&gt; Millions of silent users leave no training signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix is content velocity&lt;/strong&gt; — tutorials, comparisons, use cases, war stories. Written by real people. Published where AI training data flows.&lt;/p&gt;

&lt;p&gt;You're not just marketing to humans anymore. You're marketing to the models that answer human questions.&lt;/p&gt;




&lt;p&gt;GitHub Copilot has a 5-year head start on that game.&lt;/p&gt;

&lt;p&gt;The question is: what would it take for a well-funded challenger to actually close that gap — and how long would it take to show up in the numbers?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What patterns are you seeing with AI tool recommendations? Are the models you use recommending the same tools, or do you see big variation?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>programming</category>
      <category>tooling</category>
    </item>
    <item>
      <title>New Products Are Invisible to AI</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:24:34 +0000</pubDate>
      <link>https://dev.to/huyibodtc/new-products-are-invisible-to-ai-2imn</link>
      <guid>https://dev.to/huyibodtc/new-products-are-invisible-to-ai-2imn</guid>
      <description>&lt;p&gt;We tracked AI visibility for a handful of products over 48 hours. On day two, I opened the dashboard and one number stopped me cold.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Code's AI Attention Score had dropped from 37.50 to 16.07 overnight.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's a &lt;strong&gt;57% collapse in one day&lt;/strong&gt;. Not a gradual fade. Not a slow decline. One day visible, next day half-gone.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Stability Gap Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;Here's what makes AI recommendations different from Google rankings: they're not just about &lt;em&gt;whether&lt;/em&gt; you appear — they're about &lt;em&gt;consistency&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We measured how often seven major AI models (GPT-4o, Claude, Gemini, Perplexity, and others) recommended each product when users asked relevant questions. The metric is simple: out of all the moments where a recommendation could happen, how often does your brand actually show up?&lt;/p&gt;
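&lt;p&gt;&lt;em&gt;As a rough illustration (not our actual pipeline), that consistency check boils down to a per-day coverage ratio. The answers below are invented.&lt;/em&gt;&lt;/p&gt;

```python
# Illustrative coverage check: what fraction of model answers
# mention a brand at all, and how that moves day over day.
# (Sketch only -- not AIAttention.ai's real methodology.)
def coverage(answers, brand):
    hits = sum(1 for a in answers if brand.lower() in a.lower())
    return hits / len(answers)

day1 = ["Try GitHub Copilot or Cursor.", "Copilot is the standard.",
        "Claude Code and Copilot both work.", "Cursor, Copilot, Windsurf."]
day2 = ["Copilot remains the default.", "Cursor or Copilot.",
        "Copilot, then Cursor.", "Most teams use Copilot."]

for brand in ("Copilot", "Claude Code", "Windsurf"):
    print(f"{brand}: {coverage(day1, brand):.0%} day 1, "
          f"{coverage(day2, brand):.0%} day 2")
```

&lt;p&gt;&lt;em&gt;An established brand looks like Copilot in this toy data: boring, full coverage every day. An emerging one flickers in and out.&lt;/em&gt;&lt;/p&gt;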

&lt;p&gt;For established brands, the answer is boring in the best possible way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Copilot:&lt;/strong&gt; 90–93 AAS. Rock solid. Day after day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tesla:&lt;/strong&gt; 81–90 AAS. Predictable within a narrow band.&lt;/p&gt;

&lt;p&gt;These brands have trained the models. Their presence in training data, documentation, reviews, and editorial coverage is so deep that AI systems have a confident, consistent opinion. They show up in 7 out of 7 models, almost every time.&lt;/p&gt;

&lt;p&gt;Now look at the emerging products.&lt;/p&gt;




&lt;h2&gt;
  
  
  When AI Visibility Is a Coin Flip
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Claude Code&lt;/strong&gt; went from coverage in 4 out of 7 models to 2 out of 7 models between measurements. AAS: 37.50 → 16.07.&lt;/p&gt;

&lt;p&gt;This isn't a knock on the product. Claude Code is genuinely good. But it's newer. AI models have patchy, inconsistent information about it — so their recommendations are patchy and inconsistent too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rivian&lt;/strong&gt; showed the opposite swing: AAS jumped from 29.35 to &lt;strong&gt;57.49 in a single day&lt;/strong&gt;, going from 4/7 models to full coverage across all 7. Something — a news cycle, a viral review, a fresh model update — shifted perception across the board.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Windsurf&lt;/strong&gt; held steady. Steadily invisible. It's been sitting at 12–14 AAS with only 2 out of 7 models recommending it. No crash, no spike. Just a floor.&lt;/p&gt;

&lt;p&gt;The pattern is clear: &lt;strong&gt;established brands have stable AI visibility, emerging brands have volatile AI visibility&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Actually Matters
&lt;/h2&gt;

&lt;p&gt;A few years ago, if your startup wasn't ranking on page one of Google, you worked on SEO. Structured data, backlinks, content cadence — there was a playbook.&lt;/p&gt;

&lt;p&gt;AI search doesn't have an established playbook yet. And unlike Google, which returns ten blue links with consistent rankings, a customer asking ChatGPT "what's the best code editor" might get a completely different answer tomorrow than they got today.&lt;/p&gt;

&lt;p&gt;If you're a new product, &lt;strong&gt;you could be recommended in the morning and invisible by afternoon&lt;/strong&gt; — based on nothing you did.&lt;/p&gt;

&lt;p&gt;That volatility has real consequences. Customers are increasingly using AI assistants for purchase research, product comparisons, and tool recommendations. If your brand appears in those answers 40% of the time instead of 90%, you're losing sales you'll never even know you lost.&lt;/p&gt;

&lt;p&gt;The worst part: &lt;strong&gt;you won't notice unless you're measuring it&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Opportunity Hidden in the Chaos
&lt;/h2&gt;

&lt;p&gt;Here's the flip side.&lt;/p&gt;

&lt;p&gt;Established brands are already locked in at the top. GitHub Copilot can't get much &lt;em&gt;more&lt;/em&gt; dominant — it's already at 90+ AAS. Its ceiling is visible.&lt;/p&gt;

&lt;p&gt;Emerging products have volatile baselines, which means &lt;strong&gt;the ceiling isn't fixed either&lt;/strong&gt;. Rivian proved that a single spike of coverage can push you from 29 to 57 overnight. The models don't have a settled opinion about you yet.&lt;/p&gt;

&lt;p&gt;That's not just a risk. It's a window.&lt;/p&gt;

&lt;p&gt;The brands that start building deliberate AI visibility now — consistent documentation, structured data, high-quality coverage that AI systems can confidently cite — are laying a foundation before the market calcifies. In two or three years, the visibility gap between established and emerging brands will be as hard to close as the Google authority gap is today.&lt;/p&gt;

&lt;p&gt;We tracked all of this using our monitoring platform, &lt;a href="https://aiattention.ai" rel="noopener noreferrer"&gt;AIAttention.ai&lt;/a&gt; — which logs AAS and per-model coverage across multiple AI systems on a scheduled cadence.&lt;/p&gt;

&lt;p&gt;The data is early. But the pattern is already striking.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;If customers are using AI to discover products in your category, do you actually know how consistently your brand shows up — or are you flying blind?&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Tesla Owns the EV AI Conversation — But Here's Which Challenger Is Closest to Catching Up</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Wed, 01 Apr 2026 06:03:00 +0000</pubDate>
      <link>https://dev.to/huyibodtc/tesla-owns-the-ev-ai-conversation-but-heres-which-challenger-is-closest-to-catching-up-1h7m</link>
      <guid>https://dev.to/huyibodtc/tesla-owns-the-ev-ai-conversation-but-heres-which-challenger-is-closest-to-catching-up-1h7m</guid>
      <description>&lt;h2&gt;
  
  
  Tesla Owns the EV AI Conversation — But Here's Which Challenger Is Closest to Catching Up
&lt;/h2&gt;

&lt;p&gt;I was staring at our brand visibility data last week when something made me stop mid-scroll.&lt;/p&gt;

&lt;p&gt;Tesla's AAS: 100. Ford Mustang Mach-E's AAS: 39.1. Rivian R1T's AAS: 9.5 — with &lt;em&gt;more mentions&lt;/em&gt; than several brands scoring three times higher.&lt;/p&gt;

&lt;p&gt;That's not a competitive gap. That's a different category of existence.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tesla Isn't Just Winning — It's Lapping the Field
&lt;/h2&gt;

&lt;p&gt;We tracked 10 EV brands across multiple AI models, measuring which ones actually get &lt;em&gt;recommended&lt;/em&gt; — not just mentioned in passing.&lt;/p&gt;

&lt;p&gt;Tesla scored a perfect AAS of 100 with 100% visibility across every monitored query. Every single EV-related prompt returned Tesla somewhere in the answer. Its Share of Voice: 13.89%.&lt;/p&gt;

&lt;p&gt;No other brand came close.&lt;/p&gt;

&lt;p&gt;But what's interesting isn't that Tesla dominates. Everyone knows that. What's interesting is &lt;em&gt;who's fighting for second place&lt;/em&gt; — and how they're winning it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Challenger Landscape Looks Nothing Like You'd Expect
&lt;/h2&gt;

&lt;p&gt;Ford Mustang Mach-E is the clear number-two EV brand in AI answers. &lt;strong&gt;AAS 39.1, 74 competitor mentions detected.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That surprised us. Hyundai has broader name recognition and a whole EV sub-brand (Ioniq). Kia has won every car-of-the-year award you can think of. Rivian has a cult following and two distinctly positioned products.&lt;/p&gt;

&lt;p&gt;None of them beat the Mach-E.&lt;/p&gt;

&lt;p&gt;Hyundai sits at AAS 33.0 with 54 mentions. Lucid Air scores 25.6 despite being positioned as a premium Tesla alternative. Kia EV6 comes in at 24.7 with 49 mentions.&lt;/p&gt;

&lt;p&gt;The Mach-E is outscoring all of them — and it has &lt;strong&gt;40% fewer total mentions than Tesla&lt;/strong&gt; to do it.&lt;/p&gt;

&lt;p&gt;Mention volume doesn't explain this. Something else does.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Visibility Trap Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;Here's where it gets genuinely strange.&lt;/p&gt;

&lt;p&gt;Rivian's R1T generated &lt;strong&gt;47 mentions&lt;/strong&gt; across our monitored AI answers. That's more than Kia EV6 (49 — nearly tied) and more than Hyundai Ioniq 6 (38). Rivian has cultural momentum, a recognizable product, and a loyal customer base.&lt;/p&gt;

&lt;p&gt;Its AAS? &lt;strong&gt;9.5.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The R1S scored even lower at &lt;strong&gt;AAS 5.2&lt;/strong&gt; — with 35 mentions.&lt;/p&gt;

&lt;p&gt;Rivian is getting talked &lt;em&gt;about&lt;/em&gt; inside AI answers, but not &lt;em&gt;recommended&lt;/em&gt;. There's a difference, and it's a costly one. AI models surface Rivian when listing the competitive landscape, then pivot to stronger recommendations elsewhere.&lt;/p&gt;

&lt;p&gt;Compare that to Hyundai Ioniq 6: &lt;strong&gt;38 mentions, AAS 31.2.&lt;/strong&gt; It nearly matches the score of its parent brand Hyundai (54 mentions, 33.0 AAS) on far fewer mentions — which means the Ioniq 6 earns a higher score per mention, and is &lt;em&gt;more efficient&lt;/em&gt; at converting mentions into recommendation-quality visibility.&lt;/p&gt;

&lt;p&gt;High buzz ≠ high AAS. That's the visibility trap.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Models Don't Agree on the EV Category
&lt;/h2&gt;

&lt;p&gt;When we broke down the data by model, the variation was notable.&lt;/p&gt;

&lt;p&gt;Some AI models weight practical ownership factors — range, charging infrastructure, reliability scores — and consistently surface Ford and Hyundai higher. Others lean toward innovation narratives, which benefits Tesla and occasionally surfaces Lucid.&lt;/p&gt;

&lt;p&gt;Rivian fares worst in practical-ownership-weighted models. Its off-road positioning and higher price point push it out of the "best EV for most people" framing that dominates AI recommendations.&lt;/p&gt;

&lt;p&gt;The category is presented as contested — 10 distinct competitors detected across answers is genuinely high — but the &lt;em&gt;weighting&lt;/em&gt; of what earns a recommendation slot varies significantly across models.&lt;/p&gt;

&lt;p&gt;Brands optimizing for a single AI platform may be leaving significant visibility on the table.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Ford Is Actually Doing Right
&lt;/h2&gt;

&lt;p&gt;The Mach-E's AAS 39.1 isn't an accident.&lt;/p&gt;

&lt;p&gt;Ford has built a dense ecosystem of authoritative content around the Mach-E: long-form comparison guides, third-party reviews with structured data, consistent positioning against Tesla Model Y (the most-searched competitor pairing in EV queries), and strong ownership community content that AI models treat as social proof.&lt;/p&gt;

&lt;p&gt;More importantly, Ford has framed the Mach-E around &lt;em&gt;decision-making language&lt;/em&gt; — "is the Mach-E worth it," "Mach-E vs Model Y," "Mach-E range real-world" — exactly the queries AI models are trained to answer helpfully.&lt;/p&gt;

&lt;p&gt;Rivian's content ecosystem skews toward enthusiast and lifestyle content. That content gets cited. It doesn't get &lt;em&gt;recommended&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The gap between getting mentioned and getting recommended is a content strategy gap&lt;/strong&gt; — and right now, Ford has figured it out better than everyone except Tesla.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 61-Point Question
&lt;/h2&gt;

&lt;p&gt;Tesla at 100. Ford at 39.1. A &lt;strong&gt;61-point gap&lt;/strong&gt; between the leader and the closest challenger.&lt;/p&gt;

&lt;p&gt;That gap will close. It always does in maturing categories. But which brand closes it first depends entirely on who treats AI recommendation position as a strategic priority — not just a vanity metric.&lt;/p&gt;

&lt;p&gt;Hyundai's Ioniq 6 efficiency signal is worth watching. If they push content specifically around that model rather than the parent brand umbrella, their AAS could climb fast.&lt;/p&gt;

&lt;p&gt;Kia is underperforming relative to its IRL reputation. 49 mentions, 24.7 AAS — there's real upside there if they address the structural content gaps.&lt;/p&gt;

&lt;p&gt;And Rivian needs to make a decision: keep building brand love, or start building recommendation authority. Right now it's doing the first and leaving the second entirely to competitors.&lt;/p&gt;

&lt;p&gt;We tracked this data using &lt;a href="https://aiattention.ai" rel="noopener noreferrer"&gt;AIAttention.ai&lt;/a&gt;, which monitors brand visibility across AI-generated answers in real time.&lt;/p&gt;




&lt;p&gt;What patterns are you seeing in your own category? Are the brands winning AI recommendations the same ones dominating traditional SEO — or is the gap starting to diverge?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>seo</category>
      <category>branding</category>
      <category>visibility</category>
    </item>
    <item>
      <title>Slack vs. Discord: Same AI Visibility, Wildly Different Scores — A Position Study</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Wed, 01 Apr 2026 06:02:56 +0000</pubDate>
      <link>https://dev.to/huyibodtc/slack-vs-discord-same-ai-visibility-wildly-different-scores-a-position-study-446j</link>
      <guid>https://dev.to/huyibodtc/slack-vs-discord-same-ai-visibility-wildly-different-scores-a-position-study-446j</guid>
      <description>&lt;h2&gt;
  
  
  Slack vs. Discord: Same AI Visibility, Wildly Different Scores — A Position Study
&lt;/h2&gt;

&lt;p&gt;I was staring at a dashboard last week, convinced something was broken.&lt;/p&gt;

&lt;p&gt;Two brands. Same visibility score. Same share of voice. And a 44-point gap in AI Attention Score that made absolutely no sense — until it did.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup: 100% Visibility, Two Very Different Realities
&lt;/h2&gt;

&lt;p&gt;Slack and Discord are both dominant in the team communication space. When we tracked them across AI-generated answers using AIAttention.ai, both scored &lt;strong&gt;100% visibility&lt;/strong&gt; — meaning they appeared in every single monitored response.&lt;/p&gt;

&lt;p&gt;Same prompts. Same AI models. Both brands, present every time.&lt;/p&gt;

&lt;p&gt;Traditional analytics would call this a tie.&lt;/p&gt;

&lt;p&gt;It is not a tie.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slack AAS: 100. Discord AAS: 56.25.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's not noise. That's a 44-point structural gap hiding inside a metric that looks identical on the surface.&lt;/p&gt;




&lt;h2&gt;
  
  
  Share of Voice Tells You Nothing Here
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. We also measured Share of Voice — the percentage of total AI responses in which each brand appeared.&lt;/p&gt;

&lt;p&gt;Slack SoV: &lt;strong&gt;12.50%&lt;/strong&gt;. Discord SoV: &lt;strong&gt;12.50%&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Exact same number. Identical presence in the data set. If you stopped your analysis here — the way most SEO and brand monitoring tools do — you'd conclude these two brands are performing at parity.&lt;/p&gt;

&lt;p&gt;They are not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slack is consistently named first.&lt;/strong&gt; Discord shows up, but further down the list — after Slack has already been recommended, already been clicked, already been trusted.&lt;/p&gt;

&lt;p&gt;Traditional visibility metrics count mentions. They don't ask &lt;em&gt;where&lt;/em&gt; in the answer the mention appears. That's the blind spot.&lt;/p&gt;
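&lt;p&gt;One hypothetical way to see how identical visibility can hide a position gap: assume the 0.75-per-position decay AAS is built on, put Slack first in every answer and Discord two slots later, and the 100 vs. 56.25 split falls out on its own. The eight-answer sample and the exact positions below are illustrative, not the real monitored data:&lt;/p&gt;

```python
# Hypothetical reconstruction: both brands appear in 100% of answers,
# but at different positions. The 0.75-per-position decay is assumed;
# the answer count and positions are made up for illustration.

def position_weight(position):
    return 0.75 ** (position - 1)

def normalized_aas(positions, total_answers):
    """Average position weight across all answers, scaled to 0-100."""
    return 100 * sum(position_weight(p) for p in positions) / total_answers

slack_positions = [1] * 8    # named first in every monitored answer
discord_positions = [3] * 8  # present every time, two slots later

print(normalized_aas(slack_positions, 8))    # 100.0
print(normalized_aas(discord_positions, 8))  # 56.25
```

&lt;p&gt;Under that reading, Discord never has to miss a single answer to lose 44 points. Being consistently two positions late does all the damage.&lt;/p&gt;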




&lt;h2&gt;
  
  
  Why AI Answers Put Slack First
&lt;/h2&gt;

&lt;p&gt;Position in an AI-generated answer isn't random. It reflects something real about how the model has encoded brand authority during training.&lt;/p&gt;

&lt;p&gt;A few signals that correlate with first-position placement in our data:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training data density.&lt;/strong&gt; Slack has been written about, integrated with, and referenced by developers, enterprise tools, and media for a decade. That signal volume shapes how models rank brands when generating recommendations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured data footprint.&lt;/strong&gt; Wikipedia entries, Wikidata entities, schema.org markup, product review aggregators — all of this creates machine-readable authority that LLMs absorb during pretraining. Slack's structured footprint is deeper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise trust signals.&lt;/strong&gt; Slack's positioning in enterprise software narratives — case studies, analyst reports, G2/Capterra reviews, compliance documentation — creates a different training signal than Discord's more community-and-gaming-origin story.&lt;/p&gt;

&lt;p&gt;Discord is widely used and widely liked. But in the specific context of &lt;em&gt;professional team communication&lt;/em&gt;, Slack's training signal density puts it at position one, reliably.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Business Consequence Nobody Is Measuring
&lt;/h2&gt;

&lt;p&gt;In traditional search, position 2 still gets clicks. Users scroll. They compare. They choose.&lt;/p&gt;

&lt;p&gt;In AI-generated answers, &lt;strong&gt;position 1 is the recommendation. Position 2 is the footnote.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When someone asks an AI assistant "what's the best tool for team communication?" and the answer starts with Slack — that's a recommendation that drives consideration, trials, and pipeline. Discord appearing three sentences later, in a list of alternatives, is a categorically different outcome.&lt;/p&gt;

&lt;p&gt;The 44-point AAS gap between Slack and Discord isn't a performance score curiosity.&lt;/p&gt;

&lt;p&gt;It's a &lt;strong&gt;consideration gap&lt;/strong&gt; — measured at the moment of zero-click AI answers, where no amount of traditional SEO optimization will move the needle.&lt;/p&gt;

&lt;p&gt;And most brand teams don't know it exists, because their dashboards show identical visibility numbers and call it a day.&lt;/p&gt;




&lt;h2&gt;
  
  
  What B2B SaaS Brands Should Do About This
&lt;/h2&gt;

&lt;p&gt;If you're in a crowded category where your AI visibility looks healthy but your AAS lags behind a competitor, here's where to focus:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit your Wikipedia and Wikidata presence.&lt;/strong&gt; Is your entry complete, well-cited, and regularly updated? LLMs weight structured, factual, third-party-validated content heavily. This is not optional maintenance — it's infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Own your category narrative in third-party sources.&lt;/strong&gt; Analyst reports, integration directories, developer documentation, and review platforms are not just SEO plays. They're training signal for the next generation of models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure your content for extractability.&lt;/strong&gt; Clear product descriptions, defined use cases, explicit competitor comparisons — content that answers the question directly tends to surface at position one when the AI reconstructs an answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Track position, not just presence.&lt;/strong&gt; Visibility at 100% can mask a brand getting consistently outranked. If your monitoring tool doesn't weight position, you're flying blind in the metric that actually predicts AI recommendation outcomes.&lt;/p&gt;




&lt;p&gt;The Slack/Discord data is a clean case study because the variables are so tightly controlled. Same category. Same visibility. Same share of voice. One brand dominates on position-weighted scoring.&lt;/p&gt;

&lt;p&gt;What does your category look like when you run the same analysis?&lt;/p&gt;

&lt;p&gt;If you're curious, &lt;a href="https://aiattention.ai" rel="noopener noreferrer"&gt;AIAttention.ai&lt;/a&gt; tracks this across brands, models, and prompts — so you can see not just whether you appear, but where.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>seo</category>
      <category>branding</category>
      <category>visibility</category>
    </item>
    <item>
<title>More Mentions ≠ Higher AI Score: The Counterintuitive Truth About Brand Visibility in AI Answers</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Wed, 01 Apr 2026 06:02:53 +0000</pubDate>
      <link>https://dev.to/huyibodtc/more-mentions-higher-ai-score-the-counterintuitive-truth-about-brand-visibility-in-ai-answers-4c0f</link>
      <guid>https://dev.to/huyibodtc/more-mentions-higher-ai-score-the-counterintuitive-truth-about-brand-visibility-in-ai-answers-4c0f</guid>
      <description>&lt;h2&gt;
  
  
  More Mentions ≠ Higher AI Score: The Counterintuitive Truth About Brand Visibility in AI Answers
&lt;/h2&gt;

&lt;p&gt;Last quarter, a social listening team showed me their AI monitoring dashboard with pride. "We're crushing it," they said. "Brandwatch shows up in 85 AI mentions. Our competitor only has 45."&lt;/p&gt;

&lt;p&gt;I had to break some news to them.&lt;/p&gt;

&lt;p&gt;Those 85 mentions? They were mostly Brandwatch appearing fourth, fifth, sixth in AI recommendation lists. Meanwhile, a brand with half the mentions was walking away with four times the actual consideration.&lt;/p&gt;

&lt;p&gt;The mention count told a feel-good story. The position data told the truth.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Leaderboard Illusion
&lt;/h2&gt;

&lt;p&gt;When you track AI mentions with raw count tools, you're counting heads at a concert without knowing where anyone is sitting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Brandwatch gets 85 AI mentions — the highest of any social listening competitor we tracked.&lt;/strong&gt; Its estimated AI Attention Score (AAS)? Just 21.3.&lt;/p&gt;

&lt;p&gt;Ahrefs pulls 45 mentions. Its AAS is a dismal &lt;strong&gt;3.8&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;These brands aren't just underperforming relative to their mention volume. They're actively misleading themselves if they're using mention count as the proxy for AI visibility. Ahrefs is mentioned constantly — and almost always buried in position four, five, or later. By the time a user reads that far down an AI-generated recommendation list, attention has already left the building.&lt;/p&gt;

&lt;p&gt;This isn't a rounding error. It's a structural misread of how AI answers actually work.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Position Is Everything
&lt;/h2&gt;

&lt;p&gt;Here's the math that changes everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AAS uses a 0.75^(position - 1) decay formula&lt;/strong&gt;, so each slot further down the list is worth 75% of the one above it. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mentioned &lt;strong&gt;1st&lt;/strong&gt; → weight of &lt;strong&gt;1.0&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Mentioned &lt;strong&gt;2nd&lt;/strong&gt; → weight of &lt;strong&gt;0.75&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Mentioned &lt;strong&gt;3rd&lt;/strong&gt; → weight of &lt;strong&gt;0.56&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Mentioned &lt;strong&gt;4th&lt;/strong&gt; → weight of &lt;strong&gt;0.42&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A brand named first scores &lt;strong&gt;roughly 2.4x more&lt;/strong&gt; than a brand named fourth (1.0 vs. 0.42), even when both appear in the exact same answer.&lt;/p&gt;

&lt;p&gt;Raw mention counting assigns identical value to both. That's not just imprecise. It's backwards. It rewards brands for appearing in answers where they're an afterthought.&lt;/p&gt;
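&lt;p&gt;The decay above can be sketched in a few lines of Python. The exponent is inferred from the published weights; the real AAS pipeline at AIAttention.ai is more involved than this:&lt;/p&gt;

```python
# Illustrative sketch of the position decay described above.
# The (position - 1) exponent is inferred from the published
# weights (1.0, 0.75, 0.56, 0.42); this is not the production formula.

def position_weight(position):
    """Weight for a brand mentioned at 1-based `position` in an answer."""
    return 0.75 ** (position - 1)

def attention_score(positions):
    """Sum of position weights across every answer that mentions the brand."""
    return sum(position_weight(p) for p in positions)

# Same mention count, very different placements:
always_first = attention_score([1, 1, 1, 1])   # 4.0
always_fourth = attention_score([4, 4, 4, 4])  # ~1.69

# A first-position mention is worth roughly 2.4x a fourth-position one.
```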

&lt;p&gt;We built AAS at AIAttention.ai specifically because no existing tool was capturing this. When we ran the position-weighted math across thousands of AI-generated answers, the leaderboards completely reshuffled.&lt;/p&gt;




&lt;h2&gt;
  
  
  Case Study: Ford Mustang Mach-E vs. Rivian
&lt;/h2&gt;

&lt;p&gt;The EV space makes this concrete.&lt;/p&gt;

&lt;p&gt;Ford Mustang Mach-E: &lt;strong&gt;74 mentions, AAS of 39.1.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rivian R1T: &lt;strong&gt;47 mentions, AAS of 9.5.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ford has more mentions, yes — but not dramatically more. The real story is &lt;em&gt;where&lt;/em&gt; Ford appears. When AI models answer "What's a good electric SUV?" or "Which EV has the best range?", Ford tends to show up in position one or two. Rivian gets mentioned, but typically after the main recommendations, often as a "you might also consider" footnote.&lt;/p&gt;

&lt;p&gt;The result: Ford's AAS is &lt;strong&gt;more than 4x higher&lt;/strong&gt; than Rivian's, despite having only 57% more mentions.&lt;/p&gt;

&lt;p&gt;Volume without position is noise. Ford isn't winning because it's louder. It's winning because it's first.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem for Social Listening Tools Competing in AI Answers
&lt;/h2&gt;

&lt;p&gt;Here's the irony worth sitting with.&lt;/p&gt;

&lt;p&gt;Brandwatch, Talkwalker, Meltwater, and Mention are all chasing AI visibility — and they're all being evaluated &lt;em&gt;in&lt;/em&gt; AI answers when users ask "What are the best social listening tools?" They're simultaneously measuring AI mentions and struggling with their own AI visibility.&lt;/p&gt;

&lt;p&gt;And the same dynamic applies to them as to any other brand: the tool that appears first in AI recommendation lists captures disproportionate buyer consideration. Users asking AI for software recommendations don't read to the bottom of a seven-item list with the same attention they give to item one.&lt;/p&gt;

&lt;p&gt;If Brandwatch is appearing 85 times but mostly in positions four through six, it's generating awareness — not consideration. The brand that appears third in 30 answers may be driving more actual pipeline than the brand appearing sixth in 85.&lt;/p&gt;

&lt;p&gt;That's the consideration gap that raw mention tools cannot see.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Move From Mentioned to Mentioned First
&lt;/h2&gt;

&lt;p&gt;Position in AI answers isn't random. It's influenced by the same signals that make a source authoritative to a language model.&lt;/p&gt;

&lt;p&gt;A few moves that consistently correlate with higher position:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Own the comparison framing.&lt;/strong&gt; AI models construct recommendation lists by synthesizing comparison content. Brands that appear as the anchor in "X vs. Y" comparisons — not just the challenger — tend to get first-position treatment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be the named example in how-to content.&lt;/strong&gt; When published guides use your brand as the concrete illustration ("for example, Brandwatch lets you..."), LLMs internalize that as representative authority, not just a mention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Depth over breadth in use-case coverage.&lt;/strong&gt; Brands that have specific, detailed content for niche queries ("best social listening tool for agencies") often score disproportionately high on those prompts — because AI models don't find competitors who went that deep.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured data and entity clarity.&lt;/strong&gt; The cleaner your brand entity is across the web (consistent naming, clear category association), the less ambiguous it is for a model to place you confidently at the top of a list.&lt;/p&gt;




&lt;p&gt;The uncomfortable truth: if you're measuring AI visibility by mention count alone, you're optimizing for the wrong thing. You might be growing mentions while your position — and your AAS — quietly slides.&lt;/p&gt;

&lt;p&gt;The brands that will win in AI-mediated discovery aren't necessarily the ones mentioned most. They're the ones mentioned first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What patterns are you seeing in where your brand appears in AI answers — and does your current tracking tool even tell you?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>branding</category>
      <category>visibility</category>
    </item>
    <item>
      <title>Claude Mentions Your Brand 3x More Often Than OpenAI — And Marketers Have No Idea</title>
      <dc:creator>Yibo Hu</dc:creator>
      <pubDate>Wed, 01 Apr 2026 06:02:49 +0000</pubDate>
      <link>https://dev.to/huyibodtc/claude-mentions-your-brand-3x-more-often-than-openai-and-marketers-have-no-idea-33o2</link>
      <guid>https://dev.to/huyibodtc/claude-mentions-your-brand-3x-more-often-than-openai-and-marketers-have-no-idea-33o2</guid>
      <description>&lt;h2&gt;
  
  
  Claude Mentions Your Brand 3x More Often Than OpenAI — And Marketers Have No Idea
&lt;/h2&gt;

&lt;p&gt;Last week I was staring at a dashboard that showed two brands with nearly identical content strategies. Same topics covered. Same publishing cadence. Wildly different AI visibility outcomes — depending on which AI you asked.&lt;/p&gt;

&lt;p&gt;That's when it clicked: we've been measuring AI visibility the wrong way.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Model Mention Rate Gap Nobody Is Talking About
&lt;/h2&gt;

&lt;p&gt;We analyzed brand mention rates across five major AI platforms. The results were not subtle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude mentioned brands in 48% of responses&lt;/strong&gt; — 26 out of 54 queries triggered a brand mention. DeepSeek matched that exactly: 26 out of 54, also 48%.&lt;/p&gt;

&lt;p&gt;Then the cliff.&lt;/p&gt;

&lt;p&gt;OpenAI? &lt;strong&gt;16%.&lt;/strong&gt; Gemini? &lt;strong&gt;14%.&lt;/strong&gt; Qwen landed at 18.6%.&lt;/p&gt;

&lt;p&gt;That's not a rounding error. Claude mentions brands &lt;strong&gt;3x more often than OpenAI&lt;/strong&gt; and &lt;strong&gt;3.4x more often than Gemini&lt;/strong&gt;. A brand with identical content can surface in nearly half of Claude's answers yet be absent from five out of six OpenAI responses.&lt;/p&gt;

&lt;p&gt;If your entire AI visibility strategy is built around GPT-4 — and most are — you're flying blind on more than half the AI traffic your customers are actually using.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why the Gap Exists
&lt;/h2&gt;

&lt;p&gt;This isn't random noise. There are real structural reasons these models behave differently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training data recency matters.&lt;/strong&gt; Claude's training pipeline appears to index more recent web content, including product documentation, review sites, and comparison articles. Models trained on older snapshots simply haven't "seen" many brands that exist today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RLHF objectives diverge.&lt;/strong&gt; OpenAI has optimized heavily for generative summarization — synthesizing information into clean, attribution-free answers. Claude's RLHF tuning seems to reward specificity and attribution. These are fundamentally different editorial philosophies baked into the model weights.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Citation culture vs. fluency culture.&lt;/strong&gt; Perplexity and Claude lean toward sourcing claims. GPT-4 and Gemini often produce beautifully written answers that strip out the brand names entirely. Same underlying knowledge, opposite presentation habits.&lt;/p&gt;

&lt;p&gt;The upshot: &lt;strong&gt;being optimized for one AI model does not transfer to another.&lt;/strong&gt; The content signals that get you mentioned in Claude answers are not the same signals that drive OpenAI visibility.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for Your Brand Strategy
&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable implication: a brand can be crushing it in Claude and invisible in GPT-4. Or vice versa.&lt;/p&gt;

&lt;p&gt;If you're only monitoring one model, you don't have AI visibility data. You have a single sample from a multi-distribution problem.&lt;/p&gt;

&lt;p&gt;This matters more every quarter. Claude is growing. DeepSeek is growing. Enterprise tools are increasingly mixing models behind the scenes — your customer might be getting answers from Claude on Monday and Gemini on Thursday. Your brand's presence is lottery-dependent unless you understand each model's behavior separately.&lt;/p&gt;

&lt;p&gt;The brands that will win the next two years of AI search are the ones who stop thinking about "AI visibility" as a single metric and start treating each model as a distinct distribution channel — with its own preferences, biases, and content appetites.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Measure Your Cross-Model Visibility (Without Losing Your Mind)
&lt;/h2&gt;

&lt;p&gt;A composite AI Attention Score is useful for trend-watching. But it hides the model-level blind spots that actually determine whether a customer finds you.&lt;/p&gt;

&lt;p&gt;The right approach: track &lt;strong&gt;AAS per model&lt;/strong&gt;, not just overall AAS.&lt;/p&gt;

&lt;p&gt;When we tracked brands using &lt;a href="https://aiattention.ai" rel="noopener noreferrer"&gt;AIAttention.ai&lt;/a&gt; across all five models simultaneously, the model-level breakdown routinely showed 30-40 point gaps between a brand's best and worst performing AI platform. That gap doesn't show up in a blended average. It disappears.&lt;/p&gt;

&lt;p&gt;One model-level score below average is a content gap. Three models below average is a structural problem. You can't fix what you can't see.&lt;/p&gt;
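&lt;p&gt;A minimal sketch of that check, with placeholder scores rather than real AIAttention.ai data:&lt;/p&gt;

```python
# Flag per-model blind spots instead of trusting a blended average.
# All scores below are illustrative placeholders, not measured data.

model_aas = {
    "claude": 62.0,
    "deepseek": 55.0,
    "gpt-4o": 21.0,
    "gemini": 14.0,
    "qwen": 30.0,
}

blended = sum(model_aas.values()) / len(model_aas)  # 36.4

# Models more than 10 points under the blended score are the
# blind spots the average hides, worst first.
blind_spots = sorted(
    (m for m, s in model_aas.items() if blended - s > 10),
    key=model_aas.get,
)

spread = max(model_aas.values()) - min(model_aas.values())

print(blind_spots)  # ['gemini', 'gpt-4o']
print(spread)       # 48.0, the kind of gap a blended score erases
```

&lt;p&gt;The threshold and scores are arbitrary here; the point is that the per-model dictionary, not the blended number, is the unit of analysis.&lt;/p&gt;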




&lt;h2&gt;
  
  
  Actionable Steps Starting This Week
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Run a model-specific visibility audit.&lt;/strong&gt;&lt;br&gt;
Query your brand and 3-5 competitors across Claude, GPT-4, Gemini, and at least one emerging model. Don't average — compare. Look for which models consistently surface your competitors but not you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Map content formats to model preferences.&lt;/strong&gt;&lt;br&gt;
Claude and DeepSeek favor specific, attributable claims — detailed comparison content, named products, concrete specs. OpenAI and Gemini reward well-structured explanatory content that synthesizes across sources. These are different content briefs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Prioritize your model gap, not your average score.&lt;/strong&gt;&lt;br&gt;
If you're at 60% visibility on Claude and 10% on Gemini, your Gemini problem is the opportunity. Closing that gap likely requires different content — not more of what's already working on Claude.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Monitor weekly, not quarterly.&lt;/strong&gt;&lt;br&gt;
Model behavior shifts with updates. A content type that wasn't indexed last month may be favored today. Set a regular cadence so you catch model shifts before your competitors do.&lt;/p&gt;




&lt;p&gt;The 3x mention rate gap between Claude and OpenAI isn't going to close anytime soon — it reflects deep differences in how these models were trained and what they were optimized for.&lt;/p&gt;

&lt;p&gt;The question is whether you're measuring it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What patterns have you seen across AI platforms for your brand or industry? Drop it in the comments — I'm curious whether the model gap is larger or smaller in your category.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>branding</category>
      <category>visibility</category>
      <category>search</category>
    </item>
  </channel>
</rss>
