<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: siyahtonu</title>
    <description>The latest articles on DEV Community by siyahtonu (@siyahtonu).</description>
    <link>https://dev.to/siyahtonu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3897486%2F487dcb20-d630-48be-aece-962bb90533de.png</url>
      <title>DEV Community: siyahtonu</title>
      <link>https://dev.to/siyahtonu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/siyahtonu"/>
    <language>en</language>
    <item>
      <title>Free AI-SEO score: see if ChatGPT, Claude, Gemini cite your site</title>
      <dc:creator>siyahtonu</dc:creator>
      <pubDate>Sat, 25 Apr 2026 12:09:15 +0000</pubDate>
      <link>https://dev.to/siyahtonu/free-ai-seo-score-see-if-chatgpt-claude-gemini-cite-your-site-4e56</link>
      <guid>https://dev.to/siyahtonu/free-ai-seo-score-see-if-chatgpt-claude-gemini-cite-your-site-4e56</guid>
      <description>&lt;p&gt;Answena.com scores any URL across 10 retrieval signals + 5 AI assistants (ChatGPT, Claude, Gemini, Perplexity, Google AIO) and gives you a ranked fix list. Free, no signup, ~15 sec scan. Open methodology, real LLM probes, calibrated against research benchmarks.&lt;/p&gt;

&lt;p&gt;I built Answena because traditional SEO tools optimize for Google's blue links — but my customers were finding my product through ChatGPT and Perplexity, not Google. Existing tools couldn't tell me how I'd appear inside an AI answer.&lt;/p&gt;

&lt;p&gt;What it does:&lt;/p&gt;

&lt;p&gt;Drop in any URL. In ~15 seconds, Answena scores it on 10 retrieval-pipeline signals (topic alignment, intent coverage, trust, readability, brand recognition, quotability, crawlability, freshness, external mentions, action surface) and runs the same query through real LLM panels — Claude, ChatGPT, DeepSeek, and others — to measure actual citation rate.&lt;/p&gt;

&lt;p&gt;What's different:&lt;/p&gt;

&lt;p&gt;Open methodology: every weight is documented, calibrated against the Princeton GEO paper (KDD 2024). No black box.&lt;/p&gt;

&lt;p&gt;Real LLM probes (not just static signals) blended at 30% into the final score. Configure your own API keys.&lt;/p&gt;

&lt;p&gt;Production-ready validation: Spearman ρ, AUC-ROC, NDCG@10, Brier score, ECE — measured against ground truth, not opinion.&lt;/p&gt;

&lt;p&gt;🇹🇷 Multilingual: full Turkish + English support; per-language readability formulas (Ateşman, Bezirci, Flesch, Amstad, Oborneva, OSMAN).&lt;/p&gt;

&lt;p&gt;Ranked fixes, not 200-item audits: you get the top 3-5 changes that will move your score most.&lt;/p&gt;

&lt;p&gt;Free, no signup, and it runs without API keys (heuristic mode). Add your OPENAI_API_KEY / ANTHROPIC_API_KEY etc. to enable full live-probe mode.&lt;/p&gt;
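
&lt;p&gt;The 30% probe blend mentioned above can be sketched roughly like this. This is a minimal illustration only: the function name, the 0-100 scale, and the simple weighted average are my assumptions, not Answena's actual code.&lt;/p&gt;

```javascript
// Hypothetical sketch: blend static heuristic signals with live LLM-probe
// evidence, giving the probes 30% of the weight in the final score.
function blendScore(heuristicScore, probeScore, probeWeight = 0.3) {
  // Both inputs assumed to be on a 0-100 scale.
  return (1 - probeWeight) * heuristicScore + probeWeight * probeScore;
}

// A page scoring 80 on static signals but cited in only 50% of probes
// blends to roughly 71, pulling the score toward observed citation behavior.
```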

&lt;p&gt;Built with Node.js + vanilla JS. Methodology open at github.com/siyahtonu/gec.&lt;/p&gt;
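
&lt;p&gt;Two of the per-language readability formulas named above are public and easy to show; here is a sketch of Flesch (English) and Ateşman (Turkish). The formulas are the standard published ones; the counts are taken as inputs because syllabification, the genuinely hard part, is out of scope here.&lt;/p&gt;

```javascript
// Flesch Reading Ease (English):
// 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
function fleschReadingEase(words, sentences, syllables) {
  return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words);
}

// Ateşman readability (Turkish):
// 198.825 - 40.175 * (syllables / words) - 2.610 * (words / sentences)
function atesman(words, sentences, syllables) {
  return 198.825 - 40.175 * (syllables / words) - 2.610 * (words / sentences);
}
```

&lt;p&gt;Both map text onto a roughly 0-100 scale where higher means easier, which is why a per-language formula matters: Turkish averages more syllables per word than English, so applying Flesch to Turkish text would systematically understate its readability.&lt;/p&gt;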

&lt;p&gt;Looking forward to your feedback! Especially curious to hear from teams whose customers are starting to come through AI assistants — what would you want measured?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>analytics</category>
      <category>llm</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
