The setup
Last Tuesday evening I opened the Supabase dashboard and ran a few queries on my product's traffic.
- ChatGPT referrers over the past month: 3 visitors
- Bing `site:tamsiv.com` operator: 0 results
- Common Crawl captures for the domain: 0
- Google referrers per week: ~10 (stable)
So I had one fragile channel (ChatGPT was finding the site through OAI-SearchBot crawling directly + dev.to backlinks), and everything else was empty.
But ChatGPT isn't all the LLMs. Each major assistant has its own search backend, and optimizing for one doesn't transfer to the others.
The map I hadn't drawn
| LLM | Search backend | TAMSIV status |
|---|---|---|
| ChatGPT search, Copilot | Bing | invisible |
| Claude (Anthropic) | Brave Search + own crawler | P1 on long-tail queries |
| Gemini, AI Overview | Google | indexed, ~10 referrers/week |
| Perplexity | Bing + DuckDuckGo + own crawler | unknown |
| Grok | X/Twitter + own | absent (no X presence) |
| You.com, Kagi | mostly Bing | unknown |
| DeepSeek, Mistral, Llama (open-source) | Common Crawl (training corpus) | zero captures |
Out of ten major LLMs, I had a real presence on only two (Google and Brave). The rest were either invisible or completely missing.
The technical stack I shipped in one session
1. llms.txt and ai.txt at the root
llmstxt.org proposes a manifest format at the root of a site, designed to be parsed by language models. One paragraph of identity, then categorized links to the most important pages. I added it alongside ai.txt (the Spawning standard, explicit opt-in for AI training).
```markdown
# TAMSIV
> TAMSIV is an AI voice assistant for Android that turns spoken commands
> into organized tasks, memos, and shared checklists. ...

## Core pages
- [Homepage (English)](https://www.tamsiv.com/en): Product overview
- [Pricing](https://www.tamsiv.com/en#pricing): Free, Pro, Team
- [FAQ](https://www.tamsiv.com/en/guide/faq): Questions
- [Blog](https://www.tamsiv.com/en/blog): 60+ articles

## Best technical articles
- [Voice pipeline with Deepgram and WebSocket](https://www.tamsiv.com/en/blog/deepgram-voice-pipeline-websocket)
- ...
```
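Nothing stops this file from being generated at build time from the same data that feeds the sitemap. A minimal sketch of such a generator (the `buildLlmsTxt` helper and its inputs are illustrative, not the actual build script):

```typescript
interface LlmsTxtPage {
  title: string;
  url: string;
  note?: string; // optional short description after the link
}

// Assemble an llms.txt body in the llmstxt.org layout: one H1 for the
// product, a blockquoted identity paragraph, then H2 sections of links.
function buildLlmsTxt(
  name: string,
  summary: string,
  sections: Record<string, LlmsTxtPage[]>,
): string {
  const lines: string[] = [`# ${name}`, `> ${summary}`, ''];
  for (const [heading, pages] of Object.entries(sections)) {
    lines.push(`## ${heading}`);
    for (const page of pages) {
      const note = page.note ? `: ${page.note}` : '';
      lines.push(`- [${page.title}](${page.url})${note}`);
    }
    lines.push('');
  }
  return lines.join('\n');
}
```

Write the result to `public/llms.txt` in the build step and it ships with every deploy.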
2. Extended JSON-LD across the site
The landing page now ships four structured-data blocks (up from three minimal ones), and each blog article ships two enriched ones.
Landing:
```ts
const softwareApplicationLd = {
  '@context': 'https://schema.org',
  '@type': 'SoftwareApplication',
  name: 'TAMSIV',
  operatingSystem: 'Android',
  applicationCategory: 'ProductivityApplication',
  applicationSubCategory: 'TaskManagement',
  inLanguage: ['fr', 'en', 'de', 'es', 'it', 'pt'],
  featureList: [
    'Voice-first task and memo creation',
    'Unlimited hierarchical folders (up to 6 levels)',
    'Real-time multi-user collaboration',
    // 7 more...
  ],
  offers: [
    { '@type': 'Offer', name: 'Free', price: '0', priceCurrency: 'EUR' },
    { '@type': 'Offer', name: 'Pro', price: '4.99', priceCurrency: 'EUR' },
    { '@type': 'Offer', name: 'Team', price: '8.99', priceCurrency: 'EUR' },
  ],
  // ...
};
```
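A block like this only matters once it reaches the page as an inline `<script type="application/ld+json">` tag. One standard way to serialize it safely (a sketch, not necessarily the exact component TAMSIV uses):

```typescript
// Serialize a JSON-LD object for an inline <script type="application/ld+json">.
// Escaping '<' as \u003c keeps a literal '</script>' inside string values
// from terminating the script tag early.
function jsonLdScriptContent(data: object): string {
  return JSON.stringify(data).replace(/</g, '\\u003c');
}
```

In a Next.js layout this typically becomes `<script type="application/ld+json" dangerouslySetInnerHTML={{ __html: jsonLdScriptContent(softwareApplicationLd) }} />`.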
For each blog article, a FAQPage is auto-extracted from the HTML at build time. The trick: locate the `<h2>FAQ</h2>` heading, then parse the `<h3>question</h3><p>answer</p>` pairs that follow it.
```ts
interface FaqItem {
  question: string;
  answer: string;
}

// Drop tags so extracted questions and answers are plain text.
const stripHtml = (s: string) => s.replace(/<[^>]*>/g, ' ');

export function extractFaqsFromHtml(html: string): FaqItem[] {
  // Match the FAQ heading in any of the six site languages.
  const faqHeadingRegex =
    /<h2[^>]*>\s*(?:FAQ|Questions?\s+fr[ée]quentes?|H[äa]ufig\s+gestellte\s+Fragen|Preguntas\s+frecuentes|Domande\s+frequenti|Perguntas\s+frequentes)[^<]*<\/h2>/i;
  const match = faqHeadingRegex.exec(html);
  if (!match) return [];
  // Limit the scan to the FAQ section: everything up to the next <h2>.
  const remainder = html.substring(match.index + match[0].length);
  const nextH2 = /<h2[^>]*>/i.exec(remainder);
  const faqBlock = nextH2 ? remainder.substring(0, nextH2.index) : remainder;
  const items: FaqItem[] = [];
  const pairRegex = /<h3[^>]*>([\s\S]*?)<\/h3>\s*<p[^>]*>([\s\S]*?)<\/p>/gi;
  let pair;
  while ((pair = pairRegex.exec(faqBlock)) !== null) {
    items.push({
      question: stripHtml(pair[1]).trim(),
      answer: stripHtml(pair[2]).trim(),
    });
  }
  return items;
}
```
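The extracted pairs then map one-to-one onto a schema.org FAQPage block. A sketch of that mapping (the helper name is mine; the field names are schema.org's):

```typescript
interface FaqItem {
  question: string;
  answer: string;
}

// Map extracted Q/A pairs onto a schema.org FAQPage structured-data block.
// Returns null for articles without a FAQ section so no empty block ships.
function faqItemsToJsonLd(items: FaqItem[]): object | null {
  if (items.length === 0) return null;
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: items.map((item) => ({
      '@type': 'Question',
      name: item.question,
      acceptedAnswer: { '@type': 'Answer', text: item.answer },
    })),
  };
}
```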
3. IndexNow API endpoint
IndexNow is a protocol Microsoft and Yandex co-published to let site owners push URLs to crawlers instead of waiting for them. I exposed a small Next.js route handler:
```ts
import { NextRequest, NextResponse } from 'next/server';

const INDEXNOW_KEY = 'e71446460e864d9aa8a6282fd07db38b';
const HOST = 'www.tamsiv.com';
const KEY_LOCATION = `https://${HOST}/${INDEXNOW_KEY}.txt`;
const ENDPOINTS = [
  'https://api.indexnow.org/IndexNow',
  'https://www.bing.com/IndexNow',
  'https://yandex.com/indexnow',
];

export async function POST(req: NextRequest) {
  const body = await req.json();
  const urls = body.urls ?? [];
  const results = await Promise.all(
    ENDPOINTS.map(async (endpoint) => {
      const res = await fetch(endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json; charset=utf-8' },
        body: JSON.stringify({
          host: HOST,
          key: INDEXNOW_KEY,
          keyLocation: KEY_LOCATION,
          urlList: urls,
        }),
      });
      return { endpoint, status: res.status, ok: res.ok };
    })
  );
  return NextResponse.json({ submitted: urls.length, results });
}
```
The key file lives at /public/e71446460e864d9aa8a6282fd07db38b.txt (just contains the key string for verification).
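One protocol detail worth guarding against: IndexNow rejects submissions whose URLs don't belong to the declared host, so it pays to filter the list before building the payload. A small guard along these lines (the helper name is mine):

```typescript
// IndexNow requires every URL in urlList to belong to the declared host.
// Drop anything cross-host or malformed before submitting.
function filterUrlsForHost(urls: string[], host: string): string[] {
  return urls.filter((u) => {
    try {
      return new URL(u).hostname === host;
    } catch {
      return false; // malformed URL
    }
  });
}
```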
4. "In short" answer blocks on existing articles
The four articles already cited by ChatGPT got a 50-65 word answer-first block at the top, in all six site languages (24 files prepended).
```html
<aside
  class="aeo-answer-box not-prose my-6 rounded-2xl border border-blue-500/30 bg-blue-500/5 p-5"
  role="complementary"
  aria-label="Short answer"
>
  <p class="text-xs font-semibold uppercase tracking-wider text-blue-300 mb-2">In short</p>
  <p class="text-slate-100 leading-relaxed">
    TAMSIV is the Android voice task manager that turns spoken commands into
    structured tasks, memos, and calendar events. Unlike Google Assistant or
    generic reminder apps, its conversational AI splits multi-item utterances
    into separate items, organizes them into hierarchical folders, and lets
    you share lists with family or team in real time. Free on Android with
    optional Pro and Team plans.
  </p>
</aside>
```
The format is calibrated so an LLM can extract the passage and quote it directly, instead of digging 500 words into the article.
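Since the target is a 50-65 word window, a tiny check can run at build time to keep all 24 localized files inside it. A sketch (the window and helper name are mine, matching the calibration above):

```typescript
// Strip markup, count words, and verify the answer block stays inside
// the answer-first window (50-65 words in the blocks above).
function isCalibratedAnswer(html: string, min = 50, max = 65): boolean {
  const words = html
    .replace(/<[^>]*>/g, ' ') // drop tags, keep their text content
    .split(/\s+/)
    .filter(Boolean).length;
  return words >= min && words <= max;
}
```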
5. First competitor comparison page
`/[locale]/vs/todoist` shipped as a dynamic Next.js route fed by a `vs-data.ts` config. Full JSON-LD stack: two SoftwareApplication entries (TAMSIV and the competitor), one ItemList, one FAQPage, one BreadcrumbList. LLMs reward honest comparisons, and SourceForge builds its own comparison pages from the same kind of data.
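The config keeps each comparison declarative, so adding a competitor is one object, not one page. A plausible shape for `vs-data.ts` (the field names are my guess at the structure, not the actual file):

```typescript
// One entry per competitor; the dynamic /vs/[slug] route renders from this.
interface VsEntry {
  slug: string; // URL segment, e.g. /en/vs/todoist
  competitorName: string;
  featureRows: { feature: string; tamsiv: string; competitor: string }[];
  faq: { question: string; answer: string }[];
}

const vsData: Record<string, VsEntry> = {
  todoist: {
    slug: 'todoist',
    competitorName: 'Todoist',
    featureRows: [
      { feature: 'Voice-first capture', tamsiv: 'Core workflow', competitor: 'Limited' },
    ],
    faq: [
      { question: 'Is TAMSIV free?', answer: 'Yes, with optional Pro and Team plans.' },
    ],
  },
};
```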
Bing Webmaster + IndexNow batch
Two more steps rounded out the push:
Bing Webmaster via Google OAuth. From the new site flow, the "Import from Google Search Console" option pulls verified sites instantly. Two sites imported in under a minute, sitemaps included, no DNS or meta tag verification needed.
Batch IndexNow for the entire sitemap. I extracted 498 URLs from sitemap.xml, split them into chunks of 50, and pushed each chunk to all three endpoints in parallel:
```shell
curl -s https://www.tamsiv.com/sitemap.xml \
  | grep -oE '<loc>[^<]+</loc>' \
  | sed -E 's|<loc>(.*)</loc>|\1|' \
  > sitemap-urls.txt
```
```ts
import { readFileSync } from 'fs';

// Same values as the route handler above.
const host = 'www.tamsiv.com';
const key = 'e71446460e864d9aa8a6282fd07db38b';
const keyLocation = `https://${host}/${key}.txt`;
const ENDPOINTS = [/* ... */];

const urls = readFileSync('sitemap-urls.txt', 'utf-8').trim().split('\n');
const CHUNK = 50;

for (let i = 0; i < urls.length; i += CHUNK) {
  const chunk = urls.slice(i, i + CHUNK);
  const body = JSON.stringify({ host, key, keyLocation, urlList: chunk });
  await Promise.all(ENDPOINTS.map((e) => fetch(e, { method: 'POST', body /* ... */ })));
  // Pause between chunks to stay polite with the endpoints.
  await new Promise((r) => setTimeout(r, 1500));
}
```
Result: 10 chunks of 50 pushed to all three endpoints in parallel, 30 requests, 30 OK. Total push time: about 50 seconds.
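The splitting step in that script factors into a one-liner helper:

```typescript
// Split a flat URL list into fixed-size chunks for batched IndexNow pushes.
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```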
The day-after test, four LLMs probed live
Perplexity. Asked "What is TAMSIV voice task manager Android". Got a complete answer with 10+ sources cited: four pages on tamsiv.com, the Play Store, a Slashdot listing I didn't know existed, the YouTube demo video, two dev.to articles, and a third-party LinkedIn post that reposted some info. Rich, accurate, attributed correctly.
Gemini. Asked the same. Got three sources: Play Store, SourceForge comparison page I didn't know existed either, and one of our blog articles. The features section was correctly populated (NLP, multi-task extraction, real-time collab, gamification, six languages, hierarchical folders). One amusing detail: Gemini credited the app to a different indie hacker by name. The hallucination is real but it doesn't matter to me. What matters is the app being found, not who built it.
Claude via WebSearch. Ranks tamsiv.com first on "best voice task manager Android 2026".
Bing. `site:tamsiv.com` still returned zero (operator indexing takes days for new domains), but the traffic data showed the first two human Bing visitors landing on the site within 24 hours of the IndexNow push: a UK visitor on the supabase-egress article, a French visitor on the AI conversation-history article. A source that had been at zero the day before.
The directories surprise
Slashdot and SourceForge both had auto-generated TAMSIV listings, scraped from our Play Store description. The descriptions were precise (price, languages, features, even our "750+ commits solo developer" tagline). I never submitted those.
But SourceForge had imported some defaults that don't match reality: it claims TAMSIV runs on Windows, Mac, Linux, and iOS (it's Android + Web only), offers 24/7 phone support (no phone support at all), includes in-person training (just online docs), and has a public API (it doesn't). Those errors feed downstream LLM hallucinations: Gemini's "iOS, Web, desktop platforms" line probably came straight from SourceForge.
Claiming the listings is the next step. The vendor account exists, the claim form is a five-minute walkthrough, the moderation takes a few days. Then the listing becomes editable and I'll fix the errors.
Takeaways
One LLM isn't all LLMs. Optimizing for ChatGPT alone is a bottleneck. Four parallel search backends matter (Bing, Brave, Google, Common Crawl) plus the directories layer on top.
Software directories are first-class LLM sources. Perplexity cites Slashdot, Gemini cites SourceForge, Claude cites AlternativeTo. Multiplying presence on those directories multiplies citations. They're the missing layer of classic SEO and they feed AI search directly.
Extended JSON-LD is read. Gemini reproduced the feature list and offers structure straight from the SoftwareApplication schema. Fifty lines of JavaScript in the layout, measurable effect.
IndexNow saves days of crawl cycle. Without push, Bing takes one to three weeks to discover a new site. With a batch, it's hours. A free, open protocol, supported by Yandex, Naver, and Seznam, and indirectly by every Bing-backed engine.
Most LLM hallucinations come from third-party listings. Errors in SourceForge propagate to Gemini and onward. The fix is to claim and correct the listings, not to over-publish corrections on your own site (which the LLMs already trust).
What's next
Claim Slashdot + SourceForge to fix the product errors. Submit to ten more directories (G2, Capterra, GetApp, ProductHunt, ToolFinder, FutureTools, etc.). Ship four more /vs/ pages on the same template (Any.do, Google Keep, Notion, TickTick). And measure: test the same four LLMs again in 30 days, 60, 90. If the trajectory holds, you'll know. If it stagnates, you'll have the numbers to pivot.
If you want the full breakdown with screenshots and the six-language version, the canonical article is here on the TAMSIV blog.
What's your current state on AEO? Are you optimizing for one LLM or measuring across the grid?