Hey DEV community! A few months ago I shared how I built a car rental platform in Moldova. Since then, I've been thinking about something that didn't exist as a concern when I started: how do AI assistants discover and describe my site?
If someone asks Gemini "what's a good car rental in Moldova?" or "sober driver chisinau", will it mention my business? If Claude cites a source about sober driver services in Chișinău, will it be my page?
Traditional SEO assumes humans read your site. AI-era SEO needs to assume both humans AND AI agents do. Here's what I actually did — and what changed.
## The Problem Nobody Warned Me About
In 2024, I noticed something weird. My site ranked well on Google, but when I asked Claude or Perplexity about car rental in Chișinău, my business either wasn't mentioned, or was described with outdated info from training data.
The issue: AI crawlers work differently from Googlebot.
- GPTBot, ClaudeBot, PerplexityBot, Claude-SearchBot — these are real, active crawlers
- They respect `robots.txt`, but most sites don't explicitly address them
- They parse structured data, but rely heavily on clear, semantic content
- Without explicit signals, your site gets ignored or misrepresented
Traditional SEO focuses on Googlebot. AI-era SEO needs to think about a dozen different crawlers, each with different priorities.
## Meet llms.txt
Enter llms.txt — a proposed standard (think robots.txt but for AI). It's a markdown file that tells AI systems:
- Who you are
- What your site is about
- Key URLs they should know
- What the most important information is
It's not a W3C standard yet, but Anthropic, OpenAI, and others are increasingly looking at it. The cost to implement is 15 minutes. The upside is significant.
Here's a simplified version of mine:
```markdown
# PlusRent.md

> Car rental and designated sober driver service in Chișinău, Moldova.
> Operating since 2022, 20+ vehicles, 24/7 support, 27 Google reviews (5.0 stars).

## Services
- **Car Rental**: Economy (€15/day), Standard (€31/day), Premium BMW/Mercedes (€61/day)
- **Sober Driver (Șofer Treaz)**: From 150 MDL/hour. Available 24/7 in Chișinău (30 km radius).
- **Airport Delivery**: Free to Chișinău Airport (business hours), available for Iași Airport.

## Key Pages
- [Main](https://plusrent.md/ro/): Service overview
- [Car Catalog](https://plusrent.md/ro/cars): 20 vehicles with pricing
- [Sober Driver](https://plusrent.md/ro/sofer-treaz): Designated driver service
- [Contact](https://plusrent.md/ro/contact): +373 60 000 500, 24/7

## Key Facts
- 3 languages: Romanian, Russian, English
- 439+ completed orders
- Address: Meșterul Manole 20, MD-2044, Chișinău
- Coordinates: 47.013737, 28.886408
```
Available at plusrent.md/llms.txt if you want a real example.
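If your business facts already live in code or a database, you can generate llms.txt from that single source of truth instead of hand-editing a second copy. A minimal sketch in Python — the `build_llms_txt` helper and the dict layout are my own invention, not part of the proposed standard, which only really asks for an H1 title, a blockquote summary, and H2 sections:

```python
# Sketch: generate an llms.txt-style markdown file from one
# business-info dict, so it can't drift out of sync with site data.
# The dict keys ("name", "summary", "sections") are illustrative.

def build_llms_txt(info: dict) -> str:
    lines = [f"# {info['name']}", f"> {info['summary']}", ""]
    for section, items in info["sections"].items():
        lines.append(f"## {section}")
        lines.extend(f"- {item}" for item in items)
        lines.append("")  # blank line between sections
    return "\n".join(lines).rstrip("\n") + "\n"

site = {
    "name": "PlusRent.md",
    "summary": "Car rental and designated sober driver service in Chișinău, Moldova.",
    "sections": {
        "Services": ["**Car Rental**: Economy (€15/day)"],
        "Key Pages": ["[Main](https://plusrent.md/ro/): Service overview"],
    },
}
print(build_llms_txt(site))
```

Hook this into your build step and serve the result as a static file at `/llms.txt`.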
## Schema.org: Still Essential, Now Also for AI
I already had basic Schema.org markup from the first iteration. But I went deeper, adding multiple interconnected types:
```json
// AutoRental — primary business schema
{
  "@context": "https://schema.org",
  "@type": "AutoRental",
  "name": "PlusRent",
  "priceRange": "€15-€120",
  "areaServed": [
    {"@type": "City", "name": "Chișinău"},
    {"@type": "Country", "name": "Moldova"}
  ],
  "makesOffer": [
    {
      "@type": "Offer",
      "name": "Economy Car Rental",
      "price": "15",
      "priceCurrency": "EUR"
    }
    // ...
  ]
}
```
The key insight: AI systems parse multiple schema blocks. Having separate schemas for Organization, AutoRental, FAQPage, BreadcrumbList, and LocalBusiness gives AI systems multiple signals confirming what your site is about.
One mistake I made initially: using @type: "Product" for services. Google and SEMrush both flagged this, requiring e-commerce fields like shippingDetails and hasMerchantReturnPolicy. For a service business, use Service or domain-specific types like AutoRental, Vehicle, or LodgingBusiness.
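One way to keep several interconnected blocks consistent is to generate them all from the same data. A hedged sketch — the `schema_blocks` helper and the trimmed-down set of properties are illustrative, not how PlusRent is actually built:

```python
import json

# Sketch: emit multiple JSON-LD blocks (Organization + AutoRental)
# from one dict, so the shared fields never disagree between blocks.
# BUSINESS values mirror the post's examples; the helper is hypothetical.

BUSINESS = {
    "name": "PlusRent",
    "url": "https://plusrent.md",
    "telephone": "+373 60 000 500",
}

def schema_blocks(biz: dict) -> list[str]:
    org = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": biz["name"],
        "url": biz["url"],
    }
    rental = {
        "@context": "https://schema.org",
        "@type": "AutoRental",
        "name": biz["name"],
        "telephone": biz["telephone"],
        "parentOrganization": {"@type": "Organization", "name": biz["name"]},
    }
    # Each serialized block goes into its own
    # <script type="application/ld+json"> tag in the page head.
    return [json.dumps(b, ensure_ascii=False, indent=2) for b in (org, rental)]

for block in schema_blocks(BUSINESS):
    print(block)
```

The point of generating rather than hand-writing: when the phone number changes, it changes once.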
## Robots.txt for the AI Era
Most robots.txt files are written for one crawler: Googlebot. Here's a snippet from mine, explicitly addressing AI bots:
```txt
# Default rules apply to everyone
User-agent: *
Allow: /
Disallow: /admin
Disallow: /api/

# AI Crawlers — explicitly allowed with same restrictions
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: anthropic-ai
User-agent: PerplexityBot
User-agent: Google-Extended
User-agent: Applebot-Extended
Disallow: /admin
Disallow: /api/

# Point them to machine-readable summary
Sitemap: https://plusrent.md/sitemap.xml
```
Being explicit matters: some AI companies may treat "no rules for my bot" as a reason to stay away, so spelling out what's allowed prevents accidental exclusion.
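You can sanity-check rules like these without deploying anything, using Python's stdlib `urllib.robotparser`, which applies a shared rule block to every `User-agent` line above it, the same way the rules are meant to be read. A quick sketch against a trimmed copy of the rules:

```python
from urllib import robotparser

# Trimmed-down copy of the robots.txt rules for testing locally.
RULES = """\
User-agent: *
Allow: /
Disallow: /admin
Disallow: /api/

User-agent: GPTBot
User-agent: ClaudeBot
Disallow: /admin
Disallow: /api/
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

print(rp.can_fetch("GPTBot", "https://plusrent.md/ro/cars"))    # True
print(rp.can_fetch("GPTBot", "https://plusrent.md/admin"))      # False
print(rp.can_fetch("ClaudeBot", "https://plusrent.md/api/data"))  # False
```

Running this against the live file (`rp.set_url(...)` plus `rp.read()`) works too, if you'd rather test what's actually deployed.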
## Testing Actual Results
I tested the same question before and after on Claude, ChatGPT, and Perplexity: "What car rental services are available in Chișinău, Moldova?"
Before:
- Generic answers mentioning "various international companies"
- Outdated info
- My business rarely mentioned
After (2-3 weeks post-implementation):
- Specific mentions of PlusRent with accurate service descriptions
- Correct pricing quoted
- Sober driver service mentioned (an unusual service AI systems wouldn't typically know about)
The effect isn't instant — AI systems rebuild their indices on their own schedules — but over weeks, the quality of AI-generated responses about my business improved substantially.
## What Actually Moved the Needle
Ranking these by impact from my experience:
- Schema.org done right — biggest impact. AI systems parse structured data carefully.
- llms.txt — cheap win, clear signal.
- Explicit AI bot rules in robots.txt — marginal but easy.
- Clear semantic HTML — no divs-with-classes when `<article>`, `<section>`, and `<nav>` exist.
- Unique meta descriptions per page — AI uses these as summaries.
The underrated factor: having multiple external sources mentioning your business. AI systems use citations to validate claims. If your site says "24/7 service" but three independent blogs and review platforms also say it, AI trusts it more.
## Key Takeaways
- AI-ready SEO is just good SEO, but more explicit. You're not gaming AI — you're helping it understand you.
- llms.txt is cheap — 15 minutes of work for potentially significant upside.
- Schema.org types matter. Don't force `Product` on a service business.
- Multiple schemas > one giant schema. Give AI multiple validation signals.
- External citations build trust. No amount of on-site optimization replaces real third-party mentions.
The wild part? None of this is revolutionary. It's the same SEO principles — clarity, semantic correctness, helpful markup — applied to a broader audience of readers that now includes AI agents.
## Links
- 🌐 plusrent.md — live site
- 📄 plusrent.md/llms.txt — real llms.txt example
- 📖 llmstxt.org — the proposed standard
- 🚗 First post — how this whole thing got built
Happy to answer questions in the comments. What have you done to make your site AI-discoverable?