I'm running an experiment to see how AI search engines like Perplexity and ChatGPT discover and cite content, and how that differs from traditional Google SEO.
The Setup
Built a productivity blog with 5 posts, each using a different optimization strategy:
- Traditional SEO - Keyword-stuffed content (old-school Google tactics)
- Structured Data - Heavy JSON-LD schema (FAQ, HowTo); see the sketch after this list
- LLM-Friendly - Conversational, clear definitions
- Q&A Format - Mimics AI training data structure
- AI-Optimized - Authoritative, comprehensive coverage
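For anyone unfamiliar, this is the kind of markup I mean by "heavy JSON-LD schema": a minimal FAQPage sketch (the question and answer text here are illustrative placeholders, not the actual post content):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is deep work?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Deep work is focused, distraction-free effort on a cognitively demanding task."
      }
    }
  ]
}
</script>
```

The theory being tested: structured data like this may hand answer engines pre-chunked question/answer pairs they can lift directly.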
 
The Test
Over 30 days, I'll track which posts get:
- Cited by Perplexity
- Referenced by ChatGPT Search
- Mentioned by Claude
- Ranked by Google AI Overviews
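Direct citations are hard to observe from the outside, so as an early proxy I'm also planning to watch server access logs for known AI crawler user agents. A minimal sketch, assuming combined-format logs exported to a local access.log (on Vercel that means setting up a log drain first, and the bot list below is just the publicly documented one as of writing, so it will drift):

```python
import re
from collections import Counter

# Publicly documented AI crawler user-agent substrings (this list drifts over time).
AI_BOTS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot"]

# Rough parser for combined log format: extract the request path and user agent.
LOG_LINE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"')

def ai_crawler_hits(log_path: str) -> Counter:
    """Count (bot, path) pairs for known AI crawlers in an access log."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            match = LOG_LINE.search(line)
            if not match:
                continue
            user_agent = match.group("ua")
            for bot in AI_BOTS:
                if bot in user_agent:
                    hits[(bot, match.group("path"))] += 1
    return hits

if __name__ == "__main__":
    for (bot, path), count in ai_crawler_hits("access.log").most_common(10):
        print(f"{bot:15} {count:4d}  {path}")
```

Crawler hits aren't citations, but a post no AI crawler ever fetches presumably can't be cited, so it's a useful leading indicator.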
 
Current Status
Just deployed: https://focusos-blog-3ryy.vercel.app
Day 1: Not indexed yet (baseline established)
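One sanity check worth doing while waiting on indexing: make sure robots.txt isn't blocking the same AI crawlers the experiment is trying to attract. A minimal sketch using the publicly documented user-agent tokens (the sitemap path is my assumption; adjust to whatever the blog actually serves):

```txt
# Explicitly allow the AI crawlers being measured
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Default: allow everyone else
User-agent: *
Allow: /

Sitemap: https://focusos-blog-3ryy.vercel.app/sitemap.xml
```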
Will share results weekly. Has anyone else tried optimizing specifically for AI search engines rather than traditional SEO?
Curious what the community thinks about "Agentic SEO" vs traditional optimization.
    
Top comments (4)
I started a tiny mirror site a while back, but I really just wanted to see how much I could impact AI search from GitHub and not traverse nine circles of Dante's DevOps to do it myself. 😆 I'm pretty sure I put that in the repo, too, at some point!
This is the background story I wrote up on the topic. I have worked on it a little since then and have a couple new ideas I'm hoping to get to this weekend. The repo is anchildress1/devto-mirror. It's really not much, but take whatever might help. Structured output is up next I think, just as soon as I get it registered with GCP (which has been at the top of my list since I got up this morning... where it remains currently 🤣).
Haha I love this — you’re literally doing what I’ve been experimenting with too 😆
I’ve been running an “Agentic SEO” test lately, trying to see how well AI search discovers and cites different kinds of web content.
The devto-mirror idea sounds perfect for that kind of test. Would totally love to compare notes or even cross-reference results if you’re open to it.
I’m especially curious how your structured output plan goes once you get it registered with GCP — that’s been on my radar too for crawl feedback and indexing checks.
I’ll check out your repo — if you have any docs, scripts, or patterns you used to automate the mirroring or measure AI visibility, that’d be super helpful 🙌
If you’re up for swapping findings, I can share how I’m tracking agentic indexing signals on my side too.
Sounds good! You can usually track me down on Discord. I needed something to give to Kiro before I wasted all of my free credits this month, so that's the job it got 😆 Tiny bits of progress: I did manage a decent test round via GH Pages though and verified the on/off flags do indeed work as theorized with their hosted solution. Also, I managed to get it verified with Google's Search setup. Course, I'm not quite sure what all it can really do yet, but there's options!
Fair warning: that repo setup is mostly an over-controlling attempt at vibe coding (which I'm incapable of, apparently) that is seriously lacking in any real structure or design of any kind. Right now my TODO list in that repo has more than a few things to tackle. All the things I usually set up first were completely ignored when I told myself it's just a "quick test" and I didn't really need a whole suite of tests for the test. 😵 It serves the original POC purpose, but it's time for some upgrades now!
Ping me though, we'll chat 💬
That makes sense — my setup is also mostly AI-generated, built as a quick proof of concept that grew into a deeper experiment.
It’s far from structured right now, but I think that raw, “vibe-coded” nature makes it a more authentic test case. Once I have enough data, I’ll move toward a cleaner, human-refined version for comparison.
If you’re up for it, feel free to reach out. I’m usually around on Discord at s_victor_og.