If you've ever spent hours digging through Google results, opening 30+ tabs, cross-referencing sources, and trying to synthesize everything into a coherent answer — you know how painful deep research is.
That's the problem we set out to solve with Neiro.
What is Neiro?
Neiro is an AI-powered deep research engine. You ask a complex question — the kind that Google can't answer in a single snippet — and Neiro goes to work.
It doesn't just search. It researches. It scans thousands of live web sources, reads them, cross-references data points, and delivers a structured intelligence dossier with full citations.
Who is it for?
- VC analysts doing due diligence on startups
- Strategy teams running competitive intelligence
- Researchers conducting literature reviews
- Product managers sizing markets
- Anyone who needs answers that require more than a quick search
The Technical Challenge
Building a research engine is fundamentally different from building a chatbot. Here's why:
1. Breadth of Sources
A chatbot responds from training data. A search engine returns 10 links. Neiro needs to actively crawl and process thousands of sources per query — in real time. This means the system needs to be highly parallelized, with intelligent source selection to avoid noise.
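The bounded-parallelism pattern this implies can be sketched in a few lines. This is an illustrative worker pool, not Neiro's actual crawler — the pool size and fetch logic are placeholders:

```typescript
// Run async tasks with a fixed concurrency cap — illustrative, not Neiro's crawler.
async function pooled<T>(tasks: (() => Promise<T>)[], limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let cursor = 0;

  // Each worker repeatedly claims the next unstarted task until none remain.
  async function worker(): Promise<void> {
    while (cursor < tasks.length) {
      const i = cursor++;
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

// Usage: fetch many sources concurrently, but never more than 50 in flight.
// const pages = await pooled(urls.map(u => () => fetch(u).then(r => r.text())), 50);
```

Because results are written by index, the output order matches the input order even though tasks finish out of order.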
2. Zero Hallucinations
When you're producing research that professionals rely on for million-dollar decisions, you can't afford hallucinations. Every data point in a Neiro dossier needs to be traceable to its original source. This is a hard constraint that shapes the entire architecture.
3. Structured Output
Raw text isn't useful for research. Neiro produces structured dossiers with:
- Executive summaries
- Data tables
- Source citations
- Risk assessments
- Competitive matrices
This requires a fundamentally different output pipeline than a typical LLM chat interface.
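To make that concrete, here is one possible shape for such a dossier as TypeScript types. The field names are illustrative assumptions, not Neiro's actual schema:

```typescript
// Hypothetical dossier schema — field names are illustrative, not Neiro's real API.
interface Citation {
  url: string;
  title: string;
  accessedAt: string; // ISO 8601 date
}

interface DossierSection {
  heading: string;
  body: string;
  citations: Citation[]; // every claim traces back to at least one source
}

interface Dossier {
  query: string;
  executiveSummary: string;
  sections: DossierSection[];
  dataTables: Record<string, string[][]>; // table name -> rows of cells
  riskAssessment: DossierSection;
}

// A minimal well-formed dossier under this schema:
const sample: Dossier = {
  query: 'Example market question',
  executiveSummary: 'One-paragraph answer with the key numbers.',
  sections: [{
    heading: 'Findings',
    body: 'Each statement here carries a citation.',
    citations: [{ url: 'https://example.com', title: 'Example source', accessedAt: '2024-01-01' }]
  }],
  dataTables: { 'Market size': [['Year', 'TAM'], ['2024', '$1B']] },
  riskAssessment: { heading: 'Risks', body: 'Key risks.', citations: [] }
};
```

Typing citations into every section makes "no untraceable claims" a structural property of the output rather than a prompt-level hope.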
Our SEO Strategy (The Fun Part for Devs)
As a startup, we can't just build a great product and hope people find it. We needed a comprehensive organic growth strategy. Here's what we implemented:
Dynamic Sitemap
Our blog content is published automatically. A static sitemap.xml wouldn't cut it — it would go stale immediately.
We built a dynamic sitemap endpoint on our Express backend that queries the database for all published posts and generates the XML on-the-fly.
```javascript
// Dynamic sitemap: regenerated from the database on every request
app.get('/sitemap.xml', async (req, res) => {
  const posts = await db.query(
    "SELECT slug, updated_at FROM blog_posts WHERE status = 'published'"
  );

  const urls = posts.rows.map(p => `
  <url>
    <loc>https://yoursite.com/blog/${p.slug}</loc>
    <lastmod>${new Date(p.updated_at).toISOString().split('T')[0]}</lastmod>
  </url>`).join('');

  res.type('application/xml').send(`<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${urls}
</urlset>`);
});
```
Since our frontend is a Vite SPA on Vercel, we use Vercel rewrites to proxy /sitemap.xml to this backend endpoint.
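Assuming the backend is reachable at something like `api.yoursite.com` (a placeholder here), the `vercel.json` rewrite looks roughly like this:

```json
{
  "rewrites": [
    { "source": "/sitemap.xml", "destination": "https://api.yoursite.com/sitemap.xml" },
    { "source": "/rss.xml", "destination": "https://api.yoursite.com/rss.xml" }
  ]
}
```

To crawlers, the sitemap appears to live on the main domain, which keeps it valid for all URLs on that host.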
RSS Feed for Syndication
We added an RSS 2.0 feed endpoint that enables automatic syndication to platforms like Feedly, dev.to, and Google News.
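A minimal RSS 2.0 generator for that endpoint can be sketched as a pure function. The channel metadata and field names are illustrative, and escaping is reduced to the five XML special characters:

```typescript
interface FeedItem { title: string; slug: string; publishedAt: Date; summary: string; }

// Escape XML special characters so titles and summaries can't break the feed.
const esc = (s: string) =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
   .replace(/"/g, '&quot;').replace(/'/g, '&apos;');

function buildRss(items: FeedItem[]): string {
  const entries = items.map(i => `
    <item>
      <title>${esc(i.title)}</title>
      <link>https://yoursite.com/blog/${i.slug}</link>
      <pubDate>${i.publishedAt.toUTCString()}</pubDate>
      <description>${esc(i.summary)}</description>
    </item>`).join('');

  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Neiro Blog</title>
    <link>https://yoursite.com/blog</link>
    <description>Deep research, explained.</description>${entries}
  </channel>
</rss>`;
}
```

The Express handler then just queries published posts (as in the sitemap example) and sends the result with `res.type('application/rss+xml')`.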
AI Discoverability with llms.txt
This is a newer concept — a file at /llms.txt that tells AI models (ChatGPT, Perplexity, Claude) about your product. Think of it as robots.txt for AI.
```markdown
# Neiro — AI Deep Research Engine

> Neiro is an AI-powered deep research platform that
> synthesizes knowledge from 10,000+ real-time sources
> into verified intelligence dossiers.

## What Neiro Does

- Deep research on any topic
- Verified citations for every claim
- Structured dossiers, not chat responses
```
Dynamic OG Tags with Edge Middleware
SPAs have a classic problem: social media crawlers can't read client-rendered meta tags. When someone shares a blog post on LinkedIn or Twitter, the preview shows generic site info instead of the post's actual title and image.
Our solution: Vercel Edge Middleware.
```typescript
import { next } from '@vercel/edge';

const CRAWLERS = /facebookexternalhit|twitterbot|linkedinbot|slackbot|whatsapp|telegrambot/i;

export default async function middleware(request: Request) {
  const ua = request.headers.get('user-agent') || '';
  if (!CRAWLERS.test(ua)) return next(); // real users fall through to the SPA

  // Extract the post slug from the URL and fetch its metadata from the backend
  const slug = new URL(request.url).pathname.split('/').pop();
  const post = await fetch(`${API}/blog/post/${slug}`).then(r => r.json());

  return new Response(generateOgHtml(post), {
    headers: { 'Content-Type': 'text/html' }
  });
}
```
Crawlers get pre-rendered HTML with correct meta tags. Real users get the normal SPA experience. Best of both worlds.
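The `generateOgHtml` helper referenced above just needs to emit the handful of meta tags crawlers read. A minimal sketch, with an assumed post shape:

```typescript
// Assumed metadata shape for a blog post — not Neiro's actual API response.
interface PostMeta { title: string; description: string; coverImage: string; slug: string; }

// The only HTML a social crawler needs: OG and Twitter card meta tags.
function generateOgHtml(post: PostMeta): string {
  return `<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>${post.title}</title>
  <meta property="og:title" content="${post.title}">
  <meta property="og:description" content="${post.description}">
  <meta property="og:image" content="${post.coverImage}">
  <meta property="og:url" content="https://yoursite.com/blog/${post.slug}">
  <meta name="twitter:card" content="summary_large_image">
</head>
<body></body>
</html>`;
}
```

In production you would also escape the interpolated values, as with any HTML built from strings.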
IndexNow for Instant Indexing
Instead of waiting for search engines to discover new content, we proactively notify them using the IndexNow protocol:
```typescript
// Notify IndexNow-compatible engines as soon as a post goes live
async function pingIndexNow(postUrl: string) {
  await fetch('https://api.indexnow.org/indexnow', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      host: 'yoursite.com',
      key: process.env.INDEXNOW_KEY, // the key file must also be served from the site root
      urlList: [postUrl]
    })
  });
}
```
This fires automatically every time a new blog post is published. Bing, Yandex, and other supporting engines pick it up within minutes.
Programmatic SEO Pages
We created landing pages targeting high-intent keywords:
- Use case pages: /use-cases/ai-due-diligence, /use-cases/market-research, etc.
- Comparison pages: /compare/neiro-vs-perplexity, /compare/neiro-vs-chatgpt, etc.
Each page has unique content, FAQ structured data (for Google rich snippets), breadcrumb schema, and proper OG tags. These are data-driven — a single React component renders different content based on a data file, making it easy to add new pages.
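The FAQ structured data those pages emit can be generated from the same data file. A sketch of building the `FAQPage` JSON-LD that Google reads for rich snippets (the entry shape is an assumption about our data file, not a fixed format):

```typescript
// One FAQ entry from the page's data file — shape is illustrative.
interface FaqEntry { question: string; answer: string; }

// Build schema.org FAQPage JSON-LD for embedding in a <script type="application/ld+json"> tag.
function faqJsonLd(faqs: FaqEntry[]): string {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: faqs.map(f => ({
      '@type': 'Question',
      name: f.question,
      acceptedAnswer: { '@type': 'Answer', text: f.answer }
    }))
  });
}
```

The React component just drops this string into a `<script type="application/ld+json">` tag, so adding a landing page is adding a record to the data file.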
Results So Far
We're early, but the infrastructure is solid:
- ✅ All blog posts indexed within hours of publishing
- ✅ Correct social previews when shared on LinkedIn/Twitter
- ✅ AI models (ChatGPT, Claude) can discover and reference Neiro
- ✅ Growing organic traffic from long-tail keywords
- ✅ RSS feed enables automatic cross-posting
Try It
If you're curious what AI deep research looks like in practice, try Neiro for free — three research runs, no credit card required.
I'd love to hear your feedback, especially from anyone working on similar problems in the AI/search space.
What SEO strategies are you using for your side projects? Drop them in the comments — always looking to learn from the community.