A developer's guide to AEO/GEO - the SEO nobody's talking about yet.
I didn't set out to rank on ChatGPT.
I just built a portfolio. Pushed it to GitHub. Moved on.
Then someone messaged me: "Dude, your template shows up first when I ask ChatGPT for Next.js portfolio templates."
I thought they were trolling. Tried it myself.
They weren't.
My portfolio template was the #1 result for "best Next.js portfolio template GitHub" on ChatGPT search.
No ads. No backlink campaigns. No SEO agency.
So I did what any curious dev would do - I reverse-engineered why.
Turns out, I accidentally nailed something called AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization).
Here's everything I learned.
TL;DR (If You Want the Playbook)
- Add JSON-LD schema so AI can understand “who/what” your site is.
- Set metadata so crawlers can read + reuse your content (within reason).
- Write specific, verifiable content (entities + numbers + links).
- Keep the site crawlable (sitemap + robots + clean structure).
The Game Changed (And Most Devs Missed It)
Google isn't the only search engine anymore.
ChatGPT, Perplexity, Claude - they're not just chatbots. They're search engines.
The important part isn't the exact user numbers. It's the behavior shift: people ask AI a question instead of typing keywords into a search box.
But here's the thing: AI search works completely differently.
Google ranks links based on backlinks, domain authority, keywords.
AI engines? They retrieve and cite information. They need to understand your content, not just index it.
Traditional SEO asks: "How do I rank higher?"
AEO/GEO asks: "How do I become the answer?"
Big difference.
The Secret Sauce: JSON-LD Schema
Let's cut to the chase.
The #1 thing that made my portfolio "visible" to AI is structured data.
Specifically, JSON-LD schema markup.
Here's what I added to my homepage:
```tsx
// app/(root)/page.tsx
const personSchema = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Naman Barkiya",
  url: "https://nbarkiya.xyz",
  image: "https://res.cloudinary.com/.../og-image.png",
  jobTitle: "Applied AI Engineer",
  sameAs: ["https://github.com/namanbarkiya", "https://x.com/namanbarkiya"],
};

const softwareSchema = {
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  name: "Next.js Portfolio Template",
  applicationCategory: "DeveloperApplication",
  operatingSystem: "Web",
  offers: {
    "@type": "Offer",
    price: "0",
    priceCurrency: "USD",
  },
  author: {
    "@type": "Person",
    name: "Naman Barkiya",
    url: "https://nbarkiya.xyz",
  },
};
```
Then injected it into the page:
```tsx
// `Script` comes from "next/script"
<Script
  id="schema-person"
  type="application/ld+json"
  dangerouslySetInnerHTML={{ __html: JSON.stringify(personSchema) }}
/>
<Script
  id="schema-software"
  type="application/ld+json"
  dangerouslySetInnerHTML={{ __html: JSON.stringify(softwareSchema) }}
/>
```
Why this works:
When a crawler hits my page, it doesn't just see a wall of HTML.
It sees structured data that explicitly says:
- "This is a Person named Naman Barkiya"
- "His job is Applied AI Engineer"
- "He's the same as this GitHub profile and Twitter account"
- "He authored this software application"
This is how AI builds its knowledge graph. This is how you become citable.
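If you end up adding schemas on more than one page, a tiny wrapper keeps the injection consistent. This is just a sketch of my own, not something from the template - the `JsonLd` component name is made up:

```tsx
// components/json-ld.tsx - a small helper (my own sketch, not from the template)
import Script from "next/script";

type JsonLdProps = {
  id: string;
  schema: Record<string, unknown>; // any schema.org object, e.g. personSchema
};

export function JsonLd({ id, schema }: JsonLdProps) {
  return (
    <Script
      id={id}
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(schema) }}
    />
  );
}

// Usage: <JsonLd id="schema-person" schema={personSchema} />
```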
Schema Types You Should Know
| Schema Type | When to Use |
|---|---|
| `Person` | Personal portfolios, about pages |
| `Organization` | Company websites |
| `SoftwareApplication` | Dev tools, apps, templates |
| `Article` | Blog posts, tutorials |
| `FAQPage` | Q&A sections |
| `HowTo` | Step-by-step guides |
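For example, if you also publish tutorials, an `Article` schema is just another plain object. Here's a sketch with placeholder values (the slug and date are made up):

```tsx
const articleSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "A developer's guide to AEO/GEO",
  author: {
    "@type": "Person",
    name: "Naman Barkiya",
    url: "https://nbarkiya.xyz",
  },
  datePublished: "2025-01-01", // placeholder date
  url: "https://nbarkiya.xyz/blog/aeo-geo-guide", // hypothetical slug
};
```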
The One Line That Changed Everything
Most websites are invisible to AI search.
Not because the content is bad - but because crawlers often only ingest snippets.
Here's the line that fixes it:
```tsx
// app/layout.tsx
export const metadata = {
  // ... other metadata
  robots: {
    index: true,
    follow: true,
    googleBot: {
      index: true,
      follow: true,
      "max-image-preview": "large",
      "max-snippet": -1, // ← This is the magic line
    },
  },
};
```
`max-snippet: -1`
This tells (some) crawlers: "You can use as much of my content as you want."
Important nuance: this directive is widely recognized for Google (googleBot), and some other crawlers may respect it too. Either way, it’s a good default if your goal is discoverability.
With -1, AI can cite your entire page.
Small change. Massive impact.
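A nice side effect of putting this in metadata: it's per-route. If there's a page you'd rather not hand over wholesale (the "within reason" part of the TL;DR), you can override it just for that route. A hypothetical example:

```tsx
// app/drafts/page.tsx - hypothetical page you don't want indexed at all
export const metadata = {
  robots: {
    index: false,
    follow: false,
  },
};
```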
Metadata That Actually Matters
Here's my full metadata setup:
```tsx
export const metadata = {
  metadataBase: new URL("https://nbarkiya.xyz"),
  title: {
    default: "Naman Barkiya - Applied AI Engineer",
    template: "%s | Naman Barkiya - Applied AI Engineer",
  },
  description:
    "Naman Barkiya - Applied AI Engineer working at the intersection of AI, data, and scalable software systems.",
  keywords: [
    "Naman Barkiya",
    "Applied AI Engineer",
    "Next.js Developer",
    "XYZ Inc",
    "Databricks",
    // ... more keywords
  ],
  authors: [
    {
      name: "Naman Barkiya",
      url: "https://nbarkiya.xyz",
    },
  ],
  alternates: {
    canonical: "https://nbarkiya.xyz",
  },
  openGraph: {
    type: "website",
    locale: "en_US",
    url: "https://nbarkiya.xyz",
    title: "Naman Barkiya - Applied AI Engineer",
    description: "Applied AI Engineer working at...",
    siteName: "Naman Barkiya - Applied AI Engineer",
    images: [
      {
        url: "https://res.cloudinary.com/.../og-image.png",
        width: 1200,
        height: 630,
        alt: "Naman Barkiya - Applied AI Engineer",
      },
    ],
  },
  robots: {
    index: true,
    follow: true,
    googleBot: {
      "max-snippet": -1,
      "max-image-preview": "large",
    },
  },
};
```
Notice the pattern?
I repeat "Naman Barkiya" + "Applied AI Engineer" everywhere:
`title.default`, `description`, `keywords`, `authors`, `openGraph.title`, `openGraph.siteName`
This isn't keyword stuffing. It's entity reinforcement.
AI needs to see the same entity described consistently across multiple signals to build confidence in what it "knows" about you.
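The `title.template` above does part of this for free: every sub-page restates the entity in its title, and each page only needs its own short title. Here's a sketch for a hypothetical projects page (not the template's exact code):

```tsx
// app/projects/page.tsx
export const metadata = {
  title: "Projects", // renders as "Projects | Naman Barkiya - Applied AI Engineer"
  description: "Projects built by Naman Barkiya, Applied AI Engineer.",
};
```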
Write Content That AI Can Actually Use
Here's something most people miss:
AI doesn't trust vague claims.
This won't get you cited:
❌ "Worked on various web development projects"
❌ "Experienced software engineer"
❌ "Built many applications"
This will:
✅ "Built client dashboard at XYZ serving global traders"
✅ "Reduced API load time by 30%"
✅ "Scaled platform to 3,000+ daily users"
AI models are trained to identify:
- Named entities (XYZ, Databricks, Next.js)
- Quantified results (30%, 3,000 users, first month)
- Verifiable links (company URLs, GitHub repos)
Here's how I structure my experience data:
```ts
// config/experience.ts
{
  id: "xyz",
  position: "Software Development Engineer",
  company: "XYZ",
  location: "Mumbai, India",
  startDate: new Date("2024-08-01"),
  endDate: "Present",
  achievements: [
    "Shipped production features within the first month for a trader-facing P&L dashboard",
    "Won XYZ AI Venture Challenge by building data transformation pipelines",
    "Led a 12-member team in an internal hackathon",
  ],
  companyUrl: "https://www.xyz.com",
  skills: ["Typescript", "React", "Databricks", "Python"],
}
```
Every claim is:
- Specific (not vague)
- Quantified (where possible)
- Verifiable (company URL included)
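When this data gets rendered, I keep it machine-readable too: semantic HTML, achievements as a real list, and the company URL as an actual link. A rough sketch of the idea (not the template's exact component):

```tsx
// A rough rendering sketch - `Experience` is a minimal stand-in type
type Experience = {
  position: string;
  company: string;
  companyUrl: string;
  achievements: string[];
};

export function ExperienceItem({ exp }: { exp: Experience }) {
  return (
    <article>
      <h3>
        {exp.position} at <a href={exp.companyUrl}>{exp.company}</a>
      </h3>
      <ul>
        {exp.achievements.map((item) => (
          <li key={item}>{item}</li>
        ))}
      </ul>
    </article>
  );
}
```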
The Technical Foundation
Don't skip the basics.
Sitemap:
```ts
// app/sitemap.ts
import { MetadataRoute } from "next";

export default function sitemap(): MetadataRoute.Sitemap {
  return [
    {
      url: "https://nbarkiya.xyz",
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 1.0,
    },
    {
      url: "https://nbarkiya.xyz/projects",
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 0.8,
    },
    // ... more routes
  ];
}
```
Robots.txt:
```txt
User-agent: *
Allow: /

Sitemap: https://nbarkiya.xyz/sitemap.xml
```
Simple. Open. Crawlable.
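If you prefer to keep this in code, Next.js can generate the same file from `app/robots.ts` - an equivalent sketch (the static file above works just as well):

```ts
// app/robots.ts - generates robots.txt from code
import { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [{ userAgent: "*", allow: "/" }],
    sitemap: "https://nbarkiya.xyz/sitemap.xml",
  };
}
```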
The Complete Checklist
Here's everything in one place:
Schema & Structured Data
- [ ] JSON-LD `Person` schema on homepage (quick check sketch after this checklist)
- [ ] Additional schemas for your content type (`SoftwareApplication`, `Article`, etc.)
Metadata
- [ ] `max-snippet: -1` in robots config
- [ ] Canonical URLs on every page
- [ ] `authors` field with name and URL
- [ ] Entity-rich descriptions
Content
- [ ] Specific, quantified achievements
- [ ] Named entities (companies, tools, technologies)
- [ ] External verification links
- [ ] Semantic HTML (proper heading hierarchy, lists)
Technical
- [ ] Dynamic sitemap
- [ ] Open robots.txt
- [ ] Fast page loads (AI crawlers have timeouts too)
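If you want to sanity-check the schema items above on a deployed site, a few lines of script will do. This is my own quick-and-dirty sketch (the `scripts/check-jsonld.ts` path is made up), not part of the template:

```ts
// scripts/check-jsonld.ts - quick sanity check for JSON-LD blocks (my own sketch)
// Run with: npx tsx scripts/check-jsonld.ts https://nbarkiya.xyz
async function main() {
  const url = process.argv[2] ?? "https://nbarkiya.xyz";
  const html = await (await fetch(url)).text();

  // Grab every <script type="application/ld+json"> block the page ships
  const blocks = [
    ...html.matchAll(
      /<script[^>]*type="application\/ld\+json"[^>]*>([\s\S]*?)<\/script>/g
    ),
  ];

  if (blocks.length === 0) {
    console.log("No JSON-LD found - crawlers won't see any either.");
    return;
  }
  for (const match of blocks) {
    const schema = JSON.parse(match[1]);
    console.log(`Found schema: ${schema["@type"]}`);
  }
}

main();
```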
Final Thoughts
The future of search is AI-first.
Google isn't going anywhere, but it's no longer the only game in town. If your content can't be understood by LLMs, you're invisible to a growing chunk of the internet.
The good news? It's not that hard to fix.
Add schema markup. Open up your snippets. Write specific, verifiable content.
That's it. That's the whole playbook.
I open-sourced my entire portfolio template. You can see all of this implemented:
github.com/namanbarkiya/minimal-next-portfolio
Fork it. Use it. Make it yours.
And maybe someday I'll ask ChatGPT for portfolio templates and see your site at #1.
That'd be pretty cool.