I launched a website. Not a template on Tilda, not a one-pager on Notion — a full-fledged static site built with Astro, featuring a blog, a product landing page, and support for two languages. Before writing a single blog post, I spent a disproportionately large amount of time making sure this site would be properly visible to search engines — and, more importantly, to large language models.
This article is about what I set up, why I did it, and why in 2026 SEO alone is no longer enough.
Why Have Your Own Website?
I have a project called “Na Derevo” (On the Tree). I create digital tools and write about development. My main product right now is the Telegram bot MENO for task management. Before this, everything lived on third-party platforms: articles on Habr, posts on dev.to, updates in my Telegram channel.
The problem: the content was scattered, nothing was connected, and if tomorrow any platform changed its rules — I would lose everything.
That’s why I built a website that solves three main tasks:
- MENO landing page — a place to send people who want to understand what this bot is.
- Blog — so my articles live with me, not only on someone else’s platforms.
- Central hub — so everything I do is in one place.
And there’s one more, less obvious reason.
POSSE: Publish on Your Own Site, Syndicate Elsewhere
There’s an approach called POSSE — Publish on Own Site, Syndicate Elsewhere. The idea is simple: the original version of the content always lives on your own website, while you publish copies on Habr, dev.to, Reddit, or anywhere else.
Why do this:
- You retain the rights. You are the author, you have the canonical URL, you control the text.
- You don’t depend on platforms. A platform can shut down, ban you, or change its algorithms — your content remains safely on your own domain.
- You can connect any tools. RSS, newsletters, auto-posting — anything you want, without platform restrictions.
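In practice, many syndication platforms support this directly. For example, dev.to lets you point a syndicated copy back to the original via the `canonical_url` front matter field (the title and URL below are illustrative, not the actual article's):

```yaml
---
title: "My Article"
published: true
canonical_url: "https://naderevo.com/en/blog/my-article/"
---
```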
This article, by the way, was also written following the POSSE principle — the original lives in my blog, and what you’re reading now is the syndicated version.
Technical Foundation: Astro and Two Languages
The site is built with Astro — a framework for static websites. No client-side JavaScript for content; everything is rendered into clean HTML during the build process. For search engines and AI crawlers, this is ideal: they see a ready-made page, not an empty <div id="app"> that still needs to be executed.
The site is bilingual:
- /ru/ — Russian version
- /en/ — English version
Blog posts are stored in Markdown using Astro Content Collections — one file per language. This allows me to maintain one blog in two languages, while each article knows about its alternative language version.
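As a rough sketch, a post's frontmatter might look like this (apart from `routeSlug`, which the site actually uses to build URLs, the field names and values here are illustrative):

```yaml
# src/content/blog/en/my-article.md
---
title: "How I Stopped Losing Tasks in Telegram"
description: "Why tasks drown in chats and what to do about it"
lang: "en"
routeSlug: "my-article"
datePublished: 2026-07-10
---
```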
The root of the site (/) is not a content page. It’s a redirect entry point: a script detects the browser language or the user’s saved preference and sends the visitor to /ru/ or /en/. Convenient for UX; for SEO it’s a compromise, which I’ll come back to below.
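The redirect logic can be sketched roughly like this (the function name and storage key are my assumptions; the real script may differ):

```typescript
// Minimal sketch of the root-redirect decision: a saved user preference
// wins; otherwise fall back to the browser language, defaulting to English.
function pickLocale(saved: string | null, browserLang: string): "ru" | "en" {
  if (saved === "ru" || saved === "en") return saved; // explicit choice wins
  return browserLang.toLowerCase().startsWith("ru") ? "ru" : "en";
}

// In the page it would run as something like:
// location.replace("/" + pickLocale(localStorage.getItem("locale"), navigator.language) + "/");
```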
SEO: What I Set Up and Why
SEO in 2026 is no longer about “stuffing keywords into the text.” It’s a set of technical signals that help search engines understand what the page is about, what language it’s in, and which version to show the user.
Here’s what I configured.
Basic Meta Tags
Every page has its own:
- <title> — the title visible in search results
- <meta name="description"> — the description shown under the title in search results
It sounds obvious, but the number of sites where every page has the same title is surprising. My titles and descriptions are unique for each page: homepage, MENO landing page, and every blog article.
Canonical
<link rel="canonical" href="https://naderevo.com/ru/blog/my-article/" />
The canonical tag tells search engines: “this is the original version of this page.” If content is available under multiple URLs (for example, with and without parameters), canonical indicates which URL should be indexed. Without it, search engines may treat duplicates as different pages and split the ranking.
Hreflang
<link rel="alternate" hreflang="ru" href="https://naderevo.com/ru/blog/my-article/" />
<link rel="alternate" hreflang="en" href="https://naderevo.com/en/blog/my-article/" />
Hreflang is a way to tell search engines: “this page has a version in another language, here it is.” Google uses hreflang to show users the correct language version in search results.
I have hreflang set up for all pages, including blog posts. The alternate links are built from the routeSlug field in the frontmatter — a field that explicitly defines the URL segment. This is more reliable than depending on automatically generated slugs.
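A sketch of how such alternates can be generated from a routeSlug (the helper name and constants are my assumptions, not the site's actual code):

```typescript
// Build hreflang <link> tags for every locale from a single routeSlug.
const SITE = "https://naderevo.com";
const LOCALES = ["ru", "en"] as const;

function hreflangLinks(routeSlug: string): string[] {
  return LOCALES.map(
    (lang) =>
      `<link rel="alternate" hreflang="${lang}" href="${SITE}/${lang}/blog/${routeSlug}/" />`
  );
}
```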
Open Graph and Twitter Cards
<meta property="og:title" content="..." />
<meta property="og:description" content="..." />
<meta property="og:image" content="..." />
<meta name="twitter:card" content="summary_large_image" />
These are what people see when someone shares a link on social media or messengers — title, description, and image. Without OG tags, the link looks like a bare URL. With them — it appears as a nice card.
For now, I’m using universal OG images: one for the site, one for the blog, and one for MENO. Not ideal for CTR, but sufficient at the current stage.
Robots.txt and Sitemap
User-agent: *
Allow: /
Sitemap: https://naderevo.com/sitemap-index.xml
The robots.txt is open to everyone — both search engine bots and AI crawlers. No blocks.
The sitemap is generated automatically during the build and includes all indexable pages. I removed junk URLs and the root / (because it’s a redirect, not content).
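With Astro, excluding the root from the sitemap can be done via the `filter` option of the official @astrojs/sitemap integration — a sketch of the config, not necessarily the site's actual setup:

```typescript
// astro.config.mjs
import { defineConfig } from "astro/config";
import sitemap from "@astrojs/sitemap";

export default defineConfig({
  site: "https://naderevo.com",
  integrations: [
    sitemap({
      // keep only real content pages; the root "/" is a redirect, not content
      filter: (page) => page !== "https://naderevo.com/",
    }),
  ],
});
```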
Structured Data (JSON-LD)
This is markup that helps search engines not just read the page, but understand what’s on it. Not “text about a bot,” but “this is a software product, here’s its name, here’s the author, here’s the rating.”
The site has three types of structured data:
- WebSite on the homepage — description of the site as an entity
- SoftwareApplication on the MENO landing page — description of the product
- BlogPosting on articles — description of each article with author, date, and headline
Example for an article:
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "How I Stopped Losing Tasks in Telegram",
  "author": {
    "@type": "Person",
    "name": "Nikita"
  },
  "datePublished": "2026-07-10",
  "description": "..."
}
Search engines see this and can use it for rich snippets — enhanced cards in search results.
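Generating this markup from post data is a few lines of code. A minimal sketch (the type and helper names are mine; field names mirror the example above):

```typescript
// Serialize post data into a BlogPosting JSON-LD string,
// ready to drop into <script type="application/ld+json">.
type Post = { title: string; author: string; published: string; description: string };

function blogPostingJsonLd(post: Post): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    headline: post.title,
    author: { "@type": "Person", name: post.author },
    datePublished: post.published,
    description: post.description,
  });
}
```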
GEO: What It Is and Why It Matters
Now for something many people still don’t take into account.
GEO — Generative Engine Optimization — is the optimization of a website not for search engine robots, but for large language models. ChatGPT, Perplexity, Gemini, Claude — they all know how to crawl websites, read content, and cite it in their answers. The question is: will your site be among the sources they quote?
If SEO is “how to get into Google’s search results,” then GEO is “how to get into ChatGPT’s answers.”
It may sound like something from the distant future, but it’s already working. People are increasingly searching not in Google, but by asking AI. And AI takes information from somewhere. If your content is clean, well-structured, and accessible for crawling — your chances of being cited increase.
What Matters for GEO
Content in clean HTML, not client-side JavaScript. AI crawlers don’t always execute JavaScript. If your content is rendered on the client — the model may see an empty page. Astro renders everything to HTML during build — the crawler sees ready text.
Don’t block AI crawlers. Some sites block ChatGPT, Anthropic, and other bots in robots.txt. I deliberately left everything open. If a model wants to read my content and reference it — be my guest.
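For example, a robots.txt that names the major AI crawlers explicitly might look like this (the user-agent strings are the ones the vendors document — GPTBot for OpenAI, ClaudeBot for Anthropic, PerplexityBot for Perplexity — but verify them against current docs before relying on this):

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://naderevo.com/sitemap-index.xml
```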
Structured data. JSON-LD helps not only Google but also language models. When a page has BlogPosting markup with author, date, and description — it’s easier for models to understand the context and decide whether to cite it.
Author layer. This is a less obvious but important point. Content without an author looks impersonal. The model cannot attribute it, and therefore is less likely to cite it. I added a visible author block to every article and mapped the author in the schema to a Person type with a name.
What GEO Doesn’t Cover (Yet)
The root of the site remains a redirect entry point — for a language model, this is a weak page. If someone asks AI “what is naderevo.com,” the model may not get an answer from the root because there’s no content there — only a redirect script.
Comparative articles in the blog can be strengthened with links to primary sources — this increases the model’s trust in the content.
The homepage is brand-oriented but not answer-first — it doesn’t answer the question directly in the first paragraph. For GEO, it’s better when the page starts with a direct answer and then expands on the details.
None of this is critical, but it remains on the to-do list.
Google Search Console and Yandex.Webmaster
On the day I wrote this article, I registered the site in Google Search Console and Yandex.Webmaster. There are no statistics yet — both services need time to process the site.
Why do this even if you have a brand-new site with zero traffic:
- You tell the search engine about your existence. Without registration, Google might find your site on its own — or it might not. Search Console is a guarantee.
- You can submit the sitemap manually. The search engine will start crawling pages faster.
- You can see indexing errors. If something is broken — hreflang, canonical, redirect — Console will show it. Better to find out now than in three months.
- Query data. When traffic appears, you’ll see which queries people find your site with, which pages are shown, and with what CTR.
For Yandex, everything is similar, only through Webmaster. If you target a Russian-speaking audience — this is mandatory.
I registered in both. In a couple of weeks I’ll check what they found and share the results.
Lighthouse: Making Sure It Wasn’t in Vain
Another tool worth knowing about is Lighthouse. It’s a built-in audit in Chrome DevTools that evaluates a page on four parameters:
- Performance — how fast the page loads and becomes interactive
- Accessibility — how accessible the site is: contrast, alt texts, keyboard navigation
- Best Practices — adherence to web standards: HTTPS, correct headers, no deprecated APIs
- SEO — basic technical SEO hygiene: meta tags, canonical, readability for crawlers
Each parameter is scored from 0 to 100; the green zone starts at 90. A high score is not a guarantee of success in search results, but a low one points to specific problems worth fixing. I ran the audit on my site. Results:
Homepage:
| Performance | Accessibility | Best Practices | SEO |
|---|---|---|---|
| 99 | 100 | 96 | 100 |
MENO Landing Page:
| Performance | Accessibility | Best Practices | SEO |
|---|---|---|---|
| 93 | 95 | 96 | 100 |
The homepage scores almost maximum across all parameters. This is thanks to Astro: static HTML without client-side JavaScript loads instantly.
The landing page is slightly lower in Performance (93) and Accessibility (95) — it has more content, product screenshots, and interactive sections. That’s normal. 93 is still in the green zone, and I know where to dig if I want to push it further.
Why run Lighthouse at all: it’s a free sanity check. A couple of minutes is enough to see whether you forgot alt text on an image, whether a font loads in a render-blocking way, or whether a meta description is empty. Better to hear it from Lighthouse than from a user who left the site after half a second.
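If you prefer the command line to DevTools, the same audit can be run via the Lighthouse CLI (requires Node and Chrome; a sketch, with standard flags):

```sh
npx lighthouse https://naderevo.com/en/ \
  --only-categories=performance,accessibility,best-practices,seo \
  --view
```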
Summary
The site was launched yesterday. Traffic is still zero. But the technical foundation is in place so that both search engines and AI models can see the content.
What has been done:
- Static site on Astro, two languages, clean HTML
- SEO: canonical, hreflang, OG, structured data, sitemap, unique meta tags
- GEO: open robots.txt, author layer, Person schema, content without client-side JS
- POSSE: original content on my own site, syndication to other platforms
- Google Search Console and Yandex.Webmaster connected
SEO and GEO are marathons. GEO rules are still evolving. But if you don’t start now — catching up later will be much harder.
Links
Website: naderevo.com
MENO Bot: @menoapp_bot




