Introduction
Blogs have driven traffic to websites since the dawn of the internet. A blog is one of the most reliable organic ways to earn visits, build credibility, and attract backlinks to your website.
According to DemandSage, 80% of businesses use blogs as a marketing tool, and blogging can boost web traffic by 55%. Many people feel blogging is dead because of the rise of generative AI, but AI chatbots have actually opened up another traffic source you can add to your website, and several studies suggest that visitors referred by generative engines are more likely to convert.
So a blog is still a very valuable asset for any website, and for a developer-tools SaaS it is essential. Today, we are going to look at SEO strategies a developer can use to improve both the SEO (Search Engine Optimization) and GEO (Generative Engine Optimization) of a website. This is a developer-focused guide, and I have used all of these methods on my own website, surajon.dev.
So, let’s get started.
Structural Integrity: Semantic HTML for Search Engines
Using the correct HTML tags helps crawlers and bots understand the structure of a webpage, which in turn helps it rank for relevant keywords. Here are the tags to focus on.
<div>
Rather than spamming <div> tags everywhere on the article page, use them only where no specific semantic tag fits. Prefer <p>, headings, and other semantic elements.
<article>
Place the article inside the <article> tag. It is used for a self-contained, independent piece of content (a blog post, forum comment, product card).
<section>
Use it to group related content within a document (e.g., chapters of an article, different feature sections on a homepage). Each section should have its own heading (<h2> to <h6>, since the single <h1> is reserved for the page title).
<code>
Rather than using a <span> tag and styling it, use the <code> tag for inline code snippets. This is crucial for content about programming.
On our website, we use the appropriate semantic tag for each type of content to help the page rank better. A practical way to achieve this is to write articles in markdown and convert them to HTML, which produces the correct tags automatically; I use @tailwindcss/typography to style each element properly.
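For reference, here is a minimal sketch of what such an article layout can look like as a React component. The Post type and its fields (contentHtml, publishedAt) are illustrative placeholders, not the exact code from my site.

// Hypothetical blog post layout; type and field names are illustrative.
type Post = {
  title: string;
  publishedAt: string;
  contentHtml: string; // HTML produced from markdown (e.g. via remark/rehype)
};

export function BlogPost({ post }: { post: Post }) {
  return (
    // <article> wraps the self-contained post
    <article className="prose">
      <header>
        {/* Exactly one <h1> per page: the article title */}
        <h1>{post.title}</h1>
        <time dateTime={post.publishedAt}>{post.publishedAt}</time>
      </header>
      {/* Markdown converted to semantic HTML (<h2>, <p>, <code>, ...)
          and styled with @tailwindcss/typography's prose classes */}
      <section dangerouslySetInnerHTML={{ __html: post.contentHtml }} />
    </article>
  );
}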
Rendering Method to Use
Avoid client-side rendering for blog content: the browser first receives a nearly empty HTML shell and only fills it in after the JavaScript runs, so crawlers that do not execute JavaScript (or give up waiting) may see a blank page. That poses a high SEO risk and can result in a blank page being indexed.
Server-Side Rendering (SSR) is the ideal way to serve a blog post. The fully formed HTML document is rendered on the server and then sent to the user, so crawlers receive the complete page, and the article also tends to load faster.
We use the server-side rendering capabilities of Next.js to render every page on the server before sending it to the user.
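As a rough sketch, a blog route in the Next.js App Router is a server component by default, so the full HTML is already rendered when a crawler requests the page. The getPostData helper and BlogPost component below are hypothetical placeholders for however you load and render your content.

// app/blog/[slug]/page.tsx
// A server component by default: the article HTML is rendered on the
// server, so crawlers receive the complete page, not an empty shell.
import { BlogPost } from "@/components/BlogPost"; // hypothetical component
import { getPostData } from "@/lib/posts"; // hypothetical data loader

export default async function BlogPostPage({
  params,
}: {
  params: { slug: string };
}) {
  const post = await getPostData(params.slug); // runs on the server
  return <BlogPost post={post} />;
}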
Core Web Vitals
Core Web Vitals do not affect SEO through your content, but they measure the performance of the article page, and better performance means a better user experience, which is a ranking factor.
Here are some of the core vitals that you can improve.
- Largest Contentful Paint
This is the time it takes for the largest image or text block in the viewport to become visible. Make sure the LCP element is loaded with the highest priority (for example with a preload hint, and never lazy-loaded), as in the sketch below.
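For example, if the article's header image is the LCP element and you are on Next.js, the next/image priority prop preloads it instead of lazy-loading it. A small sketch with illustrative props:

import Image from "next/image";

// The header image is the likely LCP element: `priority` tells Next.js to
// preload it instead of lazy-loading it, so it paints as early as possible.
export function PostHeaderImage({ src, alt }: { src: string; alt: string }) {
  return <Image src={src} alt={alt} width={1200} height={630} priority />;
}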
- Interaction to Next Paint
Break up long JavaScript tasks, and load scripts that are not needed right away (analytics, chat widgets, and so on) with strategy="afterInteractive". This loads the script after the page becomes interactive, which reduces the work competing with user input during the initial load and improves interactivity.
We have used this on our website via the Next.js <Script> component.
import Script from "next/script";

<Script
  src="https://cloud.umami.is/script.js"
  data-website-id="idddfadf-dfadfas"
  strategy="afterInteractive"
/>
- Speed Index
Speed Index measures how quickly content is visually populated during load. It quantifies the effectiveness of your rendering strategy: the time the user spends staring at a blank or partial screen.
For large SPAs (React, Vue), implement code splitting so the initial load only includes the JavaScript needed for the current route or view, as in the sketch below.
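In Next.js, one way to do this is next/dynamic, which moves a component into its own chunk that is fetched only when it renders. A small sketch, assuming a hypothetical heavy CommentSection component:

import dynamic from "next/dynamic";

// A heavy widget that is not needed for the first paint. next/dynamic
// splits it into its own chunk, so the initial bundle only contains the
// JavaScript the article itself needs.
const CommentSection = dynamic(() => import("@/components/CommentSection"), {
  loading: () => <p>Loading comments...</p>,
});

export function PostFooter() {
  return <CommentSection />;
}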
Meta Tags and H1 tag
Meta tags such as title and description should be unique for each article and describe it accurately. Keep the title between 50 and 60 characters and the description between 150 and 160 characters for SEO purposes.
Do not spam H1 tags on the webpage: a page should have exactly one H1. On our website, the H1 is the article title, and headings inside the article start at H2. This helps crawlers and chatbots understand the article and the webpage correctly. Here is how we generate per-article metadata in Next.js.
// In your app/blog/[slug]/page.tsx
export async function generateMetadata({ params }) {
  const post = await getPostData(params.slug);

  return {
    title: post.title,
    description: post.summary,
    openGraph: {
      title: post.title,
      description: post.summary,
      images: [post.headerImage], // Must be an absolute URL
    },
    twitter: {
      card: 'summary_large_image',
      title: post.title,
      description: post.summary,
      images: [post.headerImage], // Must be an absolute URL
    },
  };
}
sitemap.xml and robots.txt
A sitemap.xml file is crucial for helping search engines discover all the pages on a website. It helps them find new pages and index them quickly. Below is an example of a sitemap file.
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.easywrite.dev</loc>
    <lastmod>2025-11-04T06:06:30.177Z</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.7</priority>
  </url>
  <url>
    <loc>https://www.easywrite.dev/blog</loc>
    <lastmod>2025-11-04T06:06:30.177Z</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.7</priority>
  </url>
  <url>
    <loc>https://www.easywrite.dev/terms-and-conditions</loc>
    <lastmod>2025-11-04T06:06:30.177Z</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.7</priority>
  </url>
  <url>
    <loc>https://www.easywrite.dev/privacy-policy</loc>
    <lastmod>2025-11-04T06:06:30.177Z</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.7</priority>
  </url>
</urlset>
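If you are on Next.js, you do not have to hand-write this file: the App Router can generate it from an app/sitemap.ts route. Here is a minimal sketch, assuming a hypothetical getAllPosts helper that returns each post's slug and last-updated date.

// app/sitemap.ts
import type { MetadataRoute } from "next";
import { getAllPosts } from "@/lib/posts"; // hypothetical content loader

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const posts = await getAllPosts();

  // One entry per blog post, rebuilt on every deploy
  const postEntries: MetadataRoute.Sitemap = posts.map((post) => ({
    url: `https://www.easywrite.dev/blog/${post.slug}`,
    lastModified: post.updatedAt,
    changeFrequency: "daily",
    priority: 0.7,
  }));

  return [
    {
      url: "https://www.easywrite.dev",
      lastModified: new Date(),
      changeFrequency: "daily",
      priority: 0.7,
    },
    ...postEntries,
  ];
}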
robots.txt defines rules for web crawlers, such as which paths they should not crawl and which bots are allowed or disallowed. I highly recommend letting generative AI crawlers access your website.
Here is an example of robots.txt.
# ===================================================================
# Rules for all crawlers, including standard search engines
# ===================================================================
User-agent: *
Allow: /
Disallow: /api/ # Disallow crawling of API routes, if you have any
# ===================================================================
# Specific rules for AI crawlers
# By explicitly allowing them, you signal that your content can be
# used for their training sets, in accordance with their policies.
# ===================================================================
# For OpenAI's models (ChatGPT)
User-agent: GPTBot
Allow: /
# For Google's AI models (Gemini, etc.)
User-agent: Google-Extended
Allow: /
# For other AI crawlers (add more as they become known)
User-agent: PerplexityBot
Allow: /
User-agent: Claude-Web
Allow: /
# ===================================================================
# Location of the sitemap
# ===================================================================
Sitemap: https://www.surajon.dev/sitemap.xml
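Next.js can also generate this file from an app/robots.ts route, which keeps the crawler rules next to the rest of your code. A minimal sketch mirroring the rules above:

// app/robots.ts
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      // Default rule for all crawlers
      { userAgent: "*", allow: "/", disallow: "/api/" },
      // Explicitly allow the AI crawlers listed above
      {
        userAgent: ["GPTBot", "Google-Extended", "PerplexityBot", "Claude-Web"],
        allow: "/",
      },
    ],
    sitemap: "https://www.surajon.dev/sitemap.xml",
  };
}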
EasyWrite.dev
We are building a platform that lets you manage
- Your blog strategies🚀
- Keyword research🔥
- Topic generation🧾
- AI Automation🤖
- SEO- and GEO-focused articles📈
You can join it from easywrite.dev.
Conclusion
As a developer and technical writer, I highly recommend using these techniques to improve the SEO and GEO of your articles and get more views from search engines and chatbots.
I hope this article helped you learn more about the technical side of blog SEO. Thanks for reading.