If you are building an AI application like a chatbot, a summarizer, or a research agent, you have likely run into the garbage in, garbage out problem.
Say you want to let users ask your chatbot about your products. So you spin up a headless browser with Puppeteer, dump document.body.innerHTML, and feed it to OpenAI or Claude.
That approach has three problems:
- Token Waste: Raw HTML is mostly boilerplate: divs, class names, scripts, and inline styles. You are paying for tokens that carry no semantic meaning.
- Hallucinations: Navigation bars, footers, and cookie banners pollute the context and invite off-topic or invented answers.
- Bot Detection: If you try to scrape a modern React site from your local server, you'll often get blocked by Cloudflare or CAPTCHAs.
The solution is to stop scraping HTML and start extracting Markdown.
In this tutorial, I’ll show you how to use the Geekflare API to turn any webpage into LLM-ready Markdown.
Why Markdown?
LLMs love Markdown. It captures the structure of a document (headers, lists, tables) without the noise of HTML.
HTML example
<div class="content-wrapper">
  <h1 class="hero-title">The Future of AI</h1>
  <div class="ad-banner">...</div>
  <p class="text-body">AI is changing how we code...</p>
</div>
Markdown example
# The Future of AI
AI is changing how we code...
Scraping Setup
We are going to use Node.js for this, but you can use Python, Go, or any of your favorite languages.
You will need:
- Geekflare API Key
- Node.js installed
We aren't going to use Puppeteer, because we don't want to manage headless Chrome instances ourselves. We will offload that to the API.
Create a file named scrape.js:
const axios = require('axios');

const GEEKFLARE_API_KEY = 'YOUR_API_KEY';

async function scrapeToMarkdown(targetUrl) {
  try {
    const response = await axios.post(
      'https://api.geekflare.com/webscraping',
      {
        url: targetUrl,
        format: 'markdown',
      },
      {
        headers: {
          'x-api-key': GEEKFLARE_API_KEY,
          'Content-Type': 'application/json',
        },
      }
    );
    console.log('--- SCRAPED MARKDOWN ---');
    console.log(response.data.data);
    // Return the Markdown so other parts of the pipeline can reuse it
    return response.data.data;
  } catch (error) {
    console.error('Scraping failed:', error.response ? error.response.data : error.message);
  }
}

scrapeToMarkdown('https://docs.docker.com/get-started/');
The Geekflare Scraping API handles the rendering, blocking, and formatting.
Connecting to an LLM for RAG
Now that you have clean Markdown, the cost savings are massive.
If you send raw HTML to GPT 5.2, a standard blog post might cost you 4,000 tokens. If you send the Markdown version, it will be ~1,200 tokens.
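You can sanity-check that gap yourself. A common rule of thumb for English text is roughly four characters per token; this is only a heuristic (the real count depends on the model's tokenizer), but it's enough to compare the two formats:

```javascript
// Rough token estimate: ~4 characters per token for English text.
// This is a heuristic, not the model's actual tokenizer.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const htmlVersion =
  '<div class="content-wrapper"><h1 class="hero-title">The Future of AI</h1>' +
  '<p class="text-body">AI is changing how we code...</p></div>';
const markdownVersion = '# The Future of AI\n\nAI is changing how we code...';

console.log(`HTML: ~${estimateTokens(htmlVersion)} tokens`);
console.log(`Markdown: ~${estimateTokens(markdownVersion)} tokens`);
```

Even on this tiny snippet, the HTML version costs a multiple of the Markdown version, and the gap widens on real pages full of scripts and wrapper divs.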
Here is a code example of how the pipeline looks:
const OpenAI = require('openai');
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

(async () => {
  // Assumes scrapeToMarkdown returns the Markdown string
  const markdown = await scrapeToMarkdown('https://example.com/article');

  const completion = await openai.chat.completions.create({
    model: 'gpt-5.2',
    messages: [
      { role: 'system', content: 'You are a helpful assistant. Answer based on the context provided.' },
      { role: 'user', content: `Context: ${markdown}\n\nQuestion: Summarize this article.` },
    ],
  });

  console.log(completion.choices[0].message.content);
})();
Conclusion
Building a scraping pipeline in-house is fun until you have to maintain it. Websites change their DOM structure, new anti-bot measures are deployed, and your IP gets banned.
If your goal is to build an AI product, don't waste time building a scraper. Offload the infrastructure so you can focus on the intelligence.
You can grab a scraping API key and try the Markdown extraction.