Building a RAG (Retrieval-Augmented Generation) pipeline sounds easy until you hit the data ingestion step.
If you are trying to build a "Chat with Docs" app on top of modern developer docs (like Next.js, Stripe, or Supabase), you know the pain:
- Hydration issues: A standard `fetch` or BeautifulSoup request gets an empty `div` because the content loads via JavaScript.
- Noise: You scrape the content, but you also get the navbar, the footer, the "Copyright 2025" line, and the "Sign Up" button. All that junk wastes your context-window tokens.
- Broken formatting: Code blocks lose their structure, and tables turn into a mess.
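To make the "noise" problem concrete, here is a minimal sketch (using only Python's stdlib `html.parser`, and a made-up sample page) of what a naive scraper actually hands you: every text node on the page, chrome included.

```python
from html.parser import HTMLParser

# Hypothetical miniature of a docs page: the sentence you want
# is sandwiched between navigation and footer chrome.
SAMPLE = """
<nav><a href="/signup">Sign Up</a></nav>
<main><p>Use the API key in the Authorization header.</p></main>
<footer>Copyright 2025</footer>
"""

class TextDump(HTMLParser):
    """Naive extractor: keeps every text node, chrome included."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

parser = TextDump()
parser.feed(SAMPLE)
print(parser.chunks)
# → ['Sign Up', 'Use the API key in the Authorization header.', 'Copyright 2025']
```

Two of those three strings are pure noise, and every one of them would be embedded and stored in your vector DB.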
## The Solution
I got tired of fixing these issues manually for every project, so I built a specialized Actor on Apify designed specifically for RAG pipelines.
It does three things:
- Headless rendering: It uses a headless browser to wait for the page to fully hydrate.
- Smart extraction: It identifies the main content area (`<article>`, `<main>`, etc.) and strips away the UI noise.
- Markdown conversion: It turns the HTML into clean Markdown, preserving code blocks and tables.
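The extraction and conversion steps can be sketched in a few dozen lines of stdlib Python. This is not the Actor's actual implementation, just a simplified illustration of the idea: keep only what sits inside `<main>`/`<article>`, drop chrome tags, and emit Markdown with code blocks intact.

```python
from html.parser import HTMLParser

CONTENT_TAGS = {"main", "article"}                    # where real content lives
DROP_TAGS = {"nav", "footer", "header", "aside", "script"}

class DocsToMarkdown(HTMLParser):
    """Toy sketch of 'smart extraction' + Markdown conversion."""
    def __init__(self):
        super().__init__()
        self.in_content = 0   # depth inside <main>/<article>
        self.skip = 0         # depth inside chrome tags
        self.in_pre = False
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in CONTENT_TAGS:
            self.in_content += 1
        elif tag in DROP_TAGS:
            self.skip += 1
        elif self.in_content and not self.skip:
            if tag == "h2":
                self.out.append("\n## ")
            elif tag == "p":
                self.out.append("\n")
            elif tag == "pre":           # preserve code blocks as fences
                self.in_pre = True
                self.out.append("\n```\n")

    def handle_endtag(self, tag):
        if tag in CONTENT_TAGS:
            self.in_content -= 1
        elif tag in DROP_TAGS:
            self.skip -= 1
        elif tag == "pre" and self.in_pre:
            self.in_pre = False
            self.out.append("\n```\n")

    def handle_data(self, data):
        if self.in_content and not self.skip:
            # Keep whitespace verbatim inside <pre>, trim it elsewhere.
            self.out.append(data if self.in_pre else data.strip())

    def markdown(self):
        return "".join(self.out).strip()

page = """<nav>Home | Docs</nav>
<main><h2>Auth</h2><p>Send the key.</p><pre>curl -H "Authorization: Bearer KEY"</pre></main>
<footer>Copyright 2025</footer>"""
md = DocsToMarkdown()
md.feed(page)
print(md.markdown())
```

The real Actor handles far messier HTML (nested layouts, tables, syntax-highlighted code), but the shape of the problem is the same.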
## How to use it
You can try it for free on Apify. Just plug in the URL of the documentation (e.g., https://docs.stripe.com/) and you get JSON/Markdown output ready for your vector database.
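If you would rather call it from code than from the Apify console, something like the following should work. The endpoint pattern (`run-sync-get-dataset-items`, with `~` replacing `/` in the Actor id) follows Apify's public REST API; the `startUrls` input field is an assumption on my part — check the Actor's input schema for the real parameter names.

```python
import json
import urllib.request

APIFY_TOKEN = "YOUR_APIFY_TOKEN"          # placeholder: your real token
ACTOR_ID = "hedelka~tech-docs-scraper"    # "/" becomes "~" in the API path

def build_run_request(start_url: str) -> urllib.request.Request:
    """Build (but don't send) a run-sync request for the Actor."""
    url = (f"https://api.apify.com/v2/acts/{ACTOR_ID}"
           f"/run-sync-get-dataset-items?token={APIFY_TOKEN}")
    # NOTE: "startUrls" is a guess at the input schema, not confirmed.
    payload = json.dumps({"startUrls": [{"url": start_url}]}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})

req = build_run_request("https://docs.stripe.com/")
# To actually run it (uses Apify credits):
# items = json.load(urllib.request.urlopen(req))
```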
👉 Link to the tool: https://apify.com/hedelka/tech-docs-scraper
I'm currently using it to feed Pinecone for my personal projects. Let me know if it helps with your data ingestion layer!
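For the ingestion side, one nice property of Markdown output is that you can chunk on headings instead of arbitrary character offsets, so each chunk stays a coherent section. Here is a minimal sketch (my own simplification — real pipelines usually add chunk overlap and token-based sizing before embedding into Pinecone or similar):

```python
import re

def chunk_markdown(md: str, max_chars: int = 800) -> list[str]:
    """Split Markdown on headings; fall back to paragraph splits
    for sections that exceed max_chars."""
    # Zero-width split: keep each heading attached to its body.
    sections = re.split(r"(?m)^(?=#{1,6} )", md)
    chunks = []
    for sec in sections:
        sec = sec.strip()
        if not sec:
            continue
        if len(sec) <= max_chars:
            chunks.append(sec)
            continue
        buf = ""
        for para in sec.split("\n\n"):
            if buf and len(buf) + len(para) > max_chars:
                chunks.append(buf.strip())
                buf = ""
            buf += para + "\n\n"
        if buf.strip():
            chunks.append(buf.strip())
    return chunks

doc = "## Auth\nSend the key.\n\n## Errors\nRetry on 429."
print(chunk_markdown(doc))
# → ['## Auth\nSend the key.', '## Errors\nRetry on 429.']
```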