<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Miguel Álvarez</title>
    <description>The latest articles on DEV Community by Miguel Álvarez (@malvads).</description>
    <link>https://dev.to/malvads</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3746027%2F99e06ff5-b4eb-4e3a-a3f8-a8ac49051b44.png</url>
      <title>DEV Community: Miguel Álvarez</title>
      <link>https://dev.to/malvads</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/malvads"/>
    <language>en</language>
    <item>
      <title>Mojo: A Lightweight C++ Web Crawler for Converting Websites to RAG-Ready Data (Fast, Simple, CI/CD-Friendly)</title>
      <dc:creator>Miguel Álvarez</dc:creator>
      <pubDate>Sun, 01 Feb 2026 23:17:13 +0000</pubDate>
      <link>https://dev.to/malvads/mojo-a-lightweight-c-web-crawler-for-converting-websites-to-rag-ready-data-fast-simple-36ia</link>
      <guid>https://dev.to/malvads/mojo-a-lightweight-c-web-crawler-for-converting-websites-to-rag-ready-data-fast-simple-36ia</guid>
      <description>&lt;p&gt;When building RAG systems or LLM-powered pipelines, you often don’t need a massive distributed crawler or a cloud scraping platform.&lt;/p&gt;

&lt;p&gt;Most of the time, you just want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Crawl a website deeply&lt;/li&gt;
&lt;li&gt;Convert pages into clean text (Markdown)&lt;/li&gt;
&lt;li&gt;Feed them into embeddings or downstream processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, many existing tools introduce complexity or overhead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scrapy is extremely powerful and flexible, but requires writing spiders, managing Python dependencies, and building custom pipelines.&lt;/li&gt;
&lt;li&gt;Apify offers a full scraping platform, but relies on cloud infrastructure, subscriptions, and heavier runtime environments (Node.js/Python).&lt;/li&gt;
&lt;li&gt;Firecrawl and similar APIs are great for large-scale ingestion, but can be overkill if you want reproducible, local-first CI workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s why I built Mojo, a lightweight, cross-platform C++ web crawler designed specifically for LLM/RAG workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Built Mojo
&lt;/h2&gt;

&lt;p&gt;Mojo focuses on one thing: efficiently crawling websites and producing clean, structured output suitable for LLM pipelines.&lt;/p&gt;

&lt;p&gt;Compared to Python- or Node-based crawlers, Mojo is significantly faster and lighter on CPU and RAM, making it ideal for cloud jobs, Lambdas, CI pipelines, or cheap servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Example
&lt;/h2&gt;

&lt;p&gt;Crawl an entire documentation site up to depth 2 and export everything as Markdown:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./mojo -d 2 https://docs.example.com -o ./docs&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  For JS-rendered websites (SPAs):
&lt;/h2&gt;



&lt;p&gt;&lt;code&gt;./mojo --render https://spa-example.com -o ./docs_rendered&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Note: &lt;code&gt;--render&lt;/code&gt; requires Chromium/Chrome to be installed on the machine.&lt;/p&gt;
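
&lt;p&gt;If the machine doesn’t already have a browser (a bare CI runner, for instance), installing Chromium is usually enough. On Debian/Ubuntu that’s roughly the line below; the package name varies by distribution (&lt;code&gt;chromium&lt;/code&gt; on some systems), and GitHub-hosted Ubuntu runners typically ship with Chrome preinstalled:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install -y chromium-browser&lt;/code&gt;&lt;/p&gt;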

&lt;h2&gt;
  
  
  Using proxies:
&lt;/h2&gt;



&lt;p&gt;&lt;code&gt;./mojo -p socks5://127.0.0.1:9050 https://target.com&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Or with a proxy list:
&lt;/h2&gt;



&lt;p&gt;&lt;code&gt;./mojo --config example_config.yaml https://target.com&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
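
&lt;p&gt;A minimal proxy-list config looks roughly like the sketch below; the key names here are illustrative, so treat &lt;code&gt;example_config.yaml&lt;/code&gt; in the repo as the authoritative schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# illustrative sketch, see example_config.yaml in the repo for the real keys
proxies:
  - socks5://127.0.0.1:9050
  - http://10.0.0.2:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;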

&lt;h2&gt;
  
  
  Perfect for CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;Mojo was built with automation in mind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example GitHub Actions workflow:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Generate docs with Mojo

on:
  workflow_dispatch:
  schedule:
    - cron: '0 3 * * *'

jobs:
  crawl:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Download Mojo
        run: |
          curl -L -o mojo https://github.com/malvads/mojo/releases/download/v0.1.0/mojo-0.1.0-linux-x86_64
          chmod +x mojo

      - name: Run crawler
        run: ./mojo -d 2 https://docs.example.com -o ./generated_docs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
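
&lt;p&gt;To make the crawled Markdown available to a downstream job (an embedding step, for example), one option is to publish it as a build artifact with the standard upload-artifact action. The step below is a sketch you can append to the workflow above; the artifact name is just an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Upload crawled docs
        uses: actions/upload-artifact@v4
        with:
          name: generated-docs      # example artifact name
          path: ./generated_docs    # must match the -o directory above
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;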



&lt;h2&gt;
  
  
  When Should You Use Mojo?
&lt;/h2&gt;

&lt;p&gt;Use Mojo if you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Want fast website → Markdown conversion&lt;/li&gt;
&lt;li&gt;Prefer local tools over cloud services&lt;/li&gt;
&lt;li&gt;Care about performance and reproducibility&lt;/li&gt;
&lt;li&gt;Are building RAG, search, or LLM pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You might prefer heavier frameworks if you need advanced per-page scraping logic or complex data-extraction workflows.&lt;/p&gt;

&lt;p&gt;But for most LLM ingestion use cases, Mojo keeps things simple and efficient.&lt;/p&gt;

&lt;p&gt;Mojo is fully open source under the MIT license.&lt;/p&gt;

&lt;p&gt;Feel free to check it out -&amp;gt; &lt;a href="https://github.com/malvads/mojo" rel="noopener noreferrer"&gt;https://github.com/malvads/mojo&lt;/a&gt; :)&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>rag</category>
      <category>showdev</category>
      <category>webscraping</category>
    </item>
  </channel>
</rss>
