<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: vivek</title>
    <description>The latest articles on DEV Community by vivek (@crawlycat).</description>
    <link>https://dev.to/crawlycat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3839256%2Ffeb89ede-9a92-4d77-a278-872efdd896f7.png</url>
      <title>DEV Community: vivek</title>
      <link>https://dev.to/crawlycat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/crawlycat"/>
    <language>en</language>
    <item>
      <title>I built my own website crawler because SEO tools were too restrictive</title>
      <dc:creator>vivek</dc:creator>
      <pubDate>Mon, 23 Mar 2026 04:45:10 +0000</pubDate>
      <link>https://dev.to/crawlycat/i-built-my-own-website-crawler-because-seo-tools-were-too-restrictive-2bp9</link>
      <guid>https://dev.to/crawlycat/i-built-my-own-website-crawler-because-seo-tools-were-too-restrictive-2bp9</guid>
<description>&lt;h3&gt;The problem&lt;/h3&gt;

&lt;p&gt;I run &lt;a href="https://nerdyelectronics.com" rel="noopener noreferrer"&gt;nerdyelectronics.com&lt;/a&gt;, a tech blog. Every time I published a batch of posts or changed my site structure, I'd go through the same painful cycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make changes&lt;/li&gt;
&lt;li&gt;Wait for Google Search Console to recrawl (days)&lt;/li&gt;
&lt;li&gt;Discover broken links and issues... days later&lt;/li&gt;
&lt;li&gt;Fix them&lt;/li&gt;
&lt;li&gt;Wait again&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I tried the usual tools. Screaming Frog is powerful but feels like enterprise software for a simple job. SaaS crawlers charge per page or per scan. I just wanted to know: &lt;strong&gt;what's broken on my site right now?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;CrawlyCat&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;What it does&lt;/h3&gt;

&lt;p&gt;CrawlyCat crawls your website and reports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP 4xx/5xx errors&lt;/li&gt;
&lt;li&gt;Redirect chains&lt;/li&gt;
&lt;li&gt;Missing or bad &lt;code&gt;&amp;lt;title&amp;gt;&lt;/code&gt; and meta descriptions&lt;/li&gt;
&lt;li&gt;Missing or duplicate &lt;code&gt;&amp;lt;h1&amp;gt;&lt;/code&gt; tags&lt;/li&gt;
&lt;li&gt;Internal broken links&lt;/li&gt;
&lt;li&gt;External link inventory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing revolutionary — but it runs &lt;strong&gt;locally&lt;/strong&gt;, has &lt;strong&gt;no limits&lt;/strong&gt;, and takes about 30 seconds to set up.&lt;/p&gt;
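&lt;p&gt;The checks above are straightforward to sketch. Here is a minimal stdlib version of the title / meta description / H1 checks using Python's &lt;code&gt;html.parser&lt;/code&gt;; CrawlyCat itself parses with BeautifulSoup, so treat this as illustrative rather than the actual implementation:&lt;/p&gt;

```python
# Illustrative stdlib sketch of the on-page checks listed above.
# CrawlyCat's real detector uses BeautifulSoup; logic here is analogous.
from html.parser import HTMLParser

class IssueChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.h1_count = 0
        self.has_title = False
        self.has_meta_desc = False
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "h1":
            self.h1_count += 1
        elif tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            # Only a non-empty content attribute counts as a description.
            if attrs.get("content", "").strip():
                self.has_meta_desc = True

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.has_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

def find_issues(html):
    """Return a list of issue labels for one page's HTML."""
    checker = IssueChecker()
    checker.feed(html)
    issues = []
    if not checker.has_title:
        issues.append("missing title")
    if not checker.has_meta_desc:
        issues.append("missing meta description")
    if checker.h1_count == 0:
        issues.append("missing h1")
    elif checker.h1_count > 1:
        issues.append("duplicate h1")
    return issues
```

&lt;p&gt;A page that passes every check returns an empty list, which makes the per-page report trivial to aggregate.&lt;/p&gt;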

&lt;h3&gt;The architecture&lt;/h3&gt;

&lt;p&gt;Two crawling modes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browser mode&lt;/strong&gt; (default): Uses Playwright with headless Chromium. This handles JavaScript-rendered pages and even bypasses Cloudflare challenges using &lt;code&gt;playwright-stealth&lt;/code&gt;. Slower, but necessary for modern sites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fast mode&lt;/strong&gt;: Uses &lt;code&gt;httpx&lt;/code&gt; for raw HTTP requests. About 10x faster, great for static sites or blogs. No JS rendering.&lt;/p&gt;

&lt;p&gt;Both modes use BeautifulSoup for HTML parsing, share the same issue-detection logic, and respect &lt;code&gt;robots.txt&lt;/code&gt; automatically.&lt;/p&gt;
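&lt;p&gt;The robots.txt handling can be sketched with the stdlib's &lt;code&gt;urllib.robotparser&lt;/code&gt;. Function names below are hypothetical, not CrawlyCat's actual API, and in the real crawler the robots.txt body is fetched from the site root first:&lt;/p&gt;

```python
# Sketch of the always-on robots.txt check using the stdlib parser.
# build_robots / is_crawlable are illustrative names, not CrawlyCat's API.
from urllib import robotparser

def build_robots(robots_txt_lines):
    """Parse pre-fetched robots.txt lines into a reusable parser."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt_lines)
    return rp

def is_crawlable(rp, url, user_agent="CrawlyCat"):
    # Disallowed URLs are simply skipped; there is no override flag.
    return rp.can_fetch(user_agent, url)
```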

&lt;h3&gt;Three interfaces&lt;/h3&gt;

&lt;p&gt;This is where CrawlyCat gets interesting. Most crawlers give you one interface. CrawlyCat has three:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLI&lt;/strong&gt; — for scripting, CI pipelines, and cron jobs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; crawler &lt;span class="nt"&gt;--url&lt;/span&gt; https://example.com &lt;span class="nt"&gt;--max-pages&lt;/span&gt; 200 &lt;span class="nt"&gt;--html-out&lt;/span&gt; report.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;GUI&lt;/strong&gt; — tkinter desktop app with live progress and status tabs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph43myxq0djq4mv21n8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph43myxq0djq4mv21n8j.png" alt="crawlycat gui" width="800" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web UI&lt;/strong&gt; — Flask app with Server-Sent Events for real-time updates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi263pz5h4nvazjx1wrwb.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi263pz5h4nvazjx1wrwb.gif" alt="crawlycat web ui" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Web UI is my favorite. It shows a live crawl log, tabbed status views (by HTTP code, by issue type), and a summary panel. No npm, no node — just Flask serving a single HTML template.&lt;/p&gt;
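&lt;p&gt;If you haven't used Server-Sent Events before, the Flask side is tiny. A minimal sketch, with the route name and payload fields invented for illustration rather than taken from CrawlyCat's real endpoints:&lt;/p&gt;

```python
# Minimal sketch of the Web UI pattern: Flask streaming crawl progress
# over Server-Sent Events. Route and payload shape are illustrative.
import json
import time

from flask import Flask, Response

app = Flask(__name__)

def crawl_events():
    # The real app would yield results as the crawler visits pages;
    # this stand-in emits three fake progress frames.
    for i in range(3):
        payload = {"page": i, "status": 200}
        yield f"data: {json.dumps(payload)}\n\n"  # one SSE frame per event
        time.sleep(0.1)

@app.route("/events")
def events():
    # text/event-stream keeps the connection open so the browser's
    # EventSource hands each frame to a listener as it arrives.
    return Response(crawl_events(), mimetype="text/event-stream")
```

&lt;p&gt;On the client, a few lines of &lt;code&gt;EventSource&lt;/code&gt; JavaScript append each frame to the live log, so there is no polling and no build step.&lt;/p&gt;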

&lt;h3&gt;Tradeoffs I made&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Browser mode vs. Fast mode&lt;/strong&gt;: I could have forced everything through Playwright, but that's 10x slower. For a static blog, httpx is plenty. For a React SPA behind Cloudflare, you need the browser. So I kept both and let the user choose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No sitemap seeding (yet)&lt;/strong&gt;: Right now CrawlyCat discovers pages by following links from the start URL. Sitemap support is on the roadmap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;robots.txt is always respected&lt;/strong&gt;: You can't disable this. If a page is disallowed, it's skipped. I didn't want to build a tool that encourages ignoring robots.txt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local only&lt;/strong&gt;: No cloud, no accounts, no telemetry. Your crawl data stays on your machine.&lt;/p&gt;

&lt;h3&gt;Results on my own site&lt;/h3&gt;

&lt;p&gt;I ran CrawlyCat against nerdyelectronics.com and found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broken internal links from old posts that referenced moved pages&lt;/li&gt;
&lt;li&gt;Redirect chains from HTTP → HTTPS → www → non-www&lt;/li&gt;
&lt;li&gt;Pages missing meta descriptions&lt;/li&gt;
&lt;li&gt;A few posts with duplicate H1 tags&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All things I wouldn't have caught until Search Console flagged them days later.&lt;/p&gt;
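&lt;p&gt;Redirect-chain detection boils down to following each hop's &lt;code&gt;Location&lt;/code&gt; target until nothing redirects anymore. CrawlyCat inspects live responses; this pure-Python sketch just walks a precomputed mapping, so the names are illustrative:&lt;/p&gt;

```python
# Illustrative redirect-chain walker; CrawlyCat's real detector reads
# live response headers, but the chain-following logic is analogous.
def redirect_chain(start_url, redirects, limit=10):
    """Return the full hop list starting at start_url, capped at limit hops."""
    chain = [start_url]
    url = start_url
    for _ in range(limit):
        if url not in redirects:
            break  # final destination reached
        url = redirects[url]
        chain.append(url)
    return chain

# A chain like the HTTP -> HTTPS -> www one found on my site:
hops = {
    "http://example.com/": "https://example.com/",
    "https://example.com/": "https://www.example.com/",
}
```

&lt;p&gt;Any chain longer than a single hop costs an extra round-trip per request, which is exactly why collapsing them is worth the fix.&lt;/p&gt;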

&lt;h3&gt;Try it&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/bhageria/crawlycat.git
&lt;span class="nb"&gt;cd &lt;/span&gt;crawlycat
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
python &lt;span class="nt"&gt;-m&lt;/span&gt; playwright &lt;span class="nb"&gt;install &lt;/span&gt;chromium
python &lt;span class="nt"&gt;-m&lt;/span&gt; crawler web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Open &lt;code&gt;http://127.0.0.1:5000&lt;/code&gt;, paste your URL, and hit crawl.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/bhageria/crawlycat" rel="noopener noreferrer"&gt;github.com/bhageria/crawlycat&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;CrawlyCat is open source under GPL v3. Contributions welcome.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
