<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Doru Gradinaru</title>
    <description>The latest articles on DEV Community by Doru Gradinaru (@doru_gradinaru_889c1b8687).</description>
    <link>https://dev.to/doru_gradinaru_889c1b8687</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3885996%2Fb3efee5a-c801-4aed-9fc0-5f1150f91fc4.png</url>
      <title>DEV Community: Doru Gradinaru</title>
      <link>https://dev.to/doru_gradinaru_889c1b8687</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/doru_gradinaru_889c1b8687"/>
    <language>en</language>
    <item>
      <title>How I Stopped Retry Storms from Destroying My Scraping Budget</title>
      <dc:creator>Doru Gradinaru</dc:creator>
      <pubDate>Sat, 18 Apr 2026 15:01:56 +0000</pubDate>
      <link>https://dev.to/doru_gradinaru_889c1b8687/how-i-stopped-retry-storms-from-destroying-my-scraping-budget-5b7e</link>
      <guid>https://dev.to/doru_gradinaru_889c1b8687/how-i-stopped-retry-storms-from-destroying-my-scraping-budget-5b7e</guid>
      <description>&lt;p&gt;Last month I watched a scraping job quietly burn $240 overnight.&lt;/p&gt;

&lt;p&gt;The target site started returning 403s around 2 AM. The agent retried. Got another 403. Retried again. By morning it had made 600+ identical requests to a URL that was never going to succeed — and I paid for every single one of them in compute units.&lt;/p&gt;

&lt;p&gt;The frustrating part: I had already set &lt;code&gt;maxRetries: 3&lt;/code&gt;. It didn't matter. The URL kept getting requeued across multiple runs, and each new run reset the counter. Three retries × 200 actor runs = 600 requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem
&lt;/h2&gt;

&lt;p&gt;Most retry-limiting advice assumes retries happen in a single session. They don't. In a distributed scraping setup, the same blocked URL can bounce between runs, queues, and workers indefinitely. Your &lt;code&gt;maxRetries&lt;/code&gt; setting only sees a slice of the full picture.&lt;/p&gt;

&lt;p&gt;What you actually need is something that tracks patterns across your entire workspace — not just within a single run.&lt;/p&gt;

&lt;p&gt;The pattern behind most of these bill spikes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run 1: URL → 403 → retry → 403 → retry → 403 → fail → requeue
Run 2: URL → 403 → retry → 403 → retry → 403 → fail → requeue
Run 3: ...repeat 200 times...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each run thinks it only retried 3 times. The bill says otherwise.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Helps
&lt;/h2&gt;

&lt;p&gt;After debugging this for a while, here's what works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Hash the request, not just the URL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Group requests by &lt;code&gt;action + URL + query params&lt;/code&gt;. Requests to &lt;code&gt;/product?id=123&lt;/code&gt; and &lt;code&gt;/product?id=456&lt;/code&gt; are two different patterns. The same &lt;code&gt;/blocked-page&lt;/code&gt; requested 50 times is a storm.&lt;/p&gt;
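
&lt;p&gt;A minimal sketch of that grouping key (&lt;code&gt;patternHash&lt;/code&gt; is my own illustrative helper, not from any SDK):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import { createHash } from 'node:crypto';

// Stable grouping key: action + URL path + sorted query params, so
// /product?id=123 and /product?id=456 hash differently, while
// ?a=1&amp;b=2 and ?b=2&amp;a=1 collapse to the same pattern.
function patternHash(action, rawUrl) {
  const url = new URL(rawUrl);
  const params = [...url.searchParams].map(([k, v]) =&gt; `${k}=${v}`).sort().join('&amp;');
  const key = `${action}|${url.origin}${url.pathname}?${params}`;
  return createHash('sha256').update(key).digest('hex').slice(0, 16);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
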

&lt;p&gt;&lt;strong&gt;2. Track patterns cross-run, not just in-session&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need a persistent store that survives across actor runs. A simple Redis counter works: increment on each request, expire after 60 seconds. If the same URL hash hits 10+ times in a minute, it's a storm.&lt;/p&gt;
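
&lt;p&gt;The Redis version is just &lt;code&gt;INCR&lt;/code&gt; plus a 60-second &lt;code&gt;EXPIRE&lt;/code&gt;. Here's an in-memory stand-in with the same logic, just to make the threshold concrete (in production you want the shared Redis counter, since this map dies with the process):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const WINDOW_MS = 60_000;
const STORM_THRESHOLD = 10;
const counters = new Map(); // pattern hash to { count, resetAt }

function recordAndCheck(hash, now = Date.now()) {
  const entry = counters.get(hash);
  if (!entry || now &gt;= entry.resetAt) {
    // First hit in a fresh window: start the counter and its 60s expiry,
    // mirroring Redis INCR + EXPIRE on a new key.
    counters.set(hash, { count: 1, resetAt: now + WINDOW_MS });
    return { count: 1, storm: false };
  }
  entry.count += 1;
  return { count: entry.count, storm: entry.count &gt;= STORM_THRESHOLD };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
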

&lt;p&gt;&lt;strong&gt;3. Block upstream, not downstream&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The mistake I made: trying to fix this inside the scraper. By the time the scraper knows it's in a storm, the compute is already running. The block needs to happen &lt;em&gt;before&lt;/em&gt; the actor starts — at the queue level.&lt;/p&gt;
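
&lt;p&gt;As a sketch, the check belongs in whatever code enqueues work, not in the scraper. Here &lt;code&gt;queue&lt;/code&gt; and &lt;code&gt;isStorm&lt;/code&gt; are placeholders for your queue client and cross-run counter:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical enqueue-side gate: drop storm patterns before a worker starts.
async function enqueueIfAllowed(queue, request, isStorm) {
  // Grouping key from step 1: action + URL, query params included.
  const key = `${request.action}|${request.url}`;
  if (await isStorm(key)) {
    return false; // dropped upstream: never reaches a worker, never bills compute
  }
  await queue.add(request);
  return true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
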

&lt;p&gt;&lt;strong&gt;4. Alert in real-time, not post-mortem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google Sheets cost monitoring is useful for weekly reviews. But by the time Sheets catches a spike, you've already paid for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;After hitting this problem one too many times, I built &lt;a href="https://proceedgate.dev" rel="noopener noreferrer"&gt;ProceedGate&lt;/a&gt; — a lightweight gate that sits outside the agent loop and blocks retry storms before the bill accumulates.&lt;/p&gt;

&lt;p&gt;It works like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent action → ProceedGate → ✅ allowed (proceed_token issued)
                           → ⚠️ friction (retry #4–10, warning)
                           → 🚫 blocked (storm, &amp;gt;10/min)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The gate tracks request pattern hashes across your entire workspace. It doesn't care which run or which actor triggered the request — if the same pattern hits 10+ times in 60 seconds, it hard-blocks.&lt;/p&gt;

&lt;p&gt;Integration with Node.js/Crawlee takes about 10 lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createProceedGateClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;requireGateStepOk&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@proceedgate/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createProceedGateClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROCEEDGATE_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;actor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-scraping-agent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Before each request:&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;requireGateStepOk&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;policyId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;retry_friction_v1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;web_scrape&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;attempt_in_window&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;retryCount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;task_hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;urlHash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;cost_estimate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it's a storm, &lt;code&gt;requireGateStepOk&lt;/code&gt; throws and the actor stops — before the compute accumulates.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;The same scraping setup that burned $240 in one night now hard-stops within 60 seconds of hitting a storm pattern. The bill for that scenario: &lt;strong&gt;$0.30 in compute&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;Free tier available at &lt;a href="https://proceedgate.dev" rel="noopener noreferrer"&gt;proceedgate.dev&lt;/a&gt; (5,000 checks/month, no card required).&lt;/p&gt;

&lt;p&gt;Open source Node SDK: &lt;a href="https://github.com/loquit-doru/proceedgate-node-sdk" rel="noopener noreferrer"&gt;github.com/loquit-doru/proceedgate-node-sdk&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Full docs: &lt;a href="https://proceedgate.dev/docs" rel="noopener noreferrer"&gt;proceedgate.dev/docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've dealt with this problem differently, curious to hear your approach in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
