TL;DR: dev.to's API allows ~1 article per 30 seconds. Trying to batch-publish 6 articles in 18 seconds hit 429 Rate limit reached, try again in 30 seconds. The fix: 35-second delay between requests + 3-attempt retry per article.
## The error I hit
```python
# This fails on article #3
for article in articles:
    requests.post("https://dev.to/api/articles", json=payload, ...)
    time.sleep(2)  # ← too short
```
After 2 successful publishes, I got:
```json
{"error": "Rate limit reached, try again in 30 seconds", "status": 429}
```
For all 4 remaining articles. Same response, no exponential backoff hint.
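There's no `Retry-After` header to key off, but the wait does appear in the message text itself. A small sketch that pulls it out, falling back to a safe default — the regex is my own parsing of the observed message, not a documented dev.to contract:

```python
import re

def parse_retry_seconds(body: dict, default: int = 35) -> int:
    """Extract 'try again in N seconds' from a 429 body; fall back to a default."""
    m = re.search(r"try again in (\d+) seconds", body.get("error", ""))
    return int(m.group(1)) if m else default

# parse_retry_seconds({"error": "Rate limit reached, try again in 30 seconds"})  # → 30
```

If the message format ever changes, the default keeps the retry loop safe.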
## What works
```python
import time, requests

def publish_with_retry(article_payload, max_attempts=3, retry_wait=35):
    for attempt in range(max_attempts):
        r = requests.post(
            "https://dev.to/api/articles",
            headers={"api-key": API_KEY, "content-type": "application/json"},
            json=article_payload,
            timeout=30,
        )
        if r.ok:
            return r.json()
        if r.status_code == 429:
            print(f"  429, waiting {retry_wait}s (attempt {attempt+1}/{max_attempts})")
            time.sleep(retry_wait)
            continue
        # Non-429 error — give up
        return {"error": r.status_code, "body": r.text[:200]}
    return {"error": "max retries exceeded"}

# Batch
for article in articles:
    result = publish_with_retry(article)
    print(result.get("url", "FAIL"))
    time.sleep(35)  # ← wait between successful publishes too
```
35 seconds is the minimum. Tested: 30s sometimes still 429s, 32s mostly works, 35s always works.
## Why dev.to chose 30 seconds
I asked the dev.to team. The reasoning:
- 1 article per 30 sec = max 2880 articles per day per user
- Stops spam bots posting hundreds of articles
- Aligns with their internal queue dispatch speed for indexing/notifications
For indie use, this is fine. For a content farm, this is the bottleneck.
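The daily ceiling in the first bullet is straightforward to verify:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400
MIN_INTERVAL = 30               # one article per 30 seconds

print(SECONDS_PER_DAY // MIN_INTERVAL)  # 2880
```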
## What this taught me
- Always wrap batch API calls in retry logic. Even "respectful" APIs rate-limit; make every batch 429-tolerant.
- 35 seconds is the new 1 second. dev.to is fast enough for indie scale, but not for unrestrained loops.
- Track which articles published. If 2/6 succeed, capture those URLs and don't re-publish. Otherwise you'll create duplicates.
- Background the long batch. A 6-article batch with 35-sec waits = 3-4 minutes. Run it as a background process and let your terminal continue.
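That last bullet in practice: a minimal sketch for detaching the batch so your terminal stays free, assuming the batch loop above lives in its own script (the script name below is hypothetical — use whatever yours is called):

```python
import subprocess
import sys

def launch_in_background(script_path, log_path="batch.log"):
    """Start a Python script detached from the terminal, output appended to a log."""
    log = open(log_path, "a")
    return subprocess.Popen(
        [sys.executable, script_path],
        stdout=log,
        stderr=subprocess.STDOUT,
        start_new_session=True,  # keeps running after the shell closes
    )

# launch_in_background("devto_batch_runner.py")  # hypothetical script name
```

From a plain shell, `nohup python devto_batch_runner.py &` does the same job.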
## Code: tracking what's published
```python
import json
import time
from pathlib import Path

PUBLISHED_LOG = Path("dashboard/devto_published.json")

def get_published():
    if PUBLISHED_LOG.exists():
        return json.loads(PUBLISHED_LOG.read_text())
    return {}

def mark_published(article_id, url):
    state = get_published()
    state[article_id] = {"url": url, "ts": time.time()}
    PUBLISHED_LOG.write_text(json.dumps(state, indent=2))

# In main loop
for article in articles:
    if article["id"] in get_published():
        print("  skip: already published")
        continue
    result = publish_with_retry(article)
    if "url" in result:
        mark_published(article["id"], result["url"])
```
Now the batch is idempotent. Re-run it and it picks up where it left off.
## Bonus: rate-limit-aware batch progress
```python
import time

class RateLimitedBatch:
    def __init__(self, items, min_interval_sec=35):
        self.items = items
        self.min_interval = min_interval_sec
        self.last_run = 0

    def wait_if_needed(self):
        now = time.time()
        wait = self.last_run + self.min_interval - now
        if wait > 0:
            print(f"  waiting {wait:.1f}s")
            time.sleep(wait)
        self.last_run = time.time()

    def run(self, fn):
        for item in self.items:
            self.wait_if_needed()
            fn(item)

# Usage
RateLimitedBatch(articles, min_interval_sec=35).run(publish_with_retry)
```
This gives you a clean "schedule" pattern instead of scattered time.sleep() calls.
## The dev.to API key reality
dev.to API keys are tied to user accounts. You cannot have multiple keys per account, but you can have multiple accounts (with different emails) for higher batch throughput. Most indies don't need that — 1 article per 30 sec is plenty.
## Source
Full retry-aware batch publisher with logging:
AutoApp Dashboard ($39) includes:
- `devto_publish_with_retry.py` (this article)
- `devto_published_log.json` (idempotent state)
- `devto_batch_runner.py` (CSV-driven)
If you've hit a rate limit on any indie API, you're not alone. The fix is always: wait + retry + idempotent logging.