TL;DR: 9 dev.to articles, each 800-1500 words, published in ~60 minutes (with a 35-second rate limit between each). Recipe: paste-ready frontmatter + Claude Code first-draft + dev.to REST API publisher with retry. Full 50-line script included.
## What "9 articles in 1 hour" actually means
Not "I wrote 9 articles in an hour." That's impossible.
"I published 9 already-written paste-ready articles in an hour" — the publishing step is the bottleneck, not the writing.
The full pipeline:

- Pre-existing: 9 paste-ready files in `reports/devto-article-*-paste-ready.md` (frontmatter + body)
- At publish time: run `devto_publish_batch.py`, which loops through all 9
- Time: 9 × 35-second waits ≈ 5.25 minutes of API time + ~3 minutes for verification

Total wall-clock: ~10 min.
The 1-hour figure includes:
- 5 min reviewing each paste-ready file before publish
- 5 min final publishing
- 30 min updating site/ pages with new LIVE URLs
- 20 min updating STATUS / RESUME / memory files
So: 60 min wall-clock to ship 9 LIVE articles + all downstream consolidation.
## The recipe

### Step 1: Paste-ready files with frontmatter
Each article lives in `reports/devto-article-{N}-paste-ready.md`:
```markdown
---
id: devto-N-topic-slug
title: "dev.to #N - Title for Display"
category: content
priority: P1
status: ready
eta_min: 5
actions: [preview, copy-clipboard, open-devto]
tags: [tag1, tag2, tag3, tag4]  # 4 max for dev.to
created: 2026-05-07
publish_target_date: 2026-05-07
---

# Title for Article (mirrored from frontmatter)

Body content here, ~800-1500 words. Code samples in fenced blocks. Real numbers.
```
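If you want to read those frontmatter fields from a script (for example to check `status: ready` before publishing), a minimal stdlib-only sketch works; `load_paste_ready` is a name I'm introducing here, not part of the batch script, and it only handles flat `key: value` lines, not nested YAML:

```python
def load_paste_ready(text):
    """Split a paste-ready file into (frontmatter dict, markdown body).

    Minimal parser: flat `key: value` lines only; values stay raw strings.
    """
    if not text.startswith("---"):
        return {}, text
    end = text.find("\n---", 3)  # closing frontmatter delimiter
    if end == -1:
        return {}, text
    meta = {}
    for line in text[3:end].strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    body = text[end + 4:].lstrip()  # everything after the closing ---
    return meta, body
```

For real YAML (nested lists, quoting) you'd reach for PyYAML's `yaml.safe_load` instead, but for frontmatter this flat this is enough.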
### Step 2: Claude Code first-draft
Tell Claude: "Write a dev.to article about [topic]. 1000 words. Include code, tables, real numbers. Format as paste-ready file with YAML frontmatter."
I get a usable first draft in 30-60 sec. I edit it for 5-10 min for voice + accuracy. Total ~10 min per article.
For 9 articles, that's ~90 min of writing. Spread across multiple days.
### Step 3: Publish batch script
```python
"""Publish multiple dev.to articles via API with rate-limit retry."""
import json, re, time
from pathlib import Path

import requests

API_KEY = "your_devto_api_key_here"
BASE = "https://dev.to/api"
ROOT = Path(__file__).parent.parent

ARTICLES = [
    {"file": f"devto-article-{N}-paste-ready.md", "title": "...", "tags": [...]}
    for N in range(51, 60)  # adjust range
]

def strip_frontmatter(text):
    """Drop the YAML frontmatter block and the mirrored H1 title line."""
    if text.startswith("---"):
        end = text.find("\n---", 4)
        if end != -1:
            text = text[end + 4:].lstrip()
    return re.sub(r"^#\s+[^\n]+\n+", "", text, count=1)

def publish(article):
    body = strip_frontmatter((ROOT / "reports" / article["file"]).read_text(encoding="utf-8"))
    payload = {"article": {
        "title": article["title"],
        "body_markdown": body,
        "published": True,
        "tags": article["tags"],
    }}
    for attempt in range(4):
        r = requests.post(f"{BASE}/articles",
                          headers={"api-key": API_KEY, "content-type": "application/json"},
                          json=payload, timeout=30)
        if r.ok:
            return {"ok": True, "url": r.json().get("url")}
        if r.status_code == 429:  # rate limited: wait it out, then retry
            time.sleep(35)
            continue
        return {"ok": False, "status": r.status_code, "body": r.text[:200]}
    return {"ok": False, "error": "max retries"}

def main():
    for a in ARTICLES:
        print(json.dumps(publish(a), ensure_ascii=False))
        time.sleep(35)

if __name__ == "__main__":
    main()
```
50 lines. Drop it in your `dashboard/` and run with `python dashboard/devto_publish_batch.py`.
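If you want to sanity-check a batch before it goes public, the dev.to API also accepts `published: false`, which creates drafts you can review in the dashboard first. A hedged sketch of the same payload shape with the flag made switchable (`build_payload` is a name I'm introducing, not part of the script above):

```python
def build_payload(article, body, live=False):
    """Build a dev.to article payload; live=False creates a draft instead."""
    return {"article": {
        "title": article["title"],
        "body_markdown": body,
        "published": live,  # False -> draft, True -> immediately LIVE
        "tags": article["tags"],
    }}
```

Run the whole batch with `live=False` once, eyeball the drafts, then re-run for real.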
### Step 4: Wait + verify
After the batch finishes, run a quick HTTP HEAD check on each new URL:
```python
import requests

def verify_live(urls):
    for url in urls:
        r = requests.head(url, timeout=10, allow_redirects=True)
        print(f"{r.status_code} {url}")
```
All 200s = ship.
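In a scripted or CI context you may want the failures returned rather than printed, so the run can fail loudly. A sketch of that variant; the injectable `head` parameter is my addition so it can be exercised offline, and by default it falls back to `requests.head`:

```python
def verify_live(urls, head=None, timeout=10):
    """HEAD-check each URL; return (status, url) pairs that are not 200."""
    if head is None:
        import requests
        head = lambda u: requests.head(u, timeout=timeout,
                                       allow_redirects=True).status_code
    failures = []
    for url in urls:
        try:
            status = head(url)
        except Exception as exc:  # network errors count as failures too
            status = f"error: {exc}"
        if status != 200:
            failures.append((status, url))
    return failures
```

An empty return list is the "all 200s = ship" signal.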
### Step 5: Update downstream artifacts
- `STATUS.md` — list new LIVE URLs
- `site/just-published.html` — add cards for each
- `dashboard/verify_all_live_urls.py` — add to audit list
- Memory state file — log catalog
For 9 articles, this is the bulk of the hour (the 30 + 20 min above). Templates make it faster — most updates are 1-line additions.
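The `STATUS.md` case is the simplest kind of 1-line addition: append a bullet per new URL. A tiny sketch under my own assumptions about the file layout (`append_live_urls` and the bullet format are illustrative, not the product's actual template):

```python
from pathlib import Path

def append_live_urls(status_path, urls):
    """Append newly-LIVE article URLs as markdown bullets to STATUS.md."""
    path = Path(status_path)
    existing = path.read_text(encoding="utf-8") if path.exists() else ""
    bullets = "\n".join(f"- LIVE: {url}" for url in urls)
    path.write_text(existing.rstrip() + "\n" + bullets + "\n", encoding="utf-8")
```

The same append-a-line pattern covers the audit-list and memory-file updates.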
## Real numbers from my 9-article batch

- Time: 60 min wall-clock
- Articles: 9
- Words total: ~9,000
- LIVE rate: 100% (after retry on 4 articles that hit the rate limit)
- HTTP 200 verified: 9/9 within 5 min of publish
## Why batches > daily posting
Daily posting:
- 9 days × 60 min consolidation overhead = 9 hours
- Mental switching cost: 9 contexts, 9 days
Batch posting:
- 1 hour total consolidation
- Mental switching cost: 1 context, 1 day
Saves 8 hours per 9 articles. Worth optimizing for.
## What this requires
- 9 paste-ready files written in advance (~90 min of writing spread across days)
- Working dev.to API key
- Python + requests + time
- A retry-aware batch script
- Patience for the 35-sec rate limit
## What it doesn't require
- Special tools
- Premium dev.to subscription
- Bot account workarounds
- Manual paste-paste-paste
## Source
Full retry-aware batch publisher + paste-ready file generator + downstream consolidation:
AutoApp Dashboard ($39) includes:

- `devto_publish_batch.py` (this article)
- 60+ paste-ready dev.to article examples with frontmatter
- `STATUS.md` template + auto-updater
- `verify_all_live_urls.py` audit script
If you have 9+ paste-ready articles and aren't batch-publishing, you're losing 8 hours per batch to consolidation overhead. 50 lines of Python solves it.