At day 60 of a 100-day indie iOS experiment, I've published 60 dev.to articles. The question isn't "why did I write 60 articles?" It's "how is writing 60 articles even possible without burning out?"
The answer: I didn't write most of them the way you'd normally write an article. I built infrastructure that makes writing fast.
Days 60–67, I'm not shipping new features. I'm shipping the platform that made shipping 60 features possible in the first place. And it compounds harder than any feature ever will.
## The unglamorous list
Manifest-first asset management:
- Every new file gets YAML frontmatter with metadata (type, status, created date, tags)
- A dashboard auto-scans the repo, builds an index, and displays actionable items
- No manual INDEX.md maintenance
- Cost to add a new SKU / article / doc: add the file + frontmatter, then one `git push`
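The scan step is simple enough to sketch. This stdlib-only version handles flat `key: value` frontmatter (the real dashboard presumably uses a full YAML parser); the field names and `reports` root are illustrative:

```python
from pathlib import Path


def read_frontmatter(text: str) -> dict[str, str]:
    """Parse a flat key: value frontmatter block delimited by --- lines."""
    if not text.startswith("---"):
        return {}
    parts = text.split("---", 2)
    if len(parts) < 3:
        return {}
    meta: dict[str, str] = {}
    for line in parts[1].strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta


def build_index(root: str = "reports") -> list[dict[str, str]]:
    """Scan every markdown file under root and collect its metadata."""
    index = []
    for path in sorted(Path(root).glob("**/*.md")):
        meta = read_frontmatter(path.read_text())
        meta["path"] = str(path)
        index.append(meta)
    return index
```

Run `build_index()` on every dashboard refresh and INDEX.md never needs to exist.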
Commercial-grade code standards:
- Pre-commit linting (Swift, Python, Bash)
- Every script has `-h`/`--help` output
- All Python uses type hints
- All Swift uses nullability checks (no force-unwraps in production)
- Cost: 5 minutes per 100 lines of code (linter runs automatically)
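The `-h`/`--help` standard costs almost nothing in Python if every script builds an `argparse` parser; a minimal template (the script name and options here are illustrative, not the author's actual tool):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Every script gets a parser, so -h/--help works for free."""
    parser = argparse.ArgumentParser(
        description="Publish a report to dev.to (illustrative example).")
    parser.add_argument("path", help="markdown file to publish")
    parser.add_argument("--dry-run", action="store_true",
                        help="print the payload without posting")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Would publish {args.path} (dry_run={args.dry_run})")
```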
Weekly briefing automation:
- One bash script: `bash orchestrator/weekly_review.sh`
- Outputs: article count, revenue YTD, ASC submission status, B2B pipeline, next week's priorities
- Takes 90 seconds wall-clock
- Cost: 90 seconds per week, replaces 30 minutes of manual status checks
Here's the core weekly review logic:
```bash
#!/bin/bash
# orchestrator/weekly_review.sh
# 90-second status check across all project dimensions
set -e

echo "=== Weekly Review $(date +%Y-%m-%d) ==="
echo ""

# Article count
ARTICLE_COUNT=$(ls reports/devto-article-*-paste-ready.md | wc -l)
echo "Articles published: $ARTICLE_COUNT"

# Revenue (parse Gumroad SKU metadata)
REVENUE=$(grep -r "revenue:" INBOX/status.md | awk '{sum+=$NF} END {print "$" sum}')
echo "Revenue YTD: $REVENUE"

# ASC submission status
ASC_STATUS=$(curl -s "http://localhost:5000/api/asc-status" | jq -r '.submitted,.in_review,.approved' | tr '\n' ' ')
echo "ASC status: $ASC_STATUS"

# B2B pipeline
PROSPECTS=$(grep -r "prospect:" products/b2b-ai-consulting/ | wc -l)
echo "Active B2B prospects: $PROSPECTS"

echo ""
echo "Next week priorities:"
grep "^-" NOW.md | head -3
```
This runs in 90 seconds and gives you every decision signal for the week.
Dev.to API automation:
- One Python script writes articles directly to dev.to without the web UI
- Every article gets frontmatter → auto-extracted → posted with canonical URL + custom title
- Cost: 10 seconds per article (vs 3 minutes manual copy-paste)
ASC submission tooling:
- Python scripts: submit for review, poll submission state, detect rejections
- Reduced app release cycle from 2 hours (manual clicks in web UI) to 15 minutes (one command)
- Cost to automate: 3 hours upfront; saves 90 minutes per app per cycle
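A sketch of the polling half. The App Store Connect API call itself (JWT-signed requests) is elided behind an injected `fetch_state` function, and the state names are only illustrative of ASC's review states:

```python
import time
from typing import Callable

# Review states that count as "done" (illustrative subset of ASC's states).
TERMINAL = {"ACCEPTED", "READY_FOR_SALE", "REJECTED", "DEVELOPER_REJECTED"}


def poll_submission(fetch_state: Callable[[], str],
                    interval_s: float = 60.0,
                    max_polls: int = 120) -> str:
    """Poll until the submission reaches a terminal review state.

    fetch_state is injected: the real version would hit the App Store
    Connect API with a signed JWT, which is elided here.
    """
    for _ in range(max_polls):
        state = fetch_state()
        if state in TERMINAL:
            return state
        time.sleep(interval_s)
    raise TimeoutError("submission never reached a terminal state")
```

Injecting `fetch_state` also makes rejection detection trivially testable without touching Apple's servers.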
Dashboard with actionable endpoints:
- `/api/health`: Is the repo healthy? Any stale docs?
- `/api/audit`: What's the revenue breakdown? ASC status?
- `/api/refresh`: Rescan all files for manifest changes
- `/api/action`: Trigger workflows (dev.to publish, ASC submit, document archive)
- Cost to build: 2 hours; saves 5 minutes per decision-making session (every day)
The health endpoint is simple but critical:
```python
from flask import Flask, jsonify
from pathlib import Path
import time

app = Flask(__name__)


@app.route('/api/health', methods=['GET'])
def health_check():
    """Scan repo for manifest compliance and stale docs."""
    health = {
        "status": "ok",
        "files_checked": 0,
        "files_missing_manifest": [],
        "stale_docs": []
    }
    for filepath in Path("reports").glob("*.md"):
        health["files_checked"] += 1
        with open(filepath) as f:
            content = f.read()
        # Check for YAML frontmatter
        if not content.startswith("---"):
            health["files_missing_manifest"].append(str(filepath))
        # Check for stale docs (modified >30 days ago)
        mtime = filepath.stat().st_mtime
        days_old = (time.time() - mtime) / 86400
        if days_old > 30 and "DEPRECATED" not in content:
            health["stale_docs"].append(str(filepath))
    if health["files_missing_manifest"] or health["stale_docs"]:
        health["status"] = "warning"
    return jsonify(health)
```
Run this every morning, get instant visibility into what's breaking the system.
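The `/api/action` endpoint can stay a thin dispatcher over the same scripts; a sketch of the pure dispatch logic a Flask route would wrap (the action names and handlers here are illustrative stubs, not the real workflow wiring):

```python
from typing import Any, Callable

Handler = Callable[[dict[str, Any]], dict[str, Any]]

# Real handlers would shell out to the publish/submit scripts; stubs here.
ACTIONS: dict[str, Handler] = {
    "devto_publish": lambda p: {"queued": p.get("article")},
    "asc_submit": lambda p: {"queued": p.get("app")},
}


def dispatch(body: dict[str, Any]) -> tuple[int, dict[str, Any]]:
    """Resolve an action request to (status_code, response_body)."""
    handler = ACTIONS.get(body.get("name", ""))
    if handler is None:
        return 400, {"error": "unknown action"}
    return 200, handler(body.get("params", {}))
```

Keeping the dispatch table separate from Flask means new workflows are one dictionary entry, not a new route.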
Memory-driven context persistence:
- MEMORY.md: facts, lessons, patterns that work
- RESUME.md: current state (apps submitted, customers, live content)
- AI agents read these first on handoff
- Cost: 10 min/week; saves 20 min per handoff (5–7 handoffs/week)
## Why this matters more than articles
Let's run the math:
60 articles written manually:
- 60 articles × 90 minutes per article (research + write + edit) = 5400 minutes = 90 hours
60 articles written with infrastructure:
- Dev.to automation: 60 articles × 2 minutes (write + publish via API) = 120 minutes
- Weekly briefing: 7 weeks × 2 minutes = 14 minutes
- Memory update: 7 weeks × 10 minutes = 70 minutes
- Infrastructure building (upfront): 12 hours
Total: 12 hours upfront + 3 hours ongoing = 15 hours for 60 articles + infrastructure.
Time saved: 75 hours (90 hours − 15 hours).
That's not "write articles faster". That's "unlock a whole different scale of output by building the right foundation".
## The three categories of work I'm tracking
A. Content (articles, SKUs, outreach):
- Output: visible, measurable, competes for time
- Cycles: weekly
- Compounding: OK (helps with SEO, audience, credibility)
B. Tooling (scripts, dashboards, API automation):
- Output: invisible until you scale
- Cycles: builds in weeks 1–8, saves time in weeks 9+
- Compounding: exponential (every new feature is faster to ship)
C. Meta (documentation, process, standards):
- Output: boring
- Cycles: maintenance every week
- Compounding: multiplicative (reduces friction for B and A)
Most indie devs spend 90% of their time on A, 10% on B, and skip C entirely.
My allocation for days 60–67 was:
- A (content): 30% (write 2–3 articles per day)
- B (tooling): 40% (build 1 new endpoint, improve 1 script)
- C (meta): 30% (update MEMORY, document patterns, lint code)
By day 90, that allocation flips. B and C do their work, so A becomes faster.
## The specific tools I built (and when they paid for themselves)
| Tool | Cost | Payoff | ROI by Day |
|---|---|---|---|
| Manifest-first dashboard | 2 hours | Saves 5 min/day index maintenance | Day 25 |
| Dev.to API script | 1.5 hours | Saves 2 min/article × 60 articles | Day 8 |
| ASC submission script | 2 hours | Saves 1 hour/app × 4 apps | Day 5 |
| Weekly review bash | 45 minutes | Saves 20 min/week | Day 3 |
| MEMORY.md system | 30 minutes/week | Saves 15 min/handoff × 5 handoffs/week | Day 2 |
| Pre-commit linting | 2 hours | Prevents bugs; saves 30 min debugging/month | Day 40 |
All of these are in the red initially. But by day 30–60, they're all positive ROI.
The meta-insight: you can't see the ROI until you've run the operation long enough. This is why most indie devs don't build infrastructure — they ship for 2–4 weeks and judge ROI on that timescale. Tooling is a month-4+ play.
## The stuff I still do manually (and why)
Strategic decisions:
- Which B2B prospects to contact
- Which SKU to build next (no automation, requires human judgment)
- Whether to pivot (no script for this)
Identity/auth:
- I enter my Apple ID + ASC credentials myself (never automated)
- Same with Gumroad + Substack credentials
Verification:
- Every live deployment gets a human spot-check (API success doesn't mean the site renders correctly)
- Every SKU goes through at least one manual test before launch
Approval gates:
- Articles go live automatically, but I choose which articles to write
- B2B emails are paste-ready, but I choose who to contact
The rule: automate the repetitive, standardizable parts. Never automate the judgment call.
## The bootstrap path
If you're starting an indie project:
Day 1: Repo structure (/src, /docs, /reports, /INBOX). Every commit explains why.
Day 15: Add MEMORY.md (facts) and RESUME.md (state + blockers). These become your north star.
Day 30: Manifest-first: YAML frontmatter on every doc. One health dashboard.
Day 60: Automate 2 top tasks. Add 90-second weekly review.
Day 90: Extend tooling, refine processes.
## The boring truth
The most impactful work I've done from days 60–67 isn't visible. It's not shipped. No customer sees it. No user clicks a button and feels the benefit.
But in days 68–100, every article takes 2 minutes instead of 90. Every app release takes 15 minutes instead of 2 hours. Every B2B outreach is a paste-ready email, not a blank slate.
That's what compounding looks like when you can't see it yet.
## The next 30 days
Days 60–67 were infrastructure. Days 68–90 will be output.
The critical path:
- Ship Bible SKU ($29 PDF, 50 pages, launch EOW)
- ASC submit all 4 apps (test the automation under real pressure)
- B2B outreach batch 3 (5 fresh prospects, test the conversion)
- Measure: which of the 3 is converting?
Then days 91–100, double down on the winner.
All of that will feel fast because the infrastructure is in place. But the infrastructure is what's been built from day 1–60.
If you're building a long-term indie operation, what infrastructure did you build first? Drop a comment — curious what pays for itself the fastest.
Related reads:
- Manifest-driven workflow article (forthcoming)
- Weekly automation script
- API dashboard endpoints (tech deep dive forthcoming)