Every developer knows the pain: you write a great article, publish it on one platform, and then spend the next hour manually reformatting and posting it everywhere else.
I built an automated content distribution pipeline that publishes to 6 platforms from a single source of truth. Here's how.
The Architecture
Write once, distribute everywhere:
- Hashnode (source of truth)
- Dev.to (you're reading this here)
- LinkedIn (article share + commentary)
- Threads (text teaser)
- Instagram (card image + caption)
- daily.dev (auto via RSS)
Each platform gets a tailored variant — not a blind copy-paste.
The Platform Adapter Pattern
Each platform gets an adapter that implements a common interface with a single publish() method, so adding a new platform means writing exactly one class. Rate limiting is Redis-backed, with separate limits per platform.
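As a rough sketch of that adapter pattern (class, method, and URL shapes here are my assumptions, not the pipeline's actual code), the core is one abstract base class and a fan-out loop:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    body_markdown: str
    canonical_url: str  # points back at the Hashnode source of truth


class PlatformAdapter(ABC):
    """Common interface: every platform exposes a single publish()."""

    name: str = "base"

    @abstractmethod
    def publish(self, post: Post) -> str:
        """Publish a tailored variant and return the resulting URL."""


class DevToAdapter(PlatformAdapter):
    name = "dev.to"

    def publish(self, post: Post) -> str:
        # A real adapter would POST to the dev.to API, passing
        # canonical_url so search engines credit the original.
        slug = post.title.lower().replace(" ", "-")
        return f"https://dev.to/example/{slug}"


def distribute(post: Post, adapters: list[PlatformAdapter]) -> dict[str, str]:
    # Fan out to every registered adapter; new platform = one new class.
    return {a.name: a.publish(post) for a in adapters}
```

The payoff of the pattern is that the orchestration code never changes: `distribute()` stays the same no matter how many adapters are registered.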
Smart Content Composition
The hardest part isn't the API calls — it's making content feel native. Twitter gets a 280-char hook. Threads gets a 500-char teaser. LinkedIn gets a 3,000-char article share.
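A minimal sketch of that composition step, using the character limits from the paragraph above (the helper name and truncation heuristic are assumptions):

```python
# Per-platform character budgets from the post; tweak as needed.
PLATFORM_LIMITS = {
    "twitter": 280,   # short hook
    "threads": 500,   # teaser
    "linkedin": 3000, # article share + commentary
}


def compose_variant(platform: str, hook: str, body: str) -> str:
    """Build a native-feeling variant instead of a blind copy-paste."""
    limit = PLATFORM_LIMITS[platform]
    text = f"{hook}\n\n{body}"
    if len(text) <= limit:
        return text
    # Over budget: truncate on a word boundary, leaving room
    # for a single ellipsis character.
    cut = text[: limit - 1].rsplit(" ", 1)[0]
    return cut + "…"
```

In practice each platform would also get its own tone and link placement, but the budget check is the part that keeps variants from getting silently cut off by the platform.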
The Quality Ladder
- Ollama (local, free) generates the first draft
- Quality gate checks readability and boring-start patterns
- Claude Haiku polishes if needed (~$0.002)
- Template fallback as safety net
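The ladder above can be sketched as a simple escalation function. The gate heuristics and the model-call stubs here are stand-ins I made up for illustration, not the pipeline's real checks:

```python
BORING_STARTS = ("in this article", "in today's world", "as a developer")


def passes_quality_gate(draft: str) -> bool:
    """Cheap local checks before paying for a better model."""
    lowered = draft.lower().strip()
    if lowered.startswith(BORING_STARTS):
        return False
    # Crude readability proxy: average sentence under 25 words.
    sentences = [s for s in draft.split(".") if s.strip()]
    avg = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return avg < 25


def generate(topic: str, ollama_draft, haiku_polish, template) -> str:
    draft = ollama_draft(topic)       # local, free first pass
    if passes_quality_gate(draft):
        return draft                  # good enough, ship it
    try:
        return haiku_polish(draft)    # ~$0.002 polish step
    except Exception:
        return template(topic)        # safety-net fallback
```

The design point is cost ordering: the free model runs every time, the paid model only runs when the gate fails, and the template guarantees something always ships.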
The Feedback Loop
A weekly task evaluates performance, extracts patterns, and injects learnings into future prompts. The system gets smarter over time.
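A hypothetical sketch of that weekly task, minus the Celery scheduling: score past posts, keep the hooks that beat the median, and prepend them to future prompts. Field names and the median heuristic are assumptions:

```python
def extract_patterns(posts: list[dict]) -> list[str]:
    """Keep hooks from posts that beat the median engagement."""
    scores = sorted(p["engagement"] for p in posts)
    median = scores[len(scores) // 2]
    return [p["hook"] for p in posts if p["engagement"] > median]


def inject_learnings(base_prompt: str, patterns: list[str]) -> str:
    """Prepend winning patterns so future drafts start from them."""
    if not patterns:
        return base_prompt
    learned = "\n".join(f"- {p}" for p in patterns)
    return f"Hooks that performed well recently:\n{learned}\n\n{base_prompt}"
```

In the real pipeline this would run as a Celery periodic task and persist the extracted patterns, but the loop itself is just: measure, filter, re-inject.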
The Stack
- Python 3.13, FastAPI, SQLAlchemy 2 async, Celery
- Next.js 16, TypeScript, TanStack Query
- AI quality ladder (Ollama → Haiku), RAG, pgvector
- 1Password secrets, Redis rate limiting
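For the rate-limiting piece of the stack, here is a token-bucket sketch. In the real pipeline this state lives in Redis so multiple Celery workers share one budget per platform; this in-memory version (with an assumed per-minute limit) just shows the shape:

```python
import time


class RateLimiter:
    """Token bucket; one instance per platform in this sketch."""

    def __init__(self, calls_per_minute: int):
        self.capacity = calls_per_minute
        self.tokens = float(calls_per_minute)
        self.refill_rate = calls_per_minute / 60.0  # tokens per second
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        elapsed = now - self.updated
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The Redis version would typically use INCR with an EXPIRE window (or a Lua script for atomicity), but the calling code looks the same: check `allow()` before each publish, and requeue on refusal.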
This post was cross-posted using the pipeline described above.