# Building a Philosophy-First Multi-Platform Auto-Deploy Pipeline with Obsidian, Docker, n8n, and Claude
## Overview
This article documents the architecture of, and lessons learned from, an autonomous content deployment system that takes writing from Obsidian and publishes it to five platforms simultaneously, with minimal friction and maximum intentionality.
The core insight: automation should be a result of clear design thinking, not the goal itself.
## Stack
| Layer | Tool |
|---|---|
| Writing Environment | Obsidian |
| Execution Infrastructure | Docker |
| Workflow Orchestration | n8n |
| AI Content Layer | Claude 3.5 Sonnet (Anthropic API) |
| Publishing Targets | 5 platforms (parallel) |
## Architecture: Before vs. After
### Before — Single-Task Configuration
The initial setup was a naive AI automation pipeline:
```
[Obsidian Markdown File]
        ↓
[Single n8n Workflow Node]
        ↓
[Claude API — basic prompt]
        ↓
[1 Platform]
```
Problems:
- No clear purpose driving the design
- AI was just a text transformer
- No modularity — brittle to change
- Zero parallelism
### After — Multi-Platform Deploy Configuration
After redefining the goal as "deliver written philosophy to the world without friction," the architecture evolved:
```
[Obsidian Markdown File]
        ↓
[n8n Trigger Node — file watcher / webhook]
        ↓
[Claude 3.5 Sonnet — content adaptation per platform]
        ↓  (parallel branches)
┌─────┬─────┬─────┬─────┬─────┐
[P1]  [P2]  [P3]  [P4]  [P5]
```
Key design decisions:
- Docker keeps n8n always-on, independent of local machine state
- Workflow modularization — each platform has its own isolated node chain
- Parallel execution — all 5 platforms receive adapted content simultaneously
- Claude API handles tone/format adaptation per destination (not just copy-paste)
## Docker Setup for n8n
A basic `docker-compose.yml` for a persistent n8n instance:
```yaml
version: '3.8'
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=your_user
      - N8N_BASIC_AUTH_PASSWORD=your_password
      - WEBHOOK_URL=https://your-domain.com/
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
```
Run with:
```bash
docker-compose up -d
```
This ensures n8n survives machine restarts and runs as a background service.
## n8n Workflow Design Principles
### 1. Trigger Layer
Use a Webhook node or a file watcher to detect when a new Obsidian note is ready for deployment. A simple approach: add a `publish: true` frontmatter flag to your Obsidian note and poll the vault for it.
```yaml
---
title: "My Article"
publish: true
platforms: [dev.to, paragraph, medium, note, substack]
---
```
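The trigger check can be sketched in a few lines. This is a minimal illustration, not the n8n node itself: `ready_to_publish` is a hypothetical helper that scans a note's frontmatter for the `publish: true` flag, using deliberately simple string parsing instead of a YAML library.

```python
def ready_to_publish(note_text: str) -> bool:
    """Return True if the note's YAML frontmatter contains `publish: true`."""
    lines = note_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return False  # no frontmatter block at all
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter reached without a publish flag
        key, _, value = line.partition(":")
        if key.strip() == "publish":
            return value.strip().lower() == "true"
    return False

note = """---
title: "My Article"
publish: true
platforms: [dev.to, paragraph, medium, note, substack]
---
Body text...
"""
print(ready_to_publish(note))  # True
```

An n8n Schedule Trigger (or a file-watcher script) would run a check like this over the vault directory and fire the workflow only for flagged notes.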
### 2. Content Adaptation via Claude API
Don't send the same content to every platform. Use Claude to adapt tone and format:
System prompt example:
```text
You are a content adaptation engine.
Given the source article and the target platform,
rewrite the content to match platform norms
(e.g., technical depth for DEV.to, philosophical tone for Paragraph,
concise format for Note).
Preserve all original ideas and CTAs.
```
This separates core thought from platform-specific presentation.
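In code, the adaptation step amounts to assembling one request per platform. The sketch below builds the keyword arguments for Anthropic's `messages.create` call; the platform-norm strings, helper name, and model identifier are assumptions you would tune to your own setup.

```python
# Hypothetical per-platform norms; extend with your own targets.
PLATFORM_NORMS = {
    "dev.to": "technical depth, code-friendly markdown",
    "paragraph": "philosophical, essayistic tone",
    "note": "concise, accessible format",
}

SYSTEM_PROMPT = (
    "You are a content adaptation engine. Given the source article and the "
    "target platform, rewrite the content to match platform norms. "
    "Preserve all original ideas and CTAs."
)

def build_adaptation_request(platform: str, article: str) -> dict:
    """Assemble kwargs for anthropic.Anthropic().messages.create()."""
    norms = PLATFORM_NORMS.get(platform, "general blog style")
    return {
        "model": "claude-3-5-sonnet-20241022",  # assumed model snapshot
        "max_tokens": 4096,
        "system": SYSTEM_PROMPT,
        "messages": [{
            "role": "user",
            "content": f"Target platform: {platform} ({norms})\n\n{article}",
        }],
    }

# With the `anthropic` package installed and ANTHROPIC_API_KEY set:
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_adaptation_request("dev.to", text))
#   adapted = reply.content[0].text
```

In n8n this maps to one HTTP Request (or Anthropic) node per branch, each fed the same source note but a different platform parameter.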
### 3. Parallel Execution via n8n Split
Use n8n's `Split In Batches` node or multiple parallel branches from a single node to fan out to each platform API simultaneously.
```
[Claude Output]
        ↓
[Switch Node — route by platform]
    ↓         ↓         ↓
[DEV.to] [Paragraph] [Medium] ...
```
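For intuition, the fan-out step looks like this outside n8n. `publish_to` is a stand-in for the real per-platform API calls (the HTTP Request nodes in the diagram above); only the concurrency pattern is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def publish_to(platform: str, draft: str) -> str:
    # Placeholder: a real implementation would POST to the platform's API.
    return f"{platform}: published {len(draft)} chars"

def fan_out(drafts: dict[str, str]) -> list[str]:
    """Publish every platform's adapted draft in parallel, collect results."""
    with ThreadPoolExecutor(max_workers=len(drafts)) as pool:
        futures = {p: pool.submit(publish_to, p, d) for p, d in drafts.items()}
        return [f.result() for f in futures.values()]

results = fan_out({"dev.to": "post A", "medium": "post B", "note": "post C"})
print(results)
```

Because each branch is isolated, one platform's API failure surfaces as a single failed future rather than aborting the whole deployment.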
## Key Architectural Insights
### Treat AI as a Design Element, Not an Operator
The most impactful mindset shift:
- Before: Ask AI to do tasks → accept output → post
- After: Design a workflow that includes AI as a node with defined inputs, outputs, and constraints
This transforms Claude from a chatbot into a well-defined workflow component.
### Prevent AI-Induced Thinking Atrophy
When building AI pipelines, it's easy to over-delegate. Maintain design ownership by:
- Always defining the intent of each AI node explicitly
- Asking "What's the alternative approach?" before finalizing prompts
- Keeping the system prompt logic under version control (treat it as code)
```bash
# Example: version-control your prompts
/prompts
  dev-to-adaptation.txt
  paragraph-adaptation.txt
  medium-adaptation.txt
```
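Keeping prompts in files means the workflow loads them at run time instead of hard-coding them in nodes. A small sketch, with an illustrative `load_prompt` helper and a throwaway directory standing in for the versioned `/prompts`:

```python
from pathlib import Path
import tempfile

def load_prompt(platform: str, prompt_dir: Path) -> str:
    """Read the adaptation prompt for a platform, e.g. prompts/dev-to-adaptation.txt."""
    slug = platform.replace(".", "-")  # "dev.to" -> "dev-to"
    return (prompt_dir / f"{slug}-adaptation.txt").read_text(encoding="utf-8")

# Demo with a temporary directory in place of the real repo checkout:
with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "dev-to-adaptation.txt").write_text("Adapt for DEV.to readers.")
    prompt = load_prompt("dev.to", d)
print(prompt)  # Adapt for DEV.to readers.
```

Prompt changes then flow through git review and diffs like any other code change.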
## Lessons Learned
| Insight | Detail |
|---|---|
| Automation is a result, not a goal | Design for outcome first, then automate the path |
| AI is infrastructure | Claude in this stack functions like a microservice |
| Workflow design > tool selection | Docker + n8n outperformed alternatives because the workflow design was sound |
| Modularity matters | Node-per-platform design made iteration fast and safe |
## What's Next
- Post-publish analytics automation — pull engagement data back into n8n
- Feedback loop integration — use performance data to inform Claude's adaptation prompts
- Full content-cycle autonomy: Write → Deploy → Analyze → Learn → Optimize → Re-deploy
The goal: a self-improving publishing system where each deployment cycle makes the next one more effective.
## Conclusion
The question isn't "Which AI tool should I use?"
It's "What outcome am I designing toward, and how does AI fit into that system?"
When you stop using AI and start designing with AI, the leverage becomes qualitatively different.
Hideki Tamae — Civilizational OS Designer / Limelien Inc.