Building an Autonomous Multi-Platform Content Deployment Pipeline with Obsidian, Docker, n8n, and Claude
Overview
This article documents the architectural evolution from a single-node auto-posting script to a fully modular, multi-platform content deployment system. The stack: Obsidian (authoring), Docker (runtime), n8n (workflow orchestration), and Claude 3.5 Sonnet (AI content generation and adaptation).
The end state: writing a Markdown note in Obsidian triggers automated, parallel publishing to five platforms — with zero manual intervention.
The Problem with "AI Automation as a Goal"
The initial prototype was a classic mistake: building an AI-powered automation because it was possible, not because the problem was defined.
Initial Architecture (v0):
- Single API connection to one AI model
- Single publishing destination
- Imperative, linear workflow
- No modularity
The workflow worked, but it solved nothing meaningful. The purpose was undefined.
Redefining the Goal First
Before touching any code, the objective was restated as:
Deliver written ideas to the world with zero friction.
This single constraint drove every architectural decision that followed.
System Architecture (v1 — Production)
```
[Obsidian Vault]
      |
      | File watcher / webhook trigger
      v
[n8n Workflow Engine]  ← running in Docker
      |
      |── [Claude 3.5 Sonnet API]
      |       └── Content adaptation per platform
      |
      |── Node A: Platform 1
      |── Node B: Platform 2
      |── Node C: Platform 3
      |── Node D: Platform 4
      └── Node E: Platform 5
           (parallel execution)
```
Key Components
| Component | Role |
|---|---|
| Obsidian | Markdown authoring environment |
| Docker | Isolated, persistent runtime for n8n |
| n8n | Visual workflow orchestration, webhook handling |
| Claude 3.5 Sonnet | Per-platform content rewriting and formatting |
| Platform Nodes | Independent API integrations (one node per destination) |
Docker Setup for n8n
Running n8n as a persistent service via Docker Compose:
```yaml
# docker-compose.yml
version: '3.8'
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
      - WEBHOOK_URL=http://localhost:5678/
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
```
Running persistently means the workflow is always live — no need to manually start a process before writing.
```bash
docker compose up -d
```
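The `${N8N_USER}` and `${N8N_PASSWORD}` references in the compose file can be supplied via a `.env` file placed next to it — a minimal sketch, with placeholder values you would replace:

```shell
# .env — placeholder credentials; do not commit real values
N8N_USER=admin
N8N_PASSWORD=change-me
```

Docker Compose reads this file automatically from the same directory, which keeps credentials out of the compose file itself.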
Workflow Design Principles
1. Modular Node Separation
Each publishing destination is an isolated node. This allows:
- Independent failure handling per platform
- Easy addition/removal of platforms
- Per-platform retry logic
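Per-platform retry logic can be sketched as a small wrapper around each node's publish call — a minimal illustration, where `publish` is a hypothetical stand-in for one platform's API call, not code from the actual workflow:

```python
import time

def publish_with_retry(publish, payload, attempts=3, base_delay=1.0):
    """Call one platform's publish function, retrying with exponential backoff."""
    for attempt in range(attempts):
        try:
            return publish(payload)
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Because each platform node is isolated, a failure here only retries that one destination; the other four proceed unaffected.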
2. Parallel Execution
All platform nodes execute concurrently after the AI adaptation step, reducing total deployment time from O(n) to O(1) relative to platform count.
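The fan-out can be illustrated with asyncio — a sketch in which `publish_to` is a hypothetical coroutine standing in for one platform node's API call:

```python
import asyncio

async def publish_to(platform: str, content: str) -> str:
    """Stand-in for one platform node: simulate an API call."""
    await asyncio.sleep(0.1)  # simulated network latency
    return f"{platform}: published"

async def deploy_all(content: str) -> list:
    platforms = ["Platform 1", "Platform 2", "Platform 3",
                 "Platform 4", "Platform 5"]
    # gather() runs all five requests concurrently, so wall-clock time
    # tracks the slowest platform rather than the sum of all of them
    return await asyncio.gather(*(publish_to(p, content) for p in platforms))
```

This is the same shape n8n gives you visually: one upstream output feeding five downstream nodes that run at once.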
3. AI as a Workflow Component, Not an Operator
Claude is wired into the workflow as a transformation node, not as a decision-maker. Its role:
- Receive raw Markdown content
- Receive platform-specific formatting rules as system prompt
- Output platform-adapted content
- Pass result downstream
Example n8n HTTP Request node config for Claude API:
```json
{
  "method": "POST",
  "url": "https://api.anthropic.com/v1/messages",
  "headers": {
    "x-api-key": "{{ $env.ANTHROPIC_API_KEY }}",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json"
  },
  "body": {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 4096,
    "system": "You are a content adapter. Reformat the following Markdown article for [PLATFORM_NAME]. Rules: [PLATFORM_SPECIFIC_RULES]",
    "messages": [
      {
        "role": "user",
        "content": "{{ $json.raw_markdown }}"
      }
    ]
  }
}
```
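The bracketed placeholders in the system prompt can be filled from a small, versioned rules table — a sketch in which the platform names and rules are illustrative, not taken from the original workflow:

```python
# Hypothetical per-platform rules; in practice these live under version control
PLATFORM_RULES = {
    "platform_a": "Max 280 characters. No Markdown. One hashtag.",
    "platform_b": "Keep Markdown headings. Add a one-line summary at the top.",
}

SYSTEM_TEMPLATE = (
    "You are a content adapter. Reformat the following Markdown article "
    "for {platform}. Rules: {rules}"
)

def system_prompt(platform: str) -> str:
    """Build the per-node system prompt from the versioned rules table."""
    return SYSTEM_TEMPLATE.format(platform=platform,
                                  rules=PLATFORM_RULES[platform])
```

Keeping the rules in one table (rather than hard-coded per node) makes "reproducible, versioned system prompts" a practical property instead of a slogan.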
Obsidian → n8n Trigger
Two viable approaches:
Option A: Folder-based file watcher
Use a local script (e.g., Python watchdog) to monitor a specific Obsidian folder and POST to the n8n webhook on file save.
```python
import time

import requests
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WEBHOOK_URL = "http://localhost:5678/webhook/deploy"
WATCH_DIR = "/path/to/obsidian/vault/publish/"

class MarkdownHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith(".md"):
            with open(event.src_path, "r") as f:
                content = f.read()
            requests.post(
                WEBHOOK_URL,
                json={"raw_markdown": content, "filepath": event.src_path},
            )

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(MarkdownHandler(), WATCH_DIR, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```
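One caveat: file watchers commonly fire `on_modified` more than once per save, which would POST the same note twice. A simple time-window debounce avoids this — a sketch, with an arbitrary window length:

```python
import time

class Debouncer:
    """Suppress repeat events for the same path within a short window."""
    def __init__(self, window: float = 2.0):
        self.window = window
        self._last = {}  # path -> monotonic timestamp of last accepted event

    def should_fire(self, path: str) -> bool:
        now = time.monotonic()
        last = self._last.get(path)
        if last is not None and now - last < self.window:
            return False  # duplicate event inside the window
        self._last[path] = now
        return True
```

In the handler above, the webhook POST would be guarded with `if debouncer.should_fire(event.src_path):`.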
Option B: Obsidian Shell Commands plugin
Trigger a curl POST directly from within Obsidian via a hotkey.
```bash
curl -X POST http://localhost:5678/webhook/deploy \
  -H "Content-Type: application/json" \
  -d "{\"raw_markdown\": $(jq -Rs . < "$filepath")}"
```
What "AI as Design Element" Actually Means
The architectural shift that mattered most:
| Old Mental Model | New Mental Model |
|---|---|
| Ask AI to do a task | Define what transformation AI performs in the pipeline |
| AI decides output | AI receives constraints, executes transformation, passes result |
| One-off prompts | Reproducible, versioned system prompts per node |
| AI is the product | AI is a component |
This is the difference between using AI interactively and engineering with AI.
Key Lessons
- Automation is a result, not a goal. Define the end state first; the tooling becomes obvious.
- Workflow design is the critical layer. AI capability is abundant; orchestration is the bottleneck.
- Modularity enables iteration. Isolated nodes mean you can swap platforms or AI models without rebuilding the pipeline.
- Docker + n8n is underrated for personal publishing infrastructure. Always-on, low-overhead, and visually debuggable.
Roadmap
- [ ] Automated post-publish analytics ingestion
- [ ] Feedback loop: performance data → prompt refinement
- [ ] Content optimization cycle:
Write → Publish → Analyze → Optimize → Republish
Conclusion
The question "which AI is best?" is almost always the wrong question. The right questions are:
- What is the desired end state?
- What transformation needs to happen?
- Where does AI fit as a component in that flow?
Once those are answered, the stack — Claude, GPT-4, Llama, whatever — becomes an implementation detail.
The system is the product. AI is a node in the graph.
Hideki Tamae — Civilizational OS Designer / Limelien Inc.