## The Problem: Friction Is the Enemy of Thought
Every time I finished writing a piece, I was forced to context-switch into administrator mode — reformatting Markdown for different platforms, manually translating content to English, and clicking through publishing UIs. This operational overhead was silently killing my creative momentum.
The goal: write once in Obsidian, and have an autonomous pipeline handle classification, translation, multi-platform deployment, and archival — entirely without human intervention.
## System Architecture Overview
```
[Obsidian Vault (Local)]
        │
        │  File drop trigger (n8n watches directory)
        ▼
[n8n Workflow Engine (Docker)]
        │
        ├─► Claude API: content classification (is_technical: true/false)
        │
        ├─► Route A: Paragraph.xyz ← Philosophy / Web3 content
        │     (Bilingual: English translation + Japanese original)
        │
        ├─► Route B: DEV.to ← Technical content only
        │     (English only, restructured for engineers)
        │
        └─► Archive: move .md file to /published folder
```
Stack:
- Obsidian — local-first writing environment
- n8n (self-hosted via Docker) — workflow orchestration
- Claude API (claude-opus-4-5) — AI classification + translation
- Paragraph API — Web3-native publishing
- DEV.to API — developer community publishing
## Step 1: Dockerizing n8n
Self-hosting n8n via Docker gives you full control over credentials and avoids cloud rate limits.
```yaml
# docker-compose.yml
version: '3.8'
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=yourpassword
      - WEBHOOK_URL=http://localhost:5678/
    volumes:
      - n8n_data:/home/node/.n8n
      - /path/to/obsidian/vault:/data/vault  # Mount your Obsidian vault
volumes:
  n8n_data:
```
```bash
docker-compose up -d
```

Access the n8n editor at `http://localhost:5678`.
## Step 2: The n8n Workflow — Sequential Design
This is the critical architectural decision. My first attempt used parallel branches, which caused a race condition: multiple branches tried to move the same file to the archive simultaneously, resulting in cascading 404 errors.
❌ Broken: Parallel Architecture

```
Trigger → Claude API
            ├──► POST to Paragraph ─┐
            └──► POST to DEV.to ────┴──► Move to Archive   ← RACE CONDITION
```

✅ Fixed: Sequential (Linear) Architecture

```
Trigger → Read File → Claude API → Route A (Paragraph) → Route B (DEV.to) → Move to Archive
```
By making the archive step the final node in a strictly linear chain, the file is moved exactly once, after all publishing steps are confirmed complete.
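Outside n8n, the same ordering guarantee can be sketched in plain JavaScript: each publish step is awaited before the archive step runs, so the file move can never race ahead. The function names here are illustrative placeholders, not n8n or platform APIs.

```javascript
// Sketch of the sequential pipeline: the archive step cannot start
// until every publish step has resolved. Names are hypothetical.
async function publishToParagraph(post, order) { order.push("paragraph"); }
async function publishToDevto(post, order)     { order.push("devto"); }
async function archiveFile(post, order)        { order.push("archive"); }

async function runPipeline(post) {
  const order = [];
  await publishToParagraph(post, order);  // Route A
  await publishToDevto(post, order);      // Route B
  await archiveFile(post, order);         // runs exactly once, strictly last
  return order;
}

runPipeline({ title: "demo" }).then(o => console.log(o.join(" → ")));
// → paragraph → devto → archive
```

Had the publish steps run under `Promise.all()` with an archive call in each branch, "archive" could fire twice — exactly the race the linear chain removes.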
## Step 3: The Claude API Prompt — Structured JSON Output
The single most important prompt engineering decision was enforcing strict JSON output. Any freeform text response breaks the downstream workflow.
```javascript
// n8n Code Node — Build Claude Request
const fileContent = $input.first().json.content;

const systemPrompt = `You are a global deployment agent for "Care Capitalism."
Analyze the input Japanese text and output STRICTLY valid JSON.
No text outside the JSON object.

JSON Schema:
{
  "is_technical": boolean,
  "route_a_content": "string",        // Bilingual: English first, Japanese original second
  "route_b_content": "string | null"  // English-only technical article, or null
}`;

return [{
  json: {
    model: "claude-opus-4-5",
    max_tokens: 8192,
    system: systemPrompt,
    messages: [{
      role: "user",
      content: fileContent
    }]
  }
}];
```
Key prompt engineering principles applied here:
- Assign a concrete persona to the AI ("deployment agent")
- Specify the output schema explicitly in the system prompt
- Enforce `null` for the optional field to prevent hallucinated content
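The schema contract can also be checked explicitly before routing. This validator is my own sketch, not part of the original workflow; it fails fast with a clear message instead of letting a malformed response break a downstream node.

```javascript
// Sketch: validate the shape Claude is asked to return before routing.
function validateClassification(obj) {
  if (typeof obj.is_technical !== "boolean") {
    throw new Error("is_technical must be a boolean");
  }
  if (typeof obj.route_a_content !== "string") {
    throw new Error("route_a_content must be a string");
  }
  if (obj.route_b_content !== null && typeof obj.route_b_content !== "string") {
    throw new Error("route_b_content must be a string or null");
  }
  return obj;
}

const ok = validateClassification({
  is_technical: false,
  route_a_content: "Essay text…",
  route_b_content: null  // correctly null for non-technical posts
});
console.log(ok.is_technical); // → false
```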
## Step 4: Parsing Claude's Response and Routing
```javascript
// n8n Code Node — Parse Claude Response
const rawResponse = $input.first().json.content[0].text;

// Strip markdown code fences if Claude wraps output in ```json ... ```
const cleaned = rawResponse
  .replace(/^```(?:json)?\s*\n?/, '')
  .replace(/\n?```\s*$/, '');

let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (e) {
  throw new Error(`JSON parse failed. Raw response: ${rawResponse}`);
}

return [{
  json: {
    is_technical: parsed.is_technical,
    route_a_content: parsed.route_a_content,
    route_b_content: parsed.route_b_content  // null if not technical
  }
}];
```
Downstream, use an IF node in n8n:
- Condition: `{{ $json.is_technical }} === true`
- True branch → POST to both Paragraph and DEV.to
- False branch → POST to Paragraph only
## Step 5: Posting to DEV.to API
```javascript
// n8n Code Node — build dynamic body for the DEV.to HTTP Request node
const technicalContent = $input.first().json.route_b_content;
const title = $input.first().json.title || "Untitled";

return [{
  json: {
    article: {
      title: title,
      published: true,
      body_markdown: technicalContent,
      tags: ["automation", "n8n", "docker", "ai"]
    }
  }
}];
```
DEV.to HTTP Request Node settings:
- Method:
POST - URL:
https://dev.to/api/articles - Header:
api-key: YOUR_DEVTO_API_KEY - Body: JSON (from Code Node above)
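For reference, the same call the HTTP Request node makes can be assembled in plain JavaScript. The URL, `api-key` header, and `article` body follow the DEV.to API settings above; `buildDevtoRequest` is just a helper for this sketch.

```javascript
// Sketch: assemble the DEV.to publish request without performing
// the network call. The helper name is hypothetical.
function buildDevtoRequest(apiKey, title, bodyMarkdown) {
  return {
    url: "https://dev.to/api/articles",
    options: {
      method: "POST",
      headers: {
        "api-key": apiKey,  // per-account DEV.to API key
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        article: {
          title: title,
          published: true,
          body_markdown: bodyMarkdown,
          tags: ["automation", "n8n", "docker", "ai"]
        }
      })
    }
  };
}

const req = buildDevtoRequest("YOUR_DEVTO_API_KEY", "Demo", "# Hello");
console.log(req.options.method); // → POST
// To actually publish: fetch(req.url, req.options)
```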
## Step 6: Archiving the File
Using n8n's Move Files node (or a Code Node with filesystem access via the mounted Docker volume):
```javascript
// n8n Code Node — Build archive path
const sourcePath = $input.first().json.filePath;
const fileName = sourcePath.split('/').pop();
const archivePath = `/data/vault/published/${fileName}`;

return [{ json: { sourcePath, archivePath } }];
```
This node executes only after the Paragraph and DEV.to POST nodes have returned 2xx responses, guaranteeing the file is never moved prematurely.
## Key Lessons: The Engineering Mirrors the Philosophy
| Mistake | Root Cause | Fix |
|---|---|---|
| Parallel branches → Race Condition on file move | Over-engineering, wanting to "do everything at once" | Sequential linear pipeline |
| Payload too large → API timeout | Perfectionism, sending full bilingual text everywhere | Structured JSON with concise Abstract + full body |
| Claude returning freeform text → JSON parse error | Ambiguous system prompt | Explicit schema enforcement in prompt |
The act of debugging this system was a direct mirror of the philosophical problem it was built to solve. Excess causes collapse. Subtraction creates flow.
## Result: A "Write-and-Forget" System
The final pipeline completes in under 45 seconds from file drop to multi-platform publication:
- Drop a `.md` file into the Obsidian vault folder
- n8n detects the file (polling every 30s)
- Claude classifies and generates platform-specific content
- Posts to Paragraph (always) + DEV.to (if technical)
- Moves the file to the `/published` archive
- Done. No human action required.
## Full Workflow JSON (Import into n8n)
The complete n8n workflow export is available in the companion repository. Core nodes:
- `Read Binary File` — reads the dropped Markdown file
- `HTTP Request (Claude)` — sends to Anthropic API
- `Code (Parse Response)` — JSON extraction and validation
- `IF (is_technical)` — routing logic
- `HTTP Request (Paragraph)` — Web3 publishing
- `HTTP Request (DEV.to)` — developer publishing
- `Move Files (Archive)` — final archival step
## Conclusion
If you find yourself spending more time managing the logistics of publishing than actually thinking, you're losing the war against friction. This pipeline eliminates that overhead entirely.
The Civilization OS concept that inspired this build is a reminder: the infrastructure of thought matters as much as the thought itself. Build your environment to be as frictionless as your ideas deserve.
→ Questions or improvements? Drop them in the comments. Forks welcome.