Care Capitalism - Civilization OS

Building an Autonomous Multi-Platform Publishing Pipeline with Obsidian, Docker, n8n, and Claude

As engineers, we often fall into the trap of building automation for automation's sake. This article documents how I redesigned a single-node auto-posting script into a modular, parallel, multi-platform deployment system — and the architectural decisions that made the difference.


The Problem: Single-Node, Single-Purpose

The initial setup was straightforward:

  • A single n8n workflow triggered by a file event in Obsidian
  • One API call to a single publishing platform
  • Claude used as a glorified text formatter

This worked, but it was brittle and didn't scale. More importantly, the design goal was unclear: I was automating a process without defining what outcome I actually wanted.


System Architecture Overview

```
[Obsidian Vault]
        │
        ▼
[File Watcher Trigger]
        │
        ▼
[n8n Workflow Engine] ── Docker Container (always-on)
        │
        ├──► [Claude 3.5 Sonnet] ── Content generation & platform-specific formatting
        │
        ├──► [Node: Platform A] ── API POST
        ├──► [Node: Platform B] ── API POST
        ├──► [Node: Platform C] ── API POST
        ├──► [Node: Platform D] ── API POST
        └──► [Node: Platform E] ── API POST
```

Key design principles:

  • Separation of concerns: each platform gets its own isolated node
  • Parallel execution: all platform nodes fire simultaneously
  • Single source of truth: Obsidian markdown is the canonical input

Stack

| Component | Role |
| --- | --- |
| Obsidian | Writing environment & trigger source |
| Docker | Always-on runtime for n8n |
| n8n | Workflow orchestration |
| Claude 3.5 Sonnet | Content transformation & generation |
| 5× Platform APIs | Publishing targets |

Docker Setup for n8n

Running n8n as a persistent service via Docker Compose:

```yaml
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - WEBHOOK_URL=http://localhost:5678/
    volumes:
      - n8n_data:/home/node/.n8n
      - ${OBSIDIAN_VAULT_PATH}:/obsidian:ro

volumes:
  n8n_data:
```

The Obsidian vault is mounted as a read-only volume, giving n8n direct access to markdown files without any intermediate sync step.


n8n Workflow Design

Trigger: File Watcher

A Local File Trigger node monitors the Obsidian vault for new or modified .md files matching a specific tag or folder pattern (e.g., publish: true in frontmatter).

Example frontmatter that triggers the pipeline:

```yaml
---
title: "My Article"
publish: true
platforms: ["dev.to", "paragraph", "note", "zenn", "hashnode"]
---
```
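The gate on `publish: true` can be sketched as a small parser in a Function node. This assumes standard `---`-delimited frontmatter; it is hand-rolled for illustration, and a real workflow might use a library such as gray-matter instead:

```javascript
// Sketch: extract frontmatter fields and gate the pipeline on `publish: true`.
function parseFrontmatter(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return { publish: false, platforms: [] };
  const fm = { publish: false, platforms: [] };
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    const value = line.slice(idx + 1).trim();
    if (key === 'publish') fm.publish = value === 'true';
    else if (key === 'platforms') fm.platforms = JSON.parse(value);
    else fm[key] = value.replace(/^"|"$/g, '');
  }
  return fm;
}
```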

Content Processing: Claude Integration

The n8n HTTP Request node calls the Anthropic API:

```json
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 4096,
  "messages": [
    {
      "role": "user",
      "content": "Transform the following Markdown article for [PLATFORM_NAME]. Adapt tone, formatting, and structure to fit the platform's conventions. Preserve all technical content exactly.\n\n{{ $json.fileContent }}"
    }
  ]
}
```

Each platform node passes a different system prompt, so Claude produces platform-native output rather than a one-size-fits-all article.
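Prompt assembly can be sketched as a lookup plus string interpolation. The per-platform style notes below are illustrative assumptions, not the original workflow's exact prompts:

```javascript
// Sketch: build a platform-specific prompt for the Claude HTTP Request node.
// The style descriptions are hypothetical examples.
const PLATFORM_STYLES = {
  'dev.to': 'Casual, developer-to-developer tone. Use markdown headers and code fences.',
  'zenn': 'Write in Japanese. Follow Zenn markdown conventions.',
  'hashnode': 'SEO-friendly title and intro. Keep markdown structure.',
};

function buildPrompt(platform, fileContent) {
  const style = PLATFORM_STYLES[platform] || 'Keep the original tone and structure.';
  return (
    `Transform the following Markdown article for ${platform}. ` +
    `${style} Preserve all technical content exactly.\n\n${fileContent}`
  );
}
```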

Parallel Deployment via Switch + Merge

Instead of sequential posting (which would multiply total publish time roughly five-fold and let partial failures pass unnoticed), the workflow fans out into parallel branches:

```
            [Claude Output]
                  │
                  ▼
 [Switch Node: route by platform list]
      ┌────┬─────┼─────┬────┐
      ▼    ▼     ▼     ▼    ▼
     [A]  [B]   [C]   [D]  [E]   ← Parallel HTTP POST nodes
      └────┴─────┼─────┴────┘
                 ▼
           [Merge Node]
                 │
                 ▼
        [Notification / Log]
```

Using n8n's Switch node to route based on the platforms array in frontmatter, combined with parallel branch execution, all five API calls fire simultaneously.
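In n8n the routing is configured in the Switch node's UI, but the fan-out logic is roughly equivalent to this sketch — one output item per platform, with each downstream branch filtering on `platform`:

```javascript
// Sketch: fan one article item out into per-platform items,
// mirroring the Switch-node routing on the frontmatter `platforms` array.
function fanOut(item) {
  const platforms = item.frontmatter.platforms || [];
  return platforms.map((platform) => ({
    platform,
    frontmatter: item.frontmatter,
    content: item.content,
  }));
}
```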


Platform API Integration Examples

DEV.to

```javascript
// n8n Function Node — prepare DEV.to payload
const article = {
  article: {
    title: $json.frontmatter.title,
    body_markdown: $json.claudeOutput,
    published: true,
    tags: $json.frontmatter.tags || []
  }
};

return { json: article };
```

HTTP Request node:

  • Method: POST
  • URL: https://dev.to/api/articles
  • Header: api-key: {{ $env.DEVTO_API_KEY }}
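Outside n8n, the same request can be sketched with plain `fetch` (Node 18+). The injectable `fetchImpl` parameter is purely for testability and is not part of the original workflow:

```javascript
// Sketch: the HTTP Request node's equivalent as a fetch call.
// Requires a real API key (DEVTO_API_KEY) in production.
async function postToDevTo(article, apiKey, fetchImpl = fetch) {
  const res = await fetchImpl('https://dev.to/api/articles', {
    method: 'POST',
    headers: {
      'api-key': apiKey,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ article }),
  });
  if (!res.ok) throw new Error(`DEV.to API error: ${res.status}`);
  return res.json();
}
```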

Paragraph (Web3)

```javascript
// Paragraph uses a different auth scheme
const payload = {
  title: $json.frontmatter.title,
  content: $json.claudeOutput,
  status: 'published'
};

return { json: payload };
```


Error Handling Strategy

Each platform node is wrapped in a try/catch pattern using n8n's error output pin:

```
     [Platform Node]
      │          │
   success     error
      │          │
      ▼          ▼
  [Merge]   [Error Logger]
                 │
                 ▼
         [Retry after 60s]
```

This ensures that a failure on one platform (e.g., rate limiting) does not block successful posts to the other four.
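The retry branch can be sketched as a wrapper around any platform's post function. The retry count and fixed 60-second delay mirror the diagram above; exponential backoff would be a straightforward extension:

```javascript
// Sketch: per-platform retry with a fixed delay. A failure is logged
// and retried without blocking the other platform branches.
async function postWithRetry(postFn, payload, retries = 2, delayMs = 60_000) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await postFn(payload);
    } catch (err) {
      console.error(`Attempt ${attempt + 1} failed: ${err.message}`);
      if (attempt === retries) throw err; // exhausted — surface to Error Logger
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```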


Key Architectural Lessons

1. Modularize by output, not by step

The first design had nodes organized by action (write → format → post). The refactored design organizes nodes by destination. This makes it trivial to add or remove platforms without touching shared logic.

2. AI belongs in the workflow, not around it

Treating Claude as an external tool you call manually means you bottleneck on human review. Treating it as a workflow node means it operates at machine speed, with human oversight applied at the design level (prompt engineering), not the execution level.

3. Docker persistence is non-negotiable for production

Running n8n without Docker (npx n8n) means losing all workflows on restart. The volume-mounted Docker Compose setup gives you:

  • Persistent workflow storage
  • Auto-restart on server reboot
  • Reproducible environment

4. Define the end state before the tools

The rewrite only became possible once the goal was articulated precisely:

"Writing an article should mean publishing to all platforms — no additional steps."

With that constraint defined, the architecture almost designed itself.


Metrics After Refactor

| Metric | Before | After |
| --- | --- | --- |
| Platforms per publish | 1 | 5 |
| Manual steps after writing | 4–5 | 0 |
| Total publish time | ~15 min | ~45 sec |
| Failure isolation | None | Per-platform |

What's Next

  • Analytics ingestion: pull engagement data from each platform API back into n8n, store in a local DB
  • Feedback loop: use performance data as additional context for Claude when generating future content
  • A/B variant generation: have Claude produce two headline variants and route them to different platforms, then compare CTR

The target architecture:

Write → Deploy → Measure → Learn → Optimize → Re-deploy

All automated, with human input only at the writing stage.


Conclusion

The shift that unlocked this system wasn't a better tool or a cleverer prompt. It was reframing the question from "how do I automate posting?" to "what system do I want to exist?"

Once AI (Claude) was treated as a design element — a node in a workflow with defined inputs, outputs, and responsibilities — rather than a service to call, the architecture became composable, maintainable, and fast.

If you're building content automation, start with the end state. The tools will follow.


Hideki Tamae — Civilizational OS Designer / Limelien Inc.
