hideki-tamae
Care Capitalism - Civilization OS

From Single-Node to Multi-Platform: Building an Autonomous Thought Deployment Pipeline with Obsidian, Docker, n8n, and Claude

Overview

This article documents a real architectural evolution: starting from a basic single-node AI automation script and rebuilding it into a fully modular, multi-platform content deployment system. The core stack is Obsidian + Docker + n8n + Claude 3.5 Sonnet, and the goal was to reduce the friction between writing and publishing to near zero.


The Problem with "AI Automation" as a Goal

Many engineers start building AI pipelines with the automation itself as the objective. The result is typically:

  • A single API call to an LLM
  • One output target
  • No clear definition of the end state

This is a workflow without a system. The tooling works, but the design is missing.

The turning point was redefining the goal:

Deliver written thought to the world — without friction.

Once the outcome was concrete, the architecture became obvious.


Architecture: Before vs After

Before — Single-Task Node

```
[Obsidian Vault]
        |
  [n8n trigger]
        |
  [Claude API]
        |
[1 platform POST]
```

  • Manual trigger or file-watch
  • Single HTTP request node
  • No error isolation
  • Tightly coupled

After — Modular Multi-Platform Deploy

```
[Obsidian Vault]
        |
[File Watcher / Webhook Trigger]
        |
[Preprocessor Node] — normalize frontmatter, strip local syntax
        |
[Claude 3.5 Sonnet Node] — adapt tone/format per platform
        |
[Router Node] — branch by target
   /    |    |    |    \
[P1]  [P2] [P3] [P4]  [P5]
```

Each platform node is isolated. Failures in one branch do not cascade.


Infrastructure: Running n8n on Docker

docker-compose.yml

```yaml
version: '3.8'

services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
      - WEBHOOK_URL=https://your-domain.com/
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```

Key decisions:

  • restart: always ensures the workflow engine stays live
  • Volume mount persists workflow definitions and credentials across container restarts
  • Expose via reverse proxy (nginx/Caddy) for webhook access from Obsidian

Obsidian Integration

Obsidian is the authoring environment. The trigger is either a webhook call from the Obsidian Shellcommands plugin or a folder watch via n8n's local file trigger.

Frontmatter Convention

Each note includes structured metadata:

```yaml
---
title: "Article Title"
targets: ["dev.to", "paragraph", "note", "hashnode", "substack"]
lang: ja
status: ready
---
```

The targets field controls which platform branches activate in n8n.
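As a minimal sketch of that routing step, a Code node can fan out one item per entry in `targets`, so the downstream Switch node branches on a single `platform` field. The field names here (`platform`, `frontmatter`, `content`) match the parser output described later in this article, but the helper itself is illustrative, not the exact workflow code:

```javascript
// Fan out one item per target platform. A Switch node downstream
// can then route each item on item.json.platform.
function fanOutByTarget(item) {
  const targets = item.frontmatter.targets || [];
  return targets.map(platform => ({
    json: { platform, frontmatter: item.frontmatter, content: item.content }
  }));
}

const items = fanOutByTarget({
  frontmatter: { title: 'Demo', targets: ['dev.to', 'hashnode'] },
  content: 'article body'
});
console.log(items.map(i => i.json.platform)); // → [ 'dev.to', 'hashnode' ]
```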


n8n Workflow Design

Node Structure

  1. Trigger Node — Webhook or filesystem watch
  2. Frontmatter Parser — Extract metadata using a Code node (JavaScript)
  3. Content Normalizer — Strip Obsidian-specific syntax ([[wikilinks]], local image paths)
  4. Claude API Node — HTTP Request node calling Anthropic API
  5. Router (Switch Node) — Branch per target platform
  6. Platform Nodes — Individual HTTP Request nodes per API
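The Content Normalizer step (3) can be sketched as a few regex passes in a Code node. The patterns below cover the two Obsidian constructs the article names, wikilinks and local image embeds, plus aliased wikilinks (`[[target|alias]]`), which is an assumption about common vault syntax rather than the exact production code:

```javascript
// Strip Obsidian-specific syntax before the text reaches the Claude node.
function normalizeObsidian(markdown) {
  return markdown
    .replace(/!\[\[([^\]]+)\]\]/g, '')              // drop local image embeds: ![[img.png]]
    .replace(/\[\[([^\]|]+)\|([^\]]+)\]\]/g, '$2')  // [[target|alias]] -> alias
    .replace(/\[\[([^\]]+)\]\]/g, '$1');            // [[link]] -> link text
}

const out = normalizeObsidian('See [[Note A]] and [[Note B|this post]] ![[img.png]]').trim();
console.log(out); // "See Note A and this post"
```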

Frontmatter Parser (Code Node)

```javascript
const raw = $input.first().json.body;
const lines = raw.split('\n');

let inFrontmatter = false;
let frontmatterDone = false;
const frontmatterLines = [];
const contentLines = [];

for (const line of lines) {
  // Only the first pair of '---' lines delimits frontmatter;
  // later '---' horizontal rules in the body are left alone.
  if (!frontmatterDone && line.trim() === '---') {
    if (inFrontmatter) frontmatterDone = true;
    inFrontmatter = !inFrontmatter;
    continue;
  }
  if (inFrontmatter) {
    frontmatterLines.push(line);
  } else {
    contentLines.push(line);
  }
}

const frontmatter = {};
for (const line of frontmatterLines) {
  const [key, ...rest] = line.split(':');
  if (!key.trim()) continue;
  let value = rest.join(':').trim();
  if (value.startsWith('[')) {
    // Inline arrays like targets: ["dev.to", "note"] parse as JSON
    try { value = JSON.parse(value); } catch (e) { /* keep raw string */ }
  } else {
    value = value.replace(/^"|"$/g, '');
  }
  frontmatter[key.trim()] = value;
}

return [{
  json: {
    frontmatter,
    content: contentLines.join('\n').trim()
  }
}];
```

Claude API Node (HTTP Request)

```json
{
  "method": "POST",
  "url": "https://api.anthropic.com/v1/messages",
  "headers": {
    "x-api-key": "{{ $env.ANTHROPIC_API_KEY }}",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json"
  },
  "body": {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 4096,
    "messages": [
      {
        "role": "user",
        "content": "You are a global deploy agent. Adapt the following article for the target platform: {{ $json.frontmatter.targets }}. Preserve the author's voice. Output in the appropriate format.\n\n{{ $json.content }}"
      }
    ]
  }
}
```
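The Messages API returns the adapted article inside a `content` array of typed blocks, so a small Code node after the HTTP Request can extract the text. The response shape below follows the documented `/v1/messages` format; treating the result as a single `adapted_content` string is this article's convention, sketched here:

```javascript
// Pull the adapted article text out of an Anthropic Messages API response.
// The API returns content as an array of { type, text } blocks.
function extractAdaptedContent(response) {
  return (response.content || [])
    .filter(block => block.type === 'text')
    .map(block => block.text)
    .join('\n');
}

const sample = { content: [{ type: 'text', text: 'Adapted article body' }] };
console.log(extractAdaptedContent(sample)); // "Adapted article body"
```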


Platform Node Examples

DEV.to

```json
{
  "method": "POST",
  "url": "https://dev.to/api/articles",
  "headers": {
    "api-key": "{{ $env.DEVTO_API_KEY }}",
    "content-type": "application/json"
  },
  "body": {
    "article": {
      "title": "{{ $json.frontmatter.title }}",
      "published": false,
      "body_markdown": "{{ $json.adapted_content }}",
      "tags": ["n8n", "docker", "ai", "automation"]
    }
  }
}
```

Note: `published: false` creates a draft. Manual review before final publish is recommended.


Key Design Principles

1. Separation of Concerns

Each node does one thing. Content normalization, AI adaptation, and platform posting are fully decoupled.

2. AI as a Workflow Component, Not an Operator

Claude is not making decisions about what to publish. It adapts format and tone based on explicit instructions. The human defines the routing logic.

3. Idempotent Design

Each platform node checks for duplicate titles via a pre-flight GET before POSTing, which prevents double-publishing on re-triggers.
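The title comparison itself can live in a pure helper, keeping the HTTP call (e.g. a GET to the platform's list-articles endpoint) in the platform node. This is a sketch of that duplicate check under the assumption that each platform returns articles with a `title` field:

```javascript
// Pre-flight duplicate check: skip the POST when a title already exists.
// Normalizes case and whitespace so re-triggers with minor edits still match.
function isAlreadyPublished(existingArticles, title) {
  const normalized = title.trim().toLowerCase();
  return existingArticles.some(
    a => (a.title || '').trim().toLowerCase() === normalized
  );
}

const existing = [{ title: 'My First Post' }, { title: 'Pipeline Notes' }];
console.log(isAlreadyPublished(existing, 'my first post')); // true
console.log(isAlreadyPublished(existing, 'New Article'));   // false
```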

4. Error Isolation

Each branch has its own error handler node. A failed Substack post does not block the DEV.to post.
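The same isolation pattern can be expressed in code: each platform call is wrapped individually so one failure is recorded rather than aborting the run. The `postFns` map of per-platform functions is hypothetical, standing in for the actual HTTP Request nodes:

```javascript
// Per-branch error isolation: every target gets its own try/catch,
// so a failed post is captured as a result instead of stopping the loop.
async function deployToAll(targets, postFns) {
  const results = [];
  for (const target of targets) {
    try {
      const res = await postFns[target]();
      results.push({ target, ok: true, res });
    } catch (err) {
      results.push({ target, ok: false, error: String(err) });
    }
  }
  return results;
}

deployToAll(['dev.to', 'substack'], {
  'dev.to': async () => 'posted',
  'substack': async () => { throw new Error('API timeout'); }
}).then(results => console.log(results.map(r => `${r.target}: ${r.ok}`)));
// → [ 'dev.to: true', 'substack: false' ]
```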


Lessons Learned

| Assumption | Reality |
| --- | --- |
| AI automation is the goal | Automation is the output of clear goal definition |
| More AI = better results | Workflow design quality > model choice |
| Tool comparison matters | Stack selection resolves naturally once purpose is clear |
| Docker adds complexity | Docker ensures stability; worth it from day one |

What's Next

  • Analytics ingestion: Auto-pull engagement metrics from each platform into a unified dashboard
  • Feedback loop: Feed performance data back into the Claude prompt to improve future adaptations
  • Full autonomy cycle: Write → Deploy → Analyze → Optimize → Re-distribute

Summary

The architectural shift that mattered most was not technical — it was conceptual:

From using AI as a proxy → to designing systems that include AI as a component.

Once you treat the LLM as a node in a workflow graph rather than an autonomous agent, your system becomes predictable, debuggable, and extensible.

The stack (Obsidian + Docker + n8n + Claude) is secondary. The design philosophy is primary.


Hideki Tamae — Civilizational OS Designer / Limelien Inc.
