
hideki-tamae

Care Capitalism - Civilization OS

Building a Philosophy-to-Multiplatform Auto-Deploy Pipeline with Obsidian, Docker, n8n, and Claude

Overview

This article documents the architecture and reasoning behind building an autonomous, multi-platform content deployment system — one where writing a single article in Obsidian triggers simultaneous publishing across 5 platforms, orchestrated by n8n running in Docker and powered by Claude 3.5 Sonnet.

The key insight: the goal was never automation itself — it was frictionless delivery of ideas to the world.


Stack

| Layer | Tool |
| --- | --- |
| Writing Environment | Obsidian |
| Execution Infrastructure | Docker |
| Workflow Orchestration | n8n |
| AI Content Layer | Claude 3.5 Sonnet (Anthropic API) |
| Publishing Targets | 5 platforms (parallel) |

Architecture: Before vs After

Before — Single-Task, Single-Platform

The initial setup was a straightforward AI automation pattern:

```
[Obsidian] --> [Single API Call to AI] --> [1 Platform]
```

  • Followed AI instructions passively
  • Single API connection
  • Published to one destination

Problem: The purpose was undefined. "AI automation" had become the goal itself, not a means to an end.


After — Modular Multi-Platform Deploy Pipeline

After redefining the core objective:

Deliver written ideas to the world — without friction.

The architecture evolved to:

```
[Obsidian Vault]
       |
       v
[File Watcher / Trigger]
       |
       v
[n8n Workflow Engine]
       |
       |-- [Node: Claude 3.5 Sonnet — Content Adaptation]
       |
       |-- [Node: Platform A Publisher]
       |-- [Node: Platform B Publisher]
       |-- [Node: Platform C Publisher]
       |-- [Node: Platform D Publisher]
       |-- [Node: Platform E Publisher]
```
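
The File Watcher / Trigger stage is the pipeline's entry point. A minimal sketch of the idea in Python, polling the vault by modification time (the actual trigger could equally be an n8n cron or webhook node; `./obsidian_vault` is a hypothetical path matching the volume mount used later):

```python
import time
from pathlib import Path

VAULT = Path("./obsidian_vault")  # hypothetical path; mounted read-only into n8n

def scan(vault: Path) -> dict:
    """Map each Markdown note in the vault to its last-modified timestamp."""
    return {str(p): p.stat().st_mtime for p in vault.rglob("*.md")}

def changed_notes(before: dict, after: dict) -> list:
    """Return notes that are new or modified since the previous scan."""
    return [path for path, mtime in after.items()
            if path not in before or mtime > before[path]]

def watch(vault: Path, on_change, interval: float = 5.0) -> None:
    """Poll the vault and fire the pipeline for every changed note."""
    seen = scan(vault)
    while True:
        time.sleep(interval)
        now = scan(vault)
        for path in changed_notes(seen, now):
            on_change(path)  # e.g. POST the file path to an n8n webhook
        seen = now
```

Polling is the simplest portable option; an inotify-based watcher would react faster but ties the trigger to the host OS.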


Key Technical Decisions

1. Docker for Persistent n8n Runtime

n8n runs as a persistent service via Docker Compose, ensuring the workflow engine is always available without manual startup.

```yaml
# docker-compose.yml (simplified)
version: '3.8'
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
    volumes:
      - n8n_data:/home/node/.n8n
      - ./obsidian_vault:/vault:ro

volumes:
  n8n_data:
```

Why Docker? Eliminates environment drift, enables consistent execution across machines, and allows the pipeline to run headlessly.


2. Workflow Modularization in n8n

Each publishing destination is isolated into its own node branch. This means:

  • Failures in one platform don't block others
  • Each adapter can be updated independently
  • New platforms can be added without touching existing logic

```
  [Content Prepared]
         |
   [Switch Node]
   /   |   |   |   \
 [A] [B] [C] [D] [E]
```
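
The same isolation property can be expressed outside n8n as a registry of independent publisher functions, each wrapped so that one platform's failure cannot block the rest. A sketch (platform names and `publish` signatures are hypothetical):

```python
from typing import Callable

# Each adapter is an independent unit: adding Platform F means adding one entry.
PUBLISHERS: dict = {}

def register(platform: str):
    """Decorator that plugs a publisher into the fan-out without touching existing adapters."""
    def wrap(fn: Callable):
        PUBLISHERS[platform] = fn
        return fn
    return wrap

def fan_out(content: str) -> dict:
    """Run every registered publisher; record per-platform status instead of raising."""
    results = {}
    for platform, publish in PUBLISHERS.items():
        try:
            publish(content)
            results[platform] = "ok"
        except Exception as exc:  # isolate: one platform's failure doesn't cascade
            results[platform] = f"error: {exc}"
    return results

@register("platform_a")
def publish_a(content: str) -> None:
    pass  # Platform A's API call would go here

@register("platform_b")
def publish_b(content: str) -> None:
    raise RuntimeError("rate limited")  # simulated failure for illustration
```

Here `fan_out` reports `platform_b` as failed while `platform_a` still publishes, which is exactly the behavior the isolated node branches give you in n8n.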


3. Claude 3.5 Sonnet as a Workflow Node — Not a Replacement for Thinking

The AI layer handles:

  • Tone adaptation per platform
  • Format conversion (Markdown → platform-specific markup)
  • Metadata generation (tags, summaries)

Critically, Claude is not making design decisions. It operates on defined inputs with defined output schemas. The engineer holds the system design intent.

Example n8n HTTP Request node payload to the Anthropic API:

```json
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 2048,
  "messages": [
    {
      "role": "user",
      "content": "Adapt the following article for [Platform X]. Requirements: [defined schema]. Article: {{$node['Read File'].json['content']}}"
    }
  ]
}
```


4. Parallel Execution (Multi-Deploy)

n8n supports parallel branch execution natively. All 5 platform nodes fire simultaneously after content preparation completes, minimizing total publish latency.

Result: Writing an article = publishing complete across all platforms.
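
Conceptually, the parallel fan-out n8n performs is the same as dispatching every publisher onto a thread pool and collecting all results (a sketch; the publisher functions are stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

def deploy_parallel(content: str, publishers: dict) -> dict:
    """Fire all platform publishers simultaneously; collect per-platform status."""
    def run(publish) -> str:
        try:
            publish(content)
            return "ok"
        except Exception as exc:
            return f"error: {exc}"

    # One worker per platform so total latency ~ the slowest publisher,
    # not the sum of all of them.
    with ThreadPoolExecutor(max_workers=len(publishers) or 1) as pool:
        futures = {name: pool.submit(run, fn) for name, fn in publishers.items()}
        return {name: fut.result() for name, fut in futures.items()}
```

With five publishers that each take a few seconds, this turns a sequential ~15-second deploy into roughly the duration of the single slowest call.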


The Mental Model Shift That Matters

The most impactful engineering decision wasn't technical — it was conceptual.

| Old Model | New Model |
| --- | --- |
| Delegate tasks to AI | Design systems that include AI |
| AI as operator | AI as a workflow component |
| Use AI outputs directly | AI outputs as inputs to further logic |

Once AI is treated as a design element rather than a black-box executor, the system becomes composable, auditable, and evolvable.


Preventing Cognitive Offloading

A real risk with AI-integrated pipelines: the engineer stops reasoning.

Practices adopted to counter this:

  • Never accept AI output directly — always validate against intent
  • Ask for alternatives — "What's another approach?" surfaces better solutions
  • Own the system design — AI handles execution within defined boundaries, not architecture

AI as a thinking interface, not an answer machine.
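
"Never accept AI output directly" can be partly mechanized: before anything publishes, the adapted content is checked against the defined output schema. A minimal gate, with hypothetical field names standing in for the real contract:

```python
# Hypothetical output contract for an adapted article.
REQUIRED_FIELDS = {"title": str, "body": str, "tags": list}

def validate_adaptation(output: dict) -> list:
    """Return contract violations; an empty list means the output may proceed."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in output:
            problems.append(f"missing field: {field}")
        elif not isinstance(output[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

def gate(output: dict) -> dict:
    """Fail loudly instead of silently publishing a malformed adaptation."""
    problems = validate_adaptation(output)
    if problems:
        raise ValueError("; ".join(problems))
    return output
```

A schema gate catches structural drift automatically; validating against *intent* still remains a human step, which is the point of the practice above.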


Lessons Learned

  1. Automation is a result, not a goal — define the outcome first, then automate the path to it
  2. AI is a design element — embed it into workflow graphs with defined I/O contracts
  3. Workflow design is the critical skill — the nodes matter less than the graph
  4. Docker + n8n is a strong pairing for persistent, modular content pipelines

What's Next

  • [ ] Automated analysis of post-publish metrics
  • [ ] Feedback ingestion into content generation prompts
  • [ ] Building a closed improvement loop: Write → Learn → Optimize → Redistribute

The end state: a fully autonomous content intelligence system where each publish cycle improves the next.


Conclusion

The question that unlocked this system wasn't "Which AI should I use?" — it was "What exactly do I want to achieve?"

The technical stack follows naturally from a clear objective. Docker gives you persistence. n8n gives you composability. Claude gives you adaptable content transformation. But the architecture — the design — that's yours to own.


Hideki Tamae — Civilizational OS Designer / Limelien Inc.
