
hideki-tamae

Care Capitalism - Civilization OS

I Automated My Writing Pipeline with Obsidian, Docker, and n8n — Here's What the System Taught Me About My Own Architecture

I've been developing a framework I call "Civilization OS" — an ongoing attempt to articulate and systematize ideas about how societies and individuals operate. The writing itself was never the bottleneck. The bottleneck was deployment: the manual, energy-draining process of taking a finished piece and adapting it for each platform.

So I built a pipeline. And the process of building it turned into something I didn't expect: a technical mirror that showed me exactly where my own thinking was broken.


The Initial Architecture (and Why It Failed)

The Ambition: 5 Platforms, 1 Trigger

My initial design was aggressive. One save in Obsidian would trigger a full n8n workflow deploying simultaneously to:

  • Zenn
  • DEV.to
  • Paragraph
  • Two additional platforms

Each node in the workflow handled a different platform's API spec, prompt tuning for Claude, and payload formatting. On paper, it was elegant. In practice, it was a system designed to fail.

Why It Broke

The failure modes were predictable in hindsight:

  1. API spec divergence — Each platform has different authentication schemes, content field structures, and rate limits. Managing five simultaneously meant five independent failure surfaces.
  2. Prompt instability — Claude's output formatting needed to be tuned per-platform. With five targets, prompt drift caused inconsistent outputs that broke downstream nodes.
  3. Complexity cascades — A single malformed response from Claude could propagate errors across all five branches simultaneously. Debugging required tracing through dozens of interdependent nodes.

The core insight: I had built my own cognitive pattern into the system. My tendency to want "everything, all at once" was instantiated directly in the workflow graph. The system's brittleness was a structural reflection of my thinking.


The Redesign: Subtract Until It's Stable

Migration to Local Docker

Before rethinking the logic, I needed to stabilize the execution environment. I migrated from a cloud-hosted n8n instance to a local Docker setup. This gave me:

  • Full control over the runtime
  • Faster iteration cycles for debugging
  • No unexpected cloud-side timeouts or resource limits

Basic docker-compose setup:

```yaml
version: '3.8'
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - ~/.n8n:/home/node/.n8n
      - /path/to/obsidian/vault:/data/obsidian
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=yourpassword
```

Mounting the Obsidian vault directly into the container means n8n can watch for file changes via a filesystem trigger without any intermediary sync service.

The Two-Route Architecture

I collapsed five platforms into two semantically distinct routes:

Route A — Technical Content

  • Targets: DEV.to, Zenn
  • Trigger condition: Claude classifies the article as technical (contains code, system architecture, implementation specifics)

Route B — Philosophical Content

  • Target: Paragraph
  • Trigger condition: Claude classifies the article as conceptual/philosophical

This single architectural decision — routing by content semantics rather than duplicating logic per platform — reduced the workflow node count by roughly 60% and eliminated the vast majority of error surfaces.


The Classification Layer: Claude as a Router

The pivot point of the entire system is a single Claude API call that acts as a semantic classifier and router.

The Classification Prompt

```
You are a content routing agent. Read the following article and determine its primary category.

Respond with ONLY a valid JSON object in this exact format:
{
  "category": "technical" | "philosophical",
  "confidence": 0.0-1.0,
  "routing_reason": "one sentence explanation"
}

Classify as "technical" if the article contains: source code, system architecture descriptions, specific implementation methods, API usage, or infrastructure configuration.

Classify as "philosophical" if the article primarily contains: conceptual frameworks, social theory, personal reflection, or abstract ideas without concrete implementation details.

Article:
{{$json["content"]}}
```

n8n Routing Logic

After the Claude node, an IF node branches the flow:

```javascript
// IF node condition
{{ $json.category === 'technical' }}
// True branch → Route A (DEV.to / Zenn)
// False branch → Route B (Paragraph)
```

This is the entire routing mechanism. One API call. One conditional. Two branches.
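In practice, the reply deserves defensive handling before it reaches that IF node, since a malformed reply is exactly the kind of thing that cascaded through the old five-branch design. A minimal Code-node sketch — the fall-back-to-Route-B default is my own choice here, not part of the original workflow:

```javascript
// Parse the classifier's raw reply and return a routing category.
// Malformed JSON or an unknown category falls back to 'philosophical'
// (Route B) instead of crashing the workflow.
function routeFromClassifier(rawReply) {
  try {
    // Strip any accidental prose or code fences around the JSON object.
    const match = rawReply.match(/\{[\s\S]*\}/);
    const parsed = JSON.parse(match ? match[0] : rawReply);
    if (parsed.category === 'technical' || parsed.category === 'philosophical') {
      return parsed.category;
    }
  } catch (e) {
    // Fall through to the default route.
  }
  return 'philosophical';
}
```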


The Payload Problem: When "Complete" Becomes the Enemy

The Timeout on Paragraph

Even after simplifying to two routes, Route B hit a consistent failure: timeout errors when posting to the Paragraph API.

The root cause: I was generating full bilingual content — complete English body + complete Japanese body — and attempting to post both. This resulted in:

  • Claude generating ~4,000-6,000 tokens per article (double the usual)
  • Paragraph API payload sizes exceeding practical limits
  • n8n HTTP request nodes timing out at the default 30-second limit
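A cheap guard that surfaces this failure before the HTTP node fires is a pre-flight size check in a Code node. A sketch — the 100 KB ceiling is an illustrative number, not a documented Paragraph limit:

```javascript
// Reject oversized payloads before the HTTP request node ever runs,
// turning a silent 30-second timeout into an immediate, legible error.
const MAX_PAYLOAD_BYTES = 100 * 1024; // illustrative ceiling

function assertPayloadFits(payload) {
  const bytes = Buffer.byteLength(JSON.stringify(payload), 'utf8');
  if (bytes > MAX_PAYLOAD_BYTES) {
    throw new Error(`Payload is ${bytes} bytes, over the ${MAX_PAYLOAD_BYTES}-byte ceiling`);
  }
  return bytes;
}
```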

The Solution: English Primary + Japanese Abstract

I restructured the content generation prompt for Route B:

```
Generate content for a global Web3 audience on Paragraph.

Structure:
1. Full article body in English (target: 600-900 words)
2. A refined Japanese abstract (要約) of 150-200 characters to prepend for Japanese readers

Do NOT generate a full Japanese translation of the body.

Output format:
{
  "english_body": "...",
  "japanese_abstract": "...",
  "title_en": "..."
}
```

Result:

  • Token usage dropped ~45%
  • Payload size dropped below Paragraph's practical limits
  • Zero timeouts since implementation
  • Improved UX: Japanese readers get a clean abstract, global readers get an uninterrupted English piece
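On the consumption side, that structured output gets assembled into the final post in a small Code node. A sketch using the field names from the output format above — the separator between abstract and body is my own choice, and the length check only enforces the upper bound of the 150-200 character target:

```javascript
// Build the final Paragraph post: Japanese abstract prepended
// to the full English body, with a guard on abstract length.
function assembleParagraphPost(output) {
  // Spread into code points so multi-byte Japanese counts correctly.
  const abstractLen = [...output.japanese_abstract].length;
  if (abstractLen > 200) {
    throw new Error(`Japanese abstract is ${abstractLen} chars, over the 200-char target`);
  }
  return {
    title: output.title_en,
    body: `${output.japanese_abstract}\n\n---\n\n${output.english_body}`,
  };
}
```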

Fail-Safe Design: The File Movement Mechanism

One of the most important additions to the pipeline is what happens when things go wrong.

The Folder Structure

```
obsidian-vault/
├── _deploy/   # Drop files here to trigger the pipeline
├── _archive/  # Successfully deployed files land here
└── _failed/   # Files that errored stay here for review
```

The Trigger and Movement Logic

n8n watches _deploy/ for new .md files using a filesystem trigger. After the workflow completes:

On success: Move the file from _deploy/ to _archive/

```javascript
// Execute Command node (success branch)
const { execSync } = require('child_process');
const filename = $json["filename"];
execSync(`mv /data/obsidian/_deploy/${filename} /data/obsidian/_archive/${filename}`);
```

On error: Move the file from _deploy/ to _failed/ and log the error

```javascript
// Execute Command node (error branch)
const { execSync } = require('child_process');
const filename = $json["filename"];
const timestamp = new Date().toISOString();
execSync(`mv /data/obsidian/_deploy/${filename} /data/obsidian/_failed/${timestamp}_${filename}`);
```

The psychological effect of this mechanism is significant: watching the file disappear from _deploy/ is a confirmation signal. The system communicates its own success through the filesystem. No notification needed.


Key Takeaways for Engineers Building Similar Systems

1. Your system architecture is a cognitive self-portrait

The way you structure a workflow reveals how you think. Bloated, overly connected node graphs often indicate a desire to solve everything at once. Identify that pattern early.

2. Semantic routing is more maintainable than platform-specific branching

Instead of one branch per platform, route by content type. Add new platforms within the appropriate branch rather than adding new top-level branches.

3. LLM calls are not free — design for token economy

Every unnecessary token in a prompt is a latency cost and a reliability risk. "Complete" bilingual output sounds comprehensive; it's actually a timeout waiting to happen.

4. Fail-safes should be visible

If an error moves a file somewhere you can't see, you'll forget about it. Make failure states as visible as success states.

5. Subtract before you optimize

Before tuning prompts or adding retry logic, ask whether the component should exist at all. In my case, removing three platforms was more effective than any amount of error handling.


The Stack (Summary)

| Component | Role |
| --- | --- |
| Obsidian | Writing environment + deployment trigger (filesystem) |
| Docker (local) | Stable, controlled execution environment |
| n8n | Workflow orchestration + routing logic |
| Claude API | Content classification + platform-specific rewriting |
| DEV.to API | Technical content publishing |
| Zenn | Technical content publishing (JP) |
| Paragraph API | Philosophical/conceptual content publishing |

The pipeline is running. This article passed through it.

If you're building something similar — a writing automation system, a content routing layer, or any AI-mediated publishing pipeline — the most important architectural question isn't "how do I add more platforms?" It's "what do I remove to make this stable enough to trust?"
