DEV Community

Phil Rentier Digital

Posted on • Originally published at rentierdigital.xyz

One Open-Source Repo Turned Claude Code Into an n8n Architect — And n8n Has Never Been More Useful

Last Tuesday at 11 PM I watched Claude Code erase 140 product variants from a production e-commerce store. Not maliciously. Not even incorrectly, technically.

The workflow I'd asked it to fix was sending a PUT request to PrestaShop's XML API with only 4 fields. PrestaShop interpreted that as "set everything else to empty." References gone. Barcodes gone. 140 combinations silently wiped while I was comparing execution logs in another terminal tab.

TL;DR: n8n isn't dying. Manual workflow building is dying. One open-source repo (czlonkowski/n8n-mcp) turned Claude Code into an n8n architect. I've used it on a real 55-node production pipeline, and separately disabled my own AI agent because it burned tokens for nothing. Below: what actually works, what breaks, and a framework for when to use which tool.

*[Image: rentierdigital logo watermark]*

*When your logo is more reliable than your AI agent.*

Fifteen minutes later, Claude Code had also diagnosed the root cause, built the fix (a GET-before-PUT pattern across 4 new nodes), deployed all 12 changes in a single atomic API call, and verified the repair by pulling the PrestaShop API state before and after.

The same investigation in the n8n UI would have taken me north of an hour. I know because I've done it before, clicking through 55 nodes, switching between browser tabs, manually exporting JSON, eyeballing execution data.

Two weeks earlier, I had disabled my own AI agent. Pulled the plug on OpenClaw, the multi-model system I'd been building and writing about for months. The ratio of tokens spent to value produced was negative. The agent kept spending cycles trying to update itself instead of doing actual work.

So now I had a paradox sitting in my terminal: an AI agent I built myself that couldn't justify its own electricity bill, and an AI-assisted workflow system that just saved a production database in 15 minutes flat.

That paradox is basically the whole n8n debate right now.

The Repo That Changed n8n's Job Description

Three months ago I asked Claude Code to build me an n8n workflow.

It invented a node called `scheduleTrigger`. That node doesn't exist. The real one is called `schedule`. I spent 20 minutes debugging an import error on a JSON file that looked perfectly valid because every property name was almost right. Close enough to import, wrong enough to break. Claude was so confident about it too. Peak hallucination energy. "Here's your workflow" and the JSON looks beautiful until you realize half the node types are fan fiction.

I stopped asking Claude Code to touch n8n after that.

Then a developer named czlonkowski published an open-source MCP server. And the next time I asked Claude Code to build a workflow, it knew every node name, every property schema, every IF-branch connection syntax. No hallucination. No guessing.

The debate online about what happened next goes like this: Claude Code is eating n8n's lunch. Google Trends shows it overtaking n8n in search interest. YouTube creators who built audiences on n8n tutorials are watching their views drop. Forum threads titled "Is n8n Dead?" pop up every other week. The consensus take is always the same lukewarm thing: "they're complementary tools for different skill levels."

That consensus is accurate and completely unhelpful.

The actual shift is more specific. czlonkowski's n8n-mcp server (on GitHub) gives Claude Code structured access to 1,084 n8n nodes, 2,600+ real-world configurations extracted from templates, and validation rules that catch mistakes before deployment. A companion project adds 7 Claude Code skills covering expression syntax, architecture patterns, and debugging.

n8n shipped their own official MCP too (Settings > Instance-level MCP), but theirs is deliberately more limited: trigger and query existing workflows, not build new ones. Security-conscious choice, not a weakness.
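For context, wiring czlonkowski's server into Claude Code is a single MCP server entry. Roughly, following the pattern in the repo's README (treat this as a sketch and check the repo for the exact command and environment variable names, which may have changed):

```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "npx",
      "args": ["n8n-mcp"],
      "env": {
        "N8N_API_URL": "https://your-n8n-instance.example.com",
        "N8N_API_KEY": "your-n8n-api-key"
      }
    }
  }
}
```

The API key is what grants the full read/write access discussed below, so scope it to a staging instance first if you can.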

So here's what happened: n8n's visual builder didn't die. It changed jobs. It used to be the construction tool. Now it's the monitoring dashboard. Claude Code builds, n8n runs.

But building workflows faster is only half of it. I have 15 workflows running on my n8n instance. Last month I opened one I hadn't touched in 3 months and had no idea what the middle 20 nodes did. One MCP call and Claude Code dropped sticky notes on every node explaining what it does, what it expects, what breaks if you change it. That alone would have been worth the setup. But then I realized I could do this after every modification.

The thing that edits the workflow also documents the change and commits it to git with a message that actually describes what happened. Across my three debugging sessions, that meant 30 commits with full rollback capability, zero manual export/import clicks. Try doing that in the n8n UI.

Fair warning: this setup means you're handing an API token to an AI that gets full read/write access to your n8n instance. I've done it on staging and production. I'm not going to pretend that's risk-free. On one of my workflows, Claude Code could see the PrestaShop API key sitting in plain text inside a Parameters node. Useful for direct curl verification. Terrible for credential hygiene. Security people reading this just felt a disturbance in the Force. Sorry. The MCP layer has zero credential scoping today.

I argued last month that CLIs still beat MCP for most AI agent tooling. For general-purpose agents, I still think that's true. But for a specific, well-documented platform like n8n? The structured MCP approach wins. czlonkowski proved it.

Three Moments From a Real Production Debug

I run an e-commerce pipeline for redactedshop.com, a European outdoor retail site on PrestaShop 8.2.1. A supplier sends a CSV catalog via FTP. An n8n pipeline reads the CSV, parses it into Google Sheets (with AI-assisted variant detection via Gemini), then creates and updates ~265 products in PrestaShop including prices, stocks, images, categories, and translations in 3 languages. Two main workflows, six sub-workflows, 55 nodes total.

Over three debugging sessions across two weeks, the combo of Claude Code + n8n-MCP handled problems that ranged from trivial to "how did this ever work." Three moments stood out because each proves a different point.

The Silent Wipe

The *update variants prix* node was sending a PUT to `/api/combinations/{id}` with only `id`, `id_product`, `minimal_quantity`, and `price`. PrestaShop's API treats a partial PUT as "replace the entire resource." Ugh. Every field you don't send gets blanked. This had silently erased reference codes and EAN barcodes on 140 product variants. A second node had the same pattern AND was set to `executeOnce: true`, meaning only the first product ever got updated. The rest were skipped entirely.

Claude Code pulled the execution data via MCP, extracted a specific combination ID, did a GET on the PrestaShop API to see the damage, built 4 new Code nodes implementing the GET-before-PUT pattern, rewired 4 connections on IF branches, removed the `executeOnce` flag, and deployed all 12 changes in one `n8n_update_partial_workflow` call. In the n8n UI, that sequence means: create node, position it, fill 6+ config fields, drag connection wires, repeat four times, then manually verify. Realistically 20 minutes of careful clicking. Done in one API call, verified in 3 minutes.
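The pattern itself is simple enough to sketch in plain JavaScript. This is a minimal, hypothetical illustration, not the actual Code node: the field names loosely mirror a PrestaShop combination, and `buildFullPut` stands in for the real fetch-and-serialize logic.

```javascript
// GET-before-PUT: never send a partial resource to an API that treats
// PUT as full replacement. Merge your changes into the current state
// first, then PUT the whole object back.
function buildFullPut(current, changes) {
  // Start from everything the API currently holds (reference, ean13, ...),
  // then overlay only the fields we actually intend to change.
  return { ...current, ...changes };
}

// Example: current state as returned by GET /api/combinations/{id}
const current = {
  id: 4321,
  id_product: 32210,
  reference: 'JKT-BLU-M',       // the field the partial PUT was wiping
  ean13: '4006381333931',       // this one too
  minimal_quantity: 1,
  price: '19.900000',
};

// The buggy node only sent the price-related fields; all others got blanked.
const changes = { price: '21.500000', minimal_quantity: 2 };

const body = buildFullPut(current, changes);
console.log(body.reference); // preserved instead of wiped
```

The whole fix is one object spread; the expensive part was diagnosing that the API's PUT semantics demanded it.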

That was the expensive bug. The next one was cheap and infuriating.

The Type That Didn't Exist

Google Sheets returns `product_id: 32210` (a number) when a cell has content, but `product_id: ""` (an empty string) when it doesn't. The n8n IF node's `exists` operator silently passes empty strings, because an empty string exists. It's just empty.

JavaScript type coercion strikes again. The language where `[] == false` is `true`, yet `[] ? true : false` also returns `true`. Google Sheets just made it worse.

Took 4 commits across the session to converge on a working filter: first `exists` failed, then `isNotEmpty` choked on numbers, finally `gt 0` with `typeValidation: loose` worked.

The MCP execution inspector made this diagnosable. It showed the exact data structure coming out of each node, which immediately revealed the mixed-type issue. Without programmatic access to execution data, I'd have been clicking through the n8n UI execution inspector node by node, squinting at JSON output panels. Still, Claude Code's first three attempts at the IF node config were wrong. The mixed-type behavior from Google Sheets isn't documented anywhere. Had to be discovered empirically. Four commits to fix a filter. That's the real pace.
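If you want to see the trap in isolation, here's a minimal reproduction. The filter functions are my own stand-ins for the IF node's behavior, not n8n internals:

```javascript
// Google Sheets returns a number when a cell is filled and an empty
// string when it isn't, so both shapes reach the IF node.
const rows = [
  { product_id: 32210 },  // filled cell -> number
  { product_id: '' },     // empty cell  -> empty string
];

// What an "exists"-style check effectively tests: the key is present.
// Both rows pass, which is the silent bug.
const existsFilter = rows.filter(r => r.product_id !== undefined);

// What finally worked (gt 0 with loose type validation): coerce first,
// then compare. Number('') is 0, so empty cells fail the > 0 check.
const gtZeroFilter = rows.filter(r => Number(r.product_id) > 0);

console.log(existsFilter.length); // 2 — empty string sneaks through
console.log(gtZeroFilter.length); // 1 — only the real product id
```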

Both of those bugs had clean endings. The third one didn't.

The Test That Wasn't

After fixing the variant update path, I needed to validate the simple product update path too. Problem: no simple product existed in the current Google Sheet dataset. Claude Code's solution was pragmatic: run the Code node's JavaScript locally via `node -e`, then do a manual PUT via `curl` on a known product to verify the XML transformation preserved all fields.

This validates the regex and XML logic. It does NOT validate the n8n expression references, the node chaining, or the `continueOnFail` behavior. The fix went to production partially untested. I know it. Claude Code flagged it. A full end-to-end test would require injecting a simple product row into the Google Sheet, and I haven't done that yet.
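The local check itself is worth showing. Here's a toy version of the idea, assuming a hypothetical XML shape (these tag names are illustrative, not the full PrestaShop schema): compare the tags present in the GET response against the PUT body you're about to send, and anything missing is a field you're about to wipe.

```javascript
// Field-preservation sanity check: every tag present in the GET response
// should still appear in the PUT body. Crude regex parsing is fine here
// because we only care about tag names, not values.
function missingFields(getXml, putXml) {
  const tags = [...getXml.matchAll(/<([a-z_0-9]+)>/g)].map(m => m[1]);
  return [...new Set(tags)].filter(tag => !putXml.includes(`<${tag}>`));
}

const getXml =
  '<combination><id>4321</id><reference>JKT-BLU-M</reference>' +
  '<ean13>4006381333931</ean13><price>19.90</price></combination>';

// A partial PUT body like the buggy node produced:
const badPut = '<combination><id>4321</id><price>21.50</price></combination>';

console.log(missingFields(getXml, badPut)); // the fields that would be blanked
```

Run this before the `curl` PUT and you catch the silent wipe on your laptop instead of in production.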

I'm telling you this because every other article about Claude Code + n8n shows you the happy path. The real path includes a fix that went live without a proper integration test, and a Code node that crashed on first execution because Claude Code followed its own skill documentation (which recommended array-wrapped returns) while n8n's runtime validator in `runOnceForEachItem` mode rejects arrays.
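The mode mismatch is easy to illustrate. In per-item mode the Code node must return a single item object; the array-wrapped style belongs to run-once-for-all-items mode. Here's a toy validator that mimics the rule — my own sketch, not n8n's actual implementation:

```javascript
// Toy version of the rule the skill docs got wrong: per-item mode wants
// one item object back, not an array of items.
function checkEachItemReturn(value) {
  if (Array.isArray(value)) {
    throw new Error('runOnceForEachItem: return an object, not an array');
  }
  return value;
}

// What per-item mode accepts:
const ok = checkEachItemReturn({ json: { sku: 'JKT-BLU-M' } });

// What the skill documentation recommended, and what crashed on first run:
try {
  checkEachItemReturn([{ json: { sku: 'JKT-BLU-M' } }]);
} catch (e) {
  console.log(e.message);
}
```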

The smartest tool in the room still needs a failed execution to learn the room's rules.

Why I Disabled My Own AI Agent

That last section might sound like I'm negative on the combo. I'm not. I'm negative on the version where every demo ends at the happy path and nobody shows the Code node that crashed on first run. That gap between demo and production, when you take it further, is exactly what killed my agent.

I built OpenClaw, a multi-model AI agent running on a $5 VPS. Kimi K2.5 as primary model, MiniMax M2.5 as fallback. The original version ran on Claude's OAuth, which Anthropic killed and forced everyone to paid API keys. That's when the economics shifted from "cheap experiment" to "real budget decision."

Peter Steinberger argued you should use frontier models or nothing. He's probably right on the technical side: most of OpenClaw's issues came from models that weren't smart enough to reliably self-update and self-repair. A frontier model would likely solve that. But Steinberger joined OpenAI shortly after saying this, and frontier API prices for an unsupervised agent can hit $200/day. For the rest of us running real workloads on real budgets, being right doesn't make it affordable.

I deployed OpenClaw for months on the cheaper models. Then I turned it off. Like unplugging a Roomba that keeps bumping into the same wall, only with API costs.

The ratio was honest and brutal: tokens spent divided by value produced was negative. The agent kept consuming cycles trying to update itself, fix its own tooling, restart failed processes. The fix exists: smarter models. The budget for them doesn't. What the space actually needs is an intelligent CRON system and real self-tooling capabilities that work without burning frontier-tier tokens. I'm considering building that myself. But that's a different article.

Meanwhile, my n8n workflows have been running for months. Webhooks fire. CRONs execute. Products update. Stocks sync. Notifications send. Zero intervention. Zero token cost beyond the initial build. The execution logs are auditable, the error handling is deterministic, the hosting is stable.

80% of business automation is boring, and nobody in the "AI agents will replace everything" camp wants to hear that. It's CRON jobs that fire at 6 AM, webhooks that catch Stripe events, IF/ELSE branches that route data to the right endpoint. You don't need an agent that "reasons" about this. You need a pipeline that executes reliably and costs nothing to run.

n8n does that.

Claude Code + MCP makes building it dramatically faster. Actually, let me put a number on "dramatically": the PrestaShop debug that would have cost me an afternoon was done in 15 minutes. My stack runs on a Claude Max subscription and a self-hosted n8n instance on a $5 VPS. Not cheap, but compared to frontier API costs for an unsupervised agent running 24/7, it's a rounding error. And that combination covers 90% of the use cases that people think they need autonomous agents for.

Agents aren't useless. They're just rare. When you genuinely need dynamic reasoning that can't be reduced to a decision tree, that's agent territory. Everything else is a workflow with extra steps and a bigger bill.

I built the agent. I wrote the articles. I pulled the plug.

The Framework: When to Use What

After running both in production and killing one of them, this is how I decide.

n8n alone handles more than you think. Stable, repetitive processes. Webhooks, scheduled CRONs, data sync between two APIs, notification pipelines. When a non-technical person needs to understand what's running. When you need audit trails and execution logs. When uptime matters more than flexibility. My 5 SaaS products run on n8n alone for roughly 80% of their automation.

Claude Code alone is the right tool when things don't repeat. Quick script to transform a dataset. Prototype you'll throw away next week. Custom logic so specific that wiring it into n8n would be overengineering. Exploration mode, not deployment mode.

Now the interesting part. If you're about to spend 45 minutes dragging nodes in the n8n UI for a workflow you already have mapped in your head, stop. That's Claude Code + n8n via MCP territory. Debugging complex pipelines where you need to cross-reference execution data with external API state. Migrating or refactoring existing workflows. Auto-documenting a dozen workflows you haven't touched in months. And increasingly, building real applications with n8n as the backend instead of duct-taping Airtable and Google Sheets into something that pretends to be an interface. I haven't shipped a full front-end this way yet, but the architecture is obvious: Claude Code scaffolds a real UI that talks to n8n webhooks. Proper app with a proper backend. That's next on my list.
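The n8n-as-backend idea needs almost no glue code, which is exactly why it's tempting. A hypothetical sketch (the URL, payload shape, and response are all made up; the real contract is whatever your Webhook and Respond to Webhook nodes define):

```javascript
// Front-end call into an n8n Webhook node. The workflow behind the
// webhook does the actual work: validation, PrestaShop calls, whatever.
async function submitOrder(payload) {
  const res = await fetch('https://n8n.example.com/webhook/create-order', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Webhook returned ${res.status}`);
  // Whatever the workflow's Respond to Webhook node sends back.
  return res.json();
}
```

That's the entire "backend client." The state machine, retries, and integrations live in n8n where they're visible and auditable.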

And then there's autonomous AI agents. The right call when the task requires reasoning that genuinely can't be reduced to IF/ELSE. When the input is unpredictable enough that you can't pre-map the decision tree. When the cost of tokens is justified by the value of the output. In my experience across 5 SaaS products, this covers maybe 10% of automation tasks. The other 90% are workflows wearing agent costumes. 💀 I say this as someone who dressed his agent in the fanciest costume. It still couldn't dance.

The same discipline applies here as everywhere: define the contract before you let the AI execute. Whether it's a prompt contract for Claude Code or a workflow spec for n8n, the pattern is identical. Clarity upfront, execution after.

Last thing. czlonkowski maintains n8n-mcp against every new n8n release. Open-source, MIT license, no paywall. One person built the thing that made this entire article possible. Go star the repo. That's the least we owe him.

*[Image: Claude Code building an n8n workflow via the MCP server]*

*Claude finally learned n8n nodes aren't just creative writing prompts.*


The repo: https://github.com/czlonkowski/n8n-mcp


If this saved you from rebuilding something that already works or from buying into an agent you don't need, I write one of these per week. Workflow automation, Claude Code in production, and the stuff that breaks when tutorials end. Subscribe if that's useful to you.

That cover image? AI did it. I peaked creatively at CSS gradients in 2017 and my artistic career has been in managed decline ever since.
