<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Artyom Rabzonov</title>
    <description>The latest articles on DEV Community by Artyom Rabzonov (@ratamaha).</description>
    <link>https://dev.to/ratamaha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3927312%2Fde890666-7853-4ffc-9962-6d153aaee97a.jpg</url>
      <title>DEV Community: Artyom Rabzonov</title>
      <link>https://dev.to/ratamaha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ratamaha"/>
    <language>en</language>
    <item>
      <title>Claude Code MCP Server Setup on Windows</title>
      <dc:creator>Artyom Rabzonov</dc:creator>
      <pubDate>Thu, 14 May 2026 10:59:46 +0000</pubDate>
      <link>https://dev.to/ratamaha/claude-code-mcp-server-setup-on-windows-4clo</link>
      <guid>https://dev.to/ratamaha/claude-code-mcp-server-setup-on-windows-4clo</guid>
      <description>&lt;h2&gt;
  
  
  How to Set Up an MCP Server in Claude Code on Windows
&lt;/h2&gt;

&lt;p&gt;Add HTTP, SSE, and stdio MCP servers to Claude Code on Windows. The &lt;code&gt;cmd /c npx&lt;/code&gt; wrapper, the three scopes, and the errors you hit if you skip the wrapper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Use &lt;code&gt;claude mcp add&lt;/code&gt; for remote HTTP servers without modification. Wrap any stdio server running through &lt;code&gt;npx&lt;/code&gt; with &lt;code&gt;cmd /c&lt;/code&gt;. Without the wrapper, npx fails to spawn and the server appears in &lt;code&gt;claude mcp list&lt;/code&gt; but never connects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code installed.&lt;/strong&gt; Run &lt;code&gt;claude --version&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js 18 or newer&lt;/strong&gt; on PATH if you plan to run stdio servers via npx.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A working terminal.&lt;/strong&gt; Windows PowerShell, Windows Terminal, or Git Bash. WSL is not required.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pick a Scope Before You Add a Server
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scope&lt;/th&gt;
&lt;th&gt;Loads in&lt;/th&gt;
&lt;th&gt;Shared with team&lt;/th&gt;
&lt;th&gt;Stored in&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;local (default)&lt;/td&gt;
&lt;td&gt;Current project only&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;~/.claude.json&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;project&lt;/td&gt;
&lt;td&gt;Current project only&lt;/td&gt;
&lt;td&gt;Yes, via Git&lt;/td&gt;
&lt;td&gt;.mcp.json in project root&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;user&lt;/td&gt;
&lt;td&gt;All your projects&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;~/.claude.json&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
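&lt;p&gt;The scope is a flag on &lt;code&gt;claude mcp add&lt;/code&gt;. A sketch, assuming the current CLI's &lt;code&gt;--scope&lt;/code&gt; flag (local is the default, so it needs no flag):&lt;/p&gt;

```
# Project scope: lands in .mcp.json at the repo root, shared via Git.
claude mcp add --scope project --transport http sentry https://mcp.sentry.dev/mcp

# User scope: available in all your projects, stored in ~/.claude.json.
claude mcp add --scope user --transport http sentry https://mcp.sentry.dev/mcp
```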

&lt;h2&gt;
  
  
  Option 1: Add a Remote HTTP Server (No Windows-Specific Quirks)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add &lt;span class="nt"&gt;--transport&lt;/span&gt; http sentry https://mcp.sentry.dev/mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For servers that authenticate with a token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add &lt;span class="nt"&gt;--transport&lt;/span&gt; http github https://api.githubcopilot.com/mcp/ &lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer YOUR_GITHUB_PAT"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Option 2: Add a Local Stdio Server (Where Windows Differs)
&lt;/h2&gt;

&lt;p&gt;On Windows, &lt;code&gt;npx&lt;/code&gt; resolves to &lt;code&gt;npx.cmd&lt;/code&gt;, a batch script that the Node child-process spawn inside Claude Code cannot invoke directly. The fix is to wrap the call with &lt;code&gt;cmd /c&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Method A: Through the CLI
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add &lt;span class="nt"&gt;--transport&lt;/span&gt; stdio filesystem &lt;span class="nt"&gt;--&lt;/span&gt; cmd /c npx &lt;span class="nt"&gt;-y&lt;/span&gt; @modelcontextprotocol/server-filesystem C:/Users/you/projects
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All flags come &lt;strong&gt;before&lt;/strong&gt; the server name. The double-dash &lt;code&gt;--&lt;/code&gt; separates the name from the command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Method B: Edit .mcp.json Directly
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"filesystem"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cmd"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@modelcontextprotocol/server-filesystem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"C:/Users/you/projects"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use forward slashes in path arguments. Restart Claude Code after editing.&lt;/p&gt;
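&lt;p&gt;If you prefer to script the file instead of hand-editing it, a small Git Bash sketch (the path argument is a placeholder) writes the same config and sanity-checks the wrapper:&lt;/p&gt;

```shell
# Generate .mcp.json with the cmd /c wrapper (replace the path argument).
printf '%s\n' \
  '{' \
  '  "mcpServers": {' \
  '    "filesystem": {' \
  '      "command": "cmd",' \
  '      "args": ["/c", "npx", "-y", "@modelcontextprotocol/server-filesystem", "C:/Users/you/projects"]' \
  '    }' \
  '  }' \
  '}' > .mcp.json

# Confirm the wrapper landed in the file.
grep '"command": "cmd"' .mcp.json
```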

&lt;h2&gt;
  
  
  Verify the Server Is Connected
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;claude mcp list&lt;/code&gt; - the server should appear.&lt;/li&gt;
&lt;li&gt;Inside Claude Code, type &lt;code&gt;/mcp&lt;/code&gt; - servers show as connected, pending, or failed.&lt;/li&gt;
&lt;li&gt;Use a tool to confirm the connection works end-to-end.&lt;/li&gt;
&lt;/ol&gt;
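&lt;p&gt;To inspect a single entry without reading the whole list, &lt;code&gt;claude mcp get&lt;/code&gt; prints how it was registered (transport, command, scope) - a quick sketch, assuming a current CLI build:&lt;/p&gt;

```
claude mcp list
claude mcp get filesystem
```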

&lt;h2&gt;
  
  
  Common Errors
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;spawn npx ENOENT&lt;/strong&gt; - add the &lt;code&gt;cmd /c&lt;/code&gt; wrapper, or confirm Node is on PATH.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server failed to start&lt;/strong&gt; after OAuth - set &lt;code&gt;MCP_TIMEOUT=10000&lt;/code&gt; before launching Claude Code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;This project's MCP servers must be approved&lt;/strong&gt; - approve at the prompt, or run &lt;code&gt;claude mcp reset-project-choices&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server shows in list but never appears under /mcp&lt;/strong&gt; - flag ordering is wrong; re-run with all options before the name.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why does npx fail on Windows?&lt;/strong&gt; On Windows it resolves to &lt;code&gt;npx.cmd&lt;/code&gt;, a batch script the Node child-process spawn does not invoke as an executable. Wrapping with &lt;code&gt;cmd /c&lt;/code&gt; hands the script to the shell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where is the MCP config file stored on Windows?&lt;/strong&gt; Local and user-scoped servers live in &lt;code&gt;%USERPROFILE%\.claude.json&lt;/code&gt;. Project-scoped servers live in &lt;code&gt;.mcp.json&lt;/code&gt; at the project root.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I need WSL?&lt;/strong&gt; No. The native Windows install of Claude Code runs MCP servers directly.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://automatelab.tech/claude-code-mcp-windows-setup/" rel="noopener noreferrer"&gt;automatelab.tech&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>windows</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>What is MCP (Model Context Protocol)? A 2026 Primer</title>
      <dc:creator>Artyom Rabzonov</dc:creator>
      <pubDate>Thu, 14 May 2026 10:51:46 +0000</pubDate>
      <link>https://dev.to/ratamaha/what-is-mcp-model-context-protocol-a-2026-primer-378k</link>
      <guid>https://dev.to/ratamaha/what-is-mcp-model-context-protocol-a-2026-primer-378k</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; The Model Context Protocol is an open standard that enables language models to interface with external tools and information sources through a unified JSON-RPC 2.0 mechanism.&lt;/p&gt;

&lt;h2&gt;
  
  
  How MCP Works
&lt;/h2&gt;

&lt;p&gt;MCP exchanges JSON-RPC 2.0 messages between three components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hosts&lt;/strong&gt; - The language model application users interact with (Claude Desktop, Cursor, ChatGPT)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clients&lt;/strong&gt; - The connector within the host communicating with servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Servers&lt;/strong&gt; - Lightweight services exposing specific capabilities (GitHub, Slack, PostgreSQL, filesystem)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each server provides three core capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resources&lt;/strong&gt; - Read-only contextual information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompts&lt;/strong&gt; - Reusable templated prompts users can select&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tools&lt;/strong&gt; - Functions models can invoke&lt;/li&gt;
&lt;/ul&gt;
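&lt;p&gt;Concretely, a tool invocation is a single JSON-RPC 2.0 request from client to server. A sketch of the wire format - the tool name and arguments here are illustrative, not a specific server's schema:&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_file_contents",
    "arguments": { "path": "README.md" }
  }
}
```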

&lt;h2&gt;
  
  
  Why MCP Exists: The M-by-N Problem
&lt;/h2&gt;

&lt;p&gt;Previously, each language model application required a custom integration with every third-party tool. With M models and N tools, that meant M × N integrations; MCP reduces it to M + N. By early 2026, over 500 public MCP servers existed. Anthropic, OpenAI, and Google DeepMind all support the protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  MCP vs Zapier, Make, and n8n
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Trigger Model&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Zapier / Make&lt;/td&gt;
&lt;td&gt;Event-driven&lt;/td&gt;
&lt;td&gt;When Stripe payment arrives, post in Slack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;n8n&lt;/td&gt;
&lt;td&gt;Event-driven with self-hosting&lt;/td&gt;
&lt;td&gt;Same plus custom logic and on-premises data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP&lt;/td&gt;
&lt;td&gt;On-demand&lt;/td&gt;
&lt;td&gt;Look up latest Stripe payments and draft refund email&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How Automation Builders Use MCP Today
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;n8n MCP nodes.&lt;/strong&gt; Any existing n8n workflow becomes available as an MCP tool to AI hosts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zapier MCP.&lt;/strong&gt; Leverage existing Zap connections without rebuilding authentication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local servers in Claude Desktop or Cursor.&lt;/strong&gt; Run a filesystem MCP server against a local folder.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Who created MCP?&lt;/strong&gt; Anthropic introduced MCP in November 2024 as an open protocol.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What distinguishes MCP from a regular API?&lt;/strong&gt; MCP is a standardization layer providing AI hosts with consistent methods to discover capabilities, authenticate, and invoke functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is MCP equivalent to Zapier?&lt;/strong&gt; No. Zapier provides event-driven workflow automation. MCP enables on-demand tool access. Both work together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Must I write code to use MCP with n8n?&lt;/strong&gt; No. The MCP Client and MCP Server Trigger nodes are configured visually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is MCP secure?&lt;/strong&gt; Security responsibility falls to the host. Only install MCP servers from trusted sources.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://automatelab.tech/what-is-mcp-protocol/" rel="noopener noreferrer"&gt;automatelab.tech&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>n8n MCP Server: Build, Lint, and Debug Workflows From Your AI Agent</title>
      <dc:creator>Artyom Rabzonov</dc:creator>
      <pubDate>Thu, 14 May 2026 10:49:41 +0000</pubDate>
      <link>https://dev.to/ratamaha/n8n-mcp-server-build-lint-and-debug-workflows-from-your-ai-agent-5ahg</link>
      <guid>https://dev.to/ratamaha/n8n-mcp-server-build-lint-and-debug-workflows-from-your-ai-agent-5ahg</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;"Install &lt;code&gt;@automatelab/n8n-mcp&lt;/code&gt;, point your AI agent at it, and get nine tools that generate, lint, and diagnose n8n workflow JSON correctly the first time."&lt;/p&gt;

&lt;p&gt;The package addresses a specific problem: generic language models produce n8n JSON that imports successfully but fails at runtime due to incorrect connection topology, deprecated node types, and silent data loss between nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nine Tools: Four Stateless, Five Live-Instance
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stateless tools (no n8n instance required):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;n8n_generate_workflow&lt;/code&gt; - Converts plain English descriptions to workflow JSON with AI-Agent-aware topology&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n_scaffold_node&lt;/code&gt; - Generates TypeScript files for custom node packages&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n_lint_workflow&lt;/code&gt; - Identifies errors and warnings in workflow JSON&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n_explain_execution&lt;/code&gt; - Diagnoses failed executions with per-node analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Live-instance tools (require API credentials):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;n8n_list_workflows&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;n8n_get_workflow&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;n8n_create_workflow&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;n8n_activate_workflow&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;n8n_list_executions&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
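&lt;p&gt;Hosts reach the stateless tools through ordinary MCP &lt;code&gt;tools/call&lt;/code&gt; requests. A hedged sketch of linting a workflow - the argument shape here is an assumption, so check the package README for the real schema:&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "n8n_lint_workflow",
    "arguments": { "workflow": { "nodes": [], "connections": {} } }
  }
}
```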

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @automatelab/n8n-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Requires Node 20 or later. API keys are optional for stateless tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration Across Hosts
&lt;/h2&gt;

&lt;p&gt;The standard MCP configuration block works for Cursor, Claude Desktop, Claude Code, Cline, and Windsurf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"n8n"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@automatelab/n8n-mcp"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"N8N_API_URL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://your-n8n.example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"N8N_API_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"n8n_..."&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  AI Agent Topology
&lt;/h3&gt;

&lt;p&gt;The generator correctly wires AI Agent sub-nodes using typed connections (&lt;code&gt;ai_languageModel&lt;/code&gt;, &lt;code&gt;ai_memory&lt;/code&gt;, &lt;code&gt;ai_tool&lt;/code&gt;) rather than defaulting to &lt;code&gt;main&lt;/code&gt; connections that cause silent failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Linting Capabilities
&lt;/h3&gt;

&lt;p&gt;The lint catches a small but high-value set: deprecated node types (&lt;code&gt;function&lt;/code&gt; -&amp;gt; &lt;code&gt;code&lt;/code&gt;, &lt;code&gt;spreadsheetFile&lt;/code&gt; -&amp;gt; &lt;code&gt;convertToFile&lt;/code&gt;), AI Agents missing a language model, IF-v1 schema, missing &lt;code&gt;webhookId&lt;/code&gt;, broken connections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Silent Data Loss Diagnosis
&lt;/h3&gt;

&lt;p&gt;The explain tool identifies when nodes return zero items, causing downstream nodes to skip execution without errors - the top-cited n8n debugging pain point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison to Alternatives
&lt;/h2&gt;

&lt;p&gt;This server focuses on first-run correctness and execution diagnosis. The alternative &lt;code&gt;czlonkowski/n8n-mcp&lt;/code&gt; provides broader coverage with 20+ tools indexing all n8n nodes. Both can run simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live instance required?&lt;/strong&gt; No - four stateless tools work offline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Agent import failures?&lt;/strong&gt; Usually incorrect connection types; use the linter before importing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Silent data loss?&lt;/strong&gt; The explain tool flags zero-item handoffs with common cause hints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API key location?&lt;/strong&gt; Settings -&amp;gt; API -&amp;gt; Create API key in your n8n instance.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://automatelab.tech/n8n-mcp-server/" rel="noopener noreferrer"&gt;automatelab.tech&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>automation</category>
      <category>ai</category>
      <category>n8n</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Notion as the Control Plane: Per-Task Model Dispatch for Agent Workflows</title>
      <dc:creator>Artyom Rabzonov</dc:creator>
      <pubDate>Tue, 12 May 2026 13:56:06 +0000</pubDate>
      <link>https://dev.to/ratamaha/agency-os-the-notion-board-that-plans-your-work-then-ships-it-3aeg</link>
      <guid>https://dev.to/ratamaha/agency-os-the-notion-board-that-plans-your-work-then-ships-it-3aeg</guid>
      <description>&lt;p&gt;Most "AI for ops" tools fail in one of two ways. Fully autonomous agents go off the rails and ship the wrong thing. Draft-only assistants do 10% of the job and leave the breakdown, sequencing, and execution on you.&lt;/p&gt;

&lt;p&gt;I wanted a third option: agents own decomposition and execution, humans own approval. The substrate is Notion, because that's where the work already lives.&lt;/p&gt;

&lt;p&gt;This post is the pattern, not the pitch. The repo is at the bottom.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with one-model-per-workflow
&lt;/h2&gt;

&lt;p&gt;Direct LLM chat handles a single prompt well. It does not handle "launch the pricing page" well, because that isn't a prompt - it's a project. The model either improvises sub-steps in one long stream of consciousness, or you do the project management by hand and copy-paste between threads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxppi9tqadqemhkrar8wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxppi9tqadqemhkrar8wj.png" alt="Direct LLM chat scatters task chaos. agency-os stacks the same idea into a tidy dependency graph." width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fix is to separate three things that get conflated:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Decomposition&lt;/strong&gt; - turning one task into a graph of subtasks with dependencies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dispatch&lt;/strong&gt; - choosing the right model for each subtask and running them in the right order&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Approval&lt;/strong&gt; - the human gate between planning and execution&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once those are separate, you can keep humans in the decision seat without making them babysit the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The loop
&lt;/h2&gt;

&lt;p&gt;Four phases, all anchored in a single Notion task database:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Submit.&lt;/strong&gt; Operator creates a task in Notion in plain language.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan.&lt;/strong&gt; A planner agent reads the task and emits a subtask graph. Each subtask gets a description, a list of dependencies, and an explicit model tier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Approve.&lt;/strong&gt; The plan lands back in Notion as child pages. Operator reviews, edits, or rejects. Nothing dispatches without sign-off.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute.&lt;/strong&gt; On approval, an orchestrator dispatches subtasks in parallel where the graph allows, sequentially where it doesn't. Each subtask runs on its assigned model. Outputs land back in the same Notion task tree.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqyr2bvo5tbbm7j9d5j1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqyr2bvo5tbbm7j9d5j1.png" alt="Agent decomposes one idea into four stages and routes each subtask to the right model tier" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A concrete example
&lt;/h2&gt;

&lt;p&gt;A "launch the website" task decomposes into 21 subtasks across three model tiers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Work&lt;/th&gt;
&lt;th&gt;Parallelism&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Opus&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Information architecture + outline&lt;/td&gt;
&lt;td&gt;sequential (root)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sonnet&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Page copy drafts&lt;/td&gt;
&lt;td&gt;parallel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Haiku&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Asset/image prompts&lt;/td&gt;
&lt;td&gt;parallel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Haiku&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Directory submission entries&lt;/td&gt;
&lt;td&gt;parallel&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;One operator approval. Twenty-one outputs back in Notion, ready for review.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why per-task model assignment matters
&lt;/h2&gt;

&lt;p&gt;Defaulting to the smartest model for every subtask is the easiest way to burn money on agentic workflows. Directory submissions don't need Opus reasoning. Architecture decisions shouldn't run on Haiku.&lt;/p&gt;

&lt;p&gt;In practice, routing clerical work to Haiku and reserving Opus for the reasoning-heavy nodes cuts model spend by roughly an order of magnitude, with no quality loss on the parts that matter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5effrt2mxqq9iq5rp1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5effrt2mxqq9iq5rp1t.png" alt="Per-task model dispatch typically cuts model spend by an order of magnitude compared to running every task on a flagship model." width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation notes
&lt;/h2&gt;

&lt;p&gt;A few things that turned out to be load-bearing once I started running this daily:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The approval gate is doing real work.&lt;/strong&gt; Without it, the planner occasionally invents subtasks or misjudges scope. With a 30-second human review, those get caught before they consume tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Force the planner to declare dependencies explicitly.&lt;/strong&gt; "Run in parallel where possible" only works if the planner outputs &lt;code&gt;dependsOn: []&lt;/code&gt; for every node. Implicit ordering doesn't survive contact with fan-out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Give the planner a short rubric for tier selection.&lt;/strong&gt; Without it, the planner over-picks the flagship "to be safe." A one-paragraph rubric in the system prompt (Haiku for clerical, Sonnet for writing, Opus for reasoning/architecture) is enough.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notion as the substrate is the unlock.&lt;/strong&gt; It means non-technical operators can drive the workflow, edit plans, and consume outputs without a custom UI.&lt;/li&gt;
&lt;/ul&gt;
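&lt;p&gt;The dependency point above is easiest to see as data. One node of the planner's subtask graph might look like this - field names other than &lt;code&gt;dependsOn&lt;/code&gt; are illustrative, since the post only commits to a description, explicit dependencies, and a model tier per subtask:&lt;/p&gt;

```json
{
  "id": "pricing-page-copy",
  "description": "Draft the pricing page copy from the approved outline",
  "modelTier": "sonnet",
  "dependsOn": ["information-architecture"]
}
```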

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Not a fit if your team doesn't already live in Notion.&lt;/li&gt;
&lt;li&gt;The pattern is strongest for parallel fan-out of independent subtasks. Workflows that need iterative refinement between two agents mid-run are weaker.&lt;/li&gt;
&lt;li&gt;Once you fan out 10+ Haiku tasks at once, rate limits start mattering - back-pressure in the orchestrator is non-optional.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Repo
&lt;/h2&gt;

&lt;p&gt;I've open-sourced the implementation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/ratamaha-git/agency-os" rel="noopener noreferrer"&gt;github.com/ratamaha-git/agency-os&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Happy to answer questions on the planner prompt shape, the dependency-graph schema, or the model-tier rubric.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>automation</category>
      <category>notion</category>
    </item>
  </channel>
</rss>
