🚀 How I Built a Multi-Agent AI Workflow System with n8n and Python

Introduction

Over the last few months, I’ve been working on an AI Agents Repository — a modular system where each agent handles a specific automation task using n8n, LangChain, and Python microservices.

The goal? To create a shared framework where developers can easily contribute new AI agents that solve real-world automation problems — from sales call prep to document RAG pipelines.

💡 Problem

Every automation project I’ve seen starts from scratch: new logic, new scripts, new integrations.
That’s fine for prototypes, but it doesn’t scale — especially when working with multiple AI workflows across clients.

I wanted a way to:

- Reuse modular AI logic (LLMs, data pipelines, integrations)
- Maintain consistency and quality across agents
- Support fast iteration and collaboration across teams

🧠 Architecture Overview

```
+--------------------------+
|       n8n Workflow       |
|   (Agent Orchestrator)   |
+-----------+--------------+
            |
            v
+--------------------------+
|   Python Microservice    |
|  (LangChain / FastAPI)   |
+-----------+--------------+
            |
            v
+--------------------------+
|  Shared Vector Database  |
|  (PGVector / Pinecone)   |
+--------------------------+
```

Each agent runs inside n8n but delegates LLM logic to a Python microservice using FastAPI.
That microservice uses LangChain and a shared vector store for context retrieval and reasoning.

⚙️ Example Use-Case: Sales Call Prep Agent

The “Sales Calls Prep” AI Agent automatically gathers a prospect’s info, analyzes their company, identifies potential pain points, and produces a concise briefing for a sales call.

Inputs:

- Prospect name, company domain, and LinkedIn profile

Outputs:

- Short summary of the person and company
- Pain points inferred from public data
- Suggested call flow with value propositions

This agent runs daily for each active sales lead and outputs structured JSON + Markdown for human review.
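
Since the agent emits structured JSON, a schema model makes the contract explicit. The sketch below uses Pydantic with hypothetical field names; the repository's actual schema may differ.

```python
# Hypothetical Pydantic schema for the sales-call briefing JSON;
# field names are illustrative, not the repository's actual contract.
from pydantic import BaseModel, ValidationError

class CallBriefing(BaseModel):
    prospect: str
    company: str
    summary: str
    pain_points: list[str]
    suggested_flow: list[str]

def parse_briefing(raw: dict) -> CallBriefing:
    # Strict validation: malformed LLM output is rejected here,
    # before it reaches downstream automation steps.
    return CallBriefing(**raw)
```

Validating at this boundary means a hallucinated or truncated LLM response fails loudly instead of silently corrupting the daily review queue.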

🧩 Stack & Tools

- n8n — visual workflow orchestrator
- LangChain — LLM pipeline and tool calling
- FastAPI — backend microservice
- PostgreSQL + PGVector — semantic memory store
- Docker Compose — for easy local development
- OpenAI / Claude APIs — model layer
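
For local development, the three moving parts can be wired together in a single Compose file. This layout is an assumption for illustration (service names, paths, and environment variables are mine, not the repository's):

```yaml
# Illustrative docker-compose.yml; service names and env vars are assumed.
services:
  n8n:
    image: n8nio/n8n
    ports: ["5678:5678"]
  agent-api:
    build: ./services/agent-api   # the FastAPI microservice
    environment:
      DATABASE_URL: postgresql://postgres:postgres@db:5432/agents
  db:
    image: pgvector/pgvector:pg16  # Postgres with the pgvector extension
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: agents
```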

🔍 Lessons Learned

- Workflow modularity is critical: separating orchestration (n8n) from logic (Python) simplifies debugging and versioning.
- Schema validation saves time: strict JSON outputs make downstream automation reliable.
- Vector databases matter: semantic search eliminates brittle keyword matching.
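
The last point is easy to demonstrate with a toy example: two documents, no shared keywords with the query, but a nearby embedding still retrieves the right one. The 3-d "embeddings" below are made up for illustration; a real setup would use an embedding model plus PGVector.

```python
# Toy illustration of semantic vs. keyword matching.
# The 3-d vectors are invented; real embeddings come from a model.
import math

DOCS = {
    "q3 revenue fell short of targets": [0.9, 0.1, 0.2],
    "the office moved to a new building": [0.1, 0.9, 0.3],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec: list[float]) -> str:
    # Return the document whose embedding is closest to the query vector.
    return max(DOCS, key=lambda doc: cosine(query_vec, DOCS[doc]))
```

A query like "missed sales goals" shares no keywords with the revenue document, so keyword matching returns nothing, while its embedding (here, `[0.85, 0.15, 0.25]`) lands closest to the right answer.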

🌍 What’s Next

I’m currently extending this system to support:

- Role-based access control for multi-tenant setups
- Shared prompt libraries and reusable embeddings
- Web dashboard for monitoring AI agent performance

If you’re working on anything similar — multi-agent systems, RAG pipelines, or workflow automation — I’d love to connect and exchange ideas.

🧑‍💻 About Me

I’m Roman Buhyna, a Senior AI Engineer & Full-Stack Developer (Python, FastAPI, React, LangChain, Docker, GCP).
For more than 10 years, I’ve helped teams build scalable backend systems and AI-powered products — from health-tech to workflow automation platforms.
Let’s connect on LinkedIn or GitHub — I’m always open to new collaborations and technical discussions.
