Building a Multi-Agent AI Consensus Engine with n8n, Groq, and Supabase

I Built a Multi-Agent AI Consensus Engine: Here’s What I Learned 🤖⚖️

Most people use AI by sending a single prompt to a single model. But what happens when you need certainty? I decided to build a "Jury System" where multiple AI agents act as specialized experts to debate a product's value before giving a final verdict.

In this project, I bridged the gap between a modern web UI and a complex AI backend. Here is the breakdown of how I orchestrated n8n, Groq, Supabase, and Replit to make it happen.


🏗️ The Tech Stack

To build this, I used a mix of low-code, high-performance inference, and cloud hosting:

  • Frontend: HTML5/JavaScript hosted on Replit.
  • Workflow Engine: n8n (The "Glue" that connects everything).
  • AI Models: Groq (using Llama 3.3 70B for sub-second responses).
  • Database: Supabase (To log every verdict).
  • Tunneling: ngrok (To expose my local n8n instance to the internet).
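
For the tunnel itself, one command gives the local n8n instance a public URL (this assumes n8n is running on its default port, 5678):

ngrok http 5678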

🧠 The Logic: The "Relay Race" Workflow

The core of this project is an n8n workflow that functions like a relay race. Instead of one agent doing everything, the task is passed along a chain:

  1. The Trigger: A Webhook receives a product search from my Replit UI.
  2. Expert 1 (Technical Analyst): Investigates build quality, ISO standards, and certifications.
  3. Expert 2 (Market Analyst): Takes the tech report and finds the best global pricing in USD, INR, and EUR.
  4. The Jury Foreman: Reviews the conflicting (or matching) reports and generates a final JSON verdict including a "Consensus Score" (see the sample payload after this list).
  5. The Storage: The final data is saved to Supabase.
  6. The Response: The Webhook node sends the structured data back to the user in under 15 seconds.
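
For reference, the verdict payload ends up shaped roughly like this. winningProduct and consensusScore are the real keys the workflow returns; bestPrice is an illustrative placeholder for the Market Analyst's pricing:

const verdict = {
    winningProduct: "Example Headphones X", // the Foreman's pick
    consensusScore: 87,                     // how strongly the two experts agree
    bestPrice: { USD: 0, INR: 0, EUR: 0 }   // placeholder: the Market Analyst's pricing
};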

🛠️ Lessons from the Trenches (Troubleshooting)

Building this wasn't without its hurdles. Here are three major things I had to solve:

1. Structured Data is a Must

Web UIs don't like conversational AI "chatter." They want JSON. I had to learn to use Structured Output Parsers in n8n to ensure the AI only returned valid keys like winningProduct and consensusScore.
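
As a concrete example, here is a minimal sketch of the kind of JSON Schema you can feed the parser. The two keys come from this workflow; the exact shape you enforce is an assumption you'd adapt:

{
  "type": "object",
  "properties": {
    "winningProduct": { "type": "string" },
    "consensusScore": { "type": "number" }
  },
  "required": ["winningProduct", "consensusScore"]
}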

2. Handling the "Baton"

In n8n, agents sometimes "forget" the original user question as the workflow gets longer, because by default each node only sees the output of the node directly before it. I learned to use Absolute References like {{ $('Webhook').item.json.product }} to ensure the final Jury Foreman knew exactly what the user asked for at the start.
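
In practice, that means the Foreman's prompt pulls from the Webhook node directly instead of trusting whatever the previous agent passed along. A rough sketch, where the node names and the output field depend on how your own workflow is wired:

Product the user asked about: {{ $('Webhook').item.json.product }}
Technical report: {{ $('Technical Analyst').item.json.output }}
Market report: {{ $('Market Analyst').item.json.output }}
Compare the reports and return your verdict as JSON.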

3. The 204 vs. 200 "Ghost" Response

I initially struggled with my Replit frontend receiving a "204 No Content" response with an empty body. I discovered that the Webhook node's Respond option must be set to use the Respond to Webhook node to keep the connection open while the AI is thinking.
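
While debugging, a small guard on the frontend makes this failure mode loud instead of silent (fetch treats 204 as a success, so response.ok alone won't catch it):

if (response.status === 204) {
    throw new Error("Empty response from n8n: check the Webhook node's Respond setting");
}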


💻 Snippet: The JS Fetch

Here is how I handled the asynchronous call from the frontend:

// Public ngrok URL that forwards to the local n8n webhook (placeholder)
const N8N_URL = 'https://<your-ngrok-subdomain>.ngrok.io/webhook/consensus';

// POST the product query to the n8n workflow
const response = await fetch(N8N_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ product: query })
});

if (!response.ok) {
    throw new Error(`Workflow call failed with status ${response.status}`);
}

// Map the AI verdict to the UI
const data = await response.json();
document.getElementById('productTitle').innerText = data.winningProduct;
document.getElementById('score').innerText = `${data.consensusScore}%`;


🏁 Final Thoughts


By the end of this project, I didn't just build a search tool; I built an automated decision-making pipeline. Seeing the data flow from a Replit search, through a local ngrok tunnel, into a Llama 3.3 model on Groq, and finally into a Supabase table was incredibly satisfying.

What's next?

I'm planning to add a "Conflict Detection" agent that flags when the Technical Expert and Market Analyst radically disagree!

Have you tried building multi-agent workflows? Let's discuss in the comments!
