<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Akash</title>
    <description>The latest articles on DEV Community by Akash (@webakash).</description>
    <link>https://dev.to/webakash</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3718759%2F7e585954-f22c-42a0-b143-6f59e277f2a4.png</url>
      <title>DEV Community: Akash</title>
      <link>https://dev.to/webakash</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/webakash"/>
    <language>en</language>
    <item>
      <title>Building a Multi-Agent AI Consensus Engine with n8n, Groq, and Supabase</title>
      <dc:creator>Akash</dc:creator>
      <pubDate>Mon, 19 Jan 2026 16:28:31 +0000</pubDate>
      <link>https://dev.to/webakash/building-a-multi-agent-ai-consensus-engine-with-n8n-groq-and-supabase-93k</link>
      <guid>https://dev.to/webakash/building-a-multi-agent-ai-consensus-engine-with-n8n-groq-and-supabase-93k</guid>
      <description>&lt;h1&gt;I Built a Multi-Agent AI Consensus Engine: Here’s What I Learned 🤖⚖️&lt;/h1&gt;

&lt;p&gt;Most people use AI by sending a single prompt to a single model. But what happens when you need &lt;strong&gt;certainty&lt;/strong&gt;? I decided to build a "Jury System" where multiple AI agents act as specialized experts to debate a product's value before giving a final verdict.&lt;/p&gt;

&lt;p&gt;In this project, I bridged the gap between a modern web UI and a complex AI backend. Here is the breakdown of how I orchestrated &lt;strong&gt;n8n&lt;/strong&gt;, &lt;strong&gt;Groq&lt;/strong&gt;, &lt;strong&gt;Supabase&lt;/strong&gt;, and &lt;strong&gt;Replit&lt;/strong&gt; to make it happen.&lt;/p&gt;




&lt;h2&gt;🏗️ The Tech Stack&lt;/h2&gt;

&lt;p&gt;To build this, I used a mix of low-code, high-performance inference, and cloud hosting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; HTML5/JavaScript hosted on &lt;strong&gt;Replit&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow Engine:&lt;/strong&gt; &lt;strong&gt;n8n&lt;/strong&gt; (The "Glue" that connects everything).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Models:&lt;/strong&gt; &lt;strong&gt;Groq&lt;/strong&gt; (using Llama 3.3 70B for sub-second responses).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; &lt;strong&gt;Supabase&lt;/strong&gt; (To log every verdict).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tunneling:&lt;/strong&gt; &lt;strong&gt;ngrok&lt;/strong&gt; (To expose my local n8n instance to the internet).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;🧠 The Logic: The "Relay Race" Workflow&lt;/h2&gt;

&lt;p&gt;The core of this project is an n8n workflow that functions like a relay race. Instead of one agent doing everything, the task is passed along a chain:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;The Trigger:&lt;/strong&gt; A Webhook receives a product search from my Replit UI.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Expert 1 (Technical Analyst):&lt;/strong&gt; Investigates build quality, ISO standards, and certifications.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Expert 2 (Market Analyst):&lt;/strong&gt; Takes the tech report and finds the best global pricing in USD, INR, and EUR.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Jury Foreman:&lt;/strong&gt; Reviews the conflicting (or matching) reports and generates a final JSON verdict including a "Consensus Score."&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Storage:&lt;/strong&gt; The final data is saved to &lt;strong&gt;Supabase&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Response:&lt;/strong&gt; The Webhook node sends the structured data back to the user in under 15 seconds.&lt;/li&gt;
&lt;/ol&gt;
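&lt;p&gt;For illustration, here is roughly what the request and the final verdict look like. Only &lt;code&gt;winningProduct&lt;/code&gt; and &lt;code&gt;consensusScore&lt;/code&gt; are real keys from my workflow; the other field names are illustrative stand-ins.&lt;/p&gt;

```javascript
// Hypothetical payload shapes. Only "winningProduct" and
// "consensusScore" come from the workflow; the rest are stand-ins.
const request = { product: "noise-cancelling headphones" };

const verdict = {
  winningProduct: "Example Model X",
  consensusScore: 87, // 0-100 agreement between the two experts
  technicalSummary: "Meets the relevant ISO standards; solid build.",
  marketSummary: "Best global price found in USD, INR and EUR."
};

// A small sanity check the frontend can run before rendering:
function isValidVerdict(v) {
  if (typeof v.winningProduct !== "string") return false;
  if (typeof v.consensusScore !== "number") return false;
  if (v.consensusScore > 100) return false;
  if (0 > v.consensusScore) return false;
  return true;
}
```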




&lt;h2&gt;🛠️ Lessons from the Trenches (Troubleshooting)&lt;/h2&gt;

&lt;p&gt;Building this wasn't without its hurdles. Here are three major things I had to solve:&lt;/p&gt;

&lt;h3&gt;1. Structured Data is a Must&lt;/h3&gt;

&lt;p&gt;Web UIs don't like conversational AI "chatter." They want JSON. I had to learn to use &lt;strong&gt;Structured Output Parsers&lt;/strong&gt; in n8n to ensure the AI only returned valid keys like &lt;code&gt;winningProduct&lt;/code&gt; and &lt;code&gt;consensusScore&lt;/code&gt;.&lt;/p&gt;
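&lt;p&gt;As a sketch, here is the kind of example shape you can feed the parser, plus a defensive extractor the frontend can use if chatter still slips through. Both are illustrative, not the exact node configuration.&lt;/p&gt;

```javascript
// Illustrative example shape for n8n's Structured Output Parser;
// the exact node settings vary by n8n version.
const outputExample = {
  winningProduct: "string: the product the jury settled on",
  consensusScore: "number: 0 to 100 agreement between experts",
  verdictSummary: "string: one-paragraph reasoning"
};

// Defensive frontend parsing, in case the model still wraps
// the JSON in conversational chatter:
function extractJson(text) {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end === -1) return null;
  return JSON.parse(text.slice(start, end + 1));
}
```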

&lt;h3&gt;2. Handling the "Baton"&lt;/h3&gt;

&lt;p&gt;In n8n, agents sometimes "forget" the original user question as the workflow gets longer. I learned to use &lt;strong&gt;Absolute References&lt;/strong&gt; like &lt;code&gt;{{ $('Webhook').item.json.product }}&lt;/code&gt; to ensure the final Jury Foreman knew exactly what the user asked for at the start.&lt;/p&gt;

&lt;h3&gt;3. The 204 vs. 200 "Ghost" Response&lt;/h3&gt;

&lt;p&gt;I initially struggled with Replit receiving an empty &lt;strong&gt;"204 No Content"&lt;/strong&gt; response instead of the verdict. I discovered that the Webhook node's response mode must be set to &lt;strong&gt;"Using 'Respond to Webhook' Node"&lt;/strong&gt; so the connection stays open while the AI is thinking, and the workflow replies only when the final verdict is ready.&lt;/p&gt;




&lt;h2&gt;💻 Snippet: The JS Fetch&lt;/h2&gt;

&lt;p&gt;Here is how I handled the asynchronous call from the frontend:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;N8N_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;product&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c1"&gt;// Map the AI verdict to the UI&lt;/span&gt;
&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;productTitle&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;winningProduct&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;score&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;consensusScore&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;%`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
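&lt;p&gt;A hardened variant of that call (a sketch): a timeout via AbortController and a &lt;code&gt;response.ok&lt;/code&gt; check, since the webhook can take 10-15 seconds while the agents deliberate. The helper names &lt;code&gt;askJury&lt;/code&gt; and &lt;code&gt;buildRequest&lt;/code&gt; are mine, not part of the workflow.&lt;/p&gt;

```javascript
// Request options are built separately so they can be unit-tested.
function buildRequest(query) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ product: query })
  };
}

// Fetch with a timeout: abort if n8n takes too long to respond.
async function askJury(n8nUrl, query, timeoutMs = 20000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(n8nUrl, {
      ...buildRequest(query),
      signal: controller.signal
    });
    if (!response.ok) {
      throw new Error(`n8n returned HTTP ${response.status}`);
    }
    return await response.json();
  } finally {
    clearTimeout(timer);
  }
}
```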



&lt;h2&gt;🏁 Final Thoughts&lt;/h2&gt;




&lt;p&gt;By the end of this project, I didn't just build a search tool; I built an &lt;strong&gt;automated decision-making pipeline&lt;/strong&gt;. Seeing the data flow from a Replit search, through a local ngrok tunnel, into a Llama 3.3 model on Groq, and finally into a Supabase table was incredibly satisfying.&lt;/p&gt;

&lt;h3&gt;What's next?&lt;/h3&gt;

&lt;p&gt;I'm planning to add a "Conflict Detection" agent that flags when the Technical Expert and Market Analyst radically disagree!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have you tried building multi-agent workflows? Let's discuss in the comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>n8nbrightdatachallenge</category>
      <category>supabase</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I Built a Voice-Controlled OBS Assistant (Metaltank) — Here’s What Really Happened</title>
      <dc:creator>Akash</dc:creator>
      <pubDate>Mon, 19 Jan 2026 05:19:17 +0000</pubDate>
      <link>https://dev.to/webakash/i-built-a-voice-controlled-obs-assistant-metaltank-heres-what-really-happened-oem</link>
      <guid>https://dev.to/webakash/i-built-a-voice-controlled-obs-assistant-metaltank-heres-what-really-happened-oem</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post is about debugging pain, systems thinking, and the moment everything finally worked.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;🎯 What I Wanted to Build&lt;/h2&gt;

&lt;p&gt;I wanted to &lt;strong&gt;remove clicks from OBS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not to automate one button.&lt;br&gt;&lt;br&gt;
Not to trigger a hotkey.&lt;/p&gt;

&lt;p&gt;I wanted to &lt;strong&gt;talk to OBS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Say things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Metaltank mute mic”&lt;/li&gt;
&lt;li&gt;“Metaltank switch scene”&lt;/li&gt;
&lt;li&gt;“Metaltank start recording”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…and have OBS respond instantly.&lt;/p&gt;

&lt;p&gt;No Stream Deck.&lt;br&gt;&lt;br&gt;
No keyboard shortcuts.&lt;br&gt;&lt;br&gt;
No mouse.&lt;/p&gt;

&lt;p&gt;Just &lt;strong&gt;voice → intent → OBS WebSocket → action&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That project is called &lt;strong&gt;Metaltank&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;🧠 What I Actually Built (So Far)&lt;/h2&gt;

&lt;p&gt;Metaltank is a &lt;strong&gt;Node.js-based voice controller for OBS&lt;/strong&gt; that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connects to OBS using &lt;strong&gt;obs-websocket&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Captures microphone audio using &lt;strong&gt;arecord&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Streams short audio chunks to &lt;strong&gt;whisper.cpp (local, offline)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Converts speech → text&lt;/li&gt;
&lt;li&gt;Parses intent using a custom rule engine&lt;/li&gt;
&lt;li&gt;Executes OBS actions (mute, unmute, toggle mic, scenes, recording)&lt;/li&gt;
&lt;/ul&gt;
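&lt;p&gt;The rule engine is the simplest piece of that list. A minimal sketch (the rule table and intent names are illustrative, not Metaltank's exact code):&lt;/p&gt;

```javascript
// Illustrative rule table: first matching rule wins.
const RULES = [
  { intent: "UNMUTE_MIC",   match: /\bunmute (the )?mic\b/ },
  { intent: "MUTE_MIC",     match: /\bmute (the )?mic\b/ },
  { intent: "TOGGLE_MIC",   match: /\btoggle (the )?mic\b/ },
  { intent: "START_REC",    match: /\bstart recording\b/ },
  { intent: "STOP_REC",     match: /\bstop recording\b/ },
  { intent: "SWITCH_SCENE", match: /\bswitch scene\b/ }
];

// Map a whisper transcript to an intent, or null if nothing matched.
function parseIntent(transcript) {
  const text = transcript.toLowerCase().trim();
  for (const rule of RULES) {
    if (rule.match.test(text)) return rule.intent;
  }
  return null;
}
```

&lt;p&gt;Ordering plus word boundaries matter here: without &lt;code&gt;\b&lt;/code&gt;, "unmute mic" would also match the mute rule.&lt;/p&gt;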

&lt;p&gt;All &lt;strong&gt;offline&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
No cloud APIs.&lt;br&gt;&lt;br&gt;
No paid services.&lt;/p&gt;




&lt;h2&gt;⚙️ Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js (ESM)&lt;/li&gt;
&lt;li&gt;OBS WebSocket&lt;/li&gt;
&lt;li&gt;whisper.cpp (server mode)&lt;/li&gt;
&lt;li&gt;arecord (ALSA)&lt;/li&gt;
&lt;li&gt;Custom rule-based intent parser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple stack.&lt;br&gt;&lt;br&gt;
Hard execution.&lt;/p&gt;




&lt;h2&gt;😤 Why This Was Way Harder Than It Sounds&lt;/h2&gt;

&lt;p&gt;Let me be honest — nothing worked the first time.&lt;/p&gt;

&lt;h3&gt;1️⃣ Native Modules Failed (Vosk)&lt;/h3&gt;

&lt;p&gt;I initially tried &lt;code&gt;vosk&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It failed because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Native compilation&lt;/li&gt;
&lt;li&gt;Missing build tools&lt;/li&gt;
&lt;li&gt;Node-gyp issues&lt;/li&gt;
&lt;li&gt;Environment limitations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lesson:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Offline” doesn’t always mean “easy.”&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;2️⃣ OBS Was “Connected” But Not Ready&lt;/h3&gt;

&lt;p&gt;I kept hitting errors like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Error: Socket not identified
Error: Not connected&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Root cause:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OBS WebSocket connect ≠ identify&lt;/li&gt;
&lt;li&gt;Actions were being called before OBS completed its handshake&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explicit OBS-ready state&lt;/li&gt;
&lt;li&gt;No actions allowed before identification&lt;/li&gt;
&lt;/ul&gt;
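&lt;p&gt;The readiness gate itself is a few lines of plain JavaScript (a sketch; with obs-websocket-js v5 you would flip it inside the "Identified" and "ConnectionClosed" event handlers):&lt;/p&gt;

```javascript
// Library-agnostic readiness gate. Wire identified() to
// obs.on("Identified", ...) and closed() to obs.on("ConnectionClosed", ...)
// if you use obs-websocket-js v5 (an assumed dependency).
class ObsGate {
  constructor() {
    this.ready = false;
  }
  identified() { this.ready = true; }  // handshake complete
  closed()     { this.ready = false; } // connection dropped
  guard() {
    if (!this.ready) {
      throw new Error("OBS not identified yet, refusing to act");
    }
  }
}
```

&lt;p&gt;Every OBS action calls &lt;code&gt;guard()&lt;/code&gt; first, so nothing can fire between "connected" and "identified".&lt;/p&gt;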




&lt;h3&gt;3️⃣ Voice Was Triggering Commands Without Me Speaking&lt;/h3&gt;

&lt;p&gt;At one point, Metaltank muted my mic without me saying anything.&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simulated voice input still wired&lt;/li&gt;
&lt;li&gt;Voice module executing too early&lt;/li&gt;
&lt;li&gt;No wake-word guard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strict wake word: metaltank&lt;/li&gt;
&lt;li&gt;OBS readiness gate&lt;/li&gt;
&lt;li&gt;Clear separation between CLI and voice input&lt;/li&gt;
&lt;/ul&gt;
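&lt;p&gt;The wake-word guard boils down to a few lines (sketch):&lt;/p&gt;

```javascript
// Only transcripts that start with the wake word are forwarded to the
// intent parser; everything else is dropped on the floor.
const WAKE_WORD = "metaltank";

function stripWakeWord(transcript) {
  const text = transcript.toLowerCase().trim();
  if (!text.startsWith(WAKE_WORD)) return null; // no wake word: ignore
  return text.slice(WAKE_WORD.length).trim();   // command after the wake word
}
```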




&lt;h3&gt;4️⃣ whisper.cpp Flags Betrayed Me&lt;/h3&gt;

&lt;p&gt;I tried flags like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;--step
--length&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;They don’t exist in &lt;code&gt;whisper-cli&lt;/code&gt;; they belong to whisper.cpp’s separate &lt;code&gt;stream&lt;/code&gt; example.&lt;/p&gt;

&lt;p&gt;Fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop guessing flags&lt;/li&gt;
&lt;li&gt;Read &lt;code&gt;--help&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Switch to whisper-server&lt;/li&gt;
&lt;li&gt;POST WAV files properly&lt;/li&gt;
&lt;/ul&gt;
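&lt;p&gt;Posting a WAV to whisper-server from Node looks roughly like this. The &lt;code&gt;/inference&lt;/code&gt; endpoint and &lt;code&gt;file&lt;/code&gt; form field follow the whisper.cpp server example; verify them against your build's &lt;code&gt;--help&lt;/code&gt;.&lt;/p&gt;

```javascript
// Sketch: send one recorded WAV chunk to a locally running
// whisper-server and read back the transcribed text.
// Requires Node 18+ for global fetch/FormData/Blob.
import { readFile } from "node:fs/promises";

const WHISPER_URL = "http://127.0.0.1:8080/inference";

async function transcribe(wavPath) {
  const wav = await readFile(wavPath);
  const form = new FormData();
  form.append("file", new Blob([wav], { type: "audio/wav" }), "chunk.wav");
  form.append("response_format", "json");
  const res = await fetch(WHISPER_URL, { method: "POST", body: form });
  if (!res.ok) throw new Error(`whisper-server HTTP ${res.status}`);
  const data = await res.json(); // expected shape: { text: "..." }
  return data.text.trim();
}
```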

&lt;p&gt;Lesson:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Always check the CLI help. Even when you’re confident.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;5️⃣ Audio Was Recording… But Whisper Heard Nothing&lt;/h3&gt;

&lt;p&gt;This was the hardest part.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WAV files existed&lt;/li&gt;
&lt;li&gt;Audio played correctly&lt;/li&gt;
&lt;li&gt;Whisper returned &lt;code&gt;[BLANK_AUDIO]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Root causes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chunk timing too short&lt;/li&gt;
&lt;li&gt;Silence dominance&lt;/li&gt;
&lt;li&gt;Wrong assumptions about streaming&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixed-length chunks (3 seconds)&lt;/li&gt;
&lt;li&gt;File-based inference&lt;/li&gt;
&lt;li&gt;Let whisper finish before deleting audio&lt;/li&gt;
&lt;/ul&gt;
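&lt;p&gt;The fixed-chunk loop, sketched below; the path and timings are illustrative, and &lt;code&gt;transcribe&lt;/code&gt; stands for whatever sends the WAV to whisper:&lt;/p&gt;

```javascript
import { spawn } from "node:child_process";
import { unlink } from "node:fs/promises";

// Record one fixed 3-second chunk at 16 kHz mono, the format the
// whisper models expect. arecord's -d flag sets duration in seconds.
function recordChunk(path) {
  return new Promise((resolve, reject) => {
    const rec = spawn("arecord", [
      "-f", "S16_LE", "-r", "16000", "-c", "1", "-d", "3", path
    ]);
    rec.on("error", reject);
    rec.on("close", (code) => {
      if (code === 0) resolve(path);
      else reject(new Error(`arecord exited with ${code}`));
    });
  });
}

// Loop: record, transcribe, and only then delete the audio.
// transcribe: any function that turns a WAV path into text.
async function listenLoop(transcribe, handle) {
  const path = "/tmp/metaltank-chunk.wav";
  for (;;) {
    await recordChunk(path);
    const text = await transcribe(path); // let whisper finish first
    await unlink(path);                  // then delete the chunk
    if (text) handle(text);
  }
}
```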




&lt;h2&gt;🔥 The Moment It Worked&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;🗣️ Heard: mute mic
[VOICE] MUTE_MIC
🎙 Mic muted&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I didn’t celebrate loudly.&lt;/p&gt;

&lt;p&gt;I just smiled.&lt;/p&gt;

&lt;p&gt;Because this wasn’t luck —&lt;br&gt;&lt;br&gt;
it was layers finally aligning.&lt;/p&gt;




&lt;h2&gt;🧩 Current Metaltank Capabilities&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🎙 Mute / unmute / toggle mic&lt;/li&gt;
&lt;li&gt;🎬 Scene control&lt;/li&gt;
&lt;li&gt;⏺ Recording control&lt;/li&gt;
&lt;li&gt;🧠 Continuous listening&lt;/li&gt;
&lt;li&gt;🔒 Fully offline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OBS reacts to my voice.&lt;/p&gt;




&lt;h2&gt;🧠 What I Learned&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;“Connected” doesn’t mean “ready”&lt;/li&gt;
&lt;li&gt;Audio pipelines fail silently&lt;/li&gt;
&lt;li&gt;Logging saves hours&lt;/li&gt;
&lt;li&gt;If you’re confused, the system is confused too&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Biggest lesson:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Complex systems don’t fail loudly — they fail quietly.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;🚧 Still Phase 1&lt;/h2&gt;

&lt;p&gt;This is still Phase 1.&lt;/p&gt;

&lt;p&gt;The vision is bigger:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero-click OBS setup&lt;/li&gt;
&lt;li&gt;Scene creation via voice&lt;/li&gt;
&lt;li&gt;Layout &amp;amp; webcam control&lt;/li&gt;
&lt;li&gt;Full recording workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal stays simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;No clicks. Only intent.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;🏁 Final Thoughts&lt;/h2&gt;

&lt;p&gt;This project reminded me why I love engineering.&lt;/p&gt;

&lt;p&gt;Not because things work —&lt;br&gt;&lt;br&gt;
but because they don’t, and you make them.&lt;/p&gt;

&lt;p&gt;If you’re building something ambitious and it feels impossible right now:&lt;/p&gt;

&lt;p&gt;You’re probably doing it right.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>beginners</category>
      <category>software</category>
    </item>
  </channel>
</rss>
