<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manushree Patil</title>
    <description>The latest articles on DEV Community by Manushree Patil (@manushree_patil_22e650fcc).</description>
    <link>https://dev.to/manushree_patil_22e650fcc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3871848%2F1757178d-4681-4fba-8827-5d3fa1b42308.png</url>
      <title>DEV Community: Manushree Patil</title>
      <link>https://dev.to/manushree_patil_22e650fcc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/manushree_patil_22e650fcc"/>
    <language>en</language>
    <item>
      <title>Building a Voice-Controlled Local AI Agent with Whisper, Groq &amp; Streamlit</title>
      <dc:creator>Manushree Patil</dc:creator>
      <pubDate>Fri, 10 Apr 2026 13:17:25 +0000</pubDate>
      <link>https://dev.to/manushree_patil_22e650fcc/building-a-voice-controlled-local-ai-agent-with-whisper-groq-streamlit-3dfj</link>
      <guid>https://dev.to/manushree_patil_22e650fcc/building-a-voice-controlled-local-ai-agent-with-whisper-groq-streamlit-3dfj</guid>
      <description>&lt;h1&gt;Building a Voice-Controlled Local AI Agent with Whisper, Groq &amp;amp; Streamlit&lt;/h1&gt;

&lt;p&gt;For my Mem0 AI/ML internship assignment, I built a fully working voice-controlled AI agent that accepts audio input, classifies intent, executes local tools, and displays everything in a clean UI. Here's how I built it and what I learned.&lt;/p&gt;

&lt;h2&gt;What It Does&lt;/h2&gt;

&lt;p&gt;You speak (or type) a command → the agent transcribes it → classifies your intent → executes the right action → shows the result. All in one pipeline.&lt;/p&gt;

&lt;p&gt;Supported intents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;create_file&lt;/strong&gt; — creates a new file in the output/ folder&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;write_code&lt;/strong&gt; — generates code using LLM and saves it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;summarize&lt;/strong&gt; — summarizes provided text&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;general_chat&lt;/strong&gt; — conversational Q&amp;amp;A&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;compound&lt;/strong&gt; — multiple commands in one utterance&lt;/li&gt;
&lt;/ul&gt;
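&lt;p&gt;Routing between these intents reduces to a dispatch table. A minimal sketch (the handler names and return values here are illustrative, not the project's actual code):&lt;/p&gt;

```python
# Hypothetical dispatch table mapping each intent name from the post
# to a handler function. Handlers are stubs that stand in for the
# real tool implementations.

def create_file(params):
    return f"created {params.get('filename', 'untitled.txt')}"

def write_code(params):
    return f"generated {params.get('language', 'python')} code"

def summarize(params):
    return "summary of provided text"

def general_chat(params):
    return "chat reply"

HANDLERS = {
    "create_file": create_file,
    "write_code": write_code,
    "summarize": summarize,
    "general_chat": general_chat,
}

def execute(intent, params):
    """Route a classified intent to its handler; unknown intents degrade gracefully."""
    handler = HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I did not understand that command."
    return handler(params)
```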

&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;Audio Input → STT (Whisper/Groq) → Intent Classification (LLM) → Tool Execution → Streamlit UI&lt;/p&gt;
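&lt;p&gt;In code, that pipeline is three calls chained together. A minimal sketch with the STT and LLM stages stubbed out (function names are illustrative; the real app calls Groq for both stages):&lt;/p&gt;

```python
# End-to-end control flow of the agent, with external services stubbed
# so the shape of the pipeline is visible.

def transcribe(audio_bytes):
    # Real version: Groq Whisper API (whisper-large-v3)
    return "create a file called notes.txt"

def classify(text):
    # Real version: LLM call returning structured JSON
    return {"intent": "create_file", "filename": "notes.txt"}

def run_tool(result):
    # Real version: dispatch to the matching local tool
    return f"executed {result['intent']}"

def pipeline(audio_bytes):
    text = transcribe(audio_bytes)
    result = classify(text)
    return run_tool(result)  # the Streamlit UI renders this
```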

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Speech-to-Text&lt;/td&gt;
&lt;td&gt;Groq Whisper API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Intent + Generation&lt;/td&gt;
&lt;td&gt;Groq (llama-3.3-70b)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UI&lt;/td&gt;
&lt;td&gt;Streamlit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Language&lt;/td&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Model Choices &amp;amp; Why&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;STT — Groq Whisper API&lt;/strong&gt;: I chose Groq over local HuggingFace Whisper because my machine doesn't have a GPU. Groq transcribes audio in under a second using &lt;code&gt;whisper-large-v3&lt;/code&gt; on its free tier. The code also supports local Whisper via HuggingFace transformers as a fallback.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLM — Groq (llama-3.3-70b)&lt;/strong&gt;: Intent classification needs reliably structured JSON output. Groq's API with &lt;code&gt;response_format: json_object&lt;/code&gt; gave consistent results. The system prompt instructs the model to return the intent, filename, language, and a list of sub_tasks for compound commands.&lt;/p&gt;
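&lt;p&gt;Assuming the &lt;code&gt;groq&lt;/code&gt; Python SDK, the classification call might look roughly like this (the prompt shown is an abbreviated stand-in for the real one):&lt;/p&gt;

```python
import json

# Abbreviated stand-in for the real system prompt described in the post.
SYSTEM_PROMPT = (
    "You are an intent classifier. Reply with JSON only, containing "
    "intent, filename, language, and sub_tasks."
)

def classify(text, client, model="llama-3.3-70b-versatile"):
    """Ask the LLM for a structured classification of one utterance."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},  # forces valid JSON
    )
    return json.loads(resp.choices[0].message.content)
```

&lt;p&gt;In the real app the client would be created with &lt;code&gt;groq.Groq(api_key=...)&lt;/code&gt;; passing it in as a parameter keeps the function easy to test with a stub.&lt;/p&gt;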

&lt;h2&gt;Key Challenge — Intent Classification&lt;/h2&gt;

&lt;p&gt;Getting the LLM to return valid JSON on every call was the hardest part. My solution pairs a strict system prompt with a defensive parser:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The prompt defines every intent clearly&lt;/li&gt;
&lt;li&gt;It forces JSON-only output&lt;/li&gt;
&lt;li&gt;A fallback parser strips markdown fences if the model adds them anyway&lt;/li&gt;
&lt;/ol&gt;
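&lt;p&gt;The fallback in step 3 can be as simple as checking for a leading fence before calling &lt;code&gt;json.loads&lt;/code&gt;. A sketch of such a parser:&lt;/p&gt;

```python
import json

FENCE = "`" * 3  # markdown code-fence marker

def parse_intent(raw):
    """Parse the model's reply, stripping markdown fences if present."""
    cleaned = raw.strip()
    if cleaned.startswith(FENCE):
        lines = cleaned.splitlines()
        # Drop the opening fence (possibly carrying a language tag)
        # and the closing fence; keep everything in between.
        body = [ln for ln in lines[1:] if not ln.strip().startswith(FENCE)]
        cleaned = "\n".join(body)
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return {"intent": "general_chat", "raw": raw}  # graceful fallback
```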

&lt;h2&gt;Bonus Features Implemented&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compound commands&lt;/strong&gt; — "Generate bubble sort and save it as bubble.py"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human-in-the-loop&lt;/strong&gt; — confirmation prompt before any file operation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graceful degradation&lt;/strong&gt; — handles LLM failures, bad audio, unknown intents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session memory&lt;/strong&gt; — chat context preserved across turns&lt;/li&gt;
&lt;/ul&gt;
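&lt;p&gt;Compound commands, for example, reduce to a loop over the classifier's &lt;code&gt;sub_tasks&lt;/code&gt; list. A hypothetical sketch (the executor is stubbed):&lt;/p&gt;

```python
# If the classifier labels an utterance "compound", it also returns a
# sub_tasks list; each sub-task is then executed in order.

def run_single(task):
    # Stand-in for the real tool dispatch
    return f"done: {task['intent']}"

def run(result):
    """Execute one classified result, fanning out for compound commands."""
    if result.get("intent") == "compound":
        return [run_single(t) for t in result.get("sub_tasks", [])]
    return [run_single(result)]
```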

&lt;h2&gt;Safety&lt;/h2&gt;

&lt;p&gt;All file operations are restricted to an &lt;code&gt;output/&lt;/code&gt; folder. The &lt;code&gt;_safe_path()&lt;/code&gt; function strips any directory traversal attempts and adds timestamps to filenames to prevent overwrites.&lt;/p&gt;
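&lt;p&gt;A function along these lines covers both properties (names are illustrative, not the project's exact code):&lt;/p&gt;

```python
import os
import time

OUTPUT_DIR = "output"

def safe_path(filename):
    """Confine a user-supplied filename to OUTPUT_DIR and timestamp it."""
    # Keep only the final path component, defeating ../ traversal
    # on both Unix and Windows separators.
    base = os.path.basename(filename.replace("\\", "/"))
    stem, ext = os.path.splitext(base)
    stamped = f"{stem}_{int(time.time())}{ext or '.txt'}"
    return os.path.join(OUTPUT_DIR, stamped)
```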

&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prompt engineering for structured output is more important than model size&lt;/li&gt;
&lt;li&gt;Groq's free tier is surprisingly powerful for production-quality inference&lt;/li&gt;
&lt;li&gt;Streamlit makes it incredibly fast to build AI demo UIs&lt;/li&gt;
&lt;li&gt;Always restrict file operations to a sandboxed directory&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/manushreepatil/voice-agent-ai" rel="noopener noreferrer"&gt;https://github.com/manushreepatil/voice-agent-ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Demo: &lt;a href="https://www.loom.com/share/477a2e16f1144f91800194819ea06c35" rel="noopener noreferrer"&gt;https://www.loom.com/share/477a2e16f1144f91800194819ea06c35&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>machinelearning</category>
      <category>streamlit</category>
    </item>
  </channel>
</rss>
