<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: deeptiverma12</title>
    <description>The latest articles on DEV Community by deeptiverma12 (@deeptiverma12).</description>
    <link>https://dev.to/deeptiverma12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3875393%2F0c7b2980-67fc-454a-a6f2-7765674f6125.png</url>
      <title>DEV Community: deeptiverma12</title>
      <link>https://dev.to/deeptiverma12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/deeptiverma12"/>
    <language>en</language>
    <item>
      <title>Building a Voice-Controlled Local AI Agent with Groq Whisper and Llama 3.3-70b</title>
      <dc:creator>deeptiverma12</dc:creator>
      <pubDate>Sun, 12 Apr 2026 19:02:21 +0000</pubDate>
      <link>https://dev.to/deeptiverma12/building-a-voice-controlled-local-ai-agent-with-groq-whisper-and-llama-33-70b-1e54</link>
      <guid>https://dev.to/deeptiverma12/building-a-voice-controlled-local-ai-agent-with-groq-whisper-and-llama-33-70b-1e54</guid>
<description>&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;I recently built a voice-controlled AI agent that accepts audio input, detects the user's intent, and executes local actions automatically. In this article I'll walk through the architecture, the models I chose, and the challenges I faced while building it.&lt;/p&gt;

&lt;h2&gt;What the Agent Does&lt;/h2&gt;

&lt;p&gt;The agent supports four intents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create File&lt;/strong&gt; — creates a new file in a dedicated output folder&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write Code&lt;/strong&gt; — generates code using an LLM and saves it to a file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summarize&lt;/strong&gt; — summarizes provided text in 3-4 lines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;General Chat&lt;/strong&gt; — answers general questions conversationally&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;The pipeline has five components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. stt.py — Speech to Text&lt;/strong&gt;&lt;br&gt;
Converts uploaded audio to text using Groq's Whisper large-v3 model.&lt;/p&gt;
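
&lt;p&gt;A minimal sketch of what the stt.py call can look like with the Groq Python SDK; the function name and file handling here are illustrative, not the repo's actual code:&lt;/p&gt;

```python
def transcribe(audio_path: str) -> str:
    """Send a local audio file to Groq's Whisper large-v3 and return the text."""
    from groq import Groq  # deferred import so the sketch loads without the SDK

    client = Groq()  # reads GROQ_API_KEY from the environment
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(
            file=(audio_path, f.read()),
            model="whisper-large-v3",
        )
    return result.text
```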

&lt;p&gt;&lt;strong&gt;2. intent.py — Intent Detection&lt;/strong&gt;&lt;br&gt;
Sends the transcribed text to Llama 3.3-70b with a structured prompt asking it to return a JSON object containing the intent, details, and suggested filename.&lt;/p&gt;
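
&lt;p&gt;The detection step can be sketched as below. The prompt wording is my own, but the four intent labels and the intent/details/filename keys follow the article:&lt;/p&gt;

```python
import json

INTENT_PROMPT = (
    "Classify the user's request into one of: create_file, write_code, "
    "summarize, general_chat. Reply with ONLY a JSON object with keys "
    '"intent", "details", and "filename".'
)

def detect_intent(transcript: str) -> dict:
    from groq import Groq  # deferred import so the sketch loads without the SDK

    client = Groq()
    resp = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {"role": "system", "content": INTENT_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```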

&lt;p&gt;&lt;strong&gt;3. tools.py — Tool Execution&lt;/strong&gt;&lt;br&gt;
Based on the detected intent, executes the appropriate action. All file operations are restricted to an output/ folder for safety.&lt;/p&gt;
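
&lt;p&gt;The output/ restriction can be sketched with a pathlib-based check; the helper name is hypothetical:&lt;/p&gt;

```python
from pathlib import Path

OUTPUT_DIR = Path("output")

def safe_path(filename: str) -> Path:
    """Resolve filename inside output/ and refuse anything that escapes it."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    target = (OUTPUT_DIR / filename).resolve()
    # After resolving ".." components, output/ must still be an ancestor
    if OUTPUT_DIR.resolve() not in target.parents:
        raise ValueError(f"refusing to touch a path outside output/: {filename}")
    return target
```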

&lt;p&gt;&lt;strong&gt;4. memory.py — Session Memory&lt;/strong&gt;&lt;br&gt;
Maintains a list of all commands executed during the session, displayed in the UI with expand/collapse functionality.&lt;/p&gt;
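
&lt;p&gt;The session log can be as simple as a list of dicts; this class and its field names are illustrative, not the repo's actual memory.py:&lt;/p&gt;

```python
class SessionMemory:
    """Minimal sketch of a per-session command log."""

    def __init__(self):
        self.entries = []

    def add(self, command: str, intent: str, result: str) -> None:
        self.entries.append(
            {"command": command, "intent": intent, "result": result}
        )

    def history(self):
        # Newest first, the order an expand/collapse UI typically shows
        return list(reversed(self.entries))
```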

&lt;p&gt;&lt;strong&gt;5. app.py — Streamlit UI&lt;/strong&gt;&lt;br&gt;
Connects all components and displays the transcription, detected intent, confirmation prompt, result, and session history.&lt;/p&gt;

&lt;h2&gt;Models I Chose&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Whisper large-v3 via Groq API&lt;/strong&gt; for speech to text. I chose Groq over running Whisper locally because local inference on my machine is not fast enough for real-time use. Groq's inference is extremely fast, which keeps the pipeline responsive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Llama 3.3-70b-versatile via Groq API&lt;/strong&gt; for intent detection and tool execution. I chose this model because it follows structured JSON instructions reliably, which is critical for intent classification. It also generates high quality code for the write_code intent.&lt;/p&gt;

&lt;h2&gt;Challenges I Faced&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. JSON parsing from LLM responses&lt;/strong&gt;&lt;br&gt;
Llama sometimes wraps its JSON response in markdown code fences such as &lt;code&gt;```json ... ```&lt;/code&gt;, which caused json.loads() to crash. I fixed this by stripping the backticks before parsing.&lt;/p&gt;
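
&lt;p&gt;The fix can be sketched as a small wrapper around json.loads(); the function name is hypothetical:&lt;/p&gt;

```python
import json
import re

def parse_llm_json(raw: str) -> dict:
    """Strip an optional ```json ... ``` fence before parsing."""
    cleaned = raw.strip()
    # Drop a leading fence such as ``` or ```json, then a trailing ```
    cleaned = re.sub(r"^```[a-zA-Z]*\s*", "", cleaned)
    cleaned = re.sub(r"\s*```$", "", cleaned)
    return json.loads(cleaned)
```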

&lt;p&gt;&lt;strong&gt;2. Audio format conversion&lt;/strong&gt;&lt;br&gt;
The Groq Whisper API does not support .ogg format directly. I used pydub to automatically convert any uploaded audio format to .wav before sending it to Whisper.&lt;/p&gt;
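
&lt;p&gt;A sketch of the conversion step, assuming pydub with ffmpeg available on PATH; the helper name is illustrative:&lt;/p&gt;

```python
from pathlib import Path

def ensure_wav(audio_path: str) -> str:
    """Return a .wav path, converting any other uploaded format via pydub."""
    path = Path(audio_path)
    if path.suffix.lower() == ".wav":
        return str(path)
    from pydub import AudioSegment  # deferred: pydub also needs ffmpeg installed

    wav_path = path.with_suffix(".wav")
    AudioSegment.from_file(path).export(wav_path, format="wav")
    return str(wav_path)
```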

&lt;p&gt;&lt;strong&gt;3. Streamlit session state&lt;/strong&gt;&lt;br&gt;
When the user clicked Yes on the confirmation prompt, Streamlit reran the entire script, which reset all variables. I fixed this by storing the transcribed text, intent result, and output in st.session_state so they persist across reruns.&lt;/p&gt;
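
&lt;p&gt;The pattern looks like this; a plain dict stands in for st.session_state (which supports the same key access) so the sketch runs outside Streamlit. The three keys follow the article, the helper name is mine:&lt;/p&gt;

```python
state = {}  # stand-in for st.session_state

def init_state() -> None:
    # Streamlit re-executes the whole script on every button click, so this
    # runs on each rerun; setdefault only fills keys that are missing and
    # leaves values written before the rerun untouched.
    for key in ("transcript", "intent_result", "output"):
        state.setdefault(key, None)

init_state()
state["transcript"] = "create a file called notes.txt"
init_state()  # simulate the rerun triggered by clicking Yes
```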

&lt;p&gt;&lt;strong&gt;4. Python 3.13 compatibility&lt;/strong&gt;&lt;br&gt;
The audioop module was removed in Python 3.13, which broke pydub. I fixed this by installing the audioop-lts package which brings back the missing module.&lt;/p&gt;

&lt;h2&gt;Bonus Features&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Human-in-the-loop&lt;/strong&gt; — confirmation prompt before any file operation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session Memory&lt;/strong&gt; — full history of all commands in the UI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto Format Conversion&lt;/strong&gt; — handles ogg, mp3, m4a, wav automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graceful Degradation&lt;/strong&gt; — markdown stripping ensures LLM responses always parse correctly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;GitHub Repository&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/deeptiverma12/voice-local-agent" rel="noopener noreferrer"&gt;https://github.com/deeptiverma12/voice-local-agent&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Building this agent taught me how to wire together STT, LLM intent detection, and local tool execution into a clean pipeline. The biggest learning was handling Streamlit's rerun behaviour with session state. Overall a very practical project that shows how voice can be used as a natural interface for AI agents.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
    </item>
  </channel>
</rss>
