<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SWETHA K K</title>
    <description>The latest articles on DEV Community by SWETHA K K (@swetha_kk).</description>
    <link>https://dev.to/swetha_kk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3880747%2F6fb6f32c-9c17-494a-8d46-2bb8cd710bcd.png</url>
      <title>DEV Community: SWETHA K K</title>
      <link>https://dev.to/swetha_kk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/swetha_kk"/>
    <language>en</language>
    <item>
      <title>How I Built a Voice Controlled AI Agent That Listens, Thinks, and Acts</title>
      <dc:creator>SWETHA K K</dc:creator>
      <pubDate>Wed, 15 Apr 2026 15:11:53 +0000</pubDate>
      <link>https://dev.to/swetha_kk/how-i-built-a-voice-controlled-ai-agent-that-listens-thinks-and-acts-aco</link>
      <guid>https://dev.to/swetha_kk/how-i-built-a-voice-controlled-ai-agent-that-listens-thinks-and-acts-aco</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
What if you could just speak to your computer and have it create files, write code, or summarize text automatically? That's exactly what I built: a voice-controlled local AI agent that accepts audio input, figures out what you want, and executes it.&lt;br&gt;
Here's how I built it, what I used, and what I learned along the way.&lt;/p&gt;

&lt;p&gt;The Architecture&lt;br&gt;
The pipeline has 5 stages:&lt;br&gt;
Audio Input → Speech-to-Text → Intent Detection → Tool Execution → UI Display&lt;/p&gt;

&lt;p&gt;Audio Input — The user speaks into a microphone or uploads an audio file&lt;br&gt;
Speech-to-Text — The audio is transcribed to text using Whisper&lt;br&gt;
Intent Detection — An LLM reads the text and classifies what the user wants&lt;br&gt;
Tool Execution — The right action is triggered (create file, write code, summarize, or chat)&lt;br&gt;
UI Display — Everything is shown in a clean web interface&lt;/p&gt;
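
&lt;p&gt;Here's a minimal sketch of how the stages chain together (the function names are illustrative, not the exact ones from my repo; each piece is fleshed out below):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Pipeline orchestration: each stage feeds the next.
def run_pipeline(audio_path: str) -&gt; dict:
    text = transcribe(audio_path)        # Stage 2: speech-to-text (Whisper)
    intent = detect_intent(text)         # Stage 3: LLM classifies the request
    result = execute_tool(intent, text)  # Stage 4: run the matching tool
    # Stage 5: the UI renders all three fields
    return {"transcription": text, "intent": intent, "result": result}
&lt;/code&gt;&lt;/pre&gt;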

&lt;p&gt;Models I Chose&lt;br&gt;
Speech-to-Text: Groq Whisper Large v3&lt;br&gt;
I originally planned to run Whisper locally via HuggingFace. However, my machine couldn't run it efficiently enough for real-time use. I switched to the Groq API, which runs Whisper Large v3 in the cloud at incredible speed — transcription happens in under a second.&lt;br&gt;
LLM: LLaMA 3.3-70b via Groq&lt;br&gt;
For intent classification and response generation, I used LLaMA 3.3-70b served through Groq. I chose this because:&lt;/p&gt;

&lt;p&gt;It's free to use on Groq's generous free tier&lt;br&gt;
It's extremely fast (Groq's hardware is purpose-built for LLM inference)&lt;br&gt;
It follows structured JSON instructions reliably, which is critical for intent classification&lt;/p&gt;
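
&lt;p&gt;Both models are one call each through the groq Python SDK. A minimal sketch, assuming &lt;code&gt;pip install groq&lt;/code&gt; and a GROQ_API_KEY in the environment:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def transcribe(audio_path: str) -&gt; str:
    # Whisper Large v3 served by Groq; returns plain text.
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(
            file=(audio_path, f.read()),
            model="whisper-large-v3",
        )
    return result.text

def ask_llm(prompt: str) -&gt; str:
    # LLaMA 3.3-70b handles both intent detection and generation.
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
&lt;/code&gt;&lt;/pre&gt;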

&lt;p&gt;UI: Gradio&lt;br&gt;
I used Gradio to build the frontend. It lets you spin up a web UI with just a few lines of Python — perfect for a project like this.&lt;/p&gt;
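
&lt;p&gt;A minimal sketch of the interface (labels and layout are illustrative; the real app wires in the pipeline above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import gradio as gr

def handle_audio(audio_path):
    out = run_pipeline(audio_path)
    return out["transcription"], out["intent"], out["result"]

demo = gr.Interface(
    fn=handle_audio,
    inputs=gr.Audio(sources=["microphone", "upload"], type="filepath"),
    outputs=[
        gr.Textbox(label="Transcription"),
        gr.Textbox(label="Intent"),
        gr.Textbox(label="Result"),
    ],
    title="Voice-Controlled AI Agent",
)
demo.launch()
&lt;/code&gt;&lt;/pre&gt;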

&lt;p&gt;Supported Intents&lt;br&gt;
The agent can detect and handle four intents:&lt;/p&gt;

&lt;p&gt;Create File — Creates a .txt file in the output/ folder&lt;br&gt;
Write Code — Generates Python code and saves it as a .py file&lt;br&gt;
Summarize — Summarizes the spoken content and saves it&lt;br&gt;
General Chat — Has a normal conversation&lt;/p&gt;
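
&lt;p&gt;Tool execution is a plain conditional over the detected intent. A sketch (helper names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import datetime
import pathlib

OUTPUT_DIR = pathlib.Path("output")
OUTPUT_DIR.mkdir(exist_ok=True)

def execute_tool(intent: str, text: str) -&gt; str:
    stamp = datetime.date.today().strftime("%Y%m%d")
    if intent == "create_file":
        path = OUTPUT_DIR / f"note_{stamp}.txt"
        path.write_text(text, encoding="utf-8")
        return f"Created {path}"
    if intent == "write_code":
        path = OUTPUT_DIR / f"code_{stamp}.py"
        path.write_text(ask_llm(f"Write Python code for: {text}"), encoding="utf-8")
        return f"Saved {path}"
    if intent == "summarize":
        summary = ask_llm(f"Summarize this: {text}")
        (OUTPUT_DIR / f"summary_{stamp}.txt").write_text(summary, encoding="utf-8")
        return summary
    return ask_llm(text)  # general_chat: just answer normally
&lt;/code&gt;&lt;/pre&gt;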

&lt;p&gt;Challenges I Faced&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Running Whisper locally
My biggest challenge was getting speech-to-text working at all. Running Whisper locally via HuggingFace needed more RAM and GPU power than my machine had. Switching to the Groq API solved this instantly.&lt;/li&gt;
&lt;li&gt;Getting structured JSON from the LLM
For intent detection, I needed the LLM to return clean JSON every time. Early on, it would sometimes wrap the JSON in extra explanatory text, which broke the parser. I fixed this by making the system prompt very strict — telling it to return only JSON with no preamble (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;Windows file extension issues
A surprisingly tricky one — Windows hides file extensions by default, so saving agent.py in Notepad actually created agent.py.txt. I had to turn on "File name extensions" in File Explorer's View menu to fix it.&lt;/li&gt;
&lt;/ol&gt;
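
&lt;p&gt;For challenge 2, this is roughly the strict prompt plus the defensive parsing that made intent detection reliable (a sketch, not the verbatim prompt from my repo):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json

SYSTEM_PROMPT = (
    "You are an intent classifier. Respond with ONLY a JSON object, "
    "no preamble and no explanation, in the form "
    '{"intent": "create_file" | "write_code" | "summarize" | "general_chat"}'
)

def detect_intent(text: str) -&gt; str:
    raw = ask_llm(f"{SYSTEM_PROMPT}\n\nUser said: {text}")
    # Defensive: keep only the first {...} span in case the model
    # still wraps the JSON in prose.
    start, end = raw.find("{"), raw.rfind("}")
    try:
        return json.loads(raw[start : end + 1])["intent"]
    except (ValueError, KeyError):
        return "general_chat"  # safe fallback
&lt;/code&gt;&lt;/pre&gt;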

&lt;p&gt;Example Flow&lt;br&gt;
User says: "Create a Python file with a retry function"&lt;/p&gt;

&lt;p&gt;Groq Whisper transcribes the audio to text&lt;br&gt;
LLaMA detects intent: write_code&lt;br&gt;
LLaMA generates the Python retry function&lt;br&gt;
File is saved to output/code_20260415.py&lt;br&gt;
The UI shows the transcription, intent, and the generated code&lt;/p&gt;
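
&lt;p&gt;For reference, this is the kind of retry function the agent produces for that request (illustrative, not verbatim model output):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time

def retry(func, attempts=3, delay=1.0, backoff=2.0):
    """Call func, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries, surface the last error
            time.sleep(delay)
            delay = delay * backoff
&lt;/code&gt;&lt;/pre&gt;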

&lt;p&gt;What I'd Improve Next&lt;/p&gt;

&lt;p&gt;Add support for compound commands ("summarize this and save it to a file")&lt;br&gt;
Add a confirmation prompt before executing file operations&lt;br&gt;
Support more intents like web search or sending emails&lt;br&gt;
Build a persistent session memory so the agent remembers context&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Building this agent taught me how powerful combining simple APIs can be. Groq's speed makes real-time voice interaction actually feel snappy, and Gradio makes deploying a UI embarrassingly easy.&lt;br&gt;
The full code is available on GitHub: &lt;a href="https://github.com/swetha-kk/voice-agent" rel="noopener noreferrer"&gt;https://github.com/swetha-kk/voice-agent&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
