<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abhay Raj Singh Hada</title>
    <description>The latest articles on DEV Community by Abhay Raj Singh Hada (@abhayyraj).</description>
    <link>https://dev.to/abhayyraj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3873007%2Fddfe72d5-c704-45db-baa6-ea90cea7ef35.png</url>
      <title>DEV Community: Abhay Raj Singh Hada</title>
      <link>https://dev.to/abhayyraj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abhayyraj"/>
    <language>en</language>
    <item>
      <title>How I Built a Voice-Controlled AI Agent in Python</title>
      <dc:creator>Abhay Raj Singh Hada</dc:creator>
      <pubDate>Sat, 11 Apr 2026 07:08:53 +0000</pubDate>
      <link>https://dev.to/abhayyraj/how-i-built-a-voice-controlled-ai-agent-in-python-4hee</link>
      <guid>https://dev.to/abhayyraj/how-i-built-a-voice-controlled-ai-agent-in-python-4hee</guid>
<description>

&lt;p&gt;I recently applied for an internship at Mem0 AI, a memory-layer startup in San Francisco. As part of their selection process, they gave me a technical assignment: build a voice-controlled AI agent that listens to commands, understands intent, and takes real actions on the computer.&lt;br&gt;
I am fairly new to AI, so this was a good challenge for me. Here is what I built and how.&lt;/p&gt;

&lt;h2&gt;What it does&lt;/h2&gt;

&lt;p&gt;You speak a command, and the agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Converts your speech to text&lt;/li&gt;
&lt;li&gt;Understands what you want&lt;/li&gt;
&lt;li&gt;Takes the actual action: creates files, writes code, summarizes text, or chats&lt;/li&gt;
&lt;li&gt;Shows everything in a browser UI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;Four files, each doing one job:&lt;/p&gt;

&lt;p&gt;Voice → stt.py → intent.py → tools.py → app.py&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stt.py: sends audio to Groq Whisper, gets back text&lt;/li&gt;
&lt;li&gt;intent.py: sends text to LLaMA 3.3, gets back JSON describing what the user wants&lt;/li&gt;
&lt;li&gt;tools.py: executes the action based on intent&lt;/li&gt;
&lt;li&gt;app.py: shows everything in a Gradio UI&lt;/li&gt;
&lt;/ul&gt;
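&lt;p&gt;The repo code isn't reproduced in this post, so here is a minimal sketch of how an intent dispatcher like tools.py could route the parsed JSON to an action. The handler names and return strings are my own illustrations, not the actual implementation:&lt;/p&gt;

```python
# Sketch of a tools.py-style dispatcher (handler names are illustrative,
# not taken from the actual repo).

def create_file(intent):
    return f"created {intent['filename']}"

def write_code(intent):
    return f"wrote {intent.get('language', 'Python')} code to {intent['filename']}"

def chat(intent):
    return "chat reply"

HANDLERS = {
    "create_file": create_file,
    "write_code": write_code,
    "chat": chat,
}

def execute(intent: dict) -> str:
    """Route a parsed intent dict to the matching action handler."""
    # Unknown intents fall back to plain chat rather than crashing.
    handler = HANDLERS.get(intent.get("intent"), chat)
    return handler(intent)
```

&lt;p&gt;A dict of handlers keeps the routing flat: adding a new action is one more entry, not another if/elif branch.&lt;/p&gt;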


&lt;h2&gt;The JSON trick&lt;/h2&gt;

&lt;p&gt;The most interesting part was forcing LLaMA to reply only in JSON format. This way I can reliably extract things like the filename and programming language without parsing messy natural language:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "intent": "write_code",
  "filename": "bubble_sort.py",
  "language": "Python"
}&lt;/code&gt;&lt;/pre&gt;
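&lt;p&gt;Even with a strict prompt, models sometimes pad the JSON with extra prose or code fences, so a small cleanup step before parsing helps. This is a sketch of that kind of defensive parser, not the exact code in intent.py:&lt;/p&gt;

```python
import json
import re

def parse_intent(reply: str) -> dict:
    """Extract the first JSON object from an LLM reply.

    Tolerates replies padded with extra prose or fencing around the
    object. Sketch only; the real intent.py may differ.
    """
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object in reply")
    return json.loads(match.group(0))
```

&lt;p&gt;Pulling out just the brace-delimited span means a chatty reply like "Sure, here it is: {...}" still parses cleanly.&lt;/p&gt;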


&lt;h2&gt;Why Groq instead of local models&lt;/h2&gt;

&lt;p&gt;My laptop is CPU-only. Running Whisper locally takes 30-60 seconds per request, which makes the agent unusable. Groq gives free API access with responses under a second, and swapping to Ollama later needs just a one-line change.&lt;/p&gt;
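&lt;p&gt;The one-line swap works because both Groq and Ollama expose OpenAI-compatible endpoints. A sketch of how the backend choice could be isolated in configuration (the base URLs are the documented defaults; the model names are examples, not necessarily what the repo uses):&lt;/p&gt;

```python
# Backend registry: both services speak the OpenAI-compatible API, so only
# the base_url and model name differ. Model names are examples.

BACKENDS = {
    "groq":   {"base_url": "https://api.groq.com/openai/v1",
               "model": "llama-3.3-70b-versatile"},
    "ollama": {"base_url": "http://localhost:11434/v1",
               "model": "llama3"},
}

def backend_config(name: str) -> dict:
    """Return the base_url/model pair for the chosen backend."""
    return BACKENDS[name]
```

&lt;p&gt;An OpenAI-style client pointed at either base_url then works unchanged, so the swap really is a single config edit.&lt;/p&gt;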


&lt;h2&gt;Challenges&lt;/h2&gt;


&lt;ul&gt;
&lt;li&gt;Gradio version conflicts took the most debugging time&lt;/li&gt;
&lt;li&gt;The LLaMA model I started with was decommissioned mid-development&lt;/li&gt;
&lt;li&gt;Getting consistent JSON output from the LLM needed a very explicit system prompt&lt;/li&gt;
&lt;/ul&gt;
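&lt;p&gt;My exact prompt isn't in this post, but a very explicit system prompt of this shape is what the last point refers to. The wording below is illustrative, not copied from the repo:&lt;/p&gt;

```python
# Illustrative system prompt for forcing JSON-only replies; the real
# prompt in the repo is not reproduced here.

SYSTEM_PROMPT = (
    "You are an intent classifier. Reply with ONLY a JSON object, "
    "no prose and no fencing. Use exactly these keys: intent, "
    "filename, language. Allowed intents: create_file, write_code, "
    "summarize, chat. If a key does not apply, set it to null."
)

def build_messages(user_text: str) -> list:
    """Assemble the chat messages sent to the LLM."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
```

&lt;p&gt;Spelling out the exact keys and the allowed intent values, rather than just asking for "JSON", is what made the output stable for me.&lt;/p&gt;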

&lt;p&gt;I tested both whisper-large-v3 and whisper-large-v3-turbo for speech-to-text. The turbo version was slightly faster but slightly less accurate on Indian accents, so I went with the standard version for better accuracy.&lt;/p&gt;


&lt;h2&gt;What I learned&lt;/h2&gt;


&lt;ul&gt;
&lt;li&gt;How to build an end-to-end AI pipeline in Python&lt;/li&gt;
&lt;li&gt;Using LLMs for structured JSON output&lt;/li&gt;
&lt;li&gt;How Whisper speech-to-text works&lt;/li&gt;
&lt;li&gt;Building UIs with Gradio&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/Abhayy-Raj/voice-agent" rel="noopener noreferrer"&gt;https://github.com/Abhayy-Raj/voice-agent&lt;/a&gt;&lt;br&gt;
YouTube: &lt;a href="https://youtu.be/Ii8TeJdH27w" rel="noopener noreferrer"&gt;https://youtu.be/Ii8TeJdH27w&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>gradio</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
