<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: hamsiniananya</title>
    <description>The latest articles on DEV Community by hamsiniananya (@hamsiniananya).</description>
    <link>https://dev.to/hamsiniananya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3876828%2F24c25ac4-7fe3-4e3e-ab9a-be701f1d3779.png</url>
      <title>DEV Community: hamsiniananya</title>
      <link>https://dev.to/hamsiniananya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hamsiniananya"/>
    <language>en</language>
    <item>
      <title>Building a Voice-Controlled Local AI Agent: Architecture, Models, and Hard-Won Lessons</title>
      <dc:creator>hamsiniananya</dc:creator>
      <pubDate>Mon, 13 Apr 2026 14:12:35 +0000</pubDate>
      <link>https://dev.to/hamsiniananya/building-a-voice-controlled-local-ai-agent-architecture-models-and-hard-won-lessons-31h9</link>
      <guid>https://dev.to/hamsiniananya/building-a-voice-controlled-local-ai-agent-architecture-models-and-hard-won-lessons-31h9</guid>
      <description>&lt;p&gt;I recently built a voice-controlled AI agent that runs almost entirely on my local machine. You speak a command, it transcribes you, figures out what you want, and actually does it — creates files, writes code, summarises text, or just chats back. Here's how I built it, the architectural decisions I made, and the surprises along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We're Building
&lt;/h2&gt;

&lt;p&gt;The agent has four stages in its pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Speech-to-Text (STT)&lt;/strong&gt; — converts your voice to text&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intent Classification&lt;/strong&gt; — an LLM determines &lt;em&gt;what&lt;/em&gt; you want&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool Execution&lt;/strong&gt; — the correct action is performed on your machine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streamlit UI&lt;/strong&gt; — displays every stage transparently&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The guiding principle was &lt;em&gt;local-first&lt;/em&gt;: I wanted this running on my laptop without monthly API bills. Cloud providers are available as fallbacks.&lt;/p&gt;
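
&lt;p&gt;Wired together, the four stages are a straight pipeline. Here's a minimal sketch of the glue (the function names are illustrative, not the actual module layout; each stage is detailed below):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def handle_voice_command(audio_path: str) -&amp;gt; str:
    text = transcribe(audio_path)      # Stage 1: Whisper STT
    plan = classify_intent(text)       # Stage 2: LLM returns intents + entities as JSON
    result = execute_tools(plan)       # Stage 3: run the matching tool(s)
    return result                      # Stage 4: Streamlit renders every stage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;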




&lt;h2&gt;
  
  
  Architecture Deep Dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Stage 1 — Speech-to-Text
&lt;/h3&gt;

&lt;p&gt;The obvious choice is OpenAI's Whisper. I used the &lt;code&gt;openai-whisper&lt;/code&gt; pip package, which lets you run the model entirely offline. I went with the &lt;code&gt;base&lt;/code&gt; model (~74M parameters) as a balance between accuracy and speed on CPU. On my machine (Intel i7, 16GB RAM, no GPU), it transcribes a 10-second clip in about 12 seconds. Acceptable for a demo; I'd switch to a GPU or Groq's API for production.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;whisper&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;whisper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transcribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;audio.wav&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why not wav2vec2?&lt;/strong&gt; wav2vec2 is excellent for short, clean speech but less robust to diverse accents and background noise. Whisper is trained on 680,000 hours of multilingual audio — it just handles the real world better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware workaround&lt;/strong&gt;: If your machine can't run Whisper in real time, Groq's Whisper API is free-tier friendly and returns results in under a second. I built this as a selectable option in the sidebar and documented the choice explicitly in the README.&lt;/p&gt;
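
&lt;p&gt;For reference, the cloud path is only a few lines. This is a sketch based on Groq's OpenAI-compatible Python SDK (check their docs for the exact parameters):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

with open("audio.wav", "rb") as f:
    transcription = client.audio.transcriptions.create(
        file=("audio.wav", f.read()),
        model="whisper-large-v3",
    )

print(transcription.text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;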




&lt;h3&gt;
  
  
  Stage 2 — Intent Classification
&lt;/h3&gt;

&lt;p&gt;This is where LLM prompt engineering gets interesting. Rather than fine-tuning a model, I use a structured zero-shot classification prompt that forces the model to return a JSON object with &lt;code&gt;intents&lt;/code&gt;, &lt;code&gt;reasoning&lt;/code&gt;, and &lt;code&gt;entities&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;Given&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;command,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;identify&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ALL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;applicable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;intents&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;list:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;create_file,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;write_code,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;summarize_text,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;general_chat,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;unknown&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;Return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ONLY:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"intents"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"intent1"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reasoning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"entities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"filename"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;entities&lt;/code&gt; field is crucial — it lets the tool executor pick up the filename, programming language, or text content mentioned in the command without needing another LLM call.&lt;/p&gt;

&lt;p&gt;I used &lt;strong&gt;Ollama&lt;/strong&gt; with &lt;code&gt;llama3.2&lt;/code&gt; for local inference. Ollama runs as a local HTTP server, which means calling it from Python is just a POST request — dead simple and no GPU required (though it helps).&lt;/p&gt;
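
&lt;p&gt;That POST request, roughly (assuming the classification prompt above lives in a &lt;code&gt;CLASSIFICATION_PROMPT&lt;/code&gt; string; Ollama's &lt;code&gt;format: "json"&lt;/code&gt; option nudges the model towards valid JSON):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import requests

def classify_intent(command: str) -&amp;gt; dict:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2",
            "prompt": CLASSIFICATION_PROMPT + "\n\nUser command: " + command,
            "format": "json",   # constrain output to valid JSON
            "stream": False,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;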

&lt;p&gt;&lt;strong&gt;Compound command support&lt;/strong&gt;: Because I extract a &lt;em&gt;list&lt;/em&gt; of intents, a command like "Summarize this text and save it to summary.txt" correctly returns &lt;code&gt;["summarize_text"]&lt;/code&gt; with &lt;code&gt;filename: "summary.txt"&lt;/code&gt; in entities — the tool executor then both generates the summary &lt;em&gt;and&lt;/em&gt; saves it.&lt;/p&gt;




&lt;h3&gt;
  
  
  Stage 3 — Tool Execution
&lt;/h3&gt;

&lt;p&gt;Each intent maps to a tool function. All file operations are restricted to an &lt;code&gt;output/&lt;/code&gt; directory — a critical safety constraint I implemented by calling &lt;code&gt;Path(filename).name&lt;/code&gt; to strip any parent directory components before constructing the output path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_safe_output_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;safe_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;   &lt;span class="c1"&gt;# strips "../../../etc/passwd" attacks
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;OUTPUT_DIR&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;safe_name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For code generation, I send the user's request back to the LLM with a code-only prompt. For summarization, a summarization prompt. For general chat, a straightforward conversational prompt. Three prompts, one LLM call each.&lt;/p&gt;
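
&lt;p&gt;The dispatch itself is tiny. A sketch, with the prompt texts abbreviated and &lt;code&gt;llm()&lt;/code&gt; standing in for whichever backend (Ollama or Groq) is selected:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;PROMPTS = {
    "write_code": "Return ONLY code, with no explanations, for this request:\n{request}",
    "summarize_text": "Summarize the following text in a few sentences:\n{request}",
    "general_chat": "Reply conversationally to:\n{request}",
}

def run_llm_tool(intent: str, request: str) -&amp;gt; str:
    # llm() is a placeholder for the backend call shown earlier
    return llm(PROMPTS[intent].format(request=request))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;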




&lt;h3&gt;
  
  
  Stage 4 — Streamlit UI
&lt;/h3&gt;

&lt;p&gt;Streamlit was the natural fit for a rapid Python UI. It required no JavaScript, and the entire UI state (session history, settings) lives in &lt;code&gt;st.session_state&lt;/code&gt;. I used custom CSS injected via &lt;code&gt;st.markdown(..., unsafe_allow_html=True)&lt;/code&gt; to give it a dark, terminal-like feel that matches the "local agent" aesthetic.&lt;/p&gt;
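
&lt;p&gt;The CSS injection is nothing fancy. A sketch of the pattern (the real stylesheet is longer, and the selectors here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import streamlit as st

st.markdown(
    """&amp;lt;style&amp;gt;
    .stApp { background-color: #0e1117; color: #d4d4d4; font-family: monospace; }
    &amp;lt;/style&amp;gt;""",
    unsafe_allow_html=True,
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;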

&lt;p&gt;The &lt;strong&gt;Human-in-the-Loop&lt;/strong&gt; feature — a toggle in the sidebar — intercepts any file-writing intent and shows a confirmation dialog before executing. This is implemented with a simple boolean in session state.&lt;/p&gt;
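
&lt;p&gt;Stripped down, the confirmation gate looks something like this (simplified: &lt;code&gt;plan&lt;/code&gt;, &lt;code&gt;writes_files()&lt;/code&gt; and &lt;code&gt;execute_tools()&lt;/code&gt; are placeholders, and the real app keeps the pending action in session state across reruns):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import streamlit as st

require_confirm = st.sidebar.checkbox("Human-in-the-loop confirmation", value=True)

if require_confirm and writes_files(plan):
    st.session_state["pending_plan"] = plan
    st.warning(f"About to write {plan['entities'].get('filename')}. Proceed?")
    if st.button("Confirm"):
        execute_tools(st.session_state.pop("pending_plan"))
else:
    execute_tools(plan)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;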




&lt;h2&gt;
  
  
  The Challenges
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Parsing LLM JSON Reliably
&lt;/h3&gt;

&lt;p&gt;The biggest headache was getting consistent JSON back from the LLM. Even with explicit instructions, models occasionally wrap their response in markdown fences or add a preamble like "Sure, here is the JSON:". My solution: strip markdown fences with regex, then use &lt;code&gt;re.search(r"\{.*\}", text, re.DOTALL)&lt;/code&gt; to extract the JSON object, then &lt;code&gt;json.loads()&lt;/code&gt;. Never trust raw LLM output.&lt;/p&gt;
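
&lt;p&gt;The resulting parser, roughly (the &lt;code&gt;unknown&lt;/code&gt; fallback is one reasonable way to handle output that still fails to parse):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import re

FALLBACK = {"intents": ["unknown"], "reasoning": "unparseable", "entities": {}}

def extract_json(raw: str) -&amp;gt; dict:
    cleaned = re.sub(r"```(?:json)?", "", raw)          # strip markdown fences
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)    # grab the outermost {...}
    if not match:
        return dict(FALLBACK)
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return dict(FALLBACK)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;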

&lt;h3&gt;
  
  
  2. Whisper Audio Format
&lt;/h3&gt;

&lt;p&gt;Whisper is finicky about input formats. Streamlit's &lt;code&gt;st.audio_input&lt;/code&gt; returns bytes in a format that soundfile doesn't always parse cleanly. The fix: write to a temp &lt;code&gt;.wav&lt;/code&gt; file and pass the path to Whisper, then clean up.&lt;/p&gt;
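
&lt;p&gt;In code, the workaround is a temp-file round trip (a sketch; &lt;code&gt;st.audio_input&lt;/code&gt; gives you a file-like object whose bytes get dumped to disk):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os
import tempfile

def transcribe_bytes(model, audio_bytes: bytes) -&amp;gt; str:
    # write a real .wav to disk; Whisper loads it via ffmpeg
    tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
    try:
        tmp.write(audio_bytes)
        tmp.close()
        return model.transcribe(tmp.name)["text"]
    finally:
        os.unlink(tmp.name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;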

&lt;h3&gt;
  
  
  3. Ollama Cold Start
&lt;/h3&gt;

&lt;p&gt;The first inference call after starting Ollama takes 3–8 seconds to load the model into memory. Subsequent calls are fast (~1s for classification). I added a spinner in the UI so users don't think the app has frozen.&lt;/p&gt;
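
&lt;p&gt;In Streamlit that's a one-liner around the first LLM call (&lt;code&gt;classify_intent&lt;/code&gt; being the classification helper sketched earlier):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import streamlit as st

with st.spinner("Thinking... (the first call loads llama3.2 into memory)"):
    plan = classify_intent(transcript)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;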

&lt;h3&gt;
  
  
  4. Compound Intents
&lt;/h3&gt;

&lt;p&gt;Supporting "Summarize this and save it to file.txt" required rethinking the tool dispatcher. My first version mapped one intent to one tool. The fix was to always prioritise &lt;code&gt;write_code&lt;/code&gt; → &lt;code&gt;create_file&lt;/code&gt; → &lt;code&gt;summarize_text&lt;/code&gt; → &lt;code&gt;general_chat&lt;/code&gt; in that order, while passing the full &lt;code&gt;entities&lt;/code&gt; dict to every tool so the filename is always available regardless of which tool runs.&lt;/p&gt;
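
&lt;p&gt;The dispatcher ends up as an ordered scan over the returned intent list. A sketch (&lt;code&gt;TOOLS&lt;/code&gt; is an illustrative intent-to-function registry):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;PRIORITY = ["write_code", "create_file", "summarize_text", "general_chat"]

def execute_tools(plan: dict) -&amp;gt; str:
    intents = plan.get("intents", [])
    entities = plan.get("entities", {})
    for intent in PRIORITY:
        if intent in intents:
            # every tool gets the full entities dict, so the filename is
            # available no matter which tool ends up running
            return TOOLS[intent](entities)
    return TOOLS["general_chat"](entities)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;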




&lt;h2&gt;
  
  
  Model Choices Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Stage&lt;/th&gt;
&lt;th&gt;Local Model&lt;/th&gt;
&lt;th&gt;Cloud Fallback&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;STT&lt;/td&gt;
&lt;td&gt;Whisper base&lt;/td&gt;
&lt;td&gt;Groq Whisper-large-v3&lt;/td&gt;
&lt;td&gt;Robustness, multilingual&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLM&lt;/td&gt;
&lt;td&gt;Ollama llama3.2&lt;/td&gt;
&lt;td&gt;Groq llama-3.1-8b-instant&lt;/td&gt;
&lt;td&gt;JSON compliance, speed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Speed comparison&lt;/strong&gt; (informal benchmarking on my machine):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Whisper base (CPU): ~12s for 10s clip&lt;/li&gt;
&lt;li&gt;Groq Whisper API: ~0.8s for same clip&lt;/li&gt;
&lt;li&gt;Ollama llama3.2 (CPU): ~4s for intent classification&lt;/li&gt;
&lt;li&gt;Groq llama-3.1-8b: ~0.5s for same prompt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cloud APIs are 5–15× faster, but the local stack costs nothing after setup and keeps all your data on your machine.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd Build Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Voice Activity Detection (VAD)&lt;/strong&gt;: Instead of pressing a button to record, use Silero VAD to auto-start/stop recording when speech is detected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streaming code output&lt;/strong&gt;: Stream the LLM's code generation token-by-token into the UI for a ChatGPT-style typing effect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent memory across sessions&lt;/strong&gt;: Store chat history and created files in SQLite for true agent memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool plugins&lt;/strong&gt;: A simple plugin system where new tools can be registered by dropping a Python file into a &lt;code&gt;tools/&lt;/code&gt; directory.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The most surprising thing about this project was how accessible the local AI stack has become. A year ago, running a capable LLM on a laptop felt impossible. Today, Ollama + llama3.2 gives you a genuinely useful language model in one terminal command. Combine that with Whisper for STT and Streamlit for UI, and you have a full voice AI agent in under 400 lines of Python.&lt;/p&gt;

&lt;p&gt;The code is on GitHub: &lt;a href="https://github.com/hamsiniananya/Voice-Controlled-Local-AI-Agent.git" rel="noopener noreferrer"&gt;https://github.com/hamsiniananya/Voice-Controlled-Local-AI-Agent.git&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;All opinions are my own. Built as part of an AI engineering assignment.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
