<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harsh Vardhan Singh</title>
    <description>The latest articles on DEV Community by Harsh Vardhan Singh (@harshvsingh).</description>
    <link>https://dev.to/harshvsingh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3884464%2F9e0b5205-67ea-4aaa-a139-25fc1bdbe920.png</url>
      <title>DEV Community: Harsh Vardhan Singh</title>
      <link>https://dev.to/harshvsingh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/harshvsingh"/>
    <language>en</language>
    <item>
      <title>Voice Agent</title>
      <dc:creator>Harsh Vardhan Singh</dc:creator>
      <pubDate>Fri, 17 Apr 2026 12:28:22 +0000</pubDate>
      <link>https://dev.to/harshvsingh/voice-agent-5hhm</link>
      <guid>https://dev.to/harshvsingh/voice-agent-5hhm</guid>
      <description>&lt;h1&gt;
  
  
  Building a Local-First Voice AI Agent: Architecture, Models, and Constraints
&lt;/h1&gt;

&lt;p&gt;The demand for capable, privacy-preserving AI agents is growing, but running these systems entirely on local consumer hardware imposes a strict set of engineering constraints. Cloud-based agents can afford to use massive, generalized models in complex cyclic loops. Local agents, constrained by limited GPU memory (typically 6 to 8 GB of VRAM), require a far more deliberate approach.&lt;/p&gt;

&lt;p&gt;To explore these constraints, I developed &lt;strong&gt;VoiceAgent&lt;/strong&gt;, a voice-controlled, completely offline AI assistant capable of processing speech, detecting user intents, writing code, and executing sandboxed file system operations. &lt;/p&gt;

&lt;p&gt;This article outlines the architecture of the system, the rationale behind the selected models, and the engineering challenges overcome during the build.&lt;/p&gt;

&lt;h2&gt;
  
  
  System Architecture: The Deterministic Pipeline
&lt;/h2&gt;

&lt;p&gt;When designing autonomous agents, the industry standard is often a ReAct (Reasoning and Acting) loop, typically orchestrated by frameworks like LangGraph. In these architectures, the LLM determines when to call a tool, evaluates the output, and decides internally when to finish.&lt;/p&gt;

&lt;p&gt;While powerful for frontier models (like GPT-4), cyclic architectures are highly unstable for local models under 10 billion parameters. Small models frequently hallucinate tool parameters, invent nonexistent files, or fall into infinite execution loops. &lt;/p&gt;

&lt;p&gt;To solve this, VoiceAgent abandons the cyclic loop in favor of a &lt;strong&gt;deterministic pipeline&lt;/strong&gt; powered by structured outputs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Input and Transcription&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Audio is captured via the user interface and transcribed entirely offline using OpenAI's Whisper models, downloaded from Hugging Face and run locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Intent Routing via Structured Outputs&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The transcribed text is passed to a Router LLM. Instead of generating free-form text, the router model is strictly constrained to a Pydantic schema (JSON). Its sole purpose is to map the user's natural language to a predefined action plan containing an intent (e.g., &lt;code&gt;create_file&lt;/code&gt;, &lt;code&gt;write_code&lt;/code&gt;, &lt;code&gt;summarize_text&lt;/code&gt;) and the extracted arguments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Plan Normalization and Selective HITL&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A Python middleware validates the model's output. If the requested intent modifies the file system (creating directories or writing code), the pipeline pauses and requires explicit Human-In-The-Loop (HITL) approval before proceeding. Safe operations, such as reading or summarizing an existing file, bypass this check and execute immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Deterministic Execution&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Once approved, a Python executor runs the determined tools. The LLM does not interact directly with the file system; it only provides the validated arguments to the deterministic Python functions.&lt;/p&gt;

&lt;p&gt;This linear approach effectively eliminates tool-calling hallucinations: the system either executes exactly the plan that was validated or fails gracefully.&lt;/p&gt;
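
&lt;p&gt;The four pipeline steps above can be sketched as a small contract plus an approval gate. This is a minimal stdlib illustration, not the project's code: the real router output is validated against a Pydantic model, and the &lt;code&gt;read_file&lt;/code&gt; intent name here is assumed for illustration.&lt;/p&gt;

```python
# Minimal sketch of the router contract and the HITL gate. A plain
# validator stands in for the real Pydantic model so the sketch stays
# dependency-free. Intent names follow the article; read_file is assumed.
ALLOWED_INTENTS = {"create_file", "write_code", "summarize_text", "read_file"}

# Intents that modify the file system pause the pipeline for approval.
MUTATING_INTENTS = {"create_file", "write_code"}

def validate_plan(plan):
    """Reject any plan whose intent or arguments fall outside the schema."""
    if plan.get("intent") not in ALLOWED_INTENTS:
        raise ValueError(f"unknown intent: {plan.get('intent')}")
    if not isinstance(plan.get("arguments"), dict):
        raise ValueError("arguments must be a JSON object")
    return plan

def needs_approval(plan):
    """True if the validated plan must wait for Human-In-The-Loop sign-off."""
    return plan["intent"] in MUTATING_INTENTS
```

&lt;p&gt;Because the gate runs in deterministic Python rather than in the model, no amount of prompt drift can talk the system out of pausing before a write.&lt;/p&gt;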

&lt;h2&gt;
  
  
  Engineering Challenges and Model Selection
&lt;/h2&gt;

&lt;p&gt;Running an end-to-end agent on limited local hardware requires ruthless optimization. You cannot simply load a monolithic 70B-parameter model. Instead, VoiceAgent relies on a split-model architecture, delegating specific tasks to specialized, smaller models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge 1: The Schema Adherence Problem
&lt;/h3&gt;

&lt;p&gt;Smaller models famously struggle to output valid, parseable JSON. They often wrap the payload in conversational filler ("Sure! Here is the JSON you asked for:"), which breaks naive parsing logic.&lt;/p&gt;

&lt;p&gt;During development, I benchmarked several models for the intent routing task. While &lt;code&gt;llama3.1:8b&lt;/code&gt; was slightly faster in pure generation speed (averaging 4.8 seconds for a complex routing request), &lt;code&gt;qwen2.5:7b-instruct-q4&lt;/code&gt; was selected as the designated Router LLM. Despite a slightly slower inference time (5.4 seconds), Qwen 2.5 demonstrated vastly superior reliability in adhering to strict JSON schemas without hallucinating extraneous text. &lt;/p&gt;
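
&lt;p&gt;Even with a schema-reliable model, a defensive parsing layer helps. The sketch below is an assumption about what such middleware might look like (the function name &lt;code&gt;extract_json&lt;/code&gt; is illustrative): it pulls the first balanced JSON object out of chatty model output before handing it to validation.&lt;/p&gt;

```python
# Hypothetical defensive parser: small local models often surround JSON
# with conversational filler, so we scan for the first balanced object.
# (Braces inside JSON strings are not handled; this is a sketch.)
import json

def extract_json(raw):
    """Return the first parseable JSON object embedded in model output."""
    start = raw.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(raw)):
            if raw[i] == "{":
                depth += 1
            elif raw[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(raw[start:i + 1])
                    except json.JSONDecodeError:
                        break
        start = raw.find("{", start + 1)
    raise ValueError("no JSON object found in model output")

chatty = 'Sure! Here is the plan:\n{"intent": "create_file", "arguments": {"name": "notes.txt"}}\nHope that helps!'
plan = extract_json(chatty)
```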

&lt;h3&gt;
  
  
  Challenge 2: Context Dilution
&lt;/h3&gt;

&lt;p&gt;Providing an AI agent with access to a file system often involves adding the entire directory tree to the system prompt. On models with smaller context windows and limited reasoning capabilities, this rapidly degrades performance and leads to confused outputs.&lt;/p&gt;

&lt;p&gt;To mitigate this, VoiceAgent utilizes targeted context injection. The full directory tree is never blindly passed to the router. Instead, the system relies on specialized generation models. When the router identifies a code generation task, the pipeline hands the task off to &lt;code&gt;qwen2.5-coder:7b&lt;/code&gt;. When text summarization is required, it utilizes &lt;code&gt;llama3.1:8b&lt;/code&gt;. This compartmentalization keeps prompts lean and generation high-quality.&lt;/p&gt;
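
&lt;p&gt;The split-model dispatch described above reduces to a small lookup: the router only picks an intent, and a table maps each intent to the specialist that handles generation. The model tags match the article; the table itself and the fallback behavior are illustrative assumptions.&lt;/p&gt;

```python
# Hypothetical dispatch table for the split-model architecture. The
# router never sees the directory tree; it only emits an intent, and
# this table picks the specialist model for the heavy lifting.
SPECIALISTS = {
    "write_code": "qwen2.5-coder:7b",
    "summarize_text": "llama3.1:8b",
}
ROUTER_MODEL = "qwen2.5:7b-instruct-q4"

def model_for(intent):
    """Pick the generation model for an intent; fall back to the router."""
    return SPECIALISTS.get(intent, ROUTER_MODEL)
```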

&lt;h3&gt;
  
  
  Challenge 3: System Security and Path Traversal
&lt;/h3&gt;

&lt;p&gt;Autonomous file writing is inherently dangerous. A naive implementation might allow a model to generate arguments like &lt;code&gt;../../etc/passwd&lt;/code&gt; or overwrite critical project files.&lt;/p&gt;

&lt;p&gt;To secure the agent, all file operations are strictly jailed within an isolated &lt;code&gt;/output&lt;/code&gt; directory. The Python execution layer actively resolves absolute paths and programmatically blocks path traversal attempts. Furthermore, the root sandbox directory is protected against programmatic deletion, regardless of what the LLM requests.&lt;/p&gt;
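
&lt;p&gt;The path jail described above can be sketched in a few lines of &lt;code&gt;pathlib&lt;/code&gt;. This is a minimal illustration under stated assumptions (the &lt;code&gt;safe_path&lt;/code&gt; name and sandbox location are hypothetical), not the project's exact code; it requires Python 3.9+ for &lt;code&gt;Path.is_relative_to&lt;/code&gt;.&lt;/p&gt;

```python
# Hypothetical path jail: every model-supplied path is resolved to an
# absolute path and rejected unless it stays strictly inside the sandbox.
from pathlib import Path

SANDBOX = Path("output").resolve()

def safe_path(user_path):
    """Resolve user_path inside the sandbox or raise PermissionError."""
    candidate = (SANDBOX / user_path).resolve()
    if not candidate.is_relative_to(SANDBOX):
        raise PermissionError(f"path escapes sandbox: {user_path}")
    if candidate == SANDBOX:
        # The root sandbox directory itself is protected from deletion.
        raise PermissionError("the sandbox root is protected")
    return candidate
```

&lt;p&gt;Resolving before checking is the important part: a naive string prefix check would miss &lt;code&gt;..&lt;/code&gt; segments and symlink tricks that only surface after resolution.&lt;/p&gt;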

&lt;h3&gt;
  
  
  Challenge 4: UI State vs. Agent Memory
&lt;/h3&gt;

&lt;p&gt;A significant challenge arose when integrating the agent pipeline with the frontend interface. Because Streamlit operates on a continuous rerun cycle, uploaded or recorded audio blobs would persist in the widget state. This resulted in "ghost prompts," where the application would transcribe and execute the same audio file in an infinite loop upon every UI refresh.&lt;/p&gt;

&lt;p&gt;This was resolved by content-hashing the audio payload: the system hashes each audio blob and tracks the digest in the session state, skipping any blob it has already handled. This decouples the UI render cycle from the agent execution logic, ensuring that an audio command is only ever transcribed and processed once.&lt;/p&gt;
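
&lt;p&gt;The ghost-prompt fix reduces to a handful of lines. In this sketch a plain dict stands in for Streamlit's &lt;code&gt;st.session_state&lt;/code&gt; so the logic runs stand-alone; the function name and the choice of SHA-256 are assumptions for illustration.&lt;/p&gt;

```python
# Hypothetical dedup guard for the Streamlit rerun cycle: hash each
# audio payload and process a given blob only once per session.
import hashlib

session_state = {"seen_hashes": set()}  # stands in for st.session_state

def should_process(audio_bytes):
    """True only the first time a given audio payload is seen."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    if digest in session_state["seen_hashes"]:
        return False  # same blob replayed by a UI rerun: ignore it
    session_state["seen_hashes"].add(digest)
    return True
```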

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building VoiceAgent demonstrated that highly capable, offline AI assistants do not require massive parameter counts or complex, opaque cloud frameworks. &lt;/p&gt;

&lt;p&gt;By enforcing strict structured outputs, isolating high-risk logic behind a deterministic execution layer, and selectively applying Human-in-the-Loop oversight, it is entirely possible to build a safe, fast, and reliable agent that operates comfortably within the constraints of consumer hardware.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
