<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MileyFu</title>
    <description>The latest articles on DEV Community by MileyFu (@mileyfu).</description>
    <link>https://dev.to/mileyfu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1002286%2F672b7c37-e23f-4c66-89cd-3d0edd888978.jpeg</url>
      <title>DEV Community: MileyFu</title>
      <link>https://dev.to/mileyfu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mileyfu"/>
    <language>en</language>
    <item>
      <title>Building an Open Source Voice AI Agent in Rust: November Devlog: Dynamic Prompts &amp; Firmware UX</title>
      <dc:creator>MileyFu</dc:creator>
      <pubDate>Fri, 05 Dec 2025 04:07:32 +0000</pubDate>
      <link>https://dev.to/mileyfu/building-an-open-source-voice-ai-agent-in-rust-november-devlog-dynamic-prompts-firmware-ux-o29</link>
      <guid>https://dev.to/mileyfu/building-an-open-source-voice-ai-agent-in-rust-november-devlog-dynamic-prompts-firmware-ux-o29</guid>
      <description>&lt;p&gt;EchoKit is an open-source toolkit designed to help developers build real-world AI applications using Rust and ESP32. It handles the full pipeline: Voice Activity Detection (VAD), ASR, LLM orchestration, and TTS.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/second-state/echokit_server" rel="noopener noreferrer"&gt;https://github.com/second-state/echokit_server&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We just released our November updates for both the hardware firmware and the agent server. Here is what we shipped.&lt;/p&gt;

&lt;p&gt;Server Update: Dynamic Personas&lt;/p&gt;

&lt;p&gt;Hardcoding prompts is fine for a demo, but too rigid for an agent. We added Dynamic Prompt Loading: you can now configure the server to fetch its system prompt from a URL.&lt;/p&gt;

&lt;p&gt;Why it matters: You can update your AI's behavior, knowledge base, or personality remotely without touching the server binary.&lt;/p&gt;
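&lt;p&gt;A rough sketch of the idea (function and default text here are illustrative, not EchoKit's actual API): treat a missing or empty remote prompt as the signal to fall back to a built-in default, so a bad fetch never leaves the agent without a persona.&lt;/p&gt;

```rust
// Illustrative sketch, not EchoKit's real code: resolve the system prompt
// from whatever the remote fetch returned, falling back to a built-in
// default when the fetch came back empty.
fn default_prompt() -> String {
    "You are a helpful voice assistant.".to_string()
}

// In the real server the argument would be the body of an HTTP GET
// against the configured prompt URL.
fn resolve_prompt(fetched: String) -> String {
    let trimmed = fetched.trim().to_string();
    if trimmed.is_empty() {
        default_prompt()
    } else {
        trimmed
    }
}

fn main() {
    assert_eq!(
        resolve_prompt("  Talk like a pirate.  ".to_string()),
        "Talk like a pirate."
    );
    assert_eq!(resolve_prompt(String::new()), default_prompt());
    println!("prompt resolution ok");
}
```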

&lt;p&gt;Firmware Update: Quality of Life&lt;/p&gt;

&lt;p&gt;For the ESP32 side, we focused on usability:&lt;/p&gt;

&lt;p&gt;Unified Provisioning: Wi-Fi credentials and the server URL are now configured in a single step.&lt;/p&gt;

&lt;p&gt;Physical Controls: We mapped the hardware buttons (K1 and K2) to volume control, allowing immediate audio adjustments.&lt;/p&gt;
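&lt;p&gt;The mapping is conceptually simple; here is a hedged sketch (the step size, range, and everything besides the K1 and K2 names are assumptions, not the firmware's actual code):&lt;/p&gt;

```rust
// Illustrative sketch of button-to-volume mapping, assuming a volume
// range of 0 to 100 and a fixed step; only the K1 and K2 names come
// from the devlog, the rest is hypothetical.
#[derive(Debug, PartialEq)]
enum Button {
    K1, // volume up
    K2, // volume down
}

const STEP: u8 = 10;

// Apply one button press to the current volume, clamping to 0..=100.
fn on_button(volume: u8, button: Button) -> u8 {
    match button {
        Button::K1 => volume.saturating_add(STEP).min(100),
        Button::K2 => volume.saturating_sub(STEP),
    }
}

fn main() {
    assert_eq!(on_button(95, Button::K1), 100); // clamped at max
    assert_eq!(on_button(5, Button::K2), 0);    // clamped at min
    assert_eq!(on_button(50, Button::K1), 60);
    println!("volume mapping ok");
}
```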

&lt;p&gt;MCP Feedback: When the AI uses the Model Context Protocol (MCP) to perform a search or an action, the device now verbally notifies the user ("Please wait...") so they know the agent is "thinking".&lt;/p&gt;

&lt;p&gt;Get Started&lt;/p&gt;

&lt;p&gt;You can flash the new firmware via the ESP32 Launchpad or build the Rust server from source.&lt;/p&gt;

&lt;p&gt;Docs: &lt;a href="https://echokit.dev/docs/" rel="noopener noreferrer"&gt;https://echokit.dev/docs/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update blog: &lt;a href="https://echokit.dev/docs/dev/server-firmware-updates-nov/" rel="noopener noreferrer"&gt;https://echokit.dev/docs/dev/server-firmware-updates-nov/&lt;/a&gt;&lt;br&gt;
Learn more by watching EchoKit demos at Open Source Conferences: &lt;a href="https://www.secondstate.io/articles/ossummit-korea-and-kubecon-na-2025/" rel="noopener noreferrer"&gt;https://www.secondstate.io/articles/ossummit-korea-and-kubecon-na-2025/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>ai</category>
      <category>opensource</category>
      <category>rust</category>
    </item>
  </channel>
</rss>
