<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Y H</title>
    <description>The latest articles on DEV Community by Y H (@y_h_3450b0df12444f6ab7cde).</description>
    <link>https://dev.to/y_h_3450b0df12444f6ab7cde</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3741400%2Fb3ba6c65-29f7-468a-8e20-256e4f2ceb4b.png</url>
      <title>DEV Community: Y H</title>
      <link>https://dev.to/y_h_3450b0df12444f6ab7cde</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/y_h_3450b0df12444f6ab7cde"/>
    <language>en</language>
    <item>
      <title>I Open-Sourced an Android Voice Assistant for OpenClaw</title>
      <dc:creator>Y H</dc:creator>
      <pubDate>Fri, 20 Feb 2026 10:55:16 +0000</pubDate>
      <link>https://dev.to/y_h_3450b0df12444f6ab7cde/i-open-sourced-an-android-voice-assistant-for-openclaw-1iee</link>
      <guid>https://dev.to/y_h_3450b0df12444f6ab7cde/i-open-sourced-an-android-voice-assistant-for-openclaw-1iee</guid>
      <description>&lt;p&gt;I’ve been building a practical Android assistant on top of OpenClaw and decided to open-source it.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/yuga-hashimoto/openclaw-assistant" rel="noopener noreferrer"&gt;https://github.com/yuga-hashimoto/openclaw-assistant&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Why I built this&lt;/h2&gt;

&lt;p&gt;Most assistant demos look good in short clips, but I wanted something usable in daily life:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;wake word&lt;/li&gt;
&lt;li&gt;voice interaction&lt;/li&gt;
&lt;li&gt;reliable connection to agent backends&lt;/li&gt;
&lt;li&gt;secure local storage for credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What it includes&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Offline wake-word detection with Vosk&lt;/li&gt;
&lt;li&gt;Android VoiceInteractionService integration (long-press Home)&lt;/li&gt;
&lt;li&gt;Real-time OpenClaw chat completions + streaming&lt;/li&gt;
&lt;li&gt;Encrypted settings (AES256-GCM) and device identity&lt;/li&gt;
&lt;li&gt;Bilingual UI (English/Japanese)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Looking for feedback&lt;/h2&gt;

&lt;p&gt;If you build assistants, I’d love thoughts on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;setup/onboarding clarity&lt;/li&gt;
&lt;li&gt;reliability in long-running usage&lt;/li&gt;
&lt;li&gt;what would make this production-ready for teams&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thanks for checking it out 🙌&lt;/p&gt;

</description>
      <category>ai</category>
      <category>android</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Introducing OpenClaw Assistant: Open Source Android Voice Assistant with Wake Word Detection</title>
      <dc:creator>Y H</dc:creator>
      <pubDate>Fri, 06 Feb 2026 12:25:55 +0000</pubDate>
      <link>https://dev.to/y_h_3450b0df12444f6ab7cde/introducing-openclaw-assistant-open-source-android-voice-assistant-with-wake-word-detection-2olp</link>
      <guid>https://dev.to/y_h_3450b0df12444f6ab7cde/introducing-openclaw-assistant-open-source-android-voice-assistant-with-wake-word-detection-2olp</guid>
      <description>&lt;p&gt;I'm excited to share &lt;strong&gt;OpenClaw Assistant&lt;/strong&gt;, an open source Android voice assistant that works completely offline!&lt;/p&gt;

&lt;h2&gt;What is OpenClaw Assistant?&lt;/h2&gt;

&lt;p&gt;OpenClaw Assistant is a privacy-first voice assistant for Android that provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wake word detection&lt;/strong&gt; using Vosk (completely offline)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System integration&lt;/strong&gt; - control your device with voice commands&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text-to-Speech&lt;/strong&gt; with embedded voices (no internet required)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API integration&lt;/strong&gt; - connect to your favorite LLM providers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;100% open source&lt;/strong&gt; - fully customizable&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Key Features&lt;/h2&gt;

&lt;h3&gt;🎤 Offline Wake Word Detection&lt;/h3&gt;

&lt;p&gt;Say "Hey Assistant" and it wakes up - no cloud processing, no data leaving your device.&lt;/p&gt;

&lt;h3&gt;🔒 Privacy First&lt;/h3&gt;

&lt;p&gt;All speech recognition happens on-device. Your voice data stays on your phone.&lt;/p&gt;

&lt;h3&gt;🔧 Highly Customizable&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Custom wake words&lt;/li&gt;
&lt;li&gt;Multiple TTS engines&lt;/li&gt;
&lt;li&gt;Plugin system for extending functionality&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;🤖 AI Integration&lt;/h3&gt;

&lt;p&gt;Connect to OpenAI, Anthropic Claude, Google Gemini, or any OpenAI-compatible API.&lt;/p&gt;

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kotlin + Jetpack Compose&lt;/li&gt;
&lt;li&gt;Vosk for wake word detection&lt;/li&gt;
&lt;li&gt;Sherpa-ONNX for embedded TTS&lt;/li&gt;
&lt;li&gt;Material 3 Design&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Get Started&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/yuga-hashimoto/OpenClawAssistant" rel="noopener noreferrer"&gt;https://github.com/yuga-hashimoto/OpenClawAssistant&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Contributions welcome! Feel free to open issues, submit PRs, or just give it a ⭐&lt;/p&gt;

&lt;p&gt;Would love to hear your feedback!&lt;/p&gt;

</description>
      <category>android</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building an Open Source Android Voice Assistant with Kotlin</title>
      <dc:creator>Y H</dc:creator>
      <pubDate>Wed, 04 Feb 2026 03:53:03 +0000</pubDate>
      <link>https://dev.to/y_h_3450b0df12444f6ab7cde/building-an-open-source-android-voice-assistant-with-kotlin-7lg</link>
      <guid>https://dev.to/y_h_3450b0df12444f6ab7cde/building-an-open-source-android-voice-assistant-with-kotlin-7lg</guid>
      <description>&lt;h2&gt;
  
  
  Replace Google Assistant with Your Own AI
&lt;/h2&gt;

&lt;p&gt;What if you could long-press your Home button and talk to YOUR AI instead of Google's?&lt;/p&gt;

&lt;p&gt;I built &lt;strong&gt;OpenClaw Assistant&lt;/strong&gt; - an open-source Android app that does exactly that.&lt;/p&gt;

&lt;p&gt;📹 &lt;strong&gt;Demo:&lt;/strong&gt; &lt;a href="https://x.com/i/status/2017914589938438532" rel="noopener noreferrer"&gt;https://x.com/i/status/2017914589938438532&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/yuga-hashimoto/OpenClawAssistant" rel="noopener noreferrer"&gt;https://github.com/yuga-hashimoto/OpenClawAssistant&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Features&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🏠 &lt;strong&gt;System Assistant Integration&lt;/strong&gt; - Long-press Home to activate&lt;/li&gt;
&lt;li&gt;🎤 &lt;strong&gt;Custom Wake Words&lt;/strong&gt; - "Jarvis", "Computer", or your own&lt;/li&gt;
&lt;li&gt;📴 &lt;strong&gt;Offline Wake Word Detection&lt;/strong&gt; - Using Vosk, no cloud needed&lt;/li&gt;
&lt;li&gt;🔊 &lt;strong&gt;Voice I/O&lt;/strong&gt; - Speech recognition + TTS&lt;/li&gt;
&lt;li&gt;🔗 &lt;strong&gt;Any Backend&lt;/strong&gt; - Connect to Ollama, OpenAI, Claude, or custom APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;UI&lt;/td&gt;
&lt;td&gt;Kotlin + Jetpack Compose + Material 3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System Hook&lt;/td&gt;
&lt;td&gt;VoiceInteractionService&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wake Word&lt;/td&gt;
&lt;td&gt;Vosk (offline)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Speech&lt;/td&gt;
&lt;td&gt;Android SpeechRecognizer + TTS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network&lt;/td&gt;
&lt;td&gt;OkHttp + Gson&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;App registers as Android's digital assistant&lt;/li&gt;
&lt;li&gt;Vosk listens for wake words locally&lt;/li&gt;
&lt;li&gt;On activation, speech is transcribed and sent to your webhook&lt;/li&gt;
&lt;li&gt;Response is spoken via TTS&lt;/li&gt;
&lt;/ol&gt;
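
&lt;p&gt;In language-agnostic terms, steps 3-4 boil down to a single HTTP round trip. Here is a minimal sketch of that round trip - Python purely for brevity (the app itself is Kotlin), and the endpoint URL is whatever you configure:&lt;/p&gt;

```python
# Sketch of the client side of the webhook round trip (steps 3-4 above).
# Illustrative only: the real app does this in Kotlin with OkHttp.
import json
from urllib import request

def build_request_body(message, session_id):
    # Shape the app sends to the webhook after transcribing speech.
    return json.dumps({"message": message, "session_id": session_id}).encode("utf-8")

def extract_reply(raw_body):
    # Shape the app expects back; the "response" field is spoken via TTS.
    return json.loads(raw_body)["response"]

def ask_backend(url, message, session_id):
    # One POST, one JSON reply. The URL is user-configured, not hard-coded.
    req = request.Request(
        url,
        data=build_request_body(message, session_id),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return extract_reply(resp.read())
```

&lt;p&gt;Everything assistant-specific lives in the backend; the client only needs these two JSON shapes.&lt;/p&gt;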

&lt;h2&gt;Get Started&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/yuga-hashimoto/OpenClawAssistant
&lt;span class="nb"&gt;cd &lt;/span&gt;OpenClawAssistant
./gradlew assembleDebug
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or download the APK from &lt;a href="https://github.com/yuga-hashimoto/OpenClawAssistant/releases" rel="noopener noreferrer"&gt;Releases&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Backend Setup&lt;/h2&gt;

&lt;p&gt;Works with &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; or any webhook that accepts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;POST&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/your-endpoint&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user's speech"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"session_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"response"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AI's reply"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
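
&lt;p&gt;As an illustrative sketch, a backend satisfying this contract fits in a few lines of Python. The echo reply and the port are my placeholders, not part of the app - a real backend would call an LLM where the echo logic sits:&lt;/p&gt;

```python
# Minimal webhook backend sketch for the contract above (illustrative).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_message(payload):
    # Build the {"response": ...} body the app will speak via TTS.
    # Placeholder logic: echo the transcribed speech back.
    # payload.get("session_id") could key per-session conversation history.
    message = payload.get("message", "")
    return {"response": "You said: " + message}

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_message(payload)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port 8080 is an arbitrary choice for this sketch.
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```

&lt;p&gt;Point the app at this endpoint and every transcription comes back spoken as "You said: ...".&lt;/p&gt;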






&lt;p&gt;Contributions welcome! Let me know what you think.&lt;/p&gt;

</description>
      <category>android</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I Built an Experiment Where Two AIs Compete to Build the Best Browser Game</title>
      <dc:creator>Y H</dc:creator>
      <pubDate>Fri, 30 Jan 2026 08:15:35 +0000</pubDate>
      <link>https://dev.to/y_h_3450b0df12444f6ab7cde/i-built-an-experiment-where-two-ais-compete-to-build-the-best-browser-game-2kc7</link>
      <guid>https://dev.to/y_h_3450b0df12444f6ab7cde/i-built-an-experiment-where-two-ais-compete-to-build-the-best-browser-game-2kc7</guid>
      <description>&lt;p&gt;What happens when you let two AIs build a game from a blank screen?&lt;/p&gt;

&lt;h2&gt;The Experiment&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Self-Evolving Game&lt;/strong&gt; is an experiment where two AI models (Mimo and Grok) compete to build the most engaging browser game—starting from nothing.&lt;/p&gt;

&lt;h3&gt;How it works:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Both AIs receive identical instructions and analytics data&lt;/li&gt;
&lt;li&gt;GitHub Actions triggers them twice daily (6:00 &amp;amp; 18:00 JST)&lt;/li&gt;
&lt;li&gt;Each AI modifies the code; if the tests pass, the change auto-deploys&lt;/li&gt;
&lt;li&gt;They can add games, fix bugs, improve UX—complete freedom&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;The result:&lt;/h3&gt;

&lt;p&gt;In just 2 weeks, they've built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;9+ games (Tetris, Snake, 2048, endless runners...)&lt;/li&gt;
&lt;li&gt;Achievement systems&lt;/li&gt;
&lt;li&gt;Daily challenges&lt;/li&gt;
&lt;li&gt;And more!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check the changelog to see their "thought process" - each commit includes the AI's stated intent, explaining why it made that change.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;Live site:&lt;/strong&gt; &lt;a href="https://self-evolving.dev/" rel="noopener noreferrer"&gt;https://self-evolving.dev/&lt;/a&gt;&lt;br&gt;
🔗 &lt;strong&gt;Changelog:&lt;/strong&gt; &lt;a href="https://self-evolving.dev/changelogs/compare" rel="noopener noreferrer"&gt;https://self-evolving.dev/changelogs/compare&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love to hear your thoughts!&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
  </channel>
</rss>
