<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: hyunjun</title>
    <description>The latest articles on DEV Community by hyunjun (@ca_ca_e0d227c1a7ea1853c6f).</description>
    <link>https://dev.to/ca_ca_e0d227c1a7ea1853c6f</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3879674%2F90957f66-3426-44ec-b81f-d967226b7118.jpg</url>
      <title>DEV Community: hyunjun</title>
      <link>https://dev.to/ca_ca_e0d227c1a7ea1853c6f</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ca_ca_e0d227c1a7ea1853c6f"/>
    <language>en</language>
    <item>
      <title>I Built a macOS J.A.R.V.I.S. That Turns Voice Commands Into Search, Media, and Visual Knowledge Maps</title>
      <dc:creator>hyunjun</dc:creator>
      <pubDate>Wed, 15 Apr 2026 04:42:22 +0000</pubDate>
      <link>https://dev.to/ca_ca_e0d227c1a7ea1853c6f/i-built-a-macos-jarvis-that-turns-voice-commands-into-search-media-and-visual-knowledge-maps-3gna</link>
      <guid>https://dev.to/ca_ca_e0d227c1a7ea1853c6f/i-built-a-macos-jarvis-that-turns-voice-commands-into-search-media-and-visual-knowledge-maps-3gna</guid>
      <description>&lt;p&gt;For the last few days, I’ve been building something I originally treated like a fun side project:&lt;/p&gt;

&lt;p&gt;A personal &lt;a href="https://realjarvissystem-wb27.vercel.app" rel="noopener noreferrer"&gt;J.A.R.V.I.S.&lt;/a&gt;-style desktop assistant for macOS.&lt;/p&gt;

&lt;p&gt;But somewhere in the middle of building it, it stopped feeling like a toy.&lt;/p&gt;

&lt;p&gt;It started to feel like a real interface for thinking.&lt;/p&gt;

&lt;h2&gt;What it does&lt;/h2&gt;

&lt;p&gt;The core idea is simple:&lt;/p&gt;

&lt;p&gt;You speak naturally, and the assistant can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;understand your command in real time&lt;/li&gt;
&lt;li&gt;search the web&lt;/li&gt;
&lt;li&gt;open media like YouTube&lt;/li&gt;
&lt;li&gt;generate draggable result blocks&lt;/li&gt;
&lt;li&gt;connect those blocks visually with lines&lt;/li&gt;
&lt;li&gt;analyze relationships between them&lt;/li&gt;
&lt;li&gt;respond with short voice acknowledgements and full text explanations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So instead of a normal chatbot UI, where everything becomes one long scrolling conversation, the result is a kind of &lt;strong&gt;visual operating surface&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You don’t just ask.&lt;br&gt;
You &lt;strong&gt;map&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Why I built it&lt;/h2&gt;

&lt;p&gt;Most AI assistants still feel like one of these two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;a chat window&lt;/li&gt;
&lt;li&gt;a voice layer on top of a normal app&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I wanted something different.&lt;/p&gt;

&lt;p&gt;I wanted an interface that felt closer to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a command bridge&lt;/li&gt;
&lt;li&gt;a visual intelligence desk&lt;/li&gt;
&lt;li&gt;a block-based research surface&lt;/li&gt;
&lt;li&gt;a system that can collect, arrange, and connect information spatially&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Basically, I wanted something that feels less like “chatting with AI” and more like &lt;strong&gt;operating intelligence&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Current features&lt;/h2&gt;

&lt;p&gt;Here’s what currently works in my build:&lt;/p&gt;

&lt;h3&gt;1. Real-time voice command input&lt;/h3&gt;

&lt;p&gt;I can speak to the app naturally and it recognizes commands like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;play a song on YouTube&lt;/li&gt;
&lt;li&gt;search the web for a topic&lt;/li&gt;
&lt;li&gt;open news results&lt;/li&gt;
&lt;li&gt;analyze relationships between generated blocks&lt;/li&gt;
&lt;/ul&gt;
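
&lt;p&gt;Here’s a minimal sketch of that routing, assuming the Web Speech API (Chromium-based renderers, including Electron’s, expose it as &lt;code&gt;webkitSpeechRecognition&lt;/code&gt;). The intent keywords below are illustrative, not my exact command grammar:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Live transcripts routed through a simple keyword-based intent classifier.
type Intent = "play" | "search" | "news" | "analyze" | "unknown";

function classify(transcript: string): Intent {
  const t = transcript.toLowerCase();
  if (t.includes("play")) return "play";
  if (t.includes("news")) return "news";
  if (t.includes("search")) return "search";
  if (t.includes("analyze")) return "analyze";
  return "unknown";
}

const recognition = new (window as any).webkitSpeechRecognition();
recognition.continuous = true;      // keep listening between commands
recognition.interimResults = false; // only act on finalized phrases
recognition.onresult = function (event: any) {
  const last = event.results[event.results.length - 1];
  const transcript: string = last[0].transcript;
  console.log(classify(transcript), transcript);
};
recognition.start();
&lt;/code&gt;&lt;/pre&gt;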

&lt;h3&gt;2. Web and news search&lt;/h3&gt;

&lt;p&gt;The assistant can pull search results and create visual result panels.&lt;/p&gt;
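
&lt;p&gt;A minimal sketch of that flow: each hit becomes a positioned block object. The &lt;code&gt;fetchSearch&lt;/code&gt; endpoint and the &lt;code&gt;Block&lt;/code&gt; shape here are placeholders, not my actual API:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Each search hit becomes a draggable workspace block with a position.
interface Block {
  id: string;
  kind: "search" | "media" | "note";
  title: string;
  url: string;
  x: number;
  y: number;
}

// Placeholder search client; swap in the real search API.
async function fetchSearch(query: string) {
  const res = await fetch("https://example.com/search?q=" + encodeURIComponent(query));
  return (await res.json()).results as { title: string; url: string }[];
}

async function searchToBlocks(query: string) {
  const hits = await fetchSearch(query);
  return hits.map(function (hit, i) {
    return {
      id: crypto.randomUUID(),
      kind: "search",
      title: hit.title,
      url: hit.url,
      x: 80 + i * 40, // cascade panels so they don't stack exactly
      y: 80 + i * 40,
    } as Block;
  });
}
&lt;/code&gt;&lt;/pre&gt;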

&lt;h3&gt;3. Media blocks&lt;/h3&gt;

&lt;p&gt;It can open media/video results and place them in the workspace.&lt;/p&gt;
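
&lt;p&gt;For YouTube specifically, the standard embed URL is enough to turn a video result into a workspace block. A sketch, with sizing and placement hard-coded for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Embed a YouTube result as an absolutely positioned player block.
function addMediaBlock(videoId: string, x: number, y: number) {
  const frame = document.createElement("iframe");
  frame.src = "https://www.youtube.com/embed/" + videoId;
  frame.width = "360";
  frame.height = "203"; // 16:9
  frame.style.position = "absolute";
  frame.style.left = x + "px";
  frame.style.top = y + "px";
  frame.allow = "autoplay; encrypted-media";
  document.body.appendChild(frame);
}
&lt;/code&gt;&lt;/pre&gt;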

&lt;h3&gt;4. Draggable UI blocks&lt;/h3&gt;

&lt;p&gt;Search results, analysis notes, and media blocks can be dragged around the screen.&lt;/p&gt;
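
&lt;p&gt;The dragging itself doesn’t need a library; pointer events plus pointer capture cover it. A minimal sketch, assuming each block is absolutely positioned inside the workspace:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;function makeDraggable(el: HTMLElement) {
  let startX = 0, startY = 0, origX = 0, origY = 0;

  el.addEventListener("pointerdown", function (e: PointerEvent) {
    startX = e.clientX;
    startY = e.clientY;
    origX = el.offsetLeft;
    origY = el.offsetTop;
    el.setPointerCapture(e.pointerId); // keep receiving events during fast drags
  });

  el.addEventListener("pointermove", function (e: PointerEvent) {
    if (!el.hasPointerCapture(e.pointerId)) return; // not mid-drag
    el.style.left = origX + (e.clientX - startX) + "px";
    el.style.top = origY + (e.clientY - startY) + "px";
  });

  el.addEventListener("pointerup", function (e: PointerEvent) {
    el.releasePointerCapture(e.pointerId);
  });
}
&lt;/code&gt;&lt;/pre&gt;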

&lt;h3&gt;5. Connection lines between blocks&lt;/h3&gt;

&lt;p&gt;Blocks can be connected visually, which turns the UI into something closer to a spatial reasoning tool than a normal assistant.&lt;/p&gt;
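
&lt;p&gt;A sketch of the lines, assuming a full-window SVG overlay behind the blocks; the caller re-runs this whenever either endpoint block moves:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Draw a line between the bounding-box centers of two block elements.
function connect(svg: SVGSVGElement, a: HTMLElement, b: HTMLElement) {
  const ra = a.getBoundingClientRect();
  const rb = b.getBoundingClientRect();
  const line = document.createElementNS("http://www.w3.org/2000/svg", "line");
  line.setAttribute("x1", String(ra.left + ra.width / 2));
  line.setAttribute("y1", String(ra.top + ra.height / 2));
  line.setAttribute("x2", String(rb.left + rb.width / 2));
  line.setAttribute("y2", String(rb.top + rb.height / 2));
  line.setAttribute("stroke", "currentColor");
  svg.appendChild(line);
  return line; // keep a handle so the line can follow its blocks
}
&lt;/code&gt;&lt;/pre&gt;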

&lt;h3&gt;6. Relationship analysis&lt;/h3&gt;

&lt;p&gt;Once multiple blocks exist, the assistant can analyze how they relate and explain the connection in text.&lt;/p&gt;
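
&lt;p&gt;Under the hood this is mostly prompt construction. A sketch; &lt;code&gt;askModel&lt;/code&gt; is a placeholder for whatever LLM client you wire in, not my actual provider:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Feed the titles/URLs of the selected blocks to a language model
// and ask it to explain the relationship.
async function analyzeRelationship(blocks: { title: string; url: string }[]) {
  const listing = blocks
    .map(function (b, i) { return (i + 1) + ". " + b.title + " (" + b.url + ")"; })
    .join("\n");
  const prompt =
    "These items are on my workspace:\n" + listing +
    "\nExplain briefly how they relate to each other.";
  return askModel(prompt);
}

// Placeholder model call; replace with your provider's client.
async function askModel(prompt: string) {
  const res = await fetch("https://example.com/llm", {
    method: "POST",
    body: JSON.stringify({ prompt: prompt }),
  });
  return (await res.json()).text as string;
}
&lt;/code&gt;&lt;/pre&gt;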

&lt;h3&gt;7. Hybrid response style&lt;/h3&gt;

&lt;p&gt;I found that long natural voice responses slowed the workflow down too much, so I changed the response model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;voice output&lt;/strong&gt; = short acknowledgements like “Got it”, “On it”, “Done”&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;main explanation&lt;/strong&gt; = text inside the interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That made it feel much faster and more usable.&lt;/p&gt;
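
&lt;p&gt;The split is easy to express with the browser’s speech synthesis API. A sketch; &lt;code&gt;renderTextPanel&lt;/code&gt; here is a stand-in for the app’s real panel rendering:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Speak only a short acknowledgement; render the full answer as text.
function respond(ack: string, fullText: string) {
  const utterance = new SpeechSynthesisUtterance(ack); // e.g. "On it"
  window.speechSynthesis.speak(utterance);
  renderTextPanel(fullText);
}

// Stand-in panel renderer; the real app places these as workspace blocks.
function renderTextPanel(text: string) {
  const panel = document.createElement("div");
  panel.className = "analysis-panel";
  panel.textContent = text;
  document.body.appendChild(panel);
}
&lt;/code&gt;&lt;/pre&gt;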

&lt;h2&gt;Tech direction&lt;/h2&gt;

&lt;p&gt;I’m building this as a macOS desktop app, not just a web toy.&lt;/p&gt;

&lt;p&gt;Current stack and direction are roughly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;macOS desktop app&lt;/li&gt;
&lt;li&gt;Electron-based app structure&lt;/li&gt;
&lt;li&gt;real-time voice input&lt;/li&gt;
&lt;li&gt;web/media interaction layer&lt;/li&gt;
&lt;li&gt;draggable block UI&lt;/li&gt;
&lt;li&gt;visual linking and relationship analysis&lt;/li&gt;
&lt;li&gt;account/payment/download launcher site for distribution&lt;/li&gt;
&lt;/ul&gt;
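
&lt;p&gt;The Electron side is deliberately conventional. A minimal main-process sketch; window sizing and file names are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { app, BrowserWindow } from "electron";

// Create the single workspace window that hosts the block UI.
function createWindow() {
  const win = new BrowserWindow({
    width: 1280,
    height: 800,
    webPreferences: { contextIsolation: true },
  });
  win.loadFile("index.html"); // the draggable-block workspace
}

app.whenReady().then(createWindow);
&lt;/code&gt;&lt;/pre&gt;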

&lt;p&gt;I also built a landing page + auth flow + payment flow + downloadable &lt;code&gt;.dmg&lt;/code&gt; access system around it, so it’s no longer just a local experiment.&lt;/p&gt;

&lt;h2&gt;What surprised me&lt;/h2&gt;

&lt;p&gt;The biggest surprise was this:&lt;/p&gt;

&lt;p&gt;The UI itself changed how the assistant &lt;em&gt;feels&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A normal assistant returns text.&lt;/p&gt;

&lt;p&gt;This one makes it feel like information is being laid out in space.&lt;br&gt;
That changes the whole experience.&lt;/p&gt;

&lt;p&gt;The moment I added:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;draggable blocks&lt;/li&gt;
&lt;li&gt;connecting lines&lt;/li&gt;
&lt;li&gt;floating analysis panels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…it stopped feeling like “an AI feature” and started feeling like a &lt;strong&gt;system&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;What still feels unfinished&lt;/h2&gt;

&lt;p&gt;A lot.&lt;/p&gt;

&lt;p&gt;The biggest issues right now are not raw functionality, but product friction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gatekeeper friction from shipping an unsigned macOS build&lt;/li&gt;
&lt;li&gt;download trust issues&lt;/li&gt;
&lt;li&gt;setup complexity&lt;/li&gt;
&lt;li&gt;BYOK (bring-your-own-key) onboarding&lt;/li&gt;
&lt;li&gt;deciding whether this should stay a power-user tool or become a polished mainstream app later&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Right now I’m intentionally leaning toward &lt;strong&gt;builders / developers / AI power users first&lt;/strong&gt;, not general consumers.&lt;/p&gt;

&lt;h2&gt;Product question I’m thinking about&lt;/h2&gt;

&lt;p&gt;I don’t think this is a mass-market assistant yet.&lt;/p&gt;

&lt;p&gt;It feels more like a tool for people who want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;visual research&lt;/li&gt;
&lt;li&gt;command-driven exploration&lt;/li&gt;
&lt;li&gt;AI-assisted knowledge mapping&lt;/li&gt;
&lt;li&gt;a more cinematic / operational interface for information work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I’m currently asking myself:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is this a niche but powerful tool for builders?&lt;br&gt;
Or the beginning of a very different kind of AI desktop product?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;What I’d love feedback on&lt;/h2&gt;

&lt;p&gt;I’d really love feedback from people here on these points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Does this feel like a real product category, or still a cool demo?&lt;/li&gt;
&lt;li&gt;Is the visual block + connection model actually useful, or just visually impressive?&lt;/li&gt;
&lt;li&gt;If you saw this as a developer/power user, what would be the first real use case you’d expect?&lt;/li&gt;
&lt;li&gt;Would you rather use this as:

&lt;ul&gt;
&lt;li&gt;a research workspace&lt;/li&gt;
&lt;li&gt;a voice-controlled browser layer&lt;/li&gt;
&lt;li&gt;a personal AI operations HUD&lt;/li&gt;
&lt;li&gt;something else entirely?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Final thought&lt;/h2&gt;

&lt;p&gt;I started by trying to build “my own J.A.R.V.I.S.”&lt;/p&gt;

&lt;p&gt;What I’m actually building now might be closer to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a voice-controlled visual AI workspace for macOS.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And honestly, that feels more interesting.&lt;/p&gt;

&lt;p&gt;If people want, I can post a follow-up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;architecture decisions&lt;/li&gt;
&lt;li&gt;interaction design choices&lt;/li&gt;
&lt;li&gt;what worked / what broke during debugging&lt;/li&gt;
&lt;li&gt;and how I handled the voice + visual response split&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>showdev</category>
      <category>sideprojects</category>
    </item>
  </channel>
</rss>
