Hi everyone! Yesterday, I shared the first version of my 100% offline JARVIS desktop assistant built with Python, PyQt6, and Ollama.
Thanks to the feedback and ideas I've received (like adding Spotify integration and improving how the AI handles memory), I got straight to work. Today, I'm excited to share Version 1.1!
🚀 What's New in v1.1?
- 🧠 The "Living Profile": I wanted JARVIS to know me without scanning my hard drive. It now reads a private, local system_profile.txt on startup, which injects my name, interests, and preferred tone into the system prompt. JARVIS knows who I am from second zero!
- ⚙️ Native PC Automation: I built an automations.py module. You can now tell JARVIS to play/pause Spotify, launch Steam games, or even send Discord messages via pyautogui/webhooks.
- 👁️ Drag-and-Drop Vision: I integrated support for the LLaVA model. You can now drag images directly into the holographic UI and ask JARVIS to analyze them.
- 🗣️ Smart Bilingual Voice: Voice recognition now detects whether a single audio capture is Spanish or English and handles both seamlessly.
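For the curious, the Living Profile idea can be sketched like this. The filename system_profile.txt comes from the feature description above, but the build_system_prompt helper and the base prompt text are my own illustration, not the repo's actual code:

```python
from pathlib import Path

def build_system_prompt(
    profile_path: str,
    base: str = "You are JARVIS, a helpful desktop assistant.",
) -> str:
    """Prepend the user's local profile to the base system prompt.

    If the profile file is missing, fall back to the base prompt so the
    assistant still starts cleanly on a fresh install.
    """
    path = Path(profile_path)
    if not path.exists():
        return base
    profile = path.read_text(encoding="utf-8").strip()
    return f"{base}\n\nWhat you know about your user:\n{profile}"

# The resulting string would then be passed as the "system" message to
# the local Ollama chat call on startup.
```

Because the profile is just a local text file, editing your interests or tone takes effect on the next launch with no retraining or indexing.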
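One simple way to choose between a Spanish and an English transcription of the same capture is to score each candidate against common stop words. This is purely my own heuristic sketch; the repo may instead rely on the recognizer's confidence scores:

```python
# Hypothetical helper: given the Spanish and English recognition results
# for one audio capture, return the more plausible (language, transcript).
SPANISH_HINTS = {"el", "la", "los", "las", "de", "que", "y", "en",
                 "un", "una", "por", "para", "no", "es"}
ENGLISH_HINTS = {"the", "a", "an", "of", "that", "and", "in", "to",
                 "is", "for", "not", "it", "on", "you"}

def _score(text: str, hints: set) -> int:
    # Count how many words in the transcript look like stop words
    # of the candidate language.
    return sum(1 for w in text.lower().split() if w in hints)

def pick_transcript(spanish: str, english: str):
    es = _score(spanish, SPANISH_HINTS)
    en = _score(english, ENGLISH_HINTS)
    return ("es", spanish) if es >= en else ("en", english)
```

The nice property is that both recognition passes run on the same buffer, so the user never has to declare a language up front.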
💻 Check out the code
I've refactored the routing logic to handle these new features efficiently.
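As a rough idea of what that routing can look like (the trigger words and handler names here are illustrative, not the actual repo code): commands are checked against native automation triggers first, and anything unmatched falls through to the local model.

```python
from typing import Callable, Dict

# Illustrative handlers; in the real app these would live in automations.py
# and call pyautogui, webhooks, etc.
def toggle_spotify() -> str:
    return "Toggling Spotify playback"

def launch_steam() -> str:
    return "Launching Steam"

ROUTES: Dict[str, Callable[[], str]] = {
    "spotify": toggle_spotify,
    "steam": launch_steam,
}

def route(command: str, llm_fallback: Callable[[str], str]) -> str:
    """Dispatch to a native automation if a trigger word matches;
    otherwise hand the raw text to the local LLM."""
    lowered = command.lower()
    for trigger, handler in ROUTES.items():
        if trigger in lowered:
            return handler()
    return llm_fallback(command)
```

Checking cheap keyword triggers before invoking the model keeps automation commands instant, since no LLM round trip is needed for them.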
GitHub Repo: https://github.com/Jm7997/JARVIS
I'd love to hear your thoughts on the new architecture, especially the automations.py module. What feature should I build next?