Hey everyone,
I wanted to share an open-source project called ProxyFace. If you're interacting with LLMs and want a more engaging experience, this adds a real-time, pixel-art avatar that reacts to the AI's output with actual emotions—and it runs entirely on your own machine.
Your AI now has a face, voice, and ears, but with zero telemetry and zero cloud dependencies for inference.
✨ What makes it special:
100% Local Emotion Brain: Runs a highly optimized 4 MB TinyBERT model, classifying each response in roughly 60 ms via WebGPU/WASM. The face reacts to the AI's text (embarrassed, curious, delighted, etc.) without hitting any external APIs; a minimal inference sketch follows this list.
Hands-Free Voice Interaction: Hold Alt+T to speak, release to send. The AI replies and reacts, making it awesome for language learning or just natural conversation; a sketch of the hotkey handling appears after the tech-stack note.
On-Device Eye Tracking: Uses MediaPipe locally so the avatar’s pupils follow your gaze. Video never leaves your computer.
Customizable Pixel Art: Comes with 40+ characters. You can easily drop in your own sprite sheet and instantly use your own custom avatar.
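To make the "local emotion brain" concrete, here's a minimal sketch of how a small text classifier can run in the browser with ONNX Runtime Web. The model path, the input/output tensor names, and the label list are illustrative assumptions (ProxyFace's actual model file and classes may differ), and tokenization is omitted for brevity:

```typescript
import * as ort from "onnxruntime-web";

// Illustrative label set -- the real ProxyFace model may expose different classes.
const EMOTIONS = ["neutral", "delighted", "curious", "embarrassed"];

let session: ort.InferenceSession | undefined;

// Classify already-tokenized text; tokenization itself is omitted for brevity.
async function classifyEmotion(tokenIds: number[]): Promise<string> {
  // Lazily create the session, preferring WebGPU with a WASM fallback.
  session ??= await ort.InferenceSession.create("/models/emotion-tinybert.onnx", {
    executionProviders: ["webgpu", "wasm"],
  });

  const shape = [1, tokenIds.length];
  const feeds = {
    // "input_ids"/"attention_mask" are the usual BERT input names (an assumption here).
    input_ids: new ort.Tensor("int64", BigInt64Array.from(tokenIds.map(BigInt)), shape),
    attention_mask: new ort.Tensor("int64", new BigInt64Array(tokenIds.length).fill(1n), shape),
  };

  const outputs = await session.run(feeds);
  const logits = outputs.logits.data as Float32Array; // assumes an output named "logits"

  // Argmax over the class logits picks the emotion the avatar should display.
  let best = 0;
  for (let i = 1; i < logits.length; i++) {
    if (logits[i] > logits[best]) best = i;
  }
  return EMOTIONS[best];
}
```

In ProxyFace the winning label would drive the avatar's sprite selection, but the same pattern works for any small ONNX classifier running client-side.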
The Tech Stack: Built with React 18, Vite, Tailwind CSS, ONNX Runtime Web, and packaged for desktop with Electron. It is fully open-source under the GPL-3.0 license.
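For the hold-to-talk flow, one way to detect Alt+T press and release in Electron is the webContents "before-input-event" hook, which (unlike globalShortcut) reports key-up events. This is a minimal sketch with hypothetical IPC channel names; the real ProxyFace wiring may differ:

```typescript
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 480, height: 640 });
  win.loadFile("index.html");

  let recording = false;

  // Watch raw key events in the focused window: start capturing audio when
  // Alt+T goes down, stop and send the utterance when T (or Alt) comes up.
  win.webContents.on("before-input-event", (_event, input) => {
    const isT = input.code === "KeyT";
    if (input.type === "keyDown" && input.alt && isT && !recording) {
      recording = true;
      win.webContents.send("ptt:start"); // hypothetical channel: renderer starts mic capture
    } else if (
      input.type === "keyUp" &&
      recording &&
      (isT || input.code.startsWith("Alt"))
    ) {
      recording = false;
      win.webContents.send("ptt:stop"); // hypothetical channel: renderer transcribes and sends
    }
  });
});
```

Using before-input-event keeps the shortcut scoped to the app window; a system-wide hotkey would need a different mechanism, since Electron's globalShortcut only fires on key press.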
We are actively looking for feedback, and for developers and pixel artists who want to submit their own characters to the official gallery (email us at yes@proxyface.com).
If you find the project interesting, giving us a ⭐ on GitHub helps out a lot. Let me know what you think of the tech stack or if you have any questions!