The vision
The long-term goal of this system is to support those who truly need assistance: individuals who are paralyzed, bedridden, or require 24/7 care.
However, to build a rapid MVP and a scalable architecture, I am starting with healthy users in ideal scenarios as my baseline. This choice simplifies the problem space and enables faster iteration during the early development stages.
System Architecture: The Bridge Between the Virtual World and the Logic Layer
To maintain a clean separation of concerns, I designed a decoupled architecture where Unity handles the physical simulation and Python acts as the high-level brain.
The data flows from user behavior captured in Unity to the AI decision-making process in the backend.
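As a minimal sketch of what this Unity-to-Python bridge could look like on the backend side, here is a toy Python endpoint that receives newline-delimited JSON events over TCP and streams decisions back. The event schema (`zone`, `event` fields), port number, and the `decide` logic are all illustrative assumptions, not the actual implementation:

```python
import json
import socket

# Hypothetical event schema: Unity sends newline-delimited JSON over TCP,
# e.g. {"zone": "Kitchen", "event": "user_enter", "t": 12.5}

def parse_event(line: str) -> dict:
    """Decode one newline-delimited JSON event coming from Unity."""
    return json.loads(line)

def decide(event: dict) -> dict:
    """Toy high-level 'brain': map a perception event to a robot command.

    The real decision layer would sit here; this stub only shows
    the request/response shape of the bridge.
    """
    if event.get("event") == "user_enter":
        return {"command": "navigate", "target": event["zone"]}
    return {"command": "idle"}

def serve(host: str = "127.0.0.1", port: int = 5005) -> None:
    """Accept one Unity connection and answer each event with a decision."""
    with socket.create_server((host, port)) as srv:
        conn, _addr = srv.accept()
        with conn, conn.makefile("r") as reader:
            for line in reader:
                reply = decide(parse_event(line))
                conn.sendall((json.dumps(reply) + "\n").encode())
```

Keeping the wire format to plain JSON lines makes the Unity side trivial to implement with a `TcpClient` and keeps the two halves of the system decoupled, which matches the separation-of-concerns goal above.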
The Simulation Environment
Currently, the entire system is being verified within a simulated home environment built in Unity.
I have defined three core experimental zones—Living Room, Workspace, and Kitchen—equipped with a dense network of virtual cameras. This setup allows me to test the robot's perception and spatial grounding in a controlled yet complex environment.
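One way to mirror that setup on the Python side is a small registry mapping each zone to its virtual cameras. The three zone names come from the post; the camera IDs and coordinates below are purely illustrative placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    cam_id: str
    position: tuple  # (x, y, z) in Unity world coordinates (illustrative)

@dataclass
class Zone:
    name: str
    cameras: list = field(default_factory=list)

# The three experimental zones defined in the simulated home.
zones = {name: Zone(name) for name in ("Living Room", "Workspace", "Kitchen")}

# Hypothetical camera placements for one zone.
zones["Kitchen"].cameras.append(Camera("cam_kitchen_01", (3.2, 2.4, -1.0)))
zones["Kitchen"].cameras.append(Camera("cam_kitchen_02", (-0.5, 2.4, 2.1)))

def cameras_in(zone_name: str) -> list:
    """Return the camera IDs registered for a given zone."""
    return [cam.cam_id for cam in zones[zone_name].cameras]
```

A registry like this gives the perception layer a single place to resolve which camera feeds cover a zone when grounding the robot spatially.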
What’s Next?
In the next post, I will walk you through the Unity Experiment Setup and Development Environment in more detail.
We will explore how the 3D environment is designed to simulate daily routines and how the Unity-to-Python bridge handles real-time data streaming. Before diving into the complex AI "brain," it's essential to understand the "world" our robot lives in.
Stay tuned!

