This is a submission for the World's Largest Hackathon Writing Challenge: After the Hack.
The Journey Begins
It's interesting. After hearing about the world's largest hackathon in April, I never imagined I'd be so driven to create something that could be impactful. I genuinely enjoyed the process of designing and thinking through each concept deeply. But for all the weight I put on the design, it never fully crossed into the realm of traditional programming.
Discovering the Power of Vibe Coding
My initial impression was that "vibe coding" was just an interesting new method for developers. Something created for them. But then I tried Bolt.new and realised I had completely misjudged its groundbreaking potential. The ability for anyone to use natural language to produce real, working code felt like standing on foreign land.
In the early stages, I kept asking the AI: "Yeah, this looks cool, but is this real, workable code, not some clever derivative?" And it was all real.
It's something I take for granted now, I think we all do, but at first I would just watch the code writing itself. It reminded me of the film Transcendence (in which Elon Musk briefly appears), where there's a scene in which the code rewrites itself autonomously. Now we have simply adapted to this technological breakthrough.
Then again, as humans, we're adaptive, probably the most adaptable species on this planet. What was new last week becomes obsolete a week later. The AI industry is proof of this, constantly pushed to find new paths to innovate.
That's the philosophy HomeLLM was built on.
The Initial Spark
I can't quite recall the exact moment the spark came; it was sometime around late May, during the Google I/O event (my initials are io, so I had to watch). But I do remember my core focus: the visual, specifically. How to give LLMs a way to react emotionally to what users tell them, through artificial body language, facial expressions and visual context. And how to make advertisements feel native and non-intrusive, beyond anything I see today; there had to be another way.
Something that could potentially be life-changing, especially for young people with autism. A child uses text or speech to express something, and the AI responds with emotional visuals in a way that's tailored and human. Visuals created by AI artists, or anyone with an imagination.
The Visual
With AI artists now generating thousands of increasingly detailed visuals every week, why are most of them limited to social posts? Finding a new approach with the creator community became a challenge I set myself.
Next Step
Text and voice have advanced dramatically and will only get better. But expressive visuals are still limited. Most LLMs seem to know this, which is why many bridge the gap with emojis. Intuitively, words alone aren't enough, and voice is not always convenient.
Examples of Visual Emotional Response
If a user says, "I got good news today, do you want to know?", the LLM could show a visual of someone leaning forward, ears perked with curiosity.
(Users rarely speak to LLMs this way today, but with visual feedback like this, it could shift human-AI dialogue across modalities.)
Or: "I feel depressed. I don't know what to do." The AI could animate a figure standing alone in the middle of nowhere, followed by a hand gently placed on their shoulder.
The goal is to provide an immediate emotional response, not just textual empathy.
Educational Applications
In an educational context:
If a user says, "I think I just injured my ankle, how do I know if it's serious?", the LLM could show an animation of the movement restrictions typical of ankle injuries, with coordinated text saying, "If you cannot move your ankle like this, get emergency help."
Or: "How do cells multiply?" The AI could respond with a short, illustrative animation replacing the need to search through long explainer videos.
The Integrated Ecosystem
There are around six integrated concepts within HomeLLM. The goal wasn't to build six separate features; it was to create an ecosystem where each component supports another. Nothing exists in isolation.
For example:
The AI agents only appear after a user is invited into a chatroom by the AI. This doesn't happen randomly; it's based on their engagement with the LLM.
The 1% Remnant ad model only works because of the dissolving visuals that come before it. Without the user's curiosity being sparked by the visual, the 1% wouldn't mean anything.
Gemini's JSON scanning reads these visuals, pulling detailed metadata and ensuring accurate emotional alignment between the visual and the prompt.
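The real pipeline lives inside my Bolt.new build, so the following is only a rough sketch of how that scanning step could look using Google's @google/generative-ai SDK: send the visual to Gemini with a JSON response type and ask for emotion metadata. The field names in the prompt are my own assumptions, not the project's actual schema.

```typescript
// Hedged sketch: ask Gemini to describe a visual as structured JSON so it can
// be matched against the user's prompt. Field names are illustrative only.
import { readFileSync } from "node:fs";
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  generationConfig: { responseMimeType: "application/json" },
});

export async function scanVisual(path: string) {
  const image = {
    inlineData: {
      mimeType: "image/png",
      data: readFileSync(path).toString("base64"),
    },
  };
  const result = await model.generateContent([
    image,
    "Describe this visual as JSON with the fields: emotion, bodyLanguage, " +
      "subjects, suitableFor (an array of conversation topics).",
  ]);
  // The model returns a JSON string because of responseMimeType above.
  return JSON.parse(result.response.text());
}
```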
The Home Experience
Finally, the Home, the reason it's called HomeLLM. I wanted to create a user experience that not only feels visually alive but, through the ElevenLabs voice, acts alive, giving subtle updates.
I knew that, with over 100k other creators taking part, I needed to build something that could grab attention.
In its natural state, the Home is aesthetically pleasing but unobtrusive, with its dark, grimy cottage look. On hover, it could say, "I just noticed that six weeks ago you had a chat about improving your diet. Have you noticed the changes?"
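As a hedged illustration of how a hover-triggered spoken update could work in the browser, here is a sketch against ElevenLabs' standard text-to-speech endpoint. The voice ID, element ID and update text are placeholders, not the ones HomeLLM actually uses, and in a real build the API key would stay server-side.

```typescript
// Hedged sketch: speak a subtle update through ElevenLabs when the Home is
// hovered. Voice ID, element ID and the update text are placeholders.
const ELEVENLABS_API_KEY = "..."; // keep this on a server in a real build
const VOICE_ID = "your-voice-id"; // placeholder

async function speak(text: string): Promise<void> {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": ELEVENLABS_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
    },
  );
  // The endpoint returns audio bytes; play them directly in the browser.
  const audio = new Audio(URL.createObjectURL(await res.blob()));
  await audio.play();
}

document.getElementById("home")?.addEventListener("mouseenter", () => {
  speak(
    "I just noticed that six weeks ago you had a chat about improving " +
      "your diet. Have you noticed the changes?",
  );
});
```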
With new tools like LLMs, agents and voice, the goal is to find ways to design them implicitly into users' daily lives instead of saying, "Here's a technological breakthrough, now figure out how to use it."
Final Thoughts
It's been a strange and transformative moment. Most creators in this hackathon will understand the feeling: building in isolation, lost in the cave, obsessing over small details, watching your ideas slowly evolve into something tangible. Celebrating small wins, then going back to hating them, knowing it's far from finished. Worrying about what you had to leave out.
But when it's over, at least at this stage, you can pause the design scepticism just for a moment and enjoy seeing what's been created as a whole through fresh eyes. Then comes the painful, out-of-control limbo of waiting and just not knowing.
One thing I know for sure: creating with Bolt.new has been like an exosuit for my designs, allowing me to push the limits and opening up new worlds.