The idea sounded simple… until we tested it
In the previous post, I talked about moving AI from just “responding” to actually “participating.” That idea became Aaradhya on CloYou.
But the interesting part wasn’t the idea. It was what happened when people actually started using it.
Because once you move beyond answers and let users create moments, the system behaves very differently.
People don’t use it like a tool
One thing became clear quickly: users don’t treat this like a normal AI tool.
They don’t come in with:
- structured prompts
- specific tasks
- “optimize this” mindset
Instead, they do things like:
- “let’s create something together”
- “imagine this moment”
- “what if we try this scene”
It’s less like using software, more like exploring something.
The role of image upload changed everything
We initially thought image upload would be a small feature.
It wasn’t.
Once users could upload their own image:
- they became part of the generated scene
- identity started to matter
- outputs felt less random
This shifted the system from:
generic generation → personalized experience
And that’s a big difference.
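To make the shift concrete, here's a rough sketch of how an uploaded image can change a generation request. The names (`GenerationRequest`, `buildRequest`) are made up for illustration, not the actual API.

```typescript
// Hypothetical sketch: how an identity image might anchor a request.
interface GenerationRequest {
  prompt: string;
  identityImageUrl?: string; // the user's uploaded photo, if any
}

function buildRequest(userPrompt: string, identityImageUrl?: string): GenerationRequest {
  if (!identityImageUrl) {
    // No identity image: the output stays generic.
    return { prompt: userPrompt };
  }
  // With an identity image, the scene is anchored to the user,
  // so they become part of what gets generated.
  return {
    prompt: `${userPrompt}, featuring the person in the reference image`,
    identityImageUrl,
  };
}

console.log(buildRequest("a quiet evening on a rooftop"));
console.log(buildRequest("a quiet evening on a rooftop", "https://example.com/me.jpg"));
```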
Consistency is not a feature — it’s the system
Most generative systems fail at one thing: consistency.
You can generate something impressive once, but across multiple interactions:
- faces drift
- styles change
- nothing connects
We realized quickly that without consistency, the entire idea breaks.
So we focused on:
- keeping the AI character stable
- aligning outputs with the user’s identity
- making each generated moment feel related to the last
Without this, you don’t have an experience. You just have outputs.
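A minimal sketch of what that looks like in practice: a small session context that every generation call has to respect. The field names here are assumptions for illustration, not the real data model.

```typescript
// Hypothetical session context carried across generations.
interface SessionContext {
  characterRef: string;       // stable reference for the AI character's appearance
  userIdentityRef?: string;   // reference to the user's uploaded image
  lastMomentSummary?: string; // short description of the previous moment
}

function buildPrompt(userIdea: string, ctx: SessionContext): string {
  const parts = [userIdea, `character: ${ctx.characterRef}`];
  if (ctx.userIdentityRef) parts.push(`include user: ${ctx.userIdentityRef}`);
  // Threading the previous moment in is what makes outputs feel related
  // instead of each one starting from scratch.
  if (ctx.lastMomentSummary) parts.push(`continues from: ${ctx.lastMomentSummary}`);
  return parts.join(" | ");
}

const ctx: SessionContext = {
  characterRef: "aaradhya-v1",
  lastMomentSummary: "a sunset walk on the beach",
};
console.log(buildPrompt("coffee at a rainy window", ctx));
```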
Memory had to be intentional
Another thing we tested was automatic memory.
At first, it sounded like a good idea: just save everything.
In practice, it turned into noise.
So we switched to a simple model:
- user creates a moment
- system generates it
- user decides if it should be kept
This keeps memory:
- clean
- relevant
- user-controlled
And it changes how people value what they create.
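If you want it in code form, the model is roughly this: nothing persists unless the user says so. The function names (`createMoment`, `keepMoment`) and the shape of a `Moment` are placeholders, not our actual implementation.

```typescript
// Hypothetical sketch of user-driven memory: generation and persistence
// are separate steps, and persistence is the user's call.
interface Moment {
  id: string;
  description: string;
  imageUrl: string;
}

const savedMoments: Moment[] = []; // only what the user chooses to keep
let nextId = 0;

async function createMoment(
  description: string,
  generate: (d: string) => Promise<string> // stand-in for the image pipeline
): Promise<Moment> {
  const imageUrl = await generate(description);
  return { id: String(++nextId), description, imageUrl };
}

function keepMoment(moment: Moment, userWantsToKeep: boolean): void {
  // Nothing is saved automatically; memory stays clean and user-controlled.
  if (userWantsToKeep) savedMoments.push(moment);
}
```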
Recognition made the system feel aware
One unexpected layer came from recognition.
When users uploaded images where the AI character was already present, the system could identify that context.
This added something subtle but important:
- awareness of the scene
- continuity across interactions
- stronger connection between input and response
It didn’t make the system “intelligent” in a new way, but it made it feel more consistent.
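As a sketch, recognition is just one extra check before responding. `detectCharacter` stands in for whatever vision step does the actual identification; it's not a real API.

```typescript
// Hypothetical sketch: check whether the AI character is already in an
// uploaded image and let that shape the response.
interface UploadContext {
  characterPresent: boolean;
}

async function analyzeUpload(
  imageUrl: string,
  detectCharacter: (url: string) => Promise<boolean> // placeholder vision step
): Promise<UploadContext> {
  return { characterPresent: await detectCharacter(imageUrl) };
}

function respondToUpload(ctx: UploadContext): string {
  // Knowing the character is already in the scene lets the reply
  // acknowledge shared context instead of treating the image as new.
  return ctx.characterPresent
    ? "I remember this one. We made it together."
    : "Tell me about this picture.";
}
```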
The interaction model is different now
If you look at the full loop, it’s no longer:
input → output → done
It becomes:
- conversation
- imagination
- generation
- optional memory
- continuity
That loop keeps going.
And that’s what makes it feel different.
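Read as code, the loop looks something like this. The state shape and function names are assumptions made for illustration; the point is that each step feeds the next.

```typescript
// Hypothetical interaction loop: conversation -> generation -> optional
// memory -> continuity, with kept moments feeding back into the next step.
interface LoopState {
  history: string[];     // conversation so far
  keptMoments: string[]; // only what the user chose to keep
}

async function interactionStep(
  state: LoopState,
  userMessage: string,
  generate: (prompt: string, context: string[]) => Promise<string>,
  userKeeps: (moment: string) => Promise<boolean>
): Promise<LoopState> {
  // Conversation + imagination: the new message plus everything kept so far.
  const moment = await generate(userMessage, [...state.history, ...state.keptMoments]);

  // Optional memory: persistence is a decision, not a default.
  const kept = await userKeeps(moment);

  // Continuity: the returned state is the starting point for the next step.
  return {
    history: [...state.history, userMessage],
    keptMoments: kept ? [...state.keptMoments, moment] : state.keptMoments,
  };
}
```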
This is where Aaradhya fits in
Aaradhya isn’t just a chatbot layer on top of a model.
It’s a combination of:
- conversational interface
- identity system
- visual generation pipeline
- user-driven memory
All working together.
You don’t just get answers. You build something across interactions.
What this means going forward
We’re starting to see a shift in how AI systems are used.
Not just for:
- solving tasks
- generating outputs
But for:
- creating experiences
- maintaining continuity
- building interaction over time
This is still early, but it points toward a different direction.
Where we’re building this
This is part of what we’re exploring with CloYou.
Not replacing traditional AI systems, but extending them into something more interaction-driven.
Aaradhya is one implementation of that idea.
Final thought
AI is already good at answering.
The next step might be making interactions feel like they actually go somewhere.
🚀 If you want to try it
You can explore it here: https://cloyou.com
Try a normal conversation, but instead of asking for something useful, try creating a moment.
That’s where the difference shows up.