Anna Jambhulkar

Why I’m building a Windows-first emotional AI assistant (lessons so far)

Most AI products today are optimized for speed, accuracy, and scale.

And that makes sense.

But while using AI tools daily, I kept running into the same feeling:
every interaction felt stateless. Every session started from zero.
No memory. No continuity. No sense of knowing the user.

That’s where my curiosity started.

The problem I noticed

Modern AI assistants are impressive, but they behave like strangers who forget you every day.

You explain your preferences again.
You restate context again.
You rebuild workflows again.

From a technical perspective, this is fine.
From a human perspective, it feels broken.

Humans don’t work in isolated prompts — we work in continuity.

Why Windows-first (and not cloud-first)

One decision I made early was to build this as a Windows-first assistant, not a browser tab or a purely cloud-based tool.

Why?

Because a personal computer is still the most intimate computing device we own:

- It holds our files
- It reflects our workflows
- It stays with us for years

Building locally (or at least desktop-native) allows:

- Better context awareness
- Stronger privacy boundaries
- Tighter integration with daily work

Instead of AI being “somewhere on the internet”, it becomes present.

Emotional AI ≠ pretending to be human

A common misconception:
emotional AI means making the assistant sound emotional.

That’s not what I’m exploring.

For me, emotional AI is about:

- Remembering preferences
- Maintaining interaction history
- Adapting tone and behavior over time

It’s not about fake empathy.
It’s about continuity.
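To make that concrete: a minimal sketch of what "continuity" could mean in code. Everything here is my own illustration (class name, schema, SQLite as the store), not the project's actual design, but it shows the core idea: preferences and history persist in a local file on the user's machine, so the next session doesn't start from zero.

```python
import sqlite3
import time


class SessionMemory:
    """Illustrative local store for preferences and interaction history.

    Uses SQLite because it ships with Python and keeps everything in a
    single file on the user's own disk (desktop-native, no cloud).
    """

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS prefs (key TEXT PRIMARY KEY, value TEXT)"
        )
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS history (ts REAL, role TEXT, text TEXT)"
        )

    def set_pref(self, key, value):
        # Upsert: remembering a preference overwrites the old value.
        self.db.execute(
            "INSERT INTO prefs VALUES (?, ?) "
            "ON CONFLICT(key) DO UPDATE SET value=excluded.value",
            (key, value),
        )

    def get_pref(self, key, default=None):
        row = self.db.execute(
            "SELECT value FROM prefs WHERE key=?", (key,)
        ).fetchone()
        return row[0] if row else default

    def log_turn(self, role, text):
        # Interaction history accumulates across sessions.
        self.db.execute("INSERT INTO history VALUES (?, ?, ?)",
                        (time.time(), role, text))
```

With a real file path instead of `:memory:`, a preference set today (`mem.set_pref("tone", "concise")`) is still there tomorrow, which is the whole point.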

What I’ve learned so far (the hard parts)

  1. Memory is expensive — technically and ethically

Storing memory isn’t just a database problem.
You need to decide:

- What’s worth remembering?
- What should be forgotten?
- Who controls that memory?
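Those three questions can be encoded as policy, not left implicit. The sketch below is a hypothetical retention policy (all names and thresholds are mine, for illustration only): memory is opt-in, unpinned items expire, and the user holds an explicit `forget` that the system cannot override.

```python
import time


class RetentionPolicy:
    """Illustrative memory governance: what to keep, what to forget,
    and who controls it. The 30-day default is an arbitrary example."""

    def __init__(self, max_age_days=30):
        self.max_age = max_age_days * 86400
        self.items = {}  # key -> (value, stored_at, user_pinned)

    def remember(self, key, value, pinned=False):
        # Answer to "what's worth remembering": only what is explicitly
        # stored; pinned items are ones the user asked to keep forever.
        self.items[key] = (value, time.time(), pinned)

    def forget(self, key):
        # Answer to "who controls that memory": the user. Deletion is
        # immediate and unconditional.
        self.items.pop(key, None)

    def sweep(self, now=None):
        # Answer to "what should be forgotten": forgetting is a feature.
        # Unpinned items older than max_age are expired automatically.
        now = now if now is not None else time.time()
        expired = [
            k for k, (_, ts, pinned) in self.items.items()
            if not pinned and now - ts > self.max_age
        ]
        for k in expired:
            del self.items[k]
        return expired
```

Separating the policy from the storage layer also makes the boundary auditable: you can show a user exactly what is retained and why.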

  2. “Personal” quickly becomes “creepy” if done wrong

There’s a very thin line between helpful continuity and overreach.
Designing that boundary is more important than model choice.

  3. Developers underestimate emotion in tools

Many devs (myself included) initially think users only care about features.
In reality, how a tool makes you feel over time strongly affects retention.

Why I’m sharing this early

This project is still in a tech-trial stage.
I’m intentionally sharing before everything is “perfect”.

Because the most valuable insights so far haven’t come from metrics —
they’ve come from conversations.

A question for builders here

When you think about the tools you use daily:

- Do you value memory and continuity?
- Or do you prefer tools to stay stateless and predictable?

*Where do you personally draw the line?*

I’d love to learn from real experiences, not just theory.

Thanks for reading 🙏
