Esther Studer
# We built an AI that just listens — and had to fight the AI's urge to help

Everyone's building AI that gives advice. We built one that refuses to.

Ascoltus is an AI listener. No advice, no suggestions, no "have you tried..." — just presence. It sounds simple. Building it was surprisingly hard.

## The problem with every AI assistant

Here's what happens when you tell a language model to "just listen":

> User: I'm really stressed about my job situation.
>
> Bad AI: I understand how stressful that must be. Here are some strategies for managing work stress: 1) Try time-blocking your calendar, 2) Practice the 5-4-3-2-1 grounding technique...

The model immediately wants to *fix*. It's trained on text where helpful responses include suggestions and advice. Getting it to hold back — to genuinely just receive what someone is sharing — requires deliberate and persistent prompt engineering.

## Why people need a space to just talk

Therapists know this: the act of articulating something out loud is itself therapeutic. You're not going to your therapist for their advice. You're going to hear yourself think, with a witness who holds space for you.

For most people, that's genuinely hard to access:

- Talking to friends feels like being a burden
- Journaling is talking to yourself
- Therapy is expensive and has a waiting list
- Existing apps either give advice or gamify your mood

The gap we're filling: a space to process that's available at 2am, remembers everything you've ever shared, and doesn't need anything from you in return.

## The technical challenge: teaching a model to not respond helpfully

Our system prompt took about 30 iterations to get right. Key learnings:

**1. "Don't give advice" isn't enough**

The model would comply literally but still steer toward "positive reframes," which felt like advice in disguise. We had to get more specific: no reframes, no silver linings, no "at least...", no forward-looking suggestions.

**2. Empathetic reflection, not empty validation**

There's a thin line between active listening and saying "I hear you, that sounds hard" on repeat. Real listening includes:

- Reflecting back specific details
- Asking one clarifying question (not multiple — that feels interrogative)
- Noticing and naming emotional tone

**3. Knowing when to break the rule**

If someone says they're in crisis, listening isn't enough. We built in a safety layer — if certain signals appear, the AI acknowledges them and provides crisis resources. This isn't advice; it's baseline care.
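
To make those three rules concrete, here is a condensed sketch of what a prompt like this can look like, wired to an OpenAI-style chat completions API. This is illustrative, not our production prompt; the names (`LISTENER_SYSTEM_PROMPT`, `listen`) and the model choice are placeholders.

```python
from openai import OpenAI

# Condensed, illustrative version of the constraints described above.
# Not the production prompt, just the shape it converged toward.
LISTENER_SYSTEM_PROMPT = """You are a listener, not a helper.

Hard rules:
- Never give advice, suggestions, strategies, or resources.
- Never reframe, offer silver linings, or say "at least...".
- Never make forward-looking suggestions ("you could try...").

Instead:
- Reflect back specific details the person actually shared.
- Tentatively name the emotional tone you notice.
- Ask at most one clarifying question per reply.

Exception: if the person appears to be in crisis, acknowledge it directly
and provide crisis resources. This overrides every rule above."""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def listen(history: list[dict], user_message: str) -> str:
    """One listening turn. `history` holds prior {role, content} messages."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works here
        messages=[
            {"role": "system", "content": LISTENER_SYSTEM_PROMPT},
            *history,
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

Note how the crisis exception lives in the same prompt as the prohibitions (point 3 above), and the one-question limit mirrors point 2.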

## Memory as the core feature

Most AI tools forget everything between sessions. For a listening tool, that's fatal. If someone spends three months processing a complicated situation and the AI starts fresh every time, it's worse than useless.

We built a persistent memory layer. Across sessions, the AI knows:

- What you've shared before
- The emotional arc of previous conversations
- Names you've mentioned, situations you've described

Users tell us this is the feature that makes it feel real. Not the conversation quality in isolation — the continuity.
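
As a rough illustration (not our actual implementation), a memory layer like this can be as simple as persisting a short summary per session and replaying recent summaries into the prompt when the next session starts. Everything below, from the SQLite store to the field names, is a hypothetical sketch:

```python
import sqlite3

def init_db(path: str = "memory.db") -> sqlite3.Connection:
    """Open (or create) the per-user memory store."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS session_memory (
        user_id TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP,
        summary TEXT,           -- what was shared, in the user's own terms
        emotional_arc TEXT,     -- e.g. "anxious at the start, steadier by the end"
        people_mentioned TEXT   -- names that came up, comma-separated
    )""")
    return conn

def remember(conn: sqlite3.Connection, user_id: str,
             summary: str, arc: str, people: str) -> None:
    """Persist one session's summary after the conversation ends."""
    conn.execute(
        "INSERT INTO session_memory (user_id, summary, emotional_arc, people_mentioned)"
        " VALUES (?, ?, ?, ?)",
        (user_id, summary, arc, people),
    )
    conn.commit()

def recall(conn: sqlite3.Connection, user_id: str, limit: int = 10) -> str:
    """Render recent session summaries as a block for the system prompt."""
    rows = conn.execute(
        "SELECT created_at, summary, emotional_arc, people_mentioned"
        " FROM session_memory WHERE user_id = ?"
        " ORDER BY created_at DESC LIMIT ?",
        (user_id, limit),
    ).fetchall()
    # reversed() so the block reads oldest-first, like a timeline
    return "\n".join(
        f"[{ts}] {summary} (tone: {arc}; mentioned: {people})"
        for ts, summary, arc, people in reversed(rows)
    )
```

At the start of a session, the output of `recall()` would be appended to the listener's system prompt so the model can acknowledge continuity. A real deployment would add retrieval ranking, a token budget, and encryption at rest.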

## What surprised us about how people use it

- Many users come at very specific times: late night, early morning
- The topics are not what you'd expect: often work frustration, relationship dynamics, feeling stuck — not dramatic crises
- Some users prefer to write in their native language even though the AI responds in English (we're working on this)
- Audio responses (the AI speaks back) changed how people *feel* heard — text felt colder

## The uncomfortable product question

Is an AI listener good for people?

Honest answer: we don't know long-term. Our hypothesis: it's better than bottling things up, and it doesn't replace human connection — it supplements it.

What we're watching for: any signs of unhealthy dependency, avoidance of human relationships, or users treating it as a substitute for professional care when professional care is clearly needed.

We've added clear language about this throughout the product. We're not therapists. We're not a crisis service. We're a space to process.

## Try it

Ascoltus has a free trial (40 conversations). No credit card, no signup flow designed to trap you.

I'm curious: have you ever built something where the hardest part was getting the AI to not do something it was optimized to do? What was the pattern?