Matan Ellhayani

Why your dating app conversations die after 3 messages — a technical breakdown

I built a tool that simulates dating-app conversations with an LLM so people can practice opening, escalating, recovering from silence, asking someone out — the uncomfortable stuff. After about a thousand practice sessions went through it, a pattern showed up in the data that I think is more interesting than the product itself. I want to write about the pattern, because at its core it's genuinely a software problem.

The "three-message cliff"

If you log every practice run as a sequence of turns and bucket them by where the user quits (or where the simulated partner disengages), there is a very sharp drop-off between turn 3 and turn 4.

turn 1  ████████████████████████  100%
turn 2  ██████████████████████▎    93%
turn 3  ███████████████████▌       81%
turn 4  ████████▉                  37%
turn 5  █████▋                     23%
turn 6  ████▏                      17%

The shape is not gradual. It's a cliff. Something specific happens between the third and fourth message that is not happening between the second and third.
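If you want to reproduce this shape from your own logs, the bucketing is a few lines. This is a toy sketch with made-up data, not our production pipeline — the only assumption is that each session is a list of user turns:

```python
from collections import Counter

def survival_by_turn(sessions, max_turn=6):
    """For each turn N, compute the fraction of sessions that
    reached at least N turns (a simple survival curve)."""
    reached = Counter()
    for turns in sessions:
        for n in range(1, min(len(turns), max_turn) + 1):
            reached[n] += 1
    total = len(sessions)
    return {n: reached[n] / total for n in range(1, max_turn + 1)}

# Toy data: four sessions quitting at different turns
sessions = [["hi"] * 6, ["hi"] * 3, ["hi"] * 3, ["hi"] * 4]
curve = survival_by_turn(sessions)
# curve[1] == 1.0, curve[4] == 0.5 — a 2x drop between turns 3 and 4
```

The cliff is the ratio `curve[4] / curve[3]`; everything else in the chart above is just this function run over the real transcripts.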

What dies there

When I read the transcripts around the cliff, the pattern is boringly consistent. Turns 1–3 are pleasantries. "Hey, I liked your profile / mine is the dog / thanks, I like dogs too." Then turn 4 is where one of three things has to happen:

  1. A real question — something that actually requires the other person to share an opinion or story.
  2. A callback — a reference to something earlier in the conversation that shows you read it.
  3. An escalation — a move toward a phone number, a meet-up, or at least a meaningful time commitment.

The conversations that die at turn 4 are the ones where the user picked "none of the above" and instead said some variant of:

"So what do you do for fun?"

or, worse:

"lol same"

The signal is clear enough that it became the first thing our evaluator grades.
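The real grader is an LLM judge, but the taxonomy itself is simple enough to sketch as a toy heuristic. Every keyword and rule below is illustrative, not what we ship:

```python
import re

# Illustrative small-talk patterns — the real list is much longer
SMALL_TALK = re.compile(r"^(lol|haha|same|nice|cool)\b|what do you do for fun", re.I)

def classify_move(message: str, prior_turns: list[str]) -> str:
    """Classify a turn-4 move as escalation, callback,
    real question, or small-talk. Toy heuristic only."""
    text = message.lower()
    # Escalation: a move toward a number or a meet-up
    if any(w in text for w in ("number", "meet", "coffee", "this weekend")):
        return "escalation"
    # Callback: reuses a content word from an earlier turn
    prior_words = {w for t in prior_turns for w in t.lower().split() if len(w) > 4}
    if any(w in text.split() for w in prior_words):
        return "callback"
    # Real question: asks something that isn't canned small-talk
    if "?" in message and not SMALL_TALK.search(text):
        return "real_question"
    return "small_talk"
```

Run the two failure examples from above through it and both land in `small_talk`, which is exactly the bucket the cliff lives in.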

Why this is a product problem, not a dating problem

The instinct is to call this a vibes issue. It's not. It's the same problem that kills chatbot conversations, support conversations, and interview conversations: no one was taught how to leave the safe zone of small-talk.

In software terms: pleasantries are a cheap, idempotent protocol. No state, no risk, no memory. Turn 4 is where the protocol has to upgrade to something stateful — you have to refer back to prior turns, commit to a direction, and accept that the other side might say no.

Most users don't upgrade. They retry the idempotent protocol. It returns 200 OK, but no new information is exchanged. Three of those in a row, and the other side disengages.

How we built around it

A few design choices that fell out of this:

1. The evaluator grades "protocol upgrades," not replies

Every response from the user gets a rubric score across six dimensions, but the most heavily weighted one is: does this message advance state? Small-talk replies get marked neutral. A callback to an earlier turn is weighted highly. A failed escalation (cringe ask-out) is weighted more highly than a safe non-escalation — failing forward counts.
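To make the weighting concrete, here's the shape of the scoring. The dimension names and weights are illustrative, not our production rubric; the point is only that a mediocre escalation outscores perfect small-talk:

```python
# Illustrative dimensions and weights — not the production rubric
WEIGHTS = {
    "advances_state": 3.0,      # the heaviest dimension
    "escalation_attempt": 2.5,  # even a failed ask-out scores here
    "callback": 2.0,
    "warmth": 1.0,
    "specificity": 1.0,
    "brevity": 0.5,
}

def score(dims: dict[str, float]) -> float:
    """dims maps dimension name -> 0..1 grade from the LLM judge.
    Ungraded dimensions simply contribute nothing."""
    return sum(WEIGHTS[k] * v for k, v in dims.items())

safe_small_talk = score({"advances_state": 0.0, "warmth": 0.8})       # 0.8
failed_escalation = score({"advances_state": 1.0, "escalation_attempt": 0.6})  # 4.5
assert failed_escalation > safe_small_talk  # failing forward counts
```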

2. The simulated partner gets genuinely tired of small-talk

We prompt the partner with an internal "engagement budget." Every idempotent reply burns it. When it hits zero, the partner disengages the way a real person does: shorter replies, then delayed replies, then nothing. Users feel the cliff happen inside the simulator, which is the whole point.
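The real budget lives inside the partner's system prompt, but the mechanic reduces to a tiny state machine. State names and the budget size here are illustrative:

```python
class SimPartner:
    """Toy version of the engagement-budget mechanic."""

    def __init__(self, budget: int = 3):
        self.budget = budget

    def react(self, advances_state: bool) -> str:
        if advances_state:
            self.budget = min(self.budget + 1, 3)  # good moves refill it
        else:
            self.budget -= 1                       # idempotent replies burn it
        if self.budget >= 2:
            return "engaged"
        if self.budget == 1:
            return "short_replies"
        if self.budget == 0:
            return "delayed_replies"
        return "ghosted"

p = SimPartner()
states = [p.react(False) for _ in range(4)]
# Four small-talk replies in a row walk straight down the cliff:
# ["engaged", "short_replies", "delayed_replies", "ghosted"]
```

The degradation is deliberately staged rather than binary — users should see the warning signs (shorter replies, then delays) before the silence, just like they would on a real app.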

3. The coach never rewrites your message for you

This was the hardest product decision. Every competitor in the space ("screenshot your convo, get a reply") solves the surface problem — give the user a line — and leaves the underlying deficit untouched. We score, explain, and let the user try again. The practice reps are the product.

The take-away for builders

If you are building anything where humans learn to have better conversations — with customers, in interviews, on dates, with their teenager — the cliff between small-talk and committed exchange is the one place where intervention has the highest leverage. Everything before it is low-risk; everything after it compounds.

If you want to poke at it, the tool is at talkeasier.com. The more interesting thing, though, is the data: I'm going to keep publishing what we see in the transcripts as the dataset grows.


I'm a solo dev building this. If you want to see the evaluator rubric or the partner-engagement-budget prompt, reply and I'll drop them in a follow-up post.
