<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matan Ellhayani</title>
    <description>The latest articles on DEV Community by Matan Ellhayani (@matanellhayani).</description>
    <link>https://dev.to/matanellhayani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3890872%2F828e23bd-eb66-4f5c-b629-bc3e20848573.png</url>
      <title>DEV Community: Matan Ellhayani</title>
      <link>https://dev.to/matanellhayani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/matanellhayani"/>
    <language>en</language>
    <item>
      <title>Why your dating app conversations die after 3 messages — a technical breakdown</title>
      <dc:creator>Matan Ellhayani</dc:creator>
      <pubDate>Tue, 21 Apr 2026 14:10:14 +0000</pubDate>
      <link>https://dev.to/matanellhayani/why-your-dating-app-conversations-die-after-3-messages-a-technical-breakdown-2oml</link>
      <guid>https://dev.to/matanellhayani/why-your-dating-app-conversations-die-after-3-messages-a-technical-breakdown-2oml</guid>
      <description>&lt;p&gt;I built a tool that simulates dating-app conversations with an LLM so people can practice opening, escalating, recovering from silence, asking someone out — the uncomfortable stuff. After about a thousand practice sessions went through it, a pattern showed up in the data that I think is more interesting than the product itself. I want to write about the pattern, because it's genuinely a software problem at the core.&lt;/p&gt;

&lt;h2&gt;The "three-message cliff"&lt;/h2&gt;

&lt;p&gt;If you log every practice run as a sequence of turns and bucket them by where the user quits (or where the simulated partner disengages), there is a very sharp drop-off between turn 3 and turn 4.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;turn 1  ████████████████████████  100%
turn 2  ██████████████████████▎    93%
turn 3  ███████████████████▌       81%
turn 4  ████████▉                  37%
turn 5  █████▋                     23%
turn 6  ████▏                      17%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The shape is not gradual. It's a cliff. Something specific happens between the third and fourth message that is &lt;em&gt;not&lt;/em&gt; happening between the second and third.&lt;/p&gt;
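&lt;p&gt;A minimal sketch of that bucketing (the run encoding — one ended-at turn index per run — is hypothetical; the toy data below just mirrors the chart):&lt;/p&gt;

```python
from collections import Counter

def survival_by_turn(runs, max_turn=6):
    """Given practice runs, each recorded as the turn index at which
    the conversation ended, compute the fraction still alive per turn."""
    total = len(runs)
    ended_at = Counter(runs)
    alive = total
    rates = {}
    for turn in range(1, max_turn + 1):
        rates[turn] = alive / total
        alive -= ended_at.get(turn, 0)
    return rates

# Toy data shaped like the chart above: the big bucket dies at turn 3.
runs = [1] * 7 + [2] * 12 + [3] * 44 + [4] * 14 + [5] * 6 + [6] * 17
rates = survival_by_turn(runs)
```

With this data, `rates[3]` is 0.81 and `rates[4]` is 0.37 — the turn-3-to-4 drop dwarfs every other step, which is the cliff.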

&lt;h2&gt;What dies there&lt;/h2&gt;

&lt;p&gt;When I read the transcripts around the cliff, the pattern is boringly consistent. Turns 1–3 are pleasantries. "Hey, I liked your profile / mine is the dog / thanks, I like dogs too." Then turn 4 is where one of three things has to happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A real question&lt;/strong&gt; — something that actually requires the other person to share an opinion or story.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A callback&lt;/strong&gt; — a reference to something earlier in the conversation that shows you read it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An escalation&lt;/strong&gt; — a move toward a phone number, a meet-up, or at least a meaningful time commitment.&lt;/li&gt;
&lt;/ol&gt;
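&lt;p&gt;For grading, the three move types can be roughed out with a naive first-pass heuristic before anything ambiguous goes to an LLM. Everything below — the function name, cue list, thresholds — is illustrative, not our actual classifier:&lt;/p&gt;

```python
# Surface cues that usually signal a move toward a meet-up or number.
ESCALATION_CUES = ("number", "coffee", "drink", "meet", "this weekend")

def label_move(message, prior_turns):
    """Naive first-pass label for a turn-4 message.
    Returns 'escalation', 'callback', 'question', or 'filler'."""
    text = message.lower()
    if any(cue in text for cue in ESCALATION_CUES):
        return "escalation"
    # Callback: reuses a distinctive (longer) word from an earlier turn.
    earlier = set()
    for turn in prior_turns:
        earlier.update(w for w in turn.lower().split() if len(w) > 4)
    if earlier.intersection(w for w in text.split() if len(w) > 4):
        return "callback"
    if "?" in text:
        return "question"
    return "filler"
```

A heuristic this crude misclassifies plenty on its own; its job is only to cheaply separate the obvious cases ("lol same" is filler no matter what) from the ones worth a model call.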

&lt;p&gt;The conversations that die at turn 4 are the ones where the user picked "none of the above" and instead said some variant of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"So what do you do for fun?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;or, worse:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"lol same"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The signal is clear enough that it became the first thing our evaluator grades.&lt;/p&gt;

&lt;h2&gt;Why this is a product problem, not a dating problem&lt;/h2&gt;

&lt;p&gt;The instinct is to call this a vibes issue. It's not. It's the same problem that kills chatbot conversations, support conversations, and interview conversations: &lt;strong&gt;no one was taught how to leave the safe zone of small-talk.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In software terms: pleasantries are a cheap, idempotent protocol. No state, no risk, no memory. Turn 4 is where the protocol has to upgrade to something stateful — you have to refer back to prior turns, commit to a direction, and accept that the other side might say no.&lt;/p&gt;

&lt;p&gt;Most users don't upgrade. They retry the idempotent protocol. It returns 200 OK, but no new information is exchanged. Three of those in a row, and the other side disengages.&lt;/p&gt;
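&lt;p&gt;The analogy can be made literal with a toy model: treat conversation state as the set of things established so far, and filler as a no-op. (Purely illustrative — the real system does not keep state as a set of strings.)&lt;/p&gt;

```python
# A few canned filler lines for the sketch.
FILLER = {"lol same", "haha nice", "so what do you do for fun?"}

def apply_message(state, message):
    """State = frozenset of topics established so far.
    Filler is idempotent: it returns the state unchanged, like
    retrying a GET. Anything else adds information to the state."""
    if message.lower() in FILLER:
        return state  # 200 OK, but no new information exchanged
    return state | {message.lower()}

s = apply_message(frozenset(), "I liked your dog photo")
s = apply_message(s, "lol same")
s = apply_message(s, "lol same")  # retry the idempotent protocol
```

After any number of filler applications the state is byte-for-byte what it was; that is what the other side is reacting to.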

&lt;h2&gt;How we built around it&lt;/h2&gt;

&lt;p&gt;A few design choices that fell out of this:&lt;/p&gt;

&lt;h3&gt;1. The evaluator grades "protocol upgrades," not replies&lt;/h3&gt;

&lt;p&gt;Every response from the user gets a rubric score across six dimensions, but the weightiest one is: &lt;em&gt;does this message advance state?&lt;/em&gt; Small-talk replies get marked neutral. A callback to an earlier turn is weighted highly. A failed escalation (cringe ask-out) is weighted more highly than a safe non-escalation — failing forward counts.&lt;/p&gt;
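&lt;p&gt;One way that weighting could look — the dimension names and weights here are illustrative, not the production rubric:&lt;/p&gt;

```python
# Illustrative weights: "advances_state" dominates, and an attempted
# escalation is rewarded even when the attempt itself is clumsy.
WEIGHTS = {
    "advances_state": 3.0,
    "callback": 2.0,
    "escalation_attempted": 1.5,
    "specificity": 1.0,
    "tone": 0.5,
    "length_fit": 0.5,
}

def score(dims):
    """dims: dict of dimension name to a value in [0, 1].
    Filler replies score near zero on the heavy dimensions,
    so they land near neutral overall."""
    return sum(WEIGHTS[k] * dims.get(k, 0.0) for k in WEIGHTS)

# A cringe ask-out still beats a pleasant non-move.
failed_escalation = score({"advances_state": 1.0, "escalation_attempted": 1.0, "tone": 0.2})
safe_filler = score({"tone": 0.8, "length_fit": 0.8})
```

The design constraint this encodes is the one from the text: failing forward must outscore staying safe, or the evaluator trains users back toward the cliff.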

&lt;h3&gt;2. The simulated partner gets genuinely tired of small-talk&lt;/h3&gt;

&lt;p&gt;We prompt the partner with an internal "engagement budget." Every idempotent reply burns it. When it hits zero, the partner disengages the way a real person does: shorter replies, then delayed replies, then nothing. Users feel the cliff happen &lt;em&gt;inside&lt;/em&gt; the simulator, which is the whole point.&lt;/p&gt;
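&lt;p&gt;Mechanically, the budget can be as simple as a counter that filler burns and real moves partially refill; the stage names and numbers below are illustrative, not the actual prompt values:&lt;/p&gt;

```python
def partner_mode(budget):
    """Map remaining engagement budget to a disengagement stage."""
    if budget > 2:
        return "engaged"
    if budget > 0:
        return "short_replies"
    if budget > -2:
        return "delayed_replies"
    return "ghosted"

def update_budget(budget, move, cap=5):
    """Filler burns budget; any real move refills some of it."""
    if move == "filler":
        return budget - 1
    return min(budget + 1, cap)

budget = 4
for move in ("filler", "filler", "filler"):
    budget = update_budget(budget, move)
# Three idempotent replies in a row and the partner is already terse.
```

The point of staging the decay (terse, then slow, then silent) rather than cutting off abruptly is that the user gets readable feedback while there is still time to recover.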

&lt;h3&gt;3. The coach never rewrites your message for you&lt;/h3&gt;

&lt;p&gt;This was the hardest product decision. Every competitor in the space ("screenshot your convo, get a reply") solves the surface problem — give the user a line — and leaves the underlying deficit untouched. We score, explain, and let the user try again. The practice reps are the product.&lt;/p&gt;

&lt;h2&gt;The take-away for builders&lt;/h2&gt;

&lt;p&gt;If you are building anything where humans learn to have better conversations — with customers, in interviews, on dates, with their teenager — the cliff between small-talk and committed exchange is the one place where intervention has the highest leverage. Everything before it is low-risk; everything after it compounds.&lt;/p&gt;

&lt;p&gt;If you want to poke at it, the tool is at &lt;a href="https://talkeasier.com" rel="noopener noreferrer"&gt;talkeasier.com&lt;/a&gt;. The more interesting thing, though, is the data: I'm going to keep publishing what we see in the transcripts as the dataset grows.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm a solo dev building this. If you want to see the evaluator rubric or the partner-engagement-budget prompt, reply and I'll drop them in a follow-up post.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
