DEV Community

Matan Ellhayani

Making an LLM "get tired" of bad conversation: the engagement-budget prompt

Follow-up to my previous post about the turn 3→4 cliff in dating-app conversations. A few people asked to see the actual prompt design. Here's one of the core pieces.

The problem

When you build an LLM-simulated partner for practice conversations, the default failure mode is that the model is too nice. It politely continues any conversation, including ones where the user is sending increasingly generic, low-effort replies. This is a disaster for a practice tool, because the user never feels the consequences of being bad at it.

In a real conversation, someone you're texting who sends "lol same" three times in a row just… stops replying. You want to reproduce that signal inside the simulator, without making the model actively hostile.

The design: a decaying engagement budget

The trick we landed on is treating the partner's willingness-to-engage as a numeric budget the LLM has to reason about every turn.

Pseudo-prompt (actual prompt is longer; core shape below):

You are {character}, in a dating-app chat with {user}.

You carry an internal engagement_budget that starts at 10.

On every turn, before you reply, update engagement_budget:
  - The user's message advances state (callback, opinion, escalation): +1 or 0
  - The user's message is a neutral pleasantry ("lol same", "what do you do for fun"): -1
  - The user's message is genuinely engaging or funny: +1
  - The user's message is weirdly forward, rude, or off-vibe: -2

Clamp engagement_budget to [0, 10].

Behave according to the current budget:
  budget >= 7: warm, playful, volunteers details, asks questions back
  budget 4-6: still engaged but shorter, less-volunteered info
  budget 2-3: clipped replies, lets beats die, doesn't initiate
  budget 0-1: barely-there "haha yeah" / "totally"; no curiosity
  budget = 0 AND user hasn't recovered in 2 turns: do not reply
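The same mechanics, written as plain code for clarity. This is a sketch, not the product's implementation: in the real design the LLM does this reasoning inside the prompt, so the classification labels and function names here are illustrative.

```python
# Budget update rules from the prompt, as a deterministic sketch.
# The message label ("advances_state" etc.) is assumed to come from
# somewhere else -- in the actual design, the model judges it itself.

DELTAS = {
    "advances_state": +1,  # callback, opinion, escalation
    "engaging": +1,        # genuinely engaging or funny
    "neutral": -1,         # "lol same", stock questions
    "off_vibe": -2,        # weirdly forward, rude, off-vibe
}

def update_budget(budget: int, label: str) -> int:
    """Apply the per-turn delta and clamp to [0, 10]."""
    return max(0, min(10, budget + DELTAS[label]))

def behavior_tier(budget: int) -> str:
    """Map the current budget to the behavior bands from the prompt."""
    if budget >= 7:
        return "warm"       # playful, volunteers details, asks back
    if budget >= 4:
        return "shorter"    # engaged but less-volunteered info
    if budget >= 2:
        return "clipped"    # lets beats die, doesn't initiate
    return "barely_there"   # "haha yeah" / "totally"
```

The clamp at both ends matters: the lower bound keeps the "do not reply" branch reachable in a bounded number of bad turns, and the upper bound is the over-eager-puppy fix described below.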

That last branch is the one that matters. When the budget hits zero and stays there, the partner does what a real person does — goes quiet.

Why this beats "just roleplay someone being uninterested"

We tried the lazier version first. We prompted: "You are someone in a mediocre conversation. Act uninterested."

The problem is the model commits to the bit. It plays a flat, disengaged character from turn 1, and the user gets a training signal that doesn't match reality. Real people don't start out disengaged — they become disengaged, as a function of what you send.

The engagement budget reproduces that function. The first two turns can look warm. The fifth turn, after three generic replies, looks exactly like a real match whose attention you've lost.

Surfacing the signal

The second design move — separate from the prompt — is showing the user the budget after the session. Not during. A mid-session readout breaks immersion and turns the tool into a gamified Skinner box.

After the conversation, we surface three things:

  1. Message-by-message budget deltas. The user sees that message 4 ("lol same") dropped them from 7 to 6, and message 5 ("haha yeah") dropped them to 5, and so on.
  2. The specific turn where the budget crossed into the disengagement zone.
  3. A plain-English "here's what would have bought you two points back at turn 5" — usually a callback or a small stance.

This is the part of the product that actually teaches something. The conversation gave them the feel. The debrief gives them the why.

Things that broke

A few things we tried that didn't work, in case they save someone time:

  • Letting the model announce the budget in-session (e.g., "I'm losing interest in this conversation"). Felt patronizing, broke immersion, users hated it.
  • Letting the budget run below 0 into a "very bored, actively hostile" band. It overshoots. Real people don't get hostile, they just drift away.
  • Not capping the upper end. If a user landed three great replies in a row, the partner became an over-eager puppy. Capping at 10 keeps warmth realistic.

The bigger pattern

I think "pretend-state hidden inside a character prompt" is going to be a common pattern for LLM products that simulate humans-with-agency — interview trainers, difficult-conversation rehearsal tools, customer-service sims. The character doesn't see the state field in the transcript, but behaves consistently with it. It's a cheap way to give an LLM character memory-with-consequences.
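The plumbing for that pattern is small: keep the state server-side and re-inject it into the system prompt on every turn, so it shapes behavior without ever appearing in the visible transcript. A minimal sketch — the template wording and `build_messages` helper are illustrative, and the actual chat-API call is elided:

```python
# Hidden-state-in-the-character-prompt: the budget lives outside the
# conversation and is rewritten into the system prompt each turn.
# The transcript the user sees never contains it.

SYSTEM_TEMPLATE = (
    "You are {character}, in a dating-app chat. "
    "Your current engagement_budget is {budget}/10. "
    "Behave according to the budget bands. Never mention the budget."
)

def build_messages(character: str, budget: int, transcript: list) -> list:
    """Prepend a system prompt carrying the hidden state to the
    visible transcript, ready to send to a chat-completion API."""
    system = SYSTEM_TEMPLATE.format(character=character, budget=budget)
    return [{"role": "system", "content": system}] + transcript
```

Because the system prompt is rebuilt every turn, the character stays consistent with state it "remembers" but was never shown saying out loud — which is the whole trick.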


Solo dev building TalkEasier. The evaluator rubric that grades user messages is another piece of this — can write that up next if there's interest.
