Jesper Deng

How I Made AI Behave Differently Based on Conversation Context (Multi-Role Prompt Engineering)

#ai

I've been working on a project that requires multiple AI "characters" to behave differently in the same conversation — think of it like NPCs in a game, except each one needs to respond based on their role, personality, and the current situation.

Here's what I learned about making this work reliably.

The problem

If you just tell an AI "you are character A" and "you are character B" in separate prompts, they all end up sounding the same. Generic. Helpful. Boring. You need them to have distinct behaviors — one should be cooperative, another defensive, another authoritative.

What actually works

1. Behavioral constraints > personality descriptions

Bad:

```
You are a friendly witness who is helpful.
```

Better:

```
You are a witness being questioned. Rules:
- Only answer what is directly asked
- If the question is vague, ask for clarification
- Never volunteer extra information
- If pressed on a contradiction, become defensive
```

Constraints produce more consistent behavior than adjectives.
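One way to keep those rule lists maintainable is to store them as data and render them into the prompt. This is a sketch of that idea — the `RoleSpec` shape and `renderPrompt` helper are my own illustration, not from the post:

```typescript
// Sketch: behavioral rules stored as data, rendered into a prompt string.
// RoleSpec and renderPrompt are illustrative names, not from the post.
interface RoleSpec {
  role: string;
  rules: string[];
}

function renderPrompt(spec: RoleSpec): string {
  // One "- " bullet per rule, matching the prompt format shown above.
  const ruleLines = spec.rules.map((r) => `- ${r}`).join("\n");
  return `You are ${spec.role}. Rules:\n${ruleLines}`;
}

const witness: RoleSpec = {
  role: "a witness being questioned",
  rules: [
    "Only answer what is directly asked",
    "If the question is vague, ask for clarification",
    "Never volunteer extra information",
    "If pressed on a contradiction, become defensive",
  ],
};
```

Keeping rules as an array also makes it easy to add or remove a constraint per scene without rewriting the whole prompt.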

2. Context-dependent behavior switching

The same character might need to behave differently depending on who's talking to them. I handle this by passing a mode parameter when building the system prompt:

```typescript
function buildPrompt(character: Character, mode: 'friendly' | 'hostile') {
  const base = `You are ${character.name}. Background: ${character.bio}`;

  if (mode === 'friendly') {
    return `${base}\n\nBehavior: Be cooperative. Give detailed answers. Expand on your responses when appropriate.`;
  } else {
    return `${base}\n\nBehavior: Be defensive. Give minimal answers. Only confirm what you cannot deny. Redirect when possible.`;
  }
}
```

This tiny switch makes a huge difference in how natural the responses feel.

3. State management across turns

The hardest part: making characters remember what happened earlier and adjust. I maintain a simplified state object:

```typescript
interface ConversationState {
  currentSpeaker: string;
  previousStatements: string[];
  contradictions: string[];
  mood: 'neutral' | 'defensive' | 'confident';
}
```

Before each AI call, I inject a summary of what's happened so far. This keeps responses contextually aware without blowing up the token count.
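Here is a minimal sketch of what that injection step could look like. The post doesn't show it, so the `summarizeState` and `withContext` helpers below are my assumptions about the shape:

```typescript
// Sketch: compress the state object into a short summary and prepend it
// to the system prompt before each model call. Helper names are
// illustrative assumptions, not from the post.
interface ConversationState {
  currentSpeaker: string;
  previousStatements: string[];
  contradictions: string[];
  mood: 'neutral' | 'defensive' | 'confident';
}

function summarizeState(state: ConversationState): string {
  // Only the last few statements, to keep the token count bounded.
  const recent = state.previousStatements.slice(-3).join(" | ");
  const parts = [
    `Current speaker: ${state.currentSpeaker}.`,
    `Your mood: ${state.mood}.`,
    recent ? `Recent statements: ${recent}` : "",
    state.contradictions.length
      ? `Contradictions raised so far: ${state.contradictions.join("; ")}`
      : "",
  ];
  return parts.filter(Boolean).join("\n");
}

function withContext(systemPrompt: string, state: ConversationState): string {
  return `${systemPrompt}\n\n--- Context so far ---\n${summarizeState(state)}`;
}
```

Truncating to the last few statements is the key trade-off: enough recent context to stay coherent, without replaying the whole transcript on every call.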

4. The "don't be helpful" problem

LLMs are trained to be helpful. When you need a character to be evasive or unhelpful, you have to fight against this training. What works:

  • Explicitly say "you do NOT want to help the questioner"
  • Give the character a motivation for being difficult
  • Add examples of deflection in the prompt

```
You are being questioned about something you want to hide.
Your goal is to answer without revealing [specific fact].
Techniques you use: giving technically true but misleading answers,
answering a different question than what was asked, saying "I don't recall."
```
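The three points above can be folded into a single prompt builder. This is a sketch — the function name, wording, and placeholder values are mine:

```typescript
// Sketch combining the three tactics: an explicit "do NOT help" line,
// a motivation for being difficult, and deflection examples.
// Name and wording are illustrative, not from the post.
function buildEvasivePrompt(hiddenFact: string, motivation: string): string {
  return [
    "You are being questioned about something you want to hide.",
    "You do NOT want to help the questioner.",
    `Motivation: ${motivation}`,
    `Your goal is to answer without revealing: ${hiddenFact}.`,
    "Techniques you use: giving technically true but misleading answers,",
    'answering a different question than what was asked, saying "I don\'t recall."',
  ].join("\n");
}
```

Parameterizing the hidden fact and the motivation means the same evasion template can be reused across characters.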

5. Temperature matters more than you think

For authoritative characters (judges, experts), use lower temperature (0.3-0.5). They should be consistent and decisive.

For emotional or unpredictable characters, bump it up (0.7-0.9). The randomness makes them feel more human.
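A simple way to wire this in is a per-role temperature lookup. The role names below are my own examples; the values just follow the ranges above:

```typescript
// Sketch: map character archetypes to sampling temperatures.
// Role names are illustrative; values follow the ranges in the text.
type Role = 'judge' | 'expert' | 'witness' | 'volatile';

function temperatureFor(role: Role): number {
  switch (role) {
    case 'judge':    return 0.3; // authoritative: consistent and decisive
    case 'expert':   return 0.5; // authoritative, slightly looser
    case 'witness':  return 0.7; // some variability feels more human
    case 'volatile': return 0.9; // emotional/unpredictable characters
  }
}
```

Because `Role` is a literal union, TypeScript checks the switch is exhaustive — adding a new role forces you to pick a temperature for it.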

Key takeaway

Multi-role AI isn't about writing better character descriptions. It's about defining behavioral rules, injecting context, and fighting the model's default "helpful assistant" mode. Once I figured that out, everything clicked.

Would love to hear if anyone else is doing multi-agent stuff — what patterns are you using?
