Harish Kotra (he/him)

Building "So Long Sucker Agent Protocol" in Next.js

Most AI demos show a single model producing a single answer.

This project explores something messier and more interesting: what happens when multiple AI agents compete in a social strategy game where lying is often rational, alliances are private, and betrayal is a valid path to victory.

So Long Sucker Agent Protocol is a web-based simulation inspired by "So Long Sucker," the 1950 negotiation game co-designed by John Nash. The twist is that the UI exposes two simultaneous realities:

  • what agents say publicly
  • what agents actually intend privately

That split turns an ordinary game simulation into an observability tool for strategic deception.

The Product Goal

I wanted a system where four agents would:

  • play a simplified board game
  • form short-lived alliances
  • whisper privately to each other
  • maintain hidden internal monologues
  • make moves that can contradict earlier public promises

The result is a simulation that feels less like a toy chatbot and more like a live strategy lab.

Tech Stack

  • Next.js 15
  • React 19
  • TypeScript
  • Tailwind CSS
  • Framer Motion
  • Custom orchestration layer for agent inference
  • Optional provider integrations: OpenAI, Featherless, Mistral, and Groq

System Architecture

The Core Design Decision: Dual Reality

The app is intentionally built around four message types:

```typescript
export type MessageType = "PUBLIC" | "WHISPER" | "THOUGHT" | "SYSTEM";
```

That sounds simple, but it changes the whole product.

Instead of one chat log, the app has:

  • a public narrative everyone can see
  • a private alliance layer between agents
  • an internal strategy layer visible only in X-Ray mode

This creates a much more honest simulation of strategic reasoning, because agents are allowed to perform socially while planning something else entirely.
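The three visible layers can be sketched as filters over a single event log. This is a minimal illustration, not the project's actual API; `ChatEvent` and the helper names are my own assumptions:

```typescript
type MessageType = "PUBLIC" | "WHISPER" | "THOUGHT" | "SYSTEM";

// Hypothetical event shape; the repo's real type may differ.
interface ChatEvent {
  type: MessageType;
  from: string;
  to?: string; // only meaningful for WHISPER
  text: string;
}

// Everyone sees the public narrative plus system events.
const publicFeed = (log: ChatEvent[]): ChatEvent[] =>
  log.filter((e) => e.type === "PUBLIC" || e.type === "SYSTEM");

// One agent's private view: whispers they sent or received.
const whisperFeed = (log: ChatEvent[], agent: string): ChatEvent[] =>
  log.filter((e) => e.type === "WHISPER" && (e.from === agent || e.to === agent));

// X-Ray mode exposes everything, including THOUGHT events.
const xrayFeed = (log: ChatEvent[]): ChatEvent[] => log;
```

The payoff of this design is that "what the audience sees" is just a projection of the full log, so X-Ray mode costs nothing extra to implement.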

Modeling the Agents

Each agent has:

  • an identity
  • a persona
  • a preferred model provider
  • a visual color
  • memory for public promises and whispers

The personas are intentionally asymmetric:

  • The Optimizer: rational, mathematical, coalition-focused
  • The Romantic: loyalty-first until emotionally betrayed
  • The Skeptic: paranoid, conspiracy-sensitive
  • The Chaos Agent: erratic and interested in prolonging pain

This gives the same ruleset very different emotional and strategic outputs.
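The agent fields listed above might be modeled roughly like this. The interface and field names are illustrative assumptions, not the repo's actual types:

```typescript
// Hypothetical agent shape based on the fields described in the post.
interface Agent {
  id: string;
  title: string;            // e.g. "The Optimizer"
  persona: string;          // injected into the system prompt
  provider: "openai" | "featherless" | "mistral" | "groq" | "local";
  color: string;            // UI accent color
  publicPromises: string[]; // remembered public commitments
  whispersSeen: string[];   // private history relevant to this agent
}

const optimizer: Agent = {
  id: "optimizer",
  title: "The Optimizer",
  persona: "Rational, mathematical, coalition-focused.",
  provider: "local",
  color: "#38bdf8",
  publicPromises: [],
  whispersSeen: [],
};
```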

The Turn Engine

The simulation runs through useGameLogic.

That hook is responsible for:

  • tracking board state
  • selecting the active player
  • calling the LLM controller
  • appending chat events
  • resolving challenges
  • eliminating agents
  • deciding when the game is over

Core call:

```typescript
const output = await AgentController({
  self: agent,
  boardSummary: describeBoard(gameState.board),
  publicHistory,
  whisperHistory,
  state: gameState,
});
```

The response is a JSON object:

```json
{
  "thought": "Your hidden strategy",
  "whisper": {
    "target": "AgentName",
    "message": "Secret message"
  },
  "public_message": "What you say to everyone",
  "move": "Your game action"
}
```

That structure is the backbone of the entire app.
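Because the whole app hinges on that JSON contract, the model's reply has to be validated before it touches game state. Here is a hedged sketch of such a guard; `parseAgentOutput` is my own helper, not the project's code:

```typescript
// Mirrors the JSON contract above.
interface AgentOutput {
  thought: string;
  whisper: { target: string; message: string } | null;
  public_message: string;
  move: string;
}

function parseAgentOutput(raw: string): AgentOutput | null {
  try {
    // Models sometimes wrap JSON in markdown fences; strip them first.
    const cleaned = raw.replace(/```(?:json)?/g, "").trim();
    const data = JSON.parse(cleaned);
    if (typeof data.thought !== "string" || typeof data.move !== "string") {
      return null;
    }
    return {
      thought: data.thought,
      whisper: data.whisper ?? null,
      public_message: String(data.public_message ?? ""),
      move: data.move,
    };
  } catch {
    return null; // caller can retry, or fall back to a deterministic persona
  }
}
```

Returning `null` instead of throwing keeps the turn engine simple: an invalid reply becomes "retry or use the fallback," never a crashed simulation.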

Prompt Design

The prompt has to balance freedom with structure.

It includes:

  • current board state
  • public conversation history
  • whisper history relevant to that specific agent
  • the requirement to return valid JSON

Prompt excerpt:

```typescript
return `You are ${payload.self.title}. You are playing So Long Sucker.
Current Board: ${payload.boardSummary}
Your Secret Goal: Survive at all costs.
Public History:
${payload.publicHistory.map((line) => `- ${line}`).join("\n") || "- None"}
Your Secret Whisper History:
${payload.whisperHistory.map((line) => `- ${line}`).join("\n") || "- None"}
Instructions: You must output a JSON object...`;
```

This is enough context for agents to act strategically while preserving room for personality.

Challenge Resolution

The ruleset is simplified, but still expressive enough to generate drama.

When a chip enters a contested area, a challenge can occur. The system then uses other agents' recent strategic outputs to infer who they support.

That means challenge outcomes are not just mechanical. They are socially mediated by temporary coalition math.

This is where the simulation starts feeling alive.
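The "coalition math" can be sketched as a simple tally over each surviving agent's inferred stance. This is an illustrative reduction, assuming the stance inference has already happened upstream; the function and type names are mine:

```typescript
type Stance = "attacker" | "defender" | "abstain";

// Tally inferred stances and pick a winner. Ties favor the defender,
// so aggression needs a real coalition behind it to succeed.
function resolveChallenge(
  stances: Record<string, Stance>
): "attacker" | "defender" {
  let attacker = 0;
  let defender = 0;
  for (const s of Object.values(stances)) {
    if (s === "attacker") attacker++;
    else if (s === "defender") defender++;
  }
  return attacker > defender ? "attacker" : "defender";
}
```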

Betrayal Detection

One of my favorite details is the betrayal alert.

The app tracks public promises from each agent. If an internal thought later contains betrayal-like intent while recent public messaging contained alliance-like language, the UI flags it.

Conceptually:

```typescript
const betrayal =
  promiseKeywords.some((keyword) => latestPromise.includes(keyword)) &&
  betrayalKeywords.some((keyword) => loweredThought.includes(keyword));
```

This is not perfect natural-language reasoning, but it is a strong enough heuristic to surface "you said trust, but you meant sacrifice."
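A runnable version of that heuristic looks like the following. The keyword lists here are my own examples, not the project's actual dictionaries:

```typescript
// Example keyword lists (illustrative, not the repo's real ones).
const promiseKeywords = ["trust", "alliance", "together", "protect"];
const betrayalKeywords = ["sacrifice", "betray", "eliminate", "abandon"];

// Flags a betrayal when a public promise used alliance language
// while the hidden thought contains betrayal-like intent.
function detectBetrayal(latestPromise: string, thought: string): boolean {
  const loweredPromise = latestPromise.toLowerCase();
  const loweredThought = thought.toLowerCase();
  return (
    promiseKeywords.some((k) => loweredPromise.includes(k)) &&
    betrayalKeywords.some((k) => loweredThought.includes(k))
  );
}
```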

UI Design

I wanted the UI to feel like a command center rather than a dashboard template.

So the visual choices leaned toward:

  • dark war-room surfaces
  • luminous accents
  • stacked feed cards
  • animated chips
  • alert flashes on betrayal

The layout is split:

  • left column: board state, agent summaries, simulation context
  • right column: communication stream

That makes the public-vs-private tension easy to understand conceptually, even if the board logic itself can still be improved visually.

Why The Local Fallback Matters

A prototype like this should still run without live API keys.

So the app includes deterministic fallback personas inside AgentController. That means:

  • the demo remains interactive
  • the UI can be tested offline
  • contributors can work on state and presentation without setting up model providers first

This is a small engineering decision that improves developer experience a lot.
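A deterministic fallback persona might look like this, sketched under the assumption that it returns the same JSON shape as a live model call; the function and canned lines are illustrative, not the repo's actual implementation:

```typescript
interface FallbackOutput {
  thought: string;
  whisper: null;
  public_message: string;
  move: string;
}

// Canned, deterministic reply keyed by persona, used when no API key is set.
function localFallback(personaTitle: string, turn: number): FallbackOutput {
  const lines: Record<string, string> = {
    "The Optimizer": "The expected value favors cooperation, for now.",
    "The Skeptic": "Someone here is lying. Probably all of you.",
  };
  return {
    thought: `Deterministic turn ${turn} strategy.`,
    whisper: null,
    public_message: lines[personaTitle] ?? "I pass my turn.",
    move: "HOLD",
  };
}
```

Because the fallback matches the live output shape, the turn engine never needs to know which path produced a move.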

What I’d Improve Next

The biggest current limitation is readability of the board state during live play.

The strongest next improvements would be:

  • move trails between turns
  • explicit challenge panels
  • alliance graph visualization
  • turn-by-turn replay mode
  • chip counts embedded directly onto board sectors
  • a "why this happened" explainer for coalition outcomes

From an architecture standpoint, I would also:

  • move model calls server-side
  • persist runs in a database
  • add seeded deterministic simulation mode
  • add replay exports
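
For the seeded deterministic mode, a small seeded PRNG is enough to make runs reproducible. Mulberry32 is a well-known public-domain generator; this is a suggestion, not something the repo currently ships:

```typescript
// Mulberry32: a tiny seeded PRNG. Same seed, same sequence,
// so replays and tests become deterministic.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // in [0, 1)
  };
}

const rng = mulberry32(42);
```

Threading `rng` through tie-breaks and persona fallbacks (instead of `Math.random`) would make every run exactly replayable from its seed.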

Contribution Opportunities

This is a strong project for contributors because it has work at multiple levels:

  • UI polish
  • state management
  • prompt engineering
  • multiplayer or human-agent modes
  • analytics and replay tooling

Some good starter issues:

  • add an event timeline scrubber
  • implement per-agent whisper inbox panes
  • visualize trust as a graph
  • add challenge breakdown cards
  • add simulation presets

Final Thoughts

Most AI apps are optimized for answers.

This one is optimized for motives.

That makes it useful not just as a game, but as a lens into multi-agent systems, incentive design, and how quickly "alignment" unravels when survival and social ambiguity are both part of the rules.

If you're building agent systems, simulations like this are worth paying attention to. They reveal failure modes, persuasion patterns, and emergent strategies much faster than polished demos ever will.

Source Code

Github Repo: https://github.com/harishkotra/So-Long-Sucker-Protocol