KillMyIdea is a developer-focused experiment: can we make AI feedback feel like a real argument instead of a long report?
Short answer: yes, by combining:
- strict output schemas,
- parallel orchestration,
- role-specific prompts,
- and UI pacing that feels live.
## Product Goal
Given a user idea, generate a sharp debate between:
- Optimist (upside)
- Skeptic (assumption attack)
- Risk Analyst (legal/financial/operational failure modes)
- Moderator (verdict + scores + actions)
Target latency: roughly 8–10 seconds end to end.
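To make that concrete, a finished debate might look something like this (the field names here are illustrative, not the repo's exact types):

```typescript
// Hypothetical shape of a finished debate (illustrative only).
interface DebateResult {
  optimist: string[]; // upside arguments
  skeptic: string[]; // attacked assumptions
  risk: string[]; // legal/financial/operational failure modes
  moderator: {
    summary: string;
    verdict: "Likely to Fail" | "Needs Pivot" | "Promising with Conditions";
    optimistScore: number; // 1-10
    skepticScore: number; // 1-10
    riskLevel: "Low" | "Medium" | "High";
    actions: string[]; // 1-2 concrete next steps
  };
}
```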
## High-Level Architecture

### Why a Custom Orchestrator (Instead of Full LangGraph)?
LangGraph is powerful for long-lived workflows, but this app needs:
- fast and deterministic flow,
- shallow state,
- predictable output shape.
So a lightweight custom orchestrator in `lib/debate-orchestrator.ts` is enough, and it is simpler to maintain.
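The whole flow fits in one short, linear pipeline. A simplified sketch of its shape, with paraphrased helper names and the critique/refine rounds elided:

```typescript
// A model call: role label and prompt in, short answer out (provider wiring omitted).
type AgentCall = (role: string, prompt: string) => Promise<string>;

// Simplified pipeline: context -> initial (parallel) -> critique/refine -> moderator.
export async function runDebate(idea: string, call: AgentCall) {
  const context = await call("Context", `Summarize key facts about: ${idea}`);

  // Initial round: the three debaters argue concurrently.
  const [optimist, skeptic, risk] = await Promise.all([
    call("Optimist", `${context}\nArgue the upside of: ${idea}`),
    call("Skeptic", `${context}\nAttack the assumptions behind: ${idea}`),
    call("Risk Analyst", `${context}\nList failure modes of: ${idea}`),
  ]);

  // Critique and refine rounds follow the same parallel pattern (omitted here),
  // then the moderator turns everything into a verdict.
  const verdict = await call("Moderator", [optimist, skeptic, risk].join("\n---\n"));
  return { optimist, skeptic, risk, verdict };
}
```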
## Core Orchestration Code

### Parallelized agent execution

```typescript
// All three debaters produce their opening arguments concurrently,
// so the initial round costs one model round-trip instead of three.
const [optimistInit, skepticInit, riskInit] = await Promise.all([
  initialRound("Optimist", optimistPrompt),
  initialRound("Skeptic", skepticPrompt),
  initialRound("Risk Analyst", riskPrompt)
]);
```
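`initialRound` isn't shown in the excerpt; a plausible sketch, assuming an OpenAI-compatible chat completion call (the prompt wording, token budget, and `OPENAI_MODEL` variable are my assumptions):

```typescript
import OpenAI from "openai";

// Works with any OpenAI-compatible provider via env configuration.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_BASE_URL,
});

// One agent's opening statement: role-specific system prompt, tight output budget.
async function initialRound(role: string, systemPrompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: process.env.OPENAI_MODEL ?? "gpt-4o-mini",
    max_tokens: 200, // enforces the short, 3-4 sentence style
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: `As the ${role}, give your opening argument in 3-4 sentences.` },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```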
### Strict structured moderator output

```typescript
// The moderator must return exactly this shape; anything else fails validation.
const moderatorSchema = z.object({
  summary: z.string().min(20).max(900),
  verdict: z.enum(["Likely to Fail", "Needs Pivot", "Promising with Conditions"]),
  optimistScore: z.number().int().min(1).max(10),
  skepticScore: z.number().int().min(1).max(10),
  riskLevel: z.enum(["Low", "Medium", "High"]),
  actions: z.array(z.string().min(5).max(140)).min(1).max(2)
});
```
### Robust parsing strategy

- Agent lines can fall back from plain text into `{ text }`.
- Structured outputs (context/moderator) run strict parsing plus a repair retry.

This protects the UX from model format drift.
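A minimal sketch of that strict-parse-plus-repair loop, reusing `moderatorSchema` from above; `callModel` is a hypothetical stand-in for the repo's model helper:

```typescript
import { z } from "zod";

// Hypothetical model helper; stands in for the repo's actual call.
declare function callModel(prompt: string): Promise<string>;

type ModeratorResult = z.infer<typeof moderatorSchema>;

// Best-effort JSON parse: format drift becomes a validation failure, not a crash.
function tryJson(raw: string): unknown {
  try { return JSON.parse(raw); } catch { return {}; }
}

// Strict parse, then one repair retry that feeds the validation errors back.
async function parseModeratorOutput(raw: string): Promise<ModeratorResult> {
  const first = moderatorSchema.safeParse(tryJson(raw));
  if (first.success) return first.data;

  const repaired = await callModel(
    `Your previous JSON failed validation:\n${first.error.message}\nReturn corrected JSON only:\n${raw}`
  );
  return moderatorSchema.parse(tryJson(repaired)); // throws if still invalid
}
```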
## Frontend UX Decisions

The UI avoids a wall of text; it's paced like a live debate:
- sequential reveal of cards,
- explicit stage tracker during generation,
- final verdict card with 3 metrics,
- share-card export (text + image clipboard).
Relevant file: `app/page.tsx`.
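A minimal sketch of the sequential-reveal pacing, assuming a flat card list (the real implementation in `app/page.tsx` will differ):

```tsx
import { useEffect, useState } from "react";

// Hypothetical card shape; the real one lives in app/page.tsx.
type DebateCard = { role: string; text: string };

export function DebateFeed({ cards }: { cards: DebateCard[] }) {
  // Reveal one card at a time so the debate reads as live, not as a dump.
  const [visibleCount, setVisibleCount] = useState(0);

  useEffect(() => {
    if (visibleCount >= cards.length) return;
    const timer = setTimeout(() => setVisibleCount((n) => n + 1), 600);
    return () => clearTimeout(timer);
  }, [cards.length, visibleCount]);

  return (
    <ol>
      {cards.slice(0, visibleCount).map((card) => (
        <li key={card.role}>
          <strong>{card.role}:</strong> {card.text}
        </li>
      ))}
    </ol>
  );
}
```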
## API Contract

`POST /api/debate` accepts either:
- a rich payload (`title`, `description`, `context`), or
- a compact payload (`idea`, `context`).

It returns both concise fields and the full structured result. This dual-format input makes the endpoint easier to integrate with other clients.
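A sketch of how the handler might normalize the two shapes into one internal form; the schemas are inferred from the contract above, not copied from the repo:

```typescript
import { z } from "zod";

// Rich shape: title/description/context. Compact shape: idea/context.
const richBody = z.object({
  title: z.string(),
  description: z.string(),
  context: z.string().optional(),
});
const compactBody = z.object({
  idea: z.string(),
  context: z.string().optional(),
});
const debateBody = z.union([richBody, compactBody]);

// Collapse both shapes into one internal form before orchestration.
function normalize(body: z.infer<typeof debateBody>) {
  if ("idea" in body) {
    return { title: body.idea, description: body.idea, context: body.context };
  }
  return { title: body.title, description: body.description, context: body.context };
}
```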
## Performance Notes

Performance wins come from:
- `Promise.all` in all multi-agent stages,
- short prompt constraints,
- low-token outputs (3–4 sentences max).

The primary latency bottleneck is provider round-trip time, not local compute.
## Developer Experience

- `npm run lint` and `npm run build` validate deployment health.
- The app is Vercel-ready with env-driven model/provider switching.
- The OpenAI-compatible setup supports non-OpenAI providers via `OPENAI_BASE_URL`.
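Concretely, provider switching is pure configuration. An illustrative `.env.local` (only `OPENAI_BASE_URL` is confirmed above; the other variable names are standard or assumed):

```bash
# .env.local (illustrative values)
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://your-provider.example/v1  # any OpenAI-compatible endpoint
OPENAI_MODEL=gpt-4o-mini
```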
## What I'd Build Next

- SSE streaming per stage (context, initial, critique, refine, moderator); a rough sketch follows this list.
- Persistent DB + analytics for controversial ideas.
- Qdrant memory for historical retrieval and better context grounding.
- Auth + rate limits for public launch.
- Prompt versioning with offline evaluation harness.
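For that SSE item, here is roughly what per-stage streaming could look like in a Next.js route handler; `runStage` is a hypothetical stand-in for the orchestrator's stage functions:

```typescript
// Sketch of app/api/debate/stream/route.ts.
// Hypothetical per-stage runner; stands in for the real orchestrator calls.
async function runStage(stage: string): Promise<unknown> {
  return { stage }; // stub
}

export async function POST(_req: Request) {
  const encoder = new TextEncoder();
  const stages = ["context", "initial", "critique", "refine", "moderator"];

  const stream = new ReadableStream({
    async start(controller) {
      for (const stage of stages) {
        const result = await runStage(stage);
        // One named SSE event per debate stage.
        controller.enqueue(
          encoder.encode(`event: ${stage}\ndata: ${JSON.stringify(result)}\n\n`)
        );
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream", "Cache-Control": "no-cache" },
  });
}
```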
KillMyIdea works because it treats AI output as structured debate events, not freeform chat.
That shift makes the product faster, clearer, and more shareable.
GitHub repo: https://github.com/harishkotra/kill-my-idea
