A 600-line bash script, four AI personas, zero shared context. 52 ideas in 4 minutes.
Back when I was running engineering and product teams I used to drag people from all over the office into ideation sessions. Architects, product folks, the security guy who never wanted to be there. We'd cover the walls in sticky notes, draw terrible diagrams on whiteboards, and argue about approaches until something clicked. It took me years to find a version of the IDEO process that kept things loose enough to stay creative but structured enough to actually produce something useful.
Now I run a one-person company. No VP of Engineering. No product team. Nobody to pull into a conference room. But this morning four senior executives independently generated 52 ideas, cross-voted on each other's proposals with mandatory improvement suggestions, merged overlapping concepts, and produced ranked PRDs. While I was taking my kids to school.
They're AI personas. We lost the sticky notes and the bad whiteboard drawings, but the workflow still works. And the output looks nothing like what a single prompt produces.
## The single-prompt trap
You've done this. You ask an LLM for 10 ideas and you get 10 ideas. They're fine. They all sound like they came from the same person because they did. No tension. No cross-pollination. Nobody saying "that's clever but have you considered the security implications?"
The problem isn't intelligence. It's perspective diversity. One LLM instance optimizes for coherence within a single worldview. It won't generate an idea and then attack it from a completely different angle because that would be incoherent within the same generation.
You can't get genuine disagreement from a single context window.
## The fix: zero shared context
Run multiple LLM instances. Give each its own process, its own system prompt, the same goal. No persona sees another's output until voting begins.
- **Phase 1, ideate:** 4 independent LLM calls (one per persona)
- **Phase 2, vote:** 4 independent LLM calls (each sees the others' ideas, never its own)
- **Phase 3, merge:** 1 facilitator call (consolidates and ranks)
- **Phase 4, produce:** 1 call (VP Product drafts PRDs from the winners)
That's the IDEO process in miniature: ideate without judgment, converge through structured critique. Automated end-to-end in bash.
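Stripped of the prompts and personas, the whole loop can be sketched in a few lines of bash. This is my own minimal sketch, not the repo's script: `llm` is a stub standing in for the real `gemini`/`claude` call, and the file names are illustrative.

```shell
#!/usr/bin/env bash
# Minimal sketch of the four phases; `llm` is a stub for the real CLI call.
set -euo pipefail

PERSONAS="engineering product security devops"
llm() { cat; }  # stand-in: replace with the real LLM CLI invocation

# Phase 1: diverge. One independent call per persona, zero shared context.
for p in $PERSONAS; do
  echo "ideas from $p" | llm > "phase1-ideas-$p.md"
done

# Phase 2: vote. Each persona sees every ideas file EXCEPT its own.
for p in $PERSONAS; do
  cat $(ls phase1-ideas-*.md | grep -v "$p") | llm > "phase2-votes-$p.md"
done

# Phase 3: merge. One facilitator call consolidates and ranks.
cat phase2-votes-*.md | llm > "phase3-ranked.md"

# Phase 4: produce. One call drafts PRDs from the winners.
llm < "phase3-ranked.md" > "phase4-prds.md"
```

The key structural point is in Phase 2: the `grep -v "$p"` is what enforces "you never see your own output until voting begins."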
## Personas that create friction
Each persona is a markdown file. Not "pretend you're an engineer," but a specific set of concerns that collide productively.
- **VP Engineering** cares about architecture purity, test coverage, performance budgets.
- **VP Product** cares about user value, adoption paths, metric impact.
- **VP Security** cares about attack surfaces, data integrity, threat models.
- **VP DevOps** cares about operability, deployment complexity, monitoring.
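For a concrete (hypothetical) example, a persona file can be as small as this. The exact wording and fields here are my own sketch, not the repo's actual templates:

```markdown
# VP Security

You are the VP of Security. You evaluate every idea through threat models,
attack surfaces, and data integrity.

Your reflexive questions:
- What happens when this input is adversarial?
- What new surface does this change expose?
- Can a misclassification silently corrupt downstream data?
```

The point is not length; it's that each file gives its persona a different default question to ask first.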
Same problem. Four different entry points. Engineering proposes a trie-based type classifier. Product proposes a feedback flywheel. Security proposes negative constraints that block misclassification. DevOps proposes a parallelized pipeline with cold-storage caching.
52 ideas from 4 personas. Not 10 variations of the same idea from one.
## Voting is where it gets interesting
Phase 2 is what separates this from just running the same prompt four times. Each persona votes on the other personas' ideas only; you can't vote for your own. And every vote requires a mandatory improvement suggestion.
```yaml
rules:
  - you have exactly 3 votes
  - you cannot vote for your own ideas
  - for every vote, suggest a specific improvement
  - the improvement must make the idea stronger
```
VP Security votes for VP Engineering's type classifier and adds: include negative suffix tries to prevent type-spoofing. VP Engineering votes for VP Product's feedback flywheel and adds: algorithmically calculate ROC-optimized confidence thresholds from the ground truth.
The ideas get better through voting. Not just ranked.
## Real numbers
I ran this on a knowledge extraction problem: pushing deterministic entity extraction from 47.7% recall toward 60%. Constraints: zero LLM calls, sub-3-second latency, YAML-configurable.
| Rank | Idea | Votes | Merged from |
|---|---|---|---|
| 1 | Deterministic Classification Engine | 2 | 4 ideas across Eng, Prod, Sec |
| 2 | The Emergent Flywheel | 2 | 3 ideas across Prod, Eng, DevOps |
| 3 | Passive Entity Harvester | 1 | 4 ideas across Eng, Prod, Sec |
| 4 | Cross-Sentence Entity Registry | 1 | 1 idea from Prod |
| 5 | Vectorized Aho-Corasick Engine | 1 | 1 idea from Eng |
52 raw ideas. 15 unique after merge. Top 5 with cross-functional votes. PRD drafts ready to go.
The top idea was independently proposed by three personas from three angles: a trie-based classifier, a subtype pruner, and type-confusion guardrails. The merge revealed they were all solving the same problem from different entry points. The combined concept was stronger than any individual proposal.
No single prompt produces that convergence.
## Under the hood
It's a bash script. Heredoc prompts piped to the Gemini CLI (or Claude CLI; it auto-detects which is installed). Each phase writes its output to a file:
```bash
# Phase 1: each persona ideates independently
for PERSONA in $PERSONAS; do
  cat > "$PROMPT_FILE" <<PROMPT
You are adopting the following persona:

$(cat "docs/personas/${PERSONA}.md")

Generate as many ideas as you can (aim for 8-15).
Think from your persona's unique perspective.
No idea is too crazy. Do NOT evaluate, just generate.

=== GOAL ===
$(cat "$GOAL_FILE")
PROMPT

  cat "$PROMPT_FILE" | gemini > "phase1-ideas-${PERSONA}.md"
done
```
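The auto-detection is the boring kind of glue. Something like the following works; note that `claude -p` (non-interactive print mode) is my assumption about the flag, not taken from the repo:

```shell
#!/usr/bin/env bash
# Backend auto-detection sketch; LLM stays empty if neither CLI is installed.
if command -v gemini >/dev/null 2>&1; then
  LLM="gemini"
elif command -v claude >/dev/null 2>&1; then
  LLM="claude -p"   # assumed flag for non-interactive use
else
  LLM=""
fi
echo "LLM backend: ${LLM:-none found}"
```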
Phase 2 compiles all the ideas, builds a document for each persona containing only the other personas' ideas, and asks for votes. Phase 3 merges and tallies. Phase 4 writes PRDs from the winners.
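That ballot-building step is the one worth seeing. Here's a standalone sketch of it (my own file names and rule wording, with stubbed Phase 1 output so it runs on its own):

```shell
#!/usr/bin/env bash
# Phase 2 ballot construction sketch: each persona's ballot carries the
# voting rules plus ONLY the other personas' ideas.
set -euo pipefail
PERSONAS="engineering product security devops"

# Stub Phase 1 output so this sketch runs standalone.
for p in $PERSONAS; do
  echo "idea from $p" > "phase1-ideas-$p.md"
done

for p in $PERSONAS; do
  {
    echo "You have exactly 3 votes. You cannot vote for your own ideas."
    echo "Every vote must include a specific improvement suggestion."
    echo
    for other in $PERSONAS; do
      if [ "$other" != "$p" ]; then
        cat "phase1-ideas-$other.md"   # exclude the voter's own ideas
      fi
    done
  } > "phase2-ballot-$p.md"
done
```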
Runtime: about 4 minutes. Cost: a few cents, or zero on Gemini's free tier.
## It's open source
The agent workflow template includes:

- `ideo-sprint.sh`: the full IDEO sprint (this article)
- `vp-review.sh`: automated VP reviews of sprint plans and dev reports
- persona templates with customizable VP definitions
- a sprint workflow covering the plan, review, execute, evaluate lifecycle
Drop `scripts/agentic/` and `docs/personas/` into any repo and you have a multi-agent workflow running in your terminal.
## The bigger pattern
Honestly the thing I miss most about running teams is the creative friction. Getting smart people in a room who see the same problem differently and letting them go at it. That energy is hard to replicate.
This was my attempt to bring some of that back into a world where I'm building solo with agents. We don't have the sticky notes or the someone-brought-donuts energy, but the core of it still works: give independent perspectives the same problem, let them collide, and see what survives.
I've started applying the same idea beyond brainstorming. Code review where an architect persona and a security persona independently review the same PR. Decision making where personas evaluate a proposal before seeing each other's takes. Risk analysis where domain experts independently flag concerns and then merge.
Always the same pattern. Diverge independently. Converge through structured critique.
Your AI agent doesn't need to be smarter. It needs to argue with itself.
The agent workflow template is open source. Works with Gemini CLI, Claude CLI, or both. Contributions welcome.
Tags: #ai #agents #devtools #opensource
Hey all, author here. Happy to answer questions about the implementation.
The bash script is intentionally low-tech. No frameworks, no orchestration layer, no agent SDK. Just heredocs piped to a CLI. I wanted something any developer could read top to bottom in 10 minutes and understand completely. If you need something fancier you probably have different problems than I do.
I built this yesterday so I'm curious to see how it holds up over time. If anyone tries it with different LLM backends or different persona structures I'd love to hear how it goes. The repo is set up to work with Gemini CLI or Claude CLI but I haven't tested others yet.