My GitHub has a graveyard.
Several personal projects, each following the same lifecycle: get an idea, write some frontend, push to GitHub, deploy to Vercel, static page appears. Then nothing.
Not because I ran out of time. Not because the code got hard. It's because one day I'd open the project and genuinely not know what to do next — which feature to build, whether a feature was worth building at all, how to make a UI decision without second-guessing it for thirty minutes.
I'd stare at the editor, then close it.
This post is about how I fixed that. Not with more discipline — with structure I'd been missing.
## The Real Problem: Roles Disappear
When I work on a professional team, I never have the "what do I do next" problem.
Before any sprint starts, we run an IPM (Iteration Planning Meeting): story cards get reviewed, estimates get made, must-haves get separated from should-haves. Before any significant feature kicks off, there's a Feature Kick-off where BA clarifies requirements, QA identifies test scenarios, and UX locks down interaction specs — all before anyone opens their editor.
Then during development, every story card gets challenged from multiple angles. As a developer, I'd argue from technical feasibility, time cost, and cross-team dependencies. PM would question whether a feature deserved to be in the sprint at all. BA would turn a vague description into concrete acceptance criteria. QA would ask about edge cases before anyone else had thought about them.
None of this felt like pressure. It was just what happened when people with different functional lenses looked at the same problem.
On a solo side project, those lenses all disappear. Nobody asks "is this actually the most important thing right now." Nobody asks "what's the user's actual goal here, not yours." Nobody asks "what happens when this fails."
The questions don't get asked. The project drifts. Then it stops.
One day I found myself thinking: if my team were running this project, how would they think about it? That question led to what I'm doing now.
## The Setup: A Gitignored Team
I create two folders in the project root and add them to `.gitignore`:
```
your-project/
├── .gitignore        # includes /team/ and /planning/
├── team/             # role prompt files
│   ├── PM.md
│   ├── BA.md
│   ├── UX.md
│   ├── TL.md
│   └── QA.md
├── planning/         # iteration planning
│   ├── epics.md
│   └── story-cards/
│       ├── epic-1/
│       └── epic-2/
└── src/              # actual code
```
These folders don't go into the repo. They're the project's thinking layer, not its output layer.
Each role file has a brief responsibility definition and accumulates decisions over time. The roles stay narrow on purpose:
| Role | Focus |
|---|---|
| PM | Value judgment, MVP scope, priority calls |
| BA | Logic breakdown, flow modeling, edge case identification |
| UX | Interaction logic, information architecture, visual spec |
| TL | Technical approach, architecture decisions, complexity control |
| QA | Boundary conditions, error scenarios, quality gates |
The constraint matters. If PM is also making technical decisions, the role loses its purpose. The value comes from friction between different functional perspectives, not from one perspective that covers everything.
## Two Prompt Modes: Diverge, Then Converge
The most common mistake when using AI for role-playing is using the same prompt for everything. I maintain two distinct modes.
### 🔀 Brainstorm Mode
**When to use:** Direction is unclear. I need to surface blind spots before committing to anything.
```markdown
# Brainstorm Session

## Project context
[One or two sentences describing the current project]

## Current idea / question
[What I'm exploring — can be vague]

## Each role, please respond:

**PM**: What real user problem does this solve?
Is it worth building now? If only one thing can stay, what is it?

**BA**: Break this apart — what assumptions are hidden here?
Where are the boundaries? Under what conditions does this fail?

**UX**: What is the user's actual goal in this scenario?
Does the proposed interaction conflict with their mental model?

**TL**: How feasible is this technically?
Is there hidden complexity? Can the current architecture support it?

**QA**: Where is this most likely to break?
Which edge cases need to be figured out right now?

---
Rule: Each role asks questions and surfaces risks only. No solutions.
Goal: expose blind spots. Don't converge yet.
```
The rule at the bottom is load-bearing. If roles are allowed to give solutions, the AI collapses everything into one synthesized answer and the multi-perspective friction disappears.
### 🎯 Convergence Mode
**When to use:** Direction is set. I need a decision, not more analysis.
```markdown
# Convergence Session · [Role Name]

## I need you to reason as [UX / PM / BA / TL / QA]

## What's already decided
[One or two sentences — the committed direction]

## The specific question
[One question. Not more than one.]

## Constraints
- [Time / technical / resource limits]
- [Decisions that can't be changed]

## Output format I need
- A clear recommendation (no "it depends")
- Reasoning in one or two sentences
- If there's a tradeoff, name what you gave up and why
```
The "no it depends" constraint is the key. The role already has the context and constraints. It should give a judgment, not a menu of options.
## What This Actually Looks Like: Design System
The place I've felt this most clearly is UX consistency — and it's the place where AI coding causes the most silent damage.
When you code with AI directly, the AI solves "does this component work." Nobody is tracking "is this component consistent with what already exists." The first card gets `aspect-ratio: 16/9`. On the second card you forget and get 4:3. By the time the page is together, there's visual inconsistency — and fixing it means touching every component, not just changing a CSS value.
This is what I'd call the "AI feel" in a project: nothing is technically wrong, but nothing quite coheres. The design language is slightly different in every section because each section was generated without reference to the others.
My solution is to make `UX.md` a living design system document. Early in the project, I run Convergence Sessions to establish core specs:
```markdown
# UX · Design System

## Images
- Card cover: aspect-ratio 1:1
  (Sources are third-party uploads, uncontrolled ratios — square has highest fault tolerance)
- Detail page hero: aspect-ratio 16/9

## Spacing
- Component padding: 16px (small) / 20px (standard) / 32px (large)
- Card gap: 20px

## Color
- Primary: #2b6cb0
- Text: #1a202c (ink) / #718096 (muted)
- Border: rgba(0,0,0,0.08)

## Motion
- Card hover: translate-y -4px + shadow, 200ms ease
- Transitions: 0.2s ease-out throughout
```
Before writing any new component, I paste the relevant sections of this file and tell the AI "implement to this spec."
When something feels wrong but I can't pinpoint it, I open a Convergence Session and cue the UX role:
```
Review this component from a UX perspective. What's inconsistent with our design system?
```
The role shifts from decision-maker to auditor — turning a vague "something's off" into a concrete list of things to fix.
The point isn't that AI is doing the design. The point is that I'm using AI to maintain my control over the design direction, rather than letting it drift with every new feature.
## Planning: From Brainstorm to Epics
The `planning/` folder gives the project a rhythm.
`epics.md` isn't predefined — it's the output of running Brainstorm followed by Convergence. The flow:
```
Brainstorm Session (all roles)
  → surface directions, risks, unexplored angles
  → collect candidate directions

Convergence Session · PM
  → given constraints (time, energy, target user)
  → judge each direction: worth doing now / later / never
  → output: epics with clear goals
```
Once I have epics, each breaks into story cards in the corresponding folder. Before starting a new sprint, I run the equivalent of an IPM with the PM role — paste in the story card list, get a priority call, commit to what goes in.
Before any significant feature, I run the kick-off equivalent: BA session for requirements, UX session for interaction spec, QA session for edge cases. All before the editor opens.
```markdown
# Epics

## Epic 1 · Homepage & Project Showcase
Goal: visitor understands who I am and what I've built within 30 seconds

## Epic 2 · Playground Page
Goal: full project grid with consistent visual language across all cards

## Epic 3 · Multilingual Support
Goal: EN/ZH toggle with locale-aware routing
```
Having this means every time I open the project, I know exactly what to work on and why. The "open editor, stare, close editor" loop disappears — not because I'm more motivated, but because the structure answers the question before it becomes a problem.
## What Actually Changed
The tools are one thing. The shift in how I think about the project is more interesting.
Working on a team, these perspectives come passively. You get challenged on prioritization by PM, on requirements by BA, on user goals by UX. You absorb those lenses over time.
Solo, you have to build them deliberately. What I've found is that after running enough Brainstorm and Convergence sessions, the questions start coming naturally. Before writing a component, I'll find myself asking "does this match our design system" without opening `UX.md`. Before committing to a feature, I'll ask "is this actually the most important thing right now" without starting a PM session.
The roles change how you look at your own project. You stop being just the person implementing things and start being the person deciding what gets implemented and why.
## What This Can't Replace
Honest limits:
- Real user feedback: The UX role simulates user perspective but doesn't replace actual users
- Team's tacit knowledge: A BA with five years of domain experience carries context no prompt can replicate
- Being genuinely challenged: AI roles will surface risks and ask hard questions, but they won't tell you your whole idea is wrong
This approach works best when you have some directional sense already and need structure to execute — not when you're searching for a problem worth solving.
If you want to see what this process has produced, yukiuix.com is the project I've been running it on.
The part I haven't fully figured out: how to make AI roles push back harder when I've already made up my mind. They're good at surfacing things I haven't thought about — less good at genuinely challenging directions I'm committed to. If you've worked through that, I'd be curious how.