The Feature That Wasn't in the Design Doc
When I started building BJJ Techniques — a BJJ (Brazilian Jiu-Jitsu) technique learning app for iOS — I had a clear vision: a searchable database of techniques, organized by position and category, with step-by-step instructions and YouTube videos.
The "Technique Tree" — a visual map showing how techniques connect and flow into each other — was not in that design doc. Not even close.
It emerged entirely from persona interviews.
Here's exactly how that happened, including the specific research I used to make those interviews actually work.
About the App
BJJ Techniques is an iOS app for learning Brazilian Jiu-Jitsu techniques systematically (available on the App Store).
Key features:
- Technique Library — Search techniques by category: submissions, sweeps, guard passes, and more
- Technique Detail Pages — Overview, step-by-step breakdowns, YouTube videos, and related techniques in one place
- Technique Tree — Visualize how techniques connect from any starting position
- Learning Paths — Structured weekly curriculum for white belts through early blue belts
The Tool I Used: KaizenLab
Before getting into the personas, a quick note on the workflow.
I run all my hypothesis validation in KaizenLab — a web app I built myself to operationalize the lean hypothesis testing methodology from Toshiaki Ichitani's book Build the Right Thing Right.
The core idea: before writing code, define your hypotheses explicitly, design experiments to test them, and record what you learn — in a structured way that builds up over time. KaizenLab handles hypothesis canvases, persona management, AI pseudo-interview simulation, and validation cycle tracking, all in the browser. It also has MCP (Model Context Protocol) integration so AI agents can operate it directly.
Everything in this article — the personas, the interviews, the feature decision — was run through KaizenLab. I'm writing this both as a case study in persona-driven validation and as a real-world test of the tool I'm building.
Three Personas, Three Real Frustrations
Most indie hackers I know create personas like this: "User A, 25-35, tech-savvy, wants X." Useful, but shallow. The responses you get from shallow personas are shallow too.
I created three personas with significantly more depth:
Tanaka Shota, 28, IT engineer, white belt
Frustrations:
- Forgets technique names and steps right after learning them
- YouTube search gives fragmented, disconnected information
- Feels bad asking senior students the same questions repeatedly
Goals:
- Learn BJJ techniques systematically
- Improve the quality of twice-weekly training sessions
- Reach competition level
Sato Misaki, 34, marketing manager (reduced hours), female white belt
Frustrations:
- Trains only once a week, progress feels too slow
- Most tutorial videos feature male practitioners with strength-based approaches — unclear if techniques work for her body type
- Doesn't know what to prioritize learning
Goals:
- Maximize limited training time
- Find techniques that work for smaller practitioners
- Understand what's most important to learn right now
Suzuki Daisuke, 42, sales manager, blue belt (also coaches beginners)
Frustrations:
- Feels like fundamentals are shaky despite his rank
- Gets confused by techniques named in English, Portuguese, and Japanese
- Can't rely on physical dominance — technique precision is critical at his age
Goals:
- Fill gaps in fundamental technique knowledge
- Organize options by position
- Build a personal game plan
These aren't marketing archetypes. Each one has specific contradictions, specific constraints, and specific contexts that change what they actually want from an app.
KaizenLab's persona management view — three personas organized as cards, each with goals, frustrations, and psychological state dimensions.
The Research That Made Interviews Work
Here's where it gets interesting.
I use KaizenLab's AI pseudo-interview feature to simulate conversations with personas before talking to real users. The point is to stress-test your questions and spot weak assumptions early — before wasting anyone's time.
But I found that standard AI personas give obvious, shallow answers. "Yes, that feature would be useful." "I'd like better search." These are useless.
What changed the quality dramatically was applying principles from the HumanLM research paper from Stanford (2026), which studied how to make AI-simulated participants produce more realistic, human-like responses.
The key insight from HumanLM: surface attributes aren't enough. You need to model psychological state dimensions.
KaizenLab's persona editor has dedicated fields for all three:
1. Stance — What's their position on specific topics?
Suzuki's stance on new tools: "I've been doing BJJ for years.
I'll try a new app if someone I respect recommends it,
but I won't pay for something I haven't validated myself."
2. Emotional tendencies — How do they respond emotionally?
Sato's tendencies: "Gets discouraged when progress feels
invisible. Motivated by visible milestones. Anxious about
being the only woman who doesn't understand something."
3. Communication style — How do they express needs?
Tanaka's style: "Direct, specific, data-oriented. Won't say
'I want a feature' but will say 'I tried to look up
arm triangle yesterday and spent 20 minutes cross-referencing
three different YouTube videos.'"
When you add these dimensions to a persona, the AI stops giving generic answers. Sato doesn't say "I want a learning path." She says "I have 45 minutes before I need to pick up my kid, and I need to know exactly which two techniques to drill today." That's a different design requirement entirely.
What the Interviews Actually Found
I ran AI pseudo-interviews with all three personas, asking them to describe how they currently learn and track BJJ techniques.
The surprising finding: all three independently requested some form of technique tree or learning path, a feature I had neither planned nor intended to build.
KaizenLab's interview results view — insights extracted from AI pseudo-interviews across all three personas, automatically organized by theme.
But they wanted completely different things.
Tanaka (white belt, IT background)
He wanted an RPG-style skill tree — a branching diagram starting from positions, with unlockable nodes. Closed guard → armbar OR sweep → mount → choke.
"Feeling of progression. Like I know where I am and what unlocks next."
This is a learned pattern from gaming and online learning platforms. He wanted the same dopamine loop applied to martial arts.
Sato (female white belt, time-constrained)
She wanted a learning path integrated into the technique database. Not a map of everything — a filtered view of only what's relevant for her level right now.
"Show me the 5 techniques that matter most for where I am. Lock everything else. I don't want to see what I'm not ready for."
This is a completely different mental model from Tanaka's. He wants the full map with fog of war. She wants a guided tour.
Suzuki (blue belt, coaching role)
He wanted a custom game plan builder — select from the full technique library to build a personal map of his game. Multiple plans: one for Gi, one for No-Gi, one for competition.
"When I'm coaching a white belt, I want to show them my game plan and say 'start here.' Not a generic beginner curriculum."
Different again. He already knows the techniques. He wants a tool for organizing and communicating his approach.
Why Multiple Personas Converge = High Confidence
Here's the validation principle that made me confident enough to build this:
When multiple personas independently surface the same underlying need — even if they describe it differently — that's a strong signal.
Tanaka, Sato, and Suzuki each came from different places:
- Different experience levels
- Different learning constraints
- Different use cases (self-study vs. coaching)
- Different mental models (gaming vs. workflow vs. curriculum)
But all three had the same core problem: no way to see how techniques relate to each other and where they stand within that structure.
If only Tanaka had mentioned it, I might have dismissed it as one person's gaming preference. If only Suzuki mentioned it, I might have assumed it was a niche need for advanced practitioners.
Three independent hits, three different angles, same underlying gap.
That's when I decided to build it.
The Feature That Emerged
The Technique Tree I ended up designing addresses all three personas' needs, shipped in two phases:
Phase 1 (free, all users): Position-based technique map
- Tap a position → see branching submissions, sweeps, passes, escapes
- Synced with learning progress (mastered = color, not yet = gray)
- Uses existing `relatedTechniqueIds` and `counterTechniqueIds` data
Phase 2 (premium): Custom game plan builder
- Select from the full technique library to build your personal map
- Save multiple plans (Gi / No-Gi / competition)
- Share with training partners
The existing data structure already supported this. The connections between techniques were defined. I just hadn't built a UI that surfaced them visually.
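As a sketch of what "the data already supported this" means in practice: given techniques that each carry `relatedTechniqueIds` (a linking field the app already stores), the tree view reduces to a cycle-safe traversal from a starting position. The `Technique` and `TreeNode` shapes below are illustrative, not the app's actual models:

```typescript
// Derive a technique tree from existing link data. Technique graphs are
// rarely acyclic (armbar relates back to closed guard), so a seen-set
// guards against infinite recursion.
interface Technique {
  id: string;
  name: string;
  relatedTechniqueIds: string[];
}

interface TreeNode {
  technique: Technique;
  children: TreeNode[];
}

function buildTree(
  rootId: string,
  byId: Map<string, Technique>,
  maxDepth: number,
  seen: Set<string> = new Set()
): TreeNode | null {
  const tech = byId.get(rootId);
  if (!tech || seen.has(rootId)) return null; // unknown id or cycle
  seen.add(rootId);
  const children =
    maxDepth > 0
      ? tech.relatedTechniqueIds
          .map((id) => buildTree(id, byId, maxDepth - 1, seen))
          .filter((n): n is TreeNode => n !== null)
      : [];
  return { technique: tech, children };
}
```

Rendering then becomes a pure function of this tree plus the user's progress data (mastered nodes in color, the rest gray).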
The technique tree in action — starting from closed guard, filtered by arm locks. Mastered techniques appear in color; unlearned ones in gray.
What I'd Have Built Without This Process
A searchable database with filters.
Which is fine. But it's what every BJJ app already has. The techniques would have been well-organized and the search would have been solid. Users would have used it, found a specific technique, watched the linked YouTube video, and moved on.
The Technique Tree creates something different: a reason to explore the app as a system rather than a reference lookup. It's the feature most likely to drive retention — coming back to the app not just when you forget a technique name, but to understand how your game is developing.
I didn't think of this myself. Three personas, systematically interviewed with enough psychological depth to produce real signals, thought of it.
The Process in Practice
If you want to run this for your own product:
1. Build personas with psychological state dimensions, not just demographics
For each persona, define:
- Their stance on specific topics relevant to your product
- Their emotional tendencies (what motivates them, what discourages them)
- Their communication style (how they express needs — directly? through frustration? through workarounds?)
2. Run AI pseudo-interviews before real ones
Use the psychological state dimensions to prompt the AI to respond as the persona, not as a helpful assistant. If the answers feel generic, your persona lacks depth.
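One way to wire the dimensions into the simulation, assuming nothing about KaizenLab's internals (`buildInterviewPrompt` and its exact wording are hypothetical), is to assemble them into the system prompt so the model has no room to fall back into assistant mode:

```typescript
// Turn a persona's psychological dimensions into an interview prompt.
// The instruction to stay in character and avoid unprompted feature
// suggestions is what suppresses the generic "that would be useful" answers.
interface PersonaDims {
  name: string;
  stance: string;
  emotionalTendencies: string;
  communicationStyle: string;
}

function buildInterviewPrompt(p: PersonaDims, question: string): string {
  return [
    `You are ${p.name}. Stay in character; you are not a helpful assistant.`,
    `Your stance: ${p.stance}`,
    `Your emotional tendencies: ${p.emotionalTendencies}`,
    `Your communication style: ${p.communicationStyle}`,
    `Answer from lived experience, including frustrations and workarounds.`,
    `Do not propose features unprompted.`,
    ``,
    `Interviewer: ${question}`,
  ].join("\n");
}
```

A quick smell test: if a persona's answers still read as generic with a prompt like this, the problem is usually the persona's fields, not the prompt.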
3. Listen for convergence across personas
One persona mentioning a need = interesting. Two personas = worth investigating. Three personas from different segments = build it.
4. Pay attention to how they describe the need, not just what they want
Tanaka, Sato, and Suzuki all asked for a "technique tree," but their descriptions revealed three different product requirements. The surface request was the same. The underlying need was the same. But the right solution for each was different.
That distinction is what turns user research into good product design.
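The convergence heuristic in step 3 is simple enough to state as code. This toy `convergence` function is hypothetical, not part of KaizenLab; it just counts how many distinct personas surfaced each tagged need:

```typescript
// Count distinct personas per tagged insight. Per the heuristic:
// 1 persona = interesting, 2 = worth investigating, 3+ = build it.
function convergence(
  insights: { persona: string; need: string }[]
): Map<string, number> {
  const byNeed = new Map<string, Set<string>>();
  for (const { persona, need } of insights) {
    if (!byNeed.has(need)) byNeed.set(need, new Set());
    byNeed.get(need)!.add(persona);
  }
  return new Map([...byNeed].map(([need, who]) => [need, who.size]));
}
```

The counting is trivial; the discipline is in tagging insights by underlying need rather than by surface request, so Tanaka's skill tree, Sato's guided path, and Suzuki's game plan all land under the same tag.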
Have you had a feature emerge from user research that you never would have designed yourself? Or do you usually build from your own intuition? I'd be curious to hear in the comments.
I do this kind of persona-driven validation in KaizenLab — the tool I built specifically for managing hypothesis validation cycles.
References
- BJJ Techniques — App Store — The app built using this validation process
- HumanLM: Large Language Models as Simulated Participants — Stanford, 2026
- Why I Wasted 6 Months Building the Wrong Product — Series Part 1
- I Spent 3 Months Building a SaaS — Then AI Did the Same Thing in One Prompt — Series Part 2
- KaizenLab — Hypothesis validation cycle management

