How to Run Customer Interviews as an AI-First Startup
Most customer interview advice was written for teams. We have agents.
Here is how Whoff Agents runs customer discovery with a 5-agent autonomous system instead of a sales team.
The Problem With Traditional Interview Advice
Books like The Mom Test assume you have a human who can read the room, improvise, and follow emotional threads. That human shows up in person or gets on a Zoom call.
We do not have that human available at 2am when a lead submits a form.
So we built a system.
Our Interview Infrastructure
When a lead comes in — from Product Hunt, from a Reddit comment, from an email reply — they enter a pipeline:
- Apollo (our CRM/outreach god) flags the lead and pulls context
- Atlas (our orchestrator) decides interview priority based on ICP fit
- Peitho (our comms agent) sends a personalized interview request
- The lead schedules via Calendly (still human-gated — intentional)
- Atlas generates a custom question set based on the lead’s job title, company size, and stated problem
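The routing decision in that pipeline can be sketched in a few lines. This is a minimal illustration, not our actual agent code — the `Lead` fields and the score thresholds are assumptions for the example; in our setup, the ICP score comes from Atlas.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    source: str          # e.g. "product_hunt", "reddit", "email_reply"
    job_title: str
    company_size: int
    stated_problem: str

def route_lead(lead: Lead, icp_score: int) -> str:
    """Decide the next pipeline step for an incoming lead.

    icp_score is a 1-10 fit score (assigned by the orchestrator).
    Thresholds here are illustrative.
    """
    if icp_score >= 7:
        return "send_interview_request"   # comms agent sends a personalized ask
    if icp_score >= 4:
        return "nurture"                  # keep in the outreach cadence
    return "archive"

lead = Lead("Sam", "product_hunt", "CTO", 12, "context switching in Cursor")
print(route_lead(lead, icp_score=8))  # -> send_interview_request
```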
The interview itself? Still human-run. That is not a concession — it is a design choice.
What AI Does Before the Call
Before every 20-minute interview, Atlas auto-generates:
- 5 core questions anchored to the product’s value prop
- 3 probes specific to that person’s context
- A one-page brief with their company background and likely pain points
- A hypothesis to test (e.g., “They use Cursor but hate the context switching”)
This takes 90 seconds of compute and saves 30 minutes of prep.
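The prep step boils down to assembling one structured prompt per lead. A minimal sketch of that assembly, with illustrative wording (the real prompt Atlas uses is more involved):

```python
def build_interview_brief_prompt(lead: dict) -> str:
    """Assemble a prompt asking a model for a per-lead interview brief.

    The output structure mirrors the four artifacts we generate before
    each call; the exact phrasing here is an example, not our prompt.
    """
    return "\n".join([
        "You are prepping a 20-minute customer discovery interview.",
        f"Lead: {lead['job_title']} at a {lead['company_size']}-person company.",
        f"Stated problem: {lead['stated_problem']}",
        "Produce:",
        "1. 5 core questions anchored to our value prop (AI-operated dev tools).",
        "2. 3 probes specific to this person's context.",
        "3. A one-page company background with likely pain points.",
        "4. One falsifiable hypothesis to test during the call.",
    ])
```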
What AI Does After the Call
Immediately after:
- Interview notes dropped into the /interviews/ folder
- Atlas extracts: pain points, feature requests, churn risks, testimonial candidates
- Synthesis added to product roadmap doc
- If testimonial-worthy: Peitho flags for follow-up
We run this on every call. Zero manual synthesis required.
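The extraction step is the easiest part to make deterministic. One way to do it, assuming the notes are markdown files where key observations carry an inline tag like `#pain:` (the tagging convention here is illustrative, not prescriptive):

```python
import re
from pathlib import Path

TAGS = ("pain", "feature", "churn-risk", "testimonial")

def extract_signals(notes_dir: str) -> dict:
    """Scan interview notes for tagged lines like '#pain: ...'.

    Returns a mapping of tag -> list of (filename, observation) pairs,
    so every signal stays traceable to the call it came from.
    """
    signals = {tag: [] for tag in TAGS}
    for path in sorted(Path(notes_dir).glob("*.md")):
        for line in path.read_text().splitlines():
            m = re.match(r"#(pain|feature|churn-risk|testimonial):\s*(.+)", line.strip())
            if m:
                signals[m.group(1)].append((path.name, m.group(2)))
    return signals
```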
The Rules We Learned
Do not automate the conversation itself. AI interviews feel hollow. Leads notice. The goal is to show up fully human during the 20 minutes you have.
Do automate everything around the conversation. Scheduling, prep, follow-up, synthesis — all of it can run without you.
Prioritize based on signal, not volume. We do 3-5 interviews per week, not 20. Quality > quantity. Atlas scores leads by ICP fit, and we only interview those scoring 7/10 or above.
Feed everything back to the product. The interview pipeline is useless if it ends in a folder. We have a weekly Atlas pass that surfaces themes from the last 10 interviews into a product brief.
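The weekly pass is, at its core, a frequency count over the extracted signals before any prose synthesis happens. A toy version, assuming pain points have already been pulled out of each call's notes:

```python
from collections import Counter

def weekly_themes(interviews: list[list[str]], top_n: int = 5) -> list[tuple[str, int]]:
    """Surface the most-repeated pain points across recent interviews.

    interviews: one list of extracted pain-point labels per call
    (e.g. the last 10 calls). Returns (theme, count) pairs, most
    frequent first - the raw material for a product brief.
    """
    counts = Counter(theme for call in interviews for theme in call)
    return counts.most_common(top_n)
```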
Results So Far
We launched 5 days ago. In that time:
- 12 interview requests sent
- 4 scheduled
- 2 completed
- Both resulted in direct product feedback that changed our onboarding flow
The signal-to-noise ratio is high because Atlas pre-qualifies.
What This Requires
- A structured /interviews/ folder with a consistent format
- An AI agent that can read files and synthesize across them
- Discipline to actually run the calls yourself
- A feedback loop into the product roadmap
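For concreteness, "structured with a consistent format" can be as simple as this layout (file names and headings illustrative):

```text
/interviews/
  _template.md                # consistent headings: Context, Pain, Quotes, Next steps
  2025-01-14-acme-cto.md      # one file per call: date, company, role
  2025-01-16-beta-founder.md
```

The naming convention matters more than the tooling: if every file follows the same shape, any agent that can read files can synthesize across them.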
The infrastructure is not complicated. What is complicated is committing to run interviews consistently when you are also building.
Agents make the commitment easier to keep.
The Stack
- Orchestration: Claude Code (Atlas god agent)
- Outreach: Custom Resend integration
- Scheduling: Calendly
- Notes: Markdown files in a structured directory
- Synthesis: Atlas weekly pass
We are building Whoff Agents — AI-operated dev tools. Everything we learn from customer interviews shapes what we ship.
If you are an AI-first founder running interviews differently, I want to hear how. Drop it in the comments.