A couple of years ago I had to write documentation for my team's next epic. All the ideas were in my head. POCs done, tradeoffs clear. But nothing was on paper. And the meeting was soon.
Staring at a blank page, I tried something that felt a little silly at the time. Instead of telling the AI what to write, I asked it to interview me:
I have a plan for the team's next epic. I have done some POCs. I would
like to write some documentation about it. I need your help. Ask me 10
short questions about this which will help you help me.
Ten quick answers later, the AI offered to draft the doc. I wasn't done yet, so I pushed back:
No, not yet. Ask me about the X contingency plan. And ask me about any
gaps you see in the plan. Keep the questions short.
After that round, I had a solid document in front of me. Total time: under 30 minutes.
I've been doing this regularly ever since. I call it reverse prompting. It works with whatever AI you're using. ChatGPT, Claude, Gemini, anything that can hold a conversation.
A.K.A. Flipped Interaction Pattern
Many people have probably come up with this technique independently, as I did. It turns out it has a name: researchers at Vanderbilt published it in 2023 as the Flipped Interaction Pattern (White et al., arXiv:2302.11382).
Fun part: I only found out about the Vanderbilt paper while writing this article, because I was using reverse prompting to write it. Partway through the interview, the AI mentioned there's academic literature on what I was describing. That's how I learned the name.
Here's the prompt I used to kick off the interview for this piece:
I have been using a technique which I called reverse prompting. I use
it a lot and it has boosted me and my team's productivity. I want to
write a dev.to article about it.
Ask me questions about my thoughts about this. Ask me 2 short
questions at a time. Based on the answer ask me 2 more questions. I
will tell you to stop at some point.
Here's a snippet from a few rounds in:
AI: Two more.
- You said you use this "on a regular basis now for everything." Have you hit a type of task where reverse prompting actively made things worse or slower than just writing a direct prompt?
- When the AI is interviewing you, do you ever feel it's asking the wrong questions or wasting turns on obvious things, and if so, how do you course-correct mid-interview?
Me:
- Nope.
- Yes it does ask wrong questions sometimes. So I correct the course and give it more context in mid conversation.
AI: Good. Two more.
- On the "never made things worse" answer, I want to push a little here. A technique that works for literally everything is a suspicious claim in an article. What about very short tasks, like "rewrite this sentence"? Is it fair to say reverse prompting is specifically for tasks with enough ambiguity or scope to justify an interview?
- When you redirect the AI mid-interview, does it ever drop threads it was handling well before?
That exchange tightened a lazy claim. I'd been telling myself the technique worked for everything. It doesn't. The AI pushed; I revised. That's the whole pitch in miniature. Multiply it by a dozen rounds and you get the article you're reading.
Why it works
You've probably been using reverse prompting without knowing it had a name. Tell an AI about a bug, and the AI almost always responds with questions. Have you checked the syslogs? What's the exact input? Does it reproduce locally? Can you share the stack trace? You answer one at a time. The AI is interviewing you toward a fix. Most engineers do this several times a day and never call it anything.
Most write-ups sell flipped interaction as "the AI knows what to ask because it's an expert." In my experience that's only half true. For a narrow domain the AI doesn't know, it'll ask generic-senior-engineer questions. Useful, but not magical.
The real benefit is different: you don't initialize the AI with your bias.
When you prompt directly, you quietly constrain the answer. Your framing, your vocabulary, your assumed structure, all of it leaks in. When the AI interviews you, it's working from a clean slate. It'll ask about the thing you were hoping nobody would notice. It'll occasionally lead you somewhere you hadn't planned.
There's a second thing too. Answering questions is cognitively easier than generating structure from scratch. A lot of documentation paralysis is actually structure-block. When you're just answering, sometimes yes or no, the structure assembles itself as a byproduct.
Pattern 1: Extraction (ideas in your head, nothing on paper)
This is the original use case. The thinking is done, the document isn't.
I'm about to work on [X]. Ask me 10 short questions about it. When you
have enough information, produce [the deliverable].
The key part of that prompt is the stated goal. Without one, the AI will wander.
If you have a lot to say, ask for 10 questions, then tell it to ask 10 more. If you're not sure where you're going, ask for fewer and let the AI guide you:
Ask me one question at a time. Based on my answer, decide the next
question. Don't produce anything until I tell you to stop.
This second mode feels like explaining your problem to a senior colleague. Sometimes you work out the answer just by articulating the question, and the colleague barely had to speak.
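If you'd rather script this loop than run it in a chat UI, the structure is simple. Here's a minimal sketch — the `ask_model` function is a stub standing in for whatever LLM API you actually use (OpenAI, Anthropic, Gemini, etc.); the shape of the loop is the point, not the stub:

```python
# Minimal sketch of a flipped-interaction loop.
# `ask_model` is a STUB standing in for a real LLM chat API call;
# swap in your client of choice. The pattern is: a system prompt that
# tells the model to interview you, a loop appending your answers,
# and a final turn that produces the deliverable.

SYSTEM_PROMPT = (
    "I'm about to work on {topic}. Ask me one short question at a time. "
    "Based on my answer, decide the next question. "
    "When I say 'done', produce {deliverable}."
)

def ask_model(messages):
    """Stub: a real implementation would call an LLM chat API here."""
    if messages[-1]["content"] == "done":
        return "DRAFT: <deliverable assembled from your answers>"
    return "Q: What is the main constraint I should know about?"

def flipped_interview(topic, deliverable, get_answer):
    """Run the interview until the user answers 'done'."""
    messages = [{"role": "system",
                 "content": SYSTEM_PROMPT.format(topic=topic,
                                                 deliverable=deliverable)}]
    while True:
        question = ask_model(messages)
        messages.append({"role": "assistant", "content": question})
        answer = get_answer(question)  # your typed (or spoken) reply
        messages.append({"role": "user", "content": answer})
        if answer == "done":
            return ask_model(messages)  # final deliverable

# Example run: answer one question, then stop.
replies = iter(["Low hosting cost is the hard constraint.", "done"])
doc = flipped_interview("a team epic", "a design doc",
                        lambda q: next(replies))
```

Nothing here is specific to one vendor; the message-list shape (`system` / `assistant` / `user` roles) is the common denominator across the major chat APIs.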
A real example. I needed to document our use of Airflow to move data between our transactional system and our warehouse. Without reverse prompting, I'd have managed maybe a short paragraph by my deadline. With it, I had a full document in the same time, low-level design and all. That's the gap this technique closes.
Once I shared the technique, teammates started using it the same way. One documented our disaster recovery practices. Others are writing technical implementation plans for new features. Anywhere a single person holds the knowledge and just needs to get it out, this is the pattern that fits.
Pattern 2: Audit (the document exists, but something's missing)
This one's subtly different from extraction. You have a draft. You suspect there are gaps.
Here's a document I've written: [paste]. Interview me about the gaps
you see. Ask short questions. After I've answered, suggest edits to
fill the gaps.
I used this to significantly improve the documentation stack of a massive project. I could have just said "generate documentation on X and Y" and gotten something plausible. Instead I got a document where everything was properly linked, redundancies were gone, and the gaps I didn't know I had were filled, because the interview dragged them into view.
Pattern 3: Tough communication (feedback, appraisals, hard emails)
Some messages need the right words and a careful read of how the receiver will feel. That's a lot to hold in your head while also drafting.
For appraisals, what works for me:
I need to appraise an individual and provide them with feedback. I use
the following template: [template]. The following documents show their
historical appraisals: [paste]. Ask me questions about their recent
performance in the past quarter. Ask 2 short questions at a time and
based on my response create more questions.
For tough feedback in general:
I need to give tough feedback to a team member. Here are the criteria
I care about: [list]. Here's the template I usually follow: [template].
Ask me questions one at a time to help me figure out what to say and
how to say it.
Sometimes I tell the AI the feedback is tough. Sometimes I don't. Either way, it defaults to a professional tone even when my answers are rough, which is usually what I want. Thinking about the receiver's likely reaction one question at a time is much less overwhelming than trying to craft the whole message at once.
Pattern 4: Knowledge base building (long-running, reference-fed)
Most recently I used reverse prompting to build a knowledge base for the team. This one's different in scale. Not a single interview with a single deliverable. It runs for hours. Maybe across days.
I want to build a knowledge base for [topic / team / project]. Ask me
10 short questions at a time. As I answer, build the document in
canvas mode. Keep going past the obvious questions. I'll often answer
with links and references instead of prose.
I started the same way. 10 short questions at a time. I used Gemini's canvas mode so the document was building in parallel as I answered.
After about 50 questions, I had a good document. The AI offered to wrap up. I told it to keep going.
Around that point, my answers stopped being typed sentences. They became links. Repository URLs, other docs, screenshots, anything relevant. The interview kept generating questions; I kept feeding it references instead of writing prose.
When the output started spanning multiple markdown files, I switched to Claude Code. The chat interface wasn't built for that scope, but a coding agent could update many files at once and keep them linked properly.
The final result: a well-organized, low-redundancy knowledge base where everything cross-references where it should. If we had just dumped all the team's existing docs into one place, we'd have ended up with overlapping, unlinked content. Instead the AI did the connective work as it went.
We tested by loading the result into NotebookLM and asking domain experts on the team to query it. Rough satisfaction: over 90%.
Two things made this work:
- Don't type when you can link. Once the interview is rolling, references are denser than prose.
- Switch tools when the deliverable outgrows the chat. Single-doc output is fine for chat. Multi-file is when you need a coding agent.
Pattern 5: Decision-making (clarifying scope and direction)
Reverse prompting works well when the deliverable isn't a document but a decision.
I wanted to build a certification practice app, but with two constraints. Keep hosting cost low, and let the developer community contribute. That was all I had. I let the AI interview me.
It asked about the features I cared about, which helped me scope the thing realistically. It pulled up similar applications and competitors I should be aware of. By the end of several reverse-prompting sessions, I had a clear feature list, a sharp sense of where my idea sat in the market, and enough confidence to start building.
The result is live: quizplay.io, free Azure certification practice tests with community-contributed quizzes stored in GitHub repos. The cost-and-contribution constraints I started with shaped the whole architecture. Reverse prompting is what got me from a vague idea to a buildable plan.
I want to build [X]. My constraints are [list]. Ask me questions to
help me scope this, identify similar applications, and figure out
whether the idea is worth pursuing. Ask 2 questions at a time.
Tips and observations
Pace it to your mental state. Lots to say? Ten questions at a time. Unsure where you're headed? One at a time, and let the AI steer. Both modes work; they just feel different.
Course-correct when it asks wrong questions. It will, sometimes. Just redirect:
That's not the direction I want to go. Focus instead on [X]. Also ask
about [Y], which you haven't covered.
How fast you can steer depends on how well you can articulate what's off. That's a skill that improves with practice.
Personas mostly change tone. Telling the AI it's "a senior architect with 20 years of experience" reliably shifts the tone. Whether it shifts the quality of the questions is less clear. I'd be careful about overclaiming there.
Voice mode might be the killer feature. I haven't tried this myself, but if your AI tool supports voice, you could run an interview while doing household chores or walking. Talk through your epic while folding laundry, end up with a document. Worth trying.
Whatever the AI produces needs revision. Two things I do:
- Give specific pointers. "The intro is too long, the security section is missing X, restructure the API examples by use case rather than alphabetically."
- Apply Pattern 2 (Audit) to the AI's own draft. "Do a gap analysis..."
The second one is amusing in practice: you're asking the AI to interview you about the gaps in its own draft. It finds real ones anyway.
You can flip it to generate prompts instead of deliverables. Slightly meta, but worth knowing:
I'm about to work on an epic. Ask me about it. Use my answers to
generate a prompt, written in the persona of a senior frontend
engineer with years of experience, that will let an AI complete the
epic accurately.
Inside my answers I'll paste PRDs and other references. What comes out is a dense, context-rich prompt I can hand to a fresh session or a coding agent. Interview as scaffolding, prompt as product.
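Structurally, this meta variant is the same interview loop with a different final turn: instead of producing the deliverable, the model compresses the Q&A into a reusable prompt. A sketch, with the synthesis step stubbed (a real implementation would hand the Q&A to an LLM; the epic and references below are made-up placeholders):

```python
# Sketch of the "interview as scaffolding, prompt as product" variant.
# `synthesize_prompt` is a STUB for a real LLM call; the Q&A content
# is hypothetical example data.

def synthesize_prompt(qa_pairs, persona):
    """Stub: a real LLM would compress the Q&A into a dense prompt."""
    context = " | ".join(f"{q} -> {a}" for q, a in qa_pairs)
    return (f"You are {persona}. Using the context below, complete the "
            f"epic accurately. Context: {context}")

# Answers gathered during the interview (pasted PRDs, references, etc.)
qa_pairs = [
    ("What is the epic?", "Migrate checkout to the new payments API."),
    ("Key references?", "PRD: <pasted>, API docs: <pasted>"),
]

prompt = synthesize_prompt(
    qa_pairs,
    persona="a senior frontend engineer with years of experience")
# `prompt` can now be handed to a fresh session or a coding agent.
```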
Closing thought
The closest analog I have is walking to a senior colleague's desk to explain a problem, and realizing halfway through that you already know what to do. The structured articulation, prompted by an attentive listener, did the work.
One caveat before I let you go. This is overkill for short tasks like "rewrite this sentence" or "what's the syntax for X". Just prompt directly there. And it's a worse fit when the AI genuinely lacks the domain knowledge to ask useful questions. But on ambiguous, generative work? It earns its keep.
That's the art of reverse prompting. You're not getting a smarter collaborator. You're getting a better-structured conversation with yourself.
If you've stumbled onto this technique yourself, or use it for something that didn't make this list, drop a comment below. I'm always collecting new patterns.
References
- White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv:2302.11382.
- QuizPlay. Free Azure certification practice tests, built using Pattern 5. Source: github.com/quizplay-io.