Using Google Gemini to generate follow-up questions based on what each person says — and what's missing from the dataset.
I wrote about the concep...
Fascinating
Thanks Ben!
The insight isn't the adaptive questions; it's that you built a survey that learns what it doesn't know. Most data collection tools optimize for completion. You optimized for coverage. That's the difference between counting answers and understanding a problem.
That's a much better way of putting it than I did in the article. Coverage over completion. Traditionally a survey would be optimised for "did they finish"; here it's optimised for "did we learn something new".
The dual-context prompt design is smart — feeding both individual answers and aggregate gaps. Have you noticed the AI questions getting too narrow as the dataset grows, or does the gap detection keep them balanced?
So it tracks which response category has the least coverage and prompts the model with "we have very few responses about X." As dominant themes get well-covered, the AI gets nudged towards what's underrepresented rather than drilling further into what everyone's already talking about. Haven't tested past ~100 responses yet, though.
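For anyone curious what that nudge looks like in code, here's a minimal sketch of the gap-detection idea, assuming responses have already been tagged with categories. The function and variable names are hypothetical, not from the actual project:

```python
from collections import Counter

def gap_hint(tagged_responses, known_categories):
    """Return a prompt hint steering toward the least-covered category."""
    counts = Counter(tagged_responses)
    # Include categories with zero responses so they can win
    coverage = {c: counts.get(c, 0) for c in known_categories}
    least_covered = min(coverage, key=coverage.get)
    return f"We have very few responses about {least_covered}."

hint = gap_hint(
    ["pricing", "pricing", "onboarding", "pricing"],
    ["pricing", "onboarding", "support"],
)
# "support" has zero responses, so the hint steers the next question toward it
print(hint)
```

The hint would then be combined with the individual's own answers in the prompt, which is the dual-context design mentioned above.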
Great build with so many applications!
Thanks! It's deployed in a community feedback context now, but the approach would work anywhere qualitative data is collected at scale.