Celestina Dike

Turning Messy Data into User Insights: Using Thematic Analysis

By Barakat Ajadi, Product Manager

Most product teams are sitting on research gold they don't know how to use. They conduct interviews, gather feedback, and collect transcripts, then struggle to turn it all into something the team can actually act on. This is that story, and more importantly, this is the method that changed how I approach qualitative data.

1. In the Beginning
You have carried out your user interviews and research, but what do you do with the messy data you have gathered? How do you make sense of it and use it to make informed decisions?
I recently conducted user discovery for a new feature we shipped. I grouped users into three categories to understand them at every phase of the journey and to see how each step was perceived.
I started by gathering users across the different groups and creating a research plan, which I shared with the customer research team for vetting. A process was then set up for interview slots, and the journey began. But in reality, this was only the beginning.

2. Making Sense of the User Groups
Dividing users into groups wasn't just for structure. It was the only way to get a complete picture of the journey.
Each group represented a different stage. Group one had completed the process: "How did you complete it, and what hurdles did you have to cross, if any?" Group two was still in the process: "Are you experiencing similar hurdles, or different ones entirely?" Group three hadn't started the process: "What is preventing you from starting this flow?"
Understanding every stage helps uncover pain points and how to make the flow seamless for users throughout the cycle. A single user group would have collapsed very different experiences into one. We would have missed the fact that blockers at the start of the journey are completely different from blockers at the end.

3. Carrying Out the User Interviews
Before the first call, make sure you have recording and transcription enabled on your meeting platform. Get approval from the user upfront. This is non-negotiable.
A plan? Check. Interviews scheduled? Check. Now this is where you start to acquire data. A bunch of user interviews are lined up for the week; the interesting part is when you ask questions and listen to your users talk. Because you have a question bank prepared, things are easier, but a user may say something you are not expecting, and it is important to ask follow-up questions to dig deeper.
We have AI infused into our meeting platform, so recording and transcribing were seamless. Tools like Otter.ai, Fireflies, or even the built-in transcription on Google Meet and Zoom work well here. The transcript becomes your best friend in the next phase. Trust me on that.

4. The Storm Before the Calm
Phew. The interviews are done, and it has been quite a week. Twenty calls across three different user groups, and now I am sitting with a folder full of transcripts and recordings staring back at me.
Here is the thing nobody really tells you about user research. The interviews are actually the fun part. You are in a flow state, asking questions, listening, picking up insights in real time. Post-interview is when the actual work begins. You have raw data and need to analyse it and draw insights.
I opened the first transcript. Then the second. By the fifth one, I already had contradicting information. A user from group one said the process was straightforward. A user from group two said they had no idea what they were supposed to do at the exact same step. Both were right. They were just at different points in their journey. But reading transcript by transcript, it is genuinely hard to hold it all together.
I needed a way to find patterns from these conversations without losing the nuance. That is when I turned to thematic analysis.

5. Introducing Thematic Analysis
Thematic analysis is a way to find patterns in qualitative data. You break down your data into labels called codes, and then group those codes into bigger themes.
Here is how I approached it. I gathered all the transcripts and read through them, coding and labelling each one. A code is simply a short label that captures what a user is saying or feeling at that point. For example, a user said, "It took me 5 minutes to complete," and the code for that was 'completion time'. Another said, "One thing I like is that delivery is swift," and the code was 'merchant satisfaction'. Another said, "I don't think anyone should have issues filling the information," and the code was 'perceived difficulty'.
Doing this across twenty transcripts, you start to pick up patterns. The same ideas keep showing up, just in different words. That is your signal.
Next, group similar codes into themes. Codes like completion time, perceived difficulty, and document readiness all pointed at the same underlying problem. They became one theme: Activation Friction. Where you have a cluster of related but distinct codes under one theme, those become sub-themes, a way of preserving the detail without losing the bigger picture.
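The grouping step above can be sketched in code. This is a minimal Python sketch, where the coded transcripts, the code-to-theme mapping, and the counts are illustrative examples, not data from the actual study:

```python
from collections import Counter

# Hypothetical coded transcripts: each transcript reduced to its list of codes.
coded_transcripts = [
    ["completion time", "perceived difficulty"],
    ["document readiness", "completion time"],
    ["merchant satisfaction"],
    ["perceived difficulty", "document readiness"],
]

# Illustrative mapping from individual codes to the theme they cluster under.
code_to_theme = {
    "completion time": "Activation Friction",
    "perceived difficulty": "Activation Friction",
    "document readiness": "Activation Friction",
    "merchant satisfaction": "Delivery Experience",
}

# Count how often each theme appears across all transcripts; the themes
# that recur across groups are the patterns worth reporting.
theme_counts = Counter(
    code_to_theme[code]
    for transcript in coded_transcripts
    for code in transcript
)

print(theme_counts.most_common())
# → [('Activation Friction', 6), ('Delivery Experience', 1)]
```

In practice the mapping from codes to themes is the judgment call; the counting is the easy part. Keeping the mapping explicit, as above, makes it easy to show stakeholders exactly which codes rolled up into which theme.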
By the end, I had eight themes that cut across all three user groups. What made this powerful was seeing which themes were universal across the journey and which ones were specific to a particular stage.

6. Where AI Fits In
Now let's address what everyone is thinking. Reading through twenty transcripts? Three groups? Coding responses manually? This sounds like a lot, and this is where AI can come in to make your work significantly easier.
Once I had my transcripts ready, I fed them into an AI tool. ChatGPT works well for this, as does Claude or a dedicated research tool like Dovetail. I prompted it to code responses, identify similar phrases, and flag recurring sentiments. What would have taken me an entire day was done in a fraction of the time.
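For concreteness, here is the shape of prompt that tends to work for first-pass coding. The helper function and exact wording below are illustrative, not the prompt used in this study:

```python
def build_coding_prompt(transcript: str) -> str:
    """Assemble a prompt asking an AI assistant to do a first-pass
    thematic coding of one interview transcript. (Illustrative only.)"""
    instructions = (
        "You are assisting with thematic analysis of a user interview.\n"
        "1. Break the transcript into meaningful statements.\n"
        "2. Assign each statement a short code (2-4 words).\n"
        "3. Flag phrases and sentiments that recur.\n"
        "Return a list of (quote, code) pairs.\n"
    )
    return f"{instructions}\nTranscript:\n{transcript}"

# Feed the result into whichever AI tool you use.
prompt = build_coding_prompt("It took me 5 minutes to complete...")
print(prompt)
```

Asking for (quote, code) pairs rather than a summary matters: it keeps every code traceable back to a real user quote, which is exactly what you need when you verify the AI's first pass yourself.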
Here is something important, though. AI gives you a strong starting point, not a final answer. The interpretation of what those patterns actually mean for your product still sits with you. You handle the thinking.

7. From Themes to Product Insights
Having eight themes feels good. But themes alone don't ship features. The next step was turning what I found into something the team could actually act on.
I broke down each theme into four sections in my research report.
What We Heard. This section was a brief summary of what users said about the theme, with sample quotes to support it. For example, under the theme of Activation Friction, quotes clustered around not knowing a document was required, or needing time to gather specific documents before completing the flow.
Why It Matters. How does this theme affect the user journey? Does it cause delayed activation? Drop-off? Frustration that only surfaces later? This section connects the theme to a real business or experience consequence.
What It Means. This is where you translate the theme into a product diagnosis. Where does the issue actually come from? Is the form too long, or is it the missing visual cues that tell users what to prepare? Are users benchmarking against a competitor's experience? This section separates the symptom from the cause.
Recommendations. Based on everything identified, what should the team do? This is where research becomes a roadmap. Before the analysis, conversations with stakeholders were centred on surface-level fixes. After that, we were talking about the right problems.
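One way to keep these four sections consistent across every theme is a small template. The fields below mirror the report sections described above; the class name and the sample values are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class ThemeReport:
    """One theme's entry in the research report, mirroring the
    four sections: heard, matters, means, recommendations."""
    theme: str
    what_we_heard: list       # brief summary plus sample quotes
    why_it_matters: str       # user-journey or business consequence
    what_it_means: str        # product diagnosis: symptom vs. cause
    recommendations: list     # concrete actions for the team


# Hypothetical entry for the Activation Friction theme.
activation_friction = ThemeReport(
    theme="Activation Friction",
    what_we_heard=["Didn't know a document was required before starting."],
    why_it_matters="Delays activation and causes early drop-off.",
    what_it_means="Missing upfront cues about what users need to prepare.",
    recommendations=["Show a checklist of required documents before the flow."],
)
```

Forcing every theme through the same four fields is the point: a theme with quotes but no recommendation, or a recommendation with no diagnosis behind it, becomes immediately visible.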

8. What Thematic Analysis Changed
Honestly? It changed how I think about qualitative data altogether.
Thematic analysis gave me a process, and having a process meant I could defend my insights. Walking into stakeholder meetings, the conversation wasn't "users seem frustrated with the journey." It was "across all three user groups, activation friction and perceived difficulty were the two most consistent themes, and here is the evidence behind each one."
If you are a product manager sitting on a pile of interview transcripts right now and wondering where to start, start here. The themes are already in your data. Thematic analysis just helps you find them. The messy data was never the problem. It was always the signal.

Barakat Ajadi is a Product Manager at Moniepoint.
Read more articles from our technical team on the blog
