Portfolio version (canonical, with full context and styling):
https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/exploratory-testing.html
TL;DR
- What it is: risk-driven exploratory sessions where design, execution, and analysis happen together.
- Platform context: mobile (Android), where interruptions and device state changes are normal.
- Timebox: short focused sessions, not long wandering playthroughs.
- Approach: charters, controlled variation, observation-led decisions.
- Outputs: defects and observations that explain behaviour, with enough context to reproduce.
Exploratory testing on mobile in practice: chartered, timeboxed sessions with controlled variation, producing defects, context notes, bug reports, and evidence.
About this article
Exploratory testing is often summarised as “testing without scripts”. In real mobile QA work, that description is incomplete.
This article explains exploratory testing on mobile as it is applied in a real workflow: session structure, risk focus, interruptions and recovery, and why this approach consistently finds issues that scripted checks miss.
Examples are drawn from a real Android mobile game pass, but the focus here is the method, not the case study.
What exploratory testing actually means
In practice, exploratory testing is a way of working where test design, execution, and analysis happen together.
You are not following a pre-written script. You are observing behaviour and choosing the next action based on risk, evidence, and what the product is doing right now.
That does not mean “random testing”. It means structured freedom: you keep a clear intent, and you keep your changes controlled so outcomes remain interpretable.
Why exploratory testing matters on mobile
Mobile products rarely fail under perfect conditions. They fail when something changes unexpectedly. On Android especially, many failure modes are contextual and lifecycle-driven.
- Alarms, calls, and notifications interrupt active flows.
- Apps are backgrounded and resumed repeatedly.
- Network quality changes during critical moments (login, purchase, reward claim).
- UI must remain usable on small screens and unusual aspect ratios.
Applied insight: For mobile exploration, compare performance across devices where possible and probe interruptions: lock screen, phone calls, network drops, switching Wi-Fi/data, rotation, and kill/restart recovery.
Radu Posoi, Founder, AlkoTech Labs (ex Ubisoft QA Lead)
Exploratory sessions target these risks directly instead of assuming a clean uninterrupted journey.
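As a concrete sketch, the interruption probes above can be driven through adb from a small script. The probe catalogue and the package name here are illustrative, not from the article, and actually executing the commands needs a connected device or emulator, so the helper defaults to a dry run that just returns the command.

```python
import subprocess

# Illustrative probe catalogue (hypothetical package name): each entry
# simulates a common Android interruption via adb.
# "incoming_call" works on emulators only.
INTERRUPTION_PROBES = {
    "lock_screen": ["adb", "shell", "input", "keyevent", "KEYCODE_POWER"],
    "incoming_call": ["adb", "emu", "gsm", "call", "5550001"],
    "wifi_off": ["adb", "shell", "svc", "wifi", "disable"],
    "mobile_data_off": ["adb", "shell", "svc", "data", "disable"],
    "kill_app": ["adb", "shell", "am", "force-stop", "com.example.game"],
}

def run_probe(name, dry_run=True):
    """Run one interruption probe; in dry-run mode just return the command."""
    cmd = INTERRUPTION_PROBES[name]
    if not dry_run:
        subprocess.run(cmd, check=True)  # needs adb and an attached device
    return " ".join(cmd)
```

Keeping the probes in one table also makes the session auditable: the notes can record which probe ran, in which order, against which flow.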
Exploratory testing workflow in practice
Exploratory test charters, not scripts
Sessions start with a charter: a short statement of intent.
For example, “Explore reward claim behaviour under interruptions” or “Explore recovery after network loss”.
The charter defines focus, not steps.
Timeboxed exploratory testing sessions
Exploratory testing works best in short sessions. Timeboxing forces prioritisation and prevents unfocused wandering.
Typical sessions range from 20 to 45 minutes.
Applied insight: Before you go deep, verify the basics first. A short daily smoke test protects the golden path, so deeper exploratory work is not wasted rediscovering obvious breakage.
Nathan Glatus, ex Senior QA / Game Integrity Analyst (Fortnite, ex Epic Games)
Controlled variation: one variable at a time
Rather than changing everything at once, one variable is altered at a time: lock state, network type, lifecycle state.
This keeps results interpretable and defects reproducible.
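One-variable-at-a-time variation can be sketched as a small generator: start from a baseline state and emit runs that each differ from it in exactly one variable. The variable names and values below are illustrative stand-ins for real session state.

```python
def one_at_a_time(baseline, alternatives):
    """Yield (variable, run) pairs where each run differs from the baseline
    in exactly one variable, so outcome changes stay attributable."""
    for var, values in alternatives.items():
        for value in values:
            if value == baseline.get(var):
                continue  # skip the baseline value itself
            run = dict(baseline)
            run[var] = value
            yield var, run

# Hypothetical session variables for a reward-claim charter:
baseline = {"lock": "unlocked", "network": "wifi", "lifecycle": "foreground"}
alternatives = {
    "lock": ["locked"],
    "network": ["mobile_data", "offline"],
    "lifecycle": ["backgrounded", "killed"],
}
runs = list(one_at_a_time(baseline, alternatives))  # five runs, one change each
```

If a run misbehaves, the single changed variable is the first suspect, which is exactly what makes the defect reproducible.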
Exploratory testing session checklist (charter, timebox, evidence)
- Charter chosen (risk and focus)
- Timebox set (20 to 45 mins)
- Variables defined (one at a time)
- Notes captured live
- Evidence captured when it happens
- Bug report drafted while context is fresh
Common mobile bugs found with exploratory testing
Exploratory testing is effective at surfacing issues that are low-frequency but high-impact, especially on mobile.
- Soft locks where the UI appears responsive but progression is blocked.
- State inconsistencies after backgrounding or relaunch.
- Audio or visual desynchronisation after OS-level events.
- UI scaling or readability problems that only appear in specific contexts.
Android exploratory testing example: reward claim soft lock
Scenario: reward claim flow under interruptions (Android).
During an exploratory session, repeatedly backgrounding and resuming the app while a reward flow was mid-animation triggered a soft lock: the UI stayed visible, but the claim state never completed, blocking progression.
This did not appear during clean uninterrupted smoke testing because the trigger was lifecycle timing and state recovery.
Why this matters: it is normal user behaviour on mobile, not a rare edge case. Exploratory sessions hit it because they are designed to.
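A repro loop for this kind of lifecycle-timing bug can be sketched with adb: send HOME to background the app, relaunch it, and vary the pause so the cycle lands mid-animation. The package name is hypothetical, and the function defaults to a dry run because executing it requires a device.

```python
import subprocess
import time

def background_resume_cycle(package, pause_s=0.5, dry_run=True):
    """One background/resume cycle; vary pause_s to land at different
    points in the reward animation. Needs adb and a device if executed."""
    cmds = [
        ["adb", "shell", "input", "keyevent", "KEYCODE_HOME"],
        # monkey with the LAUNCHER category relaunches the app's main activity
        ["adb", "shell", "monkey", "-p", package,
         "-c", "android.intent.category.LAUNCHER", "1"],
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
            time.sleep(pause_s)
    return [" ".join(cmd) for cmd in cmds]
```

Sweeping `pause_s` across a few values is the controlled-variation idea again: only the timing changes between attempts, so the mid-animation trigger window can be narrowed down.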
Bug reporting for exploratory testing: notes and evidence
Because exploratory testing is adaptive, notes and evidence matter more than in scripted runs. Findings must be supported with enough context to reproduce and diagnose.
Applied insight: High impact exploratory bugs live or die by their evidence. Capture context (client and device state), include frequency (for example 3/3 or 10/13), and attach a clear repro so the issue is actionable.
Nathan Glatus, ex Senior QA / Game Integrity Analyst (Fortnite, ex Epic Games)
- Screen recordings captured during the session, not recreated later.
- Notes that include context, not just actions (device state, network, lifecycle transitions).
- Bug reports that clearly separate expected behaviour from actual behaviour.
The goal is to make exploratory findings actionable, not anecdotal.
Exploratory testing skills shown in this mobile pass
- Risk-based testing decisions
- Test charter creation and execution
- Defect analysis and clear bug reporting
- Reproduction step clarity under variable conditions
- Evidence-led communication
- Mobile UI and interaction awareness
- Device and network variation testing
Key takeaways for mobile QA
- Exploratory testing is structured, not random.
- Mobile risk is contextual, not just functional.
- Interruptions and recovery deserve dedicated exploration.
- Good notes and evidence make exploratory work credible and actionable.
Exploratory testing FAQ (mobile QA)
How do you stop exploratory testing becoming random wandering?
By using a clear charter, a strict timebox, and controlled variation. If you can’t explain what you were trying to learn in that session, the charter is too vague.
What do you write down during an exploratory session?
The variables that matter for reproduction: device state, network, lifecycle transitions, and what changed between attempts. Notes should capture context, not just button presses.
How do you reproduce a bug found through exploration?
First, reduce the scenario to the smallest set of steps that still triggers the issue. Then rerun it while changing one variable at a time until the trigger conditions are clear.
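The reduction step can be sketched as a greedy minimiser: try dropping each step, and keep the drop whenever the failure still reproduces. This assumes the failure check is deterministic; `still_fails` is a stand-in for actually re-running the scenario on device.

```python
def minimize_steps(steps, still_fails):
    """Return a smaller step sequence that still triggers the bug,
    by greedily removing steps that turn out to be unnecessary."""
    needed = list(steps)
    i = 0
    while i < len(needed):
        candidate = needed[:i] + needed[i + 1:]
        if still_fails(candidate):
            needed = candidate  # step i was not required for the failure
        else:
            i += 1  # step i is required; keep it and move on
    return needed

# Illustrative scenario: the bug fires whenever these three steps all occur.
steps = ["open app", "start race", "claim reward", "background", "resume"]
trigger = {"claim reward", "background", "resume"}
repro = minimize_steps(steps, lambda s: trigger.issubset(s))
# repro keeps only the steps needed to trigger the failure
```

Each iteration changes exactly one thing (the presence of one step), so this is the same one-variable-at-a-time discipline applied to reproduction.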
What makes mobile exploratory testing different from PC or console?
Mobile failure modes are often lifecycle and OS-driven: backgrounding, notifications, lock/unlock, network switching, permissions, battery and performance constraints. Normal user behaviour creates timing and recovery issues that clean runs will miss.
Evidence and case study links
Rebel Racing: Charter-based Exploratory & Edge-Case Testing (full artefacts and evidence):
https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/rebel-racing/
QA Chronicles Issue 2: Rebel Racing:
https://kelinacowellqa.github.io/QA-Chronicles-Kelina-Cowell/issues/issue-02-rebel-racing
This dev.to post stays focused on the workflow; the case study pages above link out to the workbook structure, runs, and evidence.