⚠️ This is a condensed version of the article; the full, canonical version includes deeper examples, structured QA analysis, and full case study context.
TL;DR: exploratory testing in mobile QA
- What it is: exploratory testing is a manual QA approach where test design, execution, and analysis happen together
- Platform focus: mobile apps (Android), where interruptions and lifecycle changes create real-world bugs
- Approach: short, timeboxed sessions using charters, controlled variation, and risk-based decisions
- Outputs: reproducible bugs, clear bug reports, and QA evidence with enough context to diagnose issues
Exploratory testing in mobile apps: chartered, timeboxed sessions with controlled variation, producing defects, notes, bug reports, and evidence.
What exploratory testing is in mobile QA
Exploratory testing in mobile QA is not “random testing”.
It is a structured manual testing approach where:
- test design
- test execution
- and analysis
all happen at the same time.
Instead of following predefined test cases, I:
- define a test charter
- observe behaviour
- adapt based on risk and system response
The key difference from scripted testing is adaptability under real conditions.
Why exploratory testing matters in mobile apps
Mobile apps rarely fail under ideal conditions.
They fail when real users interact with them.
Common mobile QA risks include:
- interruptions (calls, notifications, alarms)
- backgrounding and resuming apps
- network switching during critical flows
- UI scaling issues across devices
Exploratory testing in mobile apps targets these real-world scenarios directly, instead of assuming a clean user journey.
Mobile exploratory testing workflow (practical example)
1. Test charters, not scripts
A test charter defines intent, not steps.
Example:
Explore reward claim behaviour under interruptions.
This keeps sessions focused while allowing flexibility.
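A charter can be captured as a small data structure so every session records its intent up front. This is a minimal sketch following the common "explore X ... to discover Y" charter phrasing; the class and field names are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    """One exploratory session's intent: what to explore, not the steps."""
    explore: str                    # area or behaviour under test
    resources: list = field(default_factory=list)  # devices, builds, tools
    to_discover: str = ""           # the risk or question driving the session

    def __str__(self) -> str:
        return f"Explore {self.explore} to discover {self.to_discover}"

charter = TestCharter(
    explore="reward claim behaviour under interruptions",
    resources=["Android test device", "screen recorder"],
    to_discover="state loss or soft locks after lifecycle events",
)
print(charter)
```

Keeping the charter to one sentence is deliberate: if it cannot be phrased this way, the session scope is probably too broad.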
2. Timeboxed exploratory sessions
Effective exploratory testing sessions are short:
20 to 45 minutes
Timeboxing:
- prevents unfocused wandering
- forces prioritisation
- improves consistency across sessions
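The timebox can be enforced mechanically rather than by discipline alone. A small helper (the 20–45 minute bounds come from the workflow above; everything else is an assumption):

```python
from datetime import datetime, timedelta

MIN_MINUTES, MAX_MINUTES = 20, 45  # timebox bounds from the workflow above

def plan_session(start: datetime, minutes: int) -> tuple:
    """Return (start, hard stop) for a session; reject unfocused lengths."""
    if not MIN_MINUTES <= minutes <= MAX_MINUTES:
        raise ValueError(
            f"session must be {MIN_MINUTES}-{MAX_MINUTES} minutes, got {minutes}"
        )
    return start, start + timedelta(minutes=minutes)

start, stop = plan_session(datetime(2024, 1, 10, 9, 0), 30)
print(stop.strftime("%H:%M"))  # → 09:30
```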
3. Controlled variation
A core principle of exploratory testing in mobile QA is:
change one variable at a time
Examples:
- lock/unlock state
- background/resume
- network changes
- rotation
- app restart
This keeps results interpretable and bugs reproducible.
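The "one variable at a time" principle can be sketched as a run generator: each run differs from a clean baseline by exactly one condition, so any new failure is attributable to that condition. The condition names below are illustrative:

```python
# Baseline: the "clean user journey" every variation is compared against.
BASELINE = {
    "lock_state": "unlocked",
    "lifecycle": "foreground",
    "network": "wifi",
    "orientation": "portrait",
}

# Single-variable perturbations drawn from the list above.
VARIATIONS = {
    "lock_state": ["locked"],
    "lifecycle": ["background_resume", "app_restart"],
    "network": ["cellular", "airplane_mode"],
    "orientation": ["landscape"],
}

def one_at_a_time(baseline: dict, variations: dict) -> list:
    """Return the baseline run plus one run per single-variable change."""
    runs = [dict(baseline)]  # run 0: the clean baseline
    for var, values in variations.items():
        for value in values:
            run = dict(baseline)
            run[var] = value
            runs.append(run)
    return runs

runs = one_at_a_time(BASELINE, VARIATIONS)
print(len(runs))  # baseline + 6 single-variable runs → 7
```

If a variation run fails while the baseline passes, the changed variable is the prime suspect, which is exactly what makes the bug reproducible.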
Common mobile bugs found through exploratory testing
Exploratory testing is effective at finding low-frequency, high-impact bugs:
- soft locks blocking progression
- state inconsistencies after lifecycle changes
- audio or visual desync after OS events
- UI issues in specific device contexts
These bugs often do not appear in scripted testing because scripted flows assume ideal conditions.
Example: Android reward claim soft lock
In an Android exploratory testing session, repeatedly backgrounding the app while a reward animation was running caused a soft lock.
Result:
- UI remained visible
- reward state never completed
- progression blocked
This issue only appeared under lifecycle interruption conditions.
That is exactly why exploratory testing in mobile QA is necessary.
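The failure mode above can be modelled abstractly. This is an illustrative state machine, not the app's real code: it assumes the claim only completes via an animation-end callback, and that backgrounding cancels the animation without any resume path re-triggering that callback.

```python
class RewardFlow:
    """Illustrative model of a reward claim that completes via animation."""

    def __init__(self):
        self.state = "idle"

    def claim(self):
        self.state = "animating"  # reward animation starts

    def on_animation_end(self):
        # The ONLY path that completes the claim.
        if self.state == "animating":
            self.state = "claimed"

    def on_background(self):
        if self.state == "animating":
            # Bug: the OS cancels the animation, so on_animation_end()
            # never fires, and nothing on resume re-triggers it.
            self.state = "stuck"

flow = RewardFlow()
flow.claim()
flow.on_background()  # user backgrounds mid-animation
print(flow.state)     # → stuck: UI visible, claim incomplete, progression blocked
```

Scripted tests that never interleave a lifecycle event between `claim()` and the animation callback would walk straight past this state.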
Bug evidence in exploratory testing
Exploratory testing requires strong evidence to make findings actionable.
I capture:
- screen recordings during the session
- notes on device and network state
- reproduction steps while context is fresh
- expected vs actual behaviour
A useful exploratory bug report answers:
- What changed?
- What happened?
- What should have happened?
- Can it be reproduced?
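The four questions above map directly onto a report template. A minimal sketch (field names and layout are my own, not a standard format):

```python
def bug_report(changed: str, happened: str, expected: str,
               repro_steps: list) -> str:
    """Assemble a bug report answering the four exploratory questions."""
    lines = [
        f"What changed: {changed}",
        f"What happened: {happened}",
        f"What should have happened: {expected}",
        "Reproduction steps:",
    ]
    lines += [f"  {i}. {step}" for i, step in enumerate(repro_steps, 1)]
    return "\n".join(lines)

report = bug_report(
    changed="app backgrounded during reward animation",
    happened="UI visible but reward never completes; progression blocked",
    expected="reward completes or resumes cleanly after foregrounding",
    repro_steps=["start reward claim",
                 "background the app mid-animation",
                 "return to the app"],
)
print(report)
```

Writing the steps while context is fresh is what makes "Can it be reproduced?" answerable with a yes.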
Key takeaway: exploratory testing in mobile QA
Exploratory testing is structured testing, not random behaviour.
In mobile QA, it matters because:
- users interrupt flows
- apps change state constantly
- real-world conditions introduce timing issues
A good exploratory workflow turns unpredictable behaviour into reproducible bugs.
Full article and case study
This DEV version focuses on the core workflow.
The full version includes:
- expanded structure
- deeper QA examples
- full case study and evidence
Evidence and case study links
Rebel Racing: Charter-based Exploratory & Edge-Case Testing (full artefacts and evidence):
https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/rebel-racing/
QA Chronicles Issue 2: Rebel Racing:
https://kelinacowellqa.github.io/QA-Chronicles-Kelina-Cowell/issues/issue-02-rebel-racing
The case study above expands the workflow into full QA artefacts, test runs, and evidence.
