My #HNGi13 Stage 1 experience testing Delve - a 3D quest-based language learning app
The Challenge
I was tasked with creating a complete QA testing strategy for Delve - a mobile app that teaches languages through 3D games, AI conversations, and gamification.
Key Features I Had to Test:
- 3D quest environments with interactive elements
- AI-powered conversation practice with real-time feedback
- Gamification (points, badges, leaderboards)
- Offline mode with data sync
- Multi-language support
- Payment integration
What I Delivered
1. Test Plan
Created a comprehensive test plan covering:
- 10 testing types, including Functional, Performance, Security, Usability, and Compatibility
- Risk assessment: Identified 10 potential risks, including 3D performance, AI accuracy, and offline sync
- Resource planning: an 8-person QA team, plus tools and devices
- Timeline: March 2025 - January 2026 aligned with project milestones
Key Section - Testing Types:
| Type | Purpose | Tools |
|---|---|---|
| Functional | Does it work? | Manual + Appium (sketch below) |
| Performance | Is it fast? | JMeter, Android Profiler |
| Security | Is data safe? | OWASP ZAP |
| Usability | Easy to use? | Beta testing |
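To make the Functional row concrete, here's a minimal Appium check in Python. This is a sketch only - the server URL, app path, and accessibility IDs are placeholders, not Delve's real identifiers:

```python
# Minimal functional check with the Appium Python client (Appium 2).
# App path and element IDs are hypothetical placeholders.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.app = "/path/to/delve.apk"  # hypothetical build artifact

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Tapping "Start Quest" should land the user on the quest screen.
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "start_quest").click()
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "quest_screen").is_displayed()
finally:
    driver.quit()
```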
2. Non-Functional Requirements (NFR) Document
Defined five categories of quality standards the app must meet:
Performance Requirements:
- App launch: <3 seconds (see the launch-time check after this list)
- 3D quest loading: <5 seconds
- Frame rate: Minimum 30 FPS
- AI responses: <2 seconds
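A quick way to verify the launch budget on a connected Android device is adb's built-in launch timing. A minimal sketch - the package and activity names are hypothetical:

```python
# Sketch: checking the <3s cold-launch budget via `adb shell am start -W`,
# which reports TotalTime (ms) for the launch it waited on.
import re
import subprocess

result = subprocess.run(
    ["adb", "shell", "am", "start", "-W", "-n", "com.example.delve/.MainActivity"],
    capture_output=True, text=True, check=True,
)
total_ms = int(re.search(r"TotalTime:\s+(\d+)", result.stdout).group(1))
assert total_ms < 3000, f"Launch took {total_ms} ms, budget is 3000 ms"
```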
Usability Requirements:
- Works on screens 4.7" to 12.9"
- New users complete first quest in <5 minutes
- Screen reader support and other accessibility features
Security Requirements:
- HTTPS (TLS) encryption for all data in transit
- 256-bit encryption for payments
- Account lockout after 5 failed logins (see the lockout sketch below)
Reliability Requirements:
- 99.5% uptime
- <0.1% crash rate
- Offline mode fully functional for cached quests
Scalability Requirements:
- Support 10,000+ concurrent users (load-test sketch below)
- Handle 100,000+ registered users
- Fast, consistent performance worldwide via a CDN
My QA Approach
1. Risk-Based Prioritization
I didn't try to test everything equally. I asked:
- What breaks the user experience? → 3D performance, AI accuracy
- What's technically complex? → Offline sync, real-time leaderboards
- What impacts revenue? → Payment flows
This led me to focus testing on:
✅ 3D quest loading (must be <5 seconds)
✅ AI conversation accuracy (90%+ required)
✅ Payment security (encryption, error handling)
2. Manual + Automation Mix
- Automated: Login flows, regression tests, API endpoints (see the tagging sketch below)
- Manual: New features, exploratory testing, UX validation
- Both: Critical paths like quest completion
3. Real Device Testing
Tested on 6 physical devices:
- 3 Apple devices (iPhone 13, iPhone 14, iPad Pro)
- 3 Android devices (Samsung Galaxy S22, Pixel 6, OnePlus 9)
Why? Cloud testing misses real-world issues like battery drain and touch responsiveness.
Key Challenges & Solutions
Challenge 1: Testing AI Conversations
- Problem: AI responses are unpredictable
- Solution: Created test datasets with known inputs/outputs and measured accuracy percentages (sketch below)
Challenge 2: 3D Performance on Budget Devices
- Problem: App might lag on older phones
- Solution: Early testing on low-end devices, created fallback 2D mode
Challenge 3: Offline Data Sync
- Problem: Users could lose progress
- Solution: Tested 20+ offline/online transition scenarios (one shown below)
What I Learned
1. Think Like a User, Not Just a Tester
Every test should answer: "Will this frustrate or delight users?"
Example: Testing leaderboard updates isn't just "does it work?" - it's "do users feel motivated by seeing real-time rankings?"
2. Documentation = Accountability
A good test plan isn't just for me - it's for:
- Developers (what to expect)
- Product managers (what's covered)
- Stakeholders (confidence in quality)
3. You Can't Test Everything
With limited time, I had to ruthlessly prioritize. I focused on:
- High-risk features (3D, AI, payments)
- High-impact user journeys (onboarding, first quest)
- Revenue blockers (subscription flows)
4. Tools Are Helpers, Not Solutions
Appium, Postman, and JMeter are powerful - but only if you know what to test and why.
Tools I Used
Test Management: Jira, Confluence
Mobile Automation: Appium, BrowserStack
API Testing: Postman, Newman
Performance: JMeter, Android Profiler, Xcode Instruments
Security: OWASP ZAP
Results
✅ 12-page test plan with clear scope and timeline
✅ 15 detailed test cases covering critical flows
✅ Risk mitigation strategy for 10 identified risks
✅ Resource plan with 8 QA roles and tools
But more importantly:
I learned to think strategically about quality - not just find bugs, but ensure users have a smooth, engaging experience.
Final Thoughts
QA isn't about saying "no" to releases. It's about giving teams confidence to say "yes" by:
- Finding the right bugs at the right time
- Helping prioritize fixes that matter to users
- Balancing thoroughness with practical timelines
This challenge taught me what real-world QA planning looks like - and I'm ready for more!
Thanks for joining me! 🙏
If you're doing QA or interested in testing, let's connect!
Tags: #QA #SoftwareTesting #HNGInternship #HNGi13 #TestAutomation #MobileTesting
Shoutout: @HNGInternship for this learning opportunity!