Every accessible game asks players to configure it for their own needs. But players who are newly disabled, new to gaming, or unfamiliar with accessibility terminology can't configure what they don't know they need.
I call this the configuration barrier — and I built ObservePlay to eliminate it.
The Solution
ObservePlay watches how you play during a 5-stage onboarding session and infers your accessibility profile automatically:
Stage 1: Welcome (no questionnaire)
Stage 2: Input Detection (9 input methods detected)
Stage 3: Visual Assessment (text size + contrast thresholds)
Stage 4: Audio Assessment (3-tone hearing classification)
Stage 5: Profile Review (accept or override)
The inferred profile drives automatic adaptation of game feedback, timing, sizes, contrast, and more.
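To make that adaptation step concrete, here is a minimal sketch of what an inferred profile and the settings derived from it could look like. The field names are illustrative, not ObservePlay's actual schema:

```typescript
// Hypothetical shape of an inferred accessibility profile (illustrative field names).
interface AccessibilityProfile {
  inputMethod: 'mouse' | 'keyboard' | 'touch' | 'switch';
  minReadableTextSize: number;   // px, from the visual assessment
  contrastPreference: 'standard' | 'high';
  hearingCapability: 'full' | 'partial' | 'none';
  clickPrecision: number;        // px radius, from input detection
}

// Adaptation derives concrete UI settings from the profile.
function deriveSettings(p: AccessibilityProfile) {
  return {
    fontSizePx: Math.max(16, p.minReadableTextSize),
    highContrast: p.contrastPreference === 'high',
    visualFeedback: p.hearingCapability !== 'full', // flash/text cues when hearing is limited
  };
}
```

The point is that the player never fills in these fields by hand; the onboarding stages populate them from observed behavior.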
Tech Stack
- Framework: Next.js 16 (App Router) + React 19
- Language: TypeScript (~25,000 lines)
- Audio: Web Audio API (real-time synthesis, no audio files)
- ML: TensorFlow.js (client-side emotion detection via WASM)
- Database: PostgreSQL + pgvector
- Real-time: WebSocket
- Offline: Service Worker + IndexedDB (PWA)
- Testing: Vitest + fast-check (property-based testing)
Architecture
11 service modules connected through a typed event bus:
Profile Learner ────────────┐
Accessibility Copilot ──────┤
Emotion Engine ─────────────┤
NL Controller ──────────────┤
Audio Narrator ─────────────┼──> Event Bus ──> WebSocket Hub ──> Client
AI Companion ───────────────┤
Game Generator ─────────────┤
Research Analyzer ──────────┤
Consent Manager ────────────┤
Copilot Adaptation Learner ─┤
Companion Learning ─────────┘
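A typed event bus like this can be sketched in a few lines of TypeScript. This is a generic illustration (the event names and payloads are made up), not the project's actual implementation:

```typescript
// Minimal typed event bus: event names map to payload types at compile time,
// so a handler for 'emotion:detected' can never receive a profile payload.
type Events = {
  'profile:updated': { userId: string; hearingCapability: string };
  'emotion:detected': { label: string; confidence: number };
};

type Handler<T> = (payload: T) => void;

class EventBus<E extends Record<string, unknown>> {
  private handlers: { [K in keyof E]?: Handler<E[K]>[] } = {};

  on<K extends keyof E>(event: K, handler: Handler<E[K]>): void {
    (this.handlers[event] ??= []).push(handler);
  }

  emit<K extends keyof E>(event: K, payload: E[K]): void {
    for (const h of this.handlers[event] ?? []) h(payload);
  }
}

const bus = new EventBus<Events>();
bus.on('emotion:detected', (e) => console.log(e.label, e.confidence));
bus.emit('emotion:detected', { label: 'frustrated', confidence: 0.82 });
```

Because each module only touches the bus, modules can be added or removed without editing each other, which is what makes an 11-module design manageable.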
Key Design Decisions
Web Audio API for sound synthesis — all game sounds are synthesized in real time. No audio files to load, no network dependency. Four distinct tones (flip: 600 Hz, match: C5-E5, mismatch: 200 Hz, win: C5-E5-G5-C6).
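A rough sketch of how such tones can be synthesized with the Web Audio API. The frequency table mirrors the tones listed above; `playTone` and the envelope timings are assumptions, not the project's code:

```typescript
// Frequencies per game event (Hz). C5 ≈ 523.25, E5 ≈ 659.25, G5 ≈ 783.99, C6 ≈ 1046.50.
const TONES: Record<string, number[]> = {
  flip: [600],
  match: [523.25, 659.25],                  // C5-E5
  mismatch: [200],
  win: [523.25, 659.25, 783.99, 1046.5],    // C5-E5-G5-C6 arpeggio
};

// Play an event's tone sequence with a short decay envelope.
// Runs only in a browser: AudioContext is not available in Node.
function playTone(event: keyof typeof TONES, ctx: AudioContext): void {
  let t = ctx.currentTime;
  for (const freq of TONES[event]) {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = freq;
    osc.connect(gain).connect(ctx.destination);
    gain.gain.setValueAtTime(0.3, t);
    gain.gain.exponentialRampToValueAtTime(0.001, t + 0.15); // quick decay
    osc.start(t);
    osc.stop(t + 0.15);
    t += 0.18; // small gap between notes in a sequence
  }
}
```

Scheduling against `ctx.currentTime` keeps multi-note sequences sample-accurate, which file playback can't easily guarantee.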
Deterministic profile inference — same observation data always produces the same profile. Validated by property-based tests across 10,000 generated inputs.
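A minimal illustration of the idea: inference as a pure function of observation data, so identical inputs always produce identical profiles. The field names and thresholds here are hypothetical, and the real project validates the property with fast-check rather than fixed examples:

```typescript
// Illustrative deterministic inference: no randomness, no Date.now(), no hidden state.
interface Observations {
  avgClickErrorPx: number;     // mean distance from click to target center
  smallestReadFontPx: number;  // smallest size the player read correctly
  toneResponses: boolean[];    // responses to the 3-tone hearing check
}

function inferProfile(obs: Observations) {
  const heard = obs.toneResponses.filter(Boolean).length;
  return {
    clickPrecision: Math.ceil(obs.avgClickErrorPx),
    minReadableTextSize: Math.max(14, obs.smallestReadFontPx),
    hearingCapability: heard === obs.toneResponses.length ? 'full'
                     : heard > 0 ? 'partial' : 'none',
  };
}

// Determinism: identical observations always yield an identical profile.
const obs: Observations = { avgClickErrorPx: 6.4, smallestReadFontPx: 18, toneResponses: [true, true, false] };
console.log(JSON.stringify(inferProfile(obs)) === JSON.stringify(inferProfile(obs))); // true
```

Determinism is what makes the property-based testing tractable: each generated input needs to be checked only once.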
Client-side emotion processing — TensorFlow.js runs facial expression analysis in the browser via WASM. No raw video ever leaves the device. This was a non-negotiable privacy decision.
Progressive Web App — service worker caches game assets and profiles for offline play. Cache-first for assets, network-first for API calls, IndexedDB for structured data.
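The two strategies can be sketched as follows, assuming a simple path-based routing rule (the `/api/` prefix is an assumption, not necessarily the app's actual routes):

```typescript
// Which cache strategy applies to a request path.
type Strategy = 'cache-first' | 'network-first';

function strategyFor(pathname: string): Strategy {
  return pathname.startsWith('/api/') ? 'network-first' : 'cache-first';
}

// Inside a service worker, the two strategies look roughly like this
// (`caches` exists only in browser/worker scopes, not in Node).
async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (strategyFor(url.pathname) === 'cache-first') {
    const cached = await caches.match(request);
    return cached ?? fetch(request);   // fall back to network on cache miss
  }
  try {
    return await fetch(request);       // prefer fresh data for API calls
  } catch {
    const cached = await caches.match(request);
    if (cached) return cached;         // offline fallback
    throw new Error('offline and not cached');
  }
}
```

Cache-first keeps game assets instant and offline-safe; network-first keeps profile data fresh while still degrading gracefully when the network drops.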
Adaptation Rules
// Hearing-based feedback selection
if (profile.hearingCapability === 'none') {
  // Disable audio; enable screen flash + text indicators + card labels
} else if (profile.hearingCapability === 'partial') {
  // Enable audio AND enhanced visual feedback
} else {
  // Standard audio feedback
}

// Card size computation (px): floor of 80, scaled by text size and click precision
const cardSize = Math.max(
  80,
  profile.minReadableTextSize * 4,
  profile.clickPrecision * 8
);
Testing Results
| Metric | Value |
|---|---|
| Test files | 27 |
| Test cases | 638 |
| Pass rate | 100% |
| Execution time | 2.60s |
| Lighthouse accessibility | 96/100 |
| Game Accessibility Guidelines (GAG) compliance | 86% (79/92) |
| Simulated profiles tested | 100,000 |
| Adaptation conflicts found | 0 |
Try It + Give Feedback
I'm writing a research paper on this for ACM TACCESS and need community feedback.
Try the app: https://observeplay-jf9e.vercel.app/
Share feedback (anonymous, 5 min): https://forms.gle/GEt36zsUcUhnGBULA
Especially interested in hearing from:
- Gamers with disabilities — did it correctly detect your needs?
- Accessibility professionals — what's missing?
- Developers — what would you architect differently?
The project is open-source. Contributions welcome.
What do you think about observation-based profiling as an alternative to manual accessibility configuration? Would love to hear your thoughts in the comments.