What if your AI agent could actually use your app?
Not just review your test code. Actually tap buttons, enter text, scroll through lists, take screenshots, and verify everything works.
flutter-skill makes this real. It's an MCP server that gives AI agents eyes and hands inside any running app.
Now available as a skill for Claude Code, Cursor, OpenClaw, and 20+ other agents:
```bash
npx skills add ai-dashboad/flutter-skill
```
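Under the hood it is a standard MCP (Model Context Protocol) server, so you can also wire it into an MCP-capable agent by hand. The sketch below uses Claude Code's `claude mcp add`; the `npx -y flutter-skill` launch command is an assumption about the package's entry point, so check the project docs before relying on it.

```bash
# Hypothetical manual registration (normally `npx skills add` does this for you).
# Assumes `npx -y flutter-skill` starts the MCP server over stdio; verify against
# the flutter-skill README.
claude mcp add flutter-skill -- npx -y flutter-skill
```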
## How it works

- Initialize your app (one-time):

  ```bash
  flutter-skill init
  ```

- Tell the agent what to test:

  > "Test the login flow — enter admin and password123, tap Login, verify Dashboard appears"
The agent screenshots the screen, finds UI elements, interacts with them, and verifies results. No test code. No selectors. Just natural language.
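For example, a first run could look like the sketch below. The `claude -p` headless call is only one way to drive it and is an assumption used for illustration; any supported agent with the skill installed accepts the same prompt interactively.

```bash
# One-time: point flutter-skill at the project so it can drive the running app
flutter-skill init

# Hand the agent a plain-English test (Claude Code's non-interactive -p mode shown;
# other agents take the same prompt in their own UI)
claude -p "Test the login flow: enter admin and password123, tap Login, verify Dashboard appears"
```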
## 8 platforms, 99% pass rate
| Platform | Language | Tests passed |
|---|---|---|
| Flutter iOS/Android/Web | Dart | 21/21 |
| React Native | JS | 24/24 |
| Electron | JS | 24/24 |
| Android Native | Kotlin | 24/24 |
| Tauri | Rust | 23/24 |
| .NET MAUI | C# | 23/24 |
| KMP Desktop | Kotlin | 22/22 |
Total: 181/183 tests passing
## Why use a skill?
Skills are reusable capabilities for AI agents. Install once, use forever:
- One-command install via `npx skills`
- Works with Claude Code, Cursor, Windsurf, Codex, Cline, and 20+ agents
- Schedule tests with cron for continuous testing (see the sketch after this list)
- AI-native: understands natural language prompts
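A minimal scheduling sketch, assuming a headless `claude -p` invocation; the path, prompt, and log file below are placeholders to adapt to your own setup.

```bash
# Hypothetical crontab entry: run the login smoke test every night at 02:00
# and append the agent's report to a log file.
0 2 * * * cd /path/to/your/app && claude -p "Run the login smoke test and report any failures" >> ui-test.log 2>&1
```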
## Demo
The AI autonomously testing a TikTok-level app (10 feature modules).
## Get started
```bash
# Install as agent skill (Claude Code, Cursor, OpenClaw, etc.)
npx skills add ai-dashboad/flutter-skill

# Or install the CLI globally
npm install -g flutter-skill
```
⭐ Star the project on GitHub
What platform would you test first? Drop a comment!