<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: charlieww</title>
    <description>The latest articles on DEV Community by charlieww (@charlieww).</description>
    <link>https://dev.to/charlieww</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3771525%2F31060fcd-ff49-44af-b45c-c550ef31b76d.jpg</url>
      <title>DEV Community: charlieww</title>
      <link>https://dev.to/charlieww</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/charlieww"/>
    <language>en</language>
    <item>
      <title>I tested my app across 8 platforms with zero test code — here's how</title>
      <dc:creator>charlieww</dc:creator>
      <pubDate>Tue, 24 Feb 2026 18:13:35 +0000</pubDate>
      <link>https://dev.to/charlieww/i-tested-my-app-across-8-platforms-with-zero-test-code-heres-how-37gd</link>
      <guid>https://dev.to/charlieww/i-tested-my-app-across-8-platforms-with-zero-test-code-heres-how-37gd</guid>
      <description>&lt;p&gt;Last week I shipped a cross-platform app and needed to test it on Flutter, React Native, iOS, Android, Electron, Tauri, and web. Writing separate test suites for each platform? No thanks.&lt;/p&gt;

&lt;p&gt;Instead, I used an AI agent that could see my app and interact with it. Here is what the workflow looked like:&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;I used &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;flutter-skill&lt;/a&gt;, an open-source MCP server that gives AI agents eyes and hands inside running apps. It connects to your app via a lightweight bridge and exposes 253 tools the AI can use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; flutter-skill
flutter-skill init ./my-app
flutter-skill launch ./my-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing with Natural Language
&lt;/h2&gt;

&lt;p&gt;Instead of writing test code, I just described what to test:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Test the login flow - enter &lt;a href="mailto:test@example.com"&gt;test@example.com&lt;/a&gt; and password123, tap Login, verify the Dashboard appears&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The AI agent automatically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Takes a screenshot to see the current state&lt;/li&gt;
&lt;li&gt;Discovers all interactive elements with semantic refs&lt;/li&gt;
&lt;li&gt;Taps, types, scrolls - just like a human&lt;/li&gt;
&lt;li&gt;Verifies the expected outcome&lt;/li&gt;
&lt;li&gt;Screenshots each step for evidence&lt;/li&gt;
&lt;/ol&gt;
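
&lt;p&gt;The loop above can be sketched in a few lines. Note the tool names (&lt;code&gt;snapshot&lt;/code&gt;, &lt;code&gt;tap&lt;/code&gt;, &lt;code&gt;assertVisible&lt;/code&gt;) and the app model here are invented for illustration, not the actual flutter-skill API:&lt;/p&gt;

```javascript
// Sketch of the perceive-act-verify loop for one step.
// Tool names and app model are illustrative, not the real flutter-skill API.
const app = {
  screen: "Login",
  elements: [
    { ref: "e1", type: "textField", label: "Email" },
    { ref: "e2", type: "textField", label: "Password" },
    { ref: "e3", type: "button", label: "Login" },
  ],
};

function snapshot() {
  // Discover interactive elements, each with a stable semantic ref.
  return app.elements.map((e) => ({ ref: e.ref, label: e.label, type: e.type }));
}

function tap(ref) {
  const el = snapshot().find((e) => e.ref === ref);
  if (el.label === "Login") app.screen = "Dashboard"; // simulate navigation
  return el;
}

function assertVisible(name) {
  return app.screen === name;
}

// The agent grounds "tap Login" in the snapshot, not in a selector:
const login = snapshot().find((e) => e.label === "Login");
tap(login.ref);
console.log(assertVisible("Dashboard")); // true
```

&lt;p&gt;Because the element is found by its semantic label at run time, a renamed key or moved widget doesn't break the step.&lt;/p&gt;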

&lt;h2&gt;
  
  
  The Results
&lt;/h2&gt;

&lt;p&gt;Across 8 platforms, the AI agent completed &lt;strong&gt;562 out of 567 test scenarios&lt;/strong&gt; (99.1% pass rate). The failures were all legitimate bugs it discovered.&lt;/p&gt;

&lt;p&gt;What surprised me most:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero test code written&lt;/strong&gt; - everything was natural language&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-platform for free&lt;/strong&gt; - same test descriptions worked on iOS, Android, web, desktop&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Found real bugs&lt;/strong&gt; - the AI explored edge cases I would not have thought of&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snapshots are 99% more token-efficient than screenshots&lt;/strong&gt; - the accessibility tree gives the AI structured data instead of pixels&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use This vs Traditional Automation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use AI testing when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to test across multiple platforms quickly&lt;/li&gt;
&lt;li&gt;You want to explore edge cases without writing explicit tests&lt;/li&gt;
&lt;li&gt;Your team does not have dedicated SDET resources&lt;/li&gt;
&lt;li&gt;You need fast smoke tests during development&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stick with traditional automation when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need deterministic, repeatable CI/CD tests&lt;/li&gt;
&lt;li&gt;You're running performance benchmarks&lt;/li&gt;
&lt;li&gt;You're testing specific race conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;flutter-skill is open source and free: &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;github.com/ai-dashboad/flutter-skill&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Works with Claude, GPT, Gemini, Cursor, Windsurf, and any MCP-compatible agent.&lt;/p&gt;

&lt;p&gt;Would love to hear if anyone else is using AI agents for testing - what is working for you?&lt;/p&gt;

</description>
      <category>testing</category>
      <category>ai</category>
      <category>flutter</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How AI Semantic Snapshots Replace Screenshots for E2E Testing</title>
      <dc:creator>charlieww</dc:creator>
      <pubDate>Mon, 23 Feb 2026 22:45:30 +0000</pubDate>
      <link>https://dev.to/charlieww/how-ai-semantic-snapshots-replace-screenshots-for-e2e-testing-2pjn</link>
      <guid>https://dev.to/charlieww/how-ai-semantic-snapshots-replace-screenshots-for-e2e-testing-2pjn</guid>
      <description>&lt;h2&gt;
  
  
  The Problem with Screenshots
&lt;/h2&gt;

&lt;p&gt;Every AI-powered testing tool I've seen sends screenshots to the AI model. It works, but it's expensive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100KB+ per screenshot&lt;/li&gt;
&lt;li&gt;Thousands of tokens to process&lt;/li&gt;
&lt;li&gt;Visual recognition needed to find elements&lt;/li&gt;
&lt;li&gt;500ms-2s latency per analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Semantic Snapshots: A Better Way
&lt;/h2&gt;

&lt;p&gt;What if instead of a screenshot, you sent the AI a structured description of every interactive element — its position, label, type, and state?&lt;/p&gt;

&lt;p&gt;That's what semantic snapshots do. In &lt;strong&gt;1ms&lt;/strong&gt;, they extract the complete UI structure. The AI gets a machine-readable picture for a fraction of the token cost.&lt;/p&gt;
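
&lt;p&gt;As a rough illustration (the field names below are invented, not the actual snapshot schema), a snapshot is just compact structured data the model can query directly:&lt;/p&gt;

```javascript
// Hypothetical semantic snapshot -- field names are invented for
// illustration, not the actual flutter-skill schema.
const snapshot = {
  screen: "Login",
  elements: [
    { ref: "e1", type: "textField", label: "Email", value: "", enabled: true },
    { ref: "e2", type: "textField", label: "Password", value: "", enabled: true },
    { ref: "e3", type: "button", label: "Sign in", enabled: false },
  ],
};

// The whole UI state serializes to a few hundred bytes of text...
const payload = JSON.stringify(snapshot);
console.log(payload.length + " bytes");

// ...and the model queries it structurally instead of reading pixels:
const blocked = snapshot.elements.filter((e) => e.enabled === false);
console.log(blocked.map((e) => e.label)); // [ 'Sign in' ]
```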

&lt;h2&gt;
  
  
  New: Form Validation Detection
&lt;/h2&gt;

&lt;p&gt;The latest feature detects form validation rules automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Editor type&lt;/strong&gt;: CodeMirror, Draft.js, Tiptap, ProseMirror, Quill&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Required empty fields&lt;/strong&gt;: Which fields need to be filled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why buttons are disabled&lt;/strong&gt;: Infers missing required fields&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best input method&lt;/strong&gt;: Recommends how to input text per framework&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means AI agents can fill and submit forms on the &lt;strong&gt;first try&lt;/strong&gt; — no trial and error.&lt;/p&gt;
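
&lt;p&gt;A minimal sketch of the disabled-button inference, assuming the snapshot exposes a required flag and current value per field (the shape is invented, not the real schema):&lt;/p&gt;

```javascript
// Sketch: infer why a submit button is disabled from required-but-empty
// fields. The form shape is illustrative, not the real snapshot schema.
const form = {
  fields: [
    { label: "Email", required: true, value: "a@b.com" },
    { label: "Password", required: true, value: "" },
    { label: "Nickname", required: false, value: "" },
  ],
  submit: { label: "Create account", enabled: false },
};

function explainDisabled(f) {
  const missing = f.fields
    .filter((x) => x.required)
    .filter((x) => x.value === "")
    .map((x) => x.label);
  if (missing.length === 0) return "unknown reason";
  return "missing required fields: " + missing.join(", ");
}

console.log(explainDisabled(form)); // "missing required fields: Password"
```

&lt;p&gt;With that hint, the agent fills Password before tapping submit rather than tapping a dead button and retrying.&lt;/p&gt;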

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx flutter-skill@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;253 MCP tools, 10 platforms, 1ms latency.&lt;/p&gt;

&lt;p&gt;Open source: &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;github.com/ai-dashboad/flutter-skill&lt;/a&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>ai</category>
      <category>webdev</category>
      <category>mcp</category>
    </item>
    <item>
      <title>How I Stopped Writing Fragile E2E Tests and Let AI Handle It</title>
      <dc:creator>charlieww</dc:creator>
      <pubDate>Mon, 23 Feb 2026 19:25:11 +0000</pubDate>
      <link>https://dev.to/charlieww/how-i-stopped-writing-fragile-e2e-tests-and-let-ai-handle-it-1hm3</link>
      <guid>https://dev.to/charlieww/how-i-stopped-writing-fragile-e2e-tests-and-let-ai-handle-it-1hm3</guid>
      <description>&lt;p&gt;Last month I spent 4 hours debugging a Playwright test that broke because someone renamed a CSS class. Sound familiar?&lt;/p&gt;

&lt;p&gt;I decided to try a different approach: what if the test framework could &lt;em&gt;see&lt;/em&gt; the app like a human does, instead of relying on brittle selectors?&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;An MCP (Model Context Protocol) server that gives AI agents — Claude, GPT, Cursor, Copilot — direct access to running applications. The AI can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch and connect to apps via CDP&lt;/li&gt;
&lt;li&gt;Tap elements, fill forms, scroll, navigate&lt;/li&gt;
&lt;li&gt;Take screenshots and analyze UI snapshots&lt;/li&gt;
&lt;li&gt;Run assertions in natural language&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Key Trick: Semantic Snapshots
&lt;/h2&gt;

&lt;p&gt;Instead of sending full screenshots (expensive in tokens), I built a snapshot system that extracts the UI's semantic structure — interactive elements, their positions, labels, states. The AI gets a complete picture of the UI in ~2ms and a few hundred tokens.&lt;/p&gt;

&lt;p&gt;Compare that to a screenshot: ~100KB of base64, thousands of tokens, and the AI still has to "guess" where buttons are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Numbers
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Traditional&lt;/th&gt;
&lt;th&gt;AI-Driven&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tap latency&lt;/td&gt;
&lt;td&gt;50-200ms&lt;/td&gt;
&lt;td&gt;1ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UI analysis&lt;/td&gt;
&lt;td&gt;500ms-2s (screenshot)&lt;/td&gt;
&lt;td&gt;2ms (snapshot)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test brittleness&lt;/td&gt;
&lt;td&gt;High (selector-dependent)&lt;/td&gt;
&lt;td&gt;Low (semantic)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Platforms supported&lt;/td&gt;
&lt;td&gt;Usually 1-2&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Supported Platforms
&lt;/h2&gt;

&lt;p&gt;Flutter, React Native, iOS, Android, Web (Chrome/Firefox/Safari), Electron, Tauri, KMP, .NET MAUI.&lt;/p&gt;

&lt;p&gt;253 MCP tools total. Video recording, API testing, mock responses, parallel multi-device — it's all there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx flutter-skill@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add to your MCP config and your AI assistant can start testing immediately.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;ai-dashboad/flutter-skill&lt;/a&gt;&lt;br&gt;
npm: &lt;a href="https://www.npmjs.com/package/flutter-skill" rel="noopener noreferrer"&gt;flutter-skill&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love to hear from anyone doing E2E testing — what's the most annoying part of your current setup?&lt;/p&gt;

</description>
      <category>testing</category>
      <category>mcp</category>
      <category>webdev</category>
      <category>aitesting</category>
    </item>
    <item>
      <title>I Replaced 500 Lines of E2E Tests with One AI Prompt</title>
      <dc:creator>charlieww</dc:creator>
      <pubDate>Mon, 23 Feb 2026 09:44:31 +0000</pubDate>
      <link>https://dev.to/charlieww/i-replaced-500-lines-of-e2e-tests-with-one-ai-prompt-40am</link>
      <guid>https://dev.to/charlieww/i-replaced-500-lines-of-e2e-tests-with-one-ai-prompt-40am</guid>
      <description>&lt;h2&gt;
  
  
  The Breaking Point
&lt;/h2&gt;

&lt;p&gt;I had 500+ lines of E2E test code across three platforms. Every UI change broke dozens of selectors. Every framework migration meant rewriting test suites.&lt;/p&gt;

&lt;p&gt;Then I tried something different: &lt;strong&gt;let AI control the app directly.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is flutter-skill?
&lt;/h2&gt;

&lt;p&gt;An MCP server with &lt;strong&gt;253 tools&lt;/strong&gt; that lets Claude, Cursor, or any MCP client:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;See&lt;/strong&gt; your app through screenshots (31ms capture)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interact&lt;/strong&gt; like a human — tap, scroll, type (1ms latency)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test across 10 platforms&lt;/strong&gt; with natural language&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-adapt&lt;/strong&gt; when UI changes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before vs After
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Traditional (Playwright — 50+ lines)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://app.example.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;waitForSelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#register-button&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#register-button&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[name=email]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test@example.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// ... 50 more lines&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  AI-Driven (1 line)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flutter-skill &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="s2"&gt;"Open registration, fill email and password, accept terms, submit, verify welcome page"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;99.8% less code. Zero maintenance.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Compares
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;MCP Tools&lt;/th&gt;
&lt;th&gt;Platforms&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;flutter-skill&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;253&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;10&lt;/strong&gt; (Flutter, RN, iOS, Android, Web, Electron, Tauri, KMP, .NET MAUI, CDP)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Playwright MCP&lt;/td&gt;
&lt;td&gt;~33&lt;/td&gt;
&lt;td&gt;1 (Web only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Appium&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;2 (iOS + Android)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Performance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Tap latency: &lt;strong&gt;1ms&lt;/strong&gt; (near hardware limit)&lt;/li&gt;
&lt;li&gt;Screenshot: &lt;strong&gt;31ms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;UI analysis: &lt;strong&gt;2ms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Compare: Selenium 100-500ms, Appium 200-1000ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;181 test scenarios across 8 platforms: &lt;strong&gt;99% pass rate&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; flutter-skill
flutter-skill init &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; flutter-skill demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;github.com/ai-dashboad/flutter-skill&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open source, MIT licensed. Happy to answer questions!&lt;/p&gt;

</description>
      <category>testing</category>
    </item>
    <item>
      <title>Zero-code E2E testing for any app with OpenClaw + flutter-skill</title>
      <dc:creator>charlieww</dc:creator>
      <pubDate>Sat, 14 Feb 2026 13:41:24 +0000</pubDate>
      <link>https://dev.to/charlieww/zero-code-e2e-testing-for-any-app-with-openclaw-flutter-skill-f00</link>
      <guid>https://dev.to/charlieww/zero-code-e2e-testing-for-any-app-with-openclaw-flutter-skill-f00</guid>
      <description>&lt;p&gt;What if your AI agent could actually &lt;em&gt;use&lt;/em&gt; your app?&lt;/p&gt;

&lt;p&gt;Not review your test code. Actually tap buttons, enter text, scroll through lists, take screenshots, and verify everything works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;flutter-skill&lt;/strong&gt; makes this real. It's an MCP server that gives AI agents eyes and hands inside any running app.&lt;/p&gt;

&lt;p&gt;Now available as a skill for Claude Code, Cursor, OpenClaw, and 20+ other agents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx skills add ai-dashboad/flutter-skill
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Initialize your app (one-time):
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flutter-skill init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Tell the agent what to test:&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;"Test the login flow — enter admin and password123, tap Login, verify Dashboard appears"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent screenshots the screen, finds UI elements, interacts with them, and verifies results. No test code. No selectors. Just natural language.&lt;/p&gt;
&lt;h2&gt;
  
  
  8 platforms, 99% pass rate
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;SDK&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Flutter iOS/Android/Web&lt;/td&gt;
&lt;td&gt;Dart&lt;/td&gt;
&lt;td&gt;21/21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;React Native&lt;/td&gt;
&lt;td&gt;JS&lt;/td&gt;
&lt;td&gt;24/24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Electron&lt;/td&gt;
&lt;td&gt;JS&lt;/td&gt;
&lt;td&gt;24/24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Android Native&lt;/td&gt;
&lt;td&gt;Kotlin&lt;/td&gt;
&lt;td&gt;24/24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tauri&lt;/td&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;23/24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;.NET MAUI&lt;/td&gt;
&lt;td&gt;C#&lt;/td&gt;
&lt;td&gt;23/24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;KMP Desktop&lt;/td&gt;
&lt;td&gt;Kotlin&lt;/td&gt;
&lt;td&gt;22/22&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Total: 181/183 tests passing&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why use a skill?
&lt;/h2&gt;

&lt;p&gt;Skills are reusable capabilities for AI agents. Install once, use forever:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One-command install via &lt;code&gt;npx skills&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Works with Claude Code, Cursor, Windsurf, Codex, Cline, and 20+ agents&lt;/li&gt;
&lt;li&gt;Schedule tests with cron for continuous testing&lt;/li&gt;
&lt;li&gt;AI-native: understands natural language prompts&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;AI autonomously testing a TikTok-style demo app (10 feature modules):&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://github-production-user-asset-6210df.s3.amazonaws.com/6106454/549827272-d4617c73-043f-424c-9a9a-1a61d4c2d3c6.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;amp;X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20260327%2Fus-east-1%2Fs3%2Faws4_request&amp;amp;X-Amz-Date=20260327T201607Z&amp;amp;X-Amz-Expires=300&amp;amp;X-Amz-Signature=60a19a6fdd36600af3f0c06e87583ac2b852c4e4b8fac8ddb408a76f98d1bfdc&amp;amp;X-Amz-SignedHeaders=host" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;github-production-user-asset-6210df.s3.amazonaws.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install as agent skill (Claude Code, Cursor, OpenClaw, etc.)&lt;/span&gt;
npx skills add ai-dashboad/flutter-skill

&lt;span class="c"&gt;# Or install CLI globally&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; flutter-skill
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⭐ &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;What platform would you test first? Drop a comment!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>flutter</category>
      <category>mcp</category>
      <category>testing</category>
    </item>
    <item>
      <title>How to Test Any App with AI in 30 Seconds — Flutter, React Native, iOS, Android &amp; More</title>
      <dc:creator>charlieww</dc:creator>
      <pubDate>Fri, 13 Feb 2026 18:36:47 +0000</pubDate>
      <link>https://dev.to/charlieww/how-to-test-any-app-with-ai-in-30-seconds-flutter-react-native-ios-android-more-j6c</link>
      <guid>https://dev.to/charlieww/how-to-test-any-app-with-ai-in-30-seconds-flutter-react-native-ios-android-more-j6c</guid>
      <description>&lt;h1&gt;
  
  
  How to Test Any App with AI in 30 Seconds
&lt;/h1&gt;

&lt;p&gt;What if testing your app required zero test code?&lt;/p&gt;

&lt;p&gt;Not "low-code testing." Not "AI-assisted test generation." Literally zero lines of test code — you describe what should happen, and AI does it.&lt;/p&gt;

&lt;p&gt;That's what we built with &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;flutter-skill&lt;/a&gt;: an open-source MCP server that gives AI agents eyes and hands inside any running app.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;E2E testing is universally hated for a reason:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// This breaks every time someone moves a button
final loginButton = find.byKey(Key('loginBtn'));
await tester.tap(loginButton);
await tester.pumpAndSettle();
final emailField = find.byKey(Key('emailField'));
await tester.enterText(emailField, 'test@example.com');
// ... 50 more lines of brittle selectors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You're not testing your app. You're maintaining a second codebase that mirrors your UI. Every refactor breaks it. Every design change means rewriting tests.&lt;/p&gt;

&lt;p&gt;And it gets worse: &lt;strong&gt;every platform has its own testing framework.&lt;/strong&gt; Flutter has &lt;code&gt;integration_test&lt;/code&gt;. iOS has XCUITest. Android has Espresso. React Native has Detox. Web has Playwright. Each with its own API, its own quirks, its own debug cycle.&lt;/p&gt;

&lt;p&gt;What if there was one tool for all of them?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Idea: Let AI Be the User
&lt;/h2&gt;

&lt;p&gt;Instead of writing robot instructions, what if we just... talked to the robot?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Tap the login button, enter test@email.com as the email,
enter password123 as the password, tap submit,
and verify the dashboard loads."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's a complete E2E test. No selectors. No framework-specific code. No maintenance when the UI changes — because AI understands what a "login button" looks like, regardless of its internal key.&lt;/p&gt;

&lt;p&gt;This is what MCP (Model Context Protocol) makes possible. MCP lets AI tools like Claude, Cursor, and Windsurf connect to external services. flutter-skill is one of those services — it bridges AI to your running app's UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The architecture is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────┐     MCP      ┌────────────────┐   WebSocket   ┌─────────────┐
│  AI Client   │ ◄──────────► │  flutter-skill  │ ◄────────────► │  Your App    │
│ (Claude,     │   JSON-RPC   │  (MCP Server)   │   JSON-RPC    │  (any        │
│  Cursor,     │              │                 │   on :18118   │   platform)  │
│  Windsurf)   │              └────────────────┘               └─────────────┘
└─────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Your app&lt;/strong&gt; includes a lightweight SDK (a few lines of code) that connects via WebSocket&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;flutter-skill&lt;/strong&gt; runs as an MCP server, translating AI commands into app interactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your AI tool&lt;/strong&gt; sends natural language instructions, which flutter-skill converts to precise UI operations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The SDK exposes your app's accessibility tree — every button, text field, label, and container — so the AI can see exactly what's on screen.&lt;/p&gt;
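
&lt;p&gt;Conceptually, exposing the tree amounts to flattening it into the element list the AI consumes. A sketch, assuming a simple node shape (not the SDK's internal format):&lt;/p&gt;

```javascript
// Sketch: flatten an accessibility tree into a flat element list.
// The node shape is illustrative, not the SDK's internal format.
const tree = {
  type: "container",
  children: [
    { type: "textField", label: "Email", children: [] },
    {
      type: "container",
      children: [{ type: "button", label: "Login", children: [] }],
    },
  ],
};

function flatten(node, out = []) {
  // Keep interactive leaves; recurse through layout containers.
  if (node.type !== "container") out.push({ type: node.type, label: node.label });
  for (const child of node.children) flatten(child, out);
  return out;
}

console.log(flatten(tree));
// [ { type: 'textField', label: 'Email' }, { type: 'button', label: 'Login' } ]
```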

&lt;h2&gt;
  
  
  Setup: Actually 30 Seconds
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Install (5 seconds)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; flutter-skill
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Initialize your project (10 seconds)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;your-app
flutter-skill init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This auto-detects your project type and patches your entry point:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pubspec.yaml&lt;/code&gt; → Flutter&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Package.swift&lt;/code&gt; → iOS native&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;build.gradle.kts&lt;/code&gt; + &lt;code&gt;AndroidManifest.xml&lt;/code&gt; → Android native&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;package.json&lt;/code&gt; + &lt;code&gt;react-native&lt;/code&gt; → React Native&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;index.html&lt;/code&gt; → Web&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;package.json&lt;/code&gt; + &lt;code&gt;electron&lt;/code&gt; → Electron&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Cargo.toml&lt;/code&gt; + &lt;code&gt;tauri&lt;/code&gt; → Tauri&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;build.gradle.kts&lt;/code&gt; + &lt;code&gt;kotlin&lt;/code&gt; → KMP&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.csproj&lt;/code&gt; + &lt;code&gt;Maui&lt;/code&gt; → .NET MAUI&lt;/li&gt;
&lt;/ul&gt;
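
&lt;p&gt;The detection step boils down to checking for marker files. A simplified sketch of that logic (not the actual implementation, which also inspects file contents to disambiguate):&lt;/p&gt;

```javascript
// Sketch of project-type detection from marker files. Ordering and rules
// are simplified from the list above, not the actual implementation,
// which also reads file contents (e.g. package.json dependencies).
function detectProject(files) {
  const has = (name) => files.includes(name);
  if (has("pubspec.yaml")) return "flutter";
  if (has("Package.swift")) return "ios";
  if (has("Cargo.toml")) return "tauri";
  if (has("package.json")) return "react-native-or-electron";
  if (has("build.gradle.kts")) return "android-or-kmp";
  if (has("index.html")) return "web";
  return "unknown";
}

console.log(detectProject(["pubspec.yaml", "lib/main.dart"])); // "flutter"
console.log(detectProject(["Cargo.toml", "src-tauri/tauri.conf.json"])); // "tauri"
```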

&lt;h3&gt;
  
  
  Step 3: Add to your AI tool (15 seconds)
&lt;/h3&gt;

&lt;p&gt;Add to your MCP config (e.g., Claude Desktop &lt;code&gt;claude_desktop_config.json&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"flutter-skill"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"flutter-skill"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Your AI can now see and interact with your app.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI Can Do
&lt;/h2&gt;

&lt;p&gt;Once connected, your AI has access to 40+ tools:&lt;/p&gt;

&lt;h3&gt;
  
  
  Inspection
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;inspect&lt;/code&gt;&lt;/strong&gt; — See the full UI element tree (accessibility labels, types, positions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;get_element_details&lt;/code&gt;&lt;/strong&gt; — Deep-dive into any specific element&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Interaction
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;tap&lt;/code&gt;&lt;/strong&gt; — Tap any element by description or index&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;enter_text&lt;/code&gt;&lt;/strong&gt; — Type into text fields&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;scroll&lt;/code&gt;&lt;/strong&gt; — Scroll in any direction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;swipe&lt;/code&gt;&lt;/strong&gt; — Swipe gestures (e.g., dismiss, navigate)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;long_press&lt;/code&gt;&lt;/strong&gt; — Long press for context menus&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;screenshot&lt;/code&gt;&lt;/strong&gt; — Capture what the app looks like right now&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;assert_exists&lt;/code&gt;&lt;/strong&gt; — Verify an element is on screen&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;get_text&lt;/code&gt;&lt;/strong&gt; — Read text content from any element&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Navigation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;go_back&lt;/code&gt;&lt;/strong&gt; — Navigate back&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;open_url&lt;/code&gt;&lt;/strong&gt; — Deep link to any route&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;eval&lt;/code&gt;&lt;/strong&gt; — Execute platform-native code (Dart, JS, Swift, Kotlin)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;get_logs&lt;/code&gt;&lt;/strong&gt; — Read app console output&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real Example: Testing a Login Flow
&lt;/h2&gt;

&lt;p&gt;Here's what happens when you tell Claude: &lt;em&gt;"Test the login flow with invalid credentials and verify the error message."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Claude (via flutter-skill):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. inspect() → sees the login screen with email field, password field, submit button
2. tap(element: "Email field")
3. enter_text(text: "bad@email.com")
4. tap(element: "Password field")
5. enter_text(text: "wrongpassword")
6. tap(element: "Submit")
7. screenshot() → captures the error state
8. assert_exists(element: "Invalid credentials") → ✅ verified
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No test file created. No selectors maintained. If the UI changes tomorrow, the AI adapts — it looks for "Submit" by understanding the UI, not by memorizing a widget key.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;We tested flutter-skill across 8 platforms with a comprehensive E2E test suite:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Tests&lt;/th&gt;
&lt;th&gt;Passing&lt;/th&gt;
&lt;th&gt;Rate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Flutter iOS&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flutter Web&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Electron&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Android Native&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;KMP Desktop&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;React Native&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tauri&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;95.8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;.NET MAUI&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;95.8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;183&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;181&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;99.0%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every test is AI-driven. Zero hand-written test code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons from Building for 8 Platforms
&lt;/h2&gt;

&lt;p&gt;Building SDKs for 8 platforms taught us things no tutorial covers:&lt;/p&gt;

&lt;h3&gt;
  
  
  Android: PNG Screenshots Kill WebSocket
&lt;/h3&gt;

&lt;p&gt;Full-resolution PNG screenshots on Android are huge. Sending them over WebSocket caused timeouts. The fix: JPEG at 80% quality, downscaled to 720p. AI reads the UI just fine at lower resolution, and it saves ~90% bandwidth.&lt;/p&gt;
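&lt;p&gt;The downscaling math is simple enough to sketch. This is an illustrative version (not the SDK's actual code, which is Kotlin): scale any capture so its shorter side is at most 720px, preserving aspect ratio.&lt;/p&gt;

```javascript
// Illustrative sketch of the resolution math behind the fix, not the SDK's
// actual (Kotlin) code. Scale a capture so its shorter side is at most 720px,
// preserving aspect ratio; JPEG compression then runs on far fewer pixels.
function downscaleTo720p(width, height) {
  const shortSide = Math.min(width, height);
  if (shortSide > 720) {
    const scale = 720 / shortSide;
    return { width: Math.round(width * scale), height: Math.round(height * scale) };
  }
  return { width, height }; // already small enough, leave it alone
}

// A 1440x3120 phone capture becomes 720x1560: a quarter of the pixels
// before the JPEG quality reduction is even applied.
console.log(downscaleTo720p(1440, 3120));
```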

&lt;h3&gt;
  
  
  Tauri: eval() Is Fire-and-Forget
&lt;/h3&gt;

&lt;p&gt;Tauri v2's &lt;code&gt;eval()&lt;/code&gt; function doesn't return values. It executes JavaScript in the webview and... that's it. No callback, no promise, no return.&lt;/p&gt;

&lt;p&gt;Our solution: open a secondary WebSocket on port 18120. The JavaScript sends its result there, and Rust receives it via a oneshot channel. Three ports total: HTTP health (18118), WS commands (18119), WS results (18120).&lt;/p&gt;
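&lt;p&gt;The webview-side half of that pattern can be sketched in a few lines. Everything here is illustrative (the envelope fields and function names are assumptions, not flutter-skill's actual protocol): the injected JavaScript runs the snippet, then reports the result over the results socket itself, since &lt;code&gt;eval()&lt;/code&gt; gives it no other way home.&lt;/p&gt;

```javascript
// Illustrative sketch, not flutter-skill's actual protocol: run the eval'd
// snippet and report its result over the secondary results socket (18120),
// because Tauri's eval() offers no return channel of its own.
function buildResultEnvelope(commandId, run) {
  try {
    return JSON.stringify({ id: commandId, ok: true, result: run() });
  } catch (err) {
    return JSON.stringify({ id: commandId, ok: false, error: String(err) });
  }
}

function runAndReport(commandId, run) {
  const ws = new WebSocket("ws://127.0.0.1:18120");
  ws.onopen = () => {
    ws.send(buildResultEnvelope(commandId, run)); // Rust resolves its oneshot here
    ws.close();
  };
}
```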

&lt;p&gt;We also had to add &lt;code&gt;ws://127.0.0.1:*&lt;/code&gt; to Tauri's CSP, otherwise WebSocket connections from the &lt;code&gt;tauri://&lt;/code&gt; origin to localhost are silently blocked.&lt;/p&gt;
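&lt;p&gt;In Tauri v2 that lives under &lt;code&gt;app.security.csp&lt;/code&gt; in &lt;code&gt;tauri.conf.json&lt;/code&gt;. A minimal sketch (trim the directives to whatever your app actually needs):&lt;/p&gt;

```json
{
  "app": {
    "security": {
      "csp": "default-src 'self'; connect-src 'self' ws://127.0.0.1:*"
    }
  }
}
```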

&lt;h3&gt;
  
  
  React Native: Skip the Full Build
&lt;/h3&gt;

&lt;p&gt;Building a full React Native project requires native modules, CocoaPods, Gradle — the works. For testing, we used a Node.js mock that implements the bridge protocol directly. Much faster, same result.&lt;/p&gt;
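&lt;p&gt;A mock like that can be surprisingly small. Here is an illustrative sketch in Node.js; the command names and reply shapes are assumptions, not the real bridge protocol, but they show the idea: a plain dispatcher stands in for a full native build.&lt;/p&gt;

```javascript
// Illustrative mock bridge. The real flutter-skill protocol is richer, and
// these command names and reply shapes are assumptions; the point is that a
// plain dispatcher can answer the agent's commands with no native build behind it.
const screen = {
  elements: ["Email field", "Password field", "Submit"],
  texts: {},
};

function handleCommand(cmd) {
  switch (cmd.tool) {
    case "inspect":
      return { ok: true, elements: screen.elements };
    case "tap":
      return { ok: screen.elements.includes(cmd.element), tapped: cmd.element };
    case "enter_text":
      screen.texts[cmd.element] = cmd.text;
      return { ok: true };
    default:
      return { ok: false, error: "unknown tool: " + cmd.tool };
  }
}
```

Hook a dispatcher like this up to a WebSocket server on the bridge port and protocol-level tests run against it exactly as they would against a real app.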

&lt;h3&gt;
  
  
  The go_back Race Condition
&lt;/h3&gt;

&lt;p&gt;On Android, clearing &lt;code&gt;currentActivity&lt;/code&gt; on &lt;code&gt;onActivityPaused&lt;/code&gt; caused a race condition with &lt;code&gt;go_back&lt;/code&gt;. The previous activity pauses before the new one resumes, leaving a brief window where the SDK thinks there's no activity. Fix: only clear on &lt;code&gt;onActivityDestroyed&lt;/code&gt;.&lt;/p&gt;
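&lt;p&gt;The fix is easy to model as a tiny state machine (sketched here in JavaScript for illustration; the real SDK is Kotlin). During a back navigation the callback order is pause(B), resume(A), destroy(B), so a pause handler that clears the reference opens exactly the window described above, while clearing only on destroy, and only if the destroyed activity is still the one we hold, does not.&lt;/p&gt;

```javascript
// Illustrative state machine of the fix (the real SDK is Kotlin).
// onActivityPaused is intentionally a no-op: the old activity pauses
// before the new one resumes, so clearing there opens a race window.
class ActivityTracker {
  constructor() {
    this.current = null;
  }
  onActivityResumed(activity) {
    this.current = activity;
  }
  onActivityPaused(activity) {
    // no-op on purpose: see the race described above
  }
  onActivityDestroyed(activity) {
    // only clear if the destroyed activity is still the one we hold
    if (this.current === activity) this.current = null;
  }
}
```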

&lt;h2&gt;
  
  
  When to Use This (and When Not To)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use flutter-skill when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to test user-facing flows without writing test code&lt;/li&gt;
&lt;li&gt;You're building across multiple platforms and want one testing approach&lt;/li&gt;
&lt;li&gt;You're doing vibe coding and need AI to verify what it builds&lt;/li&gt;
&lt;li&gt;You want to prototype and test simultaneously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Don't use it for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unit testing (that's a different problem)&lt;/li&gt;
&lt;li&gt;Performance benchmarking (AI interaction adds latency)&lt;/li&gt;
&lt;li&gt;Tests that need to run in &amp;lt; 1 second (AI thinking time)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
npm i &lt;span class="nt"&gt;-g&lt;/span&gt; flutter-skill

&lt;span class="c"&gt;# Auto-detect and configure your project&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;your-app
flutter-skill init

&lt;span class="c"&gt;# Or try the built-in demo&lt;/span&gt;
flutter-skill demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add it to your MCP config, then start talking to your app through AI.&lt;/p&gt;
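&lt;p&gt;The config entry follows the standard &lt;code&gt;mcpServers&lt;/code&gt; shape. The &lt;code&gt;command&lt;/code&gt; and &lt;code&gt;args&lt;/code&gt; below are placeholders; check the project README for the exact invocation:&lt;/p&gt;

```json
{
  "mcpServers": {
    "flutter-skill": {
      "command": "flutter-skill",
      "args": []
    }
  }
}
```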

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/ai-dashboad/flutter-skill" rel="noopener noreferrer"&gt;github.com/ai-dashboad/flutter-skill&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed. Contributions welcome.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;flutter-skill supports Flutter, iOS, Android, Web, Electron, Tauri, KMP, React Native, and .NET MAUI. Works with Claude, Cursor, Windsurf, and any MCP-compatible AI tool.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
    </item>
  </channel>
</rss>
