<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jordan Demeulenaere</title>
    <description>The latest articles on DEV Community by Jordan Demeulenaere (@jdemeulenaere).</description>
    <link>https://dev.to/jdemeulenaere</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3755520%2F1acde6b4-56fa-4e79-8bf8-ade85d7c6583.jpg</url>
      <title>DEV Community: Jordan Demeulenaere</title>
      <link>https://dev.to/jdemeulenaere</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jdemeulenaere"/>
    <language>en</language>
    <item>
      <title>Vibe coding mobile apps with Compose Driver</title>
      <dc:creator>Jordan Demeulenaere</dc:creator>
      <pubDate>Fri, 06 Feb 2026 17:54:50 +0000</pubDate>
      <link>https://dev.to/jdemeulenaere/vibe-coding-mobile-apps-with-compose-driver-3379</link>
      <guid>https://dev.to/jdemeulenaere/vibe-coding-mobile-apps-with-compose-driver-3379</guid>
      <description>&lt;p&gt;I've been experimenting with AI coding assistants like &lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt; lately and, as an Android engineer, it's been quite fun using them for hobby &lt;a href="https://developer.android.com/compose" rel="noopener noreferrer"&gt;Compose&lt;/a&gt; projects.&lt;/p&gt;

&lt;p&gt;However, the feedback loop is often much tighter for web development, where these AI tools usually have browser instrumentation to inspect the DOM and verify their work instantly. As mobile developers, we feel a bit left out. Asking an AI to build an Android screen usually means generating code that can't be checked without manually running the app on a device or emulator.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;Compose Driver&lt;/strong&gt; to bridge that gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is it?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/jdemeulenaere/compose-driver" rel="noopener noreferrer"&gt;Compose Driver&lt;/a&gt; is a library and Gradle plugin that lets AI agents "drive" your Jetpack Compose app. It works by wrapping your UI in a test harness that listens for HTTP requests.&lt;/p&gt;

&lt;p&gt;This means you can have an AI agent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Query the UI&lt;/strong&gt; to see what buttons or text are on the screen.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interact&lt;/strong&gt; by clicking, swiping, or typing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify&lt;/strong&gt; the result by printing the UI tree, taking a screenshot, or recording a GIF.&lt;/li&gt;
&lt;/ol&gt;
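
&lt;p&gt;From the agent's side, those three steps are just HTTP calls against the driver's local server. Here is a minimal Kotlin sketch of what such a client could look like; the port and the &lt;code&gt;text&lt;/code&gt; query parameter are assumptions for illustration, while &lt;code&gt;/click&lt;/code&gt; and &lt;code&gt;/screenshot&lt;/code&gt; match the endpoints in the driver's core.&lt;/p&gt;

```kotlin
import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.charset.StandardCharsets

// Base address of the driver's local server; the port is an assumption.
const val DRIVER_URL = "http://localhost:8080"

// Build the query URL for an endpoint that matches a node by its text.
// The "text" parameter name is an assumption for illustration.
fun endpointUrl(endpoint: String, text: String): String =
    "$DRIVER_URL/$endpoint?text=" + URLEncoder.encode(text, StandardCharsets.UTF_8)

// One "interact, then verify" round trip, as two plain GET requests.
fun clickAndVerify(client: HttpClient, button: String): ByteArray {
    val click = HttpRequest.newBuilder(URI.create(endpointUrl("click", button))).build()
    client.send(click, HttpResponse.BodyHandlers.ofString()) // server answers "ok"
    val shot = HttpRequest.newBuilder(URI.create(endpointUrl("screenshot", button))).build()
    return client.send(shot, HttpResponse.BodyHandlers.ofByteArray()).body() // PNG bytes
}

fun main() {
    // The URLs an agent would request to click a button, then screenshot it:
    println(endpointUrl("click", "Log in"))   // http://localhost:8080/click?text=Log+in
    println(endpointUrl("screenshot", "Feed"))
}
```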

&lt;p&gt;It runs &lt;strong&gt;headlessly&lt;/strong&gt; on the JVM, so it's fast and works anywhere, making it a good fit for background or cloud agents. It supports both &lt;strong&gt;Desktop/Multiplatform&lt;/strong&gt; Compose and &lt;strong&gt;Android&lt;/strong&gt; Jetpack Compose (via Robolectric).&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/jdemeulenaere/compose-driver/blob/main/driver-core/src/commonMain/kotlin/io/github/jdemeulenaere/compose/driver/ComposeDriver.kt" rel="noopener noreferrer"&gt;core&lt;/a&gt; of the implementation is actually pretty simple, at less than 300 lines. It starts a small local server that translates HTTP requests into standard &lt;code&gt;ComposeUiTest&lt;/code&gt; actions.&lt;/p&gt;

&lt;p&gt;For example, when the agent sends a request to click a button, the code looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;runComposeUiTest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;uiTest&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
    &lt;span class="c1"&gt;// Set the application content. &lt;/span&gt;
    &lt;span class="c1"&gt;// There is also a /reset endpoint to change this content at runtime.&lt;/span&gt;
    &lt;span class="n"&gt;uiTest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setContent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;MyApplication&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;startServer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/click"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
            &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;matcher&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;node&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;          &lt;span class="c1"&gt;// Find the node (e.g. by tag or text)&lt;/span&gt;
            &lt;span class="n"&gt;uiTest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onNode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matcher&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;performClick&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// Perform the click&lt;/span&gt;
            &lt;span class="n"&gt;uiTest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;waitForIdle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;                  &lt;span class="c1"&gt;// Wait for animations to settle (advancing the virtual clock time)&lt;/span&gt;
            &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;respondText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ok"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;             &lt;span class="c1"&gt;// Tell the agent it's done&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/screenshot"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
            &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;node&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;node&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;uiTest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onNode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;captureToImage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;respondPng&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple loop allows the agent to navigate through the app just like a user would, but much faster, since it can use a virtual clock to fast-forward animations.&lt;/p&gt;
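
&lt;p&gt;To make the virtual-clock idea concrete, here is a toy analogue in plain Kotlin (a simplified sketch, not &lt;code&gt;ComposeUiTest&lt;/code&gt;'s actual &lt;code&gt;MainTestClock&lt;/code&gt; API): animations finish in &lt;em&gt;clock&lt;/em&gt; time, so fast-forwarding the clock completes them with zero wall-clock delay.&lt;/p&gt;

```kotlin
// Toy analogue of a test's virtual clock: instead of sleeping through a
// 300 ms animation in real time, the clock jumps ahead instantly.
class VirtualClock {
    var currentTimeMillis: Long = 0
        private set

    // Advance the clock without any real waiting.
    fun advanceTimeBy(millis: Long) {
        currentTimeMillis += millis
    }
}

// A hypothetical animation that finishes 300 ms after it starts, in clock time.
class FadeAnimation(private val clock: VirtualClock) {
    private val endTime = clock.currentTimeMillis + 300
    val isFinished: Boolean get() = clock.currentTimeMillis >= endTime
}

fun main() {
    val clock = VirtualClock()
    val fade = FadeAnimation(clock)
    // A waitForIdle-style call would fast-forward the clock instead of sleeping:
    clock.advanceTimeBy(300)
    println(fade.isFinished) // prints "true" with zero wall-clock delay
}
```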

&lt;p&gt;This is all possible only because the Jetpack Compose team has built powerful, well-thought-out testing APIs. The fact that &lt;code&gt;ComposeUiTest&lt;/code&gt; allows such fine-grained control over the UI clock and input injection, while being completely decoupled from the rendering platform, is what makes this tool feasible. Big credit goes to them for enabling this!&lt;/p&gt;

&lt;h2&gt;
  
  
  Playing with it
&lt;/h2&gt;

&lt;p&gt;To test it out, I created a simple clone of an app most of you are probably familiar with: Instagram. It was surprisingly fun to prompt the agent with just &lt;em&gt;"Build an Instagram UI clone"&lt;/em&gt; or &lt;em&gt;"Improve the app and add missing features"&lt;/em&gt;, then watch it navigate the menus, click the buttons, and confirm that it had added 5 new screens in a single shot. The app is of course far from production ready, but I was pleasantly surprised by the result I got after less than an hour and about 10 prompts.&lt;/p&gt;

&lt;p&gt;Here is a short video of what it looks like in action:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/yjJcHy4KqsM"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  A word on reliability
&lt;/h2&gt;

&lt;p&gt;Of course, Compose Driver doesn't magically solve all the challenges of building mobile apps with AI. For production workloads, it remains crucial to review the code and understand what the agent implemented (and why). But for hobby projects, prototypes, or just for the sake of experimenting with new workflows, it’s been a blast to use!&lt;/p&gt;

&lt;p&gt;There is growing evidence that the more tools an AI agent has to verify its work, the better it performs. Giving agents the ability to close the feedback loop and iterate on their own can only improve the quality of the generated code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Check it out
&lt;/h2&gt;

&lt;p&gt;If you're interested in agentic workflows for mobile or just want to play around with it, the code is open source here: &lt;a href="https://github.com/jdemeulenaere/compose-driver" rel="noopener noreferrer"&gt;https://github.com/jdemeulenaere/compose-driver&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've included a &lt;code&gt;sample/&lt;/code&gt; project that you can open in your favorite AI editor to get started quickly. I hope this will be useful to some of you :-) Let me know what you think!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>android</category>
      <category>kotlin</category>
      <category>compose</category>
    </item>
  </channel>
</rss>
