In less than 24 hours, I built a full-featured Android app for Interval Walking Training (IWT) with under 10% of the code written by hand. By combining my engineering experience with the latest AI tools, I moved from idea to working product at record speed. This isn’t “vibe coding”—it’s expert-driven, AI-accelerated development. Read on to see how I did it, what worked, what didn’t, and why AI is a force multiplier for real developers.
Check out the code on GitHub: IWT App
The Spark of an Idea 💡
It all started when my friends at Firebender released Composer, a tool that turns Figma designs into Android Jetpack Compose code (read more). I was instantly intrigued—could this be the missing link between design and code?
At the same time, I was reading about a fitness trend called Interval Walking Training (IWT)—a simple, effective way to alternate between fast and slow walking. That’s when inspiration struck: why not build an app to help people practice IWT, track their laps, and monitor their total time? It was the perfect project to test the limits of AI-driven development.
For Non-Developers: Why This Matters 🧑‍💼
Even if you’ve never written a line of code, this story shows how AI can turn ideas into real products—fast. The key isn’t just using AI, but knowing how to guide it, just like a director guides actors on a movie set. If you’re curious about how technology is changing the way we build things, this is for you.
Crafting the Blueprint with AI 📝
With the idea in mind, I turned to two of the most powerful AI assistants: Gemini 2.5 Pro and ChatGPT 4.1. I didn’t just ask for a list of features—I gave them a structured, detailed prompt, specifying my tech stack (Android, Kotlin, Jetpack Compose), my plan to use Google Stitch for UI, and my intention to feed the output to an agentic tool.
Here’s the prompt I used:
> I want to make a nice Android APP that promotes the concept of interval walking training (IWT). This app will help anyone practice IWT and comes with some basic sets of features like tracking, step counting etc. It doesn't need a login. Help me create a simple project plan with a list of features, screen by screen details etc that could then help me build this app. My intention is to use Android as an app development platform, kotlin and jetpack compose. I will also be feeding instructions to Google Stitch to generate the UI of this app, so add a section with instructions for google stitch for UI generation. I also want the overall doc structured in such a way that it can be fed to an Agentic tool like cursor which i will be using to build the app.
Both models produced solid plans, but Gemini’s output was exceptionally detailed and perfectly structured for my needs. It laid out the features, screen-by-screen flows, and even generated prompts for the next AI in my pipeline. I decided to proceed with Gemini’s plan (see the plan here).
Designing the UI with a Stitch 🎨
Next up: design. Google recently launched Stitch, an AI tool that generates app designs from just an idea (learn more). Since my project plan included instructions for Stitch, I simply fed those in and let the AI do its thing.
The result? Stitch generated a clean, professional-looking UI for the app.
Of course, it wasn’t perfect. The workout screen didn’t quite match my vision for a large circular progress indicator, even after I tried re-prompting Stitch with clearer instructions. Sometimes, AI tools have their quirks! I also tweaked the summary screen to make it more user-friendly.
Here’s the final Figma file: IWT Figma Design.
Figma to Code: Firebender Composer Magic ✨
With my Figma design ready, I opened Firebender Composer in Android Studio, pasted my Figma link, and let the tool work its magic.
What makes Firebender Composer stand out from other "Figma to Code" tools is its native integration with Android Studio. It doesn’t just generate code; it iteratively compares its Compose Preview with a screenshot from Figma and refines the UI to get it pixel-perfect. It’s like watching a developer work at hyperspeed.
But, like any tool, it had its quirks:
- One Screen at a Time: Pasting the link for the entire project initially only generated one screen. I found that feeding it one screen at a time produced much better results.
- The First Attempt Flaw: For every screen, the first iteration was always a bit off. It would then self-correct over several cycles to match the Figma design perfectly.
- The 95% Hurdle: While most screens were generated with 90–95% accuracy, one screen consistently fell short, likely due to less detailed information in the original AI-generated Figma design. I had to step in and fix this one manually.
Agentic AI: Bringing the App to Life 🤖
With the UI for all the screens in place, it was time to code the business logic. My workflow was simple and AI-centric:
- Ask Gemini to create detailed, step-by-step instructions for a single screen.
- Paste these instructions into Firebender’s agentic coding mode.
- Let the agent write the code, compile, and test.
- Provide follow-up prompts to fix or add any missing functionality.
Home Screen Example
For the Home Screen, I gave Gemini a prompt to generate instructions for building the ViewModel, handling user interactions, and managing permissions. The instructions it produced were incredibly thorough, even including the necessary code snippets for data models and repositories.
I pasted this entire block of instructions into Firebender. It immediately got to work: creating files, adding dependencies, writing the business logic, and wiring it up to the UI. At each step, it attempted to compile the code to ensure everything was functional.
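To give a feel for the kind of business logic the agent produced, here is a minimal, hypothetical sketch of the screen's core state: a plain-Kotlin interval tracker that alternates fast and slow walking phases while counting laps and total time. The class and property names are illustrative assumptions, not the app's actual API, and the real app drives this from a ViewModel with a coroutine timer.

```kotlin
// Illustrative stand-in for the AI-generated screen logic: alternates
// FAST and SLOW walking phases and tracks laps plus total elapsed time.
enum class Phase { FAST, SLOW }

class IntervalTracker(
    private val fastSeconds: Int = 180, // 3 min brisk walking (assumed default)
    private val slowSeconds: Int = 180  // 3 min gentle walking (assumed default)
) {
    var phase: Phase = Phase.FAST
        private set
    var completedLaps: Int = 0          // one lap = one fast + one slow phase
        private set
    var totalElapsedSeconds: Int = 0
        private set
    private var secondsInPhase = 0

    // Called once per second by the UI layer (e.g. from a coroutine timer).
    fun tick() {
        totalElapsedSeconds++
        secondsInPhase++
        val limit = if (phase == Phase.FAST) fastSeconds else slowSeconds
        if (secondsInPhase >= limit) {
            if (phase == Phase.SLOW) completedLaps++ // lap ends after the slow phase
            phase = if (phase == Phase.FAST) Phase.SLOW else Phase.FAST
            secondsInPhase = 0
        }
    }
}
```

Keeping the interval logic in a plain class like this (rather than inside a composable) is also what makes it easy for an agentic tool to compile and verify in isolation.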
The Rest of the Screens
I followed this same process for the remaining screens, letting the AI agent handle the heavy lifting.
Navigating Errors and Refining with AI 🛠️
After the agent finished its work on all the screens, the app was syntactically correct but had some visual glitches and functional gaps.
Agentic tools are great at executing instructions and fixing compilation errors, but they struggle to identify what is qualitatively wrong. This is where a human developer’s eye is still crucial. I stepped in with some manual tweaks and specific, targeted prompts to fix the remaining issues. For example, to standardize the app’s toolbar, I gave the agent a clear refactoring pattern.
The result was a much more polished and functional home screen.
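As an illustration of what such a refactoring pattern looks like, here is a hypothetical shared toolbar composable of the kind I asked the agent to extract and reuse across screens. The names and parameters are my own assumptions for this sketch, not the app's actual code, and it targets Material 3.

```kotlin
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.automirrored.filled.ArrowBack
import androidx.compose.material3.*
import androidx.compose.runtime.Composable

// One shared toolbar composable, reused on every screen instead of
// per-screen copies with slightly different styling.
@OptIn(ExperimentalMaterial3Api::class)
@Composable
fun IwtToolbar(
    title: String,
    onBack: (() -> Unit)? = null // null on the home screen, set elsewhere
) {
    TopAppBar(
        title = { Text(title) },
        navigationIcon = {
            if (onBack != null) {
                IconButton(onClick = onBack) {
                    Icon(Icons.AutoMirrored.Filled.ArrowBack, contentDescription = "Back")
                }
            }
        }
    )
}
```

Handing the agent a concrete target like this turns a vague "make the toolbars consistent" request into a mechanical find-and-replace task it executes well.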
Finally, to ensure the core logic worked as intended without writing a single manual test, I prompted Gemini to create a detailed validation script. This script outlined specific test cases for Firebender’s agentic tool to execute, verifying everything from interval sequencing to the pause-and-resume functionality.
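To show the flavor of those test cases, here is a rough sketch of a pause-and-resume check. The `PausableTimer` below is a stand-in I wrote for illustration, not the app's real class; the agent ran equivalent checks against the actual workout logic.

```kotlin
// Minimal stand-in for the workout timer, used to illustrate the kind of
// pause/resume behavior the validation script verified.
class PausableTimer {
    var elapsedSeconds = 0
        private set
    var isPaused = false
        private set

    fun pause() { isPaused = true }
    fun resume() { isPaused = false }

    // Driven by a periodic callback; ticks are ignored while paused.
    fun tick() { if (!isPaused) elapsedSeconds++ }
}
```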
Not Just “Vibe Coding”—AI as a Force Multiplier 🦾⚡
Let’s be clear: this isn’t about letting AI do all the work while you sit back. It’s about using your engineering experience to guide, correct, and supercharge the process. AI is a force multiplier for those who know what they’re doing—not a replacement for real expertise. The real magic happens when you combine deep domain knowledge with AI’s speed and flexibility.
What Worked, What Didn’t (and What I Learned) 🧐
While the AI tools were powerful, they weren’t perfect. Here’s what I found:
- UI Imperfections: AI-generated UI often needed manual tweaks and multiple prompts.
- Prompt is King: The more context and detail I gave, the better the results.
- Agentic Aggression: Sometimes, the AI was too eager to “fix” things, even deleting valid code. I had to step in to prevent it from removing Hilt dependencies.
Despite these quirks, building a full app in under a day was an eye-opener. AI is already a force multiplier for developers—and it’s only getting better.
The Finished Product & Final Code 🎉
Want to see the code? Check it out here:
Final Code: https://github.com/pranayairan/IWT/
Commit History: https://github.com/pranayairan/IWT/commits/main/
Final Thoughts 🚀
This experiment was about pushing the boundaries of what’s possible with AI in app development. While the tools are incredibly powerful, they still have limitations that require human intervention. But the speed, flexibility, and creative potential they unlock are game-changing.
If you’re a developer (or just curious about AI), now’s the time to experiment. With the right tools and a bit of creativity, you can turn ideas into working products faster than ever before.
Have questions or want to share your own AI-powered dev story? Drop a comment below!