Why I Decided to Try This
There’s a lot of hype right now around AI tools that can supposedly build entire applications. You often see demos where an AI writes code, opens pull requests, and produces a working app with very little human involvement.
I wanted to see what that actually looks like in practice.
So I gave myself a small experiment: build a simple mobile app with as much help from AI as possible and see where things work and where they break.
For this experiment I used OpenAI Codex and asked it to generate the base of the application.
The goal wasn’t to build something complicated. I wanted something small enough to test the limits of AI-assisted development.
The site is live at https://gurbani247.samay15jan.com, but what mattered to me was everything happening behind it.
The Idea Behind the App
The app itself is intentionally simple.
It streams Gurbani continuously from a URL and shows a visualizer while the audio plays. That’s essentially the whole concept.
The feature set was kept minimal:
- Continuous Gurbani streaming
- A music visualizer
- A clean mobile interface
- Background playback support
- Basic device indicators (network, battery, time)
- Hidden status bar for a cleaner UI
The stack generated by the AI used:
- React Native
- Expo
- NativeWind
This is a fairly standard setup for lightweight mobile apps.
What AI Did Well
At the start, the experience was honestly impressive.
Within minutes, the AI generated the base project structure, the UI layout, and the core logic for streaming audio. It also created the visualizer components and applied styling using NativeWind.
A basic working version of the app appeared very quickly.
This is where AI tools are genuinely strong. They are extremely good at generating the first version of a project. Tasks like setting up folders, writing component boilerplate, and wiring up UI layouts can be done very quickly.
At that stage it almost feels like the AI can build everything.
But that impression doesn’t last very long once the project moves beyond scaffolding.
Dependency Problems
The first major issue appeared when trying to run the project.
The configuration generated for NativeWind didn’t work properly with the current ecosystem setup. Expo and NativeWind had compatibility issues in the latest versions, and the generated configuration simply failed.
Fixing the issue required manually going through documentation and eventually downgrading to a more stable version of the dependency.
The AI kept generating configurations that looked correct but still didn’t work in a real environment.
This is something AI currently struggles with. It can generate code that looks valid, but it doesn’t fully understand the constantly changing state of real-world dependencies.
Background Audio
Another requirement was background playback. The audio stream needed to continue playing even if the app was minimized or sent to the background.
The implementation suggested by the AI looked correct at first glance, but it didn’t actually work.
No matter how the prompts were adjusted, the audio kept stopping when the app moved to the background. Eventually I had to manually debug the issue and implement the correct behavior based on documentation and experimentation.
This became another example of AI generating something that appears reasonable but fails in real execution.
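For anyone hitting the same wall: with expo-av, background playback hinges on one audio-mode call plus an app.json entry. This is a minimal sketch of that pattern, not necessarily my exact fix, and `backgroundAudioMode` is a helper name used here for illustration:

```javascript
// Audio-mode options that keep an expo-av stream alive in the background.
// iOS additionally needs "UIBackgroundModes": ["audio"] under ios.infoPlist
// in app.json, or the system will still suspend the audio session.
function backgroundAudioMode() {
  return {
    staysActiveInBackground: true, // don't pause when the app is minimized
    playsInSilentModeIOS: true,    // keep playing with the iOS mute switch on
    shouldDuckAndroid: true,       // lower volume on interruptions instead of stopping
  };
}

// In the app, before creating the sound:
//   await Audio.setAudioModeAsync(backgroundAudioMode());
//   const { sound } = await Audio.Sound.createAsync(
//     { uri: STREAM_URL }, { shouldPlay: true }
//   );
```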
Android Notification Media Controls
Android usually exposes playback controls directly in the notification panel so users can pause or resume audio without opening the app.
The AI attempted to implement the player controller for the notification area. The generated code looked reasonable, but on actual devices the controls didn’t function properly.
The issue only became clear when testing on real hardware.
Fixing it required additional debugging and experimentation before the media controls started behaving correctly.
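For context, notification media controls on React Native are usually driven by a capabilities configuration; react-native-track-player is a common library for this (an assumption on my part here, not a record of the exact implementation). A sketch of that configuration, with `notificationControlOptions` as an illustrative helper name:

```javascript
// Builds the updateOptions payload for react-native-track-player (assumed
// library). Pass in the library's Capability enum; the arrays decide which
// buttons the Android notification exposes.
function notificationControlOptions(Capability) {
  return {
    capabilities: [Capability.Play, Capability.Pause, Capability.Stop],
    compactCapabilities: [Capability.Play, Capability.Pause], // collapsed view
  };
}

// In the app:
//   await TrackPlayer.setupPlayer();
//   await TrackPlayer.updateOptions(notificationControlOptions(Capability));
//   await TrackPlayer.add({ url: STREAM_URL, title: 'Gurbani 24/7' });
//   await TrackPlayer.play();
```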
CI/CD and GitHub Workflow Failures
I also asked the AI to generate a GitHub workflow to automate building the project.
It produced a YAML configuration, but the workflow repeatedly failed when executed.
Some blocks in the generated configuration caused the pipeline to break, and the build never completed successfully.
After several attempts, I ended up writing the workflow myself and fixing it through a few rounds of trial and error until the pipeline worked properly.
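For anyone writing a similar pipeline by hand, a working Expo Android build usually takes a shape like the sketch below. The action versions and the prebuild-plus-Gradle route are assumptions, not my exact workflow:

```yaml
# Minimal sketch of an Android build workflow for an Expo project.
name: android-build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: 17
      - run: npm ci
      # Generate the native android/ directory, then build with Gradle.
      - run: npx expo prebuild --platform android
      - run: cd android && ./gradlew assembleRelease
```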
The Small Problems That Break Everything
One of the biggest limitations I noticed during the experiment is that AI struggles with small problems that break an entire workflow.
In real development, the hardest problems are rarely large architectural issues. They’re usually small details:
- a dependency mismatch
- a missing permission
- a configuration error
- a CI pipeline step failing
These small issues can stop a project from working entirely.
Humans usually identify these quickly through debugging and experimentation. AI often keeps generating slightly different versions of the same broken solution without identifying the underlying cause.
What the Real Workflow Looked Like
After working through the project, the development process looked very different from the idea that AI simply builds the entire application.
The workflow usually looked like this:
- AI generates the initial implementation
- The project fails to run
- I debug dependency or configuration problems
- AI helps generate smaller pieces of code
- I integrate everything and fix the next issue
Then the cycle repeats.
In practice, AI behaves more like a very fast assistant than an autonomous developer.
The Final Result
After resolving the issues and finishing the missing pieces, the final application includes:
- continuous Gurbani streaming
- a working audio visualizer
- background playback
- Android notification media controls
- device information indicators
- a cleaner interface with the status bar hidden
I also created a web version of the project. Once the core logic was clear, building the web implementation was significantly easier, and AI was more effective in helping with that part.
Future Possibilities for iOS Builds
One possible direction for the project is generating iOS builds in the future.
It should be possible to create unsigned IPA files using my project:
https://github.com/samay15jan/altux
The goal of that project is to simplify generating unsigned IPA builds that can later be installed using alternative signing methods.
Getting something like that working requires a lot of experimentation and trial-and-error. It involves testing different build processes, reading documentation, and slowly figuring out how the tooling behaves.
This kind of iterative research process is something AI currently struggles with.
What I Learned From This Experiment
After finishing the project, one thing became very clear.
AI is extremely good at generating the first 70–80% of a project.
It can scaffold applications, generate UI components, and produce large amounts of code very quickly.
But the final part of development — the part where everything must actually work together — still requires real engineering.
AI currently struggles with:
- dependency conflicts
- ecosystem changes
- CI/CD reliability
- environment-specific bugs
- real device testing
- small workflow-breaking problems
These areas still rely heavily on human debugging and reasoning.
Final Thoughts
Despite the limitations, the experience was still impressive.
Without AI assistance, building the first version of this project would have taken significantly longer. With AI, the initial version appeared very quickly.
However, turning that initial code into a stable, working application still required manual debugging and decision-making.
The most accurate way to describe AI coding tools right now is that they behave like a very fast junior developer.
They can produce a lot of code quickly, but they still require supervision, corrections, and guidance from someone who understands the system.