DEV Community

Super Funicular


How I Built a Production Android App in 75+ AI Sessions

The Experiment

What happens when a solo developer tries to build a production Android app using AI as the primary coder?

I've been building Background Camera RemoteStream — an Android app that records video in the background with the screen off and streams to YouTube Live — using Claude Code (Anthropic's AI coding tool) for over 75 sessions spanning 6+ months.

The app is now live on Google Play. Here's what that journey looked like.

What the App Does

Background Camera RemoteStream solves a simple problem: recording video without destroying your battery.

Screen-off recording: The screen turns completely off while recording, giving you roughly 10x battery life compared to traditional camera apps that keep the display on.

Remote control: Control your phone's camera from any browser on the same WiFi network. No app needed on the viewing device — just open the IP address shown in the app.

YouTube streaming: Stream live to YouTube — even with the screen off. This is, as far as I can tell, the first app to offer screen-off YouTube streaming.

The AI Development Journey

Every line of code was written through conversation with Claude Code.

The numbers:

  • 75+ development sessions over 6 months
  • 204 builds from first prototype to production release

The stack:

  • Camera2 API: low-level Android camera control (one of Android's most complex APIs)
  • Embedded Ktor web server: a full HTTP/WebSocket server running inside an Android app
  • YouTube Live integration: RTMP streaming, OAuth, live chat, all built through AI pair programming

Key Technical Challenges

1. Background recording without a visible preview

Android's Camera2 API normally requires a preview surface. The app works around this with a hidden SurfaceView, letting the camera record without anything visible on screen.
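The shape of that workaround looks roughly like this sketch (not the app's actual code): the capture session's only targets are the MediaRecorder surface and an off-screen dummy surface, here an ImageReader standing in for the hidden SurfaceView. Names like `cameraDevice` and `recorder` are assumed to be configured elsewhere.

```kotlin
import android.graphics.ImageFormat
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.media.ImageReader
import android.media.MediaRecorder

// Illustrative sketch: record with Camera2 without drawing a preview
// on screen. `recorder` must already be prepare()d.
fun startHiddenRecording(cameraDevice: CameraDevice, recorder: MediaRecorder) {
    // Dummy off-screen target standing in for a visible preview;
    // a hidden SurfaceView's surface plays the same role.
    val dummyReader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 2)
    val targets = listOf(recorder.surface, dummyReader.surface)

    cameraDevice.createCaptureSession(targets, object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(session: CameraCaptureSession) {
            val request = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD).apply {
                targets.forEach { addTarget(it) }
            }
            session.setRepeatingRequest(request.build(), null, null)
            recorder.start()
        }

        override fun onConfigureFailed(session: CameraCaptureSession) {
            // Surface combination rejected by the device; handle the error.
        }
    }, null)
}
```

The key point is that Camera2 only cares about having valid target surfaces, not about whether any of them are visible.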

2. Screen-off operation

Foreground services with wake locks keep the camera and embedded web server running when the screen is off. Getting this to work reliably across different Android versions and manufacturers was one of the longest debugging efforts.
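The core of that pattern, sketched below under assumptions (a `buildNotification()` helper and notification channel set up elsewhere), is a foreground service that holds a partial wake lock so the CPU stays awake while the display sleeps:

```kotlin
import android.app.Notification
import android.app.Service
import android.content.Intent
import android.os.IBinder
import android.os.PowerManager

// Illustrative foreground service; not the app's actual class.
class RecordingService : Service() {
    private var wakeLock: PowerManager.WakeLock? = null

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        // Promote to foreground so Android won't kill the recording process.
        startForeground(NOTIFICATION_ID, buildNotification())

        // PARTIAL_WAKE_LOCK keeps the CPU running with the screen off.
        val pm = getSystemService(POWER_SERVICE) as PowerManager
        wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "app:recording")
            .apply { acquire() }

        return START_STICKY
    }

    override fun onDestroy() {
        wakeLock?.takeIf { it.isHeld }?.release()
        super.onDestroy()
    }

    override fun onBind(intent: Intent?): IBinder? = null

    private fun buildNotification(): Notification {
        // Assumed helper: builds the persistent "recording" notification.
        TODO("channel + NotificationCompat.Builder setup omitted")
    }

    companion object { private const val NOTIFICATION_ID = 1 }
}
```

Manufacturer battery optimizers (the usual source of the cross-device debugging pain mentioned above) can still kill even a well-formed foreground service, which is why this takes so long to harden.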

3. Portrait video streaming

YouTube's RTMP endpoint doesn't auto-rotate, so portrait video requires precise FFmpeg flag configuration to avoid sideways streams.
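For illustration, a typical invocation looks something like the following. These flags are representative of the general approach (disable input auto-rotation, clear the rotation metadata, encode at a portrait resolution), not the app's exact configuration:

```
ffmpeg -noautorotate -i input.mp4 \
  -metadata:s:v:0 rotate=0 \
  -vf "scale=720:1280" \
  -c:v libx264 -preset veryfast -b:v 2500k \
  -c:a aac -b:a 128k \
  -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY
```

Without this, players that ignore the rotation metadata render the stream sideways, which is exactly what the RTMP endpoint does.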

4. Embedded web server

A full Ktor CIO server running on port 8080 inside an Android app, with WebSocket real-time updates for camera status, recording state, and stream health.
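The shape of that server, as a minimal sketch assuming Ktor 2.x artifacts (`statusJson()` is a hypothetical stand-in for the app's real state serialization):

```kotlin
import io.ktor.server.application.*
import io.ktor.server.cio.CIO
import io.ktor.server.engine.embeddedServer
import io.ktor.server.response.respondText
import io.ktor.server.routing.*
import io.ktor.server.websocket.WebSockets
import io.ktor.server.websocket.webSocket
import io.ktor.websocket.Frame
import kotlinx.coroutines.delay

// Illustrative embedded server: an HTTP route for the control page
// plus a WebSocket that pushes live status updates.
fun startControlServer() = embeddedServer(CIO, port = 8080) {
    install(WebSockets)
    routing {
        get("/") {
            call.respondText("Background Camera RemoteStream control page")
        }
        webSocket("/status") {
            // Push camera/recording/stream state once a second.
            while (true) {
                send(Frame.Text(statusJson()))
                delay(1000)
            }
        }
    }
}.start(wait = false)

// Assumed helper: in the real app this would reflect actual device state.
fun statusJson(): String = """{"recording":false,"streaming":false}"""
```

A browser on the same WiFi network then connects to `http://<phone-ip>:8080` with no app installed, which is exactly the remote-control flow described earlier.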

What I Learned About AI-Assisted Development

The experience was genuinely different from traditional development. Some observations:

AI excels at boilerplate and API integration. Setting up Camera2, configuring Ktor routes, wiring up YouTube OAuth — these are well-documented patterns that AI handles efficiently.

AI struggles with cross-cutting concerns. When a change in the streaming module affected the recording module, which affected the UI state, AI needed very explicit guidance about the ripple effects.

The conversation history is your architecture document. 75 sessions of back-and-forth with Claude Code created an implicit design document. If I ever need to onboard someone, those transcripts contain every decision and its rationale.

You still need to understand the domain. AI wrote the code, but I needed to understand enough about Camera2, RTMP, and Android services to evaluate whether the code was correct. Pure delegation doesn't work.

Results

The app is live on Google Play with zero crashes and zero ANRs reported so far. It was built by someone who isn't a professional Android developer, using AI as the coding engine.

Try it: Background Camera RemoteStream on Google Play

Website: superfunicular.com/backgroundcameraremotestream


What's your experience with AI-assisted development? Have you built something with Claude Code, Cursor, or similar tools? I'd love to hear about it in the comments.
