myougaTheAxo
I Asked a Senior Engineer to Review AI-Generated Android Code. Here's What He Said.

I'll be honest with you: I was skeptical too.

When AI tools started generating complete applications, my first thought was the same as yours — "Sure, it generates *something*, but is it actually good code?"

So I ran an experiment. I used Claude Code to generate a complete Habit Tracker Android app in Kotlin + Jetpack Compose. Then I showed the output to a veteran software engineer who has been writing code since the 1990s — someone who was shipping production software long before "AI-assisted development" was a phrase anyone used.

Here's exactly what happened.


The Experiment

The task was simple: build a Habit Tracker Android app.

Requirements:

  • Kotlin + Jetpack Compose
  • Material3 design system
  • Room database for local persistence
  • No internet required (privacy-first)

I ran Claude Code with a single prompt describing the app. Time to complete: 47 seconds.

I didn't touch the output. I sent it directly to the engineer.


Who Reviewed It

This reviewer has been writing software professionally since the mid-90s. He's worked in C, Java, and Kotlin. He's shipped embedded systems, enterprise backends, and mobile apps. He's the kind of person who immediately notices when someone uses a HashMap where a LinkedHashMap was needed, or when a thread is being blocked unnecessarily.

He is not easily impressed. He is exactly the right person to ask.


His Verdict: "It's Correct."

Not "pretty good for AI." Not "decent starting point." His word was *correct*.

Let me break down exactly what he flagged as done right.

1. Error Handling Is Present

One of the most common complaints about AI-generated code is that it's optimistic — it assumes everything works and leaves out the defensive programming. In this case, the generated code included proper error states in the ViewModel and handled Room database exceptions appropriately.

"Beginners would have left this out," he said. "This didn't."
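To show the shape of what he was praising, here's a framework-free sketch of that kind of error handling. The names (HabitUiState, loadHabits) are my illustration, not the exact generated code:

```kotlin
// Hypothetical UI state hierarchy, in the spirit of the generated ViewModel's error states.
sealed class HabitUiState {
    data class Success(val habits: List<String>) : HabitUiState()
    data class Error(val message: String) : HabitUiState()
}

// A failing database call becomes an explicit Error state instead of an unhandled crash.
fun loadHabits(fetch: () -> List<String>): HabitUiState =
    try {
        HabitUiState.Success(fetch())
    } catch (e: Exception) {
        HabitUiState.Error(e.message ?: "Database error")
    }
```

The UI then renders whichever state it receives — there is no code path where a Room exception reaches the user as a crash.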

2. Data Structures Are Appropriate

The Room Entity was annotated correctly and designed with future-proofing in mind. Here's the exact generated code:

```kotlin
@Entity(tableName = "habits")
data class Habit(
    @PrimaryKey(autoGenerate = true) val id: Int = 0,
    val name: String,
    val createdAt: Long = System.currentTimeMillis(),
    val streak: Int = 0
)
```

His comment: "The createdAt as Long is the right call. Developers who don't know what they're doing store dates as Strings. This is the correct approach."

Using Long for timestamps (Unix epoch milliseconds) is idiomatic Android/Kotlin — it avoids timezone issues, serializes cleanly, and performs well in SQLite queries. The AI chose this without being told to.
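To make that concrete: an epoch-millisecond Long compares and sorts numerically, and converting it to a human-readable date is purely a display concern. A small sketch using java.time (the function names are mine, not from the generated code):

```kotlin
import java.time.Instant
import java.time.ZoneOffset

// Numeric comparison needs no parsing — the string-ordering bugs you get
// with dates stored as "2024-9-1" vs "2024-10-1" simply can't happen.
fun isOlder(a: Long, b: Long): Boolean = a < b

// Formatting happens only at the UI edge, in whatever time zone you choose.
fun formatCreatedAt(createdAt: Long): String =
    Instant.ofEpochMilli(createdAt).atZone(ZoneOffset.UTC).toLocalDate().toString()
```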

3. Separation of Concerns

The generated project had a full layered architecture:

  • Entity — data definition for Room
  • DAO — database access methods with Flow-based reactive queries
  • Repository — single source of truth, abstracts the data layer
  • ViewModel — UI state management, survives configuration changes
  • Screen (Composable) — pure UI, no business logic

He reviewed each layer and confirmed they were doing what they were supposed to do — and only what they were supposed to do. No business logic leaking into the DAO. No direct database calls from the UI.

"This is what I tell junior engineers and they still get it wrong," he said. "The separation is clean."
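The layering is easy to picture in a framework-free sketch. The names below are illustrative — the real app uses Room annotations and StateFlow — but the dependency direction is the point:

```kotlin
// Entity: a data definition and nothing else.
data class Habit(val id: Int, val name: String)

// DAO: database access methods only — no business logic.
interface HabitDao {
    fun getAll(): List<Habit>
    fun insert(habit: Habit)
}

// Repository: the single source of truth; the ViewModel never sees the DAO directly.
class HabitRepository(private val dao: HabitDao) {
    fun habits(): List<Habit> = dao.getAll()
    fun add(name: String) = dao.insert(Habit(id = dao.getAll().size + 1, name = name))
}

// An in-memory stand-in for Room — also exactly what you'd use in a unit test.
class InMemoryHabitDao : HabitDao {
    private val habits = mutableListOf<Habit>()
    override fun getAll(): List<Habit> = habits.toList()
    override fun insert(habit: Habit) { habits.add(habit) }
}
```

Each layer depends only on the one below it through an interface, which is what makes the layers individually reviewable — and individually replaceable.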

4. Material3 Compliance

The UI used MaterialTheme, Surface, Card, FloatingActionButton, and other Material3 components correctly. Typography was handled through the theme system, not hardcoded font sizes. Colors came from MaterialTheme.colorScheme, meaning dark mode works automatically.


What the Code Does RIGHT That Junior Developers Often Get Wrong

After the review, I asked him to be specific about the things that beginners typically mess up. This list came directly from his feedback.

MVVM + Repository Pattern — Chosen Automatically

The AI didn't just generate working code. It generated architected code. The Repository pattern is not obvious to new Android developers — many skip it entirely and call the DAO directly from the ViewModel. The AI chose the correct pattern without being prompted.

Why does this matter? Because when you need to add a second data source (a remote API, for example), the Repository pattern means you change one class, not ten files.
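The payoff is visible in a sketch: when a second source appears, only the repository changes. Everything here is hypothetical (HabitSource and the class names are not from the generated project), but the structure is the argument:

```kotlin
// One abstraction over "where habits come from".
interface HabitSource {
    fun fetch(): List<String>
}

class LocalSource : HabitSource {
    override fun fetch(): List<String> = listOf("Read", "Run")   // stands in for Room
}

class RemoteSource : HabitSource {
    override fun fetch(): List<String> = listOf("Meditate")      // stands in for a future API
}

// Only this class changes when a second source is added; ViewModel and UI are untouched.
class CombinedHabitRepository(private val sources: List<HabitSource>) {
    fun allHabits(): List<String> = sources.flatMap { it.fetch() }.distinct()
}
```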

Room Database With Proper Data Access Layer

The DAO was generated with @Query, @Insert, @Update, and @Delete annotations correctly applied. Queries used Flow<List<Habit>> return types, which means the UI automatically updates when data changes. This is the modern, correct approach — not the old callback-based pattern that beginners often copy from outdated tutorials.
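Flow itself needs the coroutines library, but the reactive idea behind those queries is simple to sketch without it: the data layer pushes every change to its observers, so the UI never polls. This is a framework-free analogue, not the generated Room code:

```kotlin
// In-memory stand-in for a Flow-returning DAO: observers are re-notified after every write.
class ObservableHabitStore {
    private val habits = mutableListOf<String>()
    private val observers = mutableListOf<(List<String>) -> Unit>()

    fun observe(observer: (List<String>) -> Unit) {
        observers.add(observer)
        observer(habits.toList())                    // emit current state immediately, like collecting a Flow
    }

    fun insert(name: String) {
        habits.add(name)
        observers.forEach { it(habits.toList()) }    // push the new list, like Room's table invalidation
    }
}
```

With Room's `Flow<List<Habit>>`, this notification machinery comes for free — which is exactly why the callback-based pattern from old tutorials is no longer worth copying.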

No INTERNET Permission

The generated AndroidManifest.xml did not include <uses-permission android:name="android.permission.INTERNET" />.

This is worth highlighting. Many developers add INTERNET permission "just in case" without thinking about it. The AI made a deliberate choice: this app stores data locally, so it doesn't need network access. That's a privacy-correct decision.

Proper State Management With ViewModel

The ViewModel used StateFlow and MutableStateFlow correctly for UI state. The screen observed state as collectAsState(). This is the current recommended pattern — not LiveData (outdated) and not raw mutable variables (incorrect).
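The detail worth copying is the exposure pattern: the mutable state stays private, and the UI only ever sees a read-only view. The same discipline can be shown without coroutines (an analogue — the real code uses MutableStateFlow and StateFlow):

```kotlin
class CounterViewModel {
    private var _count = 0                // private mutable backing, like MutableStateFlow
    val count: Int get() = _count         // public read-only view, like StateFlow

    fun increment() { _count += 1 }       // all mutations go through ViewModel methods
}
```

Because the screen can read `count` but never assign it, every state change is forced through the ViewModel — which is what makes the UI layer "pure UI, no business logic" in practice.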


What It Does NOT Do — An Honest Assessment

The senior engineer and I both agreed: this is good code, but it's not complete production code without human review. Here's what was missing.

No Unit Tests Generated

The code has zero test coverage. There are no JUnit tests for the ViewModel, no instrumented tests for the DAO, and no UI tests for the Composable screens. A production app needs all three.

This isn't a criticism unique to AI-generated code — plenty of human developers also skip tests — but it's a genuine gap. If you deploy this to users, you should add tests first.
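For a sense of what those missing tests would cover: the easiest wins are pure functions pulled out of the ViewModel. Here's the kind of logic I'd test first — nextStreak is a hypothetical extraction, and I'm using Kotlin's built-in check() to keep the sketch dependency-free where a real project would use JUnit:

```kotlin
// Hypothetical pure function extracted from the ViewModel so it's testable without Android.
fun nextStreak(current: Int, completedToday: Boolean): Int =
    if (completedToday) current + 1 else 0

// A JUnit test would assert exactly this behavior:
// nextStreak(3, completedToday = true)  -> 4
// nextStreak(3, completedToday = false) -> 0  (the streak resets)
```

DAO tests need an in-memory Room database and an instrumented test runner; Composable tests need the Compose testing APIs. Both are more setup, which is partly why they get skipped — by humans and AI alike.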

No CI/CD Pipeline

There's no .github/workflows/ directory, no Gradle CI configuration, no automated build checks. For a solo developer or prototype, that's fine. For a team or a production release, you need to add this.

No ProGuard/R8 Rules

The proguard-rules.pro file was present but effectively empty. For a simple app with no networking or reflection, this is probably fine. But if you add libraries that use reflection (Gson, Retrofit, certain Room configurations), you'll need to add keep rules manually.
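For reference, this is the shape of the keep rules you'd add if you later pulled in a reflection-based library like Gson (a sketch — the package name is an example, not from the generated project):

```
# Keep model classes that Gson inspects via reflection (package name is hypothetical).
-keep class com.example.habittracker.model.** { *; }

# Keep Gson's generic-type machinery, which is also resolved reflectively.
-keep class com.google.gson.reflect.TypeToken { *; }
```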


What This Means for Android Developers

The engineer's summary was this: "For a simple CRUD app with local storage, this is production-quality. I would not rewrite this. I would add tests, add CI, review the ProGuard configuration, and ship it."

That's a meaningful statement. It means the AI has crossed the threshold from "interesting demo" to "useful starting point that saves real time."

Here's how I think about it:

AI-generated code is production-quality for:

  • Simple apps with well-defined requirements
  • Standard patterns (CRUD, list/detail, forms)
  • Apps where offline-first is the main concern

AI-generated code still needs human review for:

  • Complex business logic with edge cases
  • Authentication flows and security-sensitive code
  • Performance-critical rendering or data processing
  • Anything with regulatory compliance requirements

The 47-second generation time doesn't mean 47 seconds to production. It means 47 seconds to a solid foundation that you then iterate on. That's still dramatically faster than starting from scratch.


The Broader Implication

We've spent years debating whether AI can write "good" code. The senior engineer's verdict suggests we've been asking the wrong question.

The better question is: does AI-generated code meet the bar that human developers actually ship?

For simple Android apps, the answer appears to be yes. The architecture is correct. The patterns are current. The data modeling decisions are sound. A developer with five years of Android experience would have written essentially the same code — but would have taken several hours to do it.

This shifts the role of the developer. Less time writing boilerplate. More time on the parts that actually require judgment: edge cases, security, performance, and the specific business logic that makes your app different from everyone else's.


Try It Yourself

If you want to see the full output — not a summary, but the actual Kotlin source code — I've published 8 AI-generated Android app templates on Gumroad.

These are complete projects: Kotlin + Jetpack Compose + Material3 + Room database. No ads embedded, no tracking, 100% of the source code visible before you buy. Each one was generated by Claude Code and reviewed the same way this Habit Tracker was reviewed.

The apps range from simple utilities (unit converter, countdown timer) to more complex tools (budget manager, task manager with priorities). Each one is a working foundation you can take and customize.

If you're an Android developer who wants to understand what AI-generated code actually looks like in practice — or if you're a non-engineer who wants a working app without writing it from scratch — that's what they're for.


Have you shown AI-generated code to an experienced engineer? What was their reaction? I'm curious whether this matches your experience or whether you've seen it go differently.

