
LazyDev_OH

Posted on • Originally published at gocodelab.com

The Dashboard Was There But I Didn't Know What to Do, So I Let AI Handle It

March 2026 · Lazy Developer EP.03

I had a dashboard. Built it in EP.02. Every day at 3 AM, a Cron job runs. By morning, all of yesterday's data for 12 apps is there. Downloads total, revenue, keyword rankings. Everything on one screen. It took days to build, and it saved me 15 minutes every morning.

But about three days in, a different kind of question lingered. "The finance app downloads dropped 22% today." The number was right there on screen. So what? I didn't know why it dropped. I didn't know what to do about it. The dashboard showed me what happened. The judgment was still on me. Having a dashboard actually made things more tiring in a way — the decisions I needed to make became painfully clear.

So I decided to automate the judgment part too. I built an AI growth agent inside Apsity. That's what this post is about.

Quick Overview

  • Dashboard shows "what happened," but "why" and "what to do" were still on me
  • Designed 5 analysis patterns: rank drop diagnosis, hidden markets, keyword optimization, review analysis, revenue breakdown
  • Confidence badge on every insight: Fact / Correlation / Suggestion
  • Indie app filter excludes enterprise apps (1,000+ ratings), analyzes only comparable apps
  • Second Claude API integration — auto-generates 100-character keyword sets, suggests app names, extracts insights
  • Auto growth stage detection: SEED -> GROWING -> STABLE
  • Weekly email report added: Monday 8 AM, auto-sent via Resend + React Email
  • First run: 12 apps, 48 insights generated automatically

What existing tools don't tell you — and the price tag

There are plenty of App Store analytics tools out there. AppFollow, Sensor Tower, MobileAction, plus App Store Connect itself. They all do the same thing. "3,240 downloads this week." "Keyword ranking change: -5." Numbers showing what happened.

The pricing tells the story. Sensor Tower's enterprise plan starts at $30,000/year. AppFollow has a $39/month basic plan, but it caps at 5 apps. Managing 12 means upgrading, and the cost jumps. So most indie developers end up using App Store Connect's built-in analytics.

No matter which tool you use, the "so what?" question remains. Why did it drop? Did a competitor change something? Is my keyword the problem? Is there a signal in the reviews? To figure that out, you have to dig through the data yourself. The tools don't dig for you.

What I wanted was different. Give it data, and it tells me the cause. If there's a cause, it tells me what to do. If it knows what to do, it gives me something I can use right now. Not "you might want to change your keywords" but "copy this 100-character set and paste it into your App Store keyword field."

AI Growth Agent — a system that makes judgments for you

I wrote up my requirement and handed it to Claude: "Not just showing what happened — diagnose the cause, provide verifiable evidence, and deliver ready-to-use outputs." That one sentence was the entire brief. Claude broke it into 5 analysis patterns.

// 5 Analysis Patterns
1. Rank Drop Diagnosis — Why it dropped, including competitor changes
2. Hidden Market Discovery — Keywords where my app isn't showing but opportunities exist
3. Keyword Optimization — Current keyword analysis + auto-generated 100-char optimal set
4. Review Keyword Analysis — Recurring patterns extracted from user reviews
5. Revenue Breakdown — Subscription vs IAP anomaly detection + cause hypotheses

Patterns alone are meaningless. What matters is how trustworthy each result is. Not everything AI says is fact. Something read directly from data, something inferred from patterns, and something AI suggests — these are fundamentally different. Without distinguishing them, you'd treat inferences as facts.

Why I added confidence badges

I attached a confidence badge to every insight card. There are three types.

  • Fact: Directly confirmed from real data. Like "downloads dropped 22% yesterday" — a measured figure.
  • Correlation: Inferred from patterns between data points. Like "competitor updated their description right before your ranking dropped" — related but not causally confirmed.
  • Suggestion: AI reasoning based on analysis. Like "adding this keyword could increase impressions" — data-informed but not certain.

Each card also has a [View Evidence] toggle. Click it, and you see the raw data: "34% drop from 7-day download average, competitor A changed 3 metadata fields in the same period." You can check what data the AI used to produce the insight. So you can judge for yourself whether to trust it.

This is a design decision, but it's also a philosophy. You shouldn't just follow what AI says. You should be able to see why it said it. That way, you'll know when it's wrong, too.
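As a sketch, the three badge levels map naturally to a union type. The field and function names here are illustrative assumptions, not Apsity's actual schema:

```typescript
// Illustrative insight-card shape — names are assumptions, not Apsity's real schema.
type Confidence = "fact" | "correlation" | "suggestion";

interface Insight {
  appId: string;
  title: string;
  confidence: Confidence;
  evidence: string[]; // raw data points shown under the [View Evidence] toggle
}

// Facts get stated plainly; the inferred levels keep their hedged names.
function badgeLabel(c: Confidence): string {
  switch (c) {
    case "fact": return "Fact";
    case "correlation": return "Correlation";
    case "suggestion": return "Suggestion";
  }
}
```

Making the confidence level part of the type means the UI can't render an insight without deciding how much to trust it.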

Indie app filter — wrong comparisons make analysis useless

While designing the competitive analysis, I hit a problem. Even within the same category, some apps shouldn't be compared. Official apps from major banks, apps from companies like Naver or Kakao. They have different marketing budgets, different ASO strategies, and hundreds of thousands of ratings. If an indie developer gets compared against them by the same standards, no meaningful insight comes out.

I asked Claude, and it suggested a rating-count filter. Apps with 1,000+ ratings get classified as enterprise and excluded from comparisons. Apps with 50-1,000 ratings get classified as indie successes and used as the comparison baseline.

The indie app filter logic is simple. On the App Store, an app's rating count correlates with downloads. Over 1,000 ratings means significant marketing investment — that's not indie. Meanwhile, 50-1,000 ratings means somewhat validated but still indie-scale. That's the range you actually want to compare against.
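The thresholds translate into a few lines. A sketch of the filter — the 50 and 1,000 cutoffs are from the post, everything else is mine:

```typescript
// Rating-count classifier. Thresholds (50, 1000) come from the post;
// the class names and function are illustrative.
type AppClass = "enterprise" | "indie" | "unproven";

function classifyApp(ratingCount: number): AppClass {
  if (ratingCount >= 1000) return "enterprise"; // excluded from comparisons
  if (ratingCount >= 50) return "indie";        // the comparison baseline
  return "unproven";                            // too little signal to compare
}
```

Apps under 50 ratings aren't useful baselines either, so they fall into a third bucket rather than being lumped in with indie successes.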

Competitors menu — checking every day if someone changed something yesterday

Once you register a competitor, a Cron job calls the iTunes Lookup API every day at 4 AM KST to fetch that app's latest metadata. App name, subtitle, description, icon, version. These five fields get saved daily, compared against the previous day, and any changes get logged.

Open the menu and you see the list of registered competitors. Apps with recent metadata changes rise to the top, showing which fields changed. Click on a changed field to see the previous and current versions side by side.

At first I thought, "Does this even matter?" So a competitor changed their description — what can I do about it? Using it changed my mind. Three competitors of my finance app updated their descriptions and keywords on the same day, and my ranking dropped right after. The MetaChange log had the dates and exact changes. It's correlation, not causation, but without this data, tracking down the cause would have taken much longer.
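The daily comparison itself can be a straightforward field-by-field diff. The five tracked fields come from the post; the record shape is my assumption:

```typescript
// Diff yesterday's snapshot against today's for the five tracked fields.
const TRACKED = ["name", "subtitle", "description", "iconUrl", "version"] as const;
type Field = (typeof TRACKED)[number];
type Snapshot = Record<Field, string>;

interface MetaChange {
  field: Field;
  before: string;
  after: string;
}

function diffSnapshots(prev: Snapshot, curr: Snapshot): MetaChange[] {
  return TRACKED.filter((f) => prev[f] !== curr[f]).map((f) => ({
    field: f,
    before: prev[f],
    after: curr[f],
  }));
}
```

Anything this returns gets logged as a MetaChange row, which is what surfaces changed apps to the top of the menu.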

Plugging Claude API into Apsity — in the Keywords menu

I added two more specific features. Auto-generating an optimal 100-character keyword set, and suggesting app names and subtitles based on indie success patterns.

// POST /api/growth/keywords-generate
// App name, category, current keywords -> Claude -> 100-char optimal keyword set

const prompt = `
App: ${appName} (${category})
Current keywords: ${currentKeywords}
Top indie app keyword patterns: ${indiePatterns}

Generate an optimized keyword set within 100 characters for the App Store keyword field.
Remove duplicate words, separate with commas, no spaces after commas.`

There are rules for the keyword field:

  • No space after commas (even one space counts toward the character limit)
  • No plurals (App Store auto-matches from singular)
  • Don't repeat app name or category name (they're already indexed)
  • Fill all 100 characters (empty space = wasted exposure)
  • Include review keywords (frequent words from review text act as search signals)
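Those rules are mechanical enough to lint before pasting. A hypothetical validator — the function name and messages are mine, not Apsity's:

```typescript
// Checks a generated keyword set against the App Store field rules above.
function validateKeywordSet(set: string, appName: string): string[] {
  const problems: string[] = [];
  if (set.length > 100) problems.push("over 100 characters");
  if (/,\s/.test(set)) problems.push("space after comma wastes a character");
  const words = set.split(",");
  if (new Set(words).size !== words.length) problems.push("duplicate keyword");
  if (words.includes(appName.toLowerCase())) problems.push("app name is already indexed");
  return problems;
}
```

Running this on Claude's output before copying it catches the easy mistakes (stray spaces, duplicates) that silently eat characters.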

Apsity keyword optimization page
Keyword Optimization — Copy the AI-generated 100-character optimal set with one click / GoCodeLab

Adaptive growth stage mode — from SEED to STABLE

After building all the analysis features, one problem became obvious. Running "revenue anomaly detection" on a freshly launched app is pointless. There's no data. On the flip side, only running basic keyword generation for a well-established app is a waste.

Claude proposed auto-detecting growth stages:

  • 🌱 SEED — Less than 30 days of downloads or under 500 cumulative. Focuses on initial setup: keyword auto-generation, app name suggestions.
  • 🌿 GROWING — Download trend is rising or stable. Rank drop diagnosis, hidden market discovery, and competitor change detection all activate.
  • 🌳 STABLE — Over 3 months of accumulated data. Revenue anomaly detection, review keyword analysis, and long-term trend pattern analysis activate.
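Detection can be a small pure function over the collected stats. The thresholds are the ones listed above; the 90-day cutoff for STABLE is my reading of "over 3 months":

```typescript
type Stage = "SEED" | "GROWING" | "STABLE";

// daysOfData: how long stats have been collected; totalDownloads: cumulative total.
function detectStage(daysOfData: number, totalDownloads: number): Stage {
  if (daysOfData < 30 || totalDownloads < 500) return "SEED";
  if (daysOfData >= 90) return "STABLE"; // "over 3 months of accumulated data"
  return "GROWING";
}
```

Because it runs on every daily Cron, an app crossing a threshold automatically changes which analysis patterns fire the next day.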

Apsity growth stage overview
Auto growth stage detection — each stage activates different analyses / GoCodeLab

Claude reviewed my code

I tried something new this time. I had Claude review the code it wrote. After everything was built, I said: "Review this entire codebase. Focus on things that could break in production."

The results were more specific than I expected:

// [Critical] 1st Review — Key Issues
1. MetaChange relation missing — DB save without linking relation table
2. JSON.parse unprotected — No try-catch on external API response parsing
3. Cron timeout — Timeout risk when processing 12 apps sequentially

// [Critical] 2nd Review — Key Issues
4. iTunes API rate limit — 429 risk from calling in a loop with no delay
5. Review country hardcoded — Only collecting KR, missing other countries
6. ASC data delay — Yesterday's data may not be available at early morning
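Two of those issues have one-screen fixes. A sketch (function names are mine) of guarded parsing for issue 2 and throttled sequential calls for issue 4:

```typescript
// Issue 2: never let a malformed external response crash the cron.
function safeParse<T>(raw: string, fallback: T): T {
  try {
    return JSON.parse(raw) as T;
  } catch {
    return fallback;
  }
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Issue 4: space out iTunes Lookup calls instead of hammering the API in a loop.
async function lookupAll(
  ids: string[],
  fetchOne: (id: string) => Promise<string>
): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const id of ids) {
    results.push(safeParse(await fetchOne(id), null));
    await sleep(500); // crude throttle; a 429-aware backoff would be better
  }
  return results;
}
```

The 500 ms delay is a guess, not a documented limit — the real fix would back off when a 429 actually comes back.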

There was a strange feeling. Code written by Claude, reviewed by Claude, bugs found by Claude, fixed by Claude. The line between what I built and what it built got even blurrier. But I'll take that feeling over things breaking in production.

Weekly email report — a summary arriving Monday at 8 AM

Insights were being generated, but you could only see them by opening the dashboard. I set up weekly reports to be emailed automatically using Resend + React Email + Vercel Cron.

// vercel.json — Cron schedule
{
  "crons": [
    { "path": "/api/cron/collect", "schedule": "0 18 * * *" },      // 3 AM KST daily
    { "path": "/api/cron/analyze", "schedule": "30 10 * * *" },     // 7:30 PM KST daily
    { "path": "/api/cron/weekly-report", "schedule": "0 23 * * 0" } // Monday 8 AM KST
  ]
}

The email contains last week's per-app download and revenue summary, the top 3 insights (with confidence badges), and one immediately actionable item. Long emails don't get read, so the goal was to fit everything on one screen without scrolling.
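Picking the "top 3" is itself a judgment call. One simple policy — and this ordering is my assumption, not necessarily Apsity's — is to rank by confidence level, facts first:

```typescript
interface ReportInsight {
  title: string;
  confidence: "fact" | "correlation" | "suggestion";
}

// Facts outrank correlations, which outrank suggestions; ties keep insertion order.
const priority = { fact: 0, correlation: 1, suggestion: 2 } as const;

function topInsights(insights: ReportInsight[], n = 3): ReportInsight[] {
  return [...insights]
    .sort((a, b) => priority[a.confidence] - priority[b.confidence])
    .slice(0, n);
}
```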

Apsity weekly email report
Weekly email report — auto-sent Monday 8 AM, everything fits on one screen / GoCodeLab

First run — 48 insights

I deployed the Cron and triggered it manually. 12 apps processed, total execution time 38 seconds. 48 insights generated.

Different kinds of insights came in for each app. The finance app was STABLE, so revenue anomaly detection ran. The habit tracker was GROWING, so competitor change detection ran alongside it. A recently launched app was SEED, so only keyword auto-generation insights came through.

One insight caught my eye. "Over the past 14 days, 3 competitors simultaneously updated their metadata for the 'budget' keyword cluster, and your app's ranking for those keywords has since dropped an average of 8 positions." Confidence badge: Correlation. I clicked [View Evidence] and verified the data manually. It checked out.

Below that insight card was a keyword set with a copy button. Claude had generated it incorporating the competitive keyword changes. I copied it and pasted it into App Store Connect. The flow itself — change detection, cause hypothesis, response keyword generation, copy — all happened automatically. That's the point.

FAQ

Q. What exactly is an AI growth agent?

Existing tools show you numbers. "Downloads down 22%." The AI growth agent goes one step further. It proposes a hypothesis for why it dropped, shows the supporting data, and produces a ready-to-use deliverable for your response.

Q. How do the confidence badges work?

Fact means read directly from data. Correlation means inferred from patterns between two data points. Suggestion means AI reasoning. Check the badge and decide how much to trust it.

Q. Why is the indie app filter based on rating count?

Rating count correlates with downloads. Over 1,000 means significant marketing has already gone in, and indie developers shouldn't benchmark against that. The 50-1,000 rating range represents apps that succeeded under similar conditions.

Q. How are the growth stages determined?

They're auto-detected every time the daily Cron runs. It evaluates data collection duration, cumulative downloads, and recent trend direction. When the stage changes, the types of analysis change too.

Q. Can vibe coding really produce features like this?

I built it, so yes. The key is being clear about what you want. Claude structured the code and wrote most of it, but the decisions about which features were needed, ideas like confidence badges — those were mine. It's become more about judgment than coding skill.


