Super Funicular
How I Built a Production Android App in 75+ AI Sessions — Part 2: What the First Month of Public Metrics Taught Me

A month ago I published a build-in-public post about shipping a production Android app using Claude Code as the primary developer. The app — Background Camera RemoteStream — was already on the Play Store open testing track at the time. Since then it has hit production, the YouTube Live API quota expansion has been approved, AdMob is live, and I have spent four weeks trying to do something even harder than building it: getting other humans to know it exists.

This is the unglamorous Part 2.

It is the post about the marketing month, not the building month. About what it actually feels like to publish 32 articles, 59 tweets, 33 Bluesky posts, and 13 Quora answers, and watch most of them land in the void. About the one reaction I got. About the article that did 40× the floor and the one archetype that quietly outperforms everything else 7×. About the feed-volume throttle that made me sit on this post for three days.

If you are an indie hacker, a solo founder, or a developer who finally shipped the thing and is now looking at a flat analytics chart wondering whether you broke marketing, this one is for you.

The headline numbers (real, not rounded up)

Here is the dashboard, as of May 5, 2026, for the @superfunicular footprint:

  • dev.to: 32 articles published, 175 cumulative views (+21 in the past 48 hours, all from the existing catalog — see Lesson 6), 1 reaction, 0 comments
  • Twitter / X: ~59 tweets, 0 followers (yes, still zero), blue check live as of two days ago, 1 reply on a 51,500-view tweet that was actively recommending a competitor
  • Bluesky: 33 posts, 0 followers, 4 cumulative likes, 0 reposts
  • Quora: 13 answers, 13 followers, 66 monthly content views (up from 34 the week before); first algorithmically-surfaced "Question for You" routed to the profile this week
  • LinkedIn: profile, but nothing has shipped yet beyond the bio
  • Reddit: account created, API access pending; posting blocked
  • Indie Hackers / Uneed.best / Pitch Wall / SideProjectors / Launching Next: product pages live or in queue; no measurable referral traffic yet

If your reaction to that list is "that is depressing," that is a healthy reaction. It is also wrong.

The sentence that took me a month to internalize is this: the headline numbers are noise; the distribution is the signal. Your one-month flat-line chart is hiding a Pareto, and the Pareto is your real product strategy.

Here is what I mean.

Lesson 1 — Pareto is the only metric that matters in month one

Of those 32 dev.to articles, five produced essentially all of the views above the noise floor. The other 27 are within rounding distance of zero. The single best piece — a comparison list titled "Best Apps to Stream YouTube Live" (id 3592937) — is at 40 views. The top five, in order, are:

  1. id 3592937 — "Best Apps to Stream YouTube Live" — 40 views (Comparison list)
  2. id 3590206 — "Turn Your Old Android Phone Into a Free Security Camera" — 22 views (Use case)
  3. id 3589467 — "How I Built a Production Android App in 75+ AI Sessions" (Part 1, this article's parent) — 20 views (Build-in-public)
  4. id 3590177 — Build-in-Public sibling post — got the only reaction in the entire catalog
  5. id 3598582 — "Best Free Nanny Cam Apps for Android" — gained +20 views in a single 24-hour window four days after publish (Comparison list)

Everything else is at single digits. Some of it is at zero.

The temptation in month one is to look at the average — 175 views ÷ 32 articles ≈ 5.5 views per piece — and conclude that you have a "low-engagement problem." You don't. You have a discovery problem. Five pieces work, and the rest are scaffolding. The right move is not to publish more scaffolding. The right move is to figure out why those five worked and write the sequels.
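The gap between the mean and the head of the distribution is easy to compute yourself. Here is a minimal Python sketch; the view counts are hypothetical numbers loosely shaped like the catalog above, not the real per-article data:

```python
def pareto_report(views, top_k=5):
    """Summarize how concentrated a catalog's views are:
    mean views per piece vs. the share captured by the top_k pieces."""
    ranked = sorted(views, reverse=True)
    total = sum(ranked)
    head = sum(ranked[:top_k])
    return {
        "mean": total / len(ranked),
        "top_k_share": head / total if total else 0.0,
    }

# Hypothetical 32-piece catalog: a few working pieces, mostly scaffolding.
catalog = [40, 22, 20, 15, 8] + [3, 2, 1] * 5 + [0] * 12
report = pareto_report(catalog)
print(f"mean {report['mean']:.1f} views/piece, "
      f"top-5 share {report['top_k_share']:.0%}")
```

With a distribution like this, the mean sits in single digits while the top five pieces hold well over half the views, which is exactly why averaging across the catalog hides the signal.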

That is exactly what next week's content calendar does. The piece you are reading right now is the sequel to id 3589467. Thursday's piece is the sequel to id 3590206. Saturday's piece is the doorbell-camera comparison sibling to id 3592937.

Lesson 2 — Archetypes outperform topics

Inside that Pareto is a second pattern. I have been categorizing every article by "archetype" — Comparison List, Dramatic Story, Technical Deep-Dive, Use Case Showcase, Build-in-Public, Story Compilation, SEO Refresh. After 32 articles, the archetype averages tell a story the topic-by-topic view does not:

| Archetype | Avg views/piece | Notes |
| --- | --- | --- |
| Build-in-Public (E) | 17.5 | Highest per-piece efficiency; only archetype to land a reaction |
| Comparison List (A) | 4.62 | Slow start, 3× growth after 24–72h indexation |
| Use Case Showcase (D) | 5.5 | Steady performer; sequels compound |
| Technical Deep-Dive (C) | 0.5 | Worst performer despite the most effort per piece |
| Newsjacks (NJ) | 0–2 (capped) | Three consecutive 0-view ceilings; see Lesson 6 |
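The per-archetype averages are just a group-by over a per-article log. A minimal sketch, using hypothetical (archetype, views) pairs rather than the real catalog:

```python
from collections import defaultdict

def archetype_averages(articles):
    """Mean views per piece for each archetype label in a per-article log."""
    sums = defaultdict(int)
    counts = defaultdict(int)
    for archetype, views in articles:
        sums[archetype] += views
        counts[archetype] += 1
    return {a: sums[a] / counts[a] for a in sums}

# Hypothetical log; in practice this comes from the dev.to analytics export.
log = [
    ("Build-in-Public", 20), ("Build-in-Public", 15),
    ("Comparison List", 40), ("Comparison List", 5), ("Comparison List", 2),
    ("Technical Deep-Dive", 1), ("Technical Deep-Dive", 0),
]
for archetype, avg in sorted(archetype_averages(log).items(),
                             key=lambda kv: kv[1], reverse=True):
    print(f"{archetype}: {avg:.2f} views/piece")
```

Keeping the log at the article level (rather than tracking archetype totals directly) is what makes it cheap to re-slice later, e.g. by topic or by publish weekday.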

This was the most expensive lesson of the month. I came in assuming Technical Deep-Dive would be the engine — developers love deep dives, dev.to is a developer site, Q.E.D. It is the worst-performing archetype I have. The Camera2 API explainers, the Ktor server walkthroughs, the foreground service architecture posts — they got read by approximately nobody. Build-in-public got read by 35× as many people on average.

What I think is happening: deep-dive content competes against an enormous backlog of evergreen tutorials. Build-in-public competes against almost nothing, because almost nobody is willing to publish their real numbers. Scarcity is a moat.

If you are about to write your first ten developer-blog posts about your indie product, the archetype mix matters more than the topics. Write the build-in-public first. Write the use-case piece second. Write the deep-dive last, and only when you have something genuinely surprising to say — not when you want to prove you are smart.

Lesson 3 — The averages lie until day three

The most operationally useful thing I learned in week one was almost a strategic mistake. After 24 hours, my comparison list (A) archetype was averaging 1.6 views per piece. I almost demoted the entire archetype out of the calendar. Then I waited three days.

Between publish-day and day-three of indexation, comparison-list pieces tripled their per-piece average — from 1.6 to 4.62. The piece I had nearly written off (id 3592937) climbed from 5 views on day one to 40 views by the end of week two.

The lesson: dev.to (and any SEO-driven channel) takes 48–72 hours to surface a piece. Your day-one analytics are not telling you whether the piece worked; they are telling you whether your followers showed up. Those are different questions, and the second one matters less than you think for a brand-new account with no followers.

Operationally, this means: wait three days before judging an archetype, and a week before judging a topic. The lag is real, and "it didn't pop on day one" is not the signal you think it is.
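The waiting rule is simple enough to encode so you cannot cheat on it. A sketch; the 3-day and 7-day windows are my own rule of thumb from this post, not documented dev.to behavior:

```python
from datetime import datetime, timedelta

# Assumed judgment windows: day-one numbers measure follower turnout,
# not search pickup, so hold judgment until indexation has had time to run.
JUDGMENT_LAG = {"archetype": timedelta(days=3), "topic": timedelta(days=7)}

def ready_to_judge(published_at, now, kind="archetype"):
    """True once enough indexation time has passed to read the numbers."""
    return now - published_at >= JUDGMENT_LAG[kind]

print(ready_to_judge(datetime(2026, 5, 1), datetime(2026, 5, 2)))  # day 1: too early
print(ready_to_judge(datetime(2026, 5, 1), datetime(2026, 5, 4)))  # day 3: archetype verdict allowed
```

Had I applied this rule in week one, the comparison-list archetype would never have been at risk of demotion on day-one numbers.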

Lesson 4 — The first reaction is structurally meaningful

For the first three weeks, I had zero reactions across the entire dev.to catalog. Last week, the first reaction landed on a build-in-public post (id 3590177).

I do not believe this is a coincidence. The articles that produce reactions are not the ones with the most views — they are the ones that produce identification. Build-in-public posts say "I am like you and here is what is hard." A comparison list says "here are five products." Identification scales further than information, in social-proof economies.

If you are publishing into the void and have not produced a single reaction yet, write something that names a hard thing honestly. Then publish it. Then wait three days.

Lesson 5 — Your 0-follower X account can still produce one good placement

X is the channel where I have been most reluctantly impressed. The @superfunicular account has zero followers. It has produced one (1) genuinely useful artifact in a month: a single reply to a 51,500-view tweet from another account that was actively recommending a competing camera app. That tweet had 497 bookmarks at the time of the reply. My reply named three concrete differentiators (no account, no cloud, free) in 280 characters and linked the Play Store.

I cannot prove this reply produced any installs. The attribution is bad and X does not give me the impression count for a single reply on a 0-follower account. But I would not trade it for any of the 58 standalone tweets that preceded it.

The mechanism is straightforward: a high-view tweet in your category is a discovery surface. It is full of people who, by virtue of having read the parent tweet, are already qualified for your product. You do not need followers to reply. You need a saved search and discipline.

The discipline is to stay under one reply per day on the same query. (Mine is `("old phone" OR "spare phone") "security camera"` on the Latest tab.) More than that and you will trip rate limits and look like spam.

Lesson 6 — There is a feed-volume throttle and you will hit it

Here is the operational footnote that almost broke this article — literally. The piece you are reading is being published on May 6, three days after the last article in the catalog and two days after the date stamped in its directory. The reason is that I tripped dev.to's feed-volume throttle.

What I observed: I published 30+ articles in under 96 hours. Three consecutive newsjack pieces (ClayRat trojan, NoVoice rootkit, Be Prime breach — ids 3599501, 3601479, 3603034) hit a 0–2 view ceiling that did not move. Meanwhile, articles I had published four to seven days earlier continued to accumulate views normally — id 3592937 added +20, id 3598582 added +20, id 3590206 added +22 in the same window.

The diagnosis: my existing catalog still indexes and surfaces through search and topic feeds. New posts were not getting picked up by the dev.to "feed" for a few days. The simplest explanation is a per-account volume throttle, not a per-article quality issue.

If you are a developer and your instinct is "they probably have a soft cap on publish frequency," that instinct is correct. The fix is not to fight it. The fix is to slow down and let the existing pieces breathe. I held publishing for three days and resumed with a single high-value sequel — the post you are reading. We will see whether that resets the surface.

The deeper lesson is that publishing volume is not the metric. Surface velocity is. If you double your output and feed pickup halves, you have made yourself busier and not better-distributed.

Lesson 7 — The Bluesky chart will demoralize you. Read it anyway.

After 33 Bluesky posts: 0 followers, 4 likes, 0 reposts. Compared to my dev.to numbers, this is the single most demoralizing line in the dashboard.

I am keeping it because Bluesky is the only place where I have a public, queryable, vendor-independent measurement of what "0 distribution" actually looks like. It is the control group. Every post is well-formed, includes a direct link, and is original prose. None of them have produced anything.

The conclusion I am drawing — tentatively — is that on a small platform with no follow graph, posting into the void produces zero compounding even when the content is fine. It is a graph problem, not a content problem. To make Bluesky work, the next experiment has to be follower acquisition (replies on developer accounts, follower-trains, mutual follows from indie-dev lists), not more posting. Posting more into 0-follower territory is not a strategy. It is a metric.

I will keep posting because it costs nothing and the day Bluesky's algorithm starts surfacing strangers, having a 60-post backlog is better than having a 6-post one. But I am not learning anything from the chart, and that is itself a lesson.

What is changing in month two

The operational changes for the next 30 days, all of them justified by the data above:

  1. Build-in-public sequels. This post and three more like it before June. The reaction-rate is the highest in the catalog and the topic costs me nothing to write — it is just my real life.
  2. Comparison-list scaling. I had nearly killed the archetype after day one. Now it is on weekend slots through May because the 72-hour indexation pattern is real and the SERP wins are slow but compounding.
  3. Use-case sequels. id 3590206's "doorbell, pet cam, garage-watch" follow-up ships next Thursday.
  4. Newsjacks paused. Hard cap at 2/week; window reopens May 11.
  5. X reply discipline. One reply per day on the saved search, logged with timestamp and parent-tweet view count, so we can see whether the heygurisingh placement was a fluke or a repeatable pattern.
  6. Reddit unblocked the moment API access lands. The r/AlfredCamera subreddit alone has surfaced three pain threads this week that map directly to differentiators.

Six months from now, if you are reading this and the chart has gone vertical, it will be because we did the unglamorous, non-cute, slow-compounding work above. If it hasn't, I will publish Part 3 and tell you what I got wrong.

Try the app, or don't

If you are looking for a privacy-first Android camera that does not phone home, that records with the screen off, and that streams to YouTube Live without an account-based middleman: Background Camera RemoteStream is on Google Play here, and the project lives at superfunicular.com.

If you are not — that is fine too. Subscribe, ignore, or come back in 30 days for Part 3. The only commitment I am making is that the numbers will be real.

