Session zero
143 Users, Zero Support Emails: What the Silence Means

I've had 143 users run my APIs over 15,000 times.

Not a single one emailed me.

No bug reports. No feature requests. No "this broke" or "how do I use this." The inbox is empty. The Apify message system is empty. If you didn't look at the run logs, you'd think no one was using the product at all.

I used to interpret silence as failure. Now I think it's the most useful signal I've received.

The Numbers Behind the Quiet

Here's the breakdown across my Korean data scrapers on Apify:

  • 143 unique users across 15 actors
  • 15,430+ total runs
  • Average success rate: ~99% across all actors
  • Support contacts: 0

The overall runs-per-user ratio sits around 108. But that average hides something more interesting.
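The headline ratio is simple arithmetic over the totals above:

```python
# Back-of-envelope check of the headline ratio, using the totals from the post.
total_runs = 15_430
unique_users = 143

runs_per_user = total_runs / unique_users
print(round(runs_per_user))  # ~108
```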

When I break it down by actor:

| Actor | 30-day runs | Active users | Runs/user |
|---|---|---|---|
| naver-news-scraper | ~6,889 | 5 | ~1,378 |
| naver-place-reviews | ~384 | 8 | ~48 |
| naver-place-search | ~624 | 7 | ~89 |
| musinsa-ranking-scraper | ~144 | 5 | ~29 |

Two completely different user behaviors. The naver-news users are running pipelines — automated, scheduled, high-volume. The musinsa users are exploring — probably pulling data once or twice to evaluate fit.

Neither group emailed me.

Why Silence Is Different for Developer Tools

Consumer apps live and die by support volume. If users hit a wall, they complain or they churn. Support tickets are your early warning system.

Developer tools work differently.

When a user hits an error in a consumer app, they wait for support. When a developer hits an error from an API, they read the error message. If the error message is clear, they fix it themselves. If it's not clear, they might open a ticket — but more likely, they just try another tool.

This means the silence I'm seeing isn't "everything is perfect." It's "the errors are self-explanatory enough that people don't need help."

That's a specific, achievable design goal. And it matters more than it sounds.

What Makes Self-Service Possible

Looking at my actors that have the best silence-to-usage ratios, a few patterns emerge:

Clear input/output contracts. Each actor has a defined schema — what goes in, what comes out. When something fails, the failure is predictable: the target URL changed, the input format was wrong, the rate limit was hit. Developers can diagnose this without documentation.
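A minimal sketch of what an input contract like this can look like (the field names and schema here are illustrative, not the actors' actual code): validate the input up front, so a bad request fails immediately with a diagnosable message instead of partway through a run.

```python
# Hypothetical input contract: field name -> expected type.
REQUIRED_FIELDS = {"query": str, "maxResults": int}

def validate_input(payload: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing required field '{field}'")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"field '{field}' should be {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return problems

print(validate_input({"query": "naver news"}))  # reports the missing field
print(validate_input({"query": "naver news", "maxResults": 50}))  # valid: []
```

The point is that every failure path names the field and the expectation, so the developer never needs to email anyone to find out what went wrong.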

Standardized error messages. I return errors that match what developers already know: HTTP status codes, JSON error objects with message and type fields. Nothing proprietary to decode.
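The error shape described above can be sketched like this (the helper and its names are illustrative, not a real Apify API):

```python
import json

def error_response(status: int, error_type: str, message: str) -> tuple[int, str]:
    """Return an HTTP status code plus a JSON body with `type` and `message`."""
    body = {"error": {"type": error_type, "message": message}}
    return status, json.dumps(body)

status, body = error_response(
    429, "rate-limit-exceeded", "Too many requests; retry after 60 seconds."
)
print(status, body)
```

Because the shape is boring and familiar — a status code plus `type` and `message` — a developer can handle it with the error branch they already have.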

Atomic operations. Each run does one thing. You search, or you scrape reviews, or you pull photos. There's no state to manage across calls, no session handling, no "step 2 depends on step 1" complexity that generates support questions.

These aren't novel engineering insights. But they're easy to skip when you're moving fast. The silence is the reward for not skipping them.

The Two User Populations Hiding in the Data

The runs/user split reveals something I didn't expect: I have two distinct user populations, and they have almost nothing in common.

The automators (naver-news, place-search) run high volume on a schedule. They integrated the API into a pipeline weeks ago and it's just running. They don't think about it unless it breaks. They don't email because they're not looking at it.

The explorers (musinsa, place-photos, webtoon) run a handful of times. They're evaluating whether the data fits their use case. They don't email because they haven't committed yet — they're still deciding.

Neither population benefits from proactive outreach. The automators would find an email interruption annoying. The explorers are making a quiet decision and an email might actually accelerate a "no."

The right response to each is different:

  • For automators: reliability and uptime monitoring. They'll email when it breaks.
  • For explorers: better documentation, more example outputs, clearer pricing. Reduce the evaluation friction.

When Silence Becomes a Problem

Zero support emails isn't always good news.

If the success rate were 70% instead of 99%, silence would mean users are failing silently — hitting walls and churning without telling me why. That's the dangerous version of this story.

The metric that validates the silence is success rate. When both are high — zero contacts and 99% success — the silence means the product is working as intended. When success rate drops and contacts stay low, something is broken and users are just leaving.

I check success rate daily for this reason. The silence is only meaningful because the success rate gives it context.
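The daily check boils down to a small decision rule — a sketch with illustrative thresholds, not my actual monitoring code:

```python
def interpret_silence(success_rate: float, support_contacts: int) -> str:
    """Silence is only a good sign when the success rate stays high."""
    if support_contacts > 0:
        return "users are talking: read the tickets"
    if success_rate >= 0.95:
        return "healthy silence: the product is working"
    return "dangerous silence: users are failing and leaving quietly"

print(interpret_silence(0.99, 0))  # healthy silence: the product is working
print(interpret_silence(0.70, 0))  # dangerous silence: users are failing and leaving quietly
```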

What I'm Watching Next

The naver-news users, averaging ~1,378 runs each in 30 days, are a signal worth chasing. That's daily automation at scale. They've built something that depends on this data.

I don't know what it is. I haven't asked. But that kind of usage pattern usually means the data is embedded in something that matters to someone's business.

Understanding what those pipelines look like — what decisions are being made with Korean news data at that volume — would tell me more about product direction than any amount of user research I could commission.

The silence is information. It's just information about the product working, not about what the product could become.

That's the next question.


I'm building Korean data APIs on Apify. 14 actors, 143 users, still no support emails.


Tags: devtools api analytics indiedev webdev
