I've been watching my traffic data obsessively for the past two weeks. Not because anything was broken — but because the patterns were telling me something I couldn't ignore.
This is article #33 in the series. If you're new here: I've built 13 Korean data scrapers on Apify — naver-news, naver-place-search, naver-blog-search, and others. As of today, ~11,850 total runs, ~77 registered users, somewhere around $90–105 in estimated revenue. Still small, but finally measurable.
Here's what two weeks of hourly traffic data actually looks like.
The Pattern I Didn't Expect
Weekday average: ~45.6 runs/hour. Weekend average: ~19.7 runs/hour. That's a 2.3x ratio — consistent, week over week.
And the biggest spike? Monday morning. I've recorded peaks of 41.5 runs/hour on Monday mornings. Not Friday afternoon. Not Thursday when people are rushing to finish things. Monday.
When I first saw this I thought it was noise. Then it happened again the next Monday. And the one after that.
Someone — or something — is waking up on Monday morning and immediately hammering my API.
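The spike detection itself is simple once you have run timestamps. Here's a minimal sketch, assuming you've exported run start times (e.g. from Apify's runs API) as ISO strings — the sample data below is made up for illustration, not my real traffic:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical input: ISO start timestamps of scraper runs.
run_timestamps = [
    "2026-01-05T09:12:00",  # a Monday morning
    "2026-01-05T09:47:00",
    "2026-01-06T14:03:00",  # a Tuesday afternoon
    "2026-01-10T11:30:00",  # a Saturday
]

def bucket_by_weekday_hour(timestamps):
    """Count runs per (weekday, hour-of-day) bucket."""
    buckets = defaultdict(int)
    for ts in timestamps:
        dt = datetime.fromisoformat(ts)
        buckets[(dt.strftime("%a"), dt.hour)] += 1
    return buckets

counts = bucket_by_weekday_hour(run_timestamps)

# Busiest buckets first — a recurring Monday-morning bucket at the
# top of this list is exactly the pattern described above.
for (day, hour), n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{day} {hour:02d}:00  {n} runs")
```

Two weeks of data is enough for this: if the same (Mon, 9) bucket tops the list both weeks, it's a schedule, not noise.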
Two APIs, Two Completely Different Stories
Here's where it gets interesting. When I break down the numbers by actor:
naver-news: 8,483 total runs, ~6 external users. That's roughly 1,414 runs per user.
naver-place-search: 1,113 total runs, ~22 users. That's roughly 51 runs per user.
Same portfolio. Completely different usage profiles.
naver-news is clearly driving the Monday morning spike. A small number of users running it constantly — that's not exploration behavior. That's automation. Someone has a scheduled job, probably a pipeline that ingests Korean news data at the start of the business week. They didn't try my API and move on. They integrated it and now depend on it.
naver-place-search is the opposite. More users, far fewer runs each. Distributed usage throughout the week with no dramatic spikes. People searching for specific places, checking something, moving on. Manual research behavior.
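Runs-per-user turns out to be a decent one-line classifier for these two profiles. A quick sketch using the numbers above — the 200 runs/user threshold is an arbitrary cutoff I picked for illustration, not anything Apify provides:

```python
# Actor stats from the article; the threshold is a hypothetical cutoff
# separating "scheduled automation" from "manual research" behavior.
actors = {
    "naver-news": {"runs": 8483, "users": 6},
    "naver-place-search": {"runs": 1113, "users": 22},
}

AUTOMATION_THRESHOLD = 200  # runs per user; arbitrary for this sketch

for name, stats in actors.items():
    per_user = stats["runs"] / stats["users"]
    profile = "automation" if per_user > AUTOMATION_THRESHOLD else "manual"
    print(f"{name}: {per_user:.0f} runs/user -> {profile}")
```

With these numbers the split isn't even close — roughly 1,414 runs/user versus 51 — so the exact threshold barely matters.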
Two Customer Profiles Hiding in the Same Dashboard
I've been looking at "77 users" as one thing. It's not.
naver-news users are probably running B2B automation workflows. They need fresh Korean news data piped into some downstream process — a dashboard, a report, a model. They likely evaluated a few options, picked mine (or the only working one they found), and built a dependency on it. They don't think about my API very often. It just runs.
naver-place-search users are likely doing manual, task-driven research. Market research, competitor analysis, "find me all the cafés in Hongdae" type queries. They come back when they have a new question, not on a schedule.
These two groups have completely different risk profiles for me as a builder:
- naver-news is infrastructure. Predictable, high-volume, recurring revenue potential. But if it breaks or I deprecate it, someone's pipeline breaks too. Dependency risk cuts both ways.
- naver-place-search is a tool. More resilient — if one user churns, others remain. But also more susceptible to churn in general, since usage is task-driven rather than ongoing.
What the Monday Spike Actually Means
The Monday morning spike isn't just a fun data point. It's a signal that at least one of my naver-news users has a weekly business process that depends on my scraper being alive and fast at the start of their work week.
That's not a casual user. That's a customer.
And I almost missed it because I was looking at total run counts instead of when those runs happen.
What I'm Doing With This
A few things I'm thinking about now:
Reliability matters more than features for naver-news. If someone's Monday morning pipeline depends on this, uptime and consistent response time matter more than adding new fields. I need to treat this like infrastructure, not a side project.
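One concrete way to act on this is a scheduled smoke test that fires before the Monday-morning window. A minimal sketch — `run_actor` here is a stand-in for whatever client call triggers a real run (e.g. via the Apify client), and the 60-second budget is an assumption, not a measured SLA:

```python
import time

def check_actor_health(run_actor, max_seconds=60):
    """Trigger one smoke-test run and flag failures or slow responses.

    `run_actor` is any callable that starts a run and returns True on
    success -- swap in your actual client call here.
    """
    start = time.monotonic()
    try:
        ok = run_actor()
    except Exception:
        ok = False
    elapsed = time.monotonic() - start
    return {"ok": ok, "seconds": round(elapsed, 2), "slow": elapsed > max_seconds}

# Stub run for illustration: always succeeds instantly.
result = check_actor_health(lambda: True)
print(result)
```

Scheduled for Sunday night, this catches a broken scraper before the user's pipeline does — which is the whole point of treating it like infrastructure.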
naver-place-search needs discoverability. Distributed, ad-hoc users find tools through search — Dev.to articles, Reddit posts, Apify search. The growth lever here is awareness, not retention.
I should probably talk to the Monday morning user. Apify shows contact info for paying users. I haven't reached out to anyone yet. Maybe I should.
The honest takeaway: I've been measuring success with user counts and run totals. Those numbers are useful but shallow. Traffic timing told me more about who my users actually are and what they need than any aggregate stat.
Two weeks of data, one unexpected spike, and now I'm rethinking how I prioritize work across 13 actors.
For those of you building APIs or developer tools: do you look at when your traffic happens, not just how much? Has a usage pattern ever changed how you thought about your users?