One Tuesday morning at 9:14 AM, my six-month-old article got 37 views in 20 minutes. DEV.to's dashboard just said "+37 views". No context. No cause. No pattern.
I wanted to know why. Was it a comment from someone influential? A share somewhere? A title change from weeks ago finally paying off? The platform couldn't tell me. So I decided to steal my own data.
Not to optimize. Not to perform. But to understand how my articles actually live over time.
The Starting Point
I started with devto-analytics-pro by @gnomeman4201 — a solid foundation for collecting basic metrics. But I wanted more: a temporal dimension, a memory that could tell the story of an article over time.
First step: store everything in a database. Not once, but every 4 to 6 hours. Automatically.
Why this frequency? Because with daily snapshots, you miss the fine variations. You miss what happens between noon and 6 PM. You smooth everything out. But at this frequency, suddenly, you see the breathing. You see when an article wakes up, when it falls asleep, when something revives it.
What I Discovered by Looking at My Data
The first thing the data taught me is that I didn't know my own articles as well as I thought.
For example, I discovered that a simple like from an active DEV community member can change everything. Not a spectacular reaction, just a like. But enough for DEV.to to feature the article. And then the views climb. Not dramatically, but distinctly. Without regular data tracking, this phenomenon would have escaped me completely.
I also saw that a title change can triple visibility. Same content, same tags, same structure. Just a reformulated title. And suddenly, the exposure curve takes off again. It's not a fluke; it's a lesson. A lesson you can only learn by watching temporal evolution, not by consulting a cumulative total.
Another discovery: some articles I thought were "dead" continue to bring readers six months after publication. Not many, but regularly. Two views per day, three comments per week. A discreet but real life. Without history, I would never have known they were still breathing.
And then there are the strange rhythms. My latest article on Cloud Run: 15 views at once on January 11 at 11 AM. Then nothing for 24 hours. Then 10 views on the 13th at 7 AM. Then silence. Then 12 views on the 15th at 11 AM. Then 10 more views on the 17th at 7 AM. Like jerky breathing. Without this collection every 4 hours, I would have only seen a total: "139 views in a week". With it, I see an article that lives in spurts, waking up at specific moments, then going back to sleep.
What the Tool Revealed About Me
By looking at my own data, I understood things my intuition didn't tell me.
The tool automatically classified my articles into four categories: "Tech Expertise", "Human & Career", "Culture & Agile", and "Free Exploration". I didn't choose these categories — the content analysis made them emerge.
And then, surprise:
Free Exploration ████████ 7.3% engagement
Tech Expertise ███ 2.6% engagement
Culture & Agile ███ 2.5% engagement
My "Free Exploration" articles — the freest, most personal ones — generate almost three times more engagement than technical pieces. These texts only reach 211 people on average, but these 211 people react, comment, discuss.
Respiration, for example: 460 views, 8.7% engagement. An article about burnout, writing, breathing. Nothing technical. Just a personal reflection. And it's the one that creates the most conversation.
My "Culture & Agile" articles bring more visibility: 819 views on average, but only 2.5% engagement. Actually Agile: Against Performance Theater: 2154 views, 4% engagement. It reaches many people, but engagement is shallower.
A revealing detail: "Actually Agile" generated 29 comments over 33 days. "Respiration" generated 10 comments over 3 days. The first created a conversation that stretched over time. The second created a concentrated explosion, then silence. Two types of engagement, two different rhythms.
So I wrote three more "Free Exploration" pieces the following month. Not because I was chasing engagement, but because I finally understood what kind of writing created real conversations.
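A side note for the curious: categories like these don't require anything fancy to emerge. A minimal keyword-scoring sketch gives the idea. The category names are from my data, but the keyword sets, the function, and the fallback rule are illustrative assumptions, not the tool's actual logic.

# Hypothetical sketch of keyword scoring; the tool's real analysis may differ.
# Category names are from my data; the keyword sets are illustrative.
CATEGORIES = {
    "Tech Expertise": {"cloud", "performance", "sql", "docker", "api"},
    "Human & Career": {"career", "burnout", "cv", "interview"},
    "Culture & Agile": {"agile", "scrum", "team", "retrospective"},
}

def classify(title, tags):
    words = set(title.lower().split()) | {t.lower() for t in tags}
    scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    # No keyword hit at all lands in the unclassifiable bucket.
    return best if scores[best] > 0 else "Free Exploration"

print(classify("Respiration", ["devjournal"]))  # -> Free Exploration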
Reading Times (Or: Who Really Reads?)
The tool also collects a metric that DEV.to provides but that nobody really looks at: cumulative reading time.
And here, we run into surprising things.
My "Cloud Run Bill" article: on January 17, 25 views, 480 seconds of reading. That's 19 seconds average per view. The article is 5 minutes of reading. Conclusion: most people didn't read it. They scrolled, saw the title, maybe looked at the first paragraph, then left.
But on January 16: 15 views, 729 seconds of reading. That's 48 seconds average. Still not 5 minutes, but significantly more. These 15 people actually read part of the article.
And on January 15: 22 views, 30 seconds of reading. 1.4 seconds per view. These people didn't even open the article. They just saw the title in their feed.
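For the curious, the arithmetic is just daily deltas divided by daily views. A minimal sketch, assuming a snapshots table with cumulative views and reading_seconds columns (the schema and file name are illustrative, not necessarily the tool's actual ones):

import sqlite3

article_id = 123  # hypothetical: your article's DEV.to ID
conn = sqlite3.connect("devto_stats.db")  # illustrative file name
for day, new_views, new_seconds in conn.execute("""
    SELECT date(taken_at) AS day,
           MAX(views) - MIN(views) AS new_views,
           MAX(reading_seconds) - MIN(reading_seconds) AS new_seconds
    FROM snapshots
    WHERE article_id = ?
    GROUP BY day
    ORDER BY day
""", (article_id,)):
    if new_views:
        print(f"{day}: {new_views} views, {new_seconds / new_views:.1f} s/view")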
What this metric reveals is that "views" means nothing on its own. Some views are real readings. Others are lightning-fast pass-throughs. Others are misclicks.
If I only look at total views (139), I think: "Not bad for a week."
If I look at reading times, I think: "In reality, maybe 30 to 40 people actually read the article."
And that completely changes the perspective.
What Tags Reveal (And What They Hide)
The tool also analyzes performance by tag. And here again, there are surprises.
The "performance" tag: 5 articles, 3028 total views, 606 views on average. It's my most visible tag.
The "devjournal" tag: 1 single article, 460 views, but 8.7% engagement. It's "Respiration". A unique article, unclassifiable, unlike anything else I've written.
The "scrum" tag: 1 article, 2154 views, 4% engagement. It's "Actually Agile". The most viewed, but not the most engaging.
What these numbers say is that my most personal articles reach fewer people but create more conversation. My most "professional" articles reach more people but engage less deeply.
And that's exactly the kind of lesson you can only draw by crossing multiple dimensions: views, engagement, tags, temporality. A single metric tells you nothing. Meaning emerges from the relationships between metrics.
Loyal Readers (Or: Who Really Comes Back?)
The tool also analyzes comments. Not just their number, but who comments, on how many articles, with what regularity, over what duration.
And here we discover something DEV.to stats don't show: who your real readers are. Not those who pass by once and disappear, but those who come back.
In my case:
- One reader commented on 9 different articles, over a period of 86 days. 38 comments total, 261 characters average. This isn't someone who says "Nice post!" and leaves. This is someone who really reads, thinks, discusses.
- Three other readers commented on 3 articles each, over periods of 27, 33, and 58 days. They come back. Not systematically, but regularly.
What these numbers reveal is that I have a small core of loyal readers. Not thousands of followers, not tens of thousands of views. But a dozen people who really read what I write, who come back, who engage in conversation.
And that, for me, is worth a thousand times more than 10,000 views from people who skim and move on.
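The query behind this is nothing exotic. A sketch, assuming a comments table with author, article_id, body, and created_at columns (illustrative names, not necessarily the tool's schema):

import sqlite3

conn = sqlite3.connect("devto_stats.db")  # illustrative file name
for author, articles, comments, avg_chars, span_days in conn.execute("""
    SELECT author,
           COUNT(DISTINCT article_id),
           COUNT(*),
           CAST(AVG(LENGTH(body)) AS INT),
           CAST(julianday(MAX(created_at)) - julianday(MIN(created_at)) AS INT)
    FROM comments
    GROUP BY author
    HAVING COUNT(DISTINCT article_id) >= 2
    ORDER BY 2 DESC
"""):
    print(f"{author}: {articles} articles, {comments} comments, "
          f"{avg_chars} chars avg, over {span_days} days")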
When Do Comments Arrive?
Another discovery: comment timing.
When I publish an article, 39.8% of comments arrive in the first 24 hours. Then there's a secondary peak between 24 and 72 hours (27.8%). Then it slows down: 10.4% between 3 and 7 days, 6.9% between 1 and 4 weeks.
But — and this is where it gets interesting — 15% of comments arrive more than a month after publication.
That means my articles continue to create conversations long after they come out. Not massively, but constantly. A comment here, another there, three weeks later, two months later. People who stumble upon an old text, read it, have something to say.
Without this temporal analysis of comments, I would never have known that my articles had this long, discreet life.
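Bucketing those delays takes a dozen lines. A minimal sketch, assuming ISO-8601 timestamps for publication and comment dates (field names and the sample row are illustrative):

from collections import Counter
from datetime import datetime

def bucket(published_at, created_at):
    # Delay between publication and comment, in hours (ISO-8601 timestamps).
    delta = datetime.fromisoformat(created_at) - datetime.fromisoformat(published_at)
    hours = delta.total_seconds() / 3600
    if hours < 24:
        return "< 24 h"
    if hours < 72:
        return "24-72 h"
    if hours < 24 * 7:
        return "3-7 days"
    if hours < 24 * 28:
        return "1-4 weeks"
    return "> 1 month"

# Pairs would come from the comments table; one illustrative row shown.
pairs = [("2025-01-10T18:00:00", "2025-01-11T09:30:00")]
counts = Counter(bucket(p, c) for p, c in pairs)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {100 * n / total:.1f}%")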
(Note in passing: the tool also detects spam. "Lost your crypto? Don't panic!" on an article about CVs. Sometimes, data also tells the absurdities of the web.)
Real Analytics Isn't About Counting. It's About Storytelling.
When you collect data regularly, you no longer see totals. You see trajectories. Rhythms. Moments when something happens.
Let's take a concrete example. My article "How I Cut My Cloud Run Bill by 96%". If I only look at DEV.to stats, I see: "139 views in 7 days". That's all.
But if I look at my timeline collected every 4 hours:
January 10, 7 PM: 14 views (article published 1 hour before)
January 11, 11 AM: +15 views at once (peak)
January 11, 3 PM: +1 view
January 11-12: +1 to +5 views every 4 hours (slow growth)
January 13, 7 AM: +10 views (second peak)
January 13-14: complete stagnation (0 views for 24h)
January 15, 7 AM: +10 views (awakening)
January 15, 11 AM: +12 views (peak)
January 15-16: back to calm (+1 to +2 views)
January 17: +10 views in morning, +10 views at 11 AM, +5 views at 3 PM (last surge)
January 18: complete silence
You see the difference?
Raw data says: "139 views".
The timeline tells a story: "This article lived in waves. A first peak at publication, then slow growth, then three abrupt awakenings on January 13, 15, and 17, always in the morning. Then, silence. The article fell asleep."
And now, I can ask real questions: why these morning peaks? Did someone share the article in a morning newsletter? Does DEV.to have a recommendation logic that works in waves?
Without this fine memory, I would only see a number. With it, I see a story.
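If you want to build that timeline yourself, it's one window function over consecutive snapshots. A sketch against the same illustrative snapshots table as above (SQLite has supported window functions since 3.25):

import sqlite3

article_id = 123  # hypothetical: your article's DEV.to ID
conn = sqlite3.connect("devto_stats.db")  # illustrative file name
for taken_at, delta in conn.execute("""
    SELECT taken_at,
           views - LAG(views) OVER (ORDER BY taken_at) AS delta
    FROM snapshots
    WHERE article_id = ?
    ORDER BY taken_at
""", (article_id,)):
    if delta:  # skip the first row (no previous snapshot) and flat periods
        print(f"{taken_at}: +{delta} views")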
What I Did With These Discoveries
Nothing spectacular. I didn't change my way of writing. I didn't set up a content strategy. I didn't start writing for the numbers.
But I understood what resonates. I understood that my most personal texts create more conversations, even if they reach fewer people. I understood that certain technical subjects continue to be useful long after their publication. I understood that changing a title can revive an article, but it's not a recipe — it's a possibility.
And above all, I understood that my articles live in time. Not when I publish them. But in their trajectory.
The Uncomfortable Truths
Building this tool also forced me to face things I didn't want to see.
Some articles I was proud of are genuinely dead. Not sleeping. Dead. Zero views for weeks. No comments. No reactions. Just silence. I kept thinking "maybe they need time to find their audience". The data said: no, they're just not interesting to anyone.
I also discovered that some of my "high view" articles were inflated by my own bugs. One article showed 2500 hours of reading time over a week. Impressive, right? Except when you do the math: that's 104 days of continuous reading compressed into 7 days. Impossible. Turned out to be a SQL query error — I was doing a SUM on a field that already contained cumulative totals. The real reading time was closer to 57 hours. Still good, but not magical. And embarrassing: I was impressed by my own coding mistake.
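For the record, here is the shape of that bug, with illustrative column names. The cumulative field gets re-counted at every snapshot:

import sqlite3

article_id = 123  # hypothetical
conn = sqlite3.connect("devto_stats.db")  # illustrative file name
# The bug: reading_seconds is a CUMULATIVE total per snapshot,
# so summing snapshots counts the same seconds over and over.
wrong = conn.execute("""
    SELECT SUM(reading_seconds) FROM snapshots
    WHERE article_id = ? AND taken_at >= datetime('now', '-7 days')
""", (article_id,)).fetchone()[0]
# The fix: measure the growth of the cumulative total over the window.
right = conn.execute("""
    SELECT MAX(reading_seconds) - MIN(reading_seconds) FROM snapshots
    WHERE article_id = ? AND taken_at >= datetime('now', '-7 days')
""", (article_id,)).fetchone()[0]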
And the hardest truth: most people don't read. They skim. They see the title in their feed, click, scroll for 3 seconds, leave. The "view" counts as engagement for DEV.to's algorithm, but it's not a real read. It's just... noise.
Without this tool, I could have lived in comfortable illusions. With it, I had to face reality: writing into the void is real, inflated metrics are real (even when you inflate them yourself by accident), and most "engagement" is shallow.
But strangely, that made me feel better. Because now I know which articles genuinely connect with people. And those few real connections matter more than any vanity metric.
Why Other Authors Might Want This
You don't need this if you write occasionally and aren't trying to understand how your texts are received. DEV.to stats are more than enough.
But if you want to know:
- Why certain articles "take off" and others don't
- If a title change really had an effect
- How your texts live over time
- If your old articles continue to bring readers
- What topics really trigger conversations
Then you need a memory. A tool that observes, not a tool that counts.
Why DEV.to Can't Do This (And Why That's Normal)
DEV.to is a platform, not an analytics tool. Its role is to give you a quick overview: how many views, how many reactions, how many comments. That's already a lot.
But a platform can't store every author's detailed history indefinitely. It would be an enormous burden, for marginal benefit. Most authors don't need to know exactly what time an article took off on March 14th.
I do.
Not to "perform". Not to optimize. But because I want to understand what's happening. Because I am — and remain — an observer. Someone who likes to watch how things evolve over time, how an article lives, how a conversation develops.
That's why I built this tool. For me first. To understand my own texts, my own trajectories.
My Stance: Observer, Not Strategist
I don't optimize. I don't chase metrics. I don't compare myself to others.
I observe. I watch how my words live over time. I see what creates real conversations versus what just accumulates views.
This tool is an observation instrument. Not a strategy tool. Not a growth hack. Just a way to see what happens when you write honestly and let time reveal the patterns.
Conclusion: What Numbers Don't Say
DEV.to shows the data. This tool shows the story.
The data says: "This article has 139 views."
The story says: "This article lived in waves. A first peak at publication, then three brutal awakenings on January 13, 15, and 17, always in the morning, then silence."
The data says: "Your 'Agile' articles have 819 views on average."
The story says: "Your 'Agile' articles reach wide but engage little (2.5%). Your 'Free Exploration' articles reach 211 people but engage three times more (7.3%)."
The data says: "Respiration has 460 views."
The story says: "Respiration has 8.7% engagement — your best ratio — because it's the text where you opened up the most."
And sometimes, the story is much more interesting than the total views.
If you too want to see the secret life of your words, steal your data and listen. The rest is just noise.
Technical Annex: How It Works
The tool runs on my machine automatically every 4 hours via cron, calling devto_tracker.py --collect.
At each collection (a stripped-down sketch follows the list):
- Query DEV.to's API for all articles and metrics
- Store a complete snapshot in SQLite: views, reactions, comments, reading time
- Detect changes: modified titles, added tags, deleted articles
- Record events: "Staff Pick" detected, view spikes >3x average
- Collect and analyze comments (length, timing, author)
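Here is a minimal sketch of one collection pass. The endpoint and field names come from the public DEV.to API; the table schema, file name, and environment variable are illustrative, and the real devto_tracker.py does more (reading time, change detection, events):

import os
import sqlite3
import time

import requests
from dotenv import load_dotenv

load_dotenv()  # reads the API key from .env; variable name is illustrative
conn = sqlite3.connect("devto_stats.db")  # illustrative file and schema
conn.execute("""CREATE TABLE IF NOT EXISTS snapshots (
    article_id INTEGER, taken_at TEXT, title TEXT,
    views INTEGER, reactions INTEGER, comments INTEGER)""")

resp = requests.get("https://dev.to/api/articles/me/published",
                    headers={"api-key": os.environ["DEVTO_API_KEY"]},
                    params={"per_page": 100}, timeout=30)
resp.raise_for_status()
taken_at = time.strftime("%Y-%m-%dT%H:%M:%S")
for a in resp.json():
    conn.execute("INSERT INTO snapshots VALUES (?, ?, ?, ?, ?, ?)",
                 (a["id"], taken_at, a["title"], a["page_views_count"],
                  a["public_reactions_count"], a["comments_count"]))
conn.commit()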
SQLite is the heart of the system. Not MongoDB, not PostgreSQL — just SQLite. For 20-30 articles with snapshots every 4 hours, it's more than enough. A single file, easy to back up, easy to query.
Analysis scripts:
dashboard.py — Overview: most viewed articles, engagement rate, performance by tag, author DNA
comment_analyzer.py --full-report — Comment analysis: who comments, when, on how many articles, with what depth
traffic_analytics.py --article ID — Precise timeline: views per day, reading time, reactions
seismograph.py — Correlation detection: title change → view spike, influential comment → exposure boost
Each script queries the same database with a different question. Simple Python. No complicated frameworks. Just scripts that read SQLite and display results in the terminal.
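As an example, the spike detection can be as simple as comparing each new delta to a trailing average. A sketch of the ">3x average" rule mentioned above; the window size and details are illustrative:

# Flag snapshots whose view delta exceeds 3x the trailing average,
# matching the ">3x average" rule above; window size is illustrative.
def spikes(deltas, window=12, factor=3.0):
    """deltas: per-snapshot view increases, oldest first."""
    flagged = []
    for i, d in enumerate(deltas):
        history = deltas[max(0, i - window):i]
        avg = sum(history) / len(history) if history else 0
        if avg and d > factor * avg:
            flagged.append((i, d, avg))
    return flagged

print(spikes([1, 2, 1, 0, 15, 1, 2, 10]))  # flags the 15 and the 10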
Installation (3 Steps)
The code is on GitHub: github.com/pcescato/devto_stats
# 1. Clone
git clone https://github.com/pcescato/devto_stats.git
cd devto_stats
# 2. Install dependencies
pip install requests python-dotenv
# 3. Configure API key
cp .env.example .env
# Edit .env, add your DEV.to API key
# (get it at https://dev.to/settings/extensions)
First collection:
# Initialize database
python3 devto_tracker.py --init
# Collect first data
python3 devto_tracker.py --collect
Automate (recommended):
chmod +x setup_automation.sh
./setup_automation.sh
This script will create a cron wrapper and offer different collection frequencies (2x/day, 4x/day, 6x/day).
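For reference, the 6x/day option corresponds to a crontab entry along these lines (an illustration, not the script's literal output; adjust the path):

# Illustrative 6x/day entry; the actual line written by setup_automation.sh may differ.
0 */4 * * * cd /path/to/devto_stats && python3 devto_tracker.py --collect >> collect.log 2>&1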
After a few days: trends emerge.
After a few weeks: complete timelines.
After a few months: a real memory of your texts.
It's not a fancy dashboard. It's not a Web interface with animated graphics. It's a command-line tool for those who want to understand, not impress.
Top comments (2)
But wow, this is a great tool!
As for those articles people scroll through instead of actually reading: I always think that if something is genuinely read by 30–40 people, that’s already a successful article. That’s a full room at a meetup or a workshop, after all!
Exactly! That's the shift this tool gave me. Before, I'd see "139 views" and think "meh, not great". Now I see "maybe 35 actual reads" and think "that's a packed room of people who chose to spend 5 minutes with my words".
The room analogy is perfect. Would I rather speak to 1000 people scrolling their phones, or 35 people genuinely listening? The answer became obvious once I could see the difference.
Thanks for getting it :)