I've sat in too many meetings where someone points at a number on a dashboard and says "look, it's working" or "look, it's broken" — and the number is measuring the wrong thing, or measuring the right thing wrong, or both.
This isn't a data engineering problem. It's a measurement problem. And it's more common than anyone wants to admit.
Why Dashboard Numbers Lie
Problem 1: The metric was defined in a meeting, not in code
Someone says "we need to track active users." Everyone nods. Nobody asks what "active" means. Is it users who logged in today? Users who performed any action? Users who performed a meaningful action (not a bot ping)? Users who paid?
The engineer implements something. The product manager had something different in mind. The exec reading the dashboard has a third interpretation. The number exists, it changes over time, and everyone is confidently wrong about what it means.
Fix: Define metrics in writing before building. "Active user: a user account that submitted at least one form in the last 30 days, excluding accounts marked as test or internal." That's a spec, not a vibe.
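A spec like that can live right next to the pipeline as code. Here's a minimal sketch of that "active user" definition as a testable function; the field names (`is_test`, `is_internal`, `form_submissions`) are hypothetical, standing in for whatever your actual schema uses:

```python
from datetime import datetime, timedelta, timezone

def is_active(user, now=None, window_days=30):
    """Spec: an active user submitted at least one form in the last
    30 days, excluding accounts marked as test or internal.
    Field names here are illustrative, not a real schema."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    if user.get("is_test") or user.get("is_internal"):
        return False
    return any(ts >= cutoff for ts in user.get("form_submissions", []))
```

Once the definition is a function, the product manager, the engineer, and the exec are all arguing about the same ten lines instead of three private interpretations.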
Problem 2: The data pipeline has bugs
Metrics pipelines are production code. They fail, they drift, they have edge cases. But unlike application bugs that show up as broken features, data bugs show up as slightly wrong numbers. Slightly wrong numbers are the most dangerous kind because they feel believable.
A query that double-counts events because of a join that fans out. A timezone issue that shifts events between days. A column whose meaning changed when the schema evolved, while the query reading it didn't.
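The join bug in particular is easy to demonstrate. Here's a toy illustration (invented data, not a real schema) of how joining events to a one-to-many table quietly multiplies your counts:

```python
# Two real purchase events.
events = [
    {"user_id": 1, "event": "purchase"},
    {"user_id": 2, "event": "purchase"},
]

# A user can have several sessions. Joining events to sessions on
# user_id alone multiplies each event by that user's session count.
sessions = [
    {"user_id": 1, "session_id": "a"},
    {"user_id": 1, "session_id": "b"},
    {"user_id": 2, "session_id": "c"},
]

joined = [
    {**e, **s}
    for e in events
    for s in sessions
    if e["user_id"] == s["user_id"]
]

print(len(events))  # 2 actual purchases
print(len(joined))  # 3 rows after the join: user 1 is counted twice
```

The dashboard built on `joined` reports 3 purchases. Nothing errors. The number just drifts upward with session activity, which is exactly the "slightly wrong" failure mode that feels believable.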
Nobody knows these exist. The dashboard looks fine. Decisions get made.
Fix: Treat your data pipeline like you treat application code. Tests, code review, alerts when outputs go outside expected ranges. "This metric dropped 40% overnight" should be an automated alert, not something someone notices in a quarterly review.
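That alert doesn't need heavy infrastructure to start with. A hedged sketch of a day-over-day range check (the thresholds are illustrative, not recommendations, and `check_metric` is a made-up name):

```python
def check_metric(name, today, yesterday, max_drop=0.4, max_spike=0.5):
    """Flag day-over-day moves outside an expected band.
    Returns an alert string, or None if the metric looks normal."""
    if yesterday == 0:
        return f"ALERT: {name} had no data yesterday"
    change = (today - yesterday) / yesterday
    if change <= -max_drop:
        return f"ALERT: {name} dropped {abs(change):.0%} overnight"
    if change >= max_spike:
        return f"ALERT: {name} spiked {change:.0%} overnight"
    return None

print(check_metric("active_users", today=580, yesterday=1000))
# flags a 42% overnight drop
```

Note that spikes are checked too: a metric jumping 50% overnight is just as likely to be a pipeline bug as a drop is.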
Problem 3: The metric is easy to measure, not important to measure
Page views. Email opens. Time on site. These are easy to measure, and they are frequently poor proxies for what you actually care about.
Page views go up if your site is slow and people reload. Email opens are inflated by Apple's Mail Privacy Protection. Time on site goes up if your UX is confusing.
You optimize for what you measure. If you measure the wrong thing, you optimize in the wrong direction.
Fix: Work backward from business outcomes. What does success actually look like? Revenue, retention, task completion rate, NPS. Then find metrics that lead to those outcomes. Resist the urge to track what's easy.
Problem 4: Survivorship bias in the sample
Your dashboard probably shows data about the users you have. It probably doesn't show you much about the users you lost.
Churn dashboards that only track reasons given by departing customers miss the 60% who leave without clicking the feedback button. Conversion funnels show where people drop off but not why. Satisfaction scores track satisfied users more than dissatisfied ones who've already moved on.
Fix: Design for the data you don't have. Exit surveys. Cohort analysis that follows users from signup through churn. Win/loss interviews. The data you're missing is often more important than the data you have.
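Cohort analysis is the piece of this that's easiest to sketch in code, because it forces the churned users back into the denominator. A minimal version, with invented data and field names reduced to month strings for brevity:

```python
from collections import defaultdict

# Toy signup/activity log; field names are illustrative.
users = [
    {"id": 1, "signup_month": "2024-01", "active_months": {"2024-01", "2024-02", "2024-03"}},
    {"id": 2, "signup_month": "2024-01", "active_months": {"2024-01"}},  # churned, silently
    {"id": 3, "signup_month": "2024-02", "active_months": {"2024-02", "2024-03"}},
]

def cohort_retention(users, month):
    """Fraction of each signup cohort still active in `month`,
    counting the users who left without ever telling you why."""
    cohorts = defaultdict(lambda: [0, 0])  # cohort -> [retained, total]
    for u in users:
        stats = cohorts[u["signup_month"]]
        stats[1] += 1
        if month in u["active_months"]:
            stats[0] += 1
    return {c: retained / total for c, (retained, total) in cohorts.items()}

print(cohort_retention(users, "2024-03"))
# Jan cohort: 1 of 2 still active; Feb cohort: 1 of 1
```

User 2 never clicked a feedback button, but the cohort view still counts them, which is the whole point: the denominator includes the people who disappeared.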
The Uncomfortable Conclusion
Most dashboards tell you what happened. The useful question is why it happened and whether it matters.
A metric without context is just a number. "DAUs up 12%" is not useful without knowing whether that's a trend or a spike, whether it's driven by the user segment you care about, and whether it maps to outcomes that actually matter for the business.
The best data teams I've seen spend more time on measurement strategy — defining what to track and why — than they do on building pipelines. The pipeline is the easy part. Getting the right number is hard.
What's the most misleading metric you've ever seen a team optimize for? I want to hear the stories.