Hey everyone,
I recently wrote a case study on Netflix’s Data-Driven Product Strategy, and wanted to share a few lessons that turned out to be surprisingly useful, whether you’re on a big team or working solo.
What Netflix did
- They identified small “friction” moments (for example, sitting through a show’s intro on every episode) and ran controlled experiments to test simple solutions like “Skip Intro.” That change is now used millions of times a day, and it improves session continuation (a minimal sketch of this kind of check follows the list).
- They also personalized the artwork (the tiles for shows and movies) to better match each user’s tastes, then tested which visuals led to more clicks and plays.
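Both examples boil down to the same question: did the variant move a rate relative to control? Here’s a minimal sketch of that check as a two-proportion z-test in Python. The numbers are made up, and Netflix’s real tooling is obviously far more sophisticated:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Made-up numbers: control vs. a "Skip Intro"-style variant,
# measuring whether the session continued to another episode.
lift, z, p = two_proportion_z(conv_a=4_120, n_a=10_000,   # control
                              conv_b=4_390, n_b=10_000)   # variant
print(f"lift={lift:.3%}  z={z:.2f}  p={p:.4f}")
```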
What I took away (and use myself):
- Always define a clear OEC (Overall Evaluation Criterion) and guardrail metrics up front: know which single metric really matters, and which metrics you must not harm (see the sketch after this list).
- Run experiments even for “small stuff.” Sometimes low-effort ideas have big returns.
- Use “platform thinking”: build templates, metrics, and dashboards that scale, so you don’t reinvent everything from scratch each time.
- Combine quantitative data with real user feedback (screen recordings, session replays, interviews) so you understand why something works or doesn’t.
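To make the OEC-plus-guardrails point concrete, here’s a minimal sketch of a ship decision gated on one OEC and a couple of guardrails. All metric names and thresholds are hypothetical, not anything Netflix publishes:

```python
from dataclasses import dataclass

@dataclass
class Guardrail:
    metric: str
    max_regression: float  # largest acceptable relative drop

# Hypothetical setup: one OEC, a few guardrails you must not harm.
OEC = "7_day_retention"
GUARDRAILS = [
    Guardrail("playback_error_rate", max_regression=0.00),  # no regression allowed
    Guardrail("avg_session_length",  max_regression=0.01),  # tolerate up to a 1% drop
]

def ship_decision(oec_lift: float, regressions: dict[str, float]) -> bool:
    """Ship only if the OEC improved and no guardrail regressed past its limit."""
    if oec_lift <= 0:
        return False
    return all(regressions.get(g.metric, 0.0) <= g.max_regression
               for g in GUARDRAILS)

# Example: OEC up 1.2%, playback errors flat, session length down 0.4%.
print(ship_decision(0.012, {"playback_error_rate": 0.0,
                            "avg_session_length": 0.004}))  # True
```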
What this means for you:
- Even if you don’t have Netflix’s user base, you can still run experiments on small features; with fewer users you’ll just need bigger effects or longer run times to reach significance.
- Prioritize experiments by expected exposure (how many users will hit the change) relative to cost (a toy scoring example follows this list).
- Build a habit of measuring and reviewing: keep experiment logs, use small test groups, and set clear decision points before launch.
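For the prioritization point, here’s a toy version, assuming you can roughly estimate weekly exposure and engineering-days per idea. The exposure-over-cost score is just one reasonable heuristic, not a standard formula, and every number below is invented:

```python
# Toy prioritization: score = expected weekly exposure / engineering-days.
# Both inputs are rough estimates; the point is forcing the comparison.
ideas = [
    ("skip-intro-style shortcut", 50_000, 3),   # (name, weekly exposure, eng-days)
    ("personalized thumbnails",   80_000, 15),
    ("onboarding tooltip",         9_000, 1),
]

for name, exposure, cost in sorted(ideas, key=lambda i: i[1] / i[2], reverse=True):
    print(f"{name:30s} score={exposure / cost:>10,.0f}")
```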
Curious, for those of you here running experiments:
What metric (or “OEC”) surprised you most? What was an experiment that failed but taught you more than the successful ones?