"What's this sprint's velocity?"
"Um... roughly 30 points?"
Measuring a team's actual productivity is harder than you think.
If you're only looking at burndown charts, you're only seeing half the truth.
Why is knowing real team velocity important?
Predictable schedules, sustainable pace, and team growth.
All of this starts with accurate velocity measurement.
Today, I'll go beyond burndown charts and introduce five metrics that measure a team's real productivity.
Blind Spots of Burndown Charts
There are things burndown charts can't show.
```python
# What burndown charts miss
hidden_metrics = {
    "rework": "Time spent on bug fixes",
    "context_switching": "Task switching cost",
    "technical_debt": "Work deferred for later",
    "learning_curve": "Time to learn new technology",
    "communication": "Meeting and review time",
}
```
These hidden tasks distort the team's actual velocity.
Metric 1: Throughput
"Number of completed tasks"
The simplest yet most powerful metric.
```javascript
const weeklyThroughput = {
  week1: { completed: 12, started: 15 },
  week2: { completed: 8, started: 20 }, // Warning signal!
  week3: { completed: 15, started: 15 },
  week4: { completed: 14, started: 14 },
};

// Average throughput: 12.25 tasks/week
// Standard deviation: 3.09 (check variability)
```
Are fewer tasks being completed than started?
That's a danger signal that WIP (Work In Progress) is accumulating.
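The numbers above are easy to check programmatically. Here's a minimal sketch using the same example data and Python's standard library; the net-WIP check flags the accumulation problem directly:

```python
import statistics

# Weekly throughput data from the example above
weeks = [
    {"completed": 12, "started": 15},
    {"completed": 8, "started": 20},
    {"completed": 15, "started": 15},
    {"completed": 14, "started": 14},
]

completed = [w["completed"] for w in weeks]
avg = statistics.mean(completed)      # 12.25 tasks/week
stdev = statistics.stdev(completed)   # ~3.10 (sample std dev)

# WIP grows whenever more tasks start than finish
net_wip_added = sum(w["started"] - w["completed"] for w in weeks)
print(f"avg={avg}, stdev={stdev:.2f}, net WIP added={net_wip_added}")
```

A positive `net_wip_added` over several weeks means the team is taking on work faster than it finishes it.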
Metric 2: Cycle Time
"Time from task start to completion"
```python
def calculate_cycle_time(task):
    """Calculate actual working time for a task."""
    start = task.started_at
    end = task.completed_at
    # Exclude weekends and holidays
    working_days = exclude_weekends_holidays(start, end)
    # Exclude blocked time (also measured in working days)
    blocked_days = task.blocked_duration
    return working_days - blocked_days

# Average cycle time by task type
cycle_times = {
    "Bug Fix": 0.5,              # days
    "Feature Development": 3.2,  # days
    "Refactoring": 2.1,          # days
    "Documentation": 1.0,        # days
}
```
If cycle time increases, you must find where the bottleneck is.
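One simple way to notice an increase early is to compare the latest reading against the historical mean. This is a minimal sketch with made-up weekly cycle times; the one-standard-deviation threshold is an assumption, not a universal rule:

```python
import statistics

# Hypothetical weekly average cycle times (days) for one task type
weekly_cycle_times = [2.1, 2.3, 2.2, 3.0, 3.4]

# Flag a slowdown when the latest reading exceeds the
# historical mean by more than one standard deviation
history, latest = weekly_cycle_times[:-1], weekly_cycle_times[-1]
threshold = statistics.mean(history) + statistics.stdev(history)
slowing_down = latest > threshold
print(slowing_down)  # True
```

When the flag fires, that's the cue to dig into where tasks are actually waiting.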
Metric 3: Lead Time
"Total time from request to deployment"
The difference from cycle time is that it includes wait time.
```
Lead Time = Wait Time + Cycle Time

Request → [Wait] → Start → [Work] → Complete → [Wait] → Deploy
└──────────────────────── Lead Time ─────────────────────────┘
                   └───── Cycle Time ──────┘
```
If lead time is long, process improvement is needed.
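The diagram translates directly into code. Here's a minimal sketch with hypothetical timestamps (all the dates are made up for illustration) using the standard-library `datetime` module:

```python
from datetime import datetime

# Hypothetical task timeline (all timestamps are illustrative)
requested_at = datetime(2024, 3, 1, 9, 0)   # ticket created
started_at = datetime(2024, 3, 4, 9, 0)     # work begins
completed_at = datetime(2024, 3, 5, 17, 0)  # work done
deployed_at = datetime(2024, 3, 6, 17, 0)   # shipped

lead_time = deployed_at - requested_at   # request → deploy
cycle_time = completed_at - started_at   # start → complete
wait_time = lead_time - cycle_time       # time spent queued

print(lead_time, cycle_time, wait_time)
```

In this example the task was queued for four full days but only worked on for a little over one: exactly the gap that process improvement should target.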
Metric 4: Flow Efficiency
"Actual work time / Total time"
```python
flow_efficiency = {
    "Work Time": 8,     # Actual coding, testing
    "Wait Time": 32,    # Review wait, deployment wait
    "Meeting Time": 5,  # Discussion, planning
    "Blocking": 3,      # Dependency wait
}

efficiency = 8 / (8 + 32 + 5 + 3)  # 16.7%
```
Most teams hover around the 15-20% level.
Above 40% indicates a very efficient team.
Metric 5: Forecast Accuracy
"Actual completion rate vs plan"
```javascript
const sprintAccuracy = {
  planned: 30,   // Planned story points
  completed: 24, // Actually completed points
  added: 5,      // Points added mid-sprint
  removed: 3,    // Points removed mid-sprint
};

// Forecast accuracy = Completed / Planned * 100
const accuracy = (24 / 30) * 100; // 80%

// Scope change rate (net points added / planned)
const scopeChange = ((5 - 3) / 30) * 100; // 6.7%
```
If accuracy falls below 70%, you need to improve your planning process.
Practice: Building a Dashboard
Create a dashboard to see these 5 metrics at a glance.
```python
class VelocityDashboard:
    def __init__(self, team_data):
        self.data = team_data

    def weekly_report(self):
        return {
            "throughput": self.calculate_throughput(),
            "avg_cycle_time": self.calculate_cycle_time(),
            "avg_lead_time": self.calculate_lead_time(),
            "flow_efficiency": self.calculate_efficiency(),
            "forecast_accuracy": self.calculate_accuracy(),
            "trend": self.analyze_trend(),
        }

    def analyze_trend(self):
        """Analyze the velocity trend."""
        if self.is_improving():
            return "📈 Improving"
        elif self.is_stable():
            return "➡️ Stable"
        else:
            return "📉 Attention needed"
```
Velocity Improvement Strategies
Once you've measured, you must improve.
1. Increase Throughput with WIP Limits
Before WIP Limit: 10 simultaneous → 5 completed/week
After WIP Limit: 3 simultaneous → 8 completed/week
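The before/after numbers are consistent with Little's Law (average cycle time = average WIP / throughput), which is a useful sanity check on any WIP-limit experiment. A minimal sketch:

```python
def avg_cycle_time(wip: float, throughput: float) -> float:
    """Little's Law: avg cycle time = avg WIP / throughput."""
    return wip / throughput

# Numbers from the before/after comparison above
before = avg_cycle_time(wip=10, throughput=5)  # 2.0 weeks per task
after = avg_cycle_time(wip=3, throughput=8)    # 0.375 weeks per task
print(before, after)
```

Fewer items in flight didn't just raise throughput; it cut the average time each task spends in the system by more than 5x.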
2. Shorten Cycle Time with Automation
```python
automation_impact = {
    "Manual Testing": 4,       # hours
    "Automated Testing": 0.5,  # hours
    "Time Saved": 3.5,         # hours/task
}
```
3. Improve Flow Efficiency by Removing Bottlenecks
Remove the largest wait time first.
Usually code review or QA stage is the bottleneck.
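One way to find it is to rank stages by average wait time and attack the largest first. This is a minimal sketch with illustrative numbers; the stage names and hours are assumptions, not measurements:

```python
# Hypothetical average wait per stage, in hours
stage_wait_hours = {
    "code_review": 18,
    "qa": 12,
    "deployment": 4,
    "design_approval": 2,
}

# The stage with the longest wait is the first candidate for improvement
bottleneck = max(stage_wait_hours, key=stage_wait_hours.get)
print(bottleneck)  # code_review
```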
Benchmarks by Team Size
Startup (5 people):
  throughput: 15-20 tasks/week
  cycle_time: 1-2 days
  efficiency: 25-35%

Mid-size (20 people):
  throughput: 40-60 tasks/week
  cycle_time: 2-4 days
  efficiency: 20-30%

Enterprise (100+ people):
  throughput: 150-200 tasks/week
  cycle_time: 3-7 days
  efficiency: 10-20%
Do you see why smaller teams have higher efficiency? Fewer handoffs and less coordination overhead mean less time spent waiting.
Conclusion: If You Can't Measure, You Can't Improve
Burndown charts are just the beginning.
To know real team velocity:
- Measure productivity with throughput
- Measure speed with cycle time
- Measure waste with flow efficiency
- Measure planning ability with forecast accuracy
Measure, track, and improve.
"What you don't measure, you can't manage,
What you don't manage, you can't improve."
Need accurate team velocity measurement and improvement? Check out Plexo.
