Remember last time we said "burndown charts lie"?
This time, let's learn how to build an honest one.
## Scope Creep: The Goal Keeps Moving
Have you experienced this during a project?

- Week 1: "100 points and we're done!"
- Week 2: "Oh, add social login too" (+15)
- Week 3: "It must support mobile too" (+20)
- Week 4: "An admin page is a given..." (+30)
You worked hard and completed 50 points, but the total scope grew to 165. Measured against the new total, progress actually fell from 50% to 30%.
This is exactly scope creep.
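The arithmetic is easy to check; here's a quick sketch (in Python, using the numbers from the example above):

```python
def progress_pct(completed, total_scope):
    """Percent complete against the current total scope."""
    return round(completed / total_scope * 100, 1)

# Against the original 100-point plan:
print(progress_pct(50, 100))  # 50.0
# Against the crept scope of 165 points:
print(progress_pct(50, 165))  # 30.3
```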
## Burn-up Chart: A Tool That Shows the Truth
A burndown chart only shows remaining work. A burn-up chart shows completed work and total scope together.
```javascript
class BurnupChart {
  constructor() {
    this.completed = [];
    this.totalScope = [];
  }

  addWeek(completedWork, currentScope) {
    this.completed.push(completedWork);
    this.totalScope.push(currentScope);
  }

  getScopeCreep() {
    const initial = this.totalScope[0];
    const current = this.totalScope[this.totalScope.length - 1];
    return (((current - initial) / initial) * 100).toFixed(1);
  }
}

// Usage example
const chart = new BurnupChart();
chart.addWeek(0, 100); // Start
chart.addWeek(20, 100); // Week 1
chart.addWeek(35, 115); // Week 2 (scope increase!)
chart.addWeek(50, 135); // Week 3 (increased again!)

console.log(`Scope creep: ${chart.getScopeCreep()}%`); // Scope creep: 35.0%
```
Looking at a burn-up chart, you can see that "we're not slow; the goal keeps moving away."
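As a rough illustration, a burn-up chart can even be rendered in text. This toy renderer is my own sketch (not a standard tool), fed the same numbers as the example above:

```python
def burnup_rows(completed, scope, width=27):
    """Render each week as a text bar: '#' = completed work, '.' = remaining scope."""
    max_scope = max(scope)
    rows = []
    for week, (done, total) in enumerate(zip(completed, scope)):
        done_cols = round(done / max_scope * width)
        total_cols = round(total / max_scope * width)
        rows.append(f"W{week} {'#' * done_cols}{'.' * (total_cols - done_cols)} {done}/{total}")
    return rows

# The bar keeps getting longer even as the '#' section advances: that's scope creep
for row in burnup_rows([0, 20, 35, 50], [100, 100, 115, 135]):
    print(row)
```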
## Cumulative Flow Diagram: Find Bottlenecks
A cumulative flow diagram graphs the number of tasks in each column of a Kanban board over time.
```python
def analyze_cumulative_flow(daily_data):
    """Cumulative flow analysis"""
    bottlenecks = []
    for day in daily_data:
        # Number of tasks by stage
        todo = day["todo"]
        doing = day["doing"]
        review = day["review"]
        done = day["done"]

        # Bottleneck detection
        if review > doing * 2:
            bottlenecks.append({
                "day": day["date"],
                "issue": "Review bottleneck",
                "action": "Need more reviewers",
            })
        if doing > 10:
            bottlenecks.append({
                "day": day["date"],
                "issue": "Excessive WIP",
                "action": "Limit in-progress work",
            })
    return bottlenecks
```
If one band in the cumulative flow diagram keeps swelling, that's your bottleneck.
## Cycle Time Tracking
Cycle time measures the actual time from task start to completion.
```javascript
class CycleTimeTracker {
  trackTask(task) {
    const startDate = new Date(task.startedAt);
    const endDate = new Date(task.completedAt);
    const cycleTime = (endDate - startDate) / (1000 * 60 * 60 * 24); // In days
    return {
      name: task.name,
      cycleTime: cycleTime,
      category: this.categorize(cycleTime),
    };
  }

  categorize(days) {
    if (days <= 1) return '🟢 Fast';
    if (days <= 3) return '🟡 Normal';
    if (days <= 7) return '🟠 Slow';
    return '🔴 Very Slow';
  }

  getAverageCycleTime(tasks) {
    const times = tasks.map((t) => this.trackTask(t).cycleTime);
    return times.reduce((a, b) => a + b, 0) / times.length;
  }
}
```
A rising average cycle time is a signal that something in the process is wrong.
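One caveat: averages hide outliers. A percentile view (a sketch of my own, not part of the tracker above; sample numbers invented) makes for more honest commitments:

```python
def percentile(values, p):
    """Nearest-rank percentile of a list of cycle times (in days)."""
    ordered = sorted(values)
    index = min(len(ordered) - 1, int(len(ordered) * p))
    return ordered[index]

cycle_times = [0.5, 1, 1, 2, 2, 3, 4, 7, 12]  # hypothetical sample, in days

print(percentile(cycle_times, 0.5))   # the median: half of tasks finish within this
print(percentile(cycle_times, 0.85))  # "85% of tasks finish within this many days"
```

Saying "85% of tasks finish within a week" commits you to the distribution, not just its average.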
## Predicting with Monte Carlo Simulation
Monte Carlo simulation uses past data to predict a range of completion dates.
```python
import random

def monte_carlo_forecast(historical_velocities, remaining_work, simulations=1000):
    """Completion date prediction"""
    results = []
    for _ in range(simulations):
        days = 0
        work_left = remaining_work
        while work_left > 0:
            # Randomly sample from past velocities
            daily_velocity = random.choice(historical_velocities)
            work_left -= daily_velocity
            days += 1
        results.append(days)
    results.sort()
    return {
        "p50": results[int(len(results) * 0.5)],  # 50% probability
        "p70": results[int(len(results) * 0.7)],  # 70% probability
        "p90": results[int(len(results) * 0.9)],  # 90% probability
    }

# Usage example
velocities = [3, 5, 2, 8, 4, 6, 3, 7]  # Past daily completions
remaining = 100  # Remaining work (points)
forecast = monte_carlo_forecast(velocities, remaining)
print(f"50% probability: complete within {forecast['p50']} days")
print(f"90% probability: complete within {forecast['p90']} days")
```
Instead of "probably done in 2 weeks," you can say "12 days at 70% probability, 16 days at 90%."
## Lead Time Distribution

Analyze how long tasks of each size actually take.
```javascript
const leadTimeDistribution = {
  'XS (1-2 points)': {
    average: '0.5 days',
    p50: '0.5 days',
    p90: '1 day',
    recommendation: 'Process immediately',
  },
  'S (3-5 points)': {
    average: '2 days',
    p50: '1.5 days',
    p90: '3 days',
    recommendation: 'Normal processing',
  },
  'M (8-13 points)': {
    average: '5 days',
    p50: '4 days',
    p90: '8 days',
    recommendation: 'Review breakdown',
  },
  'L (20+ points)': {
    average: '15 days',
    p50: '12 days',
    p90: '25 days',
    recommendation: 'Must break down',
  },
};

function estimateBySize(points) {
  if (points <= 2) return leadTimeDistribution['XS (1-2 points)'];
  if (points <= 5) return leadTimeDistribution['S (3-5 points)'];
  if (points <= 13) return leadTimeDistribution['M (8-13 points)'];
  return leadTimeDistribution['L (20+ points)'];
}
```
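The table above is illustrative; with your own completed-task records you can derive one like it. A minimal sketch (bucket names and sample numbers are invented):

```python
from statistics import median

def lead_time_stats(samples_by_size):
    """Derive p50/p90 lead times (in days) from raw per-size samples."""
    stats = {}
    for size, samples in samples_by_size.items():
        ordered = sorted(samples)
        p90_index = min(len(ordered) - 1, int(len(ordered) * 0.9))
        stats[size] = {"p50": median(ordered), "p90": ordered[p90_index]}
    return stats

# Hypothetical lead-time records, in days per completed task
samples = {
    "S": [1, 1.5, 1.5, 2, 3],
    "M": [3, 4, 4, 5, 8],
}
print(lead_time_stats(samples))
```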
## Creating a Real-Time Dashboard
```python
class HonestDashboard:
    """Honest project dashboard"""

    def __init__(self):
        self.metrics = {}

    def update_daily(self, data):
        self.metrics = {
            "progress": self.calculate_progress(data),
            "expected_completion": self.forecast_completion(data),
            "scope_change": self.scope_change(data),
            "bottleneck": self.find_bottleneck(data),
            "risks": self.assess_risks(data),
        }

    def calculate_progress(self, data):
        # Point-based progress (not task count)
        completed_points = data["completed_points"]
        total_points = data["total_points"]
        return f"{completed_points / total_points * 100:.1f}%"

    def forecast_completion(self, data):
        # Prediction based on actual velocity
        avg_velocity = data["avg_velocity_last_2weeks"]
        remaining = data["remaining_points"]
        if avg_velocity <= 0:
            return "unknown (no recent velocity)"
        return f"{remaining / avg_velocity:.0f} days later"

    def scope_change(self, data):
        # Scope change rate
        initial = data["initial_scope"]
        current = data["current_scope"]
        change = (current - initial) / initial * 100
        return f"{change:+.1f}%"

    def find_bottleneck(self, data):
        # Stage with the most accumulated work
        stages = data["work_in_progress_by_stage"]
        bottleneck = max(stages, key=stages.get)
        return f"{bottleneck} ({stages[bottleneck]} items)"

    def assess_risks(self, data):
        risks = []
        if data["velocity_trend"] < 0:
            risks.append("Velocity decreasing")
        if data["scope_creep_rate"] > 10:
            risks.append("Excessive scope change")
        if data["blocked_tasks"] > 3:
            risks.append(f"{data['blocked_tasks']} blockers")
        return risks or ["Normal"]
```
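For illustration, here is a hypothetical daily payload (the keys follow the dashboard fields; all numbers are invented) and two of the figures that would be derived from it:

```python
# Hypothetical daily payload; keys match the dashboard's expected fields
data = {
    "completed_points": 120,
    "total_points": 180,
    "avg_velocity_last_2weeks": 2.5,  # points per day
    "remaining_points": 60,
    "initial_scope": 133,
    "current_scope": 180,
    "work_in_progress_by_stage": {"todo": 6, "doing": 5, "review": 9},
    "velocity_trend": -1,
    "scope_creep_rate": 35,
    "blocked_tasks": 4,
}

# Two of the figures the dashboard would report:
progress = f"{data['completed_points'] / data['total_points'] * 100:.1f}%"
bottleneck = max(data["work_in_progress_by_stage"], key=data["work_in_progress_by_stage"].get)
print(progress, bottleneck)
```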
## Honest Communication
```python
def honest_status_report():
    """Honest status report"""
    return """
## Project Status (Week 8)

### Truth in Numbers
- Completed: 120 points / 180 points (67%)
- vs initial scope: +35% increase
- Current velocity: 15 points/week
- Expected completion: 4 weeks (70% confidence)

### Good News
- Core features 80% complete
- Quality metrics improved (bugs -40%)

### Bad News
- Scope continues to increase
- Backend bottleneck intensifying

### Decisions Needed
1. Whether to freeze additional requirements
2. Whether to bring in help for the backend developer
3. Whether to adjust the launch schedule
"""
```
Don't manipulate numbers. Tell the truth.
Only then can correct decisions be made.
## Checklist: Making an Honest Burndown
At project start:

- [ ] Prepare a burn-up chart too
- [ ] Unify story point criteria
- [ ] Clearly define completion criteria
- [ ] Set up a cycle time measurement tool

Daily:

- [ ] Record completed points
- [ ] Record newly added scope
- [ ] Record blockers
- [ ] Record task counts by stage

Weekly:

- [ ] Calculate average velocity
- [ ] Recalculate the expected completion date
- [ ] Analyze the scope change rate
- [ ] Identify bottlenecks
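The weekly items can be automated with a small helper. A sketch under the assumption that you record completed points per week (function name and sample numbers are my own):

```python
def weekly_review(completed_per_week, initial_scope, current_scope, remaining_points):
    """Recompute the weekly checklist numbers from raw records."""
    avg_velocity = sum(completed_per_week) / len(completed_per_week)
    weeks_left = remaining_points / avg_velocity
    scope_change = (current_scope - initial_scope) / initial_scope * 100
    return {
        "avg_velocity": round(avg_velocity, 1),
        "weeks_to_completion": round(weeks_left, 1),
        "scope_change_pct": round(scope_change, 1),
    }

# Hypothetical records: 3 weeks of completed points, scope grown from 100 to 135
print(weekly_review([20, 15, 15], initial_scope=100, current_scope=135, remaining_points=85))
```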
## Conclusion
An honest burndown isn't simply drawing charts. It's the courage to show reality as it is.

Instead of the lie "90% complete," say "67% complete, scope up 35%, four more weeks needed."

Only then can the team make the right decisions.
Need transparent and accurate project management? Check out Plexo.
