No more gut estimates like "roughly a week?"
From the PERT technique the US Navy used on its nuclear submarine program to the Planning Poker beloved by Agile teams, let's learn scientific estimation methods.
PERT: The Magic of Three-Point Estimation
PERT (Program Evaluation and Review Technique) was developed by the US Navy in the 1950s for the Polaris missile project, and it is credited with helping shorten that program by about two years.
Basic Formula
```python
def pert_estimation(optimistic, realistic, pessimistic):
    """
    optimistic (O): when everything is perfect
    realistic (R): the normal case
    pessimistic (P): when everything goes wrong
    """
    O, R, P = optimistic, realistic, pessimistic

    # PERT formula
    expected = (O + 4 * R + P) / 6

    # Standard deviation (uncertainty)
    std_dev = (P - O) / 6

    return {
        "expected": expected,
        "std_dev": std_dev,
        "range_68%": (expected - std_dev, expected + std_dev),
        "range_95%": (expected - 2 * std_dev, expected + 2 * std_dev),
    }


# Real example: Login API development
result = pert_estimation(
    optimistic=4,    # Best case: 4 hours
    realistic=8,     # Normal case: 8 hours
    pessimistic=16,  # Worst case: 16 hours
)

print(f"Expected: {result['expected']:.1f} hours")  # 8.7 hours
print(f"68% probability: {result['range_68%']}")    # ≈ (6.7, 10.7)
print(f"95% probability: {result['range_95%']}")    # ≈ (4.7, 12.7)
```
Why multiply by 4? The PERT formula approximates a beta distribution, giving the most likely (modal) estimate four times the weight of each extreme.
Planning Poker: The Power of Collective Intelligence
An essential tool for Agile teams.
Process
1. Each team member privately picks a card (1, 2, 3, 5, 8, 13, 21...)
2. Everyone reveals at the same time
3. If the estimates differ widely, the highest and lowest estimators explain their reasoning (see the sketch below)
4. Repeat until the team reaches consensus
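A minimal sketch of the spread check in step 3, assuming votes arrive as a dict of name → story points (the helper and the 2x threshold are illustrative, not part of the official technique):

```python
# Hypothetical helper: decide whether a Planning Poker round needs discussion.
def needs_discussion(votes, spread_ratio=2.0):
    """votes: dict of team member -> story points, e.g. {"Alice": 3, "Bob": 8}."""
    low, high = min(votes.values()), max(votes.values())
    # Re-discuss when the largest estimate is more than `spread_ratio` times the smallest.
    return high / low > spread_ratio


round_1 = {"Alice": 3, "Bob": 8, "Carol": 5}
if needs_discussion(round_1):
    print("Large spread: highest and lowest estimators explain their reasoning")
```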
Why Use Fibonacci
```python
fibonacci = [1, 2, 3, 5, 8, 13, 21, 34]

# Larger tasks carry larger uncertainty:
# the difference between 1 and 2 is clear,
# but the difference between 21 and 34 is ambiguous.
```
Psychological Effect: Wider intervals for larger numbers prevent excessive precision.
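As a small illustration (this helper is hypothetical, not from any Agile toolkit), a raw estimate can be snapped to the nearest card so that a story point never carries more precision than the scale allows:

```python
# Hypothetical helper: snap a raw estimate to the Fibonacci story-point scale.
FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21, 34]


def to_story_points(raw_estimate):
    """Return the scale value closest to the raw estimate."""
    return min(FIBONACCI_SCALE, key=lambda point: abs(point - raw_estimate))


print(to_story_points(6))   # 5
print(to_story_points(17))  # 13 (the tie with 21 resolves to the earlier card)
```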
Monte Carlo Simulation
Calculate completion probabilities by running 1,000 simulations of the project.
```python
import random

import numpy as np


def monte_carlo_simulation(tasks, iterations=1000):
    """Simulate total project completion time."""
    results = []
    for _ in range(iterations):
        total_time = 0
        for task in tasks:
            # Draw a plausible actual duration for each task
            # from a triangular distribution (min, max, most likely).
            actual = random.triangular(task['min'], task['max'], task['likely'])
            total_time += actual
        results.append(total_time)

    return {
        "mean": np.mean(results),
        "p50": np.percentile(results, 50),  # Median
        "p90": np.percentile(results, 90),  # Done with 90% probability
        "p95": np.percentile(results, 95),  # Done with 95% probability
    }


# Project tasks (durations in days)
tasks = [
    {"name": "Design", "min": 2, "likely": 3, "max": 5},
    {"name": "Development", "min": 5, "likely": 8, "max": 15},
    {"name": "Testing", "min": 2, "likely": 3, "max": 6},
]

result = monte_carlo_simulation(tasks)
print(f"50% probability: complete within {result['p50']:.1f} days")
print(f"90% probability: complete within {result['p90']:.1f} days")
```
Velocity-Based Estimation
Estimation using past data.
```python
import numpy as np


class VelocityEstimator:
    def __init__(self, past_sprints):
        self.velocities = past_sprints

    def estimate(self, total_points):
        avg_velocity = np.mean(self.velocities)
        std_velocity = np.std(self.velocities)
        sprints_needed = total_points / avg_velocity

        return {
            "expected_sprints": sprints_needed,
            "optimistic": total_points / (avg_velocity + std_velocity),
            "pessimistic": total_points / (avg_velocity - std_velocity),
        }


# Velocities from the past 10 sprints
past_velocities = [23, 28, 25, 30, 22, 27, 26, 24, 29, 26]

estimator = VelocityEstimator(past_velocities)
result = estimator.estimate(total_points=150)

print(f"Expected: {result['expected_sprints']:.1f} sprints")
print(f"Range: {result['optimistic']:.1f} ~ {result['pessimistic']:.1f} sprints")
```
Wideband Delphi
Expert consensus technique.
```
Round 1: Submit estimates anonymously
├── Developer A: 10 days
├── Developer B: 5 days
├── Developer C: 15 days
└── Large variance

Round 2: Share reasons and re-estimate
├── A: "Considering DB migration..."
├── B: "Oh, I missed that"
├── C: "Is test automation included?"
└── New estimates: 8 days, 9 days, 10 days

Round 3: Consensus
└── Final: 9 days
```
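The core loop is easy to sketch in code; the 25% stopping rule below is an arbitrary assumption for illustration, and the estimates are the ones from the rounds above:

```python
import statistics


def needs_another_round(estimates, tolerance=0.25):
    """Hypothetical stopping rule: the spread should fall within ~25% of the median."""
    spread = max(estimates) - min(estimates)
    return spread > tolerance * statistics.median(estimates)


print(needs_another_round([10, 5, 15]))  # True  -> share reasoning and re-estimate
print(needs_another_round([8, 9, 10]))   # False -> settle near the median: 9 days
```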
Practical Application Guide
Small Projects (1-2 weeks)
- Recommended: Planning Poker
- Reason: Fast and easy team consensus
Medium Projects (1-3 months)
- Recommended: PERT + Velocity
- Reason: Appropriate accuracy and practicality
Large Projects (3+ months)
- Recommended: Monte Carlo + Wideband Delphi
- Reason: High accuracy needed
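If you want this guide in code, a lookup that mirrors the ranges above is enough (the week boundaries are simply those ranges expressed in weeks; adjust them to your context):

```python
def recommend_method(duration_weeks):
    """Map project length to the estimation methods recommended above."""
    if duration_weeks <= 2:   # Small: 1-2 weeks
        return ["Planning Poker"]
    if duration_weeks <= 12:  # Medium: roughly 1-3 months
        return ["PERT", "Velocity"]
    return ["Monte Carlo", "Wideband Delphi"]  # Large: 3+ months


print(recommend_method(1))   # ['Planning Poker']
print(recommend_method(8))   # ['PERT', 'Velocity']
print(recommend_method(26))  # ['Monte Carlo', 'Wideband Delphi']
```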
Estimation Accuracy Improvement Tips
1. Reference Class Forecasting
```python
import numpy as np

# Find similar past projects
similar_projects = [
    {"name": "Login System A", "estimated": 20, "actual": 35},
    {"name": "Login System B", "estimated": 15, "actual": 28},
    {"name": "Login System C", "estimated": 25, "actual": 40},
]

adjustment_factor = np.mean(
    [p["actual"] / p["estimated"] for p in similar_projects]
)  # ≈ 1.74x

raw_estimate = 20  # example: your initial, unadjusted estimate
new_estimate = raw_estimate * adjustment_factor
```
2. Estimation Retrospective
## Sprint Estimation Retrospective
| Task | Estimated | Actual | Difference | Cause |
| -------- | ---- | ---- | ---- | ------------- |
| API Dev | 8h | 12h | +4h | Auth Complexity |
| UI Impl | 6h | 5h | -1h | Template Reuse |
| Testing | 4h | 8h | +4h | Edge Cases |
**Lesson**: Auth and testing work needs a 1.5x buffer
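To turn a retrospective like this into a reusable number, compute the actual/estimated ratio per task; the data below is copied from the table, and the dict structure is just one possible representation:

```python
retrospective = [
    {"task": "API Dev", "estimated": 8, "actual": 12, "cause": "Auth Complexity"},
    {"task": "UI Impl", "estimated": 6, "actual": 5, "cause": "Template Reuse"},
    {"task": "Testing", "estimated": 4, "actual": 8, "cause": "Edge Cases"},
]

for item in retrospective:
    ratio = item["actual"] / item["estimated"]
    print(f"{item['task']}: x{ratio:.2f} ({item['cause']})")
# API Dev: x1.50, UI Impl: x0.83, Testing: x2.00
# -> auth- and test-heavy work earns its 1.5x (or larger) buffer
```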
Conclusion: Estimation is Science
The era of "gut feeling" estimation is over.
Calculate uncertainty with PERT,
Utilize collective intelligence with Planning Poker,
Simulate probability with Monte Carlo.
Remember:
- Estimate as a range, not a single number
- Always use past data
- Team-wide participation in estimation
- Continuous estimation improvement
Accurate estimation builds trust,
Trust creates successful projects.
Need scientific estimation and project management? Check out Plexo.
