Actuarial science has been quantifying risk for centuries, but the underlying math is surprisingly approachable. At its core, a life expectancy estimator is a weighted scoring model. You start with a baseline value drawn from a life table, apply a series of factor adjustments derived from epidemiological research, and return an adjusted estimate. The entire pipeline fits comfortably in a client-side JavaScript module with no server dependencies.
The interesting engineering challenge is not the arithmetic itself. It is structuring the data so that factor weights remain maintainable, the model recalculates efficiently when a single input changes, and the output stays within statistically defensible bounds. This is the same class of problem you encounter when building credit scoring engines, insurance premium calculators, or any multi-variable risk assessment tool in the browser.
This article walks through the data structures, the scoring algorithm, and the implementation details you need to build one from scratch.
Actuarial Life Tables: Structure and Representation
Life tables are the foundation of any longevity estimate. The Social Security Administration publishes period life tables annually, broken down by age and sex. Each row contains an age, the probability of dying within one year at that age, the number of survivors out of 100,000 births, and the remaining life expectancy at that age.
The key column for our purposes is the remaining life expectancy value, often labeled e(x) in actuarial notation. For a 40-year-old male in the 2023 SSA table, e(x) is roughly 38.6 years, meaning the average remaining lifespan from age 40 is about 78.6 years total. The WHO Global Health Observatory publishes similar tables with country-level granularity.

In JavaScript, a flat lookup is the most practical representation. You do not need a full survival curve for a scoring model. You need fast access to e(x) given an age and sex. A Map keyed by a composite string handles this cleanly:
// Life table data derived from SSA Period Life Table (2023)
// Each entry: [age, sex] -> remaining life expectancy in years
const lifeTable = new Map();

function loadLifeTable(data) {
  // data is an array of [age, sex, ex] tuples
  for (const [age, sex, ex] of data) {
    lifeTable.set(`${age}:${sex}`, ex);
  }
}

function getBaselineExpectancy(age, sex) {
  // Exact lookup for integer ages
  if (Number.isInteger(age)) {
    const ex = lifeTable.get(`${age}:${sex}`);
    return ex !== undefined ? ex : null; // Age out of table range
  }
  // Linear interpolation for fractional ages
  const lower = lifeTable.get(`${Math.floor(age)}:${sex}`);
  const upper = lifeTable.get(`${Math.ceil(age)}:${sex}`);
  if (lower !== undefined && upper !== undefined) {
    const fraction = age - Math.floor(age);
    return lower + fraction * (upper - lower);
  }
  return null; // Age out of table range
}
A few design notes on this approach. Using a Map instead of a plain object avoids prototype chain issues and gives you O(1) lookups. The composite key pattern (age:sex) is simple but effective for two-dimensional tables. For larger datasets with more dimensions (country, cohort year), you might move to a typed array with index arithmetic for better memory performance. The simple-statistics library on GitHub provides interpolation utilities if you want to avoid rolling your own.
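As a sketch of that typed-array alternative, here is one possible index layout. The dimensions and the two-value sex encoding are assumptions for illustration, not part of the SSA data format:

```javascript
// Sketch: a flat Float64Array indexed by (age, sex) instead of a Map.
// Assumes ages 0-119 and a two-value sex encoding (0 = male, 1 = female).
const MAX_AGE = 119;
const flatTable = new Float64Array((MAX_AGE + 1) * 2);

function setEx(age, sexIndex, ex) {
  flatTable[age * 2 + sexIndex] = ex;
}

function getEx(age, sexIndex) {
  return flatTable[age * 2 + sexIndex];
}

setEx(40, 0, 38.6);
getEx(40, 0); // 38.6
```

The payoff is one contiguous allocation and no string keys, at the cost of a fixed shape that must be re-derived whenever you add a dimension.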
The SSA table covers ages 0 through 119. In practice, you will cap your input range at something reasonable, typically 18 to 100, since the scoring factors from epidemiological studies are calibrated against adult populations.
Building the Weighted Scoring Model
With the baseline in place, the next layer is the adjustment model. Epidemiological research assigns relative risk multipliers to lifestyle and demographic factors. Smoking, physical activity, BMI, alcohol consumption, chronic conditions, diet quality, and socioeconomic indicators each have published hazard ratios from large cohort studies. The Global Burden of Disease Study from the Institute for Health Metrics and Evaluation is one of the most comprehensive sources for these ratios.
The mathematical structure is straightforward. Each factor contributes an additive adjustment (in years) to the baseline. The adjustment is the product of a factor weight (derived from research) and a normalized score for the individual's input:
adjusted_expectancy = baseline + SUM(weight_i * score_i)
Where weight_i is the maximum possible adjustment in years for factor i, and score_i is a normalized value between -1 and 1 representing how the individual's input compares to the reference population.
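As a quick worked example of the formula with made-up numbers (the weights and scores here are illustrative placeholders, not the calibrated values used later):

```javascript
// Worked example: baseline 38.6 years plus two weighted factor scores
const baseline = 38.6;
const terms = [
  { weight: 8.0, score: -0.6 }, // e.g. a smoking penalty: -4.8 years
  { weight: 4.5, score: 0.5 }   // e.g. a moderate exercise benefit: +2.25 years
];
const adjusted = baseline + terms.reduce((sum, t) => sum + t.weight * t.score, 0);
// adjusted = 38.6 - 4.8 + 2.25 = 36.05
```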
Here is a scoring function that implements this pattern:
// Factor definitions: each factor has a weight (the maximum adjustment
// in years) and a scoring function that returns a value between -1 and 1.
// Harmful inputs produce negative scores, so the signed adjustment is
// simply weight * score.
const factors = [
  {
    name: 'smoking',
    weight: 8.0, // Heavy smoking (score -1.0) can reduce expectancy by up to 8 years
    score: (input) => {
      // 0 = never, 1 = former, 2 = light, 3 = heavy
      const map = { 0: 0, 1: -0.3, 2: -0.6, 3: -1.0 };
      return map[input] ?? 0;
    }
  },
  {
    name: 'exercise',
    weight: 4.5, // Regular exercise adds up to 4.5 years
    score: (input) => {
      // minutes per week of moderate activity
      return Math.min(input / 300, 1.0); // Caps benefit at 300 min/week
    }
  },
  {
    name: 'bmi',
    weight: 4.0,
    score: (input) => {
      // Optimal BMI range: 18.5-24.9 scores 0 (no adjustment)
      if (input >= 18.5 && input <= 24.9) return 0;
      if (input < 18.5) return -0.5;
      // Nonlinear penalty for obesity
      const excess = input - 24.9;
      return -Math.min((excess / 15) ** 1.3, 1.0);
    }
  },
  {
    name: 'alcohol',
    weight: 3.5,
    score: (input) => {
      // drinks per week: no penalty through 7, then a linear
      // ramp that reaches the maximum penalty at 28
      if (input <= 7) return 0;
      return -Math.min((input - 7) / 21, 1.0);
    }
  }
];
function calculateAdjustedExpectancy(age, sex, inputs) {
  const baseline = getBaselineExpectancy(age, sex);
  if (baseline === null) return null;

  let totalAdjustment = 0;
  const breakdown = [];

  for (const factor of factors) {
    const rawScore = factor.score(inputs[factor.name]);
    const adjustment = factor.weight * rawScore;
    totalAdjustment += adjustment;
    breakdown.push({
      factor: factor.name,
      score: rawScore,
      adjustment: Math.round(adjustment * 10) / 10
    });
  }

  // Clamp the total adjustment to prevent absurd results
  const maxAdjustment = baseline * 0.25; // Never adjust more than 25%
  const clampedAdjustment = Math.max(
    -maxAdjustment,
    Math.min(maxAdjustment, totalAdjustment)
  );

  return {
    baseline: Math.round(baseline * 10) / 10,
    adjustment: Math.round(clampedAdjustment * 10) / 10,
    estimate: Math.round((baseline + clampedAdjustment) * 10) / 10,
    breakdown
  };
}
Several things are worth noting in this implementation. The scoring functions use different curve shapes depending on the factor. Smoking is a discrete lookup because the epidemiological data groups smokers into categories. Exercise uses a linear ramp with a cap, reflecting the diminishing returns documented in studies like the one published in the British Journal of Sports Medicine. BMI uses a power curve for the obesity penalty, which better models the nonlinear relationship between excess weight and mortality risk described in Lancet meta-analyses.
The clamping step at the end is critical. Without it, an individual with extreme values across multiple factors could get an output that implies negative remaining years or an absurdly long lifespan. The 25% ceiling is a conservative guard rail. In production systems, you would likely derive clamping bounds from the confidence intervals of the underlying studies.
"When you're building data-driven tools that run entirely in the browser, the biggest engineering decision isn't the algorithm itself. It's how you structure the data so a non-technical team member can update factor weights six months from now without breaking the scoring pipeline." - Dennis Traina, 137Foundry
The Array.prototype.reduce method on MDN is worth reviewing if you want to refactor the loop into a functional style. For more complex models with interaction effects between factors (where smoking combined with obesity has a worse-than-additive penalty), you would extend the architecture with pairwise interaction terms, similar to how logistic regression handles feature interactions.
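A minimal sketch of such an interaction term, assuming the same score convention as above (harmful inputs score negative) and an illustrative weight that is not taken from any cited study:

```javascript
// Hypothetical pairwise interaction: smoking combined with obesity carries
// an extra penalty beyond the sum of the two individual adjustments.
// The -2.0 weight is an assumed value for illustration only.
const interactions = [
  {
    factors: ['smoking', 'bmi'],
    weight: -2.0,
    // Both scores are negative when harmful, so their product is positive
    // and the negative weight subtracts additional years
    score: (scores) => (scores.smoking ?? 0) * (scores.bmi ?? 0)
  }
];

function interactionAdjustment(rawScores) {
  let total = 0;
  for (const term of interactions) {
    total += term.weight * term.score(rawScores);
  }
  return total;
}

interactionAdjustment({ smoking: -1.0, bmi: -1.0 }); // -2
```

The interaction layer sums into the same total adjustment as the main factors, so the clamping step still bounds the combined result.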

EvvyTools' life expectancy modeling tool implements this kind of weighted actuarial model client-side, running the full calculation pipeline in the browser with no backend calls. The architecture mirrors what we have built here: a life table lookup layer, a configurable set of weighted factors, and a normalization step that keeps outputs within actuarial bounds.
Implementation Considerations
Efficient What-If Recalculation
A common UI pattern for scoring models is a what-if slider, where the user changes one factor and sees the estimate update instantly. Recalculating the entire model on every slider tick is fine for small factor sets (under 20 factors), but if your model grows or you add Monte Carlo confidence intervals, you want to cache partial sums.
The approach is straightforward. Cache the total adjustment, and when a single factor changes, subtract its old contribution and add the new one:
class ExpectancyModel {
  constructor(age, sex) {
    this.baseline = getBaselineExpectancy(age, sex);
    this.adjustments = new Map(); // factor name -> current adjustment
    this.totalAdjustment = 0;
  }

  updateFactor(factorName, newInput) {
    const factor = factors.find(f => f.name === factorName);
    if (!factor) return null;
    const oldAdj = this.adjustments.get(factorName) || 0;
    const newAdj = factor.weight * factor.score(newInput);
    this.totalAdjustment += newAdj - oldAdj;
    this.adjustments.set(factorName, newAdj);
    return this.getEstimate();
  }

  getEstimate() {
    if (this.baseline === null) return null; // age outside table range
    const maxAdj = this.baseline * 0.25;
    const clamped = Math.max(-maxAdj, Math.min(maxAdj, this.totalAdjustment));
    return Math.round((this.baseline + clamped) * 10) / 10;
  }
}
This gives you O(1) recalculation per factor change instead of O(n), which matters when you are driving real-time UI updates tied to requestAnimationFrame cycles.
Edge Cases and Data Integrity
A few practical issues to handle in production:
Extreme ages. The SSA life table goes to age 119, but the factor weights are calibrated on populations aged 20 to 85. For ages outside that range, either disable scoring and return only the baseline, or apply a dampening coefficient that reduces factor influence as age increases.
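A dampening coefficient could look like the sketch below. The taper range is an assumption; pick cutoffs that match the calibration range of your source studies:

```javascript
// Sketch: full factor influence through age 85, tapering linearly
// to zero influence by age 100
function ageDampening(age) {
  if (age <= 85) return 1.0;
  if (age >= 100) return 0.0;
  return 1 - (age - 85) / 15;
}

ageDampening(92.5); // 0.5

// Multiply each factor adjustment by the coefficient before summing:
// totalAdjustment += ageDampening(age) * factor.weight * rawScore;
```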
Missing inputs. Default missing factors to a neutral score (0.0) rather than excluding them from the sum. This keeps the model deterministic. If you skip a factor, the baseline shifts depending on which factors are present, which makes results inconsistent across users who fill out different fields.
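One way to enforce that invariant is a wrapper that catches missing or invalid values before they reach the scoring function (the helper name here is illustrative):

```javascript
// Sketch: wrap a scoring function so missing or non-numeric input
// yields a neutral score of 0 instead of NaN
function neutralOnMissing(scoreFn) {
  return (input) => {
    if (input === undefined || input === null || Number.isNaN(input)) {
      return 0; // neutral: unanswered fields contribute no adjustment
    }
    return scoreFn(input);
  };
}

const exerciseScore = neutralOnMissing((minutes) => Math.min(minutes / 300, 1.0));
exerciseScore(undefined); // 0
exerciseScore(150);       // 0.5
```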
Confidence intervals. A point estimate is useful, but a range is more honest. If your factor weights come with standard errors from the source studies, you can propagate uncertainty through the sum using variance addition. The total variance of the adjustment equals the sum of individual variances (assuming factor independence), and you can present a 95% interval as the estimate plus or minus 1.96 standard deviations.
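Under the independence assumption, the propagation is a few lines. The standard errors below are placeholders, not published values:

```javascript
// Sketch: combine per-factor standard errors (in years) into a 95% interval
// via variance addition, assuming the factors are independent
function confidenceInterval(estimate, standardErrors) {
  const totalVariance = standardErrors.reduce((sum, se) => sum + se * se, 0);
  const halfWidth = 1.96 * Math.sqrt(totalVariance);
  return {
    low: Math.round((estimate - halfWidth) * 10) / 10,
    high: Math.round((estimate + halfWidth) * 10) / 10
  };
}

confidenceInterval(80.3, [1.5, 2.0]); // { low: 75.4, high: 85.2 }
```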
Normalization drift. If you update factor weights over time as new research is published, existing saved results become stale. Version your factor configurations and store the version number alongside any persisted estimates. The Semantic Versioning convention works well for this.
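A minimal versioning sketch follows. The staleness policy, invalidating saved results on a major-version bump, is one possible choice rather than a fixed rule:

```javascript
// Sketch: store the factor-config version alongside persisted estimates,
// and treat a major-version change as invalidating saved results
const factorConfig = {
  version: '2.1.0',
  // factors: [...] as defined earlier
};

function isStale(savedVersion, currentVersion) {
  const [savedMajor] = savedVersion.split('.');
  const [currentMajor] = currentVersion.split('.');
  return savedMajor !== currentMajor;
}

isStale('1.4.2', factorConfig.version); // true
isStale('2.0.0', factorConfig.version); // false
```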
Resources and Further Reading
- SSA Actuarial Life Tables - The source data for U.S. period life tables by age and sex
- MDN: JavaScript Map Reference - Complete API documentation for the Map object used in the lookup layer
- simple-statistics on GitHub - A JavaScript library for descriptive statistics, regression, and distribution functions
- D3.js Documentation - If you want to visualize life expectancy curves or factor sensitivity, D3 is the standard for data-driven DOM manipulation
- Wikipedia: Actuarial Science - Background on the mathematical foundations of risk modeling and life contingencies
- Stack Overflow: Weighted Scoring Algorithms - Discussion thread on implementing weighted averages with practical examples
- How Daily Habits Add or Subtract Years From Your Life - A non-technical companion piece that walks through the epidemiological research behind each lifestyle factor and its weight in the model
The core technique described here, a baseline lookup combined with weighted additive adjustments, generalizes well beyond longevity modeling. Credit risk scores, insurance underwriting engines, and medical triage systems all use the same structural pattern. Once you have a clean separation between the data layer (life tables, factor weights) and the computation layer (scoring, clamping, confidence intervals), you can swap in different datasets and reuse the same engine. The JavaScript implementation keeps everything transparent and inspectable in the browser's dev tools, which makes debugging factor interactions far easier than tracing through a server-side black box.