A General Bias‑Variance Decomposition for Proper Scoring Rules – Finally!
Or: Why your ensemble works, how to build confidence regions in logit space, and what Bregman information really does for uncertainty estimation.
If you’ve ever trained a classifier, you’ve heard the mantra:
Bias‑variance trade‑off.
But look closely – the classical decomposition works for squared error only.
What about log‑loss? Brier score? CRPS?
For years, we had no general, closed‑form bias‑variance decomposition for strictly proper scoring rules.
Until now.
In their AISTATS 2023 paper, Gruber & Buettner finally fill this gap.
And they give us practical tools:
- Explain ensembles via a law of total Bregman variance.
- Build confidence regions directly in logit space.
- Detect out‑of‑distribution inputs better than raw softmax confidence.
Let’s dive in.
The problem: Uncertainty under domain drift
Your model says “cat” with 0.99 probability – but the image is heavily corrupted.
You know from Ovadia et al. (2019) that softmax confidence is not reliable under dataset shift.
What we need is a variance‑based uncertainty measure that works for any proper loss.
And we need a theory that explains why – for example – ensembling always helps.
Missing piece: A general bias‑variance decomposition for strictly proper scoring rules.
Background: Bregman divergences & proper scoring rules
Bregman divergence
Given a differentiable convex function $\phi$, the Bregman divergence is

$$d_\phi(a, b) = \phi(a) - \phi(b) - \langle \nabla\phi(b),\, a - b \rangle.$$

Example: $\phi(x) = \lVert x \rVert^2$ gives $d_\phi(a, b) = \lVert a - b \rVert^2$ (squared error).
Example: $\phi(p) = \sum_i p_i \log p_i$ (the negative entropy) gives the KL divergence $d_\phi(p, q) = \mathrm{KL}(p \,\|\, q)$.
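Both examples can be checked numerically with a generic Bregman-divergence helper (a minimal sketch; the function names are mine, not from any library):

```python
import numpy as np

def bregman(phi, grad_phi, a, b):
    """Bregman divergence d_phi(a, b) = phi(a) - phi(b) - <grad_phi(b), a - b>."""
    return phi(a) - phi(b) - np.dot(grad_phi(b), a - b)

# phi(x) = ||x||^2  ->  d_phi(a, b) = ||a - b||^2 (squared error)
sq, sq_grad = lambda x: np.dot(x, x), lambda x: 2 * x
a, b = np.array([1.0, 2.0]), np.array([0.0, 0.5])
d_sq = bregman(sq, sq_grad, a, b)  # ||a - b||^2 = 1.0 + 2.25 = 3.25

# phi(p) = sum_i p_i log p_i (negative entropy)  ->  d_phi(p, q) = KL(p || q)
negent = lambda p: np.sum(p * np.log(p))
negent_grad = lambda p: np.log(p) + 1.0
p, q = np.array([0.7, 0.3]), np.array([0.5, 0.5])
d_kl = bregman(negent, negent_grad, p, q)  # equals KL(p || q)
```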
Strictly proper scoring rule
A scoring rule $S(P, y)$ is strictly proper if the expected score $\mathbb{E}_{Y \sim Q}\big[S(P, Y)\big]$ is maximised only when the prediction $P$ equals the true data distribution $Q$.
Common examples:
- Log score: $S(P, y) = \log P(y)$
- Brier score: $S(P, y) = -\sum_k \big(P(k) - \mathbb{1}[y = k]\big)^2$
- CRPS (continuous ranked probability score)
Every strictly proper scoring rule corresponds to a Bregman divergence generated by the negative entropy (Ovcharov, 2018).
The main result: A general bias‑variance decomposition
Let $\hat{P}$ be a random prediction (e.g., from models trained on different training sets), and $Y$ the true outcome. Let $S$ be a strictly proper scoring rule with negative entropy $\phi$, and let $\phi^*$ be its convex conjugate.
Theorem (Gruber & Buettner, 2023). The expected loss decomposes as

$$\mathbb{E}\big[\ell(Y, \hat{P})\big] = \underbrace{\mathbb{E}\big[\ell(Y, P^*)\big]}_{\text{noise}} + \underbrace{d_{\phi^*}\!\big(\mathbb{E}[\nabla\phi(\hat{P})],\, \nabla\phi(P^*)\big)}_{\text{bias}} + \underbrace{\mathbb{I}_{\phi^*}\big[\nabla\phi(\hat{P})\big]}_{\text{variance}},$$

where $P^*$ is the true conditional distribution of $Y$.

What does each term mean?
- $\mathbb{I}_{\phi^*}\big[\nabla\phi(\hat{P})\big]$ – the Bregman information (generalised variance). For $\phi = \lVert\cdot\rVert^2$, it reduces to the classical variance $\operatorname{Var}(\hat{P})$.
- $d_{\phi^*}\!\big(\mathbb{E}[\nabla\phi(\hat{P})],\, \nabla\phi(P^*)\big)$ – a Bregman divergence in the dual space – that's the (generalised) squared bias.
So the classical MSE decomposition

$$\mathbb{E}\big[(Y - \hat{y})^2\big] = \underbrace{\operatorname{Var}(Y)}_{\text{noise}} + \underbrace{\big(\mathbb{E}[\hat{y}] - \mathbb{E}[Y]\big)^2}_{\text{bias}^2} + \underbrace{\operatorname{Var}(\hat{y})}_{\text{variance}}$$

is a special case of this theorem.
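The special case is easy to verify numerically: for a fixed target (so the noise term drops out), mean squared error splits exactly into squared bias plus variance. A toy check:

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = 2.0                                   # fixed target, so the noise term is zero
preds = y_true + 0.5 + rng.normal(scale=0.3, size=100_000)  # biased (+0.5), noisy model

mse = np.mean((preds - y_true) ** 2)
bias_sq = (np.mean(preds) - y_true) ** 2
variance = np.var(preds)
# algebraic identity: mse == bias_sq + variance (up to float rounding)
```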
Bregman information – the “variance” term
Definition (Banerjee et al., 2005):

$$\mathbb{I}_\phi[X] = \mathbb{E}\big[d_\phi(X, \mathbb{E}[X])\big] = \mathbb{E}[\phi(X)] - \phi(\mathbb{E}[X]).$$

It measures the spread around the mean in the sense of a Bregman divergence.
Figure 2 in the paper visualises $\mathbb{I}_\phi$ for the softplus function $\phi(t) = \log(1 + e^t)$ – this controls the variance for binary classification in logit space.
> [!NOTE]
> When $\phi$ is the squared function, $\mathbb{I}_\phi$ is the classical variance.
> When $\phi$ is the log-sum-exp function (LSE), $\mathbb{I}_\phi$ is the variance in logit space.
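In code, the Jensen-gap form $\mathbb{E}[\phi(X)] - \phi(\mathbb{E}[X])$ is the easiest version to estimate from samples (a minimal sketch; the helper name is mine):

```python
import numpy as np

def bregman_information(phi, samples):
    """I_phi[X] = E[phi(X)] - phi(E[X]) -- the Jensen gap of phi at X."""
    samples = np.asarray(samples, dtype=float)
    return np.mean([phi(x) for x in samples]) - phi(samples.mean(axis=0))

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)

# For phi(t) = t^2 the Bregman information is exactly the classical variance:
bi_sq = bregman_information(lambda t: t ** 2, x)  # == np.var(x) up to float error
```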
Special case: Exponential families
For an exponential family $p_\theta(y) = h(y)\,\exp\big(\langle \theta, T(y) \rangle - A(\theta)\big)$, the decomposition becomes:
- $\mathbb{I}_A[\hat{\theta}]$ – variance in the natural parameter space (classical variance, weighted by the curvature of the log-partition function $A$).
- Perfectly recovers the classical MSE case when the family is Gaussian with fixed variance, i.e. $A(\theta) = \theta^2/2$.
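For intuition, here is a toy Bernoulli example in natural parameters (my own construction, not from the paper): the log-partition is the softplus $A(\theta) = \log(1 + e^\theta)$, so its Bregman information is a curvature-weighted variance, and it is bounded by $\operatorname{Var}(\hat\theta)/8$ because $A'' \le 1/4$.

```python
import numpy as np

def softplus(t):
    """Log-partition of the Bernoulli in natural parameters: A(theta) = log(1 + e^theta)."""
    return np.log1p(np.exp(t))

def bregman_information(phi, samples):
    samples = np.asarray(samples, dtype=float)
    return np.mean([phi(x) for x in samples]) - phi(samples.mean(axis=0))

rng = np.random.default_rng(2)
thetas = rng.normal(loc=0.0, scale=1.0, size=5_000)  # e.g. logits from many model fits

bi = bregman_information(softplus, thetas)
# Curvature-weighted variance: 0 < I_A <= Var(theta)/8, since A''(t) = sigmoid'(t) <= 1/4
```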
Special case: Classification (logit space) – this is huge
Let $\hat{z} \in \mathbb{R}^K$ be the logits (before softmax), and $\hat{p} = \operatorname{softmax}(\hat{z})$ the softmax probabilities. Use the negative log-likelihood (log loss) as the scoring rule.

Corollary:

$$\mathbb{E}[\text{log loss}] = \text{noise} + \text{bias} + \mathbb{I}_{\mathrm{LSE}}[\hat{z}], \qquad \mathbb{I}_{\mathrm{LSE}}[\hat{z}] = \mathbb{E}\big[\mathrm{LSE}(\hat{z})\big] - \mathrm{LSE}\big(\mathbb{E}[\hat{z}]\big),$$

where $\mathrm{LSE}(z) = \log \sum_k e^{z_k}$ (LogSumExp).
Why is this surprising?
- The variance term is computed directly on the logits, without applying softmax.
- No normalisation to probabilities needed – numerically stable and conceptually clean.
This is perfect for deep neural networks:
To estimate predictive uncertainty, just compute the Bregman information of the logits over an ensemble or multiple forward passes.
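A minimal NumPy sketch of that recipe (helper names are mine; see the authors' repo for their implementation):

```python
import numpy as np

def lse(z):
    """Numerically stable LogSumExp over the last axis."""
    m = np.max(z, axis=-1, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(z - m), axis=-1, keepdims=True)), axis=-1)

def bregman_information_logits(logits):
    """I_LSE[z] = E[LSE(z)] - LSE(E[z]) for logits of shape (members, classes)."""
    logits = np.asarray(logits, dtype=float)
    return np.mean(lse(logits)) - lse(logits.mean(axis=0))

# toy logits for ONE input from five ensemble members (or MC-dropout passes)
agree = np.array([[2.0, 0.1, -1.0]] * 5)          # members agree
disagree = np.array([[2.0, 0.1, -1.0],
                     [-1.0, 2.0, 0.1],
                     [0.1, -1.0, 2.0],
                     [2.0, -1.0, 0.1],
                     [-1.0, 0.1, 2.0]])           # members disagree

bi_low = bregman_information_logits(agree)        # 0: no disagreement, no variance
bi_high = bregman_information_logits(disagree)    # > 0: disagreement shows up as BI
```

Note that the softmax is never applied: everything happens on the raw logits, which is exactly the numerical-stability point above.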
Applications
1. Why ensembles reduce uncertainty
The law of total Bregman information: if the prediction $\hat{z}$ depends on the training data $D$ and the random initialisation $\theta$,

$$\mathbb{I}_\phi[\hat{z}] = \underbrace{\mathbb{E}_D\big[\mathbb{I}_\phi[\hat{z} \mid D]\big]}_{\text{due to } \theta} + \underbrace{\mathbb{I}_\phi\big[\mathbb{E}_\theta[\hat{z} \mid D]\big]}_{\text{due to } D}.$$

For an ensemble that averages over $M$ members with independent initialisations $\theta_1, \dots, \theta_M$, the first term shrinks: as $M \to \infty$, the variance due to $\theta$ disappears.
The expected score strictly improves.
This is the first general theoretical justification for why ensembles are almost always beneficial.
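For equal-weight groups, the law of total Bregman information can be checked exactly on toy logits (a sketch; the grouping into training sets $D$ and seeds $\theta$ is synthetic):

```python
import numpy as np

def lse(z):
    m = np.max(z, axis=-1, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(z - m), axis=-1, keepdims=True)), axis=-1)

def bi(phi, samples):
    """I_phi[X] = E[phi(X)] - phi(E[X])."""
    return np.mean(phi(samples)) - phi(np.mean(samples, axis=0))

rng = np.random.default_rng(3)
z = rng.normal(size=(4, 6, 3))  # logits indexed by (training set D, seed theta, class)

total = bi(lse, z.reshape(-1, 3))                     # I[z], both sources of randomness
within = np.mean([bi(lse, z[d]) for d in range(4)])   # E_D[ I[z | D] ], the theta part
between = bi(lse, z.mean(axis=1))                     # I[ E_theta[z | D] ], the data part

# law of total Bregman information: total == within + between (up to float error)
```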
2. Confidence regions via Markov’s inequality
Using Markov's inequality on the Bregman divergence: for any $\delta \in (0, 1]$,

$$\Pr\Big( d_\phi\big(\hat{z}, \bar{z}\big) \ge \mathbb{I}_\phi[\hat{z}]\,/\,\delta \Big) \le \delta, \qquad \bar{z} = \mathbb{E}[\hat{z}].$$

Thus a $(1-\delta)$-confidence region is

$$C_{1-\delta} = \big\{ z : d_\phi(z, \bar{z}) \le \mathbb{I}_\phi[\hat{z}]\,/\,\delta \big\}.$$
Figures 3 and 4 in the paper:
- Binary classification – confidence intervals on the probability simplex.
- Iris dataset – convex confidence regions for three classes.
No need for normality assumptions – works with any proper score.
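A quick Monte Carlo illustration with the squared generator, so $d_\phi$ is just squared distance (toy numbers, my own construction): the Markov bound guarantees at least $1-\delta$ coverage, and in practice gives much more, since Markov is conservative.

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(loc=1.0, scale=0.5, size=20_000)  # 1-D "logit" samples

phi = lambda t: t ** 2                  # squared generator: d_phi(a, b) = (a - b)^2
z_bar = z.mean()
bi = np.mean(phi(z)) - phi(z_bar)       # Bregman information (= variance here)

delta = 0.1
radius = bi / delta                     # Markov: P(d_phi(z, z_bar) >= bi/delta) <= delta
covered = np.mean((z - z_bar) ** 2 <= radius)  # empirical coverage, should be >= 0.9
```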
3. Out‑of‑distribution detection (CIFAR‑10C / ImageNet‑C)
Setup: Train on clean images, test on corrupted versions (CIFAR‑10C).
We want to discard uncertain predictions so that the remaining predictions have high accuracy.
Result (Figure 1 in the paper):
- To reach 90% validation accuracy, using max softmax confidence you must discard ≈14% of data.
- Using Bregman information you only discard ≈7% of data.
→ Bregman information is a superior uncertainty measure under domain drift.
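The evaluation protocol behind those numbers is easy to sketch: rank inputs by an uncertainty score, discard the most uncertain fraction, and measure accuracy on the rest (toy data below; the coupling between uncertainty and errors is assumed for illustration only).

```python
import numpy as np

def accuracy_after_discard(uncertainty, correct, discard_frac):
    """Drop the most-uncertain fraction of inputs; return accuracy on the kept rest."""
    n_keep = int(round(len(correct) * (1 - discard_frac)))
    keep = np.argsort(uncertainty)[:n_keep]  # lowest uncertainty first
    return float(np.mean(correct[keep]))

rng = np.random.default_rng(5)
u = rng.uniform(size=10_000)                  # stand-in for per-input Bregman information
correct = rng.uniform(size=10_000) > u * 0.5  # higher uncertainty -> more mistakes

base = float(np.mean(correct))                      # accuracy with nothing discarded
filtered = accuracy_after_discard(u, correct, 0.2)  # accuracy after dropping 20%
```

With a good uncertainty score, `filtered` rises steeply as the discard fraction grows, which is exactly the comparison made in the paper's Figure 1.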
Limitations (real talk)
- Computational cost: Estimating Bregman information requires multiple predictions per input (ensemble, MC dropout, or multi‑epoch sampling).
- Proper scoring rules only: Doesn’t directly apply to 0‑1 loss (accuracy). But for probabilistic forecasting that’s fine – use log‑loss.
- Not Bayesian: It gives a frequentist variance measure, not a full posterior.
Future work: extend to Bayesian neural networks and large language models (uncertainty for hallucinations).
Take‑away
- First general closed‑form bias‑variance decomposition for strictly proper scoring rules.
- Bregman information emerges as the universal variance term – generalising classical variance.
- Logit‑space formulation makes it practical for deep learning.
- Demonstrated benefits: ensembling theory, confidence regions, OOD detection.
Code available: GitHub – MLO‑lab/Uncertainty_Estimates_via_BVD
If you liked this, check out my previous post on Bayesian Neural Networks under covariate shift.
And let me know: how do you estimate uncertainty in your models today?


