
Statistical and Topological Properties of Sliced Probability Divergences
The idea of slicing divergences has proven successful when comparing two probability measures in various machine learning applications, including generative modeling; it consists of computing the expected value of a 'base divergence' between one-dimensional random projections of the two measures. However, the computational and statistical consequences of this technique have not yet been well established. In this paper, we aim to bridge this gap and derive several properties of sliced divergences. First, we show that slicing preserves the metric axioms and the weak continuity of the divergence, implying that the sliced divergence shares similar topological properties. We then sharpen these results in the case where the base divergence belongs to the class of integral probability metrics. Moreover, we establish that, under mild conditions, the sample complexity of the sliced divergence does not depend on the dimension, even when the base divergence suffers from the curse of dimensionality. We finally apply our general results to the Wasserstein distance and Sinkhorn divergences, and illustrate our theory on both synthetic and real-data experiments.
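To make the slicing construction concrete, the following is a minimal NumPy sketch of the sliced 1-Wasserstein distance between two equal-size samples: draw random directions on the unit sphere, project both samples onto each direction, and average the one-dimensional Wasserstein distances, which for empirical measures of equal size reduce to comparing sorted projections. The function name, signature, and default number of projections are ours for illustration, not from the paper.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, rng=None):
    """Monte Carlo estimate of the sliced 1-Wasserstein distance.

    X, Y: arrays of shape (n, d) holding equal-size samples in R^d.
    Averages the 1-D W1 distance over random projection directions.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Directions uniform on the unit sphere in R^d (normalized Gaussians).
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # One-dimensional projections of each sample: shape (n, n_projections).
    X_proj = X @ theta.T
    Y_proj = Y @ theta.T
    # 1-D W1 between equal-size empirical measures is the mean absolute
    # difference of order statistics, so sorting suffices.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    return float(np.mean(np.abs(X_sorted - Y_sorted)))
```

Note that the cost is dominated by the projections and the per-direction sorting, independent of the ambient dimension beyond the matrix product, which is one computational motivation for slicing.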