
Equivalences between learning of data and probability distributions, and their applications
Algorithmic learning theory traditionally studies the learnability of effective infinite binary sequences (reals), while recent work by [Vitanyi and Chater, 2017] and [Bienvenu et al., 2014] has adapted this framework to the study of learnability of effective probability distributions from random data. We prove that for certain families of probability measures that are parametrized by reals, learnability of a subclass of probability measures is equivalent to learnability of the class of the corresponding real parameters. This equivalence allows us to transfer results from classical algorithmic learning theory to the learning theory of probability measures. We present a number of such applications, obtaining many new results on EX and BC learnability of classes of measures and thus drawing parallels between the two learning theories.
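For context, the EX (explanatory) and BC (behaviorally correct) criteria mentioned in the abstract can be stated in their standard forms from classical algorithmic learning theory (following Gold and Case–Smith); the sketch below is background not taken from the abstract, with phi_e denoting the e-th partial computable function and X|n the length-n initial segment of X:

```latex
% A learner M EX-learns a computable sequence X if its conjectures
% converge syntactically to a single correct program:
M \text{ EX-learns } X \iff
  \exists e\, \exists n_0\, \forall n \ge n_0\;
  \bigl( M(X \upharpoonright n) = e \;\wedge\; \varphi_e = X \bigr)

% M BC-learns X if almost all conjectures are correct programs,
% though the programs themselves need not ever stabilize:
M \text{ BC-learns } X \iff
  \exists n_0\, \forall n \ge n_0\;
  \varphi_{M(X \upharpoonright n)} = X
```

Every EX-learnable class is BC-learnable (a stabilized correct index is in particular almost-always correct), but not conversely, which is why the abstract treats the two criteria separately.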