F-divergence


In probability theory, an f-divergence is a certain type of function D_f(P\| Q) that measures the difference between two probability distributions P and Q. Many common divergences, such as the KL-divergence, Hellinger distance, and total variation distance, are special cases of f-divergence.

History

These divergences were introduced by Alfréd Rényi in the same paper where he introduced the well-known Rényi entropy. He proved that these divergences decrease in Markov processes. f-divergences were studied further independently by Csiszár, Morimoto, and Ali & Silvey, and are sometimes known as Csiszár f-divergences, Csiszár–Morimoto divergences, or Ali–Silvey distances.

Definition

Non-singular case

Let P and Q be two probability distributions over a space \Omega, such that P\ll Q, that is, P is absolutely continuous with respect to Q. Then, for a convex function f: [0, +\infty) \to (-\infty, +\infty] such that f(x) is finite for all x > 0, f(1)=0, and f(0) = \lim_{t\to 0^+} f(t) (which could be infinite), the f-divergence of P from Q is defined as

D_f(P\| Q) = \int_\Omega f\left(\frac{dP}{dQ}\right)\, dQ.

We call f the generator of D_f. In concrete applications, there is usually a reference distribution \mu on \Omega (for example, when \Omega = \R^n, the reference distribution is the Lebesgue measure), such that P, Q \ll \mu. We can then use the Radon–Nikodym theorem to take their probability densities p and q, giving

D_f(P\| Q) = \int_\Omega f\left(\frac{p(x)}{q(x)}\right) q(x)\, d\mu(x).

When there is no such reference distribution ready at hand, we can simply define \mu = P+Q, and proceed as above. This is a useful technique in more abstract proofs.
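As a concrete illustration, the density form of the definition can be evaluated directly for discrete distributions, taking the counting measure as the reference \mu. The sketch below (hypothetical helper names, not from the source) computes D_f for a user-supplied generator:

```python
import math

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x)/q(x)), with counting measure as mu.

    Assumes P << Q, i.e. q(x) > 0 wherever p(x) > 0.
    """
    return sum(qx * f(px / qx) for px, qx in zip(p, q) if qx > 0)

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

# Two classical generators; note f(1) = 0 holds for both.
kl = f_divergence(p, q, lambda t: t * math.log(t))   # KL-divergence
tv = f_divergence(p, q, lambda t: 0.5 * abs(t - 1))  # total variation distance
```

Because f(1) = 0, any such D_f vanishes when P = Q, and convexity of f makes it nonnegative in general.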

Extension to singular measures

The above definition can be extended to cases where P\ll Q is no longer satisfied (Definition 7.1 of ). Since f is convex and f(1) = 0, the function x \mapsto \frac{f(x)}{x-1} must be nondecreasing, so there exists f'(\infty) := \lim_{x\to\infty} f(x)/x, taking value in (-\infty, +\infty]. Since for any p(x)>0 we have \lim_{q(x)\to 0^+} q(x) f\left(\frac{p(x)}{q(x)}\right) = p(x) f'(\infty), we can extend the f-divergence to the case P\not\ll Q by

D_f(P\| Q) = \int_{q > 0} q(x) f\left(\frac{p(x)}{q(x)}\right) d\mu(x) + f'(\infty)\, P(q = 0).
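In the discrete case the extended definition can be sketched directly; the helper below (hypothetical names, an illustration rather than library code) adds the f'(\infty)\, P(q=0) term for points where Q vanishes:

```python
def f_divergence_ext(p, q, f, f_prime_inf):
    """Extended D_f(P || Q) for P not necessarily absolutely continuous w.r.t. Q.

    Points with q(x) > 0 contribute q(x) * f(p(x)/q(x));
    points with q(x) = 0 < p(x) contribute p(x) * f'(inf).
    """
    total = 0.0
    for px, qx in zip(p, q):
        if qx > 0:
            total += qx * f(px / qx)
        elif px > 0:
            total += px * f_prime_inf
    return total

# Total variation: f(t) = |t - 1| / 2, so f'(inf) = 1/2 (and f(0) = 1/2).
p = [0.5, 0.5, 0.0]
q = [0.5, 0.0, 0.5]
tv = f_divergence_ext(p, q, lambda t: 0.5 * abs(t - 1), 0.5)
```

For the total variation generator the extension stays finite even on disjoint supports; for KL (f(t) = t ln t), f'(\infty) = +\infty, so the extended divergence is infinite whenever P puts mass where Q does not.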

Properties

Basic relations between f-divergences

Basic properties of f-divergences

In particular, the monotonicity implies that if a Markov process has a positive equilibrium probability distribution P^*, then D_f(P(t)\| P^*) is a monotonic (non-increasing) function of time, where the probability distribution P(t) is a solution of the Kolmogorov forward equations (or Master equation), used to describe the time evolution of the probability distribution in the Markov process. This means that all f-divergences D_f(P(t)\| P^*) are Lyapunov functions of the Kolmogorov forward equations. The converse statement is also true: if H(P) is a Lyapunov function for all Markov chains with positive equilibrium P^* and is of the trace form (H(P) = \sum_i h(P_i, P^*_i)), then H(P) = D_f(P\| P^*) for some convex function f. Bregman divergences, for example, do not in general have this property and can increase in Markov processes.
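This monotonicity is easy to observe numerically. The sketch below (a hypothetical two-state chain, discrete time rather than the Kolmogorov forward equations) iterates a Markov kernel with positive stationary distribution P^* and tracks D_KL(P(t)\| P^*), which should never increase:

```python
import math

def kl(p, q):
    """KL-divergence between discrete distributions p and q (q > 0)."""
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

# Row-stochastic transition matrix with stationary distribution (2/3, 1/3).
T = [[0.9, 0.1],
     [0.2, 0.8]]
p_star = [2 / 3, 1 / 3]

p = [0.1, 0.9]
trajectory = []
for _ in range(10):
    trajectory.append(kl(p, p_star))
    p = [sum(p[i] * T[i][j] for i in range(2)) for j in range(2)]
```

Since p and p_star both pass through the same kernel and p_star is a fixed point, the data-processing inequality forces the recorded values to be non-increasing.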

Analytic properties

The f-divergences can be expressed using Taylor series and rewritten as a weighted sum of chi-type distances.

Variational representations

Let f^* be the convex conjugate of f. Let \mathrm{effdom}(f^*) be the effective domain of f^*, that is, \mathrm{effdom}(f^*) = \{y : f^*(y) < \infty\}. Then we have two variational representations of D_f, which we describe below.

Basic variational representation

Under the above setup,

D_f(P\| Q) = \sup_{g} \; \mathbb{E}_P[g(X)] - \mathbb{E}_Q[f^*(g(X))],

where the supremum is taken over measurable functions g: \Omega \to \mathrm{effdom}(f^*). This is Theorem 7.24 in .
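For instance, with the \chi^2-generator f(t) = (t-1)^2 and conjugate f^*(y) = y^2/4 + y, every test function g yields a lower bound on the divergence, and g(x) = 2(p(x)/q(x) - 1) attains it. A sketch with hypothetical helper names:

```python
def chi_squared(p, q):
    """chi^2(P || Q) = sum_x (p(x) - q(x))^2 / q(x)."""
    return sum((px - qx) ** 2 / qx for px, qx in zip(p, q))

def variational_bound(p, q, g):
    """E_P[g] - E_Q[f*(g)] with f*(y) = y^2/4 + y (chi-squared case)."""
    e_p = sum(px * gx for px, gx in zip(p, g))
    e_q = sum(qx * (gx ** 2 / 4 + gx) for qx, gx in zip(q, g))
    return e_p - e_q

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

g_subopt = [1.0, 0.0, -1.0]                          # an arbitrary test function
g_opt = [2 * (px / qx - 1) for px, qx in zip(p, q)]  # optimal choice f'(p/q)
```

Any suboptimal g still gives a valid lower bound, which is why such representations are useful for estimating divergences from samples.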

Example applications

Using this theorem on total variation distance, with generator f(x) = \frac{1}{2}|x-1|, its convex conjugate is f^*(x^*) = x^* on [-1/2, 1/2], and we obtain

TV(P\| Q) = \sup_{|g|\le 1/2} \mathbb{E}_P[g(X)] - \mathbb{E}_Q[g(X)].

For chi-squared divergence, defined by f(x) = (x-1)^2, with f^*(y) = y^2/4 + y, we obtain

\chi^2(P\| Q) = \sup_g \mathbb{E}_P[g(X)] - \mathbb{E}_Q[g(X)^2/4 + g(X)].

Since the variation term is not affine-invariant in g, even though the domain over which g varies is affine-invariant, we can use up the affine-invariance to obtain a leaner expression. Replacing g by a g + b and taking the maximum over a, b \in \R, we obtain

\chi^2(P\| Q) = \sup_g \frac{\left(\mathbb{E}_P[g(X)] - \mathbb{E}_Q[g(X)]\right)^2}{\mathrm{Var}_Q[g(X)]},

which is just a few steps away from the Hammersley–Chapman–Robbins bound and the Cramér–Rao bound (Theorem 29.1 and its corollary in ).

For \alpha-divergence with \alpha \in (-\infty, 0) \cup (0, 1), we have

f_\alpha(x) = \frac{x^\alpha - \alpha x - (1-\alpha)}{\alpha(\alpha-1)},

with range x \in [0, \infty). Its convex conjugate is f_\alpha^*(y) = \frac{x(y)^\alpha - 1}{\alpha} with range y \in \left(-\infty, \frac{1}{1-\alpha}\right), where x(y) = (1 + (\alpha-1)y)^{\frac{1}{\alpha-1}}. Applying this theorem yields, after the substitution h = x(g),

D_\alpha(P\| Q) = \frac{1}{\alpha(1-\alpha)} - \inf_{h > 0}\left( \frac{\mathbb{E}_P[h^{\alpha-1}]}{1-\alpha} + \frac{\mathbb{E}_Q[h^\alpha]}{\alpha} \right),

or, releasing the constraint on h,

D_\alpha(P\| Q) = \frac{1}{\alpha(1-\alpha)} - \inf_{h}\left( \frac{\mathbb{E}_P[|h|^{\alpha-1}]}{1-\alpha} + \frac{\mathbb{E}_Q[|h|^\alpha]}{\alpha} \right).

Setting \alpha=-1 yields the variational representation of the \chi^2-divergence obtained above. The domain over which h varies is not affine-invariant in general, unlike the \chi^2-divergence case. The \chi^2-divergence is special, since in that case we can remove the |\cdot| from |h|. For general \alpha, the domain over which h varies is merely scale-invariant. Similar to above, we can replace h by a h, and take the minimum over a>0, to obtain

D_\alpha(P\| Q) = \frac{1}{\alpha(1-\alpha)}\left( 1 - \inf_{h > 0} \mathbb{E}_P[h^{\alpha-1}]^{\alpha}\, \mathbb{E}_Q[h^{\alpha}]^{1-\alpha} \right).

Setting \alpha = \frac{1}{2}, and performing another substitution by g = \sqrt{h}, yields two variational representations of the squared Hellinger distance:

H^2(P, Q) = 2 - \inf_{g > 0}\left( \mathbb{E}_P[g^{-1}] + \mathbb{E}_Q[g] \right),

H^2(P, Q) = 2 - 2 \inf_{g > 0} \sqrt{\mathbb{E}_P[g^{-1}]\, \mathbb{E}_Q[g]}.

Applying this theorem to the KL-divergence, defined by f(x) = x\ln x, with f^*(y) = e^{y-1}, yields

D_{KL}(P\| Q) = \sup_g \mathbb{E}_P[g(X)] - e^{-1}\mathbb{E}_Q[e^{g(X)}].

This is strictly less efficient than the Donsker–Varadhan representation

D_{KL}(P\| Q) = \sup_g \mathbb{E}_P[g(X)] - \ln \mathbb{E}_Q[e^{g(X)}].

This defect is fixed by the next theorem.
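The inefficiency of the naive KL representation can be checked numerically: since \ln m \le m/e, the Donsker–Varadhan objective dominates the naive objective for every g, and g = \ln(p/q) already attains D_{KL} in the Donsker–Varadhan form. A sketch with hypothetical helper names:

```python
import math

def kl(p, q):
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

def naive_objective(p, q, g):
    """E_P[g] - (1/e) E_Q[e^g], from the conjugate f*(y) = e^(y-1)."""
    return (sum(px * gx for px, gx in zip(p, g))
            - math.exp(-1) * sum(qx * math.exp(gx) for qx, gx in zip(q, g)))

def dv_objective(p, q, g):
    """E_P[g] - ln E_Q[e^g] (Donsker-Varadhan objective)."""
    return (sum(px * gx for px, gx in zip(p, g))
            - math.log(sum(qx * math.exp(gx) for qx, gx in zip(q, g))))

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
g = [math.log(px / qx) for px, qx in zip(p, q)]  # optimal for Donsker-Varadhan
```

The naive objective only reaches D_{KL} at the shifted function g + 1, reflecting that it is not invariant under adding a constant to g.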

Improved variational representation

Assume the setup in the beginning of this section ("Variational representations"). This is Theorem 7.25 in.

Example applications

Applying this theorem to KL-divergence yields the Donsker–Varadhan representation. Attempting to apply this theorem to the general \alpha-divergence with \alpha \in (-\infty, 0) \cup (0, 1) does not yield a closed-form solution.

Common examples of f-divergences

The following table lists many of the common divergences between probability distributions and the possible generating functions to which they correspond.

| Divergence | Generator f(t) |
| Total variation distance | \frac{1}{2}|t-1| |
| \chi^2-divergence | (t-1)^2 |
| Squared Hellinger distance | (\sqrt{t}-1)^2 |
| KL-divergence | t\ln t |
| Reverse KL-divergence | -\ln t |
| \alpha-divergence | \frac{t^\alpha - \alpha t - (1-\alpha)}{\alpha(\alpha-1)} |
| Jensen–Shannon divergence | \frac{1}{2}\left( t\ln t - (t+1)\ln\frac{t+1}{2} \right) |

Notably, except for total variation distance, all others are special cases of \alpha-divergence, or linear sums of \alpha-divergences.

For each f-divergence D_f, its generating function is not uniquely defined, but only up to c\cdot(t-1), where c is any real constant. That is, for any f that generates an f-divergence, we have D_{f(t)} = D_{f(t) + c\cdot(t-1)}, since \mathbb{E}_Q\left[\frac{dP}{dQ} - 1\right] = 0. This freedom is not only convenient, but actually necessary.

Let f_\alpha be the generator of \alpha-divergence; then f_\alpha and f_{1-\alpha} are convex inversions of each other, so D_\alpha(P\| Q) = D_{1-\alpha}(Q\| P). In particular, this shows that the squared Hellinger distance (\alpha = 1/2) and the Jensen–Shannon divergence are symmetric.

In the literature, the \alpha-divergences are sometimes parametrized as

f^{(\alpha)}(t) = \frac{4}{1-\alpha^2}\left(1 - t^{\frac{1+\alpha}{2}}\right),

which is equivalent to the parametrization in this page by substituting \alpha \leftarrow \frac{1+\alpha}{2}.
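Both the c\cdot(t-1) freedom and the symmetry of the squared Hellinger distance are easy to confirm numerically; a sketch assuming the discrete density form of the definition (hypothetical helper names):

```python
import math

def f_div(p, q, f):
    """Discrete D_f(P || Q) = sum_x q(x) * f(p(x)/q(x))."""
    return sum(qx * f(px / qx) for px, qx in zip(p, q) if qx > 0)

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

# Adding c*(t - 1) to the generator leaves the divergence unchanged,
# because E_Q[p/q - 1] = 0.
f_kl = lambda t: t * math.log(t)
f_kl_shifted = lambda t: t * math.log(t) + 3.0 * (t - 1)

# The squared Hellinger generator (alpha = 1/2) yields a symmetric divergence.
f_hellinger = lambda t: (math.sqrt(t) - 1) ** 2
```

The symmetry follows directly from f_\alpha and f_{1-\alpha} being convex inversions of each other, with \alpha = 1/2 mapped to itself.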

Relations to other statistical divergences

Here, we compare f-divergences with other statistical divergences.

Rényi divergence

The Rényi divergences are a family of divergences defined by

R_\alpha(P\| Q) = \frac{1}{\alpha-1}\ln \mathbb{E}_Q\left[\left(\frac{dP}{dQ}\right)^\alpha\right]

when \alpha \in (0, 1) \cup (1, +\infty). It is extended to the cases \alpha = 0, 1, +\infty by taking the limit. Simple algebra shows that R_\alpha = \frac{1}{\alpha-1}\ln\left(1 + \alpha(\alpha-1) D_\alpha\right), where D_\alpha is the \alpha-divergence defined above.
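The stated identity between the Rényi divergence and the \alpha-divergence can be verified numerically for discrete distributions; a sketch with hypothetical helper names:

```python
import math

def renyi_div(p, q, a):
    """R_a(P || Q) = (1/(a-1)) * ln E_Q[(p/q)^a]."""
    return math.log(sum(qx * (px / qx) ** a for px, qx in zip(p, q))) / (a - 1)

def alpha_div(p, q, a):
    """D_a(P || Q) with generator (t^a - a*t - (1 - a)) / (a*(a - 1))."""
    f = lambda t: (t ** a - a * t - (1 - a)) / (a * (a - 1))
    return sum(qx * f(px / qx) for px, qx in zip(p, q))

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
a = 0.7
lhs = renyi_div(p, q, a)
rhs = math.log(1 + a * (a - 1) * alpha_div(p, q, a)) / (a - 1)
```

The identity holds because \mathbb{E}_Q[(p/q)^\alpha] = 1 + \alpha(\alpha-1) D_\alpha, which follows from expanding the generator and using \mathbb{E}_Q[p/q] = 1.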

Bregman divergence

The only f-divergence that is also a Bregman divergence is the KL divergence.

Integral probability metrics

The only f-divergence that is also an integral probability metric is the total variation distance.

Financial interpretation

A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines the official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. For a large class of rational players, the expected profit rate has the same general form as the f-divergence.

This article is derived from Wikipedia and licensed under CC BY-SA 4.0.
