In probability theory and information theory, the Kullback–Leibler divergence^{[1]}^{[2]}^{[3]} (also information divergence, information gain, relative entropy, KLIC, or KL divergence) is a non-symmetric measure of the difference between two probability distributions P and Q. Specifically, the Kullback–Leibler divergence of Q from P, denoted D_{KL}(P‖Q), is a measure of the information lost when Q is used to approximate P:^{[4]} The Kullback–Leibler divergence measures the expected number of extra bits (so intuitively it is non-negative; this can be verified by Jensen's inequality) required to code samples from P when using a code optimized for Q, rather than using the true code optimized for P. Typically P represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution. The measure Q typically represents a theory, model, description, or approximation of P.
Although it is often intuited as a metric or distance, the Kullback–Leibler divergence is not a true metric — for example, it is not symmetric: the Kullback–Leibler divergence from P to Q is generally not the same as that from Q to P. However, its infinitesimal form, specifically its Hessian, is a metric tensor: it is the Fisher information metric.
Kullback–Leibler divergence is a special case of a broader class of divergences called f-divergences. It was originally introduced by Solomon Kullback and Richard Leibler in 1951 as the directed divergence between two distributions. It can be derived from a Bregman divergence.
Contents

1 Definition
2 Characterization
3 Motivation
4 Properties
5 Kullback–Leibler divergence for multivariate normal distributions
6 Relation to metrics
6.1 Fisher information metric
7 Relation to other quantities of information theory
8 Kullback–Leibler divergence and Bayesian updating
8.1 Bayesian experimental design
9 Discrimination information
9.1 Principle of minimum discrimination information
10 Relationship to available work
11 Quantum information theory
12 Relationship between models and reality
13 Symmetrised divergence
14 Relationship to other probability-distance measures
15 Data differencing
16 See also
17 References
18 External links
Definition
For discrete probability distributions P and Q, the Kullback–Leibler divergence of Q from P is defined to be

D_{\mathrm{KL}}(P\|Q) = \sum_i P(i) \, \log\frac{P(i)}{Q(i)}.
In words, it is the expectation of the logarithmic difference between the probabilities P and Q, where the expectation is taken using the probabilities P. The Kullback–Leibler divergence is defined only if Q(i)=0 implies P(i)=0, for all i (absolute continuity). Whenever P(i) is zero the contribution of the ith term is interpreted as zero because \lim_{x \to 0} x \log(x) = 0.
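To make the discrete definition concrete, here is a minimal Python sketch (the function name and the example numbers are illustrative, not from any particular library) that follows the conventions above: terms with P(i) = 0 contribute zero, and the divergence is taken to be infinite when Q(i) = 0 while P(i) > 0, since absolute continuity fails.

import math

def kl_divergence(p, q, base=2.0):
    """D_KL(P || Q) for discrete distributions given as equal-length
    sequences of probabilities; returns the value in bits by default."""
    if len(p) != len(q):
        raise ValueError("P and Q must be defined on the same support")
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue                     # lim_{x -> 0} x log x = 0
        if qi == 0.0:
            return float("inf")          # absolute continuity violated
        total += pi * math.log(pi / qi, base)
    return total

# Example: a biased coin P described by a fair-coin model Q, and vice versa
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))   # ~0.53 bits
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))   # ~0.74 bits: the divergence is asymmetric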
For distributions P and Q of a continuous random variable, the Kullback–Leibler divergence is defined to be the integral:^{[5]}

D_{\mathrm{KL}}(P\|Q) = \int_{-\infty}^\infty p(x) \, \log\frac{p(x)}{q(x)} \, {\rm d}x, \!
where p and q denote the densities of P and Q.
More generally, if P and Q are probability measures over a set X, and P is absolutely continuous with respect to Q, then the Kullback–Leibler divergence from P to Q is defined as

D_{\mathrm{KL}}(P\|Q) = \int_X \log\!\left(\frac{{\rm d}P}{{\rm d}Q}\right) {\rm d}P = \int_X \log\!\left(\frac{{\rm d}P}{{\rm d}Q}\right) \frac{{\rm d}P}{{\rm d}Q} \, {\rm d}Q = \int_X p \, \log \frac{p}{q} \, {\rm d}\mu, \!
where dP/dQ is the Radon–Nikodym derivative of P with respect to Q, and the last expression applies whenever μ is a measure on X for which the densities p = dP/dμ and q = dQ/dμ exist.
The logarithms in these formulae are taken to base 2 if information is measured in units of bits, or to base e if information is measured in nats. Most formulas involving the Kullback–Leibler divergence hold regardless of the base of the logarithm.
Various conventions exist for referring to D_{KL}(P‖Q) in words. Often it is referred to as the divergence between P and Q; however this fails to convey the fundamental asymmetry in the relation. Sometimes it may be found described as the divergence of P from, or with respect to Q (often in the context of relative entropy, or information gain). However, in the present article the divergence of Q from P will be the language used, as this best relates to the idea that it is P that is considered the underlying "true" or "best guess" distribution, that expectations will be calculated with reference to, while Q is some divergent, less good, approximate distribution.
Characterization
Arthur Hobson proved that the Kullback–Leibler divergence is the only measure of difference between probability distributions that satisfies certain desiderata, which are the canonical extension of those appearing in the characterization of entropy.^{[6]} Consequently, mutual information is the only measure of mutual dependence that satisfies the corresponding induced criteria, since it can be defined in terms of Kullback–Leibler divergence.
Motivation
Figure: Illustration of the Kullback–Leibler (KL) divergence for two normal (Gaussian) distributions; the typical asymmetry of the Kullback–Leibler divergence is clearly visible.
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value x_i out of a set of possibilities X can be seen as representing an implicit probability distribution q(x_i)=2^{-l_i} over X, where l_i is the length of the code for x_i in bits. Therefore, the Kullback–Leibler divergence can be interpreted as the expected extra message length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P.

\begin{align} D_{\mathrm{KL}}(P\|Q) & = -\sum_x p(x) \log q(x) + \sum_x p(x) \log p(x) \\ & = H(P,Q) - H(P) \end{align}
where H(P,Q) is the cross entropy of P and Q, and H(P) is the entropy of P.
Note also that there is a relation between the Kullback–Leibler divergence and the "rate function" in the theory of large deviations.^{[7]}^{[8]}
Kullback brings together all notions of information in his historic text, Information Theory and Statistics. For instance, he shows that the mean discrimination information between two hypotheses is the basis for all of the various measures of information, from Shannon to Fisher. Shannon's rate is the mean information between the hypotheses of dependence and independence of processes. Fisher's information is the second-order term, which dominates the Taylor approximation of the discrimination information between two models of the same parametric family.^{[2]}
Properties

The Kullback–Leibler divergence is always nonnegative,

D_{\mathrm{KL}}(P\|Q) \geq 0, \,

a result known as Gibbs' inequality, with D_{KL}(P‖Q) zero if and only if P = Q almost everywhere. The entropy H(P) thus sets a minimum value for the cross-entropy H(P,Q), the expected number of bits required when using a code based on Q rather than P; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X, if a code is used corresponding to the probability distribution Q, rather than the "true" distribution P.

The Kullback–Leibler divergence remains welldefined for continuous distributions, and furthermore is invariant under parameter transformations. For example, if a transformation is made from variable x to variable y(x), then, since P(x) dx = P(y) dy and Q(x) dx = Q(y) dy the Kullback–Leibler divergence may be rewritten:

D_{\mathrm{KL}}(P\|Q) = \int_{x_a}^{x_b}P(x)\log\left(\frac{P(x)}{Q(x)}\right)\,dx = \int_{y_a}^{y_b}P(y)\log\left(\frac{P(y)dy/dx}{Q(y)dy/dx}\right)\,dy = \int_{y_a}^{y_b}P(y)\log\left(\frac{P(y)}{Q(y)}\right)\,dy

where y_a=y(x_a) and y_b=y(x_b). Although it was assumed that the transformation was continuous, this need not be the case. This also shows that the Kullback–Leibler divergence produces a dimensionally consistent quantity, since if x is a dimensioned variable, P(x) and Q(x) are also dimensioned, since e.g. P(x) dx is dimensionless. The argument of the logarithmic term is and remains dimensionless, as it must. It can therefore be seen as in some ways a more fundamental quantity than some other properties in information theory^{[9]} (such as self-information or Shannon entropy), which can become undefined or negative for non-discrete probabilities.

The Kullback–Leibler divergence is additive for independent distributions in much the same way as Shannon entropy. If P_1, P_2 are independent distributions, with the joint distribution P(x,y) = P_1(x)P_2(y), and Q, Q_1, Q_2 likewise, then

D_{\mathrm{KL}}(P \| Q) = D_{\mathrm{KL}}(P_1 \| Q_1) + D_{\mathrm{KL}}(P_2 \| Q_2).
Kullback–Leibler divergence for multivariate normal distributions
Suppose that we have two multivariate normal distributions, with means \mu_0, \mu_1 and with (nonsingular) covariance matrices \Sigma_0, \Sigma_1. If the two distributions have the same dimension, k, then the Kullback–Leibler divergence between the distributions is as follows.^{[10]}

D_\text{KL}(\mathcal{N}_0 \| \mathcal{N}_1) = { 1 \over 2 } \left( \mathrm{tr} \left( \Sigma_1^{-1} \Sigma_0 \right) + \left( \mu_1 - \mu_0\right)^\top \Sigma_1^{-1} ( \mu_1 - \mu_0 ) - k + \ln \left( { \det \Sigma_1 \over \det \Sigma_0 } \right) \right).
The logarithm in the last term must be taken to base e since all terms apart from the last are basee logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by log_{e} 2 yields the divergence in bits.
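As a numerical check of this closed form, the following NumPy sketch (purely illustrative; the two example Gaussians are made up) evaluates the expression in nats and then converts the result to bits.

import numpy as np

def kl_mvn(mu0, Sigma0, mu1, Sigma1):
    """D_KL(N0 || N1) in nats for k-dimensional Gaussians with
    nonsingular covariance matrices."""
    k = mu0.shape[0]
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(Sigma1_inv @ Sigma0)
                  + diff @ Sigma1_inv @ diff
                  - k
                  + np.log(np.linalg.det(Sigma1) / np.linalg.det(Sigma0)))

mu0, Sigma0 = np.array([0.0, 0.0]), np.eye(2)
mu1, Sigma1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
print(kl_mvn(mu0, Sigma0, mu1, Sigma1))              # result in nats
print(kl_mvn(mu0, Sigma0, mu1, Sigma1) / np.log(2))  # divided by ln 2: result in bits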
Relation to metrics
One might be tempted to call it a "distance metric" on the space of probability distributions, but this would not be correct as the Kullback–Leibler divergence is not symmetric – that is, D_{\mathrm{KL}}(P\|Q) \neq D_{\mathrm{KL}}(Q\|P) – nor does it satisfy the triangle inequality. Even so, being a premetric, it generates a topology on the space of probability distributions. More concretely, if \{P_1,P_2,\cdots\} is a sequence of distributions such that

\lim_{n \rightarrow \infty} D_{\mathrm{KL}}(P_n\|Q) = 0
then it is said that P_n \xrightarrow{D} Q. Pinsker's inequality entails that P_n \xrightarrow{\mathrm{D}} P \Rightarrow P_n \xrightarrow{\mathrm{TV}} P, where the latter stands for the usual convergence in total variation.
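The quantitative statement behind this implication is Pinsker's inequality; in one common form, with the divergence measured in nats and the total variation distance expressed as an L_1 norm, it reads

D_{\mathrm{KL}}(P\|Q) \geq \tfrac{1}{2} \, \| P - Q \|_1^2 .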
Following Rényi (1970, 1961)^{[11]}^{[12]} the term is sometimes also called the information gain about X achieved if P can be used instead of Q. It is also called the relative entropy of P with respect to Q, and written H(P|Q).
Fisher information metric
However, the Kullback–Leibler divergence is rather directly related to a metric, specifically, the Fisher information metric. This can be made explicit as follows. Assume that the probability distributions P and Q are both parameterized by some (possibly multidimensional) parameter \theta. Consider then two close by values of P = P(\theta) and Q = P(\theta_0) so that the parameter \theta differs by only a small amount from the parameter value \theta_0. Specifically, up to first order one has (using the Einstein summation convention)

P(\theta) = P(\theta_0) + \Delta\theta^jP_j(\theta_0) + \cdots
with \Delta\theta^j = (\theta  \theta_0)^j a small change of \theta in the j direction, and P_{j}(\theta_0) = \frac{\partial P}{\partial \theta^j}(\theta_0) the corresponding rate of change in the probability distribution. Since the Kullback–Leibler divergence has an absolute minimum 0 for P = Q, i.e. \theta = \theta_0 , it changes only to second order in the small parameters \Delta\theta^j. More formally, as for any minimum, the first derivatives of the divergence vanish

\left.\frac{\partial}{\partial \theta^j}\right|_{\theta = \theta_0} D_{\mathrm{KL}}(P(\theta) \| P(\theta_0)) = 0,
and by the Taylor expansion one has up to second order

D_{\mathrm{KL}}(P(\theta)\|P(\theta_0)) = \frac{1}{2} \Delta\theta^j\Delta\theta^k g_{jk}(\theta_0) + \cdots
where the Hessian matrix of the divergence

g_{jk}(\theta_0) = \left.\frac{\partial^2}{\partial \theta^j\partial \theta^k}\right|_{\theta = \theta_0} D_{\mathrm{KL}}(P(\theta)\|P(\theta_0))
must be positive semidefinite. Letting \theta_0 vary (and dropping the subindex 0) the Hessian g_{jk}(\theta) defines a (possibly degenerate) Riemannian metric on the \theta parameter space, called the Fisher information metric.
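A quick numerical illustration of this connection (a Python sketch using a Bernoulli family, chosen here only as an example): the second derivative of D_KL(P(θ)‖P(θ_0)) at θ = θ_0, estimated by finite differences, matches the Fisher information 1/(θ_0(1−θ_0)).

import math

def kl_bernoulli(theta, theta0):
    """D_KL(Ber(theta) || Ber(theta0)) in nats."""
    return (theta * math.log(theta / theta0)
            + (1 - theta) * math.log((1 - theta) / (1 - theta0)))

theta0, h = 0.3, 1e-4
# Central second difference in theta around theta0 approximates the Hessian g(theta0)
hessian = (kl_bernoulli(theta0 + h, theta0)
           - 2 * kl_bernoulli(theta0, theta0)
           + kl_bernoulli(theta0 - h, theta0)) / h**2
print(hessian)                        # finite-difference estimate, ~4.76
print(1 / (theta0 * (1 - theta0)))    # Fisher information of Ber(theta0), ~4.76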
Relation to other quantities of information theory
Many of the other quantities of information theory can be interpreted as applications of the Kullback–Leibler divergence to specific cases.
The self-information,

I(m) = D_{\mathrm{KL}}(\delta_{im} \| \{ p_i \}),
is the Kullback–Leibler divergence of the probability distribution P(i) from a Kronecker delta representing certainty that i = m — i.e. the number of extra bits that must be transmitted to identify i if only the probability distribution P(i) is available to the receiver, not the fact that i = m.
The mutual information,

\begin{align}I(X;Y) & = D_{\mathrm{KL}}(P(X,Y) \| P(X)P(Y) ) \\ & = \operatorname{E}_X \{D_{\mathrm{KL}}(P(Y\mid X) \| P(Y) ) \} \\ & = \operatorname{E}_Y \{D_{\mathrm{KL}}(P(X\mid Y) \| P(X) ) \}\end{align}
is the Kullback–Leibler divergence of the product P(X)P(Y) of the two marginal probability distributions from the joint probability distribution P(X,Y) — i.e. the expected number of extra bits that must be transmitted to identify X and Y if they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probability P(X,Y) is known, it is the expected number of extra bits that must on average be sent to identify Y if the value of X is not already known to the receiver.
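For instance, using a made-up 2×2 joint distribution (a minimal sketch; the numbers are hypothetical), the mutual information can be computed by applying the discrete KL formula to the joint distribution and the product of its marginals:

import numpy as np

P_XY = np.array([[0.4, 0.1],
                 [0.1, 0.4]])             # hypothetical joint distribution P(X,Y)
P_X = P_XY.sum(axis=1)                    # marginal distribution of X
P_Y = P_XY.sum(axis=0)                    # marginal distribution of Y
product = np.outer(P_X, P_Y)              # product distribution P(X)P(Y)

# I(X;Y) = D_KL( P(X,Y) || P(X)P(Y) ), here in bits
mask = P_XY > 0
I = np.sum(P_XY[mask] * np.log2(P_XY[mask] / product[mask]))
print(I)   # ~0.28 bits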
The Shannon entropy,

\begin{align}H(X) & = \mathrm{(i)} \, \operatorname{E}_x \{I(x)\} \\ & = \mathrm{(ii)} \, \log N - D_{\mathrm{KL}}(P(X) \| P_U(X) )\end{align}
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the Kullback–Leibler divergence of the uniform distribution P_{U}(X) from the true distribution P(X) — i.e. less the expected number of bits saved, which would have had to be sent if the value of X were coded according to the uniform distribution P_{U}(X) rather than the true distribution P(X).
The conditional entropy,

\begin{align}H(X\mid Y) & = \log N - D_{\mathrm{KL}}(P(X,Y) \| P_U(X) P(Y) ) \\ & = \mathrm{(i)} \,\, \log N - D_{\mathrm{KL}}(P(X,Y) \| P(X) P(Y) ) - D_{\mathrm{KL}}(P(X) \| P_U(X)) \\ & = H(X) - I(X;Y) \\ & = \mathrm{(ii)} \, \log N - \operatorname{E}_Y \{ D_{\mathrm{KL}}(P(X\mid Y) \| P_U(X)) \}\end{align}
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the Kullback–Leibler divergence of the product distribution P_{U}(X) P(Y) from the true joint distribution P(X,Y) — i.e. less the expected number of bits saved which would have had to be sent if the value of X were coded according to the uniform distribution P_{U}(X) rather than the conditional distribution P(X  Y) of X given Y.
The cross entropy between two probability distributions measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution q, rather than the "true" distribution p. The cross entropy for two distributions p and q over the same probability space is thus defined as follows:

H(p, q) = -\operatorname{E}_p[\log q] = H(p) + D_{\mathrm{KL}}(p \| q).\!
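The identity is easy to check numerically. This short sketch (illustrative; the two distributions are made up) computes the entropy H(p), the cross entropy H(p,q) and D_KL(p‖q) in bits and confirms that H(p,q) = H(p) + D_KL(p‖q).

import math

p = [0.7, 0.2, 0.1]
q = [0.5, 0.25, 0.25]

H_p  = -sum(pi * math.log2(pi) for pi in p if pi > 0)                   # entropy of p
H_pq = -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)       # cross entropy H(p,q)
D_pq =  sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)  # D_KL(p || q)

print(H_pq, H_p + D_pq)   # the two printed values agree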
Kullback–Leibler divergence and Bayesian updating
In Bayesian statistics the Kullback–Leibler divergence can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution. If some new fact Y = y is discovered, it can be used to update the probability distribution for X from p(x  I) to a new posterior probability distribution p(x  y,I) using Bayes' theorem:

p(x\mid y,I) = \frac{p(y\mid x,I) p(x\mid I)}{p(y\mid I)}
This distribution has a new entropy

H\big( p(\cdot\mid y,I) \big) = -\sum_x p(x\mid y,I) \log p(x\mid y,I),
which may be less than or greater than the original entropy H(p(·  I)). However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on p(x  I) instead of a new code based on p(x  y,I) would have added an expected number of bits

D_{\mathrm{KL}}\big(p(\cdot\mid y,I) \,\big\|\, p(\cdot\mid I) \big) = \sum_x p(x\mid y,I) \log \frac{p(x\mid y,I)}{p(x\mid I)}
to the message length. This therefore represents the amount of useful information, or information gain, about X, that we can estimate has been learned by discovering Y = y.
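A small worked sketch of this information gain (the numbers are hypothetical): a three-valued prior p(x|I) is updated with an observation y through Bayes' theorem, and the gain is the Kullback–Leibler divergence of the prior from the posterior.

import numpy as np

prior = np.array([0.5, 0.3, 0.2])          # p(x | I): hypothetical prior over three values of x
likelihood = np.array([0.9, 0.4, 0.1])     # p(y | x, I) for the observed y, hypothetical

posterior = likelihood * prior             # Bayes' theorem, up to normalisation
posterior /= posterior.sum()               # p(x | y, I)

# Information gain: D_KL( p(.|y,I) || p(.|I) ), in bits
gain = np.sum(posterior * np.log2(posterior / prior))
print(posterior, gain)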
If a further piece of data, Y_{2} = y_{2}, subsequently comes in, the probability distribution for x can be updated further, to give a new best guess p(x|y_{1},y_{2},I). If one reinvestigates the information gain for using p(x|y_{1},I) rather than p(x|I), it turns out that it may be either greater or less than previously estimated:

\sum_x p(x\mid y_1,y_2,I) \log \frac{p(x\mid y_1,y_2,I)}{p(x\mid I)} may be less than or greater than \displaystyle\sum_x p(x\mid y_1,I) \log \frac{p(x\mid y_1,I)}{p(x\mid I)}
and so the combined information gain does not obey the triangle inequality:

D_{\mathrm{KL}} \big( p(\cdot\mid y_1,y_2,I) \,\big\|\, p(\cdot\mid I) \big) may be less than, equal to, or greater than D_{\mathrm{KL}} \big( p(\cdot\mid y_1,y_2,I) \,\big\|\, p(\cdot\mid y_1,I) \big) + D_{\mathrm{KL}} \big( p(\cdot \mid y_1,I) \,\big\|\, p(\cdot\mid I) \big)
All one can say is that on average, averaging using p(y_{2}  y_{1},x,I), the two sides will average out.
Bayesian experimental design
A common goal in Bayesian experimental design is to maximise the expected Kullback–Leibler divergence between the prior and the posterior.^{[13]} When posteriors are approximated to be Gaussian distributions, a design maximising the expected Kullback–Leibler divergence is called Bayes d-optimal.
Discrimination information
The Kullback–Leibler divergence D_{KL}( p(x|H_{1}) ‖ p(x|H_{0}) ) can also be interpreted as the expected discrimination information for H_{1} over H_{0}: the mean information per sample for discriminating in favor of a hypothesis H_{1} against a hypothesis H_{0}, when hypothesis H_{1} is true.^{[14]} Another name for this quantity, given to it by I.J. Good, is the expected weight of evidence for H_{1} over H_{0} to be expected from each sample.
The expected weight of evidence for H_{1} over H_{0} is not the same as the information gain expected per sample about the probability distribution p(H) of the hypotheses,

D_\mathrm{KL}( p(x\mid H_1) \| p(x\mid H_0) ) \neq IG = D_\mathrm{KL}( p(H\mid x) \| p(H\mid I) ).
Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies.
On the entropy scale of information gain there is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales of loss function for uncertainty are both useful, according to how well each reflects the particular circumstances of the problem in question.
Principle of minimum discrimination information
The idea of Kullback–Leibler divergence as discrimination information led Kullback to propose the Principle of Minimum Discrimination Information (MDI): given new facts, a new distribution f should be chosen which is as hard to discriminate from the original distribution f_{0} as possible; so that the new data produces as small an information gain D_{KL}( f ‖ f_{0} ) as possible.
For example, if one had a prior distribution p(x,a) over x and a, and subsequently learnt the true distribution of a was u(a), the Kullback–Leibler divergence between the new joint distribution for x and a, q(x|a) u(a), and the earlier prior distribution would be:

D_\mathrm{KL}(q(x\mid a)u(a)\|p(x,a)) = \operatorname{E}_{u(a)}\{D_\mathrm{KL}(q(x\mid a)\|p(x\mid a))\} + D_\mathrm{KL}(u(a)\|p(a)),
i.e. the sum of the Kullback–Leibler divergence of p(a), the prior distribution for a, from the updated distribution u(a), plus the expected value (using the probability distribution u(a)) of the Kullback–Leibler divergence of the prior conditional distribution p(x|a) from the new conditional distribution q(x|a). (Note that the latter expected value is often called the conditional Kullback–Leibler divergence (or conditional relative entropy) and denoted by D_{KL}(q(x|a)‖p(x|a)).^{[15]}) This is minimized if q(x|a) = p(x|a) over the whole support of u(a); and we note that this result incorporates Bayes' theorem, if the new distribution u(a) is in fact a δ function representing certainty that a has one particular value.
MDI can be seen as an extension of Laplace's Principle of Insufficient Reason, and the Principle of Maximum Entropy of E.T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy), but the Kullback–Leibler divergence continues to be just as relevant.
In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE), or Minxent for short. Minimising the Kullback–Leibler divergence of m from p with respect to m is equivalent to minimizing the cross-entropy of p and m, since

H(p,m) = H(p) + D_{\mathrm{KL}}(p\|m),
which is appropriate if one is trying to choose an adequate approximation to p. However, this is just as often not the task one is trying to achieve. Instead, just as often it is m that is some fixed prior reference measure, and p that one is attempting to optimise by minimising D_{KL}(p‖m) subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be D_{KL}(p‖m), rather than H(p,m).
Relationship to available work
Figure: Pressure versus volume plot of available work from a mole of Argon gas relative to ambient, calculated as T_o times the Kullback–Leibler divergence.
Surprisals^{[16]} add where probabilities multiply. The surprisal for an event of probability p is defined as s=k \ln(1 / p). If k is \{ 1, 1/\ln 2, 1.38\times 10^{-23}\} then surprisal is in \{nats, bits, or J/K\} so that, for instance, there are N bits of surprisal for landing all "heads" on a toss of N coins.
Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the average surprisal S (entropy) for a given set of control parameters (like pressure P or volume V). This constrained entropy maximization, both classically^{[17]} and quantum mechanically,^{[18]} minimizes Gibbs availability in entropy units^{[19]} A\equiv -k \ln Z where Z is a constrained multiplicity or partition function.
When temperature T is fixed, free energy (T \times A) is also minimized. Thus if T, V and number of molecules N are constant, the Helmholtz free energy F\equiv U-TS (where U is energy) is minimized as a system "equilibrates." If T and P are held constant (say during processes in your body), the Gibbs free energy G=U+PV-TS is minimized instead. The change in free energy under these conditions is a measure of available work that might be done in the process. Thus available work for an ideal gas at constant temperature T_o and pressure P_o is W = \Delta G = NkT_o \Theta(V/V_o) where V_o = NkT_o/P_o and \Theta(x)=x-1-\ln x\ge 0 (see also Gibbs' inequality).
More generally^{[20]} the work available relative to some ambient is obtained by multiplying ambient temperature T_o by Kullback–Leibler divergence or net surprisal \Delta I\ge 0, defined as the average value of k\ln(p/p_o) where p_o is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values of V_o and T_o is thus W=T_o \Delta I, where Kullback–Leibler divergence \Delta I = Nk[\Theta(V/V_o)+\frac{3}{2}\Theta(T/T_o)]. The resulting contours of constant Kullback–Leibler divergence, shown at right for a mole of Argon at standard temperature and pressure, for example put limits on the conversion of hot to cold as in flame-powered air-conditioning or in the unpowered device to convert boiling water to ice water discussed here.^{[21]} Thus Kullback–Leibler divergence measures thermodynamic availability in bits.
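A rough numerical sketch of these formulas in Python (the operating conditions are hypothetical; this only illustrates W = T_o ΔI with ΔI = Nk[Θ(V/V_o) + (3/2)Θ(T/T_o)] for a monatomic ideal gas):

import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def Theta(x):
    """Theta(x) = x - 1 - ln x >= 0."""
    return x - 1 - math.log(x)

def available_work(T, V, T0, V0, N=N_A):
    """Available work (J) of N monatomic ideal-gas atoms at (T, V)
    relative to ambient conditions (T0, V0), using W = T0 * DeltaI."""
    delta_I = N * k_B * (Theta(V / V0) + 1.5 * Theta(T / T0))   # DeltaI in entropy units (J/K)
    return T0 * delta_I

# Hypothetical example: one mole of argon at 600 K in half the ambient molar
# volume, relative to an ambient of 300 K and 1 atm
V0 = N_A * k_B * 300.0 / 101325.0     # ambient molar volume at 300 K and 1 atm, in m^3
print(available_work(T=600.0, V=0.5 * V0, T0=300.0, V0=V0))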
Quantum information theory
For density matrices P and Q on a Hilbert space, the K–L divergence (or quantum relative entropy as it is often called in this case) from P to Q is defined to be

D_{\mathrm{KL}}(P\|Q) = \operatorname{Tr}(P( \log(P) - \log(Q))). \!
In quantum information science the minimum of D_{\mathrm{KL}}(P\|Q) over all separable states Q can also be used as a measure of entanglement in the state P.
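A minimal NumPy sketch of the definition above (assuming density matrices with full support; the single-qubit states in the example are hypothetical), computing Tr(P(log P − log Q)) via eigendecomposition:

import numpy as np

def matrix_log(rho):
    """Matrix logarithm of a positive-definite Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(w)) @ v.conj().T

def quantum_relative_entropy(P, Q):
    """Tr( P (log P - log Q) ) in nats, for density matrices with full support."""
    return np.real(np.trace(P @ (matrix_log(P) - matrix_log(Q))))

# Hypothetical single-qubit density matrices (both diagonal, so the result
# reduces to the classical KL divergence of their eigenvalue distributions)
P = np.array([[0.9, 0.0], [0.0, 0.1]])
Q = np.array([[0.5, 0.0], [0.0, 0.5]])
print(quantum_relative_entropy(P, Q))   # in nats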
Relationship between models and reality
Just as Kullback–Leibler divergence of "ambient from actual" measures thermodynamic availability, Kullback–Leibler divergence of "model from reality" is also useful even if the only clues we have about reality are some experimental measurements. In the former case Kullback–Leibler divergence describes distance to equilibrium or (when multiplied by ambient temperature) the amount of available work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words, how much the model has yet to learn.
Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting a statistical model via the Akaike information criterion is particularly well described in papers^{[22]} and a book^{[23]} by Burnham and Anderson. In a nutshell, the Kullback–Leibler divergence of a model from reality may be estimated, to within a constant additive term, by a function (such as a sum of squared deviations) of the deviations observed between data and the model's predictions. Estimates of such divergence for models that share the same additive term can in turn be used to select among models.
When trying to fit parametrized models to data there are various estimators which attempt to minimize Kullback–Leibler divergence, such as maximum likelihood and maximum spacing estimators.
Symmetrised divergence
Kullback and Leibler themselves actually defined the divergence as:

D_{\mathrm{KL}}(P\|Q) + D_{\mathrm{KL}}(Q\|P)\, \!
which is symmetric and nonnegative. This quantity has sometimes been used for feature selection in classification problems, where P and Q are the conditional pdfs of a feature under two different classes.
An alternative is given via the λ divergence,

D_{\lambda}(P\|Q) = \lambda D_{\mathrm{KL}}(P\|\lambda P + (1-\lambda)Q) + (1-\lambda) D_{\mathrm{KL}}(Q\|\lambda P + (1-\lambda)Q),\, \!
which can be interpreted as the expected information gain about X from discovering which probability distribution X is drawn from, P or Q, if they currently have probabilities λ and (1 − λ) respectively.
The value λ = 0.5 gives the Jensen–Shannon divergence, defined by

D_{\mathrm{JS}} = \tfrac{1}{2} D_{\mathrm{KL}} \left (P \| M \right ) + \tfrac{1}{2} D_{\mathrm{KL}}\left (Q \| M \right )\, \!
where M is the average of the two distributions,

M = \tfrac{1}{2}(P+Q). \,
D_{JS} can also be interpreted as the capacity of a noisy information channel with two inputs giving the output distributions p and q. The Jensen–Shannon divergence, like all f-divergences, is locally proportional to the Fisher information metric. It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold), and equal to one-half the so-called Jeffreys divergence.^{[24]}^{[25]}
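A short illustrative sketch of the λ = 0.5 case, computing the Jensen–Shannon divergence of two made-up discrete distributions in bits:

import numpy as np

def kl(p, q):
    """Discrete D_KL(p || q) in bits; assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def jensen_shannon(p, q):
    """D_JS = 0.5 * D_KL(P || M) + 0.5 * D_KL(Q || M), with M = (P + Q)/2."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.9, 0.1])
q = np.array([0.2, 0.8])
print(jensen_shannon(p, q), jensen_shannon(q, p))   # equal: the measure is symmetric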
Relationship to other probability-distance measures
There are many other important measures of probability distance. Some of these are particularly connected with the Kullback–Leibler divergence. For example:

The family of Rényi divergences provide generalizations of the Kullback–Leibler divergence. Depending on the value of a certain parameter, \alpha, various inequalities may be deduced.
Other notable measures of distance include the Hellinger distance, histogram intersection, Chi-squared statistic, quadratic form distance, match distance, Kolmogorov–Smirnov distance, and earth mover's distance.^{[24]}
Data differencing
Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch).
See also
References

^

^ ^{a} ^{b} Kullback S. (1959), Information Theory and Statistics (John Wiley & Sons).

^

^ Burnham K.P., Anderson D.R. (2002), Model Selection and Multi-Model Inference, 2nd edition (Springer), p. 51.

^ Bishop C. (2006). Pattern Recognition and Machine Learning p. 55.

^ Hobson, Arthur (1971). Concepts in statistical mechanics. New York: Gordon and Breach.

^ Sanov, I.N. (1957). "On the probability of large deviations of random magnitudes". Matem. Sbornik 42 (84): 11–44.

^ Novak S.Y. (2011), Extreme Value Methods with Applications to Finance ch. 14.5 (Chapman & Hall). ISBN 9781439835746.

^ See the section "differential entropy  4" in Relative Entropy video lecture by Sergio Verdú NIPS 2009

^ Duchi J., "Derivations for Linear Algebra and Optimization", p. 13.

^ Rényi A. (1970). Probability Theory. Elsevier. Appendix, Sec.4.

^ Rényi, A. (1961), "On measures of entropy and information" (PDF), Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability 1960, pp. 547–561

^ Chaloner, K.; Verdinelli, I. (1995). "Bayesian experimental design: a review".

^ Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. (2007). "Section 14.7.2. Kullback–Leibler Distance".

^ Thomas M. Cover, Joy A. Thomas (1991) Elements of Information Theory (John Wiley & Sons), p.22

^ Myron Tribus (1961), Thermodynamics and Thermostatics (D. Van Nostrand, New York)

^ Jaynes, E. T. (1957). "Information theory and statistical mechanics" (PDF). Physical Review 106: 620–630.

^ Jaynes, E. T. (1957). "Information theory and statistical mechanics II" (PDF). Physical Review 108: 171–190.

^ J.W. Gibbs (1873), "A method of geometrical representation of thermodynamic properties of substances by means of surfaces", reprinted in The Collected Works of J. W. Gibbs, Volume I Thermodynamics, ed. W. R. Longley and R. G. Van Name (New York: Longmans, Green, 1931) footnote page 52.

^ Tribus, M.; McIrvine, E. C. (1971). "Energy and information". Scientific American 224: 179–186.

^ Fraundorf, P. (2007). "Thermal roots of correlation-based complexity". Complexity 13 (3): 18–26.

^ Burnham, K.P.; Anderson, D.R. (2001). "Kullback–Leibler information as a basis for strong inference in ecological studies". Wildlife Research 28: 111–119.

^ Burnham, K. P. and Anderson D. R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition (Springer Science) ISBN 9780387953649.

^ ^{a} ^{b} Rubner, Y.; Tomasi, C.;

^
External links

Information Theoretical Estimators Toolbox

Ruby gem for calculating Kullback–Leibler divergence

Jon Shlens' tutorial on Kullback–Leibler divergence and likelihood theory

Matlab code for calculating Kullback–Leibler divergence for discrete distributions

Sergio Verdú, Relative Entropy, NIPS 2009. Onehour video lecture.

A modern summary of infotheoretic divergence measures