Log Likelihood, Cross-Entropy, KL Divergence, Logistic Regression, and Neural Networks

Rachel Lea Ballantyne Draelos

This article will cover the relationships between the negative log likelihood, entropy, softmax vs. sigmoid cross-entropy loss, maximum likelihood estimation, Kullback-Leibler (KL) divergence, logistic regression, and neural networks. If you’re not familiar with the connections between these topics, then this article is for you!

Recommended Background

  • Basic understanding of neural networks. If you would like more background in this area, please read Introduction to Neural Networks.
  • Thorough understanding of the difference between multiclass and multilabel classification. If you’re not familiar with this topic, please read the article Multi-label vs. Multi-class Classification: Sigmoid vs. Softmax. The quick refresher is as follows: in multiclass classification we want to assign a single class to an input, so we apply a softmax function to the raw output of our neural network. In multilabel classification we want to assign multiple classes to an input, so we apply an element-wise sigmoid function to the raw output of our neural network. (A small sketch contrasting the two follows this list.)
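
As a quick illustration of that refresher, here is a minimal PyTorch sketch (the raw scores are made-up numbers):

    import torch

    raw_output = torch.tensor([2.0, 1.0, 0.1])  # hypothetical raw scores for 3 classes

    # Multiclass: softmax turns the scores into a single distribution that sums to 1
    print(torch.softmax(raw_output, dim=0))     # tensor([0.6590, 0.2424, 0.0986])

    # Multilabel: an element-wise sigmoid gives each class its own independent probability
    print(torch.sigmoid(raw_output))            # tensor([0.8808, 0.7311, 0.5250])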

First we will use a multiclass classification problem to understand the relationship between log likelihood and cross-entropy. Here’s our problem setup:

Cat image source: Wikipedia.

Let’s say we’ve chosen a particular neural network architecture to solve this multiclass classification problem (for example VGG, ResNet, GoogLeNet, etc.). Our chosen architecture represents a family of possible models, where each member of the family has different weights (different parameters) and therefore represents a different relationship between the input image x and some output class predictions y. We want to choose the member of the family that has a good set of parameters for solving our particular problem of [image] -> [airplane, train, or cat].

One way of choosing good parameters to solve our task is to choose the parameters that maximize the likelihood of the observed data:
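
Written out for a dataset of n independent training examples, with c_i the true class of example i and ŷ_(i,c_i) the probability the model assigns to that class (notation chosen here for concreteness), this likelihood is:

$$ L(\theta) = \prod_{i=1}^{n} \hat{y}_{i,\,c_i} $$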

The negative log likelihood is then, literally, the negative of the log of the likelihood:
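
In the same notation:

$$ -\log L(\theta) = -\sum_{i=1}^{n} \log \hat{y}_{i,\,c_i} $$

The product becomes a sum because the log of a product is the sum of the logs.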

Reference for Setup, Likelihood, and Negative Log Likelihood: “Cross entropy and log likelihood” by Andrew Webb

Why do we “minimize the negative log likelihood” instead of “maximize the likelihood” when these are mathematically the same? It’s because we typically minimize loss functions, so we talk about the “negative log likelihood” because we can minimize it. (Source: CrossValidated.)

Thus, when you minimize the negative log likelihood, you are performing maximum likelihood estimation. Per the Wikipedia article on MLE,

And here’s another summary from Jonathan Gordon on Quora:

After that aside on maximum likelihood estimation, let’s delve more into the relationship between negative log likelihood and cross entropy. First, we’ll define entropy:
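
For a discrete random variable with true distribution g(x) and model distribution f(x), the standard definitions of entropy and cross entropy (as given in the references below) are:

$$ H(g) = -\sum_{x} g(x)\log g(x), \qquad H(g,f) = -\sum_{x} g(x)\log f(x) $$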

Section references: Wikipedia Cross entropy, “Cross entropy and log likelihood” by Andrew Webb

The Kullback-Leibler (KL) divergence is often conceptualized as a measurement of how one probability distribution differs from a second probability distribution, i.e. as a measurement of the distance between two probability distributions. Technically speaking, KL divergence is not a true metric because it doesn’t obey the triangle inequality and D_KL(g||f) doesn’t equal D_KL(f||g). Still, intuitively it may seem like a more natural way of representing a loss, since we want the distribution our model learns to be very similar to the true distribution (i.e. we want the KL divergence to be small; we want to minimize the KL divergence).

Here’s the equation for KL divergence, which can be interpreted as the expected number of additional bits needed to communicate the value taken by random variable X (distributed as g(x)) if we use the optimal encoding for f(x) rather than the optimal encoding for g(x):
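
In the discrete case, with g and f as above:

$$ D_{KL}(g\,\|\,f) = \sum_{x} g(x)\log\frac{g(x)}{f(x)} $$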

Additional ways to think about the KL divergence D_KL(g||f):

  • it is the relative entropy of g with respect to f
  • it is a measure of the information gained when one revises one’s beliefs from the prior probability distribution f to the posterior probability distribution g
  • it is the amount of information lost when f is used to approximate g

In machine learning, g typically represents the true distribution of the data, while f represents the model’s approximation of that distribution. Thus for our neural network we can write the KL divergence like this:
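
Splitting the log of the quotient gives two terms: the cross entropy of f relative to g, plus a term that involves only g:

$$ D_{KL}(g\,\|\,f) = \underbrace{-\sum_{x} g(x)\log f(x)}_{H(g,f),\ \text{the cross entropy}} + \underbrace{\sum_{x} g(x)\log g(x)}_{-H(g),\ \text{fixed by the data}} $$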

Notice that the second term (the entropy term) depends only on the data, which are fixed. Because this second term does NOT depend on the model’s predictions ŷ (the predicted probabilities), it also does not depend on the parameters of the model. Therefore, the parameters that minimize the KL divergence are the same as the parameters that minimize the cross entropy and the negative log likelihood! This means we can minimize a cross-entropy loss function and get the same parameters that we would have gotten by minimizing the KL divergence.
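
Here is a minimal numeric check of that argument (the distributions g and f below are made up; any pair of discrete distributions would do):

    import torch

    g = torch.tensor([0.7, 0.2, 0.1])  # "true" distribution (made-up numbers)
    f = torch.tensor([0.5, 0.3, 0.2])  # model's approximation (made-up numbers)

    cross_entropy = -(g * f.log()).sum()        # H(g, f)
    entropy       = -(g * g.log()).sum()        # H(g), fixed by the data alone
    kl            =  (g * (g / f).log()).sum()  # D_KL(g || f)

    # KL divergence = cross entropy - entropy, so the entropy term is a constant
    # offset and minimizing the cross entropy also minimizes the KL divergence.
    print(torch.isclose(kl, cross_entropy - entropy))  # tensor(True)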

Section references: Wikipedia Kullback-Leibler divergence, “Cross entropy and log likelihood” by Andrew Webb

So far, we have focused on “softmax cross entropy loss” in the context of a multiclass classification problem. However, when training a multilabel classification model, in which more than one output class is possible, a “sigmoid cross entropy loss” is used instead of a “softmax cross entropy loss.” Please see this article for more background on multilabel vs. multiclass classification.

As we just saw, cross-entropy is defined between two probability distributions f(x) and g(x). But this doesn’t make sense in the context of a sigmoid applied at the output layer, since the sum of the output decimals won’t be 1, and therefore we don’t really have an output “distribution.”

Per Michael Nielsen’s book, chapter 3, equation 63, one valid way to think about the sigmoid cross entropy loss is as “a summed set of per-neuron cross-entropies, with the activation of each neuron being interpreted as part of a two-element probability distribution.”

So, instead of thinking of a probability distribution across all output neurons (which is completely fine in the softmax cross entropy case), for the sigmoid cross entropy case we will think about a bunch of probability distributions, where each neuron conceptually represents one part of a two-element probability distribution.

For example, let’s say we feed the following picture to a multilabel image classification neural network which is trained with a sigmoid cross-entropy loss:

Image Source: Wikipedia.

Our network has output neurons corresponding to the classes cat, dog, couch, airplane, train, and car.

After applying a sigmoid function to the raw value of the cat neuron, we get 0.8 as our value. We can consider this 0.8 to be the probability of class “cat” and we can imagine an implicit probability value of 1 - 0.8 = 0.2 as the probability of class “NO cat.” This implicit probability value does NOT correspond to an actual neuron in the network. It is only imagined/hypothetical.

Similarly, after applying a sigmoid function to the raw value of the dog neuron, we get 0.9 as our value. We can consider this 0.9 to be the probability of class “dog” and we can imagine an implicit probability value of 1 - 0.9 = 0.1 as the probability of class “NO dog.”

For the airplane neuron, we get a probability of 0.01 out. This means we have a 1 - 0.01 = 0.99 probability of “NO airplane,” and so on, for all of the output neurons. Thus, each neuron has its own “cross entropy loss” and we just sum together the cross entropies of each neuron to get our total sigmoid cross entropy loss.

We can write out the sigmoid cross entropy loss for this network as follows:
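
Using y_j for the ground-truth label (1 or 0) of output neuron j and ŷ_j for its sigmoid output (notation chosen here for concreteness), the summed per-neuron form described above is:

$$ \text{Loss} = -\sum_{j \in \{\text{cat, dog, couch, airplane, train, car}\}} \big[\, y_j \log \hat{y}_j + (1 - y_j)\log(1 - \hat{y}_j) \,\big] $$

For the cat neuron with a true label of 1, the corresponding term is -log(0.8); for the airplane neuron with a true label of 0, it is -log(1 - 0.01) = -log(0.99).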

“Sigmoid cross entropy” is sometimes referred to as “binary cross-entropy.” This article discusses “binary cross-entropy” for multilabel classification problems and includes the equation.

  • If a neural network has no hidden layers and the raw output vector has a softmax applied, then that is equivalent to multinomial logistic regression
  • If a neural network has no hidden layers and the raw output is a single value with a sigmoid applied (a logistic function), then that is logistic regression
  • Thus, logistic regression is just a special case of a neural network! (refA, refB) (See the sketch after this list.)
  • If a neural network does have hidden layers and the raw output vector has a softmax applied, and it is trained using a cross-entropy loss, then this is a “softmax cross entropy loss” which can be interpreted as a negative log likelihood, because the softmax creates a probability distribution.
  • If a neural network does have hidden layers and the raw output vector has element-wise sigmoids applied, and it is trained using a cross-entropy loss, then this is a “sigmoid cross entropy loss” which CANNOT be interpreted as a negative log likelihood, because there is no probability distribution across all the output classes. The outputs don’t sum to 1. Here we need to use the interpretation provided in the previous section, in which we conceptualize the loss as a bunch of per-neuron cross entropies that are summed together.
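
Here is a minimal PyTorch sketch of the first two bullets (the 10 input features and 3 output classes are illustrative assumptions):

    import torch.nn as nn

    # Binary logistic regression: one linear layer (no hidden layers) whose single
    # raw output is passed through a sigmoid (i.e. the logistic function).
    logistic_regression = nn.Sequential(
        nn.Linear(in_features=10, out_features=1),
        nn.Sigmoid(),
    )

    # Multinomial logistic regression: one linear layer producing a raw score per
    # class; the softmax is applied by the cross-entropy loss during training.
    multinomial_logistic_regression = nn.Linear(in_features=10, out_features=3)

Training the first model with a binary cross-entropy loss and the second with a softmax cross-entropy loss recovers the usual logistic regression objectives.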

Section reference: Stats StackExchange

For further information you can check out the Wikipedia article on Cross entropy, specifically the final section, which is entitled “Cross-entropy loss function and logistic regression.” This section describes how the typical loss function used in logistic regression is computed as the average of all cross-entropies in the sample (the “sigmoid cross entropy loss” above). The cross-entropy loss is sometimes called the “logistic loss” or the “log loss”, and the sigmoid function is also called the “logistic function.”

In PyTorch, there are several implementations of cross-entropy:

Implementation A: torch.nn.functional.binary_cross_entropy (see torch.nn.BCELoss): the input values to this function have already had a sigmoid applied, e.g.
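
A minimal sketch (the raw scores and targets below are made-up values):

    import torch
    import torch.nn.functional as F

    raw_scores = torch.tensor([1.2, -0.5, 2.1])  # hypothetical raw network outputs
    targets    = torch.tensor([1.0,  0.0, 1.0])  # multilabel ground truth

    probs = torch.sigmoid(raw_scores)            # sigmoid applied BEFORE the loss
    loss  = F.binary_cross_entropy(probs, targets)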

Implementation B: torch.nn.functional.binary_cross_entropy_with_logits (see torch.nn.BCEWithLogitsLoss): “This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.”

Implementation C: torch.nn.functional.nll_loss (see torch.nn.NLLLoss): “The negative log likelihood loss. It is useful to train a classification problem with C classes. […] The input given through a forward call is expected to contain log-probabilities of each class.”

Implementation D: torch.nn.functional.cross_entropy (see torch.nn.CrossEntropyLoss): “This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class. It is useful when training a classification problem with C classes. […] The input is expected to contain raw, unnormalized scores for each class.”
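
Here is a minimal sketch contrasting the implementations (made-up scores for 2 examples and 3 classes). It checks that D matches C applied to log-softmax outputs, and that B matches A applied to sigmoid outputs:

    import torch
    import torch.nn.functional as F

    scores = torch.tensor([[2.0, 0.5, -1.0],
                           [0.1, 1.5,  0.3]])    # raw, unnormalized class scores
    labels = torch.tensor([0, 1])                # true class indices (multiclass)

    # Implementation D == Implementation C applied to log-probabilities
    loss_d = F.cross_entropy(scores, labels)
    loss_c = F.nll_loss(F.log_softmax(scores, dim=1), labels)
    print(torch.isclose(loss_d, loss_c))         # tensor(True)

    # Implementation B == Implementation A applied to sigmoid outputs (multilabel)
    multilabel_targets = torch.tensor([[1.0, 0.0, 0.0],
                                       [0.0, 1.0, 1.0]])
    loss_b = F.binary_cross_entropy_with_logits(scores, multilabel_targets)
    loss_a = F.binary_cross_entropy(torch.sigmoid(scores), multilabel_targets)
    print(torch.isclose(loss_b, loss_a))         # tensor(True)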

There are fundamental relationships between negative log likelihood, cross entropy, KL divergence, neural networks, and logistic regression, as we have discussed here. I hope you have enjoyed learning about the connections between these different models and losses!

Since the topic of this post was connections, the featured image is a “connectome.” A connectome is “a comprehensive map of neural connections in the brain, and may be thought of as its ‘wiring diagram.’” Reference: Wikipedia. Featured Image Source: The Human Connectome.
