IML L3.1 Loss functions
Loss functions quantify how well or poorly a model reproduces the values of the training set.
The appropriate loss function depends on the type of problem and on the algorithm we use.
Let’s denote with $\hat y$ the prediction of the model and $y$ the true value.
Gaussian noise
Let’s assume that the relationship between the features $X$ and the label $Y$ is given by
\[{Y=f(X)+\epsilon}\]where $f$ is the model whose parameters we want to fit and $\epsilon$ is some random noise with zero mean and variance $\sigma^2$.
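As a concrete illustration, we can generate synthetic data following this model. The linear form of $f$ and the noise level below are placeholders chosen for the example, not something fixed by the derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical true model: a simple linear function.
    return 2.0 * x + 1.0

sigma = 0.5                                      # noise standard deviation
x = rng.uniform(-1.0, 1.0, size=1000)            # feature values
epsilon = rng.normal(0.0, sigma, size=x.shape)   # zero-mean Gaussian noise
y = f(x) + epsilon                               # observed labels Y = f(X) + eps
```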
The likelihood to measure $y$ for feature values $x$ is given by
\[{L}\sim\exp\left(-\frac{({y-f(x)})^{2}}{2\sigma^{2}}\right)\]If we have a set of examples $x^{(i)}$, the likelihood becomes
\[{L}\sim\prod_{i}\exp\left(-\frac{({y}^{(i)}-f({x}^{(i)}))^{2}}{2\sigma^{2}}\right)\]We now want to fix the parameters of $f$ such that we maximize the likelihood that our data was generated by the model.
It is more convenient to work with the log of the likelihood. Maximizing the likelihood is equivalent to minimising the negative log-likelihood.
\[NLL = -\log(L) = \frac{1}{2\sigma^{2}}\sum_i(y^{(i)}-f(x^{(i)}))^2\]So assuming Gaussian noise for the difference between the model and the data leads to the least-squares rule.
We can use the squared-error loss
\[J(f)=\sum_i(y^{(i)}-f(x^{(i)}))^2\]to train our machine learning algorithm.
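A minimal sketch of this loss in NumPy, together with the Gaussian negative log-likelihood it comes from (the normalisation constant is the only extra term):

```python
import numpy as np

def squared_error_loss(y_true, y_pred):
    """Sum of squared residuals: J = sum_i (y_i - f(x_i))^2."""
    return np.sum((y_true - y_pred) ** 2)

def gaussian_nll(y_true, y_pred, sigma):
    """Gaussian negative log-likelihood: the squared-error loss
    scaled by 1/(2 sigma^2), plus an additive normalisation constant
    that does not depend on the model parameters."""
    n = len(y_true)
    const = n * 0.5 * np.log(2.0 * np.pi * sigma**2)
    return squared_error_loss(y_true, y_pred) / (2.0 * sigma**2) + const
```

Because the constant does not depend on the parameters of $f$, minimising `gaussian_nll` and minimising `squared_error_loss` give the same fitted model.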
Two class model
If we have two classes, we call one the positive class $(c=1)$ and the other the negative class $(c=0)$. If the probability of belonging to class 1 is
\(p(c=1)=p\), we also have
\[p(c=0)=1-p\]The likelihood for a single measurement is $p$ if the outcome is in the positive class and $1-p$ if the outcome is in the negative class. For a set of measurements with outcomes $y_i$ the likelihood is given by
\[L=\prod_{y_i=1}p\prod_{y_i=0}(1-p)\]So the negative log-likelihood is:
\[NLL=-\sum_{y_i=1}\log(p)-\sum_{y_i=0}\log(1-p)\]Given that $y_i=0$ or $y_i=1$, we can rewrite it as
\[NLL=-\sum_i (y_i \log(p)+(1-y_i)\log(1-p))\]So if we have a model for the probability $\hat y = p(X)$, we can maximize the likelihood of the training data by minimising
\[J = -\sum_i \left(y_i\log(\hat y_i)+(1-y_i)\log(1-\hat y_i)\right)\]
This loss is called the cross-entropy.
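The cross-entropy can be sketched directly from the formula; the clipping value `eps` is an implementation detail added here to guard against $\log(0)$ when the model outputs exactly 0 or 1:

```python
import numpy as np

def cross_entropy(y_true, y_hat, eps=1e-12):
    """Binary cross-entropy: J = -sum_i [y_i log(p_i) + (1-y_i) log(1-p_i)]."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # keep log() finite at 0 and 1
    return -np.sum(y_true * np.log(y_hat) + (1.0 - y_true) * np.log(1.0 - y_hat))
```

For example, a confident correct prediction ($\hat y = 0.9$ for $y=1$) contributes only $-\log(0.9)\approx 0.105$, while a confident wrong one ($\hat y = 0.9$ for $y=0$) contributes $-\log(0.1)\approx 2.30$.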
Perceptron loss
One can formulate the perceptron algorithm in terms of a stochastic gradient descent with the loss given by
\[J(w)=\sum_i h(y_i\, p(x_i,w))\]where $p(x_i,w)$ is the model prediction $\vec x_i \cdot \vec w + w_0$ and $h$ is the hinge function:
\[h(x)=\left\{\begin{array}{ll}-x&\mathrm{if}\ x<0\\0&\mathrm{if}\ x\geq 0\end{array}\right.\]
Support vector machine
The loss for the SVM also uses the hinge function, but shifted so that we penalise values up to 1:
\[J(w)=\frac{1}{2}\vec w\cdot \vec w+ C\sum_i h_1(y_i\, p(x_i,w))\]where $p(x_i,w)$ is the model prediction $\vec x_i \cdot \vec w + w_0$ and $h_1$ is the shifted hinge function
\(h_1(x) = \max(0,1-x)\)
$C$ is a model parameter controlling the trade-off between the width of the margin and the amount of margin violation.
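Both hinge variants, and the resulting SVM objective, can be sketched in a few lines. The weights and data in the usage example are illustrative placeholders:

```python
import numpy as np

def perceptron_hinge(x):
    """Perceptron hinge: h(x) = -x for x < 0, else 0, i.e. max(0, -x)."""
    return np.maximum(0.0, -x)

def svm_hinge(x):
    """Shifted hinge h_1(x) = max(0, 1 - x): penalises margins below 1."""
    return np.maximum(0.0, 1.0 - x)

def svm_loss(w, w0, X, y, C=1.0):
    """SVM objective: J(w) = 0.5 w.w + C * sum_i h_1(y_i (x_i.w + w0))."""
    margins = y * (X @ w + w0)   # y_i * p(x_i, w)
    return 0.5 * np.dot(w, w) + C * np.sum(svm_hinge(margins))
```

Note that a correctly classified point only drops out of the SVM loss once its margin $y_i\,p(x_i,w)$ exceeds 1, whereas the perceptron hinge stops penalising as soon as the margin is non-negative; this is what pushes the SVM toward a wide margin.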