# Understanding Logistic Regression Behind the Scenes

**Logistic Regression**

Logistic regression (LR) is widely used in classification tasks. The LR model is as follows:

$$y^{(i)} = g\left(\theta_0 + \theta_1 x_1^{(i)} + \theta_2 x_2^{(i)} + \dots + \theta_m x_m^{(i)}\right) + \varepsilon^{(i)}$$

The observed class (y) for individual i equals the sum of the predicted class and the corresponding error (ε^{(i)}). The predicted class is estimated from a function (g) that transforms a linear combination of the θ parameters and the predictors (x_1 to x_m). To simplify the linear combination, matrix transposition (T) from linear algebra can be applied, yielding the following function, where θ^{T}x equals the dot product of the transposed θ vector and the x vector:

$$y = g(\theta^{T}x) + \varepsilon$$

To isolate the function g, the error term can be moved to the left side of the equation. The left-hand side can then be treated as the predicted class (ŷ), which can be thought of as a hypothesis h_{θ}(x):

$$\hat{y} = h_{\theta}(x) = g(\theta^{T}x)$$

The function g in logistic regression is the logistic (or sigmoid) function. The logistic function has a nice property with horizontal asymptotes at 0 and 1, which fittingly represents the two classes in a binary classification problem such that y ∈ {0, 1}:

$$h_{\theta}(x) = g(\theta^{T}x) = \frac{1}{1 + e^{-\theta^{T}x}}$$

where

$$g(z) = \frac{1}{1 + e^{-z}}$$

The g(θ^{T}x) term is interpreted as the conditional probability (P) of the outcome variable equaling the "1" class given x, parameterized by θ:

$$g(\theta^{T}x) = P(y = 1 \mid x; \theta)$$

and complementarily for class "0" as

$$P(y = 0 \mid x; \theta) = 1 - P(y = 1 \mid x; \theta)$$

Typically in binary classification, when P(y=1|x;θ) ≥ 0.5 the predicted class (ŷ) is 1, and when P(y=1|x;θ) < 0.5 the predicted class is 0. P(y=1|x;θ) = 0.5 occurs when θ^{T}x = 0, which is known as the decision boundary. Thresholds other than 0.5 can be used depending on how many false positives and false negatives can be tolerated in the predictions.
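As a concrete illustration, the sigmoid hypothesis and the 0.5 decision boundary can be sketched in a few lines of NumPy (the θ and x values below are hypothetical):

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted parameters and one feature vector.
theta = np.array([-1.0, 2.0])   # theta_0 (intercept), theta_1
x = np.array([1.0, 0.8])        # x_0 = 1 for the intercept term

p1 = sigmoid(theta @ x)         # P(y = 1 | x; theta), here sigmoid(0.6)
y_hat = 1 if p1 >= 0.5 else 0   # classify with the 0.5 decision boundary
print(p1, y_hat)
```

Since θ^{T}x = 0.6 > 0, the point falls on the "1" side of the decision boundary.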

**Cost Function**

A cost (or loss) function quantifies the deviation between the predicted and observed values. It is used to estimate the most suitable θ values, i.e., those that minimize the penalty from misclassification. The cost function for logistic regression is as follows:

$$\mathrm{Cost}(h_{\theta}(x), y) = \begin{cases} -\log(h_{\theta}(x)) & \text{if } y = 1 \\ -\log(1 - h_{\theta}(x)) & \text{if } y = 0 \end{cases}$$

It can be condensed into a single equation:

$$\mathrm{Cost}(h_{\theta}(x), y) = -y\log(h_{\theta}(x)) - (1 - y)\log(1 - h_{\theta}(x))$$

The above cost function is for a single training example. The cost function J(θ) for the entire training set of p training samples is:

$$J(\theta) = -\frac{1}{p}\sum_{i=1}^{p}\left[y^{(i)}\log\left(h_{\theta}(x^{(i)})\right) + \left(1 - y^{(i)}\right)\log\left(1 - h_{\theta}(x^{(i)})\right)\right]$$

The objective is to find the set of parameter values θ that minimizes J(θ):

$$\min_{\theta} J(\theta)$$
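The condensed cost function above can be sketched as a vectorized NumPy function (the tiny two-sample training set below is made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Cross-entropy cost J(theta) averaged over p training samples.
    X has shape (p, m+1) with a leading column of ones; y holds 0/1 labels."""
    p = len(y)
    h = sigmoid(X @ theta)
    return -(1.0 / p) * np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

# Hypothetical training set: two samples, one feature plus the intercept column.
X = np.array([[1.0, 0.5], [1.0, -1.5]])
y = np.array([1, 0])
J0 = cost(np.zeros(2), X, y)  # with theta = 0, h = 0.5, so J = log(2)
print(J0)
```

With all θ's at zero every prediction is 0.5, so the cost reduces to log(2) ≈ 0.693, a handy sanity check before running gradient descent.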

**Gradient Descent**

This minimization task is done by gradient descent. The gradient descent update rule is as follows, where all θ_{j} are updated simultaneously:

Repeat {

$$\theta_{j} := \theta_{j} - \alpha \frac{\partial}{\partial \theta_{j}} J(\theta)$$

}

The learning rate parameter α dictates the size of each descent step, while the partial derivative of J(θ) dictates the direction of each step. The partial derivative term can be expressed as:

$$\frac{\partial}{\partial \theta_{j}} J(\theta) = \frac{1}{p}\sum_{i=1}^{p}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)x_{j}^{(i)}$$

and with simple substitution, the gradient descent update becomes:

Repeat {

$$\theta_{j} := \theta_{j} - \frac{\alpha}{p}\sum_{i=1}^{p}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)x_{j}^{(i)}$$

}
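The substituted update rule can be sketched as a batch gradient descent loop in NumPy (the toy data, learning rate, and iteration count are illustrative assumptions, not tuned values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent for logistic regression.
    X has shape (p, m+1) with a leading intercept column of ones."""
    p, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        h = sigmoid(X @ theta)          # h_theta(x) for all p samples at once
        grad = (X.T @ (h - y)) / p      # (1/p) * sum over i of (h - y) * x_j
        theta -= alpha * grad           # simultaneous update of all theta_j
    return theta

# Hypothetical one-feature data where y = 1 whenever the feature is positive.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0, 0, 1, 1])
theta = gradient_descent(X, y)
preds = (sigmoid(X @ theta) >= 0.5).astype(int)
print(preds)  # → [0 0 1 1]
```

Computing `X.T @ (h - y)` updates every θ_{j} from the same pre-update θ vector, which is exactly the simultaneous update the rule requires.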

**Regularization**

To reduce the chance of overfitting, a regularization term can be introduced into the logistic regression algorithm to reduce the magnitude of each θ:

$$J(\theta) = -\frac{1}{p}\sum_{i=1}^{p}\left[y^{(i)}\log\left(h_{\theta}(x^{(i)})\right) + \left(1 - y^{(i)}\right)\log\left(1 - h_{\theta}(x^{(i)})\right)\right] + \frac{\lambda}{2p}\sum_{j=1}^{m}\theta_{j}^{2}$$

where p is the total number of training samples, m is the number of θ parameters, and 𝜆 is the regularization parameter, whose purpose is to trade off between 1) the model's ability to fit the data well and 2) reducing the magnitudes of the θ parameters to avoid overfitting. The larger 𝜆 is, the greater the reduction in the magnitudes of the θ's, which may in turn lead to underfitting when the θ's are shrunk so much that the model can no longer fit the data well.

Finally, the new gradient descent rules with the regularization term incorporated are as follows, where regularization takes effect on all θ's except θ_{0}:

Repeat {

$$\theta_{0} := \theta_{0} - \frac{\alpha}{p}\sum_{i=1}^{p}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)x_{0}^{(i)}$$

$$\theta_{j} := \theta_{j} - \alpha\left[\frac{1}{p}\sum_{i=1}^{p}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)x_{j}^{(i)} + \frac{\lambda}{p}\theta_{j}\right]$$

}

where j = 1, 2,…, m corresponding to θ_{1}, θ_{2},…, θ_{m}.
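The regularized rules can be sketched as a small modification of the plain gradient descent loop, where the penalty is added for θ_{1} through θ_{m} but skipped for θ_{0} (λ and the toy data below are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_reg(X, y, alpha=0.1, lam=1.0, iters=1000):
    """Batch gradient descent with L2 regularization.
    The intercept theta_0 is excluded from the penalty term."""
    p, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        h = sigmoid(X @ theta)
        grad = (X.T @ (h - y)) / p
        grad[1:] += (lam / p) * theta[1:]   # regularize theta_1 .. theta_m only
        theta -= alpha * grad
    return theta

# Same hypothetical data, fit with and without regularization.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0, 0, 1, 1])
t_no_reg = gradient_descent_reg(X, y, lam=0.0)
t_reg = gradient_descent_reg(X, y, lam=10.0)
print(abs(t_reg[1]) < abs(t_no_reg[1]))  # larger lambda shrinks theta_1
```

Running both fits shows the regularized θ_{1} has a smaller magnitude, which is exactly the shrinkage effect 𝜆 is meant to produce.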

