Logistic Regression
We can generalize linear regression to the classification scenario by defining a different family of probability distributions. If we have two classes, class 0 and class 1, then we need only specify the probability of one of these classes. The probability of class 1 determines the probability of class 0, because these two values must add up to 1.

The normal distribution over real-valued numbers that we used for linear regression is parametrized in terms of a mean. Any value we supply for this mean is valid. A distribution over a binary variable is slightly more complicated, because its mean must always be between 0 and 1. One way to solve this problem is to use the logistic sigmoid function to squash the output of the linear function into the interval (0, 1) and interpret that value as a probability:

$h_\theta(x) = g(\theta^\top x), \qquad g(z) = \dfrac{1}{1 + e^{-z}}$

This approach is known as logistic regression (a somewhat strange name, since we use the model for classification rather than regression).
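As a minimal sketch in NumPy (the helper names sigmoid and hypothesis are just illustrative), the hypothesis can be computed like this:

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid: squashes any real-valued input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    # h_theta(x) = sigmoid(theta^T x), interpreted as P(y = 1 | x; theta).
    return sigmoid(np.dot(theta, x))
```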
Cost Function
We cannot use the same cost function that we use for linear regression, because plugging the logistic function into the squared-error cost makes the cost surface wavy, with many local optima. In other words, it would not be a convex function.
Instead, our cost function for logistic regression looks like:

$J(\theta) = \dfrac{1}{m} \sum_{i=1}^{m} \mathrm{Cost}(h_\theta(x^{(i)}), y^{(i)})$

$\mathrm{Cost}(h_\theta(x), y) = -\log(h_\theta(x))$ if $y = 1$

$\mathrm{Cost}(h_\theta(x), y) = -\log(1 - h_\theta(x))$ if $y = 0$
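The two conditional cases translate directly into code. In this sketch, h stands for the predicted probability $h_\theta(x)$ for a single example:

```python
import numpy as np

def cost_single(h, y):
    # Per-example cost, written exactly as the two conditional cases above.
    if y == 1:
        return -np.log(h)
    else:  # y == 0
        return -np.log(1.0 - h)
```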
Simplified Cost Function and Gradient Descent
We can compress our cost function's two conditional cases into one case:

$\mathrm{Cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y)\log(1 - h_\theta(x))$

Notice that when $y$ is equal to 1, then the second term will be zero and will not affect the result. If $y$ is equal to 0, then the first term will be zero and will not affect the result.
We can fully write out our entire cost function as follows:

$J(\theta) = -\dfrac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(h_\theta(x^{(i)})) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right]$

A vectorized implementation is:

$h = g(X\theta)$

$J(\theta) = \dfrac{1}{m} \left( -y^\top \log(h) - (1 - y)^\top \log(1 - h) \right)$
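A sketch of this vectorized cost in NumPy, assuming a design matrix X of shape (m, n), a label vector y of 0s and 1s, and the sigmoid helper from the earlier sketch:

```python
import numpy as np

def cost(theta, X, y):
    # J(theta) = (1/m) * (-y^T log(h) - (1 - y)^T log(1 - h)), with h = g(X theta).
    m = len(y)
    h = sigmoid(X @ theta)
    return (-(y @ np.log(h)) - ((1 - y) @ np.log(1 - h))) / m
```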
Gradient Descent
Remember that the general form of gradient descent is:

Repeat {

$\theta_j := \theta_j - \alpha \dfrac{\partial}{\partial \theta_j} J(\theta)$

}

We can work out the derivative part using calculus to get:

Repeat {

$\theta_j := \theta_j - \dfrac{\alpha}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) x_j^{(i)}$

}
Notice that this algorithm is identical to the one we used in linear regression. We still have to simultaneously update all values in $\theta$.
A vectorized implementation is:

$\theta := \theta - \dfrac{\alpha}{m} X^\top (g(X\theta) - y)$
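A sketch of the corresponding update loop, reusing the sigmoid helper above; the learning rate alpha and iteration count num_iters are hypothetical hyperparameters:

```python
import numpy as np

def gradient_descent(theta, X, y, alpha=0.1, num_iters=1000):
    # Repeatedly apply theta := theta - (alpha / m) * X^T (g(X theta) - y).
    m = len(y)
    for _ in range(num_iters):
        h = sigmoid(X @ theta)
        theta = theta - (alpha / m) * (X.T @ (h - y))
    return theta
```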
Multiclass Classification: One-vs-all
Now we will approach the classification of data when we have more than two categories. Instead of $y \in \{0, 1\}$, we will expand our definition so that $y \in \{0, 1, \dots, n\}$.

Since $y \in \{0, 1, \dots, n\}$, we divide our problem into $n + 1$ (+1 because the index starts at 0) binary classification problems; in each one, we predict the probability that $y$ is a member of one of our classes.
We are basically choosing one class and then lumping all the others into a single second class. We do this repeatedly, applying binary logistic regression to each case, and then use the hypothesis that returned the highest value as our prediction.
Train a logistic regression classifier $h_\theta^{(i)}(x)$ for each class $i$ to predict the probability that $y = i$. To make a prediction on a new $x$, pick the class $i$ that maximizes $h_\theta^{(i)}(x)$.
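Putting the pieces together, here is a one-vs-all sketch, assuming integer labels 0, 1, ..., n in y and reusing the gradient_descent and sigmoid helpers above:

```python
import numpy as np

def train_one_vs_all(X, y, num_classes, alpha=0.1, num_iters=1000):
    # Fit one binary logistic regression classifier per class (class c vs. the rest).
    n = X.shape[1]
    all_theta = np.zeros((num_classes, n))
    for c in range(num_classes):
        binary_y = (y == c).astype(float)
        all_theta[c] = gradient_descent(np.zeros(n), X, binary_y, alpha, num_iters)
    return all_theta

def predict_one_vs_all(all_theta, X):
    # For each example, pick the class whose classifier outputs the highest probability.
    probs = sigmoid(X @ all_theta.T)
    return np.argmax(probs, axis=1)
```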