Linear Regression with Multiple Variables
Multiple Features
Linear regression with multiple variables is also known as "multivariate linear regression".
We now introduce notation for equations where we can have any number of input variables.
$x_j^{(i)}$ = value of feature $j$ in the $i^{th}$ training example
$x^{(i)}$ = the input (features) of the $i^{th}$ training example
$m$ = the number of training examples
$n$ = the number of features
The multivariable form of the hypothesis function accommodating these multiple features is as follows:
$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \cdots + \theta_n x_n$
In order to develop intuition about this function, we can think about $\theta_0$ as the basic price of a house, $\theta_1$ as the price per square meter, $\theta_2$ as the price per floor, etc. $x_1$ will be the number of square meters in the house, $x_2$ the number of floors, etc.
Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:
$h_\theta(x) = \begin{bmatrix} \theta_0 & \theta_1 & \cdots & \theta_n \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} = \theta^T x$
This is a vectorization of our hypothesis function for one training example. Note that for convenience we assume $x_0^{(i)} = 1$ for $i \in 1, \ldots, m$; this is what allows $\theta$ and $x$ to be multiplied as above.
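As a quick illustration, here is a minimal NumPy sketch of the vectorized hypothesis for one training example, using made-up values for $\theta$ and $x$ and assuming the leading $x_0 = 1$ has already been prepended:

```python
import numpy as np

# Illustrative values: theta_0, theta_1 (price per square meter), theta_2 (price per floor)
theta = np.array([50.0, 0.5, 10.0])
# One training example with x_0 = 1, 120 square meters, 2 floors
x = np.array([1.0, 120.0, 2.0])

# h_theta(x) = theta^T x
h = theta @ x
print(h)  # 50 + 0.5*120 + 10*2 = 130.0
```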
Gradient Descent for Multiple Variables
The gradient descent equation itself is generally the same form; we just have to repeat it for our $n$ features:
repeat until convergence: {
$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_0^{(i)}$
$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_1^{(i)}$
$\theta_2 := \theta_2 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_2^{(i)}$
$\cdots$
}
In other words:
repeat until convergence: {
$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_j^{(i)}$ for $j := 0 \ldots n$
}
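One possible NumPy sketch of this update rule (the names X, y, theta, alpha and num_iters are illustrative); each row of X is a training example with the leading column of ones already added, and all $\theta_j$ are updated simultaneously:

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Batch gradient descent for multivariate linear regression.

    X: (m, n+1) design matrix with X[:, 0] == 1
    y: (m,) target values
    theta: (n+1,) initial parameters
    alpha: learning rate
    """
    m = len(y)
    for _ in range(num_iters):
        predictions = X @ theta            # h_theta(x^(i)) for all i
        errors = predictions - y           # (h_theta(x^(i)) - y^(i))
        gradient = (X.T @ errors) / m      # one component per theta_j
        theta = theta - alpha * gradient   # simultaneous update of all theta_j
    return theta
```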
Gradient Descent in Practice (Feature Scaling)
We can speed up gradient descent by having each of our input values in roughly the same range. This is because $\theta$ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same.
Ideally: $-1 \le x_i \le 1$ or $-0.5 \le x_i \le 0.5$
These aren't exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.
Two techniques to help with this are feature scaling and mean normalization. Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value for an input variable from the values for that input variable, resulting in a new average value for the input variable of just zero. To implement both of these techniques, adjust your input values as shown in this formula:
$x_i := \frac{x_i - \mu_i}{s_i}$
Where $\mu_i$ is the average of all the values for feature $i$, and $s_i$ is the range of values (max - min), or $s_i$ is the standard deviation.
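One possible NumPy sketch of mean normalization, assuming each column of X is one feature (and that the leading column of ones, if present, is left out):

```python
import numpy as np

def mean_normalize(X):
    """Scale each feature (column) via (x - mu) / s, with s = max - min."""
    mu = X.mean(axis=0)                    # mu_i: average of feature i
    s = X.max(axis=0) - X.min(axis=0)      # s_i: range; X.std(axis=0) would also work
    X_norm = (X - mu) / s
    return X_norm, mu, s                   # keep mu and s to scale future inputs the same way
```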
Features and Polynomial Regression
We can improve our features and the form of our hypothesis function in a couple different ways.
We can combine multiple features into one. For example, we can combine $x_1$ and $x_2$ into a new feature $x_3$ by taking $x_1 \cdot x_2$.
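For example (a small sketch with made-up feature values):

```python
import numpy as np

x1 = np.array([60.0, 80.0, 120.0])   # e.g. one original feature
x2 = np.array([20.0, 25.0, 30.0])    # e.g. another original feature
x3 = x1 * x2                         # new combined feature, element-wise product
```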
Polynomial Regression
Our hypothesis function need not be linear (a straight line) if that does not fit the data well.
We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).
For example, if our hypothesis function is $h_\theta(x) = \theta_0 + \theta_1 x_1$, then we can create additional features based on $x_1$ to get the quadratic function $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2$ or the cubic function $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_1^3$.
One important thing to keep in mind is that if you choose your features this way, then feature scaling becomes very important: e.g. if $x_1$ has range 1 - 1000, then the range of $x_1^2$ becomes 1 - 1000000 and that of $x_1^3$ becomes 1 - 1000000000.
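A minimal sketch of creating and then scaling such polynomial features, assuming a single feature column x1:

```python
import numpy as np

x1 = np.linspace(1.0, 1000.0, 50)              # original feature, range 1 - 1000
X_poly = np.column_stack([x1, x1**2, x1**3])   # column ranges roughly 1e3, 1e6, 1e9

# Without scaling, the cubic column dwarfs the others; normalize each column.
mu = X_poly.mean(axis=0)
s = X_poly.max(axis=0) - X_poly.min(axis=0)
X_scaled = (X_poly - mu) / s
```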
Normal Equation
Gradient descent gives one way of minimizing $J$. Let's discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In the "Normal Equation" method, we will minimize $J$ by explicitly taking its derivatives with respect to the $\theta_j$'s and setting them to zero. This allows us to find the optimum theta without iteration. The normal equation formula is given below:
$\theta = (X^T X)^{-1} X^T y$
There is no need to do feature scaling with the normal equation.
The following is a comparison of gradient descent and the normal equation:

| Gradient Descent | Normal Equation |
| --- | --- |
| Need to choose alpha | No need to choose alpha |
| Needs many iterations | No need to iterate |
| $O(kn^2)$ | $O(n^3)$, need to calculate inverse of $X^TX$ |
| Works well when $n$ is large | Slow if $n$ is very large |
With the normal equation, computing the inversion has complexity $O(n^3)$. So if we have a very large number of features, the normal equation will be slow. In practice, when $n$ exceeds 10,000 it might be a good time to go from a normal solution to an iterative process.
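A minimal NumPy sketch of the normal equation (using pinv rather than an explicit inverse, so the sketch also tolerates a non-invertible $X^TX$):

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form solution theta = (X^T X)^{-1} X^T y.

    X: (m, n+1) design matrix with a leading column of ones
    y: (m,) target values
    """
    return np.linalg.pinv(X.T @ X) @ X.T @ y
```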