Remember that in “Regression Problems” we take input variables and try to map them onto a “continuous” expected output function.
Linear regression with one variable is also called “univariate linear regression”. That is just a fancier name for the same thing.
Linear regression with one variable is used when you want to predict a single output value from a single input value. That means you have only one x as input (an attribute) and one y as output.
The Hypothesis Function
The general form of the hypothesis function is:
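$$ h_\theta(x) = \theta_0 + \theta_1 x $$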
We give the hypothesis function values for θ0 and θ1 to get our estimated output ‘y’. In other words, we are trying to create a function hθ that can reliably map our input data to our output data. For example, with θ0 = 2 and θ1 = 3 the hypothesis becomes hθ(x) = 2 + 3x, so the input x = 1 is mapped to the output 5.
The Cost Function
The cost function is used to measure the accuracy of our hypothesis. It takes an average of the hypothesis’s results for all the input x’s compared to the actual output y’s.
Here is our cost function:
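$$ J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 $$

where m is the number of training examples.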
We are able to concretely measure the accuracy of our predictor function against the correct results we have so that we can predict new results we don’t have.
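To make this concrete, here is a minimal sketch of the hypothesis and the cost function in plain Python (the function names and the sample data are illustrative assumptions, not from the original):

```python
def hypothesis(theta0, theta1, x):
    # h_theta(x) = theta0 + theta1 * x
    return theta0 + theta1 * x

def compute_cost(theta0, theta1, xs, ys):
    # J(theta0, theta1) = (1 / 2m) * sum over i of (h_theta(x_i) - y_i)^2
    m = len(xs)
    squared_errors = sum(
        (hypothesis(theta0, theta1, x) - y) ** 2 for x, y in zip(xs, ys)
    )
    return squared_errors / (2 * m)

# Points lying exactly on y = 2x: theta0 = 0, theta1 = 2 fits perfectly.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
print(compute_cost(0.0, 2.0, xs, ys))  # 0.0 (perfect fit, zero cost)
print(compute_cost(0.0, 1.5, xs, ys))  # positive cost (worse fit)
```

The better the thetas fit the data, the lower the cost, which is exactly what makes it a useful measure of accuracy.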
Gradient Descent
We have our hypothesis function, and we have a way to measure its accuracy with our cost function. Now we need a way to improve our hypothesis function, and gradient descent will help us do that.
Imagine that we graph our hypothesis function based on its parameters θ0 and θ1. We put θ0 on the x axis and θ1 on the z axis, with the cost function on the vertical y axis. The points on the graph are the results of the cost function using those specific theta parameters.
We will know that we have succeeded when the cost function is at its global minimum. The way to reach the global minimum is by taking the derivative of the cost function, which gives the slope of the tangent at the current point. We then step downhill along that slope, with the size of each step controlled by the parameter α (alpha), also known as the learning rate.
The gradient descent equation is:
repeat until convergence:
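$$ \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1) \qquad \text{for } j = 0, 1 $$

For our single-variable hypothesis, the partial derivatives work out to:

$$ \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right), \qquad \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)} $$

As a minimal sketch, this is how the update loop might look in Python, reusing the hypothesis from above (the learning rate, iteration count, and sample data here are illustrative assumptions):

```python
def gradient_descent(xs, ys, alpha=0.05, num_iters=5000):
    # Start from theta0 = theta1 = 0 and repeatedly step downhill.
    theta0, theta1 = 0.0, 0.0
    m = len(xs)
    for _ in range(num_iters):
        # Prediction errors h_theta(x_i) - y_i for every training example.
        errors = [(theta0 + theta1 * x) - y for x, y in zip(xs, ys)]
        grad0 = sum(errors) / m                             # dJ/dtheta0
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m  # dJ/dtheta1
        # Step downhill, scaled by the learning rate alpha.
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1
theta0, theta1 = gradient_descent(xs, ys)
print(theta0, theta1)  # approaches 1.0 and 2.0
```

Note that both gradients are computed from the same errors list before either theta is changed, which is what the “simultaneous update” in gradient descent means.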