Univariate Linear Regression
Here, you’ll learn more about Regression and the concepts of Univariate Linear Regression.
Univariate Linear Regression
In Univariate Linear Regression, we have one independent variable that we use to predict a dependent variable.
We will be using the Tips dataset from Seaborn’s built-in datasets to illustrate the theoretical concepts.
We will be using the following columns from the dataset for univariate analysis:
- total_bill: The total bill for the food served.
- tip: The tip given on the cost of the food.
Goal of Univariate Linear Regression: The goal is to predict the “tip” given on a “total_bill”. The regression model constructs an equation to do so.
If we plot the scatter plot between the independent variable (total_bill) and dependent variable (tip), we will get the plot below.
- We can see that the points in the scatter plot are mostly scattered along the diagonal.
- This is an indication that there can be some positive correlation between total_bill and tip, which will be fruitful in modeling (a short plotting sketch follows this list).
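Below is a minimal sketch of how this scatter plot can be reproduced; it assumes the seaborn and matplotlib packages are installed and uses Seaborn’s built-in copy of the Tips dataset.

```python
# Load the Tips dataset and draw the total_bill vs. tip scatter plot.
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")  # built-in Tips dataset

# Independent variable (total_bill) on the x-axis, dependent variable (tip) on the y-axis
sns.scatterplot(data=tips, x="total_bill", y="tip")
plt.title("total_bill vs. tip")
plt.show()
```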
Working
The Univariate Linear Regression model comes up with the following equation of a straight line:

$$\hat{y} = \theta_0 + \theta_1 x$$

Or, in terms of our dataset:

$$\text{tip\_predicted} = \theta_0 + \theta_1 \times \text{total\_bill}$$
Goal: Find the values of $\theta_0$ and $\theta_1$, where $\theta_0$ and $\theta_1$ are the parameters, so that the predicted tip ($\hat{y}$) is as close to the actual tip ($y$) as possible. Mathematically, we can model the problem as seen below.

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right)^2$$

- $J(\theta_0, \theta_1)$ is the cost function, which the algorithm tries to minimize by finding the values of $\theta_0$ and $\theta_1$ that give the minimum value of the above function.
- $y^{(i)}$ is the actual output value of training instance $i$, where $i = 1, \dots, m$.
- $\hat{y}^{(i)}$ is the predicted output value of training instance $i$, where $\hat{y}^{(i)} = \theta_0 + \theta_1 x^{(i)}$.
- $\sum_{i=1}^{m}$ denotes the sum across all the training instances.
- $\sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right)^2$ denotes the sum of the squared differences between the predicted and actual values across all the training instances.
- $\frac{1}{2m}$ is a term for normalization purposes, where $m$ is the number of training instances.
Let’s understand the working of the above formula with the help of hypothetical data.
| Notation | Actual Value ($y^{(i)}$) | Predicted Value ($\hat{y}^{(i)}$) | (Predicted Value − Actual Value)² |
|---|---|---|---|
| $i = 1$ | 10.5 | 15 | 20.25 |
| $i = 2$ | 15.5 | 17 | 2.25 |
| $i = 3$ | 7.34 | 5 | 5.47 |
Let’s suppose the current values of $\theta_0$ and $\theta_1$ produce the predicted values shown in the table above.

Here, $m = 3$.

$$J(\theta_0, \theta_1) = \frac{1}{2 \times 3} \left(20.25 + 2.25 + 5.47\right) = \frac{27.97}{6} \approx 4.66$$
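The same arithmetic can be checked with a few lines of Python; the actual and predicted values below are the hypothetical numbers from the table above.

```python
# Cost function J for the three hypothetical training instances
actual = [10.5, 15.5, 7.34]     # actual tip values y
predicted = [15.0, 17.0, 5.0]   # predicted tip values y-hat
m = len(actual)                 # number of training instances, m = 3

squared_errors = [(p - a) ** 2 for a, p in zip(actual, predicted)]
cost = sum(squared_errors) / (2 * m)   # J = (1 / 2m) * sum of squared errors

print([round(e, 2) for e in squared_errors])  # [20.25, 2.25, 5.48]
print(round(cost, 2))                         # 4.66
```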
Gradient descent
Gradient descent is an optimization algorithm that helps us find the optimal values of $\theta_0$ and $\theta_1$. It serves as the backbone of many Machine Learning algorithms. Let’s rephrase our goal.
Goal: Find the values of $\theta_0$ and $\theta_1$ that give us the minimum value of the cost function $J(\theta_0, \theta_1)$.
Gradient descent helps us find the optimal values. It is outlined below.
- Start with initial values of $\theta_0$ and $\theta_1$.
- Keep changing $\theta_0$ and $\theta_1$ until we reach the minimum of $J(\theta_0, \theta_1)$.
Intuition
Gradient descent works as follows:

Repeat until convergence {

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$$

}
- Here, $j = 0, 1$.
- $\frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$ is the partial derivative of the cost function with respect to $\theta_j$.
- Both $\theta_0$ and $\theta_1$ are updated simultaneously, i.e., the new value of each parameter is computed from the old values of both before either is overwritten.
- Here, $\alpha$ is the learning rate. Its value is chosen after some careful experimentation.
- Setting $\alpha$ too small can lead to slow convergence of gradient descent.
- Setting $\alpha$ too large can cause gradient descent to overshoot and miss the optimal values of $\theta_0$ and $\theta_1$ that give us the minimum value of the cost function.
There are Python modules that help us choose a good value of $\alpha$. Such a value is called a hyperparameter, and choosing hyperparameter values systematically, using established search techniques and guidance from research papers, is called hyperparameter optimization.
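Below is a hedged sketch of such a search for $\alpha$: run gradient descent for a fixed number of steps with a few candidate learning rates and keep the one with the lowest final cost. The toy data, the candidate values, and the number of steps are illustrative assumptions, not prescribed settings.

```python
# Compare a few candidate learning rates by the final cost they reach.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(5, 50, size=100)              # toy "total_bill" values
y = 1.0 + 0.15 * x + rng.normal(0, 1, 100)    # toy "tip" values

def final_cost(alpha, steps=200):
    """Run `steps` iterations of gradient descent and return the final cost J."""
    theta0, theta1 = 0.0, 0.0
    m = len(x)
    for _ in range(steps):
        y_hat = theta0 + theta1 * x
        theta0, theta1 = (theta0 - alpha * (1 / m) * np.sum(y_hat - y),
                          theta1 - alpha * (1 / m) * np.sum((y_hat - y) * x))
    return np.sum((theta0 + theta1 * x - y) ** 2) / (2 * m)

# Larger candidates (e.g. 0.01) would diverge on this data, illustrating the warning above.
for alpha in [0.0001, 0.001, 0.002]:
    print(alpha, final_cost(alpha))
```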
Gradient descent for Univariate Linear Regression
Now, we can apply gradient descent by expanding the derivative term for Univariate Linear Regression, as seen below.
Cost function:

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left(\theta_0 + \theta_1 x^{(i)} - y^{(i)}\right)^2$$

Derivatives:

$$\frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right)$$

$$\frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right) x^{(i)}$$

Gradient descent:

Repeat until convergence {

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right)$$

$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right) x^{(i)}$$

}

(with both parameters updated simultaneously)
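Here is a sketch of batch gradient descent implemented directly from these update rules and fitted on the Tips dataset. The learning rate and the number of iterations are illustrative assumptions chosen to keep the example simple.

```python
# Batch gradient descent for univariate linear regression on the Tips dataset.
import numpy as np
import seaborn as sns

tips = sns.load_dataset("tips")
x = tips["total_bill"].to_numpy()   # independent variable
y = tips["tip"].to_numpy()          # dependent variable
m = len(x)

theta0, theta1 = 0.0, 0.0           # initial parameter values
alpha = 0.001                        # learning rate (assumed; chosen by experimentation)

for _ in range(100_000):
    y_hat = theta0 + theta1 * x                      # current predictions
    d_theta0 = (1 / m) * np.sum(y_hat - y)           # dJ/d(theta0)
    d_theta1 = (1 / m) * np.sum((y_hat - y) * x)     # dJ/d(theta1)
    # Simultaneous update: both new values are computed before either is assigned
    theta0, theta1 = theta0 - alpha * d_theta0, theta1 - alpha * d_theta1

cost = np.sum((theta0 + theta1 * x - y) ** 2) / (2 * m)
print(theta0, theta1, cost)
```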
Versions of gradient descent
- Batch Gradient Descent: Each step of gradient descent (i.e., updating the parameters $\theta_0$ and $\theta_1$) uses the whole training dataset.
- Stochastic Gradient Descent: Gradient descent performs a step (i.e., updates the parameters $\theta_0$ and $\theta_1$) after every training instance or sample. This is also called online, incremental, or out-of-core learning.
- Mini-Batch Stochastic Gradient Descent: Gradient descent performs a step after a defined subset of the training set; the size of this subset is called the batch size. A short sketch contrasting the three variants follows this list.
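The sketch below contrasts how much data each variant uses per parameter update. It reuses the x, y, theta0, theta1, and alpha variables from the earlier gradient descent sketch, so those names are assumptions rather than fixed API.

```python
import numpy as np

def gd_step(theta0, theta1, xb, yb, alpha):
    """One gradient descent step computed on the batch (xb, yb)."""
    m = len(xb)
    y_hat = theta0 + theta1 * xb
    new_theta0 = theta0 - alpha * (1 / m) * np.sum(y_hat - yb)
    new_theta1 = theta1 - alpha * (1 / m) * np.sum((y_hat - yb) * xb)
    return new_theta0, new_theta1

# Batch gradient descent: one step per pass, using all m training instances
theta0, theta1 = gd_step(theta0, theta1, x, y, alpha)

# Stochastic gradient descent: one step after every single training instance
for i in range(len(x)):
    theta0, theta1 = gd_step(theta0, theta1, x[i:i+1], y[i:i+1], alpha)

# Mini-batch gradient descent: one step per batch of `batch_size` instances
batch_size = 32
for start in range(0, len(x), batch_size):
    xb, yb = x[start:start + batch_size], y[start:start + batch_size]
    theta0, theta1 = gd_step(theta0, theta1, xb, yb, alpha)
```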
Conclusion
Once gradient descent has done its job of finding the parameters, we can plug the values back into the equation and get the predicted tip (tip_predicted) for a particular total_bill.
Let’s suppose gradient descent returns final values for $\theta_0$ and $\theta_1$. Plugging them, together with a total_bill value, into the equation gives us the predicted tip.
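As a final sketch, here is what the prediction step looks like; the parameter values below are hypothetical placeholders, not the actual output of gradient descent on the Tips dataset.

```python
theta0 = 0.9   # hypothetical intercept returned by gradient descent
theta1 = 0.1   # hypothetical slope returned by gradient descent

total_bill = 25.0
tip_predicted = theta0 + theta1 * total_bill
print(tip_predicted)  # 3.4 for these hypothetical parameter values
```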