The L2-norm loss function, also known as the least squares error (LSE), is used to minimize the sum of the squares of the differences between the target value, $Y_i$, and the estimated value, $f(x_i)$.

The mathematical representation of the L2-norm loss is:

$$S = \sum_{i=1}^{n} \left(Y_i - f(x_i)\right)^2$$
As an error function, the L2-norm is less robust to outliers than the L1-norm. An outlier causes the error value to grow much larger because the difference between the actual and predicted value gets squared.

However, the L2-norm always provides one stable solution (unlike the L1-norm).
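The outlier sensitivity described above can be demonstrated with a small sketch (the height values here are illustrative, not from the original example):

```python
import numpy as np

actual = np.array([1.0, 2.0, 3.0])
no_outlier = np.array([1.1, 2.1, 3.1])   # all predictions close to the targets
with_outlier = np.array([1.1, 2.1, 9.0])  # one prediction is far off

def l1_error(y, y_hat):
    # sum of absolute differences (least absolute error)
    return np.sum(np.abs(y - y_hat))

def l2_error(y, y_hat):
    # sum of squared differences (least squares error)
    return np.sum((y - y_hat) ** 2)

# The single outlier inflates the squared error far more
# than it inflates the absolute error.
print(l1_error(actual, no_outlier), l1_error(actual, with_outlier))
print(l2_error(actual, no_outlier), l2_error(actual, with_outlier))
```

Comparing the two ratios shows why the L2-norm reacts so strongly: squaring a large residual magnifies it, while the absolute value grows only linearly.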
The L1-norm loss function is known as the least absolute error (LAE) and is used to minimize the sum of the absolute differences between the target value, $Y_i$, and the estimated value, $f(x_i)$:

$$S = \sum_{i=1}^{n} \left|Y_i - f(x_i)\right|$$
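As a minimal sketch, the L1-norm error can be computed in NumPy by summing the absolute differences (reusing the same sample values as the L2 example in this answer):

```python
import numpy as np

actual_value = np.array([1, 2, 3])
predicted_value = np.array([1.1, 2.1, 5])

# sum of absolute differences between the two arrays
l1_norm = np.sum(np.abs(actual_value - predicted_value))
print(l1_norm)  # approximately 2.2
```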
The code to implement the L2-norm is given below:
```python
import numpy as np

actual_value = np.array([1, 2, 3])
predicted_value = np.array([1.1, 2.1, 5])

# take the square of the differences and sum them
l2_norm = np.sum(np.power((actual_value - predicted_value), 2))

# take the square root of the sum of squares to obtain the L2 norm
l2_norm = np.sqrt(l2_norm)
print(l2_norm)
```
- Lines 3 and 4: We create two NumPy arrays, `actual_value` and `predicted_value`, to store the heights of three people. The `predicted_value` array contains the heights predicted by a machine learning model.
- Line 7: We calculate the differences between the `actual_value` and `predicted_value` arrays, use `np.power` to square the differences between the elements of the two arrays, and use `np.sum` to sum the resulting squared values.
- Line 10: Finally, we take the square root of `l2_norm` using `np.sqrt`. This value quantifies the difference between the predicted and actual values and is used to evaluate the performance of the machine learning model.
- Line 11: We print the `l2_norm`.
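For reference, NumPy also provides `np.linalg.norm`, which computes the same L2 norm of the difference vector in one call; a quick sketch using the values from the example above:

```python
import numpy as np

actual_value = np.array([1, 2, 3])
predicted_value = np.array([1.1, 2.1, 5])

# np.linalg.norm computes the Euclidean (L2) norm by default,
# so it matches the square-root-of-sum-of-squares computed manually above
l2_norm = np.linalg.norm(actual_value - predicted_value)
print(l2_norm)
```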