Normalization refers to rescaling real-valued numeric attributes into a common range, typically 0 to 1.
Data normalization is used in machine learning to make model training less sensitive to the scale of the features. Bringing the features onto a consistent scale helps the model converge to better weights and, in turn, yields a more accurate model.
The scikit-learn library provides the preprocessing module, which contains the normalize function. It takes an array as input and rescales each sample (row) to unit length; for the non-negative data used here, the resulting values fall between 0 and 1. It returns an output array with the same dimensions as the input.
from sklearn import preprocessing
import numpy as np

# create a 1 x 4 array of random values in [0, 1) and scale it up
a = np.random.random((1, 4))
a = a * 20
print("Data = ", a)

# normalize the data attributes
normalized = preprocessing.normalize(a)
print("Normalized Data = ", normalized)
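Note that normalize rescales each row to unit length rather than mapping values into a fixed 0-to-1 interval. If a strict 0-to-1 range per feature is needed, scikit-learn's MinMaxScaler does that instead. The sketch below illustrates the difference; the variable names are chosen for illustration only.

from sklearn import preprocessing
import numpy as np

a = np.random.random((1, 4)) * 20

# normalize() scales each row to unit L2 norm
normalized = preprocessing.normalize(a)
print(np.linalg.norm(normalized))  # prints 1.0 for this single-row input

# MinMaxScaler maps each column into the [0, 1] range; it works
# per feature, so it needs multiple rows to be meaningful
b = np.random.random((5, 4)) * 20
scaler = preprocessing.MinMaxScaler()
scaled = scaler.fit_transform(b)
print(scaled.min(), scaled.max())  # 0.0 and 1.0

Use normalize when the direction of each sample matters more than its magnitude (for example, with cosine similarity), and MinMaxScaler when each feature should occupy the same fixed range.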