KNN
Learn how to use the k-nearest neighbors (KNN) algorithm for classification and regression tasks.
The k-nearest neighbors (KNN) algorithm makes predictions on new observations by looking at the most similar observations among the data it has already seen. The same idea can be used to fill in missing values: KNN finds the rows whose other feature values are most similar to the row with the missing entry and uses those neighbors' values to estimate what the missing value should be. The k in KNN refers to the number of neighbors the algorithm considers when making its estimate.
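To make the basic prediction idea concrete, here is a minimal sketch of KNN classification using scikit-learn, which is an assumed library choice rather than one prescribed by this lesson; the toy data and k = 3 are made up for illustration.

```python
# A minimal sketch of KNN classification (assumed library: scikit-learn).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy dataset: two numeric features and a binary class label.
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
              [6.0, 9.0], [1.2, 0.5], [5.5, 8.5]])
y = np.array([0, 0, 1, 1, 0, 1])

# k = 3: each prediction is decided by the 3 most similar known rows.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# A new observation is classified by a majority vote of its 3 nearest neighbors.
print(knn.predict([[1.1, 1.0]]))  # expected: [0]
```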
For example, suppose a particular row in our dataset is missing a value for one of its features. We can use KNN to estimate that missing value from the row's other available feature values. If we set k (the number of neighbors) to three, KNN identifies the three rows that are most similar to the row in question based on their feature values and imputes the missing value from the corresponding feature values of those three neighbors, typically by averaging them for a numeric feature.
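The sketch below mirrors this scenario using scikit-learn's KNNImputer, again an assumed library choice; the small numeric table and the missing entry are invented for illustration.

```python
# A minimal sketch of KNN imputation with k = 3 (assumed library: scikit-learn).
import numpy as np
from sklearn.impute import KNNImputer

# One row has a missing value (np.nan) in its second feature.
X = np.array([
    [1.0, 2.0, 3.0],
    [1.1, 2.1, 3.2],
    [0.9, np.nan, 2.9],   # row with the missing value
    [8.0, 9.0, 7.5],
    [1.2, 2.2, 3.1],
])

# k = 3: the missing entry is replaced by the average of that feature's
# values in the 3 rows most similar to the incomplete row.
imputer = KNNImputer(n_neighbors=3)
X_imputed = imputer.fit_transform(X)
print(X_imputed[2])  # the nan is filled with the mean of 2.0, 2.1, and 2.2
```

Because the incomplete row is closest to the three rows with values near it, the imputed entry is the average of their second-feature values (2.1), while the distant row is ignored.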