The k-nearest neighbors (KNN) algorithm is a supervised machine learning algorithm.
KNN assumes that similar things exist in close proximity: in data science terms, similar data points lie close to each other. KNN measures this similarity by computing the distance between points, most commonly the Euclidean distance.
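The notion of "closeness" can be made concrete with a small sketch. The function below is a minimal illustration (the name `euclidean_distance` is ours, not from the article) of the distance measure KNN typically relies on:

```python
import math

def euclidean_distance(a, b):
    # Straight-line distance between two feature vectors: the usual
    # measure of "closeness" that KNN relies on.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The point (1, 1) is closer to (2, 2) than to (5, 5), so KNN would
# treat (2, 2) as the more similar neighbor.
d_near = euclidean_distance((1, 1), (2, 2))
d_far = euclidean_distance((1, 1), (5, 5))
```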
KNN has several advantages:
It is very easy to implement.
This algorithm can be used for both classification and regression.
Since KNN makes no prior assumptions about the underlying data distribution, it is very useful for nonlinear data.
The algorithm can achieve relatively high accuracy on many problems.
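To make the "classification and regression" point concrete, here is a minimal from-scratch sketch (function and variable names are ours for illustration): classification takes a majority vote among the k nearest labels, while regression averages the k nearest target values.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3, task="classification"):
    # Sort training points by their distance to the query point.
    dists = sorted(
        (math.dist(query, x), y) for x, y in zip(train_X, train_y)
    )
    neighbors = [y for _, y in dists[:k]]
    if task == "classification":
        # Majority vote among the k nearest labels.
        return Counter(neighbors).most_common(1)[0][0]
    # Regression: mean of the k nearest target values.
    return sum(neighbors) / k

X = [(1, 1), (1, 2), (4, 4), (5, 5)]
labels = ["a", "a", "b", "b"]
values = [1.0, 1.2, 4.0, 4.4]

# Classification: the 3 nearest neighbors of (0, 0) vote "a", "a", "b".
cls = knn_predict(X, labels, (0, 0), k=3)
# Regression: the 2 nearest values to (5, 4) are 4.4 and 4.0.
val = knn_predict(X, values, (5, 4), k=2, task="regression")
```

The same neighbor search drives both tasks; only the final aggregation step (vote versus average) changes.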
KNN also has some disadvantages:
It is computationally expensive at prediction time, since every query must be compared against the entire training set.
It has high memory requirements, because the whole training set must be stored.
A poorly chosen value of K may lead to inaccurate predictions.
It is highly sensitive to the scale of the data, so features should normally be normalized before computing distances.
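The scale sensitivity is easy to see when features have very different ranges. Below is a small illustrative sketch (the helper name `min_max_scale` and the sample data are ours) of min-max normalization, one common fix:

```python
def min_max_scale(column):
    # Rescale one feature to the range [0, 1] so that no single
    # feature dominates the distance computation.
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

# Income in dollars dwarfs age in years; without scaling, distances
# between customers are driven almost entirely by income.
ages = [25, 40, 60]
incomes = [30_000, 80_000, 120_000]
scaled_ages = min_max_scale(ages)
scaled_incomes = min_max_scale(incomes)
```

After scaling, both features span the same [0, 1] range and contribute comparably to the distance.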
The following are some of the areas in which KNN can be applied successfully:
KNN is often used in banking systems to decide whether an individual or organization is eligible for a loan, based on key characteristics.
KNN can be used in speech recognition, handwriting detection, image recognition, and video recognition.
In elections, a potential voter can be classified into categories such as "voter" or "non-voter" based on their characteristics.
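As a minimal sketch of a Python implementation, assuming scikit-learn is installed, the example below trains a KNN classifier on the classic Iris dataset, with standardization first since KNN is sensitive to feature scale:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Standardize features on the training split only, then apply the
# same transformation to the test split.
scaler = StandardScaler().fit(X_train)

# k = 5 is a common default; in practice K is tuned by validation.
model = KNeighborsClassifier(n_neighbors=5)
model.fit(scaler.transform(X_train), y_train)
accuracy = model.score(scaler.transform(X_test), y_test)
```

Fitting here simply stores the training data; the real work of finding neighbors and voting happens inside `score` at prediction time.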