Nearest Neighbors Regression

In this lesson, you'll learn the details of Nearest Neighbors Regression, which is used for regression problems and predicts the outcome of an instance based on its nearest neighbors.

Nearest Neighbors Regression

The idea of Nearest Neighbor Regression has been borrowed from Nearest Neighbors Classification. Note that:

  • The principle behind the nearest neighbors algorithm in regression is to find the nearest, let's say, k neighbors of a new instance. The neighbors are found using some measure of similarity or distance. Because the algorithm retrieves the k nearest neighbors for a chosen value of k, it is also called K-Nearest Neighbors (KNN).

  • K is a parameter that can be tuned.

  • In regression, the output value for a new instance is the mean of the target values of its nearest neighbors. The important thing to remember is that no equation is constructed and no parameters are optimized.

  • The nearest neighbors algorithm remembers the entire training dataset and belongs to the category of non-generalizing machine learning algorithms. It stores the training dataset in an efficient data structure, such as a KD-tree or a ball tree.
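The mechanism described above can be sketched in a few lines of plain NumPy. The helper `knn_predict` and the toy data below are illustrative, not part of the lesson:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Predict by averaging the targets of the k nearest training points."""
    # Euclidean distance from the new instance to every training instance
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k closest training instances
    nearest = np.argsort(distances)[:k]
    # The prediction is simply the mean of their target values
    return y_train[nearest].mean()

X_train = np.array([[1.0], [2.0], [3.0], [10.0]])
y_train = np.array([1.0, 2.0, 3.0, 10.0])

# The 3 nearest neighbors of 2.5 have targets 1, 2, and 3, so the
# prediction is their mean: 2.0
print(knn_predict(X_train, y_train, np.array([2.5]), k=3))
```

Note that "training" here amounts to storing the data; all the work happens at prediction time, which is why the algorithm is called non-generalizing.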

Implementation in scikit-learn

The KNeighborsRegressor class implements the KNN regression algorithm.
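A minimal usage sketch is shown below; the toy dataset is made up for illustration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[1.0], [2.0], [3.0], [10.0]])
y = np.array([1.0, 2.0, 3.0, 10.0])

# n_neighbors is the K discussed above; it can be tuned
model = KNeighborsRegressor(n_neighbors=3)
model.fit(X, y)  # "fitting" just stores the training data

# Prediction averages the targets of the 3 nearest neighbors:
# (1 + 2 + 3) / 3 = 2.0
print(model.predict([[2.5]]))
```

By default, KNeighborsRegressor uses Euclidean distance and weights all neighbors equally; both can be changed via the `metric` and `weights` parameters.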
