Introduction to the K-means Algorithm
Learn the basics of the k-means partitional method.
In this lesson, we’re going to discuss the first clustering approach, partitional clustering, and its three subtypes: density-, centroid-, and distribution-based partitioning. This approach aims to partition the data into disjoint subsets, each of which makes up a cluster. Methods adopting this approach include k-means, fuzzy c-means, DBSCAN, and model-based cluster learning using the expectation-maximization (EM) approach.
K-means
The goal of k-means is to find a set of centroids, each of which represents the center of a cluster and is characterized by the mean vector of the data points belonging to that cluster. Based on the provided distance metric, each data point is then assigned to the centroid closest to it.
Assumptions and target cluster type
The k-means algorithm searches for clusters that are spherical in shape. Note that this algorithm will not perform well in cases where we have elongated or intertwined clusters, such as those appearing as spirals and nested circles in the figure below. K-means expects data in the form of continuous, numeric vectors since it uses the mean vector (computed over all attributes) as the representation of a cluster.
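To make this limitation concrete, the following sketch (assuming scikit-learn is available; the variable names are ours) runs k-means on a nested-circles dataset and checks how poorly its labels match the true rings:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_circles

# Two nested circles: a cluster shape k-means is not designed for.
X, true_labels = make_circles(n_samples=500, factor=0.4, noise=0.05, random_state=0)

pred_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Because k-means looks for compact, spherical groups around centroids,
# it splits the plane in half instead of separating the two rings.
agreement = max(np.mean(pred_labels == true_labels),
                np.mean(pred_labels != true_labels))
print(f"Label agreement with the true rings: {agreement:.2f}")  # close to 0.5
```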
Method
The inputs of k-means are:
- The input dataset, $X = \{x_1, x_2, \ldots, x_n\}$,
- (optional) The distance function $d(x_i, x_j)$, with $x_i, x_j$ being data points in $X$,
- The number of clusters, $k$.
The second input is optional because, by default, the k-means implementation uses the Euclidean distance. In theory, k-means can work with other metrics as well, for instance, a kernel-induced distance in kernel k-means.
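For concreteness, here is what the default distance function looks like when written out in NumPy (a minimal sketch; the function name `euclidean` is ours):

```python
import numpy as np

def euclidean(x_i, x_j):
    """Euclidean distance between two data points, k-means's default metric."""
    return np.sqrt(np.sum((np.asarray(x_i) - np.asarray(x_j)) ** 2))

print(euclidean([0.0, 0.0], [3.0, 4.0]))  # 5.0
```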
K-means follows the steps below.
Step 1: Initialization
Select $k$ initial centroids, represented as mean vectors $\mu_1, \mu_2, \ldots, \mu_k$. These centroids can be selected randomly or with prior knowledge.
Step 2: Cluster assignment
Each data point in $X$ is assigned to the centroid closest to it. The distance is computed using the distance function $d$, which is the Euclidean distance by default.
Step 3: Centroid update
Cluster centroids are recomputed to be the means of their assigned data points:

$$\mu_j = \frac{1}{|C_j|} \sum_{x_i \in C_j} x_i$$

where
- $\mu_j$ is the mean vector of cluster $j$,
- $C_j$ is the data subset containing all data points that are currently assigned to cluster $j$, and
- $x_i \in C_j$ are the data points assigned to cluster $j$.
Step 4: Cluster reassignment
Each data point in $X$ is reassigned to the centroid closest to it.
Step 5: Stability checking and termination
If there’s no change in the cluster assignment, the algorithm terminates. Otherwise, we go back to Step 3.
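Putting the five steps together, here is a minimal NumPy sketch of the algorithm (the function and variable names are ours, and it omits practical refinements such as k-means++ initialization and empty-cluster handling):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=None):
    """Minimal k-means. X: (n, d) array. Returns (centroids, assignments)."""
    rng = np.random.default_rng(seed)

    # Step 1: initialization -- pick k distinct data points as initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]

    # Step 2: initial cluster assignment by Euclidean distance to each centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)

    for _ in range(max_iter):
        # Step 3: centroid update -- each centroid becomes its cluster's mean vector.
        centroids = np.array([X[assign == j].mean(axis=0) for j in range(k)])

        # Step 4: cluster reassignment with the updated centroids.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_assign = dists.argmin(axis=1)

        # Step 5: terminate once no assignment changes; otherwise repeat from Step 3.
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign

    return centroids, assign

# Example usage on synthetic 2-D data:
X = np.random.default_rng(0).normal(size=(200, 2))
centroids, labels = kmeans(X, k=3, seed=0)
print(centroids.shape, np.bincount(labels))
```

Note that if a cluster ever loses all of its points, the mean in Step 3 becomes undefined; production implementations typically reinitialize such centroids or drop the empty cluster.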
Hyperparameter tuning
K-means expects the number of clusters $k$ as input, but we often don’t know $k$ a priori. As such, $k$ needs to be tuned, starting with a small value and going up to a large one. Cluster evaluation metrics are used to assess the quality of the resulting clusters, and the elbow method is used to select the best $k$.
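With scikit-learn, one common way to carry out this sweep is to record the WSS (exposed as the `inertia_` attribute) for each candidate $k$ and then look for the elbow. A sketch, where `X` stands in for a preprocessed dataset:

```python
import numpy as np
from sklearn.cluster import KMeans

# X is assumed to be a preprocessed, scaled numeric array; here we fake one.
X = np.random.default_rng(0).normal(size=(300, 4))

wss = {}
for k in range(2, 11):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wss[k] = model.inertia_  # within-cluster sum of squares for this k

for k, w in wss.items():
    print(f"k={k:2d}  WSS={w:10.2f}")
# Plotting WSS against k yields the scree plot; the elbow marks a good k.
```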
A practical example of applying k-means
In the lab, we’ll apply k-means to cluster players from the Dotalicious dataset. We’ll first need to load and preprocess the data. Next, since k-means assumes clusters of spherical shape, it’s important to scale or normalize the data, following the process outlined previously, to put the variables on comparable scales. We may also want to check whether there’s any correlation among the attributes so that we can consolidate the features. To do so, follow the steps discussed in the Introduction to Probability and Statistics chapter and in the Data Abstraction chapter. This is important because we don’t want correlated attributes to bias the distance computation. In general, applying dimension-reduction techniques, such as PCA, is useful for eliminating such correlations in our data before running cluster analysis.
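One way to chain these preprocessing steps together is a scikit-learn pipeline. The sketch below uses placeholder data, since the Dotalicious loading steps aren’t shown here, and the choice of `n_clusters` is illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder data standing in for the preprocessed player attributes.
X = np.random.default_rng(0).normal(size=(500, 8))

# Scale each attribute, then decorrelate with PCA before clustering.
pipeline = make_pipeline(
    StandardScaler(),        # puts variables on comparable scales
    PCA(n_components=0.95),  # keep components explaining 95% of the variance
    KMeans(n_clusters=4, n_init=10, random_state=0),  # n_clusters is illustrative
)
labels = pipeline.fit_predict(X)
print(np.bincount(labels))  # cluster sizes
```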
Selecting the number of clusters
Next, as discussed earlier, to select an appropriate number of clusters, we use a scree plot to examine how the within-cluster sum of squares (WSS) changes with the number of clusters (see the figure below). We can observe clearly that the improvement in WSS slows down after the elbow points, of which the plot shows two. We can also use the silhouette width to see how cohesive the clusters are, given the number of clusters chosen.
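Silhouette width is available directly in scikit-learn. A sketch, again with placeholder data standing in for the preprocessed features:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder data; in the lab this would be the preprocessed player features.
X = np.random.default_rng(0).normal(size=(300, 4))

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # Mean silhouette width in [-1, 1]; higher means more cohesive,
    # better-separated clusters.
    print(f"k={k}  silhouette={silhouette_score(X, labels):.3f}")
```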