A confusion matrix is a square matrix with as many rows and columns as there are classes. It compares the predictions of a machine learning model on a classification task against the actual labels for the given data. The rows represent the actual classes, the columns represent the predicted classes, and each entry counts how many instances of a given actual class the model assigned to a given predicted class.
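As a concrete illustration, here is a minimal sketch that builds a confusion matrix with scikit-learn's `confusion_matrix`; the labels and predictions below are made-up values for a hypothetical three-class task:

```python
from sklearn.metrics import confusion_matrix

# Made-up actual and predicted labels for a hypothetical three-class task
y_true = ["cat", "dog", "bird", "cat", "dog", "bird", "cat"]
y_pred = ["cat", "dog", "cat", "cat", "bird", "bird", "dog"]

# Rows correspond to actual classes, columns to predicted classes,
# both in the order given by `labels`
cm = confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"])
print(cm)
# [[1 1 0]
#  [0 2 1]
#  [1 0 1]]
```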
For a binary classification task, the following four terms describe the entries of the confusion matrix.
True positives (TP): The number of instances correctly classified as positive. Considering the use case of a spam email filter, true positives are cases where the spam filter correctly classifies a spam email as spam (i.e., the email is spam, and the filter identifies it as such).
False positives (FP): The number of instances incorrectly classified as positive when they belong to the negative class. Considering the same use case of a spam email filter, false positives are cases where the spam filter inaccurately classifies a valid email as spam (i.e., the email is not spam, but the filter incorrectly identifies it as such).
True negatives (TN): The number of instances correctly classified as negative. Considering the same use case of a spam email filter, true negatives are cases in which the spam filter accurately classifies a valid email as not spam (i.e., the email is not spam, and the filter correctly identifies it as such).
False negatives (FN): The number of instances incorrectly classified as negative when they belong to the positive class. Considering the same use case of a spam email filter, false negatives are cases in which the spam filter inaccurately classifies a spam email as not being spam (i.e., the email is spam, but the filter incorrectly identifies it as not spam). For a binary task like this, all four counts can be read directly off the 2×2 matrix, as the sketch below shows.
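A minimal sketch, assuming scikit-learn and made-up spam labels: with `labels=[0, 1]`, flattening the 2×2 matrix with `ravel()` yields the entries in the order TN, FP, FN, TP.

```python
from sklearn.metrics import confusion_matrix

# 1 = spam (positive class), 0 = not spam (negative class); made-up labels
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# With labels=[0, 1], ravel() returns the 2x2 entries as TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(f"TP={tp}, FP={fp}, TN={tn}, FN={fn}")  # TP=3, FP=1, TN=3, FN=1
```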
A confusion matrix is also helpful in computing various performance metrics, such as accuracy, precision, recall, and F1 score; the sketch after the list below computes all four from the counts above.
Accuracy: It evaluates the overall performance of a machine learning model on a classification task by measuring the proportion of correct predictions out of all the predictions the model makes. The formula for accuracy is:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
Precision: It's the proportion of correctly classified positive cases (i.e., true positives) out of all the examples the model classified as positive (i.e., true positives and false positives). The formula for precision is:

$$\text{Precision} = \frac{TP}{TP + FP}$$
Recall: It's the proportion of correctly classified positive cases (i.e., true positives) out of all the examples that actually belong to the positive class (i.e., true positives and false negatives). The formula for recall is:

$$\text{Recall} = \frac{TP}{TP + FN}$$
F1 score: It's the harmonic mean of precision and recall, combining both metrics into a single number that is high only when both precision and recall are high. The formula for the F1 score is:

$$F1 = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
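Using the four counts from the spam-filter sketch above (TP=3, FP=1, TN=3, FN=1), here is a minimal plain-Python illustration of these formulas:

```python
# Counts taken from the spam-filter sketch above (made-up data)
tp, fp, tn, fn = 3, 1, 3, 1

accuracy = (tp + tn) / (tp + tn + fp + fn)          # (3 + 3) / 8 = 0.75
precision = tp / (tp + fp)                          # 3 / 4 = 0.75
recall = tp / (tp + fn)                             # 3 / 4 = 0.75
f1 = 2 * precision * recall / (precision + recall)  # 0.75

print(f"accuracy={accuracy}, precision={precision}, recall={recall}, f1={f1}")
```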