Model Performance on the Test Set

Let's examine the model's performance on the test set.

Rigorous estimate of expected future performance

We already have some idea of the out-of-sample performance of the XGBoost model from the validation set. However, the validation set was used in model fitting via early stopping. The most rigorous estimate of expected future performance should come from data that was not used at all in model fitting. This is why we reserved a test dataset, held out from the model-building process.
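
To see how the validation set participates in model fitting, here is a minimal sketch of early stopping with XGBoost. The synthetic data, parameter values such as early_stopping_rounds=50, and variable names are illustrative only, and older XGBoost versions pass early_stopping_rounds to fit() instead of the constructor:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import xgboost as xgb

# Synthetic stand-in data; the case study uses its own features and labels.
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Early stopping monitors performance on the validation set during boosting,
# so the validation set helps choose the number of trees.
model = xgb.XGBClassifier(
    n_estimators=1000, early_stopping_rounds=50, eval_metric="auc"
)
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)
print(model.best_iteration)  # boosting round selected by early stopping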

You may notice that we did examine the test set to some extent already, for example, in the chapter "Data Exploration and Cleaning," when assessing data quality and cleaning data. The gold standard for predictive modeling is to set aside a test set at the very beginning of a project and not examine it at all until the model is finished. This is the easiest way to ensure that none of the knowledge from the test set has "leaked" into the training set during model development. When leakage happens, the test set may no longer be a realistic representation of future, unknown data. However, it is sometimes convenient to explore and clean all of the data together, as we've done. If the test data has the same quality issues as the rest of the data, then there would be no leakage. What matters most is that you do not look at the test set when deciding which features to use, fitting various models, and comparing their performance.
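
As a rough sketch of the gold-standard practice, the split below reserves a test set before any exploration or modeling. The data, test_size, and random_state are illustrative, not the case study's actual settings:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Illustrative data; the case study has its own features and response.
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)

# Set the test set aside up front, stratifying on the response so that class
# proportions match between the two partitions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=24, stratify=y
)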

Examining the test set and making predictions

We begin examining the test set by loading the trained model from "Challenge: XGBoost and SHAP Explanation for Case Study Data," along with the training and test data and the feature names, using Python's pickle module:
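
A minimal sketch of this step follows; the pickle file name and the dictionary keys are placeholders, since the actual names come from what was saved in the earlier challenge:

import pickle

# Load the artifacts saved in the earlier challenge; the file name and the
# keys below are placeholders for whatever was saved there.
with open('xgb_model_w_data.pkl', 'rb') as f:
    model_and_data = pickle.load(f)

model = model_and_data['model']              # trained XGBoost model
X_train = model_and_data['X_train']          # training features
X_test = model_and_data['X_test']            # test features
y_train = model_and_data['y_train']          # training labels
y_test = model_and_data['y_test']            # test labels
feature_names = model_and_data['feature_names']

# Predicted probabilities of the positive class for the test set
test_set_pred_proba = model.predict_proba(X_test)[:, 1]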
