Enhancing Model Selection with Custom Metrics
Discover how to implement custom metrics to refine model evaluation and selection.
Metrics customization
Choosing metrics well suited to a problem greatly improves the ability to develop better models. Although a metric does not directly influence model training, it supports better model selection.
Several metrics are available outside TensorFlow, for example in sklearn. However, they cannot be used directly during model training in TensorFlow: TensorFlow computes metrics incrementally while processing batches during each training epoch, whereas such external metrics expect the complete arrays of labels and predictions at once.
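To illustrate the mismatch, a scikit-learn metric such as f1_score takes the full label and prediction arrays in one call, something only available after an entire evaluation pass; there is no batch-wise state to accumulate (the arrays below are illustrative, not from the course):

```python
from sklearn.metrics import f1_score

# sklearn metrics operate on complete arrays, not on running batch state.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# Precision = 1.0, recall = 2/3, so F1 is approximately 0.8.
score = f1_score(y_true, y_pred)
print(score)
```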
Fortunately, TensorFlow provides the ability for this customization. The custom-defined metrics F1Score and FalsePositiveRate are provided in the user-defined performancemetrics library. Learning the programmatic context for the customization is important and, therefore, is elucidated here.
The TensorFlow official guide shows the steps for writing a custom metric. It instructs to create a new subclass inheriting the Metric class and work on the following definitions for the customization:

__init__(): All the state variables should be created in this method by calling the self.add_weight() method, e.g., self.var = self.add_weight(...).

update_state(): All updates to the state variables should be done as self.var.assign_add(...).

result(): The final result is computed from the state variables and returned in this method.
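The three steps above can be sketched with a deliberately simple example. The metric below (a hypothetical SampleCount, not part of the course library) just counts how many samples have been seen across all batches, so each method maps directly onto one step:

```python
import tensorflow as tf

class SampleCount(tf.keras.metrics.Metric):
    """Toy metric: total number of samples seen so far."""

    def __init__(self, name="sample_count", **kwargs):
        super().__init__(name=name, **kwargs)
        # Step 1: __init__() creates state variables via add_weight().
        self.count = self.add_weight(name="count", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Step 2: update_state() accumulates with assign_add() per batch.
        self.count.assign_add(tf.cast(tf.size(y_true), tf.float32))

    def result(self):
        # Step 3: result() returns the value computed from the state.
        return self.count

metric = SampleCount()
metric.update_state([1, 0, 1], [0.9, 0.1, 0.8])  # batch of 3
metric.update_state([0], [0.2])                  # batch of 1
total = float(metric.result())
```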
Using these instructions, the FalsePositiveRate() custom metric is defined in the code below. Note that the FalsePositives metric is already present in TensorFlow; however, deriving the false positive rate from it during training is not straightforward, so FalsePositiveRate() is defined.
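The course's actual implementation lives in its performancemetrics library, which is not reproduced here; a minimal sketch of such a metric, assuming binary labels, probability predictions, and an assumed 0.5 decision threshold, might look like:

```python
import tensorflow as tf

class FalsePositiveRate(tf.keras.metrics.Metric):
    """FPR = FP / (FP + TN), accumulated batch by batch."""

    def __init__(self, name="false_positive_rate", **kwargs):
        super().__init__(name=name, **kwargs)
        # State variables created with add_weight(), per the guide.
        self.false_positives = self.add_weight(name="fp", initializer="zeros")
        self.true_negatives = self.add_weight(name="tn", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Threshold of 0.5 is an assumption for this sketch.
        y_true = tf.cast(y_true, tf.bool)
        y_pred = tf.cast(y_pred, tf.float32) > 0.5
        negatives = tf.logical_not(y_true)
        fp = tf.logical_and(negatives, y_pred)
        tn = tf.logical_and(negatives, tf.logical_not(y_pred))
        # Updates done with assign_add(), per the guide.
        self.false_positives.assign_add(tf.reduce_sum(tf.cast(fp, tf.float32)))
        self.true_negatives.assign_add(tf.reduce_sum(tf.cast(tn, tf.float32)))

    def result(self):
        # divide_no_nan guards against the no-negatives case.
        return tf.math.divide_no_nan(
            self.false_positives, self.false_positives + self.true_negatives
        )

metric = FalsePositiveRate()
# Two true negatives among labels, one of them predicted positive: FPR = 0.5.
metric.update_state([0, 0, 1, 1], [0.9, 0.2, 0.8, 0.3])
fpr = float(metric.result())
```

Such a metric can be passed directly in the metrics list of model.compile(), so it is updated on every batch like the built-in metrics.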