More on Threshold Optimization

Learn how to use a dedicated method for threshold optimization.

Threshold optimization with Fairlearn

In the Fixing Bias by Threshold Optimization project, we ran an experiment in which we selected different thresholds for each species to improve the fairness score. Although this manual approach works, there is a more efficient one: the Fairlearn library includes a built-in ThresholdOptimizer that can do this for us.

To use it, we need to provide several parameters. The most important ones are listed below; a short usage sketch follows the list.

  • The estimator, whose predictions will be adjusted.

  • The constraint, which is the fairness metric we aim to satisfy. We can choose from demographic parity, false/true positive/negative rate parity, and equalized odds.

  • Objective, which is the performance metric (not fairness) that will be maximized. This can be accuracy, balanced accuracy, selection rate, or true positive/negative rate.

  • The grid size, which is the number of grid points (candidate thresholds) to evaluate. A larger grid size allows for more precise selection but increases execution time.

Note that not all combinations of constraint and objective are permissible. The legal combinations can be found in the Fairlearn documentation.
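To make these parameters concrete, here is a minimal sketch of how the optimizer might be set up. The synthetic data, the species values, and the specific parameter choices below are illustrative assumptions, not the project's actual code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.postprocessing import ThresholdOptimizer

# Synthetic stand-ins for the features, labels, and species column.
X, y = make_classification(n_samples=1_000, n_features=5, random_state=42)
species = np.random.default_rng(42).choice(["Adelie", "Chinstrap", "Gentoo"], size=1_000)

X_train, X_test, y_train, y_test, sp_train, sp_test = train_test_split(
    X, y, species, test_size=0.3, random_state=42
)

# The estimator whose per-group decision thresholds will be adjusted.
estimator = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

optimizer = ThresholdOptimizer(
    estimator=estimator,
    constraints="demographic_parity",  # fairness metric to satisfy
    objective="accuracy_score",        # performance metric to maximize
    grid_size=1_000,                   # number of grid points to evaluate
    prefit=True,                       # the estimator is already trained
    predict_method="predict_proba",    # threshold the predicted probabilities
)

# The sensitive feature is required both when fitting and when predicting.
optimizer.fit(X_train, y_train, sensitive_features=sp_train)
y_pred = optimizer.predict(X_test, sensitive_features=sp_test)
```

The resulting predictions use a separate, optimized threshold for each group in the sensitive feature, which is exactly what we did by hand in the earlier project.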

Using precomputed predictions

The ThresholdOptimizer typically requires access to the underlying scikit-learn model, and when the model is available, it is quite simple to use. However, it can still be employed even if we don't have access to the model, as long as we have its precomputed predictions. We explore this scenario in the following notebook.
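One way this can be done, sketched below, is to wrap the precomputed scores in a minimal estimator-like object and pass it to the optimizer with prefit=True. The PrecomputedScores class and the synthetic data are hypothetical illustrations, not necessarily the approach taken in the notebook.

```python
import numpy as np
from fairlearn.postprocessing import ThresholdOptimizer

class PrecomputedScores:
    """Hypothetical wrapper exposing scores computed elsewhere as an estimator."""

    def __init__(self, scores):
        # The trailing underscore signals to scikit-learn that the object is "fitted".
        self.scores_ = np.asarray(scores)

    def fit(self, X, y):
        return self  # nothing to train; the scores already exist

    def predict(self, X):
        # We assume each row of X is simply an index into the stored score vector.
        return self.scores_[np.asarray(X).ravel()]

# Scores produced by a model we cannot call directly, plus labels and groups.
rng = np.random.default_rng(0)
scores = rng.random(200)
labels = (scores + rng.normal(0, 0.2, 200) > 0.5).astype(int)
group = rng.choice(["A", "B"], size=200)
idx = np.arange(200).reshape(-1, 1)  # "features" are just row indices

optimizer = ThresholdOptimizer(
    estimator=PrecomputedScores(scores),
    constraints="demographic_parity",
    objective="accuracy_score",
    prefit=True,                # do not refit: the scores are fixed
    predict_method="predict",   # treat the stored scores as the model's output
)
optimizer.fit(idx, labels, sensitive_features=group)
adjusted_predictions = optimizer.predict(idx, sensitive_features=group)
```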
