Case Study: Bias Mitigation for Credit Loan Data

Learn how to mitigate the impact of model bias.

This case study focuses on mitigating gender-based differences in financial lending decisions.

Previously, we discovered that our current model shows a clear bias toward male applicants, producing higher false negative and false positive rates for female applicants. In lending terms, a false negative means a creditworthy applicant is wrongly denied a loan, so these errors fall disproportionately on women.

In this section, we will explore different strategies to mitigate this unfairness and see how they affect the performance of our AI model, comparing the results with our original, unmitigated model. Get ready to dive into the fascinating world of Responsible AI!

Identify bias in the AI model

We use the fairlearn toolkit for assessing the fairness of our model.

The fairlearn toolkit provides a data structure called MetricFrame to enable evaluation of disaggregated metrics. We will show how to use a MetricFrame object to assess the trained classifier for potential fairness-related harms.
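Before applying MetricFrame to the loan data, here is a minimal, self-contained sketch of the pattern; the labels, predictions, and gender column below are made up purely for illustration and are not part of the case study. It shows how a dictionary of metric callables is passed in and how the overall, per-group, and between-group views are read back.

import numpy as np
from fairlearn.metrics import MetricFrame, false_positive_rate, false_negative_rate
from sklearn.metrics import balanced_accuracy_score

# Toy labels, predictions, and sensitive feature (illustrative only, not the loan data)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
gender = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

# Each named callable is evaluated on the whole set and on every gender group
mf = MetricFrame(
    metrics={
        "balanced_accuracy": balanced_accuracy_score,
        "false_positive_rate": false_positive_rate,
        "false_negative_rate": false_negative_rate,
    },
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)

print(mf.overall)       # one value per metric, computed over all rows
print(mf.by_group)      # one row per gender group
print(mf.difference())  # largest between-group gap for each metric

The by_group table is what lets us compare the model's behavior across genders; that is exactly what we do next on the real loan data.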

# main.py
from fairlearn.metrics import (
    MetricFrame,
    equalized_odds_difference,
    false_positive_rate,
    false_negative_rate,
)
from sklearn.metrics import balanced_accuracy_score
import matplotlib.pyplot as plt

# y_test, y_pred, and A_test (the gender column) come from the model trained
# earlier in this case study; plot_group_metrics_with_error_bars is a plotting
# helper defined in the accompanying lesson code.

# Select fairness metrics
metrics_to_report = [
    "balanced_accuracy",
    "false_positive_rate",
    "false_negative_rate",
]

# Map metric names to callables for MetricFrame. (Assumed) minimal version of
# the lesson's dictionary; the full version also includes error-bar metrics
# such as "false_positive_error".
fairness_metrics = {
    "balanced_accuracy": balanced_accuracy_score,
    "false_positive_rate": false_positive_rate,
    "false_negative_rate": false_negative_rate,
}

# Compute the disaggregated performance metrics
metricframe_unmitigated = MetricFrame(
    metrics=fairness_metrics,
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=A_test,
)
print(metricframe_unmitigated.by_group[metrics_to_report])
print(metricframe_unmitigated.difference()[metrics_to_report])
print(metricframe_unmitigated.overall[metrics_to_report])

# Plot the disaggregated metrics for the gender groups
plot_group_metrics_with_error_bars(
    metricframe_unmitigated, "false_positive_rate", "false_positive_error"
)
metricframe_unmitigated.by_group[metrics_to_report].plot.bar(
    subplots=True, layout=[1, 3], figsize=[12, 4], legend=None, rot=0
)
plt.savefig('output/graph.png')

# Compute balanced accuracy and equalized odds metrics for the model
balanced_accuracy_unmitigated = balanced_accuracy_score(y_test, y_pred)
print('Balanced Accuracy:', balanced_accuracy_unmitigated)
equalized_odds_unmitigated = equalized_odds_difference(
    y_test, y_pred, sensitive_features=A_test
)
print('Equalized Odds:', equalized_odds_unmitigated)
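The equalized odds difference printed above can be read directly off the disaggregated rates: fairlearn computes the between-group gap in true positive rate and the between-group gap in false positive rate, and reports the larger of the two, so a value of 0 means both gender groups experience identical error rates. The sketch below, again on made-up labels rather than the loan data, checks that relationship by hand.

import numpy as np
from fairlearn.metrics import (
    MetricFrame,
    true_positive_rate,
    false_positive_rate,
    equalized_odds_difference,
)

# Toy data: group F and group M, four rows each (illustrative only)
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
gender = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

rates = MetricFrame(
    metrics={"tpr": true_positive_rate, "fpr": false_positive_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)

# Between-group gap (max minus min) for each rate, then take the larger gap
manual_eod = rates.difference(method="between_groups").max()
print(manual_eod)                                                             # 0.5
print(equalized_odds_difference(y_true, y_pred, sensitive_features=gender))   # 0.5

Both lines print the same value, which is why a large equalized odds difference for our loan model signals that one gender group receives systematically different error rates than the other.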
  • We first define the list of fairness metrics we want to report for each gender group ...