In-Training Model Bias Mitigation

Learn about bias mitigation techniques applied while training algorithms.


In-training methods are often a more effective way of mitigating bias, but because they intervene while the model is training, they are usually more computationally expensive.

Adversarial debiasing

For synthetic data generation, adversarial methods create a digital twin of the data using two competing algorithms. One algorithm (the generator) creates candidate synthetic rows, and another (the discriminator) guesses whether a given row is synthetic or real. Over time, the generator learns to produce increasingly realistic rows.
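To make the analogy concrete, here is a minimal PyTorch-style sketch of that generator–discriminator loop for tabular data. The layer sizes, `NOISE_DIM`, and the `gan_step` helper are illustrative assumptions, not part of the lesson or any specific library:

```python
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM = 10, 16  # illustrative sizes (assumptions)

# Generator: maps random noise to a candidate synthetic row.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, N_FEATURES)
)

# Discriminator: scores how likely a given row is to be real.
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def gan_step(real_rows: torch.Tensor) -> None:
    n = real_rows.size(0)
    fake_rows = generator(torch.randn(n, NOISE_DIM))

    # Discriminator update: label real rows 1 and synthetic rows 0.
    d_loss = (bce(discriminator(real_rows), torch.ones(n, 1))
              + bce(discriminator(fake_rows.detach()), torch.zeros(n, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake_rows), torch.ones(n, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```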

Adversarial debiasing works similarly, also using two algorithms (see the sketch after this list):

  • Generator: Makes predictions given input variables X, which also contain protected classes.

  • Discriminator: Guesses the protected class given the generator’s predicted class.
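A minimal sketch of how these two components might be wired up in PyTorch (the `N_FEATURES` constant and layer sizes are assumptions for illustration):

```python
import torch.nn as nn

N_FEATURES = 10  # assumed feature count, including the protected attribute

# Generator (the predictor): maps input variables X, protected classes
# included, to a predicted probability of the positive class.
predictor = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

# Discriminator (the adversary): tries to recover the protected class
# from the predictor's output alone.
adversary = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)
```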

Consider a lending algorithm with race as the protected class. The task is to classify whether an applicant can repay a loan. The generator starts by making ordinary predictions over the data, using race as a predictor variable. If its predictions are highly biased (i.e., strongly dependent on race), the discriminator will easily identify the protected class of each row from the generator’s prediction.

For example, if the generator always denies Black applicants, the discriminator will be able to guess the race of an applicant with high accuracy (if denied, there’s a good chance the applicant was Black). Because the generator is penalized whenever the discriminator succeeds at this guessing game, training gradually pushes its predictions toward independence from race.
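The sketch below shows one way that penalty can be implemented for a single training step, reusing the `predictor` and `adversary` networks sketched above. The `LAMBDA` trade-off weight, the `debias_step` helper, and the tensor shapes are illustrative assumptions, and this is a simplified variant of adversarial-debiasing training, not a definitive implementation:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
LAMBDA = 1.0  # assumed weight trading accuracy against fairness

p_opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
a_opt = torch.optim.Adam(adversary.parameters(), lr=1e-3)

def debias_step(X: torch.Tensor, y: torch.Tensor, race: torch.Tensor) -> None:
    """X: features; y: repayment labels; race: protected class, as (n, 1) floats."""
    y_hat = predictor(X)

    # Adversary update: learn to predict race from the prediction alone.
    a_loss = bce(adversary(y_hat.detach()), race)
    a_opt.zero_grad()
    a_loss.backward()
    a_opt.step()

    # Predictor update: fit the labels while confusing the adversary.
    # Subtracting the adversary's loss rewards predictions that carry
    # little information about race.
    p_loss = bce(y_hat, y) - LAMBDA * bce(adversary(y_hat), race)
    p_opt.zero_grad()
    p_loss.backward()
    p_opt.step()
```

The subtracted term is what drives debiasing: minimizing the predictor’s loss means maximizing the adversary’s error, so the predictor can only improve by making its output less informative about the protected class.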
