What is AI fairness?

Algorithms surround us in our daily lives. They decide what we see on social media, which movie we should watch next, and what we buy. But the stakes can be higher. When we pay in an online shop, an algorithm authenticates the payment by comparing it to our typical transactions. When we buy car insurance, a model sets the premium based on attributes like driving experience, age, and past accidents. It would be nice if those algorithms were reliable and just. But we all know that these models are not perfect and sometimes make mistakes.

But what happens when someone has bad luck and the algorithms are consistently wrong about them? There can be multiple reasons. Maybe they belong to a minority group, and the model has not seen many people like them. Or maybe the data contains a stereotype about them, and the model learned to reproduce it.

AI fairness is a field of research focused on measuring and mitigating systematic algorithmic bias. It is a relatively new area of study, so we should not expect standardized procedures or well-established best practices yet. Right now, much of the work is still experimental. But the topic is so important that the lack of mature tooling and material is not a reason to ignore the problem.

One thing needs to be articulated very clearly: AI fairness is not about making the same number of positive predictions for each group. The goal is to ensure that no group of people is treated unfairly because of sensitive attributes. For example, a hiring model is not expected to hire exactly the same number of young and older people. Instead, it should never, in effect, say, “You are old, so I won’t hire you.” The course contains a dedicated section about measuring fairness correctly, as the topic is quite complex.
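The distinction above can be made concrete with a few lines of code. The sketch below uses a tiny made-up hiring dataset (all names and values are illustrative assumptions, not real data) and compares two quantities per age group: the overall selection rate, and the rate at which *qualified* candidates are hired. Equal selection rates are not the goal; a large gap in how qualified candidates fare across groups is what signals unfair treatment.

```python
from collections import defaultdict

# Hypothetical hiring outcomes: (age_group, qualified, hired).
# This data is invented for illustration only.
records = [
    ("young", True, True),
    ("young", False, False),
    ("young", True, True),
    ("older", True, True),
    ("older", False, False),
    ("older", True, False),  # qualified, but rejected
]

def rate(flags):
    """Fraction of True values in a list of booleans."""
    return sum(flags) / len(flags)

hired_by_group = defaultdict(list)  # all candidates per group
tpr_by_group = defaultdict(list)    # qualified candidates per group

for group, qualified, hired in records:
    hired_by_group[group].append(hired)
    if qualified:
        tpr_by_group[group].append(hired)

for group in hired_by_group:
    print(f"{group}: selection rate = {rate(hired_by_group[group]):.2f}, "
          f"qualified-hire rate = {rate(tpr_by_group[group]):.2f}")
```

In this toy example, both groups could have similar selection rates while qualified older candidates are still hired less often than qualified young ones; it is the latter gap, tied to the sensitive attribute, that a fairness analysis is after.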
