Understanding Model Bias
Learn how models can become biased, using historical examples of bias in algorithms.
By far the most dangerous type of bias, and the one most widely covered in the media, is model bias. It is even subtler than data bias and has serious consequences if not handled properly during development. This is because model bias is the exacerbation of data bias, but inside a model that most people would assume to be objective.
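A small synthetic experiment can make this amplification concrete: when one group dominates the training data, a model can look accurate overall while failing the underrepresented group. The sketch below is purely illustrative; the groups, the distribution shift, and the logistic-regression classifier are all assumptions for demonstration, not any real production system.

```python
# A minimal sketch showing how skew in the training data can surface
# as skew in model error rates (illustrative assumptions throughout).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n examples per class for one group; `shift` moves the
    group's feature distribution relative to the dominant group."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))  # class 0
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))  # class 1
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A dominates training; group B is underrepresented and slightly
# distribution-shifted, so the learned boundary never fits it well.
Xa, ya = make_group(500, shift=0.0)
Xb, yb = make_group(20, shift=1.5)

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: overall accuracy hides
# the gap, but per-group accuracy exposes it.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(200, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

Running this prints a noticeably lower accuracy for the underrepresented group, which is exactly the pattern model bias produces at scale.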
Historical examples
Model bias is far easier to understand through examples, so in this lesson, we’ll cover some of the most notorious mishaps faced by big tech companies that didn’t control for this type of bias in their pipelines.
Google Photos
Google Photos is one of the most widely used photo-storage applications and includes features like auto-tagging and image recognition. In 2015, it was reported that the Google Photos image-recognition algorithm labelled Black people as gorillas, a particularly offensive label because it echoes centuries of racist tropes. This is learned racism, and Google, along with other big tech companies, has never really figured out how to stop these types of incidents from happening. The incident was most likely caused by data bias: the underrepresentation of Black individuals in the training data. Appallingly, Google’s fix was to stop tagging primates in general; the application no longer tags pictures of primates at all.
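One practical defense against this failure mode is a representation audit of the training data before any model is fit. The sketch below assumes a hypothetical dataset where each example carries a demographic group attribute, and the 20% threshold is an arbitrary illustration rather than an accepted standard:

```python
# A minimal sketch of a training-data representation audit,
# assuming a hypothetical dataset with per-example group attributes.
from collections import Counter

# Hypothetical training examples as (group, label) pairs.
train_examples = [
    ("group_a", "person"), ("group_a", "person"), ("group_a", "person"),
    ("group_a", "person"), ("group_a", "person"), ("group_a", "person"),
    ("group_b", "person"),  # group_b is heavily underrepresented
]

counts = Counter(group for group, _ in train_examples)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    print(f"{group}: {n} examples ({share:.0%} of training data)")
    # Flag groups falling below a chosen representation threshold.
    if share < 0.20:  # threshold is an arbitrary choice for illustration
        print(f"  WARNING: {group} may be underrepresented")
```

An audit like this would not fix a biased model on its own, but it surfaces the kind of imbalance that reportedly contributed to the Google Photos incident before it reaches production.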