Course Overview

Get a brief overview of the course and review its prerequisites.

Overview

The high performance of deep learning models and their ability to learn complex functions have fostered their adoption in many areas, such as medical imaging, finance, recommendation systems, and object tracking. However, it's often difficult to understand or interpret the logic they have learned. As a result, these models are often referred to as black-box models. Sometimes the decisions made by these black-box models hide potential biases or violate ethical principles. Therefore, there is a substantial risk in deploying deep learning models in medicine, finance, and automation without understanding how they work.

This course covers a popular AI concept: Explainable Artificial Intelligence (XAI). After learning the fundamentals of XAI, we'll implement explanation methods that interpret the internal logic behind image classifiers' decisions.

This course covers popular explanation algorithms, such as smooth gradients, guided backpropagation, integrated gradients, LIME, layer-wise relevance propagation, class activation maps, and counterfactual explanations, for image classification networks such as MobileNet-V2 trained on large-scale datasets like ImageNet-1K.

Below is a quick demo that conveys the gist of XAI. It takes an image as input, gets a class prediction from the MobileNet-V2 model trained on the ImageNet-1K dataset, and then computes an explanation highlighting (in red) which image pixels are important for the prediction.

Note: For a better experience, upload an image belonging to one of the ImageNet classes, such as dogs, cats, tigers, or zebras. Ensure that the image is in RGB (three-channel) format. Preferably, upload a .png or .jpg image.

# Imports (PIL for image loading, torch and torchvision for the model)
from PIL import Image
import torch
from torchvision.models import mobilenet_v2

# Input image: resize to 224x224 and convert to three-channel RGB
image = Image.open("__ed_input.png").resize((224, 224)).convert("RGB")

# MobileNet-V2 model with pretrained ImageNet-1K weights
model = mobilenet_v2()
ckpt = torch.load("./mobilenet_v2-b0353104.pth", map_location="cpu")
model.load_state_dict(ckpt)
model.eval()

# Run the demo (run_demo is provided by the lesson environment)
run_demo(image, model)

Such explanations can help professionals like data scientists understand whether their model bases its predictions on the right features in the input image.
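
One of the simplest explainers covered later in the course is the saliency map, which scores each input pixel by the gradient of the top predicted class's score with respect to that pixel. Below is a minimal sketch of that idea, assuming torchvision's MobileNet-V2 and standard ImageNet preprocessing. It is not the implementation behind the run_demo helper above, and the file names in the usage comments are placeholders.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import mobilenet_v2

# Standard ImageNet preprocessing for MobileNet-V2
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def saliency_map(model, image):
    """Score each pixel by how strongly the top-class score changes with it."""
    x = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
    x.requires_grad_(True)

    model.eval()
    logits = model(x)
    top_class = logits.argmax(dim=1).item()

    # Backpropagate the top-class score to the input pixels
    logits[0, top_class].backward()

    # Aggregate gradient magnitudes over the color channels -> (224, 224) heatmap
    return x.grad.abs().max(dim=1)[0].squeeze(0).numpy()

# Hypothetical usage (file names are placeholders):
# model = mobilenet_v2()
# model.load_state_dict(torch.load("./mobilenet_v2-b0353104.pth", map_location="cpu"))
# heatmap = saliency_map(model, Image.open("dog.jpg").convert("RGB"))

In practice, this raw gradient signal tends to be noisy, which is part of what motivates the refinements covered in the course, such as smooth gradients and integrated gradients.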

Intended audience

If you work in industry or academia and want to advance your knowledge of machine learning and learn how to explain image model predictions to stakeholders, this course is for you.

After taking this course, you will:

  • Understand the need for and the benefits of XAI.

  • Understand various classes of explanation methods or explainers used to interpret the decisions of a neural network.

  • Be able to design and implement popular explanation algorithms, such as saliency maps, class activation maps, and counterfactual explanations.

  • Be able to evaluate and quantify the quality of neural network explanations using several interpretability metrics.

  • Be able to tweak or combine existing explanation methods to generate more robust explanations.

Prerequisites

Since the course is built on PyTorch and requires a basic understanding of deep learning and neural networks, we expect learners to have the following:

  • Basic familiarity with the Python language and the PyTorch framework.

  • Basic knowledge of neural networks, convolutional neural networks (CNNs), image processing, loss functions, and the training and evaluation of models.

  • Understanding of gradients, partial derivatives, and the backpropagation algorithm.
