PROJECT

Explainable AI: The Power of Interpreting ML Models

In this project, we will learn how to explain machine learning models globally and locally, using different methods to determine the most important predictors of an outcome. These methods can explain why a model made its predictions and guide us in changing undesirable outcomes.

You will learn to:

Explain machine learning models using global methods

Explain machine learning models using a local explanation method, SHAP (SHapley Additive exPlanations)

Explain a logistic regression model using coefficients

Explain a tree-based model using feature importances

Explain a neural network using permutation importances

Skills

Machine Learning

Data Science

Explainable AI

Prerequisites

Hands-on experience with Python and Jupyter Notebook

Basic understanding of how to fit scikit-learn models to data

Hands-on experience with pandas

Technologies

SHAP

Python

Pandas

Scikit-learn

Project Description

Explainable AI, or XAI, aims to interpret machine learning models to uncover the most influential predictors of an outcome, ultimately enhancing transparency in predictive analytics. The primary goal of this project is to employ explainable machine learning techniques to explain the decision-making process of three distinct models: Logistic Regression, Random Forest, and Neural Networks. These three models were chosen to illustrate three distinct ways of explaining a model: intrinsic explanations (coefficients), feature importances, and permutation importances. Working with the UCI Census Income dataset, we aim to predict whether an individual earns more than $50K/year.
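
A minimal sketch of the three global methods appears below. It uses a small synthetic stand-in for the census data, so the column names, sample size, and label rule are invented purely for illustration; in the project itself, the same models are fit to the prepared UCI data.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the census data: more education and more
# hours worked loosely drive the positive (">50K") label.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "education_num": rng.integers(1, 17, n),
    "hours_per_week": rng.integers(10, 80, n),
})
y = (X["education_num"] * X["hours_per_week"]
     + rng.normal(0, 50, n) > 400).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Intrinsic explanation: logistic regression coefficients.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(pd.Series(logreg.coef_[0], index=X.columns))

# 2. Feature importances from a tree-based model (mean impurity decrease).
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(pd.Series(forest.feature_importances_, index=X.columns))

# 3. Model-agnostic permutation importances for a neural network:
#    how much held-out accuracy drops when each column is shuffled.
mlp = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)
result = permutation_importance(mlp, X_test, y_test, n_repeats=10,
                                random_state=0)
print(pd.Series(result.importances_mean, index=X.columns))

Note that logistic regression coefficients are only directly comparable across features once the inputs are on a similar scale, which is one reason the dataset-preparation step in this project matters.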

This project also shows the difference between global and local explanation methods. Whereas global explanations show the most important predictors on average across the entire dataset, local methods aim to explain why a model made a particular prediction for an individual. Local methods are valuable in cases such as a client being denied a loan: the company can explain why the client was rejected, and the client then knows specifically what to change to improve their chances of obtaining a loan. For the local explanations, we focus on applying SHAP (SHapley Additive exPlanations).
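
As a rough sketch of what a local explanation looks like in code, the snippet below reuses the forest model and X_test split from the previous example. The return shape of TreeExplainer.shap_values for a binary classifier differs across SHAP versions (older releases return a list with one array per class, newer ones a single 3-D array), so the sketch handles both cases.

import shap

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(forest)
shap_values = explainer.shap_values(X_test)

# Older SHAP versions return a list with one array per class; newer
# ones return a single (samples, features, classes) array.
values = (shap_values[1] if isinstance(shap_values, list)
          else shap_values[..., 1])

# How each feature pushed one individual's predicted chance of earning
# >50K above or below the dataset-average prediction.
print(pd.Series(values[0], index=X.columns))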

Project Tasks

1. Introduction

Task 0: Getting Started

Task 1: Import Libraries

Task 2: Prepare the Dataset

2. Global Explanations

Task 3: Explain a Logistic Regression Model using Coefficients

Task 4: Explain a Random Forest Model using Feature Importances

Task 5: Explain a Neural Network using Permutation Importances

3. Local Explanations

Task 6: Local Explanations using SHAP

Relevant Courses

Use the following content to review prerequisites or explore specific concepts in detail.