Model Explainability
Understand model interpretability and explainability using LIME and SHAP.
Let's start with a vital question: why should we trust our model?
Overview
Data science and machine learning have rapidly found their way into the biggest business and political stories. Yet the machine learning and deep learning algorithms built into automation and AI systems lack transparency, and machine learning models are often considered black boxes.
Ironically, this opacity has become more visible and challenging as data scientists struggle to explain and interpret their machine learning models (specifically deep learning neural networks). It isn't easy (and may be impossible, even for the engineers who built them) to describe their inner workings, such as how the algorithm relates different variables to each other when it comes up with a specific prediction.
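To make the black-box problem concrete, here is a toy sketch of the core idea behind local explanation methods such as LIME: treat the model as a function we can only query, sample perturbations around one input, and fit a simple weighted linear surrogate to see how the prediction responds locally. This is a simplified one-dimensional illustration using only NumPy, not the actual LIME library; the `black_box` function is a hypothetical stand-in for a trained model.

```python
import numpy as np

# A "black-box" model: we can query predictions but not inspect internals.
# (Hypothetical function standing in for a trained model.)
def black_box(x):
    return np.sin(x) + 0.1 * x**2

def local_linear_slope(model, x0, num_samples=500, width=0.5, seed=0):
    """Approximate the model around x0 with a proximity-weighted linear
    fit -- the core idea behind LIME, sketched in one dimension."""
    rng = np.random.default_rng(seed)
    xs = x0 + rng.normal(scale=width, size=num_samples)  # perturb the input
    ys = model(xs)                                       # query the black box
    # Proximity weights: samples near x0 matter more in the fit.
    w = np.exp(-((xs - x0) ** 2) / (2 * width**2))
    # Weighted least squares for y ~ a*x + b; the slope a is the
    # local "feature effect" the surrogate reports.
    X = np.column_stack([xs, np.ones_like(xs)])
    a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * ys))
    return a

x0 = 1.0
slope = local_linear_slope(black_box, x0)
true_grad = np.cos(x0) + 0.2 * x0  # analytic derivative, for comparison only
print(f"local surrogate slope: {slope:.3f}, true derivative: {true_grad:.3f}")
```

Even though the surrogate never sees the model's internals, its slope closely tracks the model's true local derivative, which is exactly the kind of human-readable summary an explainability tool aims to produce.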
In machine learning and AI, explainability and interpretability are often used interchangeably. Well, they might be closely ...