Ground-Truth Faithfulness—Feature Agreement

Learn to evaluate the ground-truth faithfulness of an explanation using the Top-k feature agreement metric.

There are several algorithms for generating explanations of model predictions; let’s look at how to evaluate these algorithms. While explanations are inherently subjective, benchmarking these algorithms along certain dimensions can offer insight into their behavior. We’ll start with our first class of metrics, known as ground-truth faithfulness.

Ground-truth faithfulness

Ground-truth faithfulness is a class of metrics that evaluates how closely an explanation generated by an algorithm matches a ground-truth explanation. The ground-truth explanation can be provided by a human annotator, or it can be assumed to come from a state-of-the-art XAI algorithm.

In this lesson and the next, we’ll evaluate the ground-truth faithfulness of saliency map algorithms (vanilla gradients, smooth gradients, etc.) using two metrics:

  • Top-k feature agreement

  • Rank correlation

Since we don’t have ground-truth explanations available, we’ll use the saliency maps produced by integrated gradients as our ground truth.
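To make this setup concrete, here’s a minimal sketch of how the two explanations might be produced, assuming a PyTorch model and the Captum library; the toy model, input shape, and target class below are illustrative, not from the lesson:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, Saliency

# Hypothetical toy classifier; any differentiable PyTorch model would do
model = nn.Sequential(nn.Linear(5, 8), nn.ReLU(), nn.Linear(8, 2)).eval()

x = torch.randn(1, 5)  # a single 5-feature input
target = 1             # the class whose prediction we explain

# Post-hoc explanation to be evaluated: vanilla gradients
s = Saliency(model).attribute(x, target=target)

# Assumed ground-truth explanation: integrated gradients
s_g = IntegratedGradients(model).attribute(x, target=target)
```

Both calls return one attribution score per input feature, which is exactly what the metrics below compare.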

Top-k feature agreement

The Top-k feature agreement computes the fraction of features that are common between the Top-k features of a given post-hoc explanation and those of the corresponding ground-truth explanation. In other words, given a post-hoc explanation $S$ and the ground-truth explanation $S_g$, the Top-k feature agreement $FA_k(S, S_g)$ can be expressed as follows:
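If $\text{top-}k(\cdot)$ denotes the set of indices of the $k$ most important features of an explanation, then:

$$FA_k(S, S_g) = \frac{\left|\, \text{top-}k(S) \cap \text{top-}k(S_g) \,\right|}{k}$$

The metric ranges from 0 (the two explanations share none of their Top-k features) to 1 (perfect agreement on the Top-k features). Here is a minimal NumPy sketch, assuming feature importance is measured by absolute attribution value; the function name and toy arrays are illustrative:

```python
import numpy as np

def top_k_feature_agreement(s, s_g, k):
    """Fraction of shared features among the Top-k features of
    a post-hoc explanation s and a ground-truth explanation s_g,
    both given as 1-D arrays of per-feature attribution scores."""
    # Rank features by absolute attribution, largest first
    top_s = set(np.argsort(-np.abs(s))[:k])
    top_sg = set(np.argsort(-np.abs(s_g))[:k])
    # Agreement is the size of the intersection divided by k
    return len(top_s & top_sg) / k

# Example: two 5-feature explanations sharing 2 of their top-3 features
s   = np.array([0.9, 0.1, 0.7, 0.3, 0.5])  # top-3: features 0, 2, 4
s_g = np.array([0.8, 0.6, 0.1, 0.2, 0.7])  # top-3: features 0, 4, 1
print(top_k_feature_agreement(s, s_g, k=3))  # 0.666...
```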
