Issue wherein an XAI technique doesn't reveal the underlying mechanism/decision boundaries the model actually uses; the explanation describes the decision without being faithful to how it was made.

Analogy

  1. Suppose a certain defendant is denied parole one morning by a judge
  2. The defendant wants to know why they were denied parole
  3. The judge looks at all the decisions made that morning and notices that everyone granted parole had a high school diploma, while the defendant does not
  4. The judge tells the defendant they were denied parole because, unlike those granted parole that morning, they don't have a high school diploma. The explanation points at a correlation in that morning's outcomes, not at the reasoning the judge actually used (see the sketch below the list).
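
A minimal sketch of the same issue in code, under assumed feature names (`prior_offenses`, `has_diploma`) and an assumed "judge" model that decides on prior offenses alone: a post-hoc explanation built from the morning's other decisions attributes the denial to a correlated feature the model never used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the "judge" (black-box model) actually decides parole
# on prior_offenses alone, but prior_offenses happens to be correlated with
# has_diploma among this morning's cases.
n = 200
prior_offenses = rng.integers(0, 5, size=n)
# Correlated proxy: people with fewer prior offenses tend to have a diploma.
has_diploma = (prior_offenses + rng.normal(0, 1, size=n) < 2).astype(int)

def judge(prior_offenses, has_diploma):
    # True mechanism: grant parole (1) only if prior_offenses < 3.
    # The diploma is never consulted.
    return (prior_offenses < 3).astype(int)

granted = judge(prior_offenses, has_diploma)

# Post-hoc "explanation": compare the defendant to the morning's other
# decisions and report whichever visible feature separates grants from denials.
defendant = {"prior_offenses": 4, "has_diploma": 0}
grant_rate_with_diploma = granted[has_diploma == 1].mean()
grant_rate_without = granted[has_diploma == 0].mean()

print(f"grant rate with diploma:    {grant_rate_with_diploma:.2f}")
print(f"grant rate without diploma: {grant_rate_without:.2f}")
# The correlation makes "no diploma" look like the reason for denial, even
# though the judge never used it: the explanation describes the data around
# the decision, not the model's actual decision boundary.
```

The diploma-based explanation is plausible and consistent with the observed outcomes, which is exactly why low-fidelity explanations are hard to spot from the outside.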