Impossibility Theorems for Feature Attribution
Overview
Despite a sea of interpretability methods that can produce plausible explanations, the field has also seen many empirical failure cases of such methods. In light of these results, it remains unclear to practitioners how to use these methods and how to choose between them in a principled way. In this paper, we show that for moderately rich model classes (a condition easily satisfied by neural networks), any feature attribution method that is complete and linear, such as Integrated Gradients and Shapley Additive Explanations (SHAP), can provably fail to improve on random guessing for inferring model behavior. Our results apply to common end-tasks such as characterizing local model behavior, identifying spurious features, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks: once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many more complex feature attribution methods.
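To make the two technical notions in the abstract concrete, the following is a minimal Python sketch, not code from the paper: the function names (check_completeness, direct_feature_effect), the toy linear model, and the zero baseline are illustrative assumptions. The first function states the completeness axiom (attributions sum to the difference between the model's output at the input and at a baseline); the second shows the "repeated model evaluations" baseline, which answers a concrete end-task question with two forward passes and no attribution machinery.

import numpy as np

def check_completeness(attributions, model_fn, x, baseline, atol=1e-6):
    # Completeness: the attributions sum to f(x) - f(baseline), an axiom
    # satisfied by methods such as Integrated Gradients and SHAP.
    return bool(np.isclose(np.sum(attributions),
                           model_fn(x) - model_fn(baseline), atol=atol))

def direct_feature_effect(model_fn, x, baseline, feature_idx):
    # "Repeated model evaluations" for a concrete end-task: how much does the
    # prediction change when this one feature is reset to its baseline value?
    x_perturbed = np.array(x, dtype=float)
    x_perturbed[feature_idx] = baseline[feature_idx]
    return float(model_fn(x) - model_fn(x_perturbed))

if __name__ == "__main__":
    # Toy linear model, chosen only so the example is self-contained and exact.
    w = np.array([2.0, -1.0, 0.5])

    def model_fn(z):
        return float(np.dot(w, z))

    x = np.array([1.0, 2.0, 3.0])
    baseline = np.zeros(3)

    # For a linear model, gradient * (x - baseline) is an exact, complete attribution.
    attributions = w * (x - baseline)
    print(check_completeness(attributions, model_fn, x, baseline))      # True
    print(direct_feature_effect(model_fn, x, baseline, feature_idx=1))  # -2.0

In this toy setting the completeness check passes exactly, and the end-task question "does feature 1 matter locally?" is answered directly by two model evaluations, which is the simple baseline the abstract contrasts with more complex attribution methods.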