Distill n' Explain: explaining graph neural networks using simple surrogates
Access rights
openAccess
URL
Journal Title
Journal ISSN
Volume Title
A4 Article in conference proceedings
This publication is imported from Aalto University research portal.
View publication in the Research portal (opens in new window)
View/Open full text file from the Research portal (opens in new window)
Other link related to publication (opens in new window)
Date
2023
Department
Major/Subject
Mcode
Degree programme
Language
en
Pages
16
6199-6214
Series
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics (AISTATS) 2023, Proceedings of Machine Learning Research, Volume 206
Abstract
Explaining node predictions in graph neural networks (GNNs) often boils down to finding graph substructures that preserve predictions. Finding these structures usually implies back-propagating through the GNN, binding the complexity (e.g., number of layers) of the GNN to the cost of explaining it. This naturally begs the question: Can we break this bond by explaining a simpler surrogate GNN? To answer the question, we propose Distill n' Explain (DnX). First, DnX learns a surrogate GNN via knowledge distillation. Then, DnX extracts node- or edge-level explanations by solving a simple convex program. We also propose FastDnX, a faster version of DnX that leverages the linear decomposition of our surrogate model. Experiments show that DnX and FastDnX often outperform state-of-the-art GNN explainers while being orders of magnitude faster. Additionally, we support our empirical findings with theoretical results linking the quality of the surrogate model (i.e., distillation error) to the faithfulness of explanations.
Description
Funding Information: This work was supported by the Silicon Valley Community Foundation (SVCF) through the Ripple impact fund, the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ), the Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP), the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), and the Getulio Vargas Foundation's school of applied mathematics (FGV EMAp). Publisher Copyright: Copyright © 2023 by the author(s)
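To illustrate the idea sketched in the abstract, the following is a minimal NumPy toy (not the authors' code): a linear surrogate GNN in the spirit of simplified graph convolutions, f(X) = S^K X W. Because the surrogate is linear, a target node's prediction decomposes into additive per-node contributions, which is the kind of structure a method like FastDnX can exploit to score nodes without back-propagating through the original GNN. All names, the tiny graph, and the random weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, c, K = 5, 4, 3, 2          # nodes, feature dim, classes, propagation steps
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(n)            # add self-loops
S = A_hat / A_hat.sum(1, keepdims=True)   # row-normalized propagation matrix

X = rng.normal(size=(n, d))      # node features
W = rng.normal(size=(d, c))      # surrogate weights (stand-in for distilled ones)

P = np.linalg.matrix_power(S, K) # K-step propagation
logits = P @ X @ W               # linear surrogate predictions

# Linear decomposition: the target node's logits are a sum of per-node terms,
# logits[target] = sum_j P[target, j] * (X[j] @ W).
target = 0
contrib = P[target][:, None] * (X @ W)    # shape (n, c): contribution of each node
assert np.allclose(contrib.sum(0), logits[target])

# Rank nodes by their contribution to the predicted class of the target node.
pred = int(logits[target].argmax())
scores = contrib[:, pred]
explanation = np.argsort(-np.abs(scores))  # candidate node-level explanation
```

The key point of the sketch is the `contrib` decomposition: linearity makes each node's influence on the prediction an explicit additive term, so scoring nodes is a single matrix operation rather than a back-propagation pass through a deep GNN.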
Keywords
Other note
Citation
Pereira, T, Nascimento, E, Resck, L E, Mesquita, D & Souza, A 2023, Distill n' Explain: explaining graph neural networks using simple surrogates. In F Ruiz, J Dy & J-W van de Meent (eds), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics (AISTATS) 2023. Proceedings of Machine Learning Research, vol. 206, JMLR, pp. 6199-6214, International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 25/04/2023. <https://proceedings.mlr.press/v206/pereira23a.html>