Interactive Personalization of Classifiers for Explainability using Multi-Objective Bayesian Optimization
Access rights
openAccess
publishedVersion
A4 Article in conference proceedings
This publication is imported from the Aalto University research portal.
View publication in the Research portal (opens in new window)
View/Open full text file from the Research portal (opens in new window)
Other link related to publication (opens in new window)
Date
2023-06-18
Language
en
Pages
12
Series
UMAP 2023 - Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, pp. 34-45
Abstract
Explainability is a crucial aspect of models that ensures their reliable use by both engineers and end-users. However, explainability depends on the user and the model's usage context, making it an important dimension for user personalization. In this article, we explore the personalization of opaque-box image classifiers using an interactive hyperparameter tuning approach, in which the user iteratively rates the quality of explanations for a selected set of query images. Using a multi-objective Bayesian optimization (MOBO) algorithm, we optimize for both the classifier's accuracy and the perceived explainability ratings. In our user study, we found Pareto-optimal parameters for each participant that could significantly improve explainability ratings of queried images while minimally impacting classifier accuracy. Furthermore, this improved explainability with tuned hyperparameters generalized to held-out validation images, with the extent of generalization depending on the variance within the queried images and the similarity between the query and validation images. This MOBO-based method can be used in general to jointly optimize any machine learning objective along with any human-centric objective. The Pareto front produced after interactive hyperparameter tuning can be useful during deployment, allowing desired trade-offs between the objectives (if any) to be chosen by selecting the appropriate parameters. Additionally, user studies like ours can assess whether commonly assumed trade-offs, such as accuracy versus explainability, exist in a given context.
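The deployment-time use of the Pareto front described in the abstract can be illustrated with a minimal sketch: given a set of evaluated hyperparameter candidates, each scored on the two objectives (classifier accuracy and a user's explainability rating), keep only the non-dominated candidates. The candidate values and the `lr` hyperparameter below are invented for illustration and are not taken from the paper.

```python
def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate.

    Each candidate is a (params, accuracy, explainability) tuple;
    both objectives are maximized.
    """
    front = []
    for i, (_, acc_i, exp_i) in enumerate(candidates):
        dominated = any(
            acc_j >= acc_i and exp_j >= exp_i
            and (acc_j > acc_i or exp_j > exp_i)
            for j, (_, acc_j, exp_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(candidates[i])
    return front

# Hypothetical hyperparameter settings rated during a tuning session:
# (hyperparameters, classifier accuracy, mean explainability rating).
evaluated = [
    ({"lr": 0.01}, 0.91, 3.2),
    ({"lr": 0.10}, 0.88, 4.5),
    ({"lr": 0.05}, 0.90, 4.1),
    ({"lr": 0.20}, 0.85, 4.0),  # dominated by lr=0.10 on both objectives
]

for params, acc, rating in pareto_front(evaluated):
    print(params, acc, rating)
```

A deployed system could then pick any point on this front, trading a little accuracy for higher perceived explainability (or vice versa) as the use context demands.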
Citation
Chandramouli, S, Zhu, Y & Oulasvirta, A 2023, Interactive Personalization of Classifiers for Explainability using Multi-Objective Bayesian Optimization. In UMAP 2023 - Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization. ACM, pp. 34-45, Conference on User Modeling, Adaptation and Personalization, Limassol, Cyprus, 26/06/2023. https://doi.org/10.1145/3565472.3592956