Interactive Personalization of Classifiers for Explainability using Multi-Objective Bayesian Optimization

dc.contributor: Aalto-yliopisto (fi)
dc.contributor: Aalto University (en)
dc.contributor.author: Chandramouli, Suyog (en_US)
dc.contributor.author: Zhu, Yifan (en_US)
dc.contributor.author: Oulasvirta, Antti (en_US)
dc.contributor.department: Department of Communications and Networking (en)
dc.contributor.department: Department of Information and Communications Engineering (en)
dc.contributor.groupauthor: User Interfaces (en)
dc.contributor.groupauthor: Helsinki Institute for Information Technology (HIIT) (en)
dc.date.accessioned: 2023-08-01T06:18:51Z
dc.date.available: 2023-08-01T06:18:51Z
dc.date.issued: 2023-06-18 (en_US)
dc.description.abstract: Explainability is a crucial aspect of models that ensures their reliable use by both engineers and end-users. However, explainability depends on the user and the model's usage context, making it an important dimension for user personalization. In this article, we explore the personalization of opaque-box image classifiers using an interactive hyperparameter tuning approach, in which the user iteratively rates the quality of explanations for a selected set of query images. Using a multi-objective Bayesian optimization (MOBO) algorithm, we optimize for both the classifier's accuracy and the perceived explainability ratings. In our user study, we found Pareto-optimal parameters for each participant that could significantly improve the explainability ratings of queried images while minimally impacting classifier accuracy. Furthermore, this improved explainability with tuned hyperparameters generalized to held-out validation images, with the extent of generalization depending on the variance within the queried images and the similarity between the query and validation images. This MOBO-based method can, in general, be used to jointly optimize any machine learning objective along with any human-centric objective. The Pareto front produced by the interactive hyperparameter tuning can be useful during deployment, allowing desired trade-offs between the objectives (if any) to be chosen by selecting the appropriate parameters. Additionally, user studies like ours can assess whether commonly assumed trade-offs, such as accuracy versus explainability, exist in a given context. (en)
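The kind of MOBO loop described in the abstract might look roughly like the sketch below: one Gaussian-process surrogate per objective (classifier accuracy and the user's explainability rating) over a hyperparameter space, with candidates selected by a randomly scalarized upper-confidence-bound acquisition (ParEGO-style) and the Pareto front extracted at the end. The objective functions, hyperparameter bounds, and acquisition choice are placeholders for illustration only, not the authors' implementation.

# A minimal, hypothetical MOBO sketch (not the paper's code): two GP
# surrogates, random-scalarization UCB acquisition, Pareto front at the end.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
BOUNDS = np.array([[1e-4, 1e-1],   # e.g. learning rate (placeholder)
                   [0.0, 0.9]])    # e.g. dropout rate  (placeholder)

def evaluate(theta):
    """Placeholder objectives: classifier accuracy and a user's
    explainability rating for hyperparameters `theta` (both maximized).
    In the paper, the rating would come from the interactive user study."""
    acc = 0.9 - 0.5 * (theta[1] - 0.3) ** 2 \
          - 0.05 * abs(np.log10(theta[0] / 1e-2)) + 0.01 * rng.standard_normal()
    expl = 0.6 + 0.4 * theta[1] + 0.01 * rng.standard_normal()
    return np.array([acc, expl])

def sample(n):
    # Uniform samples inside the hyperparameter bounds.
    return rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(n, len(BOUNDS)))

def pareto_front(Y):
    """Indices of non-dominated points (maximization of all objectives)."""
    keep = []
    for i, y in enumerate(Y):
        dominated = any(np.all(Y[j] >= y) and np.any(Y[j] > y)
                        for j in range(len(Y)) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Initial design of experiments.
X = sample(5)
Y = np.array([evaluate(x) for x in X])

for it in range(20):
    # Fit one GP surrogate per objective.
    gps = [GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, k]) for k in range(2)]
    # Random scalarization weights, then an upper-confidence-bound acquisition.
    w = rng.dirichlet(np.ones(2))
    cands = sample(256)
    ucb = np.zeros(len(cands))
    for k, gp in enumerate(gps):
        mu, sd = gp.predict(cands, return_std=True)
        ucb += w[k] * (mu + 2.0 * sd)
    x_next = cands[np.argmax(ucb)]
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, evaluate(x_next)])

print("Pareto-optimal hyperparameters:")
for i in pareto_front(Y):
    print(X[i], "-> accuracy %.3f, explainability %.3f" % tuple(Y[i]))

At deployment time, one point on the printed Pareto front would be chosen according to the desired accuracy/explainability trade-off, as the abstract describes.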
dc.description.version: Peer reviewed (en)
dc.format.extent: 12
dc.format.mimetype: application/pdf (en_US)
dc.identifier.citation: Chandramouli, S, Zhu, Y & Oulasvirta, A 2023, Interactive Personalization of Classifiers for Explainability using Multi-Objective Bayesian Optimization. in UMAP 2023 - Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization. ACM, pp. 34-45, Conference on User Modeling, Adaptation and Personalization, Limassol, Cyprus, 26/06/2023. https://doi.org/10.1145/3565472.3592956 (en)
dc.identifier.doi: 10.1145/3565472.3592956 (en_US)
dc.identifier.isbn: 978-1-4503-9932-6
dc.identifier.other: PURE UUID: 5fdb85dd-93f7-41da-9084-59fed91ee72c (en_US)
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/5fdb85dd-93f7-41da-9084-59fed91ee72c (en_US)
dc.identifier.other: PURE LINK: http://www.scopus.com/inward/record.url?scp=85163879252&partnerID=8YFLogxK
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/116255010/Chandramouli_Interactive_personalization_ACM.pdf (en_US)
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/122194
dc.identifier.urn: URN:NBN:fi:aalto-202308014555
dc.language.iso: en (en)
dc.relation.ispartof: Conference on User Modeling, Adaptation and Personalization (en)
dc.relation.ispartofseries: UMAP 2023 - Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization (en)
dc.relation.ispartofseries: pp. 34-45 (en)
dc.rights: openAccess (en)
dc.title: Interactive Personalization of Classifiers for Explainability using Multi-Objective Bayesian Optimization (en)
dc.type: A4 Artikkeli konferenssijulkaisussa [A4 Article in conference proceedings] (fi)
dc.type.version: publishedVersion
