Interactive Personalization for Explainability via Human-in-the-loop Multi-Objective Bayesian Optimization

School of Science | Master's thesis
Date
2023-08-21
Major/Subject
Computer Science
Mcode
SCI3042
Degree programme
Master’s Programme in Computer, Communication and Information Sciences
Language
en
Pages
61+7
Abstract
Personalization, which delivers tailored experiences to individuals, has attracted substantial interest from both academia and industry, particularly in machine learning (ML). Personalized ML models adapt better to individual conditions and can therefore offer tailored, precise solutions. However, prevailing personalization methods in ML tend to keep the system dominant and treat humans as passive data providers, which often results in outdated adaptations and a lack of trust in the model. Reintroducing humans into the loop, by contrast, allows real-time feedback, iterative learning, and greater user autonomy, enabling continuous improvement in model performance. This thesis focuses on interactive personalization, proposing an approach to personalizing black-box models by optimizing both model accuracy and human-perceived explainability, which may constitute a trade-off. In a concrete explainable-ML setting, we aim to personalize a deep-learning-based image classifier to improve explainability while preserving accuracy. We achieve this by performing hyperparameter optimization (HPO), integrating a human-in-the-loop (HITL) strategy with multi-objective Bayesian optimization (MOBO) to establish an interaction loop for fine-tuning the classifier. The result is a Pareto front comprising a spectrum of Pareto-optimal solutions that balance accuracy and explainability. We evaluate the efficacy of the approach with a user study. The results demonstrate that our method finds optimal trade-offs between accuracy and personalized explainability and provides personalized models accordingly. Furthermore, the effect of personalization carries over to unseen validation images, demonstrating the models' generalizability.
This approach can be generalized and adapted to various downstream applications that demand personalization along both model-based and human-centred objectives.
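The abstract's optimization loop can be sketched in miniature. The following is a hypothetical illustration only, not the thesis's implementation: the two objective functions are mock stand-ins for validation accuracy and a human explainability rating, and plain random search stands in for the MOBO acquisition step. What it does show correctly is the multi-objective structure, i.e. evaluating hyperparameter candidates on two objectives and extracting the non-dominated (Pareto-optimal) set.

```python
import random

# Hypothetical stand-in for the classifier's validation accuracy:
# peaks at a moderate learning rate, penalized by regularization strength.
def model_accuracy(lr, reg):
    return 1.0 - abs(lr - 0.1) - 0.5 * reg

# Hypothetical stand-in for the HITL objective: a human rating of
# explainability, here mocked as growing with regularization (sparser model).
def human_explainability(lr, reg):
    return reg

def pareto_front(points):
    """Return the non-dominated subset of (obj1, obj2) pairs, maximizing both."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

random.seed(0)
evaluations = []
for _ in range(30):
    # Random search stands in for the BO acquisition step that would
    # propose the next hyperparameter configuration to evaluate.
    lr = random.uniform(0.0, 0.3)
    reg = random.uniform(0.0, 1.0)
    evaluations.append((model_accuracy(lr, reg), human_explainability(lr, reg)))

front = pareto_front(evaluations)
```

In an actual MOBO setup, each objective would be modeled with a Gaussian process and candidates chosen by a multi-objective acquisition function (e.g. expected hypervolume improvement), with the human rating collected interactively per iteration; the Pareto-front extraction step, however, is exactly as above.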
Supervisor
Oulasvirta, Antti
Thesis advisor
Chandramouli, Suyog
Keywords
Human-in-the-loop, Personalization, Bayesian optimization, Explainable Machine Learning