Adapting User Interfaces with Model-based Reinforcement Learning
Access rights
openAccess
Conference article in proceedings
This publication is imported from Aalto University research portal.
Date
2021-05-06
Language
en
Pages
13
Series
CHI '21: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Abstract
Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user. A carelessly picked adaptation may impose high costs on the user – for example, due to surprise or relearning effort – or "trap" the process in a suboptimal design prematurely. However, effects on users are hard to predict, as they depend on factors that are latent and evolve over the course of interaction. We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy: it finds beneficial changes when they exist and avoids changes when there are none. Our model-based reinforcement learning method plans sequences of adaptations and consults predictive HCI models to estimate their effects. We present empirical and simulation results from the case of adaptive menus, showing that the method outperforms both a non-adaptive and a frequency-based policy.
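For illustration only, the sketch below approximates the kind of planning the abstract describes: candidate menu adaptations are scored by simulated look-ahead, with toy stand-ins for the paper's predictive HCI models (a linear-scan selection-time estimate and a relearning penalty). It uses flat Monte Carlo rollouts rather than the full Monte Carlo tree search used in the paper, and every name in it (MENU, CLICK_FREQ, predicted_selection_time, plan_adaptation, and so on) is hypothetical, not taken from the authors' code.

# Minimal illustrative sketch (not the authors' implementation): Monte Carlo
# look-ahead planning over menu adaptations, where stand-in predictive models
# score each candidate menu. All names and numbers are hypothetical.

import math
import random

MENU = ["Open", "Save", "Export", "Print", "Close"]
CLICK_FREQ = {"Open": 0.40, "Save": 0.30, "Export": 0.15, "Print": 0.10, "Close": 0.05}


def predicted_selection_time(menu):
    """Toy stand-in for a predictive HCI model: expected linear-scan time,
    weighted by how often each item is clicked."""
    return sum(CLICK_FREQ[item] * (i + 1) for i, item in enumerate(menu))


def adaptation_cost(old_menu, new_menu):
    """Toy relearning/surprise penalty: proportional to how many items moved."""
    return 0.2 * sum(a != b for a, b in zip(old_menu, new_menu))


def candidate_adaptations(menu):
    """Candidate actions: keep the menu as-is, or swap one adjacent pair."""
    yield list(menu)  # 'do nothing' keeps the conservative option available
    for i in range(len(menu) - 1):
        swapped = list(menu)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        yield swapped


def plan_adaptation(menu, horizon=3, rollouts=200):
    """Pick the adaptation with the best average simulated look-ahead value."""
    best_action, best_value = list(menu), -math.inf
    for action in candidate_adaptations(menu):
        value = 0.0
        for _ in range(rollouts):
            state = list(action)
            total = -adaptation_cost(menu, action) - predicted_selection_time(state)
            for _ in range(horizon - 1):  # random rollout of further adaptations
                state = random.choice(list(candidate_adaptations(state)))
                total -= predicted_selection_time(state)
            value += total / rollouts
        if value > best_value:
            best_action, best_value = action, value
    return best_action


if __name__ == "__main__":
    print(plan_adaptation(MENU))

In this toy setup the 'do nothing' action competes on equal footing with every swap, which is what makes the resulting policy conservative: an adaptation is only proposed when its predicted benefit outweighs the relearning penalty over the planning horizon.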
Keywords
Adaptive User Interfaces, Reinforcement Learning, Predictive Models, Monte Carlo Tree Search
Citation
Todi, K., Leiva, L., Bailly, G. & Oulasvirta, A. 2021, 'Adapting User Interfaces with Model-based Reinforcement Learning', in CHI '21: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Making Waves, Combining Strengths, 573, ACM, ACM SIGCHI Annual Conference on Human Factors in Computing Systems, Yokohama, Japan, 08/05/2021. https://doi.org/10.1145/3411764.3445497