Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning
Access rights
openAccess
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
Date
2022-02-28
Language
en
Pages
44
1-44
Series
Algorithms, Volume 15, Issue 3
Abstract
Reinforcement learning with sparse rewards is still an open challenge. Classic methods rely on getting feedback via extrinsic rewards to train the agent, and in situations where this occurs very rarely, the agent learns slowly or cannot learn at all. Similarly, if the agent also receives rewards that create suboptimal modes of the objective function, it will likely stop exploring prematurely. More recent methods add auxiliary intrinsic rewards to encourage exploration. However, auxiliary rewards lead to a non-stationary target for the Q-function. In this paper, we present a novel approach that (1) plans exploration actions far into the future by using a long-term visitation count, and (2) decouples exploration and exploitation by learning a separate function assessing the exploration value of the actions. Contrary to existing methods that use models of reward and dynamics, our approach is off-policy and model-free. We further propose new tabular environments for benchmarking exploration in reinforcement learning. Empirical results on classic and novel benchmarks show that the proposed approach outperforms existing methods in environments with sparse rewards, especially in the presence of rewards that create suboptimal modes of the objective function. Results also suggest that our approach scales gracefully with the size of the environment.
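To make the idea in the abstract concrete, the following is a minimal tabular sketch in Python: a standard Q-function is learned from the extrinsic reward, while a separate visitation-value function W is learned off-policy and model-free from count-based bonuses, so that exploration value is propagated far into the future and the behavior policy can combine the two in an upper-confidence-bound style. The 1/sqrt(N) bonus, the class and parameter names, and the exact update rules below are illustrative assumptions for this sketch, not the authors' algorithm as specified in the paper.

# Sketch only: illustrates the two decoupled value functions described in the
# abstract (exploitation Q vs. long-term visitation value W); bonus form,
# hyperparameters, and combination rule are assumptions, not the paper's method.
import numpy as np

class VisitationValueAgent:
    def __init__(self, n_states, n_actions, gamma_q=0.99, gamma_w=0.99,
                 lr=0.1, beta=1.0):
        self.Q = np.zeros((n_states, n_actions))   # exploitation value (extrinsic reward)
        self.W = np.zeros((n_states, n_actions))   # exploration (visitation) value
        self.N = np.zeros((n_states, n_actions))   # state-action visitation counts
        self.gamma_q, self.gamma_w = gamma_q, gamma_w
        self.lr, self.beta = lr, beta

    def act(self, state):
        # UCB-style behavior policy: exploit Q, but prefer actions whose
        # long-term visitation value W points toward rarely visited regions.
        return int(np.argmax(self.Q[state] + self.beta * self.W[state]))

    def update(self, s, a, r, s_next, done):
        self.N[s, a] += 1
        # Standard off-policy Q-learning update on the extrinsic reward.
        target_q = r + (0.0 if done else self.gamma_q * self.Q[s_next].max())
        self.Q[s, a] += self.lr * (target_q - self.Q[s, a])
        # Separate update for the visitation value: the "reward" is an
        # intrinsic count-based bonus, bootstrapped so that W estimates
        # long-term rather than one-step novelty.
        bonus = 1.0 / np.sqrt(self.N[s, a])
        target_w = bonus + (0.0 if done else self.gamma_w * self.W[s_next].max())
        self.W[s, a] += self.lr * (target_w - self.W[s, a])

In this sketch, increasing beta shifts the behavior policy toward rarely visited state-action pairs, while keeping the Q-function's target stationary because the intrinsic bonus only ever enters the separate W update.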
Description
Publisher Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
Keywords
Exploration, Off-policy, Reinforcement learning, Sparse reward, Upper confidence bound
Citation
Parisi, S, Tateo, D, Hensel, M, D’Eramo, C, Peters, J & Pajarinen, J 2022, 'Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning', Algorithms, vol. 15, no. 3, 81, pp. 1-44. https://doi.org/10.3390/a15030081