The Actor-Dueling-Critic Method for Reinforcement Learning
Access rights
openAccess
publishedVersion
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
View publication in the Research portal
View/Open full text file from the Research portal
Other link related to publication
Date
2019-04-01
Language
en
Pages
20
Series
Sensors (Basel, Switzerland), Volume 19, issue 7, pp. 1-20
Abstract
Model-free reinforcement learning is a powerful and efficient machine-learning paradigm that has been widely used in the robotic control domain. In the reinforcement learning setting, the value-function method learns policies by maximizing the state-action value (Q value), but it suffers from inaccurate Q estimation, which leads to poor performance in stochastic environments. To mitigate this issue, we present an approach based on the actor-critic framework; in the critic branch we modify the way the Q value is estimated by introducing an advantage function, as in the dueling network, which estimates the action-advantage value. Because the action-advantage value is independent of state and environment noise, we use it as a fine-tuning factor for the estimated Q value. We refer to this approach as the actor-dueling-critic (ADC) network, since the framework is inspired by the dueling network. Furthermore, we redesign the dueling part of the critic branch to adapt it to continuous action spaces. The method was tested on Gym classic control environments and an obstacle avoidance environment, and we designed a noisy environment to test training stability. The results indicate that the ADC approach is more stable and converges faster than the DDPG method in noisy environments.
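The critic structure described in the abstract can be sketched in a few lines. Below is a minimal PyTorch sketch, assuming a shared state encoder with separate value and advantage streams, where the action is fed only into the advantage stream; the layer sizes and this particular way of adapting the dueling idea to continuous actions are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a dueling-style critic for continuous actions:
# Q(s, a) is decomposed into a state-value term V(s) plus an
# action-advantage term A(s, a), which acts as a fine-tuning factor.
import torch
import torch.nn as nn


class DuelingCritic(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        # Shared encoder over the state.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
        )
        # State-value stream V(s).
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1),
        )
        # Advantage stream A(s, a): the encoded state is concatenated with
        # the action, so the dueling idea carries over to continuous actions.
        self.advantage = nn.Sequential(
            nn.Linear(hidden + action_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        h = self.encoder(state)
        v = self.value(h)                                   # V(s)
        a = self.advantage(torch.cat([h, action], dim=-1))  # A(s, a)
        return v + a                                        # Q(s, a) = V(s) + A(s, a)


# Usage: score a batch of 32 state-action pairs (8-d states, 2-d actions).
critic = DuelingCritic(state_dim=8, action_dim=2)
q_values = critic(torch.randn(32, 8), torch.randn(32, 2))  # shape (32, 1)
```

In a DDPG-style training loop, this critic would simply replace the standard Q network; the actor is updated by ascending the critic's output with respect to the action, as in ordinary actor-critic methods.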
Keywords
advantage, continuous control, DDPG, dueling network, reinforcement learning
Citation
Wu, M, Gao, Y, Jung, A, Zhang, Q & Du, S 2019, 'The Actor-Dueling-Critic Method for Reinforcement Learning', Sensors (Basel, Switzerland), vol. 19, no. 7, 1547, pp. 1-20. https://doi.org/10.3390/s19071547