Title: | Sample-Efficient Methods for Real-World Deep Reinforcement Learning |
Author(s): | Boney, Rinu |
Date: | 2022 |
Language: | en |
Pages: | 92 + app. 76 |
Department: | Department of Computer Science |
ISBN: | 978-952-64-0809-5 (electronic) 978-952-64-0808-8 (printed) |
Series: | Aalto University publication series DOCTORAL THESES, 71/2022 |
ISSN: | 1799-4942 (electronic) 1799-4934 (printed) 1799-4934 (ISSN-L) |
Supervising professor(s): | Kannala, Juho, Prof., Aalto University, Department of Computer Science, Finland; Ilin, Alexander, Prof., Aalto University, Department of Computer Science, Finland |
Subject: | Computer science |
Keywords: | reinforcement learning, deep learning, robot learning, sample-efficient learning |
Archive | yes |
Abstract: Reinforcement learning (RL) is a general framework for learning and evaluating intelligent behaviors in any domain. Deep reinforcement learning combines RL with deep learning to learn expressive nonlinear functions that can interpret rich sensory signals and produce complex behaviors. However, this comes at the cost of increased sample complexity and instability, limiting the practical impact of deep RL algorithms on real-world problems. This thesis presents advances towards improving the sample efficiency and benchmarking of deep RL algorithms on real-world problems.
Parts:
[Publication 1]: Rinu Boney, Alexander Ilin, Juho Kannala, and Jarno Seppänen. Learning to Play Imperfect-Information Games by Imitating an Oracle Planner. Accepted for publication in IEEE Transactions on Games, March 2021. DOI: 10.1109/TG.2021.3067723
[Publication 2]: Rinu Boney*, Norman Di Palo*, Mathias Berglund, Alexander Ilin, Juho Kannala, Antti Rasmus, and Harri Valpola. Regularizing Trajectory Optimization with Denoising Autoencoders. In Advances in Neural Information Processing Systems 32, pp. 2859-2869, December 2019.
[Publication 3]: Rinu Boney, Juho Kannala, and Alexander Ilin. Regularizing Model-Based Planning with Energy-Based Models. In Conference on Robot Learning, pp. 182-191, October 2019. Full text in ACRIS/Aaltodoc: http://urn.fi/URN:NBN:fi:aalto-202102021858. http://proceedings.mlr.press/v100/boney20a.html
[Publication 4]: Rinu Boney, Alexander Ilin, and Juho Kannala. Learning of feature points without additional supervision improves reinforcement learning from images, 2021. arXiv:2106.07995
[Publication 5]: Ari Viitala*, Rinu Boney*, Yi Zhao, Alexander Ilin, and Juho Kannala. Learning to Drive (L2D) as a Low-Cost Benchmark for Real-World Reinforcement Learning. In 20th International Conference on Advanced Robotics, December 2021. DOI: 10.1109/ICAR53236.2021.9659342
[Publication 6]: Rinu Boney*, Jussi Sainio*, Mikko Kaivola, Arno Solin, and Juho Kannala. RealAnt: An Open-Source Low-Cost Quadruped for Education and Research in Real-World Reinforcement Learning, 2021. arXiv:2011.03085
Unless otherwise stated, all rights belong to the author. You may download, display, and print this publication for your own personal use. Commercial use is prohibited.