Simulation-Aided Policy Tuning for Black-Box Robot Learning


Access rights

openAccess
publishedVersion


A1 Original article in a scientific journal


Language

en

Pages

16

Series

IEEE Transactions on Robotics, Volume 41

Abstract

How can robots learn and adapt to new tasks and situations with little data? Systematic exploration and simulation are crucial tools for efficient robot learning. We present a novel black-box policy search algorithm focused on data-efficient policy improvements. The algorithm learns directly on the robot and treats simulation as an additional information source to speed up the learning process. At the core of the algorithm, a probabilistic model learns the dependence of the robot learning objective on the policy parameters, not only from experiments on the robot but also from data generated by a simulator. This substantially reduces interaction time with the robot. Using the model, we can guarantee improvements with high probability for each policy update, thereby facilitating fast, goal-oriented learning. We evaluate our algorithm on simulated fine-tuning tasks and demonstrate the data efficiency of the proposed dual information-source optimization algorithm. In a real robot learning experiment, we show fast and successful task learning on a robot manipulator with the aid of an imperfect simulator.
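The core idea in the abstract can be illustrated with a minimal sketch: a Gaussian-process surrogate over a one-dimensional policy parameter that fuses many cheap simulator evaluations with a few expensive robot evaluations (the simulator data is given a larger noise level, so robot data is trusted more), and a policy update that is accepted only when the model assigns high probability to an improvement. All objectives, kernels, noise levels, and thresholds below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from math import erf, sqrt

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel between two 1-D parameter arrays.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, noise_var, Xq):
    # Standard GP regression posterior at query points Xq, with
    # per-datapoint noise variances (larger for simulator data).
    K = rbf(X, X) + np.diag(noise_var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Kq = rbf(Xq, X)
    mu = Kq @ alpha
    v = np.linalg.solve(L, Kq.T)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance of RBF kernel is 1
    return mu, np.maximum(var, 1e-12)

rng = np.random.default_rng(0)
robot = lambda t: -(t - 0.6) ** 2        # true (unknown) robot objective
sim = lambda t: -(t - 0.5) ** 2 + 0.05   # biased, imperfect simulator

# Many cheap simulator rollouts, few expensive robot rollouts.
X_sim = np.linspace(0.0, 1.0, 15)
y_sim = sim(X_sim)
X_rob = np.array([0.1, 0.9])
y_rob = robot(X_rob) + 0.01 * rng.standard_normal(2)

X = np.concatenate([X_sim, X_rob])
y = np.concatenate([y_sim, y_rob])
noise_var = np.concatenate([np.full(15, 0.05), np.full(2, 1e-4)])

# Candidate policy update: greedy maximizer of the posterior mean.
Xq = np.linspace(0.0, 1.0, 201)
mu, _ = gp_posterior(X, y, noise_var, Xq)
cand = Xq[np.argmax(mu)]

# Accept the update only if P(improvement over current policy) is high.
theta_cur = 0.1
incumbent = robot(theta_cur)  # in practice a measured rollout return
mu_c, var_c = gp_posterior(X, y, noise_var, np.array([cand]))
p_improve = 0.5 * (1.0 + erf((mu_c[0] - incumbent) / sqrt(2.0 * var_c[0])))

if p_improve > 0.9:
    theta_cur = cand
print(f"candidate={cand:.2f}, P(improve)={p_improve:.2f}")
```

The simulator data shapes the surrogate globally at low cost, while the few robot rollouts anchor it to the real objective; gating each update on the model's probability of improvement is what makes the learning goal-oriented rather than purely exploratory.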


Citation

He, S, von Rohr, A, Baumann, D, Xiang, J & Trimpe, S 2025, 'Simulation-Aided Policy Tuning for Black-Box Robot Learning', IEEE Transactions on Robotics, vol. 41. https://doi.org/10.1109/TRO.2025.3539192