Data-efficient Reinforcement Learning for Variable Impedance Control

Type
A1 Original article in a scientific journal
Date
2024
Language
en
Pages
15631-15641 (11 pages)
Series
IEEE Access, Volume 12
Abstract
One of the most crucial steps toward achieving human-like manipulation skills in robots is to incorporate compliance into the robot controller. Compliance not only makes the robot’s behaviour safe but also makes it more energy efficient. In this direction, the variable impedance control (VIC) approach provides a framework for a robot to adapt its compliance during execution by employing an adaptive impedance law. Nevertheless, autonomously adapting the compliance profile as demanded by the task remains a challenging problem in practice. In this work, we introduce a reinforcement learning (RL)-based approach called DEVILC (Data-Efficient Variable Impedance Learning Controller) to learn the variable impedance controller through real-world interaction of the robot. More concretely, we use a model-based RL approach in which, after every interaction, the robot iteratively learns a probabilistic model of its dynamics using Gaussian process regression. The model is then used to optimize a neural-network policy that modulates the robot’s impedance such that the long-term reward for the task is maximized. Thanks to the model-based RL framework, DEVILC allows a robot to learn the VIC policy with only a few interactions, making it practical for real-world applications. In simulations and experiments, we evaluate DEVILC on a Franka Emika Panda robotic manipulator for different manipulation tasks in Cartesian space. The results show that DEVILC is a promising direction toward autonomously learning compliant manipulation skills directly in the real world through interactions. A video of the experiments is available at: https://youtu.be/_uyr0Vye5no
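The learning loop the abstract describes (interact, fit a probabilistic dynamics model with Gaussian process regression, then optimize an impedance-modulating policy on that model) can be illustrated with a deliberately tiny sketch. Everything below is an assumption for illustration, not the paper's implementation: the "robot" is a 1-D point driven toward a target by a stiffness command, the GP is a minimal from-scratch regressor on one-step state changes, and the neural-network policy is replaced by picking the best constant stiffness via model rollouts.

```python
import math

DT, TARGET = 0.1, 0.0  # control period and goal position (illustrative values)

def true_step(x, k):
    # Stand-in for the real robot: 1-D impedance-like dynamics, unknown to the learner.
    return x + DT * k * (TARGET - x)

def reward(x, k):
    # Task reward: reach the target while keeping stiffness (an energy proxy) low.
    return -(x - TARGET) ** 2 - 1e-3 * k ** 2

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel over (state, stiffness) pairs.
    return math.exp(-sum((u - v) ** 2 for u, v in zip(a, b)) / (2 * ls ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (avoids external dependencies).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

class GP:
    # Minimal zero-mean GP regression on one-step state changes.
    def __init__(self, noise=1e-3):
        self.noise = noise
    def fit(self, X, y):
        self.X = X
        K = [[rbf(a, b) + (self.noise if i == j else 0.0)
              for j, b in enumerate(X)] for i, a in enumerate(X)]
        self.alpha = solve(K, y)
    def predict(self, z):
        return sum(rbf(z, xi) * a for xi, a in zip(self.X, self.alpha))

def model_return(gp, x0, k, steps=5):
    # Roll the learned model forward and accumulate the task reward.
    x, total = x0, 0.0
    for _ in range(steps):
        x = x + gp.predict((x, k))
        total += reward(x, k)
    return total

# A few exploratory episodes on the "real" system seed the dynamics model.
X, y = [], []
for k in (0.5, 2.0, 5.0):
    x = 1.0
    for _ in range(5):
        nxt = true_step(x, k)
        X.append((x, k)); y.append(nxt - x)
        x = nxt

gp = GP()
gp.fit(X, y)

# Policy "optimization": choose the constant stiffness with the best model rollout.
best_k = max((0.5, 1.0, 2.0, 3.0, 4.0, 5.0), key=lambda k: model_return(gp, 1.0, k))
print("learned stiffness:", best_k)
```

The data efficiency claimed in the abstract comes from exactly this structure: the expensive real-world interactions only feed the model, while the (cheap) policy search happens entirely in model rollouts.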
Description
Publisher Copyright: Authors
Keywords
Adaptation models, Aerospace electronics, Covariance matrix adaptation, Gaussian processes, Impedance, Jacobian matrices, Model-based reinforcement learning, Reinforcement learning, Robots, Task analysis, Variable impedance learning control
Other note
Citation
Anand, A S, Kaushik, R, Gravdahl, J T & Abu-Dakka, F J 2024, 'Data-efficient Reinforcement Learning for Variable Impedance Control', IEEE Access, vol. 12, pp. 15631-15641. https://doi.org/10.1109/ACCESS.2024.3355311