MSRL: Distributed Reinforcement Learning with Dataflow Fragments
Access rights
openAccess
A4 Article in conference proceedings
This publication is imported from Aalto University research portal.
Date
2023
Language
en
Pages
977-993
Series
Proceedings of the 2023 USENIX Annual Technical Conference
Abstract
A wide range of reinforcement learning (RL) algorithms have been proposed, in which agents learn from interactions with a simulated environment. Executing such RL training loops is computationally expensive, but current RL systems fail to support the training loops of different RL algorithms efficiently on GPU clusters: they either hard-code algorithm-specific strategies for parallelization and distribution, or they accelerate only parts of the computation on GPUs (e.g., DNN policy updates). We observe that current systems lack an abstraction that decouples the definition of an RL algorithm from its strategy for distributed execution. We describe MSRL, a distributed RL training system that uses the new abstraction of a fragmented dataflow graph (FDG) to execute RL algorithms in a flexible way. An FDG is a heterogeneous dataflow representation of an RL algorithm, which maps functions from the RL training loop to independent parallel dataflow fragments. Fragments account for the diverse nature of RL algorithms: each fragment can execute on a different device through a low-level dataflow implementation, e.g., an operator graph of a DNN engine, a CUDA GPU kernel, or a multi-threaded CPU process. At deployment time, a distribution policy governs how fragments are mapped to devices, without requiring changes to the RL algorithm implementation. Our experiments show that MSRL exposes trade-offs between different execution strategies, while surpassing the performance of existing RL systems with fixed execution strategies.
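To make the abstract's central idea concrete, the following is a minimal sketch of how a training loop split into fragments could be mapped to devices by a separate distribution policy at deployment time. All names here (`Fragment`, `single_learner_policy`, the device strings) are illustrative assumptions for exposition, not MSRL's actual API.

```python
# Hypothetical sketch of the fragmented-dataflow idea described in the
# abstract: the RL training loop is decomposed into fragments, and a
# "distribution policy" decides fragment-to-device placement at
# deployment time, without touching the algorithm code itself.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Fragment:
    """One independently schedulable piece of the RL training loop."""
    name: str     # e.g. "actor", "learner", "replay_buffer"
    backend: str  # e.g. "dnn_graph", "cuda_kernel", "cpu_process"

def single_learner_policy(fragments: List[Fragment],
                          gpus: List[str]) -> Dict[str, str]:
    """One example distribution policy: place the learner on the first
    GPU and spread all other fragments round-robin over the rest."""
    placement: Dict[str, str] = {}
    others = gpus[1:] or ["cpu:0"]
    i = 0
    for f in fragments:
        if f.name == "learner":
            placement[f.name] = gpus[0]
        else:
            placement[f.name] = others[i % len(others)]
            i += 1
    return placement

# The same fragment definitions can be redeployed under a different
# policy simply by swapping the placement function.
loop = [Fragment("actor", "cpu_process"),
        Fragment("learner", "dnn_graph"),
        Fragment("replay_buffer", "cpu_process")]
placement = single_learner_policy(loop, ["gpu:0", "gpu:1"])
print(placement)
# → {'actor': 'gpu:1', 'learner': 'gpu:0', 'replay_buffer': 'gpu:1'}
```

The point of the decoupling is visible in the last lines: changing the execution strategy means supplying a different policy function, while the fragment definitions stay unchanged.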
Citation
Zhu, H., Zhao, B., Chen, G., Chen, W., Chen, Y., Shi, L., Yang, Y., Pietzuch, P. & Chen, L. 2023, 'MSRL: Distributed Reinforcement Learning with Dataflow Fragments', in Proceedings of the 2023 USENIX Annual Technical Conference, USENIX – The Advanced Computing Systems Association, pp. 977–993, USENIX Annual Technical Conference, Boston, Massachusetts, United States, 10/07/2023. <https://www.usenix.org/conference/atc23/presentation/zhu-huanzhou>