Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search
School of Science | Master's thesis
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Date
2024-09-30
Major/Subject
Machine Learning, Data Science and Artificial Intelligence
Mcode
SCI3044
Degree programme
Master's Programme in Computer, Communication and Information Sciences
Language
en
Pages
144
Abstract
In this thesis we consider Code World Models, world models generated by a Large Language Model (LLM) in the form of Python code for offline model-based Reinforcement Learning (RL) guided by language instructions. Calling code instead of LLMs for planning has the advantages of being precise, reliable, interpretable, and extremely efficient. However, writing appropriate Code World Models requires the ability to understand complex instructions, to generate exact code with non-trivial logic and to self-debug a long program with feedback from unit tests and environment trajectories. To address these challenges, we propose Generate, Improve and Fix with Monte Carlo Tree Search (GIF-MCTS), a new code generation strategy for LLMs. To test our approach, we introduce the Code World Models Benchmark (CWMB), a suite of program synthesis and planning tasks comprised of 18 diverse RL environments paired with corresponding textual descriptions and curated trajectories. GIF-MCTS surpasses all baselines on the CWMB and two other benchmarks, and we show that the Code World Models synthesized with it can be successfully used for planning, resulting in language-following offline model-based RL agents with greatly improved sample efficiency and inference speed.
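As a concrete illustration of the idea in the abstract, a Code World Model can be pictured as a small Python program exposing a step function that predicts the next state, reward and termination flag for a given state and action, so that a planner can query the program instead of the LLM at every planning step. The class below is a hypothetical sketch of such a model for a toy grid world; it is not the interface or an environment used in the thesis.

# Hypothetical sketch of a Code World Model: the LLM is asked to write a
# program of this kind, and a planner then calls step() instead of the LLM.
from typing import Tuple

class CodeWorldModel:
    # Deterministic model of a toy grid environment (illustrative only).

    def __init__(self, size: int = 5):
        self.size = size                    # grid is size x size
        self.goal = (size - 1, size - 1)    # reaching this cell ends the episode

    def step(self, state: Tuple[int, int], action: int) -> Tuple[Tuple[int, int], float, bool]:
        # Predict (next_state, reward, done) for one environment transition.
        x, y = state
        # actions: 0 = up, 1 = down, 2 = left, 3 = right
        if action == 0:
            y = min(y + 1, self.size - 1)
        elif action == 1:
            y = max(y - 1, 0)
        elif action == 2:
            x = max(x - 1, 0)
        elif action == 3:
            x = min(x + 1, self.size - 1)
        next_state = (x, y)
        done = next_state == self.goal
        reward = 1.0 if done else 0.0
        return next_state, reward, done

In the approach described above, candidate programs like this would be checked against unit tests and recorded environment trajectories, with the Generate, Improve and Fix actions of GIF-MCTS steering the LLM toward a model that reproduces the observed transitions.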
Supervisor
Marttinen, Pekka
Thesis advisor
Dainese, Nicola
Keywords
large language models, reinforcement learning, Monte Carlo tree search, code synthesis, model-based reinforcement learning, offline reinforcement learning