Reader: Model-based language-instructed reinforcement learning
| dc.contributor | Aalto-yliopisto | fi |
| dc.contributor | Aalto University | en |
| dc.contributor.author | Dainese, Nicola | en_US |
| dc.contributor.author | Marttinen, Pekka | en_US |
| dc.contributor.author | Ilin, Alexander | en_US |
| dc.contributor.department | Department of Computer Science | en |
| dc.contributor.editor | Bouamor, Houda | en_US |
| dc.contributor.editor | Pino, Juan | en_US |
| dc.contributor.editor | Bali, Kalika | en_US |
| dc.contributor.groupauthor | Professorship Marttinen Pekka | en |
| dc.contributor.groupauthor | Computer Science Professors | en |
| dc.contributor.groupauthor | Computer Science - Artificial Intelligence and Machine Learning (AIML) - Research area | en |
| dc.contributor.groupauthor | Computer Science Professors of Practice | en |
| dc.date.accessioned | 2024-01-04T09:04:52Z | |
| dc.date.available | 2024-01-04T09:04:52Z | |
| dc.date.issued | 2023 | en_US |
| dc.description.abstract | We explore how we can build accurate world models, which are partially specified by language, and how we can plan with them in the face of novelty and uncertainty. We propose the first model-based reinforcement learning approach to tackle the environment Read To Fight Monsters (Zhong et al., 2019), a grounded policy learning problem. In RTFM an agent has to reason over a set of rules and a goal, both described in a language manual, as well as the observations, while accounting for the uncertainty arising from the stochasticity of the environment, in order to successfully generalize its policy to test episodes. We demonstrate the superior performance and sample efficiency of our model-based approach compared to the existing model-free SOTA agents in eight variants of RTFM. Furthermore, we show how the agent’s plans can be inspected, which represents progress towards more interpretable agents. | en |
| dc.description.version | Peer reviewed | en |
| dc.format.mimetype | application/pdf | en_US |
| dc.identifier.citation | Dainese, N, Marttinen, P & Ilin, A 2023, Reader: Model-based language-instructed reinforcement learning. in H Bouamor, J Pino & K Bali (eds), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 16583–16599, Conference on Empirical Methods in Natural Language Processing, Singapore, Singapore, 06/12/2023. <https://aclanthology.org/2023.emnlp-main.1032> | en |
| dc.identifier.isbn | 979-8-89176-060-8 | |
| dc.identifier.other | PURE UUID: a344f405-9106-4628-b6e5-632a66040bb5 | en_US |
| dc.identifier.other | PURE ITEMURL: https://research.aalto.fi/en/publications/a344f405-9106-4628-b6e5-632a66040bb5 | en_US |
| dc.identifier.other | PURE LINK: https://aclanthology.org/2023.emnlp-main.1032 | en_US |
| dc.identifier.other | PURE FILEURL: https://research.aalto.fi/files/130972931/SCI_Dainese_etal_EMNLP_2023.pdf | en_US |
| dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/125486 | |
| dc.identifier.urn | URN:NBN:fi:aalto-202401041175 | |
| dc.language.iso | en | en |
| dc.relation.ispartof | Conference on Empirical Methods in Natural Language Processing | en |
| dc.relation.ispartofseries | Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing | en |
| dc.relation.ispartofseries | pp. 16583–16599 | en |
| dc.rights | openAccess | en |
| dc.title | Reader: Model-based language-instructed reinforcement learning | en |
| dc.type | A4 Artikkeli konferenssijulkaisussa | fi |
| dc.type.version | publishedVersion |
Files
Original bundle
- Name: SCI_Dainese_etal_EMNLP_2023.pdf
- Size: 6.21 MB
- Format: Adobe Portable Document Format