Partially Observable Markov Decision Processes in Robotics: A Survey
| dc.contributor | Aalto-yliopisto | fi |
| dc.contributor | Aalto University | en |
| dc.contributor.author | Lauri, Mikko | en_US |
| dc.contributor.author | Hsu, David | en_US |
| dc.contributor.author | Pajarinen, Joni | en_US |
| dc.contributor.department | Department of Electrical Engineering and Automation | en |
| dc.contributor.groupauthor | Robot Learning | en |
| dc.contributor.organization | University of Hamburg | en_US |
| dc.contributor.organization | National University of Singapore | en_US |
| dc.date.accessioned | 2022-10-26T06:27:56Z | |
| dc.date.available | 2022-10-26T06:27:56Z | |
| dc.date.issued | 2023-02-01 | en_US |
| dc.description | Publisher Copyright: IEEE | |
| dc.description.abstract | Noisy sensing, imperfect control, and environment changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving robot decision and control tasks under uncertainty. Over the last decade, it has seen many successful applications, spanning localization and navigation, search and tracking, autonomous driving, multi-robot systems, manipulation, and human-robot interaction. This survey aims to bridge the gap between the development of POMDP models and algorithms at one end and their application to diverse robot decision tasks at the other. It analyzes the characteristics of these tasks and connects them with the mathematical and algorithmic properties of the POMDP framework for effective modeling and solution. For practitioners, the survey provides some of the key task characteristics relevant to deciding when and how to apply POMDPs to robot tasks successfully. For POMDP algorithm designers, the survey provides new insights into the unique challenges of applying POMDPs to robot systems and points to promising new directions for further research. | en |
| dc.description.version | Peer reviewed | en |
| dc.format.extent | 20 | |
| dc.format.mimetype | application/pdf | en_US |
| dc.identifier.citation | Lauri, M, Hsu, D & Pajarinen, J 2023, 'Partially Observable Markov Decision Processes in Robotics: A Survey', IEEE Transactions on Robotics, vol. 39, no. 1, pp. 21-40. https://doi.org/10.1109/TRO.2022.3200138 | en |
| dc.identifier.doi | 10.1109/TRO.2022.3200138 | en_US |
| dc.identifier.issn | 1552-3098 | |
| dc.identifier.issn | 1941-0468 | |
| dc.identifier.other | PURE UUID: 54592846-3490-4bda-82a8-05bcc393fa9b | en_US |
| dc.identifier.other | PURE ITEMURL: https://research.aalto.fi/en/publications/54592846-3490-4bda-82a8-05bcc393fa9b | en_US |
| dc.identifier.other | PURE FILEURL: https://research.aalto.fi/files/89841074/Pajarinen_et_al_Partially_Observable_Markov.pdf | |
| dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/117456 | |
| dc.identifier.urn | URN:NBN:fi:aalto-202210266238 | |
| dc.language.iso | en | en |
| dc.publisher | IEEE | |
| dc.relation.ispartofseries | IEEE Transactions on Robotics | en |
| dc.relation.ispartofseries | Volume 39, issue 1, pp. 21-40 | en |
| dc.rights | openAccess | en |
| dc.subject.keyword | AI-based methods | en_US |
| dc.subject.keyword | autonomous agents | en_US |
| dc.subject.keyword | Markov processes | en_US |
| dc.subject.keyword | partially observable Markov decision process (POMDP) | en_US |
| dc.subject.keyword | Planning | en_US |
| dc.subject.keyword | planning under uncertainty | en_US |
| dc.subject.keyword | Robot kinematics | en_US |
| dc.subject.keyword | Robot sensing systems | en_US |
| dc.subject.keyword | Robots | en_US |
| dc.subject.keyword | scheduling and coordination | en_US |
| dc.subject.keyword | Task analysis | en_US |
| dc.subject.keyword | Uncertainty | en_US |
| dc.title | Partially Observable Markov Decision Processes in Robotics: A Survey | en |
| dc.type | A1 Original article in a scientific journal (A1 Alkuperäisartikkeli tieteellisessä aikakauslehdessä) | fi |
| dc.type.version | acceptedVersion |
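The abstract describes the POMDP as a framework for decision-making under noisy sensing: the robot never observes its state directly, but maintains a belief (a probability distribution over states) that it updates Bayesianly after each action and observation. A minimal sketch of that belief update, using a hypothetical two-room toy model (all names and numbers below are illustrative, not taken from the survey):

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Bayesian POMDP belief update (a sketch, assuming tabular models).

    b       : belief over states, shape (|S|,)
    a, o    : action taken and observation received
    T[a]    : transition matrix, T[a][s, s'] = P(s' | s, a)
    Z[a]    : observation matrix, Z[a][s', o] = P(o | s', a)
    """
    predicted = b @ T[a]                    # predict: sum_s P(s'|s,a) b(s)
    unnormalized = predicted * Z[a][:, o]   # correct: weight by P(o|s',a)
    return unnormalized / unnormalized.sum()

# Toy model: a robot is in one of two rooms; the action mostly keeps it
# in place, and a noisy sensor reports the correct room 80% of the time.
T = {0: np.array([[0.9, 0.1],
                  [0.1, 0.9]])}
Z = {0: np.array([[0.8, 0.2],
                  [0.2, 0.8]])}

b0 = np.array([0.5, 0.5])                  # start fully uncertain
b1 = belief_update(b0, a=0, o=0, T=T, Z=Z) # observe "room 0"
```

Starting from a uniform belief, a single "room 0" observation shifts the belief to 0.8/0.2 in favor of room 0; POMDP planners choose actions over exactly such beliefs rather than over raw states.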
Files
Original bundle
- Name: Pajarinen_et_al_Partially_Observable_Markov.pdf
- Size: 4.38 MB
- Format: Adobe Portable Document Format