Modelling Human Decision-making based on Aggregate Observation Data
dc.contributor | Aalto-yliopisto | fi |
dc.contributor | Aalto University | en |
dc.contributor.author | Kangasrääsiö, Antti | en_US |
dc.contributor.author | Kaski, Samuel | en_US |
dc.contributor.department | Department of Computer Science | en |
dc.contributor.groupauthor | Centre of Excellence in Computational Inference, COIN | en |
dc.contributor.groupauthor | Professorship Kaski Samuel | en |
dc.contributor.groupauthor | Helsinki Institute for Information Technology (HIIT) | en |
dc.contributor.groupauthor | Probabilistic Machine Learning | en |
dc.date.accessioned | 2019-02-25T08:42:39Z | |
dc.date.available | 2019-02-25T08:42:39Z | |
dc.date.issued | 2017 | en_US |
dc.description.abstract | Being able to infer the goals, preferences and limitations of humans is of key importance in designing interactive systems. Reinforcement learning (RL) models are a promising direction of research, as they are able to model how the behavioural patterns of users emerge from the task and environment structure. One limitation of traditional inference methods for RL models is their strict requirement on observation data: both the states of the environment and the actions of the agent need to be observed at each step of the task. This has prevented RL models from being used in situations where such fine-grained observations are not available. In this extended abstract we present results from a recent study where we demonstrated how inference can be performed for RL models even when the observation data is significantly more coarse-grained. The idea is to solve the inverse reinforcement learning (IRL) problem using approximate Bayesian computation sped up with Bayesian optimization. | en
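To make the abstract's inference idea concrete, the following is a minimal, hypothetical Python sketch of likelihood-free (ABC) inference for an RL model from aggregate observations, with a Gaussian-process surrogate and a Bayesian-optimization acquisition rule choosing which parameters to simulate next. The chain-MDP simulator, the aggregate statistic (mean episode length), the scikit-learn GP, the LCB acquisition, and all parameter names are illustrative assumptions, not the authors' actual model or implementation.

```python
# Hypothetical sketch: infer an RL model parameter from an aggregate statistic
# via ABC, with a GP surrogate + Bayesian-optimization acquisition (LCB).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
N_STATES, GOAL, BETA = 10, 9, 5.0   # toy chain MDP, softmax inverse temperature

def simulate_mean_episode_length(step_cost, n_episodes=200):
    """Plan in a small chain MDP with the candidate step cost, act with a
    softmax policy, and return only the aggregate statistic an observer
    would see: the mean number of steps per episode (no states or actions)."""
    V = np.zeros(N_STATES)
    for _ in range(200):                                   # value iteration
        q_left = -step_cost + V[np.maximum(np.arange(N_STATES) - 1, 0)]
        q_right = -step_cost + V[np.minimum(np.arange(N_STATES) + 1, GOAL)]
        V = np.maximum(q_left, q_right)
        V[GOAL] = 0.0                                      # goal is absorbing
    lengths = []
    for _ in range(n_episodes):
        s, t = 0, 0
        while s != GOAL and t < 500:
            q_l = -step_cost + V[max(s - 1, 0)]
            q_r = -step_cost + V[min(s + 1, GOAL)]
            p_right = 1.0 / (1.0 + np.exp(-BETA * (q_r - q_l)))
            s = min(s + 1, GOAL) if rng.random() < p_right else max(s - 1, 0)
            t += 1
        lengths.append(t)
    return float(np.mean(lengths))

# "Observed" aggregate data: generated here with a ground-truth step cost,
# standing in for real coarse-grained measurements of user behaviour.
true_cost = 0.6
observed_stat = simulate_mean_episode_length(true_cost)

def discrepancy(theta):
    """ABC discrepancy between simulated and observed aggregate statistics."""
    return abs(simulate_mean_episode_length(theta) - observed_stat)

# ABC sped up with Bayesian optimization: fit a GP surrogate to the
# discrepancy and pick each new simulation by a lower-confidence-bound rule.
bounds = (0.05, 2.0)
thetas = list(rng.uniform(*bounds, size=5))                # initial design
discs = [discrepancy(th) for th in thetas]
gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1e-2),
                              normalize_y=True)
grid = np.linspace(*bounds, 200).reshape(-1, 1)
for _ in range(15):
    gp.fit(np.array(thetas).reshape(-1, 1), np.array(discs))
    mean, std = gp.predict(grid, return_std=True)
    theta_next = float(grid[np.argmin(mean - 2.0 * std), 0])   # LCB acquisition
    thetas.append(theta_next)
    discs.append(discrepancy(theta_next))

# Point estimate: the step cost whose simulated aggregate behaviour best
# matches the observed aggregate statistic.
print("true step cost:", true_cost, "estimate:", thetas[int(np.argmin(discs))])
```

The key point the sketch illustrates is that the simulator only ever exposes an aggregate summary of behaviour, so the RL parameters are recovered by matching summaries rather than per-step state-action trajectories, and the GP surrogate keeps the number of expensive simulations small.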
dc.description.version | Peer reviewed | en |
dc.format.extent | 4 | |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Kangasrääsiö, A & Kaski, S 2017, Modelling Human Decision-making based on Aggregate Observation Data. In Human In The Loop-ML Workshop at ICML. Human in the Loop Machine Learning; ICML Workshop, Sydney, Australia, 11/08/2017. | en_US |
dc.identifier.other | PURE UUID: 25d9924d-3936-4f4e-94ab-7164d9c5e896 | en_US |
dc.identifier.other | PURE ITEMURL: https://research.aalto.fi/en/publications/25d9924d-3936-4f4e-94ab-7164d9c5e896 | en_US |
dc.identifier.other | PURE LINK: https://machlearn.gitlab.io/hitl2017/ | en_US |
dc.identifier.other | PURE FILEURL: https://research.aalto.fi/files/14255048/ICML17_WS.pdf | en_US |
dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/36680 | |
dc.identifier.urn | URN:NBN:fi:aalto-201902251837 | |
dc.language.iso | en | en |
dc.relation.ispartof | Human in the Loop Machine Learning; ICML Workshop | en |
dc.relation.ispartofseries | Human In The Loop-ML Workshop at ICML | en |
dc.rights | openAccess | en |
dc.title | Modelling Human Decision-making based on Aggregate Observation Data | en |
dc.type | A4 Article in conference proceedings | en |
dc.type.version | acceptedVersion |