Exploring Contextual Representation and Multi-modality for End-to-end Autonomous Driving
dc.contributor | Aalto-yliopisto | fi |
dc.contributor | Aalto University | en |
dc.contributor.author | Azam, Shoaib | en_US |
dc.contributor.author | Munir, Farzeen | en_US |
dc.contributor.author | Kyrki, Ville | en_US |
dc.contributor.author | Kucner, Tomasz Piotr | en_US |
dc.contributor.author | Jeon, Moongu | en_US |
dc.contributor.author | Pedrycz, Witold | en_US |
dc.contributor.department | Department of Electrical Engineering and Automation | en |
dc.contributor.groupauthor | Intelligent Robotics | en |
dc.contributor.groupauthor | Mobile Robotics | en |
dc.contributor.organization | Gwangju Institute of Science and Technology | en_US |
dc.contributor.organization | University of Alberta | en_US |
dc.date.accessioned | 2024-06-14T07:46:52Z | |
dc.date.available | 2024-06-14T07:46:52Z | |
dc.date.issued | 2024-09 | en_US |
dc.description.abstract | Learning contextual and spatial environmental representations enhances an autonomous vehicle's hazard anticipation and decision-making in complex scenarios. Recent perception systems improve spatial understanding through sensor fusion but often lack global environmental context. Humans, when driving, naturally employ neural maps that integrate various factors such as historical data, situational subtleties, and behavioral predictions of other road users to form a rich contextual understanding of their surroundings. This neural map-based comprehension is integral to making informed decisions on the road. In contrast, despite significant advances, autonomous systems have yet to fully harness this depth of human-like contextual understanding. Motivated by this, our work draws inspiration from human driving patterns and seeks to formalize the sensor fusion approach within an end-to-end autonomous driving framework. We introduce a framework that integrates three cameras (left, right, and center) to emulate the human field of view, coupled with top-down bird's-eye-view semantic data, to enrich contextual representation. The sensor data are fused and encoded using a self-attention mechanism and fed to an auto-regressive waypoint prediction module. We treat feature representation as a sequential problem, employing a vision transformer to distill the contextual interplay between sensor modalities. The efficacy of the proposed method is evaluated experimentally in both open-loop and closed-loop settings. Our method achieves a displacement error of 0.67 m in the open-loop setting, surpassing current methods by 6.9% on the nuScenes dataset. In closed-loop evaluations on CARLA's Town05 Long and Longest6 benchmarks, the proposed method improves driving performance and route completion and reduces infractions. | en |
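The abstract describes self-attention-based fusion of three camera views with top-down bird's-eye-view semantics, followed by auto-regressive waypoint prediction. The following is a minimal PyTorch sketch of that general pattern only; every module, dimension, and the GRU-based decoder are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative sketch: transformer fusion of multi-camera and BEV features
# with auto-regressive waypoint prediction, as outlined in the abstract.
# All names, dimensions, and the GRU decoder are assumptions for exposition.
import torch
import torch.nn as nn

class FusionWaypointModel(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4, n_waypoints=4):
        super().__init__()
        # Project per-view camera features and BEV tokens into a shared space;
        # a real system would obtain these from a CNN/ViT backbone.
        self.cam_proj = nn.Linear(512, d_model)
        self.bev_proj = nn.Linear(512, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # self-attention fusion
        self.gru = nn.GRUCell(2, d_model)        # auto-regressive decoder cell
        self.to_offset = nn.Linear(d_model, 2)   # predicts a (dx, dy) step
        self.n_waypoints = n_waypoints

    def forward(self, cam_feats, bev_feats):
        # cam_feats: (B, 3, 512), one feature vector per camera (left/center/right)
        # bev_feats: (B, N, 512), tokens from the top-down semantic map
        tokens = torch.cat([self.cam_proj(cam_feats),
                            self.bev_proj(bev_feats)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)  # pooled context vector
        # Auto-regressive rollout: feed each predicted waypoint back in and
        # update the hidden state before predicting the next offset.
        wp = torch.zeros(fused.size(0), 2, device=fused.device)
        hidden, waypoints = fused, []
        for _ in range(self.n_waypoints):
            hidden = self.gru(wp, hidden)
            wp = wp + self.to_offset(hidden)
            waypoints.append(wp)
        return torch.stack(waypoints, dim=1)      # (B, n_waypoints, 2)
```

As a shape check, `FusionWaypointModel()(torch.randn(2, 3, 512), torch.randn(2, 16, 512))` returns a `(2, 4, 2)` tensor of waypoints.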
dc.description.version | Peer reviewed | en |
dc.format.extent | 13 | |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Azam, S., Munir, F., Kyrki, V., Kucner, T. P., Jeon, M. & Pedrycz, W. 2024, 'Exploring Contextual Representation and Multi-modality for End-to-end Autonomous Driving', Engineering Applications of Artificial Intelligence, vol. 135, 108767. https://doi.org/10.1016/j.engappai.2024.108767 | en |
dc.identifier.doi | 10.1016/j.engappai.2024.108767 | en_US |
dc.identifier.issn | 0952-1976 | |
dc.identifier.other | PURE UUID: 5d77ac89-3c09-43b3-9726-06ca3dc009cb | en_US |
dc.identifier.other | PURE ITEMURL: https://research.aalto.fi/en/publications/5d77ac89-3c09-43b3-9726-06ca3dc009cb | en_US |
dc.identifier.other | PURE LINK: http://www.scopus.com/inward/record.url?scp=85195421264&partnerID=8YFLogxK | en_US |
dc.identifier.other | PURE FILEURL: https://research.aalto.fi/files/148460472/1-s2.0-S0952197624009254-main.pdf | en_US |
dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/128715 | |
dc.identifier.urn | URN:NBN:fi:aalto-202406144304 | |
dc.language.iso | en | en |
dc.publisher | Elsevier Ltd | |
dc.relation.ispartofseries | Engineering Applications of Artificial Intelligence | |
dc.rights | openAccess | en |
dc.subject.keyword | Vision transformer | en_US |
dc.subject.keyword | Imitation learning | en_US |
dc.subject.keyword | Attention | en_US |
dc.subject.keyword | Vision-centric autonomous driving | en_US |
dc.subject.keyword | Contextual representation | en_US |
dc.title | Exploring Contextual Representation and Multi-modality for End-to-end Autonomous Driving | en |
dc.type | A1 Original article in a scientific journal | en |
dc.type.version | publishedVersion |