Potential of explanations in enhancing trust – What can we learn from autonomous vehicles to foster the development of trustworthy autonomous vessels?
Access rights
openAccess, CC BY, publishedVersion
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Language
en
Pages
11
Series
Ocean Engineering, Volume 325
Abstract
The development of autonomous vessels presents a complex socio-technical challenge in which AI and humans must coexist and cooperate. A crucial aspect of successfully deploying these systems is ensuring trust in the AI-powered autonomy. Our research explores the potential of explanations to enhance trust and its correlated metrics (such as preference, understanding, and anxiety) in autonomous vessels. While the investigation of explainability and its role in increasing end-user trust is still at an elementary level for autonomous vessels, explainability has already been identified as a key requirement for the successful adoption of self-driving cars and highly automated vehicles in general. We conducted a systematic literature review to investigate how the impact of explainability on trust and its correlated metrics has been studied in the domain of autonomous vehicles. We examined the diverse experimental setups employed to assess trust-building, covering instruments, explanation modes, types, timings, and additional human factors that influence trust. The study scrutinizes prevalent data collection methods and commonly used questionnaires for measuring trust levels following explanations, and examines the characteristics and theories integral to effective explanations for trust development. Review results indicate that explanations generally have a positive impact on trust and the correlated metric preference, although this impact is not statistically significant in all cases. The effect of explanations on the correlated metric understanding was found to be statistically significant in all cases. For the correlated metric anxiety, a decrease was observed in the presence of explanations in most cases, even though this decrease was not always statistically significant. This study discusses how lessons learned from autonomous vehicles can be applied in the context of autonomous vessels, with the aim of fostering the development of trustworthy autonomous vessels.
Description
Publisher Copyright: © 2025 The Authors
Citation
Ranjan, R, Kulkarni, K & Musharraf, M 2025, 'Potential of explanations in enhancing trust – What can we learn from autonomous vehicles to foster the development of trustworthy autonomous vessels?', Ocean Engineering, vol. 325, 120753. https://doi.org/10.1016/j.oceaneng.2025.120753