“Why Should I Trust You?” : Exploring Interpretability in Machine Learning Approaches for Indirect SHM
Access rights
openAccess
publishedVersion
A4 Article in a conference publication
This publication is imported from Aalto University research portal.
Language
en
Pages
8
Series
The e-Journal of Nondestructive Testing & Ultrasonics, Volume 2024, issue 07
Abstract
Currently, machine learning (ML) methods are widely adopted in structural health monitoring (SHM), yet they remain mostly black boxes. Given the significant responsibility associated with SHM, understanding the rationale behind their predictions is critically important. In some cases, even experienced experts have difficulty finding evidence of structural integrity within complex structural signals, so relying solely on black-box SHM systems carries inherent risks. Trustworthiness is key for decision-makers when planning to act on predictions or deciding whether to deploy a new model. Such understanding can also offer insights into the models themselves, transforming untrustworthy models or predictions into reliable ones. The indirect SHM method using passing vehicles, an emerging technique of the past two decades, offers a rapid and cost-effective solution for bridge monitoring. Its signal components are affected by factors such as vehicle dynamics and road roughness, making them more complex than those in the direct method. Although ML methods have shown promising results in this domain, their outputs require further explanation. In this work, an interpretation tool is proposed to explain the predictions of ML methods in indirect SHM. The trustworthiness of models is demonstrated on simulation databases: deciding whether a prediction should be trusted, choosing between models, and determining why a classifier should not be trusted.
Description
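The abstract does not name the interpretation tool, but the title quotes the LIME paper, which explains a single prediction by fitting a proximity-weighted linear surrogate around it. A minimal NumPy-only sketch of that idea is shown below; the black-box classifier, the two signal features, and all numbers are hypothetical illustrations, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box damage classifier: probability of "damaged"
# from two signal features (e.g., a frequency shift and an amplitude ratio).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2)))

x0 = np.array([0.5, 0.6])  # the instance whose prediction we want to explain

# LIME-style local surrogate: perturb x0, weight samples by proximity to x0,
# fit a weighted linear model, and read off its coefficients as local
# feature importances.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))        # perturbed neighbourhood
y = black_box(Z)                                     # black-box responses
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)    # proximity kernel

A = np.hstack([np.ones((len(Z), 1)), Z])             # design matrix + intercept
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)     # weighted least squares

# coef[1] and coef[2] approximate the local effect of each feature on the
# predicted damage probability near x0.
print(dict(zip(["intercept", "feature_1", "feature_2"], coef.round(3))))
```

A decision-maker can then check whether the locally influential features agree with engineering intuition before trusting the prediction, which is the kind of model-versus-prediction scrutiny the abstract describes.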
Publisher Copyright: © 2024 11th European Workshop on Structural Health Monitoring, EWSHM 2024. All rights reserved.
Other note
Citation
Lan, Y, Li, Z & Lin, W 2024, '“Why Should I Trust You?” : Exploring Interpretability in Machine Learning Approaches for Indirect SHM', The e-Journal of Nondestructive Testing & Ultrasonics, vol. 2024, no. 07. https://doi.org/10.58286/29792