Transparency and Explainability of AI Systems: Ethical Guidelines in Practice
Access rights: openAccess (acceptedVersion)
A4 Article in conference proceedings
This publication is imported from Aalto University research portal.
View publication in the Research portal
View/Open full text file from the Research portal
Language: en
Pages: 16
Series: Requirements Engineering: Foundation for Software Quality - 28th International Working Conference, Proceedings, pp. 3-18. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13216 LNCS
Abstract
[Context and Motivation] Recent studies have highlighted transparency and explainability as important quality requirements of AI systems. However, there are still relatively few case studies describing the current state of defining these quality requirements in practice. [Question] The goal of our study was to explore what ethical guidelines organizations have defined for the development of transparent and explainable AI systems. We analyzed the ethical guidelines of 16 organizations representing different industries and the public sector. [Results] In the ethical guidelines, the importance of transparency was highlighted by almost all of the organizations, and explainability was considered an integral part of transparency. Building trust in AI systems was one of the key reasons for developing transparency and explainability, and customers and users were identified as the main target groups for the explanations. The organizations also mentioned developers, partners, and stakeholders as important groups needing explanations. The ethical guidelines covered the following aspects of an AI system that should be explained: its purpose, the role of AI, inputs, behavior, the data utilized, outputs, and limitations. The guidelines also pointed out that transparency and explainability relate to several other quality requirements, such as trustworthiness, understandability, traceability, privacy, auditability, and fairness. [Contribution] For researchers, this paper provides insights into what organizations consider important in the transparency and, in particular, the explainability of AI systems. For practitioners, this study suggests a structured way to define explainability requirements of AI systems.

Description
Publisher Copyright: © 2022, Springer Nature Switzerland AG.
Citation
Balasubramaniam, N, Kauppinen, M, Hiekkanen, K & Kujala, S 2022, Transparency and Explainability of AI Systems: Ethical Guidelines in Practice. in V Gervasi & A Vogelsang (eds), Requirements Engineering: Foundation for Software Quality - 28th International Working Conference, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13216 LNCS, Springer, pp. 3-18, International Working Conference on Requirements Engineering, Birmingham, United Kingdom, 21/03/2022. https://doi.org/10.1007/978-3-030-98464-9_1