Candidate Solutions for Defining Explainability Requirements of AI Systems

Access rights

openAccess
acceptedVersion

A4 Article in conference proceedings

Date

2024-04-08

Language

en

Pages

18

Series

Requirements Engineering: Foundation for Software Quality: 30th International Working Conference, REFSQ 2024, Winterthur, Switzerland, April 8–11, 2024, Proceedings, pp. 129–146, Lecture Notes in Computer Science, Volume 14588

Abstract

[Context and Motivation] Many recent studies highlight explainability as an important requirement that supports building transparent, trustworthy, and responsible AI systems. As a result, researchers have developed an increasing number of solutions to assist in defining explainability requirements. [Question] We conducted a literature study to analyze what kinds of candidate solutions have been proposed for defining the explainability requirements of AI systems. The review focuses in particular on the field of requirements engineering (RE). [Results] The proposed solutions for defining explainability requirements, such as approaches, frameworks, and models, are comprehensive. They can be used not only for RE activities but also for testing and evaluating the explainability of AI systems. In addition to these comprehensive solutions, we identified 30 practices that support the development of explainable AI systems. The literature study also revealed that most of the proposed solutions have not been evaluated in real projects and that empirical studies are needed. [Contribution] For researchers, the study provides an overview of the candidate solutions and describes research gaps. For practitioners, the paper summarizes potential practices that can help them define and evaluate the explainability requirements of AI systems.

Keywords

Explainability Requirements, Explainable AI, AI Systems, Explainability Practices

Citation

Balasubramaniam, N, Kauppinen, M, Truong, L & Kujala, S 2024, Candidate Solutions for Defining Explainability Requirements of AI Systems. in D Mendez & A Moreira (eds), Requirements Engineering: Foundation for Software Quality: 30th International Working Conference, REFSQ 2024, Winterthur, Switzerland, April 8–11, 2024, Proceedings. Lecture Notes in Computer Science, vol. 14588, Springer, pp. 129–146, International Working Conference on Requirements Engineering, Winterthur, Switzerland, 08/04/2024. https://doi.org/10.1007/978-3-031-57327-9_8