Instilling trust and creating additional value to machine learning: A qualitative study examining the effects of explainability and envelopment in modelling goals in football
School of Business | Master's thesis
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Date
2022
Degree programme
Information and Service Management (ISM)
Language
en
Pages
65
Abstract
This master’s thesis examines the effects of the envelopment of artificial intelligence systems and of machine learning explainability on the perceptions of expert workers. Envelopment of AI systems is a framework that aims to set boundaries that control the environment in which an AI system operates; as a result of this controlled approach, deployment of the system is safer and more sustainable. Machine learning explainability is an umbrella term for a broad array of methods that attempt to explain either the decision logic or the outputs of a black-box model and thereby create understanding of, trust in, and accountability for its results. In particular, the study examines how both methods affect the specialists’ perception of the trustworthiness of the models they apply in their work, and whether the specialists can derive additional value from the explanations provided on top of the machine learning model outputs. There is a gap in the empirical literature on the potential benefits of these methods, which this research aims to address. Semi-structured interviews are used to study the attitudes of the expert workers, and the results are analysed with thematic analysis to present as objective an outlook as possible on the perceived usability of envelopment and explainability. The results suggest that there is a strong case for both methods in evoking trust towards machine learning models. Additionally, a local explainability method was found to create valuable insights and to function as a platform for further discussion and analysis of the outcomes of the black-box predictor. In the discussion chapter, four propositions are made for benefiting the most from envelopment and explainability. In addition to these theoretical implications, recommendations for managers are made based on the theoretical findings.
Thesis advisor
Penttinen, Esko
Keywords
machine learning, artificial intelligence, explainability, envelopment strategy