Explainable Artificial Intelligence for Radio Resource Management Systems: A diverse feature importance approach
Sähkötekniikan korkeakoulu (School of Electrical Engineering) | Master's thesis
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Authors
Date
Department
Major/Subject
Mcode: ELEC3059
Degree programme
Language: en
Pages: 120+6
Series
Abstract
The field of wireless communications is arguably one of the most rapidly developing technological fields, and each new advancement can significantly increase the complexity of wireless systems. This phenomenon is most visible in mobile communications, where current 5G and emerging 6G radio access networks (RANs) have reached unprecedented levels of complexity in order to satisfy increasingly diverse demands. In such complex environments, managing resources becomes ever more challenging, so experts have employed high-performing artificial intelligence (AI) techniques to aid radio resource management (RRM) decisions. However, these AI techniques are often difficult for humans to understand and may receive unimportant inputs that unnecessarily increase their complexity. In this work, we propose an explainability pipeline intended both to increase human understanding of AI models for RRM and to reduce the complexity of these models without loss of performance. To achieve this, the pipeline generates diverse feature importance explanations of the models using three explainable AI (XAI) methods, namely Kernel SHAP, CERTIFAI, and Anchors, and performs importance-based feature selection using one of three different strategies. For Anchors, we formulate and use a new way of computing feature importance scores, since no current publication in the XAI literature suggests one. Finally, we applied the proposed pipeline to a reinforcement learning (RL) based RRM system. Our results show that the complexity of the RL model could be reduced by approximately 27.5% to 62.5%, depending on the metric, without loss of performance. Moreover, we showed that the explanations produced by our pipeline can be used to answer some of the most common XAI questions about the RL model, thus increasing its understandability. Lastly, we obtained the unprecedented result that our RL agent could be completely replaced with Anchors rules when making RRM decisions, without a significant loss of performance but with a considerable gain in understandability.
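To give a concrete flavour of the kind of pipeline described above, the sketch below shows how per-feature importance scores might be obtained with Kernel SHAP (via the shap Python package) and from Anchors rules, and then combined for a simple top-k feature selection. This is a minimal illustrative sketch, not the thesis's implementation: the function names, the Anchors-to-importance formula, the selection strategy, and the toy data are all assumptions, and the CERTIFAI component is omitted for brevity.

```python
"""Illustrative sketch of an importance-based feature-selection pipeline
combining Kernel SHAP scores with scores derived from Anchors rules.
All names and formulas here are hypothetical, not the thesis code."""
import numpy as np
import shap  # Kernel SHAP implementation


def kernel_shap_importance(predict_fn, background, samples):
    """Mean absolute SHAP value per feature, estimated with Kernel SHAP."""
    explainer = shap.KernelExplainer(predict_fn, background)
    shap_values = explainer.shap_values(samples)
    return np.abs(np.asarray(shap_values)).mean(axis=0)


def anchors_importance(anchor_rules, n_features):
    """One plausible way to turn Anchors rules into per-feature scores:
    each feature is credited with precision * coverage of every rule that
    mentions it (an illustrative assumption, not the thesis's formula)."""
    scores = np.zeros(n_features)
    for feature_ids, precision, coverage in anchor_rules:
        for f in feature_ids:
            scores[f] += precision * coverage
    return scores / max(len(anchor_rules), 1)


def select_features(importances, keep_ratio=0.5):
    """Top-k selection strategy: keep the highest-ranked features
    (the thesis compares three strategies; only one is sketched here)."""
    ranking = np.argsort(importances)[::-1]
    k = max(1, int(keep_ratio * len(importances)))
    return sorted(ranking[:k].tolist())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_background = rng.normal(size=(50, 8))
    X_eval = rng.normal(size=(20, 8))

    # Stand-in for the RL agent's output given RRM state features.
    def predict_fn(x):
        return 2.0 * x[:, 0] + x[:, 3] - 0.5 * x[:, 5]

    shap_scores = kernel_shap_importance(predict_fn, X_background, X_eval)

    # Hypothetical Anchors output: (feature indices in rule, precision, coverage).
    rules = [((0, 3), 0.95, 0.30), ((3, 5), 0.90, 0.25), ((0,), 0.85, 0.40)]
    anchor_scores = anchors_importance(rules, n_features=8)

    # Normalise both importance views and average them before selecting.
    combined = 0.5 * shap_scores / shap_scores.max() + \
               0.5 * anchor_scores / anchor_scores.max()
    print("kept features:", select_features(combined, keep_ratio=0.5))
```

A real pipeline would replace the stand-in predictor with the RL agent's policy, use the actual explainer outputs instead of the toy rules, and retrain the reduced model on only the kept features to verify that performance is preserved.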
Supervisor: Gross, James
Thesis advisors: Imtiaz, Sahar; Moysen Cortes, Jessica