A lightweight generative model for interpretable subject-level prediction

Access rights

openAccess
CC BY-NC-ND
publishedVersion

A1 Original article in a scientific journal

Date

2025-04

Language

en

Pages

18

Series

Medical Image Analysis, Volume 101, pp. 1-18

Abstract

Recent years have seen a growing interest in methods for predicting an unknown variable of interest, such as a subject's diagnosis, from medical images depicting its anatomical-functional effects. Methods based on discriminative modeling excel at making accurate predictions, but are challenged in their ability to explain their decisions in anatomically meaningful terms. In this paper, we propose a simple technique for single-subject prediction that is inherently interpretable. It augments the generative models used in classical human brain mapping techniques, in which the underlying cause–effect relations can be encoded, with a multivariate noise model that captures dominant spatial correlations. Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions, while at the same time offering intuitive visual explanations of its inner workings. The method is easy to use: training is fast for typical training set sizes, and only a single hyperparameter needs to be set by the user. Our code is available at https://github.com/chiara-mauri/Interpretable-subject-level-prediction.
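For intuition, here is a minimal numerical sketch of the kind of model the abstract describes: images are generated from the target variable plus a low-rank noise model that captures dominant spatial correlations, and the model is inverted with Bayes' rule to predict the target for a new subject. This is an illustrative toy, not the authors' implementation (their code is in the GitHub repository linked above, and the paper fits its model differently, e.g., with an EM-type procedure); the variable names (v, W, sigma2) and the moment-based fitting below are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: n training images with d voxels each, scalar target x (e.g., age).
n, d, k = 300, 100, 5                       # k: number of latent factors
x_train = rng.normal(size=n)
v_true = rng.normal(size=d)                 # true per-voxel effect of x
W_true = 2.0 * rng.normal(size=(d, k))      # true spatial-correlation factors
y_train = (np.outer(x_train, v_true)
           + rng.normal(size=(n, k)) @ W_true.T
           + 0.5 * rng.normal(size=(n, d)))

# "Training" by simple moment matching (illustrative; not the paper's fitting).
mu = y_train.mean(axis=0)
yc = y_train - mu
xc = x_train - x_train.mean()
v = yc.T @ xc / (xc @ xc)                   # estimated effect of x per voxel
resid = yc - np.outer(xc, v)                # what the noise model must explain
lam, V = np.linalg.eigh(resid.T @ resid / n)
lam, V = lam[::-1], V[:, ::-1]              # eigenvalues, descending
sigma2 = lam[k:].mean()                     # PPCA-style isotropic noise level
W = V[:, :k] * np.sqrt(np.maximum(lam[:k] - sigma2, 0.0))
C = W @ W.T + sigma2 * np.eye(d)            # low-rank-plus-diagonal covariance

# Prediction: with a flat prior on x, p(x | y) is Gaussian in closed form.
x_new = 1.3
y_new = (mu + x_new * v + W @ rng.normal(size=k)
         + np.sqrt(sigma2) * rng.normal(size=d))
Cinv_v = np.linalg.solve(C, v)
precision = v @ Cinv_v
x_hat = Cinv_v @ (y_new - mu) / precision   # posterior mean
x_std = precision ** -0.5                   # posterior standard deviation
print(f"true x = {x_new:.2f}, predicted x = {x_hat:.2f} +/- {x_std:.2f}")

In this sketch the number of latent factors k plays the role of the single user-set hyperparameter mentioned in the abstract, and the fitted spatial map v can be visualized directly, which is the sense in which such generative models offer anatomically meaningful explanations of their predictions.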

Description

Publisher Copyright: © 2024

Keywords

Brain age, Explainable AI, Generative models, Image-based prediction

Citation

Mauri, C., Cerri, S., Puonti, O., Mühlau, M. & Van Leemput, K. 2025, 'A lightweight generative model for interpretable subject-level prediction', Medical Image Analysis, vol. 101, 103436, pp. 1-18. https://doi.org/10.1016/j.media.2024.103436