Browsing by Author "Peltola, Tomi, Dr., Aalto University, Department of Computer Science, Finland"
Item: Methods for probabilistic modeling of knowledge elicitation for improving machine learning predictions (Aalto University, 2020); Afrabandpey, Homayun; Peltola, Tomi, Dr., Aalto University, Department of Computer Science, Finland; Tietotekniikan laitos; Department of Computer Science; Probabilistic Machine Learning (PML); Perustieteiden korkeakoulu; School of Science; Kaski, Samuel, Prof., Aalto University, Department of Computer Science, Finland

Many applications of supervised machine learning involve training data with many features but few samples. Constructing models with reliable predictive performance in such settings is challenging. To alleviate these challenges, either more samples are needed, which can be difficult or even impossible to obtain in some applications, or additional sources of information are required to regularize the models. One such source is the domain expert; however, extracting knowledge from a human expert is itself difficult and requires computer systems with which experts can interact effectively and effortlessly. This thesis proposes novel knowledge elicitation approaches to improve the predictive performance of statistical models. The first contribution of the thesis is to develop methods that incorporate different types of expert knowledge about features into the construction of machine learning models. Several solutions for knowledge elicitation are proposed, including interactive visualization of the effect of feedback on features, and active learning. Experiments demonstrate that the proposed methods improve the predictive performance of the underlying model with only limited user interaction. The second contribution of the thesis is to develop a new approach to the interpretability of Bayesian predictive models, to facilitate the interaction of human users with Bayesian black-box predictive models.
The proposed approach separates model specification from model interpretation via a two-stage decision-theoretic procedure: first a highly predictive model is constructed without compromising accuracy, and then its interpretability is optimized. Experiments demonstrate that the proposed method yields models that are both more accurate and more interpretable than the alternative practice of encoding interpretability constraints into the model specification via the prior distribution.
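The first contribution, incorporating expert feedback on features into model construction, is described above only at a high level. One simple way such feedback can act as a regularizer is to weaken the penalty on features an expert flags as relevant. The sketch below is a minimal illustration of that idea using closed-form ridge regression, not the thesis's actual Bayesian models; the function name, the 0/1 relevance encoding, and the `boost` parameter are all assumptions made for this example.

```python
import numpy as np

def ridge_with_feedback(X, y, relevance, base_alpha=1.0, boost=10.0):
    """Closed-form ridge regression with per-feature penalties.

    `relevance` holds 0/1 expert feedback: features flagged as
    relevant (1) receive a weaker penalty (base_alpha / boost),
    all others the full penalty. This encoding is illustrative,
    not the elicitation scheme used in the thesis.
    """
    alphas = np.where(relevance == 1, base_alpha / boost, base_alpha)
    # Solve (X^T X + diag(alphas)) w = X^T y for the coefficients.
    return np.linalg.solve(X.T @ X + np.diag(alphas), X.T @ y)

# Usage: a "large p, small n" setting where feedback matters most.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))          # 20 samples, 50 features
y = 2.0 * X[:, 0]                      # only feature 0 carries signal
relevance = np.zeros(50); relevance[0] = 1  # expert flags feature 0
w = ridge_with_feedback(X, y, relevance)
```

Compared with a uniform penalty (an all-zero relevance vector), the flagged features are shrunk less, so expert knowledge can partly compensate for the small sample size.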
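The second contribution's separation of model specification from model interpretation can be caricatured as a plain two-stage pipeline: fit an accurate black-box model first, then fit an interpretable proxy to that model's predictions. The sketch below is a simplified stand-in, assuming a k-nearest-neighbour classifier as the "black-box" and a one-feature threshold rule as the interpretable model; neither is the Bayesian decision-theoretic projection developed in the thesis.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Stage 1 stand-in "black-box": k-nearest-neighbour majority vote."""
    dists = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

def fit_stump(X, y_blackbox):
    """Stage 2: pick the one-feature threshold rule that best agrees
    with the black-box's predictions, i.e. optimize interpretability
    after (and separately from) model specification."""
    best_feat, best_thr, best_agree = 0, 0.0, 0.0
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            pred = (X[:, j] > thr).astype(int)
            # A rule or its negation may agree better; keep the best.
            agree = max((pred == y_blackbox).mean(),
                        (1 - pred == y_blackbox).mean())
            if agree > best_agree:
                best_feat, best_thr, best_agree = j, thr, agree
    return best_feat, best_thr, best_agree
```

Because the proxy is fit to the black-box's outputs rather than to the raw labels, the accuracy of the first stage is untouched and interpretability is optimized as a separate decision problem, mirroring the two-stage structure described above.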