Bayesian model assessment and selection using expected utilities
Doctoral thesis (monograph)
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Author
Date
2001-12-14
Major/Subject
Mcode
Degree programme
Language
en
Pages
110
Series
Helsinki University of Technology Laboratory of Computational Engineering publications. Report B, 29
Abstract
In this work, we discuss practical methods for Bayesian model assessment and selection based on expected utilities, and propose several new methods and techniques for the analysis of the models. The Bayesian approach offers a consistent way to use probabilities to quantify uncertainty in inference, resulting in a probability distribution that expresses our beliefs about how likely the different predictions are. The use of Bayesian models in increasingly complex problems is facilitated by advances in Markov chain Monte Carlo methods and computing power. A natural way to assess the goodness of a model is to estimate its future predictive capability by estimating expected utilities. With application-specific utilities, the expected benefit or cost of using the model can be readily computed. We propose an approach that uses cross-validation predictive densities to compute the expected utilities and the Bayesian bootstrap to obtain samples from their distributions. Instead of just making a point estimate, it is important to estimate the distribution of the expected utility, as it describes the uncertainty in the estimate. The distributions of the expected utilities can also be used to compare models, for example, by computing the probability that one model has a better expected utility than another. The expected utilities take into account how the model predictions are going to be used and may thus reveal that even the best model selected is inadequate or not practically better than the previously used models. To make the model easier to analyse, or to reduce the cost of measurements or computation, it may be useful to select a smaller set of input variables. Computing the cross-validation predictive densities for all possible input combinations easily becomes computationally prohibitive. We propose to use a variable dimension Markov chain Monte Carlo method to identify potentially useful input combinations, for which the final model choice and assessment is then done using the cross-validation predictive densities. We demonstrate the usefulness of the presented approaches with MLP neural networks and Gaussian process models in three challenging real-world case problems.
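To illustrate the idea described in the abstract, the following is a minimal sketch in Python (not code from the thesis): given per-observation cross-validation utilities for two models, the Bayesian bootstrap is used to draw samples from the distribution of each model's expected utility and to estimate the probability that one model has a higher expected utility than the other. The utility arrays, the function name, and the choice of log predictive density as the utility are hypothetical placeholders, assumed only for this example.

import numpy as np

rng = np.random.default_rng(0)

def bayesian_bootstrap_expected_utility(utilities, n_rep=4000, rng=rng):
    """Draw samples from the distribution of the expected utility.

    Each replicate reweights the n observed cross-validation utilities
    with Dirichlet(1, ..., 1) weights and returns the weighted mean.
    """
    utilities = np.asarray(utilities, dtype=float)
    n = utilities.size
    weights = rng.dirichlet(np.ones(n), size=n_rep)   # shape (n_rep, n)
    return weights @ utilities                        # shape (n_rep,)

# Hypothetical per-observation cross-validation log predictive densities
# for two models (stand-ins for values computed from real CV predictions).
u_a = rng.normal(loc=-1.0, scale=0.5, size=100)
u_b = rng.normal(loc=-1.2, scale=0.5, size=100)

# Distribution of the expected utility for each model.
eu_a = bayesian_bootstrap_expected_utility(u_a)
eu_b = bayesian_bootstrap_expected_utility(u_b)
print("expected utility, model A: %.3f +- %.3f" % (eu_a.mean(), eu_a.std()))
print("expected utility, model B: %.3f +- %.3f" % (eu_b.mean(), eu_b.std()))

# Paired comparison: apply the Bayesian bootstrap to the per-observation
# differences and estimate P(model A has a better expected utility than B).
eu_diff = bayesian_bootstrap_expected_utility(u_a - u_b)
print("P(A better than B) = %.3f" % np.mean(eu_diff > 0))

In this sketch the comparison reweights the paired per-observation differences rather than bootstrapping the two models independently, so that the uncertainty estimate reflects the fact that both models are evaluated on the same observations.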
Keywords
Bayesian approach, expected utility, model assessment, model comparison, model selection, cross-validation