Improving Hyperparameter Learning under Approximate Inference in Gaussian Process Models

Access rights

openAccess
publishedVersion

A4 Article in conference proceedings

Language

en

Pages

21

Series

Proceedings of the 40th International Conference on Machine Learning, pp. 19595-19615, Proceedings of Machine Learning Research ; Volume 202

Abstract

Approximate inference in Gaussian process (GP) models with non-conjugate likelihoods gets entangled with the learning of the model hyperparameters. We improve hyperparameter learning in GP models and focus on the interplay between variational inference (VI) and the learning target. While VI’s lower bound to the marginal likelihood is a suitable objective for inferring the approximate posterior, we show that a direct approximation of the marginal likelihood as in Expectation Propagation (EP) is a better learning objective for hyperparameter optimization. We design a hybrid training procedure to bring the best of both worlds: it leverages conjugate-computation VI for inference and uses an EP-like marginal likelihood approximation for hyperparameter learning. We compare VI, EP, Laplace approximation, and our proposed training procedure and empirically demonstrate the effectiveness of our proposal across a wide range of data sets.
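The paper's setting is non-conjugate likelihoods, where the marginal likelihood must be approximated (by VI's lower bound or an EP-style approximation) before hyperparameters can be learned. As background for what "the marginal likelihood as a learning objective" means, the sketch below optimizes the exact log marginal likelihood of a conjugate (Gaussian-noise) GP regression model, where no approximation is needed. This is a minimal illustration only; the kernel, data, and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, lengthscale, variance):
    """Squared-exponential kernel k(x, x') = s^2 * exp(-(x - x')^2 / (2 l^2))."""
    sq_dists = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def log_marginal_likelihood(theta, X, y):
    """Exact GP log marginal likelihood log p(y | theta) under Gaussian noise."""
    lengthscale, variance, noise = np.exp(theta)  # optimize in log space for positivity
    K = rbf_kernel(X, X, lengthscale, variance) + (noise + 1e-8) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

# Toy data: noisy sine observations
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 40)
y = np.sin(X) + 0.1 * rng.standard_normal(40)

# Hyperparameter learning: maximize log p(y | theta) over
# (lengthscale, signal variance, noise variance)
res = minimize(lambda t: -log_marginal_likelihood(t, X, y), x0=np.zeros(3))
lengthscale, variance, noise = np.exp(res.x)
```

With a non-Gaussian likelihood, `log_marginal_likelihood` has no closed form; the paper's point is that which approximation replaces it (VI's lower bound vs. an EP-style approximation) matters for the quality of the learned hyperparameters.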

Citation

Li, R, John, ST & Solin, A 2023, Improving Hyperparameter Learning under Approximate Inference in Gaussian Process Models. in A Krause, E Brunskill, K Cho, B Engelhardt, S Sabato & J Scarlett (eds), Proceedings of the 40th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 202, JMLR, pp. 19595-19615, International Conference on Machine Learning, Honolulu, Hawaii, United States, 23/07/2023. <https://proceedings.mlr.press/v202/li23m.html>