Advances in variational Bayesian nonlinear blind source separation

Doctoral thesis (article-based)
Date
2005-05-13
Language
en
Pages
74, [83]
Series
Dissertations in computer and information science. Report D, 10
Abstract
Linear data analysis methods such as factor analysis (FA), independent component analysis (ICA) and blind source separation (BSS), as well as state-space models such as the Kalman filter model, are used in a wide range of applications. In many of these, linearity is just a convenient approximation of an underlying nonlinear effect, so nonlinear methods would be more appropriate. In this work, nonlinear generalisations of FA and ICA/BSS are presented. The methods are based on a generative model in which a multilayer perceptron (MLP) network models the nonlinear mapping from the latent variables to the observations. The model is estimated using variational Bayesian learning.

The variational Bayesian method is well suited to nonlinear data analysis problems. The approach is also theoretically interesting: essentially the same method is used in several different fields and can be derived from several different starting points, including statistical physics, information theory, Bayesian statistics and information geometry. These complementary views help in interpreting the operation of the learning method and its results.

Much of the work presented in this thesis consists of improvements that make the nonlinear factor analysis and blind source separation methods faster and more stable, while remaining applicable to other learning problems as well. The improvements include methods to accelerate the convergence of alternating optimisation algorithms such as the EM algorithm, and an improved approximation of the moments of a nonlinear transform of a multivariate probability distribution. These improvements can easily be applied to other models besides FA and ICA/BSS, such as nonlinear state-space models. A specialised version of the nonlinear factor analysis method for post-nonlinear mixtures is presented as well.
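The pattern-search acceleration of alternating optimisation mentioned in the abstract can be illustrated with a toy example: after each cycle of block updates, take the direction of recent parameter change and perform a line search along it. The sketch below uses a simple strongly coupled quadratic objective with exact block updates; it is an illustrative example of the general idea, not the thesis implementation (all names and the objective are chosen for this sketch).

```python
import numpy as np

# Quadratic objective 0.5 * theta' H theta with strong coupling between
# the two coordinate blocks; plain cyclic updates converge slowly here.
H = np.array([[2.0, 1.9],
              [1.9, 2.0]])

def f(theta):
    return 0.5 * theta @ H @ theta

def cyclic_update(theta):
    # Exact minimisation over each coordinate in turn (one cycle)
    theta = theta.copy()
    theta[0] = -H[0, 1] * theta[1] / H[0, 0]
    theta[1] = -H[1, 0] * theta[0] / H[1, 1]
    return theta

def solve(theta0, n_iter, pattern_search):
    theta = theta0.copy()
    for _ in range(n_iter):
        new = cyclic_update(theta)
        if pattern_search:
            d = new - theta                       # direction of recent progress
            # Exact line search along d (closed form for a quadratic):
            # minimise f(new + t * d) over t
            t = -(new @ H @ d) / (d @ H @ d)
            new = new + t * d
        theta = new
    return theta

theta0 = np.array([1.0, 1.0])
plain = solve(theta0, 10, pattern_search=False)
fast = solve(theta0, 10, pattern_search=True)
print(f(plain), f(fast))   # the pattern-search run reaches a lower objective
```

The line search costs one extra objective/gradient evaluation per cycle but can cut the number of cycles substantially when successive updates keep moving in roughly the same direction, which is the regime the thesis's acceleration method (Part 2) targets.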
Keywords
Bayesian learning, blind source separation, latent variable models, nonlinear blind source separation, nonlinear factor analysis, nonlinear models, post-nonlinear mixing, unsupervised learning, variational methods
Parts
  • H. Lappalainen and A. Honkela. 2000. Bayesian non-linear independent component analysis by multi-layer perceptrons. In: M. Girolami (editor), Advances in Independent Component Analysis, pp. 93-121. [article1.pdf] © 2000 Springer-Verlag. By permission.
  • A. Honkela, H. Valpola and J. Karhunen. 2003. Accelerating cyclic update algorithms for parameter estimation by pattern searches. Neural Processing Letters 17 (2), pp. 191-203.
  • A. Honkela and H. Valpola. 2004. Variational learning and bits-back coding: an information-theoretic view to Bayesian learning. IEEE Transactions on Neural Networks 15 (4), pp. 800-810. [article3.pdf] © 2004 IEEE. By permission.
  • A. Ilin and A. Honkela. 2004. Post-nonlinear independent component analysis by variational Bayesian learning. In: Proceedings of the Fifth International Conference on Independent Component Analysis and Blind Signal Separation (ICA 2004). Lecture Notes in Computer Science, vol. 3195, pp. 766-773. [article4.pdf] © 2004 Springer-Verlag. By permission.
  • A. Honkela, S. Harmeling, L. Lundqvist and H. Valpola. 2004. Using kernel PCA for initialisation of variational Bayesian nonlinear blind source separation method. In: Proceedings of the Fifth International Conference on Independent Component Analysis and Blind Signal Separation (ICA 2004). Lecture Notes in Computer Science, vol. 3195, pp. 790-797. [article5.pdf] © 2004 Springer-Verlag. By permission.
  • A. Honkela. 2004. Approximating nonlinear transformations of probability distributions for nonlinear independent component analysis. In: Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IJCNN 2004), pp. 2169-2174. [article6.pdf] © 2004 IEEE. By permission.
  • A. Honkela and H. Valpola. 2005. Unsupervised variational Bayesian learning of nonlinear models. In: L. Saul, Y. Weiss, and L. Bottou (editors), Advances in Neural Information Processing Systems 17, MIT Press, Cambridge, MA, USA, to appear. [article7.pdf] © 2005 by authors.
Permanent link to this item
https://urn.fi/urn:nbn:fi:tkk-005161