Practical approaches to principal component analysis in the presence of missing values
Faculty of Information and Natural Sciences
D4 Published development or research report or study
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Authors
Date
2008
Major/Subject
Mcode
Degree programme
Language
en
Pages
v, 37
Series
TKK reports in information and computer science, 6
Abstract
Principal component analysis (PCA) is a classical data analysis technique that finds linear transformations of data that retain a maximal amount of variance. We study the case where some of the data values are missing, and show that this problem has many features usually associated with nonlinear models, such as overfitting and bad locally optimal solutions. A probabilistic formulation of PCA provides a good foundation for handling missing values, and we introduce formulas for doing that. In the case of high-dimensional and very sparse data, overfitting becomes a severe problem and traditional algorithms for PCA are very slow. We introduce a novel fast algorithm and extend it to variational Bayesian learning. Different versions of PCA are compared in artificial experiments, demonstrating the effects of regularization and modeling of posterior variance. The scalability of the proposed algorithm is demonstrated by applying it to the Netflix problem.
Keywords
principal component analysis (PCA), missing values, overfitting, regularization, variational Bayes
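The report introduces its own algorithms and formulas, including a variational Bayesian variant, which are not reproduced here. Purely as a rough illustration of the problem setting described in the abstract, the sketch below fits PCA to incomplete data with a simple EM-style iteration: missing entries are filled with the current low-rank reconstruction, and the principal components are then refit by truncated SVD. The function name pca_with_missing, the NaN convention for missing entries, and all parameter choices are assumptions made for this example, not taken from the report.

import numpy as np

def pca_with_missing(Y, n_components, n_iter=100):
    """EM-style PCA sketch for data with missing entries marked as NaN.

    Alternates between (1) filling the missing entries with the current
    rank-k reconstruction and (2) refitting the components by truncated SVD.
    Illustrative only; not the algorithm proposed in the report.
    """
    Y = np.asarray(Y, dtype=float)
    missing = np.isnan(Y)
    # Start by filling missing entries with column means
    # (assumes every column has at least one observed value).
    X = np.where(missing, np.nanmean(Y, axis=0), Y)
    for _ in range(n_iter):
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        # Rank-k reconstruction from the leading principal components.
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
        X_new = np.where(missing, recon, Y)
        if np.allclose(X_new, X, atol=1e-8):
            break
        X = X_new
    scores = U[:, :n_components] * s[:n_components]   # projections onto the components
    components = Vt[:n_components]                      # principal directions (rows)
    return scores, components, X

# Toy usage: a noisy rank-2 matrix with roughly 30% of the entries removed.
rng = np.random.default_rng(0)
true = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
Y = true + 0.01 * rng.normal(size=true.shape)
Y[rng.random(Y.shape) < 0.3] = np.nan
scores, components, filled = pca_with_missing(Y, n_components=2)
rmse = np.sqrt(np.mean((filled - true)[np.isnan(Y)] ** 2))
print("RMSE on the missing entries:", rmse)

As the abstract notes, the report's contribution lies in making this kind of iteration fast and well regularized, via probabilistic and variational Bayesian formulations of PCA, for high-dimensional and very sparse data such as the Netflix ratings.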