Browsing by Author "Raninen, Elias"
Now showing 1 - 9 of 9
- Bias Adjusted Sign Covariance Matrix
A1 Original article in a scientific journal (2022) Raninen, Elias; Ollila, Esa
The spatial sign covariance matrix (SSCM), also known as the normalized sample covariance matrix (NSCM), has been widely used in signal processing as a robust alternative to the sample covariance matrix (SCM). It is well known that the SSCM does not provide consistent estimates of the eigenvalues of the shape matrix (normalized scatter matrix). To alleviate this problem, we propose BASIC (Bias Adjusted SIgn Covariance), which performs an approximate bias correction to the eigenvalues of the SSCM under the assumption that the samples are generated from zero-mean unspecified complex elliptically symmetric distributions (the real-valued case is also addressed). We then use the bias correction to develop a robust regularized SSCM-based estimator, the BASIC Shrinkage estimator (BASICS), which is suitable for high-dimensional problems, where the dimension can be larger than the sample size. We assess the proposed estimator with several numerical examples as well as in a linear discriminant analysis (LDA) classification problem with real data sets. The simulations show that the proposed estimator compares well to competing robust covariance matrix estimators but has the advantage of being significantly faster to compute.
- A comparative study of supervised learning algorithms for symmetric positive definite features
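As a hedged illustration of the construction in the BASIC abstract: the SSCM is simply the SCM of the centered samples after projection onto the unit sphere. This numpy sketch is not the authors' implementation, and the bias-correction step of BASIC is omitted:

```python
import numpy as np

def sscm(X):
    """Spatial sign covariance matrix of the rows of X (n samples, p variables).

    Minimal sketch: center by the sample mean (the paper assumes zero-mean
    data) and average the outer products of the unit-norm sample vectors.
    """
    n, _ = X.shape
    S = X - X.mean(axis=0)                            # center the samples
    S = S / np.linalg.norm(S, axis=1, keepdims=True)  # project to unit sphere
    return (S.T @ S) / n                              # average outer products

rng = np.random.default_rng(0)
C = sscm(rng.standard_normal((500, 4)))
print(round(np.trace(C), 6))  # → 1.0 (each sign vector has unit norm)
```

Because every sign vector has unit norm, the SSCM always has unit trace, which is one way to see why its raw eigenvalues are biased for the eigenvalues of the shape matrix and a correction such as BASIC is needed.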
A4 Article in conference proceedings (2020) Mian, Ammar; Raninen, Elias; Ollila, Esa
In recent years, the use of Riemannian geometry has reportedly shown increased performance for machine learning problems whose features lie in the symmetric positive definite (SPD) manifold. The present paper reviews several approaches based on this paradigm and provides a reproducible comparison of their output on a classic learning task of pedestrian detection. Notably, the robustness of these approaches to corrupted data is assessed.
- Contributions to Theory and Estimation of High-Dimensional Covariance Matrices
School of Electrical Engineering | Doctoral dissertation (article-based) (2022) Raninen, Elias
High-dimensional, low-sample-size problems have become increasingly common in modern data science. Generally speaking, as the dimension grows, so does the number of parameters that need to be estimated. In multivariate statistics, the covariance matrix describes the second-order associations between the variables, and it is a fundamental building block for many algorithms and statistical data analysis methods. The estimation of a high-dimensional covariance matrix is, however, a very challenging problem, not least because the number of unknown parameters increases quadratically with the dimension. A particularly difficult regime for parameter estimation is the case when the dimension of the data exceeds the number of observations. In this regime, classical methods no longer work, and it becomes necessary to impose additional structure on the data or the model parameters using prior knowledge or simplifying assumptions. This thesis develops theory and methods for covariance matrix estimation in the high-dimensional, low-sample-size regime. Different scenarios are considered, such as a single-population setting and a multiple-populations setting. The primary modeling tools used in this thesis are real and complex elliptically symmetric (ES) distribution theory and regularization. In this thesis, high-dimensional covariance matrix estimators are developed based on finding an optimal linear combination of the sample covariance matrix (SCM) with one or multiple target matrices. To this end, several theoretical properties of the SCM are derived under real and complex ES distributions, such as explicit expressions for the variance-covariance matrix of the SCM and its mean squared error (MSE). In the multiple-populations setting, we study different methods of pooling the class SCMs in order to reduce the overall estimation error.
A coupled regularized SCM estimator and a linear pooling method are developed. The thesis also considers regularized high-dimensional robust estimation of the shape matrix (normalized covariance matrix). To this end, the spatial sign covariance matrix (SSCM) is used, which is the SCM computed from centered samples normalized to unit norm. Several properties of the SSCM under ES distributions are also derived. For example, the expectation of a complex weighted SCM is derived, which includes as a special case the expectation of the SSCM. Furthermore, an asymptotic unbiasedness result and an approximate bias correction scheme for the SSCM are developed. All of the proposed methods are shown, both via simulations and real data examples, to be computationally effective and potentially useful in many practical applications involving high-dimensional covariance matrices. Specifically, we demonstrate their usefulness in classification and portfolio optimization problems.
- Coupled regularized sample covariance matrix estimator for multiple classes
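The single-target idea running through the dissertation, a linear combination of the SCM with a shrinkage target, can be sketched as follows. The fixed weight `alpha` is purely illustrative; the thesis derives MSE-optimal, data-adaptive weights:

```python
import numpy as np

def shrink_scm(X, alpha=0.5):
    """Convex combination of the SCM with a scaled identity target (sketch)."""
    p = X.shape[1]
    S = np.cov(X, rowvar=False)               # sample covariance matrix (SCM)
    target = (np.trace(S) / p) * np.eye(p)    # scaled identity target
    return alpha * S + (1 - alpha) * target

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 20))             # n < p: the SCM is singular
S_reg = shrink_scm(X)
print(np.linalg.eigvalsh(S_reg).min() > 0)    # → True: shrinkage restores PD
```

Even with the dimension larger than the sample size, the shrunk estimate is positive definite, which is the basic motivation for regularization in this regime.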
A1 Original article in a scientific journal (2021) Raninen, Elias; Ollila, Esa
The estimation of covariance matrices of multiple classes with limited training data is a difficult problem. The sample covariance matrix (SCM) is known to perform poorly when the number of variables is large compared to the available number of samples. In order to reduce the mean squared error (MSE) of the SCM, regularized (shrinkage) SCM estimators are often used. In this work, we consider regularized SCM (RSCM) estimators for multiclass problems that couple together two different target matrices for regularization: the pooled (average) SCM of the classes and the scaled identity matrix. Regularization toward the pooled SCM is beneficial when the population covariances are similar, whereas regularization toward the identity matrix guarantees that the estimators are positive definite. We derive the MSE-optimal tuning parameters for the estimators and propose a method for their estimation under the assumption that the class populations follow (unspecified) elliptical distributions with finite fourth-order moments. The MSE performance of the proposed coupled RSCMs is evaluated with simulations and in a regularized discriminant analysis (RDA) classification set-up on real data. The results based on three different real data sets indicate comparable performance to cross-validation but with a significant speed-up in computation time.
- Reconstruction of raw fMRI data in a two-person experimental setup (fMRI-raakadatan rekonstruointi kahden henkilön koeasetelmassa)
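A hedged sketch of the double-target idea from the coupled-RSCM abstract: each class SCM is shrunk toward both the pooled SCM and a scaled identity. The fixed weights `a` and `b` are illustrative stand-ins for the MSE-optimal tuning parameters derived in the paper:

```python
import numpy as np

def coupled_rscm(class_data, a=0.4, b=0.3):
    """Shrink each class SCM toward the pooled SCM and a scaled identity."""
    scms = [np.cov(X, rowvar=False) for X in class_data]
    ns = [X.shape[0] for X in class_data]
    pooled = sum(n * S for n, S in zip(ns, scms)) / sum(ns)   # pooled SCM
    p = pooled.shape[0]
    identity_t = (np.trace(pooled) / p) * np.eye(p)           # identity target
    return [a * S + b * pooled + (1 - a - b) * identity_t for S in scms]

rng = np.random.default_rng(2)
data = [rng.standard_normal((15, 30)), rng.standard_normal((12, 30))]
ests = coupled_rscm(data)
# positive definite even though both class SCMs are singular (n < p)
print(all(np.linalg.eigvalsh(E).min() > 0 for E in ests))  # → True
```

The identity-target weight is what guarantees positive definiteness, exactly as the abstract states.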
School of Electrical Engineering | Bachelor's thesis (2014-12-10) Raninen, Elias
- Linear pooling of sample covariance matrices
A1 Original article in a scientific journal (2022) Raninen, Elias; Tyler, David E.; Ollila, Esa
We consider the problem of estimating high-dimensional covariance matrices of K populations or classes in the setting where the sample sizes are comparable to the data dimension. We propose estimating each class covariance matrix as a distinct linear combination of all class sample covariance matrices. This approach is shown to reduce the estimation error when the sample sizes are limited and the true class covariance matrices share a somewhat similar structure. We develop an effective method for estimating the coefficients in the linear combination that minimize the mean squared error under the general assumption that the samples are drawn from (unspecified) elliptically symmetric distributions possessing finite fourth-order moments. To this end, we utilize the spatial sign covariance matrix, which we show (under rather general conditions) to be an asymptotically unbiased estimator of the normalized covariance matrix as the dimension grows to infinity. We also show how the proposed method can be used in choosing the regularization parameters for multiple target matrices in a single-class covariance matrix estimation problem. We assess the proposed method via numerical simulation studies, including an application in global minimum variance portfolio optimization using real stock data.
- On the Variability of the Sample Covariance Matrix Under Complex Elliptical Distributions
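The linear pooling idea above can be sketched in a few lines; the coefficient matrix `A` is hand-picked here, whereas the paper estimates MSE-optimal coefficients from the data:

```python
import numpy as np

def linear_pool(scms, A):
    """Estimate class k's covariance as sum_j A[k, j] * SCM_j (sketch)."""
    K = len(scms)
    return [sum(A[k, j] * scms[j] for j in range(K)) for k in range(K)]

rng = np.random.default_rng(3)
scms = [np.cov(rng.standard_normal((50, 5)), rowvar=False) for _ in range(3)]
A = np.array([[0.8, 0.1, 0.1],   # each class leans mostly on its own SCM
              [0.1, 0.8, 0.1],   # but borrows strength from the others
              [0.1, 0.1, 0.8]])
estimates = linear_pool(scms, A)
print(len(estimates), estimates[0].shape)  # → 3 (5, 5)
```

Single-target shrinkage toward the pooled SCM is the special case where each row of `A` places weight on the class's own SCM and a uniform average of the rest.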
A1 Original article in a scientific journal (2021-01-01) Raninen, Elias; Ollila, Esa; Tyler, David E.
We derive the form of the variance-covariance matrix for any affine equivariant matrix-valued statistic when sampling from complex elliptical distributions. We then use this result to derive the variance-covariance matrix of the sample covariance matrix (SCM) as well as its theoretical mean squared error (MSE) when finite fourth-order moments exist. Finally, illustrative examples of the formulas are presented.
- Optimal Pooling of Covariance Matrix Estimates Across Multiple Classes
A4 Article in conference proceedings (2018-09-10) Raninen, Elias; Ollila, Esa
The paper considers the problem of estimating the covariance matrices of multiple classes in a low-sample-support condition, where the data dimensionality is comparable to, or larger than, the sample sizes of the available data sets. In such conditions, a common approach is to shrink the class sample covariance matrices (SCMs) towards the pooled SCM. The success of this approach hinges upon the ability to choose the optimal regularization parameter. Typically, a common regularization level is shared among the classes and determined via a procedure based on cross-validation. We use class-specific regularization levels, since this enables the derivation of the optimal regularization parameter for each class in terms of the minimum mean squared error (MMSE). The optimal parameters depend on the true unknown class population covariances. Consistent estimators of the parameters can, however, be easily constructed under the assumption that the class populations follow (unspecified) elliptically symmetric distributions. We demonstrate the performance of the proposed method via a simulation study as well as via an application to discriminant analysis using both synthetic and real data sets.
- Scaled sparse linear regression with the elastic net
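A sketch of the class-specific shrinkage toward the pooled SCM described above; the per-class levels are fixed here for illustration, whereas the paper derives MMSE-optimal levels and consistent estimators of them under elliptical sampling:

```python
import numpy as np

def shrink_to_pooled(scms, ns, alphas):
    """Shrink each class SCM toward the pooled SCM, each with its own level."""
    pooled = sum(n * S for n, S in zip(ns, scms)) / sum(ns)
    return [a * S + (1 - a) * pooled for a, S in zip(alphas, scms)]

rng = np.random.default_rng(4)
ns = [20, 40]
scms = [np.cov(rng.standard_normal((n, 8)), rowvar=False) for n in ns]
# the class with less data borrows more from the pooled SCM (smaller alpha)
ests = shrink_to_pooled(scms, ns, alphas=[0.3, 0.6])
print([E.shape for E in ests])  # → [(8, 8), (8, 8)]
```

Letting each class have its own level, rather than one cross-validated level shared by all classes, is precisely the paper's point.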
School of Electrical Engineering | Master's thesis (2017-05-08) Raninen, Elias
Scaled linear regression is a form of penalized linear regression in which the penalty level is automatically scaled in proportion to the estimated noise level in the data. This makes the penalty parameter independent of the noise scale, enabling an analytical approach to choosing an optimal penalty level for a given problem. In this thesis, we first review conventional penalized regression methods, such as ridge regression, the lasso, and the elastic net. Then, we review some scaled sparse linear regression methods, the most relevant of which is the scaled lasso, also known as the square-root lasso. As an original contribution, we propose two elastic net formulations, which extend the scaled lasso to the elastic net framework. We demonstrate by numerical examples that the proposed estimators improve upon the scaled lasso in the presence of high correlations in the feature space. As a real-world application example, we apply the proposed estimators in a simulated single-snapshot direction-of-arrival (DOA) estimation problem, where we show that the proposed estimators perform better, especially when the angles of incidence of the DOAs are oblique with respect to the uniform linear array (ULA) axis.
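A minimal numpy sketch of the scaled-lasso idea reviewed in the thesis: alternate between re-estimating the noise scale from the residuals and solving a lasso whose penalty is proportional to that scale. The coordinate-descent solver and the choice of `lam0` are illustrative, not the thesis's implementation, and the proposed elastic-net extensions are omitted:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # residual excluding feature j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def scaled_lasso(X, y, lam0, n_outer=20):
    """Alternate noise-scale and lasso updates (scaled / square-root lasso idea)."""
    n = len(y)
    sigma = np.std(y)                          # initial noise-level guess
    for _ in range(n_outer):
        b = lasso_cd(X, y, lam0 * sigma)       # penalty scales with sigma
        sigma = np.linalg.norm(y - X @ b) / np.sqrt(n)
    return b, sigma

rng = np.random.default_rng(5)
n, p, true_sigma = 100, 10, 0.5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[0], beta[5] = 2.0, -3.0       # sparse ground truth
y = X @ beta + true_sigma * rng.standard_normal(n)
b_hat, sigma_hat = scaled_lasso(X, y, lam0=np.sqrt(2 * np.log(p) / n))
print(np.flatnonzero(np.abs(b_hat) > 0.1), round(sigma_hat, 2))
```

Because the penalty is `lam0 * sigma`, the regularization level tracks the estimated noise scale automatically, which is what makes an analytical choice of `lam0` possible.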