Browsing by Author "Kadri, Hachem"
Now showing 1 - 3 of 3
Item Cross-view kernel transfer (Elsevier Limited, 2022-09)
Authors: Huusari, Riikka; Capponi, Cécile; Villoutreix, Paul; Kadri, Hachem
Affiliations: Department of Computer Science; Professorship Rousu Juho; Aix-Marseille Université

We consider the kernel completion problem in the presence of multiple views in the data. In this setting, data samples can be entirely missing in some views, creating missing rows and columns in the kernel matrices computed individually for each view. We propose to complete these kernel matrices with the Cross-View Kernel Transfer (CVKT) procedure, in which the features of the other views are transformed to represent the view under consideration. The transformations are learned by kernel alignment to the known part of the kernel matrix, which allows generalizable structure to be found in the kernel matrix under completion; its missing values can then be predicted from the data available in the other views. We illustrate the benefits of our approach on simulated data, a multivariate digits dataset, a multi-view gesture classification dataset, and real biological datasets from studies of pattern formation in early Drosophila melanogaster embryogenesis.

Item Entangled Kernels - Beyond Separability (Microtome Publishing, 2021-01)
Authors: Huusari, Riikka; Kadri, Hachem
Affiliations: Professorship Rousu Juho; Aix-Marseille Université; Department of Computer Science

We consider the problem of operator-valued kernel learning and investigate the possibility of going beyond the well-known separable kernels. Borrowing tools and concepts from the field of quantum computing, such as partial trace and entanglement, we propose a new view on operator-valued kernels and define a general family of kernels that encompasses previously known operator-valued kernels, including separable and transformable kernels. Within this framework, we introduce a novel class of operator-valued kernels called entangled kernels that are not separable.
We propose an efficient two-step algorithm for this framework, in which the entangled kernel is learned via a novel extension of kernel alignment to operator-valued kernels. We illustrate our algorithm with an application to supervised dimensionality reduction and demonstrate its effectiveness on both artificial and real data for multi-output regression.

Item Partial Trace Regression and Low-Rank Kraus Decomposition (PMLR, 2020)
Authors: Kadri, Hachem; Ayache, Stéphane; Huusari, Riikka; Rakotomamonjy, Alain; Ralaivola, Liva
Affiliations: Department of Computer Science; Professorship Rousu Juho; Aix-Marseille Université; Université de Rouen; Criteo AI Lab

The trace regression model, a direct extension of the well-studied linear regression model, allows one to map matrices to real-valued outputs. We here introduce an even more general model, the partial-trace regression model: a family of linear mappings from matrix-valued inputs to matrix-valued outputs. This model subsumes the trace regression model and thus the linear regression model. Borrowing tools from quantum information theory, where partial trace operators have been extensively studied, we propose a framework for learning partial trace regression models from data by taking advantage of the so-called low-rank Kraus representation of completely positive maps. We demonstrate the relevance of our framework with synthetic and real-world experiments for both (i) matrix-to-matrix regression and (ii) positive semidefinite matrix completion, two tasks that can be formulated as partial trace regression problems.
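Kernel alignment, used as the learning criterion in the first two items above, measures the similarity between two kernel matrices. A minimal numpy sketch of the standard centered kernel alignment (the `center` and `alignment` helper names and the toy data are illustrative, not taken from the papers):

```python
import numpy as np

def center(K):
    # Center a kernel matrix: K_c = H K H with H = I - (1/n) 1 1^T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K1, K2):
    # Centered alignment: <K1c, K2c>_F / (||K1c||_F * ||K2c||_F), in [0, 1]
    K1c, K2c = center(K1), center(K2)
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

# Toy check: a kernel is perfectly aligned with itself
X = np.random.default_rng(0).standard_normal((5, 3))
K = X @ X.T  # linear kernel on toy data
print(round(alignment(K, K), 6))  # 1.0
```

In completion settings such as CVKT, the alignment would be computed only on the known block of the kernel matrix; the sketch above shows the full-matrix criterion.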
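The last two items borrow the partial trace and the Kraus representation of completely positive maps from quantum information theory. A small numpy sketch of both operations, under assumed conventions (a bipartite matrix of size (d1*d2) x (d1*d2), Kraus operators A_r of shape p x d; the function names are illustrative):

```python
import numpy as np

def partial_trace(M, d1, d2):
    # Trace out the second subsystem of a (d1*d2) x (d1*d2) matrix,
    # returning a d1 x d1 matrix: reshape to (d1, d2, d1, d2) and
    # sum over the matching second-subsystem indices.
    return np.trace(M.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

def kraus_map(X, kraus_ops):
    # Completely positive map Phi(X) = sum_r A_r X A_r^T; a short list
    # of Kraus operators gives the low-rank Kraus representation.
    return sum(A @ X @ A.T for A in kraus_ops)

# Sanity check: tracing out a d2-dimensional subsystem of the identity
# yields d2 * I on the remaining subsystem.
print(partial_trace(np.eye(6), 2, 3))  # 3 * I_2
```

A completely positive map sends positive semidefinite inputs to positive semidefinite outputs, which is what makes the Kraus parameterization attractive for the PSD-valued regression tasks described above.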