Multimodal image fusion via coupled feature learning

Access rights
openAccess
A1 Original article in a scientific journal
Date
2022-11
Language
en
Series
Signal Processing, Volume 200
Abstract
This paper presents a multimodal image fusion method using a novel decomposition model based on coupled dictionary learning. The proposed method is general and can be applied to a variety of imaging modalities. The images to be fused are decomposed into correlated and uncorrelated components: the correlated components are modeled by sparse representations with identical supports, and the uncorrelated components are separated using a Pearson correlation constraint. The resulting optimization problem is solved by an alternating minimization algorithm. In contrast to other learning-based fusion methods, the proposed approach requires no training data; the correlated features are extracted online from the data itself. By preserving the uncorrelated components in the fused images, the proposed method significantly improves on current fusion approaches in maintaining texture details and modality-specific information. The maximum-absolute-value rule is used for fusing the correlated components only, which enhances contrast resolution without causing intensity attenuation or loss of important information. Experimental results show that the proposed method achieves superior performance in both visual and objective evaluations compared to state-of-the-art image fusion methods.
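The abstract describes the method only in prose. As a minimal, hedged sketch of the final fusion step (not the authors' implementation), the Python below assumes the coupled-dictionary decomposition into correlated and uncorrelated components has already been computed; the function name, the elementwise application of the maximum-absolute-value rule, and the way the uncorrelated parts are recombined are all illustrative assumptions.

```python
import numpy as np

def fuse_components(corr_a, corr_b, uncorr_a, uncorr_b):
    """Hypothetical fusion step for two co-registered source images.

    corr_a/corr_b are the correlated components of each modality;
    uncorr_a/uncorr_b are the uncorrelated (modality-specific) parts.
    """
    # Maximum-absolute-value rule, applied to the correlated components
    # only: at each position keep the value with the larger magnitude.
    # (The paper applies this rule to sparse coefficients; elementwise
    # application here is a simplifying assumption.)
    keep_a = np.abs(corr_a) >= np.abs(corr_b)
    fused_corr = np.where(keep_a, corr_a, corr_b)

    # Preserve the uncorrelated components of both modalities so that
    # texture details and modality-specific information survive; summing
    # them is an assumption about the recombination.
    return fused_corr + uncorr_a + uncorr_b

# Toy usage with random stand-ins for the decomposed components.
rng = np.random.default_rng(0)
corr_a, corr_b = rng.standard_normal((2, 64, 64))
uncorr_a, uncorr_b = 0.1 * rng.standard_normal((2, 64, 64))
fused = fuse_components(corr_a, corr_b, uncorr_a, uncorr_b)
print(fused.shape)  # (64, 64)
```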
Description
Publisher Copyright: © 2022 The Author(s)
Keywords
Coupled dictionary learning, Infrared images, Joint sparse representation, Multimodal image fusion, Multimodal medical imaging
Citation
Veshki, F. G., Ouzir, N., Vorobyov, S. A. & Ollila, E. 2022, 'Multimodal image fusion via coupled feature learning', Signal Processing, vol. 200, 108637. https://doi.org/10.1016/j.sigpro.2022.108637