Multimodal image fusion with joint sparsity model
Shutao Li, Haitao Yin
Abstract
Image fusion combines multiple images of the same scene into a single image that is better suited to human perception and practical applications. Different images of the same scene can be viewed as an ensemble of intercorrelated images. This paper proposes a novel multimodal image fusion scheme based on the joint sparsity model derived from distributed compressed sensing. First, the source images are jointly sparsely represented as common and innovation components using an over-complete dictionary. Second, the common and innovation sparse coefficients are combined into the jointly sparse coefficients of the fused image. Finally, the fused result is reconstructed from the obtained sparse coefficients. Furthermore, the proposed method is compared with several popular image fusion methods, such as multiscale transform-based methods and the simultaneous orthogonal matching pursuit-based method. The experimental results demonstrate the effectiveness of the proposed method in terms of visual quality and quantitative fusion evaluation indexes.
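The three steps of the abstract can be sketched in code. In the JSM-1 joint sparsity model, each source patch is modeled as x_i = D(z_c + z_i), with z_c the common component and z_i the innovation of source i; stacking the two patches yields one joint sparse-coding problem over a block dictionary, solvable with a greedy pursuit. The sketch below is a minimal illustration, not the paper's exact algorithm: the `omp` routine is a bare-bones orthogonal matching pursuit, and the fusion rule (summing the common and both innovation coefficients) is one simple assumed choice.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: select up to k atoms of A to fit y."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(A.shape[1])
    sol = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit on the selected support.
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    coef[support] = sol
    return coef

def joint_sparse_fuse(x1, x2, D, k):
    """Fuse two patches under the joint sparsity model x_i = D(z_c + z_i)."""
    n, m = D.shape
    Z = np.zeros_like(D)
    # Joint dictionary acting on the stacked coefficients [z_c; z_1; z_2]:
    # the first block of columns models the common part, the rest the innovations.
    Dj = np.block([[D, D, Z],
                   [D, Z, D]])
    z = omp(Dj, np.concatenate([x1, x2]), k)
    zc, z1, z2 = z[:m], z[m:2 * m], z[2 * m:]
    # Assumed fusion rule: merge common and innovation coefficients, then reconstruct.
    return D @ (zc + z1 + z2)
```

Because a common atom appears in both stacked blocks, it correlates roughly twice as strongly with the stacked observation as an innovation atom, so energy shared by both sources is naturally captured by z_c, while source-specific detail lands in the innovations.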
©(2011) Society of Photo-Optical Instrumentation Engineers (SPIE)
Shutao Li and Haitao Yin "Multimodal image fusion with joint sparsity model," Optical Engineering 50(6), 067007 (1 June 2011). https://doi.org/10.1117/1.3584840
Published: 1 June 2011
CITATIONS
Cited by 99 scholarly publications and 2 patents.
KEYWORDS
Image fusion, Associative arrays, Discrete wavelet transforms, Stationary wavelet transform, Image sensors, Lithium, Visualization
