Shangzheng Liu, Jiuqiang Han, Bowen Liu, Xinman Zhang
Optical Engineering, Vol. 48, Issue 04, 047002 (April 2009). https://doi.org/10.1117/1.3119291
TOPICS: Image fusion, Visual system, Image sensors, Magnetic resonance imaging, Image quality, Optical engineering, Algorithm development, Computed tomography, Composites, Infrared imaging
With the rapid development of imaging sensors, a variety of image fusion algorithms have been proposed. However, these methods transfer only absolute information into the fused image and neglect relative information, i.e., contrast. Yet contrast is the key stimulus variable encoded by a neuron's activity. We propose a new image fusion method inspired by the human visual system, which uses contrast to represent the salient features of an image. First, we use the polyharmonic local sine transform to decompose each source image into two components: the polyharmonic component and the residual. Next, we compute the contrast at every pixel by dividing the residual by the average value of the polyharmonic component. The polyharmonic component and the residual are fused separately with different fusion rules. The fused image is then obtained by directly adding the composite polyharmonic component and the composite residual. Compared with existing image fusion methods such as the Laplacian pyramid, shift-invariant wavelet, and contourlet transforms, the inverse transformation is straightforward. The results demonstrate that the proposed algorithm is effective and outperforms conventional image fusion algorithms in terms of mutual information.
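To make the two-component, contrast-driven fusion pipeline concrete, the following is a minimal sketch only, not the authors' implementation: since the polyharmonic local sine transform is not available in standard libraries, the polyharmonic (smooth) component is approximated here by a Gaussian low-pass filter, and the function names (`decompose`, `fuse`) and parameters (`sigma`, `eps`) are hypothetical choices for illustration.

```python
# Sketch of contrast-driven two-component image fusion.
# Assumption: a Gaussian low-pass stands in for the polyharmonic component;
# the residual is the difference between the image and that smooth component.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=5.0):
    """Split an image into a smooth component and a residual (PHLST stand-in)."""
    smooth = gaussian_filter(img.astype(np.float64), sigma)
    residual = img.astype(np.float64) - smooth
    return smooth, residual

def fuse(img_a, img_b, sigma=5.0, eps=1e-6):
    """Fuse two registered, same-size grayscale images."""
    smooth_a, res_a = decompose(img_a, sigma)
    smooth_b, res_b = decompose(img_b, sigma)
    # Per-pixel contrast: residual magnitude relative to the smooth component.
    contrast_a = np.abs(res_a) / (np.abs(smooth_a) + eps)
    contrast_b = np.abs(res_b) / (np.abs(smooth_b) + eps)
    # Residual rule: keep the residual with the larger contrast at each pixel.
    fused_res = np.where(contrast_a >= contrast_b, res_a, res_b)
    # Smooth-component rule: a simple average (one common base-layer choice).
    fused_smooth = 0.5 * (smooth_a + smooth_b)
    # Reconstruction is a direct addition of the two fused components.
    return fused_smooth + fused_res
```

The separate fusion rules mirror the abstract's structure: a salience-based (maximum-contrast) selection for the residual, an averaging rule for the smooth component, and a reconstruction that is just the sum of the two composites, which is why no costly inverse transform is needed.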