A robust multi-focus image fusion method is proposed to generate an all-in-focus image by merging multiple images of the same scene captured with different focus settings. The proposed method first estimates local focus maps using a novel Gaussian-model-based measure combined with joint bilateral filtering in HSV space. A propagation process is then conducted to obtain accurate focus maps based on a traditional natural image matting model that makes full use of spatial information. The fused all-in-focus image is finally generated by a focus-selection strategy. Experimental results demonstrate that the proposed method achieves state-of-the-art performance for multi-focus image fusion under various situations encountered in practice, even in cases with little edge information.
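The focus-selection step can be illustrated with a minimal sketch. Assumptions not in the abstract: the images are grayscale, local variance stands in for the Gaussian-model focus measure, the matting-based propagation is omitted, and `box_mean` / `fuse_by_focus` are hypothetical names.

```python
import numpy as np

def box_mean(a, r):
    """Mean over (2r+1) x (2r+1) windows, from shifted sums of an
    edge-padded copy of the array."""
    pad = np.pad(a, r, mode='edge')
    k = 2 * r + 1
    acc = np.zeros(a.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return acc / (k * k)

def fuse_by_focus(images, r=4):
    """At each pixel, keep the value from the source image with the
    highest local focus measure (here: local variance)."""
    focus = np.stack([box_mean(im**2, r) - box_mean(im, r)**2 for im in images])
    choice = focus.argmax(axis=0)            # index of sharpest source per pixel
    rows, cols = np.indices(choice.shape)
    return np.stack(images)[choice, rows, cols]
```

Blurred regions have low local variance, so the per-pixel argmax naturally selects the in-focus source at each location.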
Deep space image registration is an important part of space exploration research. To improve robustness and efficiency, a new method based on the geometric features of triangles constructed from neighboring stars is proposed. Considering the characteristics of deep space images, such as low signal-to-noise ratio and few stable features, the star points in the image are chosen as feature points, and the geometric distribution of the surrounding stars is used as the descriptor. First, the distance between every pair of stars is calculated, and the neighboring stars are determined by sorting these distances. Then the main direction of the current star is determined from the intensity distribution of its neighboring stars. The whole space is divided into eight quadrants in the clockwise direction, with the main direction as the starting direction. The strongest star in each quadrant is selected to construct the triangles that serve as the descriptor of the current star. Finally, a matching distance between stars is defined and calculated, and a voting matrix is established to determine the matching pairs. Experimental results show that, compared with traditional matching methods, the proposed algorithm achieves higher efficiency and precision under translation, rotation, and noise.
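The neighbor selection and sector partition described above can be sketched as follows. Assumptions not in the abstract: the "main direction" is taken as the intensity-weighted circular mean of neighbor angles, the eight "quadrants" are 45° sectors, triangle construction and voting are omitted, and `quadrant_descriptor` is a hypothetical name.

```python
import numpy as np

def quadrant_descriptor(stars, intensities, idx, k=16):
    """For star idx: pick the k nearest neighbors by sorted distance,
    estimate a main direction from their intensity distribution, split
    the plane into 8 clockwise sectors starting at that direction, and
    keep the brightest neighbor in each sector."""
    p = stars[idx]
    d = np.linalg.norm(stars - p, axis=1)
    order = np.argsort(d)[1:k + 1]          # k nearest, excluding the star itself
    vecs = stars[order] - p
    ang = np.arctan2(vecs[:, 1], vecs[:, 0])
    w = intensities[order]
    # main direction: intensity-weighted circular mean of neighbor angles
    main = np.arctan2((w * np.sin(ang)).sum(), (w * np.cos(ang)).sum())
    # clockwise angular offset from the main direction, in [0, 2*pi)
    rel = (main - ang) % (2 * np.pi)
    sector = (rel // (np.pi / 4)).astype(int)  # 8 sectors of 45 degrees
    picks = {}
    for s in range(8):
        mask = sector == s
        if mask.any():
            picks[s] = order[mask][np.argmax(w[mask])]
    return picks  # sector -> index of its brightest neighbor
```

Because both the sector boundaries and the neighbor set are defined relative to the star itself and its main direction, the selected stars (and hence the triangles built from them) are invariant to translation and rotation of the image.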
Digital cameras have received more and more attention because of their convenience in storing and transferring images, but several open problems remain active research topics. Auto white balance is one of them; it arises from the differences between image sensors and human eyes. When the illumination of the environment changes, a color cast appears in the sensor image, whereas the image perceived by the eye does not change, owing to color constancy. To reduce this inconsistency and recover an image of the same scene under a canonical illuminant, color adjustment according to the color temperature of the environment must be considered. In this paper, an auto white balance approach combining gray world and coincidence of chromaticity histogram (GWCCH) is proposed. Based on the basic assumptions of these two methods, it measures the color components in the image and selects the appropriate routine and parameters to perform auto white balance. Experimental results show that the proposed method can satisfy the assumption of gray world (GW) or of coincidence of chromaticity histogram (CCH) individually, and achieves good results in more scenes than either method alone.
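The gray-world half of the combined approach is simple to sketch: under the GW assumption, the average of each color channel in a sufficiently varied scene should be achromatic, so each channel is scaled toward the global mean. This is a minimal sketch of GW only; the chromaticity-histogram part and the routine selection of the paper's GWCCH method are not reproduced.

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: scale each channel so its mean matches
    the global gray level of the image.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    img = np.asarray(img, dtype=np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean of R, G, B
    gray = channel_means.mean()                       # target gray level
    gains = gray / np.maximum(channel_means, 1e-8)    # per-channel gain
    return np.clip(img * gains, 0.0, 1.0)
```

A scene dominated by one color violates the GW assumption and gets over-corrected, which is exactly the failure case that motivates combining GW with a second cue such as CCH.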
Influenced by climatic conditions such as haze, images and videos captured outdoors suffer from weak visibility and low contrast. Recently, an effective image haze removal method based on the dark channel prior was proposed. However, the brightness of its result is usually lower than the atmospheric light, which makes the whole image look dim. Moreover, the method is slow, so it cannot be applied in situations with strict real-time requirements, such as video streams. To solve these problems, an efficient algorithm for image and video dehazing is proposed in this paper. First, the transmission map of the hazy image is calculated based on fast fuzzy theory. Then, according to the statistical principle of the dark channel prior and the atmospheric scattering model, the haze-free image under ideal illumination is restored. A large number of experiments show that the proposed algorithm obtains better haze-free results for single images than the previous method. More importantly, its execution efficiency is greatly improved, so video streams can also be dehazed in real time, meeting demanding industrial requirements.
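The dark-channel-prior pipeline that this work builds on can be sketched as below. This follows the classic formulation (dark channel, atmospheric light from the brightest dark-channel pixels, transmission estimate, inversion of the scattering model I = J·t + A·(1 − t)); it is not the paper's fast-fuzzy transmission estimate, and the function names and parameter values are illustrative.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum
    filter over patch x patch windows (the dark channel)."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    h, w = dc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Minimal dark-channel dehazing sketch. img: float (H, W, 3) in [0, 1]."""
    dc = dark_channel(img, patch)
    # atmospheric light A: mean color of the brightest dark-channel pixels
    n = max(1, dc.size // 1000)
    idx = np.argpartition(dc.ravel(), -n)[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # transmission estimate t(x) = 1 - omega * dark_channel(I / A)
    t = np.maximum(1.0 - omega * dark_channel(img / A, patch), t0)
    # invert the atmospheric scattering model I = J*t + A*(1 - t)
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

The nested-loop minimum filter is the main cost driver here, which is why speeding up the transmission estimate, as this paper does, matters for video-rate use.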
Color filter arrays (CFA) based on complementary colors (CYMG) have been designed and used for their main advantages: higher spectral sensitivity and wider bandwidth than RGB CFAs, especially under low light or short integration times. For an interline-transfer, interlaced-readout complementary-color CFA CCD, we propose a color restoration method based on CYMG-YUV-RGB color space conversion. Specifically, summing horizontally adjacent pixels of the raw data provides estimates of the luminance channel Y, while subtracting horizontally interlaced adjacent pixels provides estimates of the chrominance (color difference) channels U and V individually. A 2 × 2 pixel block of raw data is the smallest cell from which the Y, U, and V channels can be computed. Subsequently, YUV is transformed to RGB linearly according to the conversion formula between CYMG and RGB. In this way, the raw data from the CCD are restored to RGB signals, which are convenient for post-processing such as white balance. Additionally, we apply an improved median filter to the U and V channels to remove the edge zipper noise caused by interpolation, which improves image quality.
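The sum/difference arithmetic can be made concrete under the usual complementary-color identities, Mg = R+B, Cy = G+B, Ye = R+G. These identities, the assumed cell layouts, and the resulting conversion matrix are our illustrative reconstruction, since the abstract does not state the sensor's exact CFA pattern or matrix.

```python
import numpy as np

# Under Mg = R+B, Cy = G+B, Ye = R+G, a (Mg, Ye / G, Cy) cell gives
# Y = 2R + 3G + 2B from the sum of its row sums and V = 2R - G from their
# difference; the neighboring (Mg, Cy / G, Ye) cell type gives the same Y
# and U = 2B - G. Y, U, V here are unnormalized.

M = np.array([[2.0,  3.0, 2.0],   # Y = 2R + 3G + 2B
              [0.0, -1.0, 2.0],   # U = 2B - G
              [2.0, -1.0, 0.0]])  # V = 2R - G
M_inv = np.linalg.inv(M)          # linear YUV -> RGB conversion

def yuv_from_cells(cell_v, cell_u):
    """cell_v = (Mg, Ye, G, Cy) block, cell_u = (Mg, Cy, G, Ye) block."""
    mg1, ye1, g1, cy1 = cell_v
    mg2, cy2, g2, ye2 = cell_u
    Y = (mg1 + ye1) + (g1 + cy1)   # sum of row sums: luminance estimate
    V = (mg1 + ye1) - (g1 + cy1)   # row difference: 2R - G
    U = (mg2 + cy2) - (g2 + ye2)   # row difference: 2B - G
    return Y, U, V

def rgb_from_yuv(Y, U, V):
    return M_inv @ np.array([Y, U, V])
```

Because U and V come from differences of interpolated neighbors, edges produce the zipper artifacts mentioned above, which is why the paper median-filters the U and V channels.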
Single image super-resolution is one of the most prevalent techniques in digital image processing, with a wide range of applications. In this paper, we analyze the well-known new edge-directed interpolation (NEDI) and propose an improved single image super-resolution method based on edge-directed interpolation that preserves edge features and efficiently reduces common artifacts. To obtain a good tradeoff between quality and speed, a new scheme that moves the local window along the edge direction is applied. Simulation results demonstrate that the proposed algorithm improves the subjective quality of the interpolated images over other conventional interpolation methods at competitive computational complexity.
In real applications, such as consumer digital imaging, it is very common to record weakly blurred and strongly noisy images. Recently, a state-of-the-art algorithm named geometric locally adaptive sharpening (GLAS) was proposed. By capturing local image structure, it can effectively combine denoising and sharpening. However, two problems remain in practice. On one hand, two hard thresholds have to be adjusted for each image to avoid over-sharpening artifacts. On the other hand, the smoothing parameter must be set precisely by hand; otherwise, it seriously magnifies the noise. Because these parameters must be set in advance and entirely empirically, the method is difficult to apply in practice. To improve on GLAS in this situation, an improved GLAS (IGLAS) algorithm is proposed in this paper by introducing the local phase coherence sharpening index (LPCSI) metric. With the help of the LPCSI metric, the two hard thresholds can be fixed at constant values for all images, so they no longer need to be tuned per image. Based on the proposed IGLAS, an automatic version is also developed to eliminate manual intervention. Simulated and real experiments show that the proposed algorithm not only obtains better performance than the original method but is also very easy to apply.
Image blind deconvolution is a practical inverse problem in modern imaging sciences, including consumer photography, astronomical imaging, medical imaging, and microscopy. Among recent blind deconvolution algorithms, total-variation-based methods perform well for large blur kernels. However, their computational cost is heavy, and they do not handle the estimated kernel error properly. Moreover, using the whole image to estimate the blur kernel is inaccurate, because regions with insufficient edge information harm the accuracy of the estimation. Here, we propose a robust multi-frame blind deconvolution algorithm to handle this complicated imaging model and apply it in engineering practice. In our method, a patch and kernel selection scheme selects effective patches for kernel estimation instead of using the whole image; a total-variation-based algorithm then estimates the blur kernel; after the blur kernels of all frames are estimated, a new kernel refinement scheme refines them; finally, a robust non-blind deconvolution method recovers the latent sharp image with the refined blur kernel. Objective experiments on both synthesized and real images evaluate the efficiency and robustness of our algorithm and show that it not only converges rapidly but also effectively recovers a high-quality latent image from multiple blurry images.
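The final non-blind step, recovering the latent image once a blur kernel is known, can be illustrated with a frequency-domain Wiener filter. This is a simple stand-in, not the paper's robust non-blind method (which is not specified in the abstract); `psf_to_otf` and the SNR parameter are our illustrative names.

```python
import numpy as np

def psf_to_otf(kernel, shape):
    """Embed the PSF in a zero array of the image's shape and shift its
    center to the origin, so FFT filtering introduces no spatial shift."""
    psf = np.zeros(shape)
    kh, kw = kernel.shape
    psf[:kh, :kw] = kernel
    psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(psf)

def wiener_deconv(blurred, kernel, snr=100.0):
    """Wiener filter conj(K) / (|K|^2 + 1/SNR), assuming a known blur
    kernel and circular boundary conditions."""
    K = psf_to_otf(kernel, blurred.shape)
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))
```

The 1/SNR term regularizes the division at frequencies where the kernel's spectrum is nearly zero; this sensitivity to kernel error is exactly why the kernel refinement step above matters before non-blind deconvolution is applied.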
Because of substrate back-reflectance, the reflectance of an optical thin film stack on a transparent substrate is quite different from that on an opaque substrate. In this paper, a method for measuring the thickness of low-reflectance optical films in the presence of substrate back-reflectance is proposed for the first time. Through analysis of the actual substrate back-reflectance, a compensation model is introduced to reduce its influence. The experimental results show good fitting precision and prove that this model can be used directly for measuring optical film thickness with substrate back-reflectance, with no extra processing needed.
Charge-coupled device (CCD) array spectrometers are increasingly used in a wide variety of scientific research and industrial applications. However, all CCD detectors exhibit some non-linearity in their response to light, which degrades the accuracy of CCD array spectrometer measurements and introduces a detectable error. Therefore, a non-linearity correction method is important for obtaining accurate results from CCD-based spectrometers. Here, we propose a convenient experimental and computational method to solve the non-linearity problem. Using the combined values of all pixels across the detector, a 7th-order polynomial is fitted to the relation between the normalized counts per second and the counts, and correction coefficients for the pixels are generated from this polynomial. The correction is applied by dividing the original response by the calculated correction coefficient for each pixel. After correction, the CCD detector response is linear to better than 99.5%; the experimental results show that the proposed method is reasonable and efficient.
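The fit-and-divide procedure can be sketched as follows. Assumptions not in the abstract: a 16-bit full scale used only to rescale counts for a well-conditioned fit, and the hypothetical names `fit_correction` / `linearize`.

```python
import numpy as np

FULL_SCALE = 65535.0  # assumed 16-bit ADC full scale (not stated in the text)

def fit_correction(counts, counts_per_second, order=7):
    """Fit a 7th-order polynomial to normalized counts-per-second vs.
    counts, rescaling counts to [0, 1] to keep the fit well conditioned.
    Returns a function mapping raw counts to correction coefficients."""
    norm_cps = np.asarray(counts_per_second) / np.max(counts_per_second)
    poly = np.poly1d(np.polyfit(np.asarray(counts) / FULL_SCALE, norm_cps, order))
    return lambda c: poly(np.asarray(c) / FULL_SCALE)

def linearize(raw, correction):
    """Apply the correction: divide the original response by the
    correction coefficient at each pixel's count value."""
    return np.asarray(raw) / correction(raw)
```

For an ideal detector the counts-per-second at fixed flux would be constant regardless of the accumulated counts, so the fitted polynomial directly captures the departure from linearity that the division removes.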