This paper describes a method for correcting color distortion in color imaging. Color images acquired from CMOS or CCD digital sensors can suffer from color distortion, meaning that the sensor image differs from the original scene in color space. The main causes are cross-talk between adjacent pixels, the mismatch of the color pigment characteristics with human perception, and infrared (IR) influx into the visible red, green, and blue (RGB) channels due to IR cutoff filter imperfection. To correct this distortion, existing methods multiply each color channel by gain coefficients, and this multiplication can boost noise and destroy detail. This paper proposes a novel method that preserves the color correction ability while suppressing noise boost and detail loss when correcting IR-corrupted pixels. For pixels without IR corruption, using the image before color correction in place of the IR image makes the same approach applicable. Specifically, the color information and the low-frequency luminance information are extracted from the color-corrected image, while the high-frequency information is taken from the IR image or the image before color correction. The low- and high-frequency components are separated by multi-layer decomposition with edge-preserving filters.
This paper presents a method for generating a refocused image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, can control the depth of field after capturing a single image. It is generally known that such a camera captures the 4D light field (the angular and spatial information of light) on a limited 2D sensor, and that the 2D spatial resolution is reduced because the sensor must also record the 2D angular data. This is why a refocused image has low spatial resolution compared with the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocused image. We have experimentally verified that the sub-pixel spatial information differs according to the depth of objects from the camera. Therefore, after selecting the regions to be refocused (and their corresponding depth), we apply the pre-estimated sub-pixel spatial information for that depth to reconstruct the spatial resolution of those regions, while the other regions remain out of focus. Our experimental results show the effectiveness of the proposed method compared to an existing method.
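As background, a refocused image is conventionally formed by shifting and averaging the sub-aperture views of the light field. The following is a minimal shift-and-add sketch, assuming a 4D array indexed as lightfield[u, v, s, t] with (u, v) angular and (s, t) spatial coordinates; the paper's depth-selective sub-pixel super-resolution step is not reproduced here.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing over a 4D light field (integer-shift
    approximation). alpha is the relative focal-plane depth; alpha = 1.0
    keeps the captured focus plane."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is shifted in proportion to its
            # angular offset from the center, then averaged.
            du = (u - (U - 1) / 2.0) * (1.0 - 1.0 / alpha)
            dv = (v - (V - 1) / 2.0) * (1.0 - 1.0 / alpha)
            out += np.roll(lightfield[u, v],
                           (int(round(du)), int(round(dv))), axis=(0, 1))
    return out / (U * V)
```

The averaging over U * V views is what trades angular samples for a single spatial image; the proposed method instead reuses those angular samples as sub-pixel observations to raise the resolution of in-focus regions.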
Despite the rapid spread of digital cameras, many people cannot take the high-quality pictures they want due to a lack of photographic skill. To help users in unfavorable capturing environments, e.g. 'Night', 'Backlighting', 'Indoor', or 'Portrait', the automatic mode of cameras provides parameter sets defined by manufacturers. Unfortunately, this automatic functionality does not give pleasing image quality in general. In particular, the length of exposure (shutter speed) is a critical factor in taking high-quality pictures at night. One of the key causes of poor night-time quality is image blur, which mainly comes from hand shake during long exposures. In this study, to circumvent this problem and to enhance the image quality of automatic cameras, we propose an intelligent camera processing core comprising SABE (Scene Adaptive Blur Estimation) and VisBLE (Visual Blur Limitation Estimation). SABE analyzes the high-frequency components in the DCT (Discrete Cosine Transform) domain. VisBLE determines the acceptable blur level on the basis of human visual tolerance and a Gaussian model. This visual tolerance model is developed from the physiological mechanism of human perception. In the experiments, the proposed method outperforms existing imaging systems as judged by general users and photographers alike.
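As an illustration of the DCT-domain analysis, here is a minimal sketch of a blockwise high-frequency energy score in Python; the block size and frequency cutoff are assumed values, and the actual SABE features and the VisBLE tolerance model are not specified in this abstract.

```python
import cv2
import numpy as np

def dct_blur_score(gray, block=8, radius=4):
    """Blockwise ratio of high-frequency DCT energy to total energy,
    averaged over the image. Lower scores suggest stronger blur.
    'block' and 'radius' are illustrative, not the paper's values."""
    img = gray.astype(np.float32)
    h = (gray.shape[0] // block) * block
    w = (gray.shape[1] // block) * block
    scores = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeff = cv2.dct(img[y:y + block, x:x + block])
            total = np.sum(np.abs(coeff))
            if total < 1e-6:
                continue  # skip flat blocks with no usable energy
            # High-frequency coefficients lie far from the DC corner (0, 0).
            u, v = np.meshgrid(np.arange(block), np.arange(block))
            high = np.sum(np.abs(coeff[(u + v) >= radius]))
            scores.append(high / total)
    return float(np.mean(scores)) if scores else 0.0
```

A VisBLE-style stage would then compare this score against a perceptual tolerance threshold to decide whether the estimated blur is visually acceptable.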
This paper describes a new method for fast auto focusing in image capturing devices, achieved by using two defocused images. At two prefixed lens positions, two defocused images are taken, and the defocus blur level in each image is estimated using the Discrete Cosine Transform (DCT). These DCT values can be mapped to the distance from the capturing device to the main object, so a classifier of distance versus defocus blur level can be built. With this classifier, the relation between the two defocus blur levels gives the device the best-focused lens step. Ordinary auto focusing such as Depth from Focus (DFF) needs several defocused images and compares the high-frequency components in each image; also known as the hill-climbing method, this process generally requires about half the number of images over all focus lens steps. Since the new method requires only two defocused images, it can save considerable focusing time and reduce shutter lag. Compared to existing Depth from Defocus (DFD) methods, which also use two defocused images, the new algorithm is simple yet as accurate as the DFF method. Because of this simplicity and accuracy, it can also be applied to fast 3D depth map construction.
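To illustrate the classifier step, here is a minimal lookup sketch, assuming a pre-calibrated table mapping each lens step to the expected pair of blur levels at the two prefixed positions; the table, the nearest-neighbor rule, and all names below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def best_focus_step(blur1, blur2, lut):
    """Two-image DFD lookup: given blur levels measured at two prefixed
    lens positions (e.g., via a DCT-based score), return the lens step
    whose calibrated (blur1, blur2) pair is nearest to the observation.
    lut: assumed table {lens_step: (expected_blur1, expected_blur2)}."""
    observed = np.array([blur1, blur2])
    steps = list(lut.keys())
    dists = [np.linalg.norm(observed - np.array(lut[s])) for s in steps]
    return steps[int(np.argmin(dists))]

# Usage sketch with a toy calibration table.
calibration = {0: (0.9, 0.1), 5: (0.5, 0.5), 10: (0.1, 0.9)}
print(best_focus_step(0.48, 0.55, calibration))  # -> 5
```

Because the lookup needs only the two measured blur levels, the lens can jump directly to the estimated step instead of hill-climbing through intermediate positions.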