Multiple light scattering in tissue limits the penetration depth of optical coherence tomography (OCT) imaging. Here, we present in vivo OCT imaging of a live mouse using wavefront shaping (WS) to enhance the penetration depth. A digital micromirror device was used in a spectral-domain OCT system for complex WS of the incident beam, which resulted in optimal delivery of light energy into deep tissue. Ex vivo imaging of chicken breast and mouse ear tissues showed enhancements in image signal strength and penetration depth, and in vivo imaging of the tail of a live mouse revealed the multilayered structure inside the tissue.
This paper describes a method for correcting color distortion in color imaging. Color images acquired from CMOS or CCD digital sensors can suffer from color distortion, meaning that the sensor output differs from the original scene in color space. The main causes are cross-talk between adjacent pixels, the mismatch between the color pigments' characteristics and human perception, and infrared (IR) influx into the visible red, green, blue (RGB) channels due to IR cut-off filter imperfections. To correct this distortion, existing methods multiply each color channel by gain coefficients, and this multiplication can boost noise and lose detail information. This paper proposes a novel method that preserves the color-correction ability while suppressing noise boost and detail loss during the color correction of IR-corrupted pixels. For pixels without IR corruption, using the image before color correction instead of the IR image makes the same approach applicable. Specifically, the color and the low-frequency information in the luminance channel are extracted from the color-corrected image, while the high-frequency information comes from the IR image or the image before color correction. The low- and high-frequency information is extracted by multi-layer decomposition with edge-preserving filters.
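The decomposition described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a separable Gaussian blur stands in for the edge-preserving filter, and the function and parameter names (`merge_detail`, `sigma`) are hypothetical.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma=2.0):
    """Separable Gaussian blur (a stand-in for an edge-preserving filter)."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, ((radius, radius), (radius, radius)), mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def merge_detail(corrected_luma, precorrection_luma, sigma=2.0):
    """Take the low-frequency layer from the color-corrected luminance and
    the high-frequency (detail) layer from the image before color correction,
    so gain multiplication does not amplify noise in the detail."""
    base = blur(corrected_luma, sigma)                              # low frequency
    detail = precorrection_luma - blur(precorrection_luma, sigma)   # high frequency
    return base + detail
```

In the paper's pipeline the decomposition is edge-preserving and multi-layer; the single Gaussian layer here only illustrates the base/detail split.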
This paper presents a method for generating a refocused image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, can control the depth of field after capturing a single image. It is well known that such a camera captures the 4D light field (the angular and spatial information of light) on a limited 2D sensor, which reduces the 2D spatial resolution because of the unavoidable 2D angular data. This is why a refocused image has low spatial resolution compared with the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocused image. We have experimentally verified that the sub-pixel spatial information varies with the depth of objects from the camera. Therefore, for the selected refocused regions (corresponding depths), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while the other regions remain out of focus. Our experimental results show the effectiveness of the proposed method compared with an existing method.
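The refocusing step the paper builds on can be sketched as a shift-and-add over sub-aperture images. This is a simplified sketch, not the paper's method: it uses integer shifts only, whereas the sub-pixel shifts between angular views are exactly what the paper exploits to restore spatial resolution. The names `refocus` and `slope` are illustrative.

```python
import numpy as np

def refocus(subapertures, slope):
    """Shift-and-add refocusing of a light field.
    subapertures: dict mapping angular coordinates (u, v) -> 2-D image.
    slope: refocus parameter; each view is shifted in proportion to (u, v),
    bringing one depth plane into alignment before averaging."""
    acc = None
    for (u, v), img in subapertures.items():
        shifted = np.roll(np.roll(img, int(round(slope * u)), axis=0),
                          int(round(slope * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subapertures)
```

Objects at the chosen depth align across views and stay sharp; objects at other depths are averaged over misaligned positions and remain out of focus.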
This paper presents a method of digitally removing or correcting the chromatic aberration (CA) of a lens, which generally occurs in edge regions of an image. Based on the characteristics of the camera's lens and sensor, the method determines the CA level and the dominant chrominance of the CA, and efficiently removes extreme CA such as purple fringing and blooming artifacts, as well as the general CA generated at edges in an image captured by a camera. First, the method includes a CA region sensing part that analyzes the luminance signal of an input image and detects regions containing CA. Second, the CA level sensing part calculates a weight, indicating the degree of CA, from the difference between the gradients of the color components of the input image. Third, to remove extreme CA such as purple fringing and blooming artifacts, which are caused by the characteristics of the lens and sensor, it uses 1-D Gaussian filters with different sigma values to obtain the weight; the sigma value represents the lens and sensor characteristics. For removing the general CA, it includes an adaptive filter based on the luminance signal. Finally, using these weights, the final filter is produced adaptively according to the CA level and the lens and sensor characteristics. Experimental results show the effectiveness of the proposed method.
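The second step, deriving a CA weight from the gradient difference between color components, can be sketched as below. This is an illustrative simplification, not the paper's filter: the weight uses only horizontal gradients against the green channel, and the correction simply pulls chroma toward green where the weight is high; the names `ca_weight` and `suppress_ca` are hypothetical.

```python
import numpy as np

def ca_weight(r, g, b, eps=1e-6):
    """Per-pixel CA weight from the mismatch between the horizontal
    gradients of the red/blue channels and the green channel; a larger
    mismatch at an edge indicates stronger chromatic aberration."""
    gx = lambda c: np.abs(np.diff(c, axis=1, prepend=c[:, :1]))
    mismatch = np.abs(gx(r) - gx(g)) + np.abs(gx(b) - gx(g))
    return mismatch / (mismatch.max() + eps)    # normalized to [0, 1]

def suppress_ca(r, g, b):
    """Attenuate the red/blue deviation from green in proportion to the
    CA weight (a simple stand-in for the paper's adaptive filter)."""
    w = ca_weight(r, g, b)
    return g + (1 - w) * (r - g), g, g + (1 - w) * (b - g)
```

Where the three channels agree, the weight is zero and the image is untouched; where a displaced red or blue edge produces a fringe, the weight rises and the fringe is desaturated.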
In this paper, we propose a space-variant image restoration method in which different local regions of a given image are deblurred by different locally estimated deconvolution filters. The depth of each local block is roughly estimated from the optical module, which exhibits different indices of refraction for different wavelengths of light. According to the depth, each region of the image is restored based on the sharpest of the three color channels (red, green, blue). Then, to prevent discontinuities between the differently restored image regions, we apply piecewise linear interpolation on the overlapping regions. In practice, this method is applied to a 3-megapixel camera module to confirm the effectiveness of the proposed algorithm.
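Two building blocks of the approach above, picking the sharpest color channel in a local block and blending adjacent restored regions with piecewise linear interpolation, can be sketched as follows. This is a minimal sketch under simple assumptions (gradient energy as the sharpness measure, horizontal overlaps only); the function names are hypothetical.

```python
import numpy as np

def sharpness(channel):
    """Gradient-energy sharpness score for one color channel."""
    gy, gx = np.gradient(channel.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def sharpest_channel(block_rgb):
    """Pick the sharpest of R, G, B in a local block; because the lens
    focuses each wavelength at a different depth, the winning channel
    hints at the block's depth and guides restoration of the others."""
    scores = [sharpness(block_rgb[..., c]) for c in range(3)]
    return int(np.argmax(scores))

def linear_blend(left, right, overlap):
    """Piecewise linear interpolation across the overlap of two
    independently restored regions, avoiding seam discontinuities."""
    w = np.linspace(0.0, 1.0, overlap)[None, :]
    return left * (1 - w) + right * w
```

The blend ramps smoothly from one region's restoration to the other's across the overlap, so block boundaries do not show as seams.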