KEYWORDS: Cameras, Satellites, High dynamic range imaging, Satellite imaging, Earth observing sensors, Image fusion, Sensors, Video, Signal to noise ratio
Temporal bracketing can create images with a higher dynamic range than the underlying sensor provides. Unfortunately, moving objects cause disturbing artifacts. Moreover, the combination with high frame rates is almost unachievable, since a single video frame requires multiple sensor readouts. The combination of multiple synchronized side-by-side cameras equipped with different attenuation filters promises a remedy, since all exposures can be performed at the same time with the same duration using the playout video frame rate. However, a disparity correction is needed to compensate for the spatial displacement of the cameras. Unfortunately, the requirements for a high-quality disparity correction contradict the goal of increasing dynamic range. When using two cameras, disparity correction needs objects to be properly exposed in both cameras. In contrast, a dynamic range increase needs the cameras to capture different luminance ranges. As this contradiction has not been addressed in the literature so far, this paper proposes a novel solution based on a three-camera setup. It enables accurate determination of the disparities and an increase of the dynamic range by nearly a factor of two while still limiting costs. Compared to a two-camera solution, the mean opinion score (MOS) improves by 13.47 units on average for the Middlebury images.
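The merging step behind such a multi-camera setup can be illustrated with a small sketch. The function below is a hypothetical, minimal exposure merge (not the paper's method): given disparity-corrected images taken through different attenuation filters, each pixel's radiance is estimated as a weighted average, where the weights distrust clipped and near-noise-floor values. The function name, weight shape, and thresholds are illustrative assumptions.

```python
import numpy as np

def merge_hdr(images, attenuations, sat=0.95, noise_floor=0.05):
    """Merge disparity-corrected, differently attenuated exposures into one
    HDR radiance estimate. Weights favour well-exposed pixels; the hat-shaped
    weight and the thresholds are illustrative choices, not from the paper."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, a in zip(images, attenuations):
        img = img.astype(np.float64)
        # trust mid-tones, give zero weight to clipped or near-black pixels
        w = np.clip(np.minimum(img - noise_floor, sat - img) / (sat / 2), 0, 1)
        num += w * img / a          # undo attenuation -> scene radiance
        den += w
    return num / np.maximum(den, 1e-12)
```

With attenuations of 1, 0.5, and 0.25, a pixel that clips in the unattenuated camera is still recovered from the attenuated ones, which is the dynamic-range gain the abstract refers to.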
Image sensors for digital cameras are built with ever decreasing pixel sizes. The size of the pixels seems to be
limited by technology only. However, there are also hard theoretical limits for classical miniature camera systems:
During a certain exposure time only a certain number of photons will reach the sensor. The resulting shot noise
thus limits the signal-to-noise ratio. On the other hand, diffraction sets another limit for image resolution when
there is enough brightness in the scene. In this work we show that current sensors are already surprisingly
close to these limits.
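The shot-noise limit mentioned above follows directly from Poisson statistics: a pixel collecting N photoelectrons has signal N and noise standard deviation sqrt(N), so the SNR cannot exceed sqrt(N) no matter how good the electronics are. A minimal sketch (the photon count used below is an illustrative assumption):

```python
import math

def shot_noise_snr_db(photons):
    # Poisson statistics: signal = N, noise std = sqrt(N) -> SNR = sqrt(N)
    return 20 * math.log10(math.sqrt(photons))

# A hypothetical small pixel with a full well of ~4000 electrons is thus
# limited to roughly 10*log10(4000) ~ 36 dB, regardless of sensor quality.
```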
KEYWORDS: Cameras, Signal to noise ratio, Sensors, Signal attenuation, Prototyping, High dynamic range imaging, Image restoration, Image sensors, Spatial resolution
Although there is steady progress in sensor technology, imaging with a high dynamic range (HDR) is still difficult
for motion imaging with high image quality. This paper presents our new approach for video acquisition with high
dynamic range. The principle is based on optical attenuation of some of the pixels of an existing image sensor.
This well-known method traditionally trades spatial resolution for an increase in dynamic range. In contrast
to existing work, we use a non-regular pattern of optical ND filters for attenuation. This allows for an image
reconstruction that is able to recover high-resolution images. The reconstruction is based on the assumption
that natural images have a nearly sparse representation in transform domains, which allows for the recovery of
scenes with high detail. The proposed combination of non-regular sampling and image reconstruction leads to a
system with an increase in dynamic range without sacrificing spatial resolution. In this paper, we present a further
evaluation of the achievable image quality. In our prototype we found that crosstalk is present and significant.
The discussion thus shows the limits of the proposed imaging system.
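The sparsity-based reconstruction can be illustrated with a generic 1-D sketch. This is not the paper's algorithm, but a textbook iterative hard-thresholding scheme built on the same assumption: the signal is known only at the non-regularly sampled positions, and it is recovered by alternating between keeping the k largest Fourier coefficients and re-imposing the known samples. Function name and parameters are illustrative.

```python
import numpy as np

def sparse_reconstruct(measured, mask, k=8, iters=300):
    """Recover a signal known only at mask==True positions, assuming it has
    a (near-)sparse Fourier representation (iterative hard thresholding)."""
    x = np.where(mask, measured, 0.0)
    for _ in range(iters):
        X = np.fft.fft(x)
        idx = np.argsort(np.abs(X))[:-k]   # zero all but the k largest
        X[idx] = 0
        x = np.real(np.fft.ifft(X))
        x[mask] = measured[mask]           # re-impose known samples
    return x
```

For a signal with only a few active Fourier coefficients and a non-regular sampling mask, this alternating projection recovers the missing samples almost exactly; with a regular mask the aliased spectral copies would be indistinguishable, which is the motivation for non-regular filter placement.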
Although we observe steady progress in the development of High Dynamic Range Video (HDRV) technology,
current image sensors still lack the dynamic range required for high image quality applications. We propose
a new imaging principle that is based on a spatial variation of optical Neutral Density (ND) filters on top of some
pixels. In existing work, this method has been used to trade spatial resolution for an increase in dynamic range.
We improve this approach by a non-regular placement of these filters. The non-regular sampling is an important
step as any sub-sampling with regular patterns leads to aliasing. The non-regular patterns however preserve
just a single dominant spatial frequency and enable an image reconstruction without aliasing. In combination
with a new image reconstruction approach, we are able to recover image details at high resolution. The iterative
reconstruction is based on the assumption that natural images can be represented with few coefficients in the
Fourier domain. As typical natural images can be classified as near-sparse, the method enables the reconstruction
of images of high objective and visual quality. Extending the theory and simulation of this method, we
present details on a practical implementation. While building a demonstration system
we encountered many challenges, including effects like crosstalk and aspects like sensor selection, mask
fabrication, and mounting of the masks.
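One plausible construction of such a non-regular filter placement (an illustrative assumption, not necessarily the fabricated mask) puts exactly one ND-covered pixel at a random position inside every 2x2 block: the average density then matches a regular quarter pattern, but no single dominant spatial frequency is imposed on the sampling grid.

```python
import numpy as np

def nonregular_nd_mask(h, w, seed=0):
    """One plausible non-regular ND placement: exactly one covered pixel per
    2x2 block, at a random in-block position. Assumes even h and w.
    Illustrative sketch; the actual fabricated mask may differ."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=bool)
    for by in range(0, h, 2):
        for bx in range(0, w, 2):
            mask[by + rng.integers(2), bx + rng.integers(2)] = True
    return mask
```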
Image sensors for digital cameras are built with ever decreasing pixel sizes. The size of the pixels seems to be limited by technology only. However, there is also a hard theoretical limit for classical video camera systems: During a certain exposure time only a certain number of photons will reach the sensor. The resulting shot noise thus limits the signal-to-noise ratio. In this letter we show that current sensors are already surprisingly close to this limit.
Synthesizing novel views from originally available camera perspectives via depth maps is a key issue in the
3D video domain. Up to now, several high-resolution cameras are needed to obtain high-quality intermediate
synthesized views. One possibility to reduce costs with regard to the used camera array is to replace some cameras
by low-resolution cameras, which are cheaper on the one hand, but provide a much poorer image quality on the
other hand. Unfortunately, some of the information inside the desired intermediate view may only be available
in the low-resolution reference. Thus, the image quality of the low-resolution reference has a big influence on the
visual quality of the synthesized view. This paper proposes a postprocessing step for the synthesized view, based
on the non-local means algorithm. In this way, all areas inserted from the low-resolution reference are efficiently
adapted to their high-resolution environment. It is shown that the non-local means refined image merging leads
to a PSNR gain of up to 0.90 dB compared to an unrefined mixed-resolution setup. The approach can be easily
extended to a hole-filling algorithm and yields a PSNR gain of up to 0.81 dB for hole areas compared to a
reference hole-filling algorithm. The subjective image quality also increases convincingly in both applications.
The image processing pipeline of a traditional digital camera is often limited by processing power. Better
image quality could only be achieved if more complexity were allowed. In a raw data workflow most algorithms
are executed off-camera. This allows the use of more sophisticated algorithms for increasing image quality
while reducing camera complexity. However, this requires a major change in the processing pipeline: a lossy
compression of raw camera images might be used early in the pipeline. Subsequent off-camera algorithms then
need to work on modified data. We analyzed this problem for the interpolation of defect pixels. We found
that a lossy raw compression spreads the error from uncompensated defects over many pixels. This is
problematic, as the spread error can no longer be compensated after compression. The use of high quality, high complexity
algorithms in the camera is also not an option. We propose a solution to this problem: Inside the camera only a
simple and low complexity defect pixel interpolation is used. This significantly reduces the compression error for
neighbors of defects. We then perform a lossy raw compression and compensate for defects afterwards. The high
complexity defect pixel interpolation can be used off-camera. This leads to a high image quality while keeping
the camera complexity low.
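The low-complexity in-camera step can be as simple as replacing each defect with the mean of its valid neighbors before compression. The grayscale sketch below is illustrative only; a real raw pipeline would operate on the Bayer CFA and use same-color neighbors.

```python
import numpy as np

def interpolate_defects(raw, defect_mask):
    """Low-complexity in-camera step (sketch): replace each defect with the
    mean of its non-defective 4-neighbours, so the lossy raw compression does
    not spread a large defect error into neighbouring pixels."""
    out = raw.astype(np.float64).copy()
    H, W = raw.shape
    for y, x in zip(*np.nonzero(defect_mask)):
        vals = [raw[ny, nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < H and 0 <= nx < W and not defect_mask[ny, nx]]
        out[y, x] = np.mean(vals) if vals else out[y, x]
    return out
```

After decompression, the recorded defect map still allows a sophisticated off-camera interpolator to redo these pixels at full quality, which is the division of labor the abstract proposes.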
As with any sampling process, image acquisition is subject to spatial domain aliasing. Many designs of optical
filters have been proposed that reduce the amount of aliasing in image acquisition. Still, the most common
systems are based on birefringent optical materials. Unfortunately, these filters are expensive and the actual
filter response curve can only partially be influenced. We propose a new refractive optical low-pass filter (ROLPF)
based on refraction at a structured glass plate. The transparent plate features a single surface with a defined
structure. This causes refraction and thus a modified light distribution on the sensor. The plate is placed
in front of the sensor and is moved in the image sensor plane during an exposure. This causes an averaging
process and results in a spatial domain filtering. The response curve of this setup can be controlled by designing
the gradient of the surface. This filter is easy to manufacture, offers a true low-pass characteristic that
can be specifically designed, and its bandwidth can be adjusted at runtime by varying the distance to the
sensor.
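One simple way to model this filter (an illustrative small-angle model, not the paper's derivation): as the structured plate moves during exposure, each ray sees a distribution of surface gradients, and each gradient deflects the ray by an amount roughly proportional to gradient times plate-to-sensor distance. The effective blur kernel is then the histogram of these deflections, and the filter response is its Fourier magnitude, which explains why both the surface gradient design and the sensor distance control the bandwidth.

```python
import numpy as np

def rolpf_response(gradients, distance, n=256):
    """Sketch of a small-angle model for the moving structured plate: the
    blur kernel is the histogram of ray deflections (gradient * distance),
    and the low-pass response is its normalised Fourier magnitude."""
    shifts = np.asarray(gradients) * distance
    kernel, _ = np.histogram(shifts, bins=n, range=(-1, 1))
    kernel = kernel / kernel.sum()
    return np.abs(np.fft.rfft(kernel))   # response over spatial frequency
```

Under this model, a uniform gradient distribution yields a box blur (sinc-like response), and scaling the distance rescales the kernel width and hence the cutoff.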
Typical image sensors in digital cameras have a fixed sensitivity, and the amount of captured light energy is often
controlled by adjusting exposure time and lens aperture. For high-end motion imaging these settings are not
available, as they are used to set motion blur and depth of field, respectively. In many cases a proper exposure
is achieved with additional optical filtering, using so called "neutral density" (ND) filters. We propose a digital
equivalent of a neutral density filter, which can replace the handling of optical filters for camera systems. It
consists of an adjusted sensor readout and in-camera processing of images. Instead of a single long exposure
we capture N short exposures. These images are then combined by averaging. The short exposures reduce the
sensitivity by a factor of N, while averaging reconstructs motion blur. In addition we also achieve a reduction
of both dynamic and fixed pattern noise which leads to an overall increase in dynamic range. The digital ND
filter can be used with regular image sensors and does not require hardware modifications.
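The averaging step itself is a one-liner; the sketch below illustrates the claimed noise behavior. Averaging N short exposures keeps the mean (so the motion blur of the full interval is reconstructed) while uncorrelated temporal noise drops by roughly sqrt(N). Frame counts and noise levels in the example are illustrative assumptions.

```python
import numpy as np

def digital_nd(short_exposures):
    """Digital ND filter (sketch): average N short exposures instead of one
    long exposure. Sensitivity drops by a factor of N, motion blur over the
    full interval is reconstructed, and uncorrelated temporal noise is
    reduced by roughly sqrt(N)."""
    return np.mean(short_exposures, axis=0)
```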