Phase Measuring Deflectometry (PMD) is an optical metrology method for accurate 3D imaging of specular surfaces. So far, one of the limitations that has hindered the broader application of PMD outside metrology is the so-called height-normal ambiguity problem. Current solutions to this problem either rely on prior knowledge of the object’s shape or introduce additional hardware components (camera or display) to the setup. In this contribution, we propose a novel PMD concept that solves the height-normal ambiguity problem by leveraging polarization cues. We replace the classic deflectometry setup with an unpolarized display and a polarization camera. This allows us to uniquely calculate the reflection angle with the assistance of polarization cues, leading to simultaneous estimation of surface points and surface normals without ambiguity. Our method is capable of measuring complex surfaces since it does not require prior knowledge of the surface shape. Furthermore, our method requires only one camera, which can enhance measurement coverage compared to stereo PMD. Our experiments demonstrate sub-degree normal accuracy and successful reconstruction of complex surface shapes.
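The polarization cue can be made concrete with the Fresnel equations: for unpolarized illumination reflected off a dielectric, the degree of linear polarization (DoLP) seen by a polarization camera is a monotonic function of the angle of incidence below Brewster's angle, so it can be inverted for the reflection angle. The sketch below is a minimal illustration of this relationship only, assuming a dielectric surface of known refractive index (n = 1.5 here); the function names and the sample DoLP value are hypothetical, not the paper's actual calibration.

```python
import numpy as np

def fresnel_dolp(theta, n=1.5):
    """Degree of linear polarization of unpolarized light after specular
    reflection off a dielectric of refractive index n (Fresnel equations)."""
    theta_t = np.arcsin(np.sin(theta) / n)  # Snell's law
    rs = (np.cos(theta) - n * np.cos(theta_t)) / (np.cos(theta) + n * np.cos(theta_t))
    rp = (n * np.cos(theta) - np.cos(theta_t)) / (n * np.cos(theta) + np.cos(theta_t))
    Rs, Rp = rs**2, rp**2
    return (Rs - Rp) / (Rs + Rp)

# The DoLP curve is monotonic below Brewster's angle (~56 deg for n = 1.5),
# so a measured DoLP can be inverted for the reflection angle by interpolation.
thetas = np.linspace(1e-3, np.deg2rad(50), 2000)
dolp = fresnel_dolp(thetas)
measured_dolp = 0.35                      # illustrative camera reading
theta_est = np.interp(measured_dolp, dolp, thetas)
print(f"estimated reflection angle: {np.rad2deg(theta_est):.2f} deg")
```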
As a scanning version of coherent diffraction imaging (CDI), X-ray ptychography has become a popular and very successful method for high-resolution quantitative imaging of extended specimens. The requirements of mostly coherent illumination and the scanning mechanism limit the throughput of ptychographic imaging. In this paper, we introduce the methods we use at the Advanced Photon Source (APS) to achieve high-throughput ptychography by optimizing the parameters of the illumination beam. In the first effort, we increase the illumination flux by using double-multilayer monochromator (DMM) optics with about 0.8% bandwidth. Compared with our double-crystal monochromator (DCM) optics with 0.01% bandwidth, the DMM optics provide around 20 times more flux. A multi-wavelength reconstruction method has been implemented to deal with the consequent degraded temporal coherence of such an illumination and ensure high-quality reconstruction. In the second effort, we adopt a novel use of flat-top focusing optics to generate a flat-top beam with a diameter of about 1.5 μm on the focal plane. The better uniformity of the probe and the large beam size allow one to significantly increase the step size in ptychography scans and thereby the imaging efficiency.
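To see why a larger, flatter probe raises throughput: for a fixed overlap ratio, the scan step grows in proportion to the probe diameter, so the number of scan points over a given field of view drops roughly quadratically. A minimal back-of-the-envelope sketch, with illustrative probe diameters and overlap ratio rather than the actual APS parameters:

```python
import math

def scan_points(fov_um, step_um):
    """Number of points in a square raster scan covering the field of view."""
    n = math.ceil(fov_um / step_um) + 1
    return n * n

overlap = 0.7                              # typical ptychographic overlap ratio
for d_um in (0.6, 1.5):                    # focused vs flat-top probe diameter (illustrative)
    step = (1 - overlap) * d_um            # step size that preserves the same overlap
    print(f"{d_um} um probe -> {step:.2f} um step -> "
          f"{scan_points(50, step)} points over a 50x50 um field")
```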
The nascent field of indirect imaging is concerned with the recovery of information pertaining to objects that are beyond the line of sight (LoS) and hidden from view. Current approaches to indirect imaging are either limited in their ability to recover spatially resolved imagery (resolution of a few centimeters at 1-meter standoff) or impose severe restrictions on the imaging geometry. The present work examines two approaches that recover spatial detail on hidden objects by exploiting spatial and spectral correlation in the light scattered by the objects. Experiments have demonstrated the ability to discern sub-millimeter spatial detail on centimeter-sized objects positioned 1 meter behind a wall.
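One published route to exploiting such spatial correlations is the angular memory effect: within its range, the autocorrelation of the scattered speckle approximates the autocorrelation of the hidden object, which a phase-retrieval step can then invert for an image. The snippet below is a minimal sketch of that first step only, and is not necessarily the method used in this work:

```python
import numpy as np

def speckle_autocorrelation(img):
    """Autocorrelation of a camera frame via the Wiener-Khinchin theorem.
    Within the memory-effect range, this approximates the hidden object's
    autocorrelation, which phase retrieval can subsequently invert."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    return np.fft.fftshift(ac)
```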
Pigment identification and mapping give us insight into an artist's material use, allow us to measure slow chemical changes in painted surfaces, and allow us to detect anachronistic uses of materials that can be associated with either forgeries or past restorations. Earlier work has demonstrated the potential of a dictionary-based reflectance approach for pigment classification. This technique identifies pigments by searching for the pigment combinations that best reproduce the measured reflectance curve. The prospect of pigment classification through modeling is attractive because it can be extended to a layered medium -- potentially opening a route to a depth-resolved pigment classification method. In this work, we investigate a layered pigment classification technique with a fused deep learning and optimization-based Kubelka-Munk framework. First, we discuss the efficacy of the algorithm in a thick, single-layer system. Specifically, we consider the impacts of layer thickness, total pigment concentration, and spectrally similar pigment combinations. Following a thorough discussion of the single-layer problem, the system is generalized to multiple layers. Finally, as a concrete example, we use the two-layered system to demonstrate the impacts of both layer thickness and dictionary content on paint localization within the painting. Results of the algorithm are then shown for mock-up paintings for which the ground truth is known.
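For reference, the optimization side of such a framework rests on the classic Kubelka-Munk solution for a layer of finite thickness, which also composes naturally across layers: the reflectance of a lower layer serves as the background for the layer above it. A minimal sketch with illustrative absorption/scattering spectra, omitting the paper's deep-learning component:

```python
import numpy as np

def km_reflectance(K, S, X, Rg):
    """Kubelka-Munk reflectance of a layer with absorption K and scattering S
    (per unit length) and thickness X, over a background of reflectance Rg.
    All arguments may be spectral arrays."""
    a = 1.0 + K / S
    b = np.sqrt(a**2 - 1.0)
    coth = 1.0 / np.tanh(b * S * X)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

# Two-layer stack: evaluate the bottom layer over the substrate first,
# then use that result as the background for the top layer.
wl = np.linspace(400, 700, 31)                          # wavelengths, nm
K1 = 0.2 + 0.8 * (700 - wl) / 300                       # illustrative spectra, 1/mm
S1 = np.full_like(wl, 5.0)
K2, S2 = np.full_like(wl, 0.05), np.full_like(wl, 8.0)
R_bottom = km_reflectance(K1, S1, X=0.05, Rg=0.9)       # 50 um layer on white ground
R_stack  = km_reflectance(K2, S2, X=0.02, Rg=R_bottom)  # 20 um glaze on top
```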
The use of non-invasive hyperspectral imaging techniques has become standard practice in the materials analysis and study of precious cultural heritage objects such as drawings, paintings, murals and more. However, the non-linear mixing of spectral signatures from complex and heterogeneous objects with multiple colorants present below the resolution limits of the camera can complicate material identification. Consequently, ground truth measurements are still usually obtained from microscopic samples removed and embedded to expose stratigraphy and obtain sub-surface information about the artist's material choices and technique. This work considers a microscopic spectral imaging technique capable of mapping molecular information in such micro samples at high spatial and spectral resolution while avoiding some of the challenges of complementary techniques, such as swamping fluorescence in Raman spectroscopy or long integration times in FT-IR spectroscopy. Construction of a dark-field hyperspectral microscope for cultural heritage samples is described, using a tunable light source to illuminate the sample monochromatically from visible to near-infrared wavelengths, with the diffusely reflected light collected from the specimen with a long-working-distance 20x objective. The illumination and detection arms were decoupled to better focus the power of the tunable light source across the tunable range through Köhler illumination optics. By mounting the optical train on a rotating arm, we can achieve multiple angles of illumination and optimize lighting conditions. The sample is also rotated in order to reconstruct an even distribution of light across the field of view. This multi-axis movement capability also provides exciting opportunities to leverage more than simple spectral information from an image series, such as surface topography and differential phase contrast information. The developed microscope was used to create a library of spectral signatures for comparison to painting cross sections, and the ability of the microscope to identify and examine individual pigment particles was tested.
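Once such a library exists, one simple way to compare a measured spectrum against it is the spectral angle, which is insensitive to overall brightness differences between illumination geometries. This is a common generic metric, offered here only as a hedged illustration of library matching, not necessarily the comparison used by the authors:

```python
import numpy as np

def spectral_angle(s, library):
    """Spectral angle (radians) between a measured reflectance spectrum s
    and each row of a reference library of spectra."""
    s = s / np.linalg.norm(s)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    return np.arccos(np.clip(lib @ s, -1.0, 1.0))

# best_match = np.argmin(spectral_angle(measured_spectrum, pigment_library))
```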
KEYWORDS: Optical coherence tomography, Mirrors, Sensors, Signal to noise ratio, Imaging systems, Data acquisition, Image resolution, Cultural heritage, Reflectivity, Control systems
Accurate measurements of the geometric shape and the internal structure of cultural artifacts are of great importance for the analysis and understanding of artworks such as paintings. Often their complex layers, delicate materials, high value and uniqueness preclude all but the sparsest sample-based measurements (microtomy or embedding of small chips of paint). In the last decade, optical coherence tomography (OCT) has enabled dense point-wise measurements of layered surfaces to create 3D images with axial resolutions at micron scales. Commercial OCT systems at biologically useful wavelengths (900 nm to 1.3 μm) can reveal some painting layers, but strong scattering and absorption at these wavelengths severely limit the penetration depth. While Fourier-domain methods increase measurement speed and eliminate moving parts, they also reduce signal-to-noise ratios and increase equipment costs. In this paper, we present an improved lower-cost time-domain OCT (TD-OCT) system for deeper, high-resolution 3D imaging of painting layers. Assembled entirely from recently available commercially made parts, its 2x2 fused fiber-optic coupler forms an interferometer without a delicate, manually aligned beam-splitter; its low-cost broadband Q-switched super-continuum laser source supplies 20 kHz, 0.4-2.4 μm coherent pulses that penetrate deeply into the sample matrix; and its single low-cost InGaAs amplified photodetector replaces the sensitive spectroscopic camera required by Fourier-domain OCT (FD-OCT) systems. Our fiber and filter choices operate at 2.0±0.2 μm wavelengths, as these may later help us characterize scattering and absorption characteristics, and yield an axial resolution of about 4.85 μm, surprisingly close to the theoretical limit of 4.41 μm. We show that despite the moving parts that make TD-OCT measurements more time-consuming, replacing the spectroscopic camera required by FD-OCT with a single-pixel detector offers strong advantages. This detector measures interference power at all wavelengths simultaneously, but at a single depth, enabling the system to reach its axial resolution limits by simply using more time to acquire more samples per A-scan. We characterize the system performance using material samples that match real works of art. Our system provides an economical and practical way to improve 3D imaging performance for cultural heritage applications in terms of penetration, resolution, and dynamic range.
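The quoted 4.41 μm figure is consistent with the standard coherence-length estimate for a Gaussian-spectrum source, using the system's 2.0 ± 0.2 μm band (center wavelength λ0 = 2.0 μm, FWHM bandwidth Δλ = 0.4 μm):

```latex
\Delta z \;=\; \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda}
\;=\; 0.441 \times \frac{(2.0\ \mu\mathrm{m})^{2}}{0.4\ \mu\mathrm{m}}
\;\approx\; 4.41\ \mu\mathrm{m}
```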
Continuous wave time-of-flight (ToF) cameras have been rapidly gaining widespread adoption in many applications due to their cost effectiveness, simplicity, and compact size. However, the current generation of ToF cameras suffers from low spatial resolution due to physical fabrication limitations. In this paper, we propose an imaging architecture to achieve high spatial resolution ToF imaging using optical multiplexing and compressive sensing (CS). Our approach is based on the observation that, while depth is non-linearly related to ToF pixel measurements, a phasor representation of captured images results in a linear image formation model. We utilize this property to develop a CS-based technique that is used to recover high resolution 3D images. Based on the proposed architecture, we developed a prototype 1-megapixel compressive ToF camera that achieves as much as a 4× improvement in spatial resolution. We believe that our proposed architecture provides a simple and low-cost solution to improve the spatial resolution of ToF and related sensors.
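The key linearity observation can be stated compactly: the four quadrature correlation measurements of a continuous-wave ToF pixel combine into a complex phasor that is linear in the incident light, while depth is recovered from the (non-linear) phase only at the end. A minimal sketch, with an illustrative modulation frequency and a quadrature sign convention that varies between sensors:

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 50e6        # modulation frequency, Hz (illustrative)

def phasor(m0, m1, m2, m3):
    """Complex phasor from four quadrature ToF correlation measurements
    (taken at 0, 90, 180, 270 degree reference phases). Unlike depth, the
    phasor is linear in the incident light, so optically multiplexed
    measurements y = Phi @ x can be inverted with standard compressive-
    sensing solvers before converting to depth."""
    return (m0 - m2) + 1j * (m1 - m3)

def depth_from_phasor(p):
    """Depth from the phasor phase (round-trip path -> factor 4*pi)."""
    phase = np.angle(p) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)
```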
An interferometric fluorescent microscope and a novel theoretical image-reconstruction approach were developed and used to obtain super-resolution images of live biological samples and to enable dynamic real-time tracking. The tracking utilizes the information stored in the interference pattern of both the illuminating incoherent light and the emitted light. By periodically shifting the interferometer phase and applying a phase retrieval algorithm, we obtain information that allows localization with sub-2 nm axial resolution at 5 Hz.
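The phase-retrieval step described above is, in essence, synchronous detection over equally spaced phase shifts. A minimal sketch of that step, assuming an N-step shifting scheme; the phase-to-depth factor depends on the actual interferometer geometry and is an assumption here:

```python
import numpy as np

def axial_phase(intensities):
    """Recover the interference phase theta from N equally spaced
    phase-shifted frames I_k = A + B*cos(theta - 2*pi*k/N)."""
    I = np.asarray(intensities, dtype=float)
    k = np.arange(len(I))
    s = np.sum(I * np.sin(2 * np.pi * k / len(I)))
    c = np.sum(I * np.cos(2 * np.pi * k / len(I)))
    return np.arctan2(s, c)

# Axial position from phase; lambda_em is the emission wavelength, and the
# 4*pi factor assumes the two interferometer arms see opposite z shifts.
# z = axial_phase(frames) * lambda_em / (4 * np.pi)
```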
A current focus of art conservation research seeks to accurately identify materials, such as oil paints or pigments, used in a work of art. Since many of these materials are fluorescent, measuring the fluorescence lifetime following an excitation pulse is a useful non-contact, quantitative method to identify pigments. In this project, we propose a simple method using a dynamic vision sensor to efficiently characterize the fluorescence lifetime of the common pigment Egyptian Blue, obtaining results consistent with those of x-ray techniques. We believe our fast, compact and cost-effective method for fluorescence lifetime analysis is useful in art conservation research and potentially a broader range of applications in chemistry and materials science.
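A dynamic vision sensor timestamps brightness changes with microsecond resolution, which suits Egyptian Blue's near-infrared emission (lifetime on the order of 100 μs). A minimal sketch of a mono-exponential fit to event timestamps, ignoring the truncation and background corrections a real analysis would need:

```python
import numpy as np

def lifetime_from_events(event_times_us, t0_us=0.0):
    """Maximum-likelihood mono-exponential lifetime from event timestamps
    recorded after an excitation pulse at t0. For an exponential decay,
    the MLE of the lifetime tau is simply the mean delay."""
    dt = np.asarray(event_times_us, dtype=float) - t0_us
    dt = dt[dt > 0]          # keep only events after the pulse
    return dt.mean()

# tau_us = lifetime_from_events(timestamps_us, t0_us=pulse_time_us)
```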
KEYWORDS: Sensors, Particles, Cameras, Phase retrieval, 3D image reconstruction, Compressed sensing, Digital holography, High speed imaging, 3D image processing
Digital in-line holography serves as a useful encoder for spatial information, allowing three-dimensional reconstruction from a two-dimensional image. This is applicable to tasks such as fast motion capture and particle tracking. Sampling high-resolution holograms imposes a spatiotemporal trade-off; we spatially subsample holograms to increase temporal resolution. We demonstrate this idea with two subsampling techniques, periodic and uniformly random sampling. The implementation includes an on-chip setup for periodic subsampling and a DMD (Digital Micromirror Device)-based setup for pixel-wise random subsampling. The on-chip setup enables a direct increase of up to 20× in camera frame rate. Alternatively, the DMD-based setup encodes temporal information as high-speed mask patterns and projects these masks within a single exposure (coded exposure). This way, the frame rate is improved to the level of the DMD, with a temporal gain of 10×. The reconstruction of subsampled data from the aforementioned setups is achieved in two ways. We examine and compare two iterative reconstruction methods: one is error-reduction phase retrieval and the other is a sparsity-based compressed-sensing algorithm. Both methods show strong capability in reconstructing complex object fields. We present both simulations and real experiments. In the lab, we image and reconstruct the structure and movement of static polystyrene microspheres, microscopic moving Peranema, and macroscopic fast-moving fur and glitter.
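For concreteness, a minimal sketch of the error-reduction variant: propagate between the sensor and object planes with the angular spectrum method, enforce a weak-absorption constraint on the object, and re-impose the measured amplitudes only at the pixels that were actually sampled. The object constraint and initialization are assumptions here, not the paper's exact recipe:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a square complex field a distance z (angular spectrum method)."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def error_reduction(hologram, mask, wavelength, dx, z, n_iter=100):
    """Error-reduction phase retrieval of a subsampled in-line hologram;
    `mask` flags the pixels that were actually measured."""
    amp = np.sqrt(np.maximum(hologram, 0.0))
    field = np.where(mask, amp, 1.0).astype(complex)   # unit guess where unmeasured
    for _ in range(n_iter):
        obj = angular_spectrum(field, wavelength, dx, -z)          # back to object plane
        # weak-absorption object constraint: amplitude cannot exceed unity
        obj = np.minimum(np.abs(obj), 1.0) * np.exp(1j * np.angle(obj))
        field = angular_spectrum(obj, wavelength, dx, z)           # forward to sensor
        field[mask] = amp[mask] * np.exp(1j * np.angle(field[mask]))  # data constraint
    return obj
```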
In recent years smartphone cameras have improved considerably, but they still produce very noisy images in low-light conditions, mainly because of their small sensor size. Image quality can be improved by increasing the aperture size and/or exposure time; however, this makes the images susceptible to defocus and/or motion blur. In this paper, we analyze the trade-off between denoising and deblurring as a function of the illumination level. For this purpose we utilize a recently introduced framework for the analysis of computational imaging systems that takes into account the effects of (1) optical multiplexing, (2) the noise characteristics of the sensor, and (3) the reconstruction algorithm, which typically uses image priors. Following this framework, we model the image prior using a Gaussian mixture model (GMM), which allows us to analytically compute the minimum mean squared error (MMSE). We analyze the specific problem of motion and defocus deblurring, showing how to find the optimal exposure time and aperture setting as a function of illumination level. This framework gives us the machinery to answer an open question in computational imaging: to deblur or to denoise?
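The analytic tractability of the GMM prior is what makes this analysis possible: under a Gaussian observation model the posterior is again a Gaussian mixture, so its mean (the MMSE estimator) is available in closed form. A minimal sketch for the pure denoising case y = x + n; the framework's multiplexing matrix is dropped here for brevity and would replace the identity:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_mmse(y, weights, means, covs, sigma2):
    """Closed-form MMSE estimate of x from y = x + n, n ~ N(0, sigma2*I),
    under a Gaussian-mixture prior on x: the posterior is again a GMM,
    so its mean is a responsibility-weighted sum of per-component
    Wiener-filter estimates."""
    d = len(y)
    resp, est = [], []
    for w, mu, C in zip(weights, means, covs):
        Cy = C + sigma2 * np.eye(d)                       # marginal covariance of y
        resp.append(w * multivariate_normal.pdf(y, mean=mu, cov=Cy))
        est.append(mu + C @ np.linalg.solve(Cy, y - mu))  # component posterior mean
    resp = np.array(resp) / np.sum(resp)
    return sum(r * e for r, e in zip(resp, est))
```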
KEYWORDS: Digital micromirror devices, 3D displays, Super resolution, Cameras, Calibration, Image resolution, 3D image processing, Projection systems, Image processing, Relays
We describe a projection system that presents a 20-megapixel image using a single XGA SLM and time-division multiplexing. The system can be configured as a high-resolution 2-D display or a highly multi-view horizontal-parallax display. In this paper, we present a technique for characterizing the light transport function of the display and for precompensating the image for the measured transport function. The technique can improve the effective quality of the display without modifying its optics. Precompensation is achieved by approximately solving a quadratic optimization problem. Compared to a linear filter, this technique is not limited by a fixed kernel size and can propagate image detail to all related pixels. Large pixel-count images are supported by dividing the problem into blocks, and a remedy for blocking artifacts is given. The display characterization method is suitable for experimental designs that may be dim and imperfectly aligned. Simulated results of the characterization and precompensation process for a display design are presented, demonstrating RMS-error and qualitative improvements in display image quality.
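A minimal sketch of the per-block precompensation step, posed as a box-constrained least-squares problem over one block's measured light-transport matrix; the projected-gradient solver is an assumption, as the abstract states only that the quadratic problem is solved approximately:

```python
import numpy as np

def precompensate(A, target, n_iter=200, lr=None):
    """Projected gradient descent for min ||A x - target||^2, 0 <= x <= 1,
    where A is the measured light-transport matrix of one image block and
    x holds the SLM drive values."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the Lipschitz constant
    x = np.clip(target.astype(float), 0.0, 1.0)  # start from the target image
    for _ in range(n_iter):
        x -= lr * (A.T @ (A @ x - target))       # gradient of 0.5*||Ax - t||^2
        np.clip(x, 0.0, 1.0, out=x)              # keep drive values displayable
    return x
```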
KEYWORDS: 3D displays, Visualization, 3D image processing, Electronics, 3D volumetric displays, Projection systems, Digital micromirror devices, OpenGL, 3D applications, Software frameworks
We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors’ first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality’s high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality’s multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
The authors present work that was conducted as a collaboration between Cambridge University and MIT. The work is a continuation of previous research at Cambridge University, where several view-sequential 3D displays were built. The authors discuss a new display which they built and compare performance to previous versions. The new display utilizes a DMD projection engine, whereas previous versions used high frame rate CRTs to generate imagery. The benefits of this technique are discussed, and suggestions for future improvements are made.
If a three-dimensional image is to be projected into mid-air in a room with bare walls, then light must follow a curving path. Since this does not happen in a vacuum, a gradient must be introduced into the refractive index of the air itself, by varying either its temperature or its pressure. A reduction from 300°C to room temperature across the front of a 1 mm wide ray will bend it with a radius of curvature of 3 m. However, the temperature gradient cannot be sustained without an unacceptably aggressive mechanism for cooling. The pressure gradients delivered by sound waves are dynamically sustainable, but even powers as extreme as 175 dBm at 25 kHz deliver a radius of curvature of only 63 m. It appears that something will have to be added to the air if such displays are to be possible.
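As a hedged order-of-magnitude check of the thermal figure, treat air's refractivity as inversely proportional to absolute temperature at constant pressure and use the paraxial gradient-index relation for the ray's curvature:

```latex
R \;\approx\; \left(\frac{1}{n}\left|\frac{dn}{dy}\right|\right)^{-1}
\!\approx\; \left|\frac{dn}{dy}\right|^{-1},
\qquad
\Delta n \;\approx\; (n-1)_{293\,\mathrm{K}}\left(1-\frac{293}{573}\right)
\;\approx\; 2.7\times10^{-4}\times 0.49
\;\approx\; 1.3\times10^{-4}
```

Across a 1 mm wide ray this gives dn/dy ≈ 0.13 m⁻¹, i.e. a radius of curvature of several meters, the same scale as the 3 m quoted; the exact figure depends on the assumed gradient profile.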