Abhijit Mahalanobis,1 Amit Ashok,2 Lei Tian,3 Jonathan C. Petruccelli4
1Lockheed Martin Missiles and Fire Control (United States); 2College of Optical Sciences, The Univ. of Arizona (United States); 3Boston Univ. (United States); 4Univ. at Albany (United States)
This PDF file contains the front matter associated with SPIE Proceedings Volume 10669, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
We will describe how learning techniques can be used to estimate the 3D shape of objects from 2D optical measurements.
We propose a computer-aided detection (CAD) method for breast cancer screening using a convolutional neural network (CNN) and follow-up scans. First, mammographic images are examined by three cascading object detectors to detect suspicious cancerous regions. All regional images are then fed to a trained CNN (based on the pre-trained VGG-19 model) to filter out false positives. The three cascading detectors are trained separately on Haar features, local binary patterns (LBP), and histograms of oriented gradients (HOG) via an AdaBoost approach. The bounding boxes (BBs) from the three feature detectors are merged to generate a region proposal. Each regional image consists of three channels, the current scan (red), the registered prior scan (green), and their difference (blue), and is scaled to 224×224×3 for CNN classification. We tested the proposed method on our digital mammographic database, which includes 69 cancerous subjects (mass and architectural distortion) and 27 healthy subjects, each with two scans: a current scan (cancerous or healthy) and a prior scan (healthy, acquired one year earlier). On average, 165 BBs are created by the three cascading classifiers on each mammogram, but only 3 BBs per image remain after CNN classification. The overall performance is as follows: sensitivity = 0.928, specificity = 0.991, FNR = 0.072, and FPI (false positives per image) = 0.004. Considering the early-stage status of these cancers (findings were normal one year earlier), the performance of the proposed CAD method is very promising.
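As a rough illustration of the region-classification stage, the sketch below assembles the three-channel regional image described above and scores it with a binary classifier built on the pre-trained VGG-19 backbone. The classifier head, helper names, and decision threshold are our own placeholders, not the authors' implementation.

```python
# Minimal sketch of the region-classification stage (placeholders, not
# the paper's code): stack current scan (R), registered prior scan (G),
# and their difference (B) into a 224x224x3 input for a VGG-19-based CNN.
import numpy as np
import cv2
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, Model

def make_region_image(current_roi, prior_roi):
    cur = cv2.resize(current_roi, (224, 224)).astype(np.float32)
    pri = cv2.resize(prior_roi, (224, 224)).astype(np.float32)
    diff = cv2.absdiff(cur, pri)
    return np.dstack([cur, pri, diff])

# Hypothetical binary head on the pre-trained VGG-19 backbone.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(1, activation="sigmoid")(x)   # cancer vs. false positive
clf = Model(base.input, out)

# A proposed BB is kept only if its CNN score exceeds a chosen threshold:
# keep = clf.predict(make_region_image(cur_roi, pri_roi)[None]) > 0.5
```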
Three-dimensional (3D) imaging has recently been applied to human gesture recognition using depth maps from RGB-D sensors. An alternative that has been scarcely explored is 3D integral imaging, which has been shown to give very competitive results in object reconstruction and recognition tasks, even under challenging conditions (e.g., low illumination, occlusions). Integral imaging has remarkable advantages over other 3D-capable sensors such as RGB-D devices, most notably its long working range, which stands out when compared against sensors that lose their capabilities at depths of 2 m or more. In this paper we present results from applying the integral imaging 3D acquisition technique to the recognition of human gestures in the presence of occlusions that may hinder recognition. We also compare its capability against that of an RGB-D sensor (Kinect) and against that obtained when only one camera in the array is used. Our results show that integral imaging performs comparably to the Kinect and to the monocular case when there are no occlusions, but much more favorably when occlusions are present. We also show that camera spatial resolution matters for gesture recognition under occlusions in the monocular case, but integral imaging is less sensitive to it, because the features extracted from integral imaging reconstructions appear more descriptive and discriminative than their monocular counterparts.
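For context, the standard computational reconstruction in integral imaging is a shift-and-sum over the elemental images: objects at the chosen depth add coherently while occluders blur out, which is what makes the extracted features robust. The pinhole geometry and all parameter names below are illustrative, not the authors' code.

```python
# A minimal shift-and-sum reconstruction sketch for a K x K camera array
# (integral imaging); geometry is the usual pinhole approximation.
import numpy as np

def reconstruct_plane(elemental, pitch, z, f_px):
    """elemental: (K, K, H, W) elemental images;
    pitch, z: camera spacing and reconstruction depth (same units);
    f_px: focal length expressed in pixels."""
    K, _, H, W = elemental.shape
    out = np.zeros((H, W))
    d = f_px * pitch / z                     # per-camera disparity (pixels)
    for i in range(K):
        for j in range(K):
            out += np.roll(elemental[i, j],
                           (int(round(i * d)), int(round(j * d))),
                           axis=(0, 1))
    return out / (K * K)   # in-focus objects add up; occluders average out
```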
Computational optical imaging combines computationally designed illumination, optics, and processing algorithms. Some of these novel optical systems are applied to capturing multi-dimensional information, others to displaying that information to the user. These capture and display devices have been studied for a long time, and a few are now slowly maturing toward commercial applications.
Many 3D displays have been proposed over the years, but only some of them promise true 3D perception to humans. In this talk, I will focus on one such display technique based on integral imaging, also known as light field displays, exploring how they provide 3D information and discussing the enabling technologies required for their success. In parallel to developing these displays, understanding how humans perceive 3D is also important and has to be taken into account during display design. I will highlight these issues for integral displays and show how they have the potential to provide accurate and comfortable 3D experiences.
While exploring 3D displays, one obvious next question is how to generate the content and information these displays can show. Showing computer-generated information is a relatively easy route, but capturing and converting real-world content for these displays is not trivial. I will show examples of capture methods for integral displays and discuss two specific methods for capturing 3D information from real-world scenes and showing it on integral displays.
Continuous-wave time-of-flight (ToF) cameras have been rapidly gaining widespread adoption in many applications due to their cost effectiveness, simplicity, and compact size. However, the current generation of ToF cameras suffers from low spatial resolution due to physical fabrication limitations. In this paper, we propose an imaging architecture to achieve high spatial resolution ToF imaging using optical multiplexing and compressive sensing (CS). Our approach is based on the observation that, while depth is non-linearly related to ToF pixel measurements, a phasor representation of captured images results in a linear image formation model. We utilize this property to develop a CS-based technique that is used to recover high resolution 3D images. Based on the proposed architecture, we developed a prototype 1-megapixel compressive ToF camera that achieves as much as a 4× improvement in spatial resolution. We believe that our proposed architecture provides a simple and low-cost solution to improve the spatial resolution of ToF and related sensors.
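To make the phasor observation concrete, the sketch below assembles the standard four-bucket CW-ToF samples into a complex phasor, which is linear in the raw measurements even though the recovered depth is not. The modulation frequency is an illustrative constant, and sign conventions vary by sensor.

```python
# Phasor sketch: the four-bucket samples m0..m3 (phase offsets 0, 90,
# 180, 270 degrees) combine into A*exp(i*phi), linear in the m_k, so
# coded/multiplexed acquisition keeps a linear image-formation model.
import numpy as np

C = 3e8        # speed of light (m/s)
F_MOD = 30e6   # illustrative modulation frequency (Hz)

def phasor(m0, m1, m2, m3):
    return (m0 - m2) + 1j * (m3 - m1)      # linear in the raw samples

def depth(p):
    phi = np.angle(p) % (2 * np.pi)
    return C * phi / (4 * np.pi * F_MOD)   # depth is nonlinear in the m_k
```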
A diffractive plenoptic camera is a novel variation on the traditional plenoptic camera that replaces the main optic with a Fresnel zone plate, making the camera sensitive to wavelength instead of range. Algorithms are necessary, however, to reconstruct the images produced by these plenoptic cameras. This paper provides the first quantification of the effectiveness of four different types of post-processing algorithms on a simulated Fresnel zone light field spectral imaging system. The four post-processing algorithms were standard digital refocusing, 3D deconvolution through a Richardson-Lucy algorithm, a novel Gaussian smoothing algorithm, and a custom-made super-resolution algorithm. For the digital refocusing algorithm, image quality decreased as the wavelength difference from the design wavelength increased. With Richardson-Lucy deconvolution, by contrast, the image returned to design-wavelength quality if enough iterations were used; it was generally on par with the best algorithms near the design wavelength of the Fresnel zone plate and by far the best far from it, at the cost of extensive computation time. The super-resolution method generally performed better than standard digital refocusing, while the Gaussian smoothing algorithm performed on par with digital refocusing. Consequently, if time is not a factor, deconvolution should generally be used, while the super-resolution method provides faster results when time is an issue. Still, each algorithm outperformed the others in specific cases, so the best results are obtained by choosing the algorithm that meets operational requirements and limitations.
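For reference, the core Richardson-Lucy iteration named above is sketched here in its common 2D form (the paper's 3D variant runs over the focal stack; the iteration count is a placeholder).

```python
# Minimal Richardson-Lucy deconvolution: multiplicative updates that
# converge toward the maximum-likelihood estimate under Poisson noise.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    image = image.astype(np.float64)
    est = np.full_like(image, image.mean())        # flat initial estimate
    psf_flip = psf[::-1, ::-1]                     # adjoint = flipped PSF
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        est *= fftconvolve(image / (blurred + eps), psf_flip, mode="same")
    return est
```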
Non-spherical aerosols, particularly aggregates and those comprised of rough surfaces, produce complex light scattering patterns that deviate considerably from those of their spherical counterparts. Consequently, discerning particle morphology from the complex scattering pattern, i.e., the inverse problem, is difficult at best. Additional information is required to uniquely associate the interference pattern of the scattered light with the particle's morphology (size, shape, etc.). This uniqueness challenge of the inverse problem may be overcome by incorporating digital holographic imaging into the light scattering apparatus. Using a color CCD camera, we demonstrate that two-dimensional light scattering patterns and digital holograms from individual flowing aerosol particles may be recorded simultaneously at different wavelengths, revealing the complex scattering pattern along with the size, shape, and orientation of the particle at the instant the scattering occurs. Knowing the exact scattering pattern associated with an exact particle morphology will improve understanding of the radiative characteristics of non-spherical atmospheric aerosols and reduce uncertainties in important physical parameters such as aerosol radiative forcing.
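Recovering the particle's size, shape, and orientation from a recorded hologram typically relies on numerical refocusing; a common route (not necessarily the authors' exact pipeline) is angular-spectrum propagation, sketched below with placeholder wavelength, pixel size, and propagation distance.

```python
# Angular-spectrum refocusing of a digital hologram (all lengths in
# meters); evanescent components are cut to keep the propagator real.
import numpy as np

def angular_spectrum(hologram, wavelength, dx, z):
    n, m = hologram.shape
    fy = np.fft.fftfreq(n, d=dx)
    fx = np.fft.fftfreq(m, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent cut
    H = np.exp(1j * kz * z)                          # propagation kernel
    return np.fft.ifft2(np.fft.fft2(hologram) * H)   # refocused field
```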
In this talk, we will present applications of inverse scattering principles in digital holography. First, I will present a recently developed 3-D holotomography setup using a dynamic mirror device, an optical analogue of X-ray computed tomography. In particular, I will discuss the visualization of 3D refractive index distributions of biological cells and tissues, measured with 3-D holotomography using the transfer function method. For a weakly scattering sample, such as biological cells and tissues, a three-dimensional refractive index tomogram can be reconstructed via the inverse scattering principle from multiple measurements of two-dimensional holograms. The outcome demonstrates outstanding visualization of 3D refractive index maps of live cells. In addition, we discuss the application of the inverse scattering principle to highly scattering layers. With wavefront shaping techniques using digital holography, we demonstrate an ultra-high-definition dynamic holographic display exploiting the large space-bandwidth product of volume speckle. Exploiting light scattering in diffusers, we also demonstrate a holographic image sensor that does not require a reference beam.
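The weak-scattering reconstruction rests on the Fourier diffraction theorem; in its generic first-order Born form (our notation, not necessarily the speaker's), each 2-D hologram samples the 3-D Fourier transform of the scattering potential F on a cap of the Ewald sphere:

```latex
\hat{U}_s(k_x,k_y) \;\propto\; \hat{F}\!\big(k_x-k_x^{\mathrm{in}},\; k_y-k_y^{\mathrm{in}},\; k_z-k_z^{\mathrm{in}}\big),
\qquad k_z=\sqrt{k_0^2-k_x^2-k_y^2},
```

so scanning the illumination direction with the mirror device fills in the 3-D spectrum, from which the refractive index tomogram follows.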
Fourier Ptychographic Microscopy (FPM) and Differential Phase Contrast (DPC) are quantitative phase imaging (QPI) methods that recover the complex transmittance function of a sample through coded illumination measurements and phase retrieval optimization. The success of these methods relies on acquiring several or possibly hundreds of illumination-encoded measurements. The multi-shot nature of such methods limits their temporal resolution. Similar to motion-induced blur during a long photographic exposure, motion occurring during these acquisitions causes spatial distortion and errors in the reconstructed phase, which inhibits these methods' ability to image fast-moving live samples.
Here we present a novel approach to correct for motion during QPI capture that relies on motion navigation to register measurements prior to phase retrieval. The different illumination patterns required for QPI give the measurements different contrasts, which makes it difficult to estimate complex sample motion directly from the measurements with standard registration approaches. Instead, we use a color-multiplexed navigator signal (red) comprised of a constant illumination pattern and leverage a color camera to separate it from the primary QPI information (green). The reliable motion estimate allows measurements to be shared across time points through image registration. This enables a full set of measurements to be assembled for the phase retrieval problem at each time point. We demonstrate proof-of-concept experimental results in which blurring due to live sample motion (swimming zebrafish, cell motion, and organelle movement) is reduced.
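The navigator idea can be sketched as follows: estimate per-frame motion from the red (constant-illumination) channel by phase correlation, then apply the inverse shift to the green (QPI-coded) channel before phase retrieval. The sketch assumes RGB channel ordering, and the sign convention of the correction may need flipping for a given setup.

```python
# Register QPI frames using the red navigator channel (placeholder
# pipeline, not the authors' implementation).
import numpy as np
import cv2

def register_frames(frames_rgb, ref_red):
    registered = []
    for frame in frames_rgb:
        red = frame[..., 0].astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref_red.astype(np.float32), red)
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])     # undo the motion
        green = frame[..., 1].astype(np.float32)
        h, w = green.shape
        registered.append(cv2.warpAffine(green, M, (w, h)))
    return registered
```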
Conventional electro-optical and infrared (EO/IR) systems (e.g., active, passive, multiband, and hyperspectral) capture an image by optically focusing the incident light at each of the millions of pixels in a focal plane array. The optics and the focal plane are designed to efficiently capture desired aspects of the scene (such as spectral content, spatial resolution, depth of focus, and polarization). Computational imaging refers to image formation techniques that use digital computation to recover an image from an appropriately multiplexed or coded light intensity of the scene. In this case, the desired aspects of the scene can be selected at the time of image reconstruction, which allows greater flexibility in the EO/IR system. Compressive sensing involves capturing a smaller number of specifically designed measurements from the scene to computationally recover the image or task-specific scene information. Compressive sensing has the potential to acquire an image with information content equivalent to a large-format array while using smaller, cheaper, and lower-bandwidth components. More significantly, the data acquisition can be sequenced and designed to capture task-specific and mission-relevant information guided by the scene content with more flexibility. However, the benefits of compressive sensing and computational imaging do not come without compromise. NATO SET-232 has undertaken the task of investigating the promise of computational imaging and compressive sensing for EO/IR systems. This paper presents an overview of the ongoing joint activities of NATO SET-232, current computational imaging and compressive sensing technologies, limitations of the design trade space, algorithm and conceptual design considerations, and field performance assessment and modeling.
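In the standard compressive sensing formulation (generic notation, not specific to any one SET-232 system), the M coded measurements and the sparsity-regularized recovery take the form

```latex
y = \Phi x + n, \qquad \Phi \in \mathbb{R}^{M\times N},\; M \ll N,
\qquad \hat{x} = \arg\min_{x}\ \lVert \Psi x\rVert_1
\quad \text{s.t.}\quad \lVert y - \Phi x\rVert_2 \le \varepsilon,
```

where Φ encodes the designed measurements, Ψ is a sparsifying transform, and n is noise; the design trade space discussed above is largely the choice of Φ against the scene and task.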
Many applications must contend with corrupted or missing pixels in images. Here, we present sparsity-based image completion algorithms that achieve high performance in image reconstruction. Through extensive experiments on various types of images, we demonstrate that our algorithms can handle extremely high missing rates (up to 99.9%) and relatively large missing blocks.
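A generic member of this algorithm class is sketched below: iterative soft-thresholding in a 2-D DCT basis with a data-consistency step. It illustrates the kind of sparsity-based completion described, not the authors' specific method; the threshold and iteration count are placeholders.

```python
# Sparsity-based inpainting sketch: shrink transform coefficients, then
# re-impose the known pixels, and repeat.
import numpy as np
from scipy.fft import dctn, idctn

def complete(img, mask, n_iter=200, lam=0.1):
    """img: observed image (values at missing pixels are ignored);
    mask: boolean array, True where the pixel is known."""
    x = np.where(mask, img, 0.0)
    for _ in range(n_iter):
        c = dctn(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
        x = idctn(c, norm="ortho")
        x[mask] = img[mask]            # keep the observed pixels exact
    return x
```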
Long image acquisition time is a critical problem in single-pixel imaging. Here, we propose a new high-speed single-pixel compressive imaging method. We develop an ADMM-based optimization algorithm to handle images with multiple features. The proposed method solves an optimization problem with total variation and ℓ1-norm objectives under a data-fidelity constraint. The algorithm is highly parallel and is suitable for implementation on GPUs, with a significant reduction in computation. The resulting system produces high resolution images and can also be used for super-resolution by replacing the single detector with a focal plane array. We verify the system experimentally and compare the performance of our algorithm with similar methods.
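A plausible form of the stated objective is given below; the TV and ℓ1 terms and the data-fidelity constraint are from the abstract, while the weighting λ, sparsifying transform Ψ, and sensing matrix Φ are our own notation:

```latex
\min_{x}\; \mathrm{TV}(x) \;+\; \lambda\,\lVert \Psi x\rVert_{1}
\quad \text{subject to} \quad \lVert y - \Phi x\rVert_{2} \le \varepsilon
```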
The single pixel camera is an imaging system particularly well suited to scene acquisition when a matrix detector is unavailable. Because of the long acquisition times, the system is usually aided by common compressive sensing techniques. However, if the scene contains moving elements, the recovery may show poor results. In this article we review a novel technique we proposed for recovering a moving scene with a compressive single pixel camera. The technique is inspired by a 'Russian dolls' multi-scale ordering of the Hadamard sensing matrix. It can handle both global motion (e.g., due to camera panning) and motion of objects in the scene.
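For orientation, a generic single-pixel Hadamard acquisition and recovery is sketched below; the multi-scale 'Russian dolls' row ordering from the paper is not reproduced, and rows are taken in natural order purely for illustration.

```python
# Single-pixel measurement model with Hadamard patterns and adjoint
# recovery (exact when all N*N patterns are used).
import numpy as np
from scipy.linalg import hadamard

N = 32                            # N x N image
H = hadamard(N * N)               # orthogonal +/-1 pattern matrix
scene = np.random.rand(N * N)     # placeholder vectorized scene

m = 256                           # patterns actually displayed
y = H[:m] @ scene                 # single-pixel measurements
x_hat = (H[:m].T @ y) / (N * N)   # adjoint recovery from partial data
```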
Fourier ptychography is a novel imaging technique in which many coherent images, acquired at various illumination angles, are combined using an efficient phase retrieval algorithm. The technique synthesizes a larger numerical aperture than is physically possible with the lens aperture alone, which is often limited by manufacturing capabilities. We present an adaptation of Fourier ptychography to a synchrotron-based full-field microscope (SHARP) operating at 13.5 nm (EUV wavelength) and demonstrate 26 nm coherent resolution on reflective samples, along with quantitative phase imaging that allows sub-nanometer substrate roughness to be characterized.
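A bare-bones Fourier-ptychography recovery loop is sketched below (sequential Gerchberg-Saxton-style updates). The EUV/SHARP adaptation involves reflective geometry and careful calibration; the offsets, pupil, and array shapes here are placeholders.

```python
# Stitch low-res intensity images into one high-res complex spectrum.
import numpy as np

def fp_recover(intensities, corners, pupil, hires_shape, n_iter=20):
    """intensities: low-res images; corners: top-left (cy, cx) of each
    illumination's sub-aperture in the high-res spectrum; pupil: 0/1 mask."""
    O = np.zeros(hires_shape, dtype=complex)        # high-res spectrum
    h, w = pupil.shape
    for _ in range(n_iter):
        for I, (cy, cx) in zip(intensities, corners):
            sub = O[cy:cy + h, cx:cx + w] * pupil
            low = np.fft.ifft2(np.fft.ifftshift(sub))
            low = np.sqrt(I) * np.exp(1j * np.angle(low))  # enforce data
            upd = np.fft.fftshift(np.fft.fft2(low))
            O[cy:cy + h, cx:cx + w] = np.where(pupil > 0, upd,
                                               O[cy:cy + h, cx:cx + w])
    return np.fft.ifft2(np.fft.ifftshift(O))        # complex object
```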
We present a theoretical formulation and an experimental demonstration of a fast compression-less terahertz imaging technique based on broadband Fourier optics. The technique exploits the k-vector/frequency duality in Fourier optics, which allows a single-pixel detector to perform an angular scan along a circular path while the broadband spectrum scans the radial dimension in the Fourier domain. The proposed compression-less image reconstruction technique (hybrid inverse transform) requires only a small number of measurements, scaling linearly with the image's linear size, and thus promises real-time acquisition of high-resolution THz images. We develop an algorithm based on a polar formulation of the Fourier transform to reconstruct the image. First, we show how the equations transform when passing from a spatial integral to a frequency integral. Second, we analytically demonstrate that, for binary amplitude objects and phase objects, the image reconstructed by our formulation is proportional to the original object. Third, we experimentally demonstrate the image reconstruction method in the two above-mentioned cases, using a metal aperture for the binary object and an engraving in a polymer sample for the phase object. A detailed analysis of the technique's advantages and limitations is presented, and its place among other existing THz imaging techniques is clearly identified.
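For reference, the polar form of the 2-D inverse Fourier transform that such a hybrid inverse transform evaluates is shown below, with angle θ sampled mechanically and radial frequency k supplied by the broadband spectrum (generic form; the paper's exact formulation may differ):

```latex
f(x,y) \;=\; \frac{1}{(2\pi)^2}\int_{0}^{2\pi}\!\!\int_{0}^{\infty}
F(k,\theta)\, e^{\,i k (x\cos\theta + y\sin\theta)}\; k\,\mathrm{d}k\,\mathrm{d}\theta
```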
We present a high-temporal-resolution 4D-XCT with a feature-based iterative reconstruction method (FBIR) that imposes feature priors in the reconstruction process. The 4D reconstruction is obtained through iterative minimization of a cost function combining the forward model with multiple structural feature-based priors. The scheme is applied to the study of the mechanical response of a porous structure (sea urchin spines), achieving high temporal resolution and demonstrating robustness against noise, limited views, and motion-induced blurring.
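In generic form (our notation), such a cost function combines a forward-model data term with the weighted feature priors:

```latex
\hat{x} \;=\; \arg\min_{x}\; \lVert A x - y \rVert_2^2 \;+\; \sum_{i} \lambda_i\, R_i(x),
```

where A is the tomographic forward model, y the projection data, and each R_i a structural feature-based prior with weight λ_i.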
Conventional x-ray imaging relies on differences in attenuation within a material to produce image contrast. While useful for differentiating structures with large density variations, the subtle differences in attenuation between low-density materials can be difficult to detect. X-ray phase imaging, on the other hand, relies on differences in phase delay, which are typically several orders of magnitude larger than attenuation differences. However, most methods of producing x-ray phase images rely on specialized synchrotron sources, small and low-power microfocus sources, or the careful alignment of several precision gratings. We demonstrate that focusing polycapillary optics can produce small focal spots from conventional x-ray sources to enable phase imaging. Moreover, in conjunction with focusing optics, the use of a simple, low-cost wire mesh to structure the beam can significantly improve phase reconstructions.
X-ray phase imaging can offer significantly improved image contrast between materials of similar atomic number compared to traditional imaging, although it typically requires small, low-power sources to generate the required spatial coherence. We have demonstrated the use of a simple wire mesh and Fourier transform techniques to overcome this limitation, essentially by observing shifts in the mesh. However, the resolution of that technique is limited by the mesh period. Here we demonstrate greatly improved spatial resolution through wider windowing and appropriate combinations of Fourier components from multiple images acquired while spatially shifting the mesh.
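The Fourier-analysis step can be sketched as follows: window one mesh harmonic in the image spectrum and demodulate it, so that the phase of the result tracks the local mesh shift. The harmonic location and window half-width are precisely the quantities the paper tunes; the values here are placeholders.

```python
# Extract the differential phase carried by one mesh harmonic.
import numpy as np

def harmonic_phase(img, hy, hx, halfwidth):
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    win = np.zeros_like(F)
    sl = (slice(cy + hy - halfwidth, cy + hy + halfwidth),
          slice(cx + hx - halfwidth, cx + hx + halfwidth))
    win[sl] = F[sl]                          # keep one harmonic only
    demod = np.fft.ifft2(np.fft.ifftshift(win))
    return np.angle(demod)                   # tracks the local mesh shift
```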
We propose and demonstrate that a multimode optical fiber can be used to measure the spectral phase of ultrafast optical pulses. The speckle pattern formed at the end of a multimode fiber provides a fingerprint that can be used to identify the spectral amplitude and phase of an unknown pulse. We measure both a linear and a nonlinear speckle pattern from a multimode fiber. After calibrating the wavelength-dependent speckle field formed at the end of the fiber, the linear speckle pattern can be used to reconstruct the amplitude spectrum, while the nonlinear speckle pattern can be used to reconstruct the spectral phase. This technique allows single-shot pulse characterization in a simple experimental setup, while the diversity of spatial and spectral modes contributing to the speckle pattern removes any ambiguity in the sign of the recovered spectral phase. In addition to demonstrating a novel pulse characterization scheme, this work further illustrates the potential of complex photonic structures such as multimode fibers as versatile optical sensing platforms.
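The calibration-based linear step can be sketched as follows: with a measured matrix T of wavelength-indexed speckle fingerprints, the amplitude spectrum of an unknown pulse follows from its linear speckle pattern by non-negative least squares. The nonlinear-speckle phase recovery is not reproduced here.

```python
# Recover the spectral intensity from a linear speckle measurement.
import numpy as np
from scipy.optimize import nnls

def recover_spectrum(T, speckle):
    """T: (n_pixels, n_wavelengths) calibration matrix;
    speckle: (n_pixels,) measured linear speckle pattern."""
    s, _ = nnls(T, speckle)     # spectral intensity is non-negative
    return s
```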
Synthetic aperture radar (SAR) is a well-established approach for retrieving images with high resolution. However, common hardware used for SAR systems is usually complex and costly, and can suffer from lengthy signal acquisition. In near-field imaging, such as through-wall sensing and security screening, simpler and faster hardware can be found in the form of dynamic metasurface antennas (DMAs). These antennas consist of a waveguide-fed array of tunable metamaterial elements whose overall radiation patterns can be altered by DC signals. By sweeping through a set of tuning states, near-field imaging can be accomplished by multiplexing scene information into a collection of measurements, which are then post-processed to retrieve the scene. While DMAs simplify hardware, the post-processing can become cumbersome, especially when DMAs are moved in a fashion similar to SAR. In this presentation, we address this problem by modifying the range migration algorithm (RMA) to be compatible with DMAs. To accommodate the complex patterns generated by DMAs in the RMA, a pre-processing step is introduced that transforms the measurements into an equivalent set corresponding to an effective multistatic configuration, for which specific forms of the algorithm have been derived. Because we operate in the near field of the antennas, some approximations made in the classical formulation of the RMA may not be valid. In this paper, we examine the effect of one such approximation: the discarding of amplitude terms in the signal-target Fourier relationship. We demonstrate the adaptation of the RMA to near-field imaging using a DMA as the central hardware of a SAR system, and discuss the effects of this approximation on the resulting image quality.
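The approximation in question can be seen in the generic monostatic signal model (our notation; the paper's multistatic pre-processing yields an analogous form):

```latex
s(x_a, k) \;=\; \iint \sigma(x,z)\,\frac{e^{-j 2 k R}}{R^{2}}\,\mathrm{d}x\,\mathrm{d}z,
\qquad R=\sqrt{(x-x_a)^{2}+z^{2}},
```

where the RMA discards the 1/R² amplitude so that the remaining exponential defines an exact Fourier relationship between the measurements and the reflectivity σ; in the near field this discarded term is no longer negligible, which is the effect examined above.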
The use of linear algebra in optics has been an important tool in optical design and imaging system analysis for hundreds of years. More recently, matrix theory has underpinned the development of image processing, particularly with the introduction of large-scale computer processing. The ability to approximate matrices as Toeplitz allows matrix multiplies to be carried out with fast transforms and convolutions, enabling much faster implementations of many image processing applications. There remains a large class of problems for which no Toeplitz representation is feasible, particularly those requiring the inversion of a large matrix that is often ill-conditioned or formally singular. In this article we discuss a technique for providing an approximate solution to problems that are formally singular. We develop a method for solving problems with a high degree of singularity (those for which the number of equations is far less than the number of variables). To illustrate the utility of the overall technique, several examples are presented. The use of the method for solving small under-determined problems is presented as an introduction to the use and limitations of the solution. The technique is applied to digital zoom, and the results are compared with standard interpolation techniques. The development of multispectral data cubes for tomographic-type multispectral imaging systems is shown with several simulated results based on real data.
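As a baseline for such highly underdetermined systems (far fewer equations than unknowns), the classical answer is the minimum-norm solution via the pseudoinverse, sketched below; the paper's technique is its own refinement of this idea, not simply the pseudoinverse.

```python
# Minimum-L2-norm solution of an underdetermined system (20 equations,
# 500 unknowns); with full row rank, all equations are met exactly.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 500))    # far fewer equations than unknowns
b = rng.standard_normal(20)

x = np.linalg.pinv(A) @ b             # minimum-norm exact solution
print(np.allclose(A @ x, b))          # True: all equations satisfied
```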
A phase-only filter is placed in the pupil plane of an imaging system to engineer a new point spread function with a low peak intensity. Blurred detected images are then reconstructed in post-processing through Wiener deconvolution. A differential evolution algorithm is implemented to optimize these filters for high SNR across the MTF. The filters are tested experimentally using a reflective spatial light modulator (SLM) in the pupil of a system and successfully show the peak intensity reduced by a factor of 100 relative to the diffraction limit. Results are compared to expected performance.
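The Wiener deconvolution step named above is standard; a minimal frequency-domain version is sketched below, where the noise-to-signal ratio nsr is a placeholder tuning value and the PSF is assumed centered and the same size as the image.

```python
# Frequency-domain Wiener deconvolution of a blurred image given the
# engineered PSF.
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```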
In the modern tactical imaging environment, new computational imaging (CI) systems and algorithms are being used to improve speed and accuracy for detection tasks. A measurement technique is therefore needed to predict the performance of complex non-shift-invariant EO/IR imaging systems, including CI systems. Detection performance of traditional imaging systems can be modeled using current system metrics and measurements such as the modulation transfer function (MTF), signal-to-noise ratio (SNR), and instantaneous field of view (iFOV). In this correspondence, we propose a technique to experimentally measure a detection sensitivity metric for non-traditional CI systems. The detection sensitivity metric predicts the upper bound of linear algorithm performance through evaluation of a matched filter. The experimental results are compared with theoretically expected values through the Night Vision Integrated Performance Model (NV-IPM). Additionally, we demonstrate the experimental results for a variety of imaging systems (IR, visible, and color), target sizes and orientations, and SNR values. Our results demonstrate how this detection sensitivity metric can be measured to provide additional insight into final system performance.
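The matched-filter evaluation underlying such a sensitivity metric can be sketched as follows: correlate the image with a unit-energy target template and score the peak against the response statistics. The paper's exact scoring may differ; this shows the principle only.

```python
# Matched-filter detection score for a known target template.
import numpy as np
from scipy.signal import correlate2d

def matched_filter_score(img, template):
    t = template - template.mean()
    t /= np.linalg.norm(t)                         # unit-energy template
    r = correlate2d(img - img.mean(), t, mode="same")
    return r.max() / r.std()                       # peak-to-clutter ratio
```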
Accurate methods for breast cancer diagnosis are critically important for the selection and guidance of treatment and for optimal patient outcomes. In dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), accurately differentiating benign from malignant breast tumors that present as non-mass-enhancing (NME) lesions is challenging, often resulting in unnecessary biopsies. Here we propose a new approach for the accurate diagnosis of such lesions with high-resolution DCE-MRI that applies seven robust classification methods to discriminate between malignant and benign NME lesions using their dynamic curves at the voxel level, and we test it on a manually delineated dataset. The tested approaches achieve up to 94% diagnostic accuracy, 99% sensitivity, and 90% specificity, with high-temporal-resolution sequences outperforming high-spatial-resolution ones.
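An illustrative voxel-level classification of DCE enhancement curves is sketched below. The paper compares seven classifiers; a random forest stands in here as one plausible representative, and the data are random placeholders with the dynamic curves as feature vectors.

```python
# Voxel-level curve classification sketch (placeholder data/classifier).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(1000, 40)          # (voxels, time points): dynamic curves
y = np.random.randint(0, 2, 1000)     # 0 = benign, 1 = malignant

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```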
We address the image retrieval problem of finding those images in a large corpus that contain objects or scenes similar to a given query image. In the last decade, research on large-scale systems has shifted from local-feature-based approaches such as the Bag-of-Words model to global aggregation methods that represent every image with a short, fixed-length vector. Examples of such methods include Fisher vectors and the Vector of Locally Aggregated Descriptors (VLAD), both of which combine a variable number of local features into a global vector. Moreover, global approaches that pool visual information from features based on convolutional neural networks (CNNs) have become increasingly popular for retrieval. In fact, fine-tuning or even end-to-end learning of the retrieval task with CNNs shows impressive performance for a targeted object class. We argue that this is reasonable for established public retrieval datasets, which typically show one large object (a building, sight, or scene) in the middle of an image. However, it often fails in real-world forensic scenarios where one wants to find small objects in cluttered backgrounds. We therefore propose to adapt public datasets to generate novel evaluation setups, yielding tasks closer to the problem of small object retrieval. With experiments comparing global features with local features, we show that the new evaluation setup makes it easier to focus on specific characteristics, such as object size, during evaluation.
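A minimal VLAD aggregation, as referenced above, is sketched below: assign local descriptors to a k-means codebook and accumulate residuals into one fixed-length vector per image. The normalization follows common practice and is not tied to the paper's setup.

```python
# VLAD: per-cluster residual sums, power-normalized and L2-normalized.
import numpy as np

def vlad(descriptors, centers):
    """descriptors: (n, d) local features; centers: (k, d) codebook."""
    k, d = centers.shape
    assign = np.argmin(((descriptors[:, None] - centers) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for i in range(k):
        if np.any(assign == i):
            v[i] = (descriptors[assign == i] - centers[i]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))             # power normalization
    return (v / (np.linalg.norm(v) + 1e-12)).ravel()
```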