The demand for new image sensors that can simultaneously acquire multispectral-and-depth (MS-D) imagery using compact, lightweight, monocular imaging systems is rapidly increasing. Seminal works in this line focused on RGB-D sensors that acquired 3-channel color images and a depth map, but relied on two independent image sensors. We intend to advance the state of the art in imaging systems that extract depth while accurately capturing the spectral properties of scenes from as little as a single snapshot. To this end, this paper discusses the advances of a compressive spectral-depth computational camera that employs a time-of-flight (ToF) sensor, an optimized color-coded aperture (CCA), a dispersive element, and a model-based reconstruction algorithm to attain MS-D imaging. In particular, the ToF sensor can measure ambient light along with the modulated light from the active source, and the CCA is a passive, static optical element optimized to spectrally encode the scene reflectance while letting the active modulated light propagate unaffected. The CCA is optimized with a direct-binary-search (DBS) algorithm that exploits the underlying ideas of blue-noise multitoning to design the spatial distribution and spectral response of each optical filter (pixel) of the CCA. We report a proof-of-concept prototype of such a camera that uses a CCA fabricated with three different filters through cycles of thin-film deposition and lithographic patterning.
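As a rough illustration of the kind of pixel-wise search a DBS design loop performs, the following Python sketch assigns one of three filter types to each CCA pixel and greedily flips assignments to reduce a simple same-filter clustering cost (a crude stand-in for a blue-noise criterion). The cost function, pattern size, and all parameters are assumptions for illustration, not the authors' actual design metric.

```python
import numpy as np

def clustering_cost(cca, k_types):
    """Penalize same-filter clustering in 3x3 neighborhoods (low-frequency energy)."""
    cost = 0.0
    for k in range(k_types):
        mask = (cca == k).astype(float)
        # Count same-type neighbors for every pixel (wrap-around kept for simplicity).
        neigh = sum(np.roll(np.roll(mask, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
        cost += float(np.sum(mask * neigh))
    return cost

def dbs_optimize(size=16, k_types=3, sweeps=5, seed=0):
    rng = np.random.default_rng(seed)
    cca = rng.integers(0, k_types, size=(size, size))
    best = clustering_cost(cca, k_types)
    for _ in range(sweeps):
        improved = False
        for y, x in rng.permutation([(i, j) for i in range(size) for j in range(size)]):
            current = cca[y, x]
            for candidate in range(k_types):
                if candidate == current:
                    continue
                cca[y, x] = candidate
                c = clustering_cost(cca, k_types)
                if c < best:
                    best, current, improved = c, candidate, True
            cca[y, x] = current  # keep the best filter found for this pixel
        if not improved:
            break  # converged: no single-pixel change lowers the cost
    return cca

if __name__ == "__main__":
    pattern = dbs_optimize()
    print("filter counts:", np.bincount(pattern.ravel()))
```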
We have recently addressed the challenge of measuring the 3D shapes of uncooperative materials, specifically transparent objects. By sequentially projecting laser lines in the long-wave infrared (LWIR), we generate thermal fringes on the object surface. With a thermal stereo camera setup, we were able to measure the 3D shape of objects in as little as 0.1 seconds. Furthermore, we successfully recorded and reconstructed the dynamic deformation process of transparent objects for the first time at a 3D frame rate of 20 Hz. However, motion blur still exists in these dynamic measurements. To reduce or even eliminate this motion blur and make this thermal 3D method suitable for bin-picking, for example, we have adapted a proven single-shot method from the visible (VIS) and near-infrared (NIR) spectrum to the thermal 3D approach. Instead of using temporal sequences of multi-fringe patterns or scanning single fringes, we now project a statistical point pattern and capture only one thermal stereo image pair. Our new projection system generates a statistical thermal point pattern across the entire measurement field. With a quick single capture from two thermal cameras and a spatial correlation algorithm, we reconstruct the object’s surface in 3D. This significantly reduces the measurement time, leading to a substantial decrease in motion blur during dynamic measurements. In this contribution, we present our single-shot 3D sensor setup, which includes the implementation of our single-shot projection unit, and we showcase the enhanced measurement speed for a transparent object.
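For context, a minimal sketch of the kind of spatial-correlation step used to match a statistical point pattern between two rectified views is given below; the window size, disparity range, and zero-mean normalized cross-correlation (ZNCC) score are assumptions, not necessarily the correlation algorithm used in the sensor.

```python
import numpy as np

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def disparity_map(left, right, window=7, max_disp=40):
    """Brute-force ZNCC block matching; left/right are rectified grayscale arrays."""
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch_l = left[y - r:y + r + 1, x - r:x + r + 1]
            best_score, best_d = -1.0, 0
            for d in range(0, min(max_disp, x - r) + 1):
                patch_r = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                score = zncc(patch_l, patch_r)
                if score > best_score:
                    best_score, best_d = score, d
            disp[y, x] = best_d
    return disp

# Depth then follows from triangulation: Z = f * B / d for focal length f [px],
# baseline B [mm], and disparity d [px] (valid only where d > 0).
```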
Multimodal 3D imaging is a key technology in various areas, such as medical technology, trust-based human-robot collaboration, and material recognition for recycling. This technology offers new possibilities, particularly for the 3D perception of surfaces that are optically uncooperative in the VIS and NIR spectral range, e.g., transparent or specular materials. For this purpose, a thermal 3D sensor developed by Landmann et al. allows the 3D detection of transparent and reflective surfaces without object preparation, which can be used to generate real multimodal 3D data sets for AI-based methods. The 3D perception of optically uncooperative surfaces in VIS or NIR remains an open challenge (cf. Jiang et al.). To address this challenge, we have developed a new measurement principle, TranSpec3D, with which we can generate real, annotated multimodal 3D data sets without object preparation techniques. This system significantly reduces the effort required for data acquisition. We also discuss the advantages and disadvantages of our extended measurement principle and data set compared to other data sets (generated with object preparation).
This paper presents a novel 3D geometry measurement method using a cylindrical mechanical projector. The proposed projector is a cylinder whose wall carries ON/OFF slots at distinct intervals, projecting multi-wavelength fringe patterns in every direction and enabling omnidirectional 3D geometry measurement. Our approach retrieves the absolute phase for each pixel with only one pattern generator, using the generated multi-wavelength fringe patterns. In addition, by adopting a phase-based calibration method that utilizes the absolute phase, the method simplifies the calibration process and produces 3D geometry measurements with only one camera. Experimental results demonstrate the reliability and feasibility of the proposed method for omnidirectional 3D geometry measurement with high speed and accuracy.
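A minimal sketch of the standard two-wavelength (heterodyne) approach to recovering an absolute phase from multi-wavelength fringe patterns is shown below; the fringe periods are illustrative assumptions, and the projector's actual pattern set and unwrapping strategy may differ.

```python
import numpy as np

def unwrap_two_wavelength(phi1, phi2, p1, p2):
    """phi1, phi2: wrapped phases in [0, 2*pi) from fringe periods p1 < p2 (pixels)."""
    p_eq = p1 * p2 / (p2 - p1)                  # equivalent (beat) period
    phi_eq = np.mod(phi1 - phi2, 2 * np.pi)     # wrapped phase of the beat fringe
    k1 = np.round(((p_eq / p1) * phi_eq - phi1) / (2 * np.pi))  # fringe order of pattern 1
    return phi1 + 2 * np.pi * k1                # absolute (unwrapped) phase of pattern 1

if __name__ == "__main__":
    x = np.linspace(0, 400, 401)                # pixel coordinate across the field
    p1, p2 = 20.0, 21.0                         # two slightly different fringe periods
    truth = 2 * np.pi * x / p1                  # ground-truth absolute phase
    phi1 = np.mod(truth, 2 * np.pi)
    phi2 = np.mod(2 * np.pi * x / p2, 2 * np.pi)
    err = unwrap_two_wavelength(phi1, phi2, p1, p2) - truth
    print("max unwrapping error [rad]:", np.abs(err).max())
```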
Optical gauges have seen wide use in applications from automotive manufacturing to precision electronics. But even after many years of advances, a common complaint is still the problem of errors or lost data when trying to measure surfaces that are semi-specular, rough, or that mix finishes from very bright to dark. Some solutions have been developed, including cameras with greater dynamic range and the use of polarization or multiple exposures, but each approach has its limitations, be it longer measurement times or physical restrictions such as viewing angles. In this paper, we review the latest solutions to date and introduce a new approach that alleviates some of the current problems, thereby offering an alternative that could be applied to some of the more difficult applications today.
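As an illustration of the multiple-exposure workaround mentioned above, the sketch below fuses several exposures by keeping, per pixel, the brightest unsaturated measurement normalized by its exposure time; the thresholds and selection rule are assumptions rather than any specific gauge vendor's method.

```python
import numpy as np

def fuse_exposures(images, exposure_times, saturation=250):
    """images: list of uint8 arrays (same shape); exposure_times: list of floats."""
    order = np.argsort(exposure_times)[::-1]         # try the longest exposures first
    h, w = images[0].shape
    hdr = np.zeros((h, w), dtype=np.float64)
    filled = np.zeros((h, w), dtype=bool)
    for i in order:
        img = images[i].astype(np.float64)
        ok = (img < saturation) & ~filled            # unsaturated and not yet assigned
        hdr[ok] = img[ok] / exposure_times[i]        # radiance-like value per pixel
        filled |= ok
    # Pixels saturated even in the shortest exposure: fall back to that exposure.
    shortest = order[-1]
    hdr[~filled] = images[shortest][~filled].astype(np.float64) / exposure_times[shortest]
    return hdr
```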
Surface-enhanced Raman spectroscopy (SERS) greatly increases the signal intensity for aflatoxin B1, allowing detection at parts-per-billion (ppb) concentrations without sample preparation. This enhancement enables detection and quantification in real time, without laborious sample preparation and without consuming hazardous chemical solvents for extraction. Aflatoxins are produced by certain fungi that grow on many agricultural commodities, including corn, nuts, wheat, barley, and other crops. Consumption of aflatoxin-contaminated foods can lead to severe health issues in both humans and livestock, such as liver damage, immune suppression, and reproductive problems. tec5USA presents a concept for online SERS that measures a far greater fraction of product than is possible with collected samples and traditional laboratory analysis. A 785 nm laser excites the analyte of interest, aflatoxin B1 spiked onto a SERS substrate, allowing detection in 100 ms. These rapid Raman measurements allow online integration to measure product directly in the production environment.
Current calibration methods for multimodal systems combining structured light and thermography use calibration targets with specific physical characteristics. However, defects in the manufacturing of these targets are common, so these methods are prone to undesired errors. We propose a calibration method for a multimodal system (a visible camera, a projector, and a thermal imaging camera) that does not require the construction of a physical calibration target. For this purpose, with the help of an auxiliary camera, we use a digital screen to obtain the intrinsic parameters of the visible camera, and a mirror to obtain the intrinsic and extrinsic parameters of the projector and the thermal imaging camera. The experimental results demonstrate that it is possible to avoid the challenging task of fabricating physical targets without compromising the accuracy of the system calibration compared to conventional methods.
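A minimal sketch of the screen-based intrinsic-calibration step, assuming a checkerboard displayed on the screen and the standard OpenCV pipeline, is shown below; the board geometry, square size, and file paths are placeholders, and the mirror-based projector and thermal-camera calibration described in the paper is not covered.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)        # inner corners per row, column (assumed)
SQUARE_MM = 25.0      # physical size of one displayed square on the screen (assumed)

obj_pts, img_pts = [], []
image_size = None
template = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
template[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

for path in glob.glob("screen_views/*.png"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(template)
    img_pts.append(corners)
    image_size = gray.shape[::-1]

assert obj_pts, "no usable calibration views found in screen_views/"
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
print("reprojection RMS [px]:", rms)
print("camera matrix:\n", K)
```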
Phase-shifting profilometry has the limitation that any motion during the capture process can introduce errors. In this paper, we propose a method to reduce motion-induced errors at the pixel level when the digital fringe projection system moves on a linear stage, by leveraging the velocity profile of the stage and the pinhole models of the camera and the projector. Our approach uses only three fringe patterns and applies the geometric constraints of the digital fringe projection system; it consists of camera pixel correction and phase-shift error correction. Experimental results demonstrate that the proposed method effectively reduces motion-induced errors, resulting in improved measurement quality.
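For reference, the three-pattern baseline that the correction builds on is the standard three-step phase-shifting formula sketched below; the motion-induced pixel and phase-shift corrections themselves are not reproduced here, and the synthetic fringe parameters are assumptions.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """i1, i2, i3: fringe images with nominal phase shifts -2*pi/3, 0, +2*pi/3."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

if __name__ == "__main__":
    x = np.linspace(0, 4 * np.pi, 512)
    phase_true = np.tile(x, (256, 1))
    a, b = 128.0, 100.0                           # background and modulation (assumed)
    imgs = [a + b * np.cos(phase_true + s) for s in (-2*np.pi/3, 0.0, 2*np.pi/3)]
    wrapped = three_step_phase(*imgs)
    # Compare modulo 2*pi against the ground-truth phase.
    err = np.abs(np.angle(np.exp(1j * (wrapped - phase_true))))
    print("max wrapped-phase error [rad]:", err.max())
```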
We present a novel method to estimate the surface normals of an object at high speed and with high dynamic range using an event camera and a rotating light source. Conventional photometric stereo methods use RGB frame cameras and at least three light sources. As the number of light sources increases, accuracy improves, but more frames must be captured, which slows normal-map estimation. Conventional photometric stereo methods are also implemented in a dark room to avoid ambient illumination. To overcome these limitations, our method employs an event camera. The event camera operates at microsecond resolution with negligible motion blur and outputs a continuous stream of events that measure log-intensity changes of each pixel asynchronously. Exploiting these properties, we rotate a light source around the event camera, which has the same effect as using numerous light sources. Our method estimates normal vectors by analyzing how the light intensity changes as the light source rotates. Experimental results demonstrate the proposed approach for 3D normal vector estimation using an event camera based on photometric stereo.
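As a baseline for comparison, the sketch below implements conventional frame-based photometric stereo: with at least three known light directions and per-pixel intensities, the normals follow from a linear least-squares fit under a Lambertian assumption. The event-based formulation with a rotating light source is not shown, and the synthetic sphere and light directions are assumptions.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """intensities: (K, H, W) images; light_dirs: (K, 3) unit vectors toward the light."""
    k, h, w = intensities.shape
    i_flat = intensities.reshape(k, -1)                       # (K, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, i_flat, rcond=None)   # solves L @ g = I
    g = g.T.reshape(h, w, 3)                                  # albedo-scaled normals
    albedo = np.linalg.norm(g, axis=2)
    normals = g / np.maximum(albedo[..., None], 1e-12)
    return normals, albedo

if __name__ == "__main__":
    # Synthetic sphere under three distant lights to sanity-check the solver.
    h = w = 64
    yy, xx = np.mgrid[-1:1:h*1j, -1:1:w*1j]
    zz = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
    n_true = np.dstack([xx, yy, zz])
    lights = np.array([[0.3, 0.2, 0.93], [-0.3, 0.1, 0.95], [0.1, -0.4, 0.91]])
    lights /= np.linalg.norm(lights, axis=1, keepdims=True)
    imgs = np.stack([np.clip(n_true @ l, 0, None) for l in lights])
    n_est, _ = photometric_stereo(imgs, lights)
    inside = zz > 0.5                                          # avoid shadowed rim pixels
    cos_err = np.clip((n_est * n_true).sum(-1)[inside], -1, 1)
    print("mean angular error [deg]:", np.degrees(np.arccos(cos_err)).mean())
```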
Conventional CCD or CMOS cameras create complete two-dimensional images of a scene. In contrast, event cameras generate signals at pixels and times where and when changes in brightness occur. Reported changes come in the form of a stream of events, which are digital packets containing the pixel’s location, a time stamp, and additional information, such as the direction of the change. While successive images of conventional frame-based cameras might be partially redundant, measurements of event cameras are sparse. They do not contain any static information, and thus adapt to the dynamics of an observed scene. Event cameras also offer high temporal precision without the need for the same high bandwidth that would be required for comparable high-speed frame-based cameras. We have developed a 3D measurement setup, which consists of a pair of event cameras in stereo configuration and a specialized projector. We designed the projector in a way that it probes the measurement volume by means of a horizontally moving, vertically oriented contrast edge, i.e., a sharp transition between two levels of illumination. We argue that our method of structured illumination is well adapted to the sparse sampling and radiometric properties of event cameras. We present 3D measurements, performed within ∼200 ms, with quantified uncertainties to demonstrate the abilities of our setup, which enabled us to reconstruct an entire scene with ∼55,000 3D points. At a working distance of ∼700 mm, we achieved a spatial uncertainty of ∼0.6 mm. These results are achieved through triangulation of temporally corresponding events without any smoothing or similar post-processing. Based on our work and previous research, we suggest areas of future investigation in the field of event-based 3D and 4D (3D + time) measurements.
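A minimal sketch of the core matching-and-triangulation idea, assuming rectified cameras with known projection matrices, is given below: an event in each camera is time-stamped when the moving contrast edge passes it, so events that occur (nearly) simultaneously on corresponding rows can be matched and triangulated. The timestamp tolerance and data layout are placeholders, not the authors' exact pipeline.

```python
import numpy as np
import cv2

def match_and_triangulate(events_l, events_r, P_l, P_r, dt_max=1e-4):
    """events_*: (N, 3) arrays of (x, y, t) per camera on one rectified row; P_*: 3x4 matrices."""
    pts_l, pts_r = [], []
    for x_l, y_l, t_l in events_l:
        j = np.argmin(np.abs(events_r[:, 2] - t_l))        # closest event in time
        if abs(events_r[j, 2] - t_l) < dt_max:
            pts_l.append([x_l, y_l])
            pts_r.append(events_r[j, :2])
    if not pts_l:
        return np.empty((0, 3))
    pts_l = np.asarray(pts_l, dtype=np.float64).T           # 2xN layout for OpenCV
    pts_r = np.asarray(pts_r, dtype=np.float64).T
    hom = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)     # 4xN homogeneous points
    return (hom[:3] / hom[3]).T                              # Nx3 metric points
```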
Calibrating large-range vision systems like UAV cameras is a complex task that often involves costly setups and the potential for errors due to inaccuracies in target fabrication. Traditional UAV surveying software typically estimates camera parameters alongside ground control points, but this method may lack optimal accuracy. Our study explores an alternative: using out-of-focus camera calibration to improve the reliability and accuracy of drone cameras for surveying. In our approach, the UAV camera is positioned several meters away from a low-cost target to ensure focus. We then calibrate the intrinsic camera parameters using an out-of-focus small calibration target, fixing these parameters before flight. For evaluation, we compare this method against the standard approach of estimating UAV camera parameters with survey imagery. Preliminary results suggest that this out-of-focus method offers a reliable and accurate solution for UAV surveying applications.
Recently, we have made significant progress in evaluating the three-dimensional surface shape of optically challenging objects, especially transparent objects. Our method involves projecting long-wave infrared (LWIR) laser lines, inducing thermal patterns on the object’s surface. Using two thermal cameras in stereo arrangement, we were able to quickly measure the outer geometry of objects in less than 0.1 s. Additionally, we have set a new standard by capturing and reproducing real-time dynamic deformations of transparent objects at a 3D frame rate of 20 Hz.
Many industrial processes call for determining not only the front but also the rear surface shape, i.e., the material thickness. This is crucial for identifying weak points or potential material savings, for instance in ampoules. Existing methods for the simultaneous measurement of surface shape and material thickness (e.g., computed tomography) are complex, expensive, slow, and cannot be integrated into production lines. As a result, container glass manufacturers, for example, are actively seeking an alternative solution.
We aim to provide such a solution by enhancing our current process. Instead of a CO2 laser line at λ = 10.6 μm wavelength, which is absorbed at the object’s surface and does not penetrate the material, we use a wavelength in the short-wave infrared (SWIR). At this shorter wavelength, the laser radiation travels through commercially available glasses. At the rear surface, the radiation is partly reflected and reaches the front surface again. Along its path, the radiation is absorbed and leaves a heat trace behind. Whereas common glasses are translucent in the SWIR, they are generally opaque in the LWIR range. Consequently, while some SWIR radiation penetrates the object, LWIR cameras detect heat only at its front surface: (1) at the entering laser line and (2) at the position of the exiting line. Our goal is to use these two thermal signal positions to determine both the front and rear 3D surface shape, and thus the material thickness. In this paper, we investigate our approach theoretically using a simulation model. The model is used to generate thermal points on static measurement objects and determine appropriate parameters such as laser power, angle of incidence, and irradiation time. Furthermore, we analyze the temporal and spatial behavior of the thermal points, considering the material parameters. With the obtained simulated results, we subsequently demonstrate an initial experimental setup. In this setup, the two thermal signals are evaluated on a glass plate for different angles of incidence to determine the material thickness.
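Under a simple plane-parallel-plate, single-reflection assumption, the relation between the lateral separation of the two thermal spots, the angle of incidence, and the material thickness can be sketched as follows; the refractive index value and the idealized geometry are illustrative assumptions, not the simulation model described above.

```python
import numpy as np

def thickness_from_spots(s_mm, theta_i_deg, n_glass=1.5):
    """s_mm: lateral distance between entry and exit thermal spots on the front surface."""
    theta_i = np.radians(theta_i_deg)
    theta_r = np.arcsin(np.sin(theta_i) / n_glass)   # refraction angle inside the glass (Snell)
    return s_mm / (2.0 * np.tan(theta_r))            # from s = 2 * d * tan(theta_r)

if __name__ == "__main__":
    # Forward check: a 3 mm plate at 45 deg incidence gives s = 2 * d * tan(theta_r).
    d_true, angle = 3.0, 45.0
    s = 2.0 * d_true * np.tan(np.arcsin(np.sin(np.radians(angle)) / 1.5))
    print("recovered thickness [mm]:", thickness_from_spots(s, angle))
```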
Color accuracy is crucial in several domains, such as biomedical imaging, cosmetics, and multimedia. Digital Light Processing (DLP) with LEDs has become an increasingly popular light source in 3D scanning systems. Although DLP provides advantages for 3D reconstruction, it poses challenges in maintaining color accuracy. Our research focused on using hybrid lighting to improve the color accuracy of DLP-based 3D sensing systems. We developed an empirical dataset of skin tones captured under multiple lighting environments, including variations in indoor ambient lighting. Through qualitative and quantitative evaluations of color differences, we conclude that adding auxiliary lighting to DLP is beneficial for color accuracy, particularly in biomedical imaging and other applications in which color accuracy is essential.
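As an example of the kind of quantitative color-difference evaluation involved, the sketch below computes the CIE76 Delta E between measured and reference CIELAB patch values; the numerical values are invented placeholders, and the paper's evaluation may use a different formula such as CIEDE2000.

```python
import numpy as np

def delta_e_cie76(lab_measured, lab_reference):
    """Both arguments: (N, 3) arrays of CIELAB values (L*, a*, b*)."""
    return np.linalg.norm(np.asarray(lab_measured) - np.asarray(lab_reference), axis=1)

if __name__ == "__main__":
    reference = np.array([[65.0, 18.0, 18.0],     # hypothetical skin-tone patch
                          [50.0,  2.0, -22.0]])   # hypothetical blue patch
    measured_dlp_only = np.array([[62.5, 21.0, 15.0], [49.0, 4.0, -19.0]])
    measured_hybrid = np.array([[64.5, 18.8, 17.2], [49.8, 2.5, -21.3]])
    print("Delta E, DLP only:", delta_e_cie76(measured_dlp_only, reference))
    print("Delta E, hybrid  :", delta_e_cie76(measured_hybrid, reference))
```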
We report on an optical, non-contact thickness measurement system for materials that are opaque at ultraviolet (UV) through near-infrared (NIR) wavelengths, such as germanium and Nano-Composite Optical Ceramics (NCOCs). Measurement options do exist, but they must physically touch the sample or rely on an assumed bulk distribution of material. Additionally, optics are often highly sensitive to contamination and greatly benefit from non-contact metrology. The authors used the Lumetrics Optigauge MIR low-coherence interferometry (LCI) system to successfully measure an NCOC. A silicon (Si) control was used as a reference because it can be measured by both an Optigauge II and the Optigauge MIR-LCI system. In this work, the authors successfully measured and report on materials that are transparent in the mid-infrared (MIR) range. The authors speculate that MIR-LCI will enable wedge, thickness, flatness, and other measurements currently performed using an Optigauge II system to be extended to MIR-transparent materials.
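For context, low-coherence interferometry reports an optical thickness (the optical path between front- and rear-surface reflections), and the physical thickness follows by dividing by the material's group refractive index at the probe wavelength, as in the minimal sketch below; the index values and readings are illustrative assumptions.

```python
def physical_thickness_um(optical_thickness_um, group_index):
    """Convert an LCI optical-thickness reading to physical thickness."""
    return optical_thickness_um / group_index

if __name__ == "__main__":
    # Hypothetical readings: a silicon control and a MIR-transparent ceramic.
    print("Si sample   [um]:", physical_thickness_um(2030.0, 3.6))  # n_g(Si) ~ 3.6 assumed
    print("NCOC sample [um]:", physical_thickness_um(1260.0, 1.8))  # n_g assumed
```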
3D scanning is an indispensable tool in industrial applications, enabling the precise digital replication of complex objects and environments and supporting crucial factory-planning tasks such as quality control and factory design while improving overall efficiency. However, mainstream 3D scanning approaches such as structured light, 3D laser triangulation, and photogrammetry are often limited by their need for human intervention, operation, and oversight, leading to significant potential for error. To address this challenge, this paper proposes a mobile ground robotic scanning system that enables quick, robust, and precise 3D scanning of large-scale industrial environments for smart factory planning. Through a simulation of our ground mobile robotic system, we demonstrate the potential of NVIDIA Isaac Sim to serve as a platform for the efficient development, prototyping, and testing of large-scale industrial robotic 3D scanning frameworks.
3D scans of fingerprints can be more informative than their conventional 2D counterparts because they measure both a finger's profile and its ridge patterns. Capturing finger profiles may be advantageous for identifying individuals whose fingerprints have been altered or hidden, or whose hands have an unusual structure. To demonstrate that a single contactless 3D scan can capture a finger's ridge patterns and profile simultaneously with high fidelity, we establish a novel pipeline for producing silicone fingerprint replicas with sufficient detail to spoof a commercially available, low-cost fingerprint sensor. This pipeline employs a scanning-printing-casting (SPC) strategy: a laboratory-made structured-light 3D scanner obtains 3D fingerprints within half a second; molds of the digitized fingerprints are 3D modeled and printed; and silicone is poured into the molds and allowed to cure to complete the fingerprint replication process.
This paper investigates the Grey Wolf Optimizer (GWO) in robotic systems, using bibliometric data and VOSviewer analysis from 2016 to 2023. These years represent a crucial period in the development of robotic optimization methodologies, as evidenced by 285 documents published in 191 sources, with an annual growth rate of 49.25%. This growth emphasizes the increasing impact and use of GWO in robotics. The analysis highlights the recency of these documents and their impact, with an average document age of 3.01 years and an average of 9.204 citations per document. A detailed analysis of 1721 'Keywords Plus' and 826 'Author's Keywords' points to a diverse research landscape, showing GWO in manifold applications, from theoretical background to practical use. Key findings from the VOSviewer analysis uncover two primary research streams: robotics, and optimization algorithms applying GWO to mechanical and control problems. The high link strength of topics such as 'flexible manipulators' and 'robotic manipulators' reflects increasing attention to flexibility and accuracy in robotic systems. The presence of the labels 'genetic algorithms' and 'particle swarm optimization (PSO)' together with 'gray wolves' suggests a continuing comparative investigation of optimization techniques. This research not only emphasizes the role of GWO in robotics but also identifies further research areas, suggesting a detailed investigation of GWO applications across different robot types and optimization problems. It gives an in-depth view of the current status and future development of GWO in robotics, implying a trend of further expansion and innovation in this area.
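For readers unfamiliar with the algorithm under study, the sketch below shows the core GWO update rule on a placeholder objective: candidate solutions (wolves) move toward the three best solutions found so far (alpha, beta, delta), with an exploration coefficient decreasing linearly from 2 to 0. The population size, iteration count, and test function are assumptions and unrelated to the surveyed robotics applications.

```python
import numpy as np

def gwo(objective, dim=5, n_wolves=20, iters=200, bounds=(-10.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(objective, 1, x)
        leaders = x[np.argsort(fitness)[:3]]          # alpha, beta, delta
        a = 2.0 * (1.0 - t / iters)                   # linearly decreasing coefficient
        new_x = np.zeros_like(x)
        for leader in leaders:
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2.0 * a * r1 - a
            C = 2.0 * r2
            D = np.abs(C * leader - x)
            new_x += (leader - A * D) / 3.0           # average of the three pulls
        x = np.clip(new_x, lo, hi)
    fitness = np.apply_along_axis(objective, 1, x)
    return x[np.argmin(fitness)], fitness.min()

if __name__ == "__main__":
    best, value = gwo(lambda v: float(np.sum(v ** 2)))  # sphere test function
    print("best value found:", value)
```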