Digital holography benefits from interferometric amplification, which enhances sensitivity. Coherence-gated digital holography suppresses noise sources such as multiply scattered photons and ambient light. In addition, digital holography provides access to the optical phase, an important metrological parameter for three-dimensional measurements. The aim of this study is to investigate the potential of digital holography as a new sensor concept for the environmental perception of autonomous vehicles under poor visibility conditions. Our experiments are conducted in a 27-meter-long fog tube and serve in particular to characterize how effectively multiply scattered photons can be filtered out based on their coherence properties. A comparison between holography and time-of-flight (ToF) imaging shows that although ballistic-photon filtering works significantly better in ToF, the sensitivity gained through interferometric amplification lets holography outperform ToF. In addition, we combine these results with previous experiments and highlight the importance of the advantages of digital holography for environmental perception through scattering media.
Two key components enable autonomous vehicles: robust sensors that provide all relevant data about the vehicle's surroundings, and algorithms that evaluate these data in real time. Apart from radar and ultrasonic sensors, optical sensors such as lidar and cameras are state of the art in prototype autonomous vehicles. In adverse weather conditions such as fog, snow, dust, heavy rain, or poorly illuminated scenes, however, these sensors do not perform reliably. Recently, we proposed using a time-gated single-pixel camera to not only significantly reduce the amount of recorded data but also to filter ballistic object photons, i.e., to suppress the noise contributed by the obscuring medium. Apart from generating 3D object information, such a system can operate fast enough to handle a highly dynamic environment while respecting eye-safety norms. Moreover, a time-gated single-pixel camera offers image-free detection of all relevant objects within the scene, which speeds up data evaluation as well. Here, we report on our progress towards realizing such a system. We demonstrate image-free object detection on simulated data and realize multi-object detection by generating object heat maps for the different classes. Additionally, we discuss the difficulties that must be overcome to robustly detect objects in real measured data and briefly present our prototype setup, which we have implemented on a car together with our partners from the Fraunhofer Institut für Physikalische Messtechnik and the Institut für Autonome Intelligente Systeme, Universität Freiburg.
Differential perspective is a simple and cost-effective monocular distance measurement technique that works by taking two images from two axially separated locations. The two images are then analysed using image processing to obtain the change in size of different objects within the scene. From this information, the distances to the objects can be easily computed. We use this principle to realize a sensor for assisted driving in which the camera takes two images separated by 0.32 seconds. Distances to objects (e.g. number plates, traffic signs) of up to 200 meters can be measured with satisfactory accuracy. In the presentation we explain the basic principle and the employed image processing.
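The geometry behind the abstract above can be sketched as follows, assuming a pinhole camera model (the function name and numbers are illustrative, not taken from the paper). Under this model the image size of an object scales inversely with its distance, so if the camera moves forward by a known baseline between the two exposures and the object's apparent size grows from s1 to s2, the distance at the second exposure follows directly:

```python
# Minimal sketch of differential-perspective ranging under a pinhole model.
# Derivation: s1 = f*h/d1, s2 = f*h/d2, d1 = d2 + baseline
#   => s1/s2 = d2/(d2 + baseline)  =>  d2 = baseline * s1 / (s2 - s1)

def distance_from_size_change(s1: float, s2: float, baseline: float) -> float:
    """Distance to the object at the second exposure, given image sizes
    s1 (far position) and s2 (near position) and the forward baseline."""
    if s2 <= s1:
        raise ValueError("apparent size must grow as the camera approaches")
    return baseline * s1 / (s2 - s1)

# Illustrative numbers: at ~100 km/h, 0.32 s of travel gives a baseline of
# roughly 8.9 m; a number plate growing from 100 to 105 px then sits at ~178 m.
d = distance_from_size_change(100.0, 105.0, 8.9)
```

Note the sensitivity: the smaller the relative size change s2/s1 - 1, the larger the distance error, which is why accuracy degrades towards the 200 m limit quoted above.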
Optical metrology faces significant challenges as functional devices continue to shrink due to new patterning processes for semiconductor chips. Consequently, there is growing interest in modeling optical systems to achieve more accurate measurements and to compare measurements from different optical instruments, such as confocal microscopes, white light interference microscopes, and focus variation microscopes. Previous models have employed either a thin layer approximation or 2D periodic structures to simulate light scattering. However, to accurately simulate more complex structures and compare them with experimental data, a physically accurate modeling and simulation tool is needed that can handle large-scale aperiodic 3D surfaces. To address this need, we have developed a simulation tool called SpeckleSim, which utilizes the boundary element method. By incorporating a multilevel fast multipole method, we are able to calculate light scattering from 3D nanostructures within a reasonable timeframe. In this report, we adapt the method to a confocal microscopy model and investigate the extent to which it can reproduce surface profiles for different types of structures. The obtained results will be compared with experimental measurements and with the results of other rigorous simulation tools such as the rigorous coupled wave analysis (RCWA) method.
A reliable tool for simulating confocal microscopes shall be developed to enable improved model-based dimensional metrology. To simulate measurements on rough surfaces, the boundary element method (BEM) simulation tool SpeckleSim, developed at the ITO of the University of Stuttgart, is combined with a Fourier-optics-based image formation. SpeckleSim, which calculates the light-structure interaction by solving the Maxwell equations, is compared with the well-known FEM-based solver JCMsuite and the FDTD-based solver Ansys Lumerical. As an example, a rectangular line structure is used as the object. Due to different boundary conditions, the results show, as expected, small deviations, which require further investigation. First comparison results and the general concept of the image formation method will be presented.
In this article we present a highly accurate vibration measurement technique based on imaging multiple light emitters attached to the object of interest. Each emitter is holographically replicated into a cluster of spots in the image plane. By averaging the centroids of all replications, the position measurement accuracy can be improved. We show that vibration amplitudes of 100 nm can be measured within a measurement field of 148 mm × 110 mm using standard imaging sensors. The standard deviation between our camera setup and a commercial Laser-Doppler vibrometer used as reference is σ = 0.095 µm in object space, which corresponds to 0.0017 pixels in image space. To overcome the frame rate limitations of standard imaging sensors, we also investigate the application of the proposed method to an event-based camera. Since the signal no longer consists of grey-value images, other approaches have to be developed to reconstruct the object position. One reconstruction approach as well as first experimental results are presented.
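The statistical idea behind averaging the replicated spots can be illustrated with a short sketch (the spot count and noise level below are made-up values, not figures from the paper): if the centroid of a single spot carries independent noise of standard deviation σ, averaging N replications reduces the expected error of the emitter position by roughly a factor of 1/√N.

```python
import numpy as np

# Illustrative sketch: one emitter replicated into n_spots image spots,
# each localised with independent centroid noise; the emitter position is
# estimated as the mean of the spot centroids.
rng = np.random.default_rng(0)
n_spots = 25            # replications per emitter (assumed value)
sigma_single = 0.05     # per-spot centroid noise in pixels (assumed value)

true_pos = np.array([120.0, 80.0])   # emitter position in the image (px)
centroids = true_pos + rng.normal(0.0, sigma_single, size=(n_spots, 2))
estimate = centroids.mean(axis=0)    # averaged position estimate

# Expected error of the mean: sigma_single / sqrt(n_spots) = 0.01 px,
# a five-fold improvement over a single spot in this toy configuration.
```

This 1/√N scaling is what allows sub-pixel accuracies on the order of the 0.0017 px quoted above, provided the individual spot noises are close to independent.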