Edge-emitting laser diodes (EELs) are widely used owing to their superior performance; however, the strongly asymmetric beam profile along the fast and slow axes poses a major challenge for EEL beam shaping. Traditional optical devices mainly adjust the asymmetric divergence angles of the fast and slow axes, and the limited degrees of freedom of conventional beam-shaping elements make flexible, precise control of the EEL's luminous distribution difficult. In this article, we employ freeform lenses to flexibly reshape EEL beams and develop an approach that tackles the obstacles caused by the strongly asymmetric beam profile by generalizing the Monge–Ampère method to tailor freeform beam-shaping lenses for EELs. Three typical but challenging beam-shaping tasks show that both the intensity and the wavefront of an EEL beam can be reshaped in a desired manner with a single compact freeform lens, without any symmetry restrictions on the architecture of the beam-shaping system.
Designing a general method of freeform optics for illuminating hard-to-reach areas is a challenging but rewarding problem. Most current designs of freeform illumination optics are valid in applications where the region of interest is easily accessible. However, in some applications the region of interest is inaccessible because of obstacles that cannot be removed, yet high-quality illumination is still needed (as is usually the case in endoscopic lighting). In this paper, we present a general formulation for designing freeform lenses that illuminate hard-to-reach areas. In this method, the freeform lens consists of two elaborately designed surfaces by which both the irradiance distribution and the wavefront of the light beam are manipulated in a desired manner. After refraction by the freeform lens, the beam is further guided through a light-guiding system to produce a prescribed illumination on an inaccessible target plane. Here, the light-guiding system can be a single light-guiding element [e.g., a gradient refractive index (GRIN) lens] or an optical system consisting of several optical components. The properties of the light-guiding system are taken into account when tailoring the freeform lens profiles to guarantee the prescribed illumination on the target plane. The result shows that the design of freeform optics for illuminating hard-to-reach areas in the presence of a light-guiding system can still be formulated as a Monge–Ampère (MA) equation with a nonlinear boundary condition. Two examples demonstrate the elegance of this method in designing freeform optics for illuminating hard-to-reach areas.
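Both of the abstracts above reduce freeform-lens design to a Monge–Ampère equation. As a hedged sketch of the underlying structure (the symbols and domains here are illustrative, not the papers' exact notation): when a ray map given by the gradient of a potential carries the source irradiance onto the target irradiance, local energy conservation yields a second-order PDE of Monge–Ampère type with a transport-type boundary condition.

```latex
% Prescribed-irradiance design as optimal transport (illustrative form only):
% a ray map T = \nabla u carries source irradiance I on \Omega_s to target
% irradiance E on \Omega_t; local energy conservation gives
I(\mathbf{x})
  = E\!\left(\nabla u(\mathbf{x})\right)\,
    \det\!\left(\mathrm{D}^2 u(\mathbf{x})\right),
\qquad
\nabla u(\Omega_s) = \Omega_t .
% The determinant of the Hessian makes this a Monge--Ampère equation; the
% edge-ray condition \nabla u(\partial\Omega_s) = \partial\Omega_t is the
% nonlinear boundary condition referred to in the abstract.
```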
Designing illumination optics directly for extended sources is a meaningful but challenging problem. Many direct design methods developed specifically for prescribed-intensity designs fail to produce satisfactory illumination in the near field, where the influence of the lens size on the irradiance distribution cannot be ignored. In this paper, a direct method of designing aspherical lenses for extended sources is introduced to achieve specified irradiance characteristics. Various types of prescribed irradiance distributions are presented to verify the broad applicability and high efficiency of the direct design method; in particular, two examples producing discontinuous irradiance distributions are analyzed in detail.
Owing to their low cost and easy deployment, monocular cameras have long attracted the attention of researchers working on depth estimation. With the strong performance of deep learning in this task, more and more training models for depth estimation have emerged. Most existing works that achieve very promising results belong to supervised learning methods, but they require corresponding ground-truth depth data for training, which complicates the training process. To overcome this limitation, an unsupervised learning framework is used for monocular depth estimation from videos, consisting of a depth-map network and a pose network. In this paper, better results are achieved by optimizing the training models and improving the training loss. Training and evaluation data are based on the standard KITTI dataset (Karlsruhe Institute of Technology and Toyota Technological Institute). Finally, the results are compared across the different training models used in this paper.
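The unsupervised framework described above is trained by view synthesis: the depth and pose networks are judged by how well one video frame, warped through the predicted depth and camera motion, reproduces another. A minimal NumPy sketch of that photometric reprojection loss (function names, nearest-neighbour sampling, and the single-channel images are simplifying assumptions, not the paper's implementation):

```python
import numpy as np

def pixel_to_cam(depth, K_inv):
    """Back-project every pixel (u, v) with its depth into 3D camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # homogeneous pixels
    return (K_inv @ pix) * depth.reshape(1, -1)                     # 3 x N points

def reproject(depth_t, K, T_t2s):
    """Map target-frame pixels into the source frame via depth and relative pose."""
    cam = pixel_to_cam(depth_t, np.linalg.inv(K))         # 3 x N
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])  # 4 x N homogeneous
    src = K @ (T_t2s @ cam_h)[:3]                         # project into source camera
    return src[:2] / src[2:3]                             # pixel coords in source image

def photometric_loss(img_t, img_s, depth_t, K, T_t2s):
    """Mean L1 photometric error between the target image and the source image
    warped into the target view (nearest-neighbour sampling for brevity;
    real models use differentiable bilinear sampling)."""
    h, w = img_t.shape
    uv = np.round(reproject(depth_t, K, T_t2s)).astype(int)
    u, v = uv[0].reshape(h, w), uv[1].reshape(h, w)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)       # ignore out-of-view pixels
    warped = np.zeros_like(img_t)
    warped[valid] = img_s[v[valid], u[valid]]
    return np.abs(img_t - warped)[valid].mean()
```

With the identity pose and correct depth, the warp is the identity and the loss vanishes; during training, minimizing this loss over many frame pairs is what supervises the depth and pose networks without ground-truth depth.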
Spectral confocal technology is an important three-dimensional measurement technology offering high accuracy and non-contact operation; however, a traditional spectral confocal system usually consists of prisms and several lenses, making it bulky and heavy, and the chromatic aberration of ordinary optical lenses makes it difficult to focus light well over a wide bandwidth. Metasurfaces are expected to miniaturize conventional optical elements thanks to their superb ability to control the phase and amplitude of an incident wavefront at the subwavelength scale. In this paper, an efficient spectral confocal meta-lens (ESCM) working in the near-infrared spectrum (1300–2000 nm) is proposed and numerically demonstrated. The ESCM focuses incident light at focal lengths from 16.7 to 24.5 μm along a perpendicular off-axis focal plane, with NA varying from 0.385 to 0.530. The meta-lens consists of a group of Si nanofins providing a polarization conversion efficiency larger than 50%, and the phase required for focusing is rebuilt from a resonant phase proportional to the frequency and the wavelength-independent geometric (Pancharatnam–Berry, PB) phase. Such dispersive components can also be used in instruments requiring dispersive devices, such as spectrometers.
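The abstract combines a resonant phase with the geometric (PB) phase to build the focusing profile. Two textbook relations are involved: the hyperbolic phase that focuses a plane wave, and the PB rule that a half-wave nanofin rotated by an angle θ imparts a phase of 2θ on circularly polarized light. A minimal numeric sketch (function names and any specific numbers are illustrative assumptions, not the paper's design):

```python
import numpy as np

def focusing_phase(r, wavelength, f):
    """Hyperbolic phase profile (radians) that focuses a normally incident
    plane wave to an on-axis focal length f; r is the radial coordinate."""
    return -2 * np.pi / wavelength * (np.sqrt(r**2 + f**2) - f)

def nanofin_rotation(phase):
    """PB (geometric) phase: a half-wave nanofin rotated by theta imparts
    2*theta on circular polarization, so theta = phase / 2 (mod pi).
    This rotation-to-phase mapping is wavelength independent."""
    return (phase / 2) % np.pi
```

At the lens center the required phase is zero, and the nanofin orientation wraps with period π, which is why the geometric-phase contribution is the same at every wavelength while the resonant phase supplies the frequency-proportional part.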
A novel method for light-field depth estimation using a convolutional neural network is proposed in this paper. Many approaches to light-field depth estimation have been proposed, but most of them trade accuracy against runtime. To resolve this trade-off, we propose a method that obtains more accurate depth estimates at a faster speed. First, the light-field data are augmented by the proposed method in a way that respects the light-field geometry. Because of the large amount of light-field data, the number of images must be reduced appropriately to improve speed while maintaining the confidence of the estimation. Next, the augmented light-field images are input into our network, and the features extracted in this process are used to compute the disparity value. Finally, after training, our network generates an accurate depth map from an input light-field image, from which the 3D structure of the real world can be accurately reconstructed. Our method is verified on the HCI 4D Light Field Benchmark and on real-world light-field images captured with a Lytro light-field camera.
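The abstract does not detail the network, but the disparity cue it learns from is the one used by classical light-field methods: a scene point at disparity d appears shifted by d times the view offset in each sub-aperture image. As a hedged point of comparison (a classical shear-and-variance cost volume, not the paper's CNN; integer shifts via np.roll keep the sketch short, where real pipelines interpolate sub-pixel shears):

```python
import numpy as np

def disparity_cost_volume(views, positions, disparities):
    """For each candidate disparity, shear every sub-aperture view toward the
    centre view and score photo-consistency as the variance across views:
    low variance means the views agree, i.e. the disparity is correct there."""
    costs = []
    for d in disparities:
        sheared = [np.roll(v, (int(round(d * p[1])), int(round(d * p[0]))), axis=(0, 1))
                   for v, p in zip(views, positions)]
        costs.append(np.var(np.stack(sheared), axis=0))
    return np.stack(costs)  # shape: (len(disparities), H, W)

def estimate_disparity(views, positions, disparities):
    """Winner-takes-all disparity map: per-pixel argmin over the cost volume."""
    cv = disparity_cost_volume(views, positions, disparities)
    return np.asarray(disparities)[np.argmin(cv, axis=0)]
```

A learned network replaces both the hand-built cost and the argmin with regressed disparities, which is where the accuracy/runtime gains over such baselines come from.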