The automation of inspection processes in aircraft engines poses challenging computer vision tasks. In particular, the inspection of coating damages in confined spaces with hand-held endoscopes is based on image data acquired under dynamic operating conditions (illumination, position and orientation of the sensor, etc.). In this study, 2D RGB video data is processed to quantify damages in large coating areas. To this end, the video frames are pre-processed by feature tracking and stitching algorithms to generate high-resolution overview images. For the subsequent analysis of the whole coating area, and to overcome the challenges posed by the diverse image data, Convolutional Neural Networks (CNNs) are applied. A preliminary study found that the image analysis is advantageous when executed at different scales. Here, one CNN is applied to small image patches without down-scaling, while a second CNN is applied to larger, down-scaled image patches. This multi-scale approach raises the challenge of combining the predictions of both networks. Therefore, this study presents a novel method that increases the segmentation accuracy by interpreting the network results to derive a final segmentation mask. This ensemble method consists of a CNN that is applied to the patch-wise predictions from the overview images. The evaluation of this method covers different pre-processing techniques for the logit outputs of the preceding networks as well as additional information such as RGB image data. Further, different network structures are evaluated, including custom structures specifically designed for this task. Finally, these approaches are compared against state-of-the-art network structures.
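The logit-fusion step described above can be sketched as follows. This is a minimal illustration only, not the architecture from the paper: all layer sizes, channel counts and names (e.g. LogitFusionCNN) are assumptions.

```python
# Minimal sketch (PyTorch): an ensemble CNN that fuses the logit outputs of
# two patch-level segmentation branches, optionally concatenated with the RGB
# patch, into final segmentation logits. Hypothetical layer sizes throughout.
import torch
import torch.nn as nn

class LogitFusionCNN(nn.Module):
    """Small fusion network operating on stacked per-pixel logits."""
    def __init__(self, in_channels: int, num_classes: int = 2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, kernel_size=1),
        )

    def forward(self, logits_fine, logits_coarse, rgb=None):
        # The coarse branch operates on down-scaled patches; its logits are
        # upsampled to the native patch resolution before fusion.
        logits_coarse = nn.functional.interpolate(
            logits_coarse, size=logits_fine.shape[-2:],
            mode="bilinear", align_corners=False)
        inputs = [logits_fine, logits_coarse]
        if rgb is not None:
            inputs.append(rgb)  # optional additional RGB information
        return self.fuse(torch.cat(inputs, dim=1))

# Example: 2-class logits from both branches plus a 3-channel RGB patch.
fusion = LogitFusionCNN(in_channels=2 + 2 + 3)
fine = torch.randn(1, 2, 256, 256)       # full-resolution patch logits
coarse = torch.randn(1, 2, 64, 64)       # down-scaled patch logits
rgb = torch.randn(1, 3, 256, 256)
mask_logits = fusion(fine, coarse, rgb)  # (1, 2, 256, 256)
```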
External Fabry-Perot resonators are widely used in the field of optics and are well established in areas such as frequency selection and spectroscopy. However, fine-tuning and thus efficient coupling of these resonators into the optical path is a time-consuming task, which is usually performed manually by trained personnel. The state of the art includes many different approaches for automatic alignment; however, these are designed for individual optical configurations and are not universally applicable. Moreover, none of these approaches address the identification of the spatial degrees of freedom of the resonator. Knowledge of this exact pose information can generally be integrated into the alignment process and has great potential for automation. In this work, convolutional neural networks (CNNs) are applied to identify the sensitive spatial degrees of freedom of a Fabry-Perot resonator in a simulation environment. For this purpose, well-established CNN architectures, which are typically used for feature extraction, are adapted to this regression problem. The input of the CNNs was chosen to be the intensity profiles of the transverse modes, which can be obtained from the transmitted power behind the resonator. These modes are known to be highly correlated with the coupling quality and thus with the spatial alignment of the resonator. To achieve an exact pose estimation, the CNN input consists of several images of mode profiles, which are propagated through an encoder structure followed by fully-connected layers providing the four spatial parameters as the network output. For training and evaluation, intensity images as well as resonator poses are obtained from a simulation of a free spectral range of a resonator. Finally, different encoder structures, including a memory-efficient, small self-developed network architecture, are evaluated.
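The encoder-plus-regression-head setup can be sketched as follows. This is a minimal illustration under assumed dimensions (e.g. eight 64x64 mode-profile images per sample), not one of the evaluated architectures.

```python
# Minimal sketch (PyTorch): a small encoder mapping a stack of transverse-mode
# intensity images to the four spatial degrees of freedom as a regression
# output. Layer sizes and the class name PoseRegressor are assumptions.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self, num_mode_images: int = 8, num_dof: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_mode_images, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling to one feature vector
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_dof),         # four spatial parameters
        )

    def forward(self, mode_images):
        return self.head(self.encoder(mode_images))

# Example: eight mode-profile intensity images of 64x64 pixels per sample.
model = PoseRegressor()
profiles = torch.rand(1, 8, 64, 64)
pose = model(profiles)  # (1, 4) estimated spatial parameters
```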
Within the aviation industry, considerable interest exists in minimizing maintenance expenses. In particular, the examination of critical components such as aircraft engines is of significant relevance. Currently, many inspection processes are still performed manually using hand-held endoscopes to detect coating damages in confined spaces and therefore require a high level of individual expertise. Particularly due to the often poorly illuminated video data, these manual inspections are susceptible to uncertainties. This motivates an automated defect detection to provide defined and comparable results and also enable significant cost savings. For such a hand-held application with video data of poor quality, small and fast Convolutional Neural Networks (CNNs) for the segmentation of coating damages are suitable and are further examined in this work. Due to the high effort required for image annotation and a significant lack of broadly divergent image data (domain gap), only a few expressive annotated images are available. This necessitates extensive training methods that utilize unsupervised domains and further exploit the sparsely annotated data. We propose novel training methods that implement Generative Adversarial Networks (GANs) to improve the training of segmentation networks by optimizing weights and generating synthetic annotated RGB image data for further training procedures. For this, small individual encoder and decoder structures are designed to resemble the implemented structures of the GANs. This enables a transfer of weights and optimizer states from the GANs to the segmentation networks, which improves both convergence certainty and accuracy in training. The use of unsupervised domains in training the GANs leads to a better generalization of the networks and tackles the challenges caused by the domain gap. Furthermore, a test series is presented that demonstrates the impact of these methods compared to standard supervised training and transfer learning based on common datasets. Finally, the developed CNNs are compared to larger state-of-the-art segmentation networks in terms of feed-forward computation time, accuracy and training duration.
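The weight-exchange idea can be sketched as follows. This is an illustration of only the weight-copying part (the abstract additionally mentions optimizer states); the module names gan.generator, gan.discriminator and seg_net are hypothetical, and the transfer assumes the structures were designed to mirror each other as described above.

```python
# Minimal sketch (PyTorch): transferring weights from a trained GAN into a
# segmentation network whose encoder/decoder mirror the GAN's discriminator/
# generator, so that parameter names and shapes line up.
import torch

def transfer_matching_weights(source: torch.nn.Module,
                              target: torch.nn.Module) -> list:
    """Copy every parameter whose name and shape coincide in both modules."""
    src_state = source.state_dict()
    tgt_state = target.state_dict()
    matched = {k: v for k, v in src_state.items()
               if k in tgt_state and tgt_state[k].shape == v.shape}
    tgt_state.update(matched)
    target.load_state_dict(tgt_state)
    return list(matched)  # names of the transferred parameters

# Usage (assuming structurally aligned, hypothetical modules):
# transferred  = transfer_matching_weights(gan.discriminator, seg_net.encoder)
# transferred += transfer_matching_weights(gan.generator, seg_net.decoder)
```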
The rapid developments in the micro camera industry, driven by the smartphone sector, offer a wide range of innovations with respect to the performance and miniaturization of endoscopic instruments. For the fast 3D inspection of inaccessible components such as turbine blades in partially disassembled aircraft engines, a borescopic fringe projection system was developed. This study provides a methodology for the comparison of different camera and projection configurations within borescopic fringe projection. Furthermore, the current limits for the use of multimedia sensors in metrological applications are shown and quantified. The projection unit of the developed measuring system is based on a digital micromirror device, which generates structured light that is imaged into the measurement scene by means of an objective lens and a borescope. Three-dimensional, high-resolution reconstructions are carried out via chip-on-the-tip miniature cameras based on the MIPI interface by forming a triangulation base with the projection unit. To enable inspections in confined spaces, the cameras are connected to external frame grabber boards. In this study, 1/6” and 1/4” sensors with fixed-focus lenses are evaluated to assess the trade-off between the physical sensor size and the achievable reconstruction accuracy. Typical camera parameters such as the sensitivity with respect to the signal-to-noise ratio are determined by means of a standardized test setup. In addition, the application in the triangulation system is evaluated through the modulation signal strength as well as the suitability for typical system calibrations based on the widely used pinhole camera model in combination with a polynomial distortion correction according to the approach of Conrady and Brown. Finally, the sensors are compared regarding 3D reconstructions of calibrated geometric features in accordance with ISO/IEC Guide 98-3:2008 and VDI/VDE 2634-2.
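The Conrady-Brown distortion model referenced in the calibration step can be sketched as follows. The formulas are the standard radial/tangential model; the coefficient values in the example are placeholders, not values from the study.

```python
# Minimal sketch (NumPy): the Brown-Conrady distortion model applied to
# normalized pinhole image coordinates, with radial terms k1..k3 and
# tangential terms p1, p2.
import numpy as np

def brown_conrady(xy, k1, k2, k3, p1, p2):
    """Distort normalized coordinates xy with shape (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2                                  # squared radial distance
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3    # radial polynomial
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

# Example: distort a 5x5 grid of normalized points (placeholder coefficients).
grid = np.mgrid[-1:1:5j, -1:1:5j].reshape(2, -1).T
distorted = brown_conrady(grid, k1=-0.1, k2=0.01, k3=0.0, p1=1e-3, p2=1e-3)
```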