1. Introduction

The optical triangulation method is a state-of-the-art technique for acquiring geometry data of complex freeform surfaces and is used at different scales.1 A common industrial application is the inspection of formed metal sheets in the automotive sector by fringe projection systems,2 whereas endoscopic systems with small measurement heads are being investigated for in-situ inspection tasks in confined spaces (e.g., the restoration of turbine blades3). Both the fringe projection and the laser light section method require homogeneous measurement conditions in terms of the surrounding optical medium's refractive index, as a rectilinear propagation of light is assumed in optical triangulation.4 Although the refractive index of air depends on various parameters, such as humidity, pressure, and the CO2 content, it varies only slightly if temperature and pressure can be considered constant.5 As most measurements are performed under normal conditions, the hypothesis of rectilinear light propagation is usually valid or accurate enough. In subproject C5 of the Collaborative Research Centre 1153 (CRC) "Process chain to produce hybrid high performance components by Tailored Forming," the geometry of high-temperature hybrid workpieces is to be inspected via optical triangulation techniques between subsequent forming steps. The condition monitoring of critical workpiece features, such as the joining zone of different materials in a hybrid component, can help to discard deficient parts at an early manufacturing stage. Another advantage of an immediate, and therefore high-temperature, inspection is the saving of energy, as the present workpiece temperature can be exploited in the following steps of the forming chain.
Unfortunately, the hypothesis of rectilinear light propagation is violated when optically measuring hot objects: workpiece temperatures of more than 1000°C lead to a non-negligible heat input into the surrounding air and induce a temperature rise and thereby a density reduction, which creates a locally differing refractive index.6 The resulting 3-D refractive index field's (RIF) shape, extension, and magnitude are time-variant and depend strongly on the object's temperature and geometry and on the present air flow conditions.7 The light propagation is affected, as its path is bent toward denser air layers. Most articles in this field neglect this deflection effect,8–11 which is legitimate if the light path deflection is too small to be reproduced by the applied measurement system. Ghiotti et al.12 present a high-speed measuring system based on multiple laser scanning triangulation sensors to acquire the geometry of freeform parts with temperatures up to 1200°C. The refraction of the laser light due to the heat input into air is not considered, as the maximum error assumed for the described measurement scenario is considered negligible. In order to model the light path in (inhomogeneous) media, Fermat's principle has to be adhered to. A modern and general version of Fermat's principle is formulated in terms of variational calculus: between two points $P_1$ and $P_2$, a light ray takes the path that is extremal with respect to variations of this path. A mathematical formulation for the optical path length (OPL) is

$$\mathrm{OPL} = \int_{P_1}^{P_2} n(\mathbf{r}) \, \mathrm{d}s, \tag{1}$$

where $n(\mathbf{r})$ is the refractive index of the traversed medium and a function of the location $\mathbf{r}$.13

In this paper, the effect of an inhomogeneous RIF in air on a 3-D optical triangulation measurement is numerically modeled. The exemplary measurement is performed by simulating the geometry acquisition of a hot cylinder via the light-section method.
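Equation (1) can be evaluated numerically by discretizing a candidate path and summing n·Δs per segment; a minimal sketch (the exponential n-profile is a hypothetical stand-in for a simulated RIF, not a result of this work):

```python
import numpy as np

def optical_path_length(path, n_field):
    """Approximate OPL = integral of n(r) ds along a polyline path.

    path    : (N, 3) array of points along the ray
    n_field : callable mapping a 3-D point to a refractive index
    """
    opl = 0.0
    for a, b in zip(path[:-1], path[1:]):
        ds = np.linalg.norm(b - a)          # segment length
        n_mid = n_field(0.5 * (a + b))      # midpoint rule for n
        opl += n_mid * ds
    return opl

# Hypothetical RIF: n is reduced in a thin hot layer near a surface at z = 0.
n_hot = lambda p: 1.000272 - 2e-4 * np.exp(-abs(p[2]) / 0.01)

straight = np.linspace([0.0, 0.0, 0.005], [0.1, 0.0, 0.005], 200)
print(optical_path_length(straight, n_hot))
```

Comparing the OPL of several candidate polylines between the same two points is the discrete analogue of the variational statement: the physical ray is the path whose OPL is stationary.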
The approach fully considers the light deflection both from the illumination unit (laser with telecentric lens) to the object and from the object to the detection unit (pinhole camera). As the path of stationary optical length between laser and camera is not known a priori, a solution to Eq. (1) can only be gained by an iterative approximation procedure that optimizes the path between object (cylinder) and camera by ray tracing. The simulations are performed with the software Comsol Multiphysics,14 as it provides both a module for numerical heat transfer calculations and a ray tracing module.

2. Former Work

In a former SPIE proceedings contribution, the authors experimentally investigated the effect of a convective density flow on a 3-D geometry measurement of a hot steel pipe by the light-section method from above.15 To realize measurements with reduced refractive index inhomogeneity, triangulation measurements were conducted while controlling the RIF's shape via superimposition of an external laminar air flow. The laminar flow allowed the acquisition of reference geometry data of a hot object subject to thermal expansion but only slightly affected by the RIF. The experimental results revealed an interesting fact: the hot cylinder's geometry measured under full influence of the RIF led to a significantly smaller cylinder radius compared to the hot measurement with reduced convective flow. As the cylinder's temperature differed only slightly between the two measurements, the documented change in radius could not be caused by a difference in thermal expansion. Therefore, it must have been induced by light deflection in the RIF. The design of the light section experiment permitted a more or less accurate documentation of the virtual geometry manipulation due to heat-induced inhomogeneous RIFs but did not allow a deeper analysis of the nature of the deflection.
Superimposing a laminar flow in order to "homogenize" the RIF is a rather complex way to obtain a hot, non-RIF-affected reference measurement to which a hot, RIF-affected measurement can be compared. Furthermore, the success of this approach depends highly on the object's geometry and its influence on the external air flow behavior. If a hot object were not subject to thermal expansion, a cold object measurement could serve as reference in order to expose exclusively the RIF effect on a measurement. This can be achieved by means of software: if just the heat input into the air, but not the measurement object's thermal expansion, is numerically modeled, the object's geometry in the hot and the cold state is the same. In this scenario, deviations from the geometry in the hot state are exclusively caused by the RIF and can be revealed by simply comparing the object's geometry in the hot and the cold state. The starting point for the present article is former simulation results of the laser light path manipulation from the virtual illumination unit to the measurement object due to refractive inhomogeneity. The simulation setup is now extended by a virtual camera and a multistep ray tracing optimization in order to model a complete triangulation process.

3. Simulation Design: Assumptions and Boundary Conditions

This section comprises information on the geometrical simulation setup and the virtual triangulation sensor, and a detailed overview of the boundary conditions and theoretical models, such as the camera pinhole model and the derivation of the RIF induced by heat transfer.

3.1. Geometrical Setup and Refractive Index Field

The quantification of the virtual geometry manipulation by optical inhomogeneity in air requires a reference geometry. The geometry choice is guided by numerical needs: a horizontal cylinder guarantees robust conditions for the crucial density simulations based on heat transfer, as a numerically stable convective heat and density flow builds up above the shaft.
This is indispensable for the derivation of the RIF. The geometrical dimensions of the simulation setup are outlined in Fig. 1. The cylinder has a diameter of 27 mm and a length of 170 mm. It has a starting temperature of 900°C, 1100°C, or 1250°C. These parameters are similar to those of a Tailored Forming workpiece after forming, postulating a slight cooling down to 900°C to account for workpiece handling time. The heat transfer simulation requires the specification of the involved materials. As a start, a simple steel monomaterial is chosen for the cylinder geometry in order to limit the simulation complexity. The relevant material parameters, e.g., the steel cylinder's thermal conductivity and specific heat capacity, are listed in Table 1. Humid air at a pressure of 1 atm is postulated as the surrounding medium. Furthermore, the expected convective flow is restricted to a laminar character. Turbulence is not reproduced in the model, both to save computation costs and to keep the analysis of the subsequent ray tracing results as simple as possible. Further information on the heat transfer equations used is beyond the scope of this paper and can be found in the software user guide for the heat transfer module.14

Table 1: Summary of simulation boundary conditions for heat transfer and ray tracing simulations.
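The boundary conditions of Table 1, as far as they are stated in the text, can be collected in a small configuration sketch (material constants such as thermal conductivity are omitted, as their values are not quoted in the running text):

```python
# Simulation boundary conditions as stated in Secs. 3.1 and 3.3.
boundary_conditions = {
    "cylinder": {
        "material": "steel (monomaterial)",
        "diameter_mm": 27.0,
        "length_mm": 170.0,
        "start_temperatures_degC": (900, 1100, 1250),
    },
    "medium": {
        "type": "humid air",
        "pressure_atm": 1.0,
        "flow_regime": "laminar (no turbulence modeled)",
    },
    "sensor": {
        "working_distance_mm": 300.0,
        "triangulation_angle_deg": 60.0,
        "rotation_angles_deg": (0, 15, 30),
    },
}
print(boundary_conditions["cylinder"]["diameter_mm"])
```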
The following simulation routine has been implemented to gain an inhomogeneous 3-D RIF: First, the heat transfer from the hot measurement object into the surrounding air is simulated in order to gain a scalar 3-D density field with locally varying density values. The simulation is stopped after a fixed simulation time, since this is the planned maximum time to position the hot measurement object in front of the sensor in an experimental setup. Subsequently, the density values are used to derive a scalar 3-D RIF using the Ciddor equation.5 Ciddor introduced an equation for the refractive index of air dependent on wavelength, temperature, pressure, humidity, and CO2 content. By using the ideal gas law and postulating an isobaric state, a relationship between density and refractive index can be deduced. This approach is only accurate for moderate temperatures, as the Ciddor equation is only valid up to 100°C. Assuming that a given reference density results in a known refractive index, the Ciddor equation can be linearly extrapolated to the extreme density values in air that develop near the hot object. An exemplary simulation result for the RIF is displayed in Fig. 1 for one of the investigated temperatures, revealing the convective density flow above the cylinder and its symmetrical shape. A summary of the hypothesized simulation boundary conditions is given in Table 1.

3.2. Optical Triangulation in Inhomogeneous Media: Simplified Outline

A simplified outline of a 2-D triangulation measurement setup with RIF effect, illumination unit (laser), and camera sensor is given in Fig. 2. To enhance clarity, the RIF is approximated by discrete air layers with different refractive index values. The air layer directly next to the hot cylinder surface features the lowest refractive index. For demonstration purposes, the sensor is positioned laterally to the measurement object. A 2-D point is represented by a bold character; a subscript index indicates a measured point.
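The linear density-to-refractive-index extrapolation described above can be sketched as follows (the anchor values rho0 and n0 are placeholders, since the concrete reference numbers are not reproduced here):

```python
def refractive_index(rho, rho0, n0):
    """Linear density-to-refractive-index extrapolation.

    Anchors the refractivity (n - 1) at a reference state (rho0, n0),
    e.g., obtained from the Ciddor equation at moderate temperature,
    and scales it with density, which follows from the ideal gas law
    under isobaric conditions.
    """
    return 1.0 + (n0 - 1.0) * rho / rho0

# Placeholder anchor state (values illustrative only, not from Table 1).
rho0, n0 = 1.20, 1.000272          # kg/m^3, Ciddor-type refractive index

# Hot air near the cylinder is far less dense, so n approaches 1.
print(refractive_index(0.28, rho0, n0))
```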
The blue (solid) line encodes the unaffected light path assuming homogeneous optical conditions; the red (dashed) line encodes the affected path in an inhomogeneous field. The surface of the cylinder is reconstructed by intersecting the activated camera line-of-sight with the laser line (or, in 3-D, with the laser plane), leading to a measurement difference when comparing the affected and the unaffected scenario. The difference between the actual laser point and the location measured by triangulation in a homogeneous scenario (cold cylinder) is small if a high triangulation accuracy is assumed. This is indicated by depicting the actual and the measured point in the same location. A loss of geometry information due to the sensor discretization is not considered in the simplified outline in Fig. 2.

3.3. Virtual Triangulation Sensor

The actual simulation has been realized with a virtual 3-D triangulation sensor using the light-section method. It comprises a matrix camera and a telecentric laser line generator (see Fig. 1). The measurement results are given in the world coordinate system, unless declared otherwise. The laser is approximated by several discrete and equidistant laser rays, differing only in their discharge location along the laser line for ray tracing. As a telecentric laser line generator is used (the fan angle is 0 deg), the start vector defining the rays' tracing direction is assumed to be constant. Laser line generators with fan angles greater than 0 deg would require different ray tracing start vectors to reproduce the beam expansion. The virtual camera's projection center and the laser are positioned at a distance of 300 mm from the origin of the world coordinate system. The triangulation angle is 60 deg. In order to examine the effect of the sensor pose on the measurement result, a rotation angle is defined to adjust the sensor location relative to the cylinder axis. The exemplary angles are 0 deg, 15 deg, and 30 deg. To keep the simulation routine as simple as possible, the virtual camera is modeled as an ideal pinhole camera.
This precondition implies a set of idealizing assumptions.
The mathematical description of the mapping of an arbitrary 3-D point $\mathbf{X}_c$ in the camera coordinate frame onto the 2-D sensor frame at pixel position $(u, v)$ is given in Eq. (2) (e.g., according to Ref. 16, see also Fig. 3):

$$s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f/s_x & 0 & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = \mathbf{A} \mathbf{X}_c, \tag{2}$$

with $f$ as the camera's physical focal length in mm, $s_x$ and $s_y$ as the pixel size in the $x$- and $y$-direction, and $c_x$ and $c_y$ as the shift in pixel between the two coordinate systems. $s$ is a scaling factor in mm that parametrizes the length of the camera's line-of-sight through a certain pixel. The camera matrix $\mathbf{A}$ comprises the intrinsic parameters of the modeled pinhole camera. In an experimental setup, the camera parameters can be approximated by a calibration routine (e.g., according to Ref. 17). If a 2-D point needs to be reprojected into 3-D space, the scaling factor $s$ (the length of the camera's line-of-sight) needs to be known. To this end, Eq. (2) can be transformed to

$$\mathbf{X}_c = s \, \mathbf{A}^{-1} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}. \tag{3}$$

The transformation between two different coordinate systems (for instance, between the world and the camera coordinate frame) can easily be realized with the help of a transformation matrix $\mathbf{T}$, according to the definition in Eq. (4):

$$\mathbf{T} = \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{T} & 1 \end{pmatrix}, \tag{4}$$

where $\mathbf{T}$ combines rotation and translation to transform homogeneous data points from one coordinate frame to another. The rotation matrix $\mathbf{R}$ is built from orthonormal vectors, and $\mathbf{t}$ is the translation vector. The basic triangulation routine is realized by a simple plane-line intersection, as outlined in Fig. 3(a) (e.g., according to Ref. 18). The exemplary viewing direction onto the displayed triangulation setup is indicated in Fig. 3(b) (with the cylinder cross-section, white arrow). The camera sensor is displayed in front of the camera's projection center (unlike the depiction in Fig. 2). This is done for demonstration purposes and in order to display the camera according to the mathematical definition of the pinhole model, as given in Eqs. (2) and (3).
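Equations (2) and (3) and the plane-line intersection can be sketched compactly; the numeric camera parameters below are illustrative, not those of the simulated sensor:

```python
import numpy as np

# Illustrative intrinsics: focal length 16 mm, 10 um pixels, principal
# point at (320, 240). Not the parameters of the simulated sensor.
f, sx, sy, cx, cy = 16.0, 0.01, 0.01, 320.0, 240.0
A = np.array([[f / sx, 0.0, cx],
              [0.0, f / sy, cy],
              [0.0, 0.0, 1.0]])

def project(Xc):
    """Eq. (2): map a 3-D camera-frame point to pixel coordinates."""
    uvw = A @ Xc
    return uvw[:2] / uvw[2]

def line_of_sight(uv):
    """Eq. (3): unit direction of the viewing ray through pixel uv."""
    d = np.linalg.inv(A) @ np.array([uv[0], uv[1], 1.0])
    return d / np.linalg.norm(d)

def triangulate(uv, plane_n, plane_d):
    """Intersect the viewing ray through uv with the laser plane
    n . x = d (Hessian normal form), both given in the camera frame."""
    d = line_of_sight(uv)
    s = plane_d / (plane_n @ d)      # scaling factor along the ray
    return s * d

# Round trip: a point on the plane z = 500 mm projects and triangulates back.
X = np.array([30.0, -20.0, 500.0])
uv = project(X)
print(triangulate(uv, np.array([0.0, 0.0, 1.0]), 500.0))
```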
Although physically not correct, this basic mathematical pinhole camera definition (with the sensor in front of the projection center) is commonly used, as it simplifies the description of the mapping of a 3-D point onto the 2-D sensor (the image is not upside down and no negative signs are needed; e.g., according to Ref. 19, p. 370 ff.). The camera's line-of-sight is represented by a line in the camera coordinate frame; the laser plane is given in the Hessian normal form. If a laser line is projected onto the measurement object, the line is deformed subject to the object's geometry. This line deformation is captured by the camera. A specific laser line dot activates a specific camera pixel. If the camera's line-of-sight through this specific pixel is constructed and intersected with the laser plane, the 3-D information of the laser line point can be reconstructed. As the laser plane is given in the simulation in the coordinate frame of the laser, it first has to be transformed into the coordinate frame of the camera by an appropriate transformation matrix before line and plane can be intersected.

4. Ray Tracing in Inhomogeneous Optical Media

In this section, theoretical background information on the used ray tracing algorithm is given. The derived iterative optimization routine is presented in a step-by-step pseudocode format in order to enhance comprehensibility.

4.1. Theoretical Background

The following Eqs. (5)–(7) are taken from the ray tracing software user guide.14 A derivation of the presented equations is beyond the scope of this paper. Nevertheless, the equations are cited to provide physical background information for inhomogeneous ray tracing. More detailed information can be found in Born et al.,20 Saleh and Teich,21 and Krueger.22 The ray tracing algorithm in Comsol is deduced from the principles of wave optics.
Basic assumptions are that the electromagnetic ray is observed at locations far from the light source and that its amplitude changes very slowly with time and place. The electromagnetic field can therefore be approximated locally by plane waves. The mathematical description of the amplitude is neglected. In this case, the rays' phase is nearly linearly dependent on time and position according to

$$\Psi = \mathbf{k} \cdot \mathbf{q} - \omega t + \alpha, \tag{5}$$

with phase $\Psi$, position vector $\mathbf{q}$, wave vector $\mathbf{k}$, time $t$, angular frequency $\omega$, and $\alpha$ as an arbitrary phase shift.22 Equation (5) allows the derivation of six coupled first-order ordinary differential equations, Eqs. (6) and (7), given in vector notation:

$$\frac{\mathrm{d}\mathbf{q}}{\mathrm{d}t} = \frac{\partial \omega}{\partial \mathbf{k}}, \tag{6}$$

$$\frac{\mathrm{d}\mathbf{k}}{\mathrm{d}t} = -\frac{\partial \omega}{\partial \mathbf{q}}. \tag{7}$$

The equations need to be solved with respect to $\mathbf{q}$ and $\mathbf{k}$ to calculate ray trajectories in inhomogeneous media. Fermat's principle can be gained from these equations using the so-called eikonal.21 Fermat's principle is defined based on the path of light, but not on the path direction. This means that the light path can be simulated in either way, as long as it passes the same two points: from object to camera or inversely from camera to object. This so-called inverse principle13 is helpful for the iterative ray tracing optimization: the starting point for ray tracing is the camera and not the laser light point on the measurement object.

4.2. Measurement Simulation with Iterative Ray Tracing Optimization

An iterative approximation of the light path from the laser incidence location on the cylinder surface to the camera needs to be implemented in order to approximate the corresponding camera pixel location onto which the laser dot is projected. Alternatively, referring to the inverse principle, the camera can be the starting point for the iterative approximation, as light takes the same way from object point to camera as from camera to object point [see Eq. (1)]. Provided the inhomogeneous RIF around the cylinder has been derived, the measurement simulation for a single data point can be summed up by the subsequent steps, referring to the parameter labeling in Fig. 4.
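Under the geometric-optics assumptions of Sec. 4.1, a trajectory can also be stepped with the equivalent ray equation d/ds(n dr/ds) = grad n; a minimal forward-Euler sketch with a hypothetical stratified n-field (a stand-in for the simulated RIF, not the Comsol solver):

```python
import numpy as np

def trace_ray(r0, d0, n_field, grad_n, ds=1e-4, steps=2000):
    """Integrate the ray equation d/ds (n dr/ds) = grad n (forward Euler).

    r0, d0  : start point and unit start direction
    n_field : callable n(r); grad_n : callable grad n(r)
    """
    r = np.array(r0, dtype=float)
    t = np.array(d0, dtype=float) * n_field(r0)   # t = n * dr/ds
    path = [r.copy()]
    for _ in range(steps):
        r = r + ds * t / n_field(r)               # dr/ds = t / n
        t = t + ds * grad_n(r)                    # dt/ds = grad n
        path.append(r.copy())
    return np.array(path)

# Hypothetical stratified field: n increases with height z. (In the real
# RIF, n decreases toward the hot surface; the sign only flips the
# bending direction.)
k = 1e-3
n = lambda r: 1.000272 + k * r[2]
gn = lambda r: np.array([0.0, 0.0, k])

path = trace_ray([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], n, gn)
print(path[-1])   # the ray is bent toward larger n
```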
The main challenge arises from step 2, in which the projected laser dot location in terms of the pixel location is approximated. As an idealized pinhole model is hypothesized, light mapped onto the 2-D camera sensor is forced to pass through the projection center. In a first step, the start pixel is calculated by linearly mapping the laser incidence location onto the camera sensor with the help of Eqs. (2) and (4). Due to the pinhole assumption, a directional vector can be constructed through the projection center and this start pixel, leading to the light discharge direction for ray tracing in the first iteration step. After the initial ray tracing simulation (step 2.3), the distance between the actual light incidence location and the target location is calculated in a helper coordinate system (step 2.4). There is no need to compare the depth values, as depth information is lost when imaging. If the calculated distance is the smallest distance value so far, both the best pixel location and the best distance are updated with the actual step values. Provided the maximum number of iterations is not reached and the distance is not smaller than the maximum deviation radius allowed, the new pixel location is determined according to step 2.5 with an iterative pixel step size, which is adapted in dependence on the actual iteration step and the width and height of the pixel search grid. To prevent an erroneous mapping of undercut points onto the camera sensor, the depth distance is finally checked in step 3.1 by calculating the Euclidean norm. If it is bigger than an initially defined threshold value, the corresponding camera point is not used for triangulation. In an experimental (not simulated) triangulation measurement, the limited lateral resolution of the camera sensor restricts the exact mapping of a 3-D world point onto the sensor. Furthermore, a light section measurement depends on the accurate localization of the laser's center line in the camera image, e.g., by fitting Gaussian distribution curves to the laser line's intensity profiles. This approach permits subpixel accuracy.
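The subpixel localization of the laser line center mentioned above can be sketched with a peak fit over a pixel intensity profile; for a Gaussian profile, fitting a parabola to the log-intensities of the peak pixel and its two neighbors is equivalent (a common simplification, not necessarily the routine used in the simulation):

```python
import numpy as np

def subpixel_peak(profile):
    """Estimate the laser line center with subpixel accuracy.

    Fits a parabola to the log-intensities of the maximum pixel and
    its two neighbors; exact if the profile is Gaussian.
    """
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                       # no neighbors to fit
    la, lb, lc = np.log(profile[i - 1:i + 2])
    return i + 0.5 * (la - lc) / (la - 2.0 * lb + lc)

# Synthetic Gaussian laser profile centered at pixel 12.3.
x = np.arange(25)
profile = np.exp(-0.5 * ((x - 12.3) / 1.5) ** 2)
print(subpixel_peak(profile))
```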
To take this discrete and virtual increase in pixel number into account, the pixel values are rounded to a virtual pixel size of 0.25 pixel. In accordance with the lateral resolution of the camera sensor (compare to Table 1), the maximum deviation radius is set accordingly. A stricter threshold is not necessary, as even the assumed subpixel accuracy of 0.25 pixel only allows a mapping of correspondingly small areas onto the camera sensor.

5. Results

The results in this section are based on the boundary conditions according to Table 1 and the geometry setup depicted in Fig. 1. First, detailed information on a triangulation measurement from above is presented (to ensure full manipulation of the laser light path by the inhomogeneous RIF). The gained results are analyzed in Sec. 5.2. Results for different cylinder temperatures and sensor poses are presented in Sec. 5.3.

5.1. Cylinder Geometry Measurement by Light-Section Method

The steel cylinder temperature is set to 1250°C. The triangulation sensor is not rotated around the cylinder axis (rotation angle 0 deg) to realize a measurement from above under full influence of the RIF (see Fig. 1, right side). Nine discrete light paths from laser to camera are simulated, differing only in their discharge location (equidistantly arranged) but with the same directional vectors (laser with telecentric lens). The parameter nomenclature is given in Fig. 5, and the corresponding simulation results are depicted in Figs. 6 and 7. The laser incidence location on the cylinder for a homogeneous RIF (cold cylinder) and the laser incidence location for an inhomogeneous RIF (hot cylinder) are given in the world coordinate frame. The corresponding locations on the camera sensor are given in pixel in the sensor coordinate frame. Normally, the pixel location on the sensor is defined by a different letter; this nomenclature is deviated from in this section to ensure a clear distinction of parameters and to avoid the introduction of further indexes.
The measured 3-D points for the cold and the hot cylinder scenario are depicted in Fig. 5. The results in Fig. 7 are given as distances between two points, whereby not only the differences between the scalar entries are presented but also the 2-D or 3-D Euclidean norms of the distances between two points. In an experimental triangulation measurement, a camera sensor always operates as a low-pass filter, as information is lost due to discretization. The difference between the actual laser point location and the location measured by triangulation in a homogeneous scenario (cold cylinder) is relatively small, as subpixel accuracy is assumed for the detection of the points and no further light deflection is induced by the surrounding air in the cold scenario. This is indicated in the measurement outline in Fig. 5 by depicting the actual and the measured point in the same location. The term quasi-continuous indicates the fact that a result is given not rounded to the camera sensor's discretization limitation of 0.25 pixel but according to the output of the ray tracing optimization routine. The iteration step defines the optimization routine's variable pixel step size (compare to step 2.5 in Sec. 4.2). Therefore, depending on the routine's stop criteria, the sensor pixel onto which the laser dot is projected is determined more accurately than given by the sensor's subpixel accuracy (0.25 pixel). As this more accurate pixel location is still not continuous (the iteration stops at a discrete value), the nonrounded results are called quasi-continuous. As the RIF-induced deflection effects are easier to interpret without sensor discretization, the simulated data in Fig. 7 are given for quasi-continuous sensor conditions. Discrete results (rounded to 0.25 pixel) are discussed later. The quality of the iterative ray tracing optimization according to Sec. 4.2 is checked by analysis of Fig. 6 against the maximum deviation radius set above.
Therefore, the difference between the actual laser incidence location and the optimized location in the helper coordinate frame has to be smaller than this threshold value. This is the case, as all values stay below the threshold. The simulated curve in Fig. 7(a) depicts the displacement of the laser incidence location on the cylinder surface in the world coordinate frame. The curve reveals the effect of the cylinder curvature on the measurement: with decreasing laser discharge values, the coordinate differences continuously get smaller (their absolute values increase). This is due to the changing surface gradient of the cylinder when moving away from the origin of the coordinate frame [see Fig. 5(b); the cylinder's "shoulder slope" gets steeper], and the corresponding axis value increases [see the definition of the axes in Fig. 5(a)]. This geometry effect due to the cylinder curvature is absent or only weak for the central laser ray. In this case, the laser's start vector (for ray tracing) points directly at the origin of the coordinate frame and the surface gradient is 0. Therefore, the cylinder's curvature does not "boost" small RIF-induced deflection values if a light ray hits the surface in the vicinity of the origin of the coordinate frame. The simulated data in Fig. 7(b) give information on the difference between the actual laser incidence location on the hot cylinder surface and the measured point. A thought experiment helps to reveal the significance of these data: if the light deflection from laser to object happened only inside the laser plane, and no further deflection occurred from object to camera, the measured point would not differ from the real point. This means that in the very unlikely event of a purely laser-plane-bound light deflection, and if no deflection from object to camera were induced at all, the correct 3-D point would be triangulated. Figure 7(b) proves that this is not the case for the simulated triangulation measurement.
Furthermore, the large Euclidean distances between the measured 3-D point and the real 3-D point demonstrate clearly that the measurement deviation due to an inhomogeneous RIF cannot be neglected. The measurement result for homogeneous conditions shows only a small maximum deviation [compare to Fig. 7(e)]. These results are given rounded to the camera sensor's discretization limitation of 0.25 pixel, which maps a correspondingly small area onto a quarter of a pixel. The maximum discretization error by rounding is therefore 2.75 pixel, which matches the maximum value in Fig. 7(e). As no further light deflection is induced by the surrounding air in the simulation scenario with the cold cylinder, the absolute distance between the actual and the measured point is very small and can be explained exclusively by sensor discretization. The data in Fig. 7(b) give no information on whether the measured 3-D point accidentally lies on the cylinder surface. In order to verify this, the closest distance between the measured point and the numerical cylinder surface has to be calculated. The additional merit of such an analysis is limited, as it contributes little to understanding the deflection in the RIF. Within the scope of this work, the points measured in the cold cylinder state are used as reference data for the evaluation of the points measured in the hot state. The difference between these points is depicted in Fig. 7(c). The measurement results are correlated to the laser point displacement on the camera sensor [compare the norms in Figs. 7(c) and 7(d)], as a specific laser point location on the sensor is used to derive the camera's line-of-sight in order to reconstruct the point's 3-D data via line-plane intersection (see Sec. 3.3). Oddly enough, an increasing laser point displacement on the cylinder surface [see Fig. 7(a)] does not necessarily result in increasing differences between the measured points [see Fig. 7(c)]. The difference values reach a maximum in region (I).
The difference decreases until it reaches the lowest value in region (II), only to increase again in region (III) [compare to Figs. 7(c) and 7(d)]. A detailed analysis of the simulated norms and is given in the next subsection to explain this apparent contradiction. 5.2.Analysis: Superimposition of Light DeflectionTo allow an interpretation of the difference according to Fig. 7(c), the laser point displacement on the camera sensor in Fig. 7(d) has to be analyzed. The laser light displacement is a superimposition of the deflection in two different paths: the deflection from laser to cylinder surface and the deflection from cylinder surface to camera. If both paths’ deflection effects are opposed to each other, the resulting pixel displacement on the sensor is reduced. Therefore, the decrease in region (II) in Fig. 7(d) can be explained. The basic procedure to separate the paths’ deflection effects is depicted in Figs. 8(a)–8(c). An information in advance to avoid irritation: the deflection of point is not necessarily restricted to the laser plane [as depicted in Fig. 8(a), compare to red, dashed line]. This is for demonstration purposes only. To gain the laser light displacement for the path “laser to object,” both (blue, solid line) and (red, dashed line) are linearly projected onto the camera sensor, resulting in two corresponding pixel locations (blue, solid line) and (red, solid line) [compare to Fig. 8(a)]. By doing this, the deflection induced in the path “object to camera” is not taken into account. This deflection effect is derived according to image Fig. 8(b): The linear projection of onto the camera sensor in location is compared to the nonlinear, RIF-affected projection in location (see red, dashed line from to camera). If now both pixel displacement values are superimposed [see Fig. 8(c)], the resulting displacement is . The superimposition must lead to the results depicted in Fig. 7(d). A special scenario is depicted in Fig. 
8(c): the resulting pixel displacement from laser to camera can be close to zero, leading a small difference between and , even though the light deflection in the two different paths is non-neglectable [see Fig. 8(c), ]. The depicted approach in Fig. 8 is nevertheless also valid for points and , which are reconstructed in different locations. The suggested routine has been realized for the pixel displacement in Fig. 7(d) and is outlined in Fig. 9. Not only the laser point displacement on the camera sensor is given for both light paths [see Figs. 9(a)–9(c)] but also a graphical interpretation of the displacement on the sensor for the laser light path corresponding to [see Figs. 9(d)–9(f)]. The displacement values [e.g., or ] are marked in Figs. 9(a)–9(c). First of all, the resulting pixel displacement by superimposition in Fig. 9(c) is the same as in Fig. 7(d), the suggested approach is therefore legitimate. Moreover, especially the progression of the pixel displacement in the -direction indicates that the induced light deflection values are opposite to each other: is decreasing to a pixel value of approximately for [Fig. 9(a)], whereas increases to a value of more than 2 pixel [Fig. 9(b)]. The resulting value varies around a value of [Fig. 9(c)]. The value for in Fig. 9(c) is therefore not contradictory: the consideration of both light paths results in a reduction of pixel displacement [see also graphical interpretation in Figs. 9(d)–9(f)], which again leads to a reduced difference for the gained values in region (II) in Fig. 7(d). This special scenario is exemplary depicted in Fig. 8(c): the resulting 3-D point (hot cylinder) is depicted in the same location as point (cold cylinder). As the resulting pixel displacement is again increasing for light rays with , also the distance between and rises. To gain a deeper understanding of an exemplary light refraction scenario, the rounded (discrete) interpretation of Fig. 
7(c) is analyzed for the laser ray with a discharge value of . The graphical result of this analysis is given in Fig. 10(a), based on the rounded pixel displacement [to a sensor subpixel accuracy of 0.25 pixel, Fig. 10(b)]. First of all, there is only a slight difference between the graphs in Figs. 7(d) and 10(b). The difference would be larger if the subpixel accuracy were further limited, for instance to a value of 0.5 pixel. The laser light path for a -discharge value of 0 mm is only marginally deflected in the -direction due to the symmetry of the RIF to the -plane (in ). The cylinder curvature has only a small influence on the resulting incidence location on the cylinder surface. Therefore, only the -plane is depicted in the graphical analysis in Fig. 10(a). When the laser light enters the inhomogeneous RIF from the left side, the ray is deflected downward toward denser air layers, where greater refractive index values are present [see vertical black lines in Fig. 10(a); the lines separate areas with different refractive index values]. This leads to the dashed light path. The closer the ray moves toward the cylinder, the more a horizontal expansion and variation of the refractive index predominates (see horizontal black lines). Therefore, the ray is refracted away from the cylinder, toward the denser surrounding air. The ray reaches the cylinder in location . As the inhomogeneous RIF is basically symmetric to the -plane of the world coordinate frame (see Fig. 1), the path from cylinder to camera is flipped vertically in the graphical interpretation according to Fig. 10(a). Based on this simplification, the triangulated measurement point is reconstructed above the real cylinder surface. This matches the obtained result for and [see Fig. 10(b)]. The interpretation in Fig. 10(a) explains the simulation result in Fig. 10(b) for the laser ray with a discharge value of . It also reveals the complexity of light refraction in an inhomogeneous RIF.
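The layer-wise refraction described above can be sketched by applying Snell's law at discrete boundaries of a stratified refractive index field. This is a minimal illustration only: the layer index values and the incidence angle below are assumed for demonstration and are not taken from the simulated RIF.

```python
import math

def refract_angle(n1, n2, theta1):
    """Angle to the layer normal after crossing from a medium with
    refractive index n1 into a medium with index n2 (Snell's law)."""
    return math.asin(n1 * math.sin(theta1) / n2)

# Illustrative layer indices, ordered from cold ambient air toward the
# hot cylinder: the refractive index drops as the air gets hotter.
layers = [1.000271, 1.000180, 1.000120, 1.000080]

theta = math.radians(80.0)  # grazing incidence onto the layer stack
for n1, n2 in zip(layers, layers[1:]):
    theta = refract_angle(n1, n2, theta)

# Crossing into optically thinner layers increases the angle to the
# normal, i.e., the ray is bent back toward the denser, colder air.
print(math.degrees(theta) > 80.0)  # True
```

Tracing many such small steps along a ray path is the discrete analogue of integrating the ray path that follows from Fermat's principle in an inhomogeneous medium.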
5.3. Comparison: Different Cylinder Temperatures and Sensor Poses

The results section closes with a comparison of different measurement scenarios. To this end, the steel cylinder temperature (900°C, 1100°C, 1250°C) and the triangulation sensor pose (0 deg, 15 deg, 30 deg) are varied. In Figs. 11(a)–11(c), different data curves for a measurement from above with are depicted, revealing the influence of a temperature increase: the pixel displacement on the camera sensor and the measurement difference () indicate an increase due to temperature only for the measurement points corresponding to the laser rays with -discharge values from 0 to . This is due to the expansion of the inhomogeneous RIF (see Fig. 1, right side with cylinder cross section): as the RIF variation develops its full effect directly above the hot cylinder due to the convective density flow, the spatial region in which a cylinder temperature increase takes effect is widely expanded. Smaller light ray -discharge values do not lead to differences (), except for . This might be explained by the analysis in Sec. 5.2: the resulting light deflection is reduced due to the superimposition in the path from laser to object and from object to camera. This effect is based on the symmetry of the RIF and is not affected by a temperature increase, as the symmetry of the RIF does not change. The discretization effect can be evaluated by comparing Figs. 10(a) and 10(b). Due to the pixel rounding to a value of 0.25 pixel (subpixel accuracy of the camera sensor), the RIF-induced deflection effect is "discretized" as well. Curve (a.2) shows more abrupt steps than curve (a.1). The triangulation sensor's resolution therefore affects the "reproduction" of the deflection effect in an inhomogeneous RIF. The effect of a sensor rotation around the cylinder axis (see Fig. 1, right side) on the laser point displacement is depicted in Fig. 11(d) for a cylinder temperature of 900°C. The displacement is reduced with increasing angle .
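The dependence of the refractive index magnitude on the air temperature can be sketched with the Gladstone-Dale relation (Ref. 6), n - 1 = K·ρ, combined with the ideal gas law for the air density. The constant and the ambient conditions below are approximate textbook values, not the parameters of the heat transfer simulation.

```python
# Gladstone-Dale sketch: n - 1 = K * rho, with the air density rho from
# the ideal gas law. K and the conditions are approximate textbook values.

GLADSTONE_DALE_K = 2.26e-4  # m^3/kg, approximate value for air (visible light)
R_SPECIFIC_AIR = 287.05     # specific gas constant of dry air, J/(kg K)

def refractive_index_air(temperature_k, pressure_pa=101325.0):
    """Approximate refractive index of dry air at a given temperature."""
    rho = pressure_pa / (R_SPECIFIC_AIR * temperature_k)  # kg/m^3
    return 1.0 + GLADSTONE_DALE_K * rho

# Ambient air at 20 deg C vs. air near a 1250 deg C cylinder surface:
n_cold = refractive_index_air(293.15)   # ~1.00027
n_hot = refractive_index_air(1523.15)   # ~1.00005
print(n_cold - n_hot)  # the index contrast that bends the light rays
```

An index contrast on the order of only a few 1e-4 is thus sufficient to produce the pixel displacements analyzed above, and the contrast grows with the cylinder temperature.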
The simulated curves therefore indicate the obvious: if a measurement is not performed directly through the greatest expansion and variation of the inhomogeneous RIF, but rather sideways through less expanded regions, the pixel displacement is reduced and the corresponding measurement is more trustworthy. The measurement difference () for the rotated sensor is not depicted: the Euclidean distance is below for all laser rays for and below for .

6. Summary and Conclusion

In this paper, a virtual triangulation setup based on the light section method is presented, using a matrix camera with an entocentric lens as detection unit and a telecentric laser line generator as illumination unit. Geometry measurements of a cylinder in different temperature states are simulated and compared in order to analyze the effect of an inhomogeneous RIF on triangulated measurement data. To this end, detailed information is given on the simulation design, comprising the numerical calculation of the inhomogeneous RIF via heat transfer simulations, the modeling of the virtual sensor (camera pinhole model), and the reconstruction of 3-D points via triangulation (Sec. 3). In Sec. 4, theoretical background is given on the applied ray tracer, along with an extensive pseudocode description of the implemented iterative optimization routine used to reproduce a point projection onto a pinhole camera while taking light refraction into account. Simulation results, using the derived virtual triangulation routine, are presented and discussed in detail in Sec. 5. The analysis of the measurement differences for homogeneous and inhomogeneous optical conditions leads to the following conclusions: the measurement object's geometry directly influences the laser point displacement on the object's surface and, therefore, the RIF-induced light deflection effects [compare to Fig. 7(a)].
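The camera pinhole model referred to in the summary can be condensed to a single matrix product. The intrinsic parameters below are assumed for illustration and are not the calibrated values of the virtual camera.

```python
import numpy as np

# Minimal pinhole projection: a 3-D point X in the camera frame maps to
# pixel coordinates via u ~ K X (homogeneous coordinates, R = I, t = 0).

K = np.array([[1200.0,    0.0, 640.0],   # fx, skew, cx (assumed values)
              [   0.0, 1200.0, 480.0],   # fy, cy
              [   0.0,    0.0,   1.0]])

def project(point_cam):
    """Project a 3-D point (camera frame, z > 0) to pixel coordinates."""
    uvw = K @ np.asarray(point_cam, dtype=float)
    return uvw[:2] / uvw[2]

print(project([0.05, -0.02, 0.5]))  # -> [760. 432.]
```

Light refraction breaks the straight projection ray assumed by this model, which is why the iterative optimization routine of Sec. 4 is needed to find the pixel actually reached by the curved ray.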
Furthermore, the absolute light deflection, as seen by the measurement camera in terms of pixel displacement, is a superimposition of the deflection effects in the paths from illumination unit to object and from object to detection unit (Fig. 9). These path deflections can be opposed in their effect, resulting in a much smaller camera pixel displacement than expected. Moreover, already the one-way light path manipulation from camera to object shows the complexity of light refraction in a heat-induced convective density flow: the refractive field's shape, extension, and magnitude can result in a refraction toward the hot object as well as away from the object on the same light path [Fig. 10(a)], complicating the interpretation of the obtained result. By changing the triangulation sensor's pose in relation to the measurement object, the camera pixel displacement can be reduced [Fig. 11(d)], resulting in more accurate triangulation results. Lateral measurements or measurements from underneath the object are therefore an alternative to reduce light refraction effects in an experimental setup, as the refractive field's extension and deflection effect is limited there. Unfortunately, this approach is not sufficient if 360 deg geometry data are required at the same measurement time in order to capture a full shrinkage process of wrought-hot, hybrid workpieces. A possible solution to allow high-precision geometry measurements of hot objects is the use of actuated or computer-assisted routines that either guarantee a rectilinear propagation of light despite the object's heat or allow a subsequent correction of RIF-disturbed measurements. If compensation algorithms for a subsequent geometry data correction are to be derived from simulation results, all parameters of the real measurement setup have to be considered in the simulation (e.g., sensor resolution and pose, triangulation angle, object geometry and temperature).
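The superimposition of the two path deflections amounts to a plain vector sum on the sensor. The displacement values below are hypothetical, chosen so that the components in one sensor direction nearly cancel, as observed in region (II).

```python
# Sketch: the net laser point displacement on the camera sensor is the
# sum of the RIF-induced displacements of the two light paths.
# The numbers are hypothetical and only illustrate the cancellation.

def net_pixel_displacement(d_laser_to_object, d_object_to_camera):
    """Superimpose the (x, y) pixel displacements of both light paths."""
    return tuple(a + b for a, b in zip(d_laser_to_object, d_object_to_camera))

d_lo = (-1.75, 0.25)  # laser -> object, projected onto the sensor (pixels)
d_oc = (2.25, 0.25)   # object -> camera (pixels)

# Opposed x-components leave only a small net displacement:
print(net_pixel_displacement(d_lo, d_oc))  # -> (0.5, 0.5)
```

A near-zero net displacement therefore does not imply rectilinear light propagation; both one-way deflections can still be substantial.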
In particular, the dynamics of the heat-induced refractive field have to be taken into account, as an areal triangulation measurement by a structured light system requires the acquisition of an image sequence over time.

7. Forthcoming Work

The presented simulation model will be developed into a virtual fringe projection system to allow virtual areal measurements. To this end, the telecentric laser line generator is replaced by a projector. As a projector can be considered an inverse pinhole camera,23 the same model implementation is used for the detection unit (camera) and the illumination unit (projector). The complete virtual triangulation setup is defined and visualized in a MATLAB24 script (e.g., the sensor pose and the camera's focal length, compare to Fig. 12) before the simulation boundary conditions are passed to the simulation platform Comsol. The model will not be able to virtually reproduce the projection of an image sequence to solve the projector pixel to camera pixel correspondence problem. Fortunately, this is not necessary, as the correspondence problem is solved via the presented optimization routine. Further work will include the implementation of a camera lens distortion model, the investigation of the RIF dynamics, and the parallelization of ray tracing routines in order to speed up the whole virtual triangulation process. A faster and denser reconstruction of surface data by virtual triangulation would allow the evaluation of the standard deviation when fitting a cylinder into RIF-affected and nonaffected measurement data. This would enable a more general analysis of heat-induced light deflection and its effect on triangulation measurements.

Acknowledgments

We thank the Deutsche Forschungsgemeinschaft (DFG) for funding subproject C5 "Multiscale Geometry Inspection of Joining Zones" as part of the Collaborative Research Centre (CRC) 1153 Process chain to produce hybrid high performance components by Tailored Forming.

References

M. Rahlves and J.
Seewig, Optisches Messen technischer Oberflächen, Beuth Verlag, Berlin (2009).
GOM GmbH, "Sheet metal forming—3D metrology in industrial sheet metal forming processes," https://www.gom.com/industries/sheet-metal-forming/sheet-metal-forming-download-brochure.html (2017).
S. Matthias et al., "Fringe projection profilometry using rigid and flexible endoscopes," Tech. Mess. 84(2), 123–129 (2017), https://doi.org/10.1515/teme-2016-0054.
J. Beyerer, F. Puente León, and C. Frese, Automatische Sichtprüfung: Grundlagen, Methoden und Praxis der Bildgewinnung und Bildauswertung, Springer Vieweg, Berlin, Heidelberg (2012).
P. E. Ciddor, "Refractive index of air: new equations for the visible and near infrared," Appl. Opt. 35, 1566–1573 (1996), https://doi.org/10.1364/AO.35.001566.
T. Dale and J. Gladstone, "On the influence of temperature on the refraction of light," Phil. Trans. R. Soc. Lond. 148, 887–894 (1858), https://doi.org/10.1098/rstl.1858.0036.
R. Beermann et al., "Background oriented schlieren measurement of the refractive index field of air induced by a hot, cylindrical measurement object," Appl. Opt. 56, 4168–4179 (2017), https://doi.org/10.1364/AO.56.004168.
T. Kreis et al., "Noncontacting measurement of distortion by digital holographic interferometry," Materialwiss. Werkstofftech. 37(1), 76–80 (2006), https://doi.org/10.1002/(ISSN)1521-4052.
H. Gafsi and G. Goch, "Calibration routine for in-process roundness measurements of steel rings during heat treatment," Proc. SPIE 8082, 808231 (2011), https://doi.org/10.1117/12.889515.
W. Liu et al., "Fast dimensional measurement method and experiment of the forgings under high temperature," J. Mater. Process. Technol. 211(2), 237–244 (2011), https://doi.org/10.1016/j.jmatprotec.2010.09.015.
A. Zatočilová, D. Paloušek, and J. Brandejs, "Image-based measurement of the dimensions and of the axis straightness of hot forgings," Meas. J. Int. Meas. Confed. 94, 254–264 (2016), https://doi.org/10.1016/j.measurement.2016.07.066.
A. Ghiotti et al., "Enhancing the accuracy of high-speed laser triangulation measurement of freeform parts at elevated temperature," CIRP Ann. 64(1), 499–502 (2015), https://doi.org/10.1016/j.cirp.2015.04.012.
E. Hecht, Optik, 6th ed., De Gruyter Oldenbourg, Munich (2014).
Comsol Multiphysics 5.1, "Heat transfer and ray optics module," https://www.comsol.de/products (accessed September 2018).
R. Beermann et al., "Light section measurement to quantify the accuracy loss induced by laser light deflection in an inhomogeneous refractive index field," Proc. SPIE 10329, 103292T (2017), https://doi.org/10.1117/12.2269724.
R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, Cambridge (2004).
Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000), https://doi.org/10.1109/34.888718.
B. A. Abu-Nabah, A. O. ElSoussi, and A. E. K. Al Alami, "Simple laser vision sensor calibration for surface profiling applications," Opt. Lasers Eng. 84, 51–61 (2016), https://doi.org/10.1016/j.optlaseng.2016.03.024.
G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, 1st ed., O'Reilly & Associates, Sebastopol, California (2008).
M. Born et al., Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed., Cambridge University Press, Cambridge (1999).
B. E. A. Saleh and M. C. Teich, Grundlagen der Photonik, Wiley-VCH Verlag, Berlin (2008).
D. A. Krueger, "Spatial varying index of refraction: an open ended undergraduate topic," Am. J. Phys. 48, 183–188 (1980), https://doi.org/10.1119/1.12169.
S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45, 083601 (2006), https://doi.org/10.1117/1.2336196.
The MathWorks, Inc., "MATLAB 2015b," https://de.mathworks.com/products/new_products/release2015b.html (accessed September 2018).
Biography

Rüdiger Beermann is a research associate at the Institute of Measurement and Automatic Control at the Leibniz Universität Hannover. He received his diploma in mechanical engineering from the Leibniz Universität Hannover in 2013 and his state examination as a teacher for math and metal technology for vocational schools in 2015. His current research interests include the development of fringe projection systems for high-temperature workpieces and thermal-optical simulations.

Lorenz Quentin is a research associate at the Institute of Measurement and Automatic Control at the Leibniz Universität Hannover. He obtained his diploma in mechanical engineering in 2016. His current research interests include the development of fringe projection systems for high-temperature workpieces.

Eduard Reithmeier is a professor at the Leibniz Universität Hannover and head of the Institute of Measurement and Automatic Control. He received his diplomas in mechanical engineering and in math in 1983 and 1985, respectively, and his doctorate in mechanical engineering from the Technische Universität München in 1989. His research focuses on system theory and control engineering.

Markus Kästner is the head of the Production Metrology Research Group at the Institute of Measurement and Automatic Control at the Leibniz Universität Hannover. He received his PhD in mechanical engineering in 2008 and his postdoctoral lecturing qualification from the Leibniz Universität Hannover in 2016. His current research interests are optical metrology from macro- to nanoscale and optical simulations.