Open Access
30 September 2016
Technique for measuring the three-dimensional shapes of telescope mirrors
Zhenzhou Wang
Abstract
Telescope mirrors determine the imaging quality and the observation ability of telescopes. Unfortunately, manufacturing highly accurate mirrors remains a bottleneck problem in space optics. One primary cause is the lack of a technique to robustly measure the three-dimensional (3-D) shapes of mirrors for inverse engineering. After centuries of study, researchers have developed different techniques for testing the quality of telescope mirrors and proposed different methods for measuring the 3-D shapes of mirrors. Among them, interferometers have become popular for evaluating the surface errors of manufactured mirrors. However, interferometers cannot measure some important mirror parameters, e.g., paraxial radius, geometry dimension, and eccentric errors, directly and accurately, although these parameters are essential for mirror manufacturing. For the methods that can measure these parameters, the measurement accuracies are far from satisfactory. We present a technique for robust measurement of the 3-D shapes of mirrors with single-shot projection. Experimental results show that this technique is significantly more robust than state-of-the-art techniques, which makes it feasible for commercial devices to measure the shapes of mirrors quantitatively and robustly.

1.

Introduction

In the early 17th century, the first astronomical telescope, which used a refractive lens, was built by the great scientist Galileo. In 1789, Frederick William Herschel established the 1.22-m reflective telescope that used specular mirrors. In 1948, the famous Hale telescope, with a 200-in. reflective mirror, was established near San Diego. In 1976, a larger telescope with a 6-m diameter mirror and a 25-m length was built in Russia. Later on, more powerful telescopes with larger sizes were established, e.g., the Giant Magellan Telescope (GMT), Thirty Meter Telescope, Hubble Space Telescope, James Webb Space Telescope (JWST), and European Extremely Large Telescope (EELT), most of which join many small mirror segments. Among them, the JWST and EELT are three-mirror anastigmats, built with three curved mirrors to minimize optical aberrations and achieve a wide field of view. The advantages of the three-mirror anastigmatic design make it popular for military and civilian space observation. For example, the three-mirror anastigmatic Korsch telescope was used in both the Deimos-2 and DubaiSat-2 Earth observation satellites. From this brief overview of the development history of astronomical telescopes, it can be seen that most of the designed telescopes used mirrors.

It is known that the mirrors used by telescopes determine their imaging quality and observing ability. However, it remains a bottleneck problem to manufacture sufficiently accurate telescope mirrors for applications in which high resolution and high imaging quality are required. For instance, the accuracy of state-of-the-art optics manufacturing technology cannot meet the technical requirements of the astronomical telescopes used in NASA projects, e.g., Astronomical Search for Origins, Structure and Evolution of the Universe, and Sun–Earth Connection. Research on new manufacturing techniques and technology in space optics is therefore urgent and important for the development of the next generation of telescopes. One important factor that reduces the imaging accuracy of a telescope is the mirror's manufacturing errors, defined as the differences between the practical parameters of the mirror and their theoretical values. These parameters include the paraxial radius, geometry dimension, eccentric errors, and surface errors. To improve the manufacturing accuracy, techniques for measuring the three-dimensional (3-D) shapes of mirrors become important because the manufacturing errors of mirrors can be computed directly after their shapes are known. Although many noncontact techniques1–10 have been developed to measure mirrors in the past decades, none of them is capable of measuring the shapes of mirrors with adequate accuracy.

The efforts to measure the mirror's 3-D shape and calculate the mirror's manufacturing errors can be traced back to the time when the telescope was invented. In 1858, Jean Foucault invented a method to measure telescope mirrors with the knife-edge test. Unfortunately, it can only measure spherical mirrors, while most telescope mirrors are aspheric instead of spherical. In addition, the Foucault method can only provide qualitative results instead of quantitative results. To yield quantitative results, it must be combined with other methods and requires great effort and considerable skill to make accurate judgments. In 1922, Vasco Ronchi invented a different technique to measure telescope mirrors, and it cannot provide quantitative results either. Based on the previous methods, several new methods were proposed later, e.g., the star test, the Ross null test, and the autocollimation test. However, none of them is satisfactory. In the late 1960s, the laser unequal path interferometer was invented to test spherical concave surfaces.11 In the early 1970s, Karl Bath invented an interferometer to test telescope mirrors with quantitative results, and it was recognized as the most informative method of that time. The Ceravolo interferometer is an alternative method with similar performance. Unfortunately, these three interferometers are only suited for testing spherical surfaces. Later on, an interferometer made by ZYGO used the peak-to-valley (PV) and root mean square (RMS) values to evaluate the quality of the mirror. However, the complexity of the optics makes PV/RMS incapable of adequately describing the mirror quality. Hence, power spectral density, slope RMS, the inverse Hartmann test, and the structure function (SF) are widely adopted in mirror quality evaluation.12–18 In Ref. 16, an inverse Hartmann test was proposed for surface form measurement in spherical coordinates with increased dynamic range and resolution. However, its accuracy was decreased compared to that in rectangular coordinates. In Ref. 17, a tutorial on SF analysis is presented and its advantages over Fourier-based methods are demonstrated. In Refs. 19 and 20, researchers at the University of Arizona used a laser tracker to obtain a direct shape measurement of the GMT mirror and achieved a measurement accuracy of 1/4 μm. In Ref. 21, a large deformable aspherical mirror was measured with sub-μm accuracy by the software configurable optical test system. Its measurement principle is the same as that of deflectometry and is based on the integration of the surface slope. In Ref. 22, ray tracing was used to measure the optical aberrations of aspherical lenses. All the above methods except for Ref. 22 can only give indirect quantitative results for the surface errors of the mirror. The paraxial radius, geometry dimensions, and eccentric errors of the mirror are outside their capabilities.

Although Ceyhan et al.22 claimed that their method could measure the profile of the surface, only a one-dimensional profile was given in their experiments. In Ref. 23, a method was proposed to measure the 3-D profiles of mirrors with analytic solutions, which has the potential to achieve zero-error accuracy provided that no noise and no lens distortion exist. Unfortunately, the system noise is significant and the radial lens distortion of the pico laser projector used is also severe. Consequently, the measurement accuracy of Ref. 23 is limited. To improve the measurement accuracy, the pico laser projector was replaced with a silicon nitride film (SNF) laser, which is free from lens distortion and achieves better measurement accuracy.24 However, the measurement accuracy is still not good enough because the noise significantly reduces it. In addition, the SNF laser does not strictly obey central projection, from which some fundamental equations of the system are derived. Consequently, the reconstruction result of the system is greatly distorted and the least deformation principle24 is required to recalibrate the system, which is very time-consuming.

In this paper, a pattern modeling method is proposed to remove the noise and radial lens distortion as a whole by decreasing the degrees of freedom of the multiple laser rays to one. The proposed method registers the captured pattern with a theoretical pattern that replaces the captured pattern during the reconstruction. Since the pattern modeling method requires the projected rays to obey the principle of central projection, it cannot be adopted by the system in Ref. 24 unless the SNF laser is made strictly central-projective. Hence, we choose the pico laser projector to generate the required laser rays in this research work. The mirror measurement system is designed according to the requirements of measuring telescope mirrors. Due to the larger sizes of the mirrors, larger diffusive planes and a beam splitter are used with carefully selected distances. With the proposed pattern modeling method incorporated into the designed system, this technique can achieve femtometer-level measurement accuracy (10−13 mm) for telescope mirrors, which is superior to most state-of-the-art methods.

This paper is organized as follows: Sec. 2 describes the working principle of the mirror measurement system and the method for calculating the projection center. In Sec. 3, the noise analysis is given and the fundamental pattern modeling method is proposed. Experimental results are given in Sec. 4. Section 5 concludes the paper.

2.

System and Projection Center

The working principle of the designed system is shown in Fig. 1. Three cameras c1, c2, and c3 are aimed at three planes p1, p2, and p3, respectively. The projection center of the projector is denoted as c, and its symmetry point relative to the horizontal plane, denoted as c′, can be treated as the projection center of a virtual pin-hole camera. The central projection from c′ intercepts the mirror plane and the diffusive plane in the same way that light goes through a pin-hole and images on the image plane. Hence, the mirror plane can be treated as the image plane of this virtual pin-hole camera. The horizontal plane p1 is the imaging plane of this virtual camera; it is defined as the reference plane z = 0 with its origin at o. The laser ray is projected onto the horizontal plane p1 and reflected by it onto a beam splitter. The beam splitter transmits half of the ray to intercept the plane p2 and reflects half of the ray to intercept the plane p3. During the system calibration, the poses of the three cameras are estimated. The equation of the diffusive plane p2 or p3 is computed from the calibrated camera c2 or c3 and the virtual pin-hole camera. With the equation of the diffusive plane known, the homography between the camera and the diffusive plane known, and the camera coordinates of the interception points known, the 3-D world coordinates of the interception points can be computed. After the points on p3 are computed, they are mapped to p4. Thus, two points intercepting the reflected ray are obtained and this ray can be uniquely determined with a closed-form solution. With the incident rays determined by the calibrated camera c1, the interception points of the projected pattern on the mirror surface are obtained with closed-form solutions as the intersections between the incident rays and their corresponding reflected rays.
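As an illustration of this last step, the following NumPy sketch computes the intersection of an incident ray with its corresponding reflected ray as the midpoint of their common perpendicular, which is a common closed-form choice when the two rays do not meet exactly because of residual noise; the function name and the midpoint convention are illustrative assumptions rather than the exact formulation used in the system.

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Approximate intersection of two 3-D lines as the midpoint of their
    common perpendicular (closed-form, least-squares sense).

    p1, p2 : points on the incident and reflected rays.
    d1, d2 : direction vectors of the two rays.
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for parameters t1, t2 that minimize |(p1 + t1*d1) - (p2 + t2*d2)|^2.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b          # zero only if the rays are parallel
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    q1 = p1 + t1 * d1              # closest point on the incident ray
    q2 = p2 + t2 * d2              # closest point on the reflected ray
    return 0.5 * (q1 + q2)         # estimated surface point
```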

Fig. 1. Working principle of the system.

A set of laser rays is projected from point c. The center c is computed as the common intersection point of all the projected laser rays, each of which is expressed as

Eq. (1)

\[
\frac{x_i - x_{0i}}{a_i} = \frac{y_i - y_{0i}}{b_i} = \frac{z_i - z_{0i}}{c_i} = t_i,
\]
where [a_i, b_i, c_i]^T are the direction coefficients of the i'th incident ray and (x_{0i}, y_{0i}, z_{0i}) is a known point on the ray.

To determine the coefficients of the incident rays, we intercept the laser rays with seven horizontal reference planes by elevating a metric lab jack by 1 mm each time. The 3-D coordinates of the intercepted points are computed by the calibrated camera c1 on the controlled reference planes z = −2, −1, 0, 1, 2, 3, and 4 mm. Figure 2 shows the calculated interception points on the different reference planes. With seven points known for the i'th incident ray, its coefficients [a_i, b_i, c_i]^T are computed by singular value decomposition.
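A minimal sketch of this line-fitting step follows, assuming the seven interception points of one ray are already available as 3-D coordinates; the SVD-based direction fit is the standard least-squares formulation, and the function name is an illustrative choice, not taken from the paper.

```python
import numpy as np

def fit_ray(points):
    """Fit a 3-D line to the interception points of one incident ray.

    points : (N, 3) array of 3-D coordinates on the reference planes
             z = -2, -1, 0, 1, 2, 3, 4 mm (N = 7 here).
    Returns a point on the ray (the centroid) and the unit direction
    [a_i, b_i, c_i]^T taken from the dominant right-singular vector.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # The first right-singular vector of the centered points gives the
    # least-squares direction of the line.
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    return centroid, direction
```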

Fig. 2. Interception points of the incident rays with different horizontal reference planes.

With all the incident rays determined, we use the least-squares method to find the projection center c, the sum of whose squared distances to all the incident rays is minimal. The squared distance of the projection center c to each laser ray is calculated as

Eq. (2)

\[
d^2 = \frac{\left|(P_1 - P_0) \times (P_0 - P)\right|^2}{\left|P_1 - P_0\right|^2},
\]
where P(x, y, z) denotes the coordinates of the projection center, and P_0(x_{0i}, y_{0i}, z_{0i}) and P_1(x_{1i}, y_{1i}, z_{1i}) are two known points on the i'th incident ray. Differentiating d^2 with respect to x, y, and z, respectively, and setting the derivatives to zero, we obtain the projection center c(x_c, y_c, z_c) shown by the red square in Fig. 3. The intrinsic matrix of the virtual camera is then obtained: its principal point (C_x, C_y) equals (x_c, y_c) and its focal length f equals z_c.
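Setting the derivatives of the summed squared distances to zero yields a small linear system for the least-squares projection center; the sketch below is a generic NumPy implementation of that step under the stated formulation, not the author's code.

```python
import numpy as np

def projection_center(points, directions):
    """Least-squares point closest to a bundle of rays.

    points     : (N, 3) array, one known point P0 on each incident ray.
    directions : (N, 3) array of ray direction vectors.
    Setting the derivatives of the summed squared distances (Eq. 2) to zero
    gives the linear system  sum_i (I - d_i d_i^T) c = sum_i (I - d_i d_i^T) P0_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p0, d in zip(np.asarray(points, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal space
        A += M
        b += M @ p0
    return np.linalg.solve(A, b)         # projection center c(x_c, y_c, z_c)
```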

Fig. 3. Projection center denoted by the red square and its relative position to the interception points on different horizontal reference planes.

3.

Noise Analysis and Pattern Modeling

We choose the interception points on the reference plane z = 0 and show their zoomed-in views in Fig. 4. It is seen that these five groups of points are not on straight lines as designed, which is caused by random noise. The noise in this imaging system is mainly caused by two factors: (1) the captured image is affected by ambient and other interfering light sources and (2) the automatic image processing algorithms are affected by the unevenly distributed grayscales of the laser points. In addition to the noise, there are radial lens distortions that are inherent in the projector and cameras. Both the noise and the radial lens distortion greatly decrease the calibration accuracy and the system measurement accuracy. Hence, they must be removed for better accuracy. For central projection, the angle between any two projected rays does not change and all the projected rays intersect at the projection center. This is the fundamental property of the projector and camera, and the proposed method is based on it.

Fig. 4. Zoomed-in view of the interception points of the incident rays with the horizontal reference plane z = 0.

For two practically projected rays, the angle between them is computed by the following equation:

Eq. (3)

\[
\theta_{ij} = \cos^{-1}\frac{(x_i - x_c,\, y_i - y_c,\, z_i - z_c) \cdot (x_j - x_c,\, y_j - y_c,\, z_j - z_c)}{\left|(x_i - x_c,\, y_i - y_c,\, z_i - z_c)\right| \left|(x_j - x_c,\, y_j - y_c,\, z_j - z_c)\right|},
\]
where (x_i, y_i, z_i) and (x_j, y_j, z_j) are the i'th and j'th interception points on the horizontal reference plane z = 0 shown in Fig. 4.

In the ideal pattern design coordinate system, each bright point represents one projected ray. The bright point at the center of the designed pattern corresponds to the center projected ray by the projector. The distance between two adjacent bright points is equal in both the row and column directions, which is another property that the proposed pattern modeling method relies on. We could determine the relative positions of the projected rays based on this property though their equations in the virtual coordinate system are not known. From the fundamental central projection property of the projector, we know that the angle between the center ray and any other ray is always fixed. In the ideal pattern design coordinate system, the angle between the i’th ray and the center ray can be computed using the following equation:

Eq. (4)

\[
\theta_i = \tan^{-1}\frac{d_i}{D},
\]
where d_i denotes the distance between the center point and the i'th point, and D denotes the distance between the projection center of the projector and the orthogonal interception plane.

From Eq. (4), it is seen that the angle is determined by the ratio of d_i to D and is therefore not affected by their scale. Hence, we can assume that the distance d_i equals the pixel distance in the original pattern. Since the distance D is unknown, we need to compute it, and the following algorithm is proposed accordingly (a code sketch is given after the list).

  • Step 1: We select a set of points (45 points in this research work) around the center of the projected rays and compute the practical angles between the selected rays and the center ray with Eq. (3).

  • Step 2: We select the corresponding set of points from the ideally designed pattern and compute the ideal angles between the selected points and the center ray based on Eq. (4) with an initial estimate of D of 1000 pixels.

  • Step 3: We then compute the total difference between the practical angles and the corresponding ideal angles by the following equation:

    Eq. (5)

    \[
    \Delta\theta = \sum_{i=1}^{44} \left|\theta_i^0 - \theta_i\right|,
    \]
    where θ_i^0 denotes the practical angle of Eq. (3) and θ_i the ideal angle of Eq. (4) between the i'th ray and the center ray.

  • Step 4: We compute D by making Δθ minimum

    Eq. (6)

    \[
    \bar{D} = \arg\min_{D} \Delta\theta.
    \]
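The sketch below illustrates Steps 1 to 4, assuming the practical angles from Eq. (3) and the pixel distances d_i from the designed pattern are already available; the bounded one-dimensional search and its limits are illustrative assumptions, since the paper only states an initial estimate of 1000 pixels.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_projection_distance(theta_practical, pixel_distances,
                                 search_range=(100.0, 10000.0)):
    """Estimate the distance D (in pixels) of Eq. (4) by minimizing the total
    angle difference of Eq. (5), following Steps 1-4.

    theta_practical : angles between each selected ray and the center ray,
                      measured from the captured pattern with Eq. (3) (radians).
    pixel_distances : distances d_i from the center point to the selected
                      points in the designed pattern (pixels).
    search_range    : assumed bounds for the one-dimensional search over D.
    """
    theta_practical = np.asarray(theta_practical, dtype=float)
    pixel_distances = np.asarray(pixel_distances, dtype=float)

    def total_angle_difference(D):                      # Eq. (5)
        theta_ideal = np.arctan(pixel_distances / D)    # Eq. (4)
        return np.abs(theta_practical - theta_ideal).sum()

    result = minimize_scalar(total_angle_difference,
                             bounds=search_range, method='bounded')
    return result.x                                     # Eq. (6)
```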

After distance D is computed, the practical angle and the ideal angle between the center ray and any other ray are computed by Eqs. (3) and (4), respectively. The results are shown in Fig. 5. The practical angle computed by Eq. (3) is denoted as “original” and the ideal angle computed by Eq. (4) is denoted as “modeled.” It is seen that the angles obtained from these two different equations [Eqs. (3) and (4)] match well, but there are also obvious differences caused by the noise.

Fig. 5. Modeled angles versus original angles.

After θ_i is computed by Eq. (4) for each ray, the projected rays have been modeled; they are denoted as the modeled rays R_i^m, i = 1, 2, ..., 45. To determine the interception pattern, we need to compute the plane that intercepts the projected rays. To compute the interception plane, we propose the following pattern modeling method (a code sketch is given after the list):

  • Step 1: We intercept the modeled rays R_i^m, i = 1, 2, ..., 45, with the plane ax + by + cz = 1 and compute the modeled distances between the center point and the selected set of points around it, calculated as

    Eq. (7)

    \[
    d_i^m = \sqrt{(x_i^m - x_0^m)^2 + (y_i^m - y_0^m)^2 + (z_i^m - z_0^m)^2}.
    \]

  • Step 2: We then compute the original distances between the center point and the same set of points around it in the practically captured pattern, which is determined as

    Eq. (8)

    \[
    d_i^p = \sqrt{(x_i^p - x_0^p)^2 + (y_i^p - y_0^p)^2 + (z_i^p - z_0^p)^2}.
    \]

  • Step 3: We compute the total difference between the modeled distances and the original distances by the following equation:

    Eq. (9)

    \[
    \Delta d = \sum_{i=1}^{44} \Delta d_i = \sum_{i=1}^{44} \left|d_i^m - d_i^p\right|.
    \]

  • Step 4: We compute the optimal interception plane P(a,b,c) by making Δd minimum, which is formulated as

    Eq. (10)

    \[
    \bar{P} = \arg\min_{P} \Delta d.
    \]
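The following sketch illustrates Steps 1 to 4 under the simplifying assumption that the modeled rays emanate from the origin of the virtual coordinate system; the Nelder–Mead search over (a, b, c), the initial guess, and the function names are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def fit_interception_plane(center_dir, ray_dirs, d_practical, p0=(0.0, 0.0, 1.0)):
    """Find the plane a*x + b*y + c*z = 1 whose interceptions with the modeled
    rays reproduce the point spacings of the captured pattern (Eqs. 7-10).

    center_dir  : unit direction of the center modeled ray.
    ray_dirs    : (N, 3) unit directions of the other modeled rays.
    d_practical : (N,) distances d_i^p of Eq. (8) measured in the captured pattern.
    p0          : initial guess for the plane parameters (a, b, c).
    All rays are assumed to start at the projection center, placed at the origin.
    """
    center_dir = np.asarray(center_dir, float)
    ray_dirs = np.asarray(ray_dirs, float)
    d_practical = np.asarray(d_practical, float)

    def intercept(direction, plane):
        # The ray x = t * direction meets a*x + b*y + c*z = 1 at t = 1 / (plane . direction).
        return direction / np.dot(plane, direction)

    def total_difference(plane):                      # Eq. (9)
        x0 = intercept(center_dir, plane)
        d_model = np.array([np.linalg.norm(intercept(d, plane) - x0)
                            for d in ray_dirs])       # Eq. (7)
        return np.abs(d_model - d_practical).sum()

    res = minimize(total_difference, np.asarray(p0, float), method='Nelder-Mead')
    return res.x                                      # optimal (a, b, c) of Eq. (10)
```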

Because the intercepted points are computed in a virtual coordinate system instead of the world coordinate system, we need to relate the coordinates of the original points to the coordinates of the modeled points by affine registration. We register the two sets of points in the least-squares sense by finding the transformation matrix A that minimizes the sum of squared errors d_r:

Eq. (11)

\[
\begin{bmatrix} \bar{x}_i^p \\ \bar{y}_i^p \\ \bar{z}_i^p \\ 1 \end{bmatrix}
= \omega
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
\begin{bmatrix} x_i^m \\ y_i^m \\ z_i^m \\ 1 \end{bmatrix},
\]

Eq. (12)

\[
d_r = \sum_{i=1}^{44} \left[(\bar{x}_i^p - x_i^p)^2 + (\bar{y}_i^p - y_i^p)^2 + (\bar{z}_i^p - z_i^p)^2\right],
\]

Eq. (13)

\[
\bar{A} = \arg\min_{A} d_r,
\]
where ω is a constant and the transformation matrix A is defined as

Eq. (14)

\[
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}.
\]
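A compact way to carry out this registration is to solve for the affine matrix directly by linear least squares, as sketched below; this linear solution is a stand-in for the minimization of Eqs. (11)–(14), and the function names are illustrative.

```python
import numpy as np

def affine_register(modeled, practical):
    """Least-squares affine registration of the modeled points to the
    captured (practical) points, a linear stand-in for Eqs. (11)-(14).

    modeled, practical : (N, 3) arrays of corresponding 3-D points.
    Returns a 4 x 4 homogeneous matrix A such that A @ [x^m, y^m, z^m, 1]^T
    approximates [x^p, y^p, z^p, 1]^T in the least-squares sense (Eq. 13).
    """
    modeled = np.asarray(modeled, float)
    practical = np.asarray(practical, float)
    N = modeled.shape[0]
    M_h = np.hstack([modeled, np.ones((N, 1))])   # homogeneous modeled points
    # Solve M_h @ X = practical for the (4 x 3) coefficient block X.
    X, *_ = np.linalg.lstsq(M_h, practical, rcond=None)
    A = np.eye(4)
    A[:3, :] = X.T                                # top three rows of Eq. (14)
    return A

def registration_error(A, modeled, practical):
    """Sum of squared registration errors d_r of Eq. (12)."""
    N = len(modeled)
    mapped = (np.hstack([np.asarray(modeled, float), np.ones((N, 1))]) @ A.T)[:, :3]
    return np.sum((mapped - np.asarray(practical, float)) ** 2)
```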

Figure 6 shows the modeled points after registration (blue circles) versus the original points (red dots). It is seen that the proposed registration method performs very well. The points in each group of the modeled points lie on the same straight line, which verifies that the noise and radial lens distortions were successfully removed. To see the removal effect more clearly, the differences of the x and y coordinates of these 45 points before pattern modeling are plotted in Figs. 7(a) and 7(b), respectively, and the differences after pattern modeling are plotted in Figs. 8(a) and 8(b), respectively. It is seen that the variation of the differences after pattern modeling becomes regular and the noise (random variation) is successfully removed. After pattern modeling, the practically captured patterns are replaced with the modeled patterns. The measurement system is calibrated with the modeled patterns as described in Ref. 20. After calibration, the mirror can be measured robustly in real time by a single projection.

Fig. 6. Modeled points after registration versus original points.

Fig. 7. Differences of adjacent points before pattern modeling: (a) differences in the x coordinates and (b) differences in the y coordinates.

Fig. 8. Differences of adjacent points after pattern modeling: (a) differences in the x coordinates and (b) differences in the y coordinates.

4.

Experimental Results

Figure 9 shows the practically established system. The horizontal screen p1 is placed on top of a metric lab jack whose height can be adjusted. The rays are produced by a pico laser projector and are reflected by the mirror surface onto the beam splitter, which splits the rays into two parts. Half of the rays pass through and image on the diffusive plane p2; the other half are reflected and image on the diffusive plane p3. The dragonfly camera c1 is aimed at the horizontal screen to compute the equations of the incident rays after calibration. The dragonfly cameras c2 and c3 are aimed at the two diffusive planes p2 and p3, respectively, and synchronously record images at 60 frames/s.

Fig. 9. The developed system.

Figure 10(a) shows the designed pattern, which is projected by the pico laser projector onto a horizontal diffusive plane. The brightest point in the center is the center point. The projected pattern on the horizontal screen is captured by camera c1, and Fig. 10(b) shows one captured example. Figures 11(a) and 11(b) show the modeled coordinates (in red) versus the original coordinates (in blue). It is seen that the modeled coordinates and the original coordinates match well, which meets the requirement that the random variations (noise) are removed while the pattern is kept undistorted.

Fig. 10. Designed pattern and captured pattern: (a) designed pattern in the computer and (b) captured pattern by the camera.

Fig. 11. Modeled coordinates versus the original coordinates: (a) x coordinate and (b) y coordinate.

For quantitative evaluation, we compute the RMS errors between the reconstructed points and the modeled points with the following equation:

Eq. (15)

\[
\begin{bmatrix} E_x \\ E_y \\ E_z \end{bmatrix} =
\begin{bmatrix}
\sqrt{\dfrac{1}{N}\sum_{i=1}^{N}(X_{ri} - X_{mi})^2} \\[2ex]
\sqrt{\dfrac{1}{N}\sum_{i=1}^{N}(Y_{ri} - Y_{mi})^2} \\[2ex]
\sqrt{\dfrac{1}{N}\sum_{i=1}^{N}(Z_{ri} - Z_{mi})^2}
\end{bmatrix},
\]
where E_x, E_y, and E_z denote the errors in the x, y, and z coordinates, respectively, (X_{ri}, Y_{ri}, Z_{ri}) denotes the i'th reconstructed point, and (X_{mi}, Y_{mi}, Z_{mi}) denotes the i'th modeled point. Assuming the flat mirror is ideal, Z_{oi} of the original point (X_{oi}, Y_{oi}, Z_{oi}) is constant for every point, and X_{oi} and Y_{oi} are computed by camera c1 as follows.

We determine the homography between camera c1 and the horizontal reference plane z=0 using the MATLAB calibration toolbox. The determined homography could be formulated as

Eq. (16)

\[
H = \frac{1}{Z_c}
\begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_0 & r_1 & T_x \\ r_3 & r_4 & T_y \\ r_6 & r_7 & T_z \end{bmatrix},
\]
where Z_c is a scalar, f_x and f_y are the focal lengths in the x- and y-directions, respectively, (C_x, C_y) is the principal point of camera c1, and [r_0, r_1, T_x; r_3, r_4, T_y; r_6, r_7, T_z] contains the extrinsic parameters between the horizontal reference plane z = 0 and the imaging plane of camera c1. (X_{oi}, Y_{oi}) can then be computed by the following equation:

Eq. (17)

\[
\begin{bmatrix} X_{oi} \\ Y_{oi} \\ 1 \end{bmatrix} = H^{-1} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix},
\]
where (u_i, v_i) is the camera coordinate of the i'th point. After (X_{oi}, Y_{oi}, Z_{oi}) is determined, (X_{mi}, Y_{mi}, Z_{mi}) is determined by Eqs. (3)–(15) as described in Secs. 3 and 4, which can be summarized as

Eq. (18)

\[
\begin{bmatrix} X_{mi} \\ Y_{mi} \\ Z_{mi} \\ 1 \end{bmatrix} = M \begin{bmatrix} X_{oi} \\ Y_{oi} \\ Z_{oi} \\ 1 \end{bmatrix},
\]
where M is the affine pattern modeling matrix.
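A short sketch of this evaluation step, assuming the homography H and the point correspondences are already available: the pixel coordinates are mapped onto the reference plane with the inverse homography of Eq. (17), and the per-axis RMS errors of Eq. (15) are then computed. Function names are illustrative and the homogeneous normalization is made explicit.

```python
import numpy as np

def backproject_points(H, pixels):
    """Map pixel coordinates (u_i, v_i) of camera c1 onto the reference plane
    z = 0 with the inverse homography of Eq. (17).

    H      : 3 x 3 homography between the plane z = 0 and the image of c1 (Eq. 16).
    pixels : (N, 2) array of pixel coordinates.
    Returns the (N, 2) plane coordinates (X_oi, Y_oi).
    """
    pixels = np.asarray(pixels, float)
    pixels_h = np.hstack([pixels, np.ones((len(pixels), 1))])   # homogeneous pixels
    plane_h = pixels_h @ np.linalg.inv(H).T
    return plane_h[:, :2] / plane_h[:, 2:3]                     # normalize the scale

def rms_errors(reconstructed, modeled):
    """Per-axis RMS reconstruction errors E_x, E_y, E_z of Eq. (15)."""
    diff = np.asarray(reconstructed, float) - np.asarray(modeled, float)
    return np.sqrt(np.mean(diff ** 2, axis=0))
```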

To compare the measurement accuracy quantitatively, we first compute the RMS errors of reconstructing the flat mirror without pattern modeling. The computed RMS errors are 0.2254 mm in the x coordinate, 0.1977 mm in the y coordinate, and 0.0825 mm in the z coordinate. The reconstructed points versus the original points are shown in Fig. 12(a), where the blue circles denote the original points and the red crosses denote the reconstructed points. We then compute the errors of reconstructing the flat mirror with pattern modeling of the world coordinates on p1, p2, and p3. The computed RMS errors are 0.0332 mm in the x coordinate, 0.0278 mm in the y coordinate, and 0.0113 mm in the z coordinate. The reconstructed points versus the original points are shown in Fig. 12(b), with the same color convention.

Fig. 12. Illustration of the reconstruction error: (a) result without modeling; (b) result with the world coordinates on the three diffusive planes modeled; and (c) result with two camera coordinates and the world coordinates on the three diffusive planes modeled.

To examine the pattern modeling effect further, we compute the RMS errors of reconstructing the flat mirror with pattern modeling of the camera coordinates in c2 and c3 and the world coordinates on p1, p2, and p3. The reconstructed points versus the original points are shown in Fig. 12(c), with the same color convention. The computed RMS errors become 3.4681×10−14 mm in the x coordinate, 6.6771×10−14 mm in the y coordinate, and 2.4653×10−14 mm in the z coordinate, respectively. At first glance, such small RMS errors seem extremely unlikely, but further thought confirms that they are reasonable. When all the involved patterns are modeled, both the calibration stage and the reconstruction stage can be assumed to be free of noise. In addition, the reconstruction error is computed between the reconstructed points and the modeled points instead of the original points. Thus, no random noise can be introduced in the evaluation stage. The reconstruction error is close to zero but not exactly zero, as it is in our MATLAB simulation,23 which is caused by the use of nonideal hardware. Another adverse factor caused by the nonideal hardware is shape distortion. Hence, the hardware used should be as precise as possible to achieve the highest measurement accuracy. Since telescope mirrors have known forms, e.g., spherical, planar, and parabolic, we can always model the camera coordinates in c2 and c3 while measuring their shapes. Hence, the measurement accuracy of this technique for telescope mirrors is at the femtometer level (10−13 mm). Based on theoretical analysis, this technique can approach zero-error measurement accuracy when all the devices used are nearly perfect and all the involved patterns are modeled.

Finally, we reconstruct a spherical convex mirror to show the strength of the proposed pattern modeling method visually. Figure 13(a) shows the reconstructed spherical convex mirror without pattern modeling; the noise severely ruins the reconstruction. Figure 13(b) shows the reconstructed spherical convex mirror with the three patterns of world coordinates modeled. Figure 13(c) shows the reconstructed spherical convex mirror with the three patterns of world coordinates and the two patterns of camera coordinates modeled. It is seen that there is a noise threshold that determines whether the proposed structured light technique can work effectively. Only when the noise in all five involved patterns is eliminated does the proposed structured light technique yield satisfactory accuracy. These experimental results verify both the strength of the proposed pattern modeling method and the effectiveness of the proposed structured light technique.

Fig. 13. Reconstruction of the convex mirror: (a) result without modeling; (b) result with the world coordinates on the three diffusive planes modeled; and (c) result with two camera coordinates and the world coordinates on the three diffusive planes modeled.

To show the superiority of this technique in measurement accuracy over state-of-the-art methods, we compared it with those state-of-the-art methods for which quantitative results are available; the comparison is shown in Table 1. It is seen that the proposed technique is significantly more robust than state-of-the-art methods. Furthermore, some state-of-the-art methods, e.g., Ref. 21, cannot measure the 3-D profiles of the mirror while this technique can. The advantage of this technique over state-of-the-art methods is thus further verified.

Table 1

Comparison of our method with state-of-the-art methods.

Methods | Cameras | Patterns | Images | Error (mm or %)
1 | 1 | 1 | 1 | 0.3%
2 | 1 | 1 | 1 | 0.644 mm
3 | 1 | 1 | 1 | 0.5 mm
4 | 1 | 1 | 1 | 0.48 mm
5 | 1 | 0 | Sequence | 11.74%
6 | 2 | 1 | 2 | 0.3 mm
9 | 2 | 1 | 2 | 15%
10 | >2 | 1 | Sequence | 0.02/0.2 mm
15 | 1 | 1 | 1 | 0.5×10−3 mm
20 | 1 | 1 | 1 | 0.25×10−3 mm
21 | 1 | 1 | 1 | 0.2×10−3 mm
23 | 2 | 1 | 2 | 0.1/20 mm
24 | 2 | 1 | 2 | 0.09/20 mm
Proposed | 2 | 1 | 2 | 10−13/20 mm

5.

Conclusion

It is important and challenging to measure the 3-D shapes of mirrors accurately for telescope manufacturing. This paper presents a technique that is capable of robustly measuring the profiles of mirrors, which is essential for the direct measurement of the manufacturing errors of the mirror. A pattern of laser rays is projected onto the mirror surface, reflected, and intercepted by two diffusive planes. With two points of each reflected ray obtained, the ray is uniquely determined in the world coordinate system. Then, the interception points of the projected pattern on the mirror surface are obtained by computing the intersections between the incident rays and the reflected rays. The proposed pattern modeling method is capable of removing the noise and radial lens distortion by replacing the captured pattern with a theoretical pattern computed by registering the captured pattern with the designed pattern. Experimental results showed that the proposed pattern modeling method could increase the measurement accuracy of this technique from 0.1 to 10−13 mm. Above all, this technique is significantly more accurate than most state-of-the-art techniques. Compared to the evaluation methods adopted by popular interferometers in mirror manufacturing, this technique is capable of measuring more mirror parameters, such as the paraxial radius, geometry dimension, eccentric errors, and surface errors, after the 3-D shape of the mirror is reconstructed, whereas an interferometer can only evaluate the surface errors of mirrors. Hence, this technique is promising for commercial products and devices in mirror manufacturing.

References

1. 

T. Bonfort, P. Sturm and P. Gargallo, “General specular surface triangulation,” in Asian Conf. on Computer Vision, 872 –881 (2006). Google Scholar

2. 

K. N. Kutulakos and E. Steger, “A theory of refractive and specular 3D shape by light-path triangulation,” Int. J. Comput. Vision, 76 (1), 13 –29 (2008). http://dx.doi.org/10.1007/s11263-007-0049-9 IJCVEQ 0920-5691 Google Scholar

3. 

M. M. Liu, R. Hartley and M. Salzmann, “Mirror surface reconstruction from a single image,” IEEE Trans. Pattern Anal. Mach. Intell., 37 (4), 760 –773 (2015). http://dx.doi.org/10.1109/TPAMI.2014.2353622 ITPIDJ 0162-8828 Google Scholar

4. 

S. Savarese, M. Chen and P. Perona, “Local shape from mirror reflections,” Int. J. Comput. Vision, 64 (1), 31 –67 (2005). http://dx.doi.org/10.1007/s11263-005-1086-x IJCVEQ 0920-5691 Google Scholar

5. 

M. F. Tappen, “Recovering shape from a single image of a mirrored surface from curvature constraints,” in IEEE Conf. on Computer Vision and Pattern Recognition, 2545 –2552 (2011). http://dx.doi.org/10.1109/CVPR.2011.5995376 Google Scholar

6. 

J. Balzer, S. Hofer and J. Beyerer, “Multiview specular stereo reconstruction of large mirror surfaces,” in IEEE Conf. on Computer Vision and Pattern Recognition, 2537 –2544 (2011). http://dx.doi.org/10.1109/CVPR.2011.5995346 Google Scholar

7. 

J. Lellmann et al., “Shape from specular reflection and optical flow,” Int. J. Comput. Vision, 80 (2), 226 –241 (2008). http://dx.doi.org/10.1007/s11263-007-0123-3 IJCVEQ 0920-5691 Google Scholar

8. 

J. E. Solem, H. Aanas and A. Heyden, “A variational analysis of shape from specularities using sparse data,” in 2nd Int. Symp. on 3D Data Processing, Visualization and Transmission, 2223 –2238 (2004). http://dx.doi.org/10.1109/TDPVT.2004.1335137 Google Scholar

9. 

Z. F. Wang and S. Inokuchi, “Determining shape of specular surfaces,” in the 8th Scandinavian Conf. on Image Analysis, 25 –28 (1993). Google Scholar

10. 

M. Weinmann et al., “Multi-view normal field integration for 3D reconstruction of mirroring objects,” in Int. Conf. on Computer Vision, 1 –8 (2013). Google Scholar

11. 

J. B. Houston, C. J. Buccini and P. K. Neill, “A laser unequal path interferometer for the optical shop,” Appl. Opt., 6 (7), 1237 –1242 (1967). http://dx.doi.org/10.1364/AO.6.001237 APOPAI 0003-6935 Google Scholar

12. 

L. Y. He, A. Davies and C. J. Evans, “Comparison of the area structure function to alternate approaches for optical surface characterization,” Proc. SPIE, 8493 84930C (2012). http://dx.doi.org/10.1117/12.929166 PSISDG 0277-786X Google Scholar

13. 

L. Y. He, A. Davies and C. J. Evans, “Two-quadrant area structure function analysis for optical surface characterization,” Opt. Express, 20 (21), 23275 –23280 (2012). http://dx.doi.org/10.1364/OE.20.023275 OPEXFF 1094-4087 Google Scholar

14. 

L. Rosenboom, T. Kreis and W. Juptner, “Surface description and defect detection by wavelet analysis,” Meas. Sci. Technol., 22 (4), 045102 (2011). http://dx.doi.org/10.1088/0957-0233/22/4/045102 MSTCEP 0957-0233 Google Scholar

15. 

J. Burke et al., “Qualifying parabolic mirrors with deflectometry,” J. Eur. Opt. Soc. Rapid Publ., 8 13014 (2013). http://dx.doi.org/10.2971/jeos.2013.13014 Google Scholar

16. 

J. R. Ma et al., “Inverse Hartmann surface form measurement based on spherical coordinates,” Proc. SPIE, 8201 820126 (2011). http://dx.doi.org/10.1117/12.906981 PSISDG 0277-786X Google Scholar

17. 

T. Kreis, J. Burke and R. B. Bergmann, “Surface characterization by structure function analysis,” J. Eur. Opt. Soc. Rapid Publ., 9 14032 (2014). http://dx.doi.org/10.2971/jeos.2014.14032 Google Scholar

18. 

A. H. Hvisc and J. H. Burge, “Structure function analysis of mirror fabrication and support errors,” Proc. SPIE, 6671 66710A (2007). http://dx.doi.org/10.1117/12.736051 PSISDG 0277-786X Google Scholar

19. 

J. H. Burge et al., “Design and analysis for interferometric measurements of the GMT primary mirror segments,” Proc. SPIE, 6273 62731V (2006). http://dx.doi.org/10.1117/12.670982 PSISDG 0277-786X Google Scholar

20. 

J. H. Burge et al., “Alternate surface measurements for GMT primary mirror segments,” Proc. SPIE, 6273 62732T (2006). http://dx.doi.org/10.1117/12.672522 PSISDG 0277-786X Google Scholar

21. 

R. Huang, P. Su and G. Brusa, “Measurement of a large deformable aspherical mirror using SCOTS (software configurable optical test system),” Proc. SPIE, 8838 883807 (2013). http://dx.doi.org/10.1117/12.2024336 PSISDG 0277-786X Google Scholar

22. 

U. Ceyhan et al., “Measurements of aberrations of aspherical lenses using experimental ray tracing,” Proc. SPIE, 8082 80821K (2011). http://dx.doi.org/10.1117/12.895009 PSISDG 0277-786X Google Scholar

23. 

Z. Z. Wang et al., “Measurement of mirror surfaces using specular reflection and analytical computation,” Mach. Vision Appl., 24 (2), 289 –304 (2013). http://dx.doi.org/10.1007/s00138-012-0432-6 Google Scholar

24. 

Z. Z. Wang, “A one-shot-projection method for robust measurement of specular surfaces,” Opt. Express, 23 (3), 1912 –1929 (2015). http://dx.doi.org/10.1364/OE.23.001912 OPEXFF 1094-4087 Google Scholar

Biography

Zhenzhou Wang received his bachelor’s and master’s degrees from Tianjin University, and his PhD from University of Kentucky. He worked as a researcher at the University of Kentucky until 2012. He was selected in the “Hundred Talents Plan, A-Class” of Chinese Academy of Sciences in 2013 and works as a research fellow/professor. He serves as a panelist for the NSF of China. His research interests include image processing, computer vision, structured light and so on.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Zhenzhou Wang "Technique for measuring the three-dimensional shapes of telescope mirrors," Optical Engineering 55(9), 094108 (30 September 2016). https://doi.org/10.1117/1.OE.55.9.094108
Published: 30 September 2016
KEYWORDS: Mirrors, Telescopes, Space telescopes, Cameras, James Webb Space Telescope, Manufacturing, Interferometers
