Open Access
8 July 2021
On-line process decisions using convolutional neural network for centering high-precision short-focus lens
Shiau-Cheng Shiu, Ke-Er Tang, Chun-Wei Liu
Abstract

This study integrated the use of a centering machine with an automatic optical axis measuring technique to improve the centering process for short-focus lenses, which are widely used in interferometric inspection, microscopy, and spectrometry. A major concern in the centering process is coma aberration during the axis centering of a lens, which leads to deformation of the imaging system. Because of the small size and high curvature of short-focus lenses, large optical axis error and unstable grinding quality are highly problematic in the high-precision centering process. To reduce optical axis error and improve manufacturing quality, an on-line optical axis measuring system that applies convolutional neural network (CNN) machine learning for the evaluation of centering stability was developed. According to experimental results, the CNN achieved 95% accuracy. With the use of trace classification and optical axis measurements, the optical axis error was controlled to <150 μrad, the range of cracks to <E0.1, and the circularity error to <0.1 mm.

1.

Introduction

High-end optical inspection instruments are widely used in interferometric inspection and astronomy, as well as in microscopes, spectrometers, and cameras.1–14 The key factor affecting inspection precision is the magnification of the lens module. In an inspection instrument, a lens module comprising short-focus lenses can achieve magnification of 1000×. The high curvatures of both surfaces, which mainly determine the effective focal length (EFL), limit the potential diameter of a lens. Therefore, the diameters of short-focus lenses are usually smaller than 20 mm. Generally, in a camera or telescope lens module, a short-focus lens is used to broaden the viewing angle and depth, but this causes a decrease in magnification. By contrast, the working distance of a microscope is so much shorter that a short-focus lens can greatly increase the magnification.

However, the geometry of short-focus lenses creates challenges in the centering process. Their small size renders them highly sensitive to radial grinding force during the edging process and to errors, including those caused by vibration. Because the thickness varies greatly from the edge to the center, the grinding force varies as the grinding wheel feeds in.15 The low magnification through a short-focus lens also results in low measurement accuracy of the optical axis error. This critically increases the difficulty of centering because manufacturing a short-focus lens usually requires high precision.

Centering is not only the final step of lens processing but also the key process that minimizes the optical axis centering error with respect to the lens mechanical axis. High optical axis precision may require on-line measurement.16,17 Latyev et al.18 proposed design and processing methods for lens centering and explained the value of centering in the manufacture of optical components and lens-related products; they also introduced the key technologies used for centering and measurement. Gluhchev et al.19 used real-time automatic image processing to measure lens decentration through automatically collimated reticles. Magarill and Welham20 developed a mathematical model to determine the positions of curvature centers in lens assemblies. Kaew-aram and Sutapun21 designed an apparatus for measuring the centering error of ophthalmic lenses. Parks22 used a point source microscope to view, through the upper lens surface, the centers of curvature of each element as it is assembled in a lens barrel.

The present study combined on-line optical axis measurement with convolutional neural network (CNN) machine learning to perform diagnosis and decision-making for the centering and edging processes of short-focus lenses. Centering is the process of aligning the lens optical axis on the lens edging machine, which shapes the lens edge by wheel grinding, so that the resulting lens mechanical axis coincides with the lens optical axis. However, if the edge cracks or surface scratches caused by wheel grinding or clamping are too large, the lens thickness or diameter may be insufficient to repeat the centering or polishing process to eliminate the defect. Therefore, by analyzing the trace of reticle images during centering and training a CNN, aberrations can be detected early in the edging process. The following sections present the theory of on-line measurement; the procedures for optimizing optics and magnification, for data preprocessing, and for trace classification and process decisions; and the structure of the CNN used in this study. After training on numerous data, the process decisions made by the CNN achieved 95% accuracy.

The specifications of the lenses used for the present study are shown in Fig. 1.

Fig. 1

Short-focus lens specifications in the present study.


2.

Theory of On-Line Measurement

Figure 2 shows the optical structure of an on-line optical axis transmission measuring device. Transmission measurement considers the optical axis of the whole lens system. The light source on the side opposite the image sensor emits light toward the target lens. The light is first focused at a point located at a distance A behind the target lens. In an on-line optical axis transmission measuring device, the lens module of the camera includes a head lens and an objective lens. The positions of the head lens and sensor can be adjusted so that the image focused by the target lens can pass through the head lens and objective lens and be focused on the image sensor.

Fig. 2

Optical structure of on-line optical axis transmission measuring device.


Because the optical axis error is small, the distance between the first focused reticle image and the optical axis is ignored; in this study, the first focused reticle image is considered to lie on the optical axis. Therefore, the optical axis error α is calculated as

Eq. (1)

$$\alpha = \frac{a}{A},$$
where a is the distance between the image formed by the target lens and the main axis, and A is the image distance of the target lens, as shown in Fig. 2.

In the preprocessing algorithm, the distance d is obtained through image processing. In an imaging system, the magnification of an image is proportional to the image distance and inversely proportional to the object distance. The magnification between the initially focused image and the image on the sensor is determined by the following equation:

Eq. (2)

$$\mathrm{mag} = \frac{d}{a} = \frac{a'}{a}\cdot\frac{d}{a'} = \frac{C}{B}\cdot\frac{Q}{D} = \frac{QC}{DB},$$
such that the optical axis error α is

Eq. (3)

$$\alpha = \frac{a}{A} = \frac{DB}{AQC}\,d,$$
where d is the distance between the final image on the sensor and the main axis; a′ is the distance between the image formed by the head lens and the main axis; B is the object distance of the head lens, whose object is the image formed by the target lens; C is the image distance of the head lens; D is the object distance of the objective lens, whose object is the image formed by the head lens; and Q is the image distance of the objective lens.
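The computation in Eqs. (1)–(3) reduces to a few multiplications and divisions. The following is a minimal Python sketch (not the authors' implementation); the distances in the usage example are hypothetical placeholders, not the actual machine layout.

```python
# Sketch of Eqs. (1)-(3): optical axis error from the measured image offset.
# Variable names mirror the symbols in Fig. 2; units are assumed consistent
# (e.g., all distances in millimeters).

def optical_axis_error(d, A, B, C, D, Q):
    """Return the optical axis error alpha in microradians.

    d    : distance between the final image on the sensor and the main axis
    A    : image distance of the target lens
    B, C : object and image distances of the head lens
    D, Q : object and image distances of the objective lens
    """
    mag = (C / B) * (Q / D)   # Eq. (2): total magnification d/a
    a = d / mag               # offset of the first focused image from the axis
    return (a / A) * 1e6      # Eq. (1)/(3), converted from radians to microradians


# Hypothetical distances purely for illustration:
print(optical_axis_error(d=0.002, A=30.0, B=400.0, C=50.0, D=150.0, Q=160.0))
```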

3.

Methods

3.1.

Position and Magnification Optimization

To clearly capture the image of the reticle through multiple lenses, the camera of the measurement module is made adjustable. However, even though the lens focal length is known, the magnification, which is used to calculate the optical axis error, is still variable. An additional parameter or constraint is required to simplify the algorithm so that the magnification can be determined entirely by the lens focal length.

Figure 3 shows the system mounted on the centering machine; the distances marked on the figure correspond to those in Fig. 2. The distance Q follows the Deutsches Institut für Normung standard regulating the distance of a sensor behind a microscope objective. The distance D is also standardized in accordance with objective lens specifications.

Fig. 3

Light source on the left, centering machine in the middle, and camera plus sensor on the right, with distance relationships corresponding to those in Fig. 2.


The parameters A+B and C are controlled by the camera; A+B is determined by the head lens position, and A+B+C is determined by the sensor position. With the head lens or sensor position fixed, the image magnification can be determined from the target lens focal length f, which can be calculated from the distances A+B and C:

Eq. (4)

$$f = f(A+B,\,C) \quad \text{or} \quad f = f(A+B+C,\,C).$$

Because of the limitations of the mechanical structure, if the head lens or sensor position is fixed, then lenses with a sufficiently short EFL cannot be focused clearly on the sensor. With Eq. (4), the immeasurable focal range can be calculated. The following are two possible mechanical setups:

  • 1. The head lens position (distance A+B) is fixed.

  • 2. The sensor position (distance A+B+C) is fixed.

C is variable in both setups. That is, within a limited range of distance C, a corresponding range of focal lengths is immeasurable. The equation f=f(A+B,C) describes setup 1, and f=f(A+B+C,C) describes setup 2. By expanding Eq. (4) with the imaging equations of the target and head lenses, the focal lengths in setups 1 and 2 can be expressed as follows:

Eq. (5)

$$f(l_{AB}, C) = \frac{P\,l_{AB}(C - f_h) - PCf_h}{(P + l_{AB})(C - f_h) - Cf_h}.$$

Eq. (6)

$$f(l_{ABC}, C) = \frac{P\,l_{ABC}(C - f_h) - PC^2}{(P + l_{ABC})(C - f_h) - C^2},$$
where $l_{AB}$ is the distance A+B, $l_{ABC}$ is the distance A+B+C, P is the object distance of the target lens (i.e., the distance between the light source and the target lens), and $f_h$ is the focal length of the head lens, as shown in Fig. 2.

If $l_{AB}$ or $l_{ABC}$ is fixed, then the image magnification and the target lens focal length are functions of the distance C, as displayed in Fig. 4. In this study, the optimal position was considered to be that with the minimal immeasurable range of focal length.
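As a check of Eqs. (5) and (6), the measurable focal length can be swept over the travel of C for each setup; focal lengths that no admissible C can produce form the immeasurable range. Below is a short sketch; the fixed distances come from Fig. 4, whereas P and $f_h$ are placeholder values, not the machine's actual layout.

```python
# Sketch of Eqs. (5) and (6): target lens focal length as a function of the
# head lens image distance C for the two mechanical setups.
import numpy as np

def f_setup1(l_AB, C, P, f_h):
    # Eq. (5): head lens position (A + B) fixed
    return (P * l_AB * (C - f_h) - P * C * f_h) / \
           ((P + l_AB) * (C - f_h) - C * f_h)

def f_setup2(l_ABC, C, P, f_h):
    # Eq. (6): sensor position (A + B + C) fixed
    return (P * l_ABC * (C - f_h) - P * C**2) / \
           ((P + l_ABC) * (C - f_h) - C**2)

C = np.linspace(60.0, 200.0, 281)                    # sweep of distance C (mm)
f1 = f_setup1(l_AB=447.76, C=C, P=250.0, f_h=50.0)   # A+B from Fig. 4
f2 = f_setup2(l_ABC=605.5, C=C, P=250.0, f_h=50.0)   # A+B+C from Fig. 4
# Focal lengths outside [f.min(), f.max()] for the available travel of C
# are immeasurable for that setup.
print(f1.min(), f1.max(), f2.min(), f2.max())
```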

Fig. 4

(a) Immeasurable focus range of setup 1. (b) Magnification when A+B=447.76. (c) Focal length when A+B=447.76. (d) Immeasurable focal range of setup 2. (e) Magnification when A+B+C=605.5. (f) Focal length when A+B+C=605.5.


3.2.

Data Preprocessing

The optical axis error algorithm has five steps: contrast maximization, subpixel division, binarization, reticle center definition, and denoising, followed by calculation of the trace center. A minimal code sketch of these steps follows the list below.

  • 1. Contrast maximization: Maximizing contrast is effective for identifying the most distinct features of an image. By proportionally increasing the luminance difference between each pair of pixels, all the contours of images become apparent. The following method is used to calculate the new luminance by using the maximal and minimal values:

    Eq. (7)

    $$l_{mid} = l_{min} + \frac{(l_{max} - l_{min})\,l_{min}}{(255 - l_{max}) + l_{min}},$$

    Eq. (8)

    $$l_{new}(l) = (l - l_{mid})\cdot\frac{255}{l_{max} - l_{min}} + l_{mid}.$$
    Here, $l_{max}$ and $l_{min}$ are the maximal and minimal luminance values of the image, $l_{mid}$ is the defined middle value of the image, and $l_{new}$ is the new luminance corresponding to the original luminance value l. Luminance values are of type uint8, ranging from 0 (darkest) to 255 (lightest). With this method, the new minimal luminance is guaranteed to be 0 and the new maximal luminance to be 255.

  • 2. Subpixel division: By dividing each pixel into 3×3 or more subpixels, the contours between the black and white areas can be evaluated more precisely. The luminance value of each subpixel is linearly interpolated from the four neighboring pixels.

  • 3. Binarization: A reticle image includes a dark background and light reticle. The background luminance is always below average, and the reticle luminance is above average. In the present study, the mean value is set as the threshold of binarization.

  • 4. Reticle center definition: A reticle is composed of a vertical and a horizontal line. The center point is the intersection of the two lines.

  • 5. Denoising: A point whose distance from the trace center deviates by more than two standard deviations is treated as noise and discarded.
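The following is a compact Python sketch (an illustration of the equations above, not the authors' implementation) of the contrast maximization, binarization, and denoising steps for an 8-bit grayscale image; subpixel division and reticle center detection are omitted for brevity.

```python
import numpy as np

def maximize_contrast(img):
    """Stretch an 8-bit grayscale image to the full 0-255 range, Eqs. (7)-(8)."""
    l_min, l_max = float(img.min()), float(img.max())
    if l_max == l_min or (255 - l_max) + l_min == 0:
        return img.copy()                  # flat image or already full range
    # Eq. (7): middle value about which the stretch is performed
    l_mid = l_min + (l_max - l_min) * l_min / ((255 - l_max) + l_min)
    # Eq. (8): proportionally increase the luminance differences
    stretched = (img.astype(float) - l_mid) * 255.0 / (l_max - l_min) + l_mid
    return np.clip(stretched, 0, 255).astype(np.uint8)

def binarize(img):
    """Mean luminance as the threshold: dark background vs. light reticle."""
    return img > img.mean()

def denoise_trace(points):
    """Discard trace points whose distance from the trace center deviates by
    more than two standard deviations (one reading of the denoising rule)."""
    pts = np.asarray(points, dtype=float)
    dist = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return pts[np.abs(dist - dist.mean()) <= 2 * dist.std()]
```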

3.3.

Structure of CNN

The CNN structure for trace classification was built with reference to the Modified National Institute of Standards and Technology (MNIST) database and learning algorithms for classification.23 Each item in the MNIST dataset is a 28×28 pixel grayscale image of a handwritten digit from 0 to 9. In this study, the input images are instead reticle trace images of 640×480 pixels. Under various conditions, traces appear at different locations in the images, or the reticle trace can pass through half or more of an image; thus, an image cannot be compressed. The large size of the images and the small number of trace types require that four convolutional layers be used to achieve a classification accuracy of 96.18%.

The CNN in the present study contains an input layer, four convolutional layers, three max pooling layers, a fully connected layer, and a classification layer, as shown in Fig. 5.

Fig. 5

Structure of CNN for trace classification.

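The following is a hedged PyTorch sketch of a network matching this description (four convolutional layers, three max pooling layers, a fully connected layer, and a four-class output). The filter counts, kernel sizes, and placement of the pooling layers are assumptions; the paper specifies only the layer counts, input size, and number of classes.

```python
import torch
import torch.nn as nn

class TraceClassifier(nn.Module):
    """CNN sketch: 4 conv layers, 3 max pooling layers, 1 fully connected
    layer, and a 4-class output (point/circle/noncircle/irregular)."""

    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(4),    # 480x640 -> 120x160
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(4),   # -> 30x40
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 15x20
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),                   # 4th conv, no pooling
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 15 * 20, n_classes),   # fully connected -> class scores
        )

    def forward(self, x):   # x: (batch, 1, 480, 640) grayscale trace image
        return self.classifier(self.features(x))

model = TraceClassifier()
logits = model(torch.zeros(1, 1, 480, 640))   # smoke test: output shape (1, 4)
```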

The CNN was trained and tested on 2936 data points, 734 for each type of trace, with 80% used for training and 20% for testing. With this structure, a classification accuracy of 96.18% was achieved.
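A minimal training loop under the stated split might look as follows; the tensors below are small placeholders so that the snippet runs standalone, and the learning rate, batch size, and epoch count are assumptions for illustration.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

images = torch.zeros(40, 1, 480, 640)   # placeholder for the 2936 trace images
labels = torch.randint(0, 4, (40,))     # placeholder labels for the 4 trace types
dataset = TensorDataset(images, labels)
n_train = int(0.8 * len(dataset))       # 80/20 train/test split as in the paper
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

model = TraceClassifier()               # CNN sketch from above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
for epoch in range(5):                  # epoch count chosen arbitrarily
    for x, y in DataLoader(train_set, batch_size=8, shuffle=True):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
```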

Figure 6 shows the confusion matrix of the CNN trace classification results. The deep-colored blocks on the diagonal from the upper-left to the lower-right corner indicate correct predictions, whereas the light-colored off-diagonal blocks indicate misclassifications. The prediction accuracy on the testing data is 96.18%.

Fig. 6

Confusion matrix of CNN trace classification results.


The trace of the reticle image is classified by the CNN. According to the classification result, the corresponding process decision is then provided by the system. This method helps operators adjust process parameters immediately and accurately, and it greatly reduces the improper decisions that result from relying on operator experience alone.

4.

Trace Classification and Process Decisions

The optical axis of a lens was tested according to International Organization for Standardization standard 10110-6. Because the centering and edging processes are complex and the requirements are strict, the stability of the edging process has traditionally depended on experience. However, a short-focus lens is too small to withstand an unstable grinding process, which causes cracks; stabilizing the process purely through operator experience usually requires 20 or more process decisions.

When a lens rotates along its geometric central axis, the reticle image, which represents the optical axis, revolves around the geometric center. The trace of the reticle image is a perfect circle in theory. However, if the work axis does not coincide with the central axis of the clamp or if vibration occurs during rotation, the trace would become distorted and include noise points.

Figure 7 shows the common trace types for a reticle image. In the analysis of optical axis error, traces are classified as points, circles, noncircles, irregular traces, or other types of traces. Points are ideal because they indicate that the optical axis error of the target lens is small. Circles are common traces indicating no manufacturing error. Noncircles are caused by lens tilt or high circularity error. Irregular traces result from working vibration. The optical axis error values of both noncircles and irregular traces are meaningless and usually large because these traces indicate that the centering process has already introduced measurement error.

Fig. 7

Common reticle trace types. (a) Point. (b) Circle. (c) Noncircle. (d) Irregular trace.


Figure 8 shows the result of a process decision experiment. Process decisions were made in accordance with the on-line optical axis measurement and trace classification.

Fig. 8

Results of process decision experiment. (a) Optical axis error. (b) Circularity error. (c) Edge cracks. (d) Trace variance.


Centering and edging processes were executed for every lens. The on-line measurement system measured the optical axis error of the target lens immediately during both processes. In the centering process, the lens was placed so that the reticle image on the screen was located at the center point, and the optical axis error was then measured while the lens rotated. In the edging process, the on-line measurement not only measured the optical axis error but also recorded the trace of the reticle image. Because of the effect of wheel grinding, the trace in edging is not the same as that in centering for the same lens. A process decision was made by the system at the end of the edging process. Before the next lens was placed, the process parameters were adjusted manually according to the CNN trace classification and the process decision. Five lenses were tested after every process decision; a total of 10 process decisions and 50 lenses were tested.

For every processed lens, specifications such as the optical axis error, circularity error, edge cracks, and variance of trace distance were measured. The optical axis error was measured by the on-line measurement system. The circularity error was evaluated from six diameters measured with a micrometer at different angles. The edge cracks were measured with a scale loupe. The variance of trace distance was calculated from the reticle traces recorded by the on-line measurement device to evaluate manufacturing quality.

Thus, trace variance was used to evaluate the stability of the trace and was calculated by using the following formula during optical axis measurement:

Eq. (9)

$$V_t = \frac{\sum_{i=1}^{n}\left(d_i^2 - \mu_d^2\right)}{n \times \mu_d^2} \times 100\%,$$

Eq. (10)

$$d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2},$$
where $V_t$ is the trace variance, $d_i$ is the distance between the i-th trace point $(x_i, y_i)$ and the trace center $(x_c, y_c)$, $\mu_d$ is the mean value of $d_i$, and n is the number of points. The trace variance of a circle or point is nearly zero. Trace variance can be used to evaluate how much an irregular or noncircular trace deviates from a circular trace.
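The following is a direct transcription of Eqs. (9) and (10) into Python (a sketch, not the authors' code); the perfect-circle usage example confirms that such a trace yields a variance near zero.

```python
import numpy as np

def trace_variance(points, center):
    """Trace variance V_t in percent, Eqs. (9) and (10)."""
    pts = np.asarray(points, dtype=float)
    xc, yc = center
    d = np.hypot(pts[:, 0] - xc, pts[:, 1] - yc)         # Eq. (10)
    mu2 = d.mean() ** 2
    return np.sum(d**2 - mu2) / (len(d) * mu2) * 100.0   # Eq. (9)

# A perfect circle of trace points has a trace variance of ~0%:
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(trace_variance(circle, (0.0, 0.0)))   # ~0.0
```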

Figure 8(a) shows that the optical axis error generally decreased as the number of process decisions increased. Most traces in the beginning were noncircles or irregular traces; thus, the optical axis errors were large and meaningless. Figures 8(b) and 8(c) reveal that both the circularity error and the edge cracks, which depend on the working parameters, improved with more process decisions. In this experiment, noncircular traces were considered to be those with circularity error exceeding 0.6 mm, and irregular traces were considered to be those with edge cracks exceeding 0.3 mm. In Fig. 8(d), the initial trace variance ranged from 10% to 35%, indicating that the centering process was unstable and that noncircular or irregular traces were common. As process decisions were made, the magnitude and range of the trace variance decreased. The trace variance indicates lens position offset caused by grinding force differences, which can lead to edge cracks and circularity error.

In this experiment, processing was not stopped or changed on the basis of the trace or optical axis error alone. The process decisions were made to assist the operator in adjusting the working parameters of the centering machine. Based on the CNN trace classification, the algorithm suggested the parameters to adjust and the direction of adjustment; the operator then evaluated these parameters according to the degree of trace deformation. Figure 9 presents four cases with their corresponding optical axis error and trace classification from the experiment. In the first case, an irregular trace was observed. By increasing the wheel feed rate from 0.005 to 0.01 mm/s and changing the diameter of the grinding wheel from 160 to 140 mm, the wheel deflection and grinding vibration were reduced; consequently, the trace became a circle. In the second case, a noncircular trace was observed. Reducing the wheel feed rate from 0.015 to 0.01 mm/s and increasing the clamping force allowed the resistance from the clamps to prevent the wheel from pushing the lens. The third case was a normal case indicating stable grinding, but the optical axis error was high. The lens was moved slightly until the reticle image coincided with the trace center; thus, the optical axis was certain to coincide with the work axis, and the optical axis error was reduced.

Fig. 9

Different trace classification cases before and after process decisions.


On-line optical axis measurement supported by CNN machine learning is able to limit the optical axis error to <150 μrad, the range of cracks to <E0.1, and the circularity error to <0.1 mm. Process decisions according to trace classification help operators immediately identify the improper parameters that lead to optical axis error or unstable grinding quality. The experimental results show that the number of parameter adjustments can be reduced from more than 20 to fewer than 10. Therefore, the process decisions made in accordance with trace classification by the CNN in the present study can effectively improve short-focus lens manufacturing quality.

5.

Conclusion

The on-line optical axis measuring device in the present study was designed to measure the optical axis error during the centering process. The small size and short focal length of short-focus lenses create challenges in controlling grinding quality. By observing the trace types of reticle images, manufacturing error can be minimized. CNN machine learning was integrated into the on-line optical axis measuring device. Optimization of the optics position and magnification, together with data preprocessing, was implemented to improve measurement precision and trace classification.

The accuracy of trace classification by the CNN reached 95%. Within 10 process decisions, the optical axis error was controlled to <150 μrad, the range of cracks to <E0.1, and the circularity error to <0.1 mm. Thus, the present study realized an on-line centering process for high-precision short-focus lenses.

Acknowledgments

The authors are grateful for the support of the Research Project of the Ministry of Science and Technology, Taiwan (MOST 109-2622-E-007-028).

References

1. 

R. Minkowski, “Schmidt systems as spectrograph cameras,” J. Opt. Soc. Am., 34 (2), 89 –92 (1944). https://doi.org/10.1364/JOSA.34.000089 JOSAAH 0030-3941 Google Scholar

2. 

E. Schonbrun, W. N. Ye and K. B. Crozier, “Scanning microscopy using a short-focal-length Fresnel zone plate,” Opt. Lett., 34 (14), 2228 –2230 (2009). https://doi.org/10.1364/OL.34.002228 OPLEDP 0146-9592 Google Scholar

3. 

N. Fraval and J. Louis, “Low aberrations symmetrical adaptive modal liquid crystal lens with short focal lengths,” Appl. Opt., 49 (15), 2778 –2783 (2010). https://doi.org/10.1364/AO.49.002778 APOPAI 0003-6935 Google Scholar

4. 

F. C. López, C. B. Varela and C. Ruiz, “Spatiotemporal polarization pattern obtained by interference in a single cross-polarized wave-generation crystal,” J. Opt. Soc. Am. B, 33 (8), 1740 –1748 (2016). https://doi.org/10.1364/JOSAB.33.001740 Google Scholar

5. 

J. Yamada et al., “Simulation of concave-convex imaging mirror system for development of a compact and achromatic full-field x-ray microscope,” Appl. Opt., 56 (4), 967 –974 (2017). https://doi.org/10.1364/AO.56.000967 APOPAI 0003-6935 Google Scholar

6. 

Y. Liu et al., “50× five-group inner-focus zoom lens design with focus tunable lens using Gaussian brackets and lens modules,” Opt. Express, 28 (20), 29098 –29111 (2020). https://doi.org/10.1364/OE.404098 Google Scholar

7. 

Y. Li et al., “Radial-sharing interferometric imaging with Theon-Kepler bifocal telescope,” Appl. Opt., 59 (17), 5265 –5268 (2020). https://doi.org/10.1364/AO.392574 APOPAI 0003-6935 Google Scholar

8. 

I. A. Neil, “Optimization glitches in zoom lens design,” Proc. SPIE, 3129 158 –180 (1997). https://doi.org/10.1117/12.284237 PSISDG 0277-786X Google Scholar

9. 

C. Braig and P. Predehl, “Multiband imaging at the diffraction limit using Fresnel x-ray telescopes,” Opt. Eng., 51 (9), 096501 (2012). https://doi.org/10.1117/1.OE.51.9.096501 Google Scholar

10. 

G. H. Smith et al., “Optical designs for the Mars 03 rover cameras,” Proc. SPIE, 4441 118 (2001). https://doi.org/10.1117/12.449558 PSISDG 0277-786X Google Scholar

11. 

D. Slater, “Afocal viewport optics for underwater imaging,” Proc. SPIE, 9192 91920P (2014). https://doi.org/10.1117/12.2061445 PSISDG 0277-786X Google Scholar

12. 

F. T. Ghaemi, “Design and fabrication of lenses for the color science cameras aboard the Mars Science Laboratory rover,” Opt. Eng., 48 (10), 103002 (2009). https://doi.org/10.1117/1.3251343 Google Scholar

13. 

I. A. Neil, “Use of special glasses in visual objective lenses,” Proc. SPIE, 0766 69 (1987). https://doi.org/10.1117/12.940205 PSISDG 0277-786X Google Scholar

14. 

I. A. Neil, “High-performance wide-angle objective lens systems with internal close-focusing optics and multiple aspheric surfaces for the visible waveband,” Proc. SPIE, 2744 216 –242 (1996). https://doi.org/10.1117/12.246665 PSISDG 0277-786X Google Scholar

15. 

I. D. Marinescu et al., Tribology of Abrasive Machining Processes, William Andrew, Norwich, New York (2004). Google Scholar

16. 

K. C. Huang, C. L. Chang and W. H. Wu, “Novel image polarization method for measurement of lens decentration,” IEEE Trans. Instrum. Meas., 60 (5), 1845 –1853 (2011). https://doi.org/10.1109/TIM.2011.2108070 IEIMAO 0018-9456 Google Scholar

17. 

M. Beier et al., “Lens centering of aspheres for high-quality optics,” Adv. Opt. Technol., 1 (6), 441 –446 (2012). https://doi.org/10.1515/aot-2012-0052 1687-6393 Google Scholar

18. 

S. M. Latyev, D. M. Rumyantsev and P. A. Kuritsyn, “Design and process methods of centering lens systems,” J. Opt. Technol., 80 (3), 197 –200 (2013). https://doi.org/10.1364/JOT.80.000197 JOTEE4 1070-9762 Google Scholar

19. 

G. Gluhchev et al., “Automatic evaluation of lens decentration,” 49 –57 (2002). Google Scholar

20. 

S. Magarill and B. Welham, “Enhanced measurement of decentration in multielement lens assemblies,” Proc. SPIE, 1996 10 –18 (1993). https://doi.org/10.1117/12.160413 PSISDG 0277-786X Google Scholar

21. 

S. Kaew-aram and B. Sutapun, “Measurement of centering errors of glass molds and casted lenses for production of ophthalmic lenses,” in Front. Opt./Laser Sci., 4640 (2018). Google Scholar

22. 

R. E. Parks, “Lens centering using the point source microscope,” Proc. SPIE, 6676 667603 (2007). https://doi.org/10.1117/12.726837 PSISDG 0277-786X Google Scholar

23. 

Y. LeCun et al., “Learning algorithms for classification: a comparison on handwritten digit recognition,” 261 –276 (1995). Google Scholar

Biography

Shiau-Cheng Shiu received his master’s degree from National Tsing Hua University, Taiwan, in 2020. Since 2020, he has been a PhD student in the Department of Power Mechanical Engineering, National Tsing Hua University. His research interests include the centering process for high-precision glass optical lenses.

Ke-Er Tang received his BEng degree from the Department of Mechanical Design Engineering, National Formosa University, Taiwan, in 2020. He is currently pursuing a master’s degree in the Department of Power Mechanical Engineering, National Tsing Hua University, Taiwan. His major research interests include using machine learning to optimize the processing of high-precision optical parts.

Chun-Wei Liu received his PhD from National Tsing Hua University, Taiwan, in 2015. Since 2016, he has been an assistant professor in the Department of Power Mechanical Engineering, National Tsing Hua University. His research interests are in brittle material grinding/polishing process and ultra-precision machining.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Shiau-Cheng Shiu, Ke-Er Tang, and Chun-Wei Liu "On-line process decisions using convolutional neural network for centering high-precision short-focus lens," Optical Engineering 60(7), 075103 (8 July 2021). https://doi.org/10.1117/1.OE.60.7.075103
Received: 25 March 2021; Accepted: 23 June 2021; Published: 8 July 2021