OE Letters

Optimal layout of fringe projection for three-dimensional measurements

Victor S. Cheng, Rongqian Yang, Chun Hui, Yazhu Chen

Shanghai Jiao Tong University, Biomedical Instrument Institute, Shanghai 200240, China

Opt. Eng. 47(5), 050503 (May 22, 2008). doi:10.1117/1.2931577
History: Received February 08, 2008; Revised March 19, 2008; Accepted March 25, 2008; Published May 22, 2008

Open Access

We optimize the layout of each light plane in a dual-view multistripe measurement system by employing spatial geometry analysis to avoid ambiguity. The imaging region of every light plane can be labeled uniquely on the two image planes within a certain measurement depth. Moreover, fringe patterns whose density is flexibly matched to different measurement depths can be projected immediately, without the conventional coding procedure. Experiments verify the effectiveness of the proposed method in high-resolution 3-D measurements.


3-D optical measurements based on the structured light method have been applied in many fields over the past twenty years. Meanwhile, a variety of fringe patterns with advantages in imaging efficiency have been investigated.1–4 Although the spatial resolution and scanning speed can be improved by increasing the density of projective lines, an ambiguity problem arises in stripe identification on complex surfaces.2 Some coding methods1,3,4 have been developed to distinguish two adjacent stripe tracks. Although Dipanda and Woo5 provide an efficient correspondence method for 3-D shape reconstruction based on a grid of 361 spots, the illumination pattern in their system is invariable and custom-made. Moreover, little attention has been paid to optimizing the spacing between adjacent light planes in 3-D measurements based on fringe projection.

This work employs spatial geometry to overcome the ambiguity problem in fringe-pattern projection. We compute the corresponding space between the adjacent light planes without identification difficulty within a certain measurement depth. We combine our method and the line-shifting idea.4 As a result, the flexible density of parallel-shifting lines is immediately projected into practical high-resolution measurements without the conventional coding procedure.

Our system consists of an LCD projector and two CCD cameras. Figure 1 shows two adjacent light planes l_m and l_{m+1} emitted from the point A, the lens center of the projector. The initially defined depth D extends from the virtual farthest position DF to the virtual nearest position DN. The point df is the intersection of l_m and DF in the OXZ plane, and the point dn is the intersection of l_{m+1} and DN in the OXZ plane. If df and dn are linked by a reflected line kn that passes exactly through the lens center O of the left camera, then df and dn cannot be distinguished. However, kn rotates to kn′ when the front measurement position moves from DN to DN′ through a distance δ. Note that the stripes on a target surface illuminated by l_m and l_{m+1} can then be distinguished within the bounds DF and DN′.

Fig. 1: Geometry of the unambiguous condition.

We can obtain the depth Z by means of the following conversion formula:

$$Z = \frac{L}{\tan\left[\alpha - \tan^{-1}(p/f)\right] - \tan\theta}, \tag{1}$$

where L is the X coordinate of A, p is the position of the imaged spot on the image plane, α is the deflection angle of the optical axis of the camera, θ is the deflection angle of l_m, and f is the focal length of the camera. Thus, the Z-axis resolution ΔZ due to the pixel size Δp can be obtained by taking the partial derivative of Eq. (1) with respect to p:

$$\Delta Z = \frac{L f \sec^{2}\left[\alpha - \tan^{-1}(p/f)\right]}{\left\{\tan\left[\alpha - \tan^{-1}(p/f)\right] - \tan\theta\right\}^{2} (p^{2} + f^{2})}\, \Delta p. \tag{2}$$
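As a numerical illustration of Eqs. (1) and (2), the sketch below evaluates the depth and its resolution; the baseline, angles, and focal length are assumed values for the sketch, not the calibration of the system described here.

```python
import math

def depth(p, L=360.0, alpha=math.radians(25.0), theta=math.radians(5.0), f=8.0):
    """Depth from Eq. (1): Z = L / (tan[alpha - atan(p/f)] - tan(theta)).
    All lengths in mm; p is the image-plane coordinate of the stripe."""
    return L / (math.tan(alpha - math.atan(p / f)) - math.tan(theta))

def depth_resolution(p, dp=0.005, L=360.0, alpha=math.radians(25.0),
                     theta=math.radians(5.0), f=8.0):
    """Z-axis resolution from Eq. (2) for a pixel size dp."""
    u = alpha - math.atan(p / f)
    denom = (math.tan(u) - math.tan(theta)) ** 2 * (p ** 2 + f ** 2)
    return L * f * (1.0 / math.cos(u)) ** 2 / denom * dp
```

With dp set to 1, `depth_resolution` returns the derivative dZ/dp itself, which can be checked against a finite difference of `depth`.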

Given the width of the stripes and the configuration of the two cameras, the condition for unambiguity involving the value of δ can be expressed as follows:

$$\delta > (w + m)\, \Delta Z_{DF}, \tag{3}$$

where w is the maximum pixel number occupied by df on the two image planes, m is the expected minimum interval in pixels between two adjacent stripes on the image planes, and ΔZ_{DF} is the worst Z-axis resolution at df, computed by Eq. (2). Thus, the maximum measurement depth of the object is required to be smaller than D − δ.
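For concreteness, the bound of Eq. (3) might be evaluated as follows; the values of w, m, the worst resolution, and D are hypothetical numbers, not the system's calibration.

```python
w = 5        # assumed: maximum pixel width of a stripe at DF
m = 3        # assumed: minimum pixel gap between adjacent stripes
dz_df = 0.8  # assumed: worst Z-axis resolution at DF from Eq. (2), in mm

delta = (w + m) * dz_df        # Eq. (3): delta must exceed this bound
D = 120.0                      # initially defined depth, in mm
max_object_depth = D - delta   # the object must be shallower than D - delta
```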

Figure 2 shows that there are 2n + 1 light planes emitted from A in our system. Here l_m is adjusted to the position of the central light plane, which is parallel to the OYZ plane. The optimal width between l_{m+i−1} and l_{m+i} on DF is denoted by L_{m+i} (−n ≤ i ≤ n). Under the condition of unambiguous corresponding stripes on the image planes, the consecutive optimal widths L_{m+i} (1 ≤ i ≤ n) on the right side of l_m can be derived as

$$L_{m+i} = \frac{L D\, (H + H' - D)^{i-1}\, H'^{\,i-1}}{(H' - D)^{i}\, (H + H')^{i-1}}, \tag{4}$$

where H is the Z coordinate of A, and H′ is the distance between DF and the OXY plane. We can obtain a similar rule for the consecutive widths L_{m−i} (1 ≤ i ≤ n) on the left side of l_m:

$$L_{m-i} = \frac{R D\, (H + H' - D)^{i-1}\, (H + H' - H'')^{i-1}}{(H + H' - H'' - D)^{i}\, (H + H')^{i-1}}, \tag{5}$$

where R is the X displacement between A and O′, the lens center of the right camera, and H″ is the Z displacement between A and O′.
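Equations (4) and (5) lend themselves to direct computation. The sketch below uses the calibration values reported later in this letter for L, R, H, and H″, with an assumed depth D; in the symmetric case (H = H″ = 0, R = L) both expressions collapse to a width independent of i, which is the statement of Eq. (6).

```python
def width_right(i, L=358.92, D=120.0, H=2.17, Hp=1800.0):
    """Eq. (4): optimal width L_{m+i} on DF to the right of l_m.
    Hp stands for H' (distance from DF to the OXY plane); D is assumed."""
    return (L * D * (H + Hp - D) ** (i - 1) * Hp ** (i - 1)
            / ((Hp - D) ** i * (H + Hp) ** (i - 1)))

def width_left(i, R=360.21, D=120.0, H=2.17, Hp=1800.0, Hpp=1.32):
    """Eq. (5): optimal width L_{m-i} on DF to the left of l_m.
    Hpp stands for H'' (Z displacement between A and O')."""
    a = H + Hp
    return (R * D * (a - D) ** (i - 1) * (a - Hpp) ** (i - 1)
            / ((a - Hpp - D) ** i * a ** (i - 1)))
```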

Fig. 2: Optimal layout and corresponding region procedure based on the dual-view method.

Note that the consecutive widths L_{m+i} (−n ≤ i ≤ n) on DF on the two sides of l_m are all equal when the two cameras are designed in a symmetrical layout and A is located on the X axis. Namely,

$$L_{m+i} = \frac{L D}{H' - D}. \tag{6}$$

Within the depth D, the variable range of the projective stripe corresponding to l_{m+i} (−n ≤ i ≤ n) is from l_i to l_{i+1} on the left image plane, but from r_{i−1} to r_i on the right image plane. The initial and end positions on the image planes corresponding to l_{m+i} (−n ≤ i ≤ n) can be obtained when the two cameras take pictures of a planar board at DF, illuminated by a fringe projection based on the proposed method.

The value of δ is nearly constant in practical applications. The value of D varies with the maximum measurement depths of different objects, so a set of flexible widths L_{m+i} (−n ≤ i ≤ n) must be calculated within the performance range of the projector. We therefore design different patterns of parallel-shifting lines with flexible widths, aimed at different specifications of scanned objects. The labeled stripes under the first projection are linked into a series of lines; the end points of these lines are then detected and linked in consecutive order, and the two image planes are divided into different corresponding divisions. Each light stripe under the following projections falls in its corresponding division.
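In the symmetric configuration, the flexible width per measurement depth follows directly from Eq. (6). The sketch below tabulates the spacing for a few assumed depths, using the same assumed calibration values as before:

```python
def symmetric_width(D, L=358.92, Hp=1800.0):
    """Eq. (6): stripe spacing on DF for measurement depth D in the
    symmetric layout. L and Hp (i.e., H') are assumed values, in mm."""
    return L * D / (Hp - D)

# Deeper objects require wider spacing between adjacent light planes,
# so one pattern is pre-designed per depth class of scanned object.
widths = {D: symmetric_width(D) for D in (60.0, 90.0, 120.0)}
```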

The experimental procedure consisted of a calibration stage and a 3-D measurement stage. In the calibration stage, each parameter of the different sensors, including every light plane, was calibrated. In our system, H′ was designed at the position 1800 mm, and the calibrated values of L, R, H, and H″ were 358.92, 360.21, 2.17, and 1.32 mm, respectively. The configuration parameters of our projector, such as the focal length, the pixel size, and the resolution of the LCD screen, were known in advance. We chose Eq. (6) as the scheme of multiple light planes in our system. Moreover, the different multistripe projection patterns corresponding to different measurement depths, as well as the line-shifting method,4 were designed and labeled for accurate 3-D reconstruction.

The effect of the proposed method was observed at the 3-D measurement stage. A picture of an aluminum workpiece, which consisted of a wedge part and a sidestep part, was taken by the left camera, as shown in Fig. 3(a). The wedge part was 80 × 80 mm² at the top and 90 × 90 mm² at the base, with a 30-mm depth. The sidestep part was 125 × 125 mm² with a 15-mm depth. The interval width between two adjacent light planes on DF was 13.5 mm. Ten projections were made, each with an illumination time of 80 ms. The 3-D shape reconstruction of the workpiece is shown in Fig. 3(b). In the next experiment, the first author's face, as measured, was roughly 150 mm in width, 200 mm in length, and 65 mm in depth. Figures 3(c) and 3(d) are the pictures taken by the left and right cameras in the video capture mode at 25 frames/s. The total scanning time was 960 ms with twelve projections, and the interval width between two adjacent light planes on DF was 17 mm. Figure 3(e) shows the rendered surface of the merged 3-D result from the two cameras, obtained using the Visualization Toolkit (VTK). The respective measurement errors along the X, Y, and Z axes were approximately ±0.4, ±0.4, and ±0.6 mm with the current system calibration. The maximum relative errors between the real and measured values of the workpiece using the proposed method and the previous method4 were 0.68% and 0.87%, respectively. Since the stripes could not overlap on the image planes, the image processing in our experiments was simple. The 3-D results of the first projection were very robust owing to the reliable image divisions, and the proposed method was insensitive to albedo and depth variation.

Fig. 3: (a) Workpiece with the projection of the fringe pattern in the left image plane. (b) 3-D shape reconstruction of the scanned workpiece. (c) Face with the projection of the fringe pattern on the left image plane. (d) Face with the projection of the fringe pattern on the right image plane. (e) Rendered surface of the merged 3-D facial data.

The layout of the projective light planes in the dual-view multistripe measurement system is optimized in this letter. The appropriate spacing between adjacent light planes has been calculated from the known configuration parameters and the restricted measurement depth. Accordingly, the light planes have been assigned to corresponding regions of the image planes. Experiments on high-resolution 3-D shape reconstruction of real objects show the effectiveness of the proposed method. This technique promises to be a valuable tool for real-time or all-field 3-D shape reconstruction.

We wish to acknowledge the support of the National Natural Science Foundation of China (30470488).

References

1. F. Blais, "Review of 20 years of range sensor development," J. Electron. Imaging 13(1), 231–243 (2004).
2. M. Chang, W. C. Chang, and K. H. Lin, "High speed three-dimensional profilometry utilizing laser diode arrays," Opt. Eng. 42(12), 3595–3599 (2003).
3. J. Batlle, E. Mouaddib, and J. Salvi, "Recent progress in coded structured light as a technique to solve the correspondence problem: a survey," Pattern Recogn. 31(7), 963–982 (1998).
4. J. Gühring, "Dense 3-D surface acquisition by structured light using off-the-shelf components," Proc. SPIE 4309, 220–231 (2001).
5. A. Dipanda and S. Woo, "Efficient correspondence problem-solving in 3-D shape reconstruction using a structured light system," Opt. Eng. 44(9), 093602 (2005).
© 2008 Society of Photo-Optical Instrumentation Engineers
