In fringe projection profilometry (FPP), non-sinusoidal fringes caused by the gamma effect of the projector lead to measurement errors. To address this problem, the binary defocusing technique has been introduced in recent years, yet the appropriate degree of defocusing is hard to evaluate quantitatively. An innovative approach to quantify binary defocusing in real time is proposed. No matter how the projector and camera are arranged in the FPP system, the fringe period in a captured defocused fringe image always varies across the image; therefore, unlike previous methods, the proposed method uses a limited window for defocusing evaluation. This not only improves the accuracy of defocusing evaluation but also greatly increases the evaluation speed, which in turn enables real-time evaluation. In addition, numerical differentiation is applied to the window using a modified five-interval-point algorithm to effectively improve the sensitivity to improper (inadequate or excessive) defocusing. The difference between the numerical derivative and its fundamental harmonic, extracted by Levenberg–Marquardt iteration, is taken as the evaluation value of binary defocusing. The smaller the difference, the more appropriate the defocusing. By minimizing this difference, the most appropriate defocusing can be obtained quantitatively. Both numerical simulations and experiments validate the high sensitivity and high speed of the proposed method.
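As an illustration of the evaluation procedure, the following Python sketch applies a generic symmetric five-point derivative stencil to one row of the windowed fringe image (a stand-in for the modified five-interval-point algorithm, whose exact form is not given in the abstract), fits the fundamental harmonic of the derivative by Levenberg–Marquardt via SciPy's curve_fit, and returns the residual as the evaluation value. The function name, single-row window handling, and the assumption of a known fringe period in pixels are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed names and parameters), not the paper's exact code.
import numpy as np
from scipy.optimize import curve_fit


def defocus_evaluation(row, period_px):
    """Evaluation value for one row of the windowed fringe image:
    RMS difference between the numerical derivative and its fitted
    fundamental harmonic (smaller = more appropriate defocusing)."""
    row = np.asarray(row, dtype=float)
    x = np.arange(row.size, dtype=float)

    # Symmetric five-point derivative stencil (pixel spacing h = 1),
    # a generic stand-in for the modified five-interval-point algorithm.
    d = (-row[4:] + 8.0 * row[3:-1] - 8.0 * row[1:-3] + row[:-4]) / 12.0
    xd = x[2:-2]

    # Fundamental harmonic of the derivative at the known fringe frequency,
    # fitted by Levenberg-Marquardt (curve_fit with method='lm').
    w0 = 2.0 * np.pi / period_px

    def fundamental(xx, a, phi, c):
        return a * np.sin(w0 * xx + phi) + c

    p0 = (np.ptp(d) / 2.0, 0.0, float(d.mean()))
    popt, _ = curve_fit(fundamental, xd, d, p0=p0, method='lm')

    # Residual between the derivative and its fundamental harmonic.
    return float(np.sqrt(np.mean((d - fundamental(xd, *popt)) ** 2)))
```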
Camera/projector defocusing is a method of generating sinusoidal images from binary images in fringe projection profilometry, but the most appropriate degree of defocus is often difficult to determine. Therefore, a defocusing-degree evaluation algorithm is proposed in this paper. The algorithm calculates the gray-value error E between a fitted sinusoidal image and the actual image based on image differentiation and the Levenberg-Marquardt (LM) iteration method. First, a differential operation is performed on the image captured by the camera. Then, the LM iteration method is applied to the difference image. The phase fluctuation error E_M is quantitatively described as E_M = E·S^α, where S is the image gray range and α is a constant that can be determined experimentally. For a system with no defocusing, the measured phase fluctuation error is about ±0.05 rad; this can be reduced to ±0.01 rad under the optimal defocus degree obtained by this method. The algorithm can process one image within 20 ms, which meets the requirement of real-time calculation.
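A minimal sketch of the reported error model follows, assuming E is taken as the RMS gray-value error between the LM-fitted sinusoid and the differentiated image and S as its gray range; the value (and sign) of the experimentally determined constant α is not specified in the abstract, so the default below is only a placeholder.

```python
# Hedged sketch of E_M = E * S**alpha; alpha=1.0 is a placeholder, since the
# abstract only states that alpha is determined experimentally.
import numpy as np


def phase_fluctuation_estimate(diff_image, fitted_image, alpha=1.0):
    """E_M = E * S**alpha: E is the RMS gray-value error between the
    LM-fitted sinusoid and the differentiated image, S its gray range."""
    diff_image = np.asarray(diff_image, dtype=float)
    fitted_image = np.asarray(fitted_image, dtype=float)
    E = np.sqrt(np.mean((diff_image - fitted_image) ** 2))  # gray-value error
    S = np.ptp(diff_image)                                  # image gray range
    return E * S ** alpha                                   # phase fluctuation error E_M
```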