Paper
Neural network mapping of image-to-object coordinates for 3D shape reconstruction
29 October 1996
Abstract
A neural network approach that automatically maps measured 2D image coordinates to 3D object coordinates for shape reconstruction is described. The appropriately trained radial-basis function network eliminates the need for rigorous calibration procedures. The training and test data are obtained by capturing successive images of the intersection points between a projected light line and horizontal strips on a calibration bar. Once trained, the neural network determines the 3D object-space coordinates that correspond to an illuminated pixel in the image plane. In addition, the generalization capability of the neural network enables intermediate points to be interpolated. An experimental study is presented to demonstrate the effectiveness of this approach to 3D measurement and reconstruction.
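As a rough illustration of the mapping described above, the sketch below fits a Gaussian radial-basis function network from 2D image coordinates (u, v) to 3D object coordinates (x, y, z) by linear least squares. It is not the authors' implementation: the centre placement, width parameter, and synthetic stand-in for the calibration-bar data are assumptions for demonstration only.

```python
# Minimal Gaussian RBF mapping sketch: image-plane (u, v) -> object-space (x, y, z).
# Synthetic data, centre placement, and width are illustrative assumptions.
import numpy as np

def rbf_design_matrix(inputs, centres, width):
    """Gaussian RBF activations of each input point against each centre."""
    d2 = np.sum((inputs[:, None, :] - centres[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(image_xy, object_xyz, centres, width):
    """Solve for the output-layer weights by linear least squares."""
    phi = rbf_design_matrix(image_xy, centres, width)
    weights, *_ = np.linalg.lstsq(phi, object_xyz, rcond=None)
    return weights

def predict(image_xy, centres, width, weights):
    """Map illuminated pixel coordinates to 3D object coordinates."""
    return rbf_design_matrix(image_xy, centres, width) @ weights

# Synthetic stand-in for calibration training data: image-plane points of the
# projected light line paired with their known 3D positions.
rng = np.random.default_rng(0)
image_xy = rng.uniform(0, 512, size=(200, 2))              # pixel coords (u, v)
object_xyz = np.column_stack([image_xy * 0.1,              # assumed mapping, for demo only
                              0.05 * image_xy.sum(axis=1)])

centres = image_xy[::10]        # a subset of the training points used as RBF centres
width = 60.0                    # Gaussian width in pixels (assumed)
weights = train_rbf(image_xy, object_xyz, centres, width)

# Interpolate the 3D position of an intermediate (unseen) pixel.
print(predict(np.array([[123.4, 250.7]]), centres, width, weights))
```

Fitting only the output-layer weights keeps training a linear problem once the centres and width are chosen, which is one common way RBF networks are used for coordinate-mapping tasks of this kind.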
© 1996 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
George K. Knopf and Jonathan Kofman, "Neural network mapping of image-to-object coordinates for 3D shape reconstruction", Proc. SPIE 2904, Intelligent Robots and Computer Vision XV: Algorithms, Techniques, Active Vision, and Materials Handling (29 October 1996); https://doi.org/10.1117/12.256268
CITATIONS
Cited by 3 scholarly publications and 1 patent.
KEYWORDS
Neurons, 3D image processing, Neural networks, Calibration, 3D image reconstruction, 3D metrology, CCD cameras