Paper
Three-dimensional monocular pose measurement using computational neural networks (1 March 1992)
Abstract
Experimental measurement of the position and attitude (pose) of a rigid target using machine vision is of particular importance to autonomous robotic manipulation. Traditionally, the monocular four-point pose problem has been used, which encompasses three distinct subproblems: inverse perspective, calibration of internal camera parameters, and knowledge of the pose of the camera (external camera parameters). To this end, a new unified concept for monocular pose measurement using computational neural networks has been developed that obviates the need to estimate camera parameters and provides rapid solution of inverse perspective with compensation for nonhomogeneous lens distortion. Input neurons are the (x, y) image coordinates of target landmarks; output neurons are the (X, Y, Z, roll, pitch, yaw) position and attitude of the target relative to an external reference frame. Modified back-propagation was used to train the neural network with both synthetic and experimental training sets for comparison to current four-point pose methods. Recommendations are provided for the number of neural layers, the number of neurons per layer, and richness versus breadth of pose training sets.
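As a rough illustration of the network structure the abstract describes, the sketch below maps eight inputs (the (x, y) image coordinates of four target landmarks) to six outputs (X, Y, Z, roll, pitch, yaw) and trains with plain back-propagation on synthetic data. This is not the authors' code: the layer sizes, learning rate, activation functions, and the stand-in data generator are illustrative assumptions, and ordinary gradient descent stands in for the paper's modified back-propagation.

import numpy as np

rng = np.random.default_rng(0)

# Assumed architecture: 8 inputs -> 16 tanh hidden units -> 6 linear outputs.
W1 = rng.normal(0, 0.1, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 6)); b2 = np.zeros(6)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden-layer activations
    return h, h @ W2 + b2           # linear pose estimate

# Stand-in synthetic training set: random poses with a made-up landmark "projection".
poses = rng.uniform(-1, 1, (500, 6))                  # (X, Y, Z, roll, pitch, yaw), normalized
images = np.tanh(poses @ rng.normal(0, 1, (6, 8)))    # placeholder for four (x, y) landmark pairs

lr = 0.05
for epoch in range(2000):
    h, y_hat = forward(images)
    err = y_hat - poses                               # output-layer error
    # Back-propagate the error to both weight layers.
    gW2 = h.T @ err / len(poses)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)                    # tanh derivative
    gW1 = images.T @ dh / len(poses)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final training MSE:", np.mean(err**2))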
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
H. Joe Sommer III and Radha Krishnan "Three-dimensional monocular pose measurement using computational neural networks", Proc. SPIE 1608, Intelligent Robots and Computer Vision X: Neural, Biological, and 3-D Methods, (1 March 1992); https://doi.org/10.1117/12.135112
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Cameras, Neural networks, Neurons, Calibration, Machine vision, Distortion, Robot vision