SPIE Journal Paper | 1 January 2008
KEYWORDS: Object recognition, Detection and tracking algorithms, Cameras, Optical engineering, Distance measurement, Zoom lenses, Optical pattern recognition, Artificial intelligence, Data processing
Object recognition can be formulated as matching image features to model features. When recognition is patch-based, the feature correspondence is ideally one to one. In practice, however, noise, repetitive structures, and background clutter cause features to match one to many rather than one to one. Using a multiscale feature point technique, a new object recognition algorithm is presented that identifies the center, scale, and orientation of objects in images. The approach recognizes objects in the presence of translation, scale variation, rotation, partial occlusion, and viewpoint change. It does not require one-to-one feature matches, and it preserves the structural information of the object. This is accomplished by having each matched point vote on the object's center, scale factor, and orientation. Experimental results demonstrate that the method works well under translation, rotation, scale changes, and partial occlusion, but gives less accurate results when the viewpoint is altered.
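The per-match voting on center, scale, and orientation described above can be sketched as a Hough-style accumulator. The feature representation, the offset fields `cx`/`cy`, and the bin widths below are illustrative assumptions, not the paper's actual data structures:

```python
import math
from collections import defaultdict

def vote_for_pose(matches, center_bin=10.0, scale_bin=0.5, angle_bin=30.0):
    """Accumulate Hough-style votes for object center, scale, and orientation.

    Each match is a pair (model_feat, image_feat); every feature is a dict
    with keys 'x', 'y', 'scale', 'angle' (degrees). Model features also
    carry 'cx', 'cy': the offset from the feature to the model's center.
    Names and bin widths are assumptions made for this sketch.
    """
    accumulator = defaultdict(int)
    for m, im in matches:
        # Relative scale and rotation implied by this single match.
        s = im['scale'] / m['scale']
        dtheta = (im['angle'] - m['angle']) % 360.0
        rad = math.radians(dtheta)
        # Rotate and scale the model-center offset, then predict
        # where this match says the object center lies in the image.
        ox = s * (m['cx'] * math.cos(rad) - m['cy'] * math.sin(rad))
        oy = s * (m['cx'] * math.sin(rad) + m['cy'] * math.cos(rad))
        cx, cy = im['x'] + ox, im['y'] + oy
        # Coarse binning: a one-to-many match simply casts several votes,
        # and only geometrically consistent ones pile up in the same bin.
        key = (round(cx / center_bin), round(cy / center_bin),
               round(math.log2(s) / scale_bin), round(dtheta / angle_bin))
        accumulator[key] += 1
    return max(accumulator, key=accumulator.get), accumulator
```

Because spurious one-to-many matches scatter their votes across different pose bins, the bin with the most votes recovers the object's pose without requiring the correspondence to be one to one.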