This work classifies color images of ships captured by cameras mounted on ships and in harbors. Our datasets contain 9 different types of ship, each seen from 18 different perspectives, across the training, development, and testing sets. The training dataset contains rendered synthetic images; the development and testing datasets contain real images. The database of real images was gathered from the internet, and the 3D models for the synthetic images were imported from Google 3D Warehouse. A key goal of this work is to use synthetic images to increase overall classification accuracy. We present a novel approach for autonomous segmentation and feature extraction for this problem, and a support vector machine is used for multi-class classification. We report three experimental results for the multi-class ship classification problem. The first experiment trains on the synthetic image dataset and tests on a real image dataset, obtaining 87.8% accuracy. The second experiment trains on a real image dataset and tests on a separate real image dataset, also obtaining 87.8% accuracy. The last experiment trains on the combined real and synthetic image datasets and tests on a separate real image dataset, obtaining 93.3% accuracy.
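The abstract above names a support vector machine as the multi-class classifier. As a minimal, self-contained sketch of the one-vs-rest decomposition commonly used to turn binary classifiers into a multi-class one, the following stands in a simple perceptron for the binary SVM learner; the 2-D feature vectors and class names (`tanker`, `ferry`, `tug`) are invented toy data, not drawn from the paper.

```python
def train_binary(samples, labels, epochs=50, lr=0.1):
    """Train a linear classifier (perceptron) for one class vs. the rest.

    Labels must be +1 (the target class) or -1 (everything else).
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: move the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def train_one_vs_rest(samples, labels):
    """One binary classifier per class; prediction takes the highest score."""
    models = {}
    for cls in set(labels):
        binary = [1 if y == cls else -1 for y in labels]
        models[cls] = train_binary(samples, binary)
    return models

def predict(models, x):
    def score(wb):
        w, b = wb
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(models, key=lambda cls: score(models[cls]))

# Toy data: three well-separated "ship classes" in a 2-D feature space.
data = [(0.0, 0.1), (0.1, 0.0), (1.0, 1.1), (1.1, 1.0), (0.0, 1.0), (0.1, 1.1)]
labels = ["tanker", "tanker", "ferry", "ferry", "tug", "tug"]
models = train_one_vs_rest(data, labels)
print(predict(models, (0.05, 0.05)))  # a point near the "tanker" cluster
```

A margin-maximizing SVM would place the boundaries more robustly than the perceptron used here, but the one-vs-rest structure around it is the same.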
A key to solving the multiclass object recognition problem is to extract a set of features which accurately and
uniquely capture the salient characteristics of different objects. In this work we modify a hierarchical model of
the visual cortex that is based on the HMAX model. The first layer of the HMAX model convolves the image
with a set of multi-scale, multi-oriented and localized filters, which in our case are learned from thousands of
image patches randomly extracted from natural stimuli. These filters emerge as a result of optimization based
in part on approximate-L1-norm sparseness maximization. A key difference between these filters and standard
Gabor filters used in the HMAX model is that these filters are adapted to natural stimuli, and hence are more
biologically plausible. Based on the modified model we extract a flexible set of features which are largely scale,
translation and rotation invariant. This model is applied to extract features from Caltech-5 and Caltech-101
datasets, which are then fed to a support vector machine classifier for the object recognition task. The overall
performance successfully demonstrates the plausibility of using filters learned from natural stimuli for feature
extraction in object recognition problems.
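The first stages described above (convolution of the image with a bank of oriented filters, then a local maximum over responses to gain translation invariance) can be sketched as follows. This is an illustrative assumption-laden toy, not the paper's pipeline: the 3x3 edge filters are hand-written placeholders standing in for filters learned from natural stimuli, and the 6x6 "image" is synthetic.

```python
def convolve2d(image, kernel):
    """'Valid' 2-D convolution of a grayscale image with a square kernel."""
    k = len(kernel)
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(k) for b in range(k)))
        out.append(row)
    return out

def max_pool(response, size=2):
    """Local max over non-overlapping size x size windows (pooling stage)."""
    h, w = len(response), len(response[0])
    return [[max(response[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, w - size + 1, size)]
            for i in range(0, h - size + 1, size)]

# Placeholder oriented "filters": horizontal- and vertical-edge detectors.
filters = [
    [[-1, -1, -1], [0, 0, 0], [1, 1, 1]],  # responds to horizontal edges
    [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]],  # responds to vertical edges
]
# Synthetic image: dark top half, bright bottom half (a horizontal edge).
image = [[1 if r >= 3 else 0 for _ in range(6)] for r in range(6)]
feature_maps = [max_pool(convolve2d(image, f)) for f in filters]
```

On this image the horizontal-edge map responds strongly along the brightness boundary while the vertical-edge map stays flat at zero, which is exactly the orientation selectivity the filter bank is meant to provide; the pooled maps change little if the edge is shifted by a pixel, illustrating the translation invariance.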
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Conference Committee Involvement (2)
Image Processing: Machine Vision Applications VIII
10 February 2015 | San Francisco, California, United States
Image Processing: Machine Vision Applications VII
3 February 2014 | San Francisco, California, United States