In this study, we investigated how shapes are classified on the basis of local and global features by four representative convolutional neural networks (CNNs): AlexNet, VGG, ResNet, and Inception. Local features are based on simple components of an object, such as the orientation of a line segment, whereas global features are based on the object as a whole, such as whether it contains a hole. For example, solid triangles and solid squares are differentiated by local features, whereas solid circles and rings are differentiated by a global feature. Two sets of experiments were performed. In the first experiment, we examined how the four CNNs pre-trained on ImageNet (transfer learning) learned to differentiate regular shapes (equilateral triangles, squares, circles, and rings). The pre-trained CNNs exhibited faster learning in the tasks discriminating local features than in the tasks discriminating the global feature. However, transfer learning of the global-feature discrimination on regular shapes generalized better to irregular shapes than did transfer learning of the local-feature discriminations. In the second experiment, the CNNs were trained from scratch (with random weight initialization) to discriminate local and global features in regular and irregular shapes. Unlike in transfer learning, the CNNs learned to discriminate the global feature faster than the local features. As in transfer learning, the CNNs generalized well to discriminating the global feature of irregular shapes but poorly to discriminating their local features. The overarching goal of this research is to create a paradigm and benchmark for directly comparing how CNNs and primate visual systems process geometrical invariants. In contrast to the ImageNet approach, which employs natural images to train CNNs, we employed a “ShapeNet” approach that features geometrical shapes with well-defined properties. The ShapeNet approach will not only help elucidate the strengths and limitations of CNN computation, but also provide insights into visual information processing in primates.
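The transfer-learning setup described above can be illustrated with a minimal sketch. The following assumes a PyTorch/torchvision workflow; the frozen layers, two-way classifier head, and learning rate are illustrative assumptions rather than the configuration actually used in the study.

import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pre-trained on ImageNet (AlexNet shown; VGG, ResNet and
# Inception follow the same pattern with their own classifier heads).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor so only the new head is trained
# (assumed setup; the study's actual fine-tuning scheme may differ).
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a two-way classifier, e.g.
# "has a hole" vs. "no hole" (global feature) or triangle vs. square (local).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

Training from scratch, as in the second experiment, would instead instantiate the same architecture with random weights (weights=None) and optimize all parameters.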
In the human visual system, visible objects are recognized by features, which can be classified into local features based on an object's simple components (e.g., line segments, angles, color) and global features based on the object as a whole (e.g., connectivity, number of holes). Over the past half century, anatomical, physiological, behavioral, and computational studies of the visual system have led to a generally accepted model of vision, which begins with the processing of local features in the early stages of the visual pathways, followed by their integration into global features at later stages. However, this popular local-to-global model has been challenged by experiments showing that the visual systems of humans, non-human primates, and honey bees are more sensitive to global features than to local features. These “global-first” findings have motivated the development of new paradigms and approaches for understanding human vision and building new vision models. In this study, we began a new series of experiments examining how two representative pre-trained convolutional neural networks (CNNs), AlexNet and VGG-19, process local and global features. The CNNs were trained to classify geometric shapes into two categories based on local features (e.g., triangle, square, and circle) or a global feature (e.g., having a hole). In contrast to biological visual systems, the CNNs were more effective at classifying images based on local features than on the global feature. We further showed that adding distractors greatly lowered the performance of the CNNs, again unlike biological visual systems. Ongoing studies will extend these analyses to other geometrical invariants and to the internal representations of the CNNs. The overarching goal is to use these powerful CNNs as a tool to gain insights into biological visual systems, including those of humans and non-human primates.
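As an illustration of the kind of stimuli described above, the sketch below draws solid shapes and rings with Pillow. The image size, shape positions, and function name are hypothetical and do not reproduce the stimulus-generation code actually used in the study.

from PIL import Image, ImageDraw

def make_shape(kind, size=224):
    """Draw a single white shape on a black background (illustrative only)."""
    img = Image.new("L", (size, size), 0)
    draw = ImageDraw.Draw(img)
    box = (size // 4, size // 4, 3 * size // 4, 3 * size // 4)
    if kind == "circle":      # solid disk: no hole
        draw.ellipse(box, fill=255)
    elif kind == "ring":      # annulus: similar local contours, but has a hole
        draw.ellipse(box, fill=255)
        inner = (3 * size // 8, 3 * size // 8, 5 * size // 8, 5 * size // 8)
        draw.ellipse(inner, fill=0)
    elif kind == "square":    # differs from a triangle in local features only
        draw.rectangle(box, fill=255)
    elif kind == "triangle":
        draw.polygon([(size // 2, size // 4),
                      (size // 4, 3 * size // 4),
                      (3 * size // 4, 3 * size // 4)], fill=255)
    return img

# Circle vs. ring differ in the global "hole" feature; triangle vs. square
# differ in local features such as edge orientation and corner angle.
make_shape("ring").save("ring.png")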