Conventional deep-learning object detection methods demand large amounts of annotated training data, incurring considerable time and labor costs. Few-shot object detection, by contrast, requires only limited data for novel categories and has therefore become a prominent research focus. This study proposes the Attention Contrastive Network (ACNet) to address few-shot object detection. ACNet adopts an attention-based architecture that extracts attention keys and values from the image features of both the support and query sets; it compares key attention across the two sets and uses the resulting attention weights to reweight query-set features, strengthening local features. Multi-scale pooling layers further improve the network's ability to detect objects at varying scales. An attract-repel mechanism introduced into the loss function enlarges inter-class differences, thereby improving classification accuracy. ACNet's effectiveness is validated experimentally on the PASCAL VOC and COCO datasets, where it achieves strong results on few-shot detection tasks.
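The attract-repel mechanism mentioned above can be illustrated with a minimal sketch. The paper's exact loss formulation is not given in the abstract, so the following assumes a standard contrastive margin loss: feature pairs from the same class are pulled together (attract) while pairs from different classes are pushed apart beyond a margin (repel). The function name, margin value, and pairwise formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attract_repel_loss(features, labels, margin=1.0):
    """Hypothetical attract-repel contrastive loss (illustrative only).

    Same-class pairs contribute their squared distance (attract term);
    different-class pairs contribute a squared hinge on the margin
    (repel term), so classes are pushed at least `margin` apart.
    """
    n = len(features)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])
            if labels[i] == labels[j]:
                total += d ** 2                      # attract: minimize distance
            else:
                total += max(0.0, margin - d) ** 2   # repel: enforce margin
            pairs += 1
    return total / pairs
```

Under this formulation, the loss vanishes once same-class features coincide and different-class features are separated by at least the margin, which is one concrete way a loss can "amplify inter-class differences" as the abstract describes.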