The Llama-2 model has demonstrated excellent performance across various research fields and can be adapted to specific tasks through fine-tuning. This paper explores the application of Llama-2 to aspect-based sentiment analysis (ABSA), focusing on the joint tasks of aspect term extraction and polarity classification. We propose a few-shot ABSA method based on fine-tuning Llama-2. We examine the impact of simple versus complex training-data instructions on model performance and find that their influence is minimal. Additionally, we investigate how the fine-tuned model performs when given different numbers of in-context prompts during inference. We find that the fine-tuned Llama-2, combined with few-shot context prompts, performs well and reliably produces JSON-formatted output, achieving a maximum F1 score of 69.5%, a 3.8% improvement over GPT-3.5. Analysis of the results indicates that fine-tuning helps reduce false positives and improves the model's sensitivity and specificity.
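As a rough illustration of the kind of setup described here, the sketch below shows one plausible way to assemble a few-shot instruction prompt for joint aspect term extraction and polarity classification with a JSON answer format, and to parse the model's reply. The instruction wording, example sentences, and function names are hypothetical and are not taken from the paper; they only illustrate the general prompt-plus-JSON-output pattern.

```python
import json

# Hypothetical instruction; the paper's actual templates are not shown here.
INSTRUCTION = (
    "Extract every aspect term from the sentence and assign each a sentiment "
    "polarity (positive, negative, or neutral). Answer with a JSON list of "
    '{"aspect": ..., "polarity": ...} objects.'
)

# Illustrative in-context examples (not from the paper's dataset).
FEW_SHOT_EXAMPLES = [
    ("The battery life is great but the screen is too dim.",
     [{"aspect": "battery life", "polarity": "positive"},
      {"aspect": "screen", "polarity": "negative"}]),
    ("Service was slow, though the pasta was delicious.",
     [{"aspect": "Service", "polarity": "negative"},
      {"aspect": "pasta", "polarity": "positive"}]),
]

def build_prompt(sentence: str, n_shots: int = 2) -> str:
    """Assemble instruction + n_shots in-context examples + the query sentence."""
    parts = [INSTRUCTION, ""]
    for text, labels in FEW_SHOT_EXAMPLES[:n_shots]:
        parts.append(f"Sentence: {text}")
        parts.append(f"Answer: {json.dumps(labels)}")
    parts.append(f"Sentence: {sentence}")
    parts.append("Answer:")
    return "\n".join(parts)

def parse_response(raw: str):
    """Parse the model's JSON answer; return None if the output is malformed."""
    try:
        return json.loads(raw.strip())
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    # The number of in-context examples (n_shots) is the knob the paper varies
    # at inference time; 2 here is arbitrary.
    print(build_prompt("The camera is superb but the battery drains fast."))
```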