Chinese named entity recognition faces two persistent problems: traditional pre-trained models cannot represent the multiple senses of a word, and existing models do not sufficiently exploit the latent semantic features at the Chinese word level. This paper proposes a Chinese named entity recognition method based on BERT with a fused attention mechanism. First, word vector features are obtained from a BERT model pre-trained on a large-scale corpus, which addresses the problem of one word having multiple meanings. Next, contextual features are extracted with a BiLSTM, and its outputs are passed to an attention layer; this exploits the latent semantic features within the text and remedies the weak relevance between semantic features in previous models. Finally, a CRF layer performs sequence labelling on the output, reducing the probability of incorrect label assignments. In comparative experiments, the model achieves F1 scores of 95.12% and 95.43% on the MSRA corpus and the People's Daily corpus, respectively, outperforming all comparison models and demonstrating an effective improvement in named entity recognition.
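The abstract does not specify which attention variant is fused with the BiLSTM outputs. A minimal sketch of dot-product attention in plain Python, assuming per-token hidden states from the BiLSTM and a learned query vector (all names here are illustrative, not the paper's implementation):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(hidden_states, query):
    """Weight each BiLSTM hidden state by its dot-product score
    against a query vector, then return the attention weights and
    the weighted sum (context vector).

    hidden_states: list of per-token vectors (list of list of float)
    query: a single vector of the same dimension (hypothetical
           learned parameter; the paper's actual scoring function
           is not specified in the abstract)
    """
    scores = [sum(h_d * q_d for h_d, q_d in zip(h, query))
              for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context
```

Tokens whose hidden states align with the query receive larger weights, so the context vector emphasizes the semantically relevant positions before the CRF layer assigns labels.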