Article

Research on Chinese Named Entity Recognition Based on Lexical Information and Spatial Features

School of Computer Science and Technology, Xinjiang University, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2242; https://doi.org/10.3390/app14062242
Submission received: 5 February 2024 / Revised: 29 February 2024 / Accepted: 3 March 2024 / Published: 7 March 2024

Abstract

In the field of Chinese named entity recognition, recent research has combined lexical features with character-based methods. Although this lexical enhancement approach provides a new perspective, it faces two main challenges. Firstly, character-by-character matching easily produces conflicts among matched words; existing solutions attempt to alleviate this problem by obtaining the semantic information of the matched words, but they still fail to capture sufficient sequential temporal or global information. Secondly, because dictionaries have limited coverage, a sentence may contain words that match no dictionary entry, and in this situation existing lexical enhancement methods are ineffective. To address these issues, this paper proposes a method based on lexical information and spatial features (SISF). The method considers the adjacency and overlap relationships of characters within matched words and builds global bidirectional semantic and sequential temporal information, effectively mitigating the impact that fusing conflicting words with characters has on entity segmentation. In addition, a point-wise convolutional network applied to the attention score matrix captures the local spatial relationship between characters without fused vocabulary information and characters with fused vocabulary information, compensating for information loss and strengthening spatial connections. Compared with the baseline model, the proposed SISF method improves the F1 metric by 0.72%, 3.12%, 1.07%, and 0.37% on the Resume, Weibo, OntoNotes 4.0, and MSRA datasets, respectively.

1. Introduction

Compared with English named entity recognition [1,2,3], Chinese NER is more challenging because Chinese lacks explicit word delimiters, and an incorrect word segmentation can adversely affect recognition accuracy [4]. Early work focused mainly on the semantic information of the characters within a sentence and overlooked lexical information, which limited Chinese NER [5]. Subsequently, Zhang et al. [6] implemented a lexical augmentation strategy that fuses matched words with character-level representations, leveraging lexical cues to refine the demarcation of entity boundaries. However, by design a character could merge only with words ending with that character, so information was lost because a character could not merge with words in which it appears at the beginning or in the middle. Li et al. [7] introduced the Flat-Lattice approach, which improves on this lexical enhancement method: it labels the positions of tokens in the sentence to obtain head and tail indices and uses these two positional indices to reconstruct a flat lattice structure, so that a character can interact directly with every word containing it, as illustrated in Figure 1.
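To make this construction concrete, the following minimal Python sketch enumerates the lexicon words occurring in a sentence and records their head/tail character indices, which is the span information the flat lattice is built from. The brute-force scan and the toy lexicon are illustrative assumptions, not the authors' implementation; a production system would typically use a trie for efficiency.

```python
from typing import List, Tuple

def match_lexicon(sentence: str, lexicon: set) -> List[Tuple[int, int, str]]:
    """Enumerate every lexicon word occurring in the sentence and record its
    head/tail character indices, as in the Flat-Lattice construction.
    Returns (head, tail, token) triples; single characters get head == tail."""
    spans = [(i, i, ch) for i, ch in enumerate(sentence)]       # character tokens
    n = len(sentence)
    max_len = max((len(w) for w in lexicon), default=0)
    for i in range(n):
        for j in range(i + 2, min(n, i + max_len) + 1):         # words of length >= 2
            word = sentence[i:j]
            if word in lexicon:
                spans.append((i, j - 1, word))                  # head/tail indices
    return spans

# Toy lexicon for the running example "重庆人和药店" (Chongqing Renhe Pharmacy).
lexicon = {"重庆", "重庆人", "人和药店", "药店"}
for head, tail, token in match_lexicon("重庆人和药店", lexicon):
    print(head, tail, token)
```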
Although Flat-Lattice integrates lexical features into character-based features and uses lexical information to improve Chinese named entity recognition, two issues remain. Firstly, Flat-Lattice compares the text of a sentence against the lexicon character by character until no further exact match can be found, adding every matched word to the vocabulary sequence. Words matched in this way may conflict; that is, a single character in the sentence may belong to more than one matched word. If the conflicting matched words are simply encoded and then fused with the characters, the semantic and sequential temporal information among the matched words is lost, and instead of sharpening entity boundaries the fusion may have an inhibitory effect. For example, in Figure 1 the character ‘人’ (‘person’) can match both ‘重庆人’ (‘Chongqing person’) and ‘人和药店’ (‘Renhe Pharmacy’). Flat-Lattice merges these matched words with the character directly after encoding, which may mislead the model into identifying ‘人’ as both ‘E-LOC’ and ‘B-LOC’. More critically, conflicts between matched words are common in Chinese NER [8]. To resolve such conflicts, researchers have applied various models. Gui et al. [9] applied a CNN that extracts the semantic features of the matched words and alleviates conflicts through the feedback of a rethinking mechanism. However, convolution perceives the input only locally: each kernel sees a fixed-size window, and the same kernel is shared across positions, so the CNN ignores the sequential temporal information of the matched words and their global information across the whole sentence. To compensate for this shortcoming, Hu et al. [10] turned to a GRU model, which extracts the semantic information of matched words and effectively captures their sequential temporal information. However, the GRU’s single memory unit, which merges long-term dependencies into the current hidden state, still falls short in extracting the global information of matched words across the entire sentence. This paper analyzes matched words further, taking into account both their number and their global information in the whole sentence. Our method aims to obtain the semantic and sequential temporal information of matched words comprehensively while considering both local and global features, thereby handling conflicts between matched words more effectively.
Secondly, this approach to lexical enhancement ignores the limited coverage of the lexicon: a sentence may contain words that match no lexicon entry, and in such circumstances lexical enhancement contributes little to entity segmentation. Flat-Lattice fuses the encoded matched words with the characters to obtain semantic information and derives a seq×seq attention score matrix through a self-attention mechanism; this matrix is multiplied with the value vectors of the input to perform a weighted summation, producing weights that contain character-vocabulary information and are used to predict entity labels, thus improving named entity recognition. However, this approach may ignore the spatial information of the characters that have been fused with matched words, making it unavailable to the other characters that have no matched word; this causes an information deficit. Accurately segmenting the characters that carry matched-word information can positively influence the segmentation of the characters that do not. For example, in “重庆人和药店”, the dictionary may not contain the word “人和”, so the characters “人” and “和” cannot be fused with matched-word information to aid entity segmentation; but if the entity “重庆” is segmented correctly, it facilitates the subsequent segmentation of “人和药店” (Renhe Pharmacy). Xue et al. [11] used a Bi-GRU model to enhance the semantic dependency between neighboring characters, after first extracting character semantic features to obtain a seq×seq matrix. This paper therefore considers that extracting spatial features between characters may help named entity recognition. In addition, Yan et al. [12] demonstrated that Convolutional Neural Networks (CNNs) can model the local spatial relationships between words on an English dataset and improve recognition. Accordingly, this paper proposes a “local attention” approach to capture the local spatial relationships between characters, which is expected to improve named entity recognition, particularly where characters cannot be fused with any matched word.
Considering these factors, this paper proposes a method based on lexical information and spatial features. It consists of two main parts. Firstly, the head and tail coordinates of the matched words are compared: if the tail coordinate of a preceding word is greater than or equal to the head coordinate of the word that follows it, the two words are considered to be in conflict. The encoded matched words are then taken as an input sequence and processed by a Bi-LSTM. The Bi-LSTM is chosen for two reasons: the number of matched words differs from sentence to sentence, and it yields the global bidirectional semantic and sequential temporal information of the matched words. After this information is obtained, it is fused with the character features to predict entity labels. By computing the loss between predicted and actual labels and backpropagating it, the representation of the conflicting matched words is iteratively optimized, gradually reducing their weight and thereby promoting Chinese named entity recognition. Secondly, the preceding steps have already extracted the semantic information of characters and matched words and derived a seq×seq attention score matrix through a self-attention mechanism. To encapsulate the spatial relationships among characters, local attention is introduced: a convolutional layer captures the local spatial relationship between the characters without fused matched-word information and those with it. This strategy additionally fosters entity segmentation, thereby augmenting the effectiveness of named entity recognition. The principal contributions of this paper can be summarized as follows:
  • This article introduces a Bi-LSTM to obtain the global bidirectional semantic and sequential temporal information of the matched words, reducing the weight of conflicting matched words to alleviate the impact that fusing them with characters has on entity segmentation.
  • A local attention method extracts the local spatial relationship between characters without fused matched-word information and characters with it, further promoting entity segmentation.
  • The efficacy of the SISF approach introduced in this paper, in comparison to both the baseline model and an enhanced variant derived from the baseline model, is validated through experiments conducted on four openly accessible Chinese-named entity recognition datasets.
The framework of this article is as follows. Section 1 explores the challenges faced by lexical enhancement in Chinese named entity recognition and our solutions; Section 2 provides an overview of related research on Chinese named entity recognition; Section 3 describes the proposed method in detail; Section 4 introduces the datasets used for training and evaluation and the baseline models used for comparison; Section 5 conducts extensive experiments on four publicly available Chinese datasets and analyzes the results in depth to validate the effectiveness of the proposed SISF method; finally, we summarize the results of this study and outline possible directions for future research.

2. Related Work

With the development of deep learning, researchers have begun applying neural network models to Named Entity Recognition (NER) [12]. Chinese NER differs significantly from English NER: English has natural spaces as separators between words, whereas Chinese has no clear boundaries between characters. Existing Chinese entity recognition methods are word-based or character-based. For instance, Li et al. [13] proposed a character-based Chinese entity tagger and showed the superiority of character-based methods over word-based methods. However, character-based methods do not utilize lexical information. Therefore, Zhang et al. [6] introduced a lattice structure that matches characters against a dictionary to obtain the words containing each character and fuses this matched-word information with the characters. This helps partition entity boundaries, thereby improving the accuracy of entity boundary recognition. Yu et al. [14] likewise achieved good results using lexical enhancement for named entity recognition in classical Chinese.
Additionally, Aguilar et al. [15] used a Bi-LSTM to fuse the phonetic features of words with word features, reducing the impact of noise on NER in English social media datasets. Although the methods above achieve good results, an LSTM requires the hidden state of the previous time step, limiting the full utilization of GPU parallelism, and conflicts may arise between matched words. Gui et al. [9] introduced a Convolutional Neural Network (CNN)-based NER approach with a lexicon rethinking feedback mechanism to tackle both challenges: on the one hand, a CNN can leverage GPU parallelism for greater efficiency; on the other hand, it reanalyzes matched words, refining the network by feeding high-level features back to supplement low-level features and thus alleviating conflicts. Moreover, in the method designed by Zhang et al. [6], each character could only acquire information about words ending with it. For example, in the sentence ‘重庆人和药店’ (Chongqing Renhe Pharmacy), the character ‘药’ (medicine) could only obtain information from the word ‘药店’ (pharmacy) and not ‘人和药店’ (Renhe Pharmacy), leading to a loss of lexical information.
To resolve this issue, Li et al. [7] proposed the Flat-Lattice structure, which constructs head and tail indices for each character and word based on their positions in the sentence, enabling direct modeling of the interactions between characters and matched words and thereby introducing lexical information. However, this method overlooked possible conflicts between matched words. Zhang et al. [16] used an attention mechanism to compute weights for conflicting words and merged them with the corresponding character embeddings to mitigate conflicts. In contrast to the dynamic weights of the attention mechanism, Ma et al. [5] used statistical weights based on word frequency to address the same issue. Zhang et al. [17] transform lattice structures into a unified graph whose adjacency matrices explicitly capture the semantic and boundary relationships between different semantic units, reducing excessive dependence on word information. As research deepened, Chu et al. [18] fused multiple features, such as words and word roots, to enrich the semantic information of sentences. Gu et al. [19] exploited the internal regularity of entities as rule information while avoiding excessive attention to such regularity. Cauteruccio et al. [20] conducted computational and qualitative analyses of the audience experience during competitions, using social-network-based modeling and thematic analysis, respectively, to study emotional changes in the audience. Zhang et al. [21] considered pronunciation issues in Chinese, such as polyphonic characters or homophones written with different characters, and introduced phonetic features through cross functions to enhance Chinese named entity recognition.
Additionally, some researchers focus on extracting deeper features from existing ones. Zhu et al. [22] introduced a Convolutional Attention Network for Chinese NER that leverages CNNs to capture the relationships between nearby characters and a GRU to capture sentence-level contextual information, further indicating that there are local spatial relationships between characters and words that benefit NER. Jin et al. [23] proposed the attention-based Gated Convolutional Recurrent Network (GCRA), which uses an LSTM to exploit local contextual features while integrating the global dependencies of different spaces and adjacent characters through a gating mechanism. With the widespread application of Transformers, researchers continue to optimize them. Lu et al. [24] proposed a dynamic hybrid vision Transformer that uses convolution to extract fine-grained spatial features and fuses them with the features extracted by the Transformer, mitigating underfitting when training on small datasets. Dai et al. [25] introduced Transformer-XL, a neural architecture that uses a recurrence mechanism to mitigate the limitation of fixed-length context encoding. Yan et al. [26] proposed an attention mechanism with direction and relative position information to address the direction-insensitive dot product in Transformer self-attention, further enhancing NER. Additionally, Meng et al. [27] used Chinese character glyph features, treating characters as images and using a CNN to obtain semantic representations, enhancing the model’s generalization ability. Furthermore, Yu et al. [28] used CNN modeling to capture the local spatial relationships of words, reducing the nesting issue in English datasets and improving NER.
Model comparison: Strubell et al. [29] demonstrated the speed advantage of CNNs, which exploit GPU parallelism, over Bi-LSTMs. However, Yan et al. [26] pointed out that in terms of recognition accuracy, CNNs underperform Bi-LSTMs. Their experiments also found that directly replacing the model with a Transformer on small datasets such as Weibo did not improve named entity recognition and was in fact inferior to a Bi-LSTM. This is mainly attributed to the complexity the Transformer introduces through its self-attention mechanism, which allows full interaction between all positions in the sequence; compared with the sequential processing of a Bi-LSTM, it increases model complexity and easily leads to underfitting on small training sets. Given that the number of matched words in entity recognition is usually small, this study uses a Bi-LSTM to capture the contextual information of the words: it is easy to train on limited data and, owing to its lower model complexity, performs better than a Transformer in this setting.

3. Methods

This section describes the proposed method in detail: firstly, how conflicts between matched words are mitigated by obtaining their global bidirectional semantic and sequential temporal information; secondly, how the local spatial relations of characters without fused words are further acquired to enhance Chinese named entity recognition. The overall model is shown in Figure 2.

3.1. Mitigating Matching Word Conflicts Based on Bi-LSTM

In this paper, the matched words are denoted as $W = \{w_1, w_2, \ldots, w_n\}$, and the head and tail subscripts of the matched words are used to determine whether a conflict exists. When there is a conflict, the encoded matched words are processed as a sequence by the Bi-LSTM shown in Figure 3, which yields their global bidirectional semantic and sequential temporal information; these features are then fused with the character features to predict entity labels. By comparing the loss between predicted and actual entity labels, the conflicting matched words are iteratively optimized through backpropagation, gradually reducing their weight to alleviate the impact that fusing them with characters has on entity segmentation. For example, for “南京市长江大桥” (Nanjing Yangtze River Bridge), the words matched through the dictionary are $W = \{w_1^2 = \text{南京}, w_1^3 = \text{南京市}, w_3^4 = \text{市长}, w_4^5 = \text{长江}, w_4^7 = \text{长江大桥}\}$, where $w_h^t$ denotes the matched word whose head and tail characters are at positions $h$ and $t$.
Among these matched words, “市长” conflicts with both “南京市” and “长江”. If these conflicting words are improperly integrated into the characters “市” and “长”, the model may be misled into focusing excessively on “市长”, negatively affecting the accurate division of entity boundaries and biasing entity label prediction. The key to solving this problem therefore lies in an in-depth analysis of the semantic relationships between conflicting words and their neighbors, using the exchange of semantic information between words to alleviate the conflicts and thereby improve the accuracy and robustness of entity recognition.
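As a concrete illustration of the conflict rule described above (a word conflicts with its predecessor when the predecessor's tail coordinate is greater than or equal to its head coordinate), the following sketch flags overlapping matched words. The tuple representation and 1-indexed positions are illustrative assumptions, not the authors' code; note that nested words such as ‘南京’ and ‘南京市’ also satisfy the rule.

```python
def find_conflicts(words):
    """Flag matched words whose spans overlap: after sorting by head index,
    a word conflicts with its predecessor when the predecessor's tail index
    is >= its own head index (the rule of Section 3.1)."""
    words = sorted(words, key=lambda w: (w[0], w[1]))   # (head, tail, text)
    conflicts = set()
    for prev, curr in zip(words, words[1:]):
        if prev[1] >= curr[0]:                          # spans overlap
            conflicts.add(prev)
            conflicts.add(curr)
    return conflicts

# Matched words for "南京市长江大桥", 1-indexed as in the running example.
matched = [(1, 2, "南京"), (1, 3, "南京市"), (3, 4, "市长"),
           (4, 5, "长江"), (4, 7, "长江大桥")]
print(find_conflicts(matched))   # 市长 overlaps both 南京市 and 长江
```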
A Bi-LSTM processes the input sequence of matched words with a forward LSTM and a backward LSTM, obtaining the global bidirectional semantic and sequential temporal information of the words. The forward LSTM fuses the current input $x_i$ with the hidden state of the previous step $\overrightarrow{h_{i-1}}$ to produce $\overrightarrow{h_i}$, which carries the forward semantic and sequential temporal information. Conversely, the backward LSTM fuses $x_i$ with $\overleftarrow{h_{i-1}}$ to produce $\overleftarrow{h_i}$, which carries the backward semantic and sequential temporal information. The forward output $\overrightarrow{h_i}$ and the backward output $\overleftarrow{h_i}$ are then concatenated into a feature vector with global bidirectional semantic and sequential temporal information, which is the output of the Bi-LSTM layer. This is shown as follows:
$\overrightarrow{h_i} = \overrightarrow{\mathrm{LSTM}}(x_i, \overrightarrow{h_{i-1}})$
$\overleftarrow{h_i} = \overleftarrow{\mathrm{LSTM}}(x_i, \overleftarrow{h_{i-1}})$
$h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$
where $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$ denote the forward and backward hidden states of matched word $i$, and $h_i$ denotes the feature vector of matched word $i$.
Bi-LSTM controls the transfer of semantic information between matching words through the gating mechanism on the one hand; on the other hand, it can also obtain the sequential temporal information between matching words, as in Figure 4.
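The following PyTorch sketch shows how such a Bi-LSTM layer over the matched-word embeddings could look; the embedding and hidden dimensions are illustrative assumptions rather than the paper's hyperparameters.

```python
import torch
import torch.nn as nn

class WordBiLSTM(nn.Module):
    """Minimal sketch of the Bi-LSTM layer of Section 3.1: it consumes the
    sequence of matched-word embeddings and returns, for every word, the
    concatenation of forward and backward hidden states (the equations above)."""

    def __init__(self, emb_dim: int = 50, hidden_dim: int = 50):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, word_embs: torch.Tensor) -> torch.Tensor:
        # word_embs: (batch, num_matched_words, emb_dim); the variable number
        # of matched words per sentence is handled naturally by the recurrence.
        h, _ = self.bilstm(word_embs)          # (batch, n_words, 2 * hidden_dim)
        return h                               # [forward ; backward] per word

words = torch.randn(1, 5, 50)                  # e.g., the 5 matched words above
print(WordBiLSTM()(words).shape)               # torch.Size([1, 5, 100])
```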

3.2. Extracting Local Spatial Relations of Characters Based on Local Attention

After the semantic and sequential temporal features of the matched words are obtained, they are fused with the character features, and an attention score matrix of size seq×seq, where seq denotes the number of characters, is obtained through the self-attention mechanism. Previous studies have shown that CNNs are effective at extracting local spatial information for Chinese named entity recognition [29]. Let $C = \{c_1, c_2, \ldots, c_n\}$ denote the input characters; the attention score matrix over these characters then lies in $\mathbb{R}^{seq \times seq}$.
A convolution function is used to extract the local spatial relationship between characters with fused vocabulary information and characters without it. First, the character attention scores in the area covered by the convolutional kernel are multiplied element-wise by the kernel weights and the products are accumulated, yielding a new weight that represents the spatial features between characters without fused words and characters with them. The kernel is then shifted as a sliding window over the entire input matrix of character attention scores. The weights of the characters without fused lexical information are continuously adjusted through backpropagation so as to extract, from the characters with fused lexical information, the features that benefit the current character without fused information. At the same time, the convolutional layer adopts weight sharing: every position is convolved with the same kernel, so the sliding window extracts the same features at different spatial positions. This reduces the number of parameters to be learned and improves the model’s computational efficiency. The convolutional layer is formulated as follows:
$local_i = \mathrm{Conv2d}\left((Q_i + u)^T K_j + (Q_i + v)^T R_{ij}\right)$
where $u \in \mathbb{R}^{d_{head}}$ denotes the global content bias and $v \in \mathbb{R}^{d_{head}}$ denotes the global position bias, both obtained through training.
The BatchNorm2d() function then normalizes the extracted character space features. The normalization process serves two primary purposes: firstly, it helps to ensure that the inputs of each layer have a similar distribution, stabilizing the distribution of input data and aiding in accelerating model convergence; secondly, it effectively prevents problems of gradient vanishing and gradient explosion, enhancing the network’s generalization ability. Next, the ReLU activation function is used. Through nonlinear gating, it removes noise generated by characters after acquiring spatial features while retaining beneficial character spatial features. The normalization function and activation function formulas are as follows:
$local = \mathrm{ReLU}(\mathrm{BatchNorm2d}(local_i))$
Finally, the feature vector is again passed through a convolutional layer to remap the character features back to their original dimensions after the spatial features have been acquired.
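A minimal PyTorch sketch of this local attention block is given below, assuming the attention scores are arranged as (batch, heads, seq, seq); the head count, hidden channel width, kernel size, and the residual placement are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LocalAttention(nn.Module):
    """Sketch of the local attention of Section 3.2: a convolution slides over
    the seq x seq attention score matrix to capture local spatial relations
    between characters with and without fused vocabulary information, followed
    by BatchNorm2d and ReLU; a second convolution maps the features back to
    the original number of heads."""

    def __init__(self, n_heads: int = 8, hidden: int = 32, kernel: int = 3):
        super().__init__()
        pad = kernel // 2                       # keep the seq x seq shape
        self.conv_in = nn.Conv2d(n_heads, hidden, kernel, padding=pad)
        self.norm = nn.BatchNorm2d(hidden)      # stabilizes the score distribution
        self.act = nn.ReLU()                    # gates out noisy spatial features
        self.conv_out = nn.Conv2d(hidden, n_heads, kernel, padding=pad)

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # scores: (batch, n_heads, seq, seq) attention score matrix
        local = self.act(self.norm(self.conv_in(scores)))
        return scores + self.conv_out(local)    # residual connection (assumed)

scores = torch.randn(2, 8, 20, 20)              # batch of seq=20 score matrices
print(LocalAttention()(scores).shape)           # torch.Size([2, 8, 20, 20])
```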
Ultimately, after processing through the residual connection and normalization layer, the feature vector is sent to the output layer and decoded and predicted through a Conditional Random Field (CRF) [30].
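As a sketch of this final decoding step, the snippet below uses the third-party pytorch-crf package as one possible CRF layer; the tag count and tensor shapes are illustrative assumptions, and the paper's own implementation may differ.

```python
import torch
from torchcrf import CRF   # pip install pytorch-crf (one possible CRF layer)

num_tags = 9                              # e.g., BIOES tags over PER/ORG/LOC + O
crf = CRF(num_tags, batch_first=True)

emissions = torch.randn(2, 6, num_tags)   # (batch, seq, num_tags) from the output layer
tags = torch.randint(num_tags, (2, 6))    # gold label indices

loss = -crf(emissions, tags)              # negative log-likelihood for training
best_paths = crf.decode(emissions)        # Viterbi-decoded label sequences
print(loss.item(), best_paths)
```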

4. Experiments

In this section, the paper details the datasets used to train and evaluate the proposed method and the baseline models used for comparison. For each dataset we report the number of entities, the number of conflicting matched words, and the associated hyperparameters.

4.1. Data Sets and Hyperparameters

This research employs four Chinese named entity recognition datasets to substantiate the enhanced performance of the proposed SISF method in the domain of NER. These datasets encompass OntoNotes 4.0 and MSRA datasets originating from the news domain, as well as Resume and Weibo datasets sourced from online repositories within China. Specifically, the Resume dataset was constructed from resumes on Sina Finance, while the Weibo dataset was built from information on China’s social media platform, Sina Weibo.
The Weibo dataset comprises four distinct entity types: Person Names (PER), Organization Names (ORG), Location Names (LOC), and Geographic/Social/Political Entities (GPE). On the other hand, the Resume dataset encompasses eight entity categories: Countries (CONT), Education Levels (EDU), Location Names (LOC), Person Names (PER), Organization Names (ORG), Professions (PRO), Racial Backgrounds (RACE), and Job Titles (TITLE). In contrast, the MSRA dataset comprises three entity types: Organization Names (ORG), Person Names (PER), and Location Names (LOC). Lastly, OntoNotes 4.0 includes four entity types: Person Names (PER), Organization Names (ORG), Location Names (LOC), and Geographic/Social/Political Entities (GPE).
A statistical enumeration of entities within the four datasets is provided in Table 1. ‘Train’, ‘Dev’, and ‘Test’ indicate the sizes of the training, validation, and test sets; ‘Entity Types’ refers to the types of entities in the dataset; and ‘Charavg’, ‘Wordavg’, and ‘Entityavg’ are the average numbers of characters, dictionary-matched words, and annotated entities per instance, respectively. ‘Conflict lexicon’ denotes the number of matched-word conflicts in the test set. The hyperparameters for each dataset are listed in Table 2.
Flat-Lattice reports experiments conducted on an NVIDIA GeForce RTX 2080Ti card. To ensure the rigor of the comparison, and to demonstrate the benefit of the proposed optimization of the lexical enhancement method for Chinese named entity recognition, all experiments in this paper are conducted on an NVIDIA GeForce RTX 3090 card.

4.2. Evaluation Indicators

Following prevailing practice in Chinese named entity recognition, the evaluation criteria are Precision (P), Recall (R), and the F1 score (F1). Precision is the proportion of predicted positive samples that are truly positive, while Recall is the proportion of all true positive samples that are correctly identified. Because of the trade-off between precision and recall, this paper mainly uses F1 as the evaluation index for the SISF model. The formulas are as follows:
$P = \frac{TP}{TP + FP}$
$R = \frac{TP}{TP + FN}$
$F1 = \frac{2PR}{P + R}$
In this context, the notations TP (True Positive), FP (False Positive), TN (True Negative), and FN (False Negative) are utilized to represent the following: TP refers to correctly identified positive cases, FP indicates instances where negative cases are erroneously classified as positive, TN represents correct identifications of negative cases, and FN signifies instances where positive cases are incorrectly categorized as negative.
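For completeness, a small Python helper computing these metrics from entity-level counts follows; the example counts are hypothetical.

```python
def prf1(tp: int, fp: int, fn: int):
    """Entity-level precision, recall and F1 from true positives, false
    positives and false negatives, following the formulas above."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# e.g., 90 correctly predicted entities, 10 spurious, 20 missed:
print(prf1(90, 10, 20))   # (0.9, 0.8181..., 0.8571...)
```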

4.3. Baseline Model

The Flat-Lattice model makes full use of the parallel capability of the GPU. It constructs head and tail position indices for each character and word, reconstructing the original lattice structure so that each character can directly interact with all the word information it matches, thereby introducing lexical information. Relative position encodings for characters are obtained through successive transformations of the head and tail information, with dense vectors modeling the relationships between them. Through relative position embeddings, each node is assigned position information, and the distance and direction information of the characters enhances the Transformer’s direction perception, which helps characters identify whether their neighbors form a continuous entity. Finally, a Conditional Random Field (CRF) decodes the named entity label sequence.
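To illustrate the relative position information Flat-Lattice relies on, the sketch below computes, under the four-distance formulation of the FLAT paper [7], the head/tail distances between every pair of tokens, from which the relative position embedding $R_{ij}$ is built; the example spans are hypothetical and the subsequent embedding and fusion steps are omitted.

```python
import torch

def relative_distances(head: torch.Tensor, tail: torch.Tensor):
    """Pairwise head/tail distances between tokens (characters or matched
    words); the sign of each distance carries direction information."""
    d_hh = head[:, None] - head[None, :]   # head-to-head distance
    d_ht = head[:, None] - tail[None, :]   # head-to-tail
    d_th = tail[:, None] - head[None, :]   # tail-to-head
    d_tt = tail[:, None] - tail[None, :]   # tail-to-tail
    return d_hh, d_ht, d_th, d_tt

# Characters of "重庆人和药店" plus the matched word "重庆" (head 0, tail 1):
head = torch.tensor([0, 1, 2, 3, 4, 5, 0])
tail = torch.tensor([0, 1, 2, 3, 4, 5, 1])
d_hh, *_ = relative_distances(head, tail)
print(d_hh)
```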
To assess the effectiveness of the SISF model’s optimized lexical enhancement method for Chinese named entity recognition, this paper not only chooses Flat-Lattice as the baseline but also selects several methods that address the above problems in different ways and compares against them.
(1) FGN [31]: proposes a novel CNN structure called CGSCNN to capture the interaction of glyph information between neighboring graphs, with a sliding window and an attention mechanism to fuse the BERT representation and glyph representation of each character.
(2) Token-Relation [32]: proposes a masked self-attention mechanism to integrate the local contextual information of matched words and designs gated information controllers to handle conflicts among matched words.
(3) LR-CNN+BERT [9]: a convolutional neural network-based approach that integrates the lexicon through a rethinking mechanism to mitigate conflicts between matched words.
(4) PLTE+BERT [11]: introduces a novel porous mechanism to enhance the local dependency between neighboring characters.
(5) SLK-NER [10]: mitigates conflicts among matched words by computing a weighted sum of lexical information and using it as an additional feature.
(6) FLAT [7]: constructs head and tail indices for each character and word, reconstructs the original lattice structure, and directly models the interaction between characters and all the words they match, reducing the information loss that arises when a character can only be fused with words ending with that character and enhancing named entity recognition.

5. Results

In this section, an extensive series of experiments is carried out using four openly accessible Chinese datasets, followed by a comprehensive analysis of the results to confirm the efficacy of the SISF method. Also, ablation experiments are conducted to verify the validity of each part of the proposed method.

5.1. Comparison Experiment

The specific experimental results are shown in Table 3 and Table 4, where $BERT+FLAT_{this}$ denotes the result of Flat-Lattice reproduced on the NVIDIA GeForce RTX 3090 card.
An analysis of the experimental results reveals that the SISF model demonstrates improvement in Chinese named entity recognition (NER) across all four datasets relative to the baseline model, Flat-Lattice.
Weibo: As shown in Table 3, the F1 score for NER on the Weibo dataset improved by 3.12%. Compared with the other three datasets, the SISF model achieves its largest performance gain on the relatively small Weibo dataset. This is because, when transitioning from the Transformer model to the Bi-LSTM model, named entity recognition performance on Weibo declines more sharply than on the other three datasets [26]. This suggests that the SISF model contributes to NER by mitigating conflicts between matched words and by strengthening the local spatial relationships between characters with and without fused vocabulary information. Moreover, compared with the Flat-Lattice method, the SISF model compensates for the Transformer’s underfitting on small datasets, further improving Chinese NER performance.
Resume: As indicated in Table 3, the F1 score for NER on the Resume dataset improved by 0.72%, against 95.86% for the Flat-Lattice model on the same dataset. On this dataset, lexical enhancement improves named entity recognition, but the gain is more limited than on the other three datasets [12]. Our experiments substantiate that extracting the local spatial relationships between characters without fused words and characters with fused words yields a larger improvement in named entity recognition here than mitigating matched-word conflicts does. This further confirms that, in lexical enhancement methods, when lexical information is insufficient the accuracy of named entity recognition can be effectively improved by considering the local spatial relationship between characters without and with fused vocabulary.
Ontonotes 4.0: As shown in Table 4, the F1 score for NER on the Ontonotes 4.0 dataset increased by 1.07%. As indicated in Table 1, the Ontonotes 4.0 test set contains the most severe matched-word conflicts. Mitigating these conflicts contributed the largest share of the overall F1 improvement, 70.00%. This further confirms that obtaining the global bidirectional semantic and sequential temporal information of the matched words to alleviate conflicts plays a crucial role in enhancing NER performance.
MSRA: As indicated in Table 4, the F1 score for NER on the MSRA dataset improved by 0.37%. Among the four datasets, the Flat-Lattice model performs best on MSRA. Nevertheless, on this strong basis the SISF model still achieves a clear improvement in Chinese named entity recognition by effectively mitigating conflicts in lexical matching and finely capturing the interrelationships of characters in local space. The model’s progress is reflected in the optimization of the lexical matching mechanism and the deep mining of local spatial information, which further strengthens its handling of entity recognition in complex Chinese texts.

5.1.1. The Impact of Reducing the Weight of Conflicting Vocabulary on Named Entities

This article compares words with and without global bidirectional semantic and sequential temporal information through the visualization in Figure 5a,b. Through its gating mechanism, the Bi-LSTM effectively captures the global bidirectional semantic and sequential temporal information between conflicting matched words and other words; it not only extracts deep semantic relationships but also further refines the model’s comprehension through character-level information fusion. Using the loss values and feedback computed during iterative optimization, the Bi-LSTM gradually reduces the weights of the conflicting matched words, effectively mitigating the interference these words may cause in entity segmentation. On datasets such as Weibo and Ontonotes 4.0, whose test sets are large relative to their training sets, fusing matched-word features to facilitate entity segmentation is all the more necessary [12]. However, because the number of conflicting matched words is very large relative to the test set, resolving these conflicts becomes particularly important. Experiments demonstrate that using the global bidirectional semantic and sequential temporal information of the matched words to reduce the weight of conflicting ones significantly lessens the negative impact of conflicting words and character fusion on entity boundary delineation, yielding significant F1 gains on both datasets and further enhancing the accuracy of named entity recognition.

5.1.2. Extracting Local Spatial Information for Analysis

In this section, we present a visual analysis of words that have and have not undergone local attention processing; Figure 6a,b compares the attention visualizations. The SISF model introduces a strategy that effectively overcomes the failure of lexical enhancement methods when matching words are absent, by mining the local spatial relationships between characters without fused lexical information and characters with it. On the small Weibo dataset in particular, directly adopting the Transformer model instead of the Bi-LSTM does not produce the desired improvement; insufficient feature extraction and insufficient training of the model parameters may instead degrade named entity recognition performance. To this end, the SISF model incorporates a local attention mechanism that captures the local spatial relationships between characters in detail, significantly improving the recognition of unmatched words, with Weibo showing the most pronounced gain among the datasets. When the matched-word enhancement cannot effectively aid entity segmentation, the model is able to use the spatial information of characters with fused words to further improve accuracy. The experimental results fully validate the effectiveness and superiority of the SISF model for Chinese named entity recognition. The local attention mechanism not only strengthens the model’s grasp of the local spatial relationships between characters without and with fused lexical information but also reduces reliance on human intervention by learning these relationships adaptively, thus optimizing the overall recognition process.

5.2. Ablation Experiment

In this research paper, ablation experiments were conducted to confirm the effectiveness of each component of the proposed methodology. The outcomes of these experiments are depicted in Table 5:
The ablation experiments show that removing the local attention module generally decreases the model’s named entity recognition ability, validating the importance of further extracting the local spatial features between characters without fused lexical information and characters with it. In particular, on a small dataset like Weibo, where the model tends to underfit, extracting the spatial relationships between characters yields the largest recognition gain among the four datasets.
In addition, to illustrate the SISF model’s superior ability to mitigate conflicts between matched words and enhance named entity recognition, this paper compares it experimentally with Gui et al. [9], who use a CNN with a rethinking mechanism that feeds high-level features back to refine the network and resolve matched-word conflicts, as shown in Table 6. The findings indicate that, in contrast to the Bi-LSTM introduced in this study, using a CNN to extract enhanced features from matched words is less effective at mitigating conflicts among them.
The advantage of the Bi-LSTM model is its ability to capture the global bidirectional semantic and sequential temporal information of the matched words. When dealing with conflicting matched words, the Bi-LSTM models the sequence of matched words over the whole sentence, and its gating mechanism enables the effective capture of global bidirectional semantic and sequential temporal information within that sequence. In contrast, a convolutional neural network (CNN) mainly focuses on local features within the coverage of the convolutional kernel: it slides the kernel over the matched words and captures their local features, but it is weaker than the Bi-LSTM at capturing their semantic and sequential temporal information across the whole sentence. Therefore, the Bi-LSTM is superior to the CNN at resolving matched-word conflicts and improving named entity recognition.

6. Conclusions

In this study, this paper introduces a Chinese NER method based on lexical information and spatial features. The method combines a Bi-LSTM and a local attention mechanism to effectively mitigate lexical matching conflicts and extract spatial relationships between characters, thus enhancing named entity recognition. It comprises two main parts: firstly, through the Bi-LSTM gating mechanism, we obtain the global bidirectional semantic and sequential temporal information of the conflicting matched words and reduce their weight, mitigating the negative effect that fusing them with characters has on entity segmentation; secondly, the local attention mechanism makes full use of the local spatial relationship between characters without fused lexical information and characters with it, addressing the cases where lexical enhancement is ineffective for some characters. The experiments provide evidence that the SISF model outperforms the established benchmark model by a substantial margin in Chinese named entity recognition, and ablation experiments comprehensively investigate the influence of the various modules on performance. In the future, we will explore the following directions: (1) introducing the structural features of Chinese characters, exploring fusion methods between features, and highlighting the advantages of different features; (2) since fusing characters with non-entity matched words reduces the model’s performance, how to reduce the introduction of non-entity matched words is also a direction for subsequent research.

Author Contributions

Conceptualization, Z.Z.; Methodology, Z.Z.; Software, Z.Z. and S.L.; Validation, Z.Z.; Formal analysis, Z.Z. and Z.J.; Investigation, Z.Z. and H.Y.; Resources, Z.Z., S.L. and H.Y.; Data curation, Z.Z. and Z.J.; Writing—original draft, Z.Z. and S.L.; Writing—review & editing, Z.Z.; Visualization, Z.Z.; Supervision, Z.Z. and S.L.; Project administration, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Key R&D Program of China (2022ZD0115801), Major Science and Technology Projects in Xinjiang Uygur Autonomous Region (2022A02012-1), and the National Natural Science Foundation of China (61966034).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank the anonymous reviewers for their contribution to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, L.; Shang, J.; Ren, X.; Xu, F.F.; Gui, H.; Peng, J.; Han, J. Empower Sequence Labeling with Task-Aware Neural Language Model. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, LA, USA, 2–7 February 2018; McIlraith, S.A., Weinberger, K.Q., Eds.; AAAI Press: Washington, DC, USA, 2018; pp. 5253–5260. [Google Scholar] [CrossRef]
  2. Sun, T.; Shao, Y.; Li, X.; Liu, P.; Yan, H.; Qiu, X.; Huang, X. Learning Sparse Sharing Architectures for Multiple Tasks. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, the Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, the Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, 7–12 February 2020; AAAI Press: Washington, DC, USA, 2020; pp. 8936–8943. [Google Scholar] [CrossRef]
  3. Yang, J.; Zhang, Y.; Dong, F. Neural Reranking for Named Entity Recognition. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, Varna, Bulgaria, 2–8 September 2017; Mitkov, R., Angelova, G., Eds.; INCOMA Ltd.: Moscow, Russia, 2017; pp. 784–792. [Google Scholar] [CrossRef]
  4. He, H.; Sun, X. F-Score Driven Max Margin Neural Network for Named Entity Recognition in Chinese Social Media. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, 3–7 April 2017; Lapata, M., Blunsom, P., Koller, A., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2017; Volume 2: Short Papers, pp. 713–718. [Google Scholar] [CrossRef]
  5. Ma, R.; Peng, M.; Zhang, Q.; Wei, Z.; Huang, X. Simplify the Usage of Lexicon in Chinese NER. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J.R., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2020; pp. 5951–5960. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Yang, J. Chinese NER Using Lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, 15–20 July 2018; Gurevych, I., Miyao, Y., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2018; Volume 1: Long Papers, pp. 1554–1564. [Google Scholar] [CrossRef]
  7. Li, X.; Yan, H.; Qiu, X.; Huang, X. FLAT: Chinese NER Using Flat-Lattice Transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J.R., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2020; pp. 6836–6842. [Google Scholar] [CrossRef]
  8. Ma, G.; Li, X.; Rayner, K. Word segmentation of overlapping ambiguous strings during Chinese reading. J. Exp. Psychol. Hum. Percept. Perform. 2014, 40, 1046. [Google Scholar] [CrossRef] [PubMed]
  9. Gui, T.; Ma, R.; Zhang, Q.; Zhao, L.; Jiang, Y.; Huang, X. CNN-Based Chinese NER with Lexicon Rethinking. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, 10–16 August 2019; pp. 4982–4988. [Google Scholar] [CrossRef]
  10. Hu, D.; Wei, L. SLK-NER: Exploiting Second-order Lexicon Knowledge for Chinese NER. In Proceedings of the 32nd International Conference on Software Engineering and Knowledge Engineering, SEKE 2020, Pittsburgh, PA, USA, 9–19 July 2020; García-Castro, R., Ed.; KSI Research Inc.: Pittsburgh, PA, USA, 2020; pp. 413–417. [Google Scholar] [CrossRef]
  11. Xue, M.; Yu, B.; Liu, T.; Zhang, Y.; Meng, E.; Wang, B. Porous Lattice Transformer Encoder for Chinese NER. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), 8–13 December 2020; Scott, D., Bel, N., Zong, C., Eds.; International Committee on Computational Linguistics: New York, NY, USA, 2020; pp. 3831–3841. [Google Scholar] [CrossRef]
  12. Ma, X.; Hovy, E.H. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, Berlin, Germany, 7–12 August 2016; Association for Computer Linguistics: Kerrville, TX, USA, 2016; Volume 1: Long Papers. [Google Scholar] [CrossRef]
  13. Li, H.; Hagiwara, M.; Li, Q.; Ji, H. Comparison of the Impact of Word Segmentation on Name Tagging for Chinese and Japanese. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, 26–31 May 2014; Calzolari, N., Choukri, K., Declerck, T., Loftsson, H., Maegaard, B., Mariani, J., Moreno, A., Odijk, J., Piperidis, S., Eds.; European Language Resources Association (ELRA): Paris, France, 2014; pp. 2532–2536. [Google Scholar]
  14. Yu, J.; Feng, X.; Li, J.; Liu, J. Named Entity Recognition in Classical Chinese by Lexicon Enhancement. In Proceedings of the IEEE International Conference on Web Intelligence and Intelligent Agent Technology, WI-IAT 2023, Venice, Italy, 26–29 October 2023; pp. 463–468. [Google Scholar] [CrossRef]
15. Aguilar, G.; López-Monroy, A.P.; González, F.A.; Solorio, T. Modeling Noisiness to Recognize Named Entities Using Multitask Neural Networks on Social Media. arXiv 2019, arXiv:1906.04129.
16. Zhang, N.; Li, F.; Xu, G.; Zhang, W.; Yu, H. Chinese NER Using Dynamic Meta-Embeddings. IEEE Access 2019, 7, 64450–64459.
17. Zhang, D.; Lu, J.; Zhang, P. Unified Lattice Graph Fusion for Chinese Named Entity Recognition. arXiv 2023, arXiv:2312.16917.
18. Chu, J.; Liu, Y.; Yue, Q.; Zheng, Z.; Han, X. Named Entity Recognition in Aerospace Based on Multi-Feature Fusion Transformer. Sci. Rep. 2024, 14, 827.
19. Gu, Y.; Qu, X.; Wang, Z.; Zheng, Y.; Huai, B.; Yuan, N.J. Delving Deep into Regularity: A Simple but Effective Method for Chinese Named Entity Recognition. In Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, USA, 10–15 July 2022; Carpuat, M., de Marneffe, M., Ruíz, I.V.M., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2022; pp. 1863–1873.
20. Cauteruccio, F.; Kou, Y. Investigating the Emotional Experiences in eSports Spectatorship: The Case of League of Legends. Inf. Process. Manag. 2023, 60, 103516.
21. Zhang, B.; Cai, J.; Zhang, H.; Shang, J. VisPhone: Chinese Named Entity Recognition Model Enhanced by Visual and Phonetic Features. Inf. Process. Manag. 2023, 60, 103314.
22. Zhu, Y.; Wang, G. CAN-NER: Convolutional Attention Network for Chinese Named Entity Recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, 2–7 June 2019; Burstein, J., Doran, C., Solorio, T., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2019; Volume 1 (Long and Short Papers), pp. 3384–3393.
23. Jin, Y.; Xie, J.; Guo, W.; Luo, C.; Wu, D.; Wang, R. LSTM-CRF Neural Network with Gated Self Attention for Chinese NER. IEEE Access 2019, 7, 136694–136703.
24. Lu, Z.; Xie, H.; Liu, C.; Zhang, Y. Bridging the Gap Between Vision Transformers and Convolutional Neural Networks on Small Datasets. In Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, 28 November–9 December 2022.
25. Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.G.; Le, Q.V.; Salakhutdinov, R. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, 28 July–2 August 2019; Korhonen, A., Traum, D.R., Màrquez, L., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2019; Volume 1: Long Papers, pp. 2978–2988.
26. Yan, H.; Deng, B.; Li, X.; Qiu, X. TENER: Adapting Transformer Encoder for Named Entity Recognition. arXiv 2019, arXiv:1911.04474.
27. Meng, Y.; Wu, W.; Wang, F.; Li, X.; Nie, P.; Yin, F.; Li, M.; Han, Q.; Sun, X.; Li, J. Glyce: Glyph-Vectors for Chinese Character Representations. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019; pp. 2742–2753.
28. Yu, J.; Bohnet, B.; Poesio, M. Named Entity Recognition as Dependency Parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J.R., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2020; pp. 6470–6476.
29. Strubell, E.; Verga, P.; Belanger, D.; McCallum, A. Fast and Accurate Entity Recognition with Iterated Dilated Convolutions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, 9–11 September 2017; Palmer, M., Hwa, R., Riedel, S., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2017; pp. 2670–2680.
30. Lafferty, J.D.; McCallum, A.; Pereira, F.C.N. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williamstown, MA, USA, 28 June–1 July 2001; Brodley, C.E., Danyluk, A.P., Eds.; Morgan Kaufmann: Burlington, MA, USA, 2001; pp. 282–289.
31. Xuan, Z.; Bao, R.; Jiang, S. FGN: Fusion Glyph Network for Chinese Named Entity Recognition. In Proceedings of the Knowledge Graph and Semantic Computing: Knowledge Graph and Cognitive Intelligence—5th China Conference, CCKS 2020, Nanchang, China, 12–15 November 2020; Revised Selected Papers; Chen, H., Liu, K., Sun, Y., Wang, S., Hou, L., Eds.; Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1356, pp. 28–40.
32. Huang, Z.; Rong, W.; Zhang, X.; Ouyang, Y.; Lin, C.; Xiong, Z. Token Relation Aware Chinese Named Entity Recognition. ACM Trans. Asian Low Resour. Lang. Inf. Process. 2023, 22, 24.
Figure 1. Flat-Lattice structure.
Figure 2. Overall model.
Figure 3. Bi-LSTM obtains lexical semantic information.
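To make the role of this component concrete, the following is a minimal, illustrative sketch of a bidirectional LSTM reading a sequence of matched-word embeddings so that each word vector carries bidirectional semantic and temporal sequential context. All dimensions and variable names (embed_dim, hidden_dim, word_emb) are our own assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; not taken from the paper.
embed_dim, hidden_dim, num_words = 50, 64, 6

# Embeddings of the lexicon-matched words for one sentence.
word_emb = torch.randn(1, num_words, embed_dim)  # (batch, seq, dim)

# A Bi-LSTM reads the matched-word sequence in both directions, so each
# output position carries global bidirectional semantic and temporal
# sequential information about the whole word sequence.
bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
word_ctx, _ = bilstm(word_emb)
print(word_ctx.shape)  # torch.Size([1, 6, 128]) = (batch, seq, 2 * hidden_dim)
```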
Figure 4. Character-vocabulary encoding model.
Figure 5. Comparing (a) and (b) shows that, when matched words conflict, obtaining the global bidirectional semantic and temporal sequential information of the matched words allows the weights of the conflicting words to be adjusted step by step, effectively alleviating the conflict between them. (a) Semantic and temporal sequential information obtained. (b) Semantic and temporal sequential information not obtained.
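The step-by-step down-weighting of conflicting matched words that this caption describes can be pictured with a small sketch. It continues the hypothetical tensors from the Figure 3 sketch above; the dot-product scoring and softmax normalization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
word_ctx = torch.randn(6, 128)  # contextualized matched-word vectors (hypothetical)
char_vec = torch.randn(128)     # one character that several conflicting words cover

# Score each matched word against the character and normalize. Words that
# fit the global bidirectional context poorly receive small weights, which
# progressively reduces the influence of conflicting matches.
scores = word_ctx @ char_vec        # (num_words,)
weights = F.softmax(scores, dim=0)  # conflict-aware weights, sum to 1
fused = weights @ word_ctx          # character representation fused with word info
print(fused.shape)  # torch.Size([128])
```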
Figure 6. Attention visualization: (a) shows the score matrix between characters with fused lexical information and characters without fused lexical information after local attention, and (b) shows the corresponding score matrix without local attention. (a) With local attention. (b) Without local attention.
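For readers who want to relate panel (a) to an actual computation, here is a rough sketch of how a point-by-point (kernel-size-1) convolution could yield a score matrix between characters with and without fused lexical information. The layer sizes, the two-projection design, and the scaled dot-product scoring are our assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

seq_len, dim = 8, 128  # hypothetical sentence length and hidden size
fused = torch.randn(1, dim, seq_len)    # characters with fused lexical information
unfused = torch.randn(1, dim, seq_len)  # characters without fused lexical information

# Kernel-size-1 ("point-by-point") convolutions transform each character
# position independently, preserving per-position locality.
proj_q = nn.Conv1d(dim, dim, kernel_size=1)
proj_k = nn.Conv1d(dim, dim, kernel_size=1)
q = proj_q(fused).squeeze(0).t()    # (seq_len, dim)
k = proj_k(unfused).squeeze(0).t()  # (seq_len, dim)

# Row-normalized attention score matrix relating the two character views,
# analogous to the matrices visualized in Figure 6.
scores = torch.softmax(q @ k.t() / dim ** 0.5, dim=-1)
print(scores.shape)  # torch.Size([8, 8])
```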
Table 1. Dataset statistics.
| Statistic | Ontonotes 4.0 | MSRA | Resume | Weibo |
| --- | --- | --- | --- | --- |
| Train | 15,778 | 46,675 | 3821 | 1350 |
| Dev | 4301 | 4376 | 463 | 270 |
| Test | 4346 | 4376 | 477 | 270 |
| Entity types | 4 | 3 | 8 | 4 |
| Char_avg | 36.92 | 45.87 | 32.15 | 54.37 |
| Word_avg | 17.59 | 22.38 | 24.99 | 21.49 |
| Entity_avg | 1.15 | 1.58 | 3.48 | 1.42 |
| Conflict lexicon | 48,766 | 38,809 | 7583 | 2368 |
Table 2. Hyperparameters for each dataset.
| Hyper-parameter | Ontonotes 4.0 | MSRA | Resume | Weibo |
| --- | --- | --- | --- | --- |
| epoch | 100 | 100 | 100 | 100 |
| head | 8 | 8 | 12 | 12 |
| head_dim | 20 | 20 | 16 | 20 |
| batch | 10 | 6 | 8 | 8 |
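For reproduction purposes, the per-dataset settings in Table 2 can be collected into a single configuration mapping. The dictionary layout and key names below are our own convention; only the values come from the table.

```python
# Values transcribed from Table 2; the structure is one possible layout.
HYPERPARAMS = {
    "Ontonotes 4.0": {"epoch": 100, "head": 8,  "head_dim": 20, "batch": 10},
    "MSRA":          {"epoch": 100, "head": 8,  "head_dim": 20, "batch": 6},
    "Resume":        {"epoch": 100, "head": 12, "head_dim": 16, "batch": 8},
    "Weibo":         {"epoch": 100, "head": 12, "head_dim": 20, "batch": 8},
}
```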
Table 3. Experimental results on the Weibo and Resume datasets (%).
| Model | Weibo P | Weibo R | Weibo F1 | Resume P | Resume R | Resume F1 |
| --- | --- | --- | --- | --- | --- | --- |
| FGN | 69.02 | 73.65 | 71.25 | 96.49 | 97.08 | 96.79 |
| Token-Relation | 72.82 | 66.02 | 69.62 | 96.01 | 96.50 | 96.36 |
| LR-CNN | - | - | 59.92 | - | - | 95.11 |
| PLTE+BERT | 72.00 | 66.67 | 69.23 | 96.16 | 96.75 | 96.45 |
| SLK-NER | 61.80 | 66.30 | 64.00 | 95.20 | 96.40 | 95.80 |
| BERT+FLAT | - | - | 68.55 | - | - | 95.86 |
| BERT+FLAT_this | 68.60 | 69.66 | 69.13 | 95.68 | 96.56 | 96.12 |
| SISF | 71.32 | 73.20 | 72.25 | 96.75 | 96.93 | 96.84 |
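As a quick sanity check (ours, not part of the paper), the reported F1 values are consistent with the harmonic mean of precision and recall; for example, SISF on Weibo:

```python
def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * p * r / (p + r)

# SISF on Weibo from Table 3: P = 71.32, R = 73.20
print(round(f1(71.32, 73.20), 2))  # 72.25, matching the reported value
```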
Table 4. Experimental results on the MSRA and Ontonotes 4.0 datasets (%).
| Model | MSRA P | MSRA R | MSRA F1 | Ontonotes 4.0 P | Ontonotes 4.0 R | Ontonotes 4.0 F1 |
| --- | --- | --- | --- | --- | --- | --- |
| FGN | 95.45 | 95.81 | 95.64 | 82.61 | 81.48 | 82.04 |
| Token-Relation | 96.08 | 96.18 | 96.13 | 82.57 | 83.99 | 83.28 |
| LR-CNN | - | - | 93.71 | - | - | 74.45 |
| PLTE+BERT | 94.91 | 94.15 | 94.53 | 79.62 | 81.82 | 80.60 |
| SLK-NER | - | - | - | 77.90 | 82.20 | 80.20 |
| BERT+FLAT | - | - | 96.09 | - | - | 81.82 |
| BERT+FLAT_this | 95.94 | 96.00 | 95.97 | 80.66 | 82.76 | 81.70 |
| SISF | 96.69 | 95.99 | 96.34 | 80.21 | 85.51 | 82.77 |
Table 5. F1 scores (%) on each dataset after reducing the weights of conflicting words.
| Model | Ontonotes 4.0 | Resume | MSRA | Weibo |
| --- | --- | --- | --- | --- |
| SISF | 82.77 | 96.84 | 96.34 | 72.25 |
| BERT+FLAT+Bi-LSTM | 82.45 | 96.43 | 96.14 | 70.89 |
| BERT+FLAT_this | 81.82 | 96.12 | 95.97 | 69.13 |
| BERT+FLAT | 81.82 | 95.86 | 96.09 | 68.55 |
Table 6. Experimental results (F1, %) comparing Bi-LSTM and CNN variants.
| Model | Ontonotes 4.0 | Weibo |
| --- | --- | --- |
| BERT+FLAT+Bi-LSTM | 82.45 | 70.89 |
| BERT+FLAT+CNN | 82.21 | 69.35 |
| BERT+FLAT | 81.82 | 68.55 |
| LR-CNN | 74.45 | 59.92 |