Article

Category Mapping of Emergency Supplies Classification Standard Based on BERT-TextCNN

1 School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China
2 School of Management Science and Engineering, University of Jinan, Jinan 250024, China
3 College of Engineering, Peking University, Beijing 100871, China
* Author to whom correspondence should be addressed.
Systems 2024, 12(9), 358; https://doi.org/10.3390/systems12090358
Submission received: 12 August 2024 / Revised: 5 September 2024 / Accepted: 6 September 2024 / Published: 10 September 2024
(This article belongs to the Section Supply Chain Management)

Abstract

In recent years, the escalation in emergency occurrences has underscored the pressing need for expedient responses in delivering essential supplies. Efficient integration and precise allocation of emergency resources under joint government–enterprise stockpiling models are pivotal for enhancing emergency response effectiveness and minimizing economic repercussions. However, current research predominantly focuses on contract coordination and cost-sharing within these joint reserve modes, overlooking significant discrepancies in emergency supply classification standards between government and enterprise sectors, as well as the asymmetry in cross-sectoral and cross-regional supply information. This oversight critically impedes the timeliness and accuracy of emergency supply responses. In practice, manual judgment has been used to match the same materials under differing classification standards between government and enterprise reserves, but this approach is inefficient and prone to high error rates. To mitigate these challenges, this study proposes a methodology leveraging the BERT pre-trained language model and the TextCNN neural network to establish a robust mapping relationship between these classification criteria. The approach involves abstracting textual representations of both taxonomical classes, generating comparable sentence vectors via average pooling, and calculating cosine similarity scores to facilitate precise classification mapping. Illustrated with China’s Classification and Coding of Emergency Supplies standard and the Global Product Classification standard, empirical validation on annotated data demonstrates the BERT-TextCNN model’s exceptional accuracy of 98.22%, surpassing other neural network methodologies such as BERT-CNN, BERT-RNN, and BERT-BiLSTM. This underscores the potential of advanced neural network techniques in enhancing emergency supply management across diverse sectors and regions.

1. Introduction

In recent years, global emergencies have become more prevalent, presenting a substantial risk to the survival and development of humanity. Data from the Emergency Events Database (EM-DAT) indicate that over 22,000 major emergencies occurred globally from 1990 to 2020, leading to direct economic losses amounting to trillions of dollars. Events such as the 2015 Nepal earthquake, Hurricane Harvey in the United States in 2017, the 2018 Indonesia earthquake and tsunami, the 2018 landslide-dammed barrier lakes on China’s Jinsha and Yarlung Tsangpo rivers, and the COVID-19 pandemic that broke out in 2019 posed serious threats to national economies and to the safety of life and property, generating demands for emergency supplies that are both large in volume and diverse in variety [1]. Because of the uncertainty of emergencies and the severity of disasters, it is often difficult to respond quickly to emergency demands by relying solely on the limited variety of emergency materials stockpiled by governments, so governments must mobilize the material resources of businesses and society to provide collaborative relief [2]. In this context, the government–enterprise joint reserve model has emerged as a contemporary research hotspot.
The emergency supplies joint reserve mode (ESJRM) [3] coordinates and dispatches supplies across sectors, regions, and disasters. Although the problem of resource scarcity has been alleviated through contractual coordination [4] and cost-sharing [5], governments and enterprises adopt different classification standards for emergency supplies because their purposes and demands differ; these standards differ in the number and depth of their classification levels, so emergency supplies often cannot be “found or deployed”. Governments generally classify emergency supplies according to their purpose: examples include the US Federal Emergency Management Agency’s (FEMA) Authorized Equipment List (AEL) and the Interagency Advisory Board’s (IAB) Standardized Equipment List (SEL) [6], Japan’s Emergency Supplies Reserve and Rotation System (ESSRS) [7], and Australia’s Overseas Disaster Rescue Plan (ODRP), which sets out detailed resources for relief supplies [8]; the Chinese government adopts the national standard GB/T 38565-2020 Classification and Coding of Emergency Supplies [9] (hereinafter referred to as GB/T 38565). Enterprises generally classify their supplies around commerce and trade demands, using common product standards such as the Global Product Classification (hereinafter referred to as GPC) [10] and the United Nations Standard Products and Services Code (hereinafter referred to as UNSPSC) [11]. However, because the objectives and purposes of governments and enterprises differ, it is difficult for either side to adopt, or to build, a new unified supply classification standard.
With the advancement of artificial intelligence and other technologies, constructing a mapping relationship between the emergency supplies classification standard and general supplies classification standards can become more efficient, precise, and convenient, enabling government and enterprises to share information on jointly reserved supplies and thereby addressing the problems above. The mapping of supply classification standards is a taxonomy category mapping problem, which has been a research hotspot in natural language processing (NLP). Traditional machine learning algorithms, such as NB [12], KNN [13], and SVM [14], rely heavily on manually engineered features and generalize poorly. Deep learning methods based on neural networks, such as RNN [15], LSTM [16], CNN [17], and TextCNN [18], are preferred for their powerful feature extraction capabilities and achieve good mapping results. Pre-trained language models such as BERT [19] can produce finer-grained, dynamic word vectors than classic word vector models such as Word2vec [20] and TF-IDF [21].
As a result, several researchers have combined BERT with neural networks, such as BERT-RNN [22] and BERT-CNN [23], to improve performance on domain-specific text categorization and mapping tasks. Category mapping for emergency supplies classification involves data sparsity and a strong reliance on context, yet little research has applied such combined approaches to mapping between emergency supplies classification standards.
Therefore, this study uses BERT’s sophisticated semantic extraction to characterize the full-text features of supply classes and TextCNN convolutional layers to extract additional local features, on the expectation that the combination will outperform either network working alone. The purpose of this study is to propose a novel combination of the BERT and TextCNN networks for more accurate category mapping between two separate supply classification standards, and to provide technical support for the subsequent development of a collaborative government–enterprise reserve supply information exchange system.
The rest of this paper is organized as follows. Section 2 presents the literature review. Section 3 constructs an algorithmic model for category mapping between emergency supplies classification standards. Section 4 details the experimental methodology employed in the model. Section 5 reports the experimental results and analyzes them in depth. The paper closes with conclusions and discussion, including the significance of the study, its limitations, and future research directions.

2. Literature Review

2.1. Analysis of Emergency Supplies Classification Standard for Joint Government–Enterprise Reserve

In the government–enterprise joint reserve model, the government cooperates with several enterprises and scientifically determines the reserve varieties, scale, and structure [24]; this joint reserve has become a hot research direction in the academic community. Research focuses mainly on determining supplier selection [25], emergency reserve quantity [26], cooperation period [27], and benefit distribution [28] between the government and enterprises by establishing framework agreements [29,30], quantity flexibility contracts [31], option contracts [32,33], incentive contracts [34], and game models [35]. In terms of research themes, these studies all address cooperation between governments and enterprises but do not address how to match and share information on emergency supplies, nor how to operate quickly and efficiently once cooperation is in place.
Government–enterprise cooperation contracts provide incentives to cope with uncertain demand for supplies [36], but the problem of unifying the classification and labeling of emergency supplies has been relatively neglected [37,38], which often means the two parties cannot locate the supplies even after signing a contract. Because governments and enterprises make decisions independently and serve different objectives, the two parties inevitably adopt different classification labels for supplies. Take the situation in China as an example, as shown in Table 1: the Chinese government’s emergency departments follow the national standard GB/T 38565-2020 Classification and Coding of Emergency Supplies [9], while enterprises mainly focus on demands such as statistics and trade and adopt different classification standards according to their purposes and service objects, such as the Global Product Classification (GPC) and other common domestic and international standard systems [10]. Each supplies classification standard adopted by the government and enterprises thus organizes and manages supplies according to a different knowledge system, which makes information sharing and interoperability between government and enterprise supply systems difficult in the joint reserve of emergency supplies and, in turn, affects the timeliness of relief. Establishing a mapping between the classification standards of the government purchaser and the enterprise supplier is therefore of great significance for matching demand and supply information in joint government–enterprise reserves and for cross-retrieval and sharing between the organizational systems of the two sides.
The GPC standard is the goods classification standard adopted by the Article Numbering Center of China (GS1 China): all goods retailed and listed by Chinese enterprises must be licensed by GS1 China and obtain a GPC classification code before being assigned a commodity barcode. GS1 China, an affiliate of the State Administration for Market Regulation, is in charge of organizing, coordinating, and administering article numbering and Auto-ID work throughout China and represents China in GS1 (Global Standard 1). Therefore, this paper takes China as an example and selects the national standard GB/T 38565 adopted by the Chinese government and the globally applicable GPC standard adopted by enterprises as its objects, in order to realize the mapping between the supply classification standards adopted by the government and by enterprises, respectively.

2.2. Taxonomy Category Mapping

A taxonomy is a classification system with a hierarchical structure, organized according to different contents and attributes. Although the hierarchical structures and compilation principles of different taxonomies vary greatly, their basic principles and purposes are the same: each is a series of markers expressing concepts and conceptual relationships, compiled to improve retrieval efficiency. Different taxonomies therefore share a certain similarity in conceptual expression, so a mapping relationship can be established between them [40]. Category mapping refers to the process of establishing links between the classification codes of different knowledge classification systems. Supplies classification standard systems are classifications established using different knowledge systems, so the mapping of the emergency supplies classification standards for the joint reserve of government and enterprises can be regarded as a category mapping problem between multiple classifications. As shown in Table 2, columns 1 and 2 give the classification codes and category names of supplies related to “drinking water” in GB/T 38565, and columns 3 and 4 give the classification codes and category names of supplies related to “water” in GPC. Both classification standards have a strict hierarchical structure, and each level has a classification code.
Category mapping between taxonomies can be performed by manual labeling or automatic mapping. Although manual labeling guarantees accuracy to a certain extent, it has high labor costs and is strongly subjective, which is not conducive to constructing mapping relationships between large numbers of categories in two taxonomies. With the development of computer technology, automatic mapping methods have made great progress, and they can be broadly classified into four types. (1) Methods based on co-occurrence [41]: when the same supplies are labeled with the classification codes of two classification standards, a relationship can be established between those codes. (2) Methods based on category similarity [42,43]: each entry of a classification standard is described by several subject words or sentences, and the matching degree of two classification codes can be obtained by calculating the similarity of the words and sentences between the categories. (3) Methods based on cross-searching [44]: collect the supplies carrying a certain classification code a under classification standard A, use the keywords of this collection to retrieve the category names of another classification standard B, and count the high-frequency classification codes “b1, b2, b3, …, bn” among the retrieved results, so that an association between them and code a can be established; however, the accuracy and coverage of this mapping method are not high, and it often yields one-to-many relationships. (4) Methods based on machine learning [45]: the text of a category labeled with a certain classification code a is used to train a binary classifier for that category, and the classifier is then applied to the corpus labeled with categories “b1, b2, b3, …, bn” of another classification standard to judge whether category a can be mapped to each of “b1, b2, b3, …, bn”. Later, some scholars combined manual annotation with automatic mapping by applying the idea of crowdsourcing to taxonomy category mapping: crowdsourced users take the automatic mapping results as a preliminary mapping between categories and re-label manually on that basis [46]. The mapping efficiency and accuracy of this approach vary with the labeling differences among crowdsourcing users, and the controllability of the mapping is poor.
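As a minimal illustration of the category-similarity idea in method (2), the sketch below scores the match between two category description texts by the overlap of their words (Jaccard similarity); the description strings are hypothetical, and this measure is only one of many that could be used.

```python
import re

def jaccard_similarity(desc_a: str, desc_b: str) -> float:
    """Word-overlap (Jaccard) similarity between two category description texts."""
    words_a = set(re.findall(r"\w+", desc_a.lower()))
    words_b = set(re.findall(r"\w+", desc_b.lower()))
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

# Hypothetical category descriptions from two classification standards.
print(jaccard_similarity("Drinking water; bottled water", "Water; still water"))
```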

2.3. Category Mapping Based on Deep Learning

Traditional machine learning methods were initially used for text classification and mapping. NB [12] was the first model applied to text classification tasks. Subsequently, general-purpose models such as KNN [13], SVM [14], RF [47], DT [48], the center vector method, and AdaBoost [49] were widely used for text classification mapping. However, traditional machine learning performs only shallow feature extraction: it ignores the relationships between words and between sentences, has an insufficient understanding of the semantics, structure, sequence, and context behind the text, handles and generalizes high-dimensional data poorly, and yields low classification mapping performance because of the limited representational ability of the models.
Pre-trained language models with strong semantic understanding, such as GPT [50,51], BERT [52,53], and ELMo [54], have gained wide attention in the field of natural language processing (NLP). They are pre-trained on massive monolingual texts to obtain a general-purpose language model [55], which is then applied to downstream tasks and fine-tuned according to the characteristics of those tasks. This pre-training-plus-fine-tuning approach not only greatly improves the performance of downstream tasks but also drastically reduces the size of the annotated corpus they require. ERNIE (Enhanced Representation through kNowledge IntEgration), released in 2019, is a pre-training model that improves on BERT [56]. ERNIE masks entities and phrases during pre-training to acquire prior semantic knowledge about them, which makes it better suited to Chinese noun-phrase recognition.
With the development of deep learning, many neural network models have been successfully applied to text sequence modeling tasks. The recurrent neural network (RNN) [15] captures the evolution of a sequence through recursive computation, which makes it easy to exploit the positional information of all words in a text classification task, but it suffers from exploding or vanishing gradients [16]. LSTM effectively alleviates the vanishing-gradient problem caused by the successive multiplications in RNNs [57] and provides the basis of many text classification models. A GRU with joint attention uses the attention mechanism to find the keywords in a text for Chinese text classification [58]. The convolutional neural network (CNN) [17] was initially proposed for image classification because its convolutional filters extract features from images. Building on CNN networks, TextCNN was proposed as an unbiased convolutional model for text [18]. With a single convolution layer it identifies discriminative phrases in the max-pooling layer and learns hyperparameters other than the word vectors while keeping the word vectors static, giving it a simple network structure, a small number of parameters, low computational cost, and fast training [59]. Later, the Transformer [60] was successfully applied to the text sequence modeling task. Algorithms combining BERT with neural networks have since received increasing attention; for example, BERT-CNN has been proposed for multi-label text classification [61] and BERT-DCNN for sentiment analysis of COVID-19 tweets [62]. Therefore, this paper proposes a BERT-based TextCNN method to construct a category mapping model for GB/T 38565 and GPC, combining the advantages of both the BERT model and TextCNN to improve the mapping performance.

3. Construction of a BERT-TextCNN Category Mapping Model

3.1. Introduction of the BERT Pre-Training Model

BERT [19] is a pre-trained contextual language model characterized by its deep bi-directional encoded representations. It employs bi-directional pre-training to mitigate the limitations of the uni-directional prediction used in the Generative Pre-Training (GPT) model. Consequently, this study selects BERT for the conversion of text into word vectors within the word embedding layer. The fundamental architecture of BERT is illustrated in Figure 1 and comprises three primary components: the word vector encoding layer, the multi-head self-attention mechanism, and the position-wise fully connected feed-forward networks. For simplicity, the figure omits the normalization layers that follow the multi-head self-attention mechanism and the position-wise feed-forward networks. The notation N on the left side denotes the number of stacked Transformer encoder layers. The output of the word vector encoding layer for a given sentence $S = \{w_1, w_2, \ldots, w_n\}$ is processed through N integrated layers of the multi-head self-attention mechanism and the position-wise fully connected feed-forward network, resulting in a deep abstract representation $O = \{o_1, o_2, \ldots, o_n\}$ of each word within the sentence.
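The sketch below illustrates how such contextual representations can be obtained in practice, assuming a HuggingFace Transformers-style interface and using the publicly available bert-base-chinese checkpoint as a stand-in for the Chinese pre-trained model fine-tuned in this study; the sentence is a hypothetical category description text.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# "bert-base-chinese" is a publicly available stand-in; the paper fine-tunes a Chinese
# ERNIE checkpoint with 12 encoder layers, 12 attention heads, and 768 hidden units.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")

sentence = "生活救助; 生活用水; 饮用水"   # hypothetical category description text
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

O = outputs.last_hidden_state      # (1, n, 768): deep representation o_1 ... o_n of each token
sentence_vec = O.mean(dim=1)       # (1, 768): average-pooled sentence vector
print(O.shape, sentence_vec.shape)
```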

3.2. Text Convolutional Neural Networks

TextCNN [63], a variant of convolutional neural networks, employs a sliding window approach with convolutional kernels of varying scales to sample text, thereby facilitating the extraction of local features of diverse sizes and enabling the capture of different levels of semantic information within the text [64]. Each convolutional kernel is characterized by distinct sliding window dimensions and quantities, which allows the model to extract features across multiple scales, a factor that is crucial for effective category mapping. Furthermore, the convolutional kernels in TextCNN are designed to be shared across the entire input, which contributes to a reduction in the number of parameters, simplifies the model’s complexity, and enhances its generalization capabilities. The TextCNN architecture demonstrates both high computational efficiency and a robust capacity to capture semantic features at various scales and levels within textual data, rendering it particularly suitable for the analysis of large-scale and lengthy text datasets. The structural representation of the TextCNN model is illustrated in Figure 2.
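A minimal sketch of the sliding-window convolution described above follows: one Conv1d per window width slides over a batch of word vectors, and max-over-time pooling keeps the strongest response of each filter. The tensor sizes and filter counts are illustrative only.

```python
import torch
import torch.nn as nn

batch, seq_len, emb_dim, n_filters = 4, 32, 768, 64
x = torch.randn(batch, seq_len, emb_dim)            # word vectors for a batch of sentences

# One Conv1d per window width: each kernel spans k consecutive word vectors (an n-gram).
convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, kernel_size=k) for k in (2, 3, 5))

features = []
for conv in convs:
    c = torch.relu(conv(x.transpose(1, 2)))         # (batch, n_filters, seq_len - k + 1)
    features.append(c.max(dim=2).values)            # max-over-time pooling -> (batch, n_filters)

multi_scale = torch.cat(features, dim=1)            # (batch, 3 * n_filters) multi-scale features
print(multi_scale.shape)
```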

3.3. A Category Mapping Model of BERT-TextCNN

The architecture of the BERT-TextCNN-based model for classifying emergency supplies into standard categories is illustrated in Figure 3. The BERT model serves as an effective approach for processing textual data, adeptly capturing the contextual relationships within sentences. In contrast, the TextCNN model employs multiple convolutional kernels of varying sizes to extract critical information from sentences, akin to utilizing multiple window-sized n-grams, thereby enhancing the ability to identify local relevance. Consequently, this study integrates both models to leverage comprehensive textual information while effectively capturing local features. The detailed structure of the model can be delineated into six distinct components.
(1)
To prepare the manually annotated corpus for processing, it is necessary to eliminate punctuation and special symbols. Subsequently, the BERT layer converts the input Chinese text into a format compatible with the pre-trained ERNIE model by utilizing the Tokenizer. This component is tasked with segmenting sentences into words or subwords and assigning a distinct numerical identifier to each word or subword.
(2)
The output generated by the Tokenizer serves as the input for the ERNIE model, where it is processed by the multi-layer TransformerEncoder inherent to the model. Each layer comprises a multi-head self-attention mechanism alongside a feed-forward neural network, facilitating the model’s ability to incrementally extract abstract feature representations and attain a more profound comprehension of the text. Ultimately, the word sequences input into the model are transformed into a uniform-length matrix of word vectors, encapsulating both semantic and positional information about the words. These processed data are subsequently directed to the TextCNN layer.
(3)
The TextCNN layer is designed to extract significant features from textual data. It employs window widths of 2, 3, 5, and 1. The Rectified Linear Unit (ReLU) function serves as the activation mechanism, while the parameter (padding = same) is implemented to maintain consistent output vector dimensions despite the varying window sizes during the convolution process. This approach ensures that the four resultant outputs retain the same dimensions as the input, thereby facilitating subsequent operations. Additionally, the utilization of four distinct window widths allows for a more comprehensive understanding of local features within the text.
(4)
The output vectors generated by the TextCNN are subsequently processed through a mean pooling layer, which is configured with a window size of 2 and a stride of 2. This configuration effectively reduces the dimensionality of the final sentence vectors by fifty percent while simultaneously preserving certain contextual relationships.
(5)
In the fully connected layer, following the mean pooling operation, the outputs produced by the TextCNN’s four window sizes are concatenated.
(6)
The final layer of the model is a softmax layer dedicated to text classification predictions. This layer receives input from the fully connected layer and performs classification by evaluating the relative magnitudes of the values within the output dimension vector, thereby facilitating the determination of category alignment.
In the process of categorizing emergency supplies according to a classification standard, the textual descriptions provided as input to the model undergo a series of transformations. Initially, the text is processed through a word embedding layer, followed by a convolutional layer, a pooling layer, and ultimately a fully connected layer, culminating in the generation of category mapping results. When the textual description about the classification of supplies is input into the mapping model, it is segmented into token1_ids and segment1_ids via the Tokenizer. These identifiers are subsequently fed into the BERT model, which produces an embedded representation known as seq_output. This seq_output then serves as the input for the TextCNN architecture. Through a sequence of operations involving convolution, activation functions, pooling, and fully connected layers, the model generates an output from the output layer, which is ultimately converted into classification predictions for the two texts. The BERT-TextCNN category mapping algorithm is illustrated in Figure 4.
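The six components above can be assembled as the hedged PyTorch sketch below. The checkpoint name, filter count, and classifier dimensions are assumptions made for illustration (the paper uses a fine-tuned Chinese ERNIE checkpoint); the convolution widths (2, 3, 5, 1), same-padding, ReLU activation, mean pooling with window 2 and stride 2, concatenation, and softmax output follow the description above.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class BertTextCNN(nn.Module):
    """Hedged sketch of the BERT-TextCNN mapping model outlined in steps (1)-(6)."""

    def __init__(self, checkpoint="bert-base-chinese", max_len=256, n_filters=64, n_classes=2):
        super().__init__()
        # Steps (1)-(2): a pre-trained Transformer encoder (ERNIE in the paper) produces
        # a uniform-length matrix of contextual word vectors (seq_output).
        self.encoder = AutoModel.from_pretrained(checkpoint)
        hidden = self.encoder.config.hidden_size                        # 768
        # Step (3): convolution kernels with window widths 2, 3, 5, and 1, padding kept "same".
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, kernel_size=k, padding="same") for k in (2, 3, 5, 1)
        )
        # Step (4): mean pooling with window size 2 and stride 2 halves the sequence length.
        self.pool = nn.AvgPool1d(kernel_size=2, stride=2)
        self.dropout = nn.Dropout(0.1)
        # Steps (5)-(6): concatenated features feed a fully connected classification layer.
        self.classifier = nn.Linear(4 * n_filters * (max_len // 2), n_classes)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        seq_output = self.encoder(input_ids=input_ids, attention_mask=attention_mask,
                                  token_type_ids=token_type_ids).last_hidden_state
        x = seq_output.transpose(1, 2)                                   # (batch, hidden, seq_len)
        feats = [self.pool(torch.relu(conv(x))) for conv in self.convs]
        concat = torch.cat(feats, dim=1).flatten(1)                      # concatenated local features
        return self.classifier(self.dropout(concat))                     # logits for the softmax layer

# Hypothetical usage on a GB/T 38565 description paired with a GPC description.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = BertTextCNN()
enc = tokenizer("生活救助; 饮用水", "Beverages; Water", return_tensors="pt",
                truncation=True, max_length=256, padding="max_length")
logits = model(enc["input_ids"], enc["attention_mask"], enc.get("token_type_ids"))
probs = torch.softmax(logits, dim=-1)      # step (6): mapping vs. non-mapping probabilities
```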

3.4. Evaluation Indicators

This study employs accuracy, precision, recall, and F1-Score as metrics for evaluation, with the confusion matrix of the prediction outcomes presented in Table 3. In this context, TP refers to instances where both the model’s predicted mapping and the actual mapping are positive, TN indicates instances where both the model’s predicted mapping and the actual mapping are negative, FP signifies instances where the actual mapping is negative while the model predicts it as positive, and FN represents instances where the actual mapping is positive but the model predicts it as negative.
(1)
Accuracy is the ratio of correctly predicted samples to the total number of predictions, and the formula is as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
(2)
Precision is the proportion of samples correctly predicted as positive out of all samples predicted as positive, and the formula is as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
(3)
Recall is the proportion of actual positive samples that the model correctly identifies as positive, and the formula is as follows:
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
(4)
The F1-Score is the harmonic mean of precision and recall, weighting both equally, with the following formula:
$$F1\text{-}\mathrm{Score} = \frac{2}{\frac{1}{\mathrm{Precision}} + \frac{1}{\mathrm{Recall}}} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
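The four metrics can be computed directly from the confusion-matrix counts, as in the short helper below; the example counts are taken from the BERT-TextCNN confusion matrix discussed in Section 5 (191 TP and 85 TN out of 281 test samples), with the FP and FN counts inferred from the reported rates for illustration only.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, precision, recall, and F1-Score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# 191 TP and 85 TN out of 281 test samples, as reported for BERT-TextCNN in Section 5;
# the FP/FN split (2 and 3) is inferred from the reported rates and shown for illustration.
print(classification_metrics(tp=191, tn=85, fp=2, fn=3))
```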

3.5. Cross-Entropy Loss Function

During the training of the model, cross-entropy loss [65] is employed to optimize features within cosine space by creating hyperplanes that delineate distinct classes of features across various subspaces. This loss function quantifies the divergence between the model’s predicted probability distribution and the actual labels. Given that the GB/T 38565 training set utilized in this study comprises three categories, the selection of the cross-entropy loss function is particularly appropriate for addressing classification problems involving C categories. Furthermore, the parameter weights are designated as a one-dimensional tensor, with specific weights assigned to each category, thereby enhancing the effectiveness of the model in scenarios characterized by unbalanced training sets.
$$L_c = -\sum_{i=1}^{N} q_i \log \frac{e^{W_i^{T} f}}{\sum_{k=1}^{N} e^{W_k^{T} f}}, \qquad q_i = \begin{cases} 0, & y \neq i \\ 1, & y = i \end{cases}$$
where N represents the total number of categories in the training dataset, y denotes the true label, f denotes the feature vector fed into the fully connected layer, and W_i signifies the weight vector of the fully connected layer for category i.
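A hedged sketch of how per-category weights can be passed to the cross-entropy loss is shown below; the weight values and the binary mapping/non-mapping setup are illustrative, as the actual weights would be derived from the category frequencies of the annotated corpus.

```python
import torch
import torch.nn as nn

# Illustrative per-category weights for an unbalanced training set; in practice they would
# be chosen from the category frequencies in the annotated corpus.
class_weights = torch.tensor([0.8, 1.2])             # e.g. non-mapping vs. mapping pairs

criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(16, 2, requires_grad=True)      # model outputs for a batch of 16 pairs
labels = torch.randint(0, 2, (16,))                  # true labels: 0 = no mapping, 1 = mapping
loss = criterion(logits, labels)                     # weighted cross-entropy L_c
loss.backward()
```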

4. Experimental Methods

4.1. Manual Labeling of the Corpus

Previous studies have primarily conducted experimental evaluations on select categories from two taxonomies, resulting in a relatively homogeneous corpus that limits the comprehensive assessment of the model’s generalization capabilities. The GB/T 38565 taxonomy comprises three categories totaling 739 items, while the GPC taxonomy encompasses 44 categories with a total of 5250 items, each representing distinct domains, as illustrated in Figure 4. To develop a mapping corpus with comprehensive coverage, this study utilizes the categories from the GB/T 38565 taxonomy as a reference point. Three experts were engaged to manually annotate the mappings between the GB/T 38565 and GPC categories. Subsequently, a single expert undertook the task of uniformly correcting and validating the annotated corpus, which includes both one-to-one and one-to-many mapping relationships. Ultimately, a total of 798 fully mapped category pairs, encompassing 200 categories from the GB/T 38565 taxonomy, were established for model testing.
Figure 5a presents the statistical distribution of categories within the three classifications of GB/T 38565. There are significant disparities in the number of categories across these classifications. Employing a simple stratified sampling approach to extract categories for manual mapping from the three classifications may therefore result in an uneven distribution of sampled categories. To preserve the differences in the number of categories among the classifications while enhancing the model’s generalization capability when the corpus is used as a training set, this study adopts the random polynomial sampling over diverse languages implemented in the cross-language pre-training model XLM [66]. The polynomial sampling formula used to determine the number of categories sampled from the three classifications of GB/T 38565 is as follows:
$$q_i = \frac{p_i^{\alpha}}{\sum_{j=1}^{M} p_j^{\alpha}}, \qquad p_i = \frac{n_i}{\sum_{k=1}^{M} n_k}$$
where the parameter α regulates the sampling ratio (the reference value in XLM is α = 0.5), M denotes the number of GB/T 38565 classifications, ni represents the number of categories contained in the i-th classification, pi indicates the proportion of categories belonging to the i-th classification, and qi reflects the sampling proportion for the i-th classification.
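As a worked sketch of this formula with α = 0.5, the snippet below computes the smoothed sampling proportions for three classifications; the per-classification category counts are hypothetical placeholders (the real counts appear in Figure 5 and Table 4).

```python
import numpy as np

alpha = 0.5                               # smoothing exponent, following the XLM reference value
n = np.array([500, 180, 59])              # hypothetical category counts of the three
                                          # GB/T 38565 classifications (739 categories in total)

p = n / n.sum()                           # p_i: share of categories in classification i
q = p**alpha / (p**alpha).sum()           # q_i: smoothed sampling proportion

samples = np.round(q * 200).astype(int)   # allocate the 200 manually labelled categories
print(q, samples)
```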
The final count of categories derived from the application of the polynomial sampling method for each GB/T 38565 category is presented in Table 4. It is evident that employing simple stratified sampling results in a highly disproportionate distribution of samples across the GB/T 38565 categories, which is detrimental to the effective training of subsequent models. This issue of imbalance can be mitigated through the implementation of polynomial post-sampling techniques.
As illustrated in Table 2, both GB/T 38565 and GPC exhibit a rigorous hierarchical framework. The hierarchical structure of GB/T 38565 encompasses categories classified as Large, Medium, Small, and Fine, whereas the GPC structure is organized into Segment, Family, Class, and Brick. In the present study, a mapping corpus has been established between the large and fine categories of GB/T 38565, utilizing the polynomial sampling method for manual labeling from the large categories down to the fine categories. Additionally, a mapping corpus has been constructed between the broad categories and subcategories of GB/T 38565, with selections made randomly from the broad categories and manually annotated to the subcategories. The classification levels of GPC are likewise determined through manual annotation. The composition of the manually annotated corpus is described below. For instance, to ascertain the GPC classification code corresponding to the GB/T 38565 classification code “1100401”, an expert identifies the GPC classification code “10000232” as the mapping target, based on the GPC category name associated with the GB/T 38565 classification name. Following this, the expert performs data cleaning by consolidating all category names from the preceding levels corresponding to the GB/T 38565 code “1100401” into a single sentence, separated by semicolons, and applies the same procedure for the GPC code “10000232”. Ultimately, the class description texts for both GB/T 38565 and GPC are generated, resulting in a mutually mapped corpus. It is noteworthy that the manually annotated corpus encompasses both one-to-one and one-to-many mapping relationships, as detailed in Table 5.
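The construction of one corpus entry can be sketched as follows; the hierarchy names shown are hypothetical placeholders, while the codes “1100401” and “10000232” are the ones used in the example above.

```python
def class_description(hierarchy_names: list[str]) -> str:
    """Join the category names of all preceding levels into one description sentence."""
    return "; ".join(hierarchy_names)

# Hypothetical hierarchy names for GB/T 38565 code "1100401" and GPC code "10000232".
gbt_text = class_description(["生活救助", "生活用水", "饮用水"])
gpc_text = class_description(["Beverages", "Water", "Drinking Water"])

corpus_pair = {"GBT_Code": "1100401", "GBT_Text": gbt_text,
               "GPC_Code": "10000232", "GPC_Text": gpc_text, "label": 1}
print(corpus_pair)
```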

4.2. Building the Datasets

Utilizing a manually annotated corpus comprising 798 entries, a dictionary known as GBT2GPC has been developed to document the associations between each GBT category and its initial occurrence within the GPC list. The next step involves iterating through this dictionary to generate positive samples for each key–value pair, assigning a label of (1) to these positive samples. To ensure a balanced dataset, four randomly selected negative samples—specifically, GPC Category Texts that do not correspond to the same category—are produced for each positive sample, with these negative samples being labeled as (0). Subsequently, the entire dataset is randomized and partitioned into training and testing sets at a ratio of 9:1, facilitating the subsequent training and evaluation of the model, as illustrated in Figure 6.
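A hedged sketch of this dataset construction is given below: one positive pair per annotated mapping, four randomly drawn negative pairs per positive, a shuffle, and a 9:1 split. The annotated_pairs placeholder stands in for the 798-entry corpus.

```python
import random

random.seed(42)

# Hypothetical annotated pairs; the real corpus holds 798 (GBT text, GPC text) entries.
annotated_pairs = [
    ("生活救助; 饮用水", "Beverages; Water; Drinking Water"),
    ("医疗防疫; 口罩", "Healthcare; Protective Masks"),
    ("工程机械; 挖掘机", "Machinery; Excavators"),
    ("通信广播; 卫星电话", "Communications; Satellite Phones"),
    ("照明设备; 应急灯", "Lighting; Emergency Lamps"),
]

# GBT2GPC records each GBT category text with its first annotated GPC category text.
GBT2GPC = {gbt: gpc for gbt, gpc in annotated_pairs}
all_gpc_texts = list(GBT2GPC.values())

samples = []
for gbt_text, gpc_text in GBT2GPC.items():
    samples.append((gbt_text, gpc_text, 1))                 # positive sample, label 1
    negatives = [t for t in all_gpc_texts if t != gpc_text]
    for neg in random.sample(negatives, 4):                 # four negatives per positive, label 0
        samples.append((gbt_text, neg, 0))

random.shuffle(samples)
split = int(len(samples) * 0.9)                             # 9:1 train/test split
train_set, test_set = samples[:split], samples[split:]
print(len(train_set), len(test_set))
```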

4.3. Experimental Setup

In the course of the experiment, the configuration of parameters is crucial to the efficacy of the final training model. The model is designed to accommodate a maximum sentence length of 256 characters; any annotated text exceeding this limit is truncated, while text shorter than 256 characters is padded with zeros. The BERT-based ERNIE [56] model, pre-trained on a substantial Chinese corpus including resources such as Baidu Encyclopedia and Baidu Library, is fine-tuned in training. This framework comprises a 12-layer Transformer encoder, 12 self-attention heads, and 768 hidden units. Following the hyperparameter recommendations outlined in the BERT literature [67], the following parameters were established: 20 epochs, a learning rate of 2 × 10−5, and a batch size of 16.
Given the brevity of the category target annotation corpus, smaller convolutional kernels were employed within the TextCNN layer. After identifying the optimal single convolutional kernel, further exploration of adjacent values revealed that a combination of kernels yielded superior results compared to the singular best kernel. Consequently, the selected combination of convolutional kernels is (2, 3, 5, 1). The ReLU activation function is utilized, with the Adam optimizer employed for optimization. The mean pooling strategy demonstrated enhanced performance, and a dropout mechanism with a rate of 0.1 was implemented to mitigate overfitting. To facilitate comparative analysis of model performance on the dataset, all other neural network models were configured with identical parameters to those of TextCNN. The experimental environment is detailed in Table 6.
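The training configuration described above can be summarized as the following sketch, which reuses the BertTextCNN class from the Section 3.3 sketch and runs one illustrative optimization step on a synthetic batch; the configuration values follow the text, while the synthetic inputs are placeholders.

```python
import torch

config = {
    "max_seq_length": 256,        # longer annotations truncated, shorter ones zero-padded
    "epochs": 20,
    "learning_rate": 2e-5,
    "batch_size": 16,
    "conv_kernel_sizes": (2, 3, 5, 1),
    "dropout": 0.1,
}

model = BertTextCNN(max_len=config["max_seq_length"])       # sketch from Section 3.3
optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
criterion = torch.nn.CrossEntropyLoss()

# One illustrative optimization step on a synthetic batch of token ids.
input_ids = torch.randint(1, 1000, (config["batch_size"], config["max_seq_length"]))
attention_mask = torch.ones_like(input_ids)
labels = torch.randint(0, 2, (config["batch_size"],))

optimizer.zero_grad()
logits = model(input_ids, attention_mask)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```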

5. Results and Analysis

5.1. Results

Table 7 presents the accuracy, precision, recall, and F1 scores for the GB/T 38565 and GPC category mapping across various models. Notably, the BERT-TextCNN approach introduced in this study achieves the highest accuracy at 98.22%, significantly surpassing other deep learning models optimized on the training dataset, including BERT-DSSM, BERT-S2Net, BERT-RNN, BERT-CNN, BERT-BiLSTM, and BERT-BiLSTM-CNN. This finding suggests that the BERT and TextCNN-based methodology proposed herein substantially enhances the accuracy of GB/T 38565 and GPC category mapping. Furthermore, the BERT-TextCNN model achieved the highest F1 score of 97.14%, indicating both the most accurate predictions and the strongest identification of positive samples among all models. Lastly, the accuracy of all seven models evaluated in this study exceeds 90%, which indirectly reflects the high quality of the manually annotated corpus and the significant improvement in the performance of the models trained on this dataset.
The performance comparison between BERT-S2Net and BERT-DSSM, both of which employ a two-tower language model architecture, reveals only a marginal difference of 0.36 percentage points. This minimal variance may be attributed to the incorporation of two deep neural network (DNN) structures (f1, f2) within both models, which possess nearly identical parameters, thereby yielding comparable performance outcomes. In the analysis of BERT-RNN and BERT-CNN models, the results demonstrate a notable consistency, likely due to their reliance on shared textual features derived from the pre-training of the foundational BERT model. This configuration enables the RNN to capture global information from text sequences, while the CNN is oriented toward local information, resulting in aligned mapping results.
When examining the performance of BERT-RNN, BERT-BiLSTM, and BERT-BiLSTM-CNN, BERT-BiLSTM achieves an accuracy of 97.86% with the highest recall of 100%. This notable performance can be attributed to the BiLSTM’s capacity to establish a more effective contextual relationship by processing information in both forward and backward directions, thereby enhancing its understanding of the deep semantics of the text. However, the introduction of a CNN layer on top of the BiLSTM may complicate the model, potentially leading to overfitting or increased training difficulty, which could elucidate why BERT-BiLSTM-CNN does not perform as well as BERT-BiLSTM. Furthermore, the evaluation metrics for the BERT-TextCNN model exceed those of the BERT-CNN model, likely due to the superior capability of CNNs in handling image features, while TextCNN is more proficient in efficiently capturing the textual features extracted by BERT, resulting in all its evaluation metrics surpassing 95%.

5.2. Analysis

Figure 7 illustrates the variations in accuracy, precision, recall, and F1 score for the BERT-TextCNN model presented in this study, as evaluated on both the training and test datasets. The red dashed curve represents the training set, while the green solid curve denotes the testing set. The results indicate that the model achieves an accuracy exceeding 95% on the training set following the training process. After 20 epochs, the training accuracy reached 99.80%, with a corresponding testing accuracy of 98.22%. Throughout the training process, the model’s accuracy exhibited minor fluctuations within a narrow range of values after each training stage. Notably, the performance metrics for the test and training sets are closely aligned, with a maximum difference of no more than 2%. This observation suggests that the model possesses robust generalization capabilities and demonstrates a strong capacity to adapt to new data.
To further elucidate the performance of the models, we conducted a comparative analysis of the training outcomes across various models. Figure 8 illustrates the accuracy, precision, recall, and F1 scores associated with different models engaged in category mapping on the testing dataset. The findings indicate that as the number of epochs increases, the performance metrics for the seven models generally improve, suggesting that all models are capable of learning and adapting to the dataset’s features throughout the training process. Notably, the BERT-RNN, BERT-BiLSTM, BERT-CNN, BERT-BiLSTM-CNN, and BERT-TextCNN models exhibit exceptionally high performance from the outset, with all metrics surpassing 90%. In contrast, while the BERT-DSSM and BERT-S2Net models gradually approach the highest accuracy, their performance remains significantly inferior to that of the other models, indicating a need for enhancement in their ability to capture and learn from the dataset features. Furthermore, the performance metrics for all models tend to stabilize at higher epochs, particularly for the BERT-TextCNN model, which demonstrates robust adaptability and learning capacity regarding the dataset features. Concurrently, all models exhibit indications of overfitting as the number of epochs increases, particularly after 15 epochs, suggesting that the optimal number of epochs is influenced by both the dataset size and the model complexity.
In Table 3, the terms True Positive (TP) and True Negative (TN) represent the number of samples accurately classified by the classifier, and thus, the sum of TP and TN reflects the total number of correctly classified samples. Analyzing the confusion matrix presented in Figure 9, it is evident that the BERT-TextCNN model achieves the highest number of correct predictions, totaling 276 (comprising 191 TP and 85 TN) out of 281 test samples, thereby establishing it as the model with the superior performance and highest accuracy. Although the BERT-BiLSTM model reports zero false negatives, it exhibits a false positive rate of 2.14%, which is the highest among the evaluated models. In contrast, when considering both false positive and false negative rates, BERT-TextCNN demonstrates the lowest combined rate of 1.78% (0.71% false positives and 1.07% false negatives). This further substantiates that BERT-TextCNN not only attains the highest accuracy but also maintains a minimal incidence of false positive and false negative classifications, thereby minimizing the potential for erroneous mappings.
Figure 10 illustrates the variation in the loss function across various models throughout the training process on the dataset. The findings indicate that the loss values for all models exhibit a consistent decline as the number of epochs increases, suggesting that the models are capable of effectively learning the features of the category description text during training and are continuously enhancing their mapping performance. Notably, the BERT-TextCNN model demonstrates the lowest loss value within the dataset, signifying its superior generalization capability and stability, as well as its enhanced ability to extract both local and global features of the text through model fusion.

6. Conclusions and Discussion

6.1. Conclusions

Recent advancements in the research and implementation of collaborative government–enterprise reserve supplies have largely overlooked the establishment of standardized classification systems for emergency supplies. The absence of consistency in the classification standards utilized by both governmental bodies and enterprises hampers the efficiency of supply responses during coordinated relief operations, potentially resulting in significant adverse outcomes.
This study integrates the strengths of the BERT model, which excels in semantic abstraction, and the TextCNN model, known for its discriminative representation capabilities. The proposed approach, termed BERT-TextCNN, is applied to the task of mapping categories between an emergency supplies taxonomy and a general-purpose supplies taxonomy, facilitating the automatic alignment of these two classifications. Training, learning, and testing experiments are conducted using a high-quality, manually annotated corpus. When compared to six other models, including BERT-RNN and BERT-CNN, the BERT-TextCNN model demonstrates superior prediction accuracy and exhibits the highest level of stability among the models evaluated. The experimental findings indicate that: (1) the hybrid model introduced in this study effectively integrates the strengths of BERT and TextCNN, successfully capturing local correlations while preserving comprehensive textual information with notable accuracy and stability. (2) In comparison to the second-best BERT-BiLSTM model, the target model BERT-TextCNN improves the evaluation metrics of accuracy, precision, and F1 by 0.36%, 2.3%, and 0.61%, respectively, thereby proficiently accomplishing the task of category mapping between the supplies classification standards. (3) Each component of the target model is essential: the fine-tuned ERNIE (an enhanced pre-training model based on BERT) is particularly well-suited for the semantic representation of the emergency supplies classification standard data, thereby enhancing text comprehension, while the TextCNN module adeptly extracts significant feature information from the text and accurately identifies keywords, resulting in relatively precise category mapping.

6.2. Theoretical Significance

This study contributes to the field of supply chain management for emergency supplies by developing a novel methodology aimed at optimizing both performance and stability. The approach is grounded in established emergency supplies classification standards and an automated mapping knowledge base. Our method comprises three main components:
(1)
Dataset Pre-processing: We performed pre-processing on category description datasets aligned with two supply classification standards: the GB/T 38565 dataset, which includes three classes and 739 categories, and the GPC dataset, which consists of 44 classes and 5250 categories. Through random polynomial sampling, we manually labeled 200 categories from the GB/T 38565 dataset, creating a category mapping pair dataset with 798 pairs.
(2)
Semantic Representation Generation: We developed and fine-tuned a BERT-based word embedding model using the category mapping pair dataset to generate global semantic representations. These word vectors were then integrated into a TextCNN framework, which analyzes the semantic representations and extracts locally significant features. Our approach involves a comprehensive comparative analysis to evaluate the accuracy of each model’s category mapping.
(3)
Model Evaluation and Selection: We retained the highest-performing BERT-TextCNN model for further computations involving the inference dataset. This model enables the derivation of mappings between all categories from the GB/T 38565 dataset and the GPC categories.

6.3. Practical Significance

The practical implications of the proposed methodology are significant for addressing inefficiencies in emergency supply management. The BERT-TextCNN mapping model developed in this study offers a robust solution for both governmental and private entities, enabling the automated assessment of correspondence between two supply classification standards. This model can be seamlessly integrated into software applications to facilitate automated mapping between the GPC and GB/T 38565 standards.
From an application perspective, the model demonstrating optimal performance during experimental trials can be utilized to infer mappings for new datasets, thus aiding in the determination of relationships between novel supply classification standards. Additionally, due to the normative and stable nature of classification standards, the mapping results produced by this methodology are expected to remain valid over extended periods, pending any updates to the classification standards.
Furthermore, the code, model, and manually annotated corpus used in this research are made available for free use by other researchers (see Note 1), promoting further exploration and application of the methodology.

6.4. Limitations

The research process is characterized by several significant limitations. Firstly, the labeling of datasets presents considerable challenges, particularly in terms of selecting the appropriate quantity and diversity of datasets. An increase in the number of datasets and the breadth of categories necessitates a greater investment of time, while a limited dataset may undermine the effectiveness of model training. This study seeks to mitigate these challenges by employing existing methodologies to optimize dataset selection based on overall volume.
Secondly, the quality of the dataset has a direct influence on the efficacy of model learning, which is heavily dependent on the expertise of the individuals conducting the labeling. Consequently, the accuracy and reliability of the model are contingent upon the skill level of the personnel involved in this process.
Lastly, the proposed model encounters limitations in terms of interpretability. While it is proficient in making predictions, it lacks the capability for inference. This limitation arises from the need to compare each category in the GB/T 38565 classification standard with all categories in the GPC, which could involve over 3.8 million calculations. Such extensive computational requirements place significant demands on the experimental environment. In light of these constraints, this study primarily assesses the feasibility of the experimental methodology and the performance of the model, concluding that the proposed method is practical and capable of addressing real-world challenges.

6.5. Future Work

Future research should focus on several critical areas to further the advancement of the field. First, it is essential to enhance the processes involved in manual dataset labeling. This entails refining methodologies to optimize both the quantity and quality of data, which is vital for improving model performance. Subsequent studies could explore automated or semi-automated labeling techniques to address the limitations inherent in manual processes, thereby reducing the time and resources required.
Second, within the realm of semantic refinement, there is a pressing need to investigate more sophisticated models for semantic comprehension and feature extraction. The integration of advanced text enhancement strategies and cutting-edge feature learning techniques could substantially enhance the accuracy and interpretability of semantic analyses. Assessing these enhancements through metrics such as ROC-AUC curves will ensure the validity of classifications and contribute to the robustness of the models.
Finally, the utilization of existing text similarity corpora, coupled with the integration of deep learning models via transfer learning, represents a promising direction for future inquiry. Adapting the proposed model to other classification tasks, particularly those related to supply standard categories, could mitigate the challenges associated with the labor-intensive nature of manual labeling. This strategy has the potential to improve the efficiency and scalability of category-matching systems, thereby addressing current limitations and broadening the applicability of the model across diverse contexts.

Author Contributions

Conceptualization, Q.Z.; Data curation, Q.Z., J.Y., K.Z. and J.C.; Formal analysis, Q.Z.; Funding acquisition, H.H.; Investigation, Q.Z.; Methodology, Q.Z., H.H. and J.Y.; Resources, K.Z. and J.C.; Software, Q.Z.; Supervision, Y.J.; Writing—original draft, Q.Z.; Writing—review and editing, H.H., Y.J. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Humanities and Social Science Fund of the Ministry of Education of China (grant number 21YJA630029) and the National Key R&D Plan of China (grant number 2016YFC0803207).

Data Availability Statement

The original data presented in the study are openly available in reference [9] and [http://gmd.gds.org.cn:8080/].

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

ESJRM, emergency supplies joint reserve mode; GPC, Global Product Classification; NB, Naive Bayes; KNN, K-Nearest-Neighbor; SVM, Support Vector Machine; DT, Decision Trees; CNN, Convolutional Neural Network; RF, Random Forest; TextCNN, Text Convolutional Neural Network; RNN, Recurrent Neural Network; LSTM, Long Short Term Memory; BERT, Bidirectional Encoder Representations from Transformers; ERNIE, Enhanced Representation through kNowledge IntEgration.

Notes

1. GitHub Open-Source Address: https://github.com/simpleax/Category-Mapping.git (accessed on 1 October 2024).

References

  1. Liu, Y.; Tian, J.; Feng, G.; Hu, Z. A relief supplies purchasing model via option contracts. Comput. Ind. Eng. 2019, 137, 106009. [Google Scholar] [CrossRef]
  2. Hu, Z.; Tian, J.; Feng, G.; Zhang, L. Exploratory Research on the System of China Relief Reserve. Syst. Eng. Theory Pract. 2012, 40, 605–616. [Google Scholar]
  3. Zhang, M.; Kong, Z. A tripartite evolutionary game model of emergency supplies joint reserve among the government, enterprise and society. Comput. Ind. Eng. 2022, 169, 108132. [Google Scholar] [CrossRef]
  4. Hou, H.; Zhang, K.; Zhang, X. Multi-scenario flexible contract coordination for determining the quantity of emergency medical suppliers in public health events. Front. Public Health 2024, 12, 1334583. [Google Scholar] [CrossRef]
  5. Yang, M.; Liu, D.; Li, D. Differential Game Model of Government and Enterprise Material-Production Capacity Emergency Supplies Reserve and Procurement Pricing. Manag. Rev. 2023, 35, 274–286. [Google Scholar] [CrossRef]
  6. Li, H. Research on Classification Standard and Coding Specification of Emergency Materials. Stand. Sci. 2017, 7, 18–24. [Google Scholar]
  7. Nazarov, E. Emergency Response Management in Japan; Crisis Management Center, Ministry of Emergency Situations of the Republic of Azerbaijan: Baku, Azerbaijan, 2011.
  8. Zhou, L.; Wu, X.; Xu, Z.; Fujita, H. Emergency decision making for natural disasters: An overview. Int. J. Disaster Risk Reduct. 2018, 27, 567–576. [Google Scholar] [CrossRef]
  9. GB/T 38565-2020; Classification and Code of Emergency Supplies. State Administration for Market Regulation; Standardization Administration of the People’s Republic of China: Beijing, China, 2020; 44p.
  10. Hu, J. Global Product Classification: The basis for integration into global E-Commerce. Inf. Comput. 2004, 9, 30–32. [Google Scholar]
  11. Hepp, M.; Leukel, J.; Schmitz, V. A quantitative analysis of product categorization standards: Content, coverage, and maintenance of eCl@ss, UNSPSC, eOTD, and the RosettaNet Technical Dictionary. Knowl. Inf. Syst. 2007, 13, 77–114. [Google Scholar] [CrossRef]
  12. Maron, M.E. Automatic Indexing: An Experimental Inquiry. J. ACM 1961, 8, 404–417. [Google Scholar] [CrossRef]
  13. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  14. Joachims, T. Text categorization with Support Vector Machines: Learning with many relevant features. In Proceedings of the European Conference on Machine Learning, Chemnitz, Germany, 21–23 April 1998. [Google Scholar]
  15. Wang, Z.; Wang, X. Research on text classification methods based on neural networks. Comput. Eng. 2020, 46, 11–17. [Google Scholar] [CrossRef]
  16. Du, J.; Vong, C.-M.; Chen, C.L.P. Novel Efficient RNN and LSTM-Like Architectures: Recurrent and Gated Broad Learning Systems and Their Applications for Text Classification. IEEE Trans. Cybern. 2021, 51, 1586–1597. [Google Scholar] [CrossRef]
  17. Soni, S.; Chouhan, S.S.; Rathore, S.S. TextConvoNet: A convolutional neural network based architecture for text classification. Appl. Intell. 2022, 53, 14249–14268. [Google Scholar] [CrossRef]
  18. Kim, Y. Convolutional Neural Networks for Sentence Classification. arXiv 2014, arXiv:1408.5882. [Google Scholar]
  19. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019. [Google Scholar]
  20. Rezaeinia, S.M.; Rahmani, R.; Ghodsi, A.; Veisi, H. Sentiment Analysis based on Improved Pre-trained Word Embeddings. Expert Syst. Appl. 2018, 117, 139–147. [Google Scholar] [CrossRef]
  21. Kim, D.; Seo, D.; Cho, S.; Kang, P. Multi-co-training for document classification using various document representations: TF-IDF, LDA, and Doc2Vec. Inf. Sci. Int. J. 2019, 477, 15–29. [Google Scholar] [CrossRef]
  22. Bello, A.; Ng, S.-C.; Leung, M.-F. A BERT Framework to Sentiment Analysis of Tweets. Sensors 2023, 23, 506. [Google Scholar] [CrossRef]
  23. Abas, A.R.; Elhenawy, I.; Zidan, M.; Othman, M. BERT-CNN: A Deep Learning Model for Detecting Emotions from Text. Comput. Mater. Contin. 2022, 71, 2943–2961. [Google Scholar] [CrossRef]
  24. Zhang, M.; Kong, Z. A two-phase combinatorial double auction and negotiation mechanism for socialized joint reserve mode in emergency preparedness. Socio-Econ. Plan. Sci. 2023, 87, 101512. [Google Scholar] [CrossRef]
  25. Hu, S.; Dong, Z.S. Supplier selection and pre-positioning strategy in humanitarian relief. Omega 2019, 83, 287–298. [Google Scholar] [CrossRef]
  26. Olanrewaju, O.G.; Dong, Z.S.; Hu, S. Supplier selection decision making in disaster response. Comput. Ind. Eng. 2020, 143, 106412. [Google Scholar] [CrossRef]
  27. Gao, X.-N.; Tian, J. Multi-period incentive contract design in the agent emergency supplies reservation strategy with asymmetric information. Comput. Ind. Eng. 2018, 120, 94–102. [Google Scholar] [CrossRef]
  28. Hu, S.-L.; Han, C.-F.; Meng, L.-P. Stochastic optimization for joint decision making of inventory and procurement in humanitarian relief. Comput. Ind. Eng. 2017, 111, 39–49. [Google Scholar] [CrossRef]
  29. Dowdy, D.W.; Zhang, J.-H.; Sun, X.-Q.; Zhu, R.; Li, M.; Miao, W. Solving an emergency rescue materials problem under the joint reserves mode of government and framework agreement suppliers. PLoS ONE 2017, 12, e0186747. [Google Scholar] [CrossRef]
  30. Wang, X.; Fan, Y.; Liang, L.; De Vries, H.; Van Wassenhove, L.N. Augmenting Fixed Framework Agreements in Humanitarian Logistics with a Bonus Contract. Prod. Oper. Manag. 2019, 28, 1921–1938. [Google Scholar] [CrossRef]
  31. Hu, Z.; Tian, J.; Feng, G.; Zhang, L. A research on emergency supplies reserve and purchase pricing under the mode of joint reserve of governments and enterprises. Syst. Eng. Theory Pract. 2020, 40, 3181–3193. [Google Scholar]
  32. Wang, J.; Liu, H. Coordination and optimization model of multiple supply modes for emergency material considering option procurement. J. Saf. Sci. Technol. 2019, 15, 13–19. [Google Scholar] [CrossRef]
  33. Aghajani, M.; Torabi, S.A.; Heydari, J. A novel option contract integrated with supplier selection and inventory prepositioning for humanitarian relief supply chains. Socio-Econ. Plan. Sci. 2020, 71, 100780. [Google Scholar] [CrossRef]
  34. Xiao, H.; Xu, T.; Xu, H.; Lin, Y.; Sun, M.; Tan, M. Production Capacity Reserve Strategy of Emergency Medical Supplies: Incentive Model for Nonprofit Organizations. Sustainability 2022, 14, 11612. [Google Scholar] [CrossRef]
  35. Coskun, A.; Elmaghraby, W.; Karaman, M.M.; Salman, F.S. Relief aid stocking decisions under bilateral agency cooperation. Socio-Econ. Plan. Sci. 2019, 67, 147–165. [Google Scholar] [CrossRef]
  36. Li, S.; Feng, J.; Wu, K.; Zhang, K. Incentive Decision of Government-Enterprise Joint Reserve of Emergency Supplies. J. Syst. Manag. 2022, 31, 840–850. [Google Scholar] [CrossRef]
  37. Gong, W. Research on Emergency Supply Chain Management. China Bus. Mark. 2014, 28, 50–55. [Google Scholar] [CrossRef]
  38. Shan, Z.; Sheng, C.; Han, X.; Hou, C. Retrospective Analysis of the Demand of Emergency Supplies Based on the SEIRD Dynamic Model: Case Study Taking the COVID-19 Epidemic in Wuhan as an Example. Oper. Res. Manag. Sci. 2023, 32, 40–45+60. [Google Scholar] [CrossRef]
  39. GB/T 7635.1-2002; National Central Product Classification and Codes—Part 1: Transportable Product. General Administration of Quality Supervision, Inspection and Quarantine of the People’s Republic of China: Beijing, China, 2002; 1394p.
  40. Li, Y.; Su, C.; Pan, Y. A Review of Classification Mapping. Inf. Stud. Theory Appl. 2018, 41, 154–160. [Google Scholar]
  41. Qu, J.; Li, F.; Zhang, Y. Study and Implementation on the Automatic Mapping Rules Between Knowledge Organization Systems—The Case of the Dewey Decimal Classification and the Chinese Library Classification. New Technol. Libr. Inf. Serv. 2012, 10, 83–88. [Google Scholar]
  42. Zhou, L.; Qi, J.; Wang, J. Mapping between IPC and CLC Based on Similarity of Words. Comput. Eng. 2010, 36, 274–276+279. [Google Scholar]
  43. Zhang, Y.; Peng, J.; Huang, D.; Li, F. Design of Automatic Mapping System between DDC and CLC. Digit. Libr. Cult. Herit. Knowl. Dissem. Future Creat. 2011, 7008, 357–366. [Google Scholar]
  44. Zhang, J.; Li, Y.; Zhou, L. Classification Mapping between IPC and CLC based cross-concordance. In Proceedings of the 12th Annual Conference of Education Informatization Branch of China Higher Education Association, Shanghai, China, 14–15 November 2014. [Google Scholar]
  45. Ji, X.; Qi, J.; Wang, L. Approach of classification mapping between international patent classification and chinese library classification based on machine learning. J. Comput. Appl. 2011, 31, 1781–1784. [Google Scholar]
  46. Chen, R.; Jia, J. Research on Taxonomy Mapping Based on Crowdsourcing Model. Inf. Stud. Theory Appl. 2020, 43, 1000–7490. [Google Scholar]
  47. Aly, R.; Remus, S.; Biemann, C. Hierarchical Multi-label Classification of Text with Capsule Networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, Florence, Italy, 28 July–2 August 2019; pp. 323–330. [Google Scholar] [CrossRef]
  48. Liu, X.; Tan, Q.; Zeng, P. Comparison and implementation of several text classification algorithms based on MOOC. Software 2016, 37, 27–33. [Google Scholar] [CrossRef]
  49. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the KDD ‘16: The 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
  50. Patra, B.G.; Sun, Z.; Cheng, Z.; Kumar, P.K.R.J.; Altammami, A.; Liu, Y.; Joly, R.; Jedlicka, C.; Delgado, D.; Pathak, J.; et al. Automated classification of lay health articles using natural language processing: A case study on pregnancy health and postpartum depression. Front. Psychiatry 2023, 14, 1258887. [Google Scholar] [CrossRef] [PubMed]
  51. Herrmann-Werner, A.; Festl-Wietek, T.; Holderried, F.; Herschbach, L.; Griewatz, J.; Masters, K.; Zipfel, S.; Mahling, M. Assessing ChatGPT’s Mastery of Bloom’s Taxonomy Using Psychosomatic Medicine Exam Questions: Mixed-Methods Study. J. Med. Internet Res. 2024, 26, e52113. [Google Scholar] [CrossRef] [PubMed]
  52. Sun, J.; Huang, S.; Wei, C. A BERT-based deontic logic learner. Inf. Process. Manag. 2023, 60, 103374. [Google Scholar] [CrossRef]
  53. Mutinda, J.; Mwangi, W.; Okeyo, G. Sentiment Analysis of Text Reviews Using Lexicon-Enhanced Bert Embedding (LeBERT) Model with Convolutional Neural Network. Appl. Sci. 2023, 13, 1445. [Google Scholar] [CrossRef]
  54. Hamza, A.; Alaoui Ouatik, S.E.; Zidani, K.A.; En-Nahnahi, N. Arabic duplicate questions detection based on contextual representation, class label matching, and structured self attention. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 3758–3765. [Google Scholar] [CrossRef]
  55. Rong, L. Research on Prompt Learning of Large Language Model for Automatic Book Classification. Res. Libr. Sci. 2024, 86–103. [Google Scholar] [CrossRef]
  56. Zhang, Z.; Han, X.; Liu, Z.; Jiang, X.; Sun, M.; Liu, Q. ERNIE: Enhanced Language Representation with Informative Entities. arXiv 2019, arXiv:1905.07129. [Google Scholar]
  57. Tai, K.S.; Socher, R.; Manning, C. Improved Semantic Representations from Tree-Structured Long Short-Term Memory Networks. Comput. Sci. 2015, 5, 36. [Google Scholar] [CrossRef]
  58. Mingmin, S. Chinese text classification based on GRU-Attention. Mod. Inf. Technol. 2019, 3, 10–12. [Google Scholar]
  59. Zhang, C.; Guo, R.; Ma, X.; Kuai, X.; He, B. W-TextCNN: A TextCNN model with weighted word embeddings for Chinese address pattern classification. Comput. Environ. Urban Syst. 2022, 95, 101819. [Google Scholar] [CrossRef]
  60. Chen, T.Y.; Wu, X.H.; Li, L.Y.; Li, J.H.; Feng, S. Extraction of entity relations from Chinese medical literature based on multi-scale CRNN. Ann. Transl. Med. 2022, 10, 520. [Google Scholar] [CrossRef] [PubMed]
  61. Liu, W.; Pang, J.; Li, N.; Zhou, X.; Yue, F. Research on Multi-label Text Classification Method Based on tALBERT-CNN. Int. J. Comput. Intell. Syst. 2021, 14, 201. [Google Scholar] [CrossRef]
  62. Joloudari, J.H.; Hussain, S.; Nematollahi, M.A.; Bagheri, R.; Fazl, F.; Alizadehsani, R.; Lashgari, R.; Talukder, A. BERT-deep CNN: State of the art for sentiment analysis of COVID-19 tweets. Soc. Netw. Anal. Min. 2023, 13, 99. [Google Scholar] [CrossRef]
  63. Gao, S.; Li, S.; Cai, Z. A survey of Chinese text classification based on deep learning. Comput. Eng. Sci. 2024, 46, 684–692. [Google Scholar] [CrossRef]
  64. Guo, B.; Zhang, C.; Liu, J.; Ma, X. Improving text classification with weighted word embeddings via a multi-channel TextCNN model. Neurocomputing 2019, 363, 366–374. [Google Scholar] [CrossRef]
  65. Li, X.; Chang, D.; Tian, T.; Cao, J. Large-Margin Regularized Softmax Cross-Entropy Loss. IEEE Access 2019, 7, 19572–19578. [Google Scholar] [CrossRef]
  66. Lample, G.; Conneau, A. Cross-lingual Language Model Pretraining. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; pp. 7059–7069. [Google Scholar]
  67. He, X.; Li, M.; He, Y. Siamese BERT-Networks Based Classification Mapping of Scientific and Technological Literature. J. Comput. Res. Dev. 2021, 58, 1751–1760. [Google Scholar] [CrossRef]
Figure 1. Structure diagram of the BERT model.
Figure 2. Structure diagram of the TextCNN model.
Figure 3. Structure diagram of the BERT-TextCNN model.
Figure 4. BERT-TextCNN category mapping algorithm.
Figure 5. Categories statistics of GB/T 38565 and GPC. (a) GB/T 38565. (b) GPC.
Figure 6. Example of training and test dataset.
Figure 7. Accuracy, precision, recall, and F1 for the training set and testing set of the BERT-TextCNN model.
Figure 8. Accuracy, precision, recall, and F1 for different category mapping models.
Figure 9. Confusion matrix for different category mapping models.
Figure 10. Loss rates for different category mapping models.
Table 1. Mainstream supplies classification standard systems adopted by the Chinese government and enterprises.
Demand-Side of Government Procurement | Supply-Side of Enterprise Production
GB/T 38565-2020 Classification and Coding of Emergency Supplies | Global Product Classification (GPC)
 | The Harmonized Commodity Description and Coding System (HS)
Classified catalog of priority supplies for emergency protection (2015) | United Nations Standard Products and Services Code (UNSPSC)
GB/T 7635.1-2002 National central product classification and codes—Part 1: Transportable product [39] | 
Table 2. Examples of the structure of GB/T 38565 taxonomy and GPC taxonomy.
Hierarchy | Classification Code and Category Name of GB/T 38565 | Hierarchy | Classification Code and Category Name of GPC
Large category | 1,000,000: Basic life support supplies | Segment | 50,000,000: Food/Beverage
Medium category | 1,100,000: Processed foods | Family | 50,200,000: Beverages
Small category | 1,100,400: Beverages | Class | 50,202,300: Non-Alcoholic Beverages—Ready to Drink
Fine category | 1,100,401: Drinking water (including natural, purified, and mineral water for drinking) | Brick | 10,000,232: Packaged Water
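To make the parallel four-level structures in Table 2 concrete, the short Python sketch below represents one GB/T 38565 branch and one GPC branch as plain dictionaries and concatenates the level names into the semicolon-separated category texts used later (see Table 5). The variable names are illustrative assumptions and are not taken from the released code.

```python
# Illustrative sketch: the two four-level hierarchies from Table 2 as simple dictionaries.
# Codes and names are copied from the table; identifier names are assumptions.
gbt_38565_branch = {
    "large_category": ("1000000", "Basic life support supplies"),
    "medium_category": ("1100000", "Processed foods"),
    "small_category": ("1100400", "Beverages"),
    "fine_category": ("1100401", "Drinking water"),
}
gpc_branch = {
    "segment": ("50000000", "Food/Beverage"),
    "family": ("50200000", "Beverages"),
    "class": ("50202300", "Non-Alcoholic Beverages - Ready to Drink"),
    "brick": ("10000232", "Packaged Water"),
}

# Concatenating the level names yields the semicolon-separated category texts shown in Table 5.
gbt_text = "; ".join(name for _, name in gbt_38565_branch.values())
gpc_text = "; ".join(name for _, name in gpc_branch.values())
print(gbt_text)  # Basic life support supplies; Processed foods; Beverages; Drinking water
print(gpc_text)  # Food/Beverage; Beverages; Non-Alcoholic Beverages - Ready to Drink; Packaged Water
```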
Table 3. Confusion matrix.
Actual Result | Predicted Positive | Predicted Negative
Actual Positive | TP | FN
Actual Negative | FP | TN
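The accuracy, precision, recall, and F1 values reported in Table 7 and Figures 7–9 follow directly from the TP/FN/FP/TN counts defined in Table 3. The minimal Python sketch below shows the standard formulas; the function name and the example counts are hypothetical and not figures from this study.

```python
# Minimal sketch: standard metrics computed from confusion-matrix counts (Table 3). Illustrative only.
def classification_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for demonstration.
print(classification_metrics(tp=85, fn=2, fp=3, tn=110))
```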
Table 4. Number of samples for each category of GB/T 38565.
Sampling Methods | 1,000,000: Basic Life-Support Supplies | 2,000,000: Emergency Equipment and Supporting Supplies | 3,000,000: Engineering Materials and Machining Equipment | Totals
Simple stratified sampling | 34 | 157 | 9 | 200
Polynomial sampling | 55 | 118 | 27 | 200
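Table 4 contrasts two ways of drawing the 200 annotated samples across the three top-level GB/T 38565 categories. As one possible reading of the stratified row, a proportional stratified draw can be sketched as below; the pandas column name and the demo stratum sizes are assumptions for illustration and do not come from the authors' released preprocessing code.

```python
# Minimal sketch of a proportional stratified draw (assumption: a pandas DataFrame with a
# "large_category" column holding the top-level GB/T 38565 code of each annotated pair).
import pandas as pd

def stratified_sample(df: pd.DataFrame, by: str, n_total: int, seed: int = 42) -> pd.DataFrame:
    parts = []
    for _, group in df.groupby(by):
        # Proportional allocation per stratum (rounding may shift the total by one or two samples).
        n = round(n_total * len(group) / len(df))
        parts.append(group.sample(n=min(n, len(group)), random_state=seed))
    return pd.concat(parts).reset_index(drop=True)

# Demo with hypothetical stratum sizes chosen so the draw reproduces the 34/157/9 split in Table 4.
demo = pd.DataFrame({"large_category": ["1000000"] * 340 + ["2000000"] * 1570 + ["3000000"] * 90})
print(stratified_sample(demo, by="large_category", n_total=200)["large_category"].value_counts())
```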
Table 5. Examples of text labeling for GB/T 38565 and GPC category descriptions.
GB/T Code | GB/T Category Text | GPC Code | GPC Category Text
1,100,401 | Basic life support supplies; Processed foods; Beverages; Drinking water (including natural, purified, and mineral water for drinking) | 10,000,232 | Food/Beverage; Beverages; Non-Alcoholic Beverages-Ready to Drink; Packaged Water
1,130,201 | Basic life support supplies; Daily necessities; Household refrigeration appliances; Refrigerators (cabinets) | 10,003,698 | Home Appliances; Major Domestic Appliances; Refrigerating/Freezing Appliances; Freezers
 | | 10,003,694 | Home Appliances; Major Domestic Appliances; Refrigerating/Freezing Appliances; Refrigerators
 | | 10,003,695 | Home Appliances; Major Domestic Appliances; Refrigerating/Freezing Appliances; Refrigerator/Freezers
2,090,300 | Emergency equipment and supporting supplies; Logistical support equipment; Fuel storage equipment; Including coal, oil, and gas fuel storage equipment | 10,005,306 | Fluids/Fuels/Gases; Fluids/Fuel Storage/Transfer; Fluids/Fuel Storage; Fluids/Liquid Fuel Bottles/Containers (Empty)
 | | 10,005,258 | Fluids/Fuels/Gases; Fluids/Fuel Storage/Transfer; Fluids/Fuel Storage; Pressurized Gas Fuel Bottles/Canisters (Empty)
3,010,300 | Engineering materials and machining equipment; Engineering materials; Asphalt | 10,003,898 | Building Products; Building Products; Asphalt/Concrete/Masonry; Asphalt/Concrete Patching
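Table 5 shows the semicolon-joined category texts that are embedded and compared. A minimal sketch of the comparison step, assuming a Hugging Face BERT checkpoint ("bert-base-chinese", purely as an assumption) and the average-pooling plus cosine-similarity procedure described in the abstract, is given below; it omits the TextCNN feature-extraction stage of the full BERT-TextCNN model and is not the released implementation.

```python
# Minimal sketch (assumptions: Hugging Face transformers, "bert-base-chinese" checkpoint):
# mean-pool BERT token states into a sentence vector, then score GPC candidates by cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")

def sentence_vector(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (1, seq_len, hidden_size)
    mask = inputs["attention_mask"].unsqueeze(-1)        # ignore padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # average pooling -> (1, hidden_size)

gbt_text = "Basic life support supplies; Processed foods; Beverages; Drinking water"
gpc_candidates = [
    "Food/Beverage; Beverages; Non-Alcoholic Beverages-Ready to Drink; Packaged Water",
    "Home Appliances; Major Domestic Appliances; Refrigerating/Freezing Appliances; Freezers",
]
query = sentence_vector(gbt_text)
scores = [torch.cosine_similarity(query, sentence_vector(c)).item() for c in gpc_candidates]
print(max(zip(scores, gpc_candidates)))  # the highest-scoring GPC candidate is taken as the mapped class
```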
Table 6. Experimental environment.
Development Environment | Parameter
CPU | Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz (2.30 GHz)
Graphics card | NVIDIA GeForce MX350
Operating system | Windows 11, 64-bit
RAM | 16 GB
Programming tools | PyCharm
Programming language | Python 3.9.7
Development framework | TensorFlow + Keras
Table 7. Results of category mapping for different models in percentages.
Model | Accuracy | Precision | Recall | F1
BERT-S2Net | 92.88 | 87.64 | 89.66 | 88.64
BERT-DSSM | 94.66 | 97.37 | 85.06 | 90.80
BERT-RNN | 97.51 | 93.48 | 98.85 | 96.09
BERT-CNN | 97.51 | 93.48 | 98.85 | 96.09
BERT-BiLSTM | 97.86 | 93.55 | 100.00 | 96.67
BERT-BiLSTM-CNN | 97.51 | 94.44 | 97.70 | 96.05
BERT-TextCNN | 98.22 | 96.59 | 97.71 | 97.14
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

