Article

E3W—A Combined Model Based on GreedySoup Weighting Strategy for Chinese Agricultural News Classification

1 College of Information Engineering, Sichuan Agricultural University, Yaan 625000, China
2 The Lab of Agricultural Information Engineering, Sichuan Key Laboratory, Yaan 625000, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2022, 12(23), 12059; https://doi.org/10.3390/app122312059
Submission received: 20 September 2022 / Revised: 11 November 2022 / Accepted: 18 November 2022 / Published: 25 November 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract
With the continuous development of the internet and big data, modernization and informatization are rapidly being realized in the agricultural field, and the volume of agricultural news is increasing accordingly. This explosion of agricultural news has made accurate access to agricultural news difficult, slowing the spread of news about agricultural technologies and hindering the development of agriculture. To address this problem, we apply NLP to agricultural news texts in order to classify them and ultimately improve the efficiency of agricultural news dissemination. We propose E3W, a classification model for Chinese short agricultural text that combines ERNIE + DPCNN, ERNIE, EGC, and Word2Vec + TextCNN as sub-models using the GreedySoup weighting strategy; specifically, E3W consists of four sub-models, the outputs of which are processed using the GreedySoup weighting strategy. In the E3W model, we divide the classification process into two steps: in the first step, the text is passed through the four independent sub-models to obtain an initial classification result from each sub-model; in the second step, the model considers the relationship between each initial classification result and its sub-model, and assigns weights to the initial classification results. The category with the highest total weight is used as the output of E3W. To fully evaluate the effectiveness of the E3W model, the accuracy, precision, recall, and F1-score are used as evaluation metrics in this paper. We conduct multiple sets of comparative experiments on a self-constructed agricultural data set, comparing E3W with its sub-models, as well as performing ablation experiments. The results demonstrate that the E3W model improves the average accuracy by 1.02%, the average precision by 1.62%, the average recall by 1.21%, and the average F1-score by 1.02%. Overall, E3W achieves state-of-the-art performance in Chinese agricultural news classification.

1. Introduction

As the internet and big data continue to evolve, accessing online information effectively has become an increasingly important issue [1,2,3]. Agriculture is moving rapidly towards modernization and informatization [4], and the volume of agricultural news is increasing daily. However, agricultural news involves many industries that are difficult to distinguish, forcing people to spend more time screening for the agricultural news they need [5] and dramatically hindering its dissemination [6,7,8]. Conversely, the correct classification of agricultural news enables the more accurate dissemination of advanced agricultural technologies [9,10,11], such as bioenergy robots for agriculture [12] and specialist drones [13], which can provide more solutions for the development of farming. Disseminating agricultural news can therefore significantly contribute to the development of agriculture and is of great importance to modern agriculture [14]. However, few studies have focused on the classification of agricultural news, and accurately classifying it has become an urgent problem.
In recent years, with scientific and technological development, natural language processing (NLP) has been developing rapidly [15]. As an important branch of NLP, text classification has also developed rapidly, and there have been many studies on news text classification. However, there is a lack of relevant research on agricultural news classification.
Hu et al. [16] have focused on improving a patent keyword extraction algorithm by using the distributed Skip-gram model, and proposed a new text classification keyword extraction method to improve the effectiveness of the text classification algorithm. Junjie Li et al. [17] have proposed a two-channel news headline classification model based on the enhanced representation through knowledge integration (ERNIE) pre-training model, through the use of ERNIE and BiLSTM-AT to extract text information, and deep pyramid convolutional neural networks (DPCNNs) to overcome the long-distance text dependency problem; their approach performed well in news text multi-classification applications. Taimoor Ahmed Javed et al. [18] have proposed a deep learning model for hierarchical text classification of Urdu news. Their model uses Word2vec to convert words into vectors, followed by LSTM networks to learn text features and perform the final classification.
There is still relatively little research on the classification of Chinese agricultural news. Yang et al. [19] have proposed a model based on ERNIE, BiGRU, and DPCNN-upgrade (EGC), in which the text is first encoded by ERNIE, followed by feature extraction by the DPCNN and bidirectional gated recurrent unit (BiGRU). Next, the extracted features are fused, and the fused features are finally classified by Softmax. This model has achieved the best results on a Chinese agricultural news data set so far.
Almost all of these methods tweak individual models and only improve the average classification accuracy, without achieving the highest accuracy in every category; as such, text classification has not yet reached the desired level of effectiveness. As there are five general categories of agricultural news, even the most advanced models cannot achieve the highest classification accuracy in all categories. The fine-tuning process typically consists of two steps: first, fine-tune the various parameters of the model; second, keep the model that achieves the highest accuracy on the validation set, while abandoning the other models. These discarded models can still provide great utility in the field of classification. Research from Google has shown that, while not performing optimally overall, some models perform well on certain data sets, such that the accuracy can be improved by combining such models [20]. They gave multiple examples where combining just two models significantly improved the accuracy, with significant implications for improving model performance in downstream tasks. Google has further proposed the GreedySoup weighting strategy for improving model accuracy [21]. Instead of selecting the single fine-tuned model that achieves the highest accuracy on the validation set, this approach combines multiple independent models and then adjusts the model weights; this strategy was also shown to be effective in improving model accuracy.
In summary, the E3W model proposed in this paper is obtained by combining several models that perform well in different categories of the data and adjusting the weights of the models, which provides a feasible and effective modelling approach. In comparative experiments with 13 models, the four models that performed best in different categories of the data set were selected: ERNIE, ERNIE + DPCNN, EGC, and Word2Vec + TextCNN. Regarding the choice of embedding, GloVe essentially involves the dimensionality reduction of a co-occurrence matrix; however, certain attributes are inevitably lost during dimensionality reduction. As agricultural text titles are inherently short, carry limited information, and cover many categories, this loss can easily make texts difficult to classify correctly. For these reasons, we chose to combine Word2Vec with TextCNN.
The E3W model proposed in this paper first combines these four models, and then uses the GreedySoup weighting strategy to weigh the outputs of the four sub-models differently. The category with the highest calculated weight value is selected as the final output. Comparative experiments on a Chinese agricultural news data set demonstrate that E3W can achieve the most advanced results. The experimental validation analysis proved the effectiveness of the model in the classification of Chinese agricultural news.
The main contributions of this paper are as follows:
  • We propose the E3W model, which combines several sub-models and adopts the GreedySoup weighting strategy to adjust the model, achieving the best classification performance on a Chinese agricultural news data set to date;
  • By combining multiple sub-models, we solve the problem wherein traditional models cannot achieve the highest classification accuracy in all categories;
  • Applying the GreedySoup weighting strategy to combinatorial models solves the problem of weight assignment when multiple models are combined, and provides a solution to similar problems in other fields.
The remainder of this paper is structured as follows: Section 2 presents information on the four used sub-models. Section 3 describes the structure and operational procedures of the E3W model proposed in this paper. Section 4 describes the set of experiments conducted, as well as discussing and analyzing the obtained results. Finally, in Section 5, we discuss the strengths, weaknesses, and outlook of the proposed approach.

2. Background

2.1. Combined Model Studies

In the multi-classification domain, it is difficult for improvements built on individual models to achieve the best results in all categories, which is an important reason why accuracy is hard to improve in this domain. Chinese agricultural news classification is inherently a five-category task; therefore, in order to improve the classification accuracy more significantly, other methods should be considered.
Traditional approaches to improving model accuracy often involve first training multiple models with different hyperparameters, then selecting the single model that performs best on the validation set while discarding the rest [22,23]. Alternatively, different techniques may be used in the different stages of a single model, in order to improve the accuracy of the model [24,25]. While research on combining multiple models remains scarce, recent studies have shown that combined models (i.e., those that combine the outputs of multiple models) can outperform the best single models.
The improvement in effectiveness achievable through model combination is clear: a Google study that conducted systematic experiments using 82 models showed that even low-accuracy models can be useful in combination. In one experiment, combining a high-accuracy model with a low-accuracy model improved the accuracy by 7% [20], a significant improvement. Therefore, in order to improve the classification accuracy for agricultural news, we considered a combined-model approach for classification.
When using a combined model for classification, it is important to adjust the weights of each model across the structure, in order to achieve better results, as some incorrect weight adjustments may lead to a reduction in the effectiveness of the model. Therefore, we used the GreedySoup strategy to adjust the weights of the model. Google proposed GreedySoup as a strategy for adjusting the output weights of multiple models in 2022 [21]. GreedySoup first sorts the models from highest to lowest, according to their experimental accuracies in the data set, then combines some of them and adjusts the weights of each model’s output; if a weight adjustment can improve the accuracy of the combined model, then that weight is used. The GreedySoup weighting strategy allows the combined model to classify no less effectively than any of its sub-models. Therefore, for this paper, we adopted this approach to adjust the model weights, that is, weighting the output of each model and experimentally obtaining the weight set leading to the best classification accuracy.
To improve the classification accuracy on agricultural texts, we apply this multi-model combination approach to the field of text classification and propose the E3W classification model, which combines four sub-models and uses the GreedySoup strategy to select the models that perform best in the sub-domains, as well as to weigh the outputs of the sub-models, resulting in significant accuracy improvements.

2.2. Related Work

Natural language processing (NLP) is the study of theories and methods that enable effective communication between humans and computers through the use of natural language, which has a wide range of applications in scenarios such as opinion monitoring, opinion extraction, text classification, and question-answering. Text classification—one of the most fundamental tasks in NLP—has been addressed in many scenarios, such as conversational bots, emotion recognition, and other directions. Similarly, there has been a significant amount of research in the field of news classification.
Fesseha et al. [26] proposed a convolutional neural network (CNN)-based text classification method, which achieved better results than traditional machine learning methods. In the field of Chinese agricultural news classification, Huo Tingting [27] proposed an improved algorithm, CFT-fastText, based on fastText, for solving the agricultural news text classification problem. Subsequently, Yang et al. proposed a model based on ERNIE, BiGRU, and DPCNN-upgrade (EGC), which achieved the best results so far in the field of Chinese agricultural news classification. At present, some of the most advanced and common tools used for text classification include BERT, ERNIE, TextCNN, DPCNN, Word2Vec, and BiGRU.
As E3W consists of a combination of four sub-models, this section will focus on the four sub-models: ERNIE, ERNIE + DPCNN, EGC, and Word2Vec + TextCNN.

2.2.1. ERNIE

Enhanced representation through knowledge integration (ERNIE) is a model proposed by Baidu in April 2019 [28] that further improves upon the BERT model to obtain state-of-the-art results in Chinese NLP tasks. When processing Chinese, BERT masks text [29] on the basis of individual words, ignoring textual connections, such that the extracted features are less than comprehensive; this problem does not occur when processing English [30]. In contrast, ERNIE masks whole phrases, which captures the relationships between words well.
As shown in Figure 1, BERT masks 15% of the text at random, but the masking does not take into account contextual connections, resulting in a word being separated and fragmenting the original meaning expressed by the phrase, thus not easily inferring the masked phrase. ERNIE changes the way in which BERT masks: instead of masking individual words, a mask of entities and phrases is added, which gives the model a stronger grammar learning capability. The ways that ERNIE and BERT mask words are depicted in Figure 1.
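To make this difference concrete, the following minimal Python sketch contrasts the two masking styles (an illustration only, not ERNIE's actual pre-processing code; the sentence, phrase boundaries, and masking ratio are hypothetical):

```python
import random

random.seed(0)

sentence = ["哈", "尔", "滨", "是", "黑", "龙", "江", "的", "省", "会"]
# Hypothetical phrase boundaries for entity-level masking:
# "哈尔滨" (Harbin) = chars 0-2, "黑龙江" (Heilongjiang) = chars 4-6
phrases = [(0, 3), (3, 4), (4, 7), (7, 8), (8, 10)]

def bert_style_mask(tokens, ratio=0.15):
    """BERT-style masking picks individual characters at random, which can
    split a multi-character entity such as 哈尔滨 and fragment its meaning."""
    out = list(tokens)
    for i in range(len(out)):
        if random.random() < ratio:
            out[i] = "[MASK]"
    return out

def ernie_style_mask(tokens, phrases, ratio=0.15):
    """ERNIE-style masking hides whole phrases/entities, so an entity is
    either fully visible or fully masked, preserving word-level structure."""
    out = list(tokens)
    for start, end in phrases:
        if random.random() < ratio:
            for i in range(start, end):
                out[i] = "[MASK]"
    return out

print("BERT-style :", bert_style_mask(sentence))
print("ERNIE-style:", ernie_style_mask(sentence, phrases))
```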
In addition to the major changes to the mask, ERNIE also adds a number of Chinese data sets for training. BERT uses the Chinese Wikipedia as its training data set in the Chinese processing domain, while ERNIE uses the Chinese Wikipedia and adds Baidu's own data sets, including Baidu Encyclopedia (entity-rich, strongly descriptive text), Baidu News (a professional, fluent corpus), and Baidu Tieba (multi-turn conversations). These three data sets have different emphases and provide a more comprehensive enhancement to the model. The core of ERNIE is the transformer encoder: the input data are embedded, positional information is added, and the result is computed using a multi-head attention mechanism before a normalization operation outputs the final encoded result. The structure of the transformer is shown in Figure 2.

2.2.2. ERNIE + DPCNN

The ERNIE model is a further optimization based on the BERT model that modifies the BERT masking approach by masking phrases as the smallest unit and adding multiple Chinese training sets. In the ERNIE + DPCNN model, the text is first encoded by ERNIE, following which the encoded text is used as input to the DPCNN to fully extract the features of the text. The obtained features are finally used for text classification by Softmax.
The deep pyramid convolutional neural network (DPCNN) is a type of convolutional neural network proposed by Johnson and Zhang [31]. The core of the network comprises equal-length convolution and half-pooling layers: the equal-length convolution keeps the input and output the same size n, while each half-pooling layer halves the length of the input sequence, so the sequence length shrinks as the layers deepen, eventually giving the network its pyramid shape.
After the text is input, it passes through a region embedding layer containing three different convolutional feature extractors, followed by two layers of equal-length convolution. Finally, the text repeatedly passes through residual blocks of half pooling, which continuously enriches the semantics of the text and the extracted features. The structure of the DPCNN is shown in Figure 3.
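The repeated core unit can be sketched in PyTorch as follows (a minimal illustration, assuming the 250 channels used in Section 4.2.2; the block instance is reused here for brevity, whereas the original network need not share weights across repeats):

```python
import torch
import torch.nn as nn

class DPCNNBlock(nn.Module):
    """One repeated DPCNN unit: halve the sequence length with pooling,
    then apply two equal-length convolutions with a residual connection."""
    def __init__(self, channels=250):
        super().__init__()
        # stride-2 pooling halves the sequence length ("half pooling")
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
        # kernel 3 with padding 1 keeps input and output the same length
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):            # x: (batch, channels, seq_len)
        x = self.pool(x)             # seq_len -> roughly seq_len // 2
        residual = x
        x = self.conv1(self.relu(x))
        x = self.conv2(self.relu(x))
        return x + residual          # residual connection keeps gradients flowing

# Repeating the block shrinks the sequence like a pyramid: 19 -> 10 -> 5 -> 3 -> 2
block = DPCNNBlock()
features = torch.randn(8, 250, 19)   # batch of 8, 250 channels, length 19
while features.size(2) > 2:
    features = block(features)
print(features.shape)                 # torch.Size([8, 250, 2])
```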

2.2.3. EGC

Based on a combination of the ERNIE, BiGRU, and DPCNN-upgrade models and proposed by Yang Sengqi et al. in 2022, EGC is a model for agricultural news classification that achieved the best classification accuracy on Chinese agricultural news.
EGC is divided into four parts, consisting of an input coding layer, a feature extraction layer, a feature fusion layer, and Softmax activation. At the input encoding layer, the input text is masked by ERNIE, embedded, and finally fed into the transformer for encoding. The encoded text is then fed into the feature extraction layer, which feeds the data into the DPCNN-upgrade and BiGRU modules for feature extraction. The extracted features are then fed into the feature fusion layer, where they are stitched together and fused into new features. The stitched features are then classified using Softmax, in order to obtain the final result. The EGC structure is shown in Figure 4.
The GRU model contains two gate structures: the update gate and the reset gate [32]. The reset gate determines how new input information is combined with previously stored data, while the update gate indicates the amount of previous memory retained at the current time step. Compared to the LSTM model [33], the GRU model is faster to train and better able to represent text features. Let the input at moment t be X_t and the output of the GRU hidden layer be H_t. W is the weight matrix connecting the two layers, the subscripts z and r denote the update and reset gates, respectively, ⊙ denotes element-wise multiplication, and σ denotes the sigmoid activation function. The calculation formulas are as follows:
$Z_t = \sigma(W_z \cdot [H_{t-1}, X_t])$ (1)
$R_t = \sigma(W_r \cdot [H_{t-1}, X_t])$ (2)
$\tilde{H}_t = \tanh(W \cdot [R_t \odot H_{t-1}, X_t])$ (3)
$H_t = (1 - Z_t) \odot H_{t-1} + Z_t \odot \tilde{H}_t$ (4)
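As a concrete reading of Equations (1)-(4), a minimal NumPy sketch of one GRU step (bias terms omitted for brevity; all weights and inputs are toy random values):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(H_prev, X_t, W_z, W_r, W_h):
    """One GRU step following Equations (1)-(4); the weight matrices act
    on the concatenation [H_{t-1}, X_t]."""
    concat = np.concatenate([H_prev, X_t])
    Z_t = sigmoid(W_z @ concat)                     # update gate, Eq. (1)
    R_t = sigmoid(W_r @ concat)                     # reset gate, Eq. (2)
    reset_concat = np.concatenate([R_t * H_prev, X_t])
    H_tilde = np.tanh(W_h @ reset_concat)           # candidate state, Eq. (3)
    return (1 - Z_t) * H_prev + Z_t * H_tilde       # new hidden state, Eq. (4)

rng = np.random.default_rng(0)
hidden, emb = 4, 3
W_z = rng.normal(size=(hidden, hidden + emb))
W_r = rng.normal(size=(hidden, hidden + emb))
W_h = rng.normal(size=(hidden, hidden + emb))
H = np.zeros(hidden)
for X in rng.normal(size=(5, emb)):    # run over a toy 5-step sequence
    H = gru_step(H, X, W_z, W_r, W_h)
print(H)
```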
The model uses BiGRU for data extraction. As news texts are typically short, the relationships between contexts need to be extracted in order to fully capture the semantic information of the headlines. However, GRU only extracts the impact of the preceding text on the following text and does not reflect the impact of the following text on the preceding text. The output of each step of BiGRU includes a combination of the forward and backward states of the current state, which better considers the relationships between the contexts such that more complete and rich feature information can be extracted [34]. The BiGRU structure is shown in Figure 5.
DPCNN-upgrade is an improvement of the deep pyramid convolutional neural network (DPCNN) proposed by Johnson and Zhang, adapted to the features of short agricultural news. According to our calculations, the average length in the agricultural news data set used in this paper was about 18.98 words, indicating that agricultural news is generally shorter than other news. To address this feature, DPCNN-upgrade removes two of DPCNN's convolutional layers in order to retain more text features, thus achieving better results. The DPCNN-upgrade structure is shown in Figure 6.

2.2.4. Word2Vec + TextCNN

In the proposed model, we combine Word2Vec (a feature-rich word vector for training) with TextCNN (a neural network specifically designed for text classification), in order to further improve the classification accuracy.
Word2Vec, a set of word-embedding tools released by Google in 2013 [35], has a wide range of applications [36]. It is a deep learning-based tool that obtains a vector representation for each word in a corpus and provides an efficient means of measuring semantic distances between words via the cosine distance between their vectors. Word2Vec comprises two word-vector training models, CBOW and Skip-gram, both of which include an input layer, a projection layer, and an output layer. The CBOW model predicts the current word from its context, which is suitable for smaller data sets; therefore, we use the CBOW model in this paper. In contrast, the Skip-gram model predicts the context from the current word. The structures of these two models are shown in Figure 7.
Word2Vec processes text vectors while taking into account the relationships between contexts, resulting in more feature-rich text vectors than previous embedding methods and making it easier to extract features and perform related processing. The word vectors generated by Word2Vec are lower-dimensional than those generated by plain embedding layers, and are faster and more general in operation, allowing them to be used in a variety of NLP tasks and to achieve better classification results [37,38].
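As a usage sketch, CBOW vectors of the dimension used in this paper (100) can be trained with the gensim library as follows (the tiny corpus is hypothetical; the real input would be the Jieba-segmented headlines):

```python
from gensim.models import Word2Vec

# Toy corpus of pre-segmented Chinese agricultural headlines (hypothetical
# examples only; real input would be the segmented data set).
corpus = [
    ["水产", "养殖", "技术", "推广"],
    ["淡水", "水产", "养殖", "基地"],
    ["林业", "生态", "保护", "工程"],
]

# sg=0 selects the CBOW architecture used in this paper; vector_size=100
# matches the word dimension reported in Section 4.2.
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)

vec = model.wv["水产"]                      # 100-dimensional word vector
print(vec.shape)
print(model.wv.similarity("水产", "养殖"))  # cosine similarity between words
```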
We also use TextCNN, which is based on a modified CNN architecture. CNNs are widely applied in the field of deep learning due to their great usefulness [39,40]. In a CNN, the data are first fed into the input layer to obtain the original matrix, whose features are then extracted by the convolutional layer; the output size is calculated according to Formula (5). As the calculation between adjacent layers of a neural network can only be linear, an activation function is needed to introduce non-linearity, enabling the network to simulate more complex models. Traditional neural networks mainly use two activation functions, σ(x) and tanh(x), which suffer from limited output ranges and vanishing gradients. To solve these problems, the ReLU function shown in Formula (6) is mainly used, which is a highly efficient linear operation. The pooling layer is used for down-sampling (i.e., sparse processing) of the feature data; its output size is given in Formula (7).
$N_2 = \frac{N_1 + 2P - F}{\text{stride}} + 1$ (5)
where N_2 is the size of the output, N_1 is the size of the input data, F is the size of the convolution kernel, stride is the sliding step of the convolution kernel, and P is the padding added to the input data, allowing it to be divided evenly when the stride is greater than 1;
$f(x) = \max(0, x)$ (6)
where the function outputs 0 when the input is less than 0 and the input itself otherwise, which avoids the problem of the gradient vanishing;
$N_2 = \frac{N_1 - F}{\text{stride}} + 1$ (7)
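As a quick worked example of Formulas (5) and (7), consider a 19-token headline passed through a kernel-3, stride-1 convolution with padding 1, followed by a kernel-2, stride-2 pooling layer (the layer sizes here are illustrative):

```python
def conv_out(n_in, f, stride, p=0):
    """Formula (5): output length of a convolution layer."""
    return (n_in + 2 * p - f) // stride + 1

def pool_out(n_in, f, stride):
    """Formula (7): output length of a pooling layer (no padding)."""
    return (n_in - f) // stride + 1

n = conv_out(19, f=3, stride=1, p=1)   # padding 1 preserves the length: 19
print(n, pool_out(n, f=2, stride=2))   # -> 19 9
```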
TextCNN is an algorithm for classifying text using a convolutional neural network, proposed by Yoon Kim in 2014 [41]. TextCNN adapts the input layer of the CNN in order to improve text classification performance. The data are first encoded as a vector matrix, then convolved and max-pooled; finally, the data are classified by Softmax. When convolving the text, TextCNN automatically combines and filters N-gram features to obtain semantic information at different levels of abstraction, extracting richer text features. Notably, this approach is more effective when processing text with fewer features. The TextCNN structure is shown in Figure 8.
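A minimal PyTorch sketch of this architecture, assuming the kernel sizes (2, 3, 4) and 128 filters reported in Section 4.2.4, a hypothetical vocabulary size, and five output classes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Minimal TextCNN: parallel convolutions over the embedded sequence,
    max-pooling over time, then a linear classifier. The embedding would
    normally be initialized from the trained Word2Vec vectors."""
    def __init__(self, vocab_size, embed_dim=100, num_classes=5,
                 kernel_sizes=(2, 3, 4), num_filters=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv2d(1, num_filters, (k, embed_dim)) for k in kernel_sizes
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, x):                           # x: (batch, seq_len)
        x = self.embedding(x).unsqueeze(1)          # (batch, 1, seq_len, embed_dim)
        feats = []
        for conv in self.convs:
            c = F.relu(conv(x)).squeeze(3)          # (batch, filters, seq_len - k + 1)
            p = F.max_pool1d(c, c.size(2)).squeeze(2)   # max over time: (batch, filters)
            feats.append(p)
        out = self.dropout(torch.cat(feats, dim=1))
        return self.fc(out)                         # class logits for Softmax

model = TextCNN(vocab_size=5000)
tokens = torch.randint(0, 5000, (8, 19))            # 8 headlines of length 19
print(model(tokens).shape)                          # torch.Size([8, 5])
```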

3. E3W Model

In selecting the models that make up E3W, we chose the following four models, taking into account the fact that Chinese agricultural news headlines are short, making it difficult to extract more features.
ERNIE: Compared with BERT, ERNIE’s improved mask mechanism makes it more applicable to Chinese, obtaining state-of-the-art results in Chinese NLP tasks.
ERNIE + DPCNN: As news headlines are short, we also applied the DPCNN to enrich the extracted features. In the DPCNN, the text repeatedly passes through residual half-pooling blocks; in this way, the semantics of the text are continuously improved, and more features can be obtained.
EGC: In the field of Chinese agricultural news classification, the EGC model has presented the best classification effect and, at the same time, can provide a more intuitive and better comparison effect.
Finally, we also chose Word2Vec + TextCNN: Word2Vec is used to train feature-rich word vectors to compensate for the lack of features in short texts, while TextCNN is an improved CNN model that is more advantageous for text classification processing.
For these reasons, we chose ERNIE, ERNIE + DPCNN, EGC, and Word2Vec + TextCNN as the sub-models of E3W. Meanwhile, the GreedySoup weighting strategy was used to obtain weights for the outputs of the four sub-models appropriately, such that the four sub-models produced better results when combined. Agricultural news was divided into five categories, where we denoted by 0, 1, 2, 3 and 4 the five categories of fisheries, forestry, planting, animal husbandry, and side-businesses, respectively. We experimentally obtained the classification accuracy for each model in the different categories within the used data set, in order to obtain the best classification areas for each model. These areas represent the category in which the model has the highest classification accuracy, which is an important basis for the weighting of E3W sub-models using GreedySoup. The best classification areas for ERNIE, ERNIE + DPCNN, EGC, and Word2Vec + TextCNN were categories 1, 0, 2 and 3, and 4, respectively.
The E3W structure can be divided into four parts: input layer, model combination, GreedySoup Fine-tune, and final output. The overall E3W structure is shown in Figure 9.
First, in the input layer, the text is fed into the pre-treatment module for pre-processing. The pre-processed text is then fed into the combination of models, consisting of all four models (i.e., ERNIE, ERNIE + DPCNN, EGC, and Word2Vec + TextCNN), each of which outputs a result to the next stage. In the GreedySoup Fine-tune stage, the outputs of the four sub-models are weighted using the GreedySoup weighting strategy, which is carried out to obtain a matrix containing information on the weights of each category. Finally, in the final output stage, the category with the highest weight is selected as the final output.

3.1. Input Layer

The text is first entered into the model, after which it flows into a pre-processing module. In the pre-processing module, punctuation marks and stop words are filtered out [42], as these are useless symbols in the text that carry no practical meaning. This is followed by word segmentation. Chinese word segmentation is not as simple as English word segmentation, as there are typically no obvious boundaries between words, and semantic and logical relationships must often be taken into account; the quality of segmentation directly affects information analysis and the experimental results. The word segmentation tool we used was Jieba. The cleaned text was then fed into the model combination for calculation.
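A minimal sketch of this input layer (the sample headline and the tiny stop-word set are hypothetical; in practice a full Chinese stop-word list would be loaded from a file, and Jieba's segmentation of a given sentence may vary by version):

```python
import jieba

# Hypothetical stop-word set; a real run would load a full Chinese list.
stopwords = {"的", "了", "在", "与", "等"}

def preprocess(headline):
    """Strip punctuation and stop words, then segment with Jieba, as in
    the input layer described above."""
    tokens = jieba.lcut(headline)
    # isalnum() is True for Chinese characters but False for punctuation
    return [t for t in tokens
            if t.strip() and t not in stopwords and t.isalnum()]

print(preprocess("四川的水产养殖技术推广与应用"))
# e.g. ['四川', '水产', '养殖', '技术', '推广', '应用']
```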

3.2. Combination of Models

In the model combination stage, we combined the four sub-models to process the data. These four sub-models are those models with the highest classification accuracies on different categories of the data set and, by combining them, the classification accuracy could be improved. This idea comes from the approach proposed by Google and other research institutions in 2022 [20]: as some models perform better on different categories of the same data set, by combining these models, a higher accuracy can be achieved. In this way, the performance of the models in downstream tasks can be improved.
In the model combination stage, the cleaned text passes through four processing routes: (1) In ERNIE, the text is first encoded by the transformer in ERNIE, in order to obtain a vector representation of the text, and is then classified by Softmax to obtain the output; (2) in ERNIE + DPCNN, the text is first encoded by ERNIE to obtain a vector representation of the text, following which the DPCNN encodes the text vector again, improving the semantics of the text and making the extracted features richer. Finally, the text vector is classified by Softmax to obtain the output result; (3) in EGC, the text is first encoded by ERNIE, following which the encoded data are fed into DPCNN-upgrade and BiGRU for processing, respectively. BiGRU contains a bi-directional neural network structure that fully extracts the contextual relationships in the text, which facilitates the extraction of deeper features from the text. DPCNN-upgrade is an improvement of DPCNN. As the news text is short, DPCNN-upgrade reduces two convolutional layers to retain more text features. Then, we fuse the two text features extracted by DPCNN-upgrade and BiGRU, in order to form new features. Finally, Softmax classifies these new features to obtain the output results; and (4) in Word2Vec + TextCNN, Word2Vec first encodes the text to obtain a vector representation of the text, after which TextCNN convolves and pools the encoded data. TextCNN is a neural network specifically applied to text classification, and the extracted text features will be richer. Finally, the TextCNN-processed data are classified by Softmax, in order to obtain the output results. The model combination structure is shown in Figure 10.

3.3. GreedySoup Fine-Tune

In the GreedySoup Fine-tune stage, the weights are adjusted using the GreedySoup weighting strategy, in which the output weights of each model are adjusted to achieve the best classification effect. GreedySoup is a method for adjusting model weights, forming part of the ModelSoup method proposed by Google in 2022 [21]. GreedySoup first sorts the models from highest to lowest according to their experimental accuracies on the data set, then combines some of them and adjusts the weight of each model's output: if a weight adjustment can improve the accuracy of the combined model, then the improved weight is used. The GreedySoup weighting strategy allows the combined model to classify no less effectively than any of its sub-models.
The weight values X and Y in the GreedySoup Fine-tune stage were obtained through parametric experiments, and the subsequent experimental results showed that the best results were obtained when Y was greater than X; the final parameters used in this paper were (X: 1, Y: 2). In the GreedySoup Fine-tune stage, we weight the four outputs produced in the model combination stage. First, we determine whether the output category of a model belongs to the best classification area of that model; if it does, the output is assigned a weight of Y, and otherwise a weight of X. We then adjust the weights of the outputs to improve the performance of the combined model.
By adjusting the weights, we obtain the output of each model, along with the weight for this output. For example, if the output of the ERNIE model is Class 0 and this output is not the best classification area of ERNIE, the weight of this output is X; if the output of the EGC model is Class 3 and this output is the best classification area of EGC, the weight of this output is Y. Next, we add up the weights for the same class; for example, if the output and weights of EGC and Word2Vec + TextCNN are Class 3: Y, Class 3: X, respectively, by adding them together, we obtain the weights of Class 3 as X + Y. By summing the weights, a matrix containing the different output categories and the corresponding weights is obtained. The GreedySoup Fine-tune process is depicted in Figure 11.

3.4. Final Output

In the GreedySoup Fine-tune stage, we obtained a matrix containing different weights for different categories. In the final output stage, we select the category with the highest weight in this matrix as the E3W output result. For example, if Class 3 has the highest weight, X + Y, then the final output is Class 3.
Finally, combined with the above description, Algorithm 1 provides a concrete implementation of the E3W classification method.
Algorithm 1. GreedySoup Fine-tune.
Input: the preprocessed text set
Output: ClassN //N indicates the text category
1:  for each module ∈ Modules do //Pass the data into each sub-model
2:   P = Best_classification_area
3:   N = classification_result
4:   if N ∈ P then
5:    Weight(ClassN) = Y
6:   else
7:    Weight(ClassN) = X
8:   end if
9:   Sum(ClassN) //Accumulate the weight of ClassN
10: end for
11: Max(ClassN) //Select the ClassN with the maximum weight
12: return ClassN //The output
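Read as weighted voting, Algorithm 1 can be implemented in a few lines of Python; a minimal sketch (the model-name strings are labels for this illustration; the best classification areas and the weights X = 1, Y = 2 follow Sections 3 and 4.2.5):

```python
from collections import defaultdict

# Best classification area of each sub-model (Section 3): the categories in
# which that model achieves its highest accuracy.
BEST_AREAS = {
    "ERNIE": {1},
    "ERNIE+DPCNN": {0},
    "EGC": {2, 3},
    "Word2Vec+TextCNN": {4},
}
X, Y = 1, 2  # output weights selected in the parameter experiments (Table 3)

def e3w_vote(predictions):
    """Weighted voting of Algorithm 1: a sub-model's predicted class gets
    weight Y if it lies in that model's best classification area and X
    otherwise; the class with the largest summed weight is the output.
    Ties go to the smaller class label (solution (1) in Section 4.2.5)."""
    scores = defaultdict(int)
    for model, cls in predictions.items():
        scores[cls] += Y if cls in BEST_AREAS[model] else X
    return min(scores, key=lambda c: (-scores[c], c))

# Example: class 0 collects X + Y = 3 and class 3 collects Y + X = 3; the
# tie is broken toward the smaller label, so E3W outputs class 0.
preds = {"ERNIE": 0, "ERNIE+DPCNN": 0, "EGC": 3, "Word2Vec+TextCNN": 3}
print(e3w_vote(preds))  # 0
```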

4. Experiments and Analysis

As we found no publicly available data set for Chinese agricultural news, we constructed our own. We then compared E3W with its sub-models and with several other combined models, analyzing the results from several perspectives.

4.1. Data Set

At present, a data set for Chinese agricultural news is lacking. The most famous data set for Chinese news classification is the THUCNews data set [43], which contains 740,000 news documents. This data set includes 14 categories of news, such as sports, education, and science and technology; however, agricultural news is absent. In the absence of a data set in a research direction, it is common to construct the required data set [44]; accordingly, we endeavored to construct the data set used in this paper.
The data set constructed in this paper consists of Chinese agricultural news headlines collected using the Octopus software [45]. The data were obtained from agricultural news websites such as China Animal Husbandry Network, Ocean Information Network, Southwest Fishery Network, China Agriculture Network, and China Soybean Network. These are all very large agricultural websites in China, hosted by professional agricultural companies; they provide professional, objective news and comprehensive information, and have great influence in the Chinese agricultural community. The data set includes data collected up to 2022, ensuring its currency, which is crucial for data quality [46,47,48].
The E3W model takes text data as input, including exclamations, special characters, etc. Therefore, we pre-process the data set, including the deletion of stop words and word segmentation. The deletion of stop words involves removing meaningless symbols, such as ‘[’ and ‘*’, as well as semantic words that have no real meaning, such as ‘etc’, ‘so on’, and ‘the’. These stop words and symbols provide no information, and their removal can help us to reduce the size of the training data, in order to capture the meaning more appropriately. Furthermore, word segmentation is the process of dividing Chinese text into entities and phrases, such that more of the meaning of the text can be retained for classification.
The data set included 15,548 news headlines collected from websites in different domains. Some manual checks were performed after data collection so that the data could be correctly assigned to the appropriate categories. Chinese agricultural classifications are generally divided into five categories; hence, some studies on Chinese agricultural news have also treated it as a five-category NLP task [19]. The ratio of data in the training, test, and validation sets was 8:1:1, and the average Chinese word length of sentences in the data set was 18.98. The number of samples per category and other details of the data set are provided in Table 1.
For a more visual representation of the data set constructed for this paper, we provide word clouds for the five categories of the data set, as well as for the total data, in Figure 12.
Table 2 shows a selection of the self-built Chinese agricultural news data set, presented in English. As can be seen from Table 2, the average text length in the data set was short, meaning that insufficient text features can be extracted and classification is more difficult. It is difficult for traditional models to achieve the highest classification accuracy in every category, an important consideration in this paper, which led us to combine multiple sub-models to improve the classification accuracy. Among the categories, the texts in the planting category were the shortest, while there was not much difference between the text lengths in the other categories. It is worth noting that the side business category covered the broadest range of topics and was partially similar to the texts in the other categories, which is an important reason why the accuracy for the side business category was lower than that for the other categories in the classification results.

4.2. Experimental Parameter Settings

In addition to the parameter settings of the four sub-models and E3W, the relevant parameters for the other models used in the experiments were set as follows: the word dimension was 100, the hidden layer size was 769, the learning rate was set to 1 × 10−5, the maximum sentence length was 19, the dropout was set to 0.5, and the learning rate decay was 0.9. Where CNN convolutional kernels were involved, the kernel sizes were set to (2, 3, 4), and the number of kernels was 128.

4.2.1. Parameter Settings for ERNIE

The word dimension was 100, the ERNIE hidden layer size was 769, the learning rate was 1 × 10−5, the sentence length was 19, and the dropout was set to 0.1.

4.2.2. Parameter Settings for ERNIE + DPCNN

The number of convolution kernels was 250. The size of the convolution kernels was (2,3,4). The other parameters were consistent with ERNIE.

4.2.3. Parameter Settings for EGC

The number of BiGRU layers was 2, while the number of BiGRU hidden layers was 256. Other parameters were consistent with ERNIE and ERNIE + DPCNN.

4.2.4. Parameter Settings for Word2Vec + TextCNN

The dimension of the words was 100. The sentence length was 19. The dropout was 0.5. The number of convolution kernels was 128. The size of the convolution kernels was (2,3,4). The learning rate was 1 × 10−3. The decay rate of the learning rate was 0.9.

4.2.5. Parameter Settings for E3W

The parameters of the E3W model were the same as those described above, with the weighting parameters set to 1 for X and 2 for Y.
We obtained the best values for the weighting parameters X and Y in the GreedySoup Fine-tune stage experimentally. As E3W consists of a combination of four sub-models, candidate values of 1–4 were chosen for the experiments. When the output of a model fell within the best classification area of that model, it was assigned the weight Y; otherwise, it was assigned X.
For example, as the best classification area of Word2Vec + TextCNN is Class 4, when the output of Word2Vec + TextCNN is Class 4, the weight of this output is Y; otherwise, it is X.
The experiments showed that the best results were obtained when Y was greater than or equal to X. In theory, Y strictly greater than X has the best effect: E3W contains four sub-models, and when two sub-models choose an incorrect category, a larger Y ensures that there is still a chance of obtaining the correct result if the other two sub-models choose the correct category. As X = 1 and Y = 2 also lies within the range where Y is greater than or equal to X, the final parameters used in this paper were X = 1 and Y = 2. The experimental results are detailed in Table 3.
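A sketch of this parameter search (the `evaluate` callback and the toy accuracy numbers below are stand-ins for the real validation runs reported in Table 3, not measured values):

```python
from itertools import product

def search_weights(evaluate, candidates=range(1, 5)):
    """Try every (X, Y) pair with values 1-4 and keep the pair with the
    highest validation accuracy; `evaluate` is assumed to run E3W with the
    given weights over the validation set."""
    return max(product(candidates, repeat=2), key=lambda xy: evaluate(*xy))

# Toy stand-in for the real validation run (illustrative numbers only):
toy_accuracy = {(1, 1): 0.910, (1, 2): 0.918, (2, 3): 0.915}
best = search_weights(lambda x, y: toy_accuracy.get((x, y), 0.905))
print(best)  # (1, 2)
```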
In addition, in GreedySoup Fine-tune, there may be a situation where multiple categories are weighted equally, making it impossible to select the best output. In this case, two solutions are provided in this paper:
(1) The smaller category label was selected as the result (if class 0 and class 4 had the same weight, then class 0 was selected as the final result). This operation achieved good results in the experiment, mainly because E3W has a lower classification accuracy for class 4, while classes 0, 1, 2, and 3 have more text, more obvious features, and a higher probability of correct classification.
(2) When the final weights of two categories were the same, if one of the outputs came from the best classification area of its model, then the weights of the other three sub-models were ignored. This operation ensured that the accuracy of E3W was not lower than that of any sub-model.
The above two solutions significantly improved the classification accuracy on the data set, where the improvement with both approaches was almost identical.

4.3. Model Evaluation Indicators

For this study, an indicator test of the text classification model was carried out. The model evaluation indicators of the classification algorithm are often measured using the confusion matrix, as shown in Table 4. The data obtained from the confusion matrix are extended by calculation to obtain four secondary metrics—accuracy, precision, recall, and F1-score—which are the core metrics for evaluating classification models [49].
The accuracy of a classification model represents the ratio of samples correctly predicted by the model for all samples. In general, the higher the accuracy, the better the classifier. The accuracy is calculated as follows:
$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$ (8)
The precision of a classification model is defined as the percentage of samples with true positive class among all samples predicted to be positive class, calculated as:
$\text{Precision} = \frac{TP}{TP + FP}$ (9)
The recall of a classification model is defined as the percentage of samples with true positive classes that are correctly predicted, and the formula is as follows:
$\text{Recall} = \frac{TP}{TP + FN}$ (10)
The F1-score is the harmonic mean of precision and recall (see Formula (11)), which combines the two results and is closer to the smaller of them; the F1-score is therefore large only when both precision and recall are high. A higher value of F1 indicates a better model prediction effect.
$F1 = \frac{2TP}{2TP + FP + FN}$ (11)
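These four metrics can be computed directly with scikit-learn; a small sketch with toy labels over the five classes (illustrative values only, assuming macro averaging, which computes each metric per category and then averages):

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)

# Toy labels over the five classes 0-4 (fisheries, forestry, planting,
# animal husbandry, side business); illustrative values only.
y_true = [0, 1, 2, 3, 4, 2, 3, 0, 4, 1]
y_pred = [0, 1, 2, 3, 2, 2, 3, 0, 4, 1]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
```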

4.4. Experiments

Next, 13 models were selected for experiments, in order to test the classification effects of different models on the agricultural news data set. The four models that performed best in the different categories were finally selected and combined to generate E3W.

4.4.1. Experiments on the 13 Models

The models tested included the basic Naïve Bayes and k-nearest neighbors models, as well as more advanced models such as BERT, RoBERTa, and MacBERT. When combining CNN, RNN, and DPCNN with BERT, we found that BERT combined with DPCNN gave the best results. Additionally, to compensate for BERT’s lack of masking capability when dealing with Chinese, ERNIE was also used for the classification of agricultural news, combined with DPCNN and BiGRU, respectively. In order to be more applicable to agricultural news and retain more text features, DPCNN was optimized by reducing two convolutional layers, in order to obtain DPCNN-upgrade. We extracted features using both BiGRU and DPCNN-upgrade, then fused the two extracted features, which were then classified to achieve better results. In addition, we attempted to combine Word2Vec with TextCNN by first training Word2Vec with word vectors, then inputting it into TextCNN, obtaining the final results through Softmax classification. The experimental results for the models are provided in Table 5.
It can be concluded from Table 5 that the best results for the fishery category were achieved with ERNIE + DPCNN. The average text length in the fisheries data was the longest of the five categories and, with ERNIE and DPCNN, many valid features were preserved, such that good results were achieved. ERNIE achieved the best results for forestry classification, where the average length was second only to that of the fisheries data; effective features could be extracted with good results using ERNIE alone. EGC achieved the best results in the planting and animal husbandry categories, whose average text lengths were the shortest. In EGC, the data are encoded by ERNIE, and then DPCNN-upgrade and BiGRU extract different features, which are fused to obtain richer features for better results. The models based on BERT and ERNIE did not obtain very good results when classifying the side business category, which overlaps with many other categories and is not easy to distinguish; here, Word2Vec + TextCNN achieved the best results, as Word2Vec first trains adequate word vectors and TextCNN then extracts rich features.

4.4.2. Experiments with E3W and Four Sub-Models

Considering Table 5, we obtained four sub-models with excellent classification results: ERNIE, ERNIE + DPCNN, EGC, and Word2Vec + TextCNN. We combined these four high-performing models to obtain the proposed E3W model. Then, we evaluated these five models, in terms of accuracy (ACC), precision (P), recall (R), and F1-score (F1), in order to evaluate their effectiveness and conducted relevant ablation experiments, the details of which are provided below.
E3W is the model proposed in this paper. In Figure 13, we provide the confusion matrix for E3W; it can be seen, from the figure, that the number of misclassifications for side business was the highest among all of the classification areas. In particular, the distinction between side business and planting categories was the most difficult, as some side business-related industries, such as the textile industry, often use materials from the plantation industry, making them prone to misclassification. There are also difficulties in distinguishing between side business and animal husbandry for similar reasons, such as the leather sector, which obtains its raw materials from animal husbandry. These are the key reasons underlying the low classification accuracy of side business. Based on the presented confusion matrix, we also obtained the ACC, P, R, and F1 values for E3W, which will be shown subsequently for comparison.
We next verified whether the overall performance of the model was improved by analyzing the average accuracy. Table 6 shows the average accuracies of different models. The E3W model proposed in this paper achieved the highest average accuracy, being 1.02% higher than that of the most advanced EGC. It can be concluded that the classification effect of the E3W model is excellent, and it is more effective than using any model alone.
Figure 14 shows the precision of the different models. It can be seen that E3W achieved the highest precision values for fisheries and planting, with increases of 0.02% and 1.59%, respectively. E3W achieved an average precision of 93.09%, which was 1.62% better than the state-of-the-art (EGC).
Figure 15 shows the recall of the different models. Again, E3W had the highest recall rates in fisheries, animal husbandry, and side business categories, improving by 0.78%, 0.81%, and 1.44%, respectively. E3W reached 90.61% in the average recall rate, ahead of the state-of-the-art by 1.21%.
Figure 16 shows the F1-scores of the different models, which reflect the overall capability of a model. E3W achieved the highest F1-score values in the fisheries, planting and side business categories, improving by 0.47%, 0.08% and 2.62%, respectively, with the average F1-score of E3W improving upon the state-of-the-art by 1.02%.
To verify the effectiveness of the E3W model combination, we conducted ablation experiments. From Table 7, it can be seen that E3W presented significant improvements in Avg Accuracy, Avg Precision, Avg Recall score, and Avg F1-score, by 0.15%, 0.14%, 0.15%, and 0.12%, respectively. The results of the ablation experiment fully illustrate the validity and correctness of the E3W model.

4.5. Experiment Discussion

It can be concluded from the experimental results that the classification effectiveness of the E3W model proposed in this paper was significantly improved across the evaluation metrics, with a 1.02% improvement in average accuracy, a 1.62% improvement in average precision, a 1.21% improvement in average recall, and a 1.02% improvement in average F1-score. Subsequent ablation experiments also showed that the E3W model was made more effective by the combination used. These experiments demonstrate that E3W, which combines multiple models for agricultural news classification, is the most advanced approach.
Experiments comparing E3W with 13 classical and advanced models showed that E3W performed best for the classification of Chinese agricultural news. This is because E3W is a combination of the models with the highest accuracy in each classification domain, such that E3W can achieve the best results on each domain. We also compared the E3W model with its four sub-models in terms of four evaluation metrics: accuracy, precision, recall, and F1-score. E3W not only achieved the highest results in each evaluation metric, but also provided a significant improvement in accuracy in each evaluation metric.
The output of E3W takes into account the output of the four sub-models and, if the output category of a sub-model is the category in which the sub-model has the highest classification accuracy, we weigh this output to compensate for the sub-model’s lower classification accuracy in other categories, thus improving the accuracy.
Finally, in order to verify that the combination used in E3W is scientific and reasonable, ablation experiments were conducted to compare the results under different model combinations; however, none of these combinations performed as well as E3W. The main reason for this is that E3W considers all categories and uses a combination of models with the highest accuracy in each classification domain, such that any other combination will not be as comprehensive as E3W.

5. Conclusions

In this paper, we proposed a Chinese agricultural news classification model, E3W, based on a GreedySoup weighting strategy and a multi-model combination approach. E3W consists of a combination of four sub-models, where the outputs of the four sub-models are weighted to obtain the final classification results. The proposed model was tested on a self-built Chinese agricultural news data set. A total of 13 model comparison experiments indicated a significant improvement in the classification performance of our proposed model; in particular, E3W improved the average accuracy by 1.02%, the average precision by 1.62%, the average recall by 1.21%, and the average F1-score by 1.02%. Subsequent ablation experiments also validated that the E3W model was composed of an optimal combination of sub-models. The results presented above demonstrate the excellent classification capability of the E3W model, which further enhances the effectiveness of Chinese agricultural short text classification and provides a new strategy for subsequent text classification work, using a combination of models to improve the model performance in downstream tasks.
The method proposed in this paper still has some limitations. For example, several candidate models must be assessed before performing model combination: many model improvement and comparison experiments are needed in advance, and this preparatory workload grows when the problem at hand requires more models. In addition, when the combined model contains a large number of sub-models, more comparative experiments are required to obtain the most effective weights.
Moreover, E3W still has some shortcomings, as it was less accurate in classifying side businesses than other categories. This is because the side business category comprises many elements, such as textiles, handicrafts, agro-processing, and other related industries. Further research is needed to more effectively classify side businesses, in order to accurately distinguish side businesses from other categories and improve the agricultural news classification accuracy.

Author Contributions

Conceptualization, Z.X. and S.Y.; methodology, Z.X. and S.Y.; software, S.Y.; validation, X.D., Z.X. and S.Y.; formal analysis, Z.X.; investigation, D.T. and Y.G.; resources, S.Y.; data curation, D.T.; writing—original draft preparation, Z.X. and S.Y.; writing—review and editing, X.D., Y.G. and Z.L.; visualization, Z.X.; supervision, X.D.; project administration, Z.X. and S.Y.; funding acquisition, Z.L.; All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of Sichuan Province, China (Grant No. 2022NSFSC0172).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Acknowledgments

We thank Du Hui, who helped us with part of the investigation and data curation.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, D.; Wang, J.; Xu, J. Densely feature fusion based on convolutional neural networks for motor imagery EEG classification. IEEE Access 2019, 7, 132720–132730.
2. Do, L.N.; Yang, H.J. Deep neural network-based fusion model for emotion recognition using visual data. J. Supercomput. 2021, 77, 10773–10790.
3. Chandio, A.; Asikuzzaman, M. Cursive character recognition in natural scene images using a multilevel convolutional neural network fusion. IEEE Access 2020, 8, 109054–109070.
4. Ashir, D.M.; Talukder, M.; Rahman, T. Internet of Things (IoT) Based Smart Agriculture Aiming to Achieve Sustainable Goals. arXiv 2022, arXiv:2206.06300.
5. Duan, Q.L.; Zhang, L.; Liu, Y.R. Automatic Extraction Method of Hot Words Based on Agricultural Network Information Classification. Trans. Chin. Soc. Agric. Mach. 2018, 49, 160–167.
6. Yuanyuan, D.; Wang, L. Discussion on methods and strategies of agricultural news reporting in the new era. News Outpost 2022, 7, 49–50.
7. Li, J.Y.; Hu, X.D.; Lan, Y.B. Research advance on worldwide agricultural UAVs in 2001–2020 based on bibliometrics. Trans. Chin. Soc. Agric. Eng. 2021, 37, 328–339.
8. Meichen, G.; Cheng, Y. Research requirements on how to give more effective play to the main position function of agricultural news. J. Nucl. Agric. 2021, 35, 509.
9. Li, Y.; Qiao, T.; Leng, W.; Jiao, W.; Luo, J.; Lv, Y.; Tong, Y.; Mei, X.; Li, H.; Hu, Q.; et al. Semantic Segmentation of Wheat Stripe Rust Images Using Deep Learning. Agronomy 2022, 12, 2933.
10. Xue, B.; Zhu, C.; Wang, X.; Zhu, W. The Study on the Text Classification Based on Graph Convolutional Network and BiLSTM. Appl. Sci. 2022, 12, 8273.
11. Guo, Y.; Tang, D.; Tang, W. Agricultural Price Prediction Based on Combined Forecasting Model under Spatial-Temporal Influencing Factors. Sustainability 2022, 14, 10483.
12. Xaud, M.F.S.; Leite, A.C.; Barbosa, E.S. Robotic Tankette for Intelligent BioEnergy Agriculture: Design, Development and Field Tests. arXiv 2019, arXiv:1901.00761.
13. Son, N.T.; Hoang, Q.C.; Giang, D. Developing System of Wireless Sensor Network and Unmanned Aerial Vehicle for Agriculture Inspection. Sci. Technol. 2020, 56, 9.
14. Qiu, S. On the effective ways of agricultural news communication in China. Jilin Univ. 2012, 5, 23–67.
15. Soysal, E.; Wang, J.; Jiang, M. CLAMP: A toolkit for efficiently building customized clinical natural language processing pipelines. J. Am. Med. Inform. Assoc. 2018, 25, 331–336.
16. Hu, J.; Li, S.B. Patent keyword extraction algorithm based on distributed representation for patent classification. Entropy 2018, 20, 104.
17. Li, J.; Cao, H. Research on Dual Channel News Headline Classification Based on ERNIE Pre-Training Model. arXiv 2022, arXiv:2202.06600.
18. Javed, T.; Shahzad, A.; Arshad, W. Hierarchical Text Classification of Urdu News Using Deep Neural Network. arXiv 2021, arXiv:2107.03141.
19. Yang, S.Q.; Xiao, Z. Agricultural news text classification based on ERNIE+DPCNN+BiGRU. J. Comput. Appl. 2022. Available online: http://kns.cnki.net/kcms/detail/51.1307.tp.20220805.1037.006.html (accessed on 5 August 2022).
20. Gontijo-Lopes, R.; Dauphin, Y.; Cubuk, E.D. No one representation to rule them all: Overlapping features of training methods. In Proceedings of the International Conference on Learning Representations (ICLR), Online, 25–29 April 2022.
21. Wortsman, M.; Ilharco, G.; Gadre, S.; Gontijo-Lopes, R.; Morcos, A.S. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In Proceedings of the International Conference on Machine Learning (ICML), Baltimore, MD, USA, 17–23 July 2022.
22. Mars, M. From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough. Appl. Sci. 2022, 12, 8805.
23. Zhang, B.; He, Q.; Zhang, D. Heterogeneous Graph Neural Network for Short Text Classification. Appl. Sci. 2022, 12, 8711.
24. Ali, A.M.; Ghaleb, F.A.; Al-Rimy, B.A.S.; Alsolami, F.J.; Khan, A.I. Deep Ensemble Fake News Detection Model Using Sequential Deep Learning Technique. Sensors 2022, 22, 6970.
25. Maslej-Krešňáková, V.; Sarnovský, M.; Jacková, J. Use of Data Augmentation Techniques in Detection of Antisocial Behavior Using Deep Learning Methods. Future Internet 2022, 14, 260.
26. Fesseha, A.; Xiong, S.; Emiru, E.D.; Diallo, M. Text classification based on convolutional neural networks and word embedding for low-resource languages: Tigrinya. Information 2021, 12, 52.
27. Huo, T. Research on news text classification based on fastText and its application in agricultural news. Jilin Univ. 2019, 12, 135–185.
28. Sun, Y.; Wang, S.; Li, Y. ERNIE: Enhanced representation through knowledge integration. arXiv 2019, arXiv:1904.09223.
29. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), Minneapolis, MN, USA, 2–7 June 2019.
30. Keya, A.J.; Wadud, M.A.H.; Mridha, M.F.; Alatiyyah, M.; Hamid, M.A. AugFake-BERT: Handling Imbalance through Augmentation of Fake News Using BERT to Enhance the Performance of Fake News Classification. Appl. Sci. 2022, 12, 8398.
31. Johnson, R.; Zhang, T. Deep pyramid convolutional neural networks for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; Volume 1, pp. 562–570.
32. Cho, K.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014.
33. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
34. Lei, J.H.; Qian, J. Chinese-text classification method based on ERNIE-BiGRU. J. Shanghai Univ. Electr. Power 2020, 36, 329350.
35. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. Adv. Neural Inf. Process. Syst. 2013, 26, 3111–3119.
36. Aldhyani, T.H.H.; Alsubari, S.N.; Alshebami, A.S.; Alkahtani, H.; Ahmed, Z.A.T. Detecting and Analyzing Suicidal Ideation on Social Media Using Deep Learning and Machine Learning Models. Int. J. Environ. Res. Public Health 2022, 19, 12635.
37. Chen, Y.; Sheng, J. Microblog tag generation algorithm based on LDA and Word2vec. Comput. Mod. 2021, 37, 37–42.
38. Niu, X.Y.; Zhao, E.Y. Research on Chinese Weibo text classification based on Word2Vec. Comput. Syst. Appl. 2019, 8, 256–261.
39. Roy, S.S.; Goti, V.; Sood, A. L2 regularized deep convolutional neural networks for fire detection. J. Intell. Fuzzy Syst. 2022, 43, 1799–1810.
40. Roy, S.S.; Rodrigues, N.; Taguchi, Y.-H. Incremental Dilations Using CNN for Brain Tumor Classification. Appl. Sci. 2020, 10, 4915.
41. Kim, Y. Convolutional neural networks for sentence classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014.
42. Zhang, D.; Xu, H.; Su, Z. Chinese comments sentiment classification based on word2vec and SVMperf. Expert Syst. Appl. 2015, 42, 1857–1863.
43. Jing, L.; He, T.T. Chinese Text Classification Model Based on Improved TF-IDF and ABLCNN. Comput. Sci. 2021, 48, 170–175, 190.
44. Azime, I.A.; Mohammed, N. An Amharic News Text Classification Dataset. arXiv 2021, arXiv:2103.05639.
45. Cui, Y.J.; Liao, K.Q. Automatic extraction of metadata from overprinted web issues with the help of Octopus collector. J. Ed. 2016, 28, 485–488.
46. Li, M.H.; Li, J.Z. Data currency determination: Key theories and technologies. Intell. Comput. Appl. 2016, 6, 72–75.
47. Li, J.Z.; Liu, X.M. An Important Aspect of Big Data: Data Usability. Comput. Res. Dev. 2013, 50, 1147–1162.
48. Kou, X.; Duan, X.L. Data repair method based on timeliness and conditional function dependency rules. In Proceedings of the 2019 8th International Conference on Computing and Pattern Recognition, Beijing, China, 23–25 October 2019; pp. 57–64.
49. Brzezinski, D.; Stefanowski, J. Prequential AUC: Properties of the area under the ROC curve for data streams with concept drift. Knowl. Inf. Syst. 2017, 52, 531–562.
Figure 1. Difference between ERNIE and BERT masking methods.
Figure 2. Transformer structure.
Figure 3. DPCNN model structure.
Figure 4. EGC model structure.
Figure 5. BiGRU structure.
Figure 6. Comparison between DPCNN and DPCNN-upgrade structures.
Figure 7. CBOW and Skip-gram structure diagrams.
Figure 8. TextCNN structure.
Figure 9. E3W structure.
Figure 10. Combined model structure, showing the processing of data in the model combination stage.
Figure 11. GreedySoup Fine-tune structure, showing the processing of data in the GreedySoup Fine-tune stage.
Figure 12. Word clouds for the fisheries, forestry, planting, animal husbandry, and side business categories, along with total words in the constructed data set.
Figure 13. Confusion matrix for E3W.
Figure 14. Comparison of precision for E3W and the four sub-models, with the best experimental results in bold and the number in brackets representing the comparison of the current model with the previous best model.
Figure 15. Comparison of recall for E3W and the four sub-models, with the best experimental results in bold and the number in brackets representing the comparison of the current model with the previous best model.
Figure 16. Comparison of F1-scores for E3W and the four sub-models, with the best experimental results in bold and the number in brackets representing the comparison of the current model with the previous best model.
Table 1. Details of the data set.

Category           Train    Test   Val    Total    Average Length
Fisheries          2297     258    258    2813     21.0
Forestry           1936     192    193    2321     19.8
Planting           3645     356    357    4358     16.2
Animal husbandry   3239     371    371    3981     17.7
Side business      1661     207    207    2075     20.2
Total              12,778   1384   1386   15,548   18.98
Table 2. Examples of data set contents.

Category           Translation of the Description
Fisheries          Ten-year ban on fishing with multi-fold increase in Yangtze River swordfish
                   Fish leapfrog customs to help boost Chongqing's first live fish exports
Forestry           Signing of a forestry project to forest management techniques in Shanxi Province
                   Promote a system of forest chiefs to ensure that eco-benefits do work for the public
Planting           Emeishan tea industry helps farmers increase income effectively
                   Apple futures prices fall back as seasonal fruit hits the market
Animal husbandry   Stabilizing poultry numbers: Haikou City promotes livestock transformation
                   Bavaria adds new standardized farms for livestock development
Side business      Fur deep-processing expansion project of Changli County Fur Co.
                   Feed processing plant offers its idle full set of feed processing equipment for scrap
Table 3. Experimental results for the weighting parameters, where bold represents the best experimental results.

X\Y   1        2        3        4
1     0.9039   0.9039   0.9039   0.9039
2     0.8981   0.9039   0.9039   0.9039
3     0.8981   0.8995   0.9039   0.9039
4     0.8872   0.8981   0.8995   0.9039
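
To make the two-step procedure concrete, the sketch below implements a weighted vote of the kind Table 3 sweeps. This is an illustrative reconstruction, not the authors' released code: the sub_models list, its predict() interface, and the reading of X as the weight of the strongest sub-model's vote (with Y for the remaining votes) are all assumptions made for this sketch.

```python
from collections import Counter

def e3w_vote(text, sub_models, x=1, y=1):
    """Two-step E3W-style classification (illustrative sketch only).

    Assumes `sub_models` is ordered by validation accuracy, and that
    x/y are hypothetical stand-ins for the X and Y weights of Table 3:
    the strongest sub-model's vote counts x, every other vote counts y.
    """
    # Step 1: each sub-model gives an initial classification result.
    votes = [model.predict(text) for model in sub_models]
    # Step 2: weight the initial results and keep the heaviest category.
    scores = Counter()
    for rank, label in enumerate(votes):
        scores[label] += x if rank == 0 else y
    return scores.most_common(1)[0][0]
```

Under this assumed reading, increasing X lets the strongest sub-model outweigh the others, which is consistent with the pattern in Table 3 where the result only changes once X exceeds Y; the interpretation of X and Y remains speculative, and the sketch only fixes the overall two-step shape.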
Table 4. Confusion matrix.

True\Predicted   0    1
0                TP   FN
1                FP   TN
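
The four evaluation metrics follow directly from these counts. As a minimal sketch of the standard definitions (not code from the paper):

```python
def metrics_from_counts(tp, fn, fp, tn):
    """Accuracy, precision, recall, and F1-score from Table 4 counts."""
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

For example, metrics_from_counts(90, 10, 8, 92) returns approximately (0.910, 0.918, 0.900, 0.909).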
Table 5. Summary of experimental results for 13 models, with the best experimental results in bold.

Model Name           Fisheries   Forestry   Planting   Animal Husbandry   Side Business   Average Accuracy (ACC)
Naive Bayes          0.0700      0.0100     0.3700     0.5300             0.0600          40.00%
KNeighbors           0.0700      0.2600     0.2900     0.0100             0.0600          19.00%
BERT                 0.6876      0.8067     0.7741     0.6599             0.3407          68.42%
RoBERTa              0.3260      0.4000     0.6469     0.6485             0.0088          55.03%
MacBERT              0.6114      0.7586     0.6262     0.8003             0.3597          66.51%
BERT + CNN           0.9503      0.8403     0.5976     0.7870             0.1616          72.06%
BERT + BiGRU         0.5738      0.8405     0.6496     0.8604             0.2727          69.41%
BERT + DPCNN         0.6918      0.8246     0.7469     0.6777             0.4250          69.18%
ERNIE                0.9735      0.9125     0.8564     0.9060             0.7087          88.00%
ERNIE + DPCNN        0.9777      0.9076     0.8469     0.9121             0.7047          88.00%
ERNIE + BiGRU        0.9737      0.9087     0.8473     0.9208             0.7010          88.18%
Word2Vec + TextCNN   0.8000      0.8200     0.7900     0.8800             0.7700          82.41%
EGC                  0.9701      0.8889     0.8959     0.9322             0.7130          89.29%
Table 6. Comparison of average accuracy for E3W and the four sub-models, with the best experimental result in bold and the number in brackets representing the comparison of the current model with the previous best model.

Model Name           Avg. Accuracy
ERNIE + DPCNN        0.8895
ERNIE                0.8728
EGC                  0.8959
Word2Vec + TextCNN   0.8526
E3W (ours)           0.9061 (+1.02%)
Table 7. Ablation experiments on E3W, with the best experimental results in bold and the number in brackets representing the comparison of the current model with the previous best model.

Model Name                                   Avg. Accuracy     Avg. Precision    Avg. Recall       Avg. F1-Score
E2 (EGC + ERNIE)                             0.9039            0.9080            0.9039            0.9042
E2 (EGC + ERNIE-DPCNN)                       0.9046            0.9089            0.9046            0.9049
E3 (EGC + ERNIE + ERNIE-DPCNN)               0.9039            0.9080            0.9039            0.9042
E1W (EGC + Word2Vec-TextCNN)                 0.9039            0.9080            0.9039            0.9042
E2W (EGC + ERNIE + Word2Vec-TextCNN)         0.9039            0.9080            0.9039            0.9042
E2W (EGC + ERNIE-DPCNN + Word2Vec-TextCNN)   0.9046            0.9089            0.9046            0.9049
E3W (ours)                                   0.9061 (+0.15%)   0.9103 (+0.14%)   0.9061 (+0.15%)   0.9063 (+0.12%)
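
The ablation pattern above, in which a sub-model joins the combination only when it does not hurt held-out performance, mirrors the greedy selection step of the GreedySoup procedure of Wortsman et al. [21]. A minimal sketch of that selection loop follows; the candidates list and the evaluate() scoring function are assumed interfaces for illustration, not the paper's implementation.

```python
def greedy_soup(candidates, evaluate):
    """Greedily grow a model combination on held-out data (sketch).

    `candidates` is a list of trained models and `evaluate(models)`
    returns a held-out score for a combination; both are assumptions.
    """
    # Rank candidates by their individual held-out scores.
    ranked = sorted(candidates, key=lambda m: evaluate([m]), reverse=True)
    soup, best = [ranked[0]], evaluate([ranked[0]])
    for model in ranked[1:]:
        score = evaluate(soup + [model])
        if score >= best:  # keep a candidate only if it does not hurt
            soup, best = soup + [model], score
    return soup, best
```

A loop of this shape would walk through combinations like those in Table 7, retaining a sub-model only when the averaged metrics do not fall.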
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
