Article

Multi-Label Classification Algorithm for Adaptive Heterogeneous Classifier Group

School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(1), 103; https://doi.org/10.3390/math13010103
Submission received: 8 November 2024 / Revised: 24 December 2024 / Accepted: 26 December 2024 / Published: 30 December 2024
(This article belongs to the Section E1: Mathematics and Computer Science)

Abstract

Ensemble classification is widely used in multi-label algorithms and can be divided into homogeneous and heterogeneous ensembles according to the types of classifiers used. A heterogeneous ensemble can generate more diverse classifiers than a homogeneous ensemble and thus improve classification performance. This paper proposes an Adaptive Heterogeneous Classifier Group (AHCG) algorithm. The AHCG first introduces the concept of a Heterogeneous Classifier Group (HCG); that is, two different groups of ensemble classifiers are used in the training and testing phases. Secondly, an Adaptive Selection Strategy (ASS) is proposed, which selects the group of ensemble classifiers to be used in the test phase. The least squares method is used to calculate the weights of the base classifiers within each group, and the base classifiers are dynamically updated according to these weights. Extensive experiments on seven datasets show that this algorithm performs better than most existing ensemble classification algorithms in terms of accuracy, example-based F1, micro-averaged F1, and macro-averaged F1.

1. Introduction

The traditional supervised learning task is based on single-label classification algorithms; that is, each data instance is associated with a single label. In reality, however, many problems involve multiple labels. Multi-label classification is mainly applied to text classification [1], medical diagnostic classification [2], protein classification [3], music [4] or video classification [5], etc. For example, in medical diagnosis, a patient can have both diabetes and hypertension.
Given a d-dimensional input space $X = X_1 \times \cdots \times X_d$ and an output space of $p$ labels, the label space is $L = \{l_1, l_2, \ldots, l_p\}$, where $p > 1$. A multi-label instance can be defined as a pair $(x, l)$, where $x = (x_1, \ldots, x_d) \in X$ and $l \subseteq L$ is called a label set; the entry for $l_p$ equals 1 when that label is associated with instance $x$, and 0 otherwise. The goal of multi-label classification (MLC) [6] is to build a prediction model $h: X \rightarrow 2^L$ that provides a set of relevant labels for unknown instances. Each instance may have several labels associated with it from a previously defined set of labels. Therefore, for every $x \in X$, the label space $L$ is split into a bipartition $(l, \bar{l})$, where $l = h(x)$ is the set of relevant labels and $\bar{l}$ the set of irrelevant labels.
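As a concrete illustration of this notation, the short Python example below encodes one instance, its label set, and the induced bipartition of the label space; the label names and feature values are invented for the example.

# Label space L with p = 4 labels (names invented for illustration).
L = ["sports", "politics", "finance", "health"]

# One instance x in X (d = 3 features) and its label set l, encoded as a
# binary vector: the jth entry is 1 iff label l_j is associated with x.
x = [0.7, 1.2, 0.3]
l = [1, 0, 1, 0]

# The bipartition (l, l-bar) of the label space for this instance.
relevant = [name for name, v in zip(L, l) if v == 1]    # l
irrelevant = [name for name, v in zip(L, l) if v == 0]  # l-bar
print(relevant, irrelevant)  # ['sports', 'finance'] ['politics', 'health']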
Ensemble techniques are becoming increasingly important, as they have been shown to improve the accuracy of single classifiers [7]. The base classifiers in an ensemble can be homogeneous or heterogeneous; heterogeneous base classifiers are constructed using different algorithms. Classical multi-label homogeneous ensemble algorithms include the Ensemble of Classifier Chains (ECC) [8], Ensemble of Binary Relevance (EBR) [9], and Ensemble of Pruned Sets (EPS) [10]. Heterogeneous ensemble algorithms are widely used in single-label classification. Heterogeneous dynamic ensemble selection based on accuracy and diversity (HDES-AD) [11] can switch between different types of base classifiers in the ensemble to enhance predictive performance in non-stationary environments. A heterogeneous ensemble using a variety of learning algorithms has a higher potential to generate diversified classifiers than a homogeneous ensemble. The authors of [12] use a heterogeneous ensemble to carry out imbalanced learning. In multi-label classification, another study [7] uses a heterogeneous ensemble of classifiers to address sample imbalance and label-related problems; it proposes combining state-of-the-art multi-label methods through ensemble techniques rather than embedding ensemble techniques within individual multi-label learners.
Although existing ensemble classification algorithms can deal with some multi-label classification problems, most of them adopt the homogeneous ensemble method, which uses a single type of base classifier and therefore lacks diversity. Moreover, the traditional heterogeneous ensemble achieves heterogeneity only through different types of base classifiers within a single ensemble. To address these problems, this paper proposes an Adaptive Heterogeneous Classifier Group (AHCG) multi-label algorithm, the main contributions of which are as follows:
(1)
The concept of a Heterogeneous Classifier Group is proposed. Two different ensemble classifiers are used in the training and testing stages. This differs from the previous concept of a heterogeneous ensemble: heterogeneity no longer comes from mixing different kinds of base classifiers within one ensemble.
(2)
The Adaptive Selection Strategy is proposed. An adaptive mean square error formula is used to calculate the sum of the error values of each group of ensemble classifiers, and the most suitable group is selected for testing by comparing these values.
(3)
The least squares method is used to calculate the weights of the base classifiers within the Heterogeneous Classifier Group, and the base classifiers are dynamically updated according to these weights.
(4)
Experiments are carried out on seven real datasets, and the AHCG algorithm is compared with eight homogeneous ensemble methods. Good results are obtained on all four evaluation metrics.

2. Related Works

This section reviews related work in three parts: classical multi-label classification algorithms, single-model multi-label algorithms, and ensemble multi-label algorithms.

2.1. Classical Multi-Label Classification Algorithm

There are many classical multi-label classification algorithms. They are mainly divided into Problem Transformation (PT) and Algorithm Adaptation (AA) methods. The most commonly used PT method is Binary Relevance (BR). The BR method does not consider the interdependence between labels. To overcome this problem, researchers proposed the classifier chains (CCs) method [13], which builds on the BR algorithm and connects the binary classifiers obtained from BR in a chain. The label powerset (LP) method is another PT method. Random k-label sets (RAkEL) [14] is an ensemble application of the LP method, in which each LP classifier is trained on a randomly generated, distinct small subset of labels. AA modifies existing algorithms to accommodate the new problem to be solved; that is, an existing single-label learner is adapted to the MLC problem. Popular AA models for MLC include k-nearest neighbors [15], decision trees [16], Bayesian networks [17], support vector machines [18], neural networks [19], etc.
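To make the two most common problem transformations concrete, the following Python sketch (using scikit-learn-style binary learners; the toy data are invented) trains Binary Relevance with one independent classifier per label and Label Powerset by mapping each distinct label combination to a single class.

import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

# Toy multi-label data: 6 instances, 4 features, 3 labels (illustrative values).
X = np.random.rand(6, 4)
Y = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0],
              [0, 0, 1], [1, 0, 0], [0, 1, 1]])

# Binary Relevance (BR): one independent binary classifier per label.
br_models = []
for j in range(Y.shape[1]):
    clf = clone(LogisticRegression(max_iter=1000))
    clf.fit(X, Y[:, j])  # label interdependence is ignored
    br_models.append(clf)
br_pred = np.column_stack([m.predict(X) for m in br_models])

# Label Powerset (LP): each distinct label combination becomes one class.
combos, lp_classes = np.unique(Y, axis=0, return_inverse=True)
lp_clf = LogisticRegression(max_iter=1000).fit(X, lp_classes)
lp_pred = combos[lp_clf.predict(X)]  # map the predicted class back to a label set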
Here, the EBR [9], ECC [8], and EPS [10] algorithms are introduced in more detail. In the Ensemble of Binary Relevance (EBR), each BR classifier is generated using bagging. Because each BR model is trained on randomly selected instances, the diversity between base classifiers improves BR performance. However, the EBR does not consider the relationship between labels.
A CC algorithm realizes the serial connection of classifiers by adding the results of previous classifiers to the current classifier; that is, the attribute space of each binary model is extended with the 0/1 label association of all previous base classifiers, thus forming a classifier chain. Since the order of the initially randomly generated label chain cannot effectively avoid the risk of error propagation, CCs are still very sensitive to the order. Therefore, it is a key problem to select an optimal sequence for CCs to ensure the high accuracy of multi-label classification. Introducing CCs into an ensemble framework is called an Ensemble of Classifier Chains (ECC). The ECC trains K CC base classifiers C1, C2, …, Ck, …, CK. Each Ck model may be unique and capable of giving different multi-label predictions. The predictions are added up by labels so that each label gets a certain number of votes. The final prediction multi-label set is formed by threshold selection.
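The voting-and-threshold mechanism of the ECC can be sketched as follows; this is not the authors' implementation, and the data, the number of chains K, and the 0.5 threshold are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.random((100, 8))
Y = (rng.random((100, 4)) > 0.6).astype(int)

K = 10           # number of chains in the ensemble
threshold = 0.5  # labels whose vote fraction reaches this are predicted as relevant

chains = []
for k in range(K):
    # Each chain sees a bootstrap sample and uses a different random label order.
    idx = rng.integers(0, len(X), len(X))
    cc = ClassifierChain(LogisticRegression(max_iter=1000),
                         order="random", random_state=k)
    chains.append(cc.fit(X[idx], Y[idx]))

votes = np.mean([cc.predict(X) for cc in chains], axis=0)  # per-label vote fraction
Y_pred = (votes >= threshold).astype(int)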
The Ensemble of Pruned Sets (EPS) forms new label sets to accommodate irregular or complex data; multiple PS models are generated through training and combined for prediction. The EPS exploits the most important label relationships in multi-label datasets, and much unnecessary and harmful complexity is avoided by pruning label sets that do not occur frequently. The basic idea is to discard, during training, instances whose label combinations occur infrequently and to generate new instances to replace them, under the constraint that the label combination of each newly generated instance is a frequent subset of the original instance's label combination. However, this approach also tends to throw away some important label information and miss many important label combinations.
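The pruning step can be sketched as follows; this is an illustrative reimplementation rather than the MEKA Pruned Sets code, and the pruning threshold p, reinsertion limit b, and function name are assumptions.

from collections import Counter
import numpy as np

def prune_sets(X, Y, p=2, b=2):
    # Pruned Sets style preprocessing (illustrative sketch): label sets seen fewer
    # than p times are removed; each affected instance is re-inserted with up to b
    # frequent, non-empty subsets of its original label set.
    keys = [tuple(row) for row in Y]
    counts = Counter(keys)
    frequent = {k for k, c in counts.items() if c >= p}

    X_new, Y_new = [], []
    for x, key in zip(X, keys):
        if key in frequent:
            X_new.append(x)
            Y_new.append(key)
            continue
        # Replace an infrequent label set with frequent subsets of it.
        subsets = [f for f in frequent
                   if any(f) and all(fi <= ki for fi, ki in zip(f, key))]
        for f in sorted(subsets, key=counts.get, reverse=True)[:b]:
            X_new.append(x)
            Y_new.append(f)
    return np.array(X_new), np.array(Y_new)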

2.2. Single-Model Multi-Label Classification Algorithms

This section introduces single-model multi-label classification algorithms from the perspectives of nearest neighbors, extreme learning machines, and decision trees.
To cope with concept drift, the Punitive k-Nearest Neighbors algorithm with a Self-Adjusting Memory (MLSAMPkNN) [20] uses memory resizing to include only the current concept, encouraging the system to identify and penalize incorrect data instances and to retain only the currently useful data. The algorithm can quickly and efficiently adapt to changes in the data stream while maintaining low computational complexity. The Self-Adjusting Multi-Label Nearest Neighbors algorithm (MLSAESAKNNS) [21] uses an adaptive window to adjust to drifts of different sizes. Since multi-label data may include instances that cause errors for only one or a few labels but provide valuable information for other labels, the algorithm enables and disables instances per label based on the past performance of those instances. The kNN overlay method with feature-weighted distance measurement [22] addresses imbalanced data by optimizing the F-value rather than the classification error. The Coupled Multi-Label k-Nearest Neighbors algorithm (CML-kNN) [23] introduces a coupled similarity between class labels: the higher the similarity, the closer two different label values are in the selected label space. It also includes more similar neighbors to overcome the lack of neighbors with specific labels.
A multi-label classification algorithm based on a Kernel Extreme Learning Machine (ML-KELM) [24] converts the real-valued output of the network into a binary vector by means of an adaptive thresholding function, which is used to find the boundary between relevant and irrelevant labels. The Multi-layer Kernel Extreme Learning Machine (ML-CK-ELM) [25] uses linear combinations of base kernels at each layer to construct a specific kernel without randomly tuned parameters, with a significant reduction in computation time and memory. Global approximation capability is demonstrated when many base kernels are combined, together with good generalization performance.
The multi-label decision tree algorithm for predicting probabilities [26] converts multi-label data into single-label instances via a normalization method and then builds a tree using a traditional single-label decision tree algorithm. Logarithm-depth Streaming Multi-label Decision Trees (LdSM) [27] construct and train multi-label decision trees by optimizing a new objective function in each node that facilitates balanced splitting, maintains high class purity of the child nodes, and allows instances to be sent in multiple directions, with a penalty to prevent excessive tree growth. Moral-García et al. [28] extended the ECC by using CC4.5 as the base classifier; this ensemble model performs better on noisy data and also handles correlation between labels well. The splitting criterion used by CC4.5 is the imprecise information gain ratio. Learning label correlations may create circular dependencies, and to address this problem, the 3RC algorithm was proposed. 3RC [29] follows the BR approach by using multiple decision trees as binary classifiers. This approach aims to learn label dependencies and to produce model results that consider only the relevant dependencies, in order to make better predictions and reduce the error propagation caused by irrelevant and weak dependencies.

2.3. Ensemble Multi-Label Classification Algorithm

Most ensemble classification algorithms assign the same importance to all base classifiers [30]. However, different base classifiers usually have different classification performances on the same instances, and assigning suitable weights according to the performance of each base classifier can improve the classification results. In this section, fixed-weighting and dynamic-weighting algorithms are described.
In multi-label classification, many papers have shown that the classification power of ensemble classifiers is stronger than that of single models. Nan et al. [31] proposed a novel ensemble approach for multi-label classification: instead of randomly selecting a small subset, the algorithm constructs a set of k-label sets based on local positive and negative pairwise label correlations. The Adaptive Ensemble of Self-Adjusting Nearest Neighbor Subspaces (AESAKNNS) algorithm [21] uses the ADWIN detector to monitor each classifier for concept drift on its subspace; once drift is detected, the algorithm automatically trains additional classifiers in the background to try to capture the new concepts on new feature subspaces.
In the area of intelligent optimization, Moyano et al. successively proposed the Evolutionary Algorithm for multi-label Ensemble optimization (EAGLET) [30] and the Auto-adaptive algorithm based on Grammar-Guided Genetic Programming (AG3P-kEMLC) [32]. EAGLET selects simple, accurate, and diverse multi-label classifiers to build an ensemble that takes data features into account. Each multi-label classifier focuses on a small subset of labels, considering the relationships between them while avoiding the high complexity of the output space.
To obtain better classification results, many algorithms process the label sets. Mahdavi-Shahri et al. [33] proposed using base classifiers in an ensemble to predict individual labels; these base classifiers are then combined into a multi-label ensemble classifier that makes predictions for all labels. Zhang et al. [34] proposed a Fully Associative Ensemble Learning (FAEL) algorithm, which models the relationship between the global prediction of each class node and the local predictions of all class nodes as a multivariate regression problem with Frobenius-norm or l1-norm regularization. Nasierding et al. [35] proposed a triple-Random Ensemble Multi-Label Classification (TREMLC) learning method for multi-label classification problems, especially for image-to-text translation and automatic image annotation. The proposed method integrates the concepts of random subspaces, bagging, and random k-label set ensemble learning to form a multi-label data classification method.
The learning methods often used to build ensemble classifiers are bagging, boosting, and stacking. The Ensemble of Label specIfic FeaTures (ELIFT) [36] constructs multiple LIFT classifiers using multiple training sets generated with the bagging strategy. The different classifiers are automatically weighted according to the loss of each classifier, and for each new instance, the predicted label vector is obtained using the learned weighted ensemble classifier. The Boosting Weighted Extreme Learning Machine (WELM) algorithm [37] seamlessly embeds the weighted ELM into a modified AdaBoost framework. The Boosting Label-Weighted Extreme Learning Machine algorithm (BLW-EL) [38] integrates the label-weighted extreme learning machine into the boosting ensemble learning framework; based on iterative feedback from the training results, appropriate weights are designed for each training label of each training instance. GOOWE-ML [39] is a stacking ensemble method for multi-label classification with linear and dynamic weighting. It uses a spatial model to assign optimal weights to its base classifiers and can be used with any existing incremental multi-label classification algorithm as its base classifier.
Li et al. [40] proposed a heterogeneous ensemble approach for sentiment classification. The algorithm combines LSTM, CNN, and LR models because they are good at capturing different types of text features, and a regression model is used to fuse the prediction results to improve prediction accuracy. Wu et al. [41] proposed a heterogeneous ensemble learning algorithm based on the firefly optimization algorithm and applied it to network intrusion detection: the firefly optimization algorithm is used to optimize the weight of each base classifier, and the outputs are weighted and fused according to the optimized weights to obtain the final result. Ding et al. [42] proposed a multi-label classification algorithm based on a dynamic heterogeneous ensemble (DHEML), which trains each data block with h different classifier algorithms to obtain a candidate classifier group and uses geometric weighting to assign weights to the classifier group in order to update it. An adversarial Domain Adaptation approach integrating an Adaptive Graph Convolutional Network (DA-AGCN) [43] uses a TResNet-based backbone network to extract data features; an adaptive graph convolutional neural network is then used to learn multiple classifiers, which assign multiple labels to the extracted features.

3. AHCG Algorithm

In this section, the Adaptive Heterogeneous Classifier Group (AHCG) Algorithm is introduced in detail. The concepts of a Heterogeneous Classifier Group and an Adaptive Selection Strategy are proposed. Table 1 describes the meanings of the symbols covered in this section.

3.1. HCG Concept

The AHCG algorithm proposes the concept of a Heterogeneous Classifier Group (HCG). Unlike previous heterogeneous ensembles, it no longer mixes different base classifiers within one ensemble but instead uses two different ensemble classifiers in the testing and training phases. In the test phase, the ASS in Section 3.2 is used to select the appropriate ensemble classifier group for testing a data instance. In the training phase, two groups of different ensemble classifiers are constructed simultaneously, and the dynamic update strategy in Section 3.3 is then used to update and replace the in-group base classifiers of each group. Combining two groups of different ensemble classifiers increases the heterogeneity of the ensemble.
The HCG can combine any two ensemble classifiers and is a general concept. We use C1 and C2 to represent two base classifiers generated by different algorithms, respectively. Figure 1 shows the traditional homogeneous and heterogeneous ensemble diagrams, and Figure 2 depicts the schematic diagram of the HCG. Experiments show that the HCG concept proposed in this paper is reasonable and has better performance than heterogeneous ensemble methods.

3.2. Adaptive Selection Strategy

The AHCG proposes an Adaptive Selection Strategy (ASS). In the testing phase, the ensemble classifier with the minimum sum of the mean square error is selected from the HCG.
Accuracy Updated Ensemble (AUE2) [44] is a classic single-label ensemble classification algorithm in which the Mean Square Error (MSE) formula is used to evaluate the classifiers. The algorithm defines the quantities $MSE_{hk}$ and $MSE_r$. $MSE_{hk}$ is the prediction error of base classifier $C_k$ on data block $d_h$, where $f_y^k(x)$ is the probability, given by base classifier $C_k$, that instance $x$ belongs to class $y$. $MSE_r$ is the mean square error of a randomly predicting classifier and is used as a reference point for the current class distribution, where $p(y)$ is the distribution of class $y$. The calculation formulas are as follows:
$MSE_{hk} = \frac{1}{|d_h|} \sum_{(x,y) \in d_h} \left(1 - f_y^k(x)\right)^2$ (1)
$MSE_r = \sum_{y} p(y) \left(1 - p(y)\right)^2$ (2)
Finally, $MSE_{hk}$ and $MSE_r$ are combined to reflect both the accuracy of the base classifier and the current class distribution. A very small positive value $\theta$ is added to avoid division by zero. The formula is shown in (3):
$w_{hk} = \frac{1}{MSE_r + MSE_{hk} + \theta}$ (3)
AUE2 also uses a modified form of this weight formula:
$w_k = \frac{1}{MSE_r + \theta}$ (4)
The AHCG algorithm, however, targets two groups of ensemble classifiers and needs to calculate the sum of the mean square errors of each group separately. The formula is shown in (5):
$AMSE_{hk} = \sum_{k=1}^{K} \frac{1}{MSE_{hk} + MSE_r + \theta}$ (5)
The values of $MSE_{hk}$ and $MSE_r$ both lie between 0 and 1, and $\theta$ is a very small constant. $\sum_{k=1}^{K} MSE_{hk}$ reflects the accuracy of the base classifiers in the classifier group, and $MSE_r$ represents the prediction mean square error of a randomly predicting classifier. For $AMSE_{hk}$, the smaller the sum of error values, the smaller the value of $AMSE_{hk}$, and the more stable the performance of the base classifiers.
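The following small sketch implements Formulas (1), (2), and (5) as reconstructed above; it assumes each base classifier exposes per-class probabilities via a hypothetical proba(x) method and that a data block is a list of (x, y) pairs.

def mse_hk(classifier, block):
    # Formula (1): mean squared error of one base classifier on data block d_h.
    errors = [(1.0 - classifier.proba(x)[y]) ** 2 for x, y in block]
    return sum(errors) / len(block)

def mse_r(block):
    # Formula (2): MSE of a random predictor, from the class distribution p(y).
    ys = [y for _, y in block]
    p = {c: ys.count(c) / len(ys) for c in set(ys)}
    return sum(py * (1.0 - py) ** 2 for py in p.values())

def amse(group, block, theta=1e-6):
    # Formula (5): AMSE of a classifier group, summed over its K base classifiers.
    r = mse_r(block)
    return sum(1.0 / (mse_hk(c, block) + r + theta) for c in group)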
Figure 3 shows a schematic diagram of the ASS, using an HCG constructed from C1 and C2 as an example. Here, C1 and C2 again represent the two base classifiers generated by different algorithms, and CR(C1) and CR(C2) represent their classification results for all class labels of a data instance, expressed as Boolean values. First, the AMSE value of each group in the HCG is calculated, and it is then determined whether the two groups give the same classification result for the data instance. If so, the group with the smaller AMSE is selected for testing. Otherwise, the algorithm checks whether the result of C2 is correct: if it is, C2 is selected for testing; otherwise, C1 is selected.
Algorithm 1 gives the pseudocode of the AHCG algorithm proposed in this paper, which mainly describes the ASS. The inputs are the data stream D, the data block DC, the number of ensemble classifiers K, and the two different groups of ensemble classifiers M1 and M2.
Algorithm 1. AHCG
Input: D: data stream; DC: data block; K: number of ensemble classifiers; M1, M2: groups of ensemble classifiers
Output: predicted results $\hat{y}$
1. While D has more instances do
2.   Calculate the AMSE of M1 and M2, respectively, using Formula (5)
3.   If (CR(M1) = CR(M2))   // the classification results are the same
4.     If (AMSE1 < AMSE2)
5.       $\hat{y}$ = predict(xi, M1)
6.     Else
7.       $\hat{y}$ = predict(xi, M2)
8.     End if
9.   Else if (CR(M1))   // the classification results are different
10.     $\hat{y}$ = predict(xi, M1)
11.   Else
12.     $\hat{y}$ = predict(xi, M2)
13.   End if
14.   Train(M1, DC)   // see Algorithm 2
15.   Train(M2, DC)   // see Algorithm 2
16. End while
Algorithm 1 starts with the input of the data stream; lines 4–15 give a detailed description of the ASS. After that, each group of the HCG is trained on the data block DC. The specific implementation of the training is shown in Algorithm 2 in Section 3.3.
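The selection branch of Algorithm 1 can be written compactly as follows; predict_group and amse are hypothetical helpers standing in for a group's multi-label prediction and the AMSE of Formula (5), and, as in the interleaved-test-then-train setting, the instance's true labels become available for the correctness check.

import numpy as np

def select_and_predict(x, y_true, M1, M2, amse, predict_group):
    # ASS of Algorithm 1: decide which classifier group predicts instance x.
    cr1 = predict_group(M1, x)
    cr2 = predict_group(M2, x)
    if np.array_equal(cr1, cr2):
        # Same result from both groups: prefer the group with the smaller AMSE.
        return cr1 if amse(M1) < amse(M2) else cr2
    # Different results: keep M1's prediction if it matches the true labels,
    # otherwise fall back to M2 (mirrors the CR(M1) branch of Algorithm 1).
    return cr1 if np.array_equal(cr1, y_true) else cr2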

3.3. Dynamic Update of Base Classifiers

During the training of HCG base classifiers, the in-group base classifiers need to be weighted and updated. Each group of ensemble classifiers is composed of K base classifiers, and each data block in the stream is used for training. For incoming data streams, the algorithm uses a sliding window consisting of recent instances to evaluate the stream.
In [17], the least squares formula is used to obtain the weights for dynamically updating the base classifiers. Here, $y$ is the vector representing the true labels of a given data point, $w$ is the weight vector, and $S$ is the correlation score matrix (see Table 1). The least squares problem is as follows:
$\min_{w} \| y - Sw \|_2^2$ (6)
In the ensemble framework, a combiner can be defined as a function $g: \mathbb{R}^{K \times p} \rightarrow \mathbb{R}^{p}$. For Formula (6), the function to be minimized can be expressed as follows:
$g(w_1, w_2, \ldots, w_K) = \sum_{i=1}^{n} \sum_{j=1}^{p} \left( \sum_{k=1}^{K} w_k s_{kj}^{i} - y_j^{i} \right)^2$ (7)
Taking the partial derivative of $g$ with respect to each $w_q$ and setting the gradient to zero gives, for $q = 1, \ldots, K$:
$\sum_{k=1}^{K} w_k \left( \sum_{i=1}^{n} \sum_{j=1}^{p} s_{qj}^{i} s_{kj}^{i} \right) - \sum_{i=1}^{n} \sum_{j=1}^{p} y_j^{i} s_{qj}^{i} = 0$ (8)
This can be rewritten as:
$\sum_{k=1}^{K} w_k \left( \sum_{i=1}^{n} \sum_{j=1}^{p} s_{qj}^{i} s_{kj}^{i} \right) = \sum_{i=1}^{n} \sum_{j=1}^{p} y_j^{i} s_{qj}^{i}$ (9)
The AHCG algorithm uses Formula (9) to calculate the weights of the base classifiers in each group of ensemble classifiers. Rearranging Formula (9) gives
$\sum_{k=1}^{K} w_k = \left( \sum_{i=1}^{n} \sum_{j=1}^{p} y_j^{i} s_{qj}^{i} \right) \left( \sum_{i=1}^{n} \sum_{j=1}^{p} s_{qj}^{i} s_{kj}^{i} \right)^{-1}$ (10)
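In matrix form, Formula (9) is the normal equation A w = d with A[q, k] = sum_i sum_j s_qj^i s_kj^i and d[q] = sum_i sum_j y_j^i s_qj^i, which can be solved directly; the shapes and toy data below are assumptions for illustration.

import numpy as np

def least_squares_weights(S, Y):
    # S has shape (K, n, p): score of base classifier k for label j of instance i.
    # Y has shape (n, p): binary ground-truth label matrix.
    K = S.shape[0]
    A = np.einsum('qij,kij->qk', S, S)  # A[q, k] = sum_i sum_j s_qj^i * s_kj^i
    d = np.einsum('ij,qij->q', Y, S)    # d[q]    = sum_i sum_j y_j^i * s_qj^i
    # A small ridge term keeps A invertible when classifier scores are collinear.
    return np.linalg.solve(A + 1e-8 * np.eye(K), d)

# Toy example: K = 3 base classifiers, n = 5 instances, p = 4 labels.
rng = np.random.default_rng(1)
S = rng.random((3, 5, 4))
Y = (rng.random((5, 4)) > 0.5).astype(float)
print(least_squares_weights(S, Y))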
Figure 4 shows the schematic diagram of the training algorithm, using an HCG constructed from C1 and C2 as an example. Here, C1 and C2 again represent the two base classifiers generated by different algorithms. Formula (10) is used to calculate the weights of the base classifiers of the HCG, and these weights decide whether a base classifier is updated and replaced. For the kth base classifier, it is first determined whether the number of classifiers in the group is full. If it is full, the weights are calculated using Formula (9), the base classifier with the lowest weight is discarded, and a new base classifier is added. The same replacement and update principles are applied to both groups.
Algorithm 2 is a detailed description of the dynamic weighting of the in-group base classifiers. The least squares method is used to calculate the weights of the base classifiers.
Algorithm 2. Train
Input: DC: data block; K: number of base classifiers; M: group of ensemble classifiers; C: base classifier
Output: ensemble classifiers
1. Cin ← use DC to build a new base classifier
2. If M has K classifiers   // the classifier group is full
3.   Use Formula (9) to calculate the weights
4.   Cout ← the base classifier with the worst weight
5.   M ← M − Cout   // drop the classifier with the worst weight
6. End if
7. M ← M + Cin   // add the new classifier to the group
8. Train all base classifiers except Cin
Algorithm 2 updates and replaces base classifiers according to the weights of the base classifiers within the group. First, the data block DC is used to train a new base classifier. If the group already contains K classifiers, Formula (9) is used to calculate the weights, the base classifier with the worst weight is selected and removed, and the newly trained base classifier is added; otherwise, the new classifier is added directly.
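A minimal Python sketch of this dynamic update is given below; build_classifier and least_squares_weights are hypothetical helpers (the latter standing in for the weight calculation of Formula (9)), the group is kept as a plain list, and the base classifiers are assumed to support incremental training via partial_fit.

def train_group(group, data_block, K, build_classifier, least_squares_weights):
    # Dynamic update of one classifier group (sketch of Algorithm 2).
    X_block, Y_block = data_block
    c_in = build_classifier(X_block, Y_block)        # new candidate classifier
    if len(group) >= K:                              # the group is full
        weights = least_squares_weights(group, X_block, Y_block)
        worst = min(range(len(group)), key=lambda k: weights[k])
        group.pop(worst)                             # drop the worst-weighted classifier
    group.append(c_in)                               # add the candidate
    for clf in group[:-1]:                           # incrementally train the others
        clf.partial_fit(X_block, Y_block)
    return group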

3.4. Time Complexity Analysis

Here, we first compute the worst-case time complexity of Algorithm 2. The time complexity of building a base classifier and of training the base classifiers other than Cin is set to O(1); assuming that the number of base classifiers in the current group is K, the time complexity of the conditional branch is O(n^2 × p^2). The worst-case time complexity of Algorithm 1 is then O(|N| × (K + n^2 × p^2)), where n is the number of instances in DC, p is the number of labels, K is the number of base classifiers, and |N| is the total number of instances in the dataset.
To compare the time efficiency of the algorithms more intuitively, the time complexities of the benchmark algorithms were also analyzed; the results are shown in Table 2. The time complexity of the EBR is that of training k groups of BR models, and the training time complexity of one BR model is O(k × p × f(m, |N|)). The time complexity of the ECC is O(k × p × f(m + p, |N|)); the algorithm models the correlation between labels through a chain structure, so its time complexity is slightly higher than that of the EBR. The time complexity of the EPS is O(|N| × p × log g + |N| × 2p × p × log g), which is mainly determined by constructing and traversing the PS models. The time complexity of GORT, EaBR, EaCC, and EaPS is O((|N|/n) × nKc + npK^2 + K^3). The time complexity of EBRT is O(2N^2), which is mainly determined by the construction, training, and prediction time of the model.
The time complexity of DHEML is determined by the size of the dataset: the larger the dataset, the more complex the feature relationships, and the more difficult the geometric weighting is to calculate. AESAKNNS depends on factors such as data preprocessing, the k-nearest neighbor search, adaptive and multi-label processing, and the application of Bayesian concept rules. As the table shows, the time complexity of the AHCG algorithm is high because selecting and updating the classifier groups is costly. Here, p represents the number of labels, m the number of features, |N| the size of the dataset, g the number of pruning operations, K the number of base classifiers, n the number of instances in DC, c the prediction time of a single classifier, and k the number of iterations.

4. Experimental Results and Analysis

The hardware environment for this experiment is a personal computer with an Intel Core i5-7200U 2.5 GHz CPU and 12 GB of memory. The operating system is Windows 10, and the software environment is Massive Online Analysis (MOA) 2021 [45] combined with the multi-label methods in MEKA [6].
An interleaved-test-then-train (ITTT) [40] evaluation method is used to assess the AHCG. This method is designed for data stream processing. Unlike the traditional batch evaluation process, stream processing cannot perform multiple training and testing passes as the amount of data grows; to complete training and testing in a reasonable time, repeated passes and data splitting must be reduced. ITTT uses each instance first for testing and then for training, so that accuracy can be updated incrementally.
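A minimal prequential (interleaved-test-then-train) loop looks as follows; the stream is assumed to yield (x, y) pairs, and the model is assumed to expose predict and partial_fit methods.

def interleaved_test_then_train(stream, model, metric):
    # Prequential evaluation: each instance is tested first, then used for training.
    scores = []
    for x, y_true in stream:
        y_pred = model.predict(x)       # test on the unseen instance first
        scores.append(metric(y_true, y_pred))
        model.partial_fit(x, y_true)    # then train on it
    return sum(scores) / len(scores)    # incrementally updated average score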
This section conducts experiments on the datasets described in Section 4.1. The experimental contents are as follows: (1) comparative experiments with ensemble algorithms; (2) comparison with algorithms designed to handle concept drift; and (3) Friedman statistical analysis. In the experiments, the block size n is set to 500, the number of base classifiers k is set to 10, and the remaining parameters use their default settings.

4.1. Experimental Settings

(1)
Datasets
The datasets are described in terms of the research domain, number of instances (n), number of attributes (m), number of labels (L), label cardinality (LC(D)), and label density (LD(D)), as shown in Table 3; label cardinality and label density are defined in Formulas (11) and (12).
$LC(D) = \frac{1}{n} \sum_{i=1}^{n} |Y_i|$ (11)
$LD(D) = \frac{LC(D)}{L} = \frac{1}{L \cdot n} \sum_{i=1}^{n} |Y_i|$ (12)
(2)
Benchmark algorithm
To verify the validity of the AHCG algorithm, three HCG instantiations are constructed and named after their constituent methods: AHCG1 combines the EPS and ECC, AHCG2 combines the EPS and EBR, and AHCG3 combines the EBR and ECC. The baseline algorithms used for comparison are listed below.
  • EBR [9]: An ensemble version of the BR model; each BR instance is generated from randomly selected data, without considering the relationships between labels;
  • ECC [8]: An ensemble version of CCs, where the chain order of each CC is randomly generated; it takes global label dependencies into account;
  • EPS [10]: An improved ensemble version of LP that focuses on the most important label relationships by pruning label sets that appear infrequently;
  • GORT [40]: An algorithm using the iSOUP regression tree;
  • EBRT [46]: A regression tree method for multi-label classification through multi-objective regression in a streaming setting;
  • EaBR, EaCC, and EaPS [40]: Versions of EBR, ECC, and EPS that use ADWIN as their concept drift detector;
  • MLSAMPkNN [20]: An algorithm using a punitive kNN with self-adjusting memory;
  • AESAKNNS [21]: An algorithm using an ensemble of self-adjusting nearest neighbor subspaces;
  • DHEML [42]: A multi-label classification algorithm based on a dynamic heterogeneous ensemble.
(3)
Evaluation Metrics
In multi-label classification, it is not appropriate to directly apply single-label evaluation metrics; many metrics have been designed specifically for the multi-label setting. Accuracy, example-based F1, micro-averaged F1, and macro-averaged F1 are used for assessment in this paper. Table 4 explains the mathematical symbols used in the formulas.
In Formulas (13)–(16), Acc(h) represents accuracy and $F_\beta(h)$ represents the example-based F value, where $\beta$ is a balancing factor, usually set to 1. $P_i$ and $R_i$ are the precision and recall of the ith label. A short computation sketch follows the formulas below.
$Acc(h) = \frac{1}{p} \sum_{i=1}^{p} \frac{|Y_i \cap h(x_i)|}{|Y_i \cup h(x_i)|}$ (13)
$F_\beta(h) = \frac{(1 + \beta^2) \cdot Pre(h) \cdot Re(h)}{\beta^2 \cdot Pre(h) + Re(h)}$ (14)
$Micro\_F1 = \frac{2 \times Micro\_Precision \times Micro\_Recall}{Micro\_Precision + Micro\_Recall}$ (15)
$Macro\_F1 = \frac{1}{p} \sum_{i=1}^{p} \frac{2 \times P_i \times R_i}{P_i + R_i}$ (16)
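These four metrics can be computed with standard routines; the sketch below uses scikit-learn on toy label matrices (the values are invented), where the sample-averaged Jaccard score corresponds to the example-based accuracy of Formula (13).

import numpy as np
from sklearn.metrics import f1_score, jaccard_score

# Toy predictions for 5 instances and 4 labels (illustrative values only).
Y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1],
                   [0, 0, 1, 1], [1, 0, 0, 0]])
Y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 1], [1, 1, 0, 1],
                   [0, 0, 1, 0], [1, 0, 0, 0]])

accuracy = jaccard_score(Y_true, Y_pred, average='samples')   # Formula (13)
example_f1 = f1_score(Y_true, Y_pred, average='samples')      # Formula (14), beta = 1
micro_f1 = f1_score(Y_true, Y_pred, average='micro')          # Formula (15)
macro_f1 = f1_score(Y_true, Y_pred, average='macro')          # Formula (16)
print(accuracy, example_f1, micro_f1, macro_f1)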

4.2. Experimental Analysis

(1)
Comparative experiments of ensemble algorithms
The algorithms of AHCG1, AHCG2, and AHCG3 are compared with the EBR, ECC, EPS, and DHEML on 12 datasets. Detailed experimental results include the accuracy, example-based F1, micro-averaged F1, macro-averaged F1, and time efficiency of each algorithm. As shown in Table 5, the best results are in bold.
As can be seen from the table, the algorithms using the AHCG ranked better on average than the algorithms with the EBR, ECC, and EPS classifiers on the four evaluation metrics. The AHCG3 algorithm received the best ranking in terms of accuracy, example-based F1, and macro-averaged F1, and second place in micro-averaged F1. The overall performance of the AHCG2 and AHCG3 algorithms is slightly better than that of DHEML, which supports the effectiveness of the proposed HCG. In particular, the accuracy of the AHCG algorithms is significantly higher than that of the homogeneous ensemble classifiers. For example, on the Medical dataset, the AHCG3 algorithm outperformed the EBR algorithm by 9.3% and the ECC algorithm by 9.7%. On the Ohsumed dataset, the AHCG3 algorithm was 8.4% higher than the EBR algorithm and 9.5% higher than the ECC algorithm.
In general, algorithms using the AHCG perform better than homogeneous ensemble classifier algorithms, because the ASS allows them to select, at test time, the group of classifiers that is likely to achieve better performance. In some cases, however, the prediction result of the AHCG algorithm is not as good as that of a single ensemble. On Enron, for example, the EBR algorithm is more accurate than AHCG2 or AHCG3. This is because the first criterion in the test selection is whether the two groups of ensemble classifiers produce the same result for all labels of an instance. For the AHCG3 method, if both groups of the HCG test an instance correctly or both test it incorrectly, and the AMSE value of the ECC group is less than or equal to that of the EBR group, the ECC group is selected by default for testing. In this case, the group that would classify the instance correctly may not be the one selected, which lowers the results. The running times of AHCG2 and AHCG3 are less than that of the DHEML algorithm (as shown in Table 6; the best results are in bold).
In terms of time efficiency, as indicated by the time complexity of the AHCG algorithm, its efficiency is mainly determined by the number of instances, the number of base classifiers, the number of labels, and the number of instances in the data block. On small datasets, the AHCG algorithm is faster than the homogeneous ensemble algorithms except for the EPS. The EPS algorithm saves time because it prunes infrequent label sets to focus on the most important label relationships. On the Medical dataset, the AHCG3 algorithm saves 289,288 ms over the EBR algorithm and 295,062 ms over the ECC algorithm. However, on large datasets with many instances, the AHCG is less time efficient and takes more time than the algorithms that use only homogeneous classifiers.
(2)
Comparison with the classification of algorithms dealing with concept drift
The AHCG1, AHCG2, and AHCG3 algorithms are compared with GORT, EBRT, EaBR, EaCC, EaPS, MLSAMPkNN, AESAKNNS, and DHEML on 12 datasets. Both the AHCG and the comparison algorithms are designed to deal with concept drift. The detailed experimental results include the accuracy, example-based F1, micro-averaged F1, and macro-averaged F1 of each algorithm, as shown in Table 7; the best results are in bold.
The AHCG algorithms obtained better results than the other algorithms in terms of example-based F1, macro-averaged F1, and micro-averaged F1, ranking in the top three on average among all algorithms. AHCG3 ranked first in example-based F1 and macro-averaged F1 and third in micro-averaged F1, AHCG2 ranked first in micro-averaged F1, and AHCG3 ranked second in accuracy.
Compared with the EaBR, EaCC, and EaPS algorithms, which use a window mechanism, the AHCG algorithms achieve better experimental results. The accuracy of the comparison algorithm EaCC on the Slashdot, Reuters, and Ohsumed datasets is not very good, and the accuracies of AHCG1 and AHCG3 are better than that of EaCC on these three datasets. For example, on Slashdot, the accuracy of AHCG1 is 5.5% higher than that of EaCC, and the accuracy of AHCG3 is 5.8% higher. On Reuters, the accuracy of AHCG1 is 7.4% higher than that of EaCC, and the accuracy of AHCG3 is 13.2% higher. On Ohsumed, the accuracy of AHCG1 is up to 20% higher than that of EaCC, and the accuracy of AHCG3 is up to 27.1% higher. The accuracy, example-based F1, micro-averaged F1, and macro-averaged F1 results of the AHCG algorithms are all higher than those of the DHEML algorithm; as can be seen from Table 7, the AHCG algorithms are superior to DHEML in terms of example-based F1, micro-averaged F1, and macro-averaged F1. The experimental results show that the AHCG algorithm is superior to the heterogeneous ensemble method in dealing with concept drift.
(3)
Friedman statistical analysis
To assess the statistical significance of the differences between algorithms, Friedman statistical analysis was adopted in the result analysis [46]. This section studies the significance of the differences between the AHCG algorithms and the comparison algorithms. The results of this test can be visualized in a critical distance graph, where the algorithms are sorted according to their average rank and algorithms within the critical distance of each other are connected by a line. In this experiment, the better models appear on the left side of the graph. The critical distance is calculated using Formula (17).
$CD = q_{\alpha,k} \sqrt{\frac{k(k+1)}{6N}}$ (17)
where the significance level α = 0.05, k represents the number of algorithms, and N represents the number of datasets; in this experiment, k = 14 and N = 12. Calculated using the formula, CD = 5.26. Figure 5 shows the CD plots for accuracy, example-based F1, micro-averaged F1, and macro-averaged F1, where the average rank of each algorithm is marked along the axis. A short computation sketch of Formula (17) is given at the end of this subsection.
Figure 5 shows that the AHCG algorithms of AHCG1, AHCG2, and AHCG3 are superior to other comparison algorithms in terms of accuracy, example-based F1, micro-averaged F1, and macro-averaged F1. Among them, the algorithm of AHCG1 is ranked first in accuracy, example-based F1, micro-averaged F1, and macro-averaged F1. AHCG2 is ranked first in accuracy and micro-averaged F1. The superiority of HCG algorithms shows that using different classifier groups can improve the heterogeneity among the classifiers and improve the classification performance.
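Formula (17) can be checked with a few lines of Python; the Studentized range quantile q for the chosen significance level and k must be taken from a table, so the value used below is only a placeholder assumption.

import math

def critical_distance(q_alpha_k, k, N):
    # Nemenyi critical distance of Formula (17): CD = q * sqrt(k(k + 1) / (6N)).
    return q_alpha_k * math.sqrt(k * (k + 1) / (6 * N))

# k = 14 algorithms and N = 12 datasets, as in this experiment; q_alpha_k is a
# placeholder here and should be looked up for alpha = 0.05 and k = 14.
print(critical_distance(q_alpha_k=3.0, k=14, N=12))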

5. Summary

To improve the diversity of ensemble classifiers, this paper proposes the AHCG algorithm, which uses the concept of an HCG. An Adaptive Selection Strategy is proposed to select the appropriate ensemble classifier group for testing according to the AMSE. The least squares method is used to calculate the weights of the base classifiers within each group, which are then updated and replaced according to these weights. The experiments show that, compared with other homogeneous ensemble classifier algorithms, the AHCG algorithm obtains better values in terms of accuracy, example-based F1, micro-averaged F1, and macro-averaged F1 and ranks high overall. It is a general algorithmic structure that can be applied to most algorithms. The in-group classifier generation and update/replacement steps of the AHCG are currently executed serially, which affects the time efficiency of the algorithm on large datasets; nevertheless, the AHCG algorithm can classify large multi-label text data. In future work, our research group will focus on the time efficiency of the algorithm so that it can be applied to image classification. Provided that the evaluation metrics remain stable, the HCG can run the in-group classifier generation and update phases in parallel to improve the time efficiency of the algorithm. Meanwhile, further evaluation on different datasets and further development of the dynamic base classifier update method will be carried out to reduce the impact of the number of base classifiers on the experiments.

Author Contributions

Conceptualization, M.H. and S.Y.; methodology, M.H.; software, H.W.; validation, M.H., S.Y. and H.W.; formal analysis, J.D.; investigation, J.D.; resources, H.W.; data curation, S.Y.; writing—original draft preparation, M.H.; writing—review and editing, S.Y.; visualization, M.H.; supervision, J.D.; project administration, H.W.; funding acquisition, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ningxia Natural Science Foundation Project (2022AAC03279), the National Nature Science Foundation of China (62062004), and the Graduate Innovation Project of North Minzu University (YCX24371).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Minaee, S.; Kalchbrenner, N.; Cambria, E.; Nikzad, N.; Chenaghlu, M.; Gao, J. Deep learning-based text classification: A comprehensive review. ACM Comput. Surv. (CSUR) 2021, 54, 1–40. [Google Scholar] [CrossRef]
  2. Liu, Q.; She, X.; Xia, Q. AI based diagnostics product design for osteosarcoma cells microscopy imaging of bone cancer patients using CA-MobileNet V3. J. Bone Oncol. 2024, 49, 100644. [Google Scholar] [CrossRef] [PubMed]
  3. Rana, P.; Meijering, E.; Sowmya, A.; Song, Y. Multi-Label Classification Based On Subcellular Region-Guided Feature Description for Protein Localisation. In Proceedings of the 18th International Symposium on Biomedical Imaging, Nice, France, 13–16 April 2021; pp. 1929–1933. [Google Scholar]
  4. Ding, Y.; Zhang, H.; Huang, W.; Zhou, X.; Shi, Z. Efficient Music Genre Recognition Using ECAS-CNN: A Novel Channel-Aware Neural Network Architecture. Sensors 2024, 24, 7021. [Google Scholar] [CrossRef] [PubMed]
  5. Xie, F.; Pan, X.; Yang, T.; Ernewein, B.; Li, M.; Robinson, D. A novel computer vision and point cloud-based approach for accurate structural analysis of a tall irregular timber structure. Structures 2024, 70, 107697. [Google Scholar] [CrossRef]
  6. Read, J.; Reutemann, P.; Pfahringer, B.; Holmes, G. Meka: A multi-label/multi-target extension to weka. J. Mach. Learn. Res. 2016, 17, 1–5. [Google Scholar]
  7. Osojnik, A.; Panov, P.; Džeroski, S. Multi-label classification via multi-target regression on data streams. Mach. Learn. 2017, 106, 745–770. [Google Scholar] [CrossRef]
  8. Duan, J.; Gu, Y.; Yu, H.; Yang, X.; Gao, S. ECC++: An algorithm family based on ensemble of classifier chains for classifying imbalanced multi-label data. Expert Syst. Appl. 2024, 236, 121366. [Google Scholar] [CrossRef]
  9. Mauri, L.; Damiani, E. Hardening behavioral classifiers against polymorphic malware: An ensemble approach based on minority report. Inf. Sci. 2025, 689, 121499. [Google Scholar] [CrossRef]
  10. Ganaie, M.; Hu, M.; Malik, A.; Tanveer, M.; Suganthan, P. Ensemble deep learning: A review. Eng. Appl. Artif. Intell. 2022, 115, 105151. [Google Scholar] [CrossRef]
  11. Alzubi, O.A.; Alzubi, J.A.; Alweshah, M.; Qiqieh, I.; Al-Shami, S.; Ramachandran, M. An optimal pruning algorithm of classifier ensembles: Dynamic programming approach. Neural Comput. Appl. 2020, 32, 16091–16107. [Google Scholar] [CrossRef]
  12. Tinofirei, M.; Fulufhelo, V.; Nelwamond, O.; Khmaies, O. An Adaptive Heterogeneous Online Learning Ensemble Classifier for Nonstationary Environments. Comput. Intell. Neurosci. 2021, 2021, 6669706. [Google Scholar]
  13. Hg, Z.; Altnay, H. Imbalance Learning Using Heterogeneous Ensembles. Expert Syst. Appl. 2019, 142, 113005. [Google Scholar]
  14. Read, J.; Martino, L.; Olmos, P.M.; Luengo, D. Scalable multi-output label prediction: From classifier chains to classifier trellises. Pattern Recognit. 2015, 48, 2096–2109. [Google Scholar] [CrossRef]
  15. Wang, R.; Kwong, S.; Wang, X.; Jia, Y. Active k-labelsets ensemble for multi-label classification. Pattern Recognit. 2021, 109, 107583. [Google Scholar] [CrossRef]
  16. Zhang, J.; Bian, Z.; Wang, S. Style linear k-nearest neighbor classification method. Appl. Soft Comput. 2024, 150, 111011. [Google Scholar] [CrossRef]
  17. Xiao, N.; Dai, S. A network big data classification method based on decision tree algorithm. Int. J. Reason.-Based Intell. Syst. 2024, 16, 66–73. [Google Scholar] [CrossRef]
  18. Zhang, Z.; Wang, Z.; Liu, H.; Sun, Y. Ensemble Multi-label Classification Algorithm Based on Tree-Bayesian Network. Comput. Sci. 2018, 45, 195–201. [Google Scholar]
  19. Roy, A.; Chakraborty, S. Support vector machine in structural reliability analysis: A review. Reliab. Eng. Syst. Saf. 2023, 233, 109126. [Google Scholar] [CrossRef]
  20. Kavitha, P.M.; Muruganantham, B. Mal_CNN: An Enhancement for Malicious Image Classification Based on Neural Network. Cybern. Syst. 2024, 55, 739–752. [Google Scholar] [CrossRef]
  21. Roseberry, M.; Krawczyk, B.; Cano, A. Multi-label punitive kNN with self-adjusting memory for drifting data streams. ACM Trans. Knowl. Discov. Data 2019, 13, 1–31. [Google Scholar] [CrossRef]
  22. Alberghini, G.; Junior, S.B.; Cano, A. Adaptive ensemble of self-adjusting nearest neighbor subspaces for multi-label drifting data streams. Neurocomputing 2022, 481, 228–248. [Google Scholar] [CrossRef]
  23. Rastin, N.; Jahromi, M.Z.; Taheri, M. Feature weighting to tackle label dependencies in multi-label stacking nearest neighbor. Appl. Intell. 2021, 51, 5200–5218. [Google Scholar] [CrossRef]
  24. Liu, C.; Cao, L. A coupled k-nearest neighbor algorithm for multi-label classification. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Ho Chi Minh City, Vietnam, 19–22 May 2015; pp. 176–187. [Google Scholar]
  25. Luo, F.; Guo, W.; Yu, Y.; Chen, G. A multi-label classification algorithm based on kernel extreme learning machine. Neurocomputing 2017, 260, 313–320. [Google Scholar] [CrossRef]
  26. Rezaei, M.; Eftekhari, M.; Movahed, F.S. ML-CK-ELM: An efficient multi-layer extreme learning machine using combined kernels for multi-label classification. Sci. Iran. 2020, 27, 3005–3018. [Google Scholar] [CrossRef]
  27. Bezembinder, E.M.; Wismans LJ, J.; Berkum EC, V. Constructing multi-labelled decision trees for junction design using the predicted probabilities. In Proceedings of the 20th IEEE International Conference on Intelligent Transportation Systems, Yokohama, Japan, 16–19 October 2017; pp. 1–7. [Google Scholar]
  28. Majzoubi, M.; Choromanska, A. Ldsm: Logarithm-depth streaming multi-label decision trees. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; pp. 4247–4257. [Google Scholar]
  29. Moral-García, S.; Mantas, C.J.; Castellano, J.G.; Abellán, J. Ensemble of classifier chains and credal C4.5 for solving multi-label classification. Prog. Artif. Intell. 2019, 8, 195–213. [Google Scholar] [CrossRef]
  30. Lotf, H.; Ramdani, M. Multi-Label Classification: A Novel approach using decision trees for learning Label-relations and preventing cyclical dependencies: Relations Recognition and Removing Cycles (3RC). In Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications, Rabat, Morocco, 23–24 September 2020; pp. 1–6. [Google Scholar]
  31. Nan, G.; Li, Q.; Dou, R.; Liu, J. Local positive and negative correlation-based k -labelsets for multi-label classification. Neurocomputing 2018, 318, 90–101. [Google Scholar] [CrossRef]
  32. Moyano, J.M.; Ventura, S. Auto-adaptive grammar-guided genetic programming algorithm to build ensembles of multi-label classifiers. Inf. Fusion 2022, 78, 1–19. [Google Scholar] [CrossRef]
  33. Mahdavi-Shahri, A.; Houshmand, M.; Yaghoobi, M.; Jalali, M. Applying an ensemble learning method for improving multi-label classification performance. In Proceedings of the 2nd International Conference of Signal Processing and Intelligent Systems, Tehran, Iran, 14–15 December 2016; pp. 1–6. [Google Scholar]
  34. Moyano, J.M.; Gibaja, E.L.; Cios, K.J.; Ventura, S. Generating ensembles of multi-label classifiers using cooperative coevolutionary algorithms. In Proceedings of the 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, 29 August–8 September 2020; pp. 1379–1386. [Google Scholar]
  35. Zhang, L.; Shah, S.K.; Kakadiaris, I.A. Hierarchical multi-label classification using fully associative ensemble learning. Pattern Recognit. 2017, 70, 89–103. [Google Scholar] [CrossRef]
  36. Wei, X.; Yu, Z.; Zhang, C.; Hu, Q. Ensemble of label specific features for multi-label classification. In Proceedings of the 2018 IEEE International Conference on Multimedia and Expo, San Diego, CA, USA, 23–27 July 2018; pp. 1–6. [Google Scholar]
  37. Cheng, K.; Gao, S.; Dong, W.; Yang, X.; Wang, Q.; Yu, H. Boosting label weighted extreme learning machine for classifying multi-label imbalanced data. Neurocomputing 2020, 403, 360–370. [Google Scholar] [CrossRef]
  38. Li, K.; Kong, X.; Lu, Z.; Wenyin, L.; Yin, J. Boosting weighted ELM for imbalanced learning. Neurocomputing 2014, 128, 15–21. [Google Scholar] [CrossRef]
  39. Büyükçakir, A.; Bonab, H.; Can, F. A novel online stacked ensemble for multi-label stream classification. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 1063–1072. [Google Scholar]
  40. Li, D.; Ji, L.; Liu, S. Research on sentiment classification method based on ensemble learning of heterogeneous classifiers. Eng. J. Wuhan Univ. 2021, 54, 975–982. [Google Scholar]
  41. Wu, D.; Han, B. Network Intrusion Detection Method Based on Optimization Heterogeneous Ensemble Learning of Glowworms. Fire Control Command Control 2021, 46, 26–31. [Google Scholar]
  42. Ding, J.; Wu, H.; Han, M. Multi-label data stream classification algorithm based on dynamic heterogeneous ensemble. Comput. Eng. Des. 2023, 44, 3031–3038. [Google Scholar]
  43. Singh, I.P.; Ghorbel, E.; Oyedotun, O.; Aouada, D. Multi-label image classification using adaptive graph convolutional networks: From a single domain to multiple domains. Comput. Vis. Image Underst. 2024, 247, 104062. [Google Scholar] [CrossRef]
  44. Brzezinski, D.; Stefanowski, J. Reacting to different types of concept drift: The accuracy updated ensemble algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 81–94. [Google Scholar] [CrossRef] [PubMed]
  45. Bifet, A.; Holmes, G.; Pfahringer, B.; Read, J.; Kranen, P.; Kremer, H.; Jansen, T. MOA: A realtime analytics open source framework. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2011, Athens, Greece, 5–9 September 2011; Proceedings, Part III 22; Springer: Berlin/Heidelberg, Germany, 2011; pp. 617–620. [Google Scholar]
  46. Liu, J.; Xu, Y. T-Friedman test: A new statistical test for multiple comparison with an adjustable conservativeness measure. Int. J. Comput. Intell. Syst. 2022, 15, 29. [Google Scholar] [CrossRef]
Figure 1. Traditional ensemble diagram.
Figure 2. HCG diagram.
Figure 3. ASS diagram.
Figure 4. Training algorithm diagram.
Figure 5. Algorithm critical distance graph. (The blue line represents a significant difference, and algorithms outside the region represent a significant difference from controls).
Table 1. Symbols used in the AHCG algorithm.
K | The number of base classifiers in the ensemble
n | The amount of data in the instance window
H | Maximum capacity of a data block DC
y_j^i | For the ith instance, the jth class label
W_k | The weight of classifier k
s_kj^i | For the ith instance, the correlation score of the kth base classifier for the jth class label
S | Correlation score matrix; it contains the predicted scores for the different labels from the different base classifiers
DC | A data stream divided into data blocks, DC = d1, d2, …, dh, …, dH
Table 2. Time complexity analysis of the algorithms.
Algorithm | Time Complexity | Influencing Factors
EBR | O(k × p × f(m, |N|)) | The number of iterations k, the number of labels p, and the training complexity f(m, |N|) of each binary classifier, etc.
ECC | O(k × p × f(m + p, |N|)) | The number of iterations k, the number of labels p, and the training complexity f(m + p, |N|) of each binary classifier, etc.
EPS | O(|N| × p × log g + |N| × 2p × p × log g) | The number of iterations k, the size of the training set |N|, the number of labels p, and the pruning strategy, etc.
GORT | O((|N|/n) × nKc + npK^2 + K^3) | The number of instances in DC n, the number of classifiers K, and the number of labels p, etc.
EBRT | O(2N^2) | The dataset size |N| and the number of features m, etc.
EaBR | O((|N|/n) × nKc + npK^2 + K^3) | The number of instances in DC n, the number of classifiers K, and the number of labels p, etc.
EaCC | O((|N|/n) × nKc + npK^2 + K^3) | The number of instances in DC n, the number of classifiers K, and the number of labels p, etc.
EaPS | O((|N|/n) × nKc + npK^2 + K^3) | The number of instances in DC n, the number of classifiers K, and the number of labels p, etc.
AHCG | O(|N| × (K + n^2 × p^2)) | The number of labels p and the number of classifiers K, etc.
Table 3. Datasets (n: instances; m: features; L: labels; LC(D): label cardinality; LD(D): label density).
Dataset      Domain      n        m     L    LC(D)   LD(D)
Medical      Text        978      1449  45   1.245   0.028
Enron        Text        1702     1001  53   3.378   0.064
Chess        Text        1675     585   227  2.411   0.011
Yeast        Biological  2417     103   14   4.237   0.303
Slashdot     Text        3782     1079  22   1.18    0.053
Philosophy   Text        3971     842   233  2.272   0.010
Reuters      Text        6000     500   101  2.88    0.028
Chemistry    Text        6961     540   175  2.109   0.012
Cs           Text        9270     635   274  2.556   0.009
Ohsumed      Text        13,930   1002  23   1.663   0.072
TMC          Text        28,600   500   22   2.158   0.098
IDBI         Text        120,900  1001  28   2.000   0.071
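For reference, the LC(D) and LD(D) columns are consistent with the standard definitions of label cardinality and label density (e.g., for Medical, 1.245 / 45 ≈ 0.028):

```latex
\mathrm{LC}(D) = \frac{1}{|D|}\sum_{i=1}^{|D|} \lvert Y_i \rvert, \qquad
\mathrm{LD}(D) = \frac{\mathrm{LC}(D)}{L},
```

where Y_i is the label set of the ith instance and L is the number of labels in the dataset.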
Table 4. Mathematical symbols for the evaluation metrics.
Symbol: Meaning
X: the d-dimensional instance space R^d
Y: the label space with q possible class labels, {y1, y2, …, yq}
x: a d-dimensional feature vector (x1, x2, …, xd), x ∈ X
p: the number of instances, with 1 ≤ i ≤ p
Y: the set of labels associated with an instance x (Y ⊆ Y, i.e., a subset of the label space)
h(·): the multi-label classifier h: X → 2^Y, where h(x) returns the set of labels assigned to x
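For completeness, the four measures reported in Tables 5 and 7 are assumed here to follow their standard multi-label definitions; writing Z_i = h(x_i) for the predicted label set of instance i, and TP_j, FP_j, FN_j for the per-label counts over all instances:

```latex
\text{Accuracy} = \frac{1}{p}\sum_{i=1}^{p}\frac{\lvert Y_i \cap Z_i \rvert}{\lvert Y_i \cup Z_i \rvert}, \qquad
F1_{\text{example}} = \frac{1}{p}\sum_{i=1}^{p}\frac{2\,\lvert Y_i \cap Z_i \rvert}{\lvert Y_i \rvert + \lvert Z_i \rvert},
```

```latex
F1_{\text{micro}} = \frac{2\sum_{j=1}^{q} TP_j}{2\sum_{j=1}^{q} TP_j + \sum_{j=1}^{q} FP_j + \sum_{j=1}^{q} FN_j}, \qquad
F1_{\text{macro}} = \frac{1}{q}\sum_{j=1}^{q}\frac{2\,TP_j}{2\,TP_j + FP_j + FN_j}.
```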
Table 5. Comparison of the ensemble algorithms in terms of accuracy, example-based F1, micro-averaged F1, and macro-averaged F1. Rank. avg denotes the average rank over the datasets (lower is better).

Accuracy
Dataset      AHCG1   AHCG2   AHCG3   EBR     ECC     EPS     DHEML
Medical      0.271   0.272   0.286   0.193   0.189   0.205   0.285
Enron        0.322   0.172   0.322   0.342   0.298   0.302   0.304
Chess        0.056   0.075   0.086   0.067   0.062   0.053   0.079
Yeast        0.510   0.505   0.508   0.502   0.493   0.480   0.513
Philosophy   0.029   0.087   0.033   0.013   0.011   0.072   0.072
Chemistry    0.058   0.069   0.058   0.022   0.015   0.020   0.061
Cs           0.056   0.074   0.086   0.067   0.062   0.053   0.071
Slashdot     0.073   0.133   0.076   0.020   0.018   0.070   0.114
Reuters      0.078   0.154   0.136   0.099   0.093   0.131   0.154
Ohsumed      0.264   0.212   0.275   0.191   0.180   0.135   0.245
TMC          0.504   0.499   0.511   0.520   0.511   0.469   0.502
IDBI         0.163   0.172   0.164   0.055   0.012   0.011   0.166
Rank. avg    4.17    2.88    2.38    2.50    4.42    5.88    5.79

Example-Based F1
Dataset      AHCG1   AHCG2   AHCG3   EBR     ECC     EPS     DHEML
Medical      0.300   0.341   0.315   0.216   0.210   0.235   0.329
Enron        0.462   0.264   0.462   0.468   0.414   0.397   0.460
Chess        0.128   0.138   0.200   0.095   0.087   0.079   0.182
Yeast        0.648   0.639   0.645   0.638   0.632   0.601   0.647
Philosophy   0.087   0.156   0.104   0.018   0.015   0.072   0.141
Chemistry    0.150   0.143   0.150   0.030   0.020   0.026   0.134
Cs           0.128   0.137   0.203   0.095   0.087   0.079   0.168
Slashdot     0.118   0.199   0.120   0.023   0.020   0.075   0.171
Reuters      0.143   0.241   0.226   0.106   0.099   0.136   0.227
Ohsumed      0.383   0.344   0.389   0.230   0.217   0.160   0.342
TMC          0.656   0.650   0.666   0.654   0.643   0.590   0.644
IDBI         0.288   0.294   0.289   0.075   0.016   0.015   0.135
Rank. avg    3.00    2.67    2.08    2.92    4.83    6.33    6.17

Micro-Averaged F1
Dataset      AHCG1   AHCG2   AHCG3   EBR     ECC     EPS     DHEML
Medical      0.613   0.703   0.636   0.497   0.489   0.396   0.564
Enron        0.449   0.244   0.449   0.403   0.191   0.358   0.436
Chess        0.034   0.123   0.031   0.122   0.115   0.110   0.120
Yeast        0.638   0.639   0.636   0.631   0.625   0.604   0.638
Philosophy   0.020   0.127   0.020   0.022   0.019   0.098   0.086
Chemistry    0.027   0.095   0.014   0.040   0.028   0.034   0.053
Cs           0.032   0.123   0.029   0.122   0.115   0.110   0.119
Slashdot     0.110   0.199   0.111   0.041   0.037   0.106   0.157
Reuters      0.041   0.207   0.041   0.143   0.135   0.154   0.153
Ohsumed      0.283   0.307   0.295   0.294   0.280   0.197   0.283
TMC          0.618   0.610   0.632   0.638   0.631   0.566   0.621
IDBI         0.215   0.279   0.216   0.099   0.014   0.018   0.225
Rank. avg    4.54    1.83    4.21    3.00    3.67    5.58    5.17

Macro-Averaged F1
Dataset      AHCG1   AHCG2   AHCG3   EBR     ECC     EPS     DHEML
Medical      0.033   0.035   0.034   0.028   0.028   0.021   0.035
Enron        0.070   0.060   0.070   0.050   0.046   0.034   0.056
Chess        0.021   0.019   0.028   0.037   0.032   0.006   0.025
Yeast        0.368   0.353   0.356   0.329   0.343   0.343   0.363
Philosophy   0.019   0.012   0.020   0.012   0.007   0.005   0.015
Chemistry    0.027   0.062   0.027   0.020   0.012   0.004   0.023
Cs           0.021   0.020   0.027   0.037   0.032   0.006   0.027
Slashdot     0.096   0.162   0.097   0.039   0.036   0.077   0.139
Reuters      0.037   0.072   0.039   0.057   0.049   0.021   0.053
Ohsumed      0.236   0.280   0.255   0.244   0.230   0.082   0.251
TMC          0.432   0.421   0.491   0.485   0.465   0.199   0.488
IDBI         0.115   0.137   0.115   0.032   0.014   0.029   0.118
Rank. avg    3.71    3.00    2.67    2.83    4.08    5.08    6.63
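The Rank. avg rows can be reproduced by ranking the algorithms on each dataset (rank 1 for the best value, ties averaged) and averaging the ranks over the datasets. A minimal sketch, assuming higher metric values are better and using two accuracy rows from Table 5 as illustrative input:

```python
import numpy as np
from scipy.stats import rankdata

def average_ranks(scores):
    """scores: array of shape (n_datasets, n_algorithms); higher is better.

    Returns the average rank of each algorithm, where rank 1 is the best
    result on a dataset and ties receive the mean of the tied ranks.
    """
    # rankdata ranks ascending, so negate the scores to rank the best first.
    ranks = np.vstack([rankdata(-row) for row in scores])
    return ranks.mean(axis=0)

# Example: accuracy on Medical and Enron for the seven algorithms of Table 5.
acc = np.array([
    [0.271, 0.272, 0.286, 0.193, 0.189, 0.205, 0.285],   # Medical
    [0.322, 0.172, 0.322, 0.342, 0.298, 0.302, 0.304],   # Enron
])
print(average_ranks(acc))
```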
Table 6. Experimental results for running times.
Dataset      AHCG1        AHCG2        AHCG3        EBR         ECC         EPS        DHEML
Medical      54,480       53,312       95,348       384,636     390,410     7019       65,695
Enron        330,011      296,226      545,682      634,480     623,481     17,758     378,999
Chess        40,317,450   53,310,855   85,347,953   28,946,030  21,725,172  160,898    57,880,922
Yeast        27,138       28,007       37,427       29,333      31,378      4545       29,938
Philosophy   10,513,191   11,601,136   21,001,456   10,956,341  8,151,044   88,600     13,943,644
Chemistry    9,525,265    13,239,025   21,925,773   7,708,541   6,474,030   87,881     14,452,766
Cs           45,607,460   69,042,003   113,073,438  35,952,832  29,460,820  196,050    73,645,586
Slashdot     572,558      584,341      857,543      544,816     560,608     95,291     651,471
Reuters      2,858,115    2,622,387    5,223,701    2,097,796   2,124,155   48,777     3,461,739
Ohsumed      4,223,451    3,963,728    5,874,063    2,320,674   2,468,183   289,888    4,547,406
TMC          4,341,980    4,309,034    6,791,509    2,273,091   2,490,216   111,747    4,994,112
IDBI         115,513,788  100,334,992  180,692,563  67,979,363  71,315,269  1,658,930  128,241,470
Table 7. Experimental results of accuracy, example-based F1, micro-averaged F1, and macro-averaged F1.

Accuracy
Dataset      AHCG1   AHCG2   AHCG3   GORT    EBRT    EaBR    EaCC    EaPS    MLSAMPKNN  AESAKNNS  DHEML
Medical      0.271   0.272   0.286   0.197   0.279   0.192   0.189   0.209   0.075      0.112     0.285
Enron        0.322   0.172   0.322   0.213   0.059   0.308   0.289   0.260   0.294      0.327     0.304
Chess        0.056   0.075   0.086   0.030   0       0.064   0.003   0.028   0.030      0.043     0.079
Yeast        0.510   0.505   0.508   0.464   0.502   0.502   0.495   0.474   0.456      0.271     0.513
Philosophy   0.029   0.087   0.033   0.032   0       0.012   0.003   0.057   0.022      0.037     0.072
Chemistry    0.058   0.069   0.058   0.035   0       0.013   0.001   0.019   0.039      0.026     0.061
Cs           0.056   0.074   0.086   0.030   0       0.064   0.003   0.028   0.030      0.030     0.071
Slashdot     0.073   0.133   0.076   0.110   0.001   0.016   0.018   0.044   0.221      0.135     0.114
Reuters      0.078   0.154   0.136   0.041   0.000   0.051   0.004   0.170   0.289      0.262     0.154
Ohsumed      0.264   0.212   0.275   0.179   0.049   0.169   0.004   0.113   0.069      0.001     0.245
TMC          0.504   0.499   0.511   0.295   0.007   0.529   0.516   0.481   0.177      0.214     0.502
IDBI         0.163   0.172   0.164   0.162   0.000   0.024   0.001   0.019   0.155      0.133     0.166
Rank. avg    4.58    3.63    3.08    6.96    9.71    6.46    8.67    7.08    6.29       6.50      3.04

Example-Based F1
Dataset      AHCG1   AHCG2   AHCG3   GORT    EBRT    EaBR    EaCC    EaPS    MLSAMPKNN  AESAKNNS  DHEML
Medical      0.300   0.341   0.315   0.286   0.321   0.214   0.210   0.240   0.085      0.132     0.329
Enron        0.462   0.264   0.462   0.350   0.061   0.425   0.408   0.347   0.391      0.445     0.460
Chess        0.128   0.138   0.200   0.057   0       0.091   0.005   0.040   0.030      0.047     0.182
Yeast        0.648   0.639   0.645   0.614   0.638   0.638   0.633   0.596   0.586      0.373     0.647
Philosophy   0.087   0.156   0.104   0.063   0       0.017   0.004   0.073   0.049      0.029     0.141
Chemistry    0.150   0.143   0.150   0.068   0       0.017   0.001   0.024   0.052      0.034     0.134
Cs           0.128   0.137   0.203   0.058   0       0.091   0.005   0.040   0.045      0.043     0.168
Slashdot     0.118   0.199   0.120   0.194   0.001   0.018   0.020   0.047   0.230      0.138     0.171
Reuters      0.143   0.241   0.226   0.080   0.000   0.055   0.005   0.176   0.311      0.279     0.227
Ohsumed      0.383   0.344   0.389   0.298   0.056   0.202   0.005   0.134   0.090      0.001     0.342
TMC          0.656   0.650   0.666   0.449   0.008   0.661   0.646   0.598   0.196      0.236     0.644
IDBI         0.288   0.294   0.289   0.280   0.000   0.031   0.001   0.026   0.206      0.170     0.135
Rank. avg    3.58    3.17    2.50    6.08    9.71    6.79    8.83    7.67    6.83       7.33      3.50

Micro-Averaged F1
Dataset      AHCG1   AHCG2   AHCG3   GORT    EBRT    EaBR    EaCC    EaPS    MLSAMPKNN  AESAKNNS  DHEML
Medical      0.613   0.703   0.636   0.209   0.432   0.493   0.037   0.402   0.202      0.293     0.564
Enron        0.449   0.244   0.449   0.345   0.037   0.411   0.395   0.330   0.384      0.443     0.436
Chess        0.034   0.123   0.031   0.057   0       0.116   0.007   0.051   0.045      0.063     0.120
Yeast        0.638   0.639   0.636   0.602   0.632   0.632   0.627   0.600   0.587      0.420     0.638
Philosophy   0.020   0.127   0.020   0.062   0       0.021   0.006   0.081   0.055      0.037     0.086
Chemistry    0.027   0.095   0.014   0.067   0       0.023   0.001   0.024   0.061      0.044     0.053
Cs           0.032   0.123   0.029   0.057   0       0.116   0.007   0.051   0.058      0.059     0.119
Slashdot     0.110   0.199   0.111   0.191   0.001   0.033   0.037   0.074   0.268      0.204     0.157
Reuters      0.041   0.207   0.041   0.079   0.000   0.076   0.007   0.200   0.342      0.331     0.153
Ohsumed      0.283   0.307   0.295   0.271   0.076   0.266   0.007   0.171   0.114      0.002     0.283
TMC          0.618   0.610   0.632   0.435   0.008   0.640   0.632   0.577   0.351      0.405     0.621
IDBI         0.215   0.279   0.216   0.273   0.000   0.041   0.002   0.033   0.218      0.201     0.225
Rank. avg    5.63    2.50    5.58    5.58    9.96    5.71    8.79    6.92    5.92       6.00      3.42

Macro-Averaged F1
Dataset      AHCG1   AHCG2   AHCG3   GORT    EBRT    EaBR    EaCC    EaPS    MLSAMPKNN  AESAKNNS  DHEML
Medical      0.033   0.035   0.034   0.023   0.016   0.028   0.028   0.021   0.010      0.015     0.035
Enron        0.070   0.060   0.070   0.095   0.006   0.065   0.062   0.045   0.127      0.148     0.056
Chess        0.021   0.019   0.028   0.023   0       0.023   0.002   0.004   0.012      0.027     0.025
Yeast        0.368   0.353   0.356   0.357   0.329   0.329   0.346   0.341   0.379      0.194     0.363
Philosophy   0.019   0.012   0.020   0.020   0       0.011   0.002   0.004   0.006      0.013     0.015
Chemistry    0.027   0.062   0.027   0.027   0       0.012   0.000   0.003   0.009      0.013     0.023
Cs           0.021   0.020   0.027   0.022   0       0.035   0.002   0.004   0.012      0.025     0.027
Slashdot     0.096   0.162   0.097   0.115   0.000   0.033   0.036   0.041   0.136      0.084     0.139
Reuters      0.037   0.072   0.039   0.042   0.000   0.028   0.004   0.030   0.163      0.162     0.053
Ohsumed      0.236   0.280   0.255   0.219   0.028   0.215   0.006   0.065   0.052      0.001     0.251
TMC          0.432   0.421   0.491   0.277   0.003   0.481   0.462   0.329   0.060      0.090     0.488
IDBI         0.115   0.137   0.115   0.132   0.000   0.013   0.002   0.027   0.107      0.104     0.118
Rank. avg    4.58    4.04    3.25    4.33    10.50   6.46    8.58    8.33    6.17       6.25      3.50
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
