Article

On the Suitability of Bagging-Based Ensembles with Borderline Label Noise

by José A. Sáez 1,* and José L. Romero-Béjar 1,2,3

1 Department of Statistics and Operations Research, University of Granada, Fuentenueva s/n, 18071 Granada, Spain
2 Instituto de Investigación Biosanitaria (ibs.GRANADA), 18012 Granada, Spain
3 Institute of Mathematics of the University of Granada (IMAG), Ventanilla 11, 18001 Granada, Spain
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(11), 1892; https://doi.org/10.3390/math10111892
Submission received: 1 May 2022 / Revised: 20 May 2022 / Accepted: 26 May 2022 / Published: 1 June 2022
(This article belongs to the Special Issue Data Mining: Analysis and Applications)

Abstract: Real-world classification data usually contain noise, which can affect the accuracy and complexity of the models built from them. In this context, an interesting approach to reduce the effects of noise is building ensembles of classifiers, which have traditionally been credited with the ability to tackle difficult problems. Among the alternatives for building ensembles from noisy data, bagging has shown some potential in the specialized literature. However, existing works in this field are limited and only study noise based on random mislabeling, which is unlikely to occur in real-world applications. Recent research shows that other types of noise, such as that occurring at class boundaries, are more common and more challenging for classification algorithms. This paper delves into the usage of bagging techniques in these complex problems, in which noise affects the decision boundaries among classes. In order to investigate whether bagging is able to reduce the impact of borderline noise, an experimental study is carried out considering a large number of datasets with different noise levels, along with several noise models and classification algorithms. The results obtained show that bagging achieves better accuracy and robustness than the individual models with this complex type of noise. The highest improvements in average accuracy are around 2–4% and are generally found at medium-high noise levels (from 15–20% onwards). Because each bootstrap subsample drawn from the original training set contains only some of the noisy samples, only parts of the decision boundaries among classes may be impaired when building each model, reducing the impact of noise on the global system.

1. Introduction

Data acquisition and processing in statistical and data-mining applications are often subject to imperfections [1]. This fact may lead to the presence of errors or noise in datasets [2,3]. In classification [4], creating models from noisy data has several drawbacks, including the need for more time and samples to build the classifier [5,6]. Furthermore, both the accuracy and complexity of classifiers can be affected by modeling corrupted data [7,8].
Given the inconveniences caused by noise, previous works have raised the need for techniques to deal with it [9,10]. Thus, in the classification literature, two main options are contemplated for the treatment of noise: (i) the so-called robust learners [9,11], which involve modifications of existing algorithms to deal with errors; and (ii) the preprocessing of datasets with the aim of handling the noisy samples [2,10]. Despite the good results that both types of approaches can provide, they have some disadvantages [12,13]. The former require redesigning the algorithms behind known classification methods, which can be complex in some cases. Furthermore, since the adaptation depends on such methods, it is not immediately applicable to other classification techniques [9]. On the other hand, methods following the second approach are often designed to detect noise with certain characteristics, and therefore the resulting data may still be imperfect [12]. These facts show the importance of investigating other alternatives to reduce the impact of noise that allow satisfactory results when the previous approaches are impaired.
When dealing with complex datasets, several works have demonstrated that ensemble methods [14,15], which simultaneously use several classifiers, are an accurate way to overcome some of the difficulties in building models from the data. One of the best known approaches to create ensembles is bootstrap aggregating or, as it is more commonly known, bagging [16,17]. Given a dataset, it generates different versions using a bootstrap resampling procedure and builds a model on each of these subsets. Then, the outputs of all the available classifiers are combined to obtain a single final prediction for each sample [18].
Nevertheless, despite the popularity of bagging schemes to build ensembles, research works studying their behavior with noisy data are limited and narrow in scope [19,20]. Abellán et al. [19] focused on decision trees, comparing bagging of trees with imprecise probabilities against bagging of traditional decision trees. On the other hand, Khoshgoftaar et al. [20] compared the performance of several boosting and bagging techniques on noisy datasets, focusing only on imbalanced and binary classification data. Furthermore, the above studies dealt with noise based on random mislabeling [20,21]: the samples to corrupt were chosen randomly, which represents an unlikely situation in real applications [22]. Recent works [23,24] have proposed other, more advanced noise introduction models, which better represent the corruptions occurring in real-world datasets. They are based on the mislabeling of samples close to decision boundaries, where errors are more prone to occur [22]. These types of errors are more common in practice and more difficult for classification algorithms to detect and deal with [23]. There are works showing that, in real-world applications based on collaborative labeling, most of the differences between the labelers occur in the proximity of the decision boundaries [25,26]. A study on coronary disease classification [27] revealed that noise was generally caused by equipment measurement errors, which generated altered values in the proximity of the decision boundaries and led to incorrect labeling of the samples. Other works also reinforce the importance of labeling errors at decision boundaries [12,28]. Thus, Garcia et al. [28] analyzed a dataset in the field of ecology, in which they observed that certain alterations produced small errors in environmental characteristics, ultimately leading to mislabeling of the collected data. The importance of noisy samples at decision boundaries was also reflected in the field of noise filtering [12], in which the efficiency of noise filters was more notable when the dataset presented overlapping between classes.
This research delves into the above aspects, analyzing the behavior of bagging schemes when decision boundaries are affected by labeling errors. An experimental study comparing bagging against its baseline models under borderline noise is developed. For this, four robust classification methods are considered: C4.5 [11], RIPPER [29], PART [30], and C5.0 [31]. Although the general behavior of these techniques with noise has been studied [32,33], research works analyzing their real behavior with borderline label noise are scarce, particularly when bagging is used. This work aims to verify how these algorithms are affected by borderline noise and whether their robustness can be further increased by using bagging schemes. These methods, with and without bagging, will be used to create classifiers over 36 real-world datasets, both binary and multi-class, with different natures and characteristics. Recent borderline label noise models [23] will be employed to inject errors into these datasets considering nine noise levels (ranging from 0% to 40%, in steps of 5%), resulting in an experiment involving a total of 612 noisy datasets. The disparity between the individual models and bagging-based ensembles will be explored considering both their accuracy and robustness on each noisy dataset created. As support for the conclusions drawn from this study, the corresponding statistical tests [34] will be applied to the results obtained. The datasets and the results of the experimentation carried out in this paper can be accessed through the webpage https://joseasaezm.github.io/bagbln/ (accessed on 1 May 2022). In summary, the following are the main contributions offered by this research:
  • Deepening the understanding of the impact of borderline label noise, which is more frequent in practice than the random label noise that is commonly studied, on the efficacy of traditional classification methods.
  • Analysis of the behavior of bagging-based ensembles versus not considering them when dealing with borderline label noise, which has usually been overlooked in the literature.
  • Study, by means of specific measures [35], of the gain in robustness to noise when methods traditionally considered robust are included in a bagging ensemble.
  • Establishing the noise levels in the data where the use of bagging is most recommended, as well as the hypotheses that explain its good behavior with borderline label noise.
Note that this paper primarily focuses on analyzing the accuracy and robustness of classification methods with and without bagging when errors affect the labels of samples at decision boundaries. However, a study of the specific characteristics of the data (such as the overlapping level among classes, the imbalance ratio, and the dispersion degree of the samples, among others [36]) leading to a better behavior of bagging is not carried out (except for the level and type of noise, which are injected into the datasets in a controlled way).
The rest of this work is organized as follows. Section 2 contemplates the background associated with this paper, introducing the problem of noisy data in classification and the creation of ensembles using bagging. Then, Section 3 details the characteristics of the experiment that is carried out, and Section 4 focuses on the analysis of results. Finally, Section 5 concludes this paper, providing some ideas about future research.

2. Background

This section presents the background related to this paper. Section 2.1 introduces classification with noisy data, while Section 2.2 focuses on using bagging schemes for building ensembles.

2.1. Noisy Data in Classification

Because the source and input of data in real-world applications are often subject to imperfections, the data associated with them usually suffer from corruptions [6,37]. In classification problems, noise can impair classifiers by affecting their accuracy, complexity, and construction time [5,7]. In this context, there are two main types of noise found in the specialized literature [2,8]:
  • Label noise [2,38]. This occurs when samples are labeled with incorrect class labels. Its origin is commonly associated with subjectivity during the labeling process, errors in data collection, or the use of inaccurate information for labeling [39,40].
  • Attribute noise [41,42]. This is related to the imperfections occurring in the attributes of a classification dataset. This type of noise can come from various sources, such as streaming restrictions, detection device failures, and transcription errors [8].
Note that, although only two types of noise are distinguished, each of them can appear in multiple ways [10,42]. For example, label noise may occur only between certain classes [10], affect each of the classes unequally [43], or be located in certain areas [22], such as the decision boundaries analyzed in this paper. Something similar applies to attribute noise, as it can appear as small errors in the data following a Gaussian distribution [42] or more pronounced errors that can have a larger impact [44]. Among both types of noise, label noise, which is the focus of this research, is often more harmful to classifier performance than attribute noise because labels usually have more influence on model construction [32,33].
Since error is inherent in human nature and in most measurement instruments, there are many real-world applications where noisy data are typically present [45,46]. For example, label noise is common in medical applications [45], in which the information used to label each case may come from different tests whose results are unknown or imprecise. Another common application where label noise occurs is spam filtering [47], in which accidental clicks can cause samples to be mislabeled. On the other hand, attribute noise can be present in other types of applications, such as those involving voice recognition in call routing [48] or in the field of software engineering [46], where it can affect the software quality metrics.
In the context of noisy data, robustness [35] is the ability of a classification technique to create classifiers that are less affected by imperfect data. This fact implies that models created by robust algorithms from clean and noisy data are more similar. Robustness is a relevant issue when studying noisy data because it allows estimation of the performance of a technique when the characteristics of noise in a dataset are unknown. Examples of robust learners are C4.5 [11], RIPPER [29], PART [30], and C5.0 [31], which are considered in this paper. These algorithms incorporate pruning schemes to avoid overfitting the classifiers to errors. One of the contributions of this research is to analyze whether the usage of bagging improves the behavior of these algorithms traditionally considered robust to noise when dealing with borderline label noise.

2.2. Building Classification Ensembles Using Bagging

Ensemble methods [14,15] are based on creating several models from the training data. They have been postulated as an efficient alternative for complex problems, since building several models from the data that complement each other usually brings advantages [49]. Thus, the usage of ensembles with respect to each of their components often implies improvements in classification performance, dynamic adaptation, and parallelization [50,51]. One of the best known and most widely used approaches to building ensembles is bagging [16,17] (see Figure 1).
The operation of bagging-based ensembles is described below. Let D be a classification dataset with n samples. Bagging generates t different subsets D_1, …, D_t from D using a bootstrap resampling procedure [52]. Each subset D_k, k ∈ {1, …, t}, is usually created by a random selection, with replacement, of n samples from the initial data D; each subset is thus drawn independently of the others. Then, a model m_k, k ∈ {1, …, t}, is built on each subset D_k using a base classification algorithm. To determine the class labels for new samples, an output combination phase is carried out [52], in which each sample is evaluated by all the available classifiers, obtaining t predictions p_1, …, p_t. The most used approach for output combination in the specialized literature is majority voting [14,19]: a simple but effective procedure in which each model within the ensemble casts a vote for one of the classes, and the most voted class is chosen as the final prediction.
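As a minimal sketch of this procedure (an illustration only, using scikit-learn's DecisionTreeClassifier as a stand-in base learner, since the C4.5-style algorithms considered in this paper are not part of scikit-learn):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, t=10, seed=0):
    """Build t models, each on a bootstrap sample: n draws with replacement from D."""
    rng = np.random.default_rng(seed)
    n = len(X)
    models = []
    for _ in range(t):
        idx = rng.integers(0, n, size=n)  # bootstrap resampling of the n samples
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Majority voting: each model casts a vote; the most voted class wins."""
    votes = np.stack([m.predict(X) for m in models])  # shape (t, n_samples)
    preds = []
    for col in votes.T:  # one column of t votes per sample
        labels, counts = np.unique(col, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)
```

A typical call would be models = bagging_fit(X_train, y_train, t=100) followed by bagging_predict(models, X_test); ties in the voting are broken here by label order, which is one of several reasonable conventions.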

3. Experimental Framework

This section details the characteristics of the experimental framework designed to analyze the efficacy of bagging schemes with borderline label noise. Its design is influenced by the experimental frameworks of other recent research works in the field of classification with noisy data [10,42,53]. Section 3.1 and Section 3.2 describe the real-world datasets used and how labeling errors are induced in them. Then, Section 3.3 focuses on the classification methods. Section 3.4 presents the methodology employed for the analysis of results.

3.1. Real-World Datasets

The experimentation is based on 36 real-world datasets of different natures taken from the UCI machine learning and KEEL-dataset repositories (https://archive.ics.uci.edu/ and http://www.keel.es (accessed on 1 May 2022)). These are shown in Table 1, where sa refers to the number of samples, at to the number of attributes, and cl to the number of classes. They cover a wide range of cardinalities regarding the number of samples (from 106 up to 20,000), attributes (from 2 up to 309), and classes (from 2 up to 37). The selection of the datasets has been made considering that all their attributes are numerical. This requirement is imposed by the models used in experimentation to introduce borderline label noise into the data [23], which compute the distance of the samples to the decision boundaries and need numerical attributes for that purpose.

3.2. Noise Introduction Models

In the above datasets, nine levels of borderline label noise ρ% are injected in order to control the characteristics of the errors: from 0% (clean datasets) up to 40%, in increments of 5%. The following two noise models are used to introduce noise [23]:
1.
Neighborwise borderline label noise. This calculates a noise measure N(x_i) for each sample x_i based on the distances to its closest samples from the same class and from a different one:

$$N(x_i) = \frac{d\left(x_i,\; x_j = \mathrm{NN}(x_i) \mid x_{j,0} = x_{i,0}\right)}{d\left(x_i,\; x_k = \mathrm{NN}(x_i) \mid x_{k,0} \neq x_{i,0}\right)}$$

where NN(x_i) denotes the nearest neighbor of x_i satisfying the stated class condition, d(x_i, x_j) is the Euclidean distance between the samples x_i and x_j, and x_{i,0} is the class label of the sample x_i. Finally, the values N(x_i) are sorted in descending order, and the first ρ% of the samples are chosen to be mislabeled.
2.
Non-linearwise borderline label noise. This computes a noise metric for each sample based on its distance to the decision boundary induced by a support vector machine (SVM) [54]. It first fits an SVM with a radial basis function kernel to the data D to obtain the decision boundary. Then, for each sample x_i in D, its distance to the boundary is computed, taken as the unsigned decision value of the SVM for that sample. For multi-class problems, the one-vs-one approach is used, and the distance to the nearest decision boundary is selected. Finally, the values of the noise metric are sorted in ascending order, and the first ρ% of the samples are chosen to be mislabeled (a code sketch of both models is given after this list).
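A rough sketch of both noise models follows. It assumes Euclidean distances on numerical attributes, scikit-learn's SVC for the SVM, and a wrong label drawn at random from the remaining classes; the exact implementation of the models in [23] may differ in such details:

```python
import numpy as np
from sklearn.svm import SVC

def neighborwise_scores(X, y):
    """N(x_i): distance to the nearest same-class neighbor divided by the
    distance to the nearest other-class neighbor (high = likely borderline)."""
    scores = np.empty(len(X))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the sample itself
        scores[i] = d[y == y[i]].min() / d[y != y[i]].min()
    return scores  # mislabel the rho% samples with the HIGHEST scores

def nonlinearwise_scores(X, y):
    """Unsigned decision values of an RBF SVM (one-vs-one for multi-class);
    small values mean the sample lies close to a decision boundary."""
    dec = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y).decision_function(X)
    dec = np.abs(dec.reshape(len(X), -1))  # binary SVMs return a 1-D array
    return dec.min(axis=1)  # mislabel the rho% samples with the LOWEST values

def inject_noise(y, scores, rho, highest=True, seed=0):
    """Flip the labels of the rho (fraction) of samples selected by the scores."""
    rng = np.random.default_rng(seed)
    order = np.argsort(-scores) if highest else np.argsort(scores)
    y_noisy = y.copy()
    for i in order[:int(np.ceil(rho * len(y)))]:
        # assumption: the new (wrong) label is drawn at random from the other classes
        y_noisy[i] = rng.choice(np.setdiff1d(np.unique(y), [y[i]]))
    return y_noisy
```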
For a given dataset D in Table 1, noise is introduced as follows. First, a noise level ρ% is injected into a copy D′ of D using one of the above noise models. Then, both datasets, D and D′, are split into five equivalent parts, keeping the same samples in each fold. Finally, the training sets are selected from D′ (using four folds), and the test sets are built from D (using the remaining fold). Both noise models, neighborwise and non-linearwise borderline label noise, are considered independently. For each one, nine noise levels are analyzed, which implies the usage of a total of 612 different noisy datasets in the experiment. The accuracy of each algorithm on these datasets is computed by averaging its test results over five runs of a five-fold cross-validation.
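A sketch of this evaluation protocol, under the assumption that scikit-learn's StratifiedKFold reproduces the five equivalent parts (neighborwise_scores and inject_noise are the hypothetical helpers sketched above):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

def noisy_cv_accuracy(X, y_clean, y_noisy, make_model, n_splits=5, seed=0):
    """Train on the noisy copy D' (four folds) and test on the clean data D
    (remaining fold); the same sample indices define each fold of D and D'."""
    accs = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr, te in skf.split(X, y_clean):
        model = make_model().fit(X[tr], y_noisy[tr])  # noisy training labels
        accs.append(float((model.predict(X[te]) == y_clean[te]).mean()))  # clean test labels
    return float(np.mean(accs))

# e.g.: y_noisy = inject_noise(y, neighborwise_scores(X, y), rho=0.20)
#       acc = noisy_cv_accuracy(X, y, y_noisy, DecisionTreeClassifier)
```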

3.3. Classification Algorithms

The choice of the classification techniques employed in the experimentation (C4.5 [11], RIPPER [29], PART [30], and C5.0 [31]) is based on two main aspects of the research carried out. First, they are algorithms traditionally considered when creating bagging-based ensembles [17,55]. Even though bagging can be applied regardless of the classification method, approaches based on decision trees and ruleset creation are generally recommended when building ensembles [18]. Among their advantages [55,56], they are non-parametric (no assumptions about the data distribution are made) and interpretable and, more importantly, they provide good solutions in relatively short times when multiple models must be built from the data. These types of techniques based on decision trees and rulesets are commonly used in some of the most popular ensembles, such as XGBoost [57] or random forest [58]. Second, the algorithms considered include mechanisms against overfitting and are commonly used in works on noisy data in classification [32,33]. This paper delves into this field, studying the effect of borderline label noise on the performance of these robust learners and comparing their results with and without bagging. The classification techniques used in the experimentation are briefly described below:
1.
C4.5 [11]. It is based on the ID3 [59] algorithm, adding improvements such as the handling of missing values, the treatment of continuous attributes, and the usage of pruning to avoid overfitting. C4.5 follows a top-down approach to build the decision tree: at each stage, the attribute that best separates the remaining samples among the classes is selected for the current node.
2.
RIPPER [29]. Its main goal is to create a set of crisp rules from the training data. Rules are learned one by one until all the samples of each class are covered, processing the classes according to their frequency. For this, a stopping criterion based on the minimum description length [60] metric is used. Each rule is then pruned to counteract the overfitting of the growing stage. After learning the ruleset for a given class, an optimization stage is run in which the rules are improved by adjusting their antecedents.
3.
PART [30]. It relies on a divide-and-conquer strategy to create a set of if-else rules from the construction of partial decision trees, that is, trees whose branches are not completely explored. Thus, once the children of a given node have been obtained, the node may be chosen for pruning. At each stage, PART creates a partial decision tree and converts its best branch, the one covering the most samples, into a rule of the ruleset. The algorithm stops once all samples in the dataset have been covered.
4.
C5.0 [31]. It has been considered in the experimentation as a more recent and advanced version of the classic C4.5 algorithm. Among the improvements that C5.0 offers with respect to its predecessor, we can highlight lower temporal and spatial complexities (which are especially useful when building ensembles), the creation of smaller decision trees that maintain their accuracy, the introduction of sample and misclassification weighting schemes, and the filtering of irrelevant attributes for the classification task.
The parameter setting for each method is the default one recommended by the authors:
  • C4.5, PART, C5.0: pruning confidence c = 0.25; min. samples per leaf s = 2.
  • RIPPER: folds f = 3; optimizations r = 2; min. weights w = 2.
Note that, in real-world applications, it is interesting to find the optimal parameters for each algorithm on each dataset in order to obtain the highest possible classification accuracy for the specific problem addressed. However, this aspect is not the object of this research, whose main goal is to analyze whether there is an improvement in the behavior of ensembles based on bagging with respect to not considering them when dealing with borderline label noise. Because of this, finding the optimal parameter setup for each method is not essential, and the same parameters are set for all of them, regardless of whether they use bagging or not. In this way, the variation in accuracy of each algorithm before and after using bagging will be due to the use of bagging itself and not to the optimization of the parameters for each method and dataset.

3.4. Methodology of Analysis

The main goal of the experimentation is to compare the performance of each classification method when dealing with borderline label noise before and after using bagging. In order to do this, the analysis of results will be focused on four main aspects:
1.
Classification accuracy. Classification accuracy is computed for each algorithm on each dataset, noise model, and noise level. Note that, even though this paper presents averaged results, the conclusions drawn are supported by appropriate statistical tests on each of them. The complete results are accessible through the webpage with complementary material for this research (https://joseasaezm.github.io/bagbln/ (accessed on 1 May 2022)).
2.
Robustness to noise. The equalized loss of accuracy (ELA) [35] metric is used to evaluate noise robustness, measuring the deterioration with respect to a perfect classification weighted by the performance with clean data:

$$\mathrm{ELA}_{\rho\%} = \frac{1 - A_{\rho\%}}{A_{0\%}}$$

where A_{0%} and A_{ρ%} are, respectively, the classification accuracies without noise and with a noise level ρ%. The lower the ELA value, the greater the robustness of the classification algorithm (a small numeric sketch is given after this list). It is important to point out that the conclusions reached when studying accuracy and robustness do not necessarily coincide: an algorithm can have good accuracy but deteriorate to a greater degree (being less robust) as higher noise levels are considered in the data.
3.
Box-plots of robustness results. Box-plots complement the analysis of the robustness to noise of the classification algorithms by showing the distribution of the ELA results. Lower medians and narrower interquartile ranges indicate good robustness across all the datasets used, reflecting similar performance of the methods before and after introducing noise into the data.
4.
Datasets with the best result. Along with the above metrics (accuracy and ELA), the number of datasets in which each approach (bagging or baseline method) obtains the best result at each noise level is computed.
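As a small numeric sketch of the ELA computation from item 2 (illustrative values only, not results from the experimentation):

```python
def ela(acc_noisy, acc_clean):
    """Equalized Loss of Accuracy: (1 - A_rho%) / A_0%; lower = more robust."""
    return (1.0 - acc_noisy) / acc_clean

# A classifier falling from 0.84 (clean) to 0.76 at some noise level:
print(round(ela(0.76, 0.84), 4))  # 0.2857; a larger drop yields a larger ELA
```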
Wilcoxon's test [61] will be employed to properly analyze both the accuracy and ELA results and detect significant differences between two paired samples, as suggested in the literature [62]. For each noise model and level, the baseline algorithm and its bagging counterpart will be compared, and the corresponding p-values will be obtained. A low p-value allows rejection of the null hypothesis that both versions perform equally, implying that one algorithm outperforms the other. This research considers a significance level α = 0.05.
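A sketch of this comparison with SciPy, assuming the paired signed-rank variant of the test and hypothetical per-dataset accuracies (in the actual study, each vector would hold the 36 dataset results of one method at one noise level):

```python
import numpy as np
from scipy.stats import wilcoxon

baseline = np.array([0.78, 0.81, 0.69, 0.74, 0.66, 0.72])  # hypothetical accuracies
bagged   = np.array([0.80, 0.84, 0.73, 0.79, 0.72, 0.79])  # paired, same datasets
stat, p_value = wilcoxon(baseline, bagged)                 # non-parametric paired test
print(p_value < 0.05)  # True: all paired differences here favor bagging
```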

4. Addressing the Borderline Label Noise Problem with Bagging Ensembles

This section analyzes both the accuracy and robustness of the classification methods, with and without bagging, dealing with borderline label noise. Section 4.1 focuses on the impact of borderline noise on classification accuracy, whereas Section 4.2 focuses on the robustness against noise of each approach.

4.1. Impact of Borderline Label Noise on Classification Accuracy

Table 2 presents the accuracy results (rows ACC) of each classification method, with and without bagging, for each noise model and noise level. Additionally, the number of datasets with the best result for each classification technique (rows Best) and the p-values obtained using Wilcoxon's test (rows pWil) are provided.
The following observations emerge from the analysis of these results:
  • The test accuracy results are higher for bagging than for the baseline methods in both noise models, neighborwise and non-linearwise, at all the noise levels.
  • The improvements using bagging are approximately 2–4% in all cases.
  • The improvements are slightly larger for RIPPER and PART than for C4.5 and C5.0.
  • The largest improvements for each method are generally found at medium-high noise levels, that is, from 15–20% onwards.
  • The rows Best show a clear advantage in favor of bagging, which provides the best accuracy in the majority of the datasets.
  • The low p-values obtained with Wilcoxon’s test support the superiority of the bagging schemes in all the comparisons.
The results in Table 2 show that bagging schemes provide higher accuracies for all the classification algorithms studied when the data suffer from borderline label noise. Furthermore, this ensemble-building approach improves the accuracy even of algorithms traditionally considered robust to noise when dealing with this type of complex data. The improvement percentages obtained through the application of bagging (2–4%) represent significant amounts in classification problems. Note that these improvements occur in the borderline area among classes, where samples tend to be harder to classify. In certain types of real-world applications, such as medical ones, these percentages can have a large impact on the classification system: the classification decision often involves the health of patients who are difficult to diagnose, whose descriptive attributes place them on the border between two of the classes of the problem.
On the other hand, the classification methods that generally provide worse results at some noise levels without bagging, such as RIPPER, benefit the most from its usage, obtaining larger percentages of improvement on average. It is worth noting the behavior of PART at the 25–30% levels with non-linearwise borderline noise: despite obtaining good results among the methods that do not use bagging, it reaches high percentages of improvement when considered within the ensemble. The largest improvements for all the classification algorithms, usually obtained at medium-high noise levels, show that the impact of bagging is potentially greater in the most complex classification problems.
Given that bagging provides better results in the majority of the datasets and that the statistical comparisons confirm its good behavior, its usage can be recommended when the data suffer from borderline label noise. Its better performance against label noise affecting decision boundaries can be explained by the fact that the bootstrap resampling procedure may cause each model to be affected by only some of the borderline samples. In this way, the separability between the classes can be increased, reducing the chances that decision limits induced by the classifiers overfit the noisy data.
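One way to quantify this hypothesis is the standard bootstrap inclusion probability: when n samples are drawn with replacement from a set of n, the chance that any particular (possibly mislabeled) sample appears in a given subset D_k is

$$P(x_i \in D_k) = 1 - \left(1 - \frac{1}{n}\right)^{n} \approx 1 - e^{-1} \approx 0.632,$$

so each base model sees, on average, only about 63% of the distinct noisy borderline samples; the boundary distortions that any single classifier learns are therefore likely to be outvoted by the other models in the ensemble.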

4.2. Analysis of Classification Robustness to Borderline Noise

Table 3 shows the robustness results using the ELA metric for each classification algorithm, with and without bagging, in each noise model and noise level.
From the analysis of Table 3, the following results arise:
  • The ELA values are better for the algorithms using bagging than for baseline classifiers at all the noise levels for the two borderline noise models studied.
  • The largest advantages for each method using bagging generally occur at medium-high noise levels (from 20–25% onwards).
  • These differences are usually more noticeable in the RIPPER algorithm, followed by PART, C4.5, and finally C5.0.
  • The number of datasets in which each algorithm shows greater robustness provides clear results in favor of bagging, which is the best in most datasets.
  • The p-values of Wilcoxon’s test confirm the robustness of the bagging schemes compared to their non-application.
Figure 2 shows the box-plots of the ELA values of each classification method on datasets with borderline label noise. Figure 2a–d show the plots for neighborwise borderline noise, whereas Figure 2e–h show the distributions for non-linearwise borderline noise. These plots show that the ELA medians and interquartile ranges of the methods that use bagging are generally lower than those of the methods that do not. It is also observed that, at lower noise levels, the methods not using bagging often present more outliers in their robustness results than their bagging counterparts.
The above results show the efficacy of bagging when dealing with label noise at the decision boundaries between classes. The bagging-based schemes obtain higher accuracy and robustness than their non-bagging counterparts. Because each subsample drawn from the original training set contains only some of the mislabeled samples, only some classifiers, and only parts of the boundaries among classes, are impaired. As a consequence, the global system may be less affected than a single model built from the whole dataset containing errors.

5. Conclusions

This research has focused on comparing the behavior of bagging-based ensembles against their individual components when the data are affected by borderline label noise. A total of 612 noisy datasets, considering various noise models and levels, have been used in this comparison. On these datasets, the C4.5, RIPPER, PART, and C5.0 robust learners have been employed to create classifiers with and without the usage of bagging.
The results derived from the experimentation show that bagging provides better accuracy and robustness results for all the noise models and levels studied. The lowest improvements in average accuracy using bagging are around 2%, whereas the largest are around 4% and are usually obtained from the 15–20% noise levels onwards. At these noise levels, a larger number of noisy samples are present, producing greater advantages in favor of bagging. The number of datasets where bagging provides the highest accuracy is always at least 27 (out of 36) at each noise level and noise model, regardless of the classification algorithm (the average being 31.13). The robustness results show a slightly greater superiority of the bagging-based methods, which increase the number of datasets with the best result to at least 28 in all cases (with an average of 31.44). Wilcoxon's test supports the good behavior of bagging, providing p-values below 0.001 in all the comparisons.
The main hypothesis to explain the better results of bagging-based methods with borderline noise is that the bootstrap resampling procedure may cause each model to be affected by only some of the borderline samples. Thus, the separability between classes can be increased, and the classifiers do not overfit the noisy data as much as when bagging is not considered. Although the use of ensembles increases the computational cost, since several models are created from the training set, the advantages in accuracy and robustness offered by bagging in this scenario mean that it can be recommended as a simple and effective way to deal with borderline label noise.
Among the limitations and possibilities for improvement of this work, it may be interesting to analyze the imbalance ratio and other well-known characteristics of classification data, such as the dispersion of samples and the overlapping among classes [36], before and after introducing borderline label noise, determining those cases in which bagging provides better results. Another aspect to address is the analysis of the samples that are part of each subsample created by bagging, computing the number of clean and noisy samples at class boundaries in each one in order to deepen the understanding of the circumstances that make bagging work better with this type of complex data.
In future works, the synergy between bagging and preprocessing methods for the treatment of noisy data will be studied in order to test their joint operation when dealing with borderline label errors. Furthermore, the behavior of bagging when the data are affected by other types of noise in the borderline region, such as attribute noise, must also be studied.

Author Contributions

All authors have contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets used in this research are taken from the UCI machine learning (https://archive.ics.uci.edu/ (accessed on 1 May 2022)) and KEEL-dataset (http://www.keel.es (accessed on 1 May 2022)) repositories.

Acknowledgments

J.L. Romero has been partially supported by grants MCIU/AEI/ERDF, UE PGC2018-098860-B-I00, grant A-FQM-345-UGR18 cofinanced by ERDF Operational Programme 2014–2020 and the Economy and Knowledge Council of the Regional Government of Andalusia, Spain, and grant CEX2020-001105-M MCIN/AEI/10.13039/501100011033.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Chen, W.; Yang, K.; Shao, Y.; Chen, Y.; Zhang, J.; Yao, J. A trace lasso regularized robust nonparallel proximal Support Vector Machine for noisy classification. IEEE Access 2019, 7, 47171–47184.
  2. Nematzadeh, Z.; Ibrahim, R.; Selamat, A. Improving class noise detection and classification performance: A new two-filter CNDC model. Appl. Soft Comput. 2020, 94, 106428.
  3. Martín, J.; Sáez, J.A.; Corchado, E. On the regressand noise problem: Model robustness and synergy with regression-adapted noise filters. IEEE Access 2021, 9, 145800–145816.
  4. Pawara, P.; Okafor, E.; Groefsema, M.; He, S.; Schomaker, L.; Wiering, M. One-vs-One classification for deep neural networks. Pattern Recognit. 2020, 108, 107528.
  5. Tian, Y.; Sun, M.; Deng, Z.; Luo, J.; Li, Y. A new fuzzy set and nonkernel SVM approach for mislabeled binary classification with applications. IEEE Trans. Fuzzy Syst. 2017, 25, 1536–1545.
  6. Yu, Z.; Wang, D.; Zhao, Z.; Chen, C.L.P.; You, J.; Wong, H.; Zhang, J. Hybrid incremental ensemble learning for noisy real-world data classification. IEEE Trans. Cybern. 2019, 49, 403–416.
  7. Liu, T.; Tao, D. Classification with noisy labels by importance reweighting. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 447–461.
  8. Sáez, J.A.; Corchado, E. ANCES: A novel method to repair attribute noise in classification problems. Pattern Recognit. 2022, 121, 108198.
  9. Huang, L.; Shao, Y.; Zhang, J.; Zhao, Y.; Teng, J. Robust rescaled hinge loss twin support vector machine for imbalanced noisy classification. IEEE Access 2019, 7, 65390–65404.
  10. Li, J.; Zhu, Q.; Wu, Q.; Zhang, Z.; Gong, Y.; He, Z.; Zhu, F. SMOTE-NaN-DE: Addressing the noisy and borderline examples problem in imbalanced classification by natural neighbors and differential evolution. Knowl.-Based Syst. 2021, 223, 107056.
  11. Quinlan, J. C4.5: Programs for Machine Learning; Morgan Kaufmann: San Francisco, CA, USA, 2014.
  12. Sáez, J.A.; Luengo, J.; Herrera, F. Predicting noise filtering efficacy with data complexity measures for nearest neighbor classification. Pattern Recognit. 2013, 46, 355–364.
  13. Chaudhury, S.; Yamasaki, T. Robustness of adaptive neural network optimization under training noise. IEEE Access 2021, 9, 37039–37053.
  14. Cui, S.; Wang, Y.; Yin, Y.; Cheng, T.; Wang, D.; Zhai, M. A cluster-based intelligence ensemble learning method for classification problems. Inf. Sci. 2021, 560, 386–409.
  15. Xia, Y.; Chen, K.; Yang, Y. Multi-label classification with weighted classifier selection and stacked ensemble. Inf. Sci. 2021, 557, 421–442.
  16. Lughofer, E.; Pratama, M.; Škrjanc, I. Online bagging of evolving fuzzy systems. Inf. Sci. 2021, 570, 16–33.
  17. Sun, J.; Lang, J.; Fujita, H.; Li, H. Imbalanced enterprise credit evaluation with DTE-SBD: Decision tree ensemble based on SMOTE and bagging with differentiated sampling rates. Inf. Sci. 2018, 425, 76–91.
  18. Jafarzadeh, H.; Mahdianpari, M.; Gill, E.; Mohammadimanesh, F.; Homayouni, S. Bagging and boosting ensemble classifiers for classification of multispectral, hyperspectral and polsar data: A comparative evaluation. Remote Sens. 2021, 13, 4405.
  19. Abellán, J.; Castellano, J.; Mantas, C. A new robust classifier on noise domains: Bagging of credal C4.5 trees. Complexity 2017, 2017, 9023970.
  20. Khoshgoftaar, T.; Van Hulse, J.; Napolitano, A. Comparing boosting and bagging techniques with noisy and imbalanced data. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2011, 41, 552–568.
  21. Wei, Y.; Gong, C.; Chen, S.; Liu, T.; Yang, J.; Tao, D. Harnessing side information for classification under label noise. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 3178–3192.
  22. Bootkrajang, J. A generalised label noise model for classification. In Proceedings of the 23rd European Symposium on Artificial Neural Networks, Bruges, Belgium, 22–23 April 2015; pp. 349–354.
  23. Garcia, L.P.F.; Lehmann, J.; de Carvalho, A.C.P.L.F.; Lorena, A.C. New label noise injection methods for the evaluation of noise filters. Knowl.-Based Syst. 2019, 163, 693–704.
  24. Bootkrajang, J.; Chaijaruwanich, J. Towards instance-dependent label noise-tolerant classification: A probabilistic approach. Pattern Anal. Appl. 2020, 23, 95–111.
  25. Du, J.; Cai, Z. Modelling class noise with symmetric and asymmetric distributions. In Proceedings of the 29th Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 2589–2595.
  26. Sáez, J.A.; Krawczyk, B.; Woźniak, M. On the influence of class noise in medical data classification: Treatment using noise filtering methods. Appl. Artif. Intell. 2016, 30, 590–609.
  27. Sluban, B.; Gamberger, D.; Lavrac, N. Ensemble-based noise detection: Noise ranking and visual performance evaluation. Data Min. Knowl. Discov. 2014, 28, 265–303.
  28. Garcia, L.P.F.; Lorena, A.C.; Matwin, S.; de Leon Ferreira de Carvalho, A.C.P. Ensembles of label noise filters: A ranking approach. Data Min. Knowl. Discov. 2016, 30, 1192–1216.
  29. Cohen, W. Fast effective rule induction. In Proceedings of the 12th International Conference on Machine Learning, Tahoe City, CA, USA, 9–12 July 1995; pp. 115–123.
  30. Frank, E.; Witten, I. Generating accurate rule sets without global optimization. In Proceedings of the 15th International Conference on Machine Learning, Madison, WI, USA, 24–27 July 1998; pp. 144–151.
  31. Rajeswari, S.; Suthendran, K. C5.0: Advanced Decision Tree (ADT) classification model for agricultural data analysis on cloud. Comput. Electron. Agric. 2019, 156, 530–539.
  32. Nettleton, D.; Orriols-Puig, A.; Fornells, A. A study of the effect of different types of noise on the precision of supervised learning techniques. Artif. Intell. Rev. 2010, 33, 275–306.
  33. Frenay, B.; Verleysen, M. Classification in the presence of label noise: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 845–869.
  34. Singh, P.; Sarkar, R.; Nasipuri, M. Significance of non-parametric statistical tests for comparison of classifiers over multiple datasets. Int. J. Comput. Sci. Math. 2016, 7, 410–442.
  35. Sáez, J.A.; Luengo, J.; Herrera, F. Evaluating the classifier behavior with noisy data considering performance and robustness: The Equalized Loss of Accuracy measure. Neurocomputing 2016, 176, 26–35.
  36. Ho, T.K.; Basu, M. Complexity measures of supervised classification problems. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 289–300.
  37. Gupta, S.; Gupta, A. Dealing with noise problem in machine learning data-sets: A systematic review. Procedia Comput. Sci. 2019, 161, 466–474.
  38. Zeng, S.; Duan, X.; Li, H.; Xiao, Z.; Wang, Z.; Feng, D. Regularized fuzzy discriminant analysis for hyperspectral image classification with noisy labels. IEEE Access 2019, 7, 108125–108136.
  39. Bootkrajang, J. A generalised label noise model for classification in the presence of annotation errors. Neurocomputing 2016, 192, 61–71.
  40. Yuan, W.; Guan, D.; Ma, T.; Khattak, A. Classification with class noises through probabilistic sampling. Inf. Fusion 2018, 41, 57–67.
  41. Adeli, E.; Thung, K.; An, L.; Wu, G.; Shi, F.; Wang, T.; Shen, D. Semi-supervised discriminative classification robust to sample-outliers and feature-noises. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 515–522.
  42. Koziarski, M.; Krawczyk, B.; Wozniak, M. Radial-Based oversampling for noisy imbalanced data classification. Neurocomputing 2019, 343, 19–33.
  43. Zhao, Z.; Chu, L.; Tao, D.; Pei, J. Classification with label noise: A Markov chain sampling framework. Data Min. Knowl. Discov. 2019, 33, 1468–1504.
  44. Shanthini, A.; Vinodhini, G.; Chandrasekaran, R.M.; Supraja, P. A taxonomy on impact of label noise and feature noise using machine learning techniques. Soft Comput. 2019, 23, 8597–8607.
  45. Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Pechenizkiy, O. Class noise and supervised learning in medical domains: The effect of feature extraction. In Proceedings of the 19th IEEE International Symposium on Computer-Based Medical Systems, Salt Lake City, UT, USA, 22–23 June 2006; pp. 708–713.
  46. Khoshgoftaar, T.M.; Hulse, J.V. Empirical case studies in attribute noise detection. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2009, 39, 379–388.
  47. Sculley, D.; Cormack, G.V. Filtering email spam in the presence of noisy user feedback. In Proceedings of the 5th Conference on Email and Anti-Spam, Mountain View, CA, USA, 21–22 August 2008; pp. 1–10.
  48. Bi, J.; Zhang, T. Support vector classification with input data uncertainty. In Proceedings of the Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2004; Volume 17, pp. 161–168.
  49. Liang, D.; Yi, B. Two-stage three-way enhanced technique for ensemble learning in inclusive policy text classification. Inf. Sci. 2021, 547, 271–288.
  50. Dong, X.; Yu, Z.; Cao, W.; Shi, Y.; Ma, Q. A survey on ensemble learning. Front. Comput. Sci. 2020, 14, 241–258.
  51. Moyano, J.; Gibaja, E.; Cios, K.; Ventura, S. Review of ensembles of multi-label classifiers: Models, experimental study and prospects. Inf. Fusion 2018, 44, 33–45.
  52. Singhal, Y.; Jain, A.; Batra, S.; Varshney, Y.; Rathi, M. Review of bagging and boosting classification performance on unbalanced binary classification. In Proceedings of the 8th International Advance Computing Conference, Greater Noida, India, 14–15 December 2018; pp. 338–343.
  53. Pakrashi, A.; Namee, B.M. KalmanTune: A Kalman filter based tuning method to make boosted ensembles robust to class-label noise. IEEE Access 2020, 8, 145887–145897.
  54. Baldomero-Naranjo, M.; Martínez-Merino, L.; Rodríguez-Chía, A. A robust SVM-based approach with feature selection and outliers detection for classification problems. Expert Syst. Appl. 2021, 178, 115017.
  55. Dietterich, T. Experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Mach. Learn. 2000, 40, 139–157.
  56. Zhang, D.; Zhou, X.; Leung, S.; Zheng, J. Vertical bagging decision trees model for credit scoring. Expert Syst. Appl. 2010, 37, 7838–7843.
  57. Cherif, I.L.; Kortebi, A. On using extreme gradient boosting (XGBoost) machine learning algorithm for home network traffic classification. In Proceedings of the 2019 Wireless Days, Manchester, UK, 24–26 April 2019; p. 8734193.
  58. Hansch, R. Handbook of Random Forests: Theory and Applications for Remote Sensing; World Scientific Publishing: Singapore, 2018.
  59. Quinlan, J. Induction of decision trees. Mach. Learn. 1986, 1, 81–106.
  60. Grunwald, P.; Roos, T. Minimum description length revisited. Int. J. Math. Ind. 2019, 11, 1930001.
  61. Baringhaus, L.; Gaigall, D. Efficiency comparison of the Wilcoxon tests in paired and independent survey samples. Metrika 2018, 81, 891–930.
  62. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
Figure 1. Main steps of bagging-based ensembles.
Figure 2. Distributions of robustness results (ELA) for each classification algorithm (C4.5, RIPPER, PART, and C5.0) at each noise level and borderline noise model (neighborwise and non-linearwise). (a) C4.5 (neighborwise); (b) RIPPER (neighborwise); (c) PART (neighborwise); (d) C5.0 (neighborwise); (e) C4.5 (non-linearwise); (f) RIPPER (non-linearwise); (g) PART (non-linearwise); (h) C5.0 (non-linearwise).
Table 1. Datasets used, along with their number of samples (sa), attributes (at), and classes (cl).

| Dataset | sa | at | cl | Dataset | sa | at | cl |
|---|---|---|---|---|---|---|---|
| balance | 625 | 4 | 3 | lsvt | 126 | 309 | 2 |
| banana | 5300 | 2 | 2 | miceprotein | 552 | 77 | 8 |
| banknote | 1372 | 4 | 2 | pageblocks | 5473 | 10 | 5 |
| biodeg | 1055 | 41 | 2 | parkinson | 195 | 22 | 2 |
| breast | 106 | 9 | 6 | pendigits | 10,992 | 16 | 10 |
| bupa | 345 | 6 | 2 | pima | 768 | 8 | 2 |
| climatemuq | 540 | 18 | 2 | seeds | 210 | 7 | 3 |
| column2C | 310 | 6 | 2 | segment | 2310 | 19 | 7 |
| column3C | 310 | 6 | 3 | sonar | 208 | 60 | 2 |
| energyheat | 768 | 8 | 37 | spectf | 267 | 44 | 2 |
| glass | 214 | 9 | 6 | transfusion | 748 | 4 | 2 |
| haberman | 306 | 3 | 2 | userkw | 403 | 5 | 4 |
| ionosphere | 351 | 34 | 2 | wdbc | 569 | 30 | 2 |
| iris | 150 | 4 | 3 | wine | 178 | 13 | 3 |
| landsat | 6435 | 36 | 6 | wisconsin | 683 | 9 | 2 |
| leaf | 340 | 14 | 30 | wpbc | 194 | 33 | 2 |
| letter | 20,000 | 16 | 26 | wqred | 1599 | 11 | 6 |
| libras | 360 | 90 | 15 | wqwhite | 4898 | 11 | 7 |
Table 2. Accuracy results of baseline and bagging classifiers with borderline label noise. Rows ACC give the average test accuracy, rows Best the number of datasets (out of 36) in which each version obtains the best result, and rows pWil the p-values of Wilcoxon's test.

Neighborwise borderline label noise:

| Method | 0% | 5% | 10% | 15% | 20% | 25% | 30% | 35% | 40% |
|---|---|---|---|---|---|---|---|---|---|
| ACC C4.5 | 0.8120 | 0.8120 | 0.8044 | 0.7906 | 0.7725 | 0.7540 | 0.7296 | 0.7030 | 0.6716 |
| ACC Bag-C4.5 | 0.8399 | 0.8398 | 0.8321 | 0.8233 | 0.8049 | 0.7850 | 0.7595 | 0.7344 | 0.7004 |
| Best C4.5 | 3 | 3 | 4 | 4 | 5 | 4 | 6 | 5 | 6 |
| Best Bag-C4.5 | 33 | 33 | 32 | 32 | 31 | 32 | 30 | 31 | 30 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ACC RIPPER | 0.7990 | 0.7926 | 0.7854 | 0.7839 | 0.7647 | 0.7523 | 0.7285 | 0.7015 | 0.6766 |
| ACC Bag-RIPPER | 0.8301 | 0.8253 | 0.8239 | 0.8192 | 0.8072 | 0.7846 | 0.7621 | 0.7388 | 0.7093 |
| Best RIPPER | 6 | 5 | 4 | 4 | 2 | 6 | 6 | 6 | 8 |
| Best Bag-RIPPER | 30 | 32 | 32 | 33 | 34 | 30 | 30 | 30 | 28 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ACC PART | 0.8136 | 0.8080 | 0.8009 | 0.7846 | 0.7677 | 0.7494 | 0.7207 | 0.6974 | 0.6644 |
| ACC Bag-PART | 0.8428 | 0.8373 | 0.8338 | 0.8229 | 0.8106 | 0.7880 | 0.7641 | 0.7329 | 0.7040 |
| Best PART | 6 | 3 | 4 | 0 | 3 | 6 | 3 | 7 | 4 |
| Best Bag-PART | 30 | 34 | 32 | 36 | 33 | 30 | 33 | 29 | 32 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ACC C5.0 | 0.8135 | 0.8117 | 0.8075 | 0.7951 | 0.7790 | 0.7524 | 0.7315 | 0.7083 | 0.6815 |
| ACC Bag-C5.0 | 0.8371 | 0.8368 | 0.8320 | 0.8229 | 0.8068 | 0.7868 | 0.7589 | 0.7326 | 0.7025 |
| Best C5.0 | 5 | 6 | 4 | 1 | 5 | 7 | 5 | 7 | 9 |
| Best Bag-C5.0 | 31 | 30 | 32 | 35 | 31 | 29 | 31 | 29 | 27 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |

Non-linearwise borderline label noise:

| Method | 0% | 5% | 10% | 15% | 20% | 25% | 30% | 35% | 40% |
|---|---|---|---|---|---|---|---|---|---|
| ACC C4.5 | 0.8120 | 0.8058 | 0.7994 | 0.7828 | 0.7663 | 0.7449 | 0.7200 | 0.6963 | 0.6681 |
| ACC Bag-C4.5 | 0.8399 | 0.8344 | 0.8238 | 0.8102 | 0.7949 | 0.7779 | 0.7522 | 0.7291 | 0.7032 |
| Best C4.5 | 3 | 5 | 4 | 5 | 8 | 2 | 6 | 6 | 6 |
| Best Bag-C4.5 | 33 | 31 | 32 | 31 | 28 | 34 | 30 | 30 | 30 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ACC RIPPER | 0.7990 | 0.7943 | 0.7845 | 0.7731 | 0.7590 | 0.7435 | 0.7236 | 0.7010 | 0.6851 |
| ACC Bag-RIPPER | 0.8301 | 0.8253 | 0.8167 | 0.8071 | 0.7940 | 0.7750 | 0.7604 | 0.7367 | 0.7124 |
| Best RIPPER | 6 | 6 | 3 | 4 | 5 | 6 | 4 | 5 | 7 |
| Best Bag-RIPPER | 30 | 30 | 34 | 34 | 31 | 30 | 33 | 32 | 30 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ACC PART | 0.8136 | 0.8071 | 0.7976 | 0.7826 | 0.7696 | 0.7471 | 0.7239 | 0.6907 | 0.6632 |
| ACC Bag-PART | 0.8428 | 0.8385 | 0.8279 | 0.8140 | 0.7993 | 0.7835 | 0.7621 | 0.7305 | 0.7034 |
| Best PART | 6 | 4 | 3 | 5 | 6 | 3 | 3 | 4 | 8 |
| Best Bag-PART | 30 | 32 | 33 | 33 | 30 | 33 | 33 | 32 | 28 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ACC C5.0 | 0.8135 | 0.8122 | 0.7999 | 0.7830 | 0.7672 | 0.7425 | 0.7205 | 0.6988 | 0.6680 |
| ACC Bag-C5.0 | 0.8371 | 0.8321 | 0.8247 | 0.8132 | 0.7947 | 0.7736 | 0.7530 | 0.7270 | 0.7026 |
| Best C5.0 | 5 | 8 | 5 | 4 | 6 | 7 | 7 | 8 | 7 |
| Best Bag-C5.0 | 31 | 28 | 31 | 32 | 30 | 29 | 29 | 28 | 29 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
Table 3. Robustness results (ELA) of baseline and bagging classifiers with borderline label noise. Lower ELA values indicate greater robustness; rows Best and pWil are as in Table 2.

Neighborwise borderline label noise:

| Method | 0% | 5% | 10% | 15% | 20% | 25% | 30% | 35% | 40% |
|---|---|---|---|---|---|---|---|---|---|
| ELA C4.5 | 0.2613 | 0.2591 | 0.2681 | 0.2833 | 0.3052 | 0.3258 | 0.3537 | 0.3844 | 0.4215 |
| ELA Bag-C4.5 | 0.2114 | 0.2104 | 0.2195 | 0.2288 | 0.2493 | 0.2711 | 0.3003 | 0.3284 | 0.3673 |
| Best C4.5 | 3 | 3 | 3 | 4 | 5 | 5 | 5 | 4 | 5 |
| Best Bag-C4.5 | 33 | 33 | 33 | 32 | 31 | 31 | 31 | 32 | 31 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ELA RIPPER | 0.3015 | 0.3060 | 0.3154 | 0.3166 | 0.3396 | 0.3528 | 0.3819 | 0.4151 | 0.4413 |
| ELA Bag-RIPPER | 0.2350 | 0.2405 | 0.2407 | 0.2450 | 0.2586 | 0.2852 | 0.3105 | 0.3366 | 0.3700 |
| Best RIPPER | 6 | 4 | 4 | 4 | 2 | 6 | 6 | 4 | 7 |
| Best Bag-RIPPER | 30 | 32 | 32 | 32 | 34 | 30 | 30 | 32 | 29 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ELA PART | 0.2607 | 0.2659 | 0.2736 | 0.2920 | 0.3115 | 0.3316 | 0.3654 | 0.3926 | 0.4305 |
| ELA Bag-PART | 0.2075 | 0.2139 | 0.2168 | 0.2291 | 0.2409 | 0.2674 | 0.2944 | 0.3303 | 0.3618 |
| Best PART | 6 | 4 | 4 | 2 | 4 | 6 | 3 | 7 | 5 |
| Best Bag-PART | 30 | 32 | 32 | 34 | 32 | 30 | 33 | 29 | 31 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ELA C5.0 | 0.2571 | 0.2574 | 0.2623 | 0.2759 | 0.2952 | 0.3259 | 0.3496 | 0.3758 | 0.4066 |
| ELA Bag-C5.0 | 0.2177 | 0.2162 | 0.2216 | 0.2311 | 0.2488 | 0.2715 | 0.3035 | 0.3324 | 0.3668 |
| Best C5.0 | 5 | 7 | 4 | 2 | 5 | 4 | 4 | 8 | 8 |
| Best Bag-C5.0 | 31 | 29 | 32 | 34 | 31 | 32 | 32 | 28 | 28 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |

Non-linearwise borderline label noise:

| Method | 0% | 5% | 10% | 15% | 20% | 25% | 30% | 35% | 40% |
|---|---|---|---|---|---|---|---|---|---|
| ELA C4.5 | 0.2613 | 0.2676 | 0.2736 | 0.2934 | 0.3131 | 0.3384 | 0.3670 | 0.3968 | 0.4285 |
| ELA Bag-C4.5 | 0.2114 | 0.2182 | 0.2301 | 0.2454 | 0.2631 | 0.2816 | 0.3112 | 0.3374 | 0.3658 |
| Best C4.5 | 3 | 4 | 4 | 3 | 6 | 2 | 5 | 5 | 6 |
| Best Bag-C4.5 | 33 | 32 | 32 | 33 | 30 | 34 | 31 | 31 | 30 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ELA RIPPER | 0.3015 | 0.3062 | 0.3167 | 0.3329 | 0.3484 | 0.3675 | 0.3899 | 0.4177 | 0.4334 |
| ELA Bag-RIPPER | 0.2350 | 0.2402 | 0.2502 | 0.2624 | 0.2763 | 0.2987 | 0.3141 | 0.3418 | 0.3701 |
| Best RIPPER | 6 | 5 | 3 | 1 | 4 | 5 | 1 | 4 | 5 |
| Best Bag-RIPPER | 30 | 31 | 33 | 35 | 32 | 31 | 35 | 32 | 31 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ELA PART | 0.2607 | 0.2671 | 0.2777 | 0.2950 | 0.3093 | 0.3372 | 0.3636 | 0.4025 | 0.4339 |
| ELA Bag-PART | 0.2075 | 0.2122 | 0.2246 | 0.2406 | 0.2568 | 0.2739 | 0.2979 | 0.3349 | 0.3655 |
| Best PART | 6 | 3 | 4 | 6 | 6 | 4 | 3 | 3 | 6 |
| Best Bag-PART | 30 | 33 | 32 | 30 | 30 | 32 | 33 | 33 | 30 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| ELA C5.0 | 0.2571 | 0.2569 | 0.2716 | 0.2910 | 0.3091 | 0.3390 | 0.3650 | 0.3906 | 0.4256 |
| ELA Bag-C5.0 | 0.2177 | 0.2226 | 0.2311 | 0.2437 | 0.2649 | 0.2894 | 0.3124 | 0.3415 | 0.3691 |
| Best C5.0 | 5 | 7 | 3 | 2 | 4 | 5 | 7 | 7 | 7 |
| Best Bag-C5.0 | 31 | 29 | 33 | 34 | 32 | 31 | 29 | 29 | 29 |
| pWil | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

