
Attribute Value Weighted Average of One-Dependence Estimators

1 School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China
2 School of Mechanical Engineering and Electronic Information, Wuhan University of Engineering Science, Wuhan 430200, China
3 Department of Computer Science, China University of Geosciences, Wuhan 430074, China
4 Hubei Key Laboratory of Intelligent Geo-Information Processing, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Entropy 2017, 19(9), 501; https://doi.org/10.3390/e19090501
Submission received: 8 July 2017 / Revised: 16 August 2017 / Accepted: 11 September 2017 / Published: 16 September 2017

Abstract:
Of the numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers which utilize one-dependence estimators (ODEs) have been shown to approximate the ground-truth attribute dependencies while keeping the probability estimation effective, thus leading to excellent performance. In previous studies, ODEs were exploited directly in a simple way. For example, averaged one-dependence estimators (AODE) weaken the attribute independence assumption by directly averaging a constrained class of classifiers. However, all one-dependence estimators in AODE have the same weights and are treated equally. In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). AVWAODE assigns discriminative weights to different ODEs by computing the correlation between each root attribute value and the class. Our approach uses two different attribute value weighting measures: the Kullback–Leibler (KL) measure and the information gain (IG) measure; thus two different versions are created, simply denoted by AVWAODE-KL and AVWAODE-IG, respectively. We experimentally tested them using a collection of 36 University of California at Irvine (UCI) datasets and found that they both achieved better performance than some other state-of-the-art Bayesian classifiers used for comparison.

1. Introduction

A Bayesian network (BN) is a graphical model that encodes probabilistic relationships among variables, where nodes represent attributes, edges represent the relationships between the attributes, and directed arcs can be used to explicitly represent the joint probability distribution. Bayesian networks are often used for classification. Unfortunately, it has been proved that learning the optimal BN structure from an arbitrary BN search space of discrete variables is a non-deterministic polynomial-time hard (NP-hard) problem [1]. One of the most effective BN classifiers, in the sense that its predictive performance is competitive with state-of-the-art classifiers [2], is the so-called naive Bayes (NB). NB is the simplest form of BN classifier. It is learned from labeled training instances and is driven by the conditional independence assumption that all attributes are fully independent of each other given the class. The NB classifier has the simple structure shown in Figure 1, where every attribute $A_i$ (a leaf in the network) is independent of the rest of the attributes given the state of the class variable $C$ (the root of the network). However, this conditional independence assumption is often violated in real-world applications, which degrades the performance of NB on data with complex attribute dependencies [3,4].
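Concretely, given a test instance $x = \langle a_1, a_2, \ldots, a_m \rangle$, the classification rule that follows from this independence assumption is the familiar NB rule
$$ c(x) = \arg\max_{c \in C} P(c) \prod_{i=1}^{m} P(a_i \mid c), $$
which Equation (1) in Section 2 generalizes by giving each attribute one additional parent.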
In order to effectively mitigate the independence assumption of the NB classifier, appropriate structures and approaches are needed to manipulate independence assertions [5,6,7]. Among the numerous Bayesian learning approaches, semi-naive Bayesian classifiers which utilize one-dependence estimators (ODEs) have been shown to approximate the ground-truth attribute dependencies while keeping the probability estimation effective, thus leading to excellent performance. Representative approaches include tree-augmented naive Bayesian (TAN) [8], hidden naive Bayes (HNB) [9], aggregating one-dependence estimators (AODE) [10], and the weighted average of one-dependence estimators (WAODE) [11].
To the best of our knowledge, few studies have focused on attribute value weighting in terms of Bayesian networks [12]. The attribute value weighting approach is a more fine-grained weighting approach. Consider, for example, a pregnancy-related disease classification problem and the importance of the gender attribute: the value male clearly has no effect on the class value (a pregnancy-related disease), whereas the value female has a great effect on the class value. It is therefore interesting to study whether better performance can be achieved by combining attribute value weighting with ODEs. The resulting model, which combines attribute value weighting with ODEs, inherits the effectiveness of ODEs; meanwhile, this approach is a new paradigm of weighting in classification learning.
In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). Our AVWAODE approach is well balanced between the ground-truth dependencies approximation and the effectiveness of probability estimation. We extend the current classification learning utilizing ODEs to the next level by introducing a new weight space to the problem. It is a new approach to calculating discriminative weights for ODEs using the filter approach. We assume that the significance of each ODE can be decomposed and that, in the structure of a highly predictive ODE, the root attribute value should be strongly associated with the class. Based on these assumptions, we assign a different weight to each ODE by computing the correlation between the root attribute value and the class. In order to correctly measure the amount of correlation between the root attribute value and the class, our approach uses two different attribute value weighting measures: the Kullback–Leibler (KL) measure and the information gain (IG) measure; thus two different versions are created, simply denoted by AVWAODE-KL and AVWAODE-IG, respectively. We conducted two groups of extensive empirical comparisons with many state-of-the-art Bayesian classifiers using 36 University of California at Irvine (UCI) datasets published on the main website of the WEKA platform [13,14]. Extensive experiments show that both AVWAODE-KL and AVWAODE-IG achieve better performance than these state-of-the-art Bayesian classifiers used for comparison.
The rest of the paper is organized as follows. In Section 2, we summarize the existing Bayesian networks learning utilizing ODEs. In Section 3, we propose our AVWAODE approach. In Section 4, we describe the experimental setup and results in detail. In Section 5, we draw our conclusions and outline the main directions for our future work.

2. Related Work

Learning Bayesian networks from data is a rapidly growing field of research that has seen a great deal of activity in recent years [15,16]. The notion of x-dependence estimators (x-DE) was proposed by Sahami [17]; x-DE allows each attribute to depend on at most x other attributes in addition to the class. Webb et al. [18] defined the averaged n-dependence estimators (AnDE) family of algorithms. AnDE further relaxes the independence assumption. Extensive experimental evaluation shows that the bias-variance trade-off for averaged 2-dependence estimators (A2DE) results in strong predictive accuracy over a wide range of data sets [18]. A2DE proves to be a computationally tractable version of AnDE that delivers strong classification accuracy for large data without any parameter tuning.
To maintain efficiency, it appears desirable to restrict Bayesian network classifiers to ODEs [19]. Yang et al. [20] presented a comparative study of linear combination schemes for superparent-one-dependence estimators. Approaches which utilize ODEs have demonstrated remarkable performance [21,22]. Unlike naive Bayes, which ignores attribute dependencies entirely, and general Bayesian networks, which model dependencies with maximum flexibility, ODEs exploit attribute dependencies of moderate order. By allowing one-order attribute dependencies, ODEs have been shown to approximate the ground-truth attribute dependencies whilst keeping the probability estimation effective, thus leading to excellent performance. In ODEs, directed arcs can be used to explicitly represent the joint probability distribution. The class variable is the parent of every attribute; meanwhile, every attribute has one other attribute node as its parent, and each attribute is independent of its nondescendants given the state of its parents. Using the independence statements encoded in ODEs, the joint probability distribution is uniquely determined by these local conditional distributions. These independencies are then used to reduce the number of parameters and to characterize the joint probability distribution.
ODEs restrict each attribute to depend on only one parent attribute in addition to the class. If we assume that $A_1, A_2, \ldots, A_m$ are the m attributes, then a test instance x can be represented by an attribute value vector $\langle a_1, a_2, \ldots, a_m \rangle$, where $a_i$ denotes the value of the i-th attribute $A_i$; under this one-dependence assumption, Equation (1) is used to classify the test instance x:
$$ c(x) = \arg\max_{c \in C} P(c) \prod_{i=1}^{m} P(a_i \mid a_{ip}, c), \qquad (1) $$
where $a_{ip}$ is the value of $A_{ip}$, the attribute parent of $A_i$.
Friedman et al. [8] proposed the tree-augmented naive Bayesian (TAN) classifier. The TAN algorithm computes the conditional mutual information between each pair of attributes given the class, builds a maximum weighted spanning tree in which the vertices are the attributes, and finally transforms the resulting undirected tree into a directed one by choosing a root variable and setting the direction of all edges to be outward from it. In TAN, the class variable has no parents and each attribute can depend on only one other attribute parent in addition to the class; each attribute can have at most one augmenting edge pointing to it. An example of TAN is shown in Figure 2.
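As an illustration of the structure-learning step just described, the following Python sketch estimates the conditional mutual information $I(A_i; A_j \mid C)$ from frequency counts and grows a maximum weighted spanning tree with Prim's algorithm; the data representation (a NumPy matrix X of discrete attribute codes and a label vector y) and all function names are our own illustration, not code from [8].

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(X, y, i, j):
    """Estimate I(A_i; A_j | C) from a discrete attribute matrix X and label vector y."""
    n = len(y)
    cmi = 0.0
    for c in set(y):
        mask = (y == c)
        p_c = mask.sum() / n
        xi, xj = X[mask, i], X[mask, j]
        joint = Counter(zip(xi, xj))            # frequencies of (a_i, a_j) within class c
        pi, pj = Counter(xi), Counter(xj)       # marginal frequencies within class c
        m = mask.sum()
        for (a, b), f in joint.items():
            p_ab = f / m                        # P(a_i, a_j | c)
            cmi += p_c * p_ab * np.log(p_ab * m * m / (pi[a] * pj[b]))
    return cmi

def tan_tree(X, y):
    """Return parent[i] for each attribute under a maximum weighted spanning tree (Prim)."""
    m = X.shape[1]
    weights = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            weights[i, j] = weights[j, i] = conditional_mutual_information(X, y, i, j)
    parent = {0: None}                          # attribute 0 is chosen as the root
    while len(parent) < m:
        best = max(((i, j, weights[i, j]) for i in parent for j in range(m) if j not in parent),
                   key=lambda t: t[2])
        parent[best[1]] = best[0]               # direct every edge away from the root
    return parent
```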
Jiang et al. [9] proposed a structure extension-based algorithm (HNB), which is based on ODEs; each attribute in HNB has a hidden parent that is a mixture of the weighted influences of all other attributes. HNB uses conditional mutual information to estimate the weights directly from the data and thus establishes a model that considers the influences of all the attributes while avoiding structure learning of intractable computational complexity. The structure of HNB is shown in Figure 3.
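Up to notation, the hidden parent $hp_i$ in [9] is a conditional-mutual-information-weighted mixture of the other attributes:
$$ P(a_i \mid hp_i, c) = \sum_{j=1, j \neq i}^{m} W_{ij}\, P(a_i \mid a_j, c), \qquad W_{ij} = \frac{I_P(A_i; A_j \mid C)}{\sum_{j=1, j \neq i}^{m} I_P(A_i; A_j \mid C)}. $$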
The approach of AODE [10] is to select a restricted class of ODEs and to aggregate the predictions of all qualified classifiers in which a single attribute is the parent of all other attributes. In order to avoid including models for which the base probability estimates are inaccurate, the AODE approach excludes models where the training data contain fewer than 30 instances with the value $a_i$ of the parent attribute $A_i$. An example of the aggregate of AODE is shown in Figure 4.
AODE classifies a test instance x using Equation (2).
$$ c(x) = \arg\max_{c \in C} \left( \frac{\sum_{i=1 \,\wedge\, F(a_i) \geq 30}^{m} P(a_i, c) \prod_{k=1, k \neq i}^{m} P(a_k \mid a_i, c)}{numParent} \right), \qquad (2) $$
where $F(a_i)$ is the frequency of training instances having attribute value $a_i$ and is used to enforce the frequency limit; $numParent$ is the number of root attributes $A_i$ for which the training data contain at least 30 instances with the value $a_i$.
However, all ODEs in AODE have the same weights and are treated equally. WAODE [11] was proposed to assign different weights to different ODEs. In WAODE, a special ODE is built for each attribute. Namely, each attribute is set as the root attribute once and each special ODE is assigned a different weight. An example of the aggregate of WAODE is shown in Figure 5.
WAODE classifies a test instance x using the following Equation (3).
$$ c(x) = \arg\max_{c \in C} \left( \frac{\sum_{i=1}^{m} W_i \, P(a_i, c) \prod_{k=1, k \neq i}^{m} P(a_k \mid a_i, c)}{\sum_{i=1}^{m} W_i} \right), \qquad (3) $$
where $W_i$ is the weight of the ODE in which the attribute $A_i$ is set as the root attribute, i.e., the parent of all other attributes. The WAODE approach assigns a weight to each ODE according to the relationship between the root attribute and the class; it uses the mutual information $I_P(A_i; C)$ between the root attribute $A_i$ and the class variable $C$ to define the weight $W_i$:
$$ W_i = I_P(A_i; C) = \sum_{a_i, c} P(a_i, c) \log \frac{P(a_i, c)}{P(a_i) P(c)}. \qquad (4) $$
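A minimal sketch of Equation (4), assuming the training data are stored as a NumPy matrix X of discrete attribute codes and a label vector y (the variable and function names are ours):

```python
import numpy as np
from collections import Counter

def waode_weight(X, y, i):
    """Mutual information I(A_i; C) between root attribute A_i and the class (Equation (4))."""
    n = len(y)
    joint = Counter(zip(X[:, i], y))            # frequencies of (a_i, c)
    freq_a = Counter(X[:, i])                   # frequencies of a_i
    freq_c = Counter(y)                         # frequencies of c
    w = 0.0
    for (a, c), f in joint.items():
        w += (f / n) * np.log((f / n) / ((freq_a[a] / n) * (freq_c[c] / n)))
    return w
```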

3. AVWAODE Approach

The remarkable performance of semi-naive Bayesian classifiers utilizing ODEs suggests that ODEs are well balanced between the attribute dependencies assumption and the effectiveness of probability estimation. To the best of our knowledge, few studies have focused on attribute value weighting in terms of Bayesian networks [12]. Each attribute takes on a number of discrete values, and each discrete value has different importance with respect to the target. Therefore, the attribute value weighting approach is a more fine-grained weighting approach. It is interesting to study whether a better performance can be achieved by exploiting the ODEs while using the attribute value weighting approach. The resulting model which combines attribute value weighting with the ODEs inherits the effectiveness of ODEs; meanwhile, this approach is a new paradigm of weighting approach in Bayesian classification learning.
In this study, we extend the current classification learning utilizing ODEs to the next level by introducing a new weight space to the problem. Therefore, this study proposes a new dimension of weighting approach by assigning weights according to attribute values. We propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). Our AVWAODE approach is well balanced between the ground-truth dependencies approximation and the effectiveness of probability estimation. It is a new method for calculating discriminative weights for ODEs using the filter approach. The basic assumption of our AVWAODE approach is that, in the structure of a highly predictive ODE, the root attribute value should be strongly associated with the class: when a certain root attribute value is observed, it provides a certain amount of information about the class. The more information a root attribute value provides about the class, the more important that root attribute value becomes.
In AVWAODE, a special ODE is built for each attribute. Namely, each attribute is set as the root attribute once and each special ODE is assigned a different weight. Different from the existing WAODE approach which assigns different weights to different ODEs according to the relationship between the root attribute and the class, our AVWAODE approach assigns discriminative weights to different ODEs by computing the correlation between the root attribute value and the class. Since the weight of each special ODE is associated with the root attribute value, AVWAODE uses Equation (5) to classify a test instance x.
$$ c(x) = \arg\max_{c \in C} \left( \frac{\sum_{i=1}^{m} W_{i, a_i} \, P(a_i, c) \prod_{k=1, k \neq i}^{m} P(a_k \mid a_i, c)}{\sum_{i=1}^{m} W_{i, a_i}} \right), \qquad (5) $$
where $W_{i, a_i}$ is the weight of the ODE when the root attribute $A_i$ takes the value $a_i$. The base probabilities $P(a_i, c)$ and $P(a_k \mid a_i, c)$ are estimated using the m-estimate as follows:
$$ P(a_i, c) = \frac{F(a_i, c) + 1.0 / (n_i n_c)}{n + 1.0}, \qquad (6) $$
$$ P(a_k \mid a_i, c) = \frac{F(a_k, a_i, c) + 1.0 / n_k}{F(a_i, c) + 1.0}, \qquad (7) $$
where $F(\cdot)$ is the frequency with which a combination of terms appears in the training data, $n$ is the number of training instances, $n_i$ is the number of values of the root attribute $A_i$, $n_k$ is the number of values of the leaf attribute $A_k$, and $n_c$ is the number of classes.
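Equations (6) and (7) can be computed directly from frequency counts. A small sketch under the same data representation as before (our own function names):

```python
from collections import Counter

def m_estimates(X, y, i, k):
    """M-estimated P(a_i, c) and P(a_k | a_i, c) for root attribute i and leaf attribute k
    (Equations (6) and (7)); X is a discrete attribute matrix, y the class labels."""
    n = len(y)
    n_i = len(set(X[:, i]))                     # number of values of the root attribute A_i
    n_k = len(set(X[:, k]))                     # number of values of the leaf attribute A_k
    n_c = len(set(y))                           # number of classes
    F_ic = Counter(zip(X[:, i], y))             # F(a_i, c)
    F_kic = Counter(zip(X[:, k], X[:, i], y))   # F(a_k, a_i, c)
    P_ic = {key: (f + 1.0 / (n_i * n_c)) / (n + 1.0) for key, f in F_ic.items()}
    P_k_ic = {(a_k, a_i, c): (f + 1.0 / n_k) / (F_ic[(a_i, c)] + 1.0)
              for (a_k, a_i, c), f in F_kic.items()}
    return P_ic, P_k_ic
```

Only combinations observed in the training data receive dictionary entries here; a full implementation would also return the smoothed defaults for unseen combinations implied by Equations (6) and (7).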
Now, the only question left to answer is how to use a proper measure which can correctly quantify the amount of correlation between the root attribute value and the class. To address this question, our approach uses two different attribute value weighting measures: the Kullback–Leibler (KL) measure and the information gain (IG) measure, and thus two different versions are created, which are simply denoted by AVWAODE-KL and AVWAODE-IG, respectively. Note that KL and IG are two widely used measures already presented in the existing literature; we just redefine them following our proposed methods.

3.1. AVWAODE-KL

The first candidate measure for our approach is the KL measure, a widely used method for calculating the correlation between the class variable and an attribute value [23]. The KL measure, originally proposed in [24], calculates the distance of the posterior class distribution from the prior distribution; the main idea is that it becomes more reliable as the frequency of a specific attribute value increases. It can be viewed as the average mutual information between the class events $c$ and the attribute value $a_i$, with the expectation taken with respect to the a posteriori probability distribution of $C$ [23]. The KL measure uses Equation (8) to quantify the information content of an attribute value $a_i$:
$$ KL(C \mid a_i) = \sum_{c} P(c \mid a_i) \log \frac{P(c \mid a_i)}{P(c)}, \qquad (8) $$
where $KL(C \mid a_i)$ represents the correlation between the attribute value and the class.
Intuitively, if the root attribute value $a_i$ receives a higher KL value, the corresponding ODE deserves a higher weight. Therefore, we use $KL(C \mid a_i)$ between the root attribute value $a_i$ and the class $C$ to define the weight $W_{i, a_i}$ of the ODE as:
$$ W_{i, a_i} = KL(C \mid a_i) = \sum_{c} P(c \mid a_i) \log \frac{P(c \mid a_i)}{P(c)}, \qquad (9) $$
where $a_i$ is the value of the root attribute, which is the parent of all other attributes. An example of AVWAODE-KL is shown in Figure 6.
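A small sketch of the weight in Equation (9), again assuming X and y are NumPy arrays of discrete codes and that the value $a_i$ occurs at least once in the training data:

```python
import numpy as np
from collections import Counter

def kl_weight(X, y, i, a_i):
    """KL(C | a_i): weight of the ODE whose root attribute A_i takes the value a_i (Equation (9))."""
    n = len(y)
    class_freq = Counter(y)                     # F(c)
    y_given_a = y[X[:, i] == a_i]               # instances with A_i = a_i
    m = len(y_given_a)
    weight = 0.0
    for c, f in Counter(y_given_a).items():
        p_c_given_a = f / m                     # P(c | a_i)
        p_c = class_freq[c] / n                 # P(c)
        weight += p_c_given_a * np.log(p_c_given_a / p_c)
    return weight
```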
The detailed learning algorithm for our AVWAODE approach using the KL measure (AVWAODE-KL) is described briefly as Algorithm 1.
Algorithm 1: AVWAODE-KL (D, x)
Input: A training dataset D and a test instance x
Output: Class label $c(x)$ of x
   1. For each class value $c$
   2.     Compute $P(c)$ from D
   3.     For each attribute value $a_i$
   4.       Compute $F(a_i, c)$ from D
   5.       Compute $P(a_i, c)$ by Equation (6)
   6.       For each attribute value $a_k$ ($k \neq i$)
   7.          Compute $F(a_k, a_i, c)$ from D
   8.          Compute $P(a_k \mid a_i, c)$ by Equation (7)
   9. For each attribute value $a_i$
   10.     Compute $F(a_i)$ from D
   11.     Compute $W_{i, a_i}$ by Equation (9)
   12. Estimate the class label $c(x)$ for x using Equation (5)
   13. Return the class label $c(x)$ for x
Compared to the well-known AODE, AVWAODE-KL requires some additional training time to compute the weights of all the attribute values. According to the algorithm given above, the additional training time complexity for computing these weights is only $O(n_c m v)$, where $n_c$ is the number of classes, $m$ is the number of attributes, and $v$ is the average number of values per attribute. Therefore, AVWAODE-KL has a training time complexity of $O(n m^2 + n_c m v)$, where $n$ is the number of training instances. Note that $n_c v$ is generally much less than $n m$ in practice, so if we keep only the highest-order term, the training time complexity is still $O(n m^2)$, the same as AODE. In addition, the classification time complexity of AVWAODE-KL is $O(n_c m^2)$, which is also the same as AODE. All of this means that AVWAODE-KL is simple and efficient.
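To make the prediction step concrete, the sketch below applies Equation (5) once the probability tables and weights from steps 1–11 of Algorithm 1 are available; the dictionary-based representation of these tables is our own simplification.

```python
import numpy as np

def avwaode_classify(x, classes, P_ic, P_k_ic, W):
    """Apply Equation (5) to an instance x = [a_1, ..., a_m], given
    P_ic[(i, a_i, c)]          : P(a_i, c), Equation (6)
    P_k_ic[(k, a_k, i, a_i, c)]: P(a_k | a_i, c), Equation (7)
    W[(i, a_i)]                : ODE weight, Equation (9) or (12)."""
    m = len(x)
    denominator = sum(W[(i, x[i])] for i in range(m))
    best_class, best_score = None, -np.inf
    for c in classes:
        numerator = 0.0
        for i in range(m):                      # one ODE per root attribute A_i
            term = W[(i, x[i])] * P_ic[(i, x[i], c)]
            for k in range(m):
                if k != i:
                    term *= P_k_ic[(k, x[k], i, x[i], c)]
            numerator += term
        score = numerator / denominator
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

Since the denominator of Equation (5) does not depend on the class, it does not change the argmax; it is kept here only to mirror the equation.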

3.2. AVWAODE-IG

The second candidate measure for our approach is the IG measure, which is widely used for calculating the importance of attributes [5,25]. The C4.5 approach [2] uses the IG measure to construct a decision tree for classifying objects. The information gain used in C4.5 [2] is defined as:
$$ H(C) - H(C \mid A_i) = \sum_{a_i} P(a_i) \sum_{c} P(c \mid a_i) \log P(c \mid a_i) - \sum_{c} P(c) \log P(c). \qquad (10) $$
Equation (10) computes the difference between the entropy of the a priori distribution and the entropy of the a posteriori distribution of the class, and the C4.5 approach uses this difference as the metric for choosing the branching node. The value of the IG measure represents the significance of an attribute: the larger the value, the greater the impact of the attribute when classifying the test instance. The correlation between the attribute $A_i$ and the class $C$ can thus be measured by Equation (10). However, since we require discriminative power for an individual attribute value, we cannot use Equation (10) directly. The IG measure instead calculates the correlation between an attribute value $a_i$ and the class variable $C$ by:
$$ IG(C \mid a_i) = \sum_{c} P(c \mid a_i) \log P(c \mid a_i) - \sum_{c} P(c) \log P(c). \qquad (11) $$
Intuitively, if the root attribute value $a_i$ receives a higher IG value, the corresponding ODE deserves a higher weight. Therefore, we use $IG(C \mid a_i)$ between the root attribute value $a_i$ and the class $C$ to define the weight $W_{i, a_i}$ of the ODE as:
$$ W_{i, a_i} = IG(C \mid a_i) = \sum_{c} P(c \mid a_i) \log P(c \mid a_i) - \sum_{c} P(c) \log P(c), \qquad (12) $$
where $a_i$ is the value of the root attribute, which is the parent of all other attributes. An example of AVWAODE-IG is shown in Figure 7.
The training time complexity and the classification time complexity of AVWAODE-IG are the same as those of AVWAODE-KL. In order to save space, we do not repeat the detailed analysis here.
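Mirroring the KL sketch in Section 3.1, the weight in Equation (12) can be computed as follows (same assumptions about the data representation):

```python
import numpy as np
from collections import Counter

def ig_weight(X, y, i, a_i):
    """IG(C | a_i): weight of the ODE whose root attribute A_i takes the value a_i (Equation (12))."""
    n = len(y)
    y_given_a = y[X[:, i] == a_i]               # instances with A_i = a_i
    m = len(y_given_a)
    posterior_term = sum((f / m) * np.log(f / m) for f in Counter(y_given_a).values())
    prior_term = sum((f / n) * np.log(f / n) for f in Counter(y).values())
    return posterior_term - prior_term
```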

4. Experiments and Results

In order to validate the classification performance of AVWAODE, we ran our experiments on all 36 UCI datasets [26] published on the main website of the WEKA platform [13,14]. In our experiments, missing values are replaced with the modes (for nominal attributes) or means (for numeric attributes) of the corresponding attribute values from the available data. Numeric attribute values are discretized using the unsupervised ten-bin discretization implemented in the WEKA platform. Additionally, we manually delete three useless attributes: the attribute “Hospital Number” in the dataset “colic.ORIG”, the attribute “instance name” in the dataset “splice”, and the attribute “animal” in the dataset “zoo”.
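The preprocessing described above was carried out inside the WEKA platform; the following scikit-learn sketch is only a rough equivalent (our own approximation, not the pipeline actually used), replacing missing values by the per-attribute mean or mode and applying unsupervised equal-width ten-bin discretization to the numeric attributes:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import KBinsDiscretizer

def preprocess(X_numeric, X_nominal):
    """Mean-impute and ten-bin discretize numeric attributes; mode-impute nominal attributes."""
    X_numeric = SimpleImputer(strategy="mean").fit_transform(X_numeric)
    X_numeric = KBinsDiscretizer(n_bins=10, encode="ordinal",
                                 strategy="uniform").fit_transform(X_numeric)
    X_nominal = SimpleImputer(strategy="most_frequent").fit_transform(X_nominal)
    return np.hstack([X_numeric, X_nominal])
```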
We compare the performance of AVWAODE with some state-of-the-art Bayesian classifiers: TAN [8], HNB [9], AODE [10], WAODE [11], NB [2] and A2DE [18]. We conduct extensive empirical comparisons in two groups with these state-of-the-art models in terms of the classification accuracy. In the first group, we compared AVWAODE-KL with all of these models, and in the second group, we compared AVWAODE-IG with these competitors.
Table 1 and Table 2 show the detailed comparison results in terms of classification accuracy. All of the classification accuracy estimates were obtained by averaging the results from 10 separate runs of stratified ten-fold cross-validation. We then conducted corrected paired two-tailed t-tests at the 95% significance level [27] in order to compare our AVWAODE-KL and AVWAODE-IG with each of their competitors: TAN, HNB, AODE, WAODE, NB, and A2DE. The averages and the Win/Tie/Lose (W/T/L) values are summarized at the bottom of the tables. Each W/T/L entry in the tables means that, compared to its competitor, AVWAODE-KL or AVWAODE-IG wins on W datasets, ties on T datasets, and loses on L datasets. The average (arithmetic mean) of each algorithm across all datasets provides a gross indicator of relative performance in addition to the other statistics.
Then, we employ a corrected paired two-tailed t-test at the p = 0.05 significance level [27] to compare each pair of algorithms. Table 3 and Table 4 show the summary test results with regard to AVWAODE-KL and AVWAODE-IG, respectively. In these tables, for each entry i (j), i is the number of datasets on which the algorithm in the column achieves higher classification accuracy than the algorithm in the corresponding row, and j is the number of datasets on which the algorithm in the column achieves statistically significant wins at the p = 0.05 significance level [27] over the algorithm in the corresponding row.
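For completeness, the corrected paired t-test of [27] over the k = 100 paired accuracy estimates produced by ten runs of ten-fold cross-validation can be sketched as follows; the 1/k + n_test/n_train variance correction (with a test-to-train ratio of 1/9 for ten-fold cross-validation) follows the usual corrected resampled t-test, and the function name is ours:

```python
import numpy as np
from scipy import stats

def corrected_paired_t_test(acc_a, acc_b, test_train_ratio=1.0 / 9.0):
    """Corrected resampled paired t-test [27] on k paired accuracy estimates
    (k = 100 for ten runs of stratified ten-fold cross-validation)."""
    d = np.asarray(acc_a) - np.asarray(acc_b)   # paired differences
    k = len(d)
    variance = np.var(d, ddof=1)
    t = d.mean() / np.sqrt((1.0 / k + test_train_ratio) * variance)
    p = 2.0 * stats.t.sf(abs(t), df=k - 1)      # two-tailed p-value
    return t, p
```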
Table 5 and Table 6 show the ranking test results with regard to AVWAODE-KL and AVWAODE-IG, respectively. In these tables, the first column is the difference between the total number of wins and the total number of losses that the corresponding algorithm achieves compared with all the other algorithms, which is used to generate the ranking. The second and third columns represent the total numbers of wins and losses, respectively.
According to these comparisons, we can see that both AVWAODE-KL and AVWAODE-IG perform better than TAN, HNB, AODE, WAODE, NB, and A2DE. We summarize the main features of these comparisons as follows.
  • Both AVWAODE-KL and AVWAODE-IG significantly outperform NB with 16 wins and one loss, and they also outperform TAN with 11 wins and zero losses.
  • Both AVWAODE-KL and AVWAODE-IG perform notably better than HNB on six datasets and worse on zero datasets, and also better than A2DE on five datasets and worse on one dataset.
  • Both AVWAODE-KL and AVWAODE-IG are markedly better than AODE with nine wins and zero losses.
  • Both AVWAODE-KL and AVWAODE-IG perform substantially better than WAODE on two datasets and on one dataset, respectively.
  • As seen from the ranking test results, our proposed algorithms are always the best ones, and NB is the worst one. The overall ranking (in descending order) is AVWAODE-KL (AVWAODE-IG), WAODE, A2DE, HNB, AODE, TAN, and NB.
  • The results of these comparisons suggest that assigning discriminative weights to different ODEs by computing the correlation between the root attribute value and the class is highly effective when setting weights for ODEs.
Yet at the same time, based on the accuracy results presented in Table 1 and Table 2, we take advantage of the KEEL Data-Mining Software Tool [28,29] to carry out the Wilcoxon signed-ranks test [30,31] in order to thoroughly compare each pair of algorithms. The Wilcoxon signed-ranks test is a non-parametric statistical test which ranks the differences in performance of two algorithms for each dataset, ignoring the signs, and compares the ranks for the positive and negative differences. Table 7, Table 8, Table 9 and Table 10 summarize the detailed results of the nonparametric statistical comparisons based on the Wilcoxon test. According to the exact critical values of the Wilcoxon test at a confidence level of α = 0.05 (α = 0.1) with N = 36 datasets, two algorithms are considered “significantly different” if the smaller of R+ and R− is equal to or less than 208 (227), in which case we can reject the null hypothesis. ∘ indicates that the algorithm in the column improves on the algorithm in the corresponding row, and • indicates that the algorithm in the row improves on the algorithm in the corresponding column. According to these comparisons, we can see that both AVWAODE-KL and AVWAODE-IG performed significantly better than NB and TAN, and AVWAODE-IG also performed significantly better than A2DE at a confidence level of α = 0.1.
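The same comparison can be reproduced outside KEEL with SciPy; a minimal sketch, assuming the 36 per-dataset accuracies of two classifiers are stored in two equal-length arrays (with its default two-sided settings, scipy.stats.wilcoxon reports the smaller of the two rank sums as its statistic):

```python
from scipy.stats import wilcoxon

def compare(acc_a, acc_b, alpha=0.05):
    """Wilcoxon signed-ranks test over the per-dataset accuracies of two classifiers."""
    statistic, p_value = wilcoxon(acc_a, acc_b)
    return statistic, p_value, p_value <= alpha
```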
Additionally, in our experiments, we also evaluated the performance of our proposed AVWAODE in terms of the area under the Receiver Operating Characteristic (ROC) curve (AUC) [25,32,33,34]. Table 11 and Table 12 show the detailed comparison results in terms of AUC. From these results, we can see that our proposed AVWAODE is also very promising in terms of the area under the ROC curve. Please note that, in order to save space, we do not present the detailed summary test results, the ranking test results, or the Wilcoxon test results for AUC.

5. Conclusions and Future Work

Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers which utilize one-dependence estimators (ODEs) have been shown to be able to approximate the ground-truth attribute dependencies; meanwhile, the probability estimation in ODEs is effective. In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). Our AVWAODE approach is well balanced between the ground-truth dependencies approximation and the effectiveness of probability estimation. We assign discriminative weights to different ODEs by computing the correlation between the root attribute value and the class. Two different attribute value weighting measures, which are called the Kullback–Leibler (KL) measure and the information gain (IG) measure, are used to quantify the correlation between the root attribute value and the class, and thus two different versions are created, which are simply denoted by AVWAODE-KL and AVWAODE-IG, respectively. Extensive experiments show that both AVWAODE-KL and AVWAODE-IG achieve better performance than some other state-of-the-art ODE models used for comparison.
How to learn the weights is a crucial problem in our proposed AVWAODE approach. An interesting future work will be the exploration of more effective methods to estimate weights to improve our current AVWAODE versions. Furthermore, applying the proposed attribute value weighting approach to improve some other state-of-the-art classification models is another topic for our future work.

Acknowledgments

The work was partially supported by the Program for New Century Excellent Talents in University (NCET-12-0953), the excellent youth team of scientific and technological innovation of Hubei higher education (T201736), and the Open Research Project of Hubei Key Laboratory of Intelligent Geo-Information Processing (KLIGIP201601).

Author Contributions

Liangjun Yu and Liangxiao Jiang conceived the study; Liangjun Yu and Dianhong Wang designed and performed the experiments; Liangjun Yu and Lungan Zhang analyzed the data; Liangjun Yu and Liangxiao Jiang wrote the paper; All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chickering, D.M. Learning Bayesian networks is NP-Complete. Artif. Intell. Stat. 1996, 112, 121–130. [Google Scholar]
  2. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann: San Mateo, CA, USA, 1993. [Google Scholar]
  3. Wu, J.; Cai, Z. Attribute Weighting via Differential Evolution Algorithm for Attribute Weighted Naive Bayes (WNB). J. Comput. Inf. Syst. 2011, 7, 1672–1679. [Google Scholar]
  4. Zaidi, N.A.; Cerquides, J.; Carman, M.J.; Webb, G.I. Alleviating Naive Bayes Attribute Independence Assumption by Attribute Weighting. J. Mach. Learn. Res. 2013, 14, 1947–1988. [Google Scholar]
  5. Kohavi, R. Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; AAAI Press: Cambridge, MA, USA, 1996; pp. 202–207. [Google Scholar]
  6. Khalil, E.H. A Noise Tolerant Fine Tuning Algorithm for the Naive Bayesian learning Algorithm. J. King Saud Univ. Comput. Inf. Sci. 2014, 26, 237–246. [Google Scholar]
  7. Hall, M. A decision tree-based attribute weighting filter for naive bayes. Knowl. Based Syst. 2007, 20, 120–126. [Google Scholar] [CrossRef]
  8. Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian network classifiers. Mach. Learn. 1997, 29, 131–163. [Google Scholar] [CrossRef]
  9. Jiang, L.; Zhang, H.; Cai, Z. A Novel Bayes Model: Hidden Naive Bayes. IEEE Trans. Knowl. Data Eng. 2009, 21, 1361–1371. [Google Scholar] [CrossRef]
  10. Webb, G.; Boughton, J.; Wang, Z. Not So Naive Bayes: Aggregating One-Dependence Estimators. Mach. Learn. 2005, 58, 5–24. [Google Scholar] [CrossRef]
  11. Jiang, L.; Zhang, H.; Cai, Z.; Wang, D. Weighted average of one-dependence estimators. J. Exp. Theor. Artif. Intell. 2012, 24, 219–230. [Google Scholar] [CrossRef]
  12. Lee, C.H. A gradient approach for value weighted classification learning in naive Bayes. Knowl. Based Syst. 2015, 85, 71–79. [Google Scholar] [CrossRef]
  13. Witten, I.H.; Frank, E. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2005. [Google Scholar]
  14. WEKA The University of Waikato. Available online: http://www.cs.waikato.ac.nz/ml/weka/ (accessed on 15 September 2017).
  15. Qiu, C.; Jiang, L.; Li, C. Not always simple classification: Learning SuperParent for Class Probability Estimation. Expert Syst. Appl. 2015, 42, 5433–5440. [Google Scholar] [CrossRef]
  16. Hall, M. Correlation-based feature selection for discrete and numeric class machine learning. In Proceedings of the 17th International Conference on Machine Learning, Stanford, CA, USA, 29 June–2 July 2000; pp. 359–366. [Google Scholar]
  17. Sahami, M. Learning Limited Dependence Bayesian Classifiers. In Proceedings of the 2nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; pp. 335–338. [Google Scholar]
  18. Webb, G.; Boughton, J.; Zheng, F. Learning by extrapolation from marginal to full-multivariate probability distributions: Decreasingly naive Bayesian classification. Mach. Learn. 2012, 86, 233–272. [Google Scholar] [CrossRef]
  19. Zheng, F.; Webb, G. Efficient lazy elimination for averaged one-dependence estimators. In Proceedings of the 23rd international conference on Machine learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 1113–1120. [Google Scholar]
  20. Yang, Y.; Webb, G.; Korb, K.; Boughton, J.; Ting, K. To Select or To Weigh: A Comparative Study of Linear Combination Schemes for SuperParent-One-Dependence Estimators. IEEE Trans. Knowl. Data Eng. 2007, 9, 1652–1665. [Google Scholar] [CrossRef] [Green Version]
  21. Li, N.; Yu, Y.; Zhou, Z. Semi-naive Exploitation of One-Dependence Estimators. In Proceedings of the 2009 Ninth IEEE International Conference on Data Mining, Miami, FL, USA, 6–9 December 2009; pp. 278–287. [Google Scholar]
  22. Xiang, Z.; Kang, D. Attribute weighting for averaged one-dependence estimators. Appl. Intell. 2017, 46, 616–629. [Google Scholar] [CrossRef]
  23. Lee, C.H.; Gutierrez, F.; Dou, D. Calculating Feature Weights in Naive Bayes with Kullback-Leibler Measure. In Proceedings of the 11th IEEE International Conference on Data Mining, Vancouver, BC, Canada, 11–14 December 2011; pp. 1146–1151. [Google Scholar]
  24. Kullback, S.; Leibler, R. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  25. Jiang, L. Learning random forests for ranking. Front. Comput. Sci. China 2011, 5, 79–86. [Google Scholar] [CrossRef]
  26. Merz, C.; Murphy, P.; Aha, D. UCI Repository of Machine Learning Databases. In Department of ICS, University of California. 1997. Available online: http://www.ics.uci.edu/mlearn/MLRepository.html (accessed on 15 September 2017).
  27. Nadeau, C.; Bengio, Y. Inference for the generalization error. Mach. Learn. 2003, 52, 239–281. [Google Scholar] [CrossRef]
  28. Alcalá-Fdez, J.; Fernandez, A.; Luengo, J.; Derrac, J.; García, S.; Sánchez, L.; Herrera, F. KEEL Data-Mining Software Tool: Data Set Repository, Integration of Algorithms and Experimental Analysis Framework. J. Mult.-Valued Logic Soft Comput. 2011, 17, 255–287. [Google Scholar]
  29. Knowledge Extraction based on Evolutionary Learning. Available online: http://sci2s.ugr.es/keel/ (accessed on 15 September 2017).
  30. Demsar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  31. Garcia, S.; Herrera, F. An extension on statistical comparisons of classifiers over multiple data sets for all pairwise comparisons. J. Mach. Learn. Res. 2008, 9, 2677–2694. [Google Scholar]
  32. Bradley, P.; Andrew, P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997, 30, 1145–1159. [Google Scholar] [CrossRef] [Green Version]
  33. Hand, D.; Till, R. A simple generalisation of the area under the roc curve for multiple class classification problems. Mach. Learn. 2001, 45, 171–186. [Google Scholar] [CrossRef]
  34. Ling, C.X.; Huang, J.; Zhang, H. AUC: A statistically consistent and more discriminating measure than accuracy. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, Ribeirao Preto, Brazil, 23–27 October 2006; Morgan Kaufmann: Burlington, MA, USA, 2003; pp. 329–341. [Google Scholar]
Figure 1. An example of naive Bayes (NB).
Figure 2. An example of tree-augmented naive Bayesian (TAN).
Figure 3. A structure of hidden naive Bayes (HNB).
Figure 4. An example of aggregating one-dependence estimators (AODE).
Figure 5. An example of weighted average of one-dependence estimators (WAODE).
Figure 6. An example of attribute value weighted average of one-dependence estimators (AVWAODE)-KL.
Figure 7. An example of AVWAODE-IG.
Table 1. Accuracy comparisons for AVWAODE-KL versus TAN, HNB, AODE, WAODE, NB, and A2DE.
Dataset | AVWAODE-KL | TAN | HNB | AODE | WAODE | NB | A2DE
anneal98.8696.73 •97.74 •96.74 •98.5694.32 •98.36
anneal.ORIG91.1090.4989.8788.79 •89.80 •88.16 •90.32
audiology76.1765.35 •69.04 •71.66 •76.2671.40 •74.84
autos80.7572.54 •75.4973.48 •80.3663.97 •76.97
balance-scale88.5386.14 •89.1489.7889.2891.44 ∘91.44 ∘
breast-cancer70.6769.5373.0972.5371.9772.9470.63
breast-w96.0895.4595.6797.1196.5797.3097.17
colic80.9680.1181.4480.9080.6678.8681.12
colic.ORIG75.5067.71 •75.6675.3075.9374.2176.68
credit-a85.1784.1085.8085.9184.4384.7484.83
credit-g75.3174.8876.2976.4276.3875.9375.19
diabetes75.0376.3176.0076.3775.8375.6875.74
glass59.4758.6959.0261.1359.6257.6963.14
heart-c81.9879.7082.3182.4882.6183.4481.75
heart-h82.2981.2783.2184.0683.1183.6481.74
heart-statlog82.8579.4882.7083.6782.3083.7883.81
hepatitis83.8283.0083.9284.8284.1484.0685.26
hypothyroid93.4293.3693.4993.5393.5492.79 •93.17
ionosphere92.1191.3492.0092.0892.9490.8692.88
iris95.1394.2793.9394.4795.7394.3394.33
kr-vs-kp94.3292.88 •92.36 •91.01 •94.1887.79 •93.69
labor90.9089.0092.7395.3091.7396.7092.67
letter89.2482.69 •84.68 •85.54 •88.86 •70.09 •80.44 •
lymph85.3583.6983.9086.2584.1685.9784.75
mushroom99.9999.9999.9499.9599.9895.52 •99.52 •
primary-tumor48.1444.7747.6647.6747.9447.2047.32
segment95.4193.91 •93.72 •92.94 •95.0689.03 •94.83
sick98.1197.70 •97.7797.51 •97.9996.78 •97.55 •
sonar77.4675.3481.7579.0478.2876.3574.30
soybean94.2894.9893.8893.2894.3392.20 •94.45
splice96.3194.95 •95.84 •96.12 •96.36 •95.42 •95.40 •
vehicle73.3573.3572.1571.6273.0961.03 •72.48
vote94.4694.4394.4394.5294.4690.21 •93.26
vowel92.6791.8991.3489.52 •92.5666.09 •92.62
waveform-500083.6180.44 •83.7984.2484.0079.97 •82.17
zoo98.5096.6397.7394.6698.1194.3795.65
Average85.4883.5384.9985.0185.5982.3485.01
W/T/L-11/25/06/30/09/27/02/34/016/19/15/30/1
∘, • statistically significant improvement or degradation over AVWAODE-KL.
Table 2. Accuracy comparisons for AVWAODE-IG versus TAN, HNB, AODE, WAODE, NB, and A2DE.
Dataset | AVWAODE-IG | TAN | HNB | AODE | WAODE | NB | A2DE
anneal98.7596.73 •97.74 •96.74 •98.5694.32 •98.36
anneal.ORIG89.4590.4989.8788.7989.8088.1690.32
audiology76.0465.35 •69.04 •71.66 •76.26 •71.40 •74.84
autos80.1672.54 •75.4973.48 •80.3663.97 •76.97
balance-scale88.4686.14 •89.1489.7889.2891.44 ∘91.44 ∘
breast-cancer71.8469.5373.0972.5371.9772.9470.63
breast-w96.2595.4595.6797.1196.5797.3097.17
colic80.9380.1181.4480.9080.6678.8681.12
colic.ORIG75.3367.71 •75.6675.3075.9374.2176.68
credit-a85.4584.1085.8085.9184.4384.7484.83
credit-g75.7974.8876.2976.4276.3875.9375.19
diabetes75.6076.3176.0076.3775.8375.6875.74
glass58.9558.6959.0261.1359.6257.6963.14
heart-c82.1579.7082.3182.4882.6183.4481.75
heart-h81.9881.2783.2184.0683.1183.6481.74
heart-statlog82.5279.4882.7083.6782.3083.7883.81
hepatitis84.5283.0083.9284.8284.1484.0685.26
hypothyroid93.4893.3693.4993.5393.5492.79 •93.17
ionosphere93.0291.3492.0092.0892.9490.86 •92.88
iris95.1394.2793.9394.4795.7394.3394.33
kr-vs-kp94.1392.88 •92.36 •91.01 •94.1887.79 •93.69
labor91.4089.0092.7395.3091.7396.7092.67
letter89.2482.69 •84.68 •85.54 •88.86 •70.09 •80.44 •
lymph84.4783.6983.9086.2584.1685.9784.75
mushroom99.9999.9999.9499.9599.9895.52 •99.52 •
primary-tumor48.5044.7747.6647.6747.9447.2047.32
segment95.4193.91 •93.72 •92.94 •95.0689.03 •94.83
sick98.0397.70 •97.7797.51 •97.9996.78 •97.55 •
sonar77.1875.3481.7579.0478.2876.3574.30
soybean94.5094.9893.8893.28 •94.3392.20 •94.45
splice96.3994.95 •95.84 •96.1296.3695.42 •95.40 •
vehicle73.3473.3572.1571.6273.0961.03 •72.48
vote94.2594.4394.4394.5294.4690.21 •93.26
vowel92.6791.8991.3489.52 •92.5666.09 •92.62
waveform-500083.6180.44 •83.7984.2484.0079.97 •82.17 •
zoo98.4196.6397.7394.6698.1194.3795.65
Average85.4883.5384.9985.0185.5982.3485.01
W/T/L-11/25/06/30/09/27/01/35/016/19/15/30/1
∘, • statistically significant improvement or degradation over AVWAODE-IG.
Table 3. Summary test results with regard to AVWAODE-KL.
Algorithm | AVWAODE-KL | TAN | HNB | AODE | WAODE | NB | A2DE
AVWAODE-KL | - | 3 (0) | 14 (0) | 17 (0) | 20 (0) | 11 (1) | 11 (1)
TAN | 33 (11) | - | 27 (6) | 27 (9) | 31 (10) | 17 (6) | 25 (9)
HNB | 22 (6) | 9 (0) | - | 22 (2) | 27 (5) | 10 (2) | 18 (5)
AODE | 19 (9) | 9 (3) | 14 (3) | - | 20 (9) | 6 (1) | 19 (5)
WAODE | 16 (2) | 5 (0) | 9 (0) | 16 (0) | - | 9 (1) | 13 (1)
NB | 25 (16) | 19 (13) | 26 (16) | 30 (13) | 27 (17) | - | 26 (13)
A2DE | 25 (5) | 11 (2) | 18 (4) | 17 (3) | 23 (5) | 9 (0) | -
Table 4. Summary test results with regard to AVWAODE-IG.
Algorithm | AVWAODE-IG | TAN | HNB | AODE | WAODE | NB | A2DE
AVWAODE-IG | - | 6 (0) | 17 (0) | 17 (0) | 19 (0) | 10 (1) | 11 (1)
TAN | 30 (11) | - | 27 (6) | 27 (9) | 31 (10) | 17 (6) | 25 (9)
HNB | 19 (6) | 9 (0) | - | 22 (2) | 27 (5) | 10 (2) | 18 (5)
AODE | 19 (9) | 9 (3) | 14 (3) | - | 20 (9) | 6 (1) | 19 (5)
WAODE | 17 (1) | 5 (0) | 9 (0) | 16 (0) | - | 9 (1) | 13 (1)
NB | 26 (16) | 19 (13) | 26 (16) | 30 (13) | 27 (17) | - | 26 (13)
A2DE | 25 (5) | 11 (2) | 18 (4) | 17 (3) | 23 (5) | 9 (0) | -
Table 5. Ranking test results with regard to AVWAODE-KL.
Resultset | Wins−Losses | Wins | Losses
AVWAODE-KL | 47 | 49 | 2
WAODE | 42 | 46 | 4
A2DE | 15 | 34 | 19
HNB | 9 | 29 | 20
AODE | −3 | 27 | 30
TAN | −33 | 18 | 51
NB | −77 | 11 | 88
Table 6. Ranking test results with regard to AVWAODE-IG.
Resultset | Wins−Losses | Wins | Losses
AVWAODE-IG | 46 | 48 | 2
WAODE | 43 | 46 | 3
A2DE | 15 | 34 | 19
HNB | 9 | 29 | 20
AODE | −3 | 27 | 30
TAN | −33 | 18 | 51
NB | −77 | 11 | 88
Table 7. Ranks computed by the Wilcoxon test on AVWAODE-KL.
Algorithm | AVWAODE-KL | TAN | HNB | AODE | WAODE | NB | A2DE
AVWAODE-KL | - | 636.5 | 426.5 | 370.5 | 264.5 | 525.0 | 427.0
TAN | 29.5 | - | 100.0 | 130.0 | 35.0 | 362.5 | 131.5
HNB | 239.5 | 566.0 | - | 312.5 | 172.0 | 521.0 | 298.0
AODE | 295.5 | 536.0 | 353.5 | - | 259.5 | 593.0 | 315.5
WAODE | 401.5 | 631.0 | 494.0 | 406.5 | - | 547.0 | 446.0
NB | 141.0 | 303.5 | 145.0 | 73.0 | 119.0 | - | 135.5
A2DE | 239.0 | 534.5 | 368.0 | 350.5 | 220.0 | 530.5 | -
Table 8. Summary of the Wilcoxon test on AVWAODE-KL. ∘: The algorithm in the column improves the algorithm in the corresponding row. •: The algorithm in the row improves the algorithm in the corresponding column. Lower diagonal level of significance α = 0 . 05 ; upper diagonal level of significance α = 0 . 1 .
Algorithm | AVWAODE-KL | TAN | HNB | AODE | WAODE | NB | A2DE
AVWAODE-KL-
TAN-
HNB -
AODE -
WAODE -
NB -
A2DE -
Table 9. Ranks computed by the Wilcoxon test on AVWAODE-IG.
Algorithm | AVWAODE-IG | TAN | HNB | AODE | WAODE | NB | A2DE
AVWAODE-IG | - | 628.0 | 431.5 | 380.5 | 255.0 | 525.5 | 440.5
TAN | 38.0 | - | 100.0 | 130.0 | 35.0 | 362.5 | 131.5
HNB | 234.5 | 566.0 | - | 312.5 | 172.0 | 521.0 | 298.0
AODE | 285.5 | 536.0 | 353.5 | - | 259.5 | 593.0 | 315.5
WAODE | 411.0 | 631.0 | 494.0 | 406.5 | - | 547.0 | 446.0
NB | 140.5 | 303.5 | 145.0 | 73.0 | 119.0 | - | 135.5
A2DE | 225.5 | 534.5 | 368.0 | 350.5 | 220.0 | 530.5 | -
Table 10. Summary of the Wilcoxon test on AVWAODE-IG. ∘: The algorithm in the column improves the algorithm in the corresponding row. •: The algorithm in the row improves the algorithm in the corresponding column. Lower diagonal level of significance α = 0 . 05 ; upper diagonal level of significance α = 0 . 1 .
Algorithm | AVWAODE-IG | TAN | HNB | AODE | WAODE | NB | A2DE
AVWAODE-IG-
TAN-
HNB -
AODE -
WAODE -
NB -
A2DE -
Table 11. Area under the ROC curve (AUC) comparisons for AVWAODE-KL versus TAN, HNB, AODE, WAODE, NB, and A2DE.
Dataset | AVWAODE-KL | TAN | HNB | AODE | WAODE | NB | A2DE
anneal99.7699.6399.6899.34 •99.7398.88 •99.73
anneal.ORIG97.2496.8096.78 •95.71 •97.01 •95.45 •96.91
audiology97.7395.92 •96.39 •96.98 •97.6796.93 •97.66
autos94.1492.6593.3292.5694.1688.10 •93.89
balance-scale92.1094.00 ∘96.40 ∘94.85 ∘92.1696.22 ∘96.15 ∘
breast-cancer68.1365.3668.7870.8168.5370.1866.75
breast-w99.1798.8999.2499.3599.1999.2599.39
colic86.3085.5886.8886.6086.5484.3684.55
colic.ORIG83.0472.02 •82.6582.2984.3581.1883.11
credit-a91.1990.4992.0092.3891.4991.8691.98
credit-g77.5677.1080.32 ∘79.69 ∘78.81 ∘79.1077.56
diabetes82.1781.8782.9182.8582.7682.6182.21
glass83.9080.73 •81.7682.6483.7580.17 •82.27
heart-c89.6987.9090.4091.00 ∘89.9391.0890.20
heart-h88.1187.0989.3489.9788.6690.0488.47
heart-statlog89.9889.3690.8491.2690.0191.3491.16
hepatitis87.8387.1587.4689.3187.6089.3690.31
hypothyroid84.7683.16 •83.47 •84.0384.6483.4584.01
ionosphere97.9698.0597.4397.3298.2093.69 •98.19
iris99.1398.9698.5998.7799.1798.9798.96
kr-vs-kp98.4998.2698.2197.42 •98.5795.19 •98.27
labor97.2993.7597.8398.7997.7598.6797.38
letter99.6899.15 •99.40 •99.45 •99.66 •97.12 •98.98 •
lymph92.1891.9893.4393.1692.4192.8892.67
mushroom100.00100.00100.00100.00100.0099.80 •100.00 •
primary-tumor83.5780.97 •82.65 •82.8983.6482.7982.94
segment99.7099.53 •99.55 •99.42 •99.68 •98.36 •99.60 •
sick98.6898.0897.53 •97.07 •98.8295.87 •98.28
sonar88.3583.79 •90.3388.9289.1785.5083.59 •
soybean99.7099.7999.7699.57 •99.6999.48 •99.70
splice99.5199.18 •99.46 •99.4599.5199.35 •99.24 •
vehicle91.0591.7791.0290.7690.9683.47 •91.21
vote98.8398.7598.7898.6798.7597.15 •98.43
vowel99.6399.5499.5699.43 •99.6195.76 •99.59
waveform-500096.1994.50 •96.52 ∘96.60 ∘96.38 ∘95.32 •95.35 •
zoo99.9699.9199.9699.9299.9699.9299.88
Average92.5791.4392.7492.7692.7591.6392.46
W/T/L-10/25/18/25/39/23/43/31/217/18/16/29/1
∘, • statistically significant improvement or degradation over AVWAODE-KL.
Table 12. AUC comparisons for AVWAODE-IG versus TAN, HNB, AODE, WAODE, NB, and A2DE.
Dataset | AVWAODE-IG | TAN | HNB | AODE | WAODE | NB | A2DE
anneal99.7699.6399.6899.34 •99.7398.88 •99.73
anneal.ORIG97.0096.8096.7895.71 •97.0195.45 •96.91
audiology97.7195.92 •96.39 •96.98 •97.6796.93 •97.66
autos94.1492.6593.3292.5694.1688.10 •93.89
balance-scale92.1094.00 ∘96.40 ∘94.85 ∘92.1696.22 ∘96.15 ∘
breast-cancer69.5865.3668.7870.8168.5370.1866.75
breast-w99.2198.8999.2499.3599.1999.2599.39
colic86.3985.5886.8886.6086.5484.3684.55
colic.ORIG84.0272.02 •82.6582.2984.3581.1883.11
credit-a91.1890.4992.0092.3891.4991.8691.98
credit-g78.1877.1080.32 ∘79.69 ∘78.8179.1077.56
diabetes82.6681.8782.9182.8582.7682.6182.21
glass83.2980.7381.7682.6483.7580.17 •82.27
heart-c89.5187.9090.4091.00 ∘89.9391.08 ∘90.20
heart-h87.9987.0989.3489.9788.6690.0488.47
heart-statlog89.5889.3690.8491.2690.0191.3491.16
hepatitis87.9087.1587.4689.3187.6089.3690.31
hypothyroid85.0883.16 •83.47 •84.0384.6483.45 •84.01
ionosphere98.0998.0597.4397.3298.2093.69 •98.19
iris99.1398.9698.5998.7799.1798.9798.96
kr-vs-kp98.3898.2698.2197.42 •98.57 ∘95.19 •98.27
labor97.1793.7597.8398.7997.7598.6797.38
letter99.6899.15 •99.40 •99.45 •99.66 •97.12 •98.98 •
lymph92.2491.9893.4393.1692.4192.8892.67
mushroom100.00100.00100.00100.00100.0099.80 •100.00 •
primary-tumor83.6480.97 •82.65 •82.8983.6482.7982.94
segment99.7099.53 •99.55 •99.42 •99.68 •98.36 •99.60 •
sick98.6798.08 •97.53 •97.07 •98.8295.87 •98.28
sonar88.2683.79 •90.3388.9289.1785.5083.59 •
soybean99.6999.7999.7699.57 •99.6999.48 •99.70
splice99.5299.18 •99.46 •99.4599.5199.35 •99.24 •
vehicle91.0591.7791.0290.7690.9683.47 •91.21
vote98.9098.7598.7898.6798.7597.15 •98.43
vowel99.6399.5499.5699.43 •99.6195.76 •99.59
waveform-500096.1994.50 •96.52 ∘96.60 ∘96.38 ∘95.32 •95.35 •
zoo99.9799.9199.9699.9299.9699.9299.88
Average92.6491.4392.7492.7692.7591.6392.46
W/T/L-10/25/17/26/39/23/42/32/218/16/26/29/1
∘, • statistically significant improvement or degradation over AVWAODE-IG.
