Article

Efficient Rule Generation for Associative Classification

by Chartwut Thanajiranthorn * and Panida Songram
Department of Computer Science, Faculty of Informatics, Mahasarakham University, Mahasarakham 44150, Thailand
*
Author to whom correspondence should be addressed.
Algorithms 2020, 13(11), 299; https://doi.org/10.3390/a13110299
Submission received: 2 September 2020 / Revised: 14 November 2020 / Accepted: 14 November 2020 / Published: 17 November 2020
(This article belongs to the Special Issue Algorithms in Decision Support Systems)

Abstract
Associative classification (AC) is a mining technique that integrates classification and association rule mining to perform classification on unseen data instances. AC is an effective classification technique that applies generated rules to perform classification. In particular, the number of frequent ruleitems generated by AC is inherently governed by the chosen minimum support: a low minimum support can generate a very large set of ruleitems. This is one of the major drawbacks of AC, since many of the ruleitems are never used in the classification stage and, to reduce the rule-mapping time, must be removed from the set. This pruning process is a computational burden and consumes considerable memory. In this paper, a new AC algorithm is proposed to directly discover a compact number of efficient rules for classification without the pruning process. A vertical data representation technique is implemented to avoid redundant rule generation and to reduce the time used in the mining process. The experimental results show that the proposed algorithm performs well in terms of accuracy, the number of generated ruleitems, classifier building time, and memory consumption, especially when compared to the well-known algorithms Classification-based Association (CBA), Classification based on Multiple Association Rules (CMAR), and Fast Associative Classification Algorithm (FACA).

1. Introduction

Nowadays, a number of classification techniques have been applied to various real-world applications, e.g., graph convolutional networks for text classification [1], automated classification of epileptic electroencephalogram (EEG) signals [2], iris recognition [3], and anomaly detection [4]. Associative classification (AC) is a well-known classification technique that was first introduced by Liu et al. [5]. It is a combination of two data-mining techniques: association rule mining and classification. Association rule mining discovers relationships between items in a dataset, while classification aims to predict the class label of a given instance by learning from a labeled dataset. AC focuses on finding Class Association Rules (CARs) that satisfy minimum support and confidence thresholds, in the form $x \rightarrow c$, where $x$ is a set of attribute values and $c$ is a class label. AC has been reported in the literature to outperform other traditional classifiers [6,7,8,9,10,11,12,13]. In addition, a CAR is an if–then rule that can be easily understood by general users. Therefore, AC is applied in many fields, e.g., phishing website detection [6,7,11], heart disease prediction [8,9], groundwater detection [12], and detection of low-quality information in social networks [10].
In traditional AC algorithms, the minimum support threshold is the key parameter used to select frequent ruleitems, from which ruleitems whose confidence does not satisfy the minimum confidence are then eliminated. This approach leads to a large number of frequent ruleitems. Nguyen and Nguyen [14] demonstrated that about 4 million frequent ruleitems can be generated when the minimum support threshold is set to 1%. Moreover, a number of AC-based techniques, i.e., Classification-based Association (CBA) [5], Fast Associative Classification Algorithm (FACA) [11], CAR-Miner-diff [14], Predictability-Based Collective Class Association Rules (PCAR) [15], Weighted Classification Based on Association Rules (WCBA) [16], and Fast Classification Based on Association Rules (FCBA) [17], create all possible CARs in order to determine the set of valid CARs used in the classification process. Recently, Active Pruning Rules (APR) [13] was proposed as a novel evaluation method that avoids generating all CARs. However, its exhaustive search for classifier rules can become an issue on large datasets or at low minimum support. Generating candidate CARs consumes intensive computational time and memory, and minimizing candidate generation remains challenging because it strongly affects training time, input/output (I/O) overhead, and memory usage [18].
In this paper, a new algorithm is proposed to directly generate a small number of efficient CARs for classification. A vertical data format [19] is used to represent ruleitems together with their associated transaction IDs, and the intersection of these tidsets makes support and confidence values easy to calculate. A ruleitem with 100% confidence is added to the classifier as a CAR. Whenever such a CAR is found, the transactions associated with it are removed using a set difference, which avoids generating redundant CARs. Finally, a compact classifier is built for classification. In summary, the contributions of this paper are as follows.
  • To avoid pruning and sorting processes, the proposed algorithm directly generates CARs with 100% confidence to build compact classifiers. The CARs with 100% confidence are anticipated to result in high prediction rates.
  • The proposed algorithm eliminates unnecessary transactions to avoid generating redundant CARs in each stage.
  • Simple set operations, intersection and set difference, are exploited to reduce the computational time of the mining process and to reduce memory consumption.
This paper is structured as follows. In Section 2, related work on AC is described. The basic definitions are delineated in Section 3. The proposed algorithm is introduced in Section 4. The experimental results are discussed in Section 5. Lastly, the conclusions of the study are stated in Section 6.

2. Related Work

Many AC-based algorithms have been proposed and studied, with the objective of understanding their drawbacks and increasing their effectiveness. Liu et al. [5] introduced the CBA algorithm, which integrates association rule mining and classification. The CBA algorithm proceeds in two steps. First, CARs are generated based on the well-known Apriori algorithm [20]. Second, CARs are sorted and then pruned to select efficient CARs for the classifier. The CBA algorithm was shown to produce a lower error rate than C4.5 [21]. Unfortunately, CBA inherits the candidate-generation problem of Apriori, which finds all possible frequent rules at each level.
Li et al. [22] presented the Classification based on Multiple Association Rules (CMAR) algorithm. Unlike CBA, CMAR adopts a frequent pattern tree (FP-tree) and a classification rule tree (CR-tree) for the rule generation and classification phases. It divides the FP-tree into subsets to search for frequent ruleitems and then adds the frequent ruleitems to the CR-tree according to their frequencies. Hence, CMAR only needs to scan the database once. The CMAR algorithm uses multiple rules to predict unseen instances based on a weighted chi-square measure. In the experiments, CMAR was compared with CBA and C4.5 in terms of accuracy, and the results show that CMAR outperforms both.
Abdelhamid [6] proposed an Enhanced Multi-label Classifier-based Associative Classification (eMCAC) for phishing website detection. It generates rules with multiple class labels from a single dataset without recursive learning. The eMCAC algorithm applies a vertical data format to represent datasets. The support and confidence values of a multi-label rule are calculated from the average support and confidence values over all of its classes. A class is assigned to a test instance if the attribute values fully match the rule's antecedent. The experimental results show that the eMCAC algorithm outperforms CBA, PART, C4.5, JRip, and MCAR [23] on real-world phishing data in terms of accuracy.
Hadi et al. [11] proposed the FACA algorithm for phishing website detection. It applies Diffsets [24] in the rule-generation process to speed up classifier building. First, the FACA algorithm discovers k-ruleitems by extending frequent (k − 1)-ruleitems. Then, ruleitems are ranked according to the number of attribute values, confidence, support, and occurrence. To predict unseen data, the FACA algorithm utilizes the All Exact Match Prediction Method, which matches unseen data against all CARs in the classifier; the unseen data are then assigned to the class label with the highest count. In the experiments, the FACA algorithm outperforms CBA, CMAR, MCAR, and ECAR [25] in terms of accuracy.
Song and Lee [15] introduced the Predictability-Based Collective Class Association Rule (PCAR) algorithm to enhance rule evaluation. The PCAR algorithm uses inner cross-validation between the test dataset and the training dataset to calculate a predictability value for each CAR. CARs are then ranked according to predictability value, confidence, support, antecedent length, and occurrence. Finally, a full-matching method is applied to assign a class label to unseen data. To evaluate its performance, PCAR was compared with C4.5, RIPPER, CBA, and MCAR in terms of accuracy and was shown to outperform them.
Alwidian et al. [16] proposed the WCBA algorithm to enhance classifier accuracy based on a weighting technique. WCBA assumes that attributes are not equally important; in medicine, for example, some attributes matter more than others for prediction. Consequently, weights for all attributes are assigned by domain experts. The weighted method is then used to select useful CARs, and a statistical measure is used for the pruning process. In addition, CARs are first sorted using the harmonic mean of their support and confidence values. The WCBA algorithm is significantly more accurate than CBA, CMAR, MCAR, FACA, and ECBA. However, WCBA generates CARs based on the Apriori technique, which scans the database many times.
Rajab [13] proposed the Active Pruning Rules (APR) algorithm, which introduces a new pruning process. CARs are ranked by confidence, support, and rule length. Each training instance is matched against the set of CARs; the first rule that matches an instance is added to the classifier, and the instances covered by that rule are removed. The support and confidence of the remaining rules are then recalculated, and all CARs are re-ranked. The APR algorithm was shown to reduce the size of the classifier while maintaining predictive accuracy. However, APR still faces a massive number of candidates from the rule-generation process. The advantages and disadvantages of these previous works are summarized in Table 1.
The previous AC algorithms generally yield rules with high predictability. However, most of them produce k-ruleitems from (k − 1)-ruleitems and must recalculate supports whenever a new ruleitem is generated. To calculate support and confidence values, they search all transactions in the database multiple times. Moreover, a huge number of candidate CARs are generated and later pruned to remove unnecessary CARs. To alleviate these problems, the proposed algorithm directly generates efficient CARs for classification, so that pruning and sorting processes are unnecessary. The efficient CARs in our work are rules with 100% confidence, generated based on the idea that some attribute values can immediately indicate the class label if all transactions containing those attribute values belong to one class. To easily check which class a set of attribute values belongs to, vertical data representation is used in the proposed algorithm. Furthermore, simple set operations, intersection and set difference, are adopted to calculate support and confidence values without scanning the database multiple times.

3. Basic Definitions

Let $A = \{a_1, a_2, \ldots, a_m\}$ be a finite set of all attributes in a dataset, and let $C = \{c_1, c_2, \ldots, c_n\}$ be the set of classes. $g(x)$ denotes the set of transactions containing itemset $x$, and $|g(x)|$ is the number of transactions containing $x$.
Definition 1.
An item is an attribute $a_i$ containing a value $v_j$, denoted as $(a_i, v_j)$.
Definition 2.
An itemset is a set of items, denoted as $\{(a_{i_1}, v_{i_1}), (a_{i_2}, v_{i_2}), \ldots, (a_{i_k}, v_{i_k})\}$.
Definition 3.
A ruleitem is of the form $\langle itemset, c_j \rangle$, which represents an association between an itemset and a class in a dataset; it is basically represented in the form $itemset \rightarrow c_j$.
Definition 4.
The length of a ruleitem is the number of items in its itemset; a ruleitem of length $k$ is denoted a $k$-ruleitem.
Definition 5.
The absolute support of a ruleitem $r$ is the number of transactions containing $r$, denoted as $sup(r)$. The support of $r$ is given by (1):

$$sup(r) = |g(r)| \qquad (1)$$
Definition 6.
The confidence of a ruleitem $\langle itemset, c_j \rangle$ is the ratio of the number of transactions containing the itemset in class $c_j$ to the number of transactions containing the itemset, as in (2):

$$conf(\langle itemset, c_j \rangle) = \frac{|g(\langle itemset, c_j \rangle)|}{|g(itemset)|} \times 100 \qquad (2)$$
Definition 7.
A frequent ruleitem is a ruleitem whose support is not less than the minimum support threshold ($minsup$).
Definition 8.
A Class Association Rule (CAR) is a frequent ruleitem whose confidence is not less than the minimum confidence threshold ($minconf$).
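For illustration, the support and confidence of a ruleitem can be computed directly from tidsets in a vertical representation. The following Python snippet is illustrative code, not the authors' implementation; the tidset values are those of the running example in Section 4.

```python
# Support and confidence from tidsets (Equations (1) and (2)).
# Illustrative values: g(atr1, a2) = {5, 6, 7, 9} and g(B) = {5, 6, 7},
# as in the running example of Section 4.
g_itemset = {5, 6, 7, 9}   # g(itemset)
g_c = {5, 6, 7}            # g(c_j)

sup = len(g_itemset & g_c)                           # Equation (1)
conf = 100 * len(g_itemset & g_c) / len(g_itemset)   # Equation (2)
print(sup, conf)                                     # -> 3 75.0
```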

4. The Proposed Algorithm

In this section, a new algorithm, called the Efficient Class Association Rule Generation (ECARG) algorithm, is presented. The pseudo code of the proposed algorithm is shown in Algorithm 1.
Algorithm 1: Efficient Class Association Rule Generation (ECARG) algorithm main process
[Pseudo code not reproduced; it is available as a figure in the original article.]
First, frequent 1-ruleitems are generated (line 1). To find them quickly, the proposed algorithm takes advantage of the vertical data format to calculate the support of each ruleitem, which can be obtained as $|g(itemset) \cap g(c_k)|$. If a 1-ruleitem does not meet the minimum support threshold, it will not be extended with other ruleitems. The confidence of a frequent ruleitem is calculated from Equation (2) using the vertical data format. If the confidence of a ruleitem is 100%, it is added to the classifier directly (line 7); otherwise, it is considered for extension with other ruleitems (line 5).
After discovering a CAR with 100% confidence, the transaction IDs associated with the CAR are removed to avoid generating redundant CARs (line 8). To remove the transaction IDs, set difference plays an important role in our algorithm. Let $r_i$ be a CAR with 100% confidence and $T$ be the set of ruleitems in the same class as $r_i$. For all $r_j \in T$, the new set of transaction IDs of $r_j$ is $g(r_j) = g(r_j) - g(r_i)$. Then, the new transaction IDs, support, and confidence values of all rules are updated (line 9).
In each iteration, if there is no CAR with 100% confidence, the ruleitem $r$ with the highest confidence is the first to be considered for extension, in a breadth-first-search manner. It is combined with other ruleitems in the same class until a new CAR with 100% confidence is obtained (line 5). If $r_i$ is extended with $r_j$ to form $r_{new}$ and $g(r_j) \subseteq g(r_i)$, then $conf(r_{new}) = 100\%$. After the extended CAR is added to the classifier, the transaction IDs associated with it are removed. Finally, if no ruleitem satisfies the minimum support threshold, CAR generation stops.
The proposed algorithm then finds a default class to insert into the classifier: the class with the most remaining transaction IDs is selected as the default class (line 12).
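Since the pseudo code of Algorithm 1 appears only as a figure in the original article, the following Python sketch reconstructs the main loop from the description above. It is a minimal, hedged reconstruction, not the authors' code: names are illustrative, only single-item extensions of the best ruleitem are attempted, and ties between equally confident ruleitems are broken arbitrarily.

```python
def ecarg(rows, minsup=2, minconf=60.0):
    """Sketch of the ECARG main loop.
    rows: {tid: ({attribute: value}, class_label)}.
    Returns (list of CARs, default class); a CAR is (frozenset of items, class)."""
    g_item, g_class = {}, {}
    for tid, (attrs, label) in rows.items():
        g_class.setdefault(label, set()).add(tid)
        for item in attrs.items():          # item = (attribute, value)
            g_item.setdefault(item, set()).add(tid)

    remaining = set(rows)                   # TIDs not yet covered by a CAR
    cars = []

    def stats(itemset, label):
        # g(itemset) restricted to the remaining transactions,
        # and its class-covered part.
        body = set.intersection(*(g_item[i] for i in itemset)) & remaining
        covered = body & g_class[label]
        conf = 100.0 * len(covered) / len(body) if body else 0.0
        return covered, conf

    while True:
        # Frequent 1-ruleitems over the remaining transactions.
        frequent = []
        for item in g_item:
            for label in g_class:
                covered, conf = stats({item}, label)
                if len(covered) >= minsup and conf >= minconf:
                    frequent.append((conf, frozenset([item]), label, covered))
        if not frequent:
            break                           # no ruleitem satisfies minsup
        frequent.sort(key=lambda x: -x[0])  # highest confidence first
        conf, itemset, label, covered = frequent[0]
        if conf < 100.0:
            # Extend the best ruleitem with a same-class frequent ruleitem
            # until 100% confidence is reached (single-item extensions only).
            for _, other, other_label, _tids in frequent[1:]:
                if other_label != label:
                    continue
                ext_covered, ext_conf = stats(itemset | other, label)
                if ext_conf == 100.0 and len(ext_covered) >= minsup:
                    itemset, covered = itemset | other, ext_covered
                    break
            else:
                break                       # no 100%-confidence CAR can be formed
        cars.append((itemset, label))       # add the CAR to the classifier
        remaining -= covered                # set difference removes covered TIDs
    # Default class: the class with the most remaining transaction IDs.
    default = max(g_class, key=lambda c: len(g_class[c] & remaining))
    return cars, default
```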
To demonstrate the algorithm, the dataset in Table 2 is used as example data. The minimum support and confidence thresholds are set to 2 and 50%, respectively.
The vertical data format represents the transaction IDs associated with each 1-ruleitem, as shown in Table 3; the last two columns of Table 3 give the calculated support and confidence of each ruleitem. From Table 2, the value $a_2$ of $atr_1$ occurs in transactions 5, 6, 7, and 9, denoted $g(atr_1, a_2) = \{5, 6, 7, 9\}$. Class A appears in transactions 1, 2, 3, 4, 8, and 9, denoted $g(A) = \{1, 2, 3, 4, 8, 9\}$, while class B appears in transactions 5, 6, and 7, denoted $g(B) = \{5, 6, 7\}$. The transaction IDs containing $\langle atr_1, a_2 \rangle \rightarrow A$ are $g(atr_1, a_2) \cap g(A) = \{5, 6, 7, 9\} \cap \{1, 2, 3, 4, 8, 9\} = \{9\}$, so the support of $\langle atr_1, a_2 \rangle \rightarrow A$ is 1. This ruleitem will not be extended because its support is less than the minimum support threshold. The transaction IDs containing $\langle atr_1, a_2 \rangle \rightarrow B$ are $g(atr_1, a_2) \cap g(B) = \{5, 6, 7, 9\} \cap \{5, 6, 7\} = \{5, 6, 7\}$, so its support is 3. Hence, this ruleitem is frequent.
The confidence of $\langle atr_1, a_2 \rangle \rightarrow B$ is $\frac{|\{5, 6, 7\}|}{|\{5, 6, 7, 9\}|} \times 100 = \frac{3}{4} \times 100 = 75\%$. Since this is not 100%, the ruleitem will be extended, whereas the confidence of $\langle atr_1, a_1 \rangle \rightarrow A$ is $\frac{|\{1, 2, 3, 4\}|}{|\{1, 2, 3, 4\}|} \times 100 = \frac{4}{4} \times 100 = 100\%$, so it is the first CAR added to the classifier.
After discovering the first CAR, the transaction IDs associated with it are removed. From Table 3, whenever $\langle atr_1, a_1 \rangle$ is found, the class is always A. Hence, $\langle atr_1, a_1 \rangle \rightarrow A$ does not need to be extended with other attribute values, and transactions 1, 2, 3, and 4 should be removed. The ECARG algorithm adopts a set difference, which removes transaction IDs conveniently.
For example, $g(\langle atr_1, a_1 \rangle \rightarrow A) = \{1, 2, 3, 4\}$ and $g(\langle atr_3, c_1 \rangle \rightarrow A) = \{1, 3, 4, 9\}$. The new set of transaction IDs is $g(\langle atr_3, c_1 \rangle \rightarrow A) = \{1, 3, 4, 9\} - \{1, 2, 3, 4\} = \{9\}$. The new transaction IDs, support, and confidence values of all rules are then updated, as shown in Table 4.
From Table 4, there is no CAR with 100% confidence. $\langle atr_1, a_2 \rangle \rightarrow B$ has the maximum confidence, and $g(\langle atr_3, c_2 \rangle \rightarrow B) = \{5, 6\}$ is a subset of $g(\langle atr_1, a_2 \rangle \rightarrow B) = \{5, 6, 7\}$. Hence, the new rule $\langle (atr_1, a_2), (atr_3, c_2) \rangle \rightarrow B$ is found with 100% confidence, and its extension stops. Among the 2-ruleitems extended from $\langle atr_1, a_2 \rangle \rightarrow B$, this is the only rule with 100% confidence, and it is added to the classifier as the second CAR.
After the second CAR is added to the classifier, the transaction IDs associated with it are removed; the remaining transaction IDs are shown in Table 5. Only one ruleitem still satisfies the minimum support threshold, $\langle atr_2, b_3 \rangle \rightarrow A$, and it does not reach 100% confidence. Since no other ruleitem passes the minimum support threshold to be extended with it, CAR generation stops.
With the remaining transaction IDs in Table 5, the ECARG algorithm continues by finding a default class and adding it to the classifier. The class with the most remaining transaction IDs is selected as the default class. In Table 5, class A remains in transactions 8 and 9 while class B remains in transaction 7, so the default class is A. If the numbers of remaining transaction IDs associated with the classes are equal, the majority class in the classifier is used as the default class. Finally, all CARs in the classifier are shown in Table 6.
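As a check, running the sketch above on the sample dataset of Table 2, with minimum support 2 and minimum confidence 50% as in this walkthrough, reproduces the classifier of Table 6 (R1, R2, and default class A):

```python
rows = {
    1: ({"atr1": "a1", "atr2": "b1", "atr3": "c1"}, "A"),
    2: ({"atr1": "a1", "atr2": "b1", "atr3": "c2"}, "A"),
    3: ({"atr1": "a1", "atr2": "b2", "atr3": "c1"}, "A"),
    4: ({"atr1": "a1", "atr2": "b3", "atr3": "c1"}, "A"),
    5: ({"atr1": "a2", "atr2": "b1", "atr3": "c2"}, "B"),
    6: ({"atr1": "a2", "atr2": "b2", "atr3": "c2"}, "B"),
    7: ({"atr1": "a2", "atr2": "b3", "atr3": "c1"}, "B"),
    8: ({"atr1": "a3", "atr2": "b3", "atr3": "c2"}, "A"),
    9: ({"atr1": "a2", "atr2": "b3", "atr3": "c1"}, "A"),
}
cars, default = ecarg(rows, minsup=2, minconf=50.0)
for itemset, label in cars:
    print(sorted(itemset), "->", label)
# [('atr1', 'a1')] -> A
# [('atr1', 'a2'), ('atr3', 'c2')] -> B
print("default:", default)   # default: A
```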
To observe the effect of requiring 100% confidence, we tested another version of ECARG, called ECARG2, which differs in ruleitem extension: if a ruleitem with 100% confidence cannot be found through extension, the ruleitem with the highest confidence is selected as a CAR and added to the classifier. For example, in Table 5, $\langle atr_2, b_3 \rangle \rightarrow A$ is the only ruleitem that satisfies the minimum support and minimum confidence, so ECARG2 selects it as the third CAR. Its associated transaction IDs are removed, and the remaining transaction ID is shown in Table 7. Only one transaction, with class B, remains; consequently, the default class is B. Finally, all CARs from ECARG2 are shown in Table 8.

5. Experimental Setting and Results

The experiments were implemented and tested on a system with the following environment: Intel Core i3-6100U 2.3 GHz processor with 8 GB DDR4 main memory, running Microsoft Windows 10 64-bit. Our algorithm is compared with the well-known algorithms CBA, CMAR, and FACA. All algorithms were implemented in Java: the Java implementation of CBA using the CR-tree is from WEKA [26], and the Java implementation of CMAR is from [27]. The algorithms are tested on 14 datasets from the UCI Machine Learning Repository; the characteristics of the datasets are shown in Table 9. Ten-fold cross-validation is used to divide testing and training instances, following previous works [12,17,23,26,27]. Accuracy, the number of CARs, classifier building time, and memory consumption are used to measure the performance of the algorithms.
To study the sensitivity of the thresholds in the ECARG algorithm, we varied the minimum support threshold from 1% to 4% and the minimum confidence threshold over 60%, 70%, 80%, and 90%. Figure 1 shows the accuracy rates on all datasets. The results show that accuracy decreases as the minimum support threshold increases, and decreases slightly as the minimum confidence threshold increases.
Most datasets (Diabetes, Iris, Labor, Lymph, Mushroom, Post-operative, Tic-tac-toe, Vote, Wine, and Zoo) reach their highest accuracy when minimum support and minimum confidence are set to 2% and 60%, respectively. Therefore, minimum support is set to 2% and minimum confidence to 60% in the remaining experiments.
Table 10 reports the accuracy of the CBA, CMAR, FACA, ECARG, and ECARG2 algorithms on the UCI datasets. The results show that both of our algorithms outperform the others on average. This gain results from finding the most efficient rule in each iteration while simultaneously eliminating redundant rules. To be more precise, we further analyzed the win-loss-tie records. Based on Table 10, the win-loss-tie records of the ECARG2 algorithm against CBA, CMAR, FACA, and ECARG in terms of accuracy are 11-3-0, 11-3-0, 9-4-1, and 8-6-0, respectively. ECARG gives an accuracy only slightly below that of ECARG2, and the ECARG algorithm achieves the highest accuracy on 6 of the 14 datasets.
Table 11 shows the average number of CARs generated by the CBA, CMAR, FACA, ECARG, and ECARG2 algorithms. The CMAR algorithm generates the highest number of rules, while the ECARG algorithm generates the lowest. In particular, ECARG generates 8 CARs on average over the 14 datasets, whereas the CBA, CMAR, FACA, and ECARG2 algorithms generate 19, 240, 13, and 18 CARs on average, respectively. This is achieved by discovering the most efficient CAR in each iteration and eliminating the unnecessary transaction IDs that lead to redundant CARs.
Table 12 shows the average classifier building time of the proposed algorithm against CBA, CMAR, and FACA. The experimental results clearly show that our algorithm is the fastest on all 14 datasets. ECARG builds the classifier faster than CBA, CMAR, FACA, and ECARG2 by 2.134, 0.307, 2.883, and 0.016 s on average, respectively. This can be explained by the fact that CBA and FACA use an Apriori-style approach to generate candidates; when the minimum support is low on large datasets, handling the large number of candidate ruleitems is costly. The FP-growth-based CMAR algorithm is better than CBA and FACA in some cases, but it still takes more classifier-building time than ECARG and ECARG2.
Table 13 reports the memory consumption of all five algorithms during classifier building. The results show that ECARG consumes less memory than CBA, CMAR, FACA, and ECARG2 by 22.62 MB, 73.15 MB, 36.57 MB, and 0.98 MB, respectively. ECARG's memory consumption is the lowest because it eliminates unnecessary data in each iteration. From the results in Table 14, our proposed algorithm also gives a higher F-measure on average than the other algorithms; in particular, ECARG2 outperforms CBA, CMAR, FACA, and ECARG by 3.82%, 25.38%, 12.74%, and 1.84%, respectively.
Table 15 shows the standard deviations of accuracy, the number of generated rules, building time, memory consumption, and F-measure for ECARG. The standard deviations of building time and memory consumption are low, showing that building time and memory consumption vary only marginally across folds. The standard deviations of the number of generated rules are likewise small.
The standard deviations of accuracy and F-measure show that these values differ only marginally across folds on almost all datasets. However, on the small datasets (Contact-lenses, Labor, Lymph, and Post-operative), the standard deviations are high because 10-fold cross-validation yields very small testing sets, which can strongly affect the measured performance of the classifier. For example, each testing set of the Contact-lenses dataset contains only 2 or 3 transactions, so even a single misclassification reduces the accuracy dramatically.
From the experimental results, the ECARG algorithm outperforms CBA, CMAR, and FACA in terms of accuracy and the number of generated rules. A key achievement of the ECARG algorithm is that it builds classifiers only from valid rules with 100% confidence; such high confidence indicates a high probability that the itemset determines the class. Therefore, the ECARG algorithm produces a small classifier with high accuracy. The CBA, CMAR, and FACA algorithms, in contrast, build classifiers from CARs that merely meet the minimum confidence threshold; some of these CARs have low confidence and may predict incorrect classes, so the accuracies of CBA, CMAR, and FACA are lower than that of the proposed algorithm on most datasets.
Moreover, ECARG outperforms the others in terms of building time and memory consumption. This is achieved by applying simple set operations, i.e., intersection and set difference, to vertical data, which reduces both time and memory consumption. Furthermore, the search space shrinks as unnecessary transactions are eliminated in each stage, which minimizes the classifier building time.

6. Conclusions

This paper proposes algorithms to enhance associative classification. Unlike traditional algorithms, the proposed algorithms need no sorting or pruning process. Rule generation proceeds by first selecting the most general rule with the highest confidence, and the search space is reduced early by cutting off items with low statistical significance. Furthermore, a vertical data format together with intersection and set difference operations is applied to calculate support and confidence and to remove unnecessary transaction IDs, decreasing computation time and memory consumption.
The experiments were conducted on 14 UCI datasets. The experimental results show that the ECARG algorithm outperforms the CBA, CMAR, and FACA algorithms in terms of accuracy by 4.78%, 17.79%, and 1.35% on average, respectively. Furthermore, ECARG generates fewer rules than the other algorithms on almost all datasets, and it attains the best classifier-generating time and memory usage on average. We conclude that the proposed algorithm gives a compact classifier with a high accuracy rate, improves computation time, and reduces memory usage.
However, the ECARG algorithm does not perform well on imbalanced datasets, such as Breast, Cars, Diabetes, and Post-operative. This is because the ECARG algorithm tends to find 100%-confidence CARs and to eliminate unnecessary transactions; ruleitems belonging to minority classes therefore fail to meet the minimum support threshold or 100% confidence and are eliminated accordingly. Consequently, the classifier cannot classify the minority classes correctly.

Author Contributions

Methodology, C.T.; supervision, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by Mahasarakham University (Grant year 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yao, L.; Mao, C.; Luo, Y. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 7370–7377. [Google Scholar]
  2. Jukic, S.; Saracevic, M.; Subasi, A.; Kevric, J. Comparison of Ensemble Machine Learning Methods for Automated Classification of Focal and Non-Focal Epileptic EEG Signals. Mathematics 2020, 8, 1481. [Google Scholar] [CrossRef]
  3. Adamović, S.; Miškovic, V.; Maček, N.; Milosavljević, M.; Šarac, M.; Saračević, M.; Gnjatović, M. An efficient novel approach for iris recognition based on stylometric features and machine learning techniques. Future Gener. Comput. Syst. 2020, 107, 144–157. [Google Scholar] [CrossRef]
  4. Ruff, L.; Vandermeulen, R.; Goernitz, N.; Deecke, L.; Siddiqui, S.A.; Binder, A.; Müller, E.; Kloft, M. Deep one-class classification. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 4393–4402. [Google Scholar]
  5. Liu, B.; Hsu, W.; Ma, Y. Integrating Classification and Association Rule Mining. In Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 27–31 August 1998. [Google Scholar]
  6. Abdelhamid, N. Multi-label rules for phishing classification. Appl. Comput. Inform. 2015, 11, 29–46. [Google Scholar] [CrossRef]
  7. Abdelhamid, N.; Ayesh, A.; Thabtah, F. Phishing detection based associative classification data mining. Expert Syst. Appl. 2014, 41, 5948–5959. [Google Scholar] [CrossRef]
  8. Jabbar, M.; Deekshatulu, B.; Chandra, P. Heart Disease Prediction System using Associative Classification and Genetic Algorithm. arXiv 2013, arXiv:1303.5919. [Google Scholar]
  9. Singh, J.; Kamra, A.; Singh, H. Prediction of heart diseases using associative classification. In Proceedings of the 5th International Conference on Wireless Networks and Embedded Systems (WECON), Rajpura, India, 14–16 October 2016; pp. 1–7. [Google Scholar] [CrossRef]
  10. Wang, D. Analysis and detection of low quality information in social networks. In Proceedings of the 2014 IEEE 30th International Conference on Data Engineering Workshops, Chicago, IL, USA, 31 March–4 April 2014; pp. 350–354. [Google Scholar] [CrossRef]
  11. Hadi, W.; Aburub, F.; Alhawari, S. A new fast associative classification algorithm for detecting phishing websites. Appl. Soft Comput. 2016, 48, 729–734. [Google Scholar] [CrossRef]
  12. Hadi, W.; Issa, G.; Ishtaiwi, A. ACPRISM: Associative classification based on PRISM algorithm. Inf. Sci. 2017, 417, 287–300. [Google Scholar] [CrossRef]
  13. Rajab, K.D. New Associative Classification Method Based on Rule Pruning for Classification of Datasets. IEEE Access 2019, 7, 157783–157795. [Google Scholar] [CrossRef]
  14. Nguyen, L.; Nguyen, N.T. An improved algorithm for mining class association rules using the difference of Obidsets. Expert Syst. Appl. 2015, 42, 4361–4369. [Google Scholar] [CrossRef]
  15. Song, K.; Lee, K. Predictability-based collective class association rule mining. Expert Syst. Appl. 2017, 79, 1–7. [Google Scholar] [CrossRef]
  16. Alwidian, J.; Hammo, B.H.; Obeid, N. WCBA: Weighted classification based on association rules algorithm for breast cancer disease. Appl. Soft Comput. 2018, 62, 536–549. [Google Scholar] [CrossRef]
  17. Alwidian, J.; Hammo, B.; Obeid, N. FCBA: Fast Classification Based on Association Rules Algorithm. Int. J. Comput. Sci. Netw. Secur. 2016, 16, 117. [Google Scholar]
  18. Abdelhamid, N.; Jabbar, A.A.; Thabtah, F. Associative classification common research challenges. In Proceedings of the 2016 45th International Conference on Parallel Processing Workshops (ICPPW), Philadelphia, PA, USA, 16–19 August 2016; pp. 432–437. [Google Scholar]
  19. Zaki, M.J.; Parthasarathy, S.; Ogihara, M.; Li, W. New algorithms for fast discovery of association rules. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining, Newport Beach, CA, USA, 14–17 August 1997. [Google Scholar]
  20. Agrawal, R.; Srikant, R. Fast algorithms for mining association rules. In Proceedings of the 20th International Conference Very Large Data Bases, VLDB, Santiago, Chile, 12–15 September 1994; Volume 1215, pp. 487–499. [Google Scholar]
  21. Quinlan, J. C4.5: Programs for Machine Learning; Morgan Kaufmann Publisher, Inc.: Los Altos, CA, USA, 1993. [Google Scholar]
  22. Li, W.; Han, J.; Pei, J. CMAR: Accurate and efficient classification based on multiple class-association rules. In Proceedings of the 2001 IEEE International Conference on Data Mining, San Jose, CA, USA, 29 November–2 December 2001; pp. 369–376. [Google Scholar]
  23. Thabtah, F.; Cowling, P.; Peng, Y. MCAR: Multi-class classification based on association rule. In Proceedings of the 3rd ACS/IEEE International Conference on Computer Systems and Applications, Cairo, Egypt, 6 January 2005. [Google Scholar] [CrossRef]
  24. Zaki, M.; Gouda, K. Fast Vertical Mining Using Diffsets. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 24–27 August 2003; ACM: New York, NY, USA, 2003; pp. 326–335. [Google Scholar] [CrossRef]
  25. Hadi, W. ECAR: A new enhanced class association rule. Adv. Comput. Sci. Technol. 2015, 8, 43–52. [Google Scholar]
  26. Mutter, S. Class JCBA. 2013. Available online: https://github.com/bnjmn/weka (accessed on 30 September 2018).
  27. Padillo, F.; Luna, J.M.; Ventura, S. LAC: Library for associative classification. Knowl.-Based Syst. 2019, 193, 105432. [Google Scholar] [CrossRef]
Figure 1. Accuracy rates for various minsup and minconf values on all datasets.
Table 1. Advantages and disadvantages of associative classification (AC) algorithms.

| Algorithm | Advantage | Disadvantage |
|---|---|---|
| CBA | It adopted the association rule technique to classify data, which is proven to be more accurate than traditional classification techniques. | It is sensitive to the minimum support threshold; a massive number of rules are generated when a low minimum support threshold is given. |
| CMAR | It uses an efficient FP-tree, which consumes less memory and space compared to CBA. | The FP-tree will not always fit in main memory, especially when the number of attributes is large. |
| eMCAC | It adopts vertical data representation to reduce space usage and to find multi-label classes. | It is based on an Apriori-like technique that can result in a large number of frequent itemsets. |
| FACA | Set difference is adopted to consume little memory and to reduce mining time. | It is based on an Apriori-like technique; the algorithm must search for all frequent itemsets among all possible candidate itemsets at each level. |
| PCAR | It uses a predictability value to prune unnecessary rules. | Execution is slow because of the inner cross-validation phase used to calculate predictability values. |
| WCBA | It uses a weighted method to select useful rules and to improve the performance of the classifier. | Weight factors depend on the decisions of domain experts, which can lead to different experimental results. |
| APR | A new evaluation method yielding a small classifier with a high accuracy rate. | A large number of rules are generated when a low minimum support threshold is given. |
Table 2. A sample dataset.

| TID | atr1 | atr2 | atr3 | Class Label |
|---|---|---|---|---|
| 1 | a1 | b1 | c1 | A |
| 2 | a1 | b1 | c2 | A |
| 3 | a1 | b2 | c1 | A |
| 4 | a1 | b3 | c1 | A |
| 5 | a2 | b1 | c2 | B |
| 6 | a2 | b2 | c2 | B |
| 7 | a2 | b3 | c1 | B |
| 8 | a3 | b3 | c2 | A |
| 9 | a2 | b3 | c1 | A |
Table 3. The rules that meet the minimum support threshold (white background cells in the original article).

| Ruleitem | TIDs | Sup | Conf (%) |
|---|---|---|---|
| ⟨atr1, a1⟩ → A | 1, 2, 3, 4 | 4 | 100 |
| ⟨atr1, a2⟩ → A | 9 | 1 | - |
| ⟨atr1, a2⟩ → B | 5, 6, 7 | 3 | 75 |
| ⟨atr1, a3⟩ → A | 8 | 1 | - |
| ⟨atr2, b1⟩ → A | 1, 2 | 2 | 66.67 |
| ⟨atr2, b1⟩ → B | 5 | 1 | - |
| ⟨atr2, b2⟩ → A | 3 | 1 | - |
| ⟨atr2, b2⟩ → B | 6 | 1 | - |
| ⟨atr2, b3⟩ → A | 4, 8, 9 | 3 | 75 |
| ⟨atr2, b3⟩ → B | 7 | 1 | - |
| ⟨atr3, c1⟩ → A | 1, 3, 4, 9 | 4 | 80 |
| ⟨atr3, c1⟩ → B | 7 | 1 | - |
| ⟨atr3, c2⟩ → A | 2, 8 | 2 | 50 |
| ⟨atr3, c2⟩ → B | 5, 6 | 2 | 50 |
Table 4. The remaining transaction IDs after generating the first Class Association Rule (CAR).

| Ruleitem | TIDs | Sup | Conf (%) |
|---|---|---|---|
| ⟨atr1, a2⟩ → A | 9 | 1 | - |
| ⟨atr1, a2⟩ → B | 5, 6, 7 | 3 | 75 |
| ⟨atr1, a3⟩ → A | 8 | 1 | - |
| ⟨atr2, b1⟩ → B | 5 | 1 | - |
| ⟨atr2, b2⟩ → B | 6 | 1 | - |
| ⟨atr2, b3⟩ → A | 8, 9 | 2 | 66.67 |
| ⟨atr2, b3⟩ → B | 7 | 1 | - |
| ⟨atr3, c1⟩ → A | 9 | 1 | - |
| ⟨atr3, c1⟩ → B | 7 | 1 | - |
| ⟨atr3, c2⟩ → A | 8 | 1 | - |
| ⟨atr3, c2⟩ → B | 5, 6 | 2 | 66.67 |
Table 5. Transaction IDs after generating the second CAR.

| Ruleitem | TIDs | Sup | Conf (%) |
|---|---|---|---|
| ⟨atr1, a2⟩ → A | 9 | 1 | - |
| ⟨atr1, a2⟩ → B | 7 | 1 | - |
| ⟨atr1, a3⟩ → A | 8 | 1 | - |
| ⟨atr2, b3⟩ → A | 8, 9 | 2 | 66.67 |
| ⟨atr2, b3⟩ → B | 7 | 1 | - |
| ⟨atr3, c1⟩ → A | 9 | 1 | - |
| ⟨atr3, c1⟩ → B | 7 | 1 | - |
| ⟨atr3, c2⟩ → A | 8 | 1 | - |
Table 6. All CARs from ECARG.

| CAR ID | CAR |
|---|---|
| R1 | ⟨atr1, a1⟩ → A |
| R2 | ⟨(atr1, a2), (atr3, c2)⟩ → B |
| Default class | A |
Table 7. Transaction IDs after ECARG2 generated the third CAR.

| Ruleitem | TIDs | Sup | Conf (%) |
|---|---|---|---|
| ⟨atr1, a2⟩ → B | 7 | 1 | - |
| ⟨atr2, b3⟩ → B | 7 | 1 | - |
| ⟨atr3, c1⟩ → B | 7 | 1 | - |
Table 8. All CARs from ECARG2.

| CAR ID | CAR |
|---|---|
| R1 | ⟨atr1, a1⟩ → A |
| R2 | ⟨(atr1, a2), (atr3, c2)⟩ → B |
| R3 | ⟨atr2, b3⟩ → A |
| Default class | B |
Table 9. Characteristics of the experimental datasets.

| Dataset | # of Attributes | # of Classes | # of Instances |
|---|---|---|---|
| Anneal | 38 | 6 | 798 |
| Breast | 11 | 2 | 699 |
| Cars | 6 | 4 | 1728 |
| Contact-lenses | 4 | 3 | 24 |
| Diabetes | 7 | 2 | 768 |
| Iris | 4 | 3 | 150 |
| Labor | 17 | 2 | 57 |
| Lymph | 18 | 4 | 148 |
| Mushroom | 22 | 2 | 8124 |
| Post-operative | 9 | 4 | 90 |
| Tic-tac-toe | 9 | 2 | 958 |
| Vote | 16 | 2 | 435 |
| Wine | 13 | 3 | 178 |
| Zoo | 17 | 7 | 101 |
Table 10. Accuracies of CBA, CMAR, FACA, ECARG, and ECARG2.

| Dataset | CBA | CMAR | FACA | ECARG | ECARG2 |
|---|---|---|---|---|---|
| Anneal | 83.19 | 73.27 | 87.31 | 95.21 | 96.77 |
| Breast | 67.16 | 74.83 | 72.44 | 70.33 | 73.02 |
| Cars | 78.29 | 73.73 | 70.02 | 73.43 | 87.79 |
| Contact | 66.67 | 37.5 | 63.33 | 70.83 | 65.00 |
| Diabetes | 74.47 | 57.03 | 73.56 | 67.32 | 73.7 |
| Iris | 92.67 | 97.33 | 96.00 | 95.33 | 96.00 |
| Labor | 75.67 | 26.32 | 87.67 | 92.67 | 84.00 |
| Lymph | 77.76 | 43.24 | 82.43 | 88.51 | 81.81 |
| Mushroom | 93.40 | 86.25 | 96.52 | 98.15 | 98.9 |
| Post-oper. | 56.67 | 70.00 | 67.78 | 70.00 | 60.00 |
| Tic-tac-toe | 99.16 | 53.03 | 90.23 | 65.34 | 88.94 |
| Vote | 94.02 | 92.64 | 91.92 | 95.31 | 95.17 |
| Wine | 89.97 | 62.92 | 92.16 | 98.87 | 97.16 |
| Zoo | 60.27 | 79.21 | 86.00 | 95.00 | 96.00 |
| Average | 79.24 | 66.24 | 82.67 | 84.02 | 84.42 |
Table 11. The average number of generated rules on the UCI datasets.

| Dataset | CBA | CMAR | FACA | ECARG | ECARG2 |
|---|---|---|---|---|---|
| Anneal | 31 | 65 | 15 | 14 | 17 |
| Breast | 16 | 127 | 23 | 3 | 35 |
| Cars | 25 | 272 | 9 | 5 | 18 |
| Contact | 9 | 25 | 5 | 7 | 8 |
| Diabetes | 56 | 115 | 24 | 4 | 38 |
| Iris | 11 | 38 | 7 | 4 | 9 |
| Labor | 8 | 297 | 15 | 9 | 9 |
| Lymph | 26 | 465 | 15 | 19 | 20 |
| Mushroom | 8 | 28 | 16 | 12 | 13 |
| Post-oper. | 35 | 51 | 12 | 11 | 27 |
| Tic-tac-toe | 28 | 713 | 12 | 6 | 33 |
| Vote | 30 | 658 | 12 | 11 | 10 |
| Wine | 5 | 237 | 11 | 9 | 7 |
| Zoo | 10 | 97 | 11 | 10 | 10 |
| Average | 19 | 240 | 13 | 8 | 18 |
Table 12. The classifier building time in seconds.

| Dataset | CBA | CMAR | FACA | ECARG | ECARG2 |
|---|---|---|---|---|---|
| Anneal | 1.050 | 0.098 | 0.877 | 0.123 | 0.164 |
| Breast | 0.670 | 0.169 | 0.185 | 0.007 | 0.027 |
| Cars | 0.220 | 0.249 | 0.640 | 0.057 | 0.062 |
| Contact | 0.010 | 0.075 | 0.004 | 0.001 | 0.002 |
| Diabetes | 1.160 | 0.107 | 0.558 | 0.032 | 0.085 |
| Iris | 0.030 | 0.008 | 0.010 | 0.004 | 0.004 |
| Labor | 1.170 | 0.924 | 0.027 | 0.005 | 0.005 |
| Lymph | 1.320 | 3.782 | 3.700 | 0.016 | 0.016 |
| Mushroom | 25.830 | 0.104 | 21.500 | 4.049 | 4.128 |
| Post-oper. | 0.090 | 0.041 | 0.063 | 0.008 | 0.012 |
| Tic-tac-toe | 0.230 | 0.235 | 0.800 | 0.101 | 0.135 |
| Vote | 1.540 | 2.601 | 5.300 | 0.034 | 0.034 |
| Wine | 0.120 | 0.273 | 0.190 | 0.007 | 0.007 |
| Zoo | 0.900 | 0.005 | 0.047 | 0.020 | 0.013 |
| Average | 2.453 | 0.623 | 2.422 | 0.319 | 0.335 |
Table 13. The classifier building memory consumption in megabytes.

| Dataset | CBA | CMAR | FACA | ECARG | ECARG2 |
|---|---|---|---|---|---|
| Anneal | 73.47 | 29.16 | 10.78 | 10.68 | 13.38 |
| Breast | 25.44 | 23.96 | 24.4 | 1.92 | 3.54 |
| Cars | 60.08 | 21.17 | 8.98 | 3.05 | 3.76 |
| Contact | 2.65 | 0.99 | 1.87 | 1.78 | 1.84 |
| Diabetes | 28.08 | 26.74 | 24.61 | 3.01 | 7.30 |
| Iris | 4.16 | 2.40 | 1.88 | 1.17 | 1.17 |
| Labor | 18.34 | 420.88 | 124.01 | 1.95 | 1.95 |
| Lymph | 27.31 | 250.93 | 231.75 | 2.86 | 2.86 |
| Mushroom | 28.89 | 29.12 | 24.52 | 24.27 | 24.31 |
| Post-oper. | 15.17 | 8.78 | 16.38 | 2.03 | 2.61 |
| Tic-tac-toe | 31.76 | 62.23 | 12.76 | 4.73 | 8.44 |
| Vote | 23.57 | 2.65 | 3.13 | 3.09 | 3.15 |
| Wine | 20.87 | 175.36 | 59.52 | 1.82 | 1.82 |
| Zoo | 21.41 | 34.33 | 31.93 | 2.20 | 2.13 |
| Average | 27.23 | 77.76 | 41.18 | 4.61 | 5.59 |
Table 14. F-measure of Classification-based Association (CBA), Classification based on Multiple Association Rules (CMAR), Fast Associative Classification Algorithm (FACA), ECARG, and ECARG2.

| Dataset | CBA | CMAR | FACA | ECARG | ECARG2 |
|---|---|---|---|---|---|
| Anneal | 75.93 | 43.73 | 43.64 | 61.24 | 89.28 |
| Breast | 66.15 | 66.32 | 66.39 | 58.42 | 68.43 |
| Cars | 73.85 | 33.31 | 31.04 | 45.81 | 71.57 |
| Contact | 53.31 | 43.08 | 49.94 | 71.67 | 61.67 |
| Diabetes | 74.4 | 49.01 | 74.3 | 56.56 | 71.21 |
| Iris | 92.70 | 97.98 | 93.56 | 90.41 | 96.16 |
| Labor | 70.85 | 28.29 | 75.46 | 88.89 | 85.87 |
| Lymph | 78.83 | 48.14 | 53.63 | 81.94 | 72.08 |
| Mushroom | 93.75 | 87.61 | 96.52 | 96.58 | 98.90 |
| Post-oper. | 52.51 | 20.59 | 56.00 | 54.45 | 39.57 |
| Tic-tac-toe | 98.90 | 43.88 | 64.40 | 95.44 | 87.64 |
| Vote | 94.95 | 72.82 | 91.82 | 93.82 | 94.44 |
| Wine | 87.03 | 69.14 | 92.47 | 98.65 | 94.37 |
| Zoo | 54.61 | 62.00 | 53.73 | 91.76 | 89.37 |
| Average | 76.27 | 54.71 | 67.35 | 78.25 | 80.09 |
Table 15. Averages and standard deviations of ECARG results.

| Dataset | Accuracy AVG | Accuracy S.D. | # of Rules AVG | # of Rules S.D. | Building Time AVG | Building Time S.D. | Memory AVG | Memory S.D. | F-1 AVG | F-1 S.D. |
|---|---|---|---|---|---|---|---|---|---|---|
| Anneal | 95.21 | 2.10 | 14 | 0.94 | 0.123 | 0.05 | 10.68 | 0.03 | 61.24 | 6.98 |
| Breast | 70.33 | 5.10 | 3 | 1.40 | 0.007 | 0.01 | 1.92 | 0.02 | 58.42 | 1.78 |
| Car | 73.43 | 6.11 | 5 | 0.82 | 0.057 | 0.05 | 3.05 | 0.00 | 45.81 | 7.16 |
| Contact | 70.83 | 28.81 | 7 | 0.92 | 0.001 | 0.00 | 1.78 | 0.02 | 71.67 | 30.54 |
| Diabetes | 67.32 | 6.75 | 4 | 1.34 | 0.032 | 0.01 | 3.01 | 0.03 | 56.56 | 3.41 |
| Iris | 95.33 | 5.44 | 4 | 0.52 | 0.004 | 0.00 | 1.17 | 0.00 | 90.41 | 5.48 |
| Labor | 92.67 | 14.05 | 9 | 1.26 | 0.005 | 0.00 | 1.95 | 0.02 | 88.89 | 13.72 |
| Lymph | 88.51 | 10.00 | 19 | 3.37 | 0.016 | 0.00 | 2.86 | 0.00 | 81.94 | 15.5 |
| Mushroom | 98.15 | 0.32 | 12 | 0.00 | 4.049 | 0.64 | 24.27 | 0.03 | 96.58 | 0.31 |
| Post-oper. | 70.00 | 17.41 | 11 | 3.34 | 0.008 | 0.00 | 2.03 | 0.02 | 54.45 | 14.21 |
| Tic-tac-toe | 65.34 | 6.16 | 6 | 2.13 | 0.101 | 0.06 | 4.73 | 0.06 | 95.44 | 3.85 |
| Vote | 95.31 | 2.72 | 11 | 2.26 | 0.034 | 0.01 | 3.09 | 0.03 | 93.82 | 2.84 |
| Wine | 98.87 | 2.34 | 9 | 0.53 | 0.007 | 0.00 | 1.82 | 0.03 | 98.65 | 2.31 |
| Zoo | 95.00 | 6.99 | 10 | 0.70 | 0.020 | 0.01 | 2.20 | 0.03 | 91.76 | 13.65 |
| Average | 84.02 | 8.16 | 8 | 1.395 | 0.320 | 0.06 | 4.61 | 0.02 | 78.25 | 8.70 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
