Article

Online Learning Method for Drift and Imbalance Problem in Client Credit Assessment

Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(7), 890; https://doi.org/10.3390/sym11070890
Submission received: 18 June 2019 / Revised: 2 July 2019 / Accepted: 4 July 2019 / Published: 8 July 2019

Abstract

Machine learning algorithms have been widely used in the field of client credit assessment. However, few of the algorithms have focused on and solved the problems of concept drift and class imbalance. Due to changes in the macroeconomic environment and markets, the relationship between client characteristics and credit assessment results may change over time, causing concept drift in client credit assessments. Moreover, client credit assessment data are naturally asymmetric and class imbalanced because of the screening of clients. Aiming at solving the joint research issue of concept drift and class imbalance in client credit assessments, in this paper, a novel sample-based online learning ensemble (SOLE) for client credit assessment is proposed. A novel multiple time scale ensemble classifier and a novel sample-based online class imbalance learning procedure are proposed to handle the potential concept drift and class imbalance in the client credit assessment data streams. The experiments are carried out on two real-world client credit assessment cases, which present a comprehensive comparison between the proposed SOLE and other state-of-the-art online learning algorithms. In addition, the base classifier preference and the computing resource consumption of all the comparative algorithms are tested. In general, SOLE achieves a better performance than other methods using fewer computing resources. In addition, the results of the credit scoring model and the Kolmogorov–Smirnov (KS) test also prove that SOLE has good practicality in actual client credit assessment applications.

1. Introduction

Client credit assessment is an important reference for developing bank financial services and loan approval procedures. Its main purpose is to determine the probability of default and to help banks reduce risk. The earliest client credit assessment method was empirical discrimination by credit analysts. This qualitative analysis method relies heavily on the professional quality and experience of the evaluators and lacks objectivity.
With the application of digital technology in banking, large amounts of client data and credit information are collected. Many studies using machine learning algorithms have been conducted [1], such as linear discriminant analysis (LDA) [2], logistic regression (LR) [3], decision trees (DTs) [4], naive Bayes (NB) [5], artificial neural networks (ANNs) [6,7], support vector machines (SVMs) [8], and ensemble approaches [4,9], to achieve automatic credit assessment. In essence, client credit assessment is a classification problem: according to the default risk, clients are divided into two categories, “good” and “risk” [10].
In actual situations, client data are collected in chronological order as bank transactions proceed. The relationship between client characteristics and credit assessment results is not static. With developing market environments and economic cycles, this relationship may vary [11]. Traditional machine learning algorithms create a learning model based on historical client data and then use the model to predict new client data. In addition, practical applications place demands on the evaluation speed of the credit assessment model, and traditional machine learning algorithms cannot update the model in real time as new client data arrive. Therefore, the learning model learns only the old relationships. A learning model established by traditional machine learning methods will assess new clients well if the relationship between client characteristics and credit assessment results does not change. Once the relationship has changed, traditional machine learning models are likely to make incorrect assessments, which creates risks and can cause economic losses. To ensure that the learning model can be updated with the latest client data and make accurate assessments, this paper proposes applying online learning methods in the field of client credit assessment. In contrast to traditional machine learning methods, online learning methods process each training instance once “on arrival”, without storing it, and make predictions using the current model at each time step [12]. Therefore, online learning algorithms can immediately capture potential changes in the relationship between client characteristics and credit assessment. Additionally, online learning has been proven to achieve good performance in practical applications [13,14].
Online learning is often challenged by the joint issue of concept drift and class imbalance in the field of client credit assessment. In contrast to traditional machine learning algorithms that are trained on static datasets, online learning algorithms address the arriving training instances from the data stream one by one. Suppose the data stream has an underlying probability distribution Pt(x,yi) [15] from which the instances in the data stream are generated. Once the distribution Pt(x,yi) varies, the characteristics of the instances change and concept drift occurs. In general, concept drift can be classified into sudden drift and gradual drift according to the speed at which the concept changes [16].
As noted, in the field of credit assessment, the relationship between the client characteristics and the credit assessment results is not static. Many client characteristics, such as income, expenditure, asset value, capital gain, and expected income, are susceptible to macroeconomic and market conditions. The income and asset value of a client with good credit in an economic expansion cycle will be significantly different from those of the same client in an economic contraction. This phenomenon constitutes concept drift in the client credit assessment data stream. Although existing research [11] has recognized potential concept drift in client credit assessment, it has used only a simple integrated approach to improve the overall accuracy.
Another important challenge is the imbalanced class distribution in the client credit assessment data [17,18]. In an actual credit business, many possible default clients are directly rejected in the initial screening, leading to different numbers of “good” clients and “risk” clients in the data collection period. The number of good clients is larger than the number of risk clients, leading to a problem of asymmetry. Moreover, the costs of misclassifying risk clients as good clients and of misclassifying good clients as risk clients are different. Misclassification of risk clients is more expensive and should be avoided as much as possible. Therefore, the learning model must focus on the minority class (risk clients). To address the class imbalance in the field of client credit assessment, many algorithms apply sampling techniques to balance the training dataset. Brown and Mues [2] applied simple random undersampling and oversampling methods to deal with the class imbalance. Zieba et al. [19] used the synthetic minority oversampling technique (SMOTE) to generate risk clients and achieved better performance than the random sampling methods. However, introducing too much synthetic client data will create more subjective factors. In addition, these sampling methods based on static datasets cannot be used in online learning conditions. In summary, the main challenges of credit assessment are the evaluation speed, the concept drift problem and the class imbalance problem.
Many online learning algorithms have been proposed to address concept drift and class imbalance in data streams. Online learning algorithms for concept drift are commonly categorized into active approaches and passive approaches [20]. Active approaches [21] apply a drift detection mechanism to identify the occurrence of concept drift and then take action to address it, which handles sudden concept drift better. In contrast, passive approaches [22,23] evolve the classifier continuously without detecting concept drift and are good at overcoming gradual drift. In an actual client credit assessment data stream, the type of concept drift is unknown, which requires the learning model to be capable of handling different types of concept drift. To handle class imbalance in online learning conditions, Wang et al. [24] proposed a time-decayed indicator to evaluate the real-time class imbalance degree of the data stream. The indicator was then used to change the sampling times of the ensemble learning algorithm online bagging (OB) [25], yielding oversampling online bagging (OOB) and undersampling online bagging (UOB).
In this paper, a novel sample-based online learning ensemble for client credit assessment is proposed to solve the deficiencies of traditional machine learning methods and handle concept drift and class imbalance in the credit assessment data stream. First, a novel multiple time scale ensemble classifier is proposed that contains a stable classifier and dynamic classifiers. To address gradual concept drift, the stable classifier learns the whole data stream from the moment the learning procedure begins. To handle sudden drift, each dynamic classifier exists for only a period and learns a partial concept. Second, to overcome the class imbalance, a novel sample-based online class imbalance learning procedure is proposed, which combines oversampling of the minority instances and undersampling of the majority instances. Two bank client credit assessment cases, Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) and Gateway Mobile Switching Centre (GMSC), are analyzed. The proposed sample-based online learning ensemble (SOLE) is compared with other state-of-the-art online learning methods on multiple metrics, such as accuracy, recall, F-score, G-mean and prequential area under the curve (PAUC). Two base classifiers, the Hoeffding tree and the naive Bayes, are used to test the preference of the online learning algorithms for the base classifier. The time and memory consumption of the algorithms are also reported. In general, the proposed SOLE achieves better performance using fewer computing resources and shows high adaptability for different base classifiers. In addition, to verify the feasibility of SOLE in practical applications, the credit scoring model and Kolmogorov–Smirnov (KS) test are applied. The results show that SOLE is suitable for practical application and can support services that need fast credit assessments, such as e-commerce, online loan approval and financial security testing.
The rest of this paper is organized as follows. Section 2 proposes the multiple time scale ensemble classifier. SOLE is proposed in Section 3. Then, the experimental results and analysis reports are presented in Section 4. Section 5 provides the conclusion.

2. Multiple Time Scale Ensemble Classifier

Online learning algorithms should have the ability to handle potential concept drift in the client credit assessment data stream. In this section, a novel multiple time scale ensemble classifier is proposed to solve this issue. The main contribution of the ensemble method is that it is capable of handling different types of concept drift. As sudden concept drift changes the concept immediately, a learning algorithm that learns only the latest instances can adapt to the new concept rapidly, while a classifier that learns more instances helps handle gradual concept drift [26]. In addition, if an old concept reappears, a learning model that is trained over a long time and has learned the old concept will perform better. Therefore, the proposed ensemble classifier has base classifiers of different time scales to address different potential concept drifts. Figure 1 shows the structure of the multiple time scale ensemble classifier.
As shown in Figure 1, the multiple time scale ensemble classifier has a long-term stable classifier Cs and a damped sliding window of dynamic classifiers Cd (d = 1, 2, …, D). Suppose S is an unlimited data stream …, xi, xi+1, xi+2, …. The real label of the instance xt, which arrives at time t, is yt. To capture the concept of the data stream at different times, every I instances of the data stream are treated as one time cycle T. Therefore, as instances continuously arrive from the data stream, the data stream can be regarded as consecutive time cycles T1, T2, … Tn, Tn+1 ….
Base classifiers in the multiple time scale ensemble classifier learn instances of different time scales. The stable classifier Cs learns all the instances from the start of the learning procedure, which handles the gradual and cyclic concept drift of the data stream. The dynamic classifier Cd learns only instances within a limited number of time cycles, defined as:
$$
\begin{cases}
S_s = \bigcup_{k=1}^{n} T_k \\
S_d = \bigcup_{k=d}^{D} T_{n-D+k}, & d = 1, 2, \ldots, D
\end{cases} \tag{1}
$$
where the current time cycle is Tn. The stable classifier and the dynamic classifiers learn instances of different time scales: those learned by the stable classifier are given by Ss, and those learned by the dynamic classifier Cd are given by Sd. The dynamic classifier Cd learns only the instances of the most recent D−d+1 time cycles. Each classifier in the ensemble has its own predictive weight, and the actual prediction result is the weighted combination of all classifiers:
$$
f_l^E(x) = w_s f_l^{C_s}(x) + \sum_{d=1}^{D} w_d f_l^{C_d}(x), \quad l = 0, 1 \tag{2}
$$
where f_l^E(x) is the ensemble prediction that the instance x belongs to class l, ws is the weight of the stable classifier, and wd (d = 1, 2, …, D) are the weights of the dynamic classifiers. The weight of the stable classifier ws is a constant, while the weights of the dynamic classifiers wd decrease over time. The initial weight of a newly created dynamic classifier is 1/D, and the weights of the older dynamic classifiers decrease repeatedly according to their creation time, as shown in Equation (3).
$$
\begin{cases}
w_s = \dfrac{1}{2} \\
w_d \leftarrow w_d \left(1 - \dfrac{1}{D}\right), & d = 1, 2, \ldots, D-1 \\
w_D = \dfrac{1}{D}
\end{cases} \tag{3}
$$
Therefore, older dynamic classifiers have less predictive weight than the newest dynamic classifier, which helps the ensemble classifier focus more on the latest instances of the data stream. Additionally, the stable classifier and the dynamic classifier window each contribute half of the predictive weight when the ensemble classifier makes predictions. A short sketch of this weighting and voting scheme is given below; the complete SOLE learning procedure is shown in Algorithm 1.
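The following is a minimal Python sketch of the weight update in Equation (3) and the weighted vote in Equation (2). It is illustrative only: the `predict_proba` method is an assumed interface that returns a per-class score vector, and the function names are not part of the original implementation.

```python
import numpy as np

def update_weights(dynamic_weights, D):
    """Decay the existing dynamic-classifier weights and append the weight
    of the newly created dynamic classifier (Equation (3))."""
    decayed = [w * (1.0 - 1.0 / D) for w in dynamic_weights]
    decayed.append(1.0 / D)              # new dynamic classifier starts at 1/D
    return decayed                       # the stable classifier keeps w_s = 1/2

def ensemble_predict(stable_clf, dynamic_clfs, dynamic_weights, x):
    """Weighted vote of the stable and dynamic classifiers (Equation (2))."""
    w_s = 0.5
    scores = w_s * np.asarray(stable_clf.predict_proba(x), dtype=float)
    for clf, w_d in zip(dynamic_clfs, dynamic_weights):
        scores += w_d * np.asarray(clf.predict_proba(x), dtype=float)
    return int(np.argmax(scores))        # predicted class label (0 or 1)
```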
Algorithm 1. Learning Procedure
Inputs:
1. S: data stream
2. I: number of instances in a time cycle
3. D: number of the dynamic classifiers
4. α: sampling parameter
Output: Ensemble classifier E
Initiation:
1. p = 0: counter of processed instances
2. k = 0: counter of dynamic classifiers
3. The stable classifier Cs
4. The candidate classifier Cc
Process:
1. while (S.hasNext()) do
2.  xnew = S.getNextInstance()
3.  TrainOnInstance(xnew, α, DCIR[l])
4.  p = p + 1
5.  if (p%I == 0) then    // reach a new time cycle
6.   k = k + 1
7.   if (k > D) then    // number of dynamic classifiers exceeds D
8.    Cd ← Cd+1 (d = 1, …, D−1)
9.    CD ← Cc
10.   else
11.    Ck ← Cc   // copy Cc as the new dynamic classifier
12.   end if
13.   Cc.resetLearning()   // reset the candidate classifier
14.   Calculate the weights of the classifiers according to Equation (3)
15.   Compute the damped class imbalance ratio by Equation (4)
16.  end if
17. end while
At the early stages of the learning procedure, a stable classifier Cs and a candidate classifier Cc are created. The stable classifier exists during the whole learning procedure, while the candidate classifier learns only the instances of one time cycle. Whenever a new time cycle is reached (line 5), the candidate classifier trained on the last time cycle is copied as a new dynamic classifier and included in the dynamic classifier damped sliding window. Then, the candidate classifier is reset. If the number of dynamic classifiers is greater than D (lines 7–9), the subsequent classifiers replace the former ones one by one, Cd ← Cd+1 (d = 1, …, D−1), to keep the number of dynamic classifiers at D, and the candidate classifier is used as the new dynamic classifier CD. Then, the algorithm updates the weights of the classifiers according to Equation (3) and calculates the damped class imbalance ratio using Equation (4). In the dynamic classifier damped sliding window, older dynamic classifiers were trained on older instances and have less predictive weight, which helps the ensemble classifier always attach importance to the latest concept of the data stream. The stable classifier exists for the whole learning process and can better address gradual and cyclic concept drift.
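For readers who prefer running code over pseudocode, the sketch below mirrors Algorithm 1 in Python. It is a sketch only: `base_learner` is assumed to be any incremental classifier that can be deep-copied, and `update_weights`, `train_on_instance` and `compute_dcir` refer to the illustrative helpers sketched alongside Equations (2)-(4) and Algorithm 2 in this paper, not to an official implementation.

```python
import copy
from collections import Counter

def sole_learning(stream, base_learner, I=500, D=10, alpha=1.0):
    """Sketch of Algorithm 1: a stable classifier plus a damped sliding window
    of dynamic classifiers, rebuilt every I instances (one time cycle)."""
    stable = copy.deepcopy(base_learner)        # Cs: learns the whole stream
    candidate = copy.deepcopy(base_learner)     # Cc: learns the current time cycle
    dynamic, weights, histograms = [], [], []   # Cd, their weights w_d, class counts H_d
    cycle_hist = Counter()                      # class counts of the current cycle
    dcir = {0: 0.5, 1: 0.5}                     # damped class imbalance ratio
    p = 0
    for x, y in stream:
        # Algorithm 2: over/undersampled incremental update of all classifiers
        train_on_instance(stable, candidate, dynamic, x, y, alpha, dcir)
        cycle_hist[y] += 1
        p += 1
        if p % I == 0:                          # a new time cycle is reached
            if len(dynamic) >= D:               # window full: drop the oldest classifier
                dynamic.pop(0)
                weights.pop(0)
                histograms.pop(0)
            dynamic.append(copy.deepcopy(candidate))
            histograms.append(cycle_hist)
            candidate = copy.deepcopy(base_learner)   # reset the candidate classifier
            cycle_hist = Counter()
            weights = update_weights(weights, D)      # Equation (3)
            dcir = compute_dcir(histograms, weights)  # Equation (4)
    return stable, dynamic, weights
```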

3. Sample-Based Online Class Imbalance Learning Procedure

If a data stream is class imbalanced, the model will lack minority class training instances. Sampling is a commonly used technique for class imbalance learning. However, traditional sampling techniques, such as random sampling or smart sampling [2,19], are not applicable to online learning conditions, because online learning requires the model to address one instance at each time step. Under the online learning condition, sampling can instead be realized by changing the number of times an instance is used for training. OB [25] is an online ensemble learning algorithm that adapts offline bagging and contains multiple base classifiers. For each instance, each base classifier is updated K times, where K follows the Poisson(λ = 1) distribution. For the class imbalance learning condition, the categories are divided into a majority (negative) class and a minority (positive) class according to their numbers of instances. To balance the class imbalance ratio, the proposed sample-based online class imbalance learning procedure oversamples the positive instances and undersamples the negative instances according to the real-time class imbalance ratio of the data stream. Therefore, the classifier can be trained on a nearly equal number of instances of each class.
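As a point of reference, the Poisson-based update used by OB can be written in a few lines; this is an illustrative Python sketch, where the `learn` method stands in for whatever incremental update interface the base classifiers expose.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def online_bagging_update(ensemble, x, y, lam=1.0):
    """Online bagging (OB): every base classifier learns the arriving instance
    K times, where K ~ Poisson(lam). OOB and UOB rebalance the stream by
    raising lam for minority instances or lowering it for majority instances."""
    for clf in ensemble:
        k = rng.poisson(lam)
        for _ in range(k):
            clf.learn(x, y)        # assumed incremental update method
```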
To determine how many times an instance is used for training, the damped class imbalance ratio (DCIR), which tracks the class imbalance ratio of the data stream in real time, is proposed. Recall that the candidate classifier is copied into the ensemble as a new dynamic classifier. Since the candidate classifier is trained on the instances of one time cycle, each dynamic classifier Cd has its own set of initialization instances Rd. The class distribution Hd(l) of Rd is also recorded. The DCIR is then calculated as:
$$
DCIR(l) = \frac{\sum_{d=1}^{D} H_d[l]\, w_d}{\sum_{l=0}^{1} \sum_{d=1}^{D} H_d[l]\, w_d} \tag{4}
$$
where wd is the weight of the dynamic classifier Cd. The number of instances of each class (0: negative, 1: positive) in the initialization dataset Rd is counted as Hd[l], and DCIR(l) is the weighted sum for class l divided by the weighted sum over both classes. According to Equation (3), the weight of the dynamic classifier wd is time-decayed, and earlier generated dynamic classifiers have lower weights. Therefore, the DCIR is mainly determined by the latest class imbalance ratio of the data stream; a short sketch of this computation is given below. The sample-based online class imbalance learning procedure is shown in Algorithm 2.
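A minimal sketch of Equation (4), assuming that the class histogram H_d of each dynamic classifier's initialization instances is kept as a dictionary or Counter; the function name and data layout are illustrative assumptions rather than the paper's actual implementation.

```python
def compute_dcir(histograms, weights):
    """Damped class imbalance ratio (Equation (4)).
    histograms[d][l] is H_d[l], the number of class-l instances in R_d;
    weights[d] is the weight w_d of dynamic classifier C_d."""
    weighted = {0: 0.0, 1: 0.0}
    for hist, w in zip(histograms, weights):
        for label in (0, 1):
            weighted[label] += hist.get(label, 0) * w
    total = weighted[0] + weighted[1]
    if total == 0.0:
        return {0: 0.5, 1: 0.5}        # no dynamic classifiers yet: assume balance
    return {label: weighted[label] / total for label in (0, 1)}
```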
Algorithm 2. TrainOnInstance(x, α, DCIR[l])
Input:
1. x: instance to be processed
2. α: sampling parameter
3. DCIR[l]: damped class imbalance ratio (l = negative, positive)
Output: The updated classifiers
Process:
1. y = getreallabel(x)
2. if (y == negative) then
3. sampleweight = DCIR[positive]/DCIR[negative]
4. else
5. sampleweight = DCIR[negative]/DCIR[positive]
6. end if
7. For each classifier in the ensemble, set K ~ Poisson(sampleweight × α) and update the classifier with x K times
For a new instance x from the data stream, the algorithm obtains the real label y of the instance x. Then, the algorithm determines whether instance x belongs to the minority class. If x belongs to the majority class (i.e., x is negative) (line 2), the algorithm undersamples instance x: the sample weight is set to DCIR[positive] divided by DCIR[negative] (line 3). Otherwise, the algorithm oversamples instance x (line 5). The sampleweight adjusts the parameter λ of the Poisson distribution. Finally, the stable classifier, the candidate classifier and all the dynamic classifiers train on the instance x K times, where K is drawn from the Poisson(sampleweight × α) distribution. In general, the classifiers train more times on the positive (minority) instances and fewer times on the negative (majority) instances, which balances the training set under the online learning condition. In particular, this sampling method is based only on the latest instances, without using historical information, which avoids using instances of an old concept.
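The following Python sketch illustrates Algorithm 2 under the same assumptions as the earlier sketches: the `learn` method is an assumed incremental update interface, and the small epsilon guard is added only to keep the sketch safe when one class has not been observed yet.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def train_on_instance(stable, candidate, dynamic, x, y, alpha, dcir):
    """Sketch of Algorithm 2: adjust the Poisson parameter so that positive
    (minority) instances are learned more often and negative (majority)
    instances less often, according to the damped class imbalance ratio."""
    eps = 1e-12                                        # guard against division by zero
    if y == 0:                                         # negative / majority class
        sample_weight = dcir[1] / max(dcir[0], eps)    # < 1: undersampling
    else:                                              # positive / minority class
        sample_weight = dcir[0] / max(dcir[1], eps)    # > 1: oversampling
    for clf in [stable, candidate] + list(dynamic):
        k = rng.poisson(sample_weight * alpha)         # K ~ Poisson(sampleweight * alpha)
        for _ in range(k):
            clf.learn(x, y)                            # assumed incremental update method
```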

4. Experiments

In this section, a systematic experiment comparison of online learning algorithms for credit assessment is presented. The proposed SOLE is compared with other state-of-the-art online learning algorithms on two real-world client credit assessment cases by multiple evaluation metrics. Section 4.1 introduces the client credit assessment cases and experimental materials. Section 4.2 presents the evaluation metrics. The comparative algorithms and the contrast base classifier are shown in Section 4.3. The experimental results of SOLE compared with other online learning algorithms are presented in Section 4.4. Section 4.5 applies the credit scoring model and KS test to verify the practicality of SOLE in actual client credit assessment applications. Finally, Section 4.6 presents the resource consumption.

4.1. Research Cases and Materials

Two bank credit assessment cases, PAKDD [11] and GMSC [22], are used in the experimental comparison. GMSC is a classic credit assessment case that comes from the Kaggle competition. GMSC contains the data of bank clients, such as age, monthly income, number of credit cards, and dependents, which are used to determine whether a loan should be granted. GMSC contains 150,000 instances, 10 explanatory attributes and 1 predictive attribute. PAKDD is another commonly used benchmark data stream in online learning. It comes from the PAKDD 2009 competition and mainly tests the impact of market changes in several business years on the performance of the classifier. The class distribution of PAKDD is also imbalanced. PAKDD contains 50,000 instances, 28 explanatory attributes and 1 predictive attribute. Table 1 shows the main characteristics of the two experimental credit assessment cases.
Traditional machine learning evaluation methods divide a dataset into a training set and testing set. The model is trained on the static training set and then used to predict the testing set. Online learning methods obtain instances one by one, and the learning models can only learn one instance at each time step, which is a more challenging learning condition. In the comparative experiments, prequential evaluation is used: when a new instance arrives, first test it and then train on it. All the experiments are carried out on the Massive Online Analysis (MOA) [27], which is designed for online learning conditions. The experimental machine has an eight-core CPU (Intel i7-6700) and 32 GB RAM.
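The prequential (test-then-train) protocol can be summarized in a few lines of Python; this is an illustrative sketch of the evaluation logic, not the MOA implementation used in the experiments, and the `predict`/`learn` methods stand in for whatever interface the model actually exposes.

```python
def prequential_evaluation(stream, model):
    """Interleaved test-then-train: every instance is first used for testing,
    then immediately used for training."""
    correct = 0
    total = 0
    for x, y in stream:
        y_pred = model.predict(x)     # test on the new instance first
        correct += int(y_pred == y)
        total += 1
        model.learn(x, y)             # then train on it
    return correct / total if total else 0.0    # prequential accuracy
```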

4.2. Evaluation Metrics

Traditional overall accuracy and error rate are commonly used indicators to evaluate classification performance. However, when the learning datasets are class imbalanced, they can only reflect the overall performance. Therefore, other metrics have been adopted in the binary class imbalance learning condition [28]. Generally, the minority class is treated as the positive class, and the majority class is usually treated as negative. Table 2 shows the confusion matrix of the binary classification problem, which generates four numbers on the testing data.
Then, the recall and precision are defined as:
$$
recall = \frac{TP}{TP + FN}
$$
$$
precision = \frac{TP}{TP + FP}
$$
Recall is the proportion of actual positive instances that are correctly classified as positive. Precision is the proportion of instances predicted as positive that are actually positive. The ideal situation for online class imbalance learning is improving recall without decreasing precision; however, recall and precision usually conflict with each other. Thus, the F-score is used to show the trade-off between them:
$$
F\text{-}score = \frac{(1 + \beta^2) \cdot precision \cdot recall}{\beta^2 \cdot precision + recall}
$$
where β is the relative importance factor of recall and precision, which is usually set to 1. The G-mean is another indicator that can replace overall accuracy:
$$
G\text{-}mean = \sqrt{\frac{TP}{TP + FN} \cdot \frac{TN}{TN + FP}}
$$
The area under the curve (AUC) computes the area under the receiver operating characteristic (ROC) curve, which is also a good measurement for the class imbalance learning condition. However, AUC is only suitable for the offline learning situation. To adapt the traditional AUC to online learning conditions, Brzezinski et al. [29] proposed the prequential area under the curve (PAUC), whose value is continually updated in the online learning situation: whenever an instance is processed, the indicator changes. Overall, the experiments in this paper compare the performance of the proposed SOLE with other online learning algorithms on traditional accuracy, recall, F-score, G-mean and PAUC.
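For reference, the point metrics above can be computed directly from the confusion-matrix counts in Table 2; a minimal Python sketch (PAUC is omitted, since it requires the prequential protocol of [29]):

```python
import math

def imbalance_metrics(tp, fn, fp, tn, beta=1.0):
    """Recall, precision, F-score and G-mean from the binary confusion matrix."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f_score = ((1 + beta ** 2) * precision * recall
               / (beta ** 2 * precision + recall)) if (precision + recall) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    g_mean = math.sqrt(recall * specificity)
    return {"recall": recall, "precision": precision,
            "F-score": f_score, "G-mean": g_mean}
```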

4.3. Comparative Methods

In this section, the comparative methods in the experiments are introduced. The selected comparative methods are all state-of-the-art online learning methods for concept drift and class imbalance. As the ensemble method tends to be more accurate than the base classifier, all the applied algorithms are ensemble online learning methods. Overall, Leveraging Bagging (LB) [23], learn++NIE [26], OOB, UOB [24], Online Accuracy Update Ensemble (OAUE) [30] and Adaptive Random Forest (ARF) [22] are compared. To control the performance of the base classifier for all algorithms, all the comparative algorithms use the same base classifiers except ARF. The Hoeffding tree and the naive Bayes are used as the base classifier for comparison. The default parameter settings of the base classifier are shown in Table 3.
The Hoeffding tree is an incremental, anytime decision tree induction algorithm; it is very efficient and suitable for classifying large amounts of data. The naive Bayes classifier is a relatively simple probability classifier based on Bayes’ theorem and has a low computational resource cost. Both the Hoeffding tree and the naive Bayes are widely used as base classifiers in studies of the online learning situation. In particular, ARF uses the ARFHoeffding tree, which specifically modifies the traditional Hoeffding tree, as its base classifier, and OAUE can only apply the Hoeffding tree as a base classifier.

4.4. Experimental Results

4.4.1. Comparison of Online Learning Algorithms

In this section, the experimental results of all the comparative methods are presented using different metrics. All the comparative experiments are carried out 10 times, and the average is taken as the result. All the ensemble methods use 10 base classifiers. The default parameters of SOLE are D = 10, I = 500, and α = 1, where D is the number of dynamic classifiers, I is the size of the time cycle, and α is the sampling parameter. Table 4 shows the results using the Hoeffding tree and the ARFHoeffding tree as the base classifiers, including SOLE, LB, learn++NIE, OOB, UOB, OAUE and ARF. The bold numbers indicate the optimal results.
For the GMSC (imbalance ratio: 1/14), LB and ARF achieve the same best accuracy, which is 0.2% higher than the learn++NIE accuracy. However, accuracy reflects only the overall classification performance. For recall, the three sample-based methods, SOLE, OOB and UOB, perform better than the other online learning methods, which proves the effectiveness of the sampling technique in dealing with class imbalance. SOLE obtains the highest recall, which is 7.9% higher than that of UOB. Comparing the accuracy and recall, it can be concluded that even when the overall classification performance is good, the actual classification performance on the minority class may be poor. In addition, SOLE achieves first place in all three of the metrics F-score, G-mean, and PAUC, and SOLE, OOB and UOB perform better than the other methods on these three metrics. Furthermore, learn++NIE and OAUE achieve recall values of only 2.2% and 0.0%, which shows that these algorithms predict nearly all instances as the majority class. Thus, learn++NIE and OAUE lose their discriminative capacity and cannot be used in practical applications.
For the PAKDD (imbalance ratio: 1/4), learn++NIE and OAUE achieve first place in accuracy, which is 0.1% higher than the LB accuracy. The accuracy of SOLE is lower than that of LB, learn++NIE, OAUE and ARF, but higher than that of the other sampling methods, OOB and UOB. For the recall metric, LB, learn++NIE, OAUE and ARF obtain very low values, meaning that these algorithms regard almost all instances as the majority class. UOB has the best recall value, 68.1%, which is 12.4% higher than that of SOLE. However, the overall accuracy of UOB is only 53.5%, which means that UOB misclassifies almost half of the instances. Therefore, it can be concluded that UOB obtains a better classification performance for the minority class (i.e., recall) at the expense of misclassifying many majority class instances. In practical client credit assessment applications, treating too many good clients as risk clients will adversely affect the business. For the F-score, G-mean, and PAUC metrics, SOLE achieves first place and is better than the other algorithms.
For the experiments using the naive Bayes as the base classifier, only SOLE, LB, learn++NIE, OOB, and UOB are included in the comparison. Table 5 shows the average results using the naive Bayes as the base classifier. For the GMSC (imbalance ratio: 1/14), LB achieves the best classification accuracy, which is 0.2% higher than OOB and UOB. SOLE obtains the highest value on the recall metric, which is 0.4% higher than learn++NIE. LB, OOB and UOB achieve only low recall values, which means they perform poorly on the minority class. In addition, SOLE performs best on the F-score, G-mean, and PAUC metrics and is better than the other algorithms. For the PAKDD (imbalance ratio: 1/4), LB also achieves the best performance on the accuracy metric but obtains the lowest recall value. OOB achieves the best recall value of 79.7%, and UOB achieves second place with 79.4%. However, their classification accuracies are lower than 50%, reflecting the fact that they acquire a better performance on the minority class at the expense of poor performance on the majority class. Moreover, SOLE achieves first place for three metrics: F-score, G-mean, and PAUC. SOLE performs better because it combines a multiple time scale ensemble classifier and a sample-based learning method to address the joint problem of concept drift and class imbalance, while the other state-of-the-art methods focus on solving only one of these problems.

4.4.2. Base Classifier Preference

Comparing the performance of the algorithms with different base classifiers, it can be concluded that the algorithms have preferences for the base classifier. This preference also depends on the dataset. Table 6 shows the bias of the base classifier using different colors. The values in the table are the results using the Hoeffding tree as the base classifier minus the results using the naive Bayes as the base classifier. A positive value means that an algorithm performs better using the Hoeffding tree as the base classifier; a negative value means that an algorithm performs better using the naive Bayes as the base classifier. Orange cells indicate that the Hoeffding tree performs better, and green cells indicate that the naive Bayes performs better. The depth of the color has three levels according to the size of the value (0–10%, 10–20%, >20%).
First, the performance for different credit assessment cases is compared. It can be concluded that the number of green cells for PAKDD is larger than that for GMSC. This phenomenon may be related to the class imbalance ratio or the characteristics of the dataset. Second, the preferences of different algorithms are compared. Learn++NIE performs better using the naive Bayes as the base classifier. The performance gap between using the two base classifiers is apparent. LB performs better using the Hoeffding tree for GMSC, while it achieves a better performance using the naive Bayes for PAKDD. SOLE, OOB and UOB achieve better results using the Hoeffding tree. In general, SOLE has the best adaptability, as the performance gaps of SOLE between different base classifiers are minimal.

4.4.3. PAUC–Time Curves

In this section, to intuitively show the classification performance of different algorithms, the PAUC–time curves are plotted. Figure 2 shows the PAUC–time curves for GMSC using different base classifiers. SOLE achieves the highest PAUC using both the Hoeffding tree and the naive Bayes as the base classifier. For the PAUC–time curves using the Hoeffding tree as the base classifier, SOLE achieves a low value at the beginning of the learning process. The PAUC value of SOLE continues growing and obtains first place at the end of the process. For the PAUC–time curves using the naive Bayes as the base classifier, the UOB achieves first place at the beginning. However, the PAUC of UOB continues to decrease, and the PAUC of SOLE continues to increase. Thus, SOLE performs better than UOB after approximately the 15 k time step.
Figure 3 shows the PAUC–time curves for PAKDD using different base classifiers. SOLE achieves the highest PAUC using both the Hoeffding tree and the naive Bayes as the base classifier. For the PAUC–time curves using the Hoeffding tree as the base classifier, SOLE achieves only fourth place at the beginning. However, the SOLE PAUC value continues to increase, while the PAUC of the other algorithms remains stable. Therefore, SOLE achieves the highest PAUC at the end. For the PAUC–time curves using the naive Bayes as the base classifier, SOLE is better than the other algorithms throughout the whole learning procedure.

4.4.4. Parameter Sensitivity of SOLE

To explore the impact of the parameter settings on the classification performance of SOLE, a parametric comparison experiment is carried out in this section. The experiments compare the main parameters: the number of dynamic classifiers D, the time cycle size I, and the sampling parameter α, whose default settings are D = 10, I = 500, and α = 1. For each group of parameter settings, the comparative experiments are repeated 10 times on both cases and the averages are calculated. Table 7 shows the average results with different parameter settings using the Hoeffding tree as the base classifier. Only the results using the Hoeffding tree as the base classifier are presented, because the results using the naive Bayes show the same phenomena.
From Table 7, all the parameters have an impact on the classification performance of SOLE. First, increasing the number of dynamic classifiers can help improve the performance to some extent. Among all the parameter settings, D = 15 is the best setting for GMSC, and D = 20 is the best setting for PAKDD. Second, SOLE performs better with time cycles of 500 and 750 for PAKDD, but the time cycle shows no apparent influence on GMSC. This shows that the best value of the time cycle is influenced by the features of the data streams. Finally, the classification accuracy improves when the algorithm uses a larger sampling parameter, whereas the recall shows the opposite trend. Intuitively, a larger sampling parameter increases the classification performance on the majority class but decreases the classification performance on the minority class.

4.5. Feasibility Analysis in Practical Application

4.5.1. Credit Scoring Model

The traditional credit assessment models provide only the classification probability that a client is a good client or a risk client. However, such probabilities are not easy to understand and use in practical applications. To present the credit condition of each client, a credit score is calculated based on the predicted probability provided by the proposed SOLE. The credit score is calculated as [31]:
$$
score = \ln\!\left(\frac{P}{1 - P}\right) \cdot factor + offset
$$
where P is the probability that a client is a good client, 1−P is the probability that a client is a risk client, factor is the linear transform coefficient, which is usually a logarithmic value, and offset is the adjustment constant that keeps the credit score in the target interval.
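A direct Python translation of the score formula above; the default factor (ln 60) and offset (600) are the values adopted in Section 4.5.2, and the function name is illustrative.

```python
import math

def credit_score(p_good, factor=math.log(60), offset=600.0):
    """Map the predicted probability of being a good client to a credit score."""
    odds = p_good / (1.0 - p_good)        # odds of being a good client
    return math.log(odds) * factor + offset

# Example: a client predicted to be good with probability 0.9
# print(round(credit_score(0.9), 1))
```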

4.5.2. Bad Debts and the KS Test

To verify the feasibility of SOLE in practical applications, the credit scores of the clients for GMSC are calculated. The learning model is SOLE using the naive Bayes as the base classifier. factor is set to ln60, and offset is set to 600. The credit score results are sorted from high to low and then divided into ten intervals. Each score interval has 15 k clients. Table 8 shows the distribution of credit scores.
As shown in Table 8, risk clients are mainly distributed in the score intervals under the credit score of 604.6. As the credit score increases, the bad debt ratio decreases rapidly. Therefore, SOLE effectively reflects the real credit condition of the clients. The credit score can be used as the basis to assess the credit condition of the clients. In the actual financial risk control business, the KS test is usually used to evaluate the performance of a model. The KS test uses the difference between the accumulative percentage of good clients and the accumulative percentage of risk clients to show the distinguishing ability of a model. The larger the value of the KS test, the better the distinguishing ability. Table 9 shows the relationship between the KS test and distinguishing ability [31].
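The KS value reported below can be computed as the maximum gap between the two cumulative distributions swept over the credit score; the following is a minimal Python sketch, assuming that the credit scores and the true good/risk labels of the clients are available as arrays.

```python
import numpy as np

def ks_statistic(scores, is_good):
    """KS test value: the maximum gap between the cumulative percentage of good
    clients and the cumulative percentage of risk clients over score thresholds."""
    scores = np.asarray(scores, dtype=float)
    is_good = np.asarray(is_good, dtype=bool)
    order = np.argsort(scores)                    # sort clients by credit score
    n_good = max(int(is_good.sum()), 1)           # guard against empty classes
    n_risk = max(int((~is_good).sum()), 1)
    good_cum = np.cumsum(is_good[order]) / n_good
    risk_cum = np.cumsum(~is_good[order]) / n_risk
    return float(np.max(np.abs(good_cum - risk_cum)))
```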
Figure 4 shows the KS curves for the GMSC of SOLE using the naive Bayes as the base classifier. The KS value is 0.47, which represents a high distinguishing ability. Thus, it proves that SOLE can be used in practical credit assessment.

4.6. Resource Consumption

For online learning models, resource consumption is an important criterion for verifying whether the algorithms are capable of real-world applications. Table 10 shows the resource consumption of the algorithms for the two cases. In general, SOLE consumes the least time and memory for both GMSC and PAKDD using the naive Bayes as the base classifier. It also consumes the least time and memory for GMSC using the Hoeffding tree as the base classifier. For the experiments on PAKDD using the Hoeffding tree as the base classifier, learn++NIE costs the least time and OAUE costs the least memory, while SOLE also performs well in terms of resource consumption.
First, the resource consumption results using the Hoeffding tree as the base classifier are compared. LB and ARF consume significantly more resources than the other methods, meaning that LB and ARF will introduce more delays in real-time practical applications. OOB costs more time and memory than UOB because of the different sampling strategies. As the class imbalance ratio increases, OOB requires more resources to oversample the minority class instances, whereas UOB costs fewer resources by undersampling the majority class instances. Therefore, the resource consumption of OOB for GMSC is higher than that for PAKDD.
Second, the experimental results using the naive Bayes as the base classifier are compared. Learn++NIE costs significantly more time and memory. As the naive Bayes is a simpler base classifier than the Hoeffding tree, all the comparative methods except learn++NIE consume fewer resources using the naive Bayes as the base classifier. Therefore, it can be concluded that learn++NIE has a preference for the naive Bayes as the base classifier: with the naive Bayes, learn++NIE consumes more resources but achieves better performance. In summary, the datasets, the algorithm characteristics and the base classifiers all affect the resource consumption.

5. Conclusions

Machine learning algorithms have been used in client credit assessment applications. However, there has been no detailed research focusing on solving the joint issues of concept drift and class imbalance in client credit assessment. To handle the potential concept drift and class imbalance in the client credit assessment data stream, the novel SOLE method is proposed for client credit assessment; it is also the first application of online learning methods to client credit assessment. A novel multiple time scale ensemble classifier, containing a stable classifier and multiple dynamic classifiers, addresses different types of concept drift, and a novel sample-based online learning method that both oversamples the minority instances and undersamples the majority instances balances the class distribution. The experiments provide a comprehensive comparison between the proposed SOLE and other state-of-the-art online learning algorithms. Two real-world credit assessment cases, GMSC and PAKDD, were used in the comparison. The results showed that SOLE performs better than other state-of-the-art online learning algorithms and consumes fewer computing resources. To verify that SOLE is capable of practical applications, the credit scoring model and the KS test were also applied to translate the prediction results into credit scores and to quantify the distinguishing ability of the model. Compared with traditional machine learning and modeling methods, SOLE can address the joint problems of concept drift and class imbalance and shows rapid adaptability, which benefits from the characteristics of online learning.

Author Contributions

H.Z. conceived the algorithm; H.Z. and Q.L. designed the experiments; H.Z. performed the experiments and wrote the paper. H.Z. and R.W. edited the manuscript. All authors reviewed and approved the final manuscript.

Funding

This research was funded by the China Advance Research Fund (No. 9140C830304150C83352).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Munkhdalai, L.; Munkhdalai, T.; Namsrai, O.-E.; Lee, J.Y.; Ryu, K.H. An empirical comparison of machine-learning methods on bank client credit assessments. Sustainability 2019, 11, 699. [Google Scholar] [CrossRef]
  2. Brown, I.; Mues, C. An experimental comparison of classification algorithms for imbalanced credit scoring data sets. Expert Syst. Appl. 2012, 39, 3446–3453. [Google Scholar] [CrossRef] [Green Version]
  3. Arminger, G.; Enache, D.; Bonne, T. Analyzing credit risk data: A comparison of logistic discrimination, classification tree analysis, and feedforward networks. Soc. Sci. Electron. Publ. 1997, 12, 293–310. [Google Scholar]
  4. Lessmann, S.; Baesens, B.; Seow, H.V.; Thomas, L.C. Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research. Eur. J. Oper. Res. 2015, 247, 124–136. [Google Scholar] [CrossRef] [Green Version]
  5. Kultur, Y.; Caglayan, M.U. Hybrid approaches for detecting credit card fraud. Expert Syst. 2017, 34, e12191. [Google Scholar] [CrossRef]
  6. Khemakhem, S.; Said, F.B.; Boujelbene, Y. Credit risk assessment for unbalanced datasets based on data mining, artificial neural network and support vector machines. J. Mod. Man. 2018, 13, 932–951. [Google Scholar] [CrossRef]
  7. Li, C.; Peng, H. Credit Risk Assessment for Rural Credit Cooperatives Based on Improved Neural Network. In Proceedings of the International Conference on Smart Grid & Electrical Automation, Changsha, China, 27–28 May 2017. [Google Scholar]
  8. Li, Z.; Ye, T.; Ke, L.; Zhou, F.; Wei, Y. Reject inference in credit scoring using semi-supervised support vector machines. Expert Syst. Appl. 2017, 74, 105–114. [Google Scholar] [CrossRef]
  9. Nanni, L.; Lumini, A. An experimental comparison of ensemble of classifiers for bankruptcy prediction and credit scoring. Expert Syst. Appl. 2009, 36, 3028–3033. [Google Scholar] [CrossRef]
  10. Huang, C.L.; Chen, M.C.; Wang, C.J. Credit scoring with a data mining approach based on support vector machines. Expert Syst. Appl. 2007, 33, 847–856. [Google Scholar] [CrossRef]
  11. Linhart, C.; Harari, G.; Abramovich, S.; Buchris, A. Pakdd Data Mining Competition 2009: New Ways of Using Known Methods. In Proceedings of the Pacific-Asia International Conference on Knowledge Discovery & Data Mining: New Frontiers in Applied Data Mining, Bangkok, Thailand, 27–30 April 2009. [Google Scholar]
  12. Wang, S.; Minku, L.L.; Yao, X. Online class imbalance learning and its applications in fault detection. Int. J. Comput. Intell. Appl. 2013, 12, 2340–2345. [Google Scholar] [CrossRef]
  13. Frances-Villora, J.V.; Rosado-Muñoz, A.; Bataller-Mompean, M.; Barrios-Aviles, J.; Guerrero-Martinez, J.F. Moving learning machine towards fast real-time applications: A high-speed fpga-based implementation of the os-elm training algorithm. Electronics 2018, 7, 308. [Google Scholar] [CrossRef]
  14. Sousa, M.R.; Gama, J.; Brandão, E. A new dynamic modeling framework for credit risk assessment. Expert Syst. Appl. 2016, 45, 341–351. [Google Scholar] [CrossRef] [Green Version]
  15. Ditzler, G.; Roveri, M.; Alippi, C.; Polikar, R. Learning in nonstationary environments: A survey. IEEE Comput. Intell. Mag. 2015, 10, 12–25. [Google Scholar] [CrossRef]
  16. Lu, J.; Liu, A.; Dong, F.; Gu, F.; Gama, J.; Zhang, G. Learning under concept drift: A review. IEEE Trans. Knowl. Data Eng. 2018, 1. [Google Scholar] [CrossRef]
  17. Zhang, L.; Wang, W.X. A re-sampling method for class Imbalance Learning with credit data. In Proceedings of the International Conference of Information Technology, Las Vegas, NV, USA, 11–13 April 2011. [Google Scholar]
  18. Marqués, A.I.; García, V.; Sánchez, J.S. On the suitability of resampling techniques for the class imbalance problem in credit scoring. J. Oper. Res. Soc. 2013, 64, 1060–1070. [Google Scholar] [CrossRef] [Green Version]
  19. Zieba, M.; Härdle, W.K. Beta-boosted ensemble for big credit scoring data. In Handbook of Big Data Analytics; Springer: Cham, Switzerland, 2018; pp. 523–538. [Google Scholar]
  20. Wang, S.; Minku, L.L.; Yao, X. A systematic study of online class imbalance learning with concept drift. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1–20. [Google Scholar] [CrossRef] [PubMed]
  21. Gama, J.; Medas, P.; Castillo, G.; Rodrigues, P. Learning with Drift detection. In Proceedings of the Brazilian Symposium on Advances in Artificial Intelligence-Sbia, Sao Luis, Brazil, 29 September–1 October 2004. [Google Scholar]
  22. Gomes, H.M.; Bifet, A.; Read, J.; Barddal, J.P.; Enembreck, F.; Pfharinger, B.; Holmes, G.; Abdessalem, T. Adaptive random forests for evolving data stream classification. Mach. Learn. 2017, 106, 1469–1495. [Google Scholar] [CrossRef]
  23. Bifet, A.; Holmes, G.; Pfahringer, B. Leveraging bagging for evolving data streams. In Proceedings of the European Conference on Machine Learning & Knowledge Discovery in Databases, Barcelona, Spain, 20–24 September 2010. [Google Scholar]
  24. Wang, S.; Minku, L.L.; Yao, X. Resampling-based ensemble methods for online class imbalance learning. IEEE Trans. Knowl. Data Eng. 2015, 27, 1356–1368. [Google Scholar] [CrossRef]
  25. Oza, N.C. Online bagging and boosting. In Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 10–12 October 2005; Volume 2343, pp. 2340–2345. [Google Scholar]
  26. Ryan, E.; Robi, P. Incremental learning of concept drift in nonstationary environments. IEEE Trans. Neural Netw. Learn. Syst. 2011, 22, 1517–1531. [Google Scholar]
  27. Bifet, A.; Holmes, G.; Kirkby, R.; Pfahringer, B. Moa: Massive online analysis. J. Mach. Learn. Res. 2010, 11, 1601–1604. [Google Scholar]
  28. Luque, A.; Carrasco, A.; Martín, A.; Lama, J.R. Exploring symmetry of binary classification performance metrics. Symmetry 2019, 11, 47. [Google Scholar] [CrossRef]
  29. Brzezinski, D.; Stefanowski, J. Prequential auc: Properties of the area under the roc curve for data streams with concept drift. Knowl. Inf. Syst. 2017, 52, 531–562. [Google Scholar] [CrossRef]
  30. Brzezinski, D.; Stefanowski, J. Combining block-based and online methods in learning ensembles from concept drifting data streams. Inf. Sci. 2014, 265, 50–67. [Google Scholar] [CrossRef]
  31. Shan, L.; Mao, X.L. Modeling and Application of Consumer Credit Score in Internet Finance Times, 1st ed.; Electronic Industry Press: Beijing, China, 2015. [Google Scholar]
Figure 1. Structure of the multiple time scale ensemble classifier.
Figure 2. PAUC–time curves for GMSC using different base classifiers.
Figure 3. PAUC–time curves for PAKDD using different base classifiers.
Figure 4. KS curves on GMSC.
Table 1. Characteristics of GMSC (Gateway Mobile Switching Centre) and PAKDD (Pacific-Asia Conference on Knowledge Discovery and Data Mining).

Case | No. Instances | No. Attributes | No. Positive | No. Negative | Imbalance Ratio
GMSC | 150 k | 10 | 10026 | 139974 | 1/14
PAKDD | 50 k | 28 | 9874 | 40126 | 1/4

Table 2. Confusion matrix of the binary problem.

 | Predicted Positive | Predicted Negative
Actual Positive | True Positive (TP) | False Negative (FN)
Actual Negative | False Positive (FP) | True Negative (TN)

Table 3. Default parameter settings of the Hoeffding tree and the naive Bayes.

Base Classifier | Default Parameter Setting | Algorithms
Hoeffding tree | memory estimate period: 2,000,000; grace period: 100; split criterion: InfoGainSplit; split confidence: 0; tie threshold: 0.05; leaf prediction: NBAdaptive; nbThreshold: 0 | SOLE, LB, Learn++NIE, OOB, UOB, OAUE
Naive Bayes | The naive Bayes classifier is a relatively simple probability classifier based on Bayes’ theorem; “naive” refers to the assumption that all features in the model are highly independent, so the correlation between features is not considered. | SOLE, LB, Learn++NIE, OOB, UOB
ARFHoeffding tree | subspace size: 2; memory estimate period: 1,000,000; grace period: 100; split criterion: InfoGainSplit; split confidence: 0; tie threshold: 0.05; leaf prediction: NBAdaptive; nbThreshold: 0 | ARF

Table 4. Average results using the Hoeffding tree as the base classifier (%).

Cases | Algorithms | Accuracy | Recall | F-score | G-mean | PAUC
GMSC (IR: 1/14) | SOLE | 89.7 | 68.5 | 47.0 | 79.0 | 88.7
GMSC (IR: 1/14) | LB | 93.5 | 10.2 | 17.4 | 31.9 | 84.1
GMSC (IR: 1/14) | Learn++NIE | 93.2 | 2.2 | 0.0 | 15.0 | 50.0
GMSC (IR: 1/14) | OOB | 89.7 | 55.5 | 41.9 | 71.5 | 84.9
GMSC (IR: 1/14) | UOB | 85.7 | 60.6 | 36.1 | 72.8 | 79.1
GMSC (IR: 1/14) | OAUE | 93.3 | 0.0 | - | 0.0 | 50.5
GMSC (IR: 1/14) | ARF | 93.5 | 9.1 | 15.6 | 30.0 | 80.4
PAKDD (IR: 1/4) | SOLE | 75.8 | 55.7 | 47.9 | 67.1 | 69.1
PAKDD (IR: 1/4) | LB | 80.2 | 0.6 | 1.2 | 7.7 | 63.8
PAKDD (IR: 1/4) | Learn++NIE | 80.3 | 0.1 | 0.0 | 0.0 | 50.1
PAKDD (IR: 1/4) | OOB | 70.7 | 38.0 | 33.9 | 54.7 | 63.5
PAKDD (IR: 1/4) | UOB | 53.5 | 68.1 | 36.6 | 58.3 | 61.3
PAKDD (IR: 1/4) | OAUE | 80.3 | 0.1 | 0.0 | 0.0 | 50.3
PAKDD (IR: 1/4) | ARF | 80.1 | 1.7 | 3.2 | 12.9 | 63.2

Table 5. The average results using the naive Bayes as the base classifier (%).

Cases | Algorithms | Accuracy | Recall | F-score | G-mean | PAUC
GMSC (IR: 1/14) | SOLE | 87.4 | 52.3 | 36.1 | 68.5 | 77.8
GMSC (IR: 1/14) | LB | 93.2 | 3.1 | 5.7 | 17.6 | 67.6
GMSC (IR: 1/14) | Learn++NIE | 74.5 | 51.9 | 21.4 | 62.9 | 67.8
GMSC (IR: 1/14) | OOB | 93.0 | 4.4 | 7.8 | 21.0 | 68.2
GMSC (IR: 1/14) | UOB | 93.0 | 10.3 | 16.4 | 31.9 | 71.7
PAKDD (IR: 1/4) | SOLE | 52.3 | 75.6 | 38.8 | 59.3 | 69.7
PAKDD (IR: 1/4) | LB | 64.9 | 40.8 | 31.4 | 53.8 | 58.3
PAKDD (IR: 1/4) | Learn++NIE | 49.6 | 64.1 | 33.4 | 54.3 | 56.1
PAKDD (IR: 1/4) | OOB | 43.8 | 79.7 | 35.9 | 52.8 | 60.2
PAKDD (IR: 1/4) | UOB | 45.2 | 79.4 | 36.4 | 54.1 | 61.2

Table 6. The performance bias of the base classifier 1.

Cases | Algorithms | Accuracy | Recall | F-score | G-mean | PAUC
GMSC (IR: 1/14) | SOLE | 2.3 | 16.2 | 10.9 | 10.5 | 10.9
GMSC (IR: 1/14) | LB | 0.3 | 7.1 | 11.7 | 14.3 | 16.5
GMSC (IR: 1/14) | Learn++NIE | 18.7 | −49.7 | −21.4 | −47.9 | −17.8
GMSC (IR: 1/14) | OOB | −3.3 | 51.1 | 34.1 | 50.5 | 16.7
GMSC (IR: 1/14) | UOB | −7.3 | 50.3 | 19.7 | 40.9 | 7.4
PAKDD (IR: 1/4) | SOLE | 23.5 | −19.9 | 9.1 | 7.8 | 8.9
PAKDD (IR: 1/4) | LB | 15.3 | −40.2 | −30.2 | −46.1 | 5.5
PAKDD (IR: 1/4) | Learn++NIE | 30.7 | −64.0 | −33.4 | −54.3 | −6.0
PAKDD (IR: 1/4) | OOB | 26.9 | −41.7 | −2.0 | 1.9 | 3.3
PAKDD (IR: 1/4) | UOB | 8.3 | −11.3 | 0.2 | 4.2 | 0.1
1 Orange: Algorithms perform better using the Hoeffding tree. Green: Algorithms perform better using the naive Bayes.
Table 7. Average results (%) with different parameter settings.

Cases | Parameters | Accuracy | Recall | F-score | G-mean | PAUC
GMSC (IR: 1/14) | D = 1 | 84.7 | 66.4 | 36.7 | 75.6 | 82.4
GMSC (IR: 1/14) | D = 5 | 87.8 | 67.6 | 42.5 | 77.7 | 86.9
GMSC (IR: 1/14) | D = 10 | 89.7 | 68.5 | 47.0 | 79.0 | 88.7
GMSC (IR: 1/14) | D = 15 | 90.4 | 68.9 | 48.9 | 79.6 | 89.9
GMSC (IR: 1/14) | D = 20 | 90.1 | 68.8 | 48.1 | 79.4 | 89.4
GMSC (IR: 1/14) | I = 100 | 89.5 | 68.4 | 46.5 | 78.9 | 88.4
GMSC (IR: 1/14) | I = 250 | 89.7 | 68.4 | 47.0 | 79.0 | 88.7
GMSC (IR: 1/14) | I = 500 | 89.7 | 68.5 | 47.0 | 79.0 | 88.7
GMSC (IR: 1/14) | I = 750 | 89.7 | 68.5 | 47.0 | 79.0 | 88.7
GMSC (IR: 1/14) | I = 1000 | 89.8 | 68.5 | 47.2 | 79.1 | 88.9
GMSC (IR: 1/14) | α = 1 | 89.7 | 68.5 | 47.0 | 79.0 | 88.7
GMSC (IR: 1/14) | α = 5 | 91.3 | 66.4 | 50.4 | 78.6 | 87.7
GMSC (IR: 1/14) | α = 10 | 91.7 | 65.3 | 51.2 | 78.2 | 86.5
PAKDD (IR: 1/4) | D = 1 | 71.2 | 53.9 | 42.8 | 63.8 | 65.7
PAKDD (IR: 1/4) | D = 5 | 73.3 | 54.1 | 44.8 | 65.0 | 67.8
PAKDD (IR: 1/4) | D = 10 | 75.8 | 55.7 | 47.9 | 67.1 | 69.1
PAKDD (IR: 1/4) | D = 15 | 76.3 | 55.9 | 48.5 | 67.5 | 69.4
PAKDD (IR: 1/4) | D = 20 | 76.5 | 55.9 | 48.8 | 67.6 | 69.4
PAKDD (IR: 1/4) | I = 100 | 73.1 | 51.9 | 43.6 | 63.8 | 65.5
PAKDD (IR: 1/4) | I = 250 | 74.7 | 53.4 | 45.8 | 65.4 | 67.4
PAKDD (IR: 1/4) | I = 500 | 75.8 | 55.7 | 47.9 | 67.1 | 69.1
PAKDD (IR: 1/4) | I = 750 | 75.9 | 55.7 | 48.0 | 67.1 | 69.2
PAKDD (IR: 1/4) | I = 1000 | 75.2 | 54.6 | 46.8 | 66.2 | 68.4
PAKDD (IR: 1/4) | α = 1 | 75.8 | 55.7 | 47.9 | 67.1 | 69.1
PAKDD (IR: 1/4) | α = 5 | 76.1 | 52.3 | 46.7 | 65.5 | 66.8
PAKDD (IR: 1/4) | α = 10 | 78.4 | 49.6 | 47.9 | 65.2 | 67.6

Table 8. Credit score distribution for GMSC of SOLE using the naive Bayes.

Score Interval | Num. “Good” | Num. “Risk” | “Good” Percentage | “Risk” Percentage | Bad Debts
0–598.9 | 10194 | 4806 | 0.07 | 0.48 | 0.32
598.9–601.9 | 13579 | 1421 | 0.17 | 0.62 | 0.09
601.9–603.5 | 14181 | 819 | 0.27 | 0.70 | 0.05
603.5–604.6 | 14339 | 661 | 0.37 | 0.77 | 0.04
604.6–605.7 | 14464 | 536 | 0.48 | 0.82 | 0.04
605.7–606.8 | 14552 | 448 | 0.58 | 0.87 | 0.03
606.8–608.3 | 14577 | 423 | 0.69 | 0.91 | 0.03
608.3–610.3 | 14674 | 326 | 0.79 | 0.94 | 0.02
610.3–615.1 | 14686 | 314 | 0.89 | 0.97 | 0.02
615.1–734.8 | 14730 | 270 | 1.00 | 1.00 | 0.02

Table 9. The relationship between the Kolmogorov–Smirnov (KS) test and distinguishing ability.

KS Test Value | Distinguishing Ability
<0.20 | none
0.20–0.30 | low
0.30–0.40 | medium
0.40–0.50 | high
0.50–0.90 | extremely high
>0.90 | anomaly

Table 10. Resource consumption of different algorithms.

Hoeffding tree as the base classifier:
Algorithms | GMSC (IR: 1/14) Time | GMSC RAM | PAKDD (IR: 1/4) Time | PAKDD RAM
SOLE | 4.2 | 1.6 × 10^−7 | 3.1 | 2.4 × 10^−7
LB | 48.3 | 1.2 × 10^−4 | 21.2 | 5.6 × 10^−5
Learn++NIE | 21.9 | 3.1 × 10^−5 | 2.2 | 2.5 × 10^−7
OOB | 15.1 | 1.1 × 10^−5 | 6.5 | 3.9 × 10^−6
UOB | 4.6 | 4.0 × 10^−7 | 3.4 | 5.8 × 10^−7
OAUE | 4.5 | 2.0 × 10^−7 | 2.4 | 1.5 × 10^−7
ARF | 69.6 | 3.5 × 10^−4 | 28.0 | 1.5 × 10^−4

Naive Bayes as the base classifier:
Algorithms | GMSC (IR: 1/14) Time | GMSC RAM | PAKDD (IR: 1/4) Time | PAKDD RAM
SOLE | 3.5 | 3.7 × 10^−8 | 1.8 | 3.9 × 10^−8
LB | 5.8 | 9.6 × 10^−8 | 2.6 | 6.7 × 10^−8
Learn++NIE | 68.1 | 9.2 × 10^−5 | 8.7 | 1.3 × 10^−6
OOB | 4.7 | 4.8 × 10^−8 | 2.3 | 4.9 × 10^−8
UOB | 4.4 | 4.5 × 10^−8 | 2.2 | 4.7 × 10^−8
OAUE | - | - | - | -
ARF | - | - | - | -
