Article

Towards Repayment Prediction in Peer-to-Peer Social Lending Using Deep Learning

Department of Computer Science, Yonsei University, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(11), 1041; https://doi.org/10.3390/math7111041
Submission received: 10 September 2019 / Revised: 20 October 2019 / Accepted: 23 October 2019 / Published: 3 November 2019
(This article belongs to the Special Issue Recent Advances in Deep Learning)

Abstract
Peer-to-Peer (P2P) lending transactions take place when lenders choose a borrower and lend money. Predicting whether a borrower can repay is important because the lenders bear the credit risk when the borrower defaults, yet it is difficult to design feature extractors for the very complex information about borrowers and loan products. In this paper, we present a deep convolutional neural network (CNN) architecture for predicting repayment in P2P social lending, which extracts features automatically and improves performance. CNN is a deep learning model for classifying complex data that automatically extracts discriminative features through convolution operations on the lending data. We classify the borrower’s loan status by capturing robust features and learning the patterns. Experimental results with 5-fold cross-validation show that our method automatically extracts complex features and is effective for repayment prediction on Lending Club data. Compared with other machine learning methods, the standard CNN achieves the highest accuracy of 75.86%. Exploiting CNN variants such as Inception, ResNet, and Inception-ResNet yields a state-of-the-art performance of 77.78%. We also demonstrate the quality of the features extracted by our model by projecting the samples into the feature space.

1. Introduction

Peer-to-Peer (P2P) lending is a FinTech service that directly matches lenders with borrowers through online platforms, without the intermediation of financial institutions such as banks [1]. P2P lending has grown rapidly, attracting many users and generating huge volumes of transaction data. For example, the total loan issuance of the Lending Club reached about $31 billion in the second half of 2017.
When a borrower applies on the platform, many lenders select the borrower and lend money. If borrowers do not pay, or only partially pay, within the repayment period, the loss falls on the lenders, who may suffer due to borrower default [2]. To reduce the financial risk of the lenders, it is important to predict defaults and assess the creditworthiness of the borrowers [3].
Since P2P social lending is processed online, large and varied data are generated, and P2P lending platforms provide much information on borrowers’ characteristics to address problems such as information asymmetry and transparency [4,5]. The availability and prevalence of transaction data on P2P lending have attracted many researchers’ attention. Recent studies mainly address issues such as assessing credit risk, optimizing portfolios and predicting defaults.
These studies extract features from the information on borrowers and loan products in the transaction data and solve the problems by applying machine learning methods to the extracted features [6]. Most of them design feature extractors based on statistical methods [7] and produce hand-crafted feature representations [8].
However, such studies potentially face problems of scale and variety. Conventional machine learning methods are difficult to train and test on large data [9], and high-performing tree-based classification methods require many features [10]. Moreover, statistical and hand-crafted methods are limited in extracting features that capture the relationships between the complex variables inherent in such varied data.
In the case of the Lending Club in the United States, it provides more than one million records, consisting of 42,535 in 2007–2011, 188,181 in 2012–2013, 235,629 in 2014, 421,095 in 2015 and 434,407 in 2016 (as of March 2017, https://www.lendingclub.com). The amount of data in P2P lending keeps increasing, and the data structure is very large and complex. Table 1 shows the statistics of the Lending Club data, and Table 2 describes some of its attributes.
Figure 1 shows some of the correlation plots for the loan status of the samples after normalizing the raw data. As can be seen in the figure, the “charged off” and “fully paid” classes have very similar correlation plots, so these loan status classes can easily be confused with each other, and it is difficult to extract discriminative features for the loan status.
Deep learning, which has become a huge tide in the fields of big data and artificial intelligence, has made significant breakthroughs in machine learning and pattern recognition research. It provides predictive models for large and complex data [11] that automatically extract non-linear features by stacking layers deeply. In particular, the deep convolutional neural network (CNN), one of the deep learning models, extracts local features hierarchically through weighted filters [12]. Researchers have studied pattern recognition mainly on images [13], video [14], speech [15], text [16], and other datasets [17]. CNNs have also been applied to problems such as recognizing people’s emotions [18,19] and predicting power consumption [20].
In this paper, we exploit a deep CNN approach for repayment prediction in social lending. The CNN is well known as a powerful tool for image classification but has not been explored much for general data mining tasks. We aim to extend the range of applications of CNN to large-scale data classification. The social lending data contain specific patterns for the borrowers and the loan products. The convolutional layers capture the various features of borrowers from the transaction data, and the pooling layers merge similar features into one. By stacking several convolutional and pooling layers, basic features are extracted in the lower layers, and complex ones are derived in the higher layers. The deep CNN model can classify the loan status of borrowers by extracting discriminative features and learning the patterns in the lending data.
We confirm the potential of CNN for the social lending problem by designing a one-dimensional CNN, analyzing the extracted features and the performance on Lending Club data, and evaluating whether the feature representation generalizes to other lenders. We show how various convolutional architectures affect overall performance, and how systems that do not require hand-crafted features outperform other machine learning algorithms. We also provide empirical rules and principles for designing deep CNN architectures for repayment prediction in social lending.
This paper is organized as follows. In Section 2, we discuss the related work on social lending. Section 3 explains the proposed deep CNN architecture. Section 4 presents the experimental results, and Section 5 concludes this paper.

2. Related Works

Milne et al. stress that P2P lending platforms are increasing in many countries around the world, and that the probability of increased defaults and the resulting potential problems are important concerns [21]. As shown in Table 3, there are many studies on borrower default and credit risk in P2P social lending.
Most researchers have used relatively small amounts of data and few attributes, extracting features with statistical or hand-crafted methods, and have presented default prediction and credit risk assessment models built with machine learning. Lin et al. proposed a credit risk assessment model using Yooli data from a P2P lending platform in China [7]. They identified the features affecting defaults by analyzing the demographic characteristics of borrowers with a nonparametric test; ten variables, including gender, age, marital status and loan amount, were extracted, and a credit risk assessment model was established using logistic regression. Malekipirbazari and Aksakalli assessed credit risk in social lending using random forest [8]. Data pre-processing and manipulation were used to extract 15 features, and performance was evaluated according to the number of features: the model performed better as the number of features grew, and it achieved higher performance than other methods.
All of these studies hand-design their own features, which makes it difficult to compare them across experimental settings [29]. As the amount of data and the number of attributes increase, it becomes difficult to extract discriminative features of the borrower. However, because big data brings new opportunities for discovering new value [30], it is important to use all of the borrower’s information to predict repayment accurately.
On the other hand, there have been studies using large amounts of data. Kim and Cho used semi-supervised learning to improve the performance by leveraging the unlabeled portion of the Lending Club data [23]; they predicted borrower defaults using a decision tree with unlabeled data. Vinod Kumar et al. analyzed credit risk by relabeling all the data, including borrowers who were “fully paid” and “charged off” as well as “current”, “default” and “late”, into new classes “Good” or “Bad” [10]. However, these studies also require a feature extraction process. In this paper, we show that a deep CNN can overcome the problem of default prediction by using all the data and attributes.

3. Deep Learning for Repayment Prediction

Figure 2 shows the overall architecture for social lending repayment prediction using deep CNN. We train the deep CNN using the formulations defined below. The key idea is to learn a feature space that captures inherent properties, such as the characteristics of the borrower or the loan product, from the data of many borrowers. We then train classifiers on this feature space to model the characteristics of each borrower.
The learned network is used to project the social lending data into the representation space learned by CNN and predict the repayment of the borrowers through the softmax classifier. The network can easily predict the repayment of borrowers by extracting features and by capturing the characteristics of the borrower through the convolution layers and the pooling layers.

3.1. Convolutional Neural Network

Convolutional neural networks perform convolution operations instead of matrix multiplication [17]. In the continuous case, the convolution of two functions f and g is defined as follows:
$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau = \int_{-\infty}^{\infty} f(t - \tau)\, g(\tau)\, d\tau \qquad (1)$$
In the discrete case, the integral is replaced by a summation:
$$(f * g)(n) = \sum_{m=-\infty}^{\infty} f(m)\, g(n - m) = \sum_{m=-\infty}^{\infty} f(n - m)\, g(m) \qquad (2)$$
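As a concrete illustration of Equation (2) for finite sequences, the discrete convolution can be computed directly; the following minimal NumPy sketch is our own example, not part of the original experiments:

```python
import numpy as np

# Discrete convolution of two finite sequences, as in Equation (2):
# np.convolve flips g and slides it over f, summing the products.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])
print(np.convolve(f, g))  # [0.5 1.5 2.5 1.5]
```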
Discriminative features are extracted from the information about borrowers and loan products in the Lending Club data through local connections that leverage convolution operations. Let $x_i^0 = [x_1, x_2, \ldots, x_N]$ be the preprocessed lending data. The output $y_i^{l,j}$ is obtained from the input vector $x_i^0$ through the convolution layer as in Equation (3). Several feature maps are generated from the lending data using the trained convolution filters, and complex features of the lending data are captured by the activation function.
$$y_i^{l,j} = \sigma\left(\sum_{k=0}^{K} w_k^{l,j}\, x_{i+k-1}^{l-1,\,j}\right) \qquad (3)$$
where $y_i^{l,j}$ is calculated from the output vector $x$ of the previous layer and the convolution weights $w$; $l$ is the index of the layer, $K$ is the filter size, $k$ is the index within the filter, and $\sigma$ is the activation function. Here, we use ReLU as the activation function: $\sigma(x) = \max(0, x)$.
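To make Equation (3) concrete, the following NumPy sketch applies a single 1D filter with ReLU to one preprocessed record. The input size and kernel size follow Section 3.2 and Table 4; the random weights are purely illustrative (real weights come from training):

```python
import numpy as np

def conv1d_relu(x, w):
    # Slide the filter w over x, take the dot product at each position,
    # then apply the ReLU activation sigma(z) = max(0, z).
    K = len(w)
    y = np.array([np.dot(w, x[i:i + K]) for i in range(len(x) - K + 1)])
    return np.maximum(y, 0.0)

x = np.random.rand(72)   # one 1 x 72 lending record (Section 3.2)
w = np.random.randn(3)   # kernel size 3, as in Table 4
print(conv1d_relu(x, w).shape)  # (70,) with no padding
```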
In the pooling layer, semantically similar features extracted by the convolution layer are merged into one [31]. The pooling layers extract representative values of the features of the Lending Club data: the maximum of each local patch in a feature map is computed to reduce dimension and distortion. Equation (4) represents the extraction of the maximum in the $l$-th pooling layer, where $R$ denotes the pooling window of a certain size and $T$ the pooling stride.
$$f_{ij}^{l} = \max_{r \in R}\, x_{i,\, j \times T + r}^{l-1} \qquad (4)$$
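A corresponding sketch of the max-pooling step of Equation (4), using the pooling size and stride from Table 4 (again an illustration, not the authors’ code):

```python
import numpy as np

def max_pool1d(x, R=2, T=1):
    # Take the maximum over each window of size R, moving with stride T.
    return np.array([x[j * T : j * T + R].max()
                     for j in range((len(x) - R) // T + 1)])

print(max_pool1d(np.array([0.2, 0.9, 0.1, 0.7])))  # [0.9 0.9 0.7]
```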
Several convolution and pooling layers are stacked, performing hierarchical feature extraction on the lending data. They extract informative and discriminative representations from the data, with increasingly complex features appearing from the bottom up [9].
The feature maps generated by repeating several convolution and pooling layers over the lending data are connected one-dimensionally through the fully-connected layer, and the data are classified into loan statuses using the activation function. The combination of a fully-connected layer and a softmax classifier is used to predict the repayment of the borrower. The features extracted by the convolution and pooling layers are flattened to form the feature vector $f^l = [f_1, f_2, \ldots, f_I]$, where $I$ is the number of units in the last pooling layer; this vector is used as the input of the fully-connected layer. Equation (5) shows the computation of a hidden node in the fully-connected layer, where $\sigma$ is an activation function, $w$ is a weight connecting nodes, and $b_i$ is a bias term.
$$h_i^{l} = \sum_{j} w_{ji}^{l-1}\left(\sigma\left(f_i^{l-1}\right) + b_i\right) \qquad (5)$$
The output of the last layer through the softmax classifier is the loan status $c$ (charged off or fully paid). In Equation (6), $L$ is the index of the last layer, and $N_C$ is the total number of classes.
$$P(c \mid f) = \operatorname*{argmax}_{c \in C}\; \frac{\exp\left(f^{L-1} w^{L} + b^{L}\right)}{\sum_{k=1}^{N_C} \exp\left(f^{L-1} w_k\right)} \qquad (6)$$
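For illustration, the softmax normalization inside Equation (6) can be written as below; the logit values are made up:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([1.2, -0.4])   # f^{L-1} w + b for the two classes
probs = softmax(logits)
print(probs, probs.argmax())     # probabilities and the predicted class
```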
Forward propagation is performed using Equations (3)–(6) and yields the error of the network. The deep CNN weights are updated by a backpropagation algorithm based on the RMSProp optimizer [32], which minimizes the categorical cross-entropy over mini-batches of the lending data. RMSProp maintains the relative differences between the recent changes of the variables using exponential averaging. We set the learning rate to 0.001 and the number of samples per batch to 512.
$$g_t^2 = \gamma\, g_{t-1}^2 + (1 - \gamma)\left(\nabla_{\theta} J(\theta_t)\right)^2 \qquad (7)$$
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{g_t^2 + \epsilon}}\, \nabla_{\theta} J(\theta_t) \qquad (8)$$
where $g_t^2$ is the running average of the squared gradients, $\eta$ is the learning rate, $\gamma$ is the decay (momentum) term of every parameter $\theta_t$ at every time step $t$, and $\nabla_{\theta} J(\theta)$ is the gradient of the objective function. When the stopping criterion is satisfied, forward and backward propagation stop.
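A single RMSProp update following Equations (7) and (8) can be sketched as below. The learning rate comes from the text; γ = 0.9 and ε = 1e-8 are common defaults we assume, since the paper does not report them:

```python
import numpy as np

def rmsprop_step(theta, grad, g2, eta=0.001, gamma=0.9, eps=1e-8):
    # Equation (7): exponential running average of squared gradients.
    g2 = gamma * g2 + (1.0 - gamma) * grad ** 2
    # Equation (8): scale the step by the root of the running average.
    theta = theta - eta * grad / np.sqrt(g2 + eps)
    return theta, g2
```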

3.2. Architecture and Hyperparameters

A deep CNN can take many structures depending on the combination of hyperparameters, which affect the feature extraction process, training time and performance [33]. Determining the optimal deep CNN architecture, including its hyperparameters, requires understanding the domain; in our case, this means classifying repayment in P2P social lending.
Unlike images, Lending Club data do not have strong relationships between neighboring attributes, so a small window size should be used in the convolution and pooling layers to minimize the loss of information, and the stride of the window should also be small. An activation function such as the rectified linear unit (ReLU) should be used to extract nonlinear patterns [34]. We design the network as shown in Table 4. The input of the network is 1D data of size 1 × 72. The Lending Club data pass through the convolution and pooling layers, followed by two fully-connected layers.
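For reference, the architecture of Table 4 can be expressed as the following Keras sketch. This is our assumption of a concrete implementation, as the paper does not name a framework; with ‘same’ padding in the convolution and pooling layers, the parameter counts match Table 4 (256; 12,352; 2,359,808; 1026):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv1D(64, 3, strides=1, padding='same', activation='relu',
                  input_shape=(72, 1)),                 # 1 x 72 lending record
    layers.MaxPooling1D(pool_size=2, strides=1, padding='same'),
    layers.Conv1D(64, 3, strides=1, padding='same', activation='relu'),
    layers.MaxPooling1D(pool_size=2, strides=1, padding='same'),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.25),                               # Section 3.3
    layers.Dense(2, activation='softmax'),              # charged off / fully paid
])
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```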

3.3. Dropout

Overfitting occurs as layers get deeper or the network becomes more complex [35]: the model fits the training data too closely, resulting in low accuracy on new data. To prevent overfitting, we can use regularization, dropout or data augmentation; in this paper, we choose dropout.
Dropout is a regularization technique that avoids overfitting by omitting a portion of the network [36]. It deletes hidden nodes, except input and output nodes, using only some of the weights contained in each layer, thereby allowing robust features to be learned without relying on particular neurons [37]. Dropout is applied with an inclusion probability and is performed independently for each node and each training example of the lending data. The dropout probability affects performance: overfitting or underfitting can occur if it is too small or too large. We set the value to 0.25 and apply dropout before the last fully-connected layer.
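The mechanism can be sketched as an inverted-dropout mask (illustrative only; deep learning frameworks apply this internally during training):

```python
import numpy as np

def dropout(h, p=0.25, seed=0):
    # Drop each unit with probability p; rescale the kept units by
    # 1 / (1 - p) so the expected activation is unchanged at test time.
    rng = np.random.default_rng(seed)
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)
```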

4. Experiments

4.1. Lending Club Dataset

In this paper, we use data from Lending Club, the biggest US P2P lending company. A total of 855,502 records were collected for the years 2015–2016, with 110 predictor attributes such as loan amount, payment amount and loan period; of these, 143,823 records with 63 attributes were used. The excluded attributes are those that cannot be used for prediction (such as borrower ID, URL and the description of loans), those with more than 80% missing values, and those that are filled only after the borrower starts to repay [10].
Since the input of the CNN ranges in [0, 1], we preprocess the 63 attributes used for prediction. The categorical variables are converted into binary dummy variables, and the continuous variables are normalized, after removing 1% of the outliers, as follows:
$$X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$
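A preprocessing sketch under our assumptions about the column lists (the actual attribute names come from the Lending Club schema and are not reproduced here):

```python
import pandas as pd

def preprocess(df, categorical, continuous):
    # Categorical attributes become binary dummy variables.
    out = pd.get_dummies(df[categorical].astype(str))
    # Continuous attributes: clip 1% of outliers, then min-max scale to [0, 1].
    for col in continuous:
        x = df[col].clip(df[col].quantile(0.005), df[col].quantile(0.995))
        out[col] = (x - x.min()) / (x.max() - x.min())
    return out
```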

4.2. Result and Analysis

Experimental results of the proposed method are described in this section. First, we show the results on the validation set used to design the architecture of the proposed method: we evaluate the performance with various loss functions and hyperparameters and compare it with other methods. Afterward, we analyze the misclassification cases through the confusion matrix and examine the deep CNN models using t-distributed stochastic neighbor embedding (t-SNE) [38].
The hyperparameters are adjusted while maintaining the best configuration based on those mentioned in Section 3.2. The size of the input vector is 1 × 72, and the ranges of the parameter values are given in Table 5. The hyperparameters were tuned over 100 epochs on a validation set, and the model achieving the highest performance was saved.
The parameters that affect the performance most are the stride of the pooling layer and the batch size; the parameter that affects it least is the number of filters in the convolution layer. The highest performance is obtained when the number of filters is 32, the kernel size is 3, the pooling window size is 2, the pooling stride is 1, the dropout probability is 0.25, the hidden size is 512, and the batch size is 512. Appendix A presents the experimental results for hyperparameters and loss functions.
Comparison of performance with other methods. We present a comparison of accuracy with other methods on the test set, using the best-performing deep CNN model described in Appendix A. The number of hidden nodes for the multi-layer perceptron is set to 15; the kernel, C and gamma for SVM are set to RBF, 300 and 1.0; the k-nearest neighbor uses k = 3; the depth of the decision tree is set to 25; and the depth and number of trees for the random forest are set to 30 and 200. All the hyperparameters of the compared methods are optimized through several experiments.
We obtained an accuracy of 75.86% and achieved higher performance than the conventional machine learning methods. Figure 3 shows the comparison of the proposed method with other methods.
5-fold cross-validation is performed to verify the usefulness of the proposed method. The deep CNN shows the highest performance among the compared machine learning methods, followed by random forest, decision tree and multi-layer perceptron. Figure 4 shows the comparison of accuracy under 5-fold cross-validation.
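As an illustration of this protocol, the following sketch runs 5-fold cross-validation for the random forest baseline with the hyperparameters reported above; `X` and `y` are assumed to be the preprocessed feature matrix and loan-status labels:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rf = RandomForestClassifier(max_depth=30, n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring='accuracy')
print(scores.mean())  # mean accuracy over the 5 folds
```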
Preprocessing and feature extraction are important steps in developing a classification system. Table 6 presents a comparative study of preprocessing, feature selection and feature extraction methods. The feature selection methods employ mutual information, information gain and chi-square statistics, selecting features in order of importance. Recently, the restricted Boltzmann machine (RBM) has been exploited to extract effective features [39]. The base classifier for this experiment is a softmax classifier with two fully-connected layers (#features × 512 × 2). Without preprocessing, the model tends to fail to learn and classifies all the data into one class. Feature extraction methods produce higher performance than feature selection methods, and the RBM achieves performance almost similar to the CNN.
We further compare the performance with a variety of CNN models by running additional experiments with Inception-v3 [40], ResNet [41] and Inception-ResNet v4 [42]. Table 7 shows the performance of each model. The improved CNN models achieve higher performance and demonstrate the potential of CNNs for predicting repayment in social lending.
Analysis of misclassification cases. Table 8 shows the confusion matrix of the deep CNN model. Our model tends to classify the repaid borrowers well but often misclassifies the non-repaid borrowers. This problem appears because there are fewer non-repaying borrowers than repaid borrowers.
Emekter et al. found significant variables for the repayment of the borrower, such as interest rate, home ownership, revolving line utilization and total amount funded, in a delinquency prediction model using the Lending Club data [43]. We compared the well-classified data with the misclassified data based on these variables.
Figure 5a shows the distribution of well-classified samples (A, B in Table 8), and Figure 5b shows the distribution of misclassified samples (C, D in Table 8). The misclassified data show a tendency opposite to the well-classified data; the opposite distribution also appears for other variables such as the loan period, the verification status and dti, in addition to the variables mentioned above. In Figure 5, dti denotes the ratio of the borrower’s total monthly debt payments to total debt obligations.
Analysis of deep CNN model. We analyze the feature space of the learned model by projecting the samples of the validation set with t-SNE, to verify that our model extracts discriminative features. t-SNE is a dimensionality reduction technique that helps visualize high-dimensional deep features while maintaining the local structure of the data and revealing important global structure [44]. The more separable the samples of different classes are in the map, the better the features perform.
We use the saved model above to extract features from 10,000 samples and project them into two-dimensional space at the layer before classification by performing forward propagation. Figure 6a shows the t-SNE results projected in two dimensions. The distributions of the repaid borrowers and the non-repaid borrowers are well clustered. On the other hand, some clusters are mixed, like the marked parts. Three samples are selected at random from each such cluster, as shown in Figure 6b; those samples have similar feature patterns even though they belong to different classes. Nevertheless, the trained model with the extracted features works very well for repayment prediction.
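The projection step can be sketched as follows, assuming `features` holds the penultimate-layer activations of the 10,000 samples and `labels` their loan status:

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Embed the deep features in 2D while preserving local neighborhoods.
emb = TSNE(n_components=2, random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap='coolwarm')
plt.show()
```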

5. Conclusions

We have presented a deep CNN architecture for repayment prediction in P2P social lending. The deep CNN model is confirmed to be very effective for repayment prediction compared with feature engineering and machine learning algorithms, and the presented model can help lenders make better choices. The visualization analysis reveals that the feature space is clustered well depending on the success of repayment, verifying that the features extracted by the deep CNN are effective for the prediction.
In addition, we have analyzed the features extracted by the deep CNN model in the misclassification cases based on the confusion matrix, which reveals the problem of a skewed class distribution.
To solve this problem, we need more data from borrowers who have not repaid. In reality, however, such data are difficult to collect, because there are fewer borrowers who did not repay than borrowers who did. This problem can be mitigated by giving more weight to the data of the less observed side (non-repaid borrowers), or a larger loss when such data are misclassified. In addition, a deep CNN architecture could be built to extract dense and sparse information at the same time using filters of various sizes, which may capture features of the borrowers who did not repay. This remains as future work. We also need more effort to find the structural parameters of the deep CNN automatically, such as the number and order of layers, in addition to basic parameters such as the number of filters and the kernel size, in order to determine the optimal architecture. For a fairer comparison, we also need to adopt more sophisticated classifiers such as gradient boosting trees.

Author Contributions

Conceptualization, S.-B.C.; formal analysis, J.-Y.K.; funding acquisition, S.-B.C.; investigation, J.-Y.K.; methodology, S.-B.C.; supervision, S.-B.C.; validation, J.-Y.K.; writing—original draft, J.-Y.K.; writing—review & editing, S.-B.C.

Funding

This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [2016-0-00562(R0124-16-0002), Emotional Intelligence Technology to Infer Human Emotion and Carry on Dialogue Accordingly].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Comparison of performance by parameters. The explanation on this table is in the text.
Each cell gives the validation accuracy (%). Columns enumerate, left to right, the combinations of the number of filters (F: 32 or 64), kernel size (K: 2 or 3), pooling size (P: 2 or 3) and pooling stride (S: 1 or 2), in the order F32-K2-P2-S1, F32-K2-P2-S2, F32-K2-P3-S1, F32-K2-P3-S2, F32-K3-P2-S1, F32-K3-P2-S2, F32-K3-P3-S1, F32-K3-P3-S2, then the same for F64. Rows enumerate the number of layers, dropout probability, hidden size (hdn) and batch size (bth).

Layer Dropout hdn bth | F32 (K2P2 K2P3 K3P2 K3P3, each S1 S2) | F64 (same order)
2  0.25 256 256 | 68.71 65.85 69.26 64.62 70.59 70.45 64.24 66.45 | 69.31 58.10 69.39 73.65 72.53 72.69 73.53 71.76
2  0.25 256 512 | 68.28 69.46 65.33 71.91 69.85 73.58 71.87 72.07 | 67.32 64.34 67.06 65.30 73.58 69.02 73.63 72.59
2  0.25 512 256 | 70.92 67.61 67.03 69.97 64.94 63.84 73.39 64.82 | 69.56 67.68 62.63 67.45 61.71 63.93 68.53 69.40
2  0.25 512 512 | 69.82 70.26 63.10 72.33 65.74 70.19 73.93 73.00 | 69.11 68.63 69.31 70.99 69.52 68.93 66.40 66.65
2  0.5  256 256 | 70.55 69.23 71.42 69.86 72.30 67.86 73.27 70.46 | 70.66 73.20 68.64 69.87 72.91 71.70 72.97 71.90
2  0.5  256 512 | 66.65 69.78 71.09 75.17 68.35 64.45 65.06 68.04 | 66.87 69.14 68.30 72.84 63.81 65.06 71.62 70.63
2  0.5  512 256 | 67.58 71.88 71.26 72.88 74.43 68.38 72.90 65.30 | 71.02 73.91 73.58 73.11 74.12 73.96 69.89 68.80
2  0.5  512 512 | 70.56 62.42 63.88 68.92 68.47 69.14 69.82 74.91 | 73.50 69.14 68.23 74.11 74.68 70.48 66.80 61.75
4  0.25 256 256 | 74.70 71.60 72.70 71.82 71.17 66.38 72.32 74.48 | 71.42 70.89 66.81 62.42 73.23 68.10 69.28 66.78
4  0.25 256 512 | 73.00 72.78 69.96 67.99 72.31 71.99 74.09 69.86 | 73.80 73.02 72.99 73.09 75.09 72.67 72.28 70.63
4  0.25 512 256 | 74.26 62.55 70.90 73.39 71.71 72.36 73.58 71.32 | 69.34 71.70 74.10 61.92 75.49 73.60 71.26 72.02
4  0.25 512 512 | 74.71 73.42 71.56 73.36 75.01 71.61 74.76 73.37 | 73.30 73.53 72.88 72.24 72.04 73.95 74.48 72.50
4  0.5  256 256 | 68.57 68.25 69.53 71.20 69.03 67.57 65.91 62.89 | 69.65 67.48 67.08 56.63 68.42 66.01 65.67 68.95
4  0.5  256 512 | 71.44 68.34 71.32 68.62 67.72 72.53 69.88 71.38 | 69.60 73.40 70.02 67.96 70.71 68.43 72.27 71.35
4  0.5  512 256 | 70.33 71.64 71.66 64.29 72.14 69.47 68.39 69.26 | 71.78 67.49 69.24 59.01 72.44 64.85 71.98 62.47
4  0.5  512 512 | 72.05 71.10 73.90 69.40 68.78 71.38 61.48 71.33 | 73.51 69.12 75.56 64.71 73.17 70.05 74.41 73.14
6  0.25 256 256 | 71.67 70.00 71.52 64.62 73.51 69.53 71.81 68.68 | 70.68 71.38 70.21 64.44 71.33 65.93 72.46 68.92
6  0.25 256 512 | 73.38 70.68 70.35 70.18 72.78 71.43 73.25 68.15 | 74.20 72.12 74.66 73.05 74.02 71.30 73.68 72.69
6  0.25 512 256 | 66.42 75.22 68.45 74.19 72.04 69.62 73.06 74.77 | 70.87 58.53 73.93 65.80 73.69 60.46 71.99 69.88
6  0.25 512 512 | 74.35 70.53 71.89 71.68 72.96 71.30 74.76 69.86 | 75.01 69.24 75.04 69.80 68.33 68.26 73.10 74.11
6  0.5  256 256 | 67.71 58.11 69.72 70.91 67.24 62.26 66.19 62.18 | 67.66 65.50 63.21 66.38 70.97 64.79 71.00 73.59
6  0.5  256 512 | 70.07 70.21 72.23 69.01 71.41 69.10 72.13 69.19 | 73.37 72.21 69.32 70.21 71.01 69.82 72.48 68.96
6  0.5  512 256 | 71.76 65.94 70.71 58.28 73.37 67.44 68.70 64.81 | 69.72 69.95 71.08 56.42 71.55 63.30 67.09 63.46
6  0.5  512 512 | 75.38 71.09 73.73 68.05 75.71 65.11 68.86 70.34 | 74.30 69.58 75.00 61.62 72.77 73.24 74.52 66.61
8  0.25 256 256 | 71.49 69.70 71.10 66.13 73.80 62.69 64.89 56.48 | 72.18 60.15 59.63 62.90 71.06 61.05 70.53 60.70
8  0.25 256 512 | 73.45 71.25 72.54 64.96 73.95 68.59 71.80 67.66 | 74.81 72.31 73.69 68.85 73.56 69.51 72.52 69.82
8  0.25 512 256 | 65.91 73.00 72.82 71.39 66.24 66.30 70.05 64.66 | 73.78 68.00 70.60 56.96 71.68 65.05 66.61 67.12
8  0.25 512 512 | 69.72 71.05 72.45 71.48 70.33 66.42 72.98 69.07 | 75.35 72.90 74.39 69.54 73.43 70.27 72.13 68.87
8  0.5  256 256 | 67.69 70.91 63.96 54.50 66.84 59.32 64.69 61.65 | 66.01 67.01 66.89 65.91 62.45 65.26 68.46 64.09
8  0.5  256 512 | 73.16 66.22 71.58 66.11 72.49 65.28 71.32 61.12 | 69.48 65.46 69.03 65.03 72.49 68.10 71.16 66.65
8  0.5  512 256 | 70.28 71.37 69.01 65.18 68.54 66.42 68.81 56.91 | 64.21 57.13 67.85 73.55 66.42 62.78 67.99 66.34
8  0.5  512 512 | 74.39 69.04 73.37 68.88 70.25 63.89 73.80 68.08 | 73.89 68.37 67.55 68.54 72.95 65.69 66.90 68.60
10 0.25 256 256 | 74.64 66.77 68.33 63.63 71.92 58.94 69.67 60.86 | 63.48 61.80 60.06 59.15 72.46 64.87 65.87 66.10
10 0.25 256 512 | 73.42 65.56 72.50 61.83 73.92 59.64 62.16 60.40 | 68.83 66.57 66.25 65.30 71.45 66.60 72.14 67.50
10 0.25 512 256 | 73.64 62.88 74.02 56.42 71.21 61.95 67.95 61.42 | 70.84 60.56 66.06 61.36 60.53 64.06 70.11 63.29
10 0.25 512 512 | 67.62 66.12 72.08 68.55 71.54 65.27 71.12 60.96 | 72.41 69.55 59.93 58.37 71.80 70.02 74.91 67.98
10 0.5  256 256 | 67.88 63.48 67.85 68.01 64.30 61.69 66.06 58.75 | 61.14 62.44 68.37 61.78 62.68 60.87 68.44 63.84
10 0.5  256 512 | 65.76 60.27 62.62 58.59 71.92 60.83 74.14 55.18 | 74.32 65.40 70.44 65.97 67.94 67.89 63.89 67.16
10 0.5  512 256 | 68.90 61.88 68.59 60.78 69.91 59.63 67.16 60.06 | 71.41 64.11 68.26 61.70 66.41 60.75 65.53 63.49
10 0.5  512 512 | 74.23 68.40 73.34 63.20 72.40 60.45 74.24 61.90 | 70.02 65.38 70.68 63.23 74.89 67.76 57.61 65.19
In Table A1 (color-coded in the published version), the deeper the red, the higher the performance, and the darker the blue, the lower the performance. Empirically, stacking more layers lowered the performance: it was highest when stacking four layers and decreased on the validation data as the number of layers increased. Performance varied from 54% to 75% depending on the parameter settings. We therefore use four layers, which show the best performance.
In the case of dropout, performance decreases as its probability increases, and it gets worse as more layers are stacked. The experiments show that 0.25 is the optimal choice in terms of performance and efficiency. When we removed the dropout layers, we observed a performance degradation of 2% on average, because dropout prevents overfitting. However, as the probability of dropout increases, underfitting occurs and performance tends to decline.
We experimented to find the optimal parameters of the pooling layer, which produced the most significant performance differences depending on the chosen values. The smaller the pooling size and stride, the higher the performance; in particular, increasing the stride from 1 to 5 decreases performance by about 7%. Since the relationships between variables are weak in these data, the greater the stride, the greater the loss of information, so setting the optimal pooling size and stride is essential to minimize it.
Table A2 shows the effect of various optimizers and activation functions on CNN training. The loss function was categorical cross-entropy for all optimizers, and the learning rates were set to 0.01 and 0.001. The optimizers show a performance difference of about 7%: RMSProp is the highest, while SGD is lower and did not learn well. The activation functions also show a difference of about 7%: both ReLU and LeakyReLU [45] extract nonlinear relations and show higher performance than the other functions.
Table A2. Comparison with optimizer and activation function.
Optimizer | Accuracy (%) | Activation Function | Accuracy (%)
SGD       | 68.24        | sigmoid             | 68.04
RMSProp   | 75.86        | tanh                | 72.85
Adadelta  | 73.31        | ReLU                | 75.86
Adam      | 74.07        | LeakyReLU           | 74.70

References

  1. Zhao, H.; Ge, Y.; Liu, G.; Wang, G.; Chen, E.; Zhang, H. P2P lending survey: Platforms, recent advances and prospects. ACM Trans. Intell. Syst. Technol. 2017, 8, 72. [Google Scholar] [CrossRef]
  2. Xu, J.; Chen, D.; Chau, M. Identifying features for detecting fraudulent loan requests on P2P platforms. In Proceedings of the IEEE Conference on Intelligence and Security Informatics, Tucson, AZ, USA, 28–30 September 2016; pp. 79–84. [Google Scholar]
  3. Serrano-Cinca, C.; Gutiérrez-Nieto, B.; López-Palacios, L. Determinants of default in P2P lending. PLoS ONE 2015, 10, e0139427. [Google Scholar] [CrossRef] [PubMed]
  4. Yan, J.; Yu, W.; Zhao, J.L. How signaling and search costs affect information asymmetry in P2P lending: The economics of big data. Financ. Innov. 2015, 1, 19. [Google Scholar] [CrossRef]
  5. Serrano-Cinca, C.; Gutiérrez-Nieto, B. The use of profit scoring as an alternative to credit scoring systems in peer-to-peer (P2P) lending. Decis. Support Syst. 2016, 89, 113–122. [Google Scholar] [CrossRef]
  6. Everett, C.R. Group membership, relationship banking and loan default risk: The case of online social lending. Bank. Financ. Rev. 2015, 7. [Google Scholar] [CrossRef]
  7. Lin, X.; Li, X.; Zheng, Z. Evaluating borrower’s default risk in peer-to-peer lending: Evidence from a lending platform in China. Appl. Econ. 2017, 49, 3538–3545. [Google Scholar] [CrossRef]
  8. Malekipirbazari, M.; Aksakalli, V. Risk assessment in social lending via random forests. Expert Syst. Appl. 2015, 42, 4621–4631. [Google Scholar] [CrossRef]
  9. Jiao, Z.; Gao, X.; Wang, Y.; Li, J.; Xu, H. Deep convolutional neural networks for mental load classification based on EEG data. Pattern Recognit. 2018, 76, 582–595. [Google Scholar] [CrossRef]
  10. Vinod Kumar, L.; Natarajan, S.; Keerthana, S.; Chinmayi, K.M.; Lakshmi, N. Credit risk analysis in peer-to-peer lending system. In Proceedings of the IEEE International Conference on Knowledge Engineering and Applications, Singapore, 28–30 September 2016; pp. 193–196. [Google Scholar]
  11. Chen, X.-W.; Lin, X. Big data deep learning: Challenges and perspectives. IEEE Access 2014, 2, 514–525. [Google Scholar] [CrossRef]
  12. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  13. Hijazi, S.; Kumar, R.; Rowen, C. Using Convolutional Neural Networks for Image Recognition; Cadence Design Systems Inc.: San Jose, CA, USA, 2015; pp. 1–12. [Google Scholar]
  14. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732. [Google Scholar]
  15. Siniscalchi, S.M.; Salerno, V.M. Adaptation to new microphones using artificial neural networks with trainable activation function. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 1959–1965. [Google Scholar] [CrossRef] [PubMed]
  16. Kim, Y. Convolutional neural networks for sentence classification. arXiv 2014, arXiv:1408.5882. [Google Scholar]
  17. Kim, K.-H.; Lee, C.-S.; Jo, S.-M.; Cho, S.-B. Predicting the success of bank telemarketing using deep convolutional neural network. In Proceedings of the IEEE Conference of Soft Computing and Pattern Recognition, Fukuoka, Japan, 3–15 November 2015; pp. 314–317. [Google Scholar]
  18. Jeong, M.; Ko, B.C. Driver’s facial expression recognition in real-time for safe driving. Sensors 2018, 18, 4270. [Google Scholar] [CrossRef] [PubMed]
  19. Kwon, Y.-H.; Shin, S.-B.; Kim, S.-D. Electroencephalography based fusion two-dimensional (2D)-convolution neural network (CNN) model for emotion recognition system. Sensors 2018, 18, 1388. [Google Scholar] [CrossRef] [PubMed]
  20. Salerno, V.; Rabbeni, G. An extreme learning machine approach to effective energy disaggregation. Electronics 2018, 7, 235. [Google Scholar] [CrossRef]
  21. Milne, A.; Parboteeah, P. The business models and economics of peer-to-peer lending. Eur. Credit Res. Inst. 2016, 17, 1–31. [Google Scholar]
  22. Chen, Y. Research on the credit risk assessment of chinese online peer-to-peer lending borrower on logistic regression model. DEStech Trans. Environ. Energy Earth Sci. 2017, 216–221. [Google Scholar] [CrossRef]
  23. Kim, A.; Cho, S.-B. Dempster-Shafer fusion of semi-supervised learning methods for predicting defaults in social lending. In Proceedings of the International Conference on Neural Information Processing, Guangzhou, China, 14–18 November 2017; pp. 854–862. [Google Scholar]
  24. Guo, Y.; Zhou, W.; Luo, C.; Liu, C.; Xiong, H. Instance-based credit risk assessment for investment decisions in P2P lending. Eur. J. Oper. Res. 2016, 249, 417–426. [Google Scholar] [CrossRef]
  25. Polena, M.; Regner, T. Determinants of borrowers’ default in P2P lending under consideration of the loan risk class. Jena Econ. Res. Pap. 2016, 23, 82. [Google Scholar] [CrossRef]
  26. Bitvai, Z.; Cohn, T. Predicting peer-to-peer loan rates using bayesian non-linear regression. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 2203–2209. [Google Scholar]
  27. Byanjankar, A.; Heikkilä, M.; Mezei, J. Predicting credit risk in peer-to-peer lending: A neural network approach. In Proceedings of the IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; pp. 719–725. [Google Scholar]
  28. Zang, D.; Qi, M.; Fu, Y. The credit risk assessment of P2P lending based on BP neural network. In Proceedings of the International Conference on Industrial Engineering and Management Science, Hong Kong, China, 8–9 August 2014; pp. 91–94. [Google Scholar]
  29. Ronao, C.A.; Cho, S.-B. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 2016, 59, 235–244. [Google Scholar] [CrossRef]
  30. Chen, M.; Mao, S.; Liu, Y. Big data: A survey. Mob. Netw. Appl. 2014, 19, 171–209. [Google Scholar] [CrossRef]
  31. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  32. Tieleman, T.; Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA Neural Netw. Mach. Learn. 2012, 2, 26–31. [Google Scholar]
  33. He, K.; Sun, J. Convolutional neural networks at constrained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5353–5360. [Google Scholar]
  34. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceeding of the International Conference on Machine Learning, Haifa, Israel, 21–25 June 2010; pp. 807–814. [Google Scholar]
  35. Caruana, R.; Lawrence, S.; Giles, L. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2000; pp. 402–408. [Google Scholar]
  36. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  37. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2012; pp. 1097–1105. [Google Scholar]
  38. Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  39. Tran, S.N.; Wolff, D.; Weyde, T.; Garcez, A. Feature preprocessing with RBMs for music similarity learning. Learning 2014, 9, 16. [Google Scholar]
  40. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  41. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  42. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  43. Emekter, R.; Tu, Y.; Jirasakuldech, B.; Lu, M. Evaluating credit risk and loan performance in online peer-to-peer (P2P) lending. Appl. Econ. 2015, 47, 54–70. [Google Scholar] [CrossRef]
  44. Hafemann, L.G.; Sabourin, R.; Oliveira, L.S. Learning features for offline handwritten signature verification using deep convolutional neural networks. Pattern Recognit. 2017, 70, 163–176. [Google Scholar] [CrossRef]
  45. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 30. [Google Scholar]
Figure 1. Correlation plot for the loan status. * means low correlation, ** middle correlation, and *** high correlation.
Figure 2. The overall architecture of the proposed method.
Figure 3. Comparison of our method with machine learning models.
Figure 4. The accuracy of 5-fold cross validation.
Figure 5. Comparison of distribution of classified samples. (a) Distribution of well-classified samples. (b) Distribution of misclassified samples.
Figure 6. (a) 2D projections of the feature vectors using t-SNE. (b) Three samples selected at random from two clusters mixed.
Table 1. The statistics of Lending Club data.
Year      | The Amount of Data | # of Attributes | Charged Off | Fully Paid
2007–2011 | 42,535             | 56              | 5662        | 34,108
2012–2013 | 188,181            | 111             | 27,664      | 145,185
2014      | 235,629            | 111             | 29,483      | 98,495
2015      | 421,095            | 111             | 29,178      | 87,989
2016      | 434,407            | 111             | 4567        | 30,427
Table 2. The summary of data attributes.
Category      | Name              | Type    | Description
Predictor     | Loan Status       | Binary  | Current status of the loan (Charged Off or Fully Paid)
Borrower Info | Annual Inc        | Numeric | The self-reported annual income.
              | Emp Length        | Nominal | Employment length in years (<1~10).
              | Home Ownership    | Nominal | RENT, OWN, MORTGAGE, OTHER.
              | Addr State        | Nominal | The state provided by the borrower in the loan application.
              | Grade [Sub Grade] | Nominal | LC assigned loan grade.
Loan Info     | Total Pymnt       | Numeric | Payments received to date for total amount funded.
              | Funded Amnt       | Numeric | The total amount committed to that loan at that point in time.
              | Issue d           | Date    | The month in which the loan was funded.
              | Recoveries        | Numeric | Post charge off gross recovery.
              | Loan Amnt         | Numeric | The listed amount of the loan applied for by the borrower.
              | Term              | Nominal | 36 or 60 months.
              | Installment       | Numeric | The monthly payment owed by the borrower.
              | Purpose           | Nominal | Purpose for the loan request.
Credit Info   | Tot Cur Bal       | Numeric | Total current balance of all accounts.
              | Total bc Limit    | Numeric | Total bankcard high credit/credit limit.
              | Acc Now Delinq    | Numeric | The number of accounts on which the borrower is now delinquent.
Table 3. Related works in P2P lending.
Author (Year)                              | Dataset        | #Data   | #Attributes | Method
Chen (2017) [22]                           | Paipai         | 3177    | 11          | Logistic regression
Kim & Cho (2017) [23]                      | Lending Club   | 332,844 | 17          | Decision tree
Lin et al. (2017) [7]                      | Yooli          | 48,784  | 10          | Logistic regression
Guo et al. (2016) [24]                     | Lending Club   | 2016    | 6           | Logistic regression
                                           | Prosper        | 4128    | 6           |
Serrano-Cinca & Gutiérrez-Nieto (2016) [5] | Lending Club   | 40,907  | 26          | Linear regression, decision tree
Polena & Regner (2016) [25]                | Lending Club   | 70,673  | 14          | Regression
Vinod Kumar et al. (2016) [10]             | Lending Club   | 279,169 | 70          | Decision tree, random forest, bagging
Bitvai & Cohn (2015) [26]                  | Funding Circle | 3500    | 15          | Bayesian non-linear regression
Byanjankar et al. (2015) [27]              | Bondora        | 16,037  | 15          | Artificial neural network
Malekipirbazari & Aksakalli (2015) [8]     | Lending Club   | 68,000  | 15          | Random forest
Zang et al. (2014) [28]                    | Lending Club   | 10,649  | 7           | BP neural network
Table 4. The proposed deep convolutional neural network (CNN) architecture.
Type            | Configuration                                       | #Parameters
Convolution     | filter 64 × 1 × 3, stride 1 × 1, zero padding, ReLU | 256
Pooling         | pooling size 1 × 2, stride 1 × 1                    | 0
Convolution     | filter 64 × 1 × 3, stride 1 × 1, zero padding, ReLU | 12,352
Pooling         | pooling size 1 × 2, stride 1 × 1                    | 0
Fully-connected | 512                                                 | 2,359,808
Activation      | ReLU                                                | 0
Dropout         | 0.25                                                | 0
Fully-connected | 2 (class)                                           | 1026
Softmax         | classifier                                          | 0
Table 5. The list of hyperparameters.
Name         | Description                                        | Value
Layer        | The total number of layers                         | 2~10
Filter       | The number of filters                              | 32~128
Kernel size  | The size of the convolution windows                | 1~5
Pool size    | The size of the pooling windows                    | 1~5
Pool stride  | The size of the pooling stride                     | 1~5
Zero padding | Whether to use zero padding                        | Yes/No
Dropout      | Probability of dropout                             | 0~1
Hidden size  | The number of neurons in the fully connected layer | 256~512
Batch size   | The number of samples per gradient update          | 256~512
Epoch        | The number of times to iterate in training         | 100
Table 6. Comparison of the performance by preprocessing and feature extraction methods.
Method                        | #Features | Accuracy | F1-Score | AUC
No-preprocessing (not-scaled) | 72        | 78.44%   | 87.92%   | 0.5
Mutual information            | 10        | 61.68%   | 73.12%   | 0.57
Information gain              | 10        | 65.00%   | 74.37%   | 0.70
Chi-square statistics         | 10        | 56.66%   | 66.43%   | 0.62
Extraction based on RBM       | 72        | 75.77%   | 85.47%   | 0.66
CNN                           | 72        | 75.86%   | 85.45%   | 0.67
Table 7. Comparison of the performance with other CNNs.
Model               | Accuracy | Precision | Recall | F1-Score | AUC
CNN                 | 75.86%   | 80.73%    | 90.75% | 85.45%   | 0.67
Inception-v3        | 76.53%   | 80.41%    | 92.46% | 86.02%   | 0.65
ResNet              | 76.45%   | 80.65%    | 91.88% | 85.90%   | 0.66
Inception-ResNet v4 | 77.78%   | 79.31%    | 96.77% | 87.18%   | 0.65
Table 8. Confusion matrix.
True \ Predicted | Charged Off | Fully Paid
Charged off      | 2155 (A)    | 7300 (C)
Fully paid       | 3115 (D)    | 30,575 (B)
