Article

Pattern Classification Based on RBF Networks with Self-Constructing Clustering and Hybrid Learning

1 Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424, Taiwan
2 Department of Electrical Engineering and Intelligent Electronic Commerce Research Center, National Sun Yat-Sen University, Kaohsiung 80424, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(17), 5886; https://doi.org/10.3390/app10175886
Submission received: 22 July 2020 / Revised: 7 August 2020 / Accepted: 23 August 2020 / Published: 25 August 2020
(This article belongs to the Special Issue New Frontiers in Computational Intelligence)

Abstract

Radial basis function (RBF) networks are widely adopted to solve problems in the field of pattern classification. However, in the construction phase of such networks, there are several issues encountered, such as the determination of the number of nodes in the hidden layer, the form and initialization of the basis functions, and the learning of the parameters involved in the networks. In this paper, we present a novel approach for constructing RBF networks for pattern classification problems. An iterative self-constructing clustering algorithm is used to produce a desired number of clusters from the training data. Accordingly, the number of nodes in the hidden layer is determined. Basis functions are then formed, and their centers and deviations are initialized to be the centers and deviations of the corresponding clusters. Then, the parameters of the network are refined with a hybrid learning strategy, involving hyperbolic tangent sigmoid functions, steepest descent backpropagation, and least squares method. As a result, optimized RBF networks are obtained. With this approach, the number of nodes in the hidden layer is determined and basis functions are derived automatically, and higher classification rates can be achieved. Furthermore, the approach is applicable to construct RBF networks for solving both single-label and multi-label pattern classification problems.

1. Introduction

Pattern classification is a process related to categorization, concerned with classifying patterns into one or more categories. It plays an important role in many fields, such as media, medicine, science, business, and security [1,2,3,4,5]. A classification problem can be converted to multiple binary classification problems in two ways, one-vs.-rest (OvR) and one-vs.-one (OvO), by creating several systems each of which is responsible for one category. However, OvR may suffer from unbalanced learning, since typically the set of negative training data is much larger than the set of positive training data. In contrast, OvO may suffer from ambiguities: two or more categories may receive the same number of votes and one category has to be picked arbitrarily. Therefore, researchers tend to seek other ways to solve classification problems.
In general, classification tasks can be divided into two kinds: single-label and multi-label. Single-label classification concerns classifying a pattern to only one category, but multi-label classification may classify a pattern to one or more categories. Many algorithms, based on multilayered perceptrons (MLP) [6], decision trees [7,8], k-nearest neighbors (KNN) [9], probability-based classifiers [10], support vector machines (SVM) [11,12], or random vector functional-link (RVFL) networks [13,14], have been developed for pattern classification.
Most developed algorithms are for single-label classification. KNN [9] is a lazy learning algorithm. It is based on similarity between input patterns and training data. The category to which the input pattern belongs is determined by its k-nearest neighbors. Decision tree methods [7,15] build decision trees, e.g., ID3 and C4.5, from a set of training data using the concept of information entropy. An unseen pattern traverses down from the root of the tree until reaching a leaf, which tells the category to which the unseen pattern is assigned. Probability-based classifiers [16,17] assume independence among features. These classifiers can be trained for the estimation of parameters necessary for classification. MLP [18,19,20] is a feedforward multi-layer network model consisting of several layers of nodes, with two adjacent layers fully connected to each other. Each node in the network is associated with an activation function, linear or nonlinear. RVFL networks [13,14,21] are also fully-connected feedforward networks, with only one single hidden layer. The weights between input and hidden nodes are assigned random values. The values of the weights between hidden and output nodes, instead, are learned from training patterns. An RBF network [22,23,24,25,26,27] is a two-layer network, using radial basis functions as the activation functions in the hidden layer. The output of the network is a linear combination of the outputs from the hidden layer. In SVM [11,12], training patterns are mapped into points in a high-dimensional space and the points of different categories are separated in the space by a gap as wide as possible. For an unseen pattern, the same mapping is performed and the classification is predicted according to the side of the gap it falls on. For solving multi-label classification problems [28], two approaches are generally adopted [29]: In one approach, a multi-label classification task is transformed into several single-label classification tasks, which are then solved by single-label classification methods. In the other approach, the capability of a specific single-label classification algorithm is extended to handle multi-label data directly [30,31,32,33].
Since they were first introduced in 1988 by Broomhead and Lowe [34], RBF networks have become popular in classification applications. However, as pointed out in [34,35], in the construction phase of such networks, there are several issues encountered, such as the determination of the number of nodes in the hidden layer, the form and initialization of the basis functions, and the learning of the parameters involved in the networks. In this paper, we present a novel approach for constructing RBF networks for pattern classification problems. An iterative self-constructing clustering algorithm is used to produce a desired number of clusters from the training data. Accordingly, the number of nodes in the hidden layer is determined. Basis functions are then formed, and their centers and deviations are initialized to be the centers and deviations of the corresponding clusters. Then the parameters of the network are refined with a hybrid learning strategy, involving hyperbolic tangent sigmoid functions, steepest descent backpropagation, and the least squares method. As a result, optimized RBF networks are obtained. Our approach can offer advantages in practicality. The number of nodes in the hidden layer is determined and basis functions are derived automatically, and higher classification rates can be achieved through the hybrid learning process. Furthermore, the approach is applicable to construct RBF networks for solving both single-label and multi-label pattern classification problems. Experimental results have shown that the proposed approach can be used to solve classification tasks effectively.
We have been working on RBF networks for years, and have developed different techniques [26,27,36,37]. This paper is mainly based on the MS thesis of the first author, Z.-R. He (who preferred to use a different English name, Tsan-Jung He, in the thesis) [38], supervised by S.-J. Lee, with the following contributions:
  • An iterative self-constructing clustering algorithm is applied to determine the number of hidden nodes and associated basis functions.
  • The centers and deviations of the basis functions are refined through the steepest descent backpropagation method.
  • Tikhonov regularization is applied in the optimization process to maintain the robustness of the output parameters.
  • The hyperbolic tangent sigmoid function is used as the activation function of the output nodes when learning the parameters of basis functions.
The rest of this paper is organized as follows: Section 2 gives an overview of the related work. The proposed approach is described in detail in Section 3, Section 4 and Section 5. Basis functions are derived by clustering from training data and a hybrid learning is used to optimize network parameters. Experimental results are presented in Section 6. The effectiveness of the proposed approach on single-label and multi-label classification is demonstrated. Finally, concluding remarks are given in Section 7.

2. Related Work

Many single-label classification algorithms have been proposed. The extended nearest neighbor (ENN) method [39], unlike the classic KNN, makes the prediction in a “two-way communication” style. It considers not only the nearest neighbors of the input pattern, but also the samples that regard the input pattern as their nearest neighbor. The Natural Neighborhood Based Classification Algorithm (NNBCA) [40] provides a good classification result without artificially selecting the neighborhood parameter. Unlike KNN, it adopts a different k for different samples. An enhanced general fuzzy min-max neural network (EGFM) classification model is proposed in [19] to perform supervised classification of data. New hyperbox expansion, overlap, and contraction rules are used to overcome some unidentified cases in some regions. SaE-ELM [41] is an improved algorithm based on RVFL networks. In SaE-ELM, the hidden node parameters are optimized by the self-adaptive differential evolution algorithm, whose trial vector generation strategies and associated control parameters are self-adapted in a strategy pool by learning from their previous experiences in generating promising solutions, and the network output weights are calculated using the Moore–Penrose generalized inverse. In [42], basis functions in an RBF network are interpreted as probability density functions. The weights are seen as prior probabilities. Models that output class conditional densities or mixture densities were proposed. In [24], an RBF network based on KNN with adaptive selection radius is proposed. An adaptive selection radius is calculated according to the population density sampling method. The RBF network is trained to locate the sound source by solving nonlinear equations involving the time differences of arrival of the sound.
Several problem transformation methods exist for multi-label classification. One popular transformation method, called binary relevance [43], transforms the original training dataset into p datasets, where p is the number of categories involved with the dataset. Each resulting dataset contains all the patterns of the original dataset, with each pattern assigned to one of the two labels: “belonging to” or “not belonging to” a particular category. Since the resulting datasets are all single-labeled, any single-label classification technique is applicable to them. The label powerset (LP) transformation [44] method creates one binary classifier for every label combination present in the training set. For example, eight possible label combinations are created if the number of categories associated with the original dataset is three. Ensemble methods were developed to create multi-label ensemble classifiers [45,46]. A set of multi-class single-label classifiers are created. For an input example, each classifier outputs a single class. These predictions are then combined by an ensemble method.
Some classification algorithms/models have been adapted to the multi-label task, without requiring problem transformations. Clare [29] is an adapted C4.5 version for multi-label classification. Other decision tree classification methods based on multi-valued attributes were also developed [47]. Zhang et al. [32] propose several multi-label learning algorithms including back-propagation multi-label learning (BP-MLL) [48], multi-label k-nearest neighbors (ML-KNN) [49], and multi-label RBF (ML-RBF) [50]. BP-MLL is a multi-label version of the back-propagation neural network. To take care of multiple labels, label co-occurrence is incorporated into the pairwise ranking loss function. However, it has a complex global error function to be minimized. ML-KNN is a lazy learning algorithm which requires a large run-time search. ML-RBF is a multi-label RBF neural network which is an extension of the traditional RBF learning algorithm. Multi-label with Fuzzy Relevance Clustering (ML-FRC), proposed by Lee and Jiang [51], is a fuzzy relevance clustering based method for the task of multi-label text categorization. Kurata et al. [52] treat some of the nodes in the final hidden layer as dedicated neurons for each training pattern assigned to more than one label. These dedicated neurons are initialized to connect to the corresponding co-occurring labels with stronger weights than to others. A multi-label metamorphic relations prediction approach, named RBF-MLMR [33], is proposed to find a set of appropriate metamorphic relations for metamorphic testing. It uses the Soot analysis tool to generate the control flow graph and labels. The extracted nodes and the path properties constitute multi-label data sets for the control flow graph. Then, a multi-label RBF network prediction model is established to predict multiple metamorphic relations.

3. Proposed Approach

Let $X = \{ (\mathbf{x}^{(i)}, \mathbf{y}^{(i)}) \mid 1 \le i \le N \}$ be a finite set of $N$ training patterns, where $\mathbf{x}^{(i)} \in \mathbb{R}^n$ is an input vector with $n$ feature values, i.e., $\mathbf{x}^{(i)} = [x_1^{(i)}\; x_2^{(i)}\; \cdots\; x_n^{(i)}]^T$, and $\mathbf{y}^{(i)}$ is the corresponding category vector with $m$ components, i.e., $\mathbf{y}^{(i)} = [y_1^{(i)}\; y_2^{(i)}\; \cdots\; y_m^{(i)}]^T$, of pattern $i$, defined as
$$y_f^{(i)} = \begin{cases} +1, & \text{if } \mathbf{x}^{(i)} \text{ belongs to category } f; \\ -1, & \text{if } \mathbf{x}^{(i)} \text{ does not belong to category } f \end{cases}$$
for $1 \le f \le m$. Note that $m$ denotes the number of categories associated with the training dataset $X$. For convenience, the categories are labelled as $1, 2, \cdots, m$, respectively. For single-label classification, one of the elements in $\mathbf{y}^{(i)}$ is +1 and the other $m-1$ elements are −1. For multi-label classification, two or more components in $\mathbf{y}^{(i)}$ can be +1. Pattern classification is concerned with constructing a predictive model from $X$ such that, for any input vector $\mathbf{x}$, the model responds with an output $\hat{\mathbf{y}}$ indicating which categories $\mathbf{x}$ belongs to.
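For illustration, a minimal Python sketch of this ±1 label encoding (the function name and category indices are our own, for illustration only):

```python
import numpy as np

def encode_labels(categories, m):
    """Build the +1/-1 category vector y for one pattern.

    categories: iterable of 1-based category indices the pattern belongs to.
    m: total number of categories.
    """
    y = -np.ones(m)
    for f in categories:
        y[f - 1] = +1.0
    return y

# Single-label pattern belonging to category 2 (of m = 4):
print(encode_labels([2], 4))      # [-1.  1. -1. -1.]
# Multi-label pattern belonging to categories 1 and 3:
print(encode_labels([1, 3], 4))   # [ 1. -1.  1. -1.]
```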
In this work, we construct RBF networks as predictive models for classification. The data for training and testing the models will be collected from openly accessible websites. The number of hidden nodes will be determined by clustering on the training data. The parameters of an RBF network will be learned by applying a hybrid learning algorithm. The performance of the constructed models will be analyzed by five-fold cross validation. Experiments for comparing performance with other methods will be conducted.
Our RBF network architecture, shown in Figure 1, is a two-layer network consisting of one hidden layer and one output layer. There are $n$ input nodes, each node receiving and feeding forward one feature value of the input vector to the hidden layer. The hidden layer has $J$ nodes, each with a basis function as its activation function. Each input node is fully connected to every node in the hidden layer. The output layer has $m$ nodes, each corresponding to one distinct category. The hidden and output layers are also fully connected, i.e., each node in the hidden layer is connected to each node in the output layer. The weight between node $j$ of the hidden layer and node $f$ of the output layer is denoted by $w_{f,j}$, for $1 \le j \le J$ and $1 \le f \le m$.
When an input vector $\mathbf{x} = [x_1\; x_2\; \cdots\; x_n]^T$ is presented to the input nodes of the network, the network produces the output vector $\hat{\mathbf{y}} = [\hat{y}_1\; \hat{y}_2\; \cdots\; \hat{y}_m]^T$ at the output layer. The network operates as follows.
  • Input node $i$, $1 \le i \le n$, takes $x_i$ and dispatches it to every node in the hidden layer.
  • Hidden layer. Hidden node $j$, $1 \le j \le J$, is responsible for computing the value of its designated basis function as
    $$B_j(\mathbf{x}) = \prod_{k=1}^{n} \exp\left[ -\left( \frac{x_k - c_{k,j}}{\alpha v_{k,j}} \right)^2 \right]$$
    where $c_{k,j}$ and $v_{k,j}$ denote the center and deviation, respectively, of the $k$th dimension. Note that $\alpha$ is a constant for controlling the scope of the function. Let
    $$h_j = B_j(\mathbf{x})$$
    which is forwarded to the output layer.
  • Output layer. Node $f$, $1 \le f \le m$, in this layer calculates the weighted sum
    $$s_f = w_{f,1} h_1 + w_{f,2} h_2 + \cdots + w_{f,J} h_J + b_f$$
    where $w_{f,1}, w_{f,2}, \dots, w_{f,J}$ are the weights and $b_f$ is the bias associated with it. Then it provides the network output $\hat{y}_f$ by
    $$\hat{y}_f = \operatorname{sign}(s_f)$$
    where $\operatorname{sign}(\cdot)$ is the sign function defined as
    $$\operatorname{sign}(s_f) = \begin{cases} +1, & \text{if } s_f \ge 0; \\ -1, & \text{if } s_f < 0. \end{cases}$$
    Therefore, the network output $\hat{\mathbf{y}}$ for the input vector $\mathbf{x}$ is
    $$\hat{\mathbf{y}} = [\hat{y}_1\; \hat{y}_2\; \cdots\; \hat{y}_m]^T$$
Each component in $\hat{\mathbf{y}}$ is either +1 or −1. If component $f$, $1 \le f \le m$, is +1, $\mathbf{x}$ is predicted to belong to category $f$. Note that one component or more in $\hat{\mathbf{y}}$ can be +1. Therefore, the proposed network architecture is applicable to solving both single-label and multi-label pattern classification problems.
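To make the forward computation concrete, the following minimal Python sketch (array shapes and names are our own, not from the paper) evaluates the network for given centers, deviations, weights, and biases:

```python
import numpy as np

def rbf_forward(x, C, V, W, b, alpha=1.0):
    """Forward pass of the RBF network described above.

    x: input vector, shape (n,)
    C, V: centers and deviations of the J basis functions, each of shape (J, n)
    W: output weights w_{f,j}, shape (m, J); b: output biases b_f, shape (m,)
    alpha: scope-controlling constant of the basis functions
    """
    z = (x - C) / (alpha * V)               # (J, n)
    h = np.exp(-np.sum(z ** 2, axis=1))     # h_j = prod_k exp(-((x_k - c_{k,j})/(alpha v_{k,j}))^2)
    s = W @ h + b                           # weighted sums s_f, shape (m,)
    y_hat = np.where(s >= 0, 1.0, -1.0)     # sign function
    return y_hat, h, s

# Tiny usage example with arbitrary parameters (n = 2 features, J = 3 hidden nodes, m = 2 categories):
rng = np.random.default_rng(0)
y_hat, h, s = rbf_forward(rng.normal(size=2), rng.normal(size=(3, 2)),
                          np.abs(rng.normal(size=(3, 2))) + 0.1,
                          rng.normal(size=(2, 3)), rng.normal(size=2))
print(y_hat)
```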
We describe below how the network of Figure 1 is created from the given training set. Two phases, network setup and parameter refinement, are involved. In the first phase, the network structure is built and the initial values of the parameters are set. Then the parameters of the network are optimally refined in the second phase.

4. Network Setup Phase

In this phase, the number of nodes and basis functions in the hidden layer are determined automatically. Furthermore, the centers and deviations of the basis functions, and the weights and biases associated with the output layer, are initialized appropriately.

4.1. Initialization of Basis Functions

To determine the number of nodes and basis functions in the hidden layer, we find a group of clusters in the training set $X$ through a clustering process. Firstly, we divide the training set $X$ into $m$ training subsets $X_1, X_2, \dots, X_m$ as follows:
$$X_h = \{ \mathbf{x}^{(i)} \mid 1 \le i \le N,\ y_h^{(i)} = +1 \}$$
for $1 \le h \le m$. Then, each subset is fed to an iterative self-constructing clustering algorithm [53] and is grouped into $K_1, \dots, K_m$ clusters, respectively. A cluster contains patterns which are similar to each other, and is characterized by its center and deviation.
Let $K = K_1 + \cdots + K_m$. Therefore, we have $K$ clusters in total and let them be denoted as $C_1, C_2, \dots, C_K$ with centers $\mathbf{c}_k = [c_{1,k}\; \cdots\; c_{n,k}]^T$ and deviations $\mathbf{v}_k = [v_{1,k}\; \cdots\; v_{n,k}]^T$, $1 \le k \le K$. Then we set the number of hidden nodes, $J$, to be equal to $K$, i.e., $J = K$. For hidden node $j$, its associated basis function, $B_j(\mathbf{x})$, is also determined, with its center and deviation initialized to be the center and deviation of cluster $C_j$, i.e., we set
$$B_j(\mathbf{x}) = \prod_{i=1}^{n} \exp\left[ -\left( \frac{x_i - c_{i,j}}{\alpha v_{i,j}} \right)^2 \right]$$
for $1 \le j \le J$, with $c_{i,j}$ and $v_{i,j}$ being the center and deviation, respectively, in the $i$th dimension of $C_j$.
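The iterative self-constructing clustering algorithm of [53] is not reproduced here; as a rough illustration of this initialization step only, the sketch below uses scikit-learn's KMeans per category as a stand-in clustering routine (the cluster counts, the function name, and the use of within-cluster standard deviations as the "deviations" are our assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def init_basis_functions(X, Y, clusters_per_class=2, seed=0):
    """Initialize RBF centers and deviations from per-category clusters.

    X: training inputs, shape (N, n); Y: +1/-1 category matrix, shape (N, m).
    Returns centers C and deviations V, each of shape (J, n), with J the total cluster count.
    """
    centers, deviations = [], []
    for h in range(Y.shape[1]):
        X_h = X[Y[:, h] == +1]                      # subset X_h of category h
        if len(X_h) == 0:
            continue
        k = min(clusters_per_class, len(X_h))       # guard against very small subsets
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_h)
        for c in range(k):
            members = X_h[km.labels_ == c]
            centers.append(km.cluster_centers_[c])
            # per-dimension spread of the cluster, floored to avoid zero deviations
            deviations.append(np.maximum(members.std(axis=0), 1e-3))
    return np.asarray(centers), np.asarray(deviations)
```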

4.2. Initialization of Weights and Biases

Next, we derive initial values for the weights $w_{f,j}$, $1 \le j \le J$, $1 \le f \le m$, between the hidden layer and the output layer, and for the biases $b_f$, $1 \le f \le m$, in the output layer. Consider the sum of squared error of the whole training set $X$:
$$\sum_{i=1}^{N} \left\| \mathbf{y}^{(i)} - \hat{\mathbf{y}}^{(i)} \right\|^2 = \sum_{i=1}^{N} \sum_{f=1}^{m} \left[ y_f^{(i)} - \hat{y}_f^{(i)} \right]^2$$
By Equations (4) and (5), we have
$$\hat{y}_f^{(i)} = \operatorname{sign}\left( s_f^{(i)} \right),$$
$$s_f^{(i)} = w_{f,1} h_1^{(i)} + w_{f,2} h_2^{(i)} + \cdots + w_{f,J} h_J^{(i)} + b_f$$
where $h_j^{(i)} = B_j(\mathbf{x}^{(i)})$, for $1 \le f \le m$ and $1 \le i \le N$. For pattern $i$, we have
$$\begin{aligned} s_1^{(i)} &= w_{1,1} h_1^{(i)} + w_{1,2} h_2^{(i)} + \cdots + w_{1,J} h_J^{(i)} + b_1 \\ s_2^{(i)} &= w_{2,1} h_1^{(i)} + w_{2,2} h_2^{(i)} + \cdots + w_{2,J} h_J^{(i)} + b_2 \\ &\;\;\vdots \\ s_m^{(i)} &= w_{m,1} h_1^{(i)} + w_{m,2} h_2^{(i)} + \cdots + w_{m,J} h_J^{(i)} + b_m \end{aligned}$$
In total, there are $N \times m$ such equations. Although only the sign of $s_f^{(i)}$ matters in Equation (11), to be robust we'd like $s_f^{(i)}$ to be as close to $y_f^{(i)}$, which is either +1 or −1, as possible. Therefore, we demand
$$\begin{aligned} y_1^{(i)} &= w_{1,1} h_1^{(i)} + w_{1,2} h_2^{(i)} + \cdots + w_{1,J} h_J^{(i)} + b_1 \\ y_2^{(i)} &= w_{2,1} h_1^{(i)} + w_{2,2} h_2^{(i)} + \cdots + w_{2,J} h_J^{(i)} + b_2 \\ &\;\;\vdots \\ y_m^{(i)} &= w_{m,1} h_1^{(i)} + w_{m,2} h_2^{(i)} + \cdots + w_{m,J} h_J^{(i)} + b_m \end{aligned}$$
for $1 \le i \le N$. Equation (14) can be rewritten as
$$\left\| A W - Y \right\|^2$$
where
$$A = \begin{bmatrix} h_1^{(1)} & h_1^{(2)} & \cdots & h_1^{(N)} \\ h_2^{(1)} & h_2^{(2)} & \cdots & h_2^{(N)} \\ \vdots & \vdots & & \vdots \\ h_J^{(1)} & h_J^{(2)} & \cdots & h_J^{(N)} \\ 1 & 1 & \cdots & 1 \end{bmatrix}^T,$$
$$W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,J} & b_1 \\ w_{2,1} & w_{2,2} & \cdots & w_{2,J} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ w_{m,1} & w_{m,2} & \cdots & w_{m,J} & b_m \end{bmatrix}^T,$$
$$Y = \begin{bmatrix} y_1^{(1)} & y_1^{(2)} & \cdots & y_1^{(N)} \\ y_2^{(1)} & y_2^{(2)} & \cdots & y_2^{(N)} \\ \vdots & \vdots & & \vdots \\ y_m^{(1)} & y_m^{(2)} & \cdots & y_m^{(N)} \end{bmatrix}^T$$
with sizes $N \times (J+1)$, $(J+1) \times m$, and $N \times m$, respectively.
To improve the robustness of the system [54], instead of minimizing Equation (15), we would like to minimize the penalized residual sum of squares:
$$\| A W - Y \|^2 + \lambda \| W \|^2$$
where $\lambda$ is a pre-defined non-negative coefficient to balance the robustness against collinearity and the capability of fitting the desired outputs well. To find the solution to the minimization problem, we first rewrite Equation (19) as
$$\| A W - Y \|^2 + \lambda \| W \|^2 = \| A W - Y \|^2 + \left\| \sqrt{\lambda}\, I W - \mathbf{0} \right\|^2 = \left\| \begin{bmatrix} A \\ \sqrt{\lambda}\, I \end{bmatrix} W - \begin{bmatrix} Y \\ \mathbf{0} \end{bmatrix} \right\|^2$$
where $I$ is the $(J+1) \times (J+1)$ identity matrix and $\mathbf{0}$ is the $(J+1) \times m$ zero matrix. Let $A' = \begin{bmatrix} A \\ \sqrt{\lambda}\, I \end{bmatrix}$ and $Y' = \begin{bmatrix} Y \\ \mathbf{0} \end{bmatrix}$. Then Equation (19) becomes
$$\left\| A' W - Y' \right\|^2$$
By the least squares method [55], the solution minimizing Equation (19) is
$$\hat{W} = \left[ (A')^T A' \right]^{-1} (A')^T Y' = \left\{ \begin{bmatrix} A \\ \sqrt{\lambda}\, I \end{bmatrix}^T \begin{bmatrix} A \\ \sqrt{\lambda}\, I \end{bmatrix} \right\}^{-1} \begin{bmatrix} A \\ \sqrt{\lambda}\, I \end{bmatrix}^T \begin{bmatrix} Y \\ \mathbf{0} \end{bmatrix} = \left\{ A^T A + \lambda I \right\}^{-1} A^T Y$$
Note that the solution is unique even when $A$ has dependent columns. This solution, $\hat{W}$, provides initial values for the weights $w_{f,j}$, $1 \le j \le J$, $1 \le f \le m$, and biases $b_f$, $1 \le f \le m$, which can be refined later.
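A minimal numpy sketch of this regularized least-squares initialization (the matrix names follow the text; the function name and the default λ are ours):

```python
import numpy as np

def solve_output_weights(H, Y, lam=0.1):
    """Solve min ||A W - Y||^2 + lam ||W||^2 for the output weights and biases.

    H: hidden-layer outputs h_j^(i), shape (N, J)
    Y: target matrix of +1/-1 entries, shape (N, m)
    Returns W_hat of shape (J+1, m); the last row holds the biases b_f.
    """
    A = np.hstack([H, np.ones((H.shape[0], 1))])     # append the bias column, shape (N, J+1)
    reg = lam * np.eye(A.shape[1])
    # W_hat = (A^T A + lam I)^{-1} A^T Y  (Tikhonov-regularized least squares)
    return np.linalg.solve(A.T @ A + reg, A.T @ Y)
```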

5. Parameter Refinement Phase

In this phase, the optimal values of the parameters of the network are learned from the training dataset $X$. These parameters include the centers $\mathbf{c}_j = [c_{1,j}\; \cdots\; c_{n,j}]^T$ and deviations $\mathbf{v}_j = [v_{1,j}\; \cdots\; v_{n,j}]^T$, $1 \le j \le J$, associated with the basis functions in the hidden layer, the weights $w_{f,j}$, $1 \le j \le J$, $1 \le f \le m$, between the hidden layer and the output layer, and the biases $b_f$, $1 \le f \le m$, associated with the output nodes in the output layer. A hybrid learning process is applied iteratively. In each iteration, we first treat all the weights and biases as fixed and use steepest descent backpropagation to update the centers and deviations associated with the basis functions. Then we treat all the centers and deviations associated with the basis functions as fixed and use the least squares method to update the weights and biases associated with the output layer. The process is iterated until convergence is reached.

5.1. Centers and Deviations

First, we describe how to refine the centers and deviations associated with the basis functions in the hidden layer. To make the refinement possible, we express $\hat{y}_f$ in Equation (5) in terms of the hyperbolic tangent sigmoid function, which is analytic, i.e.,
$$\hat{y}_f \approx a_f = g(s_f) = \frac{e^{\beta s_f} - e^{-\beta s_f}}{e^{\beta s_f} + e^{-\beta s_f}}$$
for $1 \le f \le m$, where $\beta$ is a slope-controlling constant and $s_f$ is defined in Equation (4). Note that $a_f$ provides a good approximation to the sign function and it is differentiable at all points.
Let us denote the initial values of the centers and deviations, shown in Equation (9), as $c_{k,j}(0)$ and $v_{k,j}(0)$, $1 \le k \le n$, $1 \le j \le J$, and the initial values of the weights and biases, obtained in Equation (21), as $w_{f,j}(0)$ and $b_f(0)$, $1 \le f \le m$, $1 \le j \le J$. As in [56], the mean squared error for the training set $X$ can be estimated as
$$\hat{E}(k) = \left\| \mathbf{y}(k) - \hat{\mathbf{y}}(k) \right\|^2 \approx \left\| \mathbf{y}(k) - \mathbf{a}(k) \right\|^2 = \sum_{l=1}^{m} \left[ y_l(k) - a_l(k) \right]^2$$
where $(\mathbf{x}(k), \mathbf{y}(k))$ can be any pattern in the training dataset $X$. By steepest descent backpropagation, the centers and deviations of the basis functions in the hidden layer are updated as
$$c_{i,j}(k+1) = c_{i,j}(k) - \eta \frac{\partial \hat{E}(k)}{\partial c_{i,j}}, \quad 1 \le i \le n,\ 1 \le j \le J$$
$$v_{i,j}(k+1) = v_{i,j}(k) - \eta \frac{\partial \hat{E}(k)}{\partial v_{i,j}}, \quad 1 \le i \le n,\ 1 \le j \le J$$
for $k \ge 0$, where $\eta$ is the learning rate. From Equation (23), we have
$$\frac{\partial \hat{E}(k)}{\partial c_{i,j}} = -2 \sum_{l=1}^{m} \left[ y_l(k) - a_l(k) \right] \frac{\partial a_l(k)}{\partial c_{i,j}}$$
$$\frac{\partial \hat{E}(k)}{\partial v_{i,j}} = -2 \sum_{l=1}^{m} \left[ y_l(k) - a_l(k) \right] \frac{\partial a_l(k)}{\partial v_{i,j}}$$
By the chain rule, we have
$$\frac{\partial a_l(k)}{\partial c_{i,j}} = \frac{\partial a_l(k)}{\partial s_l} \frac{\partial s_l(k)}{\partial h_j} \frac{\partial h_j(k)}{\partial c_{i,j}}$$
Since $a_l(k) = g(s_l(k))$, $s_l(k) = w_{l,1}(k) h_1(k) + \cdots + w_{l,J}(k) h_J(k) + b_l(k)$, and $h_j(k) = \prod_{r=1}^{n} e^{-\left( \frac{x_r(k) - c_{r,j}(k)}{\alpha v_{r,j}(k)} \right)^2}$, we have
$$\frac{\partial a_l(k)}{\partial s_l} = \dot{g}(s_l(k)) = \beta \left[ 1 - a_l^2(k) \right]$$
$$\frac{\partial s_l(k)}{\partial h_j} = w_{l,j}(k)$$
$$\frac{\partial h_j(k)}{\partial c_{i,j}} = \frac{\partial}{\partial c_{i,j}} \left[ e^{-\left( \frac{x_i(k) - c_{i,j}(k)}{\alpha v_{i,j}(k)} \right)^2} \right] \prod_{r \ne i} e^{-\left( \frac{x_r(k) - c_{r,j}(k)}{\alpha v_{r,j}(k)} \right)^2} = \frac{2 \left[ x_i(k) - c_{i,j}(k) \right]}{\left[ \alpha v_{i,j}(k) \right]^2} h_j(k)$$
Similarly, we have
$$\frac{\partial a_l(k)}{\partial v_{i,j}} = \frac{\partial a_l(k)}{\partial s_l} \frac{\partial s_l(k)}{\partial h_j} \frac{\partial h_j(k)}{\partial v_{i,j}}$$
$$\frac{\partial h_j(k)}{\partial v_{i,j}} = \frac{2 \left[ x_i(k) - c_{i,j}(k) \right]^2}{\alpha^2 v_{i,j}^3(k)} h_j(k)$$
Therefore, we have
$$c_{i,j}(k+1) = c_{i,j}(k) + 4\eta \frac{x_i(k) - c_{i,j}(k)}{\left[ \alpha v_{i,j}(k) \right]^2} h_j(k) \sum_{l=1}^{m} \left[ y_l(k) - a_l(k) \right] \dot{g}(s_l(k))\, w_{l,j}(k)$$
$$v_{i,j}(k+1) = v_{i,j}(k) + 4\eta \frac{\left[ x_i(k) - c_{i,j}(k) \right]^2}{\alpha^2 v_{i,j}^3(k)} h_j(k) \sum_{l=1}^{m} \left[ y_l(k) - a_l(k) \right] \dot{g}(s_l(k))\, w_{l,j}(k)$$
for $1 \le i \le n$, $1 \le j \le J$. In matrix form this becomes:
$$\mathbf{c}_j(k+1) = \mathbf{c}_j(k) + 4\eta\, h_j(k)\, D_{1,j}(k)\, \mathbf{q}_{1,j}(k)\, \mathbf{w}_j(k)^T \dot{G}(\mathbf{s}(k))\, \mathbf{e}(k)$$
$$\mathbf{v}_j(k+1) = \mathbf{v}_j(k) + 4\eta\, h_j(k)\, D_{2,j}(k)\, \mathbf{q}_{2,j}(k)\, \mathbf{w}_j(k)^T \dot{G}(\mathbf{s}(k))\, \mathbf{e}(k)$$
for $1 \le j \le J$, where
$$\mathbf{q}_{1,j}(k) = \left[ x_1(k) - c_{1,j}(k) \;\; x_2(k) - c_{2,j}(k) \;\; \cdots \;\; x_n(k) - c_{n,j}(k) \right]^T$$
$$\mathbf{q}_{2,j}(k) = \left[ \left[ x_1(k) - c_{1,j}(k) \right]^2 \;\; \left[ x_2(k) - c_{2,j}(k) \right]^2 \;\; \cdots \;\; \left[ x_n(k) - c_{n,j}(k) \right]^2 \right]^T$$
$$D_{1,j}(k) = \operatorname{diag}\left( \frac{1}{\left[ \alpha v_{1,j}(k) \right]^2},\ \frac{1}{\left[ \alpha v_{2,j}(k) \right]^2},\ \dots,\ \frac{1}{\left[ \alpha v_{n,j}(k) \right]^2} \right)$$
$$D_{2,j}(k) = \operatorname{diag}\left( \frac{1}{\alpha^2 v_{1,j}^3(k)},\ \frac{1}{\alpha^2 v_{2,j}^3(k)},\ \dots,\ \frac{1}{\alpha^2 v_{n,j}^3(k)} \right)$$
$$\mathbf{w}_j(k) = \left[ w_{1,j}(k) \;\; w_{2,j}(k) \;\; \cdots \;\; w_{m,j}(k) \right]^T$$
$$\dot{G}(\mathbf{s}(k)) = \operatorname{diag}\left( \dot{g}(s_1(k)),\ \dot{g}(s_2(k)),\ \dots,\ \dot{g}(s_m(k)) \right)$$
$$\mathbf{e}(k) = \left[ y_1(k) - a_1(k) \;\; y_2(k) - a_2(k) \;\; \cdots \;\; y_m(k) - a_m(k) \right]^T$$
Note that the algorithm described above is the stochastic steepest descent backpropagation which involves incremental training. However, we perform batch training in which the complete gradient is computed, after all inputs are applied to the network, before the centers and deviations are updated. To implement the batch training, in each iteration the individual gradients for all the inputs in the training set are averaged to get the total gradient, and the update equations become
$$c_{i,j}(k+1) = c_{i,j}(k) - \frac{\eta}{N} \sum_{p=1}^{N} \frac{\partial \hat{E}_p(k)}{\partial c_{i,j}}$$
$$v_{i,j}(k+1) = v_{i,j}(k) - \frac{\eta}{N} \sum_{p=1}^{N} \frac{\partial \hat{E}_p(k)}{\partial v_{i,j}}$$
for $1 \le i \le n$, $1 \le j \le J$, where $\hat{E}_p(k)$ is the mean squared error $\hat{E}(k)$ induced by pattern $p$, $(\mathbf{x}^{(p)}, \mathbf{y}^{(p)})$. In matrix form, the batch training becomes:
$$\mathbf{c}_j(k+1) = \mathbf{c}_j(k) + \frac{4\eta}{N} \sum_{p=1}^{N} \left\{ h_j(p)\, D_{1,j}(p)\, \mathbf{q}_{1,j}(p)\, \mathbf{w}_j(p)^T \dot{G}(\mathbf{s}(p))\, \mathbf{e}(p) \right\}$$
$$\mathbf{v}_j(k+1) = \mathbf{v}_j(k) + \frac{4\eta}{N} \sum_{p=1}^{N} \left\{ h_j(p)\, D_{2,j}(p)\, \mathbf{q}_{2,j}(p)\, \mathbf{w}_j(p)^T \dot{G}(\mathbf{s}(p))\, \mathbf{e}(p) \right\}$$
for $1 \le j \le J$.
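Assuming the network quantities are stored as numpy arrays (shapes and the function name are our own), a minimal sketch of one batch update of the centers and deviations, following the update equations above, is:

```python
import numpy as np

def batch_update_centers_deviations(X, Y, C, V, W, b, alpha=1.0, beta=1.0, eta=0.1):
    """One batch steepest-descent update of the centers C and deviations V.

    X: (N, n) inputs; Y: (N, m) +1/-1 targets
    C, V: (J, n) centers and deviations; W: (m, J) weights; b: (m,) biases
    Returns updated copies of C and V.
    """
    N = X.shape[0]
    diff = X[:, None, :] - C[None, :, :]                                   # x(p) - c_j, shape (N, J, n)
    H = np.exp(-np.sum((diff / (alpha * V[None, :, :])) ** 2, axis=2))     # h_j(p), shape (N, J)
    A = np.tanh(beta * (H @ W.T + b))                                      # a_l(p), shape (N, m)
    E = Y - A                                                              # errors e(p)
    Gdot = beta * (1.0 - A ** 2)                                           # g_dot(s_l(p))
    T = (Gdot * E) @ W                                 # sum_l [y_l - a_l] g_dot(s_l) w_{l,j}, shape (N, J)
    grad_c = (H * T)[:, :, None] * diff / (alpha * V[None, :, :]) ** 2
    grad_v = (H * T)[:, :, None] * diff ** 2 / (alpha ** 2 * V[None, :, :] ** 3)
    C_new = C + (4.0 * eta / N) * grad_c.sum(axis=0)
    V_new = V + (4.0 * eta / N) * grad_v.sum(axis=0)
    return C_new, V_new
```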

5.2. Weights and Biases

Next, we describe how to refine the weights and biases associated with the output layer in each iteration. For pattern $i$, $1 \le i \le N$, by Equation (13), we want
$$\begin{aligned} y_1^{(i)} &= w_{1,1}(k+1)\, h_1^{(i)}(k+1) + \cdots + w_{1,J}(k+1)\, h_J^{(i)}(k+1) + b_1(k+1) \\ y_2^{(i)} &= w_{2,1}(k+1)\, h_1^{(i)}(k+1) + \cdots + w_{2,J}(k+1)\, h_J^{(i)}(k+1) + b_2(k+1) \\ &\;\;\vdots \\ y_m^{(i)} &= w_{m,1}(k+1)\, h_1^{(i)}(k+1) + \cdots + w_{m,J}(k+1)\, h_J^{(i)}(k+1) + b_m(k+1) \end{aligned}$$
where
$$h_j^{(i)}(k+1) = \prod_{r=1}^{n} e^{-\left( \frac{x_r^{(i)} - c_{r,j}(k+1)}{\alpha v_{r,j}(k+1)} \right)^2}$$
for $1 \le j \le J$ and $1 \le i \le N$. Note that $c_{r,j}(k+1)$ and $v_{r,j}(k+1)$ are the newly updated values obtained in the first part of the current iteration. Let
$$A(k+1) = \begin{bmatrix} h_1^{(1)}(k+1) & h_1^{(2)}(k+1) & \cdots & h_1^{(N)}(k+1) \\ h_2^{(1)}(k+1) & h_2^{(2)}(k+1) & \cdots & h_2^{(N)}(k+1) \\ \vdots & \vdots & & \vdots \\ h_J^{(1)}(k+1) & h_J^{(2)}(k+1) & \cdots & h_J^{(N)}(k+1) \\ 1 & 1 & \cdots & 1 \end{bmatrix}^T$$
and
$$W(k+1) = \begin{bmatrix} w_{1,1}(k+1) & w_{1,2}(k+1) & \cdots & w_{1,J}(k+1) & b_1(k+1) \\ w_{2,1}(k+1) & w_{2,2}(k+1) & \cdots & w_{2,J}(k+1) & b_2(k+1) \\ \vdots & \vdots & & \vdots & \vdots \\ w_{m,1}(k+1) & w_{m,2}(k+1) & \cdots & w_{m,J}(k+1) & b_m(k+1) \end{bmatrix}^T$$
and Y be the same matrix as shown in Equation (18). As before, we’d like to minimize
$$\| A(k+1)\, W(k+1) - Y \|^2 + \lambda \| W(k+1) \|^2$$
The solution is $\left\{ A^T(k+1) A(k+1) + \lambda I \right\}^{-1} A^T(k+1)\, Y$, which provides the new values of the weights and biases, i.e., $w_{f,j}(k+1)$ and $b_f(k+1)$, $1 \le j \le J$, $1 \le f \le m$.
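Putting Sections 5.1 and 5.2 together, the hybrid refinement loop can be sketched as follows; this usage example reuses the `solve_output_weights` and `batch_update_centers_deviations` sketches given earlier, and the iteration limit and convergence test are our choices, not the paper's:

```python
import numpy as np

def refine_network(X, Y, C, V, alpha=1.0, beta=1.0, eta=0.1, lam=0.1, max_iter=100, tol=1e-6):
    """Alternate steepest-descent updates of (C, V) with least-squares updates of (W, b)."""
    def hidden(C, V):
        Z = (X[:, None, :] - C[None, :, :]) / (alpha * V[None, :, :])
        return np.exp(-np.sum(Z ** 2, axis=2))               # (N, J)

    W_hat = solve_output_weights(hidden(C, V), Y, lam)       # initial weights and biases
    prev_err = np.inf
    for _ in range(max_iter):
        W, b = W_hat[:-1].T, W_hat[-1]                       # (m, J) weights and (m,) biases
        C, V = batch_update_centers_deviations(X, Y, C, V, W, b, alpha, beta, eta)
        H = hidden(C, V)
        W_hat = solve_output_weights(H, Y, lam)              # re-solve with the refined C, V
        err = np.sum((Y - np.hstack([H, np.ones((len(X), 1))]) @ W_hat) ** 2)
        if abs(prev_err - err) < tol:                        # simple convergence test
            break
        prev_err = err
    return C, V, W_hat
```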

6. Experimental Results

In this section, experimental results are presented to show the effectiveness of the proposed approach. Comparison results with other classification methods are also presented. For convenience, our approach is abbreviated as RBF-ISCC (RBF with Iterative Self-Constructing Clustering). We used a computer with an Intel(R) Core(TM) i7 CPU and 16 GB of RAM to conduct the experiments.
A five-fold cross validation was adopted to measure the performance of a classifier in the following experiments. Each dataset was randomly divided into five disjoint subsets, in such a way that the data of each category were evenly distributed among the five folds. Then, a classifier performed five runs for each dataset. In each run, four of the five subsets were used for training in the training phase and the remaining subset was used for testing in the testing phase. The results of the five runs were then averaged to indicate the performance of the classifier on the dataset. Note that the data for training were different from the data for testing in each run. The pre-defined constants, such as α of our method, were determined in the training phase and were not changed in the testing phase.
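A minimal sketch of this evaluation protocol for the single-label case, using scikit-learn's StratifiedKFold (the classifier object is a placeholder with fit/predict methods):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def five_fold_accuracy(clf, X, y, seed=0):
    """Average test accuracy of `clf` over a stratified five-fold split.

    X: (N, n) features; y: (N,) single-label class indices.
    """
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])                                   # training phase
        scores.append(np.mean(clf.predict(X[test_idx]) == y[test_idx]))       # testing phase
    return float(np.mean(scores))
```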

6.1. Experiment 1—Single-Label Data Sets

In this experiment, we would like to show the performance of single-label classification of different methods. Average accuracy (AACC), macro-averaged recall (MAREC), macro-averaged precision (MAPRE), and macro-averaged F-measure (MAFM) are the metrics adopted for performance evaluation [57]. For category $i$, $1 \le i \le m$, let $\mathrm{TP}_i$, $\mathrm{FP}_i$, $\mathrm{FN}_i$, and $\mathrm{TN}_i$ be the number of true positives, false positives, false negatives, and true negatives, respectively. AACC, MAREC, MAPRE, and MAFM are defined as
$$\mathrm{AACC} = \frac{1}{m} \sum_{i=1}^{m} \frac{\mathrm{TP}_i + \mathrm{TN}_i}{\mathrm{TP}_i + \mathrm{FP}_i + \mathrm{FN}_i + \mathrm{TN}_i}, \qquad \mathrm{MAREC} = \frac{1}{m} \sum_{i=1}^{m} \frac{\mathrm{TP}_i}{\mathrm{TP}_i + \mathrm{FN}_i},$$
$$\mathrm{MAPRE} = \frac{1}{m} \sum_{i=1}^{m} \frac{\mathrm{TP}_i}{\mathrm{TP}_i + \mathrm{FP}_i}, \qquad \mathrm{MAFM} = \frac{2 \times \mathrm{MAREC} \times \mathrm{MAPRE}}{\mathrm{MAREC} + \mathrm{MAPRE}}$$
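A small numpy sketch of these macro-averaged metrics, computed from +1/−1 target and prediction matrices (the function name and the guards against empty categories are ours):

```python
import numpy as np

def macro_metrics(Y_true, Y_pred):
    """AACC, MAREC, MAPRE, and MAFM from +1/-1 matrices of shape (N, m)."""
    tp = np.sum((Y_true == +1) & (Y_pred == +1), axis=0).astype(float)
    fp = np.sum((Y_true == -1) & (Y_pred == +1), axis=0).astype(float)
    fn = np.sum((Y_true == +1) & (Y_pred == -1), axis=0).astype(float)
    tn = np.sum((Y_true == -1) & (Y_pred == -1), axis=0).astype(float)
    aacc = np.mean((tp + tn) / (tp + fp + fn + tn))
    marec = np.mean(tp / np.maximum(tp + fn, 1e-12))
    mapre = np.mean(tp / np.maximum(tp + fp, 1e-12))
    mafm = 2 * marec * mapre / max(marec + mapre, 1e-12)
    return aacc, marec, mapre, mafm
```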
Clearly, higher values of the above metrics indicate a better classification. Several benchmark data sets are used in this experiment. Table 1 lists the characteristics of these data sets. For example, the dataset “Liver Disorders” contains 345 patterns, and each pattern has 7 features. There are two categories involved in the dataset. Note that cardinality denotes the average number of categories each pattern belongs to. Since each pattern belongs to one and only one category, the cardinality of the dataset is 1.000. The data sets of Table 1 are taken from the repository of the machine learning database at the University of California at Irvine (UCI) [58].
Table 2 and Table 3 show the testing AACC and MAFM, respectively, obtained by different methods, including LinearSVC [59], SVC-rbf (Support Vector Classifier with RBF kernel), SVM [11], NN (Neural Network) [60], RBF [22], and RBF-ISCC for the data sets in Table 1. SVC-rbf, support vector classification with RBF kernel, is implemented based on libsvm [61] and uses the one-vs.-rest scheme for classification. LinearSVC, linear support vector classification, is similar to SVC with a linear kernel, but implemented in terms of liblinear [59] rather than libsvm. We also used the one-vs.-rest scheme for classification for LinearSVC. SVM is a support vector machine based on the MATLAB toolbox (http://www.mathworks.com). NN is a three-layer neural network, using the hyperbolic tangent sigmoid transfer function (tanh) in the hidden layer. NN optimizes the log-loss function using LBFGS (Limited-memory BFGS) [60], which is a BFGS (Broyden–Fletcher–Goldfarb–Shanno) based optimizer in the family of quasi-Newton methods. The codes of LinearSVC, SVC-rbf, and NN are downloaded from the scikit-learn library (http://scikit-learn.org) [62,63]. RBF is a radial basis function network downloaded from the website http://neupy.com/apidocs. In the tables, the best performance for each data set is boldfaced. The average metric values obtained by each method are shown in the bottom line of Table 2 and Table 3. Note that RBF-ISCC performs best, having the highest average AACC, 87.72%, and the highest average MAFM, 77.96%. In particular, RBF-ISCC gets the highest AACC and MAFM for 8 and 5, respectively, out of the 10 data sets. Table 4 and Table 5 show the MAREC and MAPRE values, respectively, obtained by different methods for all the data sets.
For these data sets, we set η = 0.1 in RBF-ISCC. The other constants are shown in Table 6. As mentioned earlier, their values are determined in the training phase and are not changed in the testing phase.

6.2. Experiment 2—Multi-Label Data Sets

In this experiment, we show the performance of different methods on multi-label classification datasets. Exact match ratio (EMR), labeling F-measure (LFM), and Hamming loss (HL) are adopted for performance evaluation [57]. Let Nt be the number of patterns. EMR, LFM, and HL are defined as
$$\mathrm{EMR} = \frac{1}{N_t} \sum_{i=1}^{N_t} I\left( \mathbf{y}^{(i)}, \hat{\mathbf{y}}^{(i)} \right), \qquad \mathrm{LFM} = \frac{1}{N_t} \sum_{i=1}^{N_t} \frac{2 \sum_{h=1}^{m} \left( \frac{1 + y_h^{(i)}}{2} \right) \left( \frac{1 + \hat{y}_h^{(i)}}{2} \right)}{\sum_{h=1}^{m} \left[ \frac{1 + y_h^{(i)}}{2} + \frac{1 + \hat{y}_h^{(i)}}{2} \right]}, \qquad \mathrm{HL} = \frac{1}{N_t} \sum_{i=1}^{N_t} \frac{H_d\left( \mathbf{y}^{(i)}, \hat{\mathbf{y}}^{(i)} \right)}{m}$$
where $I(\mathbf{a}, \mathbf{b}) = 1$ if $\mathbf{a}$ is identical to $\mathbf{b}$ and $I(\mathbf{a}, \mathbf{b}) = 0$ otherwise, and $H_d(\mathbf{a}, \mathbf{b})$ is the Hamming distance between $\mathbf{a}$ and $\mathbf{b}$. Note that higher values in EMR and LFM, and a lower value in HL, indicate a better classification. Several benchmark data sets, taken from the MULAN (Multi-Label Learning) library [30], are used in this experiment. The characteristics of these data sets are shown in Table 7. A pattern may belong to more than one category, so the cardinality of a data set can be greater than 1. For instance, the cardinality of “Yeast-ML” is 4.237, indicating that each pattern belongs to 4.237 categories on average.
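A corresponding numpy sketch of EMR, LFM, and HL for +1/−1 label matrices (the function name is ours):

```python
import numpy as np

def multilabel_metrics(Y_true, Y_pred):
    """Exact match ratio, labeling F-measure, and Hamming loss for (Nt, m) +1/-1 matrices."""
    Nt, m = Y_true.shape
    emr = np.mean(np.all(Y_true == Y_pred, axis=1))
    T = (1 + Y_true) / 2                      # 0/1 indicator of true labels
    P = (1 + Y_pred) / 2                      # 0/1 indicator of predicted labels
    lfm = np.mean(2 * np.sum(T * P, axis=1) / np.maximum(np.sum(T + P, axis=1), 1e-12))
    hl = np.mean(np.sum(Y_true != Y_pred, axis=1) / m)     # Hamming distance per pattern, divided by m
    return emr, lfm, hl
```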
Table 8, Table 9 and Table 10 show the testing EMR, LFM, and HL obtained by different methods, including ML-KNN (K-nearest neighbors classifier for multi-label) [49], MLPC (Multi-layer Perceptron classifier) [64], ML-SVC (Support vector classifier for multi-label), ML-RBF (RBF network for multi-label) [50], and RBF-ISCC, for the data sets in Table 7. The codes of ML-KNN, MLPC, and ML-SVC are downloaded from the scikit-learn library, while the code of ML-RBF is downloaded from the public domain: http://cse.seu.edu.cn/people/zhangml/resources.htm. For these data sets, we set η = 0.1 in RBF-ISCC, except η = 1.2 for “Scene”. The other constants are shown in Table 11. Note that their values are determined in the training phase and are not changed in the testing phase.
The average metric values obtained by each method are shown in the bottom line of Table 8, Table 9 and Table 10. From these tables, we can see that RBF-ISCC performs the best in average EMR and average HL for these data sets. Table 12 ranks the different methods based on the average results in Table 8, Table 9 and Table 10. Clearly, RBF-ISCC is ranked first.

6.3. Discussions

Our proposed approach includes two phases, network setup and parameter refinement. In the network setup phase, the iterative self-constructing clustering algorithm is used to produce $J$ clusters. Assume that $P_1$ iterations are performed. In each iteration, each $n$-dimensional instance is compared against each of the $J$ existing clusters. Therefore, the complexity of this phase is $O(nNJP_1)$. In the parameter refinement phase, a hybrid learning, integrating the steepest descent and least squares methods, is applied to refine the parameters associated with the network. Let $P_2$ be the number of iterations performed in this phase. The complexity of the least squares method is $O(NJ^2)$, while the complexity of steepest descent is $O(nNJ)$. Therefore, the complexity of the parameter refinement phase is $O(nNJP_2 + NJ^2P_2)$. As a result, the total complexity of our approach is $O(nNJP_1 + nNJP_2 + NJ^2P_2)$. We feel that it may not be appropriate to compare the performance from the speed viewpoint among different methods. The programs of the other existing methods are taken from different websites. They might be implemented in different languages or with different optimizing mechanisms. For support-vector based classifiers, e.g., LinearSVC and SVC-rbf, the quadratic programming solver takes a complexity between $O(nN^2)$ and $O(nN^3)$ [11,59]. For NN with $k$ hidden layers each having $h$ neurons, the time complexity of backpropagation is $O(nNmhkP)$, where $P$ is the number of iterations [60]. For KNN, the brute-force implementation takes a complexity of $O(nN^2)$. The computational time of each method is usually of the order of seconds for data sets of small or moderate size.
High-dimensional datasets may cause data sparsity and computational intractability, leading to low forecasting accuracy of classification algorithms. Two strategies, correlation analysis and dimensionality reduction, can be employed to reduce the dimensionality of a dataset. By correlation analysis, the correlation, e.g., Pearson product-moment correlation coefficient [65], for each pair of variables, is calculated. Variables are sorted out according to their relevance to the target to be predicted. Four levels of correlation, uncorrelated, weakly correlated, moderately correlated, and strongly correlated, are defined [66]. It was shown in [67] that by ignoring weakly correlated or uncorrelated variables, forecasting accuracy can be improved. Principal component analysis (PCA) [68] is a popular method for dimensionality reduction. PCA reduces the dimensionality of a dataset while retaining the variation present in the dataset up to the maximum extent. This is done by transforming the original variables to a new set of variables known as the principal components. Dimensionality reduction is done by choosing a small number of principal components. Some strategies were proposed to determine an appropriate number of principal components. One common strategy chooses g principal components so that g is as small as possible and the cumulative energy is above a certain threshold.
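As an illustration of the dimensionality-reduction option, the sketch below uses scikit-learn's PCA and keeps the smallest number of principal components whose cumulative explained variance (the "cumulative energy") exceeds a threshold; the 95% threshold and the function name are our choices:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_dimensionality(X, energy_threshold=0.95):
    """Project X (N, n) onto the fewest principal components reaching the energy threshold."""
    pca = PCA().fit(X)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    g = int(np.searchsorted(cumulative, energy_threshold) + 1)    # smallest g whose cumulative energy >= threshold
    return PCA(n_components=g).fit_transform(X), g
```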
To address the scalability concern about N, two methods—sampling [69] and clustering [70]—can be adopted. We may take a random sample, with size Ns, from the N instances, such that Ns is smaller than N. We may apply clustering to reduce the training set size. The N instances are grouped into clusters and the obtained clusters are used as the representatives for training the model. Another approach through exploiting the parallel or distributed facility of the computing system has long been adopted to address the scalability problem [71]. The N instances are divided and dispatched to different CPUs or computing nodes which can run simultaneously to improve the training efficiency of the classification task.

7. Concluding Remarks

We have presented a novel approach of constructing RBF networks for solving supervised classification problems. An iterative self-constructing clustering algorithm is used to determine the number of nodes in the hidden layer. Basis functions are formed, and their centers and deviations are initialized to be the centers and deviations of the corresponding clusters. Then, the parameters of the network are refined with a hybrid learning strategy, involving the steepest descent backpropagation and least squares method. Hyperbolic tangent sigmoid functions and Tikhonov regularization are employed. As a result, optimized RBF networks are obtained. The proposed approach is applicable to construct RBF networks for solving both single-label and multi-label pattern classification problems. Experimental results have shown that the proposed approach can be used to solve classification tasks effectively.
Note that all the dimensions of a training pattern are treated to be equally important, i.e., no different weights are involved. It might be beneficial to incorporate a kind of weighting mechanism which suggests different weights in different dimensions [72]. Furthermore, the clustering algorithm may encounter difficulties finding useful clusters in high-dimensional data sets. Dimensionality reduction [65,68,69,70] may need to be applied in this case. We will take these as our future work.

Author Contributions

Data curation, Z.-R.H., Y.-T.L., C.-Y.W., and Y.-J.Y.; Formal analysis, S.-J.L.; Funding acquisition, S.-J.L.; Methodology, Z.-R.H. and S.-J.L.; Project administration, S.-J.L.; Resources, Z.-R.H., Y.-T.L., C.-Y.W., and Y.-J.Y.; Software, Z.-R.H., Y.-T.L., C.-Y.W., and Y.-J.Y.; Validation, Z.-R.H. and S.-J.L.; Writing—original draft, Z.-R.H.; Writing—review & editing, S.-J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by grants MOST-108-2221-E-110-046-MY2 and MOST-107-2622-E-110-008-CC3, Ministry of Science and Technology, and by the “Intelligent Electronic Commerce Research Center” from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.

Acknowledgments

The anonymous reviewers are highly appreciated for their comments which were very helpful in improving the quality and presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ozay, M.; Esnaola, I.; Vural, F.T.Y.; Kulkarni, S.R.; Poor, H.V. Machine learning methods for attack detection in the smart grid. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1773–1786. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Sutar, R.G.; Kothari, A.G. Intelligent electrocardiogram pattern classification and recognition using low-cost cardio-care system. IET Sci. Meas. Technol. 2015, 9, 134–143. [Google Scholar] [CrossRef]
  3. Leite, J.P.R.R.; Moreno, R.L. Heartbeat classification with low computational cost using Hjorth parameters. IET Signal Process. 2018, 12, 431–438. [Google Scholar] [CrossRef]
  4. Hajek, P.; Henriques, R. Mining corporate annual reports for intelligent detection of financial statement fraud—A comparative study of machine learning methods. Knowl. Based Syst. 2017, 128, 139–152. [Google Scholar] [CrossRef]
  5. Parisi, L.; RaviChandran, N.; Manaog, M.L. Decision support system to improve postoperative discharge: A novel multi-class classification approach. Knowl. Based Syst. 2018, 152, 1–10. [Google Scholar] [CrossRef]
  6. Haykin, S. Neural Networks: A Comprehensive Foundation, 1st ed.; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1994. [Google Scholar]
  7. Quinlan, J.R. Simplifying decision trees. Int. J. Hum. Comput. Stud. 1999, 51, 497–510. [Google Scholar] [CrossRef] [Green Version]
  8. Ignatov, D.Y.; Ignatov, A.D. Decision stream: Cultivating deep decision trees. In Proceedings of the IEEE 29th International Conference on Tools for Artificial Intelligence, Boston, MA, USA, 6–8 November 2017; pp. 905–912. [Google Scholar]
  9. Aha, D.W. Lazy Learning; Springer: Dordrecht, The Netherlands, 1997. [Google Scholar]
  10. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson Education Limited: Kuala Lumpur, Malaysia, 2016. [Google Scholar]
  11. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  12. Suykens, J.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  13. Schmidt, W.F.; Kraaijveld, M.A.; Duin, R.P.W. Feedforward neural networks with random weights. In Proceedings of the 11th IAPR International Conference on Pattern Recognition, Conference B: Pattern Recognition Methodology and Systems, Hague, The Netherlands, 30 August–3 September 1992; Volume II, pp. 1–4. [Google Scholar]
  14. Pao, Y.; Park, G.H.; Sobajic, D.J. Learning and generalization characteristics of random vector functional-link net. Neurocomputing 1994, 6, 163–180. [Google Scholar] [CrossRef]
  15. Karabadji, N.E.I.; Seridi, H.; Bousetouane, F.; Dhifli, W.; Aridhi, S. An evolutionary scheme for decision tree construction. Knowl. Based Syst. 2017, 119, 166–177. [Google Scholar] [CrossRef]
  16. Liu, Z.G.; Pan, Q.; Mercier, G.; Dezert, J. A new incomplete pattern classification method based on evidential reasoning. IEEE Trans. Cybern. 2015, 45, 635–646. [Google Scholar] [CrossRef] [PubMed]
  17. Dadaneh, S.Z.; Dougherty, E.R.; Qian, X. Optimal Bayesian classification with missing values. IEEE Trans. Signal Process. 2018, 66, 4182–4192. [Google Scholar] [CrossRef]
  18. Liu, Z.; Zhuo, C.; Xu, X. Efficient segmentation method using quantised and non-linear CeNN for breast tumour classification. Electron. Lett. 2018, 54, 737–738. [Google Scholar] [CrossRef]
  19. Donglikar, N.V.; Waghmare, J.M. An enhanced general fuzzy min-max neural network for classification. In Proceedings of the 2017 International Conference on Intelligent Computing and Control Systems (ICICCS), Melur, India, 15–16 June 2017; pp. 757–764. [Google Scholar]
  20. Acharya, U.R.; Fujita, H.; Lih, O.S.; Adam, M.; Tan, J.H.; Chua, C.K. Automated detection of coronary artery disease using different durations of ECG segments with convolutional neural network. Knowl. Based Syst. 2017, 132, 62–71. [Google Scholar] [CrossRef]
  21. Wang, C.-R.; Xu, R.-F.; Lee, S.-J.; Lee, C.-H. Network intrusion detection using equality constrained-optimization-based extreme learning machines. Knowl. Based Syst. 2018, 147, 68–80. [Google Scholar] [CrossRef]
  22. Orr, M.J.L. Introduction to Radial Basis Function Networks. Ph.D. Thesis, University of Edinburgh, Edinburgh, UK, 1996. [Google Scholar]
  23. Mak, M.-W.; Kung, S.-Y. Estimation of elliptical basis function parameters by the EM algorithm with application to speaker verification. IEEE Trans. Neural Netw. 2000, 11, 961–969. [Google Scholar]
  24. Yang, X.; Li, Y.; Sun, Y.; Long, T.; Sarkar, T.K. Fast and robust RBF neural network based on global k-means clustering with adaptive selection radius for sound source angle estimation. IEEE Trans. Antennas Propag. 2018, 66, 3097–3107. [Google Scholar] [CrossRef]
  25. Zhou, Q.; Wang, Y.; Jiang, P.; Shao, X.; Choi, S.-K.; Hu, J.; Cao, L.; Meng, X. An active learning radial basis function modeling method based on self-organization maps for simulation-based design problems. Knowl. Based Syst. 2017, 131, 10–27. [Google Scholar] [CrossRef]
  26. Peng, H.-W.; Lee, S.-J.; Lee, C.-H. An oblique elliptical basis function network approach for supervised learning applications. Appl. Soft Comput. 2017, 60, 552–563. [Google Scholar] [CrossRef]
  27. You, Y.-J.; Wu, C.-Y.; Lee, S.-J.; Liu, C.-K. Intelligent neural network schemes for multi-class classification. Appl. Sci. 2019, 9, 4036. [Google Scholar] [CrossRef] [Green Version]
  28. Lapin, M.; Hein, M.; Schiele, B. Analysis and optimization of loss functions for multiclass, top-k, and multilabel classification. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1533–1554. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Madjarov, G.; Kocev, D.; Gjorgjevikj, D.; Džeroski, S. An extensive experimental comparison of methods for multi-label learning. Pattern Recognit. 2012, 45, 3084–3104. [Google Scholar] [CrossRef]
  30. Tsoumakas, G.; Spyromitros-Xioufis, E.; Vilcek, J.; Vlahavas, I. MULAN: A Java library for multi-label learning. J. Mach. Learn. Res. 2011, 12, 2411–2414. [Google Scholar]
  31. Lo, H.Y.; Lin, S.D.; Wang, H.M. Generalized k-labelsets ensemble for multi-label and cost-sensitive classification. IEEE Trans. Knowl. Data Eng. 2014, 26, 1679–1691. [Google Scholar]
  32. Zhang, M.L.; Zhou, Z.H. A review on multi-label learning algorithms. IEEE Trans. Knowl. Data Eng. 2014, 26, 1819–1837. [Google Scholar] [CrossRef]
  33. Zhang, P.; Zhou, X.; Pelliccione, P.; Leung, H. RBF-MLMR: A multi-label metamorphic relation prediction approach using RBF neural network. IEEE Access 2017, 5, 21791–21805. [Google Scholar] [CrossRef]
  34. Broomhead, D.H.; Lowe, D. Multivariable functional interpolation and adaptive networks. Complex Syst. 1988, 2, 321–355. [Google Scholar]
  35. Hagan, M.T.; Demuth, H.B.; Beale, M.H.; De Jesus, O. Neural Network Design, 2nd ed.; Martin Hagan: Notre Dame, IN, USA, 2012. [Google Scholar]
  36. He, Z.-R.; Lin, Y.-T.; Lee, S.-J.; Wu, C.-H. A RBF network approach for function approximation. In Proceedings of the IEEE International Conference on Information and Automation, Wuyishan, China, 11–13 August 2018. [Google Scholar]
  37. Tu, C.-S.; Wu, D.-Y.; Lee, S.-J.; Wu, C.-H. Regression estimation by radial basis function networks with self-constructing clustering. In Proceedings of the 5th International Conference on Systems and Informatics, Nanjing, China, 10–12 November 2018. [Google Scholar]
  38. He, T.-J. A RBF Network Approach to Pattern Classification. Master’s Thesis, National Sun Yat-Sen University, Kaohsiung, Taiwan, 2018. [Google Scholar]
  39. Tang, B.; He, H. ENN: Extended nearest neighbor method for pattern recognition. IEEE Comput. Intell. Mag. 2015, 10, 52–60. [Google Scholar] [CrossRef]
  40. Feng, J.; Wei, Y.; Zhu, Q. Natural neighborhood-based classification algorithm without parameter k. Big Data Min. Anal. 2018, 1, 257–265. [Google Scholar]
  41. Wen, H.; Fan, H.; Xie, W.; Pei, J. Hybrid structure-adaptive RBF-ELM network classifier. IEEE Access 2017, 5, 16539–16554. [Google Scholar] [CrossRef]
  42. Titsias, M.K.; Likas, A.C. Shared kernel models for class conditional density estimation. IEEE Trans. Neural Netw. 2001, 12, 987–997. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Read, J.; Pfahringer, B.; Holmes, G.; Frank, E. Classifier chains for multi-label classification. Mach. Learn. 2011, 85, 333. [Google Scholar] [CrossRef] [Green Version]
  44. Spolaor, N.; Cherman, E.A.; Monard, M.C.; Lee, H.D. A comparison of multi-label feature selection methods using the problem transformation approach. Electron. Notes Theor. Comput. Sci. 2013, 292, 135–151. [Google Scholar] [CrossRef] [Green Version]
  45. Tsoumakas, G.; Vlahavas, I. Random k-labelsets: An ensemble method for multilabel classification. Lect. Notes Artif. Intell. 2007, 4701, 406–417. [Google Scholar]
  46. Classification-Visualizers. Discrimination Threshold—Yellowbrick 0.9 Documentation. Available online: http://www.scikit-yb.org (accessed on 29 May 2020).
  47. Li, H.; Guo, Y.J.; Wu, M.; Li, P.; Xiang, Y. Combine multi-valued attribute decomposition with multi-label learning. Expert Syst. Appl. 2010, 37, 8721–8728. [Google Scholar] [CrossRef]
  48. Zhang, M.-L.; Zhou, Z.-H. Multilabel neural networks with applications to functional genomics and text categorization. IEEE Trans. Knowl. Data Eng. 2006, 18, 1338–1351. [Google Scholar] [CrossRef] [Green Version]
  49. Zhang, M.-L.; Zhou, Z.-H. ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognit. 2007, 40, 2038–2048. [Google Scholar] [CrossRef] [Green Version]
  50. Zhang, M.-L. ML-RBF: RBF neural networks for multi-label learning. Neural Process. Lett. 2009, 29, 61–74. [Google Scholar] [CrossRef]
  51. Lee, S.J.; Jiang, J.Y. Multilabel text categorization based on fuzzy relevance clustering. IEEE Trans. Fuzzy Syst. 2014, 22, 1457–1471. [Google Scholar] [CrossRef]
  52. Kurata, G.; Xiang, B.; Zhou, B. Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA, 12–17 June 2016; pp. 521–526. [Google Scholar]
  53. Wang, Z.-Y. Some Variants of Self-Constructing Clustering. Master’s Thesis, National Sun Yat-Sen University, Kaohsiung, Taiwan, 2017. [Google Scholar]
  54. Tikhonov, A.N.; Goncharsky, A.V.; Stepanov, V.V.; Yagola, A.G. Numerical Methods for the Solution of Ill-Posed Problems; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1995. [Google Scholar]
  55. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2012; Volume 3. [Google Scholar]
  56. Widrow, B.; Winter, R. Neural nets for adaptive filtering and adaptive pattern recognition. IEEE Comput. Mag. 1988, 21, 25–39. [Google Scholar] [CrossRef]
  57. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar] [CrossRef]
  58. Asuncion, A.; Newman, D. UCI Machine Learning Repository. 2007. Available online: https://archive.ics.uci.edu/ml/ (accessed on 20 June 2018).
  59. Fan, R.-E.; Chang, K.-W.; Hsieh, C.-J.; Wang, X.-R.; Lin, C.-J. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res. 2008, 9, 1871–1874. [Google Scholar]
  60. Nawi, N.M.; Ransing, M.R.; Ransing, R.S. An improved learning algorithm based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method for back propagation neural networks. In Proceedings of the 6th International Conference on Intelligent Systems Design and Applications, Jinan, China, 16–18 October 2006; Volume 1, pp. 152–157. [Google Scholar]
  61. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  62. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  63. Buitinck, L.; Louppe, G.; Blondel, M.; Pedregosa, F.; Mueller, A.; Grisel, O.; Niculae, V.; Prettenhofer, P.; Gramfort, A.; Grobler, J.; et al. API design for machine learning software: Experiences from the scikit-learn project. In Proceedings of the ECML PKDD Workshop: Languages for Data Mining and Machine Learning, Dublin, Ireland, 10–14 September 2013; pp. 108–122. [Google Scholar]
  64. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the 13th international conference on artificial intelligence and statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  65. Rodgers, J.L.; Nicewander, W.A. Thirteen ways to look at the correlation coefficient. Am. Stat. 1988, 42, 59–66. [Google Scholar] [CrossRef]
  66. RMCAB. Bogotá Air Quality Monitoring Network, Website of Environmental Information. 2015. Available online: http://201.245.192.252:81/ (accessed on 10 July 2020).
  67. Li, L.; Wu, J.; Hudda, N.; Sioutas, C.; Fruin, S.A.; Delfino, R.J. Modeling the concentrations of on-road air pollutants in southern California. Environ. Sci. Technol. 2013, 47, 9291–9299. [Google Scholar] [CrossRef] [Green Version]
  68. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52. [Google Scholar] [CrossRef]
  69. Wrobel, S. Scalability, search, and sampling: From smart algorithms to active discovery. In Proceedings of the European Conference on Principles of Data Mining and Knowledge Discovery, PKDD 2001, Freiburg, Germany, 3–5 September 2001; Lecture Notes in Computer Science 2168. De Raedt, L., Siebes, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  70. Liu, C.-F.; Yeh, C.-Y.; Lee, S.-J. A novel prototype reduction approach for supervised learning. Int. J. Innov. Comput. Inf. Control 2012, 8, 3963–3980. [Google Scholar]
  71. Coulouris, G.; Dollimore, J.; Kindberg, T.; Blair, G. Distributed Systems: Concepts and Design, 5th ed.; Addison-Wesley: Boston, MA, USA, 2011. [Google Scholar]
  72. Huang, X.; Ye, Y.; Xiong, L.; Lau, R.; Jiang, N.; Wang, S. Time series k-means: A new k-means type smooth subspace clustering for time series data. Inf. Sci. 2016, 367, 1–13. [Google Scholar] [CrossRef]
Figure 1. Architecture of the proposed network model.
Table 1. Single-label data sets for experiments.
Data Set | # Features | # Categories | # Patterns | Cardinality
Liver Disorders | 7 | 2 | 345 | 1.000
Heart | 13 | 2 | 270 | 1.000
Ionosphere | 34 | 2 | 351 | 1.000
Sonar | 60 | 2 | 208 | 1.000
Iris | 4 | 3 | 150 | 1.000
Lung Cancer | 56 | 3 | 32 | 1.000
Soybean | 35 | 4 | 47 | 1.000
Glass | 9 | 6 | 214 | 1.000
Ecoli | 8 | 8 | 336 | 1.000
Yeast-SL | 8 | 10 | 1484 | 1.000
Table 2. Testing average accuracy (AACC) obtained for single-label data sets.
Data Set | LinearSVC | SVC-RBF | Support Vector Machines (SVM) | NN | Radial Basis Function (RBF) | RBF with Iterative Self-Constructing Clustering (RBF-ISCC)
Liver Disorders | 0.7246 | 0.6812 | 0.7101 | 0.6232 | 0.5507 | 0.7536
Heart | 0.7593 | 0.7593 | 0.7778 | 0.7407 | 0.7037 | 0.7778
Ionosphere | 0.8873 | 0.9014 | 0.9014 | 0.8873 | 0.8310 | 0.9296
Sonar | 0.8095 | 0.8571 | 0.8810 | 0.8810 | 0.8095 | 0.8333
Iris | 0.9111 | 0.9778 | 0.9778 | 0.9778 | 0.9778 | 0.9778
Lung Cancer | 0.5714 | 0.6190 | 0.5238 | 0.7143 | 0.4286 | 0.7143
Soybean | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Glass | 0.8837 | 0.8915 | 0.7907 | 0.8915 | 0.8915 | 0.8992
Ecoli | 0.9632 | 0.9596 | 0.9044 | 0.9485 | 0.9412 | 0.9614
Yeast-SL | 0.9101 | 0.9232 | 0.8532 | 0.9051 | 0.8956 | 0.9249
Ave-AACC | 0.8420 | 0.8570 | 0.8320 | 0.8569 | 0.8030 | 0.8772
Table 3. Testing macro-averaged F-measure (MAFM) obtained for single-label data sets.
Data Set | LinearSVC | SVC-rbf | SVM | NN | RBF | RBF-ISCC
Liver Disorders | 0.7189 | 0.6662 | 0.6983 | 0.6134 | 0.5599 | 0.7461
Heart | 0.7587 | 0.7587 | 0.7768 | 0.7373 | 0.6973 | 0.7776
Ionosphere | 0.8768 | 0.8920 | 0.8920 | 0.8750 | 0.8119 | 0.9266
Sonar | 0.8132 | 0.8623 | 0.8808 | 0.8808 | 0.8091 | 0.8337
Iris | 0.8421 | 0.9682 | 0.9682 | 0.9682 | 0.9682 | 0.9682
Lung Cancer | 0.4915 | 0.2000 | 0.1667 | 0.6377 | 0.1333 | 0.5000
Soybean | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Glass | 0.6266 | 0.6325 | 0.4332 | 0.6015 | 0.7213 | 0.6325
Ecoli | 0.8936 | 0.8679 | 0.5893 | 0.7079 | 0.7281 | 0.8663
Yeast-SL | 0.4251 | 0.6614 | 0.2427 | 0.5426 | 0.5111 | 0.5450
Ave-MAFM | 0.7447 | 0.7509 | 0.6648 | 0.7564 | 0.6940 | 0.7796
Table 4. Testing macro-averaged recall (MAREC) obtained for single-label data sets.
Data Set | LinearSVC | SVC-rbf | SVM | NN | RBF | RBF-ISCC
Liver Disorders | 0.7198 | 0.6586 | 0.6931 | 0.6134 | 0.5603 | 0.7448
Heart | 0.7417 | 0.7417 | 0.7625 | 0.7250 | 0.6917 | 0.7792
Ionosphere | 0.8491 | 0.8691 | 0.8691 | 0.8674 | 0.7783 | 0.9365
Sonar | 0.8048 | 0.8523 | 0.8795 | 0.8795 | 0.8091 | 0.8341
Iris | 0.8000 | 0.9667 | 0.9667 | 0.9667 | 0.9667 | 0.9667
Lung Cancer | 0.6111 | 0.3333 | 0.3333 | 0.6111 | 0.1667 | 0.5000
Soybean | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Glass | 0.6063 | 0.5913 | 0.3786 | 0.5730 | 0.7071 | 0.5611
Ecoli | 0.8548 | 0.8470 | 0.5881 | 0.7132 | 0.7000 | 0.8235
Yeast-SL | 0.3593 | 0.6280 | 0.2326 | 0.5692 | 0.5524 | 0.4280
Ave-MAREC | 0.7347 | 0.7488 | 0.6704 | 0.7519 | 0.6932 | 0.7574
Table 5. Testing macro-averaged precision (MAPRE) obtained for single-label data sets.
Data Set | LinearSVC | SVC-rbf | SVM | NN | RBF | RBF-ISCC
Liver Disorders | 0.7179 | 0.6739 | 0.7036 | 0.6134 | 0.5594 | 0.7474
Heart | 0.7766 | 0.7766 | 0.7917 | 0.7500 | 0.7029 | 0.7761
Ionosphere | 0.9064 | 0.9162 | 0.9162 | 0.8827 | 0.8486 | 0.9169
Sonar | 0.8221 | 0.8726 | 0.8822 | 0.8822 | 0.8091 | 0.8333
Iris | 0.8889 | 0.9697 | 0.9697 | 0.9697 | 0.9697 | 0.9697
Lung Cancer | 0.4111 | 0.1429 | 0.1111 | 0.6667 | 0.1111 | 0.5000
Soybean | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Glass | 0.6482 | 0.6799 | 0.5062 | 0.6333 | 0.7361 | 0.7250
Ecoli | 0.9144 | 0.8898 | 0.5905 | 0.7027 | 0.7586 | 0.9139
Yeast-SL | 0.5205 | 0.6989 | 0.2537 | 0.5183 | 0.4755 | 0.7499
Ave-MAPRE | 0.7606 | 0.7621 | 0.6725 | 0.7619 | 0.6971 | 0.8132
Table 6. Pre-defined constants for RBF-ISCC for single-label data sets.
Data Set | α | β | λ | Data Set | α | β | λ
Liver Disorders | 4 | 2 | 0 | Heart | 4 | 8 | 0.1
Ionosphere | 4 | 2 | 0 | Sonar | 4 | 8 | 0
Iris | 4 | 4 | 0 | Lung Cancer | 6 | 2 | 0
Soybean | 4 | 2 | 0 | Glass | 2 | 2 | 0.1
Ecoli | 4 | 16 | 0 | Yeast-SL | 4 | 16 | 0
Table 7. Multi-label data sets for experiments.
Data Set | # Features | # Categories | # Patterns | Cardinality
Birds | 260 | 19 | 645 | 1.014
Scene | 294 | 6 | 2407 | 1.074
Emotions | 72 | 6 | 593 | 1.869
Flags | 19 | 7 | 194 | 3.392
Yeast-ML | 103 | 14 | 2417 | 4.237
Table 8. Testing exact match ratio (EMR) obtained for multi-label data sets.
Data Set | ML-KNN | MLPC | ML-SVC | ML-RBF | RBF-ISCC
Emotions | 0.1832 | 0.2673 | 0.2030 | 0.2129 | 0.3020
Scene | 0.4576 | 0.5518 | 0.3721 | 0.5418 | 0.5360
Yeast-ML | 0.1045 | 0.0788 | 0.1368 | 0.0305 | 0.1526
Birds | 0.4675 | 0.4675 | 0.4180 | 0.4892 | 0.5232
Flags | 0.1538 | 0.1538 | 0.1385 | 0.1385 | 0.1846
Ave-EMR | 0.2733 | 0.3038 | 0.2537 | 0.2826 | 0.3400
Table 9. Testing labeling F-measure (LFM) obtained for multi-label data sets.
Data Set | ML-KNN | MLPC | ML-SVC | ML-RBF | RBF-ISCC
Emotions | 0.4812 | 0.6266 | 0.5716 | 0.4926 | 0.5893
Scene | 0.4880 | 0.6734 | 0.5545 | 0.5924 | 0.5860
Yeast-ML | 0.5633 | 0.5412 | 0.6024 | 0.4066 | 0.6203
Birds | 0.4675 | 0.6225 | 0.5737 | 0.5372 | 0.5890
Flags | 0.6329 | 0.6885 | 0.6830 | 0.6955 | 0.6827
Ave-LFM | 0.5270 | 0.630 | 0.5970 | 0.5449 | 0.6135
Table 10. Testing Hamming loss (HL) obtained for multi-label data sets.
Data Set | ML-KNN | MLPC | ML-SVC | ML-RBF | RBF-ISCC
Emotions | 0.2302 | 0.2137 | 0.2343 | 0.2195 | 0.2021
Scene | 0.1130 | 0.1142 | 0.1568 | 0.0984 | 0.0994
Yeast-ML | 0.2136 | 0.2605 | 0.2108 | 0.2711 | 0.2017
Birds | 0.0510 | 0.0595 | 0.1154 | 0.0838 | 0.0422
Flags | 0.3099 | 0.2659 | 0.2791 | 0.2835 | 0.2967
Ave-HL | 0.1835 | 0.1828 | 0.1993 | 0.1913 | 0.1684
Table 11. Pre-defined constants for RBF-ISCC for multi-label data sets.
Data Set | α | β | λ | Data Set | α | β | λ
Emotions | 7 | 2 | 0.1 | Scene | 12 | 4 | 0
Yeast-ML | 7 | 4 | 0 | Birds | 10 | 2 | 0
Flags | 2 | 2 | 10 |
Table 12. Ranking of different methods for multi-label data sets.
Metric | ML-KNN | MLPC | ML-SVC | ML-RBF | RBF-ISCC
Ave-EMR | 4 | 2 | 5 | 3 | 1
Ave-LFM | 5 | 1 | 3 | 4 | 2
Ave-HL | 3 | 2 | 5 | 4 | 1
Ave-Rank | 4.00 | 1.67 | 4.33 | 3.67 | 1.33
