Article

Correntropy-Based Constructive One Hidden Layer Neural Network

1 Institute for Artificial Intelligence, University of Stuttgart, 70569 Stuttgart, Germany
2 Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad 1696700, Iran
3 Department of Mathematics and Statistics, University of Turku, 20014 Turku, Finland
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(1), 49; https://doi.org/10.3390/a17010049
Submission received: 29 November 2023 / Revised: 9 January 2024 / Accepted: 11 January 2024 / Published: 22 January 2024
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Abstract

One of the main disadvantages of traditional mean square error (MSE)-based constructive networks is their poor performance in the presence of non-Gaussian noises. In this paper, we propose a new incremental constructive network based on the correntropy objective function (correntropy-based constructive neural network, C2N2), which is robust to non-Gaussian noises. In the proposed learning method, the input and output side optimizations are separated. It is proved theoretically that the new hidden node, which is obtained from the input side optimization problem, is not orthogonal to the residual error function. Based on this fact, it is proved that the correntropy of the residual error converges to its optimum value. During the training process, a weighted linear least square problem is iteratively solved to update the parameters of the newly added node. Experiments on both synthetic and benchmark datasets demonstrate the robustness of the proposed method in comparison with the MSE-based constructive network and the radial basis function (RBF) network. Moreover, the proposed method outperforms other robust learning methods, including the cascade correntropy network (CCOEN), the Multi-Layer Perceptron based on the Minimum Error Entropy objective function (MLPMEE), the Multi-Layer Perceptron based on the correntropy objective function (MLPMCC) and the Robust Least Square Support Vector Machine (RLS-SVM).

1. Introduction

Non-Gaussian noises, especially impulse noise, and outliers are among the most challenging issues in training adaptive systems, including adaptive filters and feedforward networks (FFNs). The mean square error (MSE), a second-order statistic, is widely used as the objective function for adaptive systems due to its simplicity, analytical tractability and the linearity of its derivative. The Gaussian noise assumption behind the MSE objective function supposes that many real-world random phenomena can be modeled by a Gaussian distribution. Under this assumption, MSE is capable of extracting all information from data whose statistics are defined solely by the mean and variance [1]. However, many real-world random phenomena do not follow a normal distribution, and MSE-based methods may perform unsatisfactorily in such cases.
Several types of feedforward networks have been proposed by researchers. From the architecture viewpoint, these networks can be divided into four classes including fixed structure networks, constructive networks [2,3,4,5,6], pruned networks [7,8,9,10] and pruning constructive networks [11,12,13].
The constructive networks start with a minimum number of nodes and connections, and the network size is increased gradually. These networks may have an adjustment mechanism based on the optimization of an objective function. The following literature survey focuses on the single-hidden layer feedforward networks (SLFNs) and multi-hidden layer feedforward networks with incremental constructive architecture, which are trained based on the MSE objective function.
Fahlman and Lebiere [2] proposed the cascade correlation network (CCN), in which new nodes are added and trained one by one, creating a multi-layer structure. The parameters of the network are trained to maximize the correlation between the output of the new node and the residual error. The authors in [3] proposed several objective functions for training the new node. They proved that networks with such objective functions are universal approximators. Huang et al. [4] proposed a novel cascade network. They used the orthogonal least square (OLS) method to derive a novel objective function for training new hidden nodes. Ma and Khorasani [6] proposed a constructive one hidden layer feedforward network whose hidden unit activation functions are Hermite polynomial functions. This approach results in a more efficient capture of the underlying input–output map. They also proposed a one hidden layer constructive adaptive neural network (OHLCN) scheme in which the input and output sides of the training are separated [5]. They scaled error signals during the learning process and pruned inefficient input connections to achieve better performance. A new constructive scheme was proposed by Wu et al. [14] based on a hybrid algorithm that combines the Levenberg–Marquardt algorithm and the least square method. In their approach, a new randomly selected neuron is added to the network when training becomes trapped in a local minimum.
Inspired by information theoretic learning (ITL), correntropy, which is a localized similarity measure between two random variables [15,16], has recently been utilized as the objective function for training adaptive systems. Bessa et al. [17] employed the maximum correntropy criterion (MCC) for training neural networks with fixed architecture. They compared the Minimum Error Entropy (MEE) and MCC-based neural networks with MSE-based networks and reported new results in wind power prediction. Singh and Principe [18] used correntropy as the objective function in a linear adaptive filter to minimize the error between the output of the adaptive filter and the desired signal and thereby adjust the filter weights. Shi and Lin [19] employed a convex combination scheme to improve the performance of the MCC adaptive filtering algorithm. They showed that the proposed method has better performance compared to the original filtering algorithm. Zhao et al. [20] combined the advantages of the Kernel Adaptive Filter and MCC and proposed Kernel Maximum Correntropy (KMC). The simulation results showed that KMC performs well in the noisy frequency doubling problem [20]. Wu et al. [21] employed MCC to train Hammerstein adaptive filters and showed that it provides a robust method in comparison to traditional Hammerstein adaptive filters. Chen et al. [22] studied a fixed-point algorithm for MCC and showed that, under sufficient conditions, convergence of the fixed-point MCC algorithm is guaranteed. The authors in [23] studied the steady-state performance of adaptive filtering when MCC is employed. They established a fixed-point equation in the Gaussian noise condition to obtain the exact value of the steady-state excess mean square error (EMSE). In non-Gaussian conditions, using the Taylor expansion approach, they derived an approximate analytical expression for the steady-state EMSE. Employing stacked auto-encoders and the correntropy-induced loss function, Chen et al. [24] proposed a robust deep learning model. The authors in [25], inspired by correntropy, proposed a margin-based loss function for classification problems. They showed that in their method, outliers that produce high error have little effect on the discriminant function. In [26], the authors provided a learning theory analysis of the connection between the regression model associated with the correntropy-induced loss and the least square regression model. Furthermore, they studied its convergence property and concluded that the scale parameter provides a balance between the convergence rate of the model and its robustness. Chen and Principe [27] showed that maximum correntropy estimation is a smoothed maximum a posteriori estimation. They also proved that when the kernel size is larger than a specific value and certain conditions hold, maximum correntropy estimation has a unique optimal solution due to the strictly concave region of the smoothed posterior distribution. The authors in [28] investigated the approximation ability of a cascade network whose input parameters are calculated by the correntropy objective function with a sigmoid kernel. They reported that their method works better than the other methods considered in [28] when data are contaminated by noise.
MCC with a Gaussian kernel is a non-convex objective function, which leads to local solutions for neural networks. In this paper, we propose a new method to overcome this bottleneck by adding hidden nodes one by one until the constructive network reaches a predefined accuracy or a maximum number of nodes. We prove that the correntropy of the constructive network constitutes a strictly increasing sequence after each hidden node is added and converges to its maximum.
This paper can be considered an extension of [28]. While in [28] the correntropy measure with a sigmoid kernel was used as the objective function to adjust the input parameters of a newly added node in a cascade network, in this paper, the kernel in the correntropy objective function is changed from the sigmoid to the Gaussian kernel. This objective function is then used for training both the input and output parameters of the new nodes in a single-hidden layer network. The proposed method performs better than [28] for two reasons: (1) the Gaussian kernel provides better results than the sigmoid kernel, as it is a local similarity measure, and (2) in contrast to [28], in this paper, correntropy is used to train both the input and output parameters of each newly added node.
In a nutshell, the proposed method has the following advantages:
  • The proposed method is robust to non-Gaussian noises, especially impulse noise, since it takes advantage of the correntropy objective function. In particular, the Gaussian kernel provides better results than the sigmoid kernel. The reason for the robustness of the proposed method is discussed in Section 4 analytically, and in Section 5 experimentally.
  • Most of the methods that employ correntropy as the objective function to adjust their parameters suffer from local solutions. In the proposed method, the correntropy of the network increases as new nodes are added and converges to its maximum; thus, the global solution is attained.
  • The network size is determined automatically; consequently, the network does not suffer from over/underfitting, which results in satisfactory performance.
The structure of the remainder of this paper is as follows. In Section 2, some necessary mathematical notations, definitions and theorems are presented. Section 3 presents some related previous work. Then a correntropy-based constructive neural network (C2N2) is proposed in Section 4. Experimental results and a comparison with other methods are carried out in Section 5. The paper is concluded in Section 6.

2. Mathematical Notations, Definitions and Preliminaries

In this section, first, measure and function spaces that are necessary for describing previous work are defined in Section 2.1. Section 2.2 introduces the structure of the single-hidden layer feedforward network (SLFN) that is used in this paper, followed by its mathematical notations and definitions of its related variables.

2.1. Measure Space, Probability Space and Function Space

As mentioned in [3], let $X$ be the input space, a bounded measurable subset of $\mathbb{R}^d$, and let $L_2(X)$ be the space of all functions $f$ such that $\int_X (f(x))^2\, d\mu(x) < \infty$. For $u, v \in L_2(X)$, the inner product is defined as follows:
\[ \langle u, v \rangle_\mu := \int_X u(x)\, v(x)\, d\mu(x), \]
where $\mu$ is a positive measure on the input space. Under the measure $\mu$, the $l_2$ norm in the $L_2(X)$ space is denoted by $\|\cdot\|_2$. The closeness between $u$ and $v$ is measured by
\[ \|u - v\|_2 = \left( \int_X (u(x) - v(x))^2\, d\mu(x) \right)^{\frac{1}{2}}. \]
The angle between $u$ and $v$ is defined by
\[ \theta_{u,v} := \arccos \frac{\langle u, v \rangle_\mu}{\|u\|_2\, \|v\|_2}. \]
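As a concrete numerical illustration of these definitions, the short sketch below approximates the inner product, norm and angle by Monte Carlo sampling. It is only a sketch: the assumption that $\mu$ is the uniform measure on $X = [-1, 1]$, the function names and the sample size are ours, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_mu(u, v, xs):
    """Monte Carlo estimate of <u, v>_mu from samples xs drawn from mu."""
    return np.mean(u(xs) * v(xs))

def angle_mu(u, v, xs):
    """Angle theta_{u,v} between u and v under the empirical measure."""
    uv = inner_mu(u, v, xs)
    nu = np.sqrt(inner_mu(u, u, xs))
    nv = np.sqrt(inner_mu(v, v, xs))
    return np.arccos(uv / (nu * nv))

xs = rng.uniform(-1.0, 1.0, size=100_000)   # samples from mu (assumed uniform on X = [-1, 1])
print(angle_mu(np.sin, np.cos, xs))         # close to pi/2: sin and cos are nearly orthogonal on [-1, 1]
```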
Definition 1
([28,29]). Let $W$ be a probability space, that is, a measure space with total measure one. This space is represented as follows:
\[ W = (\Omega, F, P), \]
where $\Omega$ is its sample space. In this paper, $\Omega$ is considered a compact subset of $\mathbb{R}^d$, $F$ is a sigma-algebra of events and $P$ is a probability measure, that is, a measure on $F$ with $P(\Omega) = 1$.
Definition 2
([28,29]). Let $L_p(\Omega, F, P)$, $1 \le p < \infty$, be the set of all $p$-integrable random variables $X : \Omega \to \mathbb{R}$, i.e.,
\[ \|X\|_p = \left( \int_\Omega |X|^p\, dP \right)^{\frac{1}{p}} = E\left( |X|^p \right)^{\frac{1}{p}} < \infty. \]
This is a vector space and the inner product in this space is defined as follows:
\[ \langle X, Y \rangle := \int_\Omega X(\omega)\, Y(\omega)\, dP = E(XY), \]
where $X, Y \in L_p(\Omega, F, P)$ and $E(\cdot)$ is the expectation in probability theory. The closeness between two random variables $X$ and $Y$ is measured by the $L_p(\Omega, F, P)$ norm:
\[ \|X - Y\|_p = \left( \int_\Omega |X(\omega) - Y(\omega)|^p\, dP \right)^{\frac{1}{p}} = E\left( |X - Y|^p \right)^{\frac{1}{p}}, \quad X, Y \in L_p(\Omega, F, P). \]
In ITL, the correlation between random variables is generalized to correntropy, which is a measure of similarity [15]. Let X and Y be two given random variables; the correntropy in the sense of [15] is defined as
\[ V_M(X, Y) := E\left[ k_M(X, Y) \right], \]
where k M ( · , · ) is a Mercer kernel function.
In general, a Mercer kernel function is a type of positive semi-definite kernel function that satisfies Mercer’s condition. Formally, a symmetric function k M is a Mercer kernel function if, for any positive integer m and any set of random variables X 1 , , X m L p ( Ω , F , P ) , the corresponding Gram matrix K i j = k M ( X i , X j ) is positive semi-definite.
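The following sketch illustrates the positive semi-definiteness of the Gram matrix for the Gaussian kernel used later in this paper; the helper name and the data are our own illustrative choices.

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    """Gram matrix K_ij = k_sigma(x_i, x_j) for the Gaussian (RBF) kernel."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

x = np.random.default_rng(1).normal(size=50)
K = gaussian_gram(x)
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)   # True: the Gram matrix is positive semi-definite (up to round-off)
```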
In the definition of correntropy, $E(\cdot)$ denotes the expected value of the random variable, and $M$ is replaced by $\alpha$ or $\sigma$ if the sigmoid or Gaussian (radial basis) kernel is used, respectively. In our recent work [28], we used a sigmoid kernel, which is defined as
\[ k_\alpha(X, Y) = \tanh\left( \alpha \langle X, Y \rangle + c \right), \]
where α , c R are scale and offset hyperparameters of the sigmoid kernel. The offset parameter c in the sigmoid kernel influences the shape of the kernel function. A higher value of c leads to a steeper sigmoid curve, making the kernel function more sensitive to variations in the input space. It is important to note that the choice of hyperparameters, including c and α , can significantly impact the performance of a machine learning model using the sigmoid kernel. These parameters are often tuned during the training process to optimize the model for a specific task or dataset.
In contrast to [28], in this paper, we use the Gaussian kernel that is represented as
\[ k_\sigma(X, Y) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\|X - Y\|^2}{2\sigma^2} \right), \]
where $\sigma$ is the kernel width (bandwidth) of the Gaussian function. A larger $\sigma$ results in a smoother and more slowly decaying kernel, while a smaller $\sigma$ leads to a narrower and more rapidly decaying kernel.
Let the error function be defined as $e := e(X, Y) = X - Y$; the correntropy of the error function is represented as
\[ V_\sigma(e) = E\left[ k_\sigma(X, Y) \right]. \]
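To make the robustness property concrete, the following minimal sketch estimates $V_\sigma(e)$ from samples and contrasts it with the MSE when a few impulsive outliers are present. The data, kernel width and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def correntropy(x, y, sigma):
    """Sample estimate of V_sigma(e) = E[k_sigma(X, Y)] with a Gaussian kernel."""
    e = x - y
    return np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2))) / (np.sqrt(2.0 * np.pi) * sigma)

rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
y_noisy = y_true + rng.normal(scale=0.1, size=1000)
y_noisy[:10] += 50.0                              # a few impulsive outliers
print(correntropy(y_true, y_noisy, sigma=1.0))    # barely affected by the outliers
print(np.mean((y_true - y_noisy) ** 2))           # MSE is dominated by the outliers
```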
At the end of this subsection, we note that, alternatively, the Wasserstein distance, also known as the Earth Mover’s Distance (EMD), Kantorovich–Rubinstein metric, Mallows’s distance or optimal transport distance, can be used as a metric that quantifies the minimum cost of transforming one probability distribution into another and can therefore be used to quantify the rate of convergence when the error is measured in some Wasserstein distance [30]. The relationship between correntropy and Wasserstein distance is often explored in the context of kernelized Wasserstein distances. By using a kernel function, the Wasserstein distance can be defined in a reproducing kernel Hilbert space (RKHS). In this framework, correntropy can be seen as a special case of a kernelized Wasserstein distance when the chosen kernel is the Gaussian kernel.

2.2. Network Structure

This paper focuses on the single-hidden layer feedforward network. As shown in Figure 1, it has three layers, including the input layer, the hidden layer and the output layer. Without loss of generality, this paper considers the SLFN with only one output node.
The output of SLFN with L hidden nodes is represented as follows [3]:
\[ f_L = \sum_{i=1}^{L} \beta_i g_i(x), \]
where $g_i$ is the output of the $i$-th hidden node, which can be one of the following two types:
1. For additive nodes,
\[ g_i(x) = g\left( \langle w_i, x \rangle + b_i \right); \quad w_i \in \mathbb{R}^d \ \text{and} \ b_i \in \mathbb{R}. \]
2. For RBF nodes,
\[ g_i(x) = g\left( b_i \left\| x - w_i \right\|_2 \right); \quad w_i \in \mathbb{R}^d \ \text{and} \ b_i \in \mathbb{R}^+. \]
For additive nodes, the vector $w_i$ contains the input weights of the $i$-th hidden node and $b_i$ is its bias. For RBF nodes, the vector $w_i$ is the center of the $i$-th radial basis function and $b_i$ is its impact factor.
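The following sketch evaluates both node types on a small batch of inputs. It assumes tanh for the additive node (the activation used for C2N2 later in the paper) and an illustrative Gaussian-shaped radial function $g$ for the RBF node; the function names and parameter values are ours.

```python
import numpy as np

def additive_node(x, w, b):
    """Additive hidden node: g(<w, x> + b) with g = tanh."""
    return np.tanh(x @ w + b)

def rbf_node(x, w, b):
    """RBF hidden node: g(b * ||x - w||); here g(z) = exp(-z^2) as an illustrative radial choice."""
    z = b * np.linalg.norm(x - w, axis=-1)
    return np.exp(-z ** 2)

x = np.random.default_rng(0).uniform(-1, 1, size=(5, 3))   # 5 samples, d = 3
w, b = np.ones(3) * 0.2, 0.1
print(additive_node(x, w, b))
print(rbf_node(x, w, b))
```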
All networks that can be generated are represented as the following functions set [3]:
\[ O = \bigcup_{L=1}^{\infty} O_L, \]
where
\[ O_L = \left\{ f_L \ \middle|\ f_L(x) = \sum_{i=1}^{L} \beta_i g_i(x); \ \beta_i \in \mathbb{R}, \ g_i(x) \in G \right\} \]
and $G$ is a set of all possible hidden nodes. For additive nodes, we have
\[ G = \left\{ g\left( \langle w, x \rangle + b \right); \ w \in \mathbb{R}^d, \ b \in \mathbb{R} \right\}. \]
For the RBF case, we have
\[ G = \left\{ g\left( b\, \|x - w\|_2 \right); \ w \in \mathbb{R}^d, \ b \in \mathbb{R}^+ \right\}. \]
Let f be a target function that is approximated by the network with L hidden nodes. The network residual error function is defined as follows:
\[ e_L := f - f_L. \]
In practice, the functional form of the error is not available and the network is trained on finite data samples, described as $X = \left\{ (x_i, y_i) \right\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^d$ is the $d$-dimensional input vector of the $i$-th training sample and $y_i \in \mathbb{R}$ is its target value. Thus, the error vector on the training samples is denoted as follows:
\[ E_L = \left[ E_{L1}, \ldots, E_{LN} \right], \]
where $E_{Li}$ is the error of the $i$-th training sample for the network with $L$ hidden nodes, $E_{Li} = e_L(x_i)$. Furthermore, the activation vector for the $L$-th hidden node is
\[ G_L = \left[ G_{L1}, \ldots, G_{LN} \right], \]
where $G_{Li}$ is the output of the $L$-th hidden node for the $i$-th training sample, $G_{Li} = g_L(x_i)$.

3. Previous Work

There are several types of constructive neural networks. In this section, the networks that are proposed in [3,28] are introduced. In those methods, the network is constructed by adding a new node to the network in each step. The training process of the newly added node (L-th hidden node) is divided into two phases: the first phase is devoted to adjusting the input parameters and the second phase is devoted to adjusting the output weight. When the parameters of the new node are obtained, they are fixed and do not change during the training of the next nodes.

3.1. The Networks Introduced in [3]

For adjusting the input parameters $(w_L, b_L)$ of the newly added node in the constructive network, several objective functions are proposed in [3]. They are as follows [3]:
\[ V_1 = \frac{\left( E_{L-1} G_L^T \right)^2}{G_L G_L^T}, \quad V_2 = \left( E_{L-1} G_L^T \right)^2, \quad V_3 = \frac{\left( \left( E_{L-1} - \bar{E}_{L-1} \right) \left( G_L - \bar{G}_L \right)^T \right)^2}{\left\| G_L - \bar{G}_L \right\|^2}, \]
\[ V_4 = \sqrt{V_1}, \quad V_5 = \sqrt{V_2}, \quad V_6 = \sqrt{V_3}, \quad V_{\mathrm{CasCor}} = \left| \left( E_{L-1} - \bar{E}_{L-1} \right) \left( G_L - \bar{G}_L \right)^T \right|, \]
where $V_{\mathrm{CasCor}}$ is the objective function for the cascade correlation network and $\bar{E}_{L-1} = \frac{1}{N} \sum_{i=1}^{N} E_{L-1}(x_i)$. The objective function that is used to adjust the output weight of the $L$-th hidden node is [3]
\[ \Delta_L = E_{L-1} E_{L-1}^T - E_L E_L^T, \]
and $\Delta_L$ is maximized if and only if [3]
\[ \beta_L = \frac{E_{L-1} G_L^T}{G_L G_L^T}, \]
which is the optimum output parameter of the new node. In [3], the authors also proved that for each of the objective functions V 1 to V 6 , the network error converges.
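The closed form above is just a least-squares projection of the residual onto the new node's activation vector. The following minimal sketch computes it on synthetic data; the data and function name are illustrative assumptions.

```python
import numpy as np

def output_weight_mse(E_prev, G_L):
    """Output weight that maximizes Delta_L in the MSE-based scheme of [3]:
    beta_L = (E_{L-1} G_L^T) / (G_L G_L^T)."""
    return (E_prev @ G_L) / (G_L @ G_L)

rng = np.random.default_rng(0)
G_L = rng.normal(size=200)                               # activation vector of the new node
E_prev = 0.7 * G_L + rng.normal(scale=0.1, size=200)     # residual error vector
print(output_weight_mse(E_prev, G_L))                    # close to 0.7, the projection coefficient
```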
Theorem 1
([3]). Given that $\mathrm{span}(G)$ is dense in $L_2$ and, for every $g \in G$, $0 < \|g\|_2 < b$ for some $b \in \mathbb{R}$, if $g_L$ is selected so as to maximize $\frac{\langle e_{L-1}, g_L \rangle_\mu^2}{\|g_L\|_2^2}$, then $\lim_{L \to \infty} \|f - f_L\|_2 = 0$.
More detailed discussion about theorems and their proofs can be found in [3].

3.2. Cascade Correntropy Network (CCOEN) [28]

The authors in [28] proved that if the input parameters of each new node in a cascade network are adjusted by using the correntropy objective function with the sigmoid kernel and its output parameter is adjusted by
\[ \beta_L = \frac{E\left( e_{L-1}\, g_L \right)}{E\left( g_L^2 \right)}, \]
then the network is a universal approximator. The following theorem investigates the approximation ability of CCOEN:
Theorem 2
([28]). Suppose $\mathrm{span}(G)$ is dense in $L_2(\Omega, F, P)$. For any continuous function $f$ and for the sequence of error similarity feedback functions $g_L^{s(e)}$, $L \in \mathbb{N}$, there exists a real sequence $\{\eta_L;\ L \in \mathbb{N}\}$ such that
\[ \lim_{L \to \infty} E\left( e_L^2 \right) = 0 \]
holds with probability one if
\[ g_L^{s(e)} = \arg\max_{g_L \in G} V_\sigma\left( e_{L-1}, \eta_L g_L \right), \qquad \beta_L = \frac{E\left( e_{L-1}\, g_L^{s(e)} \right)}{E\left( \left( g_L^{s(e)} \right)^2 \right)}. \]
It was shown that CCOEN is more robust than the networks proposed in [3] when data are contaminated by noise.

4. Proposed Method

In this section, a novel constructive neural network is proposed based on the maximum correntropy criterion with the Gaussian kernel. To the best of our knowledge, it is the first time that correntropy with a Gaussian kernel is employed as the objective function for training both the input and output weights of a single-hidden layer constructive network. It must be considered that correntropy is a non-convex objective function, so it is difficult to find the optimum solution. This section proposes a new theorem and proves that, perhaps surprisingly, the proposed method trained with the correntropy objective function converges to the global solution. It is shown that the performance of the proposed method is excellent in the presence of non-Gaussian noise, especially impulse noise. In the proposed network, hidden nodes are added and trained one by one, and the parameters of the newly added node (the L-th hidden node) are obtained and then fixed (see Figure 2).
This section is organized as follows: First, some preliminaries, mathematical definitions and theorems that are necessary for presenting the proposed method and proving its convergence are introduced in Section 4.1. The new training strategy for the proposed method is described in Section 4.2. In Section 4.3, the convergence of the proposed method is proven when the error and activation function are continuous random variables. In practice, during training on a dataset, the error function is not available; thus, the error vector and activation vector are used to train the new node. Regarding this fact, in Section 4.4, two optimization problems are presented to adjust the parameters of the new node based on training data samples.

4.1. Preliminaries for Presenting the Proposed Method

This section presents a new theorem for the proposed method based on special spaces, which are defined in Definitions 1 and 2.
The following lemmas, propositions and theorems are also used in the proof of the main theorem.
Lemma 1
([31]). Given $g : \mathbb{R} \to \mathbb{R}$, $\mathrm{span}\left\{ g\left( \langle w, x \rangle + b \right) : (w, b) \in \mathbb{R}^d \times \mathbb{R} \right\}$ is dense in $L_p$ for every $p \in [1, \infty)$ if and only if $g$ is not a polynomial (almost everywhere).
Proposition 1
([32]). For $G(z) = \exp\left( -\frac{z^2}{2\sigma^2} \right)$, there exists a convex conjugate function $\phi$ such that
\[ G(z) = \sup_{\alpha \in \mathbb{R}} \left( \alpha \frac{z^2}{2\sigma^2} - \phi(\alpha) \right). \]
Moreover, for a fixed $z$, the supremum is reached at $\alpha = G(z)$.
Theorem 3
([33]). If $\{X_n\}$ is any sequence of non-negative random variables (taking values in $[0, \infty)$) that increasingly converges, $X_n(\omega) \uparrow X(\omega)$ for every $\omega \in \Omega$, and the expectations exist ($X_n \in L_1$ for all $n$), then $E(X_n) \uparrow E(X)$.
Theorem 4
([33]). (Monotonicity) Let $X$ and $Y$ be random variables with $X \le Y$; then $E(X) \le E(Y)$, with equality if and only if $X = Y$ almost surely.
Theorem 5
([34]). (Convergence) Every upper bounded increasing sequence converges to its supremum.

4.2. C2N2: Objective Function for Training the New Node

In this subsection, we combine the idea of a constructive SLFN with the idea of correntropy and propose a new strong constructive network that is robust to impulsive noise. The proposed method employs correntropy as the objective function to adjust the input $(w_L, b_L)$ and output $\beta_L$ parameters of the network, $L = 1, 2, \ldots$ To the best of our knowledge, it is the first time that correntropy with the Gaussian kernel has been employed for training all the parameters of a constructive SLFN. C2N2 starts with zero hidden nodes. The first hidden node is added to the network. First, the input parameters $(w_1, b_1)$ of the hidden node are calculated by employing the correntropy objective function with a Gaussian kernel. Then, they are fixed and the output parameter $\beta_1$ of the node is adjusted by the correntropy objective function with the Gaussian kernel. After the parameters of the first node are obtained, they are fixed and the next hidden node is added to the network and trained. This process is iterated until the stopping condition is satisfied.
The proposed method can be viewed as an extension of CCOEN [28] with the following differences:
  • In contrast to CCOEN, which uses correntropy with a sigmoid kernel to adjust the input parameters of a cascade network, the proposed method uses correntropy with a Gaussian kernel to adjust all the parameters of an SLFN.
  • CCOEN uses correntropy to adjust the input parameters of the new node in a cascade network to provide a more robust method. However, the output parameter of the new node in a cascade network is still adjusted based on the least mean square error. In contrast, the proposed method uses correntropy with Gaussian kernel to obtain both the input and output parameters of the new node in a constructive SLFN. Therefore, the proposed method is more robust than CCOEN and other networks introduced in [3] when the dataset is contaminated by impulsive noise.
  • Employing Gaussian kernel for correntropy as the objective function to adjust the network’s parameters provides a closed-form formula introduced in the next section. In other words, both the input and output parameters are adjusted by two closed-form formulas.
For the proposed network, each newly added node (the L-th added node, where $L = 1, 2, \ldots$) is trained in two phases.
In the first phase, the new node is selected from $G$ using the following optimization problem:
\[ g_L^{\mathrm{sim}(e)} = \arg\max_{g_L \in G} V\left( g_L \right), \]
where
\[ V\left( g_L \right) = E\left[ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\left( e_{L-1} - k_L g_L \right)^2}{2\sigma^2} \right) \right]. \]
From the definition of the kernel, the activation function $g_L^{\mathrm{sim}(e)}$ that is most similar to the residual error of the network with $L-1$ nodes is selected from $G$, as this node is selected to maximize
\[ V\left( g_L \right) = E\left\langle \Phi\left( e_{L-1} \right), \Phi\left( k_L g_L \right) \right\rangle, \quad k_L \in \mathbb{R} \setminus \{0\}, \]
where $\Phi$ is the feature mapping. Consequently, the biggest reduction in error is obtained and the network has a more compact architecture.
In the second phase, the output parameter $\beta_L$ of the new node is adjusted by
\[ \beta_L^{\mathrm{sim}(e)} = \arg\max_{\beta_L \in \mathbb{R}} V\left( \beta_L \right), \]
\[ V\left( \beta_L \right) = E\left[ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\left( e_{L-1} - \beta_L g_L^{\mathrm{sim}(e)} \right)^2}{2\sigma^2} \right) \right]. \]
These two phases are iterated and a new node is added in each iteration until a certain stopping condition is satisfied. This is discussed in Section 5.
After the parameters of the new node are tuned, the correntropy of the residual error (error of the network with L hidden nodes) is shown as
\[ V\left( e_L \right) = E\left[ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\left( e_{L-1} - \beta_L^{\mathrm{sim}(e)} g_L^{\mathrm{sim}(e)} \right)^2}{2\sigma^2} \right) \right], \]
and the residual error is updated as follows:
\[ e_L = e_{L-1} - \beta_L^{\mathrm{sim}(e)}\, g_L^{\mathrm{sim}(e)}. \]
It is important to note that this subsection only presents two optimization problems for adjusting the input and output parameters of the new node. In Section 4.4, we present a way to solve these problems.

4.3. Convergence Analysis

In this subsection, we prove that the correntropy of the newly constructed network forms a strictly increasing sequence and converges to its supremum. Furthermore, it is proven that the supremum equals the maximum. To prove the convergence of the correntropy of the network, the definitions, theorems and lemma presented in Section 4.1 are employed. To prove the convergence of the proposed method, similarly to [3,28], we propose the following lemma and prove that the new node, which is obtained from the input side optimization problem, is not orthogonal to the residual error function.
Lemma 2.
Given that $\mathrm{span}(G)$ is dense in $L_2(\Omega, F, P)$ and $e_{L-1} \in L_2(\Omega, F, P)$, there exists a real number $k_L \in \mathbb{R} \setminus \{0\}$ such that $g_L^{\mathrm{sim}(e)}$ is not orthogonal to $e_{L-1}$, where
\[ g_L^{\mathrm{sim}(e)} = \arg\max_{g_L \in G} E\left[ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\left( e_{L-1} - k_L g_L \right)^2}{2\sigma^2} \right) \right]. \]
Employing Lemma 2 and what is mentioned in Section 4.1, the following theorem proves that the proposed method achieves its global solution.
Theorem 6.
Given an SLFN with the tanh (hyperbolic tangent) activation function for the additive nodes, for any continuous function $f$ and for the sequence of hidden node functions obtained based on the residual error functions, i.e., $g_L^{\mathrm{sim}(e)}$, $L \in \mathbb{N}$, there exists a real sequence $\{k_L;\ L \in \mathbb{N}\}$ such that
\[ \lim_{L \to \infty} V\left( e_L \right) = V_{\max} \]
holds almost everywhere, provided that
\[ g_L^{\mathrm{sim}(e)} = \arg\max_{g_L \in G} V\left( g_L \right), \qquad \beta_L^{\mathrm{sim}(e)} = \arg\max_{\beta_L \in \mathbb{R}} V\left( \beta_L \right), \]
where
\[ V_{\max} = \frac{1}{\sqrt{2\pi}\,\sigma}. \]
The proofs of Lemma 2 and Theorem 6 involve some purely mathematical content and are placed in Appendix A.

4.4. Learning from Data Samples

In Theorem 6, we proved that the proposed network, i.e., the one hidden layer constructive neural network based on correntropy (C2N2), achieves an optimal solution. During the training process, the function form of the error is not available and the error and activation vectors are generated from the training samples. In the rest of this subsection, we propose a method to train the network from data samples.

4.4.1. Input Side Optimization

The optimization problem to adjust the input parameters is as follows:
\[ V_L^g = \max_{G_L} E\left[ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\left( E_{L-1} - k_L G_L \right)^2}{2\sigma^2} \right) \right]. \]
On the training data, the expectation can be approximated as
\[ \hat{V}_L^g = \max_{G_L} \frac{1}{N \sqrt{2\pi}\,\sigma} \sum_{i=1}^{N} \exp\left( -\frac{\left( E_{(L-1)i} - k_L G_{Li} \right)^2}{2\sigma^2} \right). \]
The constant term $\frac{1}{N \sqrt{2\pi}\,\sigma}$ can be removed and the following problem can be solved instead of $\hat{V}_L^g$:
\[ \hat{U}_L^g = \max_{G_L} \sum_{i=1}^{N} \exp\left( -\frac{\left( E_{(L-1)i} - k_L G_{Li} \right)^2}{2\sigma^2} \right). \]
Consider the following equality:
\[ E_{(L-1)i} = k_L G_{Li}, \quad i = 1, \ldots, N. \]
In this paper, the tanh function is selected as the activation function, which is bipolar and invertible. Therefore,
\[ g^{-1}\left( \frac{E_{(L-1)i}}{k_L} \right) = X_i W_{(d+1) \times 1}, \quad i = 1, \ldots, N, \]
where $X_i = \left[ x_i \ \ 1 \right]$, $W_{(d+1) \times 1} = \begin{bmatrix} W_L \\ b_L \end{bmatrix}$ and
\[ X = \left[ X_1^T, \ldots, X_N^T \right]^T. \]
The range of $g$ (the domain of $g^{-1}$) is $[-1, 1]$. Thus, it is necessary to rescale the error signal into this range. To do so, $k_L$ is assigned as follows:
\[ k_L = \frac{\max\left( \mathrm{abs}\left( E_{L-1} \right) \right)}{\lambda}, \]
where $\lambda \in (-1, 1) \setminus \{0\}$ and $\mathrm{abs}(\cdot)$ is the element-wise absolute value function. Let $H_{Li} = g^{-1}\left( \frac{E_{(L-1)i}}{k_L} \right)$, and therefore, the term
\[ \left( E_{(L-1)i} - k_L G_{Li} \right)^2 \]
can be replaced by
\[ \left( H_{Li} - X_i W_L \right)^2, \]
and thus the following problem is presented to adjust the input parameters:
\[ \hat{U}_L^g = \max_{W_L} \sum_{i=1}^{N} \exp\left( -\frac{\left( H_{Li} - X_i W_L \right)^2}{2\sigma^2} \right). \]
To achieve better generalization performance, the norm of the weights needs to be kept minimized too; thus, the problem above is reformulated as
\[ \hat{U}_L^g = \max_{W_L} \sum_{i=1}^{N} \exp\left( -\frac{\left( H_{Li} - X_i W_L \right)^2}{2\sigma^2} \right) - \frac{C}{2} \left\| W_L \right\|^2. \]
It should be noted that if C = 0, both problems above are equivalent. Since the necessary condition for convergence is that the correntropy of the error increases at each step when a node is added, in the experiment section, C = 0 as well as other values of C are checked and the best result is selected. This guarantees the convergence of the method according to Theorem 6.
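Before moving to the half-quadratic solution, the error rescaling and inversion step above can be summarized in a few lines of code. This is a sketch under the assumption that $\lambda \in (0, 1)$ (using the positive half of its admissible range); the function name is ours.

```python
import numpy as np

def rescale_error(E_prev, lam=0.5):
    """Map the residual error into the range of tanh and invert it:
    k_L = max(|E_{L-1}|) / lambda,  H_Li = arctanh(E_{L-1,i} / k_L)."""
    k_L = np.max(np.abs(E_prev)) / lam      # lambda in (0, 1) keeps |E / k_L| <= lambda < 1
    H_L = np.arctanh(E_prev / k_L)
    return H_L, k_L

E_prev = np.array([0.8, -2.0, 0.3, 1.1])
H_L, k_L = rescale_error(E_prev, lam=0.5)
print(k_L, H_L)                             # k_L = 4.0; all |E / k_L| <= 0.5, so arctanh is well defined
```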
The half-quadratic method is employed to adjust the input parameters. Based on Proposition 1, we have
\[ \mathcal{L}^g\left( \alpha, W_L \right) = \max_{W_L, \alpha} \sum_{i=1}^{N} \left( \alpha_i \frac{\left( H_{Li} - X_i W_L \right)^2}{2\sigma^2} - \phi\left( \alpha_i \right) \right) - \frac{C}{2} \left\| W_L \right\|^2. \]
The local solution of the above optimization problem is adjusted using the following iterative process:
\[ \begin{aligned} \alpha_i^{t+1} &= G\left( H_{Li} - X_i W_L^{t} \right), \\ W_L^{t+1} &= \arg\max_{W_L} \sum_{i=1}^{N} \alpha_i^{t+1} \frac{\left( H_{Li} - X_i W_L \right)^2}{2\sigma^2} - \frac{C}{2} \left\| W_L \right\|^2, \end{aligned} \]
i.e., the following optimization problem needs to be solved in each iterate:
\[ V_L^g\left( \alpha, W_L \right) = \max_{W_L} \sum_{i=1}^{N} \alpha_i^{t+1} \frac{\left( H_{Li} - X_i W_L \right)^2}{2\sigma^2} - \frac{C}{2} \left\| W_L \right\|^2. \]
Since $\sigma^2$ is a constant term, it can be removed from the optimization problem. Then, the optimization problem can be multiplied by $\frac{1}{C}$; with a slight abuse of notation, we set $C = \frac{1}{C}$. Thus, the following constrained optimization problem is obtained:
\[ \max \ \sum_{i=1}^{N} \frac{C}{2} \alpha_i^{t+1} \xi_i^2 - \frac{1}{2} \left\| W_L \right\|^2 \quad \text{s.t.} \quad X_i W_L = H_{Li} - \xi_i, \quad i = 1, \ldots, N. \]
The Lagrangian is constituted as
\[ \mathcal{L}\left( \xi_i, \eta_i, W_L \right) = \sum_{i=1}^{N} \frac{C}{2} \alpha_i^{t+1} \xi_i^2 - \frac{1}{2} \left\| W_L \right\|^2 - \sum_{i=1}^{N} \eta_i \left( X_i W_L - H_{Li} + \xi_i \right). \]
The derivatives of the Lagrangian function with respect to its variables are the following:
\[ \frac{\partial \mathcal{L}}{\partial W_L} = 0 \ \Rightarrow \ W_L = \sum_{i=1}^{N} \eta_i X_i^T = X^T \eta, \]
where $\eta = \left[ \eta_1, \ldots, \eta_N \right]^T$,
\[ \frac{\partial \mathcal{L}}{\partial \xi_i} = 0 \ \Rightarrow \ \eta_i = C \alpha_i^{t+1} \xi_i, \quad i = 1, \ldots, N, \qquad \frac{\partial \mathcal{L}}{\partial \eta_i} = 0 \ \Rightarrow \ X_i W_L - H_{Li} + \xi_i = 0, \quad i = 1, \ldots, N. \]
Now we consider two cases.
Case 1. $d \le N$.
By substituting derivatives in
\[ W_L = \sum_{i=1}^{N} C \alpha_i^{t+1} \xi_i X_i^T, \]
we obtain
\[ W_L = \sum_{i=1}^{N} C \alpha_i^{t+1} \left( -X_i W_L + H_{Li} \right) X_i^T = -\sum_{i=1}^{N} C \alpha_i^{t+1} X_i W_L X_i^T + \sum_{i=1}^{N} C \alpha_i^{t+1} H_{Li} X_i^T. \]
Let $\Psi$ be a diagonal matrix with $\Psi_{ii} = \alpha_i$; therefore,
\[ W_L + C X^T \Psi X W_L = C X^T \Psi H_L \ \Rightarrow \ \left( I + C X^T \Psi X \right) W_L = C X^T \Psi H_L \ \Rightarrow \ W_L = \left( X^T \Psi X + \frac{I}{C} \right)^{-1} X^T \Psi H_L. \]
Case 2. $d \ge N$.
By substituting derivatives in
\[ X W_L - H_L + \xi = 0 \ \Rightarrow \ X X^T \eta - H_L + \xi = 0, \]
\[ \eta = C \Psi \xi, \]
we obtain
\[ X X^T C \Psi \xi - H_L + \xi = 0 \ \Rightarrow \ X X^T C \Psi \xi + \xi = H_L \ \Rightarrow \ \left( C X X^T \Psi + I \right) \xi = H_L \ \Rightarrow \ \left( X X^T \Psi + \frac{I}{C} \right) C \xi = H_L \ \Rightarrow \ C \xi = \left( X X^T \Psi + \frac{I}{C} \right)^{-1} H_L. \]
Then
\[ W_L = X^T \eta = X^T \Psi \left( C \xi \right) = X^T \Psi \left( X X^T \Psi + \frac{I}{C} \right)^{-1} H_L. \]
Thus, the input parameters are obtained by the following iterative process:
\[ \begin{aligned} \alpha_i^{t+1} &= G\left( H_{Li} - X_i W_L^{t} \right), \\ W_L^{t+1} &= \left( X^T \Psi^{t} X + \frac{I}{C} \right)^{-1} X^T \Psi^{t} H_L \quad \text{or} \quad W_L^{t+1} = X^T \Psi^{t} \left( X X^T \Psi^{t} + \frac{I}{C} \right)^{-1} H_L. \end{aligned} \]
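The following minimal sketch illustrates this half-quadratic alternation for the input parameters. It is not the authors' implementation: the weighted regularized least-squares form used here (with a "+" sign and positive weights) is our reading of the derivation above, and the data, function name and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def input_side_hq(X_aug, H_L, C=0.5, sigma=1.0, iters=10):
    """Half-quadratic iteration for the input parameters (sketch of Section 4.4.1):
    alternate alpha_i = G(H_Li - X_i W) and a weighted, regularized least-squares solve."""
    n, d1 = X_aug.shape
    W = np.zeros(d1)
    for _ in range(iters):
        r = H_L - X_aug @ W
        alpha = np.exp(-r ** 2 / (2.0 * sigma ** 2))   # auxiliary variables, small for outliers
        Psi = np.diag(alpha)
        # W = (X^T Psi X + I/C)^(-1) X^T Psi H_L   (the d <= N case)
        A = X_aug.T @ Psi @ X_aug + np.eye(d1) / C
        W = np.linalg.solve(A, X_aug.T @ Psi @ H_L)
    return W

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 3))
X_aug = np.hstack([X, np.ones((100, 1))])              # X_i = [x_i, 1] to absorb the bias b_L
H_L = X_aug @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(scale=0.05, size=100)
print(input_side_hq(X_aug, H_L))
```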

4.4.2. Output Side Optimization

When the input parameters of the new node are obtained from the previous step, the new node is denoted by $G_L^{\mathrm{sim}(e)} = \left[ G_{L1}^{\mathrm{sim}(e)}, \ldots, G_{LN}^{\mathrm{sim}(e)} \right]$, where $G_{Li}^{\mathrm{sim}(e)} = g_L^{\mathrm{sim}(e)}(x_i)$, and the output parameter is adjusted using the following optimization problem:
\[ V_L^\beta = \max_{\beta_L} E\left[ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\left( e_{L-1} - \beta_L g_L^{\mathrm{sim}(e)} \right)^2}{2\sigma^2} \right) \right]. \]
The expectation can be approximated on the training samples:
\[ \hat{V}_L^\beta = \max_{\beta_L} \frac{1}{N \sqrt{2\pi}\,\sigma} \sum_{i=1}^{N} \exp\left( -\frac{\left( E_{(L-1)i} - \beta_L G_{Li}^{\mathrm{sim}(e)} \right)^2}{2\sigma^2} \right). \]
The constant term $\frac{1}{N \sqrt{2\pi}\,\sigma}$ can be removed and the following problem can be solved instead of $\hat{V}_L^\beta$:
\[ \hat{U}_L^\beta = \max_{\beta_L} \sum_{i=1}^{N} \exp\left( -\frac{\left( E_{(L-1)i} - \beta_L G_{Li}^{\mathrm{sim}(e)} \right)^2}{2\sigma^2} \right). \]
Similar to the previous step, the half-quadratic method is employed to adjust the output parameter. Based on Proposition 1, we obtain
\[ u_L^\beta\left( \gamma, \beta_L \right) = \max_{\beta_L, \gamma} \sum_{i=1}^{N} \left( \gamma_i \frac{\left( E_{(L-1)i} - \beta_L G_{Li}^{\mathrm{sim}(e)} \right)^2}{2\sigma^2} - \phi\left( \gamma_i \right) \right). \]
The local solution of the above optimization problem is adjusted using the following iterative process:
\[ \begin{aligned} \gamma_i^{t+1} &= k_\sigma\left( E_{Li} - \beta_L^{t} G_{Li}^{\mathrm{sim}(e)} \right), \\ \beta_L^{t+1} &= \arg\max_{\beta_L} \sum_{i=1}^{N} \gamma_i^{t+1} \frac{\left( E_{Li} - \beta_L G_{Li}^{\mathrm{sim}(e)} \right)^2}{2\sigma^2}, \end{aligned} \]
i.e., the following optimization problem is required to be solved in each iteration:
\[ V_L^\beta\left( \gamma, \beta_L \right) = \max_{\beta_L} \sum_{i=1}^{N} \gamma_i^{t+1} \frac{\left( E_{Li} - \beta_L G_{Li}^{\mathrm{sim}(e)} \right)^2}{2\sigma^2} = \max_{\beta_L} \frac{1}{2\sigma^2} \left( E_L - \beta_L G_L^{\mathrm{sim}(e)} \right) \Theta \left( E_L - \beta_L G_L^{\mathrm{sim}(e)} \right)^T, \]
where $\Theta$ is a diagonal matrix with $\Theta_{ii} = \gamma_i$, $i = 1, \ldots, N$. Expanding the quadratic form gives
\[ V_L^\beta\left( \gamma, \beta_L \right) = \max_{\beta_L} \frac{1}{2\sigma^2} \left( E_L \Theta E_L^T + \beta_L^2\, G_L^{\mathrm{sim}(e)} \Theta G_L^{\mathrm{sim}(e)T} - \beta_L E_L \Theta G_L^{\mathrm{sim}(e)T} - \beta_L G_L^{\mathrm{sim}(e)} \Theta E_L^T \right). \]
The optimum output weight is obtained by differentiating $V_L^\beta\left( \gamma, \beta_L \right)$ with respect to $\beta_L$:
\[ 2 \beta_L G_L^{\mathrm{sim}(e)} \Theta G_L^{\mathrm{sim}(e)T} - E_L \Theta G_L^{\mathrm{sim}(e)T} - G_L^{\mathrm{sim}(e)} \Theta E_L^T = 0 \ \Rightarrow \ \beta_L = \frac{E_L \Theta G_L^{\mathrm{sim}(e)T}}{G_L^{\mathrm{sim}(e)} \Theta G_L^{\mathrm{sim}(e)T}}. \]
Finally, the output weight is adjusted by the following iterative process:
\[ \begin{aligned} \gamma_i^{t+1} &= k_\sigma\left( E_{Li} - \beta_L^{t} G_{Li}^{\mathrm{sim}(e)} \right), \\ \beta_L^{t+1} &= \frac{E_L \Theta G_L^{\mathrm{sim}(e)T}}{G_L^{\mathrm{sim}(e)} \Theta G_L^{\mathrm{sim}(e)T}}. \end{aligned} \]
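A minimal sketch of this output-side iteration is given below. The constant kernel factor $\frac{1}{\sqrt{2\pi}\,\sigma}$ is dropped from the weights because it cancels in the ratio; the data and function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def output_side_hq(E_L, G_sim, sigma=1.0, iters=10):
    """Half-quadratic iteration for the output weight (sketch of Section 4.4.2):
    gamma_i proportional to k_sigma(E_i - beta * G_i), then the weighted closed form for beta_L."""
    beta = 0.0
    for _ in range(iters):
        r = E_L - beta * G_sim
        gamma = np.exp(-r ** 2 / (2.0 * sigma ** 2))   # kernel values; small for impulsive samples
        beta = (E_L * gamma) @ G_sim / ((G_sim * gamma) @ G_sim)
    return beta

rng = np.random.default_rng(0)
G_sim = rng.normal(size=200)
E_L = 1.5 * G_sim + rng.normal(scale=0.1, size=200)
E_L[:5] += 30.0                                        # impulsive noise on a few targets
print(output_side_hq(E_L, G_sim))                      # stays near 1.5 despite the outliers
```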
In these two phases, the parameters of the new node (the L-th added node, where $L \in \mathbb{N}$) are tuned and then fixed. This process is iterated for each new node until the predefined condition is satisfied. The following proposition demonstrates that, for each node, the algorithm converges.
Proposition 2.
The sequences $\left\{ \mathcal{L}^g\left( \alpha^t, W_L^t \right),\ t = 1, 2, \ldots \right\}$ and $\left\{ u_L^\beta\left( \gamma^t, \beta_L^t \right),\ t = 1, 2, \ldots \right\}$ converge.
Proof. 
From Theorem 5 and Proposition 1, we have $u_L^\beta\left( \gamma^t, \beta_L^t \right) \le u_L^\beta\left( \gamma^{t+1}, \beta_L^t \right) \le u_L^\beta\left( \gamma^{t+1}, \beta_L^{t+1} \right)$ and $\mathcal{L}^g\left( \alpha^t, W_L^t \right) \le \mathcal{L}^g\left( \alpha^{t+1}, W_L^t \right) \le \mathcal{L}^g\left( \alpha^{t+1}, W_L^{t+1} \right)$. Thus, the non-decreasing sequences $\left\{ u_L^\beta\left( \gamma^t, \beta_L^t \right) \right\}$ and $\left\{ \mathcal{L}^g\left( \alpha^t, W_L^t \right) \right\}$, $t = 1, 2, \ldots$, converge since the correntropy is upper bounded.    □
Proposition 3.
When Θ = I , the output weight that is adjusted by the correntropy criterion is equivalent to the output weight that is adjusted by the MSE-based method such as IELM.
Proof. 
Suppose that $\Theta = I$; then, by $\beta_L = \frac{E_L \Theta G_L^{\mathrm{sim}(e)T}}{G_L^{\mathrm{sim}(e)} \Theta G_L^{\mathrm{sim}(e)T}}$, we have
\[ \beta_L = \frac{E_L G_L^{\mathrm{sim}(e)T}}{G_L^{\mathrm{sim}(e)} G_L^{\mathrm{sim}(e)T}}. \]
   □
The training process of the proposed method is summarized in the following Algorithm 1 (C2N2).
Algorithm 1 C2N2
  • Input: training samples $\chi = \{(x_i, y_i)\}_{i=1}^{N}$
  • Output: optimal input and output weights $\beta_L$, $W_L$, $L = 1, \ldots, L_{max}$
  •    Initialization: maximum number of hidden nodes $L_{max}$, regularization term $C$, maximum input side and output side iterations $IT_1$, $IT_2$, error $E_0 = [y_1, \ldots, y_N]$.
  •    For $L = 1 : L_{max}$
  •      Step 1: Calculate $H_L$ and $X$
  •        For $k = 1 : IT_1$
  •         Update the input parameters
  •        End
  •      Step 2: Calculate the hidden node vector $G_L^{sim(e)}$ using the previous step
  •        For $k = 1 : IT_2$
  •         Update the output weight
  •        End
  •       Update the error as $E_L = E_{L-1} - \beta_L^{sim(e)} G_L^{sim(e)}$
  • End
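For completeness, a high-level sketch of this training loop is shown below. It reuses the illustrative helpers sketched in Sections 4.4.1 and 4.4.2 (rescale_error, input_side_hq, output_side_hq, all our own names), and it stops only at $L_{max}$, whereas the algorithm above also allows an accuracy-based stopping condition.

```python
import numpy as np

def c2n2_train(X, y, L_max=8, lam=0.5, C=0.5, sigma=1.0, it1=10, it2=10):
    """High-level sketch of Algorithm 1 (C2N2), assuming the helper routines
    rescale_error, input_side_hq and output_side_hq defined earlier."""
    N = X.shape[0]
    X_aug = np.hstack([X, np.ones((N, 1))])        # X_i = [x_i, 1]
    E = y.copy()                                   # E_0 = [y_1, ..., y_N]
    nodes = []
    for _ in range(L_max):
        H_L, _ = rescale_error(E, lam)                         # Step 1: targets for the input side
        W = input_side_hq(X_aug, H_L, C, sigma, it1)           # input parameters (w_L, b_L)
        G_sim = np.tanh(X_aug @ W)                             # G_L^{sim(e)} on the training samples
        beta = output_side_hq(E, G_sim, sigma, it2)            # Step 2: output weight beta_L
        E = E - beta * G_sim                                   # residual update
        nodes.append((W, beta))
    return nodes, E
```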
Remark 1.
The auxiliary variables $\gamma_i$ and $\alpha_i$, $i = 1, \ldots, N$, are utilized to reduce the effect of noisy data. For samples with a large error, these variables are very small; thus, such samples have only a slight effect on the optimization of the network parameters, which results in a more robust network.

5. Experimental Results

This section compares C2N2 with RBF, CCN and the other constructive networks presented in [3]. The networks whose hidden nodes' input parameters are trained by the objective functions $V_1$–$V_6$ introduced in [3] are denoted by $N_1, \ldots, N_6$. In addition to the mentioned methods, the proposed method is compared to state-of-the-art constructive networks such as the orthogonal least square cascade network (OLSCN) [4] and the one hidden layer constructive network (OHLCN) introduced in [5]. Moreover, C2N2 is also compared with state-of-the-art robust learning methods, including the Multi-Layer Perceptron based on MCC (MLPMCC) [17], the Multi-Layer Perceptron based on Minimum Error Entropy (MLPMEE) [17], the Robust Least Square Support Vector Machine (RLS-SVM) [35] and the recent work, CCOEN [28].
The rest of this section is organized as follows. Section 5.1 describes a framework for the experiments. The presented theorem and the hyperparameters $L$, $C$ and $\lambda$ are investigated in Section 5.2. In Section 5.3, the presented method is compared to $N_1$–$N_6$, CCN, RBF and some state-of-the-art constructive networks, including OHLCN and OLSCN. Experiments are performed on several synthetic and benchmark datasets that are contaminated with impulsive noise (one of the most common types of non-Gaussian noise). In this part, experiments are also performed in the absence of impulsive noise. Section 5.4 discusses the results, and Section 5.5 compares the proposed method with state-of-the-art robust learning methods, including MLPMEE, MLPMCC, RLS-SVM and CCOEN, on various types of datasets.

5.1. Framework for Experiments

This part presents a framework for the experiments. The framework includes the type of activation function for C2N2 and other mentioned methods, type of kernel, kernel parameters ( σ ) , range of hyperparameters L , C , λ and dataset specification.

5.1.1. Activation Function and Kernel

For the proposed method, the tangent hyperbolic activation function is used. It is represented as follows (see Figure 3):
\[ \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}. \]
For the networks $N_1$–$N_6$, CCN, OLSCN, OHLCN, MLPMCC and MLPMEE, the sigmoid activation function is used (see Figure 4).
For the proposed method and RLS-SVM, RBF kernel is used. It is shown as
\[ K(X, Y) = \exp\left( -\frac{\|X - Y\|^2}{2\sigma^2} \right). \]
In the experiments, the optimum kernel parameter $\sigma$ is selected from the set $\{0.1, 0.5, 1, 10, 15\}$.

5.1.2. Hyperparameters

The method has three hyperparameters. These parameters help to avoid over- or underfitting, which improves performance. The first parameter is the number of hidden nodes $L$. The optimum number of hidden nodes is selected from the set $\{1, \ldots, 8\}$. Due to the boundedness of the tanh function $(-1 < \tanh(x) < 1)$, the error signal must be scaled into this range. Thus, $\lambda$ should be selected from the set $\{-0.9, \ldots, -0.1, 0.1, 0.2, \ldots, 0.9\}$. Figure 5 shows that the accuracy is symmetric with respect to $\lambda$; thus, $\lambda$ can be selected from the set $\{0.1, \ldots, 0.9\}$. The possible range for $C$ is investigated in the next part.

5.1.3. Data Normalization

In this paper, the input vector of the data samples is normalized into the range $[-1, 1]$. For regression datasets, the targets are normalized into the range $[0, 1]$.
In this paper, most of the datasets are taken from the UCI Machine Learning Repository [36] and Statlib [37]. These datasets are specified in Table 1 and Table 2.

5.2. Convergence

This part investigates the convergence of the proposed method (Theorem 6), followed by an investigation of the hyperparameters.

5.2.1. Investigation of Theorem 6

The main goal of this paper is to maximize the correntropy of the error function. Regarding the kernel definition and due to the maximization of correntropy, the approximator (the output of the neural network, $f_L$) has the most similarity to the target function $f$. Theorem 6 proves that the proposed method obtains the optimal solution, i.e., the correntropy of the error function is maximized. This part investigates the convergence of the proposed method. In this experiment, the kernel parameter is set to 10; thus, the optimum value for the correntropy is $V(e = 0) = V_{\max} = 0.0399$. Figure 6 shows the convergence of C2N2 to the optimum value in the approximation of the sinc function.
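As a quick sanity check of this number (our own arithmetic, using $V_{\max} = \frac{1}{\sqrt{2\pi}\,\sigma}$ from Theorem 6 with $\sigma = 10$):
\[ V_{\max} = \frac{1}{\sqrt{2\pi} \cdot 10} \approx \frac{1}{25.07} \approx 0.0399. \]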

5.2.2. Hyperparameter Evaluation

To evaluate the parameters $C$ and $\lambda$, C2N2 with only one hidden node was tested on the diabetes dataset. Figure 7 shows that the best value for parameter $C$ lies in the range $[0, 1]$. Thus, in the experiments, parameter $C$ is selected from the set $\{0.05, 0.15, 0.25, \ldots, 0.95\}$.

5.3. Comparison

This part compares the proposed method with the networks $N_1, \ldots, N_6$, CCN, OHLCN, OLSCN and RBF in the presence and absence of non-Gaussian noise. One of the worst types of non-Gaussian noise is impulsive noise.
(For $\lambda = 0$, the accuracy is set to zero.) This type of noise adversely affects the performance of MSE-based methods such as the networks $N_1$–$N_6$, CCN, OHLCN and OLSCN. In this part and in Section 5.5, we perform experiments similar to [4]. We calculated the RMSE (classification accuracy) on the testing dataset after each hidden unit was added and reported the lowest (highest) RMSE (accuracy) along with the corresponding network size. Similar to [4], experiments were carried out in 20 trials and the results (RMSE (accuracy) and number of nodes) averaged over 20 trials are listed in Table 3, Table 4, Table 5 and Table 6.
In all result tables, the best results are shown in bold and underlined. The results that are close to the best ones are in bold.

5.3.1. Synthetic Dataset (Sinc Function)

Figure 8 compares C2N2 with RBF and the network N1 in the approximation of the Sinc function. In Figure 9, the experiment is performed in the presence of impulsive noise.

5.3.2. Other Synthetic Dataset

The following regression problems are used to evaluate the performance of the proposed method in comparison to the networks N 1 , N 2 and N 3 .
\[ f^{(1)}\left( x_1, x_2 \right) = 10.391\left( \left( x_1 - 0.4 \right)\left( x_2 - 0.6 \right) + 0.36 \right), \qquad f^{(2)}\left( x_1, x_2 \right) = 24.234\left( r^2\left( 0.75 - r^2 \right) \right), \]
where
\[ r = \sqrt{\left( x_1 - 0.5 \right)^2 + \left( x_2 - 0.5 \right)^2}, \qquad f^{(3)}\left( x_1, x_2 \right) = 42.659\left( 0.1 + r_1\left( 0.05 + r_1^4 - 10 r_1^2 r_2^2 + 5 r_2^4 \right) \right), \]
with $r_1 = x_1 - 0.5$ and $r_2 = x_2 - 0.5$, and
\[ f^{(4)}\left( x_1, x_2 \right) = 1.3365\left( 1.5\left( 1 - x_1 \right) + e^{2 x_1 - 1} \sin\left( 3\pi\left( x_1 - 0.6 \right)^2 \right) + e^{3\left( x_2 - 0.5 \right)} + \sin\left( 4\pi\left( x_2 - 0.9 \right)^2 \right) \right). \]
For each of the above functions, 225 pairs $\left( x_1, x_2 \right)$ are generated randomly in the interval $[0, 1]$. For each piece of training data, its target is assigned as
\[ y_j^i = f_l^{(j)}\left( x_{i1}, x_{i2} \right) = f^{(j)}\left( x_{i1}, x_{i2} \right) + \eta_i^l, \quad i = 1, \ldots, 225, \ j = 1, \ldots, 4 \ \text{and} \ l = 0, 1, \]
where $\eta_i^l$ is the noise that is added to the target of the data samples. In this section, the index of the noise $(l)$ is
\[ l = \begin{cases} 0, & \text{without noise}, \\ 1, & \eta_i^1 = \text{impulse noise}. \end{cases} \]
For impulse noise, the outputs of five data samples are changed by extra high values using the uniform distribution.

5.4. Discussion

5.4.1. Discussion on Table 7

Table 7 compares C2N2 with the networks $N_1$, $N_2$ and $N_3$. From the table, we can see that in the absence of noise, the proposed method outperforms the other methods on the datasets $f_0^{(2)}$, $f_0^{(3)}$ and $f_0^{(4)}$. For the dataset $f_0^{(1)}$, the best result is obtained by the network $N_2$. We added impulsive noise to the datasets and again performed the experiments. From Table 7, we can see that the proposed method is more stable when data are contaminated with impulsive noise. For example, for the dataset $f_0^{(2)}$, the RMSEs of C2N2 and $N_1$ to $N_3$ are close. However, in the presence of impulsive noise, for the dataset $f_1^{(2)}$, the RMSEs for C2N2 and $N_1$ to $N_3$ are 0.2487, 0.3676, 0.2790 and 0.3226, respectively. This means that noisy data samples have less effect on the proposed method in comparison to the other mentioned methods, and the proposed method is more stable. The goal of any learning method is to increase performance; thus, in this paper, we focus on RMSE. However, from the architecture viewpoint, the proposed method has a smaller number of nodes on 50% of the datasets.
Table 7. Performance comparison of C2N2 and the networks N 1 ,   N 2 and N 3 : synthetic regression dataset.
| Datasets | C2N2 (Testing RMSE / #N / Time (s)) | $N_1$ (Testing RMSE / #N / Time (s)) | $N_2$ (Testing RMSE / #N / Time (s)) | $N_3$ (Testing RMSE / #N / Time (s)) |
|---|---|---|---|---|
| $f_0^{(1)}$ | 0.1358 / 3.70 / 0.69 | 0.1555 / 4.85 / 1.48 | 0.1284 / 6.30 / 1.78 | 0.1536 / 7.05 / 0.81 |
| $f_1^{(1)}$ | 0.1587 / 6.25 / 0.91 | 0.3244 / 3.70 / 0.44 | 0.2460 / 3.20 / 0.72 | 0.2750 / 4.40 / 2.40 |
| $f_0^{(2)}$ | 0.1963 / 2.70 / 1.09 | 0.1953 / 5.90 / 1.91 | 0.1953 / 4.55 / 1.57 | 0.1954 / 5.45 / 2.18 |
| $f_1^{(2)}$ | 0.2487 / 3.55 / 3.98 | 0.3676 / 5.25 / 1.08 | 0.2790 / 3.25 / 0.89 | 0.3226 / 5.65 / 1.16 |
| $f_0^{(3)}$ | 0.0892 / 3.15 / 1.05 | 0.0940 / 6.80 / 0.12 | 0.0943 / 3.30 / 1.34 | 0.0935 / 6.40 / 0.34 |
| $f_1^{(3)}$ | 0.1416 / 4.15 / 1.12 | 0.2949 / 4.15 / 0.64 | 0.2309 / 2.35 / 0.51 | 0.2555 / 4.35 / 0.57 |
| $f_0^{(4)}$ | 0.1136 / 2.10 / 1.81 | 0.1141 / 4.55 / 1.34 | 0.1139 / 8.10 / 2.02 | 0.1963 / 4.70 / 0.87 |
| $f_1^{(4)}$ | 0.1360 / 7.70 / 3.01 | 0.2928 / 1.65 / 1.54 | 0.2569 / 4.20 / 0.98 | 0.2961 / 4.85 / 0.45 |

5.4.2. Why Does C2N2 Reject Impulse Noise?

Regarding the optimization problems, for noisy (impulsive noise) data, the auxiliary variables $\alpha_i^t$, $\gamma_i^t$ are low. Thus, such data play a small role in the optimization problems, and the parameters of the new node are obtained based mostly on the noise-free data (Remark 1). Table 8 shows the auxiliary variables for several noisy and noise-free data samples.

5.4.3. Benchmark Dataset

In this part, several regression and classification datasets are contaminated with impulse noise. At this time, as in [38], we produce impulsive noise by generating random real numbers from the following distribution function, and then we add them to data samples:
\[ \eta = 0.95\, N\left( 0, 10^{-4} \right) + 0.05\, N\left( 0, 10 \right), \]
where N ( μ , σ ) is a Gaussian distribution function with the mean μ and variance σ . For the regression dataset, we add noise to its target. For the classification dataset, we add noise to its input feature vector. Experiments on these datasets confirm the robustness of the proposed method in comparison with N 4 , , N 6 , CCN and RBF, OLSCN and OHLCN.
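The sketch below generates noise from this two-component Gaussian mixture; the function name, seed and sample size are our own illustrative choices, and, following the text, the second argument of $N(\mu, \sigma)$ is treated as a variance.

```python
import numpy as np

def impulsive_noise(n, seed=0):
    """Sample n values from 0.95*N(0, 1e-4) + 0.05*N(0, 10) (variances), as in the mixture above."""
    rng = np.random.default_rng(seed)
    impulsive = rng.random(n) < 0.05
    scale = np.where(impulsive, np.sqrt(10.0), np.sqrt(1e-4))   # standard deviations
    return rng.normal(0.0, scale)

eta = impulsive_noise(2000)
print(np.mean(np.abs(eta) > 1.0))   # roughly 0.05 * P(|N(0, 10)| > 1): rare but large spikes
```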

5.4.4. Discussion on Table 3

Table 3 compares C2N2 with the networks N 4 , N 5 , N 6 , CCN and RBF on the Autoprice, Baloon and Pyrim datasets in the presence and absence of impulsive noise. It shows that C2N2 is more stable than the networks N4 to N6, CCN and RBF in the presence of non-Gaussian noise. For example, for the Autoprice dataset, RMSEs for C2N2, N4-N6, CCN and RBF are 0.2689 , 0.2996 , 0.2689 , 0.3681 , 0.2775 and 0.2725 , respectively. After adding noise, RMSEs are 0.2770 , 0.5521 , 0.3610 , 0.4295 , 0.4768 and 0.9082 respectively. These results confirm that the proposed method is robust to impulsive noise in comparison to the mentioned methods.

5.4.5. Discussion on Table 4

This table compares C2N2 with the networks N 4 , N 5 , N 6 , CCN and RBF on the Ionosphere, Colon, Leukemia and Dimdata datasets in the presence of impulsive noise. It shows that C2N2 outperforms the other mentioned methods on the Ionosphere, Colon and Leukemia datasets. For the dataset Dimdata, RBF outperforms the proposed method. However, the result is obtained with 1000 nodes for RBF and with an average of 3.5 nodes for C2N2.

5.4.6. Discussion on Table 5 and Table 6

These tables compare C2N2 with state-of-the-art constructive networks on the regression and classification datasets. They show that the best results are for C2N2. For the Housing dataset, RMSEs for C2N2, OLSCN and OHLCN are 0.0966 , 0.0988 and 0.0993 , respectively. In the presence of impulsive noise, RMSEs are 0.0978 , 0.2411 and 0.1824 . Thus, the proposed method has the best stability among the other mentioned state-of-the-art constructive networks when data samples are contaminated with impulsive noise. From the architecture viewpoint, OLSCN has the most compact architecture. However, it has the worst training time. Table 6 shows that C2N2 outperforms OLSCN and OHLCN in the presence of impulsive noise in the classification dataset.

5.4.7. Computational Complexity

Let $L$ be the maximum number of hidden units to be added to the network. For each newly added node, its input parameters are adjusted as specified in Section 4.4.1. The cost of computing the required matrix inverse of a $d \times d$ matrix is of order $d^3$. Thus, $h = O\left( L\, k\left( \min\left( d^3, N^3 \right) + N^2 \right) \right)$, where $k$ is a constant term (the number of iterations). Thus, we have
\[ h = O\left( L\left( \min\left( d^3, N^3 \right) + N^2 \right) \right). \]

5.5. Comparison

This part compares C2N2 with state-of-the-art robust learning methods on several benchmark datasets. These methods are the Robust Least Square SVM (RLS-SVM), MLPMEE and MLPMCC. As mentioned in Section 5.4, and similar to [4], in this part, experiments are performed in 20 trials and the average results of the RMSE (accuracy for classification) and the number of hidden nodes (#N) are reported in Table 9.

5.5.1. Discussion on Table 9

From the table, we can see that for Pyrim, Pyrim (noise) and Baskball (noise), C2N2 clearly outperforms the other robust methods in terms of RMSE. For Bodyfat and Bodyfat (noise), C2N2 slightly outperforms the other methods in terms of RMSE. Thus, to compare them, we need to check the number of nodes and the training times. For both datasets, C2N2 has fewer nodes. In the presence of noise, C2N2 has a better training time in comparison to RLS-SVM.
Thus, the proposed method outperforms the other methods for these two datasets. Among these six datasets, RLS-SVM only outperforms C2N2 on one dataset, i.e., Baskball; however, it has a worse training time and more nodes. It can be seen that among the robust methods, the proposed method has the most compact architecture.

5.5.2. Discussion on Table 10

This table compares the recent work, CCOEN, with the proposed method on three datasets in the presence and absence of noise. According to the table, the proposed method outperforms CCOEN in all cases except the Cloud dataset in the presence of noise, where CCOEN has a slightly better performance with more hidden nodes. From the architecture viewpoint, the proposed method has fewer nodes than CCOEN in most cases. Therefore, correntropy with the Gaussian kernel provides better results in comparison to the sigmoid kernel.
Table 10. Performance comparison of C2N2 and the recent work CCOEN: benchmark regression dataset.
| Dataset | C2N2 Testing RMSE | C2N2 #Nodes | CCOEN Testing RMSE | CCOEN #Nodes |
|---|---|---|---|---|
| Abalone | 0.075 | 6 | 0.090 | 8.8 |
| Abalone (noise) | 0.079 | 2.6 | 0.091 | 7.8 |
| Cleveland | 0.061 | 4.2 | 0.791 | 6.1 |
| Cleveland (noise) | 0.066 | 2 | 0.821 | 8.5 |
| Cloud | 0.277 | 4.2 | 0.293 | 4.7 |
| Cloud (noise) | 0.302 | 5.6 | 0.290 | 4.8 |

6. Conclusions

In this paper, a new constructive feedforward network is presented that is robust to non-Gaussian noises. Most other existing constructive networks are trained based on the mean square error (MSE) objective function and consequently perform poorly in the presence of non-Gaussian noises, especially impulsive noise. Correntropy is a local similarity measure between two random variables and has been successfully used as the objective function for training adaptive systems such as adaptive filters. In this paper, this objective function with a Gaussian kernel is utilized to adjust the input and output parameters of the newly added node in a constructive network. It is proved that the new node obtained from the input side optimization is not orthogonal to the residual error of the network. Based on this fact, the correntropy of the residual error converges to its optimum when the error and the activation function are continuous random variables in the $L_2(\Omega, F, P)$ space, where the triple $(\Omega, F, P)$ is a probability space. During training on datasets, the functional form of the error is not available; thus, we provide a method to adjust the input and output parameters of the new node from training data samples. The auxiliary variables that appear in the input and output side optimization problems decrease the effect of non-Gaussian noises. For example, for impulsive noise, these variables are close to zero; thus, such data samples play little role in optimizing the parameters of the network. For MSE-based constructive networks, the data samples that are contaminated by impulsive noise have a large role in optimizing the parameters of the network, and consequently, the network is not robust. The experiments were performed on several synthetic and benchmark datasets. For the synthetic datasets, the experiments were performed in the presence and absence of impulsive noise. We saw that for the datasets that are contaminated by impulsive noise, the proposed method has significantly better performance than the state-of-the-art MSE-based constructive networks. For the other synthetic and benchmark datasets, in most cases, the proposed method has satisfactory performance in comparison to the MSE-based constructive networks and the radial basis function (RBF) network. Furthermore, C2N2 was compared with state-of-the-art robust learning methods such as MLPMEE, MLPMCC, the robust version of the Least Square Support Vector Machine and CCOEN. The reported performances are obtained with compact architectures because the input parameters are optimized. We also see that correntropy with the Gaussian kernel provides better results in comparison to correntropy with the sigmoid kernel.
The use of the correntropy-based function introduced in this research may also benefit networks with other architectures toward enhancing the generalization performance and robustness level. In the context of further research, the validity of similar results can be verified for various classes of neural networks. In addition, since impulsive noise is one of the worst cases of non-Gaussian noise, it can be expected that a different non-Gaussian noise will yield a result between clean data and data with impulsive noise. This should be verified in further experiments.
It is also necessary to point out here other novel modern avenues and similar research directions. For example, ref. [39] delves into modal regression, presenting a statistical learning perspective that could enrich the discussion on learning algorithms and their efficiency in different noise conditions. In particular, it points out that correntropy-based regression can be interpreted from a modal regression viewpoint when the tuning parameter goes to zero. At the same time, [40] depicts a big picture of correntropy-based regression by showing that with different choices of the tuning parameter, correntropy-based regression learns a location function.
Correntropy not only has inferential properties that could be used for neural network analysis, but another approach could be, for example, cross-sample entropy-based techniques. One such direction was shown to be effective in [41] with reported results of simulation on exchange market datasets.
Finally, it is also worth mentioning that the choice of the algorithm applied for optimizing the objective functions can influence the results. The usage of non-smooth methodology focusing on bundle-based algorithms [42] as a possible efficient tool in machine learning and neural network analysis can also be tested.

Author Contributions

All authors contributed to the paper. The experimental part was mainly done by the first author, M.N. Conceptualization was done by M.R. and H.S.Y. Verification and final editing of the manuscript were done by Y.N., A.M. and M.M.M., who played the role of research director. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data and code can be requested from the first author.

Acknowledgments

The authors would like to express their thanks to G.A. Hodtani and Ehsan Shams Davodly for their constructive remarks and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
C2N2    Correntropy-based constructive neural network
MCC    Maximum correntropy criterion
MSE    Mean square error
MEE    Minimum Error Entropy
EMSE    Excess mean square error
OLS    Orthogonal least square
CCOEN    Cascade correntropy network
MLPMEE    Multi-Layer Perceptron based on Minimum Error Entropy
MLPMCC    Multi-Layer Perceptron based on correntropy
RLS-SVM    Robust Least Square Support Vector Machine
FFN    Feedforward network
RBF    Radial basis function
ITL    Information theoretic learning
CCN    Cascade correlation network
OHLCN    One hidden layer constructive adaptive neural network

Appendix A

Appendix A.1. Proof of Lemma 1

Proof. 
Similar to [28], let $k_L = \frac{\|e_{L-1}\|}{\|g_L^*\|}\,\gamma$, $\gamma \in \mathbb{R}\setminus\{0\}$, where $g_L^* \in G$ satisfies
$$\left|\cos\theta_{e_{L-1},\,g_L^*}\right| \ge \left|\cos\theta_{e_{L-1},\,g_L}\right|, \quad \forall g_L \in G.$$
Thus,
$$\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}-k_L g_L\|^2}{2\sigma^2}\right)
= \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2 + \frac{\|e_{L-1}\|^2\gamma^2}{\|g_L^*\|^2}\,\|g_L\|^2 - 2\gamma\,\frac{\|e_{L-1}\|^2\|g_L\|}{\|g_L^*\|}\cos\theta_{e_{L-1},\,g_L}}{2\sigma^2}\right).$$
Let $A = \frac{\|g_L\|}{\|g_L^*\|}$; we have
$$\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}-k_L g_L\|^2}{2\sigma^2}\right)
= \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + A^2\gamma^2 - 2\gamma A\cos\theta_{e_{L-1},\,g_L}\right)}{2\sigma^2}\right).$$
We need to prove that $k_L \in \mathbb{R}\setminus\{0\}$ (equivalently, $\gamma \neq 0$) exists such that
$$\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + \gamma^2 - 2\gamma\cos\theta_{e_{L-1},\,g_L^*}\right)}{2\sigma^2}\right)
\ge \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + A^2\gamma^2 - 2\gamma A\cos\theta_{e_{L-1},\,g_L}\right)}{2\sigma^2}\right), \quad \forall g_L \in G.$$
Suppose that the inequality above does not hold, i.e., for every $\gamma \in \mathbb{R}\setminus\{0\}$,
$$2\gamma\left(\cos\theta_{e_{L-1},\,g_L^*} - A\cos\theta_{e_{L-1},\,g_L}\right) \le \gamma^2\left(1 - A^2\right), \quad \forall g_L \in G.$$
Two possible cases may occur.
1. $\cos\theta_{e_{L-1},\,g_L^*} \ge 0$.
1.1. If $\gamma > 0$, we have
$$2\gamma\cos\theta_{e_{L-1},\,g_L^*}\,(1 - A) \le 2\gamma\left(\cos\theta_{e_{L-1},\,g_L^*} - A\cos\theta_{e_{L-1},\,g_L}\right) \le \gamma^2\left(1 - A^2\right), \quad \forall g_L \in G.$$
Again, there are two possible conditions. First, $(1 - A) \ge 0$. Then
$$2\cos\theta_{e_{L-1},\,g_L^*} \le \gamma\,(1 + A), \quad \forall g_L \in G.$$
Let $\gamma = \frac{\cos\theta_{e_{L-1},\,g_L^*}}{1 + A}$; then $2\cos\theta_{e_{L-1},\,g_L^*} \le \cos\theta_{e_{L-1},\,g_L^*}$, i.e., $\cos\theta_{e_{L-1},\,g_L^*} \le 0$. From the case assumption and the above inequality, we have $\cos\theta_{e_{L-1},\,g_L^*} = 0$, which contradicts the fact that $\mathrm{span}(G)$ is dense in $L_2(\Omega, F, P)$. Second, $(1 - A) \le 0$. Then
$$2\cos\theta_{e_{L-1},\,g_L^*} \ge \gamma\,(1 + A), \quad \forall g_L \in G.$$
Let $\gamma = \frac{3\cos\theta_{e_{L-1},\,g_L^*}}{1 + A}$; then $2\cos\theta_{e_{L-1},\,g_L^*} \ge 3\cos\theta_{e_{L-1},\,g_L^*}$, i.e., $\cos\theta_{e_{L-1},\,g_L^*} \le 0$, and hence $\cos\theta_{e_{L-1},\,g_L^*} = 0$, which again contradicts the fact that $\mathrm{span}(G)$ is dense in $L_2(\Omega, F, P)$.
2. $\cos\theta_{e_{L-1},\,g_L^*} \le 0$.
2.1. If $\gamma < 0$, we have
$$2\gamma\cos\theta_{e_{L-1},\,g_L^*}\,(1 - A) \le 2\gamma\left(\cos\theta_{e_{L-1},\,g_L^*} - A\cos\theta_{e_{L-1},\,g_L}\right) \le \gamma^2\left(1 - A^2\right), \quad \forall g_L \in G,$$
and, dividing by $\gamma < 0$,
$$2\cos\theta_{e_{L-1},\,g_L^*}\,(1 - A) \ge \gamma\left(1 - A^2\right), \quad \forall g_L \in G.$$
Again, there are two possible conditions. First, $(1 - A) \le 0$. Then
$$2\cos\theta_{e_{L-1},\,g_L^*} \le \gamma\,(1 + A), \quad \forall g_L \in G.$$
Let $\gamma = \frac{3\cos\theta_{e_{L-1},\,g_L^*}}{1 + A}$; then $2\cos\theta_{e_{L-1},\,g_L^*} \le 3\cos\theta_{e_{L-1},\,g_L^*}$, i.e., $\cos\theta_{e_{L-1},\,g_L^*} \ge 0$. From the case assumption, we have $\cos\theta_{e_{L-1},\,g_L^*} = 0$, which contradicts the fact that $\mathrm{span}(G)$ is dense in $L_2(\Omega, F, P)$. Second, $(1 - A) \ge 0$. Then
$$2\cos\theta_{e_{L-1},\,g_L^*} \ge \gamma\,(1 + A), \quad \forall g_L \in G.$$
Let $\gamma = \frac{\cos\theta_{e_{L-1},\,g_L^*}}{1 + A}$; then $2\cos\theta_{e_{L-1},\,g_L^*} \ge \cos\theta_{e_{L-1},\,g_L^*}$, i.e., $\cos\theta_{e_{L-1},\,g_L^*} \ge 0$, hence $\cos\theta_{e_{L-1},\,g_L^*} = 0$, and this again contradicts the fact that $\mathrm{span}(G)$ is dense in $L_2(\Omega, F, P)$.
Based on the above arguments, a real number $\gamma \neq 0$ exists such that the following inequality holds:
$$\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + \gamma^2 - 2\gamma\cos\theta_{e_{L-1},\,g_L^*}\right)}{2\sigma^2}\right)
\ge \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + A^2\gamma^2 - 2\gamma A\cos\theta_{e_{L-1},\,g_L}\right)}{2\sigma^2}\right), \quad \forall g_L \in G.$$
Thus, from Theorem 4, we have
$$E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + \gamma^2 - 2\gamma\cos\theta_{e_{L-1},\,g_L^*}\right)}{2\sigma^2}\right)\right]
\ge E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + A^2\gamma^2 - 2\gamma A\cos\theta_{e_{L-1},\,g_L}\right)}{2\sigma^2}\right)\right], \quad \forall g_L \in G.$$
Thus, there exists a real number $k_L \in \mathbb{R}\setminus\{0\}$ such that
$$E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1} - k_L g_L^*\|^2}{2\sigma^2}\right)\right]
\ge E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1} - k_L g_L\|^2}{2\sigma^2}\right)\right], \quad \forall g_L \in G,$$
i.e., $V(g_L^*) \ge V(g_L)$ for all $g_L \in G$. This means that $g_L^{\mathrm{sim}(e)} = g_L^*$, and this completes the proof. □
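As a numerical sanity check of the norm expansion used in the proof above (the numbers below are illustrative and are not taken from the paper), let
$$\|e_{L-1}\| = 1,\qquad \|g_L\| = 2,\qquad \|g_L^*\| = 4,\qquad \cos\theta_{e_{L-1},\,g_L} = 0.6,\qquad \gamma = 1,$$
so that $A = \|g_L\|/\|g_L^*\| = 0.5$ and $k_L = \gamma\,\|e_{L-1}\|/\|g_L^*\| = 0.25$. Expanding directly,
$$\|e_{L-1} - k_L g_L\|^2 = \|e_{L-1}\|^2 + k_L^2\|g_L\|^2 - 2k_L\|e_{L-1}\|\,\|g_L\|\cos\theta_{e_{L-1},\,g_L} = 1 + 0.25 - 0.6 = 0.65,$$
which coincides with $\|e_{L-1}\|^2\left(1 + A^2\gamma^2 - 2\gamma A\cos\theta_{e_{L-1},\,g_L}\right) = 1\cdot(1 + 0.25 - 0.6) = 0.65$, confirming the form of the exponent used throughout the proof.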

Appendix A.2. Proof of Theorem 6

Proof. 
Inspired by [3], the proof of this theorem is divided into two parts: first, we prove that the correntropy of the network strictly increases after adding each hidden node, and then we prove that the supremum of the correntropy of the network is $V_{\max} = \frac{1}{\sqrt{2\pi}\,\sigma}$.
Step 1: The correntropies of an SLFN with $L-1$ and $L$ hidden nodes are
$$V(e_{L-1}) = E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{e_{L-1}^2}{2\sigma^2}\right)\right],
\qquad
V(e_L) = E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{e_L^2}{2\sigma^2}\right)\right]
= E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\left(e_{L-1} - \beta_L^{\mathrm{sim}(e)}\, g_L^{\mathrm{sim}(e)}\right)^2}{2\sigma^2}\right)\right],$$
respectively. In the following, it is proved that $k_L$ and $g_L \in G$ exist such that
$$V(e_L) > V(e_{L-1}).$$
Let
$$k_L = \eta\,\|e_{L-1}\|, \quad \eta \in \mathbb{R}\setminus\{0\};$$
then we have
$$V(g_L) = E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\left\|e_{L-1} - \eta\,\|e_{L-1}\|\,g_L\right\|^2}{2\sigma^2}\right)\right]
= E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{U}{2\sigma^2}\right)\right],$$
where
$$U = \|e_{L-1}\|^2 + \eta^2\|e_{L-1}\|^2\|g_L\|^2 - 2\eta\,\|e_{L-1}\|^2\|g_L\|\cos\theta_{e_{L-1},\,g_L}.$$
Thus,
$$V(g_L) = E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + \eta^2\|g_L\|^2 - 2\eta\,\|g_L\|\cos\theta_{e_{L-1},\,g_L}\right)}{2\sigma^2}\right)\right].$$
We need to prove that $g_L \in G$ and $\eta \in \mathbb{R}\setminus\{0\}$ exist such that
$$V(g_L) > V(e_{L-1}).$$
Suppose that there are no $\eta \in \mathbb{R}\setminus\{0\}$ and $g_L \in G$ such that this holds, i.e.,
$$\exp\!\left(-\frac{\|e_{L-1}\|^2\left(1 + \eta^2\|g_L\|^2 - 2\eta\,\|g_L\|\cos\theta_{e_{L-1},\,g_L}\right)}{2\sigma^2}\right) \le \exp\!\left(-\frac{\|e_{L-1}\|^2}{2\sigma^2}\right),
\quad \forall g_L \in G,\ \forall \eta \in \mathbb{R}\setminus\{0\}.$$
Then the following inequality holds:
$$\exp\!\left(-\frac{\|e_{L-1}\|^2\left(\eta^2\|g_L\|^2 - 2\eta\,\|g_L\|\cos\theta_{e_{L-1},\,g_L}\right)}{2\sigma^2}\right) \le \exp(0),
\quad \forall g_L \in G,\ \forall \eta \in \mathbb{R}\setminus\{0\},$$
i.e.,
$$\eta^2\|g_L\|^2 - 2\eta\,\|g_L\|\cos\theta_{e_{L-1},\,g_L} \ge 0, \quad \forall g_L \in G,\ \forall \eta \in \mathbb{R}\setminus\{0\}.$$
Let $\eta = \frac{\cos\theta_{e_{L-1},\,g_L}}{\|g_L\|}$; then
$$\cos^2\theta_{e_{L-1},\,g_L} - 2\cos^2\theta_{e_{L-1},\,g_L} \ge 0, \quad \forall g_L \in G,$$
i.e., $\cos^2\theta_{e_{L-1},\,g_L} \le 0$. Thus, $\cos\theta_{e_{L-1},\,g_L} = 0$ for all $g_L \in G$, which is contradictory to $\mathrm{span}(G)$ being dense in $L_2(\Omega, F, P)$; thus, there exist $g_L \in G$ and $k_L \in \mathbb{R}\setminus\{0\}$ such that
$$\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1} - k_L g_L\|^2}{2\sigma^2}\right) > \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2}{2\sigma^2}\right).$$
Based on Theorem 4, the following inequality holds with probability one:
$$E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1} - k_L g_L\|^2}{2\sigma^2}\right)\right] > E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\|e_{L-1}\|^2}{2\sigma^2}\right)\right],$$
i.e., $E\!\left[k(e_{L-1} - k_L g_L)\right] > E\!\left[k(e_{L-1})\right]$ almost surely. Based on the above argument, with probability one we have $V(e_L) > V(e_{L-1})$. Based on Theorem 5, since the correntropy is strictly increasing, with probability one it converges to its supremum.
Step 2: We know that
$$V_L(\beta) = E\!\left[\exp\!\left(-\frac{\left(e_{L-1} - \beta_L\, g_L^{\mathrm{sim}(e)}\right)^2}{2\sigma^2}\right)\right]
= E\!\left[\sup_{\alpha \in \mathbb{R}}\left(\alpha\,\frac{\left(e_{L-1} - \beta_L\, g_L^{\mathrm{sim}(e)}\right)^2}{2\sigma^2} - \phi(\alpha)\right)\right],$$
and, according to Proposition 1, the supremum is attained at
$$\alpha = G\!\left(\frac{\left(e_{L-1} - \beta_L\, g_L^{\mathrm{sim}(e)}\right)^2}{2\sigma^2}\right),$$
so that
$$V_L^{\max}(\beta) = E\!\left[\alpha\,\frac{\left(e_{L-1} - \beta_L\, g_L^{\mathrm{sim}(e)}\right)^2}{2\sigma^2} - \phi(\alpha)\right].$$
Therefore, the optimum $\beta_L$ is
$$\beta_L = \frac{E\!\left[\gamma\, e_{L-1}\, g_L\right]}{E\!\left[\gamma\, g_L^2\right]}.$$
In the previous step, we showed that the correntropy converges; hence, the norm of the error converges and $\gamma$ converges to a constant term. In the case of constant $\gamma$, similar to [3], the error sequence constitutes a Cauchy sequence and, because the mentioned probability space is complete, $e^* \in L_2(\Omega, F, P)$. Therefore, $e_L \to e^*$ and
$$\lim_{L\to\infty}\frac{\left(E\!\left[e_{L-1}\, g_L\right]\right)^2}{E\!\left[g_L^2\right]} = 0.$$
Thus, similar to [3], since $\lim_{L\to\infty} E\!\left[e_{L-1}\, g\right] = 0$ for all $g \in G$, we have $E\!\left[e^* g\right] = 0$ for all $g \in G$, and hence $e^* = 0$. Based on Theorem 3 and $\lim_{L\to\infty} e_L = 0$, we have
$$\lim_{L\to\infty} E\!\left[k(e_L)\right] = E\!\left[\lim_{L\to\infty} k(e_L)\right]
= E\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\left(\lim_{L\to\infty} e_L\right)^2}{2\sigma^2}\right)\right]
= E\!\left[k(0)\right] = \frac{1}{\sqrt{2\pi}\,\sigma} = V_{\max}.$$
Based on Step 1 and Step 2, we have $\lim_{L\to\infty} V(e_L) = V_{\max}$ almost surely.
This completes the proof. □
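For completeness, the following snippet is a minimal sketch, under stated assumptions rather than the authors' code, of the sample version of the output-side update $\beta_L = E[\gamma\, e_{L-1}\, g_L]/E[\gamma\, g_L^2]$ derived above. The auxiliary variable $\gamma_i$ is taken to be the Gaussian kernel of the current residual (the constant $1/(\sqrt{2\pi}\,\sigma)$ cancels in the ratio), and the weights are recomputed at every iteration, which yields an iteratively reweighted least-squares update for the scalar output weight of the newly added node. The function name, the kernel width and the toy data are illustrative only.

```python
import numpy as np

def update_output_weight(residual, g_new, sigma=1.0, n_iter=20, tol=1e-8):
    """Fixed-point update of the output weight beta_L of a newly added hidden node.

    Sample version of beta_L = E[gamma * e_{L-1} * g_L] / E[gamma * g_L^2], where
    gamma_i = exp(-e_i^2 / (2*sigma^2)) is recomputed from the current residual
    e_i = e_{L-1,i} - beta_L * g_{L,i} at every iteration.
    """
    beta = 0.0
    for _ in range(n_iter):
        e = residual - beta * g_new                    # residual after the new node
        gamma = np.exp(-e ** 2 / (2.0 * sigma ** 2))   # auxiliary weights; outliers -> ~0
        beta_new = np.sum(gamma * residual * g_new) / np.sum(gamma * g_new ** 2)
        if abs(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta

# Toy usage: the residual is explained by g_new except for one impulsive outlier.
rng = np.random.default_rng(1)
g = rng.normal(size=200)
e_prev = 0.7 * g + rng.normal(scale=0.05, size=200)
e_prev[3] = 100.0                                      # impulsive-noise sample
print(update_output_weight(e_prev, g))                 # close to 0.7 despite the outlier
```

Because the weight of the impulsive-noise sample is numerically zero, the estimate stays close to the underlying coefficient, which is the behaviour that makes the correntropy-based update robust.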

References

  1. Erdogmus, D.; Principe, J.C. An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems. Signal Process. IEEE Trans. 2002, 50, 1780–1786. [Google Scholar] [CrossRef]
  2. Fahlman, S.E.; Lebiere, C. The cascade-correlation learning architecture. In Proceedings of the Advances in Neural Information Processing Systems 2, NIPS Conference, Denver, CO, USA, 27–30 November 1989; pp. 524–532. [Google Scholar]
  3. Kwok, T.-Y.; Yeung, D.-Y. Objective functions for training new hidden units in constructive neural networks. Neural Netw. IEEE Trans. 1997, 8, 1131–1148. [Google Scholar] [CrossRef] [PubMed]
  4. Huang, G.; Song, S.; Wu, C. Orthogonal least squares algorithm for training cascade neural networks. Circuits Syst. Regul. Pap. IEEE Trans. 2012, 59, 2629–2637. [Google Scholar] [CrossRef]
  5. Ma, L.; Khorasani, K. New training strategies for constructive neural networks with application to regression problems. Neural Netw. 2004, 17, 589–609. [Google Scholar] [CrossRef] [PubMed]
  6. Ma, L.; Khorasani, K. Constructive feedforward neural networks using Hermite polynomial activation functions. Neural Netw. IEEE Trans. 2005, 16, 821–833. [Google Scholar] [CrossRef] [PubMed]
  7. Reed, R. Pruning algorithms-a survey. Neural Netw. IEEE Trans. 1993, 4, 740–747. [Google Scholar] [CrossRef] [PubMed]
  8. Castellano, G.; Fanelli, A.M.; Pelillo, M. An iterative pruning algorithm for feedforward neural networks. Neural Netw. IEEE Trans. 1997, 8, 519–531. [Google Scholar] [CrossRef] [PubMed]
  9. Engelbrecht, A.P. A new pruning heuristic based on variance analysis of sensitivity information. Neural Netw. IEEE Trans. 2001, 12, 1386–1399. [Google Scholar] [CrossRef]
  10. Zeng, X.; Yeung, D.S. Hidden neuron pruning of multilayer perceptrons using a quantified sensitivity measure. Neurocomputing 2006, 69, 825–837. [Google Scholar] [CrossRef]
  11. Sakar, A.; Mammone, R.J. Growing and pruning neural tree networks. Comput. IEEE Trans. 1993, 42, 291–299. [Google Scholar] [CrossRef]
  12. Huang, G.-B.; Saratchandran, P.; Sundararajan, N. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. Neural Netw. IEEE Trans. 2005, 16, 57–67. [Google Scholar] [CrossRef]
  13. Huang, G.-B.; Saratchandran, P.; Sundararajan, N. An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. Syst. Man. Cybern. Part Cybern. IEEE Trans. 2004, 34, 2284–2292. [Google Scholar] [CrossRef] [PubMed]
  14. Wu, X.; Rozycki, P.; Wilamowski, B.M. A Hybrid Constructive Algorithm for Single-Layer Feedforward Networks Learning. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 1659–1668. [Google Scholar] [CrossRef] [PubMed]
  15. Santamaría, I.; Pokharel, P.P.; Principe, J.C. Generalized correlation function: Definition, properties, and application to blind equalization. Signal Process. IEEE Trans. 2006, 54, 2187–2197. [Google Scholar] [CrossRef]
  16. Liu, W.; Pokharel, P.P.; Príncipe, J.C. Correntropy: Properties and applications in non-Gaussian signal processing. Signal Process. IEEE Trans. 2007, 55, 5286–5298. [Google Scholar] [CrossRef]
  17. Bessa, R.J.; Miranda, V.; Gama, J. Entropy and correntropy against minimum square error in offline and online three-day ahead wind power forecasting. Power Syst. IEEE Trans. 2009, 24, 1657–1666. [Google Scholar] [CrossRef]
  18. Singh, A.; Principe, J.C. Using correntropy as a cost function in linear adaptive filters. In Proceedings of the 2009 International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 2950–2955. [Google Scholar]
  19. Shi, L.; Lin, Y. Convex Combination of Adaptive Filters under the Maximum Correntropy Criterion in Impulsive Interference. Signal Process. Lett. IEEE 2014, 21, 1385–1388. [Google Scholar] [CrossRef]
  20. Zhao, S.; Chen, B.; Principe, J.C. Kernel adaptive filtering with maximum correntropy criterion. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 2012–2017. [Google Scholar]
  21. Wu, Z.; Peng, S.; Chen, B.; Zhao, H. Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion. Entropy 2015, 17, 7149–7166. [Google Scholar] [CrossRef]
  22. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Principe, J.C. Convergence of a fixed-point algorithm under Maximum Correntropy Criterion. Signal Process. Lett. IEEE 2015, 22, 1723–1727. [Google Scholar] [CrossRef]
  23. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. Signal Process. Lett. IEEE 2014, 21, 880–884. [Google Scholar]
  24. Chen, L.; Qu, H.; Zhao, J.; Chen, B.; Principe, J.C. Efficient and robust deep learning with Correntropyinduced loss function. Neural Comput. Appl. 2015, 27, 1019–1031. [Google Scholar] [CrossRef]
  25. Singh, A.; Principe, J.C. A loss function for classification based on a robust similarity metric. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–6. [Google Scholar]
  26. Feng, Y.; Huang, X.; Shi, L.; Yang, Y.; Suykens, J.A. Learning with the maximum correntropy criterion induced losses for regression. J. Mach. Learn. Res. 2015, 16, 993–1034. [Google Scholar]
  27. Chen, B.; Príncipe, J.C. Maximum correntropy estimation is a smoothed MAP estimation. Signal Process. Lett. IEEE 2012, 19, 491–494. [Google Scholar] [CrossRef]
  28. Nayyeri, M.; Yazdi, H.S.; Maskooki, A.; Rouhani, M. Universal Approximation by Using the Correntropy Objective Function. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4515–4521. [Google Scholar] [CrossRef]
  29. Athreya, K.B.; Lahiri, S.N. Measure Theory and Probability Theory; Springer Science & Business Media: New York, NY, USA, 2006. [Google Scholar]
  30. Fournier, N.; Guillin, A. On the rate of convergence in Wasserstein distance of the empirical measure. Probab. Theory Relat. Fields 2015, 162, 707–738. [Google Scholar] [CrossRef]
  31. Leshno, M.; Lin, V.Y.; Pinkus, A.; Schocken, S. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 1993, 6, 861–867. [Google Scholar] [CrossRef]
  32. Yuan, X.-T.; Hu, B.-G. Robust feature extraction via information theoretic learning. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 1193–1200. [Google Scholar]
  33. Klenke, A. Probability Theory: A Comprehensive Course; Springer Science & Business Media: New York, NY, USA, 2013. [Google Scholar]
  34. Rudin, W. Principles of Mathematical Analysis; McGraw-Hill: New York, NY, USA, 1964; Volume 3. [Google Scholar]
  35. Yang, X.; Tan, L.; He, L. A robust least squares support vector machine for regression and classification with noise. Neurocomputing 2014, 140, 41–52. [Google Scholar] [CrossRef]
  36. Newman, D.; Hettich, S.; Blake, C.; Merz, C.; Aha, D. UCI Repository of Machine Learning Databases; Department of Information and Computer Science, University of California: Irvine, CA, USA, 1998; Available online: https://archive.ics.uci.edu/ (accessed on 29 November 2023).
  37. Meyer, M.; Vlachos, P. Statlib. 1989. Available online: https://lib.stat.cmu.edu/datasets/ (accessed on 29 November 2023).
  38. Pokharel, P.P.; Liu, W.; Principe, J.C. A low complexity robust detector in impulsive noise. Signal Process. 2009, 89, 1902–1909. [Google Scholar] [CrossRef]
  39. Feng, Y.; Fan, J.; Suykens, J.A. A Statistical Learning Approach to Modal Regression. J. Mach. Learn. Res. 2020, 21, 1–35. [Google Scholar]
  40. Feng, Y. New Insights into Learning with Correntropy-Based Regression. Neural Comput. 2021, 33, 157–173. [Google Scholar] [CrossRef]
  41. Ramirez-Parietti, I.; Contreras-Reyes, J.E.; Idrovo-Aguirre, B.J. Cross-sample entropy estimation for time series analysis: A nonparametric approach. Nonlinear Dyn. 2021, 105, 2485–2508. [Google Scholar] [CrossRef]
  42. Bagirov, A.; Karmitsa, N.; Mäkelä, M.M. Introduction to Nonsmooth Optimization: Theory, Practice and Software; Springer International Publishing: Cham, Switzerland; Heidelberg, Germany, 2014; Volume 12. [Google Scholar]
Figure 1. SLFN with additive nodes.
Figure 2. Constructive network in which the last added node is referred to by L.
Figure 3. Tangent hyperbolic function.
Figure 4. Sigmoid function.
Figure 5. Effect of λ on accuracy. Parameter C is set to 0.45. The experiment is performed on the diabetes dataset with the network with only one hidden node.
Figure 6. Convergence of the proposed method in the approximation of the Sinc function when σ = 10. It converges to V_max = 0.0399.
Figure 7. Effect of the hyperparameters C and λ on accuracy.
Figure 8. Comparison of C2N2 with RBF and the network N1. The experiment is performed on the approximation of the Sinc function.
Figure 9. Comparison of C2N2 with RBF and the network N1. The experiment is performed on the approximation of the Sinc function in the presence of impulsive noise.
Table 1. Specification of the regression problem.
Datasets | #Train | #Test | #Features
Baskball | 64 | 32 | 4
Strike | 416 | 209 | 6
Bodyfat | 168 | 84 | 14
Quake | 1452 | 726 | 3
Autoprice | 106 | 53 | 9
Baloon | 1334 | 667 | 2
Pyrim | 49 | 25 | 27
Housing | 337 | 169 | 13
Abalone | 836 | 3341 | 8
Cleveland | 149 | 148 | 13
Cloud | 54 | 54 | 7
Table 2. Specification of the classification problem.
Dataset | #Train | #Test | #Features
Ionosphere | 175 | 176 | 34
Australian Credit | 460 | 230 | 6
Diabetes | 512 | 256 | 8
Colon | 32 | 30 | 2000
Liver | 230 | 115 | 6
Leukemia | 36 | 36 | 7129
Dimdata | 1000 | 3192 | 14
Table 3. Performance comparison of C2N2 and the networks N4, N5, N6, CCN and RBF: benchmark regression dataset. Each cell reports Testing RMSE / #N (hidden nodes) / Time (s).
Datasets | C2N2 | N4 | N5 | N6 | CCN | RBF
Autoprice | 0.2689 / 4.8 / 0.54 | 0.2996 / 2 / 1.49 | 0.2689 / 9.2 / 3.99 | 0.3681 / 5 / 6.99 | 0.2758 / 1.5 / 1.67 | 0.2725 / 79 / 0.008
Autoprice (Noise) | 0.2770 / 6.7 / 0.70 | 0.5521 / 1.33 / 1.07 | 0.3610 / 2.60 / 0.43 | 0.4295 / 3.77 / 1.41 | 0.4768 / 1.2 / 1.89 | 0.9082 / 79 / 0.009
Baloon | 0.1065 / 5.9 / 10.73 | 0.1163 / 5.75 / 5.21 | 0.1066 / 10 / 4.96 | 0.1257 / 8.3 / 9.97 | 0.1317 / 3.58 / 2.09 | 0.0563 / 150 / 0.086
Baloon (Noise) | 0.1056 / 5.1 / 3.25 | 0.1166 / 2 / 1.32 | 0.1252 / 8.9 / 4.66 | 0.1281 / 5 / 3.31 | 0.1358 / 2.9 / 2.12 | 0.1051 / 150 / 10.1351
Pyrim | 0.0482 / 6.7 / 0.23 | 0.0843 / 1 / 0.27 | 0.2062 / 1.2 / 0.07 | 0.1696 / 1 / 0.17 | 0.1694 / 1 / 0.17 | 0.0842 / 37 / 0.0090
Pyrim (Noise) | 0.0521 / 4.9 / 0.16 | 0.6666 / 1.5 / 0.32 | 0.4150 / 1 / 0.036 | 0.6203 / 1 / 0.62 | 0.5712 / 1 / 0.62 | 1.5034 / 37 / 0.0043
Table 4. Performance comparison of RBF, N4, N5, N6, CCN and C2N2: classification datasets. Each cell reports Testing Rate (%) / #N (hidden nodes) / Time (s).
Datasets | N4 | N5 | N6 | RBF | CCN | C2N2
Ionosphere | 70.97 / 1.10 / 0.12 | 65.45 / 1 / 0.11 | 78.98 / 2.40 / 0.25 | 82.61 / 175 / 0.14 | 78.04 / 1.60 / 0.21 | 86.70 / 1.30 / 0.22
Colon | 62.00 / 1 / 0.02 | 63.00 / 1.15 / 0.03 | 62.33 / 1.35 / 0.38 | 90.50 / 32 / 0.12 | 64.04 / 1.05 / 0.29 | 91.50 / 1 / 0.15
Leukemia | 64.44 / 1 / 0.03 | 64.44 / 1 / 0.05 | 72.44 / 1 / 0.04 | 88.61 / 36 / 0.22 | 83.71 / 1.3 / 0.31 | 94.72 / 1 / 0.20
Dimdata | 89.74 / 7.05 / 15.31 | 88.44 / 6.60 / 9.39 | 88.77 / 4.30 / 7.34 | 95.05 / 1000 / 4.36 | 88.37 / 3.60 / 8.98 | 93.73 / 3.50 / 8.29
Table 5. Performance comparison of C2N2 and the state-of-the-art constructive networks OLSCN and OHLCN: benchmark regression dataset. Each cell reports Testing RMSE / #Nodes / Time (s).
Dataset | C2N2 | OLSCN | OHLCN
Housing | 0.0966 / 5.6 / 0.79 | 0.0988 / 1.9 / 1.44 | 0.0993 / 2.8 / 0.38
Housing (Noise) | 0.0978 / 5.87 / 1.07 | 0.2411 / 1.1 / 1.53 | 0.1824 / 4.3 / 0.09
Strike | 0.2807 / 2.8 / 1.11 | 0.2818 / 2 / 2.77 | 0.2888 / 3.3 / 0.62
Strike (Noise) | 0.2817 / 4.4 / 0.87 | 0.3017 / 2 / 2.21 | 0.3912 / 6.0 / 0.07
Quake | 0.1784 / 6.6 / 2.05 | 0.1821 / 2 / 3.88 | 0.1815 / 6 / 0.023
Quake (Noise) | 0.1744 / 2 / 0.42 | 0.1870 / 1.75 / 2.21 | 0.1849 / 4 / 0.021
Table 6. Performance comparison of C2N2 and the state-of-the-art constructive networks OLSCN and OHLCN: benchmark classification dataset. Each cell reports Testing Rate (%) / #Nodes / Time (s).
Dataset | C2N2 | OLSCN | OHLCN
Australian (Noise) | 86.77 / 3 / 0.48 | 86.67 / 2 / 10.88 | 86.09 / 1 / 1.01
Liver (Noise) | 65.70 / 4 / 0.25 | 65.12 / 2 / 1.01 | 53.49 / 9 / 0.02
Diabetes (Noise) | 79.95 / 1 / 1.11 | 78.65 / 2 / 3.36 | 40.56 / 1 / 6.23
Table 8. Values of the auxiliary variables for noisy and noise-free data samples. The experiment is performed on the f1 dataset.
Noisy data sample | #3 | #10 | #36 | #55 | #100
(α_i, γ_i) | (0.2351, 3.29 × 10⁻²²) | (0.2551, 4.3 × 10⁻²²) | (0.2310, 1.04 × 10⁻²⁴) | (0.2607, 8.06 × 10⁻²²) | (0.2482, 6.12 × 10⁻²³)
Noise-free data sample | #4 | #11 | #37 | #56 | #101
(α_i, γ_i) | (0.3837, 0.3989) | (0.3983, 0.3989) | (0.3989, 0.3955) | (0.3989, 0.3982) | (0.3988, 0.3988)
Table 9. Performance comparison of C2N2 and the state-of-the-art robust methods MLPMEE, MLPMCC and RLS-LSVM: benchmark regression dataset. Each cell reports Testing RMSE / #Nodes / Time (s).
Dataset | C2N2 | MLPMEE | MLPMCC | RLS-LSVM
Bodyfat | 0.00281 / 5 / 0.4623 | 0.0045 / 10 / – | 0.00291 / 40 / – | 0.00295 / 101 / 0.0789
Bodyfat (Noise) | 0.00241 / 4 / 0.2751 | 0.00251 / 10 / – | 0.00257 / 10 / – | 0.00451 / 101 / 9.651
Pyrim | 0.0488 / 7 / 0.2712 | 0.0798 / 20 / – | 0.0882 / 40 / – | 0.0817 / 37.3 / 0.0352
Pyrim (Noise) | 0.05213 / 4.4 / 0.1521 | 0.0586 / 10 / – | 0.12034 / 30 / – | 0.12345 / 37 / 4.495
Baskball | 0.12293 / 5.5 / 0.2454 | 0.1352 / 30 / – | 0.13114 / 20 / – | 0.1143 / 48 / 0.3687
Baskball (Noise) | 0.11981 / 1.33 / 0.0238 | 0.14352 / 20 / – | 0.1328 / 20 / – | 0.12839 / 48 / 22.569