Article

Recursive Minimum Complex Kernel Risk-Sensitive Loss Algorithm

1 College of Electronic and Information Engineering, Brain-inspired Computing & Intelligent Control of Chongqing Key Laboratory, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China
2 School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
* Authors to whom correspondence should be addressed.
Entropy 2018, 20(12), 902; https://doi.org/10.3390/e20120902
Submission received: 4 November 2018 / Revised: 23 November 2018 / Accepted: 23 November 2018 / Published: 25 November 2018
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)

Abstract

The maximum complex correntropy criterion (MCCC) extends correntropy to the complex domain for handling complex-valued data in the presence of impulsive noise. Compared with the correntropy-based loss, the kernel risk-sensitive loss (KRSL), defined in kernel space, exhibits a superior performance surface in the complex domain. However, no recursive KRSL algorithm in the complex domain has been reported. In this paper, we therefore propose a recursive complex KRSL algorithm, called the recursive minimum complex kernel risk-sensitive loss (RMCKRSL) algorithm. In addition, we analyze its stability and derive the theoretical value of its excess mean square error (EMSE); both results are supported by simulations. Simulation results further verify that the proposed RMCKRSL outperforms the MCCC, the generalized MCCC (GMCCC), and traditional recursive least squares (RLS).

1. Introduction

As many noises in practice are non-Gaussian distributed, the performance of traditional second-order statistics-based similarity measures may deteriorate dramatically [1,2]. To handle non-Gaussian noise efficiently, a higher-order statistic called correntropy [3,4,5,6] was proposed. Correntropy is a nonlinear, local similarity measure widely used in adaptive filtering [7,8,9,10,11,12,13,14,15], and it usually employs a Gaussian function as the kernel owing to its flexibility and positive definiteness. However, the Gaussian kernel is not always the best choice [16]. Hence, Chen et al. proposed the generalized maximum correntropy criterion (GMCC) algorithm [16,17], which uses a generalized Gaussian density function as the kernel. Compared with the traditional maximum correntropy criterion (MCC), the GMCC behaves better when its shape parameter is properly selected; moreover, the MCC can be regarded as a special case of the GMCC. Considering that the error performance surface of the correntropic loss is highly non-convex, Chen et al. proposed another algorithm, the minimum kernel risk-sensitive loss (MKRSL), which is defined in kernel space but inherits the original form of the risk-sensitive loss (RSL) [18,19]. The performance surface of the kernel risk-sensitive loss (KRSL) is more favorable than that of the MCC, resulting in a faster convergence speed and higher accuracy. Furthermore, the KRSL remains insensitive to outliers.
Adaptive filtering has generally focused on the real domain, and real-domain algorithms cannot deal with complex-valued data directly. Recently, complex-domain adaptive filters have drawn increasing attention. Guimaraes et al. proposed the maximum complex correntropy criterion (MCCC) [20,21] and provided a probabilistic interpretation [20]. The MCCC shows a clear advantage over the least absolute deviation (LAD) [22], complex least mean square (CLMS) [23], and recursive least squares (RLS) algorithms [24]. The stability analysis and the theoretical EMSE of the MCCC have been derived [25], and the MCCC has been extended to the generalized case [26]. The generalized MCCC (GMCCC) algorithm employs a complex generalized Gaussian density as the kernel and offers a desirable performance for handling complex-valued data. In addition, a gradient-based complex kernel risk-sensitive loss (CKRSL) algorithm defined in kernel space has shown a superior performance [27]. Until now, however, there has been no report on a recursive CKRSL algorithm. Therefore, in this paper we first propose a recursive minimum CKRSL (RMCKRSL) algorithm. Then, we analyze its stability and calculate the theoretical value of its EMSE. Simulations show that the RMCKRSL outperforms the MCCC, GMCCC, and traditional RLS; simultaneously, the correctness of the theoretical analysis is also demonstrated by simulations.
The remainder of this paper is organized as follows. In Section 2, we provide the loss function of the CKRSL and propose the recursive MCKRSL algorithm. In Section 3, we analyze the stability and derive the theoretical value of the EMSE for the proposed algorithm. In Section 4, simulations are performed to verify the superior convergence of the RMCKRSL algorithm and the correctness of the theoretical analysis. Finally, Section 5 concludes the paper.

2. Fixed Point Algorithm under Minimizing Complex Kernel Risk-Sensitive Loss

2.1. Complex Kernel Risk-Sensitive Loss

Supposing there are two complex random variables C_1 = X_1 + jY_1 and C_2 = X_2 + jY_2, the complex kernel risk-sensitive loss (CKRSL) is defined as [27]:

L_\lambda^c(C_1, C_2) = \frac{1}{\lambda} E\left[ \exp\left( \lambda \left( 1 - \kappa_\sigma^c(C_1 - C_2) \right) \right) \right]    (1)

where X_1, X_2, Y_1, and Y_2 are real random variables, \lambda is the risk-sensitive parameter, and \kappa_\sigma^c(C_1 - C_2) is the kernel function.
This paper employs the Gaussian kernel, which is expressed as:

\kappa_\sigma^c(C_1 - C_2) = \exp\left( -\frac{(C_1 - C_2)(C_1 - C_2)^*}{2\sigma^2} \right)    (2)

where \sigma is the kernel width.
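As a small illustration, the following sketch estimates the CKRSL of Equations (1) and (2) from samples of two complex variables; the sample-mean estimator, function name, and parameter values are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def ckrsl(c1, c2, sigma=1.0, lam=3.0):
    """Sample estimate of L_lambda^c(C1, C2) = (1/lambda) E[exp(lambda (1 - kappa_sigma^c(C1 - C2)))]
    with a Gaussian kernel, per Equations (1) and (2)."""
    diff = np.asarray(c1) - np.asarray(c2)
    kappa = np.exp(-np.abs(diff) ** 2 / (2 * sigma ** 2))   # (C1-C2)(C1-C2)* = |C1-C2|^2
    return np.mean(np.exp(lam * (1.0 - kappa))) / lam

# Illustrative usage: loss between a clean complex signal and a noisy copy
rng = np.random.default_rng(0)
c1 = rng.normal(size=1000) + 1j * rng.normal(size=1000)
c2 = c1 + 0.1 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
print(ckrsl(c1, c2))
```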

2.2. Recursive Minimum Complex Kernel Risk-Sensitive Loss (RMCKRSL)

2.2.1. Cost Function

We define the cost function of the MCKRSL as:

J_{MCKRSL} = E\left[ L_\lambda^c(e(k)) \right] = \frac{1}{\lambda} E\left[ \exp\left( \lambda \left( 1 - \kappa_\sigma^c(e(k)) \right) \right) \right]    (3)

where

e(k) = d(k) - w^H x(k)    (4)

denotes the error at the k-th iteration, d(k) represents the desired response at the k-th iteration, w = [w_1, w_2, \ldots, w_m]^T denotes the estimated weight vector, m is the length of the adaptive filter, x(k) = [x(k), x(k-1), \ldots, x(k-m+1)]^T is the input vector, and (\cdot)^H and (\cdot)^T denote the conjugate transpose and the transpose, respectively.

2.2.2. Recursive Solution

Using the Wirtinger calculus [28,29], the gradient of J_{MCKRSL} with respect to w^* is derived as:

\frac{\partial J_{MCKRSL}}{\partial w^*} = \frac{1}{2\sigma^2} E\left[ \exp\left( \lambda \left( 1 - \kappa_\sigma^c(e) \right) \right) \kappa_\sigma^c(e) \left( -d^* x + x x^H w \right) \right]    (5)
By setting \partial J_{MCKRSL} / \partial w^* = 0, we obtain the optimal solution

w = R^{-1} p    (6)

where

R = E\left\{ h(e) \, x x^H \right\}    (7)

p = E\left\{ h(e) \, d^* x \right\}    (8)

h(e) = \exp\left( \lambda \left( 1 - \kappa_\sigma^c(e) \right) \right) \kappa_\sigma^c(e)    (9)
It is noted that Equation (6) is actually a fixed-point solution, because R and p depend on w. In practice, when only a finite number of samples is available, R and p are estimated as:

\hat{R} = \frac{1}{N} \sum_{k=1}^{N} h(e(k)) \, x(k) x^H(k)    (10)

\hat{p} = \frac{1}{N} \sum_{k=1}^{N} h(e(k)) \, d^*(k) x(k)    (11)

Hence, \hat{R}, \hat{p}, and w are updated recursively as follows:

\hat{R}_k = \hat{R}_{k-1} + h(e(k)) \, x(k) x^H(k)
\hat{p}_k = \hat{p}_{k-1} + h(e(k)) \, d^*(k) x(k)
w(k) = \hat{R}_k^{-1} \hat{p}_k    (12)
Using the matrix inversion lemma [30], we may rewrite \hat{R}_k^{-1} in Equation (12) as:

\hat{R}_k^{-1} = \hat{R}_{k-1}^{-1} - \hat{R}_{k-1}^{-1} x(k) \left( h^{-1}(e(k)) + x^H(k) \hat{R}_{k-1}^{-1} x(k) \right)^{-1} x^H(k) \hat{R}_{k-1}^{-1}    (13)

Defining the gain vector

b(k) = \hat{R}_{k-1}^{-1} x(k) \left( h^{-1}(e(k)) + x^H(k) \hat{R}_{k-1}^{-1} x(k) \right)^{-1}    (14)

we have

\left( h^{-1}(e(k)) + x^H(k) \hat{R}_{k-1}^{-1} x(k) \right) b(k) = \hat{R}_{k-1}^{-1} x(k)    (15)

and

b(k) = h(e(k)) \hat{R}_{k-1}^{-1} x(k) - b(k) h(e(k)) x^H(k) \hat{R}_{k-1}^{-1} x(k) = h(e(k)) \left( \hat{R}_{k-1}^{-1} - b(k) x^H(k) \hat{R}_{k-1}^{-1} \right) x(k) = h(e(k)) \hat{R}_k^{-1} x(k)    (16)

After some algebraic manipulations, we derive the recursive form of w(k) as:

w(k) = \hat{R}_k^{-1} \left( h(e(k)) d^*(k) x(k) + \hat{p}_{k-1} \right) = w(k-1) + b(k) e^*(k)    (17)
Finally, Algorithm 1 summarizes the recursive MCKRSL (RMCKRSL) algorithm.
Algorithm 1: RMCKRSL.
Input: \sigma, \lambda, d(k), x(k)
1. Initialization: \delta = 0.0001, \hat{p}_0 = 0, w(0) = 0, \hat{R}_0 = \delta I, \hat{R}_0^{-1} = \delta^{-1} I
2. While \{x(k), d(k)\} available, do
3.  e(k) = d(k) - w^H(k-1) x(k)
4.  \kappa_\sigma^c(e(k)) = \exp\left( -|e(k)|^2 / (2\sigma^2) \right)
5.  h(e(k)) = \exp\left( \lambda \left( 1 - \kappa_\sigma^c(e(k)) \right) \right) \kappa_\sigma^c(e(k))
6.  b(k) = \hat{R}_{k-1}^{-1} x(k) \left( h^{-1}(e(k)) + x^H(k) \hat{R}_{k-1}^{-1} x(k) \right)^{-1}
7.  w(k) = w(k-1) + b(k) e^*(k)
8.  \hat{R}_k^{-1} = \hat{R}_{k-1}^{-1} - \hat{R}_{k-1}^{-1} x(k) \left( h^{-1}(e(k)) + x^H(k) \hat{R}_{k-1}^{-1} x(k) \right)^{-1} x^H(k) \hat{R}_{k-1}^{-1}
9. End while
10. \hat{w}_0 = w(k)
Output: Estimated filter weight \hat{w}_0
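For concreteness, the following is a minimal NumPy sketch of Algorithm 1 for complex-valued data; the function name, default parameter values, and the regressor indexing convention are illustrative assumptions, not code from the paper.

```python
import numpy as np

def rmckrsl(x, d, m, sigma=1.0, lam=3.0, delta=1e-4):
    """Recursive minimum complex kernel risk-sensitive loss (sketch of Algorithm 1).
    x, d: complex input and desired sequences; m: adaptive filter length."""
    w = np.zeros(m, dtype=complex)                     # w(0) = 0
    R_inv = (1.0 / delta) * np.eye(m, dtype=complex)   # R_0^{-1} = delta^{-1} I
    for k in range(m - 1, len(d)):
        xk = x[k - m + 1:k + 1][::-1]                  # x(k) = [x(k), x(k-1), ..., x(k-m+1)]^T
        e = d[k] - np.vdot(w, xk)                      # e(k) = d(k) - w^H(k-1) x(k)
        kappa = np.exp(-np.abs(e) ** 2 / (2 * sigma ** 2))    # Gaussian kernel of the error
        h = np.exp(lam * (1.0 - kappa)) * kappa               # h(e(k))
        Rx = R_inv @ xk
        b = Rx / (1.0 / h + np.vdot(xk, Rx))           # gain vector b(k), Equation (14)
        w = w + b * np.conj(e)                         # w(k) = w(k-1) + b(k) e*(k)
        R_inv = R_inv - np.outer(b, Rx.conj())         # matrix inversion lemma update of R_k^{-1}
    return w

# Hypothetical usage for system identification: w_hat = rmckrsl(x, d, m=5)
```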

3. Convergence Analysis

3.1. Stability Analysis

Supposing the desired signal is given by:

d(k) = w_0^H x(k) + v(k)    (18)

we rewrite the error as:

e(k) = \tilde{w}^H(k-1) x(k) + v(k) = e_a(k) + v(k)    (19)

where w_0 is the system parameter vector to be estimated, \tilde{w}(k-1) = w_0 - w(k-1), v(k) represents the noise at discrete time k, and e_a(k) = \tilde{w}^H(k-1) x(k).
Furthermore, we rewrite w(k) as:

w(k) = w(k-1) + h(e(k)) e^*(k) \hat{R}_k^{-1} x(k) \approx w(k-1) + \frac{a_0}{k} f(e(k)) \bar{R}^{-1} x(k)    (20)

where f(e(k)) = h(e(k)) e^*(k), \bar{R} = E\{ x x^H \}, a_0 = 1 / E\left[ \exp\left( \lambda \left( 1 - \kappa_\sigma^c(v) \right) \right) \kappa_\sigma^c(v) \right], v represents the noise, and the second expression is obtained approximately by using:

\hat{R}_k^{-1} = \left[ \sum_{l=1}^{k} h(e(l)) x(l) x^H(l) \right]^{-1} = \left[ k \left( \frac{1}{k} \sum_{l=1}^{k} h(e(l)) x(l) x^H(l) \right) \right]^{-1} \approx \frac{a_0}{k} \bar{R}^{-1}    (21)
Remark 1.
(1) The second expression in Equation (20) is a good approximation when |e_a(l)|^2 is small enough, where e_a(l) = (w_0 - w(l-1))^H x(l).
(2) According to Equation (20), the RMCKRSL can be approximately viewed as a gradient descent method with the variable step size a_0/k.
(3) \bar{R} can be estimated by \frac{1}{N} \sum_{l=1}^{N} x(l) x^H(l), where N is the number of samples (a small estimation sketch follows this remark).
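As a small illustration of item (3), the following sketch estimates \bar{R} and a_0 by sample averages; it assumes that input vectors x(l) and noise samples v are available (as in a simulation), and the function name and defaults are hypothetical.

```python
import numpy as np

def estimate_rbar_a0(x_vectors, v_samples, sigma=1.0, lam=3.0):
    """Sample estimates of R_bar = E{x x^H} and a_0 = 1 / E[exp(lam (1 - kappa(v))) kappa(v)].
    x_vectors: array of shape (N, m) whose rows are the input vectors x(l)."""
    X = np.asarray(x_vectors)
    R_bar = X.T @ X.conj() / len(X)            # (1/N) sum_l x(l) x(l)^H
    kappa = np.exp(-np.abs(np.asarray(v_samples)) ** 2 / (2 * sigma ** 2))
    a0 = 1.0 / np.mean(np.exp(lam * (1.0 - kappa)) * kappa)
    return R_bar, a0
```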
Multiplying both sides of the weight-error form of Equation (20) by \bar{R}^{1/2}, we obtain:

\bar{R}^{1/2} \tilde{w}(k) = \bar{R}^{1/2} \tilde{w}(k-1) - \frac{a_0}{k} f(e(k)) \bar{R}^{-1/2} x(k)    (22)

where f(e(k)) = \exp\left( \lambda \left( 1 - \kappa_\sigma^c(e(k)) \right) \right) \kappa_\sigma^c(e(k)) e^*(k).
Therefore,

E\left\{ \| \bar{R}^{1/2} \tilde{w}(k) \|^2 \right\} = E\left\{ \| \bar{R}^{1/2} \tilde{w}(k-1) \|^2 \right\} - \frac{2 a_0}{k} E\left\{ \mathrm{Re}\left[ e_a(k) f(e(k)) \right] \right\} + \frac{a_0^2}{k^2} E\left\{ \| I_m \|_F^2 \, | f(e(k)) |^2 \right\}    (23)

where \| \cdot \|_F represents the Frobenius norm (so that \| I_m \|_F^2 = m), \mathrm{Re}(\cdot) is the real part, and I_m denotes the m \times m identity matrix.
Then, we can determine that if

k \geq \frac{m \, a_0 \, E\left\{ | f(e(k)) |^2 \right\}}{2 E\left\{ \mathrm{Re}\left[ e_a(k) f(e(k)) \right] \right\}}    (24)

the sequence E\{ \| \bar{R}^{1/2} \tilde{w}(k) \|^2 \} is decreasing, and the algorithm converges.

3.2. Excess Mean Square Error

Let S(k) denote the excess mean square error (EMSE), defined as:

S(k) = E\left[ | e_a(k) |^2 \right]    (25)
To derive the theoretical value of S ( k ) , we adopt some commonly used assumptions [8,27,31]:
(A1) v ( k ) is zero-mean and independently identically distributed (IID); e a ( k ) is independent of v ( k ) and also zero-mean;
(A2) x ( k ) is independent of v ( k ) , circular and stationary.
Thus, taking Equations (23) and (25) into consideration, we obtain:

S(k+1) = S(k) - \frac{2 a_0}{k} E\left\{ \mathrm{Re}\left[ e_a(k) f(e(k)) \right] \right\} + \frac{m a_0^2}{k^2} E\left\{ | f(e(k)) |^2 \right\}    (26)
Similar to [27], we can obtain:

E\left\{ | f(e(k)) |^2 \right\} = E\left\{ \exp\left( 2\lambda \left( 1 - \kappa_\sigma^c(v) \right) \right) \kappa_{\sigma/\sqrt{2}}^c(v) \, |v|^2 \right\} + S(k) \, E\left\{ \exp\left( -2\lambda \left( \kappa_\sigma^c(v) - 1 \right) \right) \kappa_{\sigma/\sqrt{2}}^c(v) \, R_1 \right\}    (27)

E\left\{ \mathrm{Re}\left[ e_a(k) f(e(k)) \right] \right\} = S(k) \, E\left\{ \exp\left( \lambda \left( 1 - \kappa_\sigma^c(v) \right) \right) R_2 \right\}    (28)

where

R_1 = 1 - \frac{3|v|^2}{\sigma^2} + \frac{|v|^4}{\sigma^4} + \frac{3\lambda |v|^2}{\sigma^2} \kappa_\sigma^c(v) - \frac{5\lambda |v|^4}{2\sigma^4} \kappa_\sigma^c(v) + \frac{\lambda^2 |v|^4}{\sigma^4} \kappa_{\sigma/\sqrt{2}}^c(v)    (29)

R_2 = \kappa_\sigma^c(v) \left( 1 + \frac{\left( \lambda \kappa_\sigma^c(v) - 1 \right) |v|^2}{2\sigma^2} \right)    (30)

Thus,

S(k+1) = S(k) - \frac{2 a_0 a_1}{k} S(k) + \frac{a_0^2 a_2}{k^2} S(k) + \frac{a_0^2 a_3}{k^2}    (31)

where a_1 = E\left\{ \exp\left( \lambda \left( 1 - \kappa_\sigma^c(v) \right) \right) R_2 \right\}, a_2 = m E\left\{ \exp\left( -2\lambda \left( \kappa_\sigma^c(v) - 1 \right) \right) \kappa_{\sigma/\sqrt{2}}^c(v) R_1 \right\}, and a_3 = m E\left\{ \exp\left( 2\lambda \left( 1 - \kappa_\sigma^c(v) \right) \right) \kappa_{\sigma/\sqrt{2}}^c(v) |v|^2 \right\}.
It can be seen from Equation (31) that S(k) is the solution of a first-order difference equation. Thus, we derive:

S(k) = S_h(k) + S_p(k)    (32)

where S_h(k) = c_1 \lambda(k-1) \lambda(k-2) \cdots \lambda(1) is the homogeneous solution, with \lambda(k-1) = 1 - 2 a_0 a_1/(k-1) + a_0^2 a_2/(k-1)^2, and S_p(k) is the particular solution, given by:
S_p(k) = \frac{a_0 a_3 k^{-c_2}}{2 a_1 - a_0 a_2 k^{-c_2}}    (33)

with c_2 \approx 2 a_0 a_1 - 2 a_0 a_1 \left[ k/(k+1) \right].
Remark 2.
The theoretical value of S(k) in Equation (32) is reliable only when |e_a|^2 is small enough and k is large. The constant c_1 can be obtained from the initial value of the EMSE. In general, however, it is not necessary to calculate c_1, because S_h(k) \ll S_p(k) when k is large; thus, S(k) \approx S_p(k).
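To illustrate how the theoretical EMSE can be evaluated, the following sketch estimates a_0 through a_3 by Monte Carlo averaging over the noise distribution and then iterates the difference Equation (31) directly, rather than using the closed-form particular solution. The contaminated Gaussian noise model and the values of m, \sigma, \lambda, and the initial EMSE are illustrative assumptions taken from the simulation section.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, lam = 5, 1.0, 3.0          # illustrative filter length and CKRSL parameters
n = 200000                           # Monte Carlo sample size for the expectations

# Contaminated Gaussian noise samples v (per-component variances 0.1 and 20, p = 0.06)
c = rng.random(n) < 0.06
v1 = rng.normal(0, np.sqrt(0.1), n) + 1j * rng.normal(0, np.sqrt(0.1), n)
v2 = rng.normal(0, np.sqrt(20.0), n) + 1j * rng.normal(0, np.sqrt(20.0), n)
v = np.where(c, v2, v1)

k_s = np.exp(-np.abs(v) ** 2 / (2 * sigma ** 2))   # kappa_sigma(v)
k_h = np.exp(-np.abs(v) ** 2 / sigma ** 2)         # kappa_{sigma/sqrt(2)}(v)
R1 = (1 - 3 * np.abs(v)**2 / sigma**2 + np.abs(v)**4 / sigma**4
      + 3 * lam * np.abs(v)**2 / sigma**2 * k_s
      - 5 * lam * np.abs(v)**4 / (2 * sigma**4) * k_s
      + lam**2 * np.abs(v)**4 / sigma**4 * k_h)    # R_1 as reconstructed in Equation (29)
R2 = k_s * (1 + (lam * k_s - 1) * np.abs(v)**2 / (2 * sigma**2))   # Equation (30)

a0 = 1.0 / np.mean(np.exp(lam * (1 - k_s)) * k_s)
a1 = np.mean(np.exp(lam * (1 - k_s)) * R2)
a2 = m * np.mean(np.exp(2 * lam * (1 - k_s)) * k_h * R1)
a3 = m * np.mean(np.exp(2 * lam * (1 - k_s)) * k_h * np.abs(v)**2)

S = 0.1                              # assumed initial EMSE
for k in range(1, 5001):             # iterate the difference Equation (31)
    S = S - 2 * a0 * a1 * S / k + a0**2 * a2 * S / k**2 + a0**2 * a3 / k**2
print("theoretical EMSE after 5000 iterations:", S)
```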

4. Simulation

In this section, two examples, i.e., system identification and nonlinear prediction, are used to illustrate the superior performance of the RMCKRSL. The simulation results are obtained by averaging over 1000 Monte Carlo trials.

4.1. Example 1

We set the filter length to five, where the weight vector w_0 = [w_1, w_2, \ldots, w_5]^T is generated randomly, with w_i = w_{Ri} + j w_{Ii}, w_{Ri} \sim N(0, 0.1) and w_{Ii} \sim N(0, 0.1) being the real and imaginary parts of w_i, and N(\mu, \sigma^2) denoting the Gaussian distribution with mean \mu and variance \sigma^2. The input signal x = x_R + j x_I is also generated randomly, where x_R, x_I \sim N(0, 1). An additive complex noise v = v_R + j v_I, with v_R and v_I being its real and imaginary parts, is considered in the simulations.
First, we verify the superiority of the RMCKRSL in the presence of contaminated Gaussian noise [17,19], i.e., v(k) = (1 - c(k)) v_1(k) + c(k) v_2(k), where v_1(k) = v_{1R}(k) + j v_{1I}(k) with v_{1R}, v_{1I} \sim N(0, 0.1), v_2(k) = v_{2R}(k) + j v_{2I}(k) with v_{2R}, v_{2I} \sim N(0, 20) represents the outliers (impulsive disturbances), and P(c(k) = 1) = 0.06 and P(c(k) = 0) = 0.94 give the occurrence probability of the impulsive disturbances. To ensure a fair comparison, all the algorithms use a recursive iteration to search for the optimal solution, and the parameters of the different algorithms are chosen experimentally to guarantee a desirable solution. The performance of the different algorithms, measured by the weight error power \|w_0 - w(k)\|^2, is shown in Figure 1. Clearly, compared with the MCCC, GMCCC, and traditional RLS, the RMCKRSL has the best filtering performance.
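For reference, a minimal sketch of this contaminated Gaussian noise model follows; the function name and argument defaults are illustrative, with variances given per real/imaginary component as in the text.

```python
import numpy as np

def contaminated_gaussian_noise(n, var1=0.1, var2=20.0, p=0.06, rng=None):
    """v(k) = (1 - c(k)) v1(k) + c(k) v2(k) with complex Gaussian v1, v2;
    var1, var2 are per-component variances and p = P(c(k) = 1)."""
    rng = rng or np.random.default_rng()
    v1 = rng.normal(0, np.sqrt(var1), n) + 1j * rng.normal(0, np.sqrt(var1), n)
    v2 = rng.normal(0, np.sqrt(var2), n) + 1j * rng.normal(0, np.sqrt(var2), n)
    c = rng.random(n) < p
    return np.where(c, v2, v1)
```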
Then, the validity of the theoretical EMSE of the RMCKRSL is demonstrated. The noise model is again the contaminated Gaussian model, where v_{1R}, v_{1I} \sim N(0, \sigma_v^2/2), v_{2R}, v_{2I} \sim N(0, 20), P(c(k) = 1) = 0.06, and P(c(k) = 0) = 0.94. Figure 2 compares the theoretical and simulated EMSEs as \sigma_v^2 varies. There is clearly a good match between the theoretical EMSEs and the simulated ones. In addition, the EMSE increases with the noise variance.
Next, we test the influence of outliers on the performance of the RMCKRSL algorithm. The noise model is again a contaminated Gaussian noise, where v_{1R}, v_{1I} \sim N(0, 0.1), v_{2R}, v_{2I} \sim N(0, \sigma_B^2/2), P(c(k) = 1) = p, and P(c(k) = 0) = 1 - p. Figure 3 compares the performance of the different algorithms under different outlier probabilities p, where the sample size is 5000 and the variance of the outliers is \sigma_B^2 = 40. One can observe that the proposed RMCKRSL algorithm is robust to the outlier probability and outperforms the MCCC, GMCCC, and RLS. Figure 4 depicts the performance of the different algorithms under different outlier variances \sigma_B^2, where the sample size is also 5000 and the outlier probability is p = 0.06. It can be observed that the proposed RMCKRSL algorithm is also robust to the outlier variance and outperforms the other algorithms.
Finally, the influences of the kernel width \sigma and the risk-sensitive parameter \lambda on the performance of the RMCKRSL are investigated. The noise model is again a contaminated Gaussian noise, where v_{1R}, v_{1I} \sim N(0, 0.1), v_{2R}, v_{2I} \sim N(0, 20), P(c(k) = 1) = 0.06, and P(c(k) = 0) = 0.94. Figure 5 and Figure 6 present the performance of the RMCKRSL under different kernel widths \sigma and risk-sensitive parameters \lambda, respectively. One can see that both \sigma and \lambda play an important role in the performance of the RMCKRSL. Choosing optimal values of \sigma and \lambda is challenging because they depend on the statistical characteristics of the noise, which are unknown in practice. Thus, it is suggested that these parameters be chosen experimentally.

4.2. Example 2

In this example, the superiority of the RMCKRSL is demonstrated by the prediction of a nonlinear system, where s(t) = u_0 [s_1(t) + j s_2(t)] and s_1(t) is a Mackey-Glass chaotic time series described by [15]:

\frac{d s_1(t)}{dt} = -0.1 s_1(t) + \frac{0.2 s_1(t-30)}{1 + s_1^{10}(t-30)}    (34)

s_2(t) is the reverse of s_1(t), and u_0 is a complex-valued number whose real and imaginary parts are randomly generated from a uniform distribution over the interval [0, 1]. s(t) is discretized by sampling with an interval of six seconds and is corrupted by the contaminated Gaussian noise v(k) = (1 - c(k)) v_1(k) + c(k) v_2(k), where v_1(k) = v_{1R}(k) + j v_{1I}(k) with v_{1R}, v_{1I} \sim N(0, 0.1), v_2(k) = v_{2R}(k) + j v_{2I}(k) with v_{2R}, v_{2I} \sim N(0, 20), P(c(k) = 1) = 0.06, and P(c(k) = 0) = 0.94. s(k) is predicted from x(k) = [s(k-1), s(k-2), \ldots, s(k-6)]^T, and the performance is measured by the mean square error (MSE), defined as MSE(k) = \frac{1}{N-6} \sum_{l=7}^{N} |s(l) - w^H(k) x(l)|^2. The convergence curves of the different algorithms in terms of the MSE are compared in Figure 7. One may observe that the RMCKRSL has a faster convergence rate and better filtering accuracy than the other algorithms. In addition, the RLS performs the worst, since the least squares criterion is not robust to impulsive noise.
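The following is a minimal sketch of how such data could be generated, assuming a simple Euler discretization of Equation (34) with step dt = 0.1 and interpreting "the reverse of s_1(t)" as its time-reversed copy; the contaminated Gaussian noise from the earlier sketch would then be added to the desired response.

```python
import numpy as np

def mackey_glass(n_samples, tau=30.0, dt=0.1, sample_every=6.0, x0=1.2):
    """Euler-discretized Mackey-Glass series (Equation (34)), sampled every
    `sample_every` seconds; dt and the constant initial history x0 are illustrative choices."""
    delay = int(round(tau / dt))
    step = int(round(sample_every / dt))
    total = step * n_samples + delay + 1
    s = np.full(total, x0)
    for t in range(delay, total - 1):
        s[t + 1] = s[t] + dt * (-0.1 * s[t] + 0.2 * s[t - delay] / (1 + s[t - delay] ** 10))
    return s[delay::step][:n_samples]

# Complex signal and 6-tap prediction regressors (illustrative construction)
rng = np.random.default_rng(0)
s1 = mackey_glass(5000)
u0 = rng.random() + 1j * rng.random()            # real and imaginary parts uniform on [0, 1]
s = u0 * (s1 + 1j * s1[::-1])                    # s2(t) taken as the time-reversed s1(t)
X = np.array([s[k - 6:k][::-1] for k in range(6, len(s))])   # x(k) = [s(k-1), ..., s(k-6)]^T
d = s[6:]                                        # desired response s(k), before adding noise
```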

5. Conclusions

As a nonlinear similarity measure defined in kernel space, the kernel risk-sensitive loss (KRSL) shows a superior performance in adaptive filtering. However, no recursive KRSL algorithm had been reported in the complex domain. Thus, in this paper we focused on complex-domain adaptive filtering and proposed a recursive minimum complex KRSL (RMCKRSL) algorithm. Compared with the MCCC, GMCCC, and traditional RLS algorithms, the proposed algorithm offers both a faster convergence rate and higher accuracy. Moreover, we derived the theoretical value of the EMSE and demonstrated its correctness by simulations.

Author Contributions

Conceptualization, G.Q., D.L. and S.W.; methodology, S.W.; software, D.L.; validation, G.Q.; formal analysis, G.Q.; investigation, G.Q., D.L. and S.W.; resources, G.Q.; data curation, D.L.; writing—original draft preparation, G.Q. and D.L.; writing—review and editing, S.W.; visualization, D.L.; supervision, G.Q., D.L. and S.W.; project administration, G.Q.; funding acquisition, G.Q.

Funding

This research was funded by the China Postdoctoral Science Foundation Funded Project under grant 2017M610583, and Fundamental Research Funds for the Central Universities under grant SWU116013.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Principe, J.C. Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives; Springer: New York, NY, USA, 2010. [Google Scholar]
  2. Chen, B.; Zhu, Y.; Hu, J.; Principe, J.C. System Parameter Identification: Information Criteria and Algorithms; Newnes: Oxford, UK, 2013. [Google Scholar]
  3. Liu, W.; Pokharel, P.P.; Príncipe, J. Correntropy: A localized similarity measure. In Proceedings of the 2006 IEEE International Joint Conference on Neural Network (IJCNN), Vancouver, BC, Canada, 16–21 July 2006; pp. 4919–4924. [Google Scholar]
  4. Liu, W.; Pokharel, P.P.; Principe, J.C. Correntropy: Properties and applications in non-Gaussian signal processing. IEEE Trans. Signal Process. 2007, 55, 5286–5298. [Google Scholar] [CrossRef]
  5. Singh, A.; Principe, J.C. Using correntropy as a cost function in linear adaptive filters. In Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN), Atlanta, GA, USA, 14–19 June 2009; pp. 2950–2955. [Google Scholar]
  6. Singh, A.; Principe, J.C. A loss function for classification based on a robust similarity metric. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–6. [Google Scholar]
  7. Zhao, S.; Chen, B.; Principe, J.C. Kernel adaptive filtering with maximum correntropy criterion. In Proceedings of the 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, CA, USA, 31 July–5 August 2011; pp. 2012–2017. [Google Scholar]
  8. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884. [Google Scholar]
  9. Wu, Z.; Peng, S.; Chen, B.; Zhao, H. Robust Hammerstein adaptive filtering under maximum correntropy criterion. Entropy 2015, 17, 7149–7166. [Google Scholar] [CrossRef]
  10. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Principe, J.C. Convergence of a Fixed-Point Algorithm under Maximum Correntropy Criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727. [Google Scholar] [CrossRef]
  11. Wang, W.; Zhao, J.; Qu, H.; Chen, B.; Principe, J.C. Convergence performance analysis of an adaptive kernel width MCC algorithm. AEU-Int. J. Electron. Commun. 2017, 76, 71–76. [Google Scholar] [CrossRef]
  12. Liu, X.; Chen, B.; Zhao, H.; Qin, J.; Cao, J. Maximum Correntropy Kalman Filter with State Constraints. IEEE Access 2017, 5, 25846–25853. [Google Scholar] [CrossRef]
  13. Wang, F.; He, Y.; Wang, S.; Chen, B. Maximum total correntropy adaptive filtering against heavy-tailed noises. Signal Process. 2017, 141, 84–95. [Google Scholar] [CrossRef]
  14. Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77. [Google Scholar] [CrossRef] [Green Version]
  15. Wang, S.; Dang, L.; Wang, W.; Qian, G.; Tse, C.K. Kernel Adaptive Filters with Feedback Based on Maximum Correntropy. IEEE Access 2018, 6, 10540–10552. [Google Scholar] [CrossRef]
  16. He, Y.; Wang, F.; Yang, J.; Rong, H.; Chen, B. Kernel adaptive filtering under generalized Maximum Correntropy Criterion. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 1738–1745. [Google Scholar]
  17. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Príncipe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387. [Google Scholar] [CrossRef]
  18. Chen, B.; Wang, R. Risk-sensitive loss in kernel space for robust adaptive filtering. In Proceedings of the 2015 IEEE International Conference on Digital Signal Processing (DSP), Singapore, 21–24 July 2015; pp. 921–925. [Google Scholar]
  19. Chen, B.; Xing, L.; Xu, B.; Zhao, H.; Zheng, N.; Príncipe, J.C. Kernel Risk-Sensitive Loss: Definition, Properties and Application to Robust Adaptive Filtering. IEEE Trans. Signal Process. 2017, 65, 2888–2901. [Google Scholar] [CrossRef] [Green Version]
  20. Guimaraes, J.P.F.; Fontes, A.I.R.; Rego, J.B.A.; Martins, A.M.; Principe, J.C. Complex correntropy: Probabilistic interpretation and application to complex-valued data. IEEE Signal Process. Lett. 2017, 24, 42–45. [Google Scholar] [CrossRef]
  21. Guimaraes, J.P.F.; Fontes, A.I.R.; Rego, J.B.A.; Martins, A.M.; Principe, J.C. Complex Correntropy Function: Properties, and application to a channel equalization problem. Expert Syst. Appl. 2018, 107, 173–181. [Google Scholar] [CrossRef]
  22. Alliney, S.; Ruzinsky, S.A. An algorithm for the minimization of mixed l1 and l2 norms with application to Bayesian estimation. IEEE Trans. Signal Process. 1994, 42, 618–627. [Google Scholar] [CrossRef]
  23. Mandic, D.; Goh, V. Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models (ser. Adaptive and Cognitive Dynamic Systems: Signal Processing, Learning, Communications and Control); John Wiley & Sons: New York, NY, USA, 2009. [Google Scholar]
  24. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer-Verlag: New York, NY, USA, 2013. [Google Scholar]
  25. Qian, G.; Wang, S.; Wang, L.; Duan, S. Convergence Analysis of a Fixed Point Algorithm under Maximum Complex Correntropy Criterion. IEEE Signal Process. Lett. 2018, 24, 1830–1834. [Google Scholar] [CrossRef]
  26. Qian, G.; Wang, S. Generalized Complex Correntropy: Application to Adaptive Filtering of Complex Data. IEEE Access 2018, 6, 19113–19120. [Google Scholar] [CrossRef]
  27. Qian, G.; Wang, S. Complex Kernel Risk-Sensitive Loss: Application to Robust Adaptive Filtering in Complex Domain. IEEE Access 2018, 6, 2169–3536. [Google Scholar] [CrossRef]
  28. Wirtinger, W. Zur formalen theorie der funktionen von mehr complexen veränderlichen. Math. Ann. 1927, 97, 357–375. [Google Scholar] [CrossRef]
  29. Bouboulis, P.; Theodoridis, S. Extension of Wirtinger’s calculus to reproducing Kernel Hilbert spaces and the complex kernel LMS. IEEE Trans. Signal Process. 2011, 59, 964–978. [Google Scholar]
  30. Zhang, X. Matrix Analysis and Application, 2nd ed.; Tsinghua University Press: Beijing, China, 2013. [Google Scholar]
  31. Picinbono, B. On circularity. IEEE Trans. Signal Process. 1994, 42, 3473–3482. [Google Scholar] [CrossRef]
Figure 1. Learning curves of different algorithms.
Figure 2. EMSE as a function of noise variance.
Figure 3. Influence of the probability of outliers.
Figure 4. Influence of the variance of outliers.
Figure 5. Influence of σ (λ = 3).
Figure 6. Influence of λ (σ = 1).
Figure 7. Convergence curves of different algorithms.
