Article

Diffusion Correntropy Subband Adaptive Filtering (SAF) Algorithm over Distributed Smart Dust Networks

1 School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China
2 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
3 Key Laboratory of Microwave Remote Sensing, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(11), 1335; https://doi.org/10.3390/sym11111335
Submission received: 6 September 2019 / Revised: 7 October 2019 / Accepted: 18 October 2019 / Published: 26 October 2019

Abstract:
The diffusion subband adaptive filtering (DSAF) algorithm has attracted much attention in recent years due to its decorrelation ability for colored input signals. In this paper, a modified DSAF algorithm using the symmetry maximum correntropy criterion (MCC) with individual weighting factors, denoted the MCC-DSAF algorithm, is proposed to combat impulsive noise. During the iterations, the negative exponent in the Gaussian kernel of the MCC-DSAF suppresses the interference of outliers, providing robust performance in non-Gaussian noise environments. Moreover, to enhance the convergence for sparse system identification, a variant of the MCC-DSAF named the improved proportionate MCC-DSAF (MCC-IPDSAF) is presented, which introduces a dynamic gain assignment matrix into the MCC-DSAF to adjust the weighted value of each coefficient. Simulation results verify that the newly presented MCC-DSAF and MCC-IPDSAF algorithms are superior to popular DSAF algorithms.

1. Introduction

Distributed estimation recovers the parameters of interest from the data collected over distributed networks and sensors [1], and has been widely investigated and utilized in wireless sensor networks, target localization, environmental monitoring, medical applications, military applications, and other fields [2,3,4,5]. The diffusion strategy exchanges information between a node and its neighbors in the network, which has made it widely studied in distributed estimation [6]. A global solution performs iterative operations after all node information has been fused at a center, which requires a large amount of energy and communication resources [7]. Thus, many adaptive filtering (AF) algorithms based on distributed estimation have been reported, such as diffusion affine projection algorithms (DAPA) [8], diffusion least mean square algorithms (DLMS) [9], and diffusion subband adaptive filtering (SAF) algorithms (DSAF) [10]. DLMS was developed from the least mean square (LMS) algorithm for distributed estimation because the LMS is simple, but its convergence speed deteriorates sharply for colored input signals. Although both DAPA and DSAF can improve the convergence when the input is colored, the complexity of DAPA is higher. DSAF is more popular because its computational complexity is similar to that of the DLMS [11,12,13,14,15]. However, the above-mentioned algorithms are all developed based on the $l_2$-norm optimization criterion, whose performance is seriously degraded under non-Gaussian interference.
Recently, the $l_1$-norm optimization criterion (LOC) has been used to deal with strong impulsive noise and provides good anti-interference ability [16]. AF algorithms based on the LOC were then reported, such as the diffusion sign subband AF algorithm (DSSAF) [17], the DSSAF with enlarged cooperation (DSSAF-EC) [18], and the individual weighting factored DSSAF algorithm (IWF-DSSAF) [19]. However, because the LOC algorithms apply the sign function to the error in each iteration, their steady-state error performance may deteriorate. In order to ensure reliable signal estimation and transmission in real-life systems, it is necessary to construct a more robust method to suppress non-Gaussian noise interference.
Since the maximum correntropy scheme can resist outliers, it has been used to deal with non-Gaussian noise in recent years [20]. For example, the diffusion maximum correntropy criterion (DMCC) [21,22,23,24,25] and the diffusion minimum error entropy (DMEE) algorithms [22] provide good robustness under non-Gaussian noise interference, and have been developed over networks based on correntropy theory. Their performance demonstrates that the computational complexity of symmetry maximum correntropy criterion (MCC)-based algorithms is lower than that of minimum error entropy (MEE)-based algorithms.
Moreover, the impulse responses (IRs) of many scenes are regarded as sparse [26,27,28,29,30,31], which means that most IR coefficients are zero or very small, and only a small part of the coefficients have significant amplitude. Proportionate techniques [32,33] have therefore been used to enhance DSAF algorithms for sparse system identification. The main idea of proportionate adaptation in AF algorithms is that the proportionate assignment matrix assigns a different gain to each coefficient: large coefficients obtain large gains, and hence large step sizes, and vice versa. The proportionate-type scheme has been utilized in diffusion proportionate sign subband adaptive filtering (DPSSAF) [34] and its series of variants [18,19]. In this paper, the MCC [23,35,36] is adopted to construct a new cost function, and the resulting algorithm is named MCC-DSAF. Outliers in the noise produce large errors, and the negative exponential term of the Gaussian kernel function in the MCC-DSAF drives the contribution of an outlier toward zero. Therefore, compared with the DSSAF family, the MCC-DSAF has better adaptability and stronger robustness in non-Gaussian noise environments. We also propose an improved proportionate MCC-DSAF (MCC-IPDSAF) that gives each coefficient an independent gain. The MCC-IPDSAF not only resists impulsive noise, but is also suitable for identifying systems with different sparsity characteristics. The simulation results show that the two derived algorithms perform well when their convergence, steady-state error, and tracking behavior are compared with popular methods.

2. The Proposed Algorithms

A smart dust network is a system containing many sensors, robots, or other devices built as tiny microelectromechanical systems (MEMS) to detect light, temperature, vibration, magnetism, or chemicals. Herein, an N-node smart dust network within the wireless sensor network framework is considered. At node n and time slot l, the signal $d_n(l)$ arises from the linear model:

$$d_n(l) = \mathbf{u}_n^T(l)\,\mathbf{w}_0 + \eta_n(l), \quad n = 1, 2, \ldots, N,$$

where $\mathbf{u}_n(l) = [u_n(l), u_n(l-1), \ldots, u_n(l-M+1)]^T$ denotes the input vector, $\mathbf{w}_0$ is the unknown vector of length M that we expect to estimate, and $\eta_n(l)$ is the system noise.
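As a quick numerical illustration of this data model, one node's desired signal can be generated as follows (a sketch only; the dimensions and noise level here are illustrative, not the simulation settings of Section 4):

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8                                  # filter length (illustrative; the paper uses M = 128)
w0 = rng.standard_normal(M)            # unknown vector w0 to be estimated

def regressor(u_stream, l, M):
    """u_n(l) = [u_n(l), u_n(l-1), ..., u_n(l-M+1)]^T (zero-padded at the start)."""
    u = np.zeros(M)
    for m in range(min(M, l + 1)):
        u[m] = u_stream[l - m]
    return u

u_stream = rng.standard_normal(200)    # scalar input sequence of one node
l = 50
u_l = regressor(u_stream, l, M)
d_l = u_l @ w0 + 0.1 * rng.standard_normal()   # d_n(l) = u_n^T(l) w0 + eta_n(l)
```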

2.1. Review of the DSAF Algorithm

The DSAF is implemented based on a multiband structure, which is shown in Figure 1. In Figure 1, ↑I and ↓I represent I-fold interpolation and decimation, respectively. Using the analysis filter bank $H_i(z) = \sum_{m=0}^{M-1} h_i(m) z^{-m}$, the desired signal $d_n(l)$ and the input signal $u_n(l)$ for node n are split into I subband signals $d_{n,i}(l)$ and $u_{n,i}(l)$, where $i = 0, 1, \ldots, I-1$ and I is the number of subbands. The subband input vectors and the subband desired vector for node n are expressed as:

$$\mathbf{u}_{n,i}(k) = [u_{n,i}(kI), u_{n,i}(kI-1), \ldots, u_{n,i}(kI-M+1)]^T,$$

$$\mathbf{d}_n(k) = [d_{n,0,D}(k), d_{n,1,D}(k), \ldots, d_{n,I-1,D}(k)]^T.$$

The subband inputs $u_{n,i}(l)$ of node n are processed by the adaptive filter $\hat{W}_n(z)$, whose weight vector is $\mathbf{w}_n(l)$, to generate the output signals $y_{n,i}(l)$. The subband desired signals $d_{n,i}(l)$ and the outputs $y_{n,i}(l)$ are decimated to obtain $d_{n,i,D}(k)$ and $y_{n,i,D}(k)$. The subband error $e_{n,i,D}(k)$ is the difference between $d_{n,i,D}(k)$ and $y_{n,i,D}(k)$. Using the synthesis filter bank $G_i(z) = \sum_{m=0}^{M-1} g_i(m) z^{-m}$, the fullband error $e_n(l)$ is obtained. The original and decimated sequences are indexed by l and k, respectively, where $l = kI$.
As is well known, DSAF algorithms can be divided into two types, namely adapt-then-combine (ATC) and combine-then-adapt (CTA) [37]. In general, ATC performs better than CTA; hence, the ATC type is employed herein.
For DSAF, its cost function for node n is
$$J(k) = E\left[\sum_{i=0}^{L-1}\left(d_{n,i,D}(k) - \mathbf{u}_{n,i}^T(k)\,\mathbf{w}_n(k-1)\right)^2\right],$$

in which $E$ denotes the expectation operator. Minimizing $J(k)$ in Equation (4) with the ATC scheme yields the update equations of the DSAF:

$$\phi_n(k) = \mathbf{w}_n(k-1) + 2\mu_n \sum_{i=0}^{L-1} \frac{\mathbf{u}_{n,i}(k)\, e_{n,i,D}(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon}, \qquad \mathbf{w}_n(k) = \sum_{j \in N_n} c_{jn}\, \phi_j(k),$$

where $e_{n,i,D}(k)$ is

$$e_{n,i,D}(k) = d_{n,i,D}(k) - \mathbf{u}_{n,i}^T(k)\,\mathbf{w}_n(k-1).$$

$N_n$ represents the neighborhood of node n, and $c_{jn}$ denotes a combination coefficient of the $N \times N$ matrix $\mathbf{C}$, where $\sum_{j \in N_n} c_{jn} = 1$ and $c_{jn} = 0$ if $j \notin N_n$. In other words, $c_{jn}$ is zero when node j is not connected to node n; otherwise, the combination coefficients in each column of $\mathbf{C}$ sum to one. $\varepsilon$ is a regularization parameter that prevents the denominator from being zero, and $\mu_n$ is the step size for node n. $\phi_n(k)$ and $\mathbf{w}_n(k)$ are the intermediate estimate and the estimate of $\mathbf{w}_0$ at node n.
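The ATC update above can be sketched in a few lines of NumPy. This is a minimal single-iteration illustration under assumed array shapes (nodes × subbands × taps), not the authors' implementation:

```python
import numpy as np

def dsaf_atc_step(w, U_sub, d_sub, mu, C, eps=1e-2):
    """One adapt-then-combine (ATC) DSAF iteration (sketch of Eq. (5)).

    w     : (N, M)    current estimates, one row per node
    U_sub : (N, I, M) subband input vectors u_{n,i}(k)
    d_sub : (N, I)    decimated subband desired signals d_{n,i,D}(k)
    C     : (N, N)    combination matrix with C[j, n] = c_{jn}, columns sum to one
    """
    N, I, M = U_sub.shape
    phi = np.empty_like(w)
    for n in range(N):
        grad = np.zeros(M)
        for i in range(I):
            u = U_sub[n, i]
            e = d_sub[n, i] - u @ w[n]        # subband error e_{n,i,D}(k)
            grad += u * e / (u @ u + eps)     # normalized per subband
        phi[n] = w[n] + mu * grad             # adapt
    return C.T @ phi                          # combine: w_n = sum_j c_{jn} phi_j
```

Calling this function in a loop over k, with fresh subband data at each iteration, reproduces the adapt and combine phases of the update.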
Under Gaussian noise interference, the DSAF has a decorrelation ability for colored input signals and enjoys a faster convergence speed than DLMS. However, under non-Gaussian noise interference, the error sequences in the DSAF are destabilized by the impulsive characteristics of the noise, which degrades the convergence of the DSAF. Therefore, a modified DSAF algorithm based on an MCC scheme with individual weighting factors is proposed and given the name MCC-DSAF.

2.2. The Proposed MCC-DSAF Algorithm

Correntropy between two random variables X and Y is defined as

$$C(X, Y) = E[\kappa_\sigma(X - Y)] = \iint \kappa_\sigma(x - y)\, f_{X,Y}(x, y)\, dx\, dy,$$

where $\kappa_\sigma$ denotes the Mercer kernel and $f_{X,Y}(x, y)$ is the joint probability density function. A Gaussian kernel is commonly used:

$$\kappa_\sigma(x - y) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x - y)^2}{2\sigma^2}\right),$$

where σ is the kernel size. To use the correntropy in our algorithm, we define $e_{n,i,D}(k) = x - y$, with $x = d_{n,i,D}(k)$ and $y = \mathbf{u}_{n,i}^T(k)\,\mathbf{w}_n(k-1)$. Then, the new cost function is defined as:

$$J^{glob}(k) = \sum_{n=1}^{N}\sum_{i=0}^{L-1} \sqrt{\frac{\beta}{\pi}}\exp\left[-\frac{\beta\left(d_{n,i,D}(k) - \mathbf{u}_{n,i}^T(k)\,\mathbf{w}_n(k-1)\right)^2}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}\right],$$

where β denotes the kernel parameter related to the kernel size, $\beta = \frac{1}{2\sigma^2}$, and $\varepsilon_1 > 0$ is a small value.
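The kernel and the correntropy estimate can be written directly from the definitions above; the sketch below (illustrative helper functions) uses a sample mean in place of the expectation:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """kappa_sigma(x - y) = exp(-(x - y)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def correntropy(x, y, sigma=1.0):
    """Sample estimate of C(X, Y) = E[kappa_sigma(X - Y)]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean(gaussian_kernel(x, y, sigma))
```

For identical samples the estimate equals the kernel peak $1/(\sigma\sqrt{2\pi})$, and it decays as the samples drift apart, which is the property the MCC cost exploits.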
In the global network, a large amount of communication resources and energy are required and the real-time requirements of the system are high. Thus, the above problem should be well solved in the distributed network, and the global cost function should be changed into a local cost function.
For diffusion networks, the local cost function can be formulated as a linear combination of locally weighted correntropies, which is expressed as:

$$J_n^{loc}(k) = \sum_{j \in N_n} c_{jn} \sum_{i=0}^{L-1} \sqrt{\frac{\beta}{\pi}}\exp\left[-\frac{\beta\left(d_{n,i,D}(k) - \mathbf{u}_{n,i}^T(k)\,\mathbf{w}_n(k-1)\right)^2}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}\right].$$
Driven by Equation (10), the increment of the weight vector $\Delta\mathbf{w}_n(k)$ at time instant k is written as:

$$\Delta\mathbf{w}_n(k) = \frac{\partial J_n^{loc}(k)}{\partial \mathbf{w}} = \sum_{j \in N_n} c_{jn} \sum_{i=0}^{L-1} 2\sqrt{\frac{\beta^3}{\pi}}\exp\left[-\frac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}\right]\frac{\mathbf{u}_{n,i}(k)\, e_{n,i,D}(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}.$$

Then, the update of the MCC-DSAF based on the gradient method is obtained:

$$\mathbf{w}_n(k) = \mathbf{w}_n(k-1) + \mu_n\,\Delta\mathbf{w}_n(k) = \mathbf{w}_n(k-1) + \mu_n \sum_{j \in N_n} c_{jn} \sum_{i=0}^{L-1} 2\sqrt{\frac{\beta^3}{\pi}}\exp\left[-\frac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}\right]\frac{\mathbf{u}_{n,i}(k)\, e_{n,i,D}(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}.$$
For the diffusion strategy, the local estimate $\mathbf{w}_n(k)$ is a linear combination of the intermediate estimates $\phi_j(k)$; their relationship can be expressed as

$$\mathbf{w}_n(k) = \sum_{j \in N_n} c_{jn}\, \phi_j(k).$$

In such a way, $\Delta\mathbf{w}_n(k)$ is written as:

$$\Delta\mathbf{w}_n(k) = \sum_{j \in N_n} c_{jn}\, \Delta\phi_j(k).$$
Comparing Equation (11) with Equation (14), the increment of the intermediate estimate $\Delta\phi_n(k)$ is

$$\Delta\phi_n(k) = 2\sqrt{\frac{\beta^3}{\pi}}\sum_{i=0}^{L-1} \exp\left[-\frac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}\right]\frac{\mathbf{u}_{n,i}(k)\, e_{n,i,D}(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}.$$

From Equation (12), the update of the intermediate estimate $\phi_n(k)$ is

$$\phi_n(k) = \phi_n(k-1) + \mu_n\, \Delta\phi_n(k).$$

From Equation (13), $\mathbf{w}_n(k-1)$ contains more data information from neighboring nodes than $\phi_n(k-1)$ [7,22]. Following the diffusion scheme in [7], replacing $\phi_n(k-1)$ with $\mathbf{w}_n(k-1)$ in Equation (16) gives

$$\phi_n(k) = \mathbf{w}_n(k-1) + \mu_n\, \Delta\phi_n(k).$$

Using Equations (12), (15), and (17), the update equations of the MCC-DSAF are rewritten as:

$$\phi_n(k) = \mathbf{w}_n(k-1) + 2\mu_n\sqrt{\frac{\beta^3}{\pi}}\sum_{i=0}^{L-1} \exp\left[-\frac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}\right]\frac{\mathbf{u}_{n,i}(k)\, e_{n,i,D}(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}, \qquad \mathbf{w}_n(k) = \sum_{j \in N_n} c_{jn}\, \phi_j(k).$$
When the error $e_{n,i,D}(k)$ is corrupted by an impulse during the weight update, the negative exponent in the Gaussian kernel term $\exp\left[-\beta\, e_{n,i,D}^2(k)/\left(\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1\right)\right]$ drives the contribution of the outlier toward zero, which ensures that the MCC-DSAF performs well under non-Gaussian noise interference.
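To see this outlier suppression at work, the MCC-DSAF update can be sketched as follows (a minimal illustration with assumed array shapes; β and ε₁ default to the values used in the simulation section):

```python
import numpy as np

def mcc_dsaf_step(w, U_sub, d_sub, mu, C, beta=15.0, eps1=1e-2):
    """One ATC MCC-DSAF iteration (sketch).

    Same shapes as the DSAF sketch: w (N, M), U_sub (N, I, M),
    d_sub (N, I), C (N, N) column-stochastic with C[j, n] = c_{jn}.
    """
    N, I, M = U_sub.shape
    scale = 2.0 * np.sqrt(beta ** 3 / np.pi)
    phi = np.empty_like(w)
    for n in range(N):
        grad = np.zeros(M)
        for i in range(I):
            u = U_sub[n, i]
            denom = u @ u + eps1
            e = d_sub[n, i] - u @ w[n]
            # Gaussian-kernel weight: a large (impulsive) error drives the
            # exponential toward zero, so the outlier barely moves the weights
            g = np.exp(-beta * e ** 2 / denom)
            grad += g * u * e / denom
        phi[n] = w[n] + mu * scale * grad   # adapt
    return C.T @ phi                        # combine
```

Because the kernel weight decays exponentially with the squared error, a single impulsive sample contributes almost nothing to the weight increment, in contrast with the plain DSAF update.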

2.3. The Proposed MCC-IPDSAF Algorithm

The MCC-IPDSAF is obtained by introducing an adaptive gain matrix into the MCC-DSAF to achieve faster convergence when the unknown system is sparse, since the adaptive gain matrix $\mathbf{G}_n(k)$ yields a better balance between convergence speed and steady-state error. The update equations of the MCC-IPDSAF are

$$\phi_n(k) = \mathbf{w}_n(k-1) + 2\mu_n\sqrt{\frac{\beta^3}{\pi}}\sum_{i=0}^{L-1} \exp\left[-\frac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}\right]\frac{\mathbf{G}_n(k)\,\mathbf{u}_{n,i}(k)\, e_{n,i,D}(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{G}_n(k)\,\mathbf{u}_{n,i}(k) + \varepsilon_1}, \qquad \mathbf{w}_n(k) = \sum_{j \in N_n} c_{jn}\, \phi_j(k).$$
There are several methods for choosing G n ( k ) to render it suitable for a sparse system [33]. As we know, the main idea of adaptive gain matrix in AFs is to obtain larger step sizes for the larger filter coefficients, which guarantees their convergence. Therefore, the MCC-IPDSAF can provide a quicker convergence compared to the MCC-DSAF in the sparse system.
$\mathbf{G}_n(k) = \mathrm{diag}[g_{n,0}(k), g_{n,1}(k), \ldots, g_{n,M-1}(k)]$ denotes the $M \times M$ adaptive gain matrix. The diagonal elements of $\mathbf{G}_n(k)$ are calculated by

$$g_{n,m}(k) = \frac{1-\alpha}{2M} + \frac{(1+\alpha)\,|w_{n,m}(k)|}{2\|\mathbf{w}_n(k)\|_1 + \varepsilon_2},$$

where α is a parameter related to the system sparsity, and $\varepsilon_2$ is a regularization term that prevents the denominator from being zero.
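The gain rule above is IPNLMS-style and can be sketched as (α and ε₂ default to the simulation values):

```python
import numpy as np

def ipnlms_gains(w_n, alpha=0.5, eps2=1e-2):
    """Diagonal of G_n(k): a constant floor plus a term proportional to |w_{n,m}|."""
    M = len(w_n)
    return (1.0 - alpha) / (2.0 * M) + \
        (1.0 + alpha) * np.abs(w_n) / (2.0 * np.linalg.norm(w_n, 1) + eps2)
```

The floor term keeps small coefficients adapting, while the proportionate term gives large coefficients large gains; the gains sum to approximately one, so the overall update energy is preserved.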

3. Performance Analysis

3.1. Data Model and Assumption

A mean square analysis of the MCC-DSAF will now be performed. We define the following global variables:

$$\mathbf{w}(k) = \mathrm{col}\{\mathbf{w}_1(k), \mathbf{w}_2(k), \ldots, \mathbf{w}_N(k)\},$$
$$\phi(k) = \mathrm{col}\{\phi_1(k), \phi_2(k), \ldots, \phi_N(k)\},$$
$$\mathbf{U}(k) = \mathrm{diag}\{\mathbf{u}_1(k), \mathbf{u}_2(k), \ldots, \mathbf{u}_N(k)\},$$
$$\mathbf{u}_n(k) = \mathrm{col}\{\mathbf{u}_{n,0}(k), \mathbf{u}_{n,1}(k), \ldots, \mathbf{u}_{n,L-1}(k)\},$$
$$\mathbf{d}(k) = \mathrm{col}\{d_{1,0}(k), \ldots, d_{1,L-1}(k), \ldots, d_{N,0}(k), \ldots, d_{N,L-1}(k)\},$$
$$\mathbf{v}(k) = \mathrm{col}\{\eta_{1,0}(k), \ldots, \eta_{1,L-1}(k), \ldots, \eta_{N,0}(k), \ldots, \eta_{N,L-1}(k)\},$$

where $\mathrm{col}\{\cdot\}$ stacks its arguments into a column vector and $\mathrm{diag}\{\cdot\}$ denotes a (block) diagonal matrix. The desired signal of the entire network is $\mathbf{d}(k) = \mathbf{U}^T(k)\,\mathbf{w}^0 + \mathbf{v}(k)$, where $\mathbf{w}^0 = \mathcal{I}\,\mathbf{w}_0$ and $\mathcal{I} = \mathrm{col}\{\mathbf{I}_M, \mathbf{I}_M, \ldots, \mathbf{I}_M\}$ is an $MN \times M$ matrix.
$\mathbf{H}(k)$ is

$$\mathbf{H}(k) = \mathrm{diag}\left\{\frac{1}{\mathbf{u}_{1,0}^T(k)\,\mathbf{u}_{1,0}(k)+\varepsilon_1}, \ldots, \frac{1}{\mathbf{u}_{1,L-1}^T(k)\,\mathbf{u}_{1,L-1}(k)+\varepsilon_1}, \ldots, \frac{1}{\mathbf{u}_{N,0}^T(k)\,\mathbf{u}_{N,0}(k)+\varepsilon_1}, \ldots, \frac{1}{\mathbf{u}_{N,L-1}^T(k)\,\mathbf{u}_{N,L-1}(k)+\varepsilon_1}\right\}.$$
Herein, the matrix $\mathbf{P}$ is a collection of the local step sizes:

$$\mathbf{P} = \mathrm{diag}\{\mu_1, \mu_2, \ldots, \mu_N\} \otimes \mathbf{I}_M,$$

where ⊗ denotes the Kronecker product. Next, $\mathbf{C}$ is the $N \times N$ combination coefficient matrix:

$$\mathbf{C} = \begin{bmatrix} c_{1,1} & c_{1,2} & \cdots & c_{1,N} \\ c_{2,1} & c_{2,2} & \cdots & c_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ c_{N,1} & c_{N,2} & \cdots & c_{N,N} \end{bmatrix},$$

and the matrix $\mathbf{A}$ is defined as:

$$\mathbf{A} = \mathbf{C}^T \otimes \mathbf{I}_M.$$

The diagonal matrix $\mathbf{\Omega}(k)$ is given by

$$\mathbf{\Omega}(k) = \mathrm{diag}\left\{2\sqrt{\tfrac{\beta^3}{\pi}}\exp\left[-\tfrac{\beta\, e_{1,0,D}^2(k)}{\mathbf{u}_{1,0}^T(k)\,\mathbf{u}_{1,0}(k)+\varepsilon_1}\right], \ldots, 2\sqrt{\tfrac{\beta^3}{\pi}}\exp\left[-\tfrac{\beta\, e_{1,L-1,D}^2(k)}{\mathbf{u}_{1,L-1}^T(k)\,\mathbf{u}_{1,L-1}(k)+\varepsilon_1}\right], \ldots, 2\sqrt{\tfrac{\beta^3}{\pi}}\exp\left[-\tfrac{\beta\, e_{N,L-1,D}^2(k)}{\mathbf{u}_{N,L-1}^T(k)\,\mathbf{u}_{N,L-1}(k)+\varepsilon_1}\right]\right\}.$$
Using the global variables, the global update equations of the MCC-DSAF can be obtained:

$$\phi(k) = \mathbf{w}(k-1) + \mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\left[\mathbf{d}(k) - \mathbf{U}^T(k)\,\mathbf{w}(k-1)\right], \qquad \mathbf{w}(k) = \mathbf{A}\,\phi(k).$$
In the analysis, we have the following assumptions:
Assumption 1.
All input regressors $\mathbf{u}_n(l)$ are independent, and $\eta_n(l)$ is independent over time and independent of $\mathbf{u}_n(l)$.
Assumption 2.
The subband colored input signal $u_{n,i}(k)$ is close to a white signal.
Assumption 3.
The Gaussian kernel term $2\sqrt{\beta^3/\pi}\exp\left[-\beta\, e_{n,i,D}^2(k)/\left(\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k)+\varepsilon_1\right)\right]$ is independent of $\mathbf{u}_{n,i}(k)$.
Assumption 1 is widely used in the analysis of AF algorithms. Assumption 3 does not hold exactly for the proposed algorithms, since the kernel term is a function of the error. However, as in [21,37,38,39], where a variable step size is treated as independent of $\mathbf{u}_{n,i}(k)$, the kernel term is regarded here as a variable step-size term.
The weight error vector for node n is expressed as:

$$\tilde{\mathbf{w}}_n(k) = \mathbf{w}_0 - \mathbf{w}_n(k).$$

The global weight error vector is:

$$\tilde{\mathbf{w}}(k) = \mathbf{w}^0 - \mathbf{w}(k).$$

Since $\mathbf{A} = \mathbf{C}^T \otimes \mathbf{I}_M$ and each column of $\mathbf{C}$ sums to one, we have $\mathbf{A}\,\mathcal{I} = \mathcal{I}$ and hence $\mathbf{w}^0 = \mathbf{A}\,\mathbf{w}^0$. Replacing $\mathbf{w}(k)$ with $\mathbf{A}\,\phi(k)$ in Equation (32), Equation (34) becomes:

$$\begin{aligned}
\tilde{\mathbf{w}}(k) &= \mathbf{w}^0 - \mathbf{w}(k) = \mathbf{A}\,\mathbf{w}^0 - \mathbf{A}\,\phi(k) \\
&= \mathbf{A}\,\mathbf{w}^0 - \mathbf{A}\left[\mathbf{w}(k-1) + \mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\left(\mathbf{d}(k) - \mathbf{U}^T(k)\,\mathbf{w}(k-1)\right)\right] \\
&= \mathbf{A}\,\tilde{\mathbf{w}}(k-1) - \mathbf{A}\,\mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\left(\mathbf{U}^T(k)\,\tilde{\mathbf{w}}(k-1) + \mathbf{v}(k)\right) \\
&= \mathbf{A}\left[\mathbf{I}_{MN} - \mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\,\mathbf{U}^T(k)\right]\tilde{\mathbf{w}}(k-1) - \mathbf{A}\,\mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\,\mathbf{v}(k).
\end{aligned}$$
Next, the mean square analysis for w ˜ ( k ) will be presented.

3.2. Convergence Analysis

Taking the expectation of both sides of (35) gives

$$E[\tilde{\mathbf{w}}(k)] = \mathbf{A}\left[\mathbf{I}_{MN} - E\left[\mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\,\mathbf{U}^T(k)\right]\right]E[\tilde{\mathbf{w}}(k-1)] - \mathbf{A}\,E\left[\mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\,\mathbf{v}(k)\right].$$

It can be seen from Assumption 3 that $\mathbf{\Omega}(k)$ is independent of $\mathbf{U}(k)$. Thereby, we obtain

$$E\left[\mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\,\mathbf{U}^T(k)\right] = \mathbf{P}\,E[\mathbf{\Omega}(k)]\,\mathbf{R}_U,$$

where $\mathbf{R}_U = E\left[\mathbf{H}(k)\,\mathbf{U}(k)\,\mathbf{U}^T(k)\right]$, whose node-n block is $E\left[\sum_{i=0}^{L-1}\frac{\mathbf{u}_{n,i}(k)\,\mathbf{u}_{n,i}^T(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k)+\varepsilon_1}\right]$.
From Assumption 1, the expectation of the last term of (36) is close to zero. Therefore, (36) is rewritten as

$$E[\tilde{\mathbf{w}}(k)] = \mathbf{A}\left[\mathbf{I}_{MN} - \mathbf{P}\,E[\mathbf{\Omega}(k)]\,\mathbf{R}_U\right]E[\tilde{\mathbf{w}}(k-1)].$$
In Equation (38), the matrix $\mathbf{A}\left[\mathbf{I}_{MN} - \mathbf{P}\,E[\mathbf{\Omega}(k)]\,\mathbf{R}_U\right]$ should be stable. Thus, $\mathbf{I} - \mu_n\, E\left[2\sqrt{\tfrac{\beta^3}{\pi}}\sum_{i=0}^{L-1}\exp\left(-\tfrac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k)+\varepsilon_1}\right)\right]\mathbf{R}_U$ should be stable for all n, which means that $\mu_n$ should satisfy

$$-1 < 1 - \mu_n\, \lambda_{\max}(\mathbf{R}_U)\, E\left[2\sqrt{\frac{\beta^3}{\pi}}\sum_{i=0}^{L-1}\exp\left(-\frac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k)+\varepsilon_1}\right)\right] < 1,$$

then,

$$0 < \mu_n < \frac{2}{\lambda_{\max}(\mathbf{R}_U)\, E\left[2\sqrt{\frac{\beta^3}{\pi}}\sum_{i=0}^{L-1}\exp\left(-\frac{\beta\, e_{n,i,D}^2(k)}{\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k)+\varepsilon_1}\right)\right]}, \quad n = 1, 2, \ldots, N,$$
where $\lambda_{\max}$ denotes the maximum eigenvalue of $\mathbf{R}_U$. If the $l_1$-norm of the weight $\mathbf{w}_n(k)$ is bounded by τ, we have

$$|e_{n,i,D}(k)| = \left|d_{n,i,D}(k) - \mathbf{u}_{n,i}^T(k)\,\mathbf{w}_n(k-1)\right| \leq \|\mathbf{u}_{n,i}(k)\|_1\, \|\mathbf{w}_n(k-1)\|_1 + |d_{n,i,D}(k)| \leq \tau\,\|\mathbf{u}_{n,i}(k)\|_1 + |d_{n,i,D}(k)|.$$

Next, noting that $\mathbf{u}_{n,i}^T(k)\,\mathbf{u}_{n,i}(k) = \|\mathbf{u}_{n,i}(k)\|_2^2$, the stability condition of the MCC-DSAF becomes

$$0 < \mu_n < \frac{2}{\lambda_{\max}(\mathbf{R}_U)\, E\left[2\sqrt{\frac{\beta^3}{\pi}}\sum_{i=0}^{L-1}\exp\left(-\frac{\beta\left(\tau\,\|\mathbf{u}_{n,i}(k)\|_1 + |d_{n,i,D}(k)|\right)^2}{\|\mathbf{u}_{n,i}(k)\|_2^2+\varepsilon_1}\right)\right]}, \quad n = 1, 2, \ldots, N.$$

3.3. Steady-State Performance

The steady-state behavior of the MCC-DSAF is studied herein. Let Σ denote any symmetric positive definite weighting matrix, and let $\|\mathbf{t}\|_\Sigma^2 = \mathbf{t}^T\Sigma\,\mathbf{t}$ denote the weighted squared Euclidean norm. Taking the Σ-weighted norm of both sides of Equation (35) results in

$$\|\tilde{\mathbf{w}}(k)\|_\Sigma^2 = \|\tilde{\mathbf{w}}(k-1)\|_{\Sigma'}^2 + \mathbf{v}^T(k)\,\mathbf{Y}(k)\,\mathbf{v}(k),$$

where

$$\mathbf{Y}(k) = \mathbf{U}^T(k)\,\mathbf{H}^T(k)\,\mathbf{\Omega}^T(k)\,\mathbf{P}^T\mathbf{A}^T\Sigma\,\mathbf{A}\,\mathbf{P}\,\mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k),$$

and

$$\Sigma' = \mathbf{A}^T\Sigma\,\mathbf{A} - \mathbf{A}^T\Sigma\,\mathbf{A}\,\mathbf{P}\,\mathbf{Z}(k) - \mathbf{Z}^T(k)\,\mathbf{P}^T\mathbf{A}^T\Sigma\,\mathbf{A} + \mathbf{Z}^T(k)\,\mathbf{P}^T\mathbf{A}^T\Sigma\,\mathbf{A}\,\mathbf{P}\,\mathbf{Z}(k),$$

and

$$\mathbf{Z}(k) = \mathbf{\Omega}(k)\,\mathbf{H}(k)\,\mathbf{U}(k)\,\mathbf{U}^T(k).$$
Taking the expectation of both sides of Equation (43), we find:

$$E\left[\|\tilde{\mathbf{w}}(k)\|_\Sigma^2\right] = E\left[\|\tilde{\mathbf{w}}(k-1)\|_{\Sigma'}^2\right] + E\left[\mathbf{v}^T(k)\,\mathbf{Y}(k)\,\mathbf{v}(k)\right].$$

$E[\Sigma']$ is

$$E[\Sigma'] = \mathbf{A}^T\Sigma\,\mathbf{A} - \mathbf{A}^T\Sigma\,\mathbf{A}\,\mathbf{P}\,E[\mathbf{Z}(k)] - E[\mathbf{Z}^T(k)]\,\mathbf{P}^T\mathbf{A}^T\Sigma\,\mathbf{A} + E\left[\mathbf{Z}^T(k)\,\mathbf{P}^T\mathbf{A}^T\Sigma\,\mathbf{A}\,\mathbf{P}\,\mathbf{Z}(k)\right].$$
Herein, for the analysis, let $E[\Sigma'] = \Sigma'$. The $\mathrm{bvec}\{\cdot\}$ operator converts a block matrix into a single column vector, and ⊙ denotes the block Kronecker product. Using $\mathrm{bvec}[\mathbf{Q}\Sigma\,\mathbf{P}^T] = (\mathbf{P} \odot \mathbf{Q})\,\xi$ from [40], with $\xi = \mathrm{bvec}[\Sigma]$, applying the $\mathrm{bvec}\{\cdot\}$ operator to each term on the right side of (48) yields

$$\mathrm{bvec}\left[\mathbf{A}^T\Sigma\,\mathbf{A}\right] = (\mathbf{A}^T \odot \mathbf{A}^T)\,\xi,$$

$$\mathrm{bvec}\left[\mathbf{A}^T\Sigma\,\mathbf{A}\,\mathbf{P}\,E[\mathbf{Z}(k)]\right] = \left(E[\mathbf{Z}^T(k)] \odot \mathbf{I}_{MN}\right)\left(\mathbf{P}^T \odot \mathbf{I}_{MN}\right)(\mathbf{A}^T \odot \mathbf{A}^T)\,\xi,$$

$$\mathrm{bvec}\left[E[\mathbf{Z}^T(k)]\,\mathbf{P}^T\mathbf{A}^T\Sigma\,\mathbf{A}\right] = \left(\mathbf{I}_{MN} \odot E[\mathbf{Z}^T(k)]\right)\left(\mathbf{I}_{MN} \odot \mathbf{P}^T\right)(\mathbf{A}^T \odot \mathbf{A}^T)\,\xi,$$

$$\mathrm{bvec}\left[E\left[\mathbf{Z}^T(k)\,\mathbf{P}^T\mathbf{A}^T\Sigma\,\mathbf{A}\,\mathbf{P}\,\mathbf{Z}(k)\right]\right] = E\left[\mathbf{Z}^T(k) \odot \mathbf{Z}^T(k)\right]\left(\mathbf{P}^T \odot \mathbf{P}^T\right)(\mathbf{A}^T \odot \mathbf{A}^T)\,\xi.$$

Collecting the terms on the right side of Equation (48) without ξ gives the matrix $\mathbf{Q}$:

$$\mathbf{Q} = \left[\mathbf{I}_{M^2N^2} - \left(E[\mathbf{Z}(k)] \odot \mathbf{I}_{MN}\right)\left(\mathbf{P}^T \odot \mathbf{I}_{MN}\right) - \left(\mathbf{I}_{MN} \odot E[\mathbf{Z}^T(k)]\right)\left(\mathbf{I}_{MN} \odot \mathbf{P}^T\right) + E\left[\mathbf{Z}^T(k) \odot \mathbf{Z}^T(k)\right]\left(\mathbf{P}^T \odot \mathbf{P}^T\right)\right](\mathbf{A}^T \odot \mathbf{A}^T),$$

while applying the $\mathrm{bvec}\{\cdot\}$ operator to the left side of Equation (48) gives $\xi' = \mathrm{bvec}[\Sigma']$.
Finally, we have

$$\xi' = \mathbf{Q}\,\xi.$$
Also, applying the $\mathrm{bvec}\{\cdot\}$ operator to the second term on the right side of Equation (47) yields

$$\mathrm{bvec}\left[E\left[\mathbf{v}^T(k)\,\mathbf{Y}(k)\,\mathbf{v}(k)\right]\right] = \mathbf{R}\,\xi.$$

Thus, $\mathbf{R}$ is

$$\mathbf{R} = E\left[\left(\mathbf{v}^T(k) \odot \mathbf{v}^T(k)\right)\left(\mathbf{U}^T(k) \odot \mathbf{U}^T(k)\right)\left(\mathbf{H}^T(k) \odot \mathbf{H}^T(k)\right)\left(\mathbf{\Omega}^T(k) \odot \mathbf{\Omega}^T(k)\right)\left(\mathbf{P}^T \odot \mathbf{P}^T\right)\left(\mathbf{A}^T \odot \mathbf{A}^T\right)\right].$$
According to the above analysis, Equation (47) becomes

$$E\left[\|\tilde{\mathbf{w}}(k)\|_\xi^2\right] = E\left[\|\tilde{\mathbf{w}}(k-1)\|_{\mathbf{Q}\xi}^2\right] + \mathbf{R}\,\xi.$$
The mean square deviation (MSD) for node n is given by

$$MSD_n = E\left[\|\tilde{\mathbf{w}}_n(k)\|^2\right] = E\left[\|\mathbf{w}_0 - \mathbf{w}_n(k)\|^2\right].$$
Here, $\mathbf{m}_n$ is defined as

$$\mathbf{m}_n = \mathrm{vec}\left[\mathrm{diag}(\mathbf{b}_{n,N}) \otimes \mathbf{I}_M\right],$$

where $\mathbf{b}_{n,N}$ denotes the n-th column of $\mathbf{I}_N$, and $\mathrm{vec}\{\cdot\}$ stacks the columns of a matrix into a column vector. Let $\xi = \mathbf{m}_n$ in Equation (57). When k approaches infinity, the MSD for node n is

$$MSD_n = \mathbf{R}\left[\mathbf{I} - \mathbf{Q}\right]^{-1}\mathbf{m}_n.$$
The MSD of the entire network is defined as the average of all node MSDs:

$$MSD_{network} = \frac{1}{N}\sum_{n=1}^{N} MSD_n.$$

4. Simulation

The effectiveness of the proposed algorithms is verified through simulation. Figure 2 shows the network topology with 20 nodes, which are located in the square area $[0, 1.2] \times [0, 1.2]$.
The combination coefficients $c_{jn}$ are obtained by the Metropolis rule [11,41]: $c_{jn} = \frac{1}{\max(|N_n|, |N_j|)}$ if $n \neq j$, and $c_{nn} = 1 - \sum_{j \neq n} c_{jn}$. The unknown system has length M = 128, and its sparsity, measured by $\zeta(\mathbf{w}_0) = \frac{M}{M-\sqrt{M}}\left(1 - \frac{\|\mathbf{w}_0\|_1}{\sqrt{M}\,\|\mathbf{w}_0\|_2}\right)$ [42,43], is $\zeta(\mathbf{w}_0) = 0.751$. Figure 3 shows the non-Gaussian noise distribution for $P_r = 0.01$ and $P_r = 0.1$ at node n = 1. The normalized mean square deviation (NMSD), $NMSD = \frac{1}{N}\sum_{n=1}^{N} 10\log_{10}\frac{\|\mathbf{w}_n - \mathbf{w}_0\|^2}{\|\mathbf{w}_0\|^2}$, is used to evaluate the behavior of the devised algorithms: the smaller the NMSD, the closer the estimates are to the unknown system. To make the comparison between algorithms reliable, each convergence curve is averaged over 30 independent runs.
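The Metropolis combination rule, the sparsity measure ζ(w₀), and the NMSD metric used in this section can be sketched as follows (illustrative helper functions; the adjacency matrix and the convention that a neighborhood size counts the node itself are assumptions of this sketch):

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis combination matrix C with c_{jn} stored in C[j, n].

    adj is a symmetric boolean adjacency matrix without self-loops; the
    neighborhood size |N_n| counts node n itself (a common convention in
    the diffusion literature, assumed here).
    """
    N = adj.shape[0]
    size = adj.sum(axis=1) + 1
    C = np.zeros((N, N))
    for n in range(N):
        for j in range(N):
            if adj[n, j] and j != n:
                C[j, n] = 1.0 / max(size[n], size[j])
        C[n, n] = 1.0 - C[:, n].sum()   # each column sums to one
    return C

def sparsity(w0):
    """zeta(w0) = M/(M - sqrt(M)) * (1 - ||w0||_1 / (sqrt(M) ||w0||_2))."""
    M = len(w0)
    return M / (M - np.sqrt(M)) * \
        (1.0 - np.linalg.norm(w0, 1) / (np.sqrt(M) * np.linalg.norm(w0)))

def nmsd_db(W, w0):
    """Network NMSD in dB, averaged over the N rows (nodes) of W."""
    ratios = np.sum((W - w0) ** 2, axis=1) / np.sum(w0 ** 2)
    return np.mean(10.0 * np.log10(ratios))
```

The sparsity measure equals 1 for a one-tap (maximally sparse) system and 0 for a constant-amplitude (least sparse) system.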

System Identification

The algorithms use four-subband cosine-modulated filter banks. White, colored, and speech signals are used as inputs in this section. The colored signal $u_n(l)$ is generated by passing Gaussian white noise through a first-order system with transfer function $H(z) = \frac{1}{1 - 0.95 z^{-1}}$. The variances of the input signal and the Gaussian noise are given in Figure 4. The measurement noise $\eta_n(l)$ is constructed from zero-mean Gaussian noise $v_n(l)$ and impulsive noise $z_n(l)$. The impulsive noise $z_n(l)$ is obtained from the Bernoulli process $\phi_n(l)$ and the Gaussian process $q_n(l)$ as $z_n(l) = q_n(l)\,\phi_n(l)$, where the Bernoulli process satisfies $P\{\phi_n = 0\} = 1 - P_r$ and $P\{\phi_n = 1\} = P_r$. The signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR) used in [18] are given by $SNR = 10\log_{10}\frac{\sigma_{d,n}^2}{\sigma_{v,n}^2}$ and $SIR = 10\log_{10}\frac{\sigma_{d,n}^2}{\sigma_{z,n}^2}$, respectively, where $\sigma_{d,n}^2$ and $\sigma_{v,n}^2$ are the variances of $\mathbf{u}_n(l)\,\mathbf{w}_0$ and $v_n(l)$, and $\sigma_{z,n}^2 = 1000\,E[(\mathbf{u}_n(l)\,\mathbf{w}_0)^2]$. The kernel parameter β is 15, α = 0.5, and ε = ε₁ = ε₂ = 0.01.
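The Bernoulli-Gaussian impulsive noise $z_n(l) = q_n(l)\,\phi_n(l)$ described above can be generated as follows (the variance of the Gaussian process is a free parameter in this sketch):

```python
import numpy as np

def impulsive_noise(size, pr=0.01, sigma_z=1.0, rng=None):
    """Bernoulli-Gaussian impulsive noise z(l) = q(l) * phi(l), P{phi = 1} = pr."""
    if rng is None:
        rng = np.random.default_rng()
    phi = rng.random(size) < pr                 # Bernoulli process
    q = sigma_z * rng.standard_normal(size)     # Gaussian process
    return q * phi
```

On average, a fraction $P_r$ of the samples are impulses and the rest are exactly zero, which is what gives the noise its heavy-tailed, non-Gaussian character.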
Figure 5 gives the performance of the DSAF, MCC-DSAF, and MCC-IPDSAF for colored input signals, where $P_r = 0.01$. Their step-size parameters are 0.1, 0.0158, and 0.0075, respectively, chosen to obtain nearly the same initial convergence. The DSAF is severely degraded under non-Gaussian noise interference, whereas the proposed MCC-DSAF suppresses non-Gaussian interference much better. The steady-state error of the MCC-IPDSAF is smaller than that of the MCC-DSAF, which is attributed to the adaptive gain matrix that reassigns the gains of the coefficients.
Figure 6 illustrates the performance of diffusion sign error-LMS (DSE-LMS) [44], DMCC [21], diffusion affine projection sign algorithm (DAPSA) [45], MCC-DSAF, and MCC-IPDSAF for colored input signals for P r = 0.01 . Their step-size parameters are 0.0017, 0.065, 0.11, 0.029, and 0.015. The Gaussian kernel σ D M C C = 2 and the DAPSA projection order is 4. DAPSA converges faster than the DSE-LMS and DMCC, but it converges slower than the proposed MCC-DSAF. MCC-DSAF’s convergence is significantly faster than DSE-LMS, DMCC, and DAPSA. It can be verified that the subband algorithms can speed-up the convergence. When the proposed MCC-IPDSAF maintains the same convergence speed with MCC-DSAF, its steady-state error is smaller than that of the MCC-DSAF.
Figure 7 demonstrates the performance of DSAF, DSSAF, IWF-DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for a white input signal, where $P_r = 0.01$. Their step-size parameters are chosen to be 0.5, 0.14, 0.07, 0.02, 0.08, 0.055, and 0.01 to achieve the same initial convergence. DSAF shows the worst behavior. The DSSAF and IWF-DSSAF algorithms perform almost identically, as do the IPDSSAF and IWF-IPDSSAF algorithms; this is because IWF-DSSAF and IWF-IPDSSAF are subband variants of DSSAF and IPDSSAF, respectively, and subband splitting of white signals has almost no effect when the impulsiveness of the non-Gaussian noise is not particularly large. The steady-state error of the proposed MCC-DSAF is smaller than those of the DSSAF and IPDSSAF because it is based on the MCC, which can resist non-Gaussian noise. As a result, the MCC-IPDSAF behaves better than all the other algorithms.
Figure 8 is the performance of DSAF, DSSAF, IWF-DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for a white input signal with P r = 0.1 . Their step-size parameters are 0.25, 0.7, 0.34, 0.065, 0.4, 0.25, and 0.039, respectively. By increasing the impulsiveness of non-Gaussian noise, DSAF has completely failed, and the algorithms have similar performance to those in Figure 7.
Figure 9 discusses the behavior of DSAF, DSSAF, IWF-DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for a colored input signal, where $P_r = 0.01$. The step-size parameters of these algorithms are 0.1, 0.42, 0.055, 0.0158, 0.3, 0.06, and 0.0075, respectively. The DSAF is severely degraded, while the proposed MCC-DSAF has a smaller steady-state error than those of the DSSAF, IWF-DSSAF, and IPDSSAF. Due to the advantages of the adaptive gain matrix and the MCC scheme, the proposed MCC-IPDSAF outperforms all the other algorithms.
Figure 10 shows the performance of DSAF, DSSAF, IWF-DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for a colored input signal, where $P_r = 0.1$. The step-size parameters of these algorithms are 0.1, 0.25, 0.06, 0.021, 0.25, 0.0765, and 0.01, respectively. Compared with Figure 9, when the impulsiveness of the non-Gaussian noise is increased, the steady-state error and convergence of all the algorithms become worse. However, the proposed MCC-IPDSAF algorithm still behaves best.
Figure 11 shows the tracking behavior of DSAF, DSSAF, IWF-DSSAF, MCC-DSAF, IPDSSAF, MCC-IPDSAF, and IWF-IPDSSAF for a colored input signal, where $P_r = 0.01$. The unknown system changes when the iterations reach 20,000; the proposed MCC-DSAF and MCC-IPDSAF still exhibit good tracking performance.
Figure 12 gives a highly correlated real speech signal and the sparse channel. The real speech signal is the input; the sampling frequency is 8 kHz and the sample length is $4.8 \times 10^4$.
From Figure 13, the DSAF fails to converge, while the steady-state error of the proposed MCC-DSAF is better than those of the DSSAF and IWF-DSSAF under non-Gaussian interference with speech input. The MCC-IPDSAF is superior to the other algorithms. Although the non-stationarity of the speech input affects the behavior of all the mentioned algorithms, the experimental results verify the feasibility and effectiveness of the MCC-DSAF and MCC-IPDSAF algorithms.

5. Conclusions

In this paper, the maximum correntropy criterion and individual weighting factors have been used to construct a new cost function within the distributed subband adaptive filtering framework, yielding the MCC-DSAF. The developed MCC-DSAF has been derived and analyzed in detail. The proposed MCC-DSAF algorithm not only effectively suppresses non-Gaussian noise interference, but also outperforms the DSSAF and IWF-DSSAF with respect to convergence and MSD. Moreover, a proportionate adaptation scheme has been introduced into the MCC-DSAF to obtain the MCC-IPDSAF, which further enhances the behavior of the MCC-DSAF for identifying sparse systems. The convergence analysis and the steady-state behavior of the MCC-DSAF have been presented. The simulation results demonstrate that the proposed MCC-DSAF and MCC-IPDSAF are superior to the mentioned popular DSAF algorithms. The proposed algorithms are expected to be useful in radar, medical applications, wireless sensor networks, smart dust networks, distributed channel estimation, hydroacoustics, and related fields.

Author Contributions

Y.G. wrote the draft of this paper and provided the idea and basic simulation of the devised algorithm. J.L. carried out the analysis of the algorithms and constructed the figures comparing with other algorithms. Y.L. contributed to the idea and the investigation of the devised algorithm. All the authors discussed the results and the analysis of the devised algorithms together, and they have read and agreed to submit the final version of the manuscript.

Funding

This paper is supported by the National Key Research and Development Program of China (2016YFE0111100), the Key Research and Development Program of Heilongjiang (GX17A016), the Science and Technology Innovative Talents Foundation of Harbin (2016RAXXJ044), the Natural Science Foundation of Beijing (4182077), the China Postdoctoral Science Foundation (2017M620918, 2019T120134), the Fundamental Research Funds for the Central Universities (HEUCFG201829, 2072019CFG0801), and the Opening Fund of the Acoustics Science and Technology Laboratory (SSKF2016001).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The scheme of the diffusion subband adaptive filtering (DSAF) algorithm.
Figure 2. Diffusion network topology with 20 nodes within a squared area of [0, 1.2] × [0, 1.2].
Figure 3. The non-Gaussian noise with Pr = 0.01 (up) and Pr = 0.1 (down) for node n = 1.
Figure 4. The variance of the input signal and the Gaussian noise. (up) σ_{u,n}^2; (down) σ_{v,n}^2.
Figure 5. The performance of DSAF, maximum correntropy criterion (MCC)-DSAF, and improved proportionate MCC-DSAF (MCC-IPDSAF) for the colored input with Pr = 0.01.
Figure 6. The performance of DSE-LMS, diffusion maximum correntropy criterion (DMCC), DAPSA, MCC-DSAF, and MCC-IPDSAF for the colored input with Pr = 0.01.
Figure 7. The performance of DSAF, the diffusion sign subband AF algorithm (DSSAF), individual weighting factored (IWF)-DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for the white input with Pr = 0.01.
Figure 8. The performance of DSAF, IWF-DSSAF, DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for the white input with Pr = 0.1.
Figure 9. The performance of IWF-DSSAF, DSSAF, DSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for the colored input with Pr = 0.01.
Figure 10. The performance of DSAF, IWF-DSSAF, DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for the colored input with Pr = 0.1.
Figure 11. The tracking performance of DSAF, IWF-DSSAF, DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for the colored input with Pr = 0.01.
Figure 12. The speech input signal and the sparse channel.
Figure 13. The behavior of DSAF, DSSAF, IWF-DSSAF, MCC-DSAF, IPDSSAF, IWF-IPDSSAF, and MCC-IPDSAF for the speech input with Pr = 0.01.
