Article

Multiple Minor Components Extraction in Parallel Based on Möller Algorithm

1 The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang 050081, China
2 College of Weaponry Engineering, Naval University of Engineering, Wuhan 430033, China
3 Rocket Force University of Engineering, Xi’an 710025, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(20), 4073; https://doi.org/10.3390/electronics14204073
Submission received: 24 August 2025 / Revised: 9 October 2025 / Accepted: 13 October 2025 / Published: 16 October 2025

Abstract

The minor component (MC) usually refers to the noise part of a time-varying signal. Extracting multiple MCs from an input signal is very useful in many practical applications. Compared with single MC estimation algorithms and subspace tracking algorithms, multiple MC extraction algorithms have a wider range of applications and greater research significance. To address the problems of existing multiple MC extraction algorithms, such as the need to estimate parameters in advance and the presence of numerous constraints, this paper proposes a novel multiple MC extraction algorithm by adding a diagonal weighting matrix to the Möller algorithm. The fixed points of the proposed algorithm are analyzed using ordinary differential equation (ODE) methods, and it is demonstrated that the algorithm achieves stable convergence only when the weight matrix converges to the desired MCs of the signal. The simulation results illustrate the effectiveness of the proposed algorithm.

1. Introduction

In modern research, especially in the fields of signal processing and deep learning, principal component analysis (PCA) and minor component analysis (MCA), which extract the eigenvectors corresponding to the largest or smallest eigenvalues of the autocorrelation matrix of the signal or data, are commonly used for data preprocessing and feature extraction, with the extracted features then serving as inputs for subsequent analysis. For example, signal subspace eigenvectors can be combined with an array manifold matrix for weighted subspace fitting, ultimately estimating the directions of signal sources with high accuracy [1], while deep learning can be combined with subspace methods to achieve the super-resolution direction-finding performance of large-scale antennas using a small number of physical antennas [2]. In addition, PCA and MCA have been applied to FIR filter design [3], total least squares (TLS) [4], curve and surface fitting [5], and other fields [6,7,8,9,10,11,12,13,14]. Eigenvalue decomposition was the earliest method used to calculate the minor components (MCs). However, such methods require the autocorrelation matrix of the input data vectors to be explicitly provided beforehand. In order to extract the MCs directly from the input signal, Mathew and Reddy built a feedforward neural network with a sigmoidal activation function [15], but it had a high computational complexity. Compared with nonlinear neural network algorithms, Hebbian neural network algorithms have a lower complexity and are more suitable for time-varying systems. Hebbian neural network algorithms have since become the mainstream research direction in this field, and many such algorithms have been proposed over the last decades [16,17,18,19].
According to their extraction results, MCA algorithms can be divided into three categories: single MC extraction algorithms, subspace tracking algorithms, and multiple MC extraction algorithms. Single MC extraction algorithms can only extract the first MC of the input signal, with typical examples including the Möller algorithm [20], the Oja algorithm [5], and the Peng algorithm [21]. Subspace tracking algorithms converge to a subspace spanned by multiple MCs of the input signal rather than to the true MCs, with the Douglas algorithm [22] being a typical example. Multiple MC extraction algorithms converge to the true MCs, a representative example being the MDouglas algorithm [23]. The results produced by these three types of algorithms are progressively more refined, and their range of applicability widens accordingly, so research on multiple MC extraction algorithms has the most general application value. According to a previous study [24], there are generally two types of multiple MC extraction algorithms: the sequential version and the parallel version. In the sequential version, the desired MCs are extracted one by one through an explicit “inflation” procedure, which requires a large amount of memory to store the repeatedly used input samples and may lead to a significant processing delay and an error propagation effect [25]. The other type is the parallel version of MCA. Some parallel algorithms can extract a basis of the minor subspace (MS), such as the Kong algorithm [26], but they cannot extract the individual MCs. There is therefore a growing demand for algorithms that extract multiple MCs simultaneously.
To the best of our knowledge, extracting the individual MCs rather than merely a basis of the MS remains an important open problem. Jankovic [23] proposed a transfer mechanism that converts an MSA algorithm into a multiple MC extraction algorithm and applied it to the Douglas algorithm, thereby obtaining a new multiple MC extraction algorithm. However, the convergence of that algorithm is affected by the initial value of the weight matrix. Another multiple MC extraction algorithm was proposed by Lv [27] based on a PCA neural network. Nevertheless, this algorithm cannot work unless the largest eigenvalue is estimated before processing. In addition, convergence is also a crucial problem for it, since its rule cannot preserve the orthonormality of W. To overcome these shortcomings of the existing algorithms, we propose a novel algorithm based on the Möller MCA algorithm and weighted rules.
The main contributions of this article include the following two points. (1) A multiple MC extraction algorithm is proposed by adding a weighting matrix to the Möller MCA algorithm; it requires neither an extra normalization scheme nor a special initial value. (2) The convergence of the proposed algorithm is analyzed by means of ordinary differential equations, and its convergence result is shown to be exactly the desired MCs.
The rest of this paper is organized as follows. After the statement of the problem in Section 2, we introduce the adaptive extraction algorithm in Section 3 and analyze its stability in Section 4. The results of numerical simulations are presented in Section 5. Finally, Section 6 concludes the paper.

2. Problem Statement

Let $R$ denote the autocorrelation matrix of the input data sequence $x$, that is, $R = E[x_k x_k^T]$. The eigendecomposition of the matrix $R$ is
$$R = U \Lambda U^T, \tag{1}$$
where $U = [u_1, u_2, \ldots, u_n]$ is the matrix consisting of the eigenvectors of $R$, and $\Lambda$ is the diagonal matrix composed of the eigenvalues of $R$. Here, the eigenvalues are assumed to be distinct and are listed in descending order, $\lambda_n > \lambda_{n-1} > \cdots > \lambda_2 > \lambda_1 > 0$. If we define $\pi$ as a permutation of the set $\{1, 2, \ldots, n\}$, that is, $\{\pi(1), \pi(2), \ldots, \pi(n)\} = \{1, 2, \ldots, n\}$, then the eigenvalue matrix can be written as $\Lambda = \mathrm{diag}(\lambda_{\pi(1)}, \lambda_{\pi(2)}, \ldots, \lambda_{\pi(n)})$.
By the definition of MCs, the eigenvectors corresponding to the $r$ smallest eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_r$ of the matrix $R$ are the MCs.
MCA is used to extract the MCs from the input signals. The most commonly used model is the Hebbian neural network, which has a linear multiple-input–multiple-output (MIMO) relationship given by
$$y_k = W_k^T x_k, \tag{2}$$
where $y_k \in \mathbb{R}^{r \times 1}$ is the output of the neural network, $W_k \in \mathbb{R}^{n \times r}$ is the weight matrix of the neural network, and $x_k \in \mathbb{R}^{n \times 1}$ is a zero-mean stochastic process serving as the input of the neural network. Here $n$ and $r$ are the dimension of the input vector and the number of MCs, respectively, and $k$ is the current iteration index. The objective of MCA is to find an updating rule for $W_k$ under which the weight matrix approaches the MC directions of the input signal through online computation. Such an adaptive extraction algorithm is proposed below.
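As a point of reference for the notation above, the following minimal Python sketch (our own illustration, assuming NumPy; the helper name offline_minor_components is not from the paper) estimates $R$ from a batch of samples, obtains the MCs offline by eigendecomposition, and evaluates the network output $y_k = W_k^T x_k$ for one sample.

```python
import numpy as np

def offline_minor_components(X, r):
    """X holds samples as columns (n x N). Returns the r MCs of R = E[x x^T]."""
    N = X.shape[1]
    R = (X @ X.T) / N                      # sample autocorrelation matrix
    eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns eigenvalues in ascending order
    return eigvecs[:, :r], eigvals[:r]     # eigenvectors of the r smallest eigenvalues

# Example: zero-mean input with n = 6, extracting r = 3 MCs
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 10000))
U_mc, lam_mc = offline_minor_components(X, r=3)

# Linear MIMO Hebbian network output y_k = W_k^T x_k for one input sample
W = rng.standard_normal((6, 3))            # weight matrix W_k (n x r)
y = W.T @ X[:, 0]
```

Such an offline eigendecomposition serves only as a reference; the adaptive algorithms below avoid forming and decomposing $R$ explicitly.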

3. Adaptive Extracting Algorithm

The sequential MC extraction algorithm proposed by Möller [20] is as follows:
$$w(k+1) = w(k) + \eta \left\{ \left[ w^T(k) R w(k) \right] w(k) - \left[ 2 w^T(k) w(k) - 1 \right] R w(k) \right\}, \tag{3}$$
where $\eta \in (0, 1)$ is the learning factor of the neural network, $w(k) \in \mathbb{R}^{n \times 1}$ is the weight vector of the neural network, and $w^T(k) w(k)$ gradually converges to one as $k$ grows.
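As an illustration, a direct Python transcription of this update might look as follows (a minimal sketch under our own assumptions about the iteration count, the initialization, and the availability of $R$; in a fully online setting $R$ would be replaced by the instantaneous estimate $x_k x_k^T$).

```python
import numpy as np

def moller_mca(R, eta=0.02, n_iter=3000, rng=None):
    """Sequential Möller rule: estimates the single minor component of R."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(R.shape[0])
    for _ in range(n_iter):
        Rw = R @ w
        # w <- w + eta * { (w^T R w) w - (2 w^T w - 1) R w }
        w = w + eta * ((w @ Rw) * w - (2.0 * (w @ w) - 1.0) * Rw)
    return w / np.linalg.norm(w)   # normalized for comparison with the true eigenvector
```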
However, a sequential MC analysis algorithm of this kind extracts only one MC at a time; it cannot directly track the MC subspace, let alone extract multiple MCs. We therefore modify (3) and obtain the following subspace tracking algorithm:
$$W(k+1) = W(k) + \eta \left\{ W(k) W^T(k) R W(k) - R W(k) \left[ 2 W^T(k) W(k) - I_r \right] \right\}, \tag{4}$$
where $W(k) \in \mathbb{R}^{n \times r}$, $I_r$ is the identity matrix of size $r \times r$, and $R$ is the autocorrelation matrix of the input signal.
When the neural network (2) trained with rule (4) converges, the weight matrix $W(k)$ converges to an orthonormal basis of the subspace spanned by the eigenvectors corresponding to the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_r$. As is well known, such a basis does not, in general, consist of the MCs themselves. To remove this limitation when extracting multiple MCs, we modify algorithm (4) by replacing $W(k)$ with $W(k) D^{1/2}$. The proposed algorithm is given as
$$W(k+1) = W(k) + \eta \left\{ W(k) W^T(k) R W(k) D - R W(k) \left[ 2 W^T(k) W(k) D - I_r \right] \right\}, \tag{5}$$
where the matrix $D = \mathrm{diag}(d_1, d_2, \ldots, d_r)$ is a weighting matrix with $d_1 > d_2 > \cdots > d_r > 0$.
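A minimal Python sketch of one iteration of this weighted update is given below (our own rendering; the function name wmoller_step is not from the paper, and $R$ is assumed known — in the online case it would be replaced by $x_k x_k^T$ or a running average).

```python
import numpy as np

def wmoller_step(W, R, D, eta=0.02):
    """One iteration of the weighted Möller (WMöller) update for W (n x r)."""
    RW = R @ W
    r = W.shape[1]
    # W <- W + eta * { W (W^T R W) D - R W (2 W^T W D - I_r) }
    return W + eta * (W @ (W.T @ RW) @ D - RW @ (2.0 * (W.T @ W) @ D - np.eye(r)))
```

Note that, apart from the product $R W$, the update involves only small $r \times r$ matrix products, so the per-iteration cost stays close to that of the unweighted rule (4).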
For convenience, we refer to the proposed algorithm (5) as the weighted Möller (WMöller) algorithm and make two remarks on it.
Remark 1.
Comparing the Möller algorithm and the WMöller algorithm, we can see that the only differences between them are the weighting matrix and the dimension of the state matrix. If the dimension is set to one and the diagonal element of the weighting matrix is set to one, the proposed algorithm degenerates into the Möller algorithm. Therefore, the proposed algorithm can be seen as an extension of the Möller algorithm to the high-dimensional case, which means that its range of application is broader.
Remark 2.
The way the weighting matrix is chosen here is similar to that described in previous studies [24,28,29,30], and so is its function in those algorithms: the weighting acts like a Gram–Schmidt orthonormalization (GSO) applied to the state matrix of the neural network at each iterative step, making sure that the state matrix converges to the MCs of the autocorrelation matrix of the input signals. In addition, a detailed analysis of the mechanism of action of diagonal weighting matrices has been provided in reference [24]. The method selected here is similar to that in the reference; to avoid repetition, a detailed discussion of the diagonal matrix is not given here, and interested readers are referred to [24] for details. Since there are few restrictions on the choice of the matrix $D$, this algorithm is convenient for practical applications.

4. Convergence Analysis

In this section, the fixed points of the proposed algorithm are derived by the ODE method and shown to be determined by the weighting matrix $D$. Then, the stability of the algorithm at these fixed points is analyzed.

4.1. The Fixed Point of the Proposed Algorithm

The role of the weighting matrix $D$ is made precise here. According to stochastic approximation theory [31,32], it can be shown that, if certain conditions are satisfied, the asymptotic limit of the discrete learning rule (5) can be studied by means of the corresponding continuous-time differential equation
$$\frac{dW}{dt} = W W^T R W D - R W \left[ 2 W^T W D - I_r \right]. \tag{6}$$
First, we present a theorem that identifies the fixed points of (6). Before presenting the theorem, a lemma is needed.
Lemma 1.
Assume that $\Omega$ is a diagonal matrix of size $M$ and $H$ is a Hermitian matrix of the same size, and let $\pi$ be a permutation satisfying $\pi \circ \pi = \mathrm{id}$. Then $H \Omega H$ is diagonal for every such $\Omega$ if and only if, in each row $i$ of $H$, the $(i, \pi(i))$ entry is nonzero and all other entries are zero. In this case, writing $h_i = [H]_{(i, \pi(i))}$, the eigenvalues $\xi$ of $H$ are given by
$$\xi = \begin{cases} \pm h_i, & i \neq \pi(i), \\ h_i, & i = \pi(i), \end{cases} \qquad i = 1, 2, \ldots, M. \tag{7}$$
Proof. 
Let the $(i, j)$ entry of a matrix be written as $[\,\cdot\,]_{(i,j)}$, and for simplicity set $h_{i,j} = [H]_{(i,j)}$, $\omega_{i,j} = [\Omega]_{(i,j)}$, and $\omega_i = [\Omega]_{(i,i)}$. Then $[H \Omega H]_{(i,j)} = \sum_{m=1}^{M} h_{im} \omega_m h_{mj}$.
Since the matrix $H \Omega H$ is diagonal, its entries satisfy
$$\sum_{m=1}^{M} h_{im} \omega_m h_{mj} = 0, \quad i \neq j, \qquad \sum_{m=1}^{M} h_{im} \omega_m h_{mj} \neq 0, \quad i = j. \tag{8}$$
When $i \neq j$, the first condition holds for arbitrary $\omega_m$ if and only if $h_{im} h_{mj} = 0$ for every $m$. Moreover, when $i = j$, at least one term $h_{im} \omega_m h_{mi}$ is nonzero, which means that $h_{im} = h_{mi} \neq 0$ for some $m$. Both requirements can be satisfied if and only if, in each row $i$, exactly one entry $h_{i, \pi(i)}$ is nonzero and the others are zero, where $\pi$ is a permutation.
Then the matrix $H$ is similar to $\tilde{H}$, a permuted version of $H$:
$$\tilde{H} = \mathrm{diag}(H_1, \ldots, H_L, \bar{H}), \tag{9}$$
where the sub-matrices are of the form
$$H_l = \begin{bmatrix} 0 & h_i \\ h_i & 0 \end{bmatrix}, \quad i \in \{ i \mid i < \pi(i) \}, \tag{10}$$
$$\bar{H} = \mathrm{diag}(\ldots, h_i, \ldots), \quad i \in \{ i \mid i = \pi(i) \}. \tag{11}$$
Then the characteristic polynomial of the matrix $H$ can be calculated as
$$\det[H - \xi I] = \det[\tilde{H} - \xi I] = \prod_{i \in \{ i \mid i < \pi(i) \}} \left( \xi^2 - h_i^2 \right) \prod_{i \in \{ i \mid i = \pi(i) \}} \left( \xi - h_i \right), \tag{12}$$
which yields the stated eigenvalues of $H$. □
Theorem 1.
Let $\bar{U}$ be an $n \times K$ matrix whose columns are eigenvectors of $R$ taken in an arbitrary order without duplication. Then the fixed points of (6) are given by $W = \bar{U} D^{-1/2}$.
Proof. 
This proof centers on $W$, so we start with the singular value decomposition (SVD) of $W$,
$$W = P \Sigma Q^T, \tag{13}$$
where $P$ is an $N \times K$ matrix consisting of orthonormal columns, $\Sigma$ is a $K \times K$ diagonal matrix with positive entries $\sigma_i$, $i \in \{1, 2, \ldots, K\}$, and $Q$ is an orthonormal matrix of size $K \times K$.
Setting the right-hand side of the continuous-time version of the updating rule (5) to zero and substituting the SVD gives
$$0 = \frac{dW}{dt} = W W^T R W D - R W \left[ 2 W^T W D - I_r \right] = P \Sigma Q^T (P \Sigma Q^T)^T R P \Sigma Q^T D - R P \Sigma Q^T \left[ 2 (P \Sigma Q^T)^T P \Sigma Q^T D - I_r \right] = P \Sigma^2 P^T R P \Sigma Q^T D - 2 R P \Sigma^3 Q^T D + R P \Sigma Q^T. \tag{14}$$
Pre-multiplying the above equation by $P^T$ and post-multiplying it by $D^{-1} Q$ yields
$$\Sigma^2 X^T \Lambda X \Sigma - 2 X^T \Lambda X \Sigma^3 + X^T \Lambda X \Sigma Q^T D^{-1} Q = 0, \tag{15}$$
where $X = U^T P$. Since the correlation matrix is assumed to have distinct positive eigenvalues, $X^T \Lambda X$ is a non-singular matrix, which we denote by $A = X^T \Lambda X$. Then
$$A^{-1} \Sigma^2 A \Sigma = 2 \Sigma^3 - \Sigma Q^T D^{-1} Q. \tag{16}$$
The right-hand side of Equation (16) is obviously symmetric, and hence
$$A^{-1} \Sigma^2 A \Sigma = \Sigma A \Sigma^2 A^{-1}, \tag{17}$$
which yields
$$\Sigma^2 A \Sigma A = A \Sigma A \Sigma^2. \tag{18}$$
Let $b_{ij} = [A \Sigma A]_{ij}$; then the entries of this equation satisfy $\sigma_i^2 b_{ij} = b_{ij} \sigma_j^2$. Since $\sigma_i \neq \sigma_j$ when $i \neq j$, we have $b_{ij} = 0$ for $i \neq j$, and therefore $A \Sigma A$ is a diagonal matrix. According to Lemma 1 (and since $A = X^T \Lambda X$ is positive definite, which excludes the off-diagonal pattern allowed by the lemma), $A$ must be a diagonal matrix. Meanwhile, $X$ is a rectangular permutation matrix. Then we have $A^{-1} \Sigma^2 A \Sigma = \Sigma^3$, which implies $Q^T D^{-1} Q = \Sigma^2$ and $Q^T D^{-1/2} Q = \Sigma$. Since $Q$ is orthonormal and $Q^T D^{-1} Q$ is diagonal with distinct diagonal entries, $Q$ must be a permutation matrix. As a result,
$$W = P \Sigma Q^T = (U X) Q^T D^{-1/2} = \bar{U} D^{-1/2}, \tag{19}$$
where $\bar{U} = U (X Q^T)$, and $X Q^T$ is also a permutation matrix. □
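Theorem 1 can be spot-checked numerically. The following sketch (our own illustration, assuming NumPy) builds a random symmetric positive definite $R$, forms $W = \bar{U} D^{-1/2}$ from an arbitrary selection of its eigenvectors, and verifies that the right-hand side of the ODE (6) vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 6, 3
A = rng.standard_normal((n, n))
R = A @ A.T + n * np.eye(n)              # symmetric positive definite test matrix
eigvals, U = np.linalg.eigh(R)

D = np.diag([3.0, 2.0, 1.0])
U_bar = U[:, [4, 0, 2]]                   # any K eigenvectors, arbitrary order, no duplication
W = U_bar @ np.diag(np.diag(D) ** -0.5)   # candidate fixed point W = U_bar D^(-1/2)

RW = R @ W
dW = W @ (W.T @ RW) @ D - RW @ (2.0 * (W.T @ W) @ D - np.eye(K))
print(np.max(np.abs(dW)))                 # numerically zero, up to round-off
```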

4.2. Stability Analysis of the Proposed Algorithm

Next, we investigate the stability of the fixed points.
Theorem 2.
The algorithm (5) is stable if and only if $W$ is at the fixed point $U' D^{-1/2}$, where $U'$ is the $n \times K$ matrix whose columns are the $K$ MCs in increasing order of the eigenvalues.
Proof. 
Let $Z = U^T W$, so that $W = U Z$. Substituting this into (6) gives
$$U \frac{dZ}{dt} = U Z (U Z)^T R (U Z) D - R (U Z) \left[ 2 (U Z)^T (U Z) D - I_r \right]. \tag{20}$$
Pre-multiplying by $U^T$ then gives
$$\frac{dZ}{dt} = Z Z^T \Lambda Z D - 2 \Lambda Z Z^T Z D + \Lambda Z, \tag{21}$$
where $\Lambda = \mathrm{diag}(\lambda_{\pi(1)}, \lambda_{\pi(2)}, \ldots, \lambda_{\pi(N)})$ for some permutation $\pi$. Since the matrix $U'$ in the fixed point $\bar{W} = U' D^{-1/2}$ consists of only some of the columns of $U$, $\bar{Z} = U^T \bar{W}$ is still a fixed point; that is, $dZ/dt\,|_{Z = \bar{Z}} = 0$.
We consider the ordinary differential equation (ODE) of a perturbation $E(t)$ at the fixed point as follows:
$$\frac{dE(t)}{dt} = \left. \frac{dZ}{dt} \right|_{Z = \bar{Z} + E(t)} = (\bar{Z} + E)(\bar{Z} + E)^T \Lambda (\bar{Z} + E) D - 2 \Lambda (\bar{Z} + E)(\bar{Z} + E)^T (\bar{Z} + E) D + \Lambda (\bar{Z} + E) = \left( \bar{Z} \bar{Z}^T \Lambda \bar{Z} + E \bar{Z}^T \Lambda \bar{Z} + \bar{Z} E^T \Lambda \bar{Z} + \bar{Z} \bar{Z}^T \Lambda E + o(E^2) \right) D - 2 \Lambda \left( \bar{Z} \bar{Z}^T \bar{Z} + E \bar{Z}^T \bar{Z} + \bar{Z} E^T \bar{Z} + \bar{Z} \bar{Z}^T E + o(E^2) \right) D + \Lambda \bar{Z} + \Lambda E. \tag{22}$$
We assume that the perturbation $E$ is sufficiently small; therefore, all the entries of the matrix $o(E^2)$ are negligible and are omitted in what follows.
Let $e_{ij} = [E]_{ij}$. Then, for $0 < j \le K < i \le N$, the ODE reads
$$\frac{de_{ij}}{dt} = -\left( \lambda_{\pi(i)} - \lambda_{\pi(j)} \right) e_{ij}, \tag{23}$$
which indicates that $e_{ij} \to 0$ if and only if $\lambda_{\pi(i)} > \lambda_{\pi(j)}$.
For $0 < i, j \le K$, we have
$$\frac{de_{ij}}{dt} = \left( \lambda_{\pi(j)} - \lambda_{\pi(i)} - \lambda_{\pi(i)} d_i^{-1} d_j \right) e_{ij} + \left( \lambda_{\pi(j)} - 2 \lambda_{\pi(i)} \right) d_i^{-1/2} d_j^{1/2} e_{ji}. \tag{24}$$
We now discuss this case in more detail.
When $i = j$, Equation (24) reduces to
$$\frac{de_{ii}}{dt} = -2 \lambda_{\pi(i)} e_{ii}, \tag{25}$$
which means that $e_{ii} \to 0$, since $\lambda_{\pi(i)} > 0$.
When $i \neq j$, we examine the dynamics of $e_{ij}$ through the second-order ODE obtained by differentiating (24) and eliminating $e_{ji}$:
$$\frac{d^2 e_{ij}}{dt^2} = \left( \lambda_{\pi(j)} - \lambda_{\pi(i)} - \lambda_{\pi(i)} d_i^{-1} d_j \right) \frac{de_{ij}}{dt} + \left( \lambda_{\pi(j)} - 2 \lambda_{\pi(i)} \right) d_i^{-1/2} d_j^{1/2} \frac{de_{ji}}{dt},$$
$$\frac{de_{ji}}{dt} = \left( \lambda_{\pi(i)} - \lambda_{\pi(j)} - \lambda_{\pi(j)} d_j^{-1} d_i \right) e_{ji} + \left( \lambda_{\pi(i)} - 2 \lambda_{\pi(j)} \right) d_j^{-1/2} d_i^{1/2} e_{ij},$$
$$e_{ji} = \left[ \left( \lambda_{\pi(j)} - 2 \lambda_{\pi(i)} \right) d_i^{-1/2} d_j^{1/2} \right]^{-1} \left[ \frac{de_{ij}}{dt} - \left( \lambda_{\pi(j)} - \lambda_{\pi(i)} - \lambda_{\pi(i)} d_i^{-1} d_j \right) e_{ij} \right]. \tag{26}$$
Then we have
$$\frac{d^2 e_{ij}}{dt^2} + (\alpha + \beta) \frac{de_{ij}}{dt} + \alpha \beta\, e_{ij} = 0, \tag{27}$$
where
$$\alpha + \beta = \lambda_{\pi(j)} d_j^{-1} d_i + \lambda_{\pi(i)} d_i^{-1} d_j, \tag{28}$$
and
$$\alpha \beta = \left( \lambda_{\pi(i)} - \lambda_{\pi(j)} \right) \left( \lambda_{\pi(i)} d_i^{-1} + \lambda_{\pi(j)} d_j^{-1} \right) \left( d_j - d_i \right). \tag{29}$$
In Equations (28) and (29), $\alpha$ and $\beta$ are defined through the factorization of the characteristic polynomial of (27) as $(s + \alpha)(s + \beta) = 0$, so that $-\alpha$ and $-\beta$ are its roots. The solution $e_{ij}$ asymptotically converges to zero provided that all roots of the characteristic equation have negative real parts, which is equivalent to requiring that both $\alpha + \beta$ and $\alpha \beta$ be positive. Since all $d_i$ and $\lambda_i$ are positive, the sum in (28) is always positive, that is, $\alpha + \beta > 0$. Supposing that $i > j$, we have $d_j > d_i$, so $\alpha \beta > 0$ if and only if $\lambda_{\pi(i)} > \lambda_{\pi(j)}$; the conclusion is reversed when $j > i$. In summary, all $e_{ij}$ asymptotically converge to zero only if $\pi(i) = i$ for all $i$. □

5. Numerical Example

5.1. Transient Behavior

The first simulation illustrates the transient behavior of the learning of the minor eigenvalues and the corresponding eigenvectors. We consider two examples, one with an arbitrary covariance matrix and one with an exactly specified covariance matrix. Moreover, the first example has two distinct minor eigenvalues that are smaller than one, whereas the second example has two distinct minor eigenvalues that are larger than one. To demonstrate the effectiveness of the proposed algorithm, its performance is evaluated at each iteration from four perspectives: the direction cosine, the norm of the estimated minor eigenvectors, the estimated minor eigenvalues, and the orthogonality to the principal eigenvectors. For an intuitive view of accuracy and convergence speed, the norm of the estimated eigenvectors and the direction cosine are defined as follows. The norm of the $i$th estimated eigenvector is
$$\mathrm{Norm}_i(k) = \left\| w_i(k) \right\|, \tag{30}$$
where $w_i(k)$ represents the $i$th column of the matrix $W(k)$, $k$ denotes the current iteration, and $i \in \{1, 2, \ldots, K\}$. The direction cosine is
$$\mathrm{DirectionCosine}(i, k) = \frac{\left| w_i^T(k)\, u_i \right|}{\left\| w_i(k) \right\| \cdot \left\| u_i \right\|}, \tag{31}$$
where $w_i$ and $k$ have the same meaning as in the norm, and $u_i$ denotes the actual $i$th minor eigenvector calculated by an offline method. Clearly, whenever $w_i$ and $u_i$ share the same direction, the direction cosine is exactly equal to one.
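These two indices are straightforward to compute; the sketch below (our own helper, not from the paper) evaluates them column by column for a current estimate $W(k)$ against the offline eigenvectors.

```python
import numpy as np

def norm_and_direction_cosine(W, U_true):
    """Column-wise norm of W and direction cosine between W and the true eigenvectors U_true."""
    norms = np.linalg.norm(W, axis=0)
    dots = np.abs(np.sum(W * U_true, axis=0))              # |w_i^T u_i| for each column i
    cosines = dots / (norms * np.linalg.norm(U_true, axis=0))
    return norms, cosines
```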
Example 1.
In the first example, the data were generated by the following model:
$$x(k) = 0.88\, x(k-1) + e(k), \tag{32}$$
where $e(k)$ is a Gaussian driving sequence with zero mean and unit variance. The values of $x$ are arranged in vectors of dimension six ($n = 6$). The first three ($r = 3$) MCs are extracted in parallel by the proposed weighted algorithm with a random $W(0)$ and $\eta = 0.02$, and the weighting matrix is set to $D = \mathrm{diag}\{3, 2, 1\}$. The averaged results of 100 independent runs are shown in Figure 1.
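For reference, a rough reproduction of this experiment is sketched below (our own simplification: the averaging over 100 runs is omitted, and the sample autocorrelation matrix is used in place of the instantaneous online estimate, so the run is a batch approximation of the stated setting).

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, eta, n_iter = 6, 3, 0.02, 3000
D = np.diag([3.0, 2.0, 1.0])

# AR(1) data x(k) = 0.88 x(k-1) + e(k), arranged into 6-dimensional vectors
e = rng.standard_normal(60000)
s = np.zeros_like(e)
for k in range(1, e.size):
    s[k] = 0.88 * s[k - 1] + e[k]
X = s.reshape(-1, n).T                     # columns are the input vectors x_k
R = X @ X.T / X.shape[1]                   # sample autocorrelation matrix

W = 0.1 * rng.standard_normal((n, r))      # random initial weight matrix W(0)
for _ in range(n_iter):
    RW = R @ W
    W = W + eta * (W @ (W.T @ RW) @ D - RW @ (2.0 * (W.T @ W) @ D - np.eye(r)))

eigvals, U = np.linalg.eigh(R)             # offline ground truth for comparison
print("minor eigenvalues:", eigvals[:r])
print("column norms of W:", np.linalg.norm(W, axis=0))   # approach d_i^(-1/2) at a fixed point
```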
Figure 1 clearly shows that multiple MCs are successfully extracted in parallel by the proposed algorithm. Both the estimated minor eigenvalues and the estimated minor eigenvectors asymptotically converge to the corresponding actual values. At the same time, the minor subspace spanned by the estimated minor eigenvectors approaches the orthogonal complement of the principal subspace.
Example 2.
In the second example, the data were generated with the exact autocorrelation matrix $R$,
$$R = \begin{bmatrix} 3.6963 & 1.2644 & 0.1562 & 0.1171 & -0.2446 & -0.9896 \\ 1.2644 & 3.9842 & 0.6116 & -0.3452 & 1.1380 & -0.4918 \\ 0.1562 & 0.6116 & 5.6049 & 0.1870 & -0.3049 & 0.6375 \\ 0.1171 & -0.3452 & 0.1870 & 4.8370 & 1.0312 & -1.3812 \\ -0.2446 & 1.1380 & -0.3049 & 1.0312 & 3.9352 & 1.1647 \\ -0.9896 & -0.4918 & 0.6375 & -1.3812 & 1.1647 & 3.7170 \end{bmatrix}, \tag{33}$$
whose eigenvalues are $1.0520$, $2.3346$, $5.0019$, $5.2511$, $6.0117$, and $6.1233$. The parameters are the same as in Example 1, and the results are shown in Figure 2.
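The listed eigenvalues, as well as the eigenvectors used as the ground truth $u_i$ in the direction cosine, can be reproduced directly from the matrix above; a quick check in Python (assuming NumPy):

```python
import numpy as np

R = np.array([
    [ 3.6963,  1.2644,  0.1562,  0.1171, -0.2446, -0.9896],
    [ 1.2644,  3.9842,  0.6116, -0.3452,  1.1380, -0.4918],
    [ 0.1562,  0.6116,  5.6049,  0.1870, -0.3049,  0.6375],
    [ 0.1171, -0.3452,  0.1870,  4.8370,  1.0312, -1.3812],
    [-0.2446,  1.1380, -0.3049,  1.0312,  3.9352,  1.1647],
    [-0.9896, -0.4918,  0.6375, -1.3812,  1.1647,  3.7170],
])
eigvals, U = np.linalg.eigh(R)   # ascending order; the first three are the minor eigenvalues
print(eigvals)                   # listed in the paper as 1.0520, 2.3346, 5.0019, 5.2511, 6.0117, 6.1233
```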

5.2. Comparison with the Douglas Algorithm

A comparison is made between the proposed weighted algorithm and the Douglas algorithm [23]. The data are generated in the same way as in Example 1 of Section 5.1. In both algorithms, the first three ($r = 3$) MCs are extracted in parallel with a random $W(0)$. The other initial parameters are given in Table 1, and the simulation results are shown in Figure 3 and Table 2.
As shown in Figure 3, the MCs are successfully extracted by both algorithms under the same initial conditions. However, the estimated eigenvectors of the proposed algorithm converge to the actual minor subspace faster than those of the Douglas algorithm. The computation times in Table 2 were obtained on a machine running MATLAB 2010b on Windows 11 with an Intel Core i5-1340 (1.9 GHz) processor and 16 GB of memory. Since the proposed algorithm converges faster, it needs fewer iterations than the Douglas algorithm, so its running time is correspondingly shorter. In summary, compared with the Douglas algorithm, the proposed algorithm has simpler parameter selection and a faster convergence speed.

6. Conclusions

In this paper, we have proposed a novel adaptive algorithm for extracting multiple MCs from signal vectors. Based on the Möller algorithm, we proposed the weighted Möller algorithm, which can extract multiple MCs simultaneously, and we established how the convergence result depends on the chosen weighting matrix and the statistics of the input signal. Compared with similar algorithms, no extra computation is required for orthonormalization operations. The simulation results show that the proposed algorithm converges faster than other similar algorithms.
An advantage of the proposed algorithm is that it places few restrictions on the values of the diagonal weighting matrix. However, the proposed algorithm, the Douglas algorithm, and other related algorithms are all based on the Hebbian neural network, and one common problem for such algorithms is the selection of the learning rate. For single MC extraction algorithms, this can be accomplished using the deterministic discrete-time (DDT) method. However, the DDT method is not suitable for subspace tracking algorithms and multiple MC extraction algorithms. Therefore, studying the range of learning-rate values that ensures the stability of the algorithm will be the main direction of our subsequent work.

Author Contributions

Conceptualization, Y.G.; methodology, Y.G. and H.D.; software, H.D. and H.L.; validation, Y.G.; formal analysis, Z.X.; investigation, H.D. and H.L.; resources, J.L.; data curation, Y.G.; writing—original draft preparation, Y.G.; writing—review and editing, Y.G.; visualization, S.Y.; supervision, Z.X.; project administration, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (62106242, 62101579).

Data Availability Statement

The data that support the findings of this study are available within the article.

Conflicts of Interest

Author Yingbin Gao was employed by the company The 54th Research Institute of China Electronics Technology Group Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Zhou, C.; Gu, Y.; Fan, X.; Shi, Z.; Mao, G.; Zhang, Y.D. Direction-of-Arrival Estimation for Coprime Array via Virtual Array Interpolation. IEEE Trans. Signal Process. 2018, 66, 5956–5971.
2. Mahouti, P.; Belen, A.; Tari, O.; Belen, M.A.; Karahan, S.; Koziel, S. Data-Driven Surrogate-Assisted Optimization of Metamaterial-Based Filtenna Using Deep Learning. Electronics 2023, 12, 1584.
3. Baderia, K.; Kumar, A.; Agrawal, N.; Kumar, R. Minor Component Analysis Based Design of Low Pass and BandPass FIR Digital Filter Using Particle Swarm Optimization and Fractional Derivative. In Proceedings of the 2021 International Conference on Control, Automation, Power and Signal Processing (CAPS), Jabalpur, India, 10–12 December 2021; pp. 1–6.
4. Tuan, D.N.; Yamada, I. A unified convergence analysis of normalized PAST algorithms for estimating principal and minor components. Signal Process. 2013, 93, 176–184.
5. Xu, L.; Oja, E.; Suen, C.Y. Modified hebbian learning for curve and surface fitting. Neural Netw. 1992, 5, 441–457.
6. Huang, C.; Song, Y.; Ma, H.; Zhou, X.; Deng, W. A multiple level competitive swarm optimizer based on dual evaluation criteria and global optimization for large-scale optimization problem. Inf. Sci. 2025, 708, 122068.
7. Ma, Y.; Cheng, J. A novel joint denoising method for gear fault diagnosis with improved quaternion singular value decomposition. Measurement 2024, 226, 114165.
8. Yi, K.; Cai, C.; Tang, W.; Dai, X.; Wang, F.; Wen, F. A Rolling Bearing Fault Feature Extraction Algorithm Based on IPOA-VMD and MOMEDA. Sensors 2023, 23, 8620.
9. Chung, D.; Jeong, B. Analyzing Russia–Ukraine War Patterns Based on Lanchester Model Using SINDy Algorithm. Mathematics 2024, 12, 851.
10. Wang, Z.; Li, S.; Xuan, J.; Shi, T. Biologically Inspired Compound Defect Detection Using a Spiking Neural Network With Continuous Time–Frequency Gradients. Adv. Eng. Inform. 2025, 65, 103–132.
11. Zhang, X.; Li, Y.; Feng, X.; Hua, J.; Yue, D.; Wang, J. Application of Multiple-Optimization Filtering Algorithm in Remote Sensing Image Denoising. Sensors 2023, 23, 7813.
12. Giuliani, A.; Vici, A. On the (Apparently) Paradoxical Role of Noise in the Recognition of Signal Character of Minor Principal Components. Stats 2024, 7, 54–64.
13. Xuan, J.; Wang, Z.; Li, S.; Gao, A.; Wang, C.; Shi, T. Measuring compound defect of bearing by wavelet gradient integrated spiking neural network. Measurement 2023, 223, 10.
14. Ma, Q.; Sun, Y.; Wan, S.; Gu, Y.; Bai, Y.; Mu, J. An ENSO Prediction Model Based on Backtracking Multiple Initial Values: Ordinary Differential Equations–Memory Kernel Function. Remote Sens. 2023, 15, 3767.
15. Mathew, G.; Reddy, V. Orthogonal eigensubspace estimation using neural networks. IEEE Trans. Signal Process. 1994, 42, 1803–1811.
16. Rahmat, F.; Zulkafli, Z.; Ishak, A.J.; Abdulrahman, R.Z.; Stercke, S.D.; Buytaert, W.; Tahir, W.; Abrahman, J.; Ibrahim, S.; Ismail, M. Supervised feature selection using principal component analysis. Knowl. Inf. Syst. 2024, 66, 1955–1995.
17. Dai, H. Application of PCA Numalgorithm in Remote Sensing Image Processing. Mod. Electron. Technol. 2023, 7, 17–21.
18. Gao, Y. Adaptive Generalized Eigenvector Estimating Algorithm for Hermitian Matrix Pencil. IEEE/CAA J. Autom. Sin. 2022, 9, 1967–1979.
19. Cai, H.; Kaloorazi, M.F.; Chen, J.; Chen, W.; Richard, C. Online dominant generalized eigenvectors extraction via a randomized method. In Proceedings of the 28th European Signal Processing Conference, Amsterdam, The Netherlands, 18–21 January 2021.
20. Möller, R. A self-stabilizing learning rule for minor component analysis. Int. J. Neural Syst. 2004, 14, 1–8.
21. Peng, D.; Zhang, Y.; Xiang, Y.; Zhang, H. A globally convergent MC algorithm with an adaptive learning rate. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 359–365.
22. Tanaka, T. Generalized weighted rules for principal components tracking. IEEE Trans. Signal Process. 2005, 53, 1243–1253.
23. Jankovic, M.V.; Reljin, B. A new minor component analysis method based on Douglas-Kung-Amari minor subspace analysis method. IEEE Signal Process. Lett. 2005, 12, 859–862.
24. Ouyang, S.; Bao, Z. Fast Principal Component Extraction by a Weighted Information Criterion. IEEE Trans. Signal Process. 2002, 50, 1994–2002.
25. Du, B.; Kong, X.; Feng, X. Generalized principal component analysis-based subspace decomposition of fault deviations and its application to fault reconstruction. IEEE Access 2020, 8, 34177–34186.
26. Kong, X.; Hu, C.; Duan, Z. Principal Component Analysis Networks and Algorithms; Springer: Beijing, China, 2017.
27. Tan, K.K.; Lv, J.; Zhang, Y.; Huang, S. Adaptive multiple minor directions extraction in parallel using a PCA neural network. Theor. Comput. Sci. 2010, 411, 4200–4215.
28. Jou, Y.-D.; Chen, F.-K. Design of equiripple FIR digital differentiators using neural weighted least-squares algorithm. In Proceedings of the 2011 8th International Conference on Information, Communications & Signal Processing, Singapore, 13–16 December 2011; pp. 1–5.
29. Hasan, M.A. Diagonally weighted and shifted criteria for minor and principal component extraction. In Proceedings of the IEEE International Joint Conference on Neural Networks, IJCNN ‘05, Montreal, QC, Canada, 31 July–4 August 2005; Volume 1252, pp. 1251–1256.
30. Jou, Y.-D.; Chen, F.-K.; Sun, C.-M. Neural weighted least-squares design of FIR higher-order digital differentiators. In Proceedings of the 2009 16th International Conference on Digital Signal Processing, Santorini, Greece, 5–7 July 2009; pp. 1–5.
31. Du, K.-L.; Swamy, M.N. Neural Networks and Statistical Learning, 1st ed.; Springer: London, UK, 2019.
32. Qiu, J.; Wang, H.; Lu, J.; Zhang, B. Neural network implementations for PCA and its extensions. ISRN Artif. Intell. 2012, 2012, 847305.
Figure 1. Simulation results for the transient behavior of the proposed algorithm for Example 1. (a) Direction cosine curves. (b) Norm of the estimated minor eigenvectors. (c) Estimated minor eigenvalues. (d) Orthogonality of the estimated MCs.
Figure 2. Simulation results for the transient behavior of the proposed algorithm for Example 2. (a) Direction cosine curves. (b) Norm of the estimated minor eigenvectors. (c) Estimated minor eigenvalues. (d) Orthogonality of the estimated MCs.
Figure 3. Simulation results for comparison between the proposed algorithm and the Douglas algorithm. (a) Direction cosine curves. (b) Norm of the estimated minor eigenvectors.
Table 1. Initial parameters of the two algorithms.

                     Proposed Algorithm     Douglas’s Algorithm
Learning rate        η = 0.02               η = 0.02
Other parameter      D = diag(3, 2, 1)      α = 0.1
Table 2. Computation times of the two algorithms.

Method         Proposed Algorithm     Douglas’s Algorithm
Time (ms)      2.05                   10.55