1. Introduction
Passive localization technology refers to techniques that determine the location of an emission source using only the electromagnetic information received by an observation platform, without emitting electromagnetic signals [1,2,3]. Traditional passive localization methods typically use a two-step approach to estimate the source location. First, mathematical models are used to estimate localization parameters such as phase difference, time difference of arrival, angle of arrival, and Doppler frequency. These parameters are then associated, and equations are solved to determine the source's position [4,5,6,7,8,9]. However, these methods suffer from poor robustness in low-signal-to-noise-ratio (SNR) environments and from difficult parameter pairing in multi-emitter scenarios. To address these issues, direct localization methods, which estimate the source location directly from the received signal, have been proposed [10,11,12,13,14]. Compared with multi-station direct localization algorithms [15,16,17], motion-based single-station direct localization algorithms do not require time-frequency synchronization between stations or dedicated data links, thereby reducing system complexity. Moreover, the motion platform can form arbitrary observation geometries, greatly enhancing system flexibility. This paper therefore focuses on motion-based single-station direct localization algorithms.
Weiss and Amar first introduced the concept of direct localization [12], processing the multi-observation data model with the maximum likelihood method. However, because the maximum likelihood method requires a multidimensional search over a nonlinear cost function, it significantly increases the algorithm's complexity. In 2008, Demissie et al. used the MUSIC algorithm to fuse observation data from multiple time slots to determine the target's position [18] and provided the Cramér–Rao Lower Bound (CRLB) for direct localization algorithms. In 2016, Weiss et al. proposed a direct localization algorithm based on the MVDR method [19]. Unlike subspace-based algorithms, this method does not require prior estimation of the number of sources, avoiding the performance degradation caused by an incorrect source-number estimate; however, it is slightly less accurate than subspace-based algorithms at high SNR. Modern communication systems contain a large number of non-circular signals, whose non-zero unconjugated covariance matrix can both enhance positioning accuracy and extend the array aperture. In 2017, Yin et al. [20] incorporated the non-circular signal model into the direct localization framework, effectively extending the array aperture using non-circular signals; however, this algorithm may fail when circular signals are also present among the incident signals. Direct localization has also been studied in other scenarios, such as sources with intermittent emissions [21] and algorithms exploiting Doppler information [22].
However, the methods mentioned above are limited to uniform arrays and offer only limited improvements in an array's degrees of freedom and observation accuracy. Sparse arrays are artificially designed non-uniform arrays with high direction-finding accuracy and high degrees of freedom; they mainly include coprime arrays [23] and nested arrays [24]. Through extensive research, sparse arrays have evolved into configurations such as generalized coprime arrays [25], super nested arrays [26], and augmented coprime arrays [27], further improving on the original designs. To further enhance the degrees of freedom and localization accuracy of a single-station observation system, reference [28] integrated a coprime array into the motion platform. That system vectorizes the covariance matrix of the received signals and constructs a virtual difference coarray, partially filling the gaps in the original array and thereby improving localization accuracy and degrees of freedom. However, such algorithms use only the largest continuous portion of the coarray and discard the rest, which reduces the multi-source observation capability and localization accuracy. Compared with traditional direct localization algorithms, sparse-reconstruction-based algorithms can fully exploit the information received by the array, offering advantages in degrees of freedom, resolution, and localization accuracy [29].
However, grid-based sparse algorithms built on norm-minimization processing [30,31,32,33,34,35], while performing well in low-SNR and coherent-signal environments [36], suffer from a key limitation: the sparse solution is constrained to a predefined grid, which can mismatch the true solution and often yields local optima [37]. In real-world scenarios, the positions of radiation sources are unknown, and a pre-set discrete grid cannot guarantee that the true positions fall on it. Two families of algorithms have therefore been developed to address grid mismatch: off-grid algorithms [38,39,40] and gridless algorithms [41,42,43,44]. A prominent off-grid algorithm is Off-Grid Sparse Bayesian Learning (OGSBL) [45], which performs a first-order Taylor expansion of the array's true steering vector around the nearest grid point. By adding the first-order Taylor term and an error-weighting coefficient to the nominal steering vector of the predefined grid, the influence of the grid is reduced. While this improves estimation accuracy, the large number of parameters and the complex iterative process increase the computational cost. Moreover, when quantization errors are large, a first-order approximation alone cannot adequately fit the steering vector, causing the algorithm to fail [46]. For direct localization, covariance reconstruction is required at every observation position, so off-grid methods increase the computational complexity considerably [47]. In recent years, gridless methods based on covariance fitting and atomic norm minimization [41,42,43,44] have effectively addressed the grid-mismatch problem. Tang et al. [41,42] proposed gridless Direction-of-Arrival (DOA) estimation methods using the atomic norm, including approaches based on Semi-Definite Programming (SDP) and atomic-norm soft thresholding. Mishra introduced an atomic-norm method incorporating prior knowledge of the signal [48], and Zhou et al. [43] proposed using the atomic norm to complete coarrays for DOA estimation. Wu et al. [49] further applied low-rank matrix reconstruction to achieve gridless parameter estimation for multiple signals.
This paper, based on the coprime array model, proposes a motion-based single-station direct position determination (DPD) algorithm capable of localizing both circular and non-circular signals. Building on the fundamental idea of covariance data fusion, the algorithm employs a gridless method to fill the gaps in both the sum and difference coarrays. Then, based on the reconstructed equivalent array, it separates different signals by leveraging differences in their degree of non-circularity. To the best of our knowledge, most existing algorithms focus primarily on circular signals; this algorithm instead expands the aperture of the sparse array by exploiting the non-circular components of the received signals. By employing an improved Subspace Data Fusion (SDF) algorithm, it achieves high-precision localization for multiple signal sources. Additionally, the algorithm reduces computational complexity by using a unitary transformation to shift operations from the complex domain to the real domain.
The contributions of this study are as follows:
(1) By leveraging the fact that the unconjugated covariance matrix of the non-circular component of the signal is non-zero, the sum coarray is constructed. This enhances the array's degrees of freedom and localization accuracy compared with single-station DPD algorithms that integrate only the difference coarray.
(2) The virtual interpolated array technique is employed to fully utilize all actual observations, filling the discontinuities in the virtual sum and difference coarray, thus increasing the information utilization efficiency of the virtual array.
(3) Based on the recovered virtual interpolated array, the direct localization cost function for mixed signals is derived. The cost function is further improved using a unitary transformation, converting operations from the complex domain to the real domain, which effectively reduces the computational complexity of the algorithm.
The remainder of this paper is organized as follows. Section 2 introduces the DPD localization model. Section 3 first integrates the sparse array into the DPD motion platform and then constructs the models for the sum coarray and difference coarray of the sparse array; subsequently, a gridless method is used to fill the gaps in the virtual array, and the cost function for mixed circular and non-circular signals is derived. Section 4 presents the results of numerical simulations. Section 5 further discusses the significance and potential of the algorithm through simulation analysis. Finally, Section 6 provides a summary of the paper.
Notation: (·)^T denotes the transpose; (·)^* denotes the conjugate; (·)^H denotes the conjugate transpose; E[·] represents the expectation; ⊙ denotes the Khatri–Rao product; ⊗ denotes the Kronecker product; ∘ denotes the Hadamard product; |S| represents the cardinality of set S; [·]_i denotes the i-th element of a vector; ‖·‖_F represents the Frobenius norm; vec(·) denotes the vectorization operation, which stacks the columns of a matrix into a column vector; tr(·) denotes the trace of a matrix; and I represents the identity matrix.
3. Proposed Algorithm
The platform integrates a coprime sensor array consisting of two subarrays, as shown in Figure 2. Subarray one consists of N sensors with an inter-element spacing of Md, and subarray two consists of M sensors with an inter-element spacing of Nd, where d is the unit spacing and M and N are coprime integers. The subarrays are arranged along a straight line and share the same reference element, so the array has a total of M + N − 1 sensors. The positions of the sensors can be expressed as
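As a concrete numerical illustration (not part of the paper's method; the function name and the example values M = 3, N = 5 are our own), the prototype coprime geometry described above can be generated as follows:

```python
def coprime_positions(M, N):
    """Sensor positions (in units of the spacing d) of a prototype coprime
    array: N sensors spaced M*d, M sensors spaced N*d, sharing the origin."""
    sub1 = {M * n for n in range(N)}   # subarray one
    sub2 = {N * m for m in range(M)}   # subarray two
    return sorted(sub1 | sub2)

pos = coprime_positions(3, 5)
print(pos)        # the shared reference element gives M + N - 1 sensors
print(len(pos))
```

Because M and N are coprime, the two subarrays overlap only at the reference element, so the union always contains M + N − 1 distinct positions.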
Circularity and non-circularity [50] are important properties of random signals. Modern communication systems use a large number of non-circular signals, such as BPSK, UQPSK, MSK, and other modulated signals. Since the unconjugated covariance matrix of a non-circular signal is non-zero, it is possible to enhance the degrees of freedom of the array by extending the received signal with its conjugate. It is important to note that, in array signal processing, the signal does not need to satisfy strict non-circularity; pseudo-non-circularity is sufficient. The pseudo-non-circularity condition is as follows [51]:
As seen from Equation (13), the unconjugated covariance of non-circular signals is non-zero. Most existing localization algorithms utilize only the covariance information of the signal while ignoring the unconjugated covariance, which leaves room for improvement in the array's degrees of freedom and accuracy. For non-circular signals, the following condition is satisfied [51]:
where φ represents the non-circular phase and ρ denotes the non-circularity rate, with ρ ∈ (0, 1]. For Maximal Non-circularity Rated Signals, ρ = 1, and for Common Non-circularity Rated Signals, 0 < ρ < 1.
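The circularity properties above can be checked numerically. The sketch below (our own illustration, assuming standard unit-power constellations) estimates the ordinary and unconjugated covariances of BPSK and QPSK symbol streams; BPSK exhibits a non-circularity rate near 1, while QPSK is circular with a rate near 0:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 100_000  # number of symbols

# BPSK: real +/-1 symbols, so the unconjugated covariance E[s^2] equals 1
bpsk = rng.choice([-1.0, 1.0], K).astype(complex)
# QPSK: unit-power complex symbols, so E[s^2] is (asymptotically) zero
qpsk = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), K) / np.sqrt(2)

for name, s in [("BPSK", bpsk), ("QPSK", qpsk)]:
    cov = np.mean(np.abs(s) ** 2)    # ordinary covariance E[|s|^2]
    pseudo = np.mean(s ** 2)         # unconjugated (pseudo-)covariance E[s s]
    rho = abs(pseudo) / cov          # empirical non-circularity rate
    print(f"{name}: E[|s|^2]={cov:.3f}, |E[s^2]|={abs(pseudo):.3f}, rho={rho:.3f}")
```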
For Q uncorrelated sources, the unconjugated covariance matrix of the signal is given by
where one diagonal matrix is composed of the non-circularity rates of the signals (reducing to the identity matrix for Maximal Non-circularity Rated Signals) and the other is composed of their non-circular phases. Based on Equations (2) and (15), the expression for the unconjugated covariance matrix of the received signal can be derived. In practical applications, the covariance and unconjugated covariance matrices of each batch of received signals are replaced by their sample estimates, which are given by
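Assuming the standard sample estimators R = XX^H/K and R' = XX^T/K for a J × K snapshot matrix X (the displayed equation is not reproduced here, so this is a hedged sketch rather than the paper's exact formula), the estimation can be written as:

```python
import numpy as np

def sample_covariances(X):
    """Sample covariance and unconjugated (pseudo-)covariance of a
    J x K snapshot matrix X: R = X X^H / K and R' = X X^T / K."""
    K = X.shape[1]
    R = X @ X.conj().T / K
    Rp = X @ X.T / K
    return R, Rp

# sanity check: one noise-free BPSK source on a 3-element array,
# with a hypothetical steering vector a
rng = np.random.default_rng(1)
a = np.exp(1j * np.pi * np.arange(3) * 0.5)
s = rng.choice([-1.0, 1.0], 5000)
X = np.outer(a, s)
R, Rp = sample_covariances(X)
# for noise-free BPSK: R = a a^H and R' = a a^T exactly, since s^2 = 1
print(np.allclose(R, np.outer(a, a.conj())), np.allclose(Rp, np.outer(a, a)))
```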
By performing conjugate augmentation on the received signal vector, we obtain
The covariance matrix of the extended signal vector for the l-th batch is
Vectorizing the covariance matrix in Equation (19), we obtain

where ⊙ denotes the Khatri–Rao product and the indicator vector involved has its first element equal to 1 and its remaining elements equal to 0.
Similarly, vectorizing the unconjugated covariance matrix in Equation (19), we obtain
Thus, from Equations (20) and (21), the virtual difference coarray and sum coarray received signals can be derived.
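A quick sketch (our own, using the M = 3, N = 5 coprime positions as an example) makes the gap structure of the two coarrays explicit:

```python
import numpy as np

# physical coprime array (M = 3, N = 5), positions in units of d
pos = np.array([0, 3, 5, 6, 9, 10, 12])

diff = sorted({int(p - q) for p in pos for q in pos})   # difference coarray
summ = sorted({int(p + q) for p in pos for q in pos})   # sum coarray

# lags missing from the contiguous range of each coarray
holes_dca = sorted(set(range(min(diff), max(diff) + 1)) - set(diff))
holes_sca = sorted(set(range(min(summ), max(summ) + 1)) - set(summ))
print("DCA holes:", holes_dca)
print("SCA holes:", holes_sca)
```

Traditional methods would discard the virtual elements separated by these holes; the interpolation described next fills them instead.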
However, the virtual array constructed in this way contains gaps. Traditional methods can utilize only the largest continuous portion of the virtual array, discarding the non-contiguous parts and thus losing information. If the gaps between the non-contiguous elements are filled, all of the element information can be utilized. Let

where the first pair of quantities represents the virtual-array received signals of the difference coarray, and the second pair represents those of the sum coarray. This approach maximizes the utilization of the virtual elements of the sum and difference coarrays. The corresponding interpolation process is illustrated in Figure 3.
Note: The generally defined sum coarray, , is , but corresponds to , and the information it contains is identical to that of , which corresponds to . Therefore, unless otherwise specified, the sum coarray in this paper refers to .
3.1. Gridless Recovery Based on Array Interpolation
Although the sum coarray (SCA) and difference coarray (DCA) contain gaps, these gaps can be filled using array interpolation; the corresponding schematic is shown in Figure 3. The work in [43] uses subarray-division techniques to relate the virtual received signals of the difference coarray to the covariance matrix of an equivalent array's received signals. Since the covariance matrix of the ideal signal has a Toeplitz structure, we use Equation (22) to construct the covariance matrix of the equivalent array directly:
We can thus formulate the following atomic norm minimization (ANM) problem:

where T(u) represents a Hermitian positive semi-definite Toeplitz matrix generated from its first column u, and a(·) represents the steering vector of the equivalent array. A selection matrix ensures that the reconstruction matches the zero (unobserved) elements; since the observed vector already contains all of the collected information, the interpolation does not extend the available information.
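The Toeplitz model underlying T(u) can be verified numerically: for an ideal (noise-free, uncorrelated-source) ULA covariance, every entry depends only on the lag i − j. The sketch below is our own illustration with arbitrary angles and powers:

```python
import numpy as np

J, Q = 8, 3                                   # equivalent-ULA size, sources
theta = np.deg2rad([-20.0, 5.0, 40.0])        # illustrative DOAs
p = np.array([1.0, 0.7, 1.3])                 # illustrative source powers
A = np.exp(1j * np.pi * np.outer(np.arange(J), np.sin(theta)))
R = A @ np.diag(p) @ A.conj().T               # ideal covariance, no noise

# Hermitian Toeplitz check: entry (i, j) depends only on the lag i - j,
# so R should equal T(u) with u the first column of R
first_col = R[:, 0]
T = np.empty_like(R)
for i in range(J):
    for j in range(J):
        T[i, j] = first_col[i - j] if i >= j else first_col[j - i].conj()
print(np.allclose(R, T))   # the ideal covariance is Toeplitz
```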
Because Equation (25) is difficult to solve directly, it is convexly relaxed using the atomic norm.
24), we can construct the relationship between the virtual elements of the sum coarray’s received signal and the unconjugated covariance of the equivalent array’s received signal. Since the unconjugated covariance matrix of the ideal signal satisfies a Hankel structure, we use Equation (
23) to construct the unconjugated covariance matrix of the equivalent array directly:
where
. It is worth mentioning that after array interpolation, the number of equivalent elements in both the sum coarray and the difference coarray becomes the same, i.e.,
.
For the recovery of the unconjugated covariance, we employ low-rank structured covariance reconstruction (LRSCR), in which a Hankel matrix is generated from the elements of the recovered sum-coarray vector. Equation (29) can be convexly relaxed into the following nuclear norm minimization problem:
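Analogously to the Toeplitz case, the Hankel structure exploited by the LRSCR model can be checked numerically: for an ideal ULA, the unconjugated covariance A diag(c) A^T depends only on the index sum i + j. The values below are our own illustrative choices:

```python
import numpy as np

J, Q = 8, 3
theta = np.deg2rad([-20.0, 5.0, 40.0])         # illustrative DOAs
c = np.array([1.0, 0.8j, 0.5 - 0.5j])          # rho * p * exp(j*phi) terms
A = np.exp(1j * np.pi * np.outer(np.arange(J), np.sin(theta)))
Rp = A @ np.diag(c) @ A.T                      # ideal unconjugated covariance

# Hankel check: entry (i, j) depends only on i + j, so Rp should equal H(v)
# with v the anti-diagonal generator [Rp[0,0], ..., Rp[0,J-1], ..., Rp[J-1,J-1]]
v = np.concatenate([Rp[0, :], Rp[1:, -1]])
H = np.array([[v[i + j] for j in range(J)] for i in range(J)])
print(np.allclose(Rp, H))    # the ideal unconjugated covariance is Hankel
```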
3.2. Positioning Estimation for Circular and Non-Circular Signals
When Equations (26) and (30) are used to recover the covariance matrix and the unconjugated covariance matrix of the equivalent array, we obtain

Equation (31) can be rewritten as

For Maximal Non-circularity Rated Signals, Equation (32) can be written as
Typical covariance-based fusion methods include the MUSIC, Capon, and maximum likelihood methods. The latter two require a search over the non-circular phase [52], which significantly increases computational complexity, whereas the MUSIC algorithm can avoid this search through suitable matrix operations [50]. Therefore, this paper adopts the MUSIC-based SDF algorithm [18]. As seen from Equation (33), the MUSIC method can extract the signal's DOA information, and the eigenvalue decomposition is performed as follows:
According to [50], the spectral function of the MUSIC algorithm for Maximal Non-circularity Rated Signals can be expressed as
For circular signals and Common Non-circularity Rated Signals, suppose there are w Maximal Non-circularity Rated Signals and z mixed Common Non-circularity Rated and circular signals; the signal covariance matrices can then be expressed accordingly. Since the non-circularity rate of a Maximal Non-circularity Rated Signal is 1, the corresponding matrices can be written as
After some matrix transformations, Equation (32) can be rewritten as

Based on the form of Equation (39), the MUSIC algorithm can be used to extract the DOA information of the mixed Common Non-circularity Rated and circular signals; the corresponding DOA estimation spectral function is given in [53].
It is proven in [50] that

Thus, Equation (35) can be written as

From Equation (41), the extended steering vector is orthogonal to the noise subspace, so signals satisfying Equation (41) also satisfy Equation (35).
It is worth mentioning that when Equation (35) is used for estimation with few snapshots and a low SNR, spurious peaks may appear at the locations of circular and Common Non-circularity Rated Signals; this is beyond the scope of this paper (see [53] for details).
Since non-circular signals significantly extend the array's degrees of freedom, using Equation (35) for estimation increases the computational complexity. We therefore employ a unitary transformation that converts the computation from the complex domain to the real domain, thereby reducing the computational cost.
For even and odd matrix orders, the unitary matrix is defined as

where Π denotes the counter-diagonal (exchange) identity matrix.
By left-multiplying the extended steering vector, the noise subspace, and the data by the corresponding unitary matrices, Equation (35) becomes

According to [54], the transformed steering vector is a real vector, which completes the transformation from the complex domain to the real domain.
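A minimal sketch of this unitary transformation (our own construction of the standard sparse unitary matrix; the odd-order case adds a middle row containing √2): it maps any conjugate-symmetric vector to a real vector, which is what moves the spectral computation into the real domain.

```python
import numpy as np

def unitary_matrix(n):
    """Sparse unitary matrix U_n such that U_n^H b is real whenever b is
    conjugate-symmetric (i.e., flipping and conjugating b leaves it unchanged)."""
    k = n // 2
    I, Pi = np.eye(k), np.fliplr(np.eye(k))
    if n % 2 == 0:
        top = np.hstack([I, 1j * I])
        bot = np.hstack([Pi, -1j * Pi])
        return np.vstack([top, bot]) / np.sqrt(2)
    mid = np.zeros((1, n)); mid[0, k] = np.sqrt(2)
    top = np.hstack([I, np.zeros((k, 1)), 1j * I])
    bot = np.hstack([Pi, np.zeros((k, 1)), -1j * Pi])
    return np.vstack([top, mid, bot]) / np.sqrt(2)

n = 7
U = unitary_matrix(n)
print(np.allclose(U @ U.conj().T, np.eye(n)))        # U is unitary

# build a conjugate-symmetric test vector: second half = flipped conjugate
half = np.exp(1j * np.random.default_rng(4).uniform(0, np.pi, 3))
b = np.concatenate([half, [1.0], np.conj(half[::-1])])
print(np.allclose(np.imag(U.conj().T @ b), 0))       # U^H b is real
```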
By fusing the data from L time periods, a MUSIC-based direct localization spectral function can be constructed as follows:

The reduction in computational complexity due to the unitary transformation is reflected in the grid-search process. Without the unitary transformation, each point of the two-dimensional search requires a number of complex multiplications determined by the subspace projection; here, L is the number of observation batches, J is the number of usable elements of the equivalent interpolated array, and Q is the number of signal sources. When the unitary transformation is applied, the same projections require only real multiplications, so the transformation proportionally reduces the cost of the grid search.
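To make the fused spectral search concrete, the toy sketch below (entirely our own: one ideal source, three hypothetical observation positions, noise-free covariances, and a coarse grid) fuses the MUSIC null-spectra of the batches over a 2-D search area and recovers the source position:

```python
import numpy as np

J = 7                                   # elements of the equivalent ULA
d = 0.5                                 # spacing in wavelengths
src = np.array([15.0, 15.0])            # true source position (km)
obs = [np.array([x, 0.0]) for x in (0.0, 10.0, 20.0)]   # platform batches

def steer(p, o):
    """ULA steering vector at observation position o toward point p."""
    theta = np.arctan2(p[1] - o[1], p[0] - o[0])
    return np.exp(2j * np.pi * d * np.arange(J) * np.sin(theta))

# noise subspace at each batch, from the ideal single-source covariance
En = []
for o in obs:
    a = steer(src, o)
    R = np.outer(a, a.conj())
    _, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En.append(V[:, :-1])                # J - 1 noise eigenvectors

def cost(p):
    # SDF-style fusion: sum of MUSIC null-spectra over the batches
    return sum(np.linalg.norm(E.conj().T @ steer(p, o)) ** 2
               for E, o in zip(En, obs))

xs = np.linspace(10, 20, 41)
grid = [(x, y) for x in xs for y in xs]
best = min(grid, key=cost)
print(best)     # the fused cost is minimized at the true position
```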
Algorithm 1 summarizes the steps of the proposed algorithm.
Algorithm 1: An Enhanced Direct Position Determination of Mixed Circular and Non-Circular Sources Using a Moving Virtual Interpolation Array
4. Simulation Results
In this study, several numerical experiments were conducted to illustrate the localization effectiveness of the algorithm. All experiments used a mobile platform equipped with a coprime sensor array whose subarray sizes, total sensor count, and sensor positions were fixed throughout; without loss of generality, the signal carrier frequency and sensor spacing were likewise fixed. The proposed algorithm was compared with the uniform linear array (ULA-SDF) algorithm, the smoothed sum and difference coarray (SDCA-SDF) algorithm, the SSR algorithm from [55], and the nuclear norm minimization (NNM) algorithm proposed in [56]. The regularization parameters in the above algorithms were all set to 0.25, the grid spacing in the SSR algorithm was set to 1°, and the convex optimization problems were solved using the CVX toolbox in MATLAB R2023a.
4.1. Resolution
Figure 4 shows the resolution of each algorithm for closely spaced targets. The red portions of the figure indicate the platform's movement trajectory, and the black triangles mark the target locations, set at (15.0, 15.0) km and (15.5, 15.5) km. In the scenario depicted in Figure 4, the number of snapshots is fixed, and the search area was set to (0 km, 30 km) × (0 km, 30 km) with a grid-search density of 500 × 500.
From Figure 5, it can be observed that the ULA- and SDCA-based algorithms show poorer resolution, whereas the sparse-recovery-based algorithms successfully resolve the closely spaced sources of Figure 4. Among these, the SSR algorithm and the proposed algorithm identify the targets best.
From Figure 6, it can be seen that when the SNR drops to 5 dB, only the proposed algorithm maintains good resolution. This is because the NNM algorithm does not account for the effects of noise, and the SSR algorithm, being grid-based, suffers energy aliasing between closely spaced grid points at lower SNR levels.
4.2. Localization Accuracy
This subsection shows the root mean square error (RMSE) of the experiments as a function of the SNR and the number of snapshots. The RMSE is defined in Equation (49), in which the first quantity is the number of Monte Carlo trials and the second is the number of actual signal sources. In this experiment, there are seven signal sources, and the number of Monte Carlo trials is set to 500.
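A hedged sketch of the RMSE computation as we read Equation (49) (averaging squared position errors over Monte Carlo trials and sources; the array shapes are our own convention):

```python
import numpy as np

def rmse(est, true):
    """RMSE over Monte Carlo trials: est has shape (n_mc, Q, 2) of
    estimated (x, y) positions, true has shape (Q, 2) of true positions."""
    err2 = np.sum((est - true[None, :, :]) ** 2, axis=2)  # squared distances
    return np.sqrt(np.mean(err2))

true = np.array([[6.0, 6.0], [16.0, 5.0]])
est = true[None] + np.array([[[0.1, 0.0], [0.0, -0.1]],
                             [[-0.1, 0.0], [0.0, 0.1]]])
print(rmse(est, true))   # every error has magnitude 0.1, so RMSE ≈ 0.1
```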
Figure 7 shows the target and trajectory settings for the experiments in this section, where the number of movement batches is fixed. The search range was set to a rectangular area of (0 km, 30 km) × (0 km, 30 km), and a multi-level grid-search strategy was used, with the finest grid resolution reaching 10 m × 10 m. The target positions were set at (6, 6), (16, 5), (28, 5), (5, 16), (16, 16), (28, 16), and (6, 24) (all in km). The first two targets emit QPSK signals, the third emits a UQPSK signal with a given non-circularity rate, and the remaining sources emit BPSK signals.
From Figure 8a, it can be observed that the proposed algorithm exhibits excellent SNR performance, with localization becoming increasingly precise as the SNR increases. Below approximately 12 dB, the uniform linear array (ULA) shows the worst performance, constrained by its aperture relative to the sparse arrays. The SDCA-based algorithm consistently performs worse than the three sparse recovery algorithms, mainly because it discards more equivalent elements and therefore uses less information; when the SNR exceeds −5 dB, a gap of several hundred meters remains with respect to the proposed algorithm. At higher SNRs (beyond approximately 12 dB), this loss of information becomes more pronounced, and SDCA even performs worse than the ULA-based algorithm. The sparse recovery algorithms, which utilize all of the array-element information, outperform the others.
It is also evident that once the SNR exceeds 5 dB, the curve for the SSR algorithm flattens, and the localization accuracy remains around 200 m. This occurs because the SSR algorithm uses a grid-based recovery strategy, and as the SNR becomes sufficiently high, the predefined grid increasingly fails to align with the true source locations, leading to the so-called “basis mismatch” problem, a major issue with grid-based algorithms. The localization performance of the NNM algorithm is inferior to that of the proposed algorithm by several hundred meters at lower signal-to-noise ratios. At higher SNRs, its localization accuracy is about 50 m worse than that of the proposed algorithm. This is because, although it uses a gridless recovery strategy, it does not account for the impact of noise, leading to greater errors in the recovered array elements compared to the other sparse recovery algorithms.
Figure 8b shows the RMSE curves of each algorithm as a function of the number of snapshots at a fixed SNR. The ULA- and SDCA-based algorithms exhibit much worse localization accuracy than the sparse recovery algorithms. However, the situation of Figure 8a in which the ULA outperforms the SDCA does not occur here; in fact, the accuracy of the uniform linear array is approximately one kilometer worse than that of the SDCA-based algorithm. This indicates that, at low SNRs, the information discarded by the SDCA algorithm is less sensitive to changes in the number of snapshots.
At lower snapshot counts, the proposed algorithm has a significant advantage over the other algorithms, with its localization accuracy being approximately 100 m better than the other two sparse recovery algorithms. When the number of snapshots increases, the recovery performances of the three sparse recovery algorithms become similar. However, since the SSR algorithm involves predefined grid operations, its curve flattens after 400 snapshots.
4.3. The Impact of the Movement Trajectory
4.3.1. Batch Number
This section investigates the impact of the movement trajectory and the number of sampling batches on multi-target localization. Figure 9 shows a schematic of the simulation scenario. The true target positions are (7, 5), (7, 10), (7, 15), (7, 20), (7, 25), (15, 5), (15, 10), (15, 15), (15, 20), (15, 25), (25, 5), (25, 10), (25, 15), (25, 20), and (25, 25) (in km). The 6th, 7th, and 8th sources transmit QPSK signals, the 9th and 10th transmit UQPSK signals with a non-circularity rate of 0.8, and the remaining sources transmit BPSK signals. The SNR is set to 20 dB, and the number of snapshots is fixed.
Figure 10 shows the localization performance for different batch numbers. It is evident that as the batch number decreases, the localization performance degrades significantly. When the batch number is reduced to 10, the sources become completely unrecognizable, with numerous false peaks appearing. This occurs because the reduction in batch number undermines the effectiveness of covariance fusion and the completeness of observations, leading to false peaks at the intersections of line-of-sight vectors (the DOA vectors generated at each observation point). As the number of sources to be estimated increases, the corresponding batch number should also increase.
4.3.2. Movement Trajectory
This section demonstrates the impact of different platform movement trajectories on localization performance through several typical scenarios. In each scenario, the batch number L was set to 50, all sources emitted BPSK signals, and the SNR and number of snapshots are specified for each case.
Figure 11a shows a scenario where the movement trajectory is close to the targets. From the localization results in Figure 11b, many false peaks appear between the trajectory and the true target positions. A reasonable explanation is that when the trajectory is close to the targets, the number of intersections between different line-of-sight vectors increases, and the observation weights near the middle of the trajectory are higher, which leads to more false peaks in that region.
Figure 12a shows a scenario where the movement trajectory is farther from the target scene. From the localization results in Figure 12b, the resolution of the targets is extremely poor, mainly due to the resolution limits of the MUSIC algorithm: at longer distances, the line-of-sight vectors widen, causing different vectors to overlap at the target locations. Increasing the SNR and the number of snapshots, as shown in Figure 12c, yields better localization performance.
Figure 13a shows a scenario where the trajectory is a curve passing through the targets. It can be observed that the localization performance varies for different targets. Targets farther from the trajectory exhibit wider spectral peaks, while targets closer to the trajectory have narrower spectral peaks. This indicates that the trajectory has a significant impact on the localization of the targets.
Figure 14a shows a scenario where the trajectory is along the y-axis. From
Figure 14b, it can be seen that the localization performance for each point is good, and the amplitude of the false peaks is relatively low. This trajectory effectively handles the multi-target situation.
4.4. Degrees of Freedom
This section demonstrates the array degrees of freedom of the different algorithms. The scenario setup is the same as in Figure 9, with 99 movement batches, an SNR of 20 dB, and a fixed number of snapshots; the mixed-signal model is also the same as in Figure 10. From Equation (39) and the theory of the MUSIC algorithm, the maximum theoretical number of identifiable targets for the proposed algorithm can be determined.
Figure 15 shows the localization performance of the different algorithms in multi-target scenarios. In Figure 15a, the algorithm is limited by the array aperture, so the last seven targets were removed to match its maximum number of identifiable targets. In Figure 15b, the algorithm uses only the largest continuous portion of the sum and difference coarrays; accordingly, the last six targets were removed to match the maximum it can identify.
For the sparse recovery algorithms shown in the figure, all algorithms fully utilize the array degrees of freedom of the sparse array. Among these, the proposed algorithm and the SSR algorithm demonstrate better localization performance than the NNM algorithm.
4.5. Computation Time
This section presents the final experiment in this paper, which measures the computation time of each algorithm. The simulation was conducted on a system equipped with a 13th Gen Intel(R) Core(TM) i9-13900K CPU (Intel Corporation, Santa Clara, CA, USA) and 2 × 32.0 GB of RAM. The simulation scenario is the same as that of Figure 6 in the first experiment. Each data point underwent 200 Monte Carlo trials, and all optimization problems involved in the simulation were solved using the CVX toolbox in MATLAB.
Table 2 shows the differences in computation time between the various algorithms. It can be observed that the sparse recovery algorithms have significantly higher complexity, mainly due to the optimization-problem solving involved. The SSR algorithm has the highest complexity, primarily because of its grid-based solving process.