Article

Cooperative Electromagnetic Data Annotation via Low-Rank Matrix Completion

1 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Science and Technology on Electronic Information Control Laboratory, Chengdu 610036, China
3 Northern Institute of Electronic Equipment of China, Beijing 100089, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(1), 121; https://doi.org/10.3390/rs15010121
Submission received: 7 November 2022 / Revised: 12 December 2022 / Accepted: 16 December 2022 / Published: 26 December 2022
(This article belongs to the Special Issue Radar Techniques and Imaging Applications)

Abstract

Electromagnetic data annotation is one of the most important steps in many signal processing applications, e.g., radar signal deinterleaving and radar mode analysis. This work considers cooperative electromagnetic data annotation from multiple reconnaissance receivers/platforms. By exploiting the inherent correlation of the electromagnetic signal, as well as the correlation of the observations from multiple receivers, a low-rank matrix recovery formulation is proposed for the cooperative annotation problem. Specifically, since the measured parameters of the same emitter should be roughly the same at different platforms, the cooperative annotation is modeled as a low-rank matrix recovery problem, which is solved iteratively either by a rank-minimization method or by a maximum-rank-decomposition method. A comparison of the two methods with a traditional annotation method on both synthetic and real data is given. Numerical experiments show that the proposed methods can effectively recover missing annotations and correct annotation errors.

1. Introduction

As radar is widely used on the battlefield, radar signal reconnaissance plays an important role in electronic warfare (EW). Typically, the first step of a radar reconnaissance system is to annotate the intercepted radar pulses with some key parameters, such as pulse width, carrier frequency, pulse repetition interval, and direction of arrival (DOA), which together form the pulse description word (PDW). By analyzing the range and variation characteristics of these parameters, the working mode and behavior of the radar can be recognized. Therefore, accurate annotation is one of the key steps for radar countermeasures [1,2]. However, with the appearance of advanced multi-function radar systems, the electromagnetic environment has become increasingly complex, and annotation faces unprecedented challenges [3]. Firstly, the electromagnetic spectrum is congested, and the pulse density of radar signals has surged: in a typical environment it may exceed millions or even tens of millions of pulses per second. Secondly, advanced radar transmitters are programmable, networked, and intelligent, which leads to agile and overlapping parameters. Traditional fixed pulse patterns (such as fixed carrier frequency, fixed pulse repetition frequency, and unmodulated pulses) tend to be replaced with more complex time-varying patterns in modern radar systems. In addition, to improve anti-reconnaissance and anti-jamming capabilities, more complex inter-pulse modulation patterns are adopted, which makes it hard to accurately annotate the parameters from the interception. Moreover, the strong antagonism between the two non-cooperative sides and the demand for real-time response mean that the characteristic parameters of radar signals obtained by reconnaissance are often incomplete or even wrong. Therefore, how to accurately and stably annotate the parameters of radar pulses is crucial for radar countermeasures.
Apart from radar countermeasures, data annotation is also commonly encountered in other fields, e.g., image and text data processing. At present, most annotation still relies on traditional manual methods. Manual annotation is often labor-intensive, tedious, and inefficient due to differences in personal experience and a lack of effective information. Heuristic rule-based and pattern-matching-based annotation methods are also commonly used for image and text data [4,5,6,7]. Annotation based on heuristic rules has low accuracy and generality, and cannot add semantic annotations to all the extracted data [7]. Pattern matching utilizes pre-established matching relationships to annotate the data in a complementary manner [8], but in general it is difficult to guarantee the correctness of the matching relationship. In view of these shortcomings, traditional annotation methods are ill-suited to reconnaissance electromagnetic data obtained under non-cooperative and strongly adversarial conditions. Moreover, the reconnaissance data obtained by multiple heterogeneous platforms often suffer from poor data quality, low annotation rates, and a serious lack of annotation information, which obstructs subsequent analysis and processing. How to perform automatic annotation efficiently and accurately is therefore particularly important for radar countermeasures.
In this work, we consider radar reconnaissance data intercepted by multiple reconnaissance platforms, where, due to interference and noisy environments, each platform may have only partial, incomplete annotations of the radar pulses. Our goal is to use these partial annotations to cooperatively obtain an accurate and complete annotation. To this end, we exploit two key observations: (1) radar reconnaissance data are often inherently correlated in the time-frequency domain; (2) interceptions from multiple platforms are highly correlated since they originate from the same targets. Based on these two observations, we expect the collected data from multiple platforms to exhibit a certain low-rank structure. The low-rank matrix representation is an important data representation that has been widely used in research areas such as robust principal component analysis [8,9] and matrix completion [10,11,12,13]; combined with sparse optimization, it can also be used for image restoration [14,15,16]. Low-rank matrix recovery can be regarded as a generalization of compressed sensing, namely, recovering the original matrix from observed data under a low-rank condition [17,18,19]. Based on the theory of low-rank matrix completion and recovery, the redundancy in the data can be exploited to fill in missing elements or correct erroneous annotations. While low-rank matrix completion has been widely used in other fields, e.g., image recovery [20,21,22,23,24] and matrix completion [25,26,27,28,29,30,31,32,33], to the best of our knowledge there is no prior work on electronic reconnaissance data annotation, especially in radar countermeasure applications. In this work, we first formulate the cooperative annotation problem as a low-rank matrix completion problem and then develop two efficient optimization algorithms: one based on convex relaxation and the other on non-convex max-rank decomposition. Simulations on synthetic and real data demonstrate the efficacy of the proposed methods in comparison with conventional methods.
The outline of this paper is as follows. In Section 2, the problem formulation is presented. In Section 3, a rank-minimization algorithm for annotation completion is proposed. In Section 4, a maximum-rank-decomposition algorithm is proposed. In Section 5, numerical comparisons of the two proposed methods with some state-of-the-art algorithms are given. Finally, Section 6 concludes the paper.

2. Problem Formulation

Suppose that there are $n_1$ reconnaissance receivers/platforms and $n_2$ emitters/targets, e.g., radars, in the observation area within a certain time range. For each target, there are $n_3$ measured parameters, including time, location (such as longitude, latitude, and height), speed, frequency band, signal intensity, etc. An illustration of the measured parameters is given in Table 1, which records the annotation information of different platforms, where "**" represents the received value of a measured parameter.
The characteristics of the targets observed at different platforms in Table 1 can be written as a matrix $\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}$ by arranging the measured parameters in the order of platforms, where $m_1 = n_1 \times n_2$:
$$\mathbf{X} = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,n_3} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,n_3} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m_1,1} & x_{m_1,2} & \cdots & x_{m_1,n_3} \end{bmatrix} \tag{1}$$
In general, it is difficult to collect target information all the time at each platform, and the parameters (annotation information) detected by different platforms are not exactly the same due to the heterogeneous characteristics between different types of platforms. In addition, different platforms have different statuses, such as “work/maintenance”, at the same time. All these facts lead to the missing characteristic information in Table 1 and matrix X, which is shown in Figure 1, where the small black squares represent the missing annotation information. Our goal is to recover the missing elements in the matrix X from the partially observed data, i.e., annotation completion.
According to the definition of $\mathbf{X}$, the row vectors of characteristic parameters belonging to the same target should be highly correlated; therefore, the rank of $\mathbf{X}$ does not exceed the number of targets $n_2$, i.e., $r = \operatorname{rank}(\mathbf{X}) \le n_2$. The matrix $\mathbf{X}$ is low-rank if there are enough monitoring platforms and enough categories of characteristic parameters, i.e., $r = \operatorname{rank}(\mathbf{X}) \ll \min\{m_1, n_3\}$. Thus, annotation completion can be formulated as a low-rank matrix recovery problem, in which each row or column of the matrix can be expressed linearly by other rows or columns. The missing data can be recovered perfectly with high probability [10,22,23] using the redundant information, provided the rank of the matrix and the number of known elements meet certain conditions. Therefore, it is theoretically feasible to use low-rank matrix recovery theory for annotation completion. To put it into context, let $\mathbf{D} \in \mathbb{R}^{m_1 \times n_3}$ be the observation matrix of $\mathbf{X}$, which contains the known annotation information of $\mathbf{X}$. The annotation completion problem based on low-rank matrix recovery can be modeled as:
$$\min_{\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}} \ \|\mathbf{X} - \mathbf{D}\|_0 \quad \text{s.t.} \quad \operatorname{rank}(\mathbf{X}) \le n_2 \tag{2}$$
where $\|\mathbf{X} - \mathbf{D}\|_0$ is the $\ell_0$-norm of $\mathbf{X} - \mathbf{D}$, i.e., the number of non-zero elements in $\mathbf{X} - \mathbf{D}$. This is a difficult non-convex optimization problem because of the non-convex function $\|\cdot\|_0$ and the non-convex rank constraint, and the globally optimal solution is hard to obtain. To solve this problem, a rank-minimization-based convex approximation algorithm and a max-rank-decomposition-based non-convex algorithm are employed to find approximate solutions of problem (2).
We summarize the frequently used notations in Table 2.
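To make the data model concrete, the following NumPy sketch builds a toy stacked annotation matrix, an observed-index mask, and the corresponding observation matrix; the random data and all variable names here are illustrative assumptions, not the simulator used in Section 5.

```python
import numpy as np

n1, n2, n3 = 10, 10, 10                 # platforms, targets, features per target
m1 = n1 * n2                            # rows of the stacked matrix X

# Each target has one "true" feature row; every platform observes a copy of it,
# so the stacked matrix X has rank at most n2.
target_features = np.random.rand(n2, n3)
X = np.tile(target_features, (n1, 1))   # X in R^{m1 x n3}

# Omega marks which annotations were actually intercepted; D keeps the known
# entries of X and is zero elsewhere, mirroring the projection P_Omega of Section 3.
missing_ratio = 0.5
Omega = np.random.rand(m1, n3) > missing_ratio
D = np.where(Omega, X, 0.0)
```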

3. The Rank-Minimization-Based Convex Approximation Algorithm

In this section, a rank-minimization-based convex algorithm is proposed to solve problem (2). First, let $\Omega \subseteq \{1, 2, \dots, m_1\} \times \{1, 2, \dots, n_3\}$ denote the set of indices associated with the known annotations in $\mathbf{X}$. Define the linear projection operator $\mathcal{P}_{\Omega}: \mathbb{R}^{m_1 \times n_3} \to \mathbb{R}^{m_1 \times n_3}$ as follows:
$$[\mathcal{P}_{\Omega}(\mathbf{D})]_{i,j} = \begin{cases} D_{i,j}, & (i,j) \in \Omega \\ 0, & (i,j) \notin \Omega \end{cases} \tag{3}$$
where $D_{i,j}$ represents the element in the $i$-th row and $j$-th column of matrix $\mathbf{D} \in \mathbb{R}^{m_1 \times n_3}$. Then, problem (2) can be recast as the following matrix rank minimization problem:
$$\min_{\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}} \ \operatorname{rank}(\mathbf{X}) \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathbf{X}) = \mathcal{P}_{\Omega}(\mathbf{D}) \tag{4}$$
where $\operatorname{rank}(\cdot)$ is the rank function. Problem (4) is still non-convex, so we consider its convex relaxation. In fact, $\operatorname{rank}(\mathbf{X})$ counts the number of non-zero singular values of $\mathbf{X}$, i.e., the $\ell_0$-norm of the singular value vector. Since the $\ell_0$-norm is non-convex, the $\ell_1$-norm is utilized as its convex approximation, which gives rise to the nuclear norm of $\mathbf{X}$ as the convex approximation of $\operatorname{rank}(\mathbf{X})$. By introducing the slack variable matrix $\mathbf{E} \in \mathbb{R}^{m_1 \times n_3}$, problem (4) can be approximated by the following convex problem
$$\min_{\mathbf{X}, \mathbf{E} \in \mathbb{R}^{m_1 \times n_3}} \ \|\mathbf{X}\|_{*} \quad \text{s.t.} \quad \mathbf{X} + \mathbf{E} = \mathbf{D}, \ \ \mathcal{P}_{\Omega}(\mathbf{E}) = \mathbf{0} \tag{5}$$
where $\|\mathbf{X}\|_{*}$ is the nuclear norm of $\mathbf{X}$. To solve problem (5), we employ the alternating direction method of multipliers (ADMM). Specifically, define the augmented Lagrangian function $\mathcal{L}_c(\mathbf{X}, \mathbf{E}, \mathbf{\Lambda})$ as
$$\mathcal{L}_c(\mathbf{X}, \mathbf{E}, \mathbf{\Lambda}) = \|\mathbf{X}\|_{*} + \operatorname{Tr}\{\mathbf{\Lambda}^T (\mathbf{D} - \mathbf{X} - \mathbf{E})\} + \frac{c}{2}\|\mathbf{D} - \mathbf{X} - \mathbf{E}\|_F^2 \tag{6}$$
where $c > 0$ is the penalty factor, $\mathbf{\Lambda} \in \mathbb{R}^{m_1 \times n_3}$ is the Lagrangian multiplier matrix, $\operatorname{Tr}\{\cdot\}$ is the trace of a matrix, and $\|\cdot\|_F$ is the Frobenius norm. Then, problem (5) can be solved by alternately updating $\mathbf{X}$, $\mathbf{E}$, and $\mathbf{\Lambda}$ as follows
$$\begin{aligned} \mathbf{X}^{k+1} &= \arg\min_{\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}} \mathcal{L}_c(\mathbf{X}, \mathbf{E}^k, \mathbf{\Lambda}^k) \\ \mathbf{E}^{k+1} &= \arg\min_{\mathcal{P}_{\Omega}(\mathbf{E}) = \mathbf{0}} \mathcal{L}_c(\mathbf{X}^{k+1}, \mathbf{E}, \mathbf{\Lambda}^k) \\ \mathbf{\Lambda}^{k+1} &= \mathbf{\Lambda}^k + c(\mathbf{D} - \mathbf{X}^{k+1} - \mathbf{E}^{k+1}) \end{aligned} \tag{7}$$
In the following, the updates in (7) are derived.

3.1. Updating X

The update of $\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}$ is conducted by solving the following problem (8):
$$\min_{\mathbf{X}} \ \|\mathbf{X}\|_{*} - \operatorname{Tr}\{(\mathbf{\Lambda}^k)^T \mathbf{X}\} + \frac{c}{2}\|\mathbf{X} + \mathbf{E}^k - \mathbf{D}\|_F^2 \tag{8}$$
In order to solve (8), an auxiliary variable matrix $\mathbf{A}^k \in \mathbb{R}^{m_1 \times n_3}$ is introduced, defined as
$$\mathbf{A}^k = \mathbf{D} - \mathbf{E}^k + \frac{1}{c}\mathbf{\Lambda}^k \tag{9}$$
and the singular value decomposition of $\mathbf{A}^k$ is given by
$$\mathbf{A}^k = \mathbf{U}^k \mathbf{\Sigma}^k (\mathbf{V}^k)^T \tag{10}$$
where $\mathbf{U}^k \in \mathbb{R}^{m_1 \times m_1}$ and $\mathbf{V}^k \in \mathbb{R}^{n_3 \times n_3}$ are the left and right singular matrices, respectively, and $\mathbf{\Sigma}^k = \operatorname{Diag}\{\sigma_i\} \in \mathbb{R}^{m_1 \times n_3}$ is a diagonal matrix whose diagonal elements $\sigma_i$, $i = 1, 2, \dots, \min\{m_1, n_3\}$, are the singular values of $\mathbf{A}^k$. Define the operator $[\cdot]_+$ as
$$[\cdot]_+ = \max\{\cdot, 0\} \tag{11}$$
Then, the optimal solution of problem (8) is given by [28]
$$\mathbf{X}^{k+1} = \mathbf{U}^k \operatorname{Diag}\{[\sigma_i - c^{-1}]_+\} (\mathbf{V}^k)^T, \quad i = 1, 2, \dots, \min\{m_1, n_3\} \tag{12}$$
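The update (12) is the classical singular value thresholding step. A minimal NumPy sketch of this step (the helper name `svt` is ours):

```python
import numpy as np

def svt(A, c):
    """Singular value thresholding: U Diag{[sigma_i - 1/c]_+} V^T, cf. (12)."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(sigma - 1.0 / c, 0.0)) @ Vt
```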

3.2. Updating E

The update of $\mathbf{E} \in \mathbb{R}^{m_1 \times n_3}$ is given by solving
$$\min_{\mathbf{E} \in \mathbb{R}^{m_1 \times n_3}} \ \|\mathbf{E} - (\mathbf{D} - \mathbf{X}^{k+1} + c^{-1}\mathbf{\Lambda}^k)\|_F^2 \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathbf{E}) = \mathbf{0} \tag{13}$$
Clearly, the optimal solution $\mathbf{E}^{k+1}$ of problem (13) equals $\mathbf{D} - \mathbf{X}^{k+1} + c^{-1}\mathbf{\Lambda}^k$ on the elements outside the set $\Omega$ and is zero on $\Omega$; thus we have
$$\mathbf{E}^{k+1} = \mathcal{P}_{\bar{\Omega}}(\mathbf{D} - \mathbf{X}^{k+1} + c^{-1}\mathbf{\Lambda}^k) \tag{14}$$
where $\bar{\Omega}$ is the complement of $\Omega$ and
$$[\mathcal{P}_{\bar{\Omega}}(\mathbf{A})]_{i,j} = \begin{cases} A_{i,j}, & (i,j) \notin \Omega \\ 0, & (i,j) \in \Omega \end{cases}$$
Then, the whole procedure for solving problem (5) is summarized in Algorithm 1.
Algorithm 1 The rank-minimization-based algorithm

Initialization: $\mathbf{D}$, $\mathbf{X}^0$, $\mathbf{E}^0$, $\mathbf{\Lambda}^0$, $k = 0$
Repeat:
  $\mathbf{X}^{k+1} = \mathbf{U}^k \operatorname{Diag}\{[\sigma_i - c^{-1}]_+\}(\mathbf{V}^k)^T$;
  $\mathbf{E}^{k+1} = \mathcal{P}_{\bar{\Omega}}(\mathbf{D} - \mathbf{X}^{k+1} + c^{-1}\mathbf{\Lambda}^k)$;
  $\mathbf{\Lambda}^{k+1} = \mathbf{\Lambda}^k + c(\mathbf{D} - \mathbf{X}^{k+1} - \mathbf{E}^{k+1})$;
  $k = k + 1$;
Until some stopping criterion is satisfied;
Return: $\mathbf{X}^k$.
From Algorithm 1, we find that the computational cost is dominated by the update of $\mathbf{X}$, due to the singular value decomposition of $\mathbf{A}^k$. Since the size of $\mathbf{A}^k$ is $m_1 \times n_3$, the per-iteration computational complexity of Algorithm 1 is on the order of $\mathcal{O}(\max\{m_1, n_3\}^3)$.
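To make the procedure concrete, here is a compact NumPy sketch of Algorithm 1; the fixed iteration budget and the residual-based stopping rule are our own choices, since the paper leaves the stopping criterion open.

```python
import numpy as np

def algorithm1(D, Omega, c=1.0, max_iter=500, tol=1e-6):
    """ADMM for problem (5): nuclear-norm completion of the observed matrix D."""
    m1, n3 = D.shape
    X = np.zeros((m1, n3))
    E = np.zeros((m1, n3))
    Lam = np.zeros((m1, n3))
    for _ in range(max_iter):
        # X-update (12): singular value thresholding of A^k = D - E^k + Lam^k / c
        A = D - E + Lam / c
        U, sig, Vt = np.linalg.svd(A, full_matrices=False)
        X = U @ np.diag(np.maximum(sig - 1.0 / c, 0.0)) @ Vt
        # E-update (14): keep the residual only on the unobserved entries
        E = np.where(Omega, 0.0, D - X + Lam / c)
        # Multiplier update
        R = D - X - E
        Lam = Lam + c * R
        if np.linalg.norm(R) < tol:
            break
    return X
```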

4. The Maximum-Rank-Decomposition-Based Non-Convex Algorithm

In this section, we consider an alternative way to tackle the annotation completion problem (2) from the maximum-rank decomposition perspective. Specifically, the maximum-rank decomposition of $\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}$ (supposing $\operatorname{rank}(\mathbf{X}) = m_2$) is given by
$$\mathbf{X} = \mathbf{U}\mathbf{V} \tag{15}$$
where $\mathbf{U} \in \mathbb{R}^{m_1 \times m_2}$ and $\mathbf{V} \in \mathbb{R}^{m_2 \times n_3}$. Upon (15), problem (2) is recast as
$$\min_{\mathbf{X} \in \mathbb{R}^{m_1 \times n_3},\, \mathbf{U} \in \mathbb{R}^{m_1 \times m_2},\, \mathbf{V} \in \mathbb{R}^{m_2 \times n_3}} \ \|\mathbf{X} - \mathbf{D}\|_1 \quad \text{s.t.} \quad \mathbf{X} = \mathbf{U}\mathbf{V} \tag{16}$$
As before, we employ the ADMM approach to handle problem (16). Specifically, the augmented Lagrangian function of (16) is given by
$$\mathcal{L}_c(\mathbf{X}, \mathbf{U}, \mathbf{V}, \mathbf{\Phi}) = \|\mathbf{X} - \mathbf{D}\|_1 + \operatorname{Tr}\{\mathbf{\Phi}^T(\mathbf{U}\mathbf{V} - \mathbf{X})\} + \frac{c}{2}\|\mathbf{U}\mathbf{V} - \mathbf{X}\|_F^2 \tag{17}$$
where $\mathbf{\Phi} \in \mathbb{R}^{m_1 \times n_3}$ is the Lagrangian multiplier matrix and $c$ is the penalty factor. The ADMM algorithm repeatedly runs the following updates
$$\begin{aligned} \mathbf{X}^{k+1} &= \arg\min_{\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}} \mathcal{L}_c(\mathbf{X}, \mathbf{U}^k, \mathbf{V}^k, \mathbf{\Phi}^k) \\ \mathbf{U}^{k+1} &= \arg\min_{\mathbf{U} \in \mathbb{R}^{m_1 \times m_2}} \mathcal{L}_c(\mathbf{X}^{k+1}, \mathbf{U}, \mathbf{V}^k, \mathbf{\Phi}^k) \\ \mathbf{V}^{k+1} &= \arg\min_{\mathbf{V} \in \mathbb{R}^{m_2 \times n_3}} \mathcal{L}_c(\mathbf{X}^{k+1}, \mathbf{U}^{k+1}, \mathbf{V}, \mathbf{\Phi}^k) \\ \mathbf{\Phi}^{k+1} &= \mathbf{\Phi}^k + c(\mathbf{U}^{k+1}\mathbf{V}^{k+1} - \mathbf{X}^{k+1}) \end{aligned} \tag{18}$$
until stopping criteria are satisfied.

4.1. Updating X

The update of $\mathbf{X}$ is given by solving
$$\min_{\mathbf{X} \in \mathbb{R}^{m_1 \times n_3}} \ \|\mathbf{X} - \mathbf{D}\|_1 - \operatorname{Tr}\{(\mathbf{\Phi}^k)^T \mathbf{X}\} + \frac{c}{2}\|\mathbf{U}^k\mathbf{V}^k - \mathbf{X}\|_F^2 \tag{19}$$
By using the first-order optimality condition, we have
$$\mathbf{\Phi}^k + c(\mathbf{U}^k\mathbf{V}^k - \mathbf{X}) \in \partial\|\mathbf{X} - \mathbf{D}\|_1 \tag{20}$$
where $\partial\|\mathbf{X} - \mathbf{D}\|_1$ represents the sub-differential of $\|\mathbf{X} - \mathbf{D}\|_1$, which is given by
$$\partial\|\mathbf{X} - \mathbf{D}\|_1 = \begin{cases} \dfrac{\mathbf{X} - \mathbf{D}}{\|\mathbf{X} - \mathbf{D}\|_1}, & \mathbf{X} \neq \mathbf{D} \\ \{\mathbf{e} \mid \|\mathbf{e}\|_1 \le 1\}, & \mathbf{X} = \mathbf{D} \end{cases} \tag{21}$$
with $\mathbf{e} \in \mathbb{R}^{m_1 \times 1}$ and $\|\mathbf{e}\|_1 \le 1$. Then, we have
$$\mathbf{X}^{k+1} = \begin{cases} \mathbf{D}, & \|\mathbf{Y}^k\|_1 \le 1 \\ \dfrac{\|\mathbf{Y}^k\|_1 - 1}{c} \cdot \dfrac{\mathbf{Y}^k}{\|\mathbf{Y}^k\|_1} + \mathbf{D}, & \text{otherwise} \end{cases} \tag{22}$$
where $\mathbf{Y}^k = \mathbf{\Phi}^k + c(\mathbf{U}^k\mathbf{V}^k - \mathbf{D})$.

4.2. Updating U

The update of $\mathbf{U} \in \mathbb{R}^{m_1 \times m_2}$ is given by solving
$$\min_{\mathbf{U} \in \mathbb{R}^{m_1 \times m_2}} \ \operatorname{Tr}\{(\mathbf{\Phi}^k)^T \mathbf{U}\mathbf{V}^k\} + \frac{c}{2}\|\mathbf{U}\mathbf{V}^k - \mathbf{X}^{k+1}\|_F^2 \tag{23}$$
As problem (23) is an unconstrained quadratic program, its optimal solution is given by the first-order optimality condition; thus we have
$$\mathbf{U}^{k+1} = \left(\mathbf{X}^{k+1} - \frac{1}{c}\mathbf{\Phi}^k\right)(\mathbf{V}^k)^T \left(\mathbf{V}^k(\mathbf{V}^k)^T\right)^{-1} \tag{24}$$

4.3. Updating V

The update of $\mathbf{V} \in \mathbb{R}^{m_2 \times n_3}$ is given by solving
$$\min_{\mathbf{V} \in \mathbb{R}^{m_2 \times n_3}} \ \operatorname{Tr}\{(\mathbf{\Phi}^k)^T \mathbf{U}^{k+1}\mathbf{V}\} + \frac{c}{2}\|\mathbf{U}^{k+1}\mathbf{V} - \mathbf{X}^{k+1}\|_F^2 \tag{25}$$
Similar to problem (23), its optimal solution is given by
$$\mathbf{V}^{k+1} = \left((\mathbf{U}^{k+1})^T\mathbf{U}^{k+1}\right)^{-1}(\mathbf{U}^{k+1})^T\left(\mathbf{X}^{k+1} - \frac{1}{c}\mathbf{\Phi}^k\right) \tag{26}$$
We summarize the whole procedure of the ADMM algorithm for problem (16) in Algorithm 2.
The computational complexity of Algorithm 2 is determined by the updating steps. Note that the size of $\mathbf{U}^k$ is $m_1 \times m_2$, the size of $\mathbf{V}^k$ is $m_2 \times n_3$, and, by the low-rank assumption, $m_2 \ll m_1$ and $m_2 \ll n_3$. The computational complexity of updating $\mathbf{X}^k$, $\mathbf{U}^k$, and $\mathbf{V}^k$ is on the order of $\mathcal{O}(m_1 \times m_2 \times n_3)$. Hence, the non-convex algorithm (Algorithm 2) has a lower per-iteration complexity than the convex algorithm (Algorithm 1).
In addition, the two proposed methods are designed to recover real-valued feature parameters, and the auxiliary variables used in the algorithms are real as well; therefore, they cannot be directly applied to complex-valued parameters.
Algorithm 2 The max-rank-decomposition-based algorithm

Initialization: $\mathbf{D}$, $\mathbf{U}^0$, $\mathbf{V}^0$, $\mathbf{\Phi}^0$, $k = 0$
Repeat:
  update $\mathbf{X}^{k+1}$ by (22), with $\mathbf{Y}^k = \mathbf{\Phi}^k + c(\mathbf{U}^k\mathbf{V}^k - \mathbf{D})$;
  $\mathbf{U}^{k+1} = (\mathbf{X}^{k+1} - \frac{1}{c}\mathbf{\Phi}^k)(\mathbf{V}^k)^T(\mathbf{V}^k(\mathbf{V}^k)^T)^{-1}$;
  $\mathbf{V}^{k+1} = ((\mathbf{U}^{k+1})^T\mathbf{U}^{k+1})^{-1}(\mathbf{U}^{k+1})^T(\mathbf{X}^{k+1} - \frac{1}{c}\mathbf{\Phi}^k)$;
  $\mathbf{\Phi}^{k+1} = \mathbf{\Phi}^k + c(\mathbf{U}^{k+1}\mathbf{V}^{k+1} - \mathbf{X}^{k+1})$;
  $k = k + 1$;
Until some stopping criterion is satisfied;
Return: $\mathbf{X}^k$.
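For illustration, a compact NumPy sketch of Algorithm 2 follows; the rank guess `m2`, the random initialization, and the reading of $\|\cdot\|_1$ in (22) as the entrywise $\ell_1$-norm are assumptions on our part.

```python
import numpy as np

def algorithm2(D, m2, c=1.0, max_iter=500, tol=1e-6):
    """ADMM for problem (16): max-rank factorization X = U V."""
    m1, n3 = D.shape
    U = np.random.randn(m1, m2)
    V = np.random.randn(m2, n3)
    Phi = np.zeros((m1, n3))
    X = D.copy()
    for _ in range(max_iter):
        # X-update (22): shrink toward D along Y^k when ||Y^k||_1 > 1
        Y = Phi + c * (U @ V - D)
        y1 = np.abs(Y).sum()               # entrywise l1-norm of Y^k
        X = D.copy() if y1 <= 1 else (y1 - 1) / c * Y / y1 + D
        # U-update (24) and V-update (26): unconstrained least squares
        G = X - Phi / c
        U = G @ V.T @ np.linalg.inv(V @ V.T)
        V = np.linalg.inv(U.T @ U) @ U.T @ G
        # Multiplier update
        R = U @ V - X
        Phi = Phi + c * R
        if np.linalg.norm(R) < tol:
            break
    return X, U, V
```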

5. Numerical Experiments and Discussion

In this section, the performance of the two proposed methods is tested on synthetic and real data, and a comparison with three other methods is also given. To evaluate the performance, the mean squared error (MSE) is adopted as the performance metric, defined as
$$\text{MSE} = \frac{\text{Error}}{mn} \tag{27}$$
with
$$\text{Error} = \sum_{i,j} \frac{\left(X_{i,j} - \hat{X}_{i,j}\right)^2}{X_{i,j}^2} \tag{28}$$
where $\mathbf{X}$ is the original matrix of size $m \times n$ with $i = 1, 2, \dots, m$ and $j = 1, 2, \dots, n$, and $\hat{\mathbf{X}}$ is the recovered matrix.
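The metric (27)–(28) translates directly into code. A minimal sketch, assuming no entry of $\mathbf{X}$ is zero (the features in the tests below are normalized):

```python
import numpy as np

def mse(X, X_hat):
    """Normalized MSE of (27)-(28): mean of squared relative entry errors."""
    m, n = X.shape
    error = np.sum((X - X_hat) ** 2 / X ** 2)
    return error / (m * n)
```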

5.1. Synthetic Data Test of Proposed Methods

The synthetic data are generated by a radar target simulator with 10 platforms and 10 targets observed at time instants $t = (t_1, \dots, t_{10})$; for each target, 10 features are utilized, and each feature is normalized, which forms the original $100 \times 100$ data matrix $\mathbf{X}$ with rank $r = 10$. In order to test the performance of the proposed methods under different missing ratios, the observation matrix $\mathbf{D}$ is obtained by randomly dropping elements in each row of $\mathbf{X}$ at different ratios and setting them as empty. Part of the elements of $\mathbf{X}$ are shown in Table 3, and part of the observation matrix $\mathbf{D}$, with 50% of the annotations of $\mathbf{X}$ randomly removed, is shown in Table 4.
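For reproducibility, the following sketch generates a comparable test instance: a rank-10 factor product stands in for the simulator output (an assumption on our part), followed by per-row random dropout at the given missing ratio.

```python
import numpy as np

def make_test_data(m=100, n=100, r=10, missing_ratio=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.random((m, r)) @ rng.random((r, n))   # rank-r original matrix
    Omega = np.ones((m, n), dtype=bool)
    k = int(missing_ratio * n)
    for i in range(m):                            # drop k entries per row
        Omega[i, rng.choice(n, size=k, replace=False)] = False
    D = np.where(Omega, X, 0.0)
    return X, D, Omega
```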
In Table 5 and Table 6, the annotations completed by Algorithms 1 and 2 are given, respectively. It can be seen that the missing elements are recovered after matrix completion. Compared with the original matrix $\mathbf{X}$, the proposed methods recover $\mathbf{X}$ efficiently. Take the first row of $\mathbf{X}$ for example: the fourth, fifth, and sixth elements in Table 5 are recovered by Algorithm 1 with values 1.1639, 1.2384, and 1.0438, which are exactly the same as those in $\mathbf{X}$, i.e., they are perfectly recovered. Meanwhile, the corresponding values recovered by Algorithm 2 in Table 6 are 1.1643, 1.1978, and 1.0437, with an MSE of about $1 \times 10^{-3}$, which suggests that the proposed methods can fill in the missing annotations efficiently.
In Figure 2, the MSE of the two proposed methods under different missing ratios is given. The MSE decreases as the missing ratio decreases, which suggests that both proposed methods can recover or correct the missing or wrong elements in $\mathbf{D}$ efficiently. Comparing the two methods, Algorithm 1 has a lower MSE when the missing ratio is below 0.7; the main reason is that the max-rank decomposition in Algorithm 2 introduces an additional approximation error.
In the discussion above, we have assumed rank($\mathbf{X}$) = 10 as a prior. In practice, the rank of $\mathbf{X}$ is generally unknown and needs to be estimated jointly. The rank minimization in Algorithm 1 cannot estimate the rank of $\mathbf{D}$ directly, while Algorithm 2 can predict the rank directly owing to the max-rank decomposition of $\mathbf{D}$. The comparison between the estimated rank given by Algorithm 2 and the real rank of $\mathbf{X}$ is presented in Figure 3. It can be seen that the estimated rank is consistent with the real rank. In fact, we find that when the missing ratio is below 50%, the curve of the rank setting vs. the estimated rank is consistent with the curve in Figure 3; the main reason is that fewer missing records result in better recovery. When the missing ratio exceeds 50%, the estimated rank becomes unstable and inconsistent with the rank setting; the main reason is that more missing records can lead to rank variation.
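One simple way to read off an estimated rank from a recovered matrix is to count its numerically significant singular values; the sketch below does this, with a heuristic threshold of our own choosing (the paper does not specify one).

```python
import numpy as np

def estimated_rank(X_hat, rel_tol=1e-2):
    # Count singular values above a relative threshold; rel_tol is heuristic.
    s = np.linalg.svd(X_hat, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))
```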

5.2. Real Data Test of Proposed Methods

Apart from the synthetic data test, in the following we verify the performance of the proposed methods on real data, namely PDW records from real radars. For the real data test, the missing ratio is about 30%. The missing information is set as empty; moreover, certain errors are added to verify the error-correction capability of the proposed methods. Part of the real data $\mathbf{X}$ and the observation data $\mathbf{D}$ are illustrated in Table 7 and Table 8, respectively.
The recovery of missing PDW records by Algorithms 1 and 2 is shown in Table 9 and Table 10, respectively. From the two tables, it can be seen that both methods can fill in the missing annotations accurately. Specifically, for the carrier frequency annotation in the first column, the MSE is about $1 \times 10^{-3}$; for the pulse width annotation in the second column, the MSE is about $1 \times 10^{-2}$; for the amplitude annotation in the fourth column, the error is about $1 \times 10^{-2}$; and for the AOA parameter in the last column, the error is about $1 \times 10^{-3}$.
The correction of wrong PDW records by Algorithms 1 and 2 is also validated. For the real data $\mathbf{X}$ in Table 7, the PW and AOA records of Target 19 at Platform 1 are "0.2200" and "0.3533", which are erroneous and totally different from the records of the other platform. From Table 9, the corrections of Algorithm 1 for PW and AOA are "1.7863" and "461.7466", which are close to the records of Platform 2. The results of Algorithm 2 are consistent with those of Algorithm 1, which suggests that the proposed methods can correct wrong records efficiently.
In addition, the run times of Algorithms 1 and 2 are compared under different missing ratios, with the result shown in Figure 4. The run time of Algorithm 2 is stable across missing ratios, and much lower than that of Algorithm 1 when the missing ratio exceeds 0.3. This is consistent with the complexity analysis at the end of Section 4.3.
Finally, the iteration numbers of Algorithms 1 and 2 under different missing ratios are shown in Figure 5. The iteration number of Algorithm 2 is lower than that of Algorithm 1 and stable across missing ratios, which is consistent with the running time and complexity analysis.

5.3. Comparison Test

In this section, the proposed methods are compared with three state-of-the-art methods for electromagnetic data annotation completion. The three compared methods are:
  • The K-nearest neighbor (KNN) method in [32], which predicts a missing annotation from its K nearest neighbors (a minimal imputation sketch in this spirit is given after the list);
  • The augmented Lagrange multiplier (ALM) method for low-rank matrix recovery in [27], where the annotation completion is formulated as a convex optimization model solved by the ALM algorithm;
  • The nuclear norm regularized least squares (NNLS) method for annotation completion in [28], where the annotation completion is formulated as an optimization model solved by an accelerated proximal gradient algorithm.
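For reference, below is a minimal imputation sketch in the spirit of the KNN baseline; it reflects our reading of [32], not its exact implementation.

```python
import numpy as np

def knn_impute(D, Omega, k=5):
    """Fill missing entries of D (mask Omega) from the k most similar rows."""
    X_hat = D.astype(float).copy()
    m, _ = D.shape
    for i in range(m):
        missing = ~Omega[i]
        if not missing.any():
            continue
        # Distance to other rows over the commonly observed features only.
        dists = np.full(m, np.inf)
        for j in range(m):
            common = Omega[i] & Omega[j]
            if j != i and common.any():
                dists[j] = np.linalg.norm(D[i, common] - D[j, common])
        nbrs = np.argsort(dists)[:k]
        for f in np.where(missing)[0]:
            vals = [D[j, f] for j in nbrs if Omega[j, f]]
            if vals:                      # average the neighbors that observed f
                X_hat[i, f] = np.mean(vals)
    return X_hat
```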
For the comparison test, the synthetic data of Section 5.1 are used: generated by the radar target simulator with 10 platforms and 10 targets at $t = (t_1, \dots, t_{10})$, with 10 features per target, forming the original $100 \times 100$ data matrix $\mathbf{X}$ with rank $r = 10$.
The MSEs of the five methods under different missing ratios are shown in Figure 6. The MSE increases roughly with the missing ratio for all methods. The MSEs of the proposed Algorithms 1 and 2 are roughly the same, and much lower than those of the KNN, ALM, and NNLS methods, which demonstrates the superior recovery performance of the ADMM algorithms. Comparing KNN with Algorithms 1 and 2 shows that exploiting the low-rank structure enables efficient recovery of the missing annotations. In addition, the average MSE of the five compared methods is presented in Table 11; for each missing ratio, the feature parameters are dropped randomly ten times to obtain the average MSE of each method.
Finally, the running times of the different methods are given in Figure 7. The proposed Algorithm 1 is more time-consuming than the other methods because of the SVD it performs, and its running time grows considerably with the missing ratio. The KNN method has the lowest running time owing to its low computational cost. The running times of the NNLS and ALM methods are lower than those of the proposed Algorithms 1 and 2, mainly because the SVD in the proposed algorithms is time-consuming.
Based on the above discussion, the proposed methods can recover and correct missing and wrong annotations efficiently, although their running time is higher than that of the compared methods.

6. Conclusions

In this work, we have considered cooperative annotation for electromagnetic reconnaissance data. By exploiting the correlation of observations at different platforms, we formulated the annotation completion problem as a low-rank matrix recovery problem and proposed two methods to solve it: a rank-minimization-based convex algorithm and a maximum-rank-decomposition-based non-convex algorithm. Numerical experiments on synthetic and real data suggest that the proposed methods can recover missing annotations efficiently and achieve better MSE performance than the compared annotation methods.

Author Contributions

Conceptualization, W.Z. and J.Y.; methodology, W.Z., Q.L. and G.S.; software, Q.L. and J.L.; validation, W.Z., H.S. and G.S.; visualization, Q.L. and H.S.; writing—original draft, W.Z. and G.S.; writing—review and editing, J.Y. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) 61871092 and U20B2070.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sui, J.; Liu, Z.; Liu, L.; Li, X. Progress in Radar Emitter Signal Deinterleaving. J. Radars 2022, 11, 418–433.
  2. Fu, Y.; Wang, X. Radar signal recognition based on modified semi-supervised SVM algorithm. In Proceedings of the 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 25–26 March 2017; pp. 2336–2340.
  3. Zhu, M.; Wang, S.; Li, Y. Model-based Representation and Deinterleaving of Mixed Radar Pulse Sequences with Neural Machine Translation Network. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 1733–1752.
  4. He, Y.; Zhu, Y.; Zhao, P. Panorama of national defense big data. Syst. Eng. Electron. 2016, 38, 1300–1305.
  5. Li, A.; Zang, Q.; Sun, D.; Wang, M. A text feature-based approach for literature mining of lncRNA-protein interactions. Neurocomputing 2016, 206, 73–80.
  6. Arlotta, L.; Crescenzi, V.; Mecca, G.; Merialdo, P. Automatic annotation of data extracted from large web sites. In Proceedings of the 6th International Workshop on Web and Databases, San Diego, CA, USA, 12–13 June 2003; ACM: New York, NY, USA, 2003; pp. 7–12.
  7. Li, M.; Li, X. Deep web data annotation method based on result schema. J. Comput. Appl. 2011, 31, 1733–1736.
  8. Candes, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 1–37.
  9. Xu, H.; Caramanis, C.; Sanghavi, S. Robust PCA via Outlier Pursuit. IEEE Trans. Inf. Theory 2012, 58, 3047–3064.
  10. Candes, E.J.; Tao, T. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theory 2010, 56, 2053–2080.
  11. Kulin, M.; Kazaz, T.; Moerman, I.; De Poorter, E. End-to-end learning from spectrum data: A deep learning approach for wireless signal identification in spectrum monitoring applications. IEEE Access 2018, 6, 18484–18501.
  12. Recht, B.; Fazel, M.; Parrilo, P.A. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 2010, 52, 471–501.
  13. Wen, Z.; Yin, W.; Zhang, Y. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 2012, 4, 333–361.
  14. Waters, A.; Sankaranarayanan, A.; Baraniuk, R. SpaRCS: Recovering low-rank and sparse matrices from compressive measurements. Neural Inf. Process. Syst. 2011, 24, 1089–1097.
  15. Christodoulou, A.; Zhang, H.; Zhao, B.; Hitchens, T.K.; Ho, C.; Liang, Z.-P. High-Resolution Cardiovascular MRI by Integrating Parallel Imaging With Low-Rank and Sparse Modeling. IEEE Trans. Biomed. Eng. 2013, 60, 3083–3092.
  16. Sykulski, M. RobustPCA: Decompose a Matrix into Low-Rank and Sparse Components. 2015. Available online: https://CRAN.R-project.org/package=rpca (accessed on 31 July 2015).
  17. Chen, Y.; Xu, H.; Caramanis, C.; Sanghavi, S. Robust matrix completion and corrupted columns. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 28 June–2 July 2011; pp. 873–880.
  18. Negahban, S.; Wainwright, M. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. J. Mach. Learn. Res. 2012, 13, 1665–1697.
  19. Dai, W.; Kerman, E.; Milenkovic, O. A geometric approach to low-rank matrix completion. IEEE Trans. Inf. Theory 2012, 58, 237–247.
  20. Bai, H.; Ma, J.; Xiong, K.; Hu, F. Design of weighted matrix completion model in image inpainting. Syst. Eng. Electron. 2016, 38, 1703–1708.
  21. Zhang, L.; Zhou, Z.; Gao, S.; Yin, J.; Lin, Z.; Ma, Y. Label information guided graph construction for semi-supervised learning. IEEE Trans. Image Process. 2017, 26, 4182–4192.
  22. Keshavan, R.H.; Montanari, A.; Oh, S. Matrix completion from noisy entries. J. Mach. Learn. Res. 2010, 11, 2057–2078.
  23. Recht, B. A simpler approach to matrix completion. J. Mach. Learn. Res. 2011, 12, 3413–3430.
  24. Candes, E.; Plan, Y. Matrix completion with noise. Proc. IEEE 2010, 98, 925–936.
  25. Wang, C.; Zhao, H.; Wang, J.; Li, X.; Huang, P. SAR image denoising via fast weighted nuclear norm minimization. Syst. Eng. Electron. 2019, 41, 1504–1508.
  26. Hestenes, M. Multiplier and gradient methods. J. Optim. Theory Appl. 1969, 4, 303–320.
  27. Lin, J.; Jiang, C.; Li, Q.; Shao, H.; Li, Y. Distributed method for joint power allocation and admission control based on ADMM framework. J. Univ. Electron. Sci. Technol. China 2016, 45, 726–731.
  28. Toh, K.; Yun, S. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 2010, 6, 615–640.
  29. Shen, H.; Li, X.; Cheng, Q.; Zeng, C.; Yang, G.; Li, H.; Zhang, L. Missing information reconstruction of remote sensing data: A technical review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 61–85.
  30. Lin, C.; Lai, K.; Chen, Z.; Chen, Z.-B.; Chen, J.-Y. Patch-based information reconstruction of cloud-contaminated multitemporal images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 163–174.
  31. Parikh, N.; Boyd, S. Proximal Algorithms. Foundations and Trends in Optimization; Now Publishers: Delft, The Netherlands, 2014.
  32. Chen, R. Semi-supervised k-nearest neighbor classification method. J. Image Graph. 2013, 18, 195–200.
  33. Shi, Q.; Hong, M. Penalty dual decomposition method for nonsmooth nonconvex optimization—Part I: Algorithm and convergence analysis. IEEE Trans. Signal Process. 2020, 68, 4108–4122.
Figure 1. Partially annotated characteristics matrix.
Figure 2. The MSE of the two proposed methods under different missing ratios.
Figure 3. The rank setting vs. the estimated rank of Algorithm 2.
Figure 4. The running time comparison of Algorithms 1 and 2 under different missing ratios.
Figure 5. The iteration number comparison of Algorithms 1 and 2 under different missing ratios.
Figure 6. The MSE comparison for different methods under different missing ratios.
Figure 7. The running time comparison for different methods under different missing ratios.
Table 1. An illustration of annotation information of electronic reconnaissance data.

Platform Label | Target Label | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | … | Feature $n_3$
1   | 1     | ** | ** | ** | ** | ** | … | **
1   | ⋮     |    |    |    |    |    |   |
1   | $n_2$ | ** | ** | ** | ** | ** | … | **
⋮   |       |    |    |    |    |    |   |
$n_1$ | 1     | ** | ** | ** | ** | ** | … | **
$n_1$ | ⋮     |    |    |    |    |    |   |
$n_1$ | $n_2$ | ** | ** | ** | ** | ** | … | **
Table 2. The notation of symbols.

Notation | Explanation
$\mathbf{X}$, $\mathbf{D}$, $\mathbf{E}$, $\mathbf{\Lambda}$, $\mathbf{U}$, $\mathbf{V}$ | Matrix
$\mathbf{e}$ | Vector
$x_{i,j}$, $n_1$, $n_2$, $n_3$, $c$ | Scalar
Table 3. The original annotated matrix X.

Target Label ($t_i$) | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | Feature 6 | Feature 7 | Feature 8 | Feature 9 | Feature 10
1  | 0.8331 | 0.9314 | 1.6636 | 1.1639 | 1.2384 | 1.0438 | 1.2527 | 1.0609 | 0.5221 | 0.8351
2  | 0.7860 | 1.3702 | 1.6861 | 1.6636 | 1.2148 | 0.8691 | 1.1024 | 1.7871 | 0.7318 | 1.1431
3  | 1.0400 | 0.9844 | 1.1685 | 1.1966 | 0.9242 | 0.6846 | 1.0263 | 1.0460 | 0.6473 | 0.8802
4  | 0.7558 | 1.1816 | 1.4044 | 1.5881 | 1.0996 | 0.7906 | 0.9751 | 1.7054 | 0.7131 | 0.9999
5  | 1.1372 | 1.5230 | 2.2505 | 2.0789 | 1.7841 | 1.4639 | 1.6405 | 2.1031 | 0.9132 | 1.3650
6  | 0.6587 | 0.8033 | 1.5688 | 1.2180 | 1.2643 | 1.1109 | 1.1562 | 1.2050 | 0.4966 | 0.7463
7  | 0.2884 | 0.5634 | 0.5258 | 0.6923 | 0.4723 | 0.4630 | 0.3422 | 0.7309 | 0.2969 | 0.5746
8  | 0.7313 | 0.8509 | 0.9388 | 1.1082 | 0.7031 | 0.3967 | 0.7450 | 1.1216 | 0.5658 | 0.6677
9  | 1.0431 | 1.3981 | 1.7029 | 1.7407 | 1.3799 | 1.1854 | 1.2670 | 1.7087 | 0.8046 | 1.3185
10 | 1.0597 | 1.3580 | 1.6473 | 1.9323 | 1.3356 | 0.9283 | 1.2509 | 2.0165 | 0.9090 | 1.1356
1  | 1.7201 | 1.7509 | 2.6238 | 2.1080 | 2.0970 | 1.9122 | 2.1279 | 1.7904 | 1.0234 | 1.7243
2  | 0.7612 | 1.3054 | 2.0316 | 1.5709 | 1.5117 | 1.3303 | 1.13453 | 1.6025 | 0.6299 | 1.2025
3  | 0.7386 | 0.9366 | 1.3114 | 1.1991 | 1.1835 | 1.2654 | 1.0069 | 1.0870 | 0.5154 | 1.0572
4  | 0.7747 | 1.0134 | 1.7442 | 1.3499 | 1.4382 | 1.4058 | 1.2767 | 1.2729 | 0.5502 | 1.0479
5  | 1.1288 | 1.1637 | 1.5743 | 1.2103 | 1.1755 | 1.0365 | 1.2772 | 0.9698 | 0.6280 | 1.1144
Table 4. The partially annotated matrix D ("–" denotes a missing entry).

Target Label ($t_i$) | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | Feature 6 | Feature 7 | Feature 8 | Feature 9 | Feature 10
1  | 0.8331 | – | 1.6636 | – | – | – | 1.2527 | – | 0.5221 | 0.8351
2  | 0.7860 | 1.3702 | – | 1.6636 | 1.2148 | – | – | 1.7871 | 0.7318 | 1.1431
3  | – | 0.9844 | 1.1685 | – | 0.9242 | – | 1.0263 | – | – | –
4  | 0.7558 | – | 1.4044 | – | – | – | – | 1.7054 | 0.7131 | 0.9999
5  | 1.1372 | – | – | – | 1.7841 | 1.4639 | – | 2.1031 | 0.9132 | 1.3650
6  | – | – | 1.5688 | – | 1.2643 | – | – | 1.2050 | 0.4966 | 0.7463
7  | 0.2884 | – | 0.5258 | 0.6923 | 0.4723 | 0.4630 | 0.3422 | 0.7309 | 0.2969 | 0.5746
8  | 0.7313 | 0.8509 | – | – | – | – | 0.7450 | – | 0.5658 | 0.6677
9  | 1.0431 | – | – | – | 1.3799 | – | 1.2670 | – | – | 1.3185
10 | – | – | 1.6473 | 1.9323 | – | – | 1.2509 | 2.0165 | 0.9090 | –
1  | – | – | 2.6238 | – | 2.0970 | 1.9122 | 2.1279 | – | – | 1.7243
2  | 0.7612 | 1.3054 | – | – | 1.5117 | – | – | – | 0.6299 | 1.2025
3  | – | – | 1.3114 | 1.1991 | – | – | 1.0069 | – | 0.5154 | 1.0572
4  | – | 1.0134 | – | – | 1.4382 | 1.4058 | – | 1.2729 | 0.5502 | –
5  | – | 1.1637 | – | 1.2103 | 1.1755 | 1.0365 | 1.2772 | 0.9698 | 0.6280 | 1.1144
Table 5. Results recovered by Algorithm 1.

Target Label ($t_i$) | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | Feature 6 | Feature 7 | Feature 8 | Feature 9 | Feature 10
1  | 0.8331 | 0.9314 | 1.6636 | 1.1639 | 1.2384 | 1.0438 | 1.2527 | 1.0609 | 0.5221 | 0.8351
2  | 0.7860 | 1.3702 | 1.6861 | 1.6636 | 1.2148 | 1.1024 | 1.7871 | 1.7871 | 0.7318 | 1.1431
3  | 1.0400 | 0.9844 | 1.1685 | 1.1966 | 0.9242 | 0.6946 | 1.0263 | 1.0460 | 0.6473 | 0.8802
4  | 0.7558 | 1.1816 | 1.4044 | 1.5881 | 1.0996 | 0.7906 | 0.9751 | 1.7054 | 0.7131 | 0.9999
5  | 1.1372 | 1.5230 | 2.2505 | 2.0789 | 1.7841 | 1.4639 | 1.6405 | 2.1031 | 0.9132 | 1.3650
6  | 0.6587 | 0.8033 | 1.5688 | 1.2180 | 1.2643 | 1.1109 | 1.1562 | 1.2050 | 0.4966 | 0.7463
7  | 0.2884 | 0.5634 | 0.5258 | 0.6923 | 0.4723 | 0.4630 | 0.3422 | 0.7309 | 0.2969 | 0.5746
8  | 0.7313 | 0.8509 | 0.9388 | 1.1082 | 0.7031 | 0.3967 | 0.7450 | 1.1216 | 0.5658 | 0.6677
9  | 1.0431 | 1.3981 | 1.7029 | 1.7407 | 1.3799 | 1.1854 | 1.2670 | 1.7087 | 0.8046 | 1.3185
10 | 1.0597 | 1.3580 | 1.6473 | 1.9323 | 1.3356 | 0.9283 | 1.2509 | 2.0165 | 0.9090 | 1.1356
1  | 1.7201 | 1.7509 | 2.6238 | 2.1080 | 2.0970 | 1.9122 | 2.1279 | 1.7904 | 1.0234 | 1.7243
2  | 0.7612 | 1.3054 | 2.0316 | 1.5709 | 1.5117 | 1.3303 | 1.3453 | 1.6025 | 0.6299 | 1.2025
3  | 0.7386 | 0.9366 | 1.3114 | 1.1991 | 1.1835 | 1.2654 | 1.0069 | 1.0870 | 0.5154 | 1.0572
4  | 0.7747 | 1.0134 | 1.7442 | 1.3499 | 1.4382 | 1.4058 | 1.2767 | 1.2729 | 0.5502 | 1.0479
5  | 1.1288 | 1.1637 | 1.5743 | 1.2103 | 1.1755 | 1.0365 | 1.2772 | 0.9698 | 0.6280 | 1.1144
Table 6. Results recovered by Algorithm 2.

Target Label ($t_i$) | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | Feature 6 | Feature 7 | Feature 8 | Feature 9 | Feature 10
1  | 0.8331 | 0.9376 | 1.6636 | 1.1643 | 1.1978 | 1.0437 | 1.2527 | 1.0901 | 0.5221 | 0.8351
2  | 0.7860 | 1.3702 | 1.6875 | 1.6636 | 1.2148 | 0.8691 | 1.1693 | 1.7420 | 0.7318 | 1.1431
3  | 1.0413 | 0.9844 | 1.1685 | 1.1685 | 0.9242 | 0.6946 | 1.0263 | 1.1058 | 0.6496 | 0.8788
4  | 0.7558 | 1.0976 | 1.4044 | 1.5837 | 1.1597 | 0.7942 | 1.1111 | 1.7054 | 0.7131 | 0.9999
5  | 1.1372 | 1.6160 | 2.2437 | 2.0659 | 1.7841 | 1.4639 | 1.6631 | 2.1031 | 0.9132 | 1.3650
6  | 0.6683 | 0.8156 | 1.5688 | 1.2899 | 1.2643 | 1.1103 | 1.0857 | 1.2050 | 0.4966 | 0.7463
7  | 0.2884 | 0.5617 | 0.5258 | 0.6923 | 0.4723 | 0.4630 | 0.3422 | 0.7309 | 0.2969 | 0.5746
8  | 0.7313 | 0.8509 | 0.9844 | 1.1183 | 0.7166 | 0.6775 | 0.7450 | 1.1288 | 0.5658 | 0.6677
9  | 1.0431 | 1.3351 | 1.7504 | 1.6924 | 1.3799 | 0.4025 | 1.2670 | 1.5447 | 0.7677 | 1.3185
10 | 1.0970 | 1.3589 | 1.6473 | 1.9323 | 1.3597 | 0.2262 | 1.2509 | 2.0165 | 0.9090 | 1.2495
1  | 1.7189 | 1.8459 | 2.6238 | 2.1156 | 2.0970 | 1.9122 | 2.1279 | 1.7915 | 1.1190 | 1.7243
2  | 0.7612 | 1.3054 | 1.9913 | 1.5768 | 1.5117 | 1.2374 | 1.4113 | 1.5867 | 0.6299 | 1.2025
3  | 0.8057 | 0.9980 | 1.3114 | 1.1991 | 1.1721 | 1.2709 | 1.0069 | 1.1548 | 0.5154 | 1.0572
4  | 0.7834 | 1.0134 | 1.5256 | 1.4227 | 1.4382 | 1.4058 | 1.1975 | 1.2729 | 0.5502 | 1.0699
5  | 0.9414 | 1.1637 | 1.5289 | 1.2103 | 1.1755 | 1.0365 | 1.2772 | 0.9698 | 0.6280 | 1.1144
Table 7. Real data X.

Target Label | Platf-1 FW | PW | PT | AM | AOA | Platf-2 FW | PW | PT | AM | AOA
1  | 4.7654 × 10³ | 2.1200 | 780 | 95.2850 | 428.7933 | 4.7655 × 10³ | 1.7600 | 780 | 95.7800 | 428.7967
2  | 4.7653 × 10³ | 2.1400 | 800 | 95.5050 | 428.8067 | 4.7657 × 10³ | 1.8000 | 770 | 95.5050 | 428.7933
3  | 4.7655 × 10³ | 1.7200 | 820 | 95.5600 | 428.7933 | 4.7656 × 10³ | 1.9000 | 770 | 95.5050 | 428.7767
4  | 4.7656 × 10³ | 1.8000 | 830 | 95.5600 | 428.7867 | 4.7655 × 10³ | 1.9200 | 760 | 96.0750 | 428.8000
5  | 4.7656 × 10³ | 1.9200 | 820 | 95.5600 | 428.7767 | 4.7655 × 10³ | 2.1200 | 780 | 95.5050 | 428.7633
6  | 4.7653 × 10³ | 2.0800 | 830 | 95.6700 | 428.7700 | 4.7656 × 10³ | 2.1200 | 750 | 95.6700 | 428.7867
7  | 4.7654 × 10³ | 2.1200 | 840 | 95.6700 | 428.7900 | 4.7655 × 10³ | 1.7400 | 770 | 95.3950 | 428.7933
8  | 4.7655 × 10³ | 2.2000 | 840 | 95.6700 | 428.7833 | 4.7656 × 10³ | 1.7600 | 780 | 95.6700 | 428.7967
9  | 4.7656 × 10³ | 2.2000 | 850 | 95.6150 | 428.8000 | 4.7656 × 10³ | 1.9000 | 790 | 95.4500 | 428.7733
10 | 4.7656 × 10³ | 1.9000 | 840 | 95.6150 | 428.7700 | 4.7656 × 10³ | 1.9000 | 800 | 95.6150 | 428.8000
11 | 4.7656 × 10³ | 1.9200 | 870 | 95.5600 | 428.8000 | 4.7656 × 10³ | 1.9000 | 780 | 95.5050 | 428.8000
12 | 4.7656 × 10³ | 1.9000 | 870 | 95.5050 | 428.8000 | 4.7655 × 10³ | 1.7200 | 800 | 95.5600 | 428.8267
13 | 4.7653 × 10³ | 2.0600 | 880 | 95.5600 | 428.7700 | 4.7657 × 10³ | 1.9000 | 800 | 95.6700 | 428.7733
14 | 4.7655 × 10³ | 2.1200 | 880 | 95.5600 | 428.7833 | 4.7656 × 10³ | 1.8800 | 810 | 95.5050 | 428.8000
15 | 4.7655 × 10³ | 2.1400 | 890 | 95.5600 | 428.8000 | 4.7656 × 10³ | 1.7400 | 810 | 95.6700 | 428.7433
16 | 4.7656 × 10³ | 2.1600 | 890 | 95.6150 | 428.7900 | 4.7656 × 10³ | 1.8000 | 820 | 95.5600 | 428.7933
17 | 4.7656 × 10³ | 2.1600 | 900 | 95.5600 | 428.7933 | 4.7656 × 10³ | 1.9200 | 820 | 95.4500 | 428.7767
18 | 4.7656 × 10³ | 1.9200 | 890 | 95.6150 | 428.7767 | 4.7656 × 10³ | 1.8800 | 830 | 95.5600 | 428.8000
19 | 4.7610 × 10³ | 0.2200 | 730 | 95.3400 | 0.3533 | 4.7656 × 10³ | 1.8800 | 830 | 95.5600 | 428.8000
20 | 4.7654 × 10³ | 1.9000 | 900 | 95.5050 | 428.4467 | 4.7655 × 10³ | 2.1400 | 830 | 95.6150 | 428.7533
Table 8. Recorded real data D with missing annotations ("–" denotes a missing entry).

Target Label | Platf-1 FW | PW | PT | AM | AOA | Platf-2 FW | PW | PT | AM | AOA
1  | 4.7654 × 10³ | 2.1200 | – | 95.2850 | 0 | – | – | – | 95.7800 | 428.7967
2  | 4.7653 × 10³ | – | 800 | 95.5050 | – | 4.7657 × 10³ | 1.8000 | – | 95.5050 | 428.7933
3  | 4.7655 × 10³ | – | – | 95.5600 | 428.7933 | 4.7656 × 10³ | 1.9000 | – | 95.5050 | 428.7767
4  | 4.7656 × 10³ | 1.8000 | 830 | – | 428.7867 | 4.7655 × 10³ | 1.9200 | 760 | 96.0750 | 428.8000
5  | 4.7656 × 10³ | – | – | 95.5600 | – | 4.7655 × 10³ | 2.1200 | 780 | 95.5050 | 428.7633
6  | 4.7653 × 10³ | 2.0800 | 830 | – | 428.7700 | 4.7656 × 10³ | – | – | 95.6700 | 428.7867
7  | 4.7654 × 10³ | – | – | 95.6700 | – | – | 1.7400 | 770 | – | –
8  | 4.7655 × 10³ | 2.2000 | 840 | 95.6700 | 428.7833 | 4.7656 × 10³ | 1.7600 | – | 95.6700 | –
9  | – | 2.2000 | – | – | 428.8000 | 4.7656 × 10³ | 1.9000 | 790 | – | 428.7733
10 | 4.7656 × 10³ | 1.9000 | 840 | 95.6150 | – | 4.7656 × 10³ | 1.9000 | 800 | 95.6150 | 428.8000
11 | 4.7656 × 10³ | 1.9200 | – | – | 428.8000 | 4.7656 × 10³ | 1.9000 | – | 95.5050 | 428.8000
12 | 4.7656 × 10³ | – | 870 | – | 428.8000 | 4.7655 × 10³ | – | 800 | 95.5600 | 428.8267
13 | 4.7653 × 10³ | – | – | 95.5600 | 428.7700 | 4.7657 × 10³ | – | – | – | 428.7733
14 | – | 2.1200 | 880 | – | 428.7833 | 4.7656 × 10³ | 1.8800 | 810 | 95.5050 | 428.8000
15 | 4.7655 × 10³ | 2.1400 | – | 95.5600 | 428.8000 | 4.7656 × 10³ | – | 810 | 95.6700 | 428.7433
16 | 4.7656 × 10³ | 2.1600 | 890 | 95.6150 | 428.7900 | 4.7656 × 10³ | 1.8000 | – | – | 428.7933
17 | 4.7656 × 10³ | 2.1600 | 900 | 95.5600 | 428.7933 | 4.7656 × 10³ | – | 820 | 95.4500 | 428.7767
18 | 4.7656 × 10³ | 1.9200 | 890 | – | 428.7767 | – | 1.8800 | 830 | 95.5600 | 428.8000
19 | 4.7610 × 10³ | – | 730 | – | – | 4.7656 × 10³ | – | 830 | 95.5600 | 428.8000
20 | 4.7654 × 10³ | 1.9000 | – | 95.5050 | – | 4.7655 × 10³ | 2.1400 | 830 | 95.6150 | 428.7533
Table 9. Results recovered by Algorithm 1.

Target Label | Platf-1 FW | PW | PT | AM | AOA | Platf-2 FW | PW | PT | AM | AOA
1  | 4.7654 × 10³ | 2.1200 | 862.3198 | 95.2850 | 0 | 4.7347 × 10³ | 1.7002 | 769.1010 | 95.7800 | 428.7967
2  | 4.7653 × 10³ | 1.8810 | 800 | 95.5050 | 590.2473 | 4.7657 × 10³ | 1.8000 | 763.4213 | 95.5050 | 428.7933
3  | 4.7655 × 10³ | 1.7387 | 813.6904 | 95.5600 | 428.7933 | 4.7656 × 10³ | 1.9000 | 778.7780 | 95.5050 | 428.7767
4  | 4.7656 × 10³ | 1.8000 | 830 | 93.3096 | 428.7867 | 4.7655 × 10³ | 1.9200 | 760 | 96.0750 | 428.8000
5  | 4.7656 × 10³ | 2.0022 | 839.8333 | 95.5600 | 481.1485 | 4.7655 × 10³ | 2.1200 | 780 | 95.5050 | 428.7633
6  | 4.7653 × 10³ | 2.0800 | 830 | 93.5123 | 428.7700 | 4.7656 × 10³ | 1.7048 | 777.1674 | 95.6700 | 428.7867
7  | 4.7654 × 10³ | 2.0342 | 848.9644 | 95.6700 | 243.2014 | 4.7000 × 10³ | 1.7400 | 770 | 94.1180 | 543.5496
8  | 4.7655 × 10³ | 2.2000 | 840 | 95.6700 | 428.7833 | 4.7656 × 10³ | 1.7600 | 754.3832 | 95.6700 | 487.8920
9  | 4.6963 × 10³ | 2.2000 | 789.7436 | 93.3603 | 428.8000 | 4.7656 × 10³ | 1.9000 | 790 | 93.2589 | 428.7733
10 | 4.7656 × 10³ | 1.9000 | 840 | 95.6150 | 146.4476 | 4.7656 × 10³ | 1.9000 | 800 | 95.6150 | 428.8000
11 | 4.7656 × 10³ | 1.9200 | 811.1999 | 93.4410 | 428.8000 | 4.7656 × 10³ | 1.9000 | 756.8295 | 95.5050 | 428.8000
12 | 4.7656 × 10³ | 2.0224 | 870 | 94.6997 | 428.8000 | 4.7655 × 10³ | 1.7649 | 800 | 95.5600 | 428.8267
13 | 4.7653 × 10³ | 1.8753 | 837.7079 | 95.5600 | 428.7700 | 4.7657 × 10³ | 1.7816 | 784.7113 | 94.5026 | 428.7733
14 | 4.7233 × 10³ | 2.1200 | 880 | 94.4242 | 428.7833 | 4.7656 × 10³ | 1.8800 | 810 | 95.5050 | 428.8000
15 | 4.7655 × 10³ | 2.1400 | 847.0041 | 95.5600 | 428.8000 | 4.7656 × 10³ | 1.6587 | 810 | 95.6700 | 428.7433
16 | 4.7656 × 10³ | 2.1600 | 890 | 95.6150 | 428.7900 | 4.7656 × 10³ | 1.8000 | 784.9361 | 94.9520 | 428.7933
17 | 4.7656 × 10³ | 2.1600 | 900 | 95.5600 | 428.7933 | 4.7656 × 10³ | 1.7786 | 820 | 95.4500 | 428.7767
18 | 4.7656 × 10³ | 1.9200 | 890 | 94.8011 | 428.7767 | 4.7250 × 10³ | 1.8800 | 830 | 95.5600 | 428.8000
19 | 4.7610 × 10³ | 1.7863 | 730 | 95.2033 | 461.7466 | 4.7656 × 10³ | 1.8032 | 830 | 95.5600 | 428.8000
20 | 4.7654 × 10³ | 1.9000 | 813.5963 | 95.5050 | 423.7582 | 4.7655 × 10³ | 2.1400 | 830 | 95.6150 | 428.7533
Table 10. Results recovered by Algorithm 2.

Target Label | Platf-1 FW | PW | PT | AM | AOA | Platf-2 FW | PW | PT | AM | AOA
1  | 4.7654 × 10³ | 2.1200 | 826.1695 | 95.2850 | 0 | 4.8275 × 10³ | 1.7705 | 802.9382 | 95.7800 | 428.7967
2  | 4.7653 × 10³ | 1.8760 | 800 | 95.5050 | 480.4501 | 4.7657 × 10³ | 1.8000 | 800.3337 | 95.5050 | 428.7933
3  | 4.7655 × 10³ | 1.8730 | 822.1822 | 95.5600 | 428.7933 | 4.7656 × 10³ | 1.9000 | NaN | 95.5050 | 428.7767
4  | 4.7656 × 10³ | 1.8000 | 830 | 93.6472 | 428.7867 | 4.7655 × 10³ | 1.9200 | 760 | 96.0750 | 428.8000
5  | 4.7656 × 10³ | 1.8156 | 796.9825 | 95.5600 | 464.9850 | 4.7655 × 10³ | 2.1200 | 780 | 95.5050 | 428.7633
6  | 4.7653 × 10³ | 2.0800 | 830 | 93.6649 | 428.7700 | 4.7656 × 10³ | 1.7131 | 776.9018 | 95.6700 | 428.7867
7  | 4.7654 × 10³ | 1.8653 | 818.7611 | 95.6700 | 477.6912 | 4.7842 × 10³ | 1.7400 | 770 | 95.8159 | 470.2310
8  | 4.7655 × 10³ | 2.2000 | 840 | 95.6700 | 428.7833 | 4.7656 × 10³ | 1.7600 | 783.2492 | 95.6700 | 462.8508
9  | 4.7506 × 10³ | 2.2000 | 813.6863 | 95.3412 | 428.8000 | 4.7656 × 10³ | 1.9000 | 790 | 95.2221 | 428.7733
10 | 4.7656 × 10³ | 1.9000 | 840 | 95.6150 | 468.3993 | 4.7656 × 10³ | 1.9000 | 800 | 95.6150 | 428.8000
11 | 4.7656 × 10³ | 1.9200 | 807.3440 | 94.5981 | 428.8000 | 4.7656 × 10³ | 1.9000 | 784.6421 | 95.5050 | 428.8000
12 | 4.7656 × 10³ | 1.8600 | 870 | 95.6633 | 428.8000 | 4.7655 × 10³ | 1.7497 | 800 | 95.5600 | 428.8267
13 | 4.7653 × 10³ | 1.8397 | 807.5579 | 95.5600 | 428.7700 | 4.7657 × 10³ | 1.7306 | 784.8500 | 94.5049 | 428.7733
14 | 4.7536 × 10³ | 2.1200 | 880 | 95.4003 | 428.7833 | 4.7656 × 10³ | 1.8800 | 810 | 95.5050 | 428.8000
15 | 4.7655 × 10³ | 2.1400 | 811.4794 | 95.5600 | 428.8000 | 4.7656 × 10³ | 1.7390 | 810 | 95.6700 | 428.7433
16 | 4.7656 × 10³ | 2.1600 | 890 | 95.6150 | 428.7900 | 4.7656 × 10³ | 1.8000 | 794.3686 | 95.6510 | 428.7933
17 | 4.7656 × 10³ | 2.1600 | 900 | 95.5600 | 428.7933 | 4.7656 × 10³ | 1.7596 | 820 | 95.4500 | 428.7767
18 | 4.7656 × 10³ | 1.9200 | 890 | 95.7352 | 428.7767 | 4.7742 × 10³ | 1.8800 | 830 | 95.5600 | 428.8000
19 | 4.7610 × 10³ | 1.8809 | 730 | 96.7416 | 481.7037 | 4.7656 × 10³ | 1.7994 | 830 | 95.5600 | 428.8000
20 | 4.7654 × 10³ | 1.9000 | 826.6687 | 95.5050 | 482.3048 | 4.7655 × 10³ | 2.1400 | 830 | 95.6150 | 428.7533
Table 11. The average MSE of the five compared methods.

Missing Ratio | Algorithm 1 | Algorithm 2 | KNN | ALM | NNLS
0.1 | 0.0122 | 0.0137 | 0.0357 | 0.0810 | 0.1010
0.2 | 0.0236 | 0.0242 | 0.0550 | 0.0874 | 0.1330
0.3 | 0.0355 | 0.0390 | 0.0643 | 0.1038 | 0.1027
0.4 | 0.0409 | 0.0466 | 0.0772 | 0.1052 | 0.1104
0.5 | 0.0571 | 0.0512 | 0.0878 | 0.2186 | 0.3014
0.6 | 0.0607 | 0.0612 | 0.0969 | 0.3120 | 0.1064
0.7 | 0.0695 | 0.0723 | 0.1197 | 0.2144 | 0.2665
0.8 | 0.0866 | 0.0831 | 0.1513 | 0.1141 | 0.1934
