Article

Spectrum Situation Awareness for Space–Air–Ground Integrated Networks Based on Tensor Computing

1 Shandong Provincial Key Laboratory of Wireless Communication Technologies, School of Information Science and Engineering, Shandong University, Qingdao 266237, China
2 Shanghai Research Institute of Intelligent Autonomous Systems, Tongji University, Shanghai 200092, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(2), 334; https://doi.org/10.3390/s24020334
Submission received: 29 November 2023 / Revised: 29 December 2023 / Accepted: 4 January 2024 / Published: 5 January 2024
(This article belongs to the Special Issue Integration of Satellite-Aerial-Terrestrial Networks)

Abstract:
The spectrum situation awareness problem in space–air–ground integrated networks (SAGINs) is studied from a tensor-computing perspective. Tensor and tensor computing, including tensor decomposition, tensor completion and tensor eigenvalues, can satisfy the application requirements of SAGINs. Tensors can effectively handle multidimensional heterogeneous big data generated by SAGINs. Tensor computing is used to process the big data, with tensor decomposition being used for dimensionality reduction to reduce storage space, and tensor completion utilized for numeric supplementation to overcome the missing data problem. Notably, tensor eigenvalues are used to indicate the intrinsic correlations within the big data. A tensor data model is designed for space–air–ground integrated networks from multiple dimensions. Based on the multidimensional tensor data model, a novel tensor-computing-based spectrum situation awareness scheme is proposed. Two tensor eigenvalue calculation algorithms are studied to generate tensor eigenvalues. The distribution characteristics of tensor eigenvalues are used to design spectrum sensing schemes with hypothesis tests. The main advantage of this algorithm based on tensor eigenvalue distributions is that the statistics of spectrum situation awareness can be completely characterized by tensor eigenvalues. The feasibility of spectrum situation awareness based on tensor eigenvalues is evaluated by simulation results. The new application paradigm of tensor eigenvalue provides a novel direction for practical applications of tensor theory.

1. Introduction

1.1. Space–Air–Ground Integrated Networks and Situation Awareness

The space–air–ground integrated networks (SAGINs), working as network infrastructures, provide ubiquitous, collaborative, and efficient information services for various network applications over large-scale spaces. SAGINs are composed of three kinds of networks: space networks, air networks, and ground networks. The ground networks are the main body of SAGINs, while the space networks and air networks serve primarily as supplements and extensions. Through the deep integration of these multidimensional networks, SAGINs can comprehensively utilize various resources and carry out intelligent network control and information processing, so as to flexibly cope with network services with different demands and realize the functions of an integrated communication–computing–cache system.
Under the framework of SAGINs, the space networks consist of various satellite systems that form the space-backbone network and the space-access network, providing global coverage, ubiquitous connectivity, broadband access, and communication services, mainly for rural areas and regions where ground networks are difficult to deploy. The air networks are composed of high-altitude communication platforms and UAV networks, which can enhance coverage, enable edge network access, and provide additional network services in emergency situations. The ground networks are mainly composed of the ground internet and mobile communication networks, which are responsible for network services in business-intensive areas such as cities and in areas requiring high communication quality.

1.1.1. Space Networks

The space networks consist of different types of satellites, constellations, and the corresponding ground infrastructures, including ground stations and control centers. Satellites can be divided into GEO, MEO, LEO [1], and VLEO [2] according to their working altitudes. GEO (geostationary earth orbit) satellites work at an altitude of about 36,000 km and are often used for international long-distance communication. MEO satellites orbit at an altitude of 2000–36,000 km and are often used for positioning systems, such as GPS and the BeiDou navigation satellite system. LEO satellites orbit at a relatively low altitude of 400–2000 km and are often used for earth observation, earth surveys, and space stations. VLEO satellites have the lowest orbits, so they can be used to provide high-speed data communication, precise positioning, and other services.
Since satellites have high positions and wide coverage, they are regarded as the main means of enabling global mobile communications. Iridium, for example, provides global voice and data communication by adopting LEO satellites. Starlink creates a low-cost, high-coverage space network that supports global communications and can provide internet access services at fiber-like speeds by increasing the density of satellites. Meanwhile, because the satellites sit at high altitudes, transmission links are long and propagation delays are large, and signal transmission is vulnerable to random failures or deliberate attacks by malicious nodes. It is therefore difficult to guarantee the QoS of real-time interactive applications.

1.1.2. Air Networks

The air networks consist of stratospheric airships, high-altitude balloons, drones, helicopters, and other high-altitude platforms. As the intermediate layer of SAGINs, they can provide data routing between the platforms and the ground networks and exchange data with the space networks. High-altitude platforms (HAPs) are the main body of the air networks and are located in the upper troposphere and stratosphere, 2–20 km above the ground [3]. Compared with satellite communication platforms, HAPs have the advantages of simple deployment, short communication response time, and low cost, and they are often used in temporary high-bandwidth communication scenarios, such as emergency communication and disaster relief activities.
However, the network topology and communication links of HAPs change rapidly. To ensure the security and stability of the network and to provide users with high-throughput, low-delay network access services, effective coordination mechanisms such as channel allocation must be designed, which may result in a complex network structure or allocation mechanism and high allocation costs.

1.1.3. Ground Networks

The ground networks are composed of many sub-networks, such as Wireless Local Area Networks (WLAN), cellular networks, and Ad Hoc Networks. Most of the common communication modes in our daily lives can be classified into the ground networks. Meanwhile, the ground networks can provide users with communication services with high data transmission rates and throughputs.
However, since the construction of base stations and other infrastructure is limited by complex terrain, high construction costs lead to poor network services in remote, sparsely populated areas. In addition, ground infrastructure is vulnerable to extremely severe weather, man-made damage, and other factors, which can result in communication disruption. To sum up, it is difficult to depend solely on the ground networks to satisfy rapidly increasing and increasingly complex communication needs.

1.1.4. Spectrum Situation Awareness

Spectrum situational awareness originates from spectrum sensing technology and the concept of situational awareness. Spectrum sensing was first proposed in [4], where Mitola coined the concept of cognitive radio (CR) to realize dynamic spectrum access. In CR networks, secondary users (SUs) can access idle channels that are not occupied by primary users (PUs) to improve overall spectrum efficiency. Spectrum sensing technology can obtain spectrum usage information in wireless systems through various signal detection methods, detect spectrum holes, and prevent interference to the PUs. The concept of situational awareness originated in the 1980s. It is mainly used to analyze information about the environment in order to grasp current and future situations and make corresponding judgments and decisions. It has since been extended to perceiving the elements of an environment within a certain time and space, understanding their meaning, and predicting their future state.
There is still no exact definition of spectrum situational awareness at the moment. However, according to the core purpose of spectrum sensing and situational awareness, it can be understood as the acquisition of various state information from the current spectrum space, such as spectrum busy state and spectrum radiated power. On this basis, various parameters and the development trend of spectrum space can be analyzed. Its core technologies are summarized as wide-area spectrum situational awareness, dynamic spectrum situation generation, and spectrum situation utilization [5].
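To make the hypothesis-testing view of spectrum sensing concrete, the sketch below implements a classical energy detector. This is background illustration only, not the tensor-based scheme developed later in the paper; the signal model and threshold are assumptions chosen for the example.

```python
import numpy as np

# Energy detection: decide between H0 (noise only) and H1 (PU signal
# present) by comparing the average received energy to a threshold.
rng = np.random.default_rng(5)
N, sigma2 = 500, 1.0                            # sample count, noise power

noise = rng.normal(0.0, np.sqrt(sigma2), N)
signal = np.sqrt(2.0) * np.sin(2 * np.pi * 0.1 * np.arange(N))  # unit-power tone

def energy_detect(y, threshold):
    # Test statistic: (1/N) sum y[n]^2; it exceeds the threshold under H1
    return np.mean(y ** 2) > threshold

thr = 1.3 * sigma2                              # illustrative threshold
busy = energy_detect(noise + signal, thr)       # H1: statistic near sigma2 + P = 2
idle = energy_detect(noise, thr)                # H0: statistic near sigma2 = 1
```

The threshold here is picked by hand; in practice it is derived from a target false-alarm probability via the noise statistics.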

1.2. Tensor Computing and Tensor Eigenvalues

Tensor computing is a new concept that mainly includes tensor decomposition, tensor completion, and tensor eigenvalues. Tensor decomposition and completion have been widely used in signal processing [6], machine learning [7], big data analysis [8], and other fields [9,10], while engineering applications of tensor eigenvalues in the field of communications are still relatively scarce.

1.2.1. Tensor Decomposition and Tensor Completion

Tensor decomposition originated with Hitchcock in 1927 [11], and the conception of the multiway model was proposed by Cattell in 1944 [12]. It can be viewed as the higher-order generalization of matrix factorization, that is, converting higher-order data into a combination of lower-order and lower-dimension data. The two most common types of tensor decomposition are CP decomposition and Tucker decomposition.
Based on the definition of the rank-one tensor, Hitchcock first proposed dividing a tensor into a finite number of rank-one tensors, a representation named the polyadic form, which was the rudiment of CP decomposition. For a long time the decomposition had no single established name; parallel factors (PARAFAC) [13] and CANDECOMP [14] were both used until Kiers called this tensor representation CANDECOMP/PARAFAC, that is, CP decomposition [15]. For example, a third-order tensor $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$ can be rewritten by CP decomposition as
$$\mathcal{X} = \sum_{r=1}^{R} a_r \circ b_r \circ c_r, \qquad (1)$$
where the positive integer $R$ is called the CP-rank; $a_r \in \mathbb{R}^{I}$, $b_r \in \mathbb{R}^{J}$, and $c_r \in \mathbb{R}^{K}$ for $r = 1, \ldots, R$; and the notation "$\circ$" represents the outer product.
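The CP form can be sketched directly in numpy; the dimensions and values below are illustrative, not taken from the paper.

```python
import numpy as np

# Build a third-order tensor X in R^{I x J x K} as a sum of R rank-one
# outer products a_r o b_r o c_r, as in the CP form above.
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3

A = rng.standard_normal((I, R))  # columns are the vectors a_r
B = rng.standard_normal((J, R))  # columns are the vectors b_r
C = rng.standard_normal((K, R))  # columns are the vectors c_r

# X_{ijk} = sum_r A_{ir} B_{jr} C_{kr}
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# The same tensor, built term by term from rank-one outer products
X_check = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
              for r in range(R))
assert np.allclose(X, X_check)
```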
Tucker decomposition, first proposed by Tucker in 1963 [16], can be regarded as a higher-order extension of principal component analysis (PCA). Like CP decomposition, Tucker decomposition also has many other names, such as three-mode PCA (3MPCA) [17] and higher-order SVD (HOSVD) [18]. For example, a third-order tensor $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$ can be rewritten by Tucker decomposition as
$$\mathcal{X} = \mathcal{G} \times_1 A \times_2 B \times_3 C = \sum_{l=1}^{L} \sum_{m=1}^{M} \sum_{n=1}^{N} G_{lmn} \, a_l \circ b_m \circ c_n, \quad \text{i.e.,} \quad X_{ijk} = \sum_{l=1}^{L} \sum_{m=1}^{M} \sum_{n=1}^{N} G_{lmn} A_{il} B_{jm} C_{kn}, \qquad (2)$$
where $A \in \mathbb{R}^{I \times L}$, $B \in \mathbb{R}^{J \times M}$, and $C \in \mathbb{R}^{K \times N}$ are factor matrices; the tensor $\mathcal{G} \in \mathbb{R}^{L \times M \times N}$ is called the core tensor of the Tucker decomposition; and $a_l \in \mathbb{R}^{I}$, $b_m \in \mathbb{R}^{J}$, and $c_n \in \mathbb{R}^{K}$ for $l = 1, \ldots, L$, $m = 1, \ldots, M$, $n = 1, \ldots, N$. The notation "$\times_n$" represents the $n$-mode product.
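The Tucker form above can likewise be sketched with einsum, where each $n$-mode product contracts one index of the core with a factor matrix (dimensions are illustrative).

```python
import numpy as np

# Tucker form X = G x_1 A x_2 B x_3 C, computed elementwise as
# X_{ijk} = sum_{l,m,n} G_{lmn} A_{il} B_{jm} C_{kn}.
rng = np.random.default_rng(1)
I, J, K = 4, 5, 6     # tensor dimensions
L, M, N = 2, 3, 2     # core (multilinear rank) dimensions

G = rng.standard_normal((L, M, N))  # core tensor
A = rng.standard_normal((I, L))
B = rng.standard_normal((J, M))
C = rng.standard_normal((K, N))

X = np.einsum('lmn,il,jm,kn->ijk', G, A, B, C)

# A single mode-1 product, for comparison: contracts G's first index with A
G_x1_A = np.einsum('lmn,il->imn', G, A)
assert G_x1_A.shape == (I, M, N)
```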
Tensor completion fills in a tensor by estimating the missing elements from the relationships among the existing data and the structural properties of the data; it is often used in pattern recognition [19], compressed sensing [20], and other fields. Tensor completion methods mainly fall into two kinds [21]: one is based on the low-rank property, called the nuclear norm minimization method, as in [22,23]; the other is based on low-rank tensor decomposition, as in [24,25].
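As a toy stand-in for the decomposition-based completion methods cited above, the sketch below fits CP factors to the observed entries only, by plain gradient descent, and then reconstructs the full tensor to fill in the missing values. The rank, step size, and mask ratio are illustrative assumptions.

```python
import numpy as np

# Low-rank tensor completion sketch via CP factorization.
rng = np.random.default_rng(2)
I, J, K, R = 6, 6, 6, 2

# Ground-truth low-rank tensor and a random observation mask
A0 = rng.standard_normal((I, R))
B0 = rng.standard_normal((J, R))
C0 = rng.standard_normal((K, R))
X_true = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
mask = rng.random((I, J, K)) < 0.6          # ~60% of entries observed

# Gradient descent on the squared error over observed entries only
A = 0.1 * rng.standard_normal((I, R))
B = 0.1 * rng.standard_normal((J, R))
C = 0.1 * rng.standard_normal((K, R))
lr = 0.01
for _ in range(3000):
    E = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - X_true)
    A, B, C = (A - lr * np.einsum('ijk,jr,kr->ir', E, B, C),
               B - lr * np.einsum('ijk,ir,kr->jr', E, A, C),
               C - lr * np.einsum('ijk,ir,jr->kr', E, A, B))

X_filled = np.einsum('ir,jr,kr->ijk', A, B, C)  # completed tensor
```

The nuclear-norm family of methods replaces this nonconvex fit with a convex surrogate; the decomposition route shown here is simpler but needs the rank as an input.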

1.2.2. Tensor Eigenvalue

Tensor eigenvalues are an essential part of tensor computing and are developed from the concept of matrix eigenvalues. Unlike a singular value or eigenvalue computed after matricization of a tensor, which is a matrix singular value or eigenvalue, respectively, here we discuss the tensor eigenvalue calculated by regarding the tensor as a whole unit. For different research purposes and application backgrounds, scholars have put forward many different concepts of tensor eigenvalue, but the foundational definitions were proposed independently by Qi and Lim based on their respective research directions. The former treats tensor eigenvalues as roots of multidimensional polynomials [26], while the latter proposed the concept of tensor eigenvalue by analogy with the Rayleigh quotient of symmetric matrix eigenvalues and the constrained variational method [27]. Although their starting points are different, the two concepts are essentially the same. Since the introduction of tensor eigenvalues, with a particular focus on Z-eigenvalues, they have been extensively utilized in a wide range of fields, including global optimization problems [28,29], hypergraph theory [30,31], homogeneous polynomial system stability problems [32], and many others.
For supersymmetric tensors, whose elements have the same value whenever their indexes belong to the same full permutation set, Qi called a number $\lambda$ an N-eigenvalue of $\mathcal{A} \in \mathbb{R}^{[m,n]}$ if $\lambda$ is a solution of the homogeneous polynomial equation
$$\mathcal{A} x^{m-1} = \lambda x^{[m-1]}, \qquad (3)$$
where $(\mathcal{A} x^{m-1})_i = \sum_{i_2, \ldots, i_m = 1}^{n} A_{i, i_2, \ldots, i_m} x_{i_2} \cdots x_{i_m}$ and $x^{[m-1]} = (x_1^{m-1}, x_2^{m-1}, \ldots, x_n^{m-1})^T$, and he called the solution $x$ an N-eigenvector of $\mathcal{A}$ associated with the N-eigenvalue $\lambda$. If $\lambda \in \mathbb{R}$ and $x \in \mathbb{R}^n$, $\lambda$ is called an H-eigenvalue and $x$ is called an H-eigenvector. Lim called them the $l^m$-eigenvalue and $l^m$-eigenvector, defined as
$$\mathcal{A}(I_n, x, \ldots, x) = \lambda x^{[m-1]}, \qquad (4)$$
where $I_n \in \mathbb{R}^n$ is an all-one vector. E-eigenvalues and Z-eigenvalues satisfy
$$\mathcal{A} x^{m-1} = \lambda x, \quad x^T x = 1, \qquad (5)$$
where, if $\lambda \in \mathbb{R}$ and $x \in \mathbb{R}^n$, $\lambda$ is called the Z-eigenvalue; otherwise, $\lambda$ is called the E-eigenvalue.
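The Z-eigenpair definition above can be checked numerically. The sketch below uses an illustrative rank-one supersymmetric tensor $A = v \circ v \circ v \circ v$, for which $(\lambda, x) = (1, v)$ is a Z-eigenpair by construction.

```python
import numpy as np

# Check A x^{m-1} = lambda x with x^T x = 1 on a rank-one
# supersymmetric tensor A = v o v o v o v (m = 4, n = 3).
rng = np.random.default_rng(3)
v = rng.standard_normal(3)
v /= np.linalg.norm(v)                       # unit vector, so v^T v = 1

A = np.einsum('i,j,k,l->ijkl', v, v, v, v)   # supersymmetric order-4 tensor

def Axm1(A, x):
    # (A x^{m-1})_i = sum_{j,k,l} A_{ijkl} x_j x_k x_l
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

# A v^3 = (v . v)^3 v = v, so lambda = 1 is a Z-eigenvalue
assert np.allclose(Axm1(A, v), 1.0 * v)
```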
In order to solve the eigenvalue computing problem for non-symmetric tensors and tensors with different structures, as well as to unify the different types of tensor eigenvalues, $\mathcal{B}$-eigenpairs and $\mathcal{B}^{\mathbb{R}}$-eigenpairs [33,34] (a $\mathcal{B}$-eigenpair is called a $\mathcal{B}^{\mathbb{R}}$-eigenpair if its entries are all real) are proposed and defined as
$$\mathcal{A}^{\{k\}} x^{m-1} = \lambda \mathcal{B} x^{m'-1}, \quad m' = m; \qquad \mathcal{A}^{\{k\}} x^{m-1} = \lambda \mathcal{B} x^{m'-1}, \ \mathcal{B} x^{m'} = 1, \quad m' \neq m. \qquad (6)$$
Therein, $\mathcal{B} \in \mathbb{R}^{[m',n]}$ is a symmetric tensor, which takes different forms depending on the type of tensor eigenvalue we want to express. The notation $\mathcal{A}^{\{k\}}$ means finding the eigenvalues of $\mathcal{A}$ in the $k$-th direction. Because $\mathcal{A}$ is a non-symmetric tensor, different order directions lead to different eigenpairs, so $k$ is needed to indicate the order direction, and a $\mathcal{B}$-eigenpair can also be called a mode-$k$ $\mathcal{B}$-eigenpair.
The four types of eigenpairs described above can all be represented by $\mathcal{B}$-eigenpairs and $\mathcal{B}^{\mathbb{R}}$-eigenpairs. If $\mathcal{B}$ is the unit tensor $\mathcal{I} \in \mathbb{R}^{[m,n]}$ and $m' = m$, the mode-1 $\mathcal{B}$-eigenpairs are the N-eigenpairs and the mode-1 $\mathcal{B}^{\mathbb{R}}$-eigenpairs are the H-eigenpairs. If $\mathcal{B}$ is the unit matrix $I \in \mathbb{R}^{n \times n}$ and $m' = 2$, the mode-1 $\mathcal{B}$-eigenpairs are the E-eigenpairs and the mode-1 $\mathcal{B}^{\mathbb{R}}$-eigenpairs are the Z-eigenpairs.
The main contributions of the paper can be summarized as follows:
  • A novel tensor-based spectrum situational awareness model is proposed to store and process multidimensional, heterogeneous, and massive spectral big data from space–air–ground integrated networks.
  • Two eigenvalue computing schemes, including the semidefinite relaxation algorithm and the homotopy continuation algorithm, are included to calculate the eigenvalues of spectrum situational awareness tensors. The computing performances of two algorithms are evaluated, and the superiority of the homotopy algorithm for tensor eigenvalue estimation is demonstrated.
  • A novel spectrum situational awareness scheme based on tensor eigenvalues is proposed, where the tensor eigenvalue distribution is used to evaluate the state of the spectrum. The new application paradigm of a tensor eigenvalue provides a novel direction for practical applications of tensor eigenvalues, especially using tensor eigenvalue distributions to construct a spectrum situational awareness scheme.
The remainder of the paper is organized as follows. Section 2 introduces the space–air–ground integrated network model and the tensor-based spectrum situation awareness model. Section 3 introduces the problem of tensor eigenvalue computing and two classical tensor eigenvalue computing algorithms, a semidefinite relaxation algorithm and a homotopy continuation algorithm; the performances of the two algorithms are compared in terms of convergence ratio, accuracy, and CPU time. Section 4 introduces a tensor-eigenvalue-based spectrum sensing algorithm and presents simulation results with theoretical analysis. Section 5 concludes the paper.

2. System Model

2.1. Space–Air–Ground Integrated Network Model

The model of SAGINs is shown in Figure 1, where the space networks, air networks, and ground networks form three layers. As shown in the figure, the satellites, including GEO, MEO, LEO, and VLEO, exchange information and realize key communication functions through intra-segment wireless links. In the air networks, high-altitude platforms, such as aircraft, airships, balloons, and UAVs, constitute corresponding sublayer networks. The air networks can provide special functions for emergency communications and other tasks. In the ground networks, 5G networks provide the basic network infrastructure and basic internet access services. Many services of the ground networks are implemented through inter-segment communication, such as using the air networks for internet access enhancement in emergency communications and using the space networks for global communications through ground gateways.
In this model, the sublayers within each network are independent of each other, so as to give full play to their characteristics and advantages and to facilitate modification of the topology structure. The inter-segment networks are interwoven and fused with each other to realize the integration of space networks, air networks, and ground networks and to fulfill the core purpose: accurate acquisition, rapid processing, and efficient transmission of information. A holistic model with a unified representation for heterogeneous big data is therefore required.

2.2. Tensor-Based Spectrum Situation Awareness Model

For the proposed unified SAGINs model, a corresponding big data model is required to store and process heterogeneous big data. We propose a spectrum situational awareness model that adopts a unified tensor data model to solve the above problem and use the data of this model to deal with spectrum utilization issues from the perspective of spectrum usage.
For multidimensional, heterogeneous, and massive spectral big data, each order of a tensor is used to represent a specific dimension, such as time, space, or frequency, and the smallest unit of the tensor, i.e., each element, is a scalar constant. In practical applications, it is often necessary to collect several kinds of data at one location. Since these data do not belong to the same variable, they are conventionally placed in separate tensors of the same size during tensor modeling; that is, one tensor is created for each kind of data. Different tensors are represented by different notations, resulting in heterogeneous data. The huge number of symbols and the heterogeneity of the data tensors pose a challenge to representing the big data in a single unified form.
To overcome this challenge, the tensor representation model with vector elements is proposed. Using vectors instead of scalars as the smallest unit of the tensor, all information can be expressed without loss as long as the vector elements are ordered consistently. The model is denoted by the symbol $\mathcal{A}_{i_1 i_2 \cdots i_m}(j)$, where $m$ is the order of tensor $\mathcal{A}$. For example, suppose the temperature and humidity in a certain space are measured, with three sampling points taken along each of the length, width, and height. The tensor model is then a three-order, three-dimensional tensor $\mathcal{A}$ with two variables, denoted $\mathcal{A}_{xyz}(a)$, where $\mathcal{A}_{xyz} = (t, h)$ and $x$, $y$, $z$, $t$, and $h$ represent the length, width, height, temperature, and humidity, respectively; $a$ is the sequence number of the variable in the vector. $\mathcal{A}_{xyz}(1)$ represents the temperature tensor of the space, and $\mathcal{A}_{111}(1)$ represents the specific temperature value at sampling point 111. In other words, the uniform variables form the orders of the tensor, and the remaining variables are converted into vectors and stored in the tensor elements.
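The temperature-and-humidity example above can be sketched in numpy by storing the data vector as a trailing axis; the values below are invented for illustration.

```python
import numpy as np

# Vector-element tensor model: a 3x3x3 spatial grid whose elements are
# the vectors (temperature, humidity), stored along a trailing axis.
rng = np.random.default_rng(4)

A = np.empty((3, 3, 3, 2))
A[..., 0] = 20.0 + 5.0 * rng.random((3, 3, 3))  # temperature (deg C)
A[..., 1] = 0.4 + 0.2 * rng.random((3, 3, 3))   # relative humidity

temperature = A[..., 0]   # the temperature tensor, A(1) in the text's notation
t_111 = A[0, 0, 0, 0]     # the temperature value at sampling point 111
h_111 = A[0, 0, 0, 1]     # the humidity value at the same point
assert temperature.shape == (3, 3, 3)
```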
Using the above tensor representation model with vector elements, we can obtain the tensor-based spectrum situation awareness model. For space–air–ground integrated spectrum situational awareness, user-centered situational awareness is carried out, and the awareness model is a five-order tensor composed of time, frequency, longitude, latitude, and altitude, where the elements of the tensor are data vectors. A five-order tensor $\mathcal{A} \in \mathbb{R}^{T \times F \times X \times Y \times Z}$, composed of time, frequency, longitude, latitude, and altitude, is taken as an example of the awareness model here, and the model is shown in Figure 2. The data in the tensor $\mathcal{A}$ represent the variation of the measurements along longitude, latitude, and altitude at different frequencies measured at different times. Taking $\mathcal{A}_{23211} = (3, 10°, 0°, 0°)$ as an example, it means that at the second sampling time, the third sampling frequency, the second sampling longitude, and the first sampling latitude and altitude, the radiated signal intensity is 3, the yaw angle is 10°, and the pitch and roll angles are 0°.

3. Tensor Eigenvalue Calculation

3.1. Related Work of Tensor Eigenvalue Calculation

Unlike matrix eigenvalue computation, the eigenvalue calculation problem for third- or higher-order tensors can be considered NP-hard [35] due to the so-called "curse of dimensionality". Nonetheless, several algorithms computing one or some eigenvalues of a tensor have been developed recently. Most of these algorithms are limited, however, in that they are designed for tensors of specific types, such as non-negative or real symmetric tensors.
For non-negative tensors, Ng, Qi, and Zhou proposed a power-type method to compute the largest H-eigenvalue of a non-negative tensor, called the Ng–Qi–Zhou method, based on the Perron–Frobenius theorem [36]. For real symmetric tensors, Hu, Huang, and Qi proposed a sequential semidefinite programming method for computing extreme Z-eigenvalues [37]. Hao proposed a sequential subspace projection method for a similar purpose [38]. Kolda and Mayo proposed a shifted power method (SSHOPM) for computing Z-eigenvalues [39]. Han proposed an unconstrained optimization method for computing a real generalized eigenvalue of even-order real symmetric tensors [40]. Lv and Ma proposed a Levenberg–Marquardt method to obtain all the H-eigenvalues of real semi-symmetric tensors [41]. In addition, for symmetric tensor eigenvalues, there are many other algorithms, such as the NCM (Newton correction method), O-NCM [42], and FNS (fast Newton–Schultz-type iterative method) [43].
In recent years, the research on the eigenvalue computing of general tensors has made a breakthrough. For general tensors, Cui, Dai, and Nie proposed a novel method for computing all real eigenvalues of symmetric tensors by semidefinite relaxation [33] and then extended it to general tensors [44]. Chen, Han, and Zhou also proposed a homotopy method for computing all eigenvalues [34]. For solving tensor equations with applications, Liang, Zheng, and Zhao proposed alternating iterative methods based on ADMM, such as G-ADMM (Gauss–Seidel scheme) and TT-ADMM (tensor–train) [45]. Chen et al. provided a new idea to compute tensor eigenvalues by using digital signal processing technology and proved its feasibility from the perspective of a continuous-mode system [46].
To sum up, since the concept of the tensor eigenvalue was proposed, scholars have paid much attention to the tensor eigenvalue problem and have put forward a wealth of numerical approximation algorithms. Most of them, however, are based on Newton-type iteration, changing only the initial conditions or the iteration equation to accelerate convergence, while a few use convex optimization to solve eigenvalue problems with special structures.

3.2. Tensor Eigenvalue Calculation Algorithms

3.2.1. Semidefinite Relaxation Algorithm

The semidefinite relaxation method, one of the first algorithms to attempt to compute all eigenvalues of a tensor, is of great significance for the proposal and improvement of subsequent algorithms, although it has some deficiencies, such as a low convergence rate, long convergence time, and inability to compute complex eigenvalues. The semidefinite relaxation method uses different concrete algorithms for Z-eigenvalues and E-eigenvalues, but the main idea is identical. Below, we take its Z-eigenvalue algorithm as an example to introduce the semidefinite relaxation method.
Let an eigenpair $(\lambda, x)$ be a Z-eigenpair of $\mathcal{A} \in \mathbb{R}^{[m,n]}$. We can derive that the Z-eigenvalue is $\lambda = \mathcal{A} x^m$ and that the corresponding eigenvector $x$ satisfies $\mathcal{A} x^{m-1} = (\mathcal{A} x^m) x$ and $x^T x = 1$. So, we can obtain the polynomial function
$$p_z(x) = \left( \mathcal{A} x^{m-1} - (\mathcal{A} x^m) x, \; x^T x - 1 \right). \qquad (7)$$
Then, x is a Z-eigenvector of A when p z ( x ) = 0 . The Z-eigenvalues can be calculated from the smallest to the largest.
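The residual $p_z$ can be evaluated directly with einsum. The sketch below checks that it vanishes at a known Z-eigenvector of an illustrative rank-one symmetric tensor (not one taken from the paper).

```python
import numpy as np

# p_z(x) = (A x^{m-1} - (A x^m) x, x^T x - 1) for m = 4, n = 3;
# all components vanish exactly when x is a Z-eigenvector.
rng = np.random.default_rng(6)
v = rng.standard_normal(3)
v /= np.linalg.norm(v)
A = np.einsum('i,j,k,l->ijkl', v, v, v, v)   # rank-one symmetric tensor

def p_z(A, x):
    Ax3 = np.einsum('ijkl,j,k,l->i', A, x, x, x)      # A x^{m-1}
    Ax4 = np.einsum('ijkl,i,j,k,l->', A, x, x, x, x)  # A x^m (a scalar)
    return np.concatenate([Ax3 - Ax4 * x, [x @ x - 1.0]])

assert np.allclose(p_z(A, v), 0.0)   # v is a Z-eigenvector (lambda = 1)
```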
Firstly, compute the smallest Z-eigenvalue $\lambda_1$ of $\mathcal{A}$. Define the polynomial optimization problem
$$\min f(x) := \mathcal{A} x^m \quad \text{s.t.} \quad p_z(x) = 0, \qquad (8)$$
where the optimal solution of (8) is λ 1 . According to Lasserre’s hierarchy [47], using the semidefinite relaxation method, the polynomial optimization problem (8) can be converted to
$$f_1^k = \min \langle f, y \rangle \quad \text{s.t.} \quad \langle 1, y \rangle = 1, \; L_p^{(k)}(y) = 0, \; M_k(y) \succeq 0, \qquad (9)$$
where $\langle \cdot, y \rangle$, $L_p^{(k)}(y)$, and $M_k(y)$ are defined in [47]. Then, with $k = k_0$ as the initial point, where $k_0 = \lceil (m+1)/2 \rceil$, the solutions of the semidefinite relaxation (9) can be obtained. If there is no solution for $k = k_0$, $\mathcal{A}$ has no Z-eigenvalue; otherwise, solve (9) again to obtain an optimizer $y^*$. If $y^*$ satisfies the rank condition
$$\operatorname{rank} M_{t-k_0}(y^*) = \operatorname{rank} M_t(y^*), \qquad (10)$$
we obtain $\lambda_1 = f_1^k$; otherwise, let $k = k + 1$ and repeat the above procedure. In the end, we can obtain the smallest Z-eigenvalue $\lambda_1$.
Secondly, we need to determine whether the next larger Z-eigenvalue $\lambda_{i+1}$ exists and then compute $\lambda_{i+1}$ if it does. Let $\delta$ be a positive number close to zero. To find larger eigenvalues, (8) is modified to the following problem
$$\min f(x) \quad \text{s.t.} \quad p_z(x) = 0, \ f(x) \geq \lambda_i + \delta.$$
Like (9), we can obtain Lasserre's hierarchy of semidefinite relaxations
$$f_{i+1}^k = \min \langle f, y \rangle \quad \text{s.t.} \quad \langle 1, y \rangle = 1, \; L_p^{(k)}(y) = 0, \; M_k(y) \succeq 0, \; L_{f - \lambda_i - \delta}^{(k)}(y) \succeq 0. \qquad (11)$$
If the semidefinite relaxation (11) converges, which is checked by condition (10), the Z-eigenvectors $x_1, \ldots, x_r$ can be computed by the method in [48]. Then, consider the optimization problems
$$v_+(\lambda_i, \delta) := \max f(x) \quad \text{s.t.} \quad p_z(x) = 0, \ f(x) \leq \lambda_i + \delta, \qquad (12)$$
and
$$v_-(\lambda_i, \delta) := \min f(x) \quad \text{s.t.} \quad p_z(x) = 0, \ f(x) \geq \lambda_i - \delta. \qquad (13)$$
The solutions of (12) and (13) are used to determine whether the eigenvalue is an isolated eigenvalue and to generate the $\delta$ used in (11). $\lambda_i$ is an isolated Z-eigenvalue of $\mathcal{A}$ in $[\lambda_i - \delta, \lambda_i + \delta]$ if and only if
$$v_-(\lambda_i, \delta) = v_+(\lambda_i, \delta). \qquad (14)$$
The implementation of the algorithm is shown in Algorithm 1 [44].
Algorithm 1 Compute all Z-eigenvalues of $\mathcal{A} \in \mathbb{R}^{[m,n]}$
  • Step 0. Let $k = k_0$, with $k_0 := \lceil (m+1)/2 \rceil$.
  • Step 1. Solve the semidefinite relaxation (9) using SeDuMi [49]. If there is no solution, $\mathcal{A}$ has no Z-eigenvalue, and stop; if not, compute a minimizer $y^*$.
  • Step 2. If $\operatorname{rank} M_{t-k_0}(y^*) \neq \operatorname{rank} M_t(y^*)$ for every $t \leq k$, let $k = k + 1$ and return to Step 1; if not, $\lambda_1 = f_1^k$ and set $i = 1$.
  • Step 3. Let $\delta = 0.05$.
  • Step 4. Solve (12) and (13) for the optimal values $v_+(\lambda_i, \delta)$ and $v_-(\lambda_i, \delta)$ using SeDuMi. If $v_+(\lambda_i, \delta) \neq v_-(\lambda_i, \delta)$, let $\delta := \delta / 5$ and compute $v_+(\lambda_i, \delta)$ and $v_-(\lambda_i, \delta)$ again. Repeat this step until $v_+(\lambda_i, \delta) = v_-(\lambda_i, \delta)$.
  • Step 5. Let $k = k_0$.
  • Step 6. Solve the relaxation (11) using SeDuMi. If there is no solution, $\lambda_i$ is the largest Z-eigenvalue, and stop; if not, compute a minimizer $y^*$ for it.
  • Step 7. If $\operatorname{rank} M_{t-k}(y^*) \neq \operatorname{rank} M_t(y^*)$, let $k = k + 1$ and return to Step 6; if not, $\lambda_{i+1} = f_{i+1}^k$ and return to Step 3 with $i = i + 1$.
A concrete example is provided below to illustrate Algorithm 1. Consider the symmetric tensor $\mathcal{A} \in \mathbb{R}^{[4,3]}$ such that
$A_{1111} = 0.2147$, $A_{1112} = 0.3147$, $A_{1113} = 0.6738$, $A_{1122} = 0.1980$, $A_{1123} = 0.1335$, $A_{1133} = 0.7441$, $A_{1222} = 0.0761$, $A_{1223} = 0.3524$, $A_{1233} = 0.6900$, $A_{1333} = 0.5758$, $A_{2222} = 0.3686$, $A_{2223} = 0.3073$, $A_{2233} = 0.2145$, $A_{2333} = 0.0127$, $A_{3333} = 0.7286$.
Using Algorithm 1, we can obtain all the Z-eigenvalues of $\mathcal{A}$. The Z-eigenvalues are shown in Table 1, where $\lambda_k(n)$ means $\lambda_k$ has $n$ repeated roots and $n = \operatorname{rank} M_t(y^*)$. The computation takes about 4.56 s.

3.2.2. Homotopy Continuation-Type Algorithm

Compared with other calculation algorithms, the homotopy method has its own advantages: less restrictive conditions, lower operation costs, and a more universal procedure for computing all common types of tensor eigenvalues. In order to use the homotopy method, we define the equivalence class
$$T(\lambda, x) = \begin{pmatrix} (\mathcal{A}^{(k)} x^{m-1})_1 - \lambda (\mathcal{B} x^{m'-1})_1 \\ \vdots \\ (\mathcal{A}^{(k)} x^{m-1})_n - \lambda (\mathcal{B} x^{m'-1})_n \\ a_1 x_1 + a_2 x_2 + \cdots + a_n x_n + b \end{pmatrix} = 0, \qquad (15)$$
where $\mathcal{A} \in \mathbb{R}^{[m,n]}$ and $\mathcal{B} \in \mathbb{R}^{[m',n]}$; $\lambda$ and $x = (x_1, \ldots, x_n)^T$ are unknowns, while $a_1, \ldots, a_n, b$ are random numbers.
The main idea of the homotopy method is to convert the general polynomial system $T(x) = 0$ into another polynomial system $Q(x) = 0$ that is easy to solve. Under suitable conditions, the homotopy $H(x, t) = 0$ has smooth solution paths parameterized by $t$ for $t \in [0, 1)$, and all the isolated solutions of $T(x) = 0$ can be reached by tracing these solution paths. A useful homotopy is the linear homotopy [50]:
$$H(x, t) = (1 - t) \gamma Q(x) + t T(x) = 0, \quad t \in [0, 1], \qquad (16)$$
where γ is a random nonzero complex number. Another common homotopy is the polyhedral homotopy [51,52] based on Bernstein’s theorem:
$$H(x, t) = (h_1(x, t), \ldots, h_n(x, t)) = 0, \quad t \in [0, 1], \qquad (17)$$
where $h_i(x, t) = (1 - t) \gamma Q_i(x) + t T_i(x)$ and $T_i(x)$ is the $i$-th polynomial in the polynomial system.
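The path-tracking idea behind these homotopies can be illustrated with a univariate toy example: deform an easy start system into the target system and follow each start root with small $t$-steps plus Newton correction. The polynomials, step counts, and $\gamma$ below are illustrative assumptions; production solvers such as NAClab are far more elaborate.

```python
import numpy as np

# Linear homotopy H(x,t) = (1-t)*gamma*Q(x) + t*T(x): start from the
# roots of Q(x) = x^2 - 1 and track them to the roots of the target
# T(x) = x^2 - 3x + 2 as t goes from 0 to 1.
gamma = 0.6 + 0.8j                       # generic nonzero complex constant

def Q(x):  return x**2 - 1.0
def dQ(x): return 2.0 * x
def T(x):  return x**2 - 3.0 * x + 2.0
def dT(x): return 2.0 * x - 3.0

def H(x, t):  return (1 - t) * gamma * Q(x) + t * T(x)
def dH(x, t): return (1 - t) * gamma * dQ(x) + t * dT(x)

def track(x, steps=200):
    # Increase t in small steps; re-converge onto the path with Newton
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(10):
            x = x - H(x, t) / dH(x, t)
    return x

roots = sorted(track(x0).real for x0 in (1.0, -1.0))
# the two tracked paths end at the target roots x = 1 and x = 2
```

The complex $\gamma$ plays the same role as in (16): for a generic choice the two solution paths never collide on $t \in [0, 1)$, so each start root reaches a distinct target root.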
We mainly focus on the homotopy method for the tensor E-eigenvalue computing problem. The algorithm for computing E-eigenvalues by the homotopy method proceeds as follows. Firstly, with the polyhedral homotopy method, we can obtain the equivalence class $T(\lambda, x)$. The solutions of $T(\lambda, x)$ are related to the tensor eigenpairs, and each solution $(\lambda, x)$ is called an equivalence eigenpair. The polynomial expression $T(\lambda, x)$ for the E-eigenvalue is
$$
T(\lambda, x) =
\begin{pmatrix}
(\mathcal{A}x^{m-1})_1 - \lambda x_1 \\
\vdots \\
(\mathcal{A}x^{m-1})_n - \lambda x_n \\
a_1 x_1 + a_2 x_2 + \cdots + a_n x_n + b
\end{pmatrix} = 0,
$$
where A ∈ R^[m,n] and B is taken to be the identity matrix I_n ∈ R^{n×n}. Then, we obtain all equivalence eigenvalues and eigenvectors from the equivalence class by using NAClab [53], which is based on the PSolve [54] method widely applied in sparse matrix factorization [55,56]. NAClab is a Matlab toolbox realizing the PSolve method; it provides a package for the numerical solution of polynomial systems by the homotopy continuation method. Finally, we find all E-eigenpairs by using the correspondence between equivalence eigenpairs and E-eigenpairs.
The implementation of the algorithm is shown in Algorithm 2 [34].
Some concrete examples are given below. Consider the symmetric tensor A ∈ R^[4,3], the same tensor A used in the semidefinite relaxation example. Using Algorithm 2, we likewise obtain all the E-eigenvalues and E-eigenvectors of A. We can then keep the real E-eigenpairs to obtain the Z-eigenpairs. The Z-eigenvalues are shown in Table 2. The computation takes about 0.28 s, a significant reduction in calculation time compared with the previous algorithm.
Algorithm 2 Compute all E-eigenpairs of A
  • Step 1. Use the modified PSolve to obtain all equivalence eigenpairs (λ, x) of T(λ, x) = 0.
  • Step 2. For each (λ, x) obtained in Step 1, if x^T x ≠ 0, normalize it to obtain an eigenpair (λ*, x*) by
    $$
    \lambda^* = \frac{\lambda}{(x^T x)^{(m-2)/2}}, \qquad x^* = \frac{x}{(x^T x)^{1/2}}.
    $$
  • Step 3. Compute all E-eigenpairs (λ, x) from (λ*, x*) by λ = t^{m−2} λ* and x = t x* with t = ±1.
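As a sanity check on the normalization step, the snippet below (our illustration, independent of TenEig) builds a symmetric rank-one tensor A = v⊗v⊗v⊗v (m = 4) whose unit vector v is an E-eigenvector with λ = 1, scales the eigenvector to produce an unnormalized "equivalence eigenpair", and verifies that the formulas λ* = λ/(xᵀx)^{(m−2)/2} and x* = x/(xᵀx)^{1/2} recover the unit-norm E-eigenpair:

```python
import numpy as np

def tensor_apply(A, x):
    """Contract an order-m tensor A with x on the last m-1 modes: (A x^{m-1})_i."""
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x          # matmul with a vector contracts the last mode
    return out

rng = np.random.default_rng(1)
v = rng.standard_normal(3)
v /= np.linalg.norm(v)                       # unit vector
A = np.einsum('i,j,k,l->ijkl', v, v, v, v)   # symmetric rank-one tensor, m = 4
m = A.ndim

# (1, v) is an E-eigenpair: A v^{m-1} = (v.v)^{m-1} v = v
assert np.allclose(tensor_apply(A, v), v)

# Scaling the eigenvector gives an equivalence eigenpair:
# (lam, x) = (c^{m-2}, c v) also satisfies A x^{m-1} = lam x.
c = 2.5
x, lam = c * v, c ** (m - 2)
assert np.allclose(tensor_apply(A, x), lam * x)

# Step 2 normalization recovers the unit-norm eigenpair (1, v)
lam_star = lam / (x @ x) ** ((m - 2) / 2)
x_star = x / np.sqrt(x @ x)
print(round(lam_star, 6))  # 1.0
assert np.allclose(x_star, v)
```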
Then, consider the general tensor B ∈ R^[3,4] such that
$$
B_1 = \begin{pmatrix}
0.8810 & 1.8586 & 0.1136 & 1.4790 \\
0.3232 & 0.6045 & 0.9047 & 0.8608 \\
0.7841 & 0.1034 & 0.4677 & 0.7847 \\
1.8054 & 0.5632 & 0.1249 & 0.3086
\end{pmatrix},
\qquad
B_2 = \begin{pmatrix}
0.2339 & 1.4694 & 0.3362 & 1.8359 \\
1.0570 & 0.1922 & 0.9047 & 1.0360 \\
0.2841 & 0.8223 & 0.2883 & 2.4245 \\
0.0867 & 0.0942 & 0.3501 & 0.9594
\end{pmatrix},
$$
$$
B_3 = \begin{pmatrix}
0.3158 & 0.9407 & 0.5583 & 0.9087 \\
0.4286 & 0.7873 & 0.3114 & 0.2099 \\
1.0360 & 0.8759 & 0.5700 & 1.6989 \\
1.8779 & 0.3199 & 1.0257 & 0.6076
\end{pmatrix},
\qquad
B_4 = \begin{pmatrix}
0.1178 & 1.4831 & 1.1287 & 1.1741 \\
0.6992 & 1.0203 & 0.2900 & 0.1269 \\
0.2696 & 0.4470 & 1.2616 & 0.6568 \\
0.4943 & 0.1097 & 0.4754 & 1.4814
\end{pmatrix},
$$
where B_n is the n-th slice of B along the third mode. Using Algorithm 2, we obtain all the E-eigenvalues and E-eigenvectors of B. The E-eigenvalues are shown in Table 3. The computation takes about 0.37 s.

3.2.3. Calculation Performance Analysis

In this section, we evaluate the calculation performance of the above two algorithms from several aspects: the ratio of convergence, the accuracy, and the CPU time. All numerical experiments were conducted on a PC with an Intel(R) Core(TM) CPU at 3.00 GHz, 8 GB of RAM, and Windows 10. The packages Tensor-Toolbox [9] and TenEig-2.0 [34] were run in Matlab 2020a.
Firstly, the function "tenrand" in Tensor-Toolbox was used to generate random tensors ranging from third-order two-dimensional to fifth-order three-dimensional (the ten (m, n) combinations listed in Table 4), with 5000 sets for each combination and 50,000 sets in total; the tensor entries follow the standard normal distribution. All E-eigenpairs of the generated dataset were calculated with Algorithm 2 ("eeig" in TenEig-2.0). For convenience, the non-convergence ratio, which characterizes the ratio of convergence, is defined as the number of tensors for which the computation reports an error divided by the total number of tensors. Among the 50,000 sets of data in this experiment, four tensors did not converge, giving a non-convergence ratio of 8 × 10^−5; the specific occurrences of non-convergence are shown in Table 4.
The accuracy of the algorithm is assessed in two respects: the estimation bias, known as the residual, and the upper-bound bias. The residual is defined as the difference between the ground truth and the estimated fit, that is, the value of ‖X x_i^{m−1} − λ_i x_i‖, where X ∈ R^[m,n] is an arbitrary tensor and (λ_i, x_i) is an eigenpair of X. For the E-eigenpairs of X ∈ R^[m,n], the upper bound is ((m − 1)^n − 1)/(m − 2) [57], which means that X has at most ((m − 1)^n − 1)/(m − 2) equivalence classes of eigenpairs when the number of E-eigenpairs is finite; we use the notation E[m,n] for this upper bound. The PSolve method in Algorithm 2 yields equivalence eigenpairs directly, and the third step of the algorithm transforms each equivalence eigenpair into two eigenpairs (t = ±1), so the actual theoretical upper bound is 2E[m,n], denoted E_ac[m,n]. The residual and upper-bound data were collected during the convergence test described above.
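The bound of [57] and its doubled count are straightforward to tabulate; the small helper below (ours, added for illustration) reproduces several of the entries appearing in Table 5:

```python
def eig_upper_bound(m, n):
    """Cartwright-Sturmfels bound E[m,n] = ((m-1)^n - 1)/(m-2) on the number of
    equivalence classes of E-eigenpairs of an order-m, dimension-n tensor (m > 2).
    The division is always exact because (m-1) = 1 mod (m-2)."""
    return ((m - 1) ** n - 1) // (m - 2)

def eig_pair_bound(m, n):
    """E_ac[m,n] = 2 * E[m,n]: each equivalence class yields two eigenpairs (t = +/-1)."""
    return 2 * eig_upper_bound(m, n)

# A few values matching Table 5
print(eig_pair_bound(3, 3))   # 14
print(eig_pair_bound(4, 5))   # 242
print(eig_pair_bound(5, 3))   # 42
```

The same formula reproduces the figures quoted in the timing discussion: eig_pair_bound(5, 5) = 682 and eig_pair_bound(6, 6) = 7812.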
From Table 5, we can see that the residuals are on the order of 10^−15, which is negligible, and that the calculated number of eigenpairs N matches the theoretical upper bound E_ac in every case. Finally, the computing costs, measured in CPU time, are shown in Table 6.
For small-scale tensors, where the order and dimension are less than four, the computing cost grows linearly with the upper bound, each eigenpair costing approximately 0.01 s. For large-scale tensors, affected by the "curse of dimensionality", the upper bound increases rapidly and the computation cost grows extremely fast. The upper bound of a fifth-order five-dimensional tensor is 682 with a CPU time of 30.28 s, while the upper bound of a sixth-order six-dimensional tensor is 7812 and the CPU time increases to about 2878 s, a unit cost of about 0.37 s per eigenpair.

4. Spectrum Situation Awareness Based on Tensor Eigenvalues

Spectrum situation awareness can be divided into three stages: perception (sensing), understanding, and prediction. Spectrum perception, the primary task, determines the spectrum usage situation and can be fulfilled by various sensors. Because the environment is non-ideal and noisy, heterogeneous information is needed to improve sensing performance.
For SAGINs with the tensor big data model, two significant problems must be solved before spectrum situation awareness: the storage overhead problem and the missing data problem. Both can be addressed by the tensor decomposition and tensor completion methods mentioned above. For a big data tensor A ∈ R^{n_1 × n_2 × ⋯ × n_m}, the required storage space is n^m (for n_i = n), which is unacceptable when m and n are large. CP decomposition can greatly reduce this storage overhead; the specific procedure [9] is given as Algorithm 3.
In Algorithm 3, ★, ⊙, and (·)^† denote the Hadamard product, the Khatri–Rao product, and the Moore–Penrose pseudoinverse, respectively. A_(j) is the mode-j unfolding of A. ‖·‖ is the tensor norm; taking A as an example, $\|\mathcal{A}\| = \sqrt{\sum_{i_1,\dots,i_m=1}^{n_1,\dots,n_m} \mathcal{A}_{i_1 \cdots i_m}^2}$. The space overhead for storing the big data tensor A is reduced to mnR.
Algorithm 3 CP decomposition algorithm for big data tensors
  • Input: the big data tensor A, the CP-rank R, the tolerance γ_0, and the maximum number of iterations N_0.
  • Step 0. Initialize A_i ∈ R^{n_i × R} for i = 1, …, m; let j = 1 and N = 1.
  • Step 1. Compute $A_j = A_{(j)} (A_m \odot \cdots \odot A_{j+1} \odot A_{j-1} \odot \cdots \odot A_1)(A_1^T A_1 \star \cdots \star A_{j-1}^T A_{j-1} \star A_{j+1}^T A_{j+1} \star \cdots \star A_m^T A_m)^\dagger$, and let j = j + 1.
  • Step 2. If j ≤ m, return to Step 1. Otherwise, compute γ = 1 − ‖A − ⟦A_1, …, A_m⟧‖/‖A‖, where ⟦A_1, …, A_m⟧ denotes the tensor reconstructed from the factor matrices.
  • Step 3. If γ < γ_0 and N < N_0, let j = 1, N = N + 1, and return to Step 1.
  • Output: A_1, …, A_m.
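A minimal NumPy sketch of the alternating least-squares loop at the core of Algorithm 3 follows (our simplified illustration: the function names and random seed are ours, and the fit-based stopping test of Steps 2–3 is reduced to a fixed iteration count):

```python
import numpy as np

def unfold(T, j):
    """Mode-j unfolding of tensor T (0-based mode index, C-order flattening)."""
    return np.moveaxis(T, j, 0).reshape(T.shape[j], -1)

def khatri_rao(mats):
    """Column-wise Khatri-Rao product of a list of matrices with R columns each."""
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, out.shape[1])
    return out

def cp_to_tensor(factors):
    """Reassemble the full tensor from its CP factor matrices."""
    shape = tuple(M.shape[0] for M in factors)
    return (factors[0] @ khatri_rao(factors[1:]).T).reshape(shape)

def cp_als(T, R, n_iter=200, seed=0):
    """Rank-R CP decomposition by alternating least squares (Step 1 of Algorithm 3)."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((n, R)) for n in T.shape]
    for _ in range(n_iter):
        for j in range(T.ndim):
            others = [factors[k] for k in range(T.ndim) if k != j]
            V = np.ones((R, R))
            for M in others:
                V *= M.T @ M                       # Hadamard product of Gram matrices
            # Least-squares update: A_j = T_(j) * KhatriRao(others) * pinv(V)
            factors[j] = unfold(T, j) @ khatri_rao(others) @ np.linalg.pinv(V)
    return factors

# A rank-2 tensor of shape 3x4x5 needs 60 numbers dense but only (3+4+5)*2 = 24
# numbers in CP form, matching the mnR storage figure.
rng = np.random.default_rng(42)
true = [rng.standard_normal((n, 2)) for n in (3, 4, 5)]
T = cp_to_tensor(true)
est = cp_als(T, R=2)
err = np.linalg.norm(T - cp_to_tensor(est)) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.2e}")
```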
In actual scenarios, partial data loss is inevitable because of sensor failure, transmission loss, and other reasons, and this is particularly common for big data. For a spectral big data tensor X, the missing value problem can be solved by tensor completion based on CP decomposition, Tucker decomposition, or the minimum trace norm. The corresponding optimization problems can be written as
$$
\min_{\mathcal{X}, A_1, \dots, A_m} \|\mathcal{X} - [\![A_1, \dots, A_m]\!]\|^2 \quad \text{s.t.} \quad \mathcal{X}_\Theta = \mathcal{Y}_\Theta,
$$
$$
\min_{\mathcal{X}, \mathcal{G}, A_1, \dots, A_m} \|\mathcal{X} - \mathcal{G} \times_1 A_1 \cdots \times_m A_m\|^2 \quad \text{s.t.} \quad \mathcal{X}_\Theta = \mathcal{Y}_\Theta,
$$
$$
\min_{\mathcal{X}} \|\mathcal{X}\|_* \quad \text{s.t.} \quad \mathcal{X}_\Theta = \mathcal{Y}_\Theta,
$$
where Θ is the set of observed (nonzero-valued) indices of X, and ‖·‖_* is the tensor trace norm, defined as ‖X‖_* = Σ_{i=1}^m α_i ‖X_(i)‖_* with Σ_{i=1}^m α_i = 1, which can be regarded as the weighted sum of the nuclear norms of the mode-i unfolding matrices. All of these problems can be solved by the block coordinate descent algorithm; taking the CP decomposition-based formulation as an example, the tensor completion procedure is shown in Algorithm 4.
Algorithm 4 CP-based completion algorithm for big data tensors
  • Input: the big data tensor Y ∈ R^{n_1 × ⋯ × n_m}, the tolerance γ_0, and the maximum number of iterations N_0.
  • Step 0. Initialize A_1, …, A_m with random numbers; form Θ and its complementary set Θ̂; let X_Θ = Y_Θ, N = 0, and k = 1; and compute γ_k = ‖X − ⟦A_1, …, A_m⟧‖².
  • Step 1. Let X_Θ̂ = (⟦A_1, …, A_m⟧)_Θ̂, i.e., fill the unobserved entries from the current CP reconstruction.
  • Step 2. Compute A_1, …, A_m of X by using Algorithm 3, and let k = k + 1.
  • Step 3. Compute γ_k = ‖X − ⟦A_1, …, A_m⟧‖².
  • Step 4. If |γ_k − γ_{k−1}| > γ_0 and N < N_0, let N = N + 1 and return to Step 1.
  • Output: the completed tensor X.
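The alternating "fill, then refit" loop of Algorithm 4 can be sketched compactly for the rank-one, three-way case (a simplification of ours: the inner CP fit of Step 2 is replaced by a rank-one fit built from leading singular vectors of the unfoldings, rather than the full Algorithm 3, and the stopping test is a fixed iteration count):

```python
import numpy as np

def rank1_fit(T):
    """Rank-one CP fit of a 3-way tensor: take the leading left singular vector of
    each mode unfolding, then choose the scale by projecting T onto their outer product."""
    vecs = []
    for j in range(T.ndim):
        M = np.moveaxis(T, j, 0).reshape(T.shape[j], -1)
        vecs.append(np.linalg.svd(M, full_matrices=False)[0][:, 0])
    model = np.einsum('i,j,k->ijk', *vecs)       # unit-norm rank-one tensor
    scale = np.tensordot(T, model, axes=3)       # <T, a o b o c>
    return scale * model

def cp_complete(Y, mask, n_iter=100):
    """Algorithm 4 sketch: observed entries (mask == True) stay fixed; missing
    entries are repeatedly replaced by the current low-rank model's values."""
    X = np.where(mask, Y, 0.0)                   # Step 0: X_Theta = Y_Theta, zeros elsewhere
    for _ in range(n_iter):
        model = rank1_fit(X)                     # Step 2: refit the CP model
        X = np.where(mask, Y, model)             # Step 1: fill the complement from the model
    return X

rng = np.random.default_rng(3)
a, b, c = (rng.standard_normal(n) for n in (4, 5, 6))
Y = np.einsum('i,j,k->ijk', a, b, c)             # exact rank-one ground truth
mask = rng.random(Y.shape) > 0.2                 # observe roughly 80% of the entries
X = cp_complete(Y, mask)
err = np.linalg.norm(X - Y) / np.linalg.norm(Y)
print(f"relative recovery error: {err:.2e}")
```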
After solving the above problems, we use tensor eigenvalues to construct a spectrum situation awareness scheme. To the best of our knowledge, this is the first use of tensor eigenvalues for spectrum situation awareness. Similar to classical signal detection methods, the proposed scheme is based on the binary hypothesis test
$$
x(t) =
\begin{cases}
s(t) + n(t), & H_1 \\
n(t), & H_0
\end{cases}
$$
where x(t) denotes the received signal, s(t) is the target signal, and n(t) is the noise; H_1 and H_0 denote the signal-present and signal-absent hypotheses, respectively.
Based on the spectrum situation awareness model, the signal tensor is generated from the target signal and noise. The eigenvalues of the signal tensor are used to construct detectors, and sensing results are produced by comparing the detector with a given threshold. For a given false alarm probability, the threshold is determined from signal tensors containing only noise (H_0). The eigenvalues of the signal tensor are calculated with the homotopy method in polynomial time, and the detection decision is made by comparing the eigenvalue with the threshold: if the detector exceeds the threshold, the state is H_1, indicating that a target signal is present; otherwise, the state is H_0, indicating no signal.
In order to demonstrate the performance of the algorithm, we simulated 52 sets of spectrum tensors with different tensor sizes and SNRs in Matlab; each set consisted of 10,000 received signal tensors and 1000 noise tensors. The detection performance of Algorithm 5 is shown in Figure 3 for varying tensor sizes, SNRs, and P_α. In Figure 3a, the probability of detection P_d over 25 samples is plotted against the SNR for various tolerated probabilities of false alarm P_α. As the SNR increases, P_d gradually increases and finally reaches 100%. For the same threshold, a higher SNR yields a larger maximum eigenvalue and hence a higher detection probability. This shows that the algorithm is effective for spectrum detection: when the SNR is sufficient, the detection probability is close to 100%, meaning that the maximum eigenvalue can represent the presence of average energy in the tensor. We also observe that P_α mainly affects how quickly P_d reaches 100%: the larger P_α is, the smaller the SNR at which P_d reaches 100%. When P_α = 0.01, P_d first reaches 100% at SNR = 5 dB, but when P_α = 0.10, this already occurs at SNR = 2 dB. The reason is that P_α directly determines the threshold: when P_α increases, the threshold decreases, so the tensor eigenvalue at a lower SNR can exceed it.
Algorithm 5 Algorithm of eigenvalue-based spectrum situation awareness
  • Input: the noise tensors N, the signal tensors X, and the tolerated probability of false alarm P_α.
  • Step 0. Compute the E-eigenvalues of N by Algorithm 2 and obtain the noise eigenvalue distribution.
  • Step 1. Compute the threshold T for the given P_α based on the noise eigenvalue distribution.
  • Step 2. Compute the E-eigenvalues of X and obtain the maximum eigenvalue (in modulus) S.
  • Step 3. Compare T and S. If T ≥ S, consider X to be a noise tensor; otherwise, X is a signal tensor.
  • Output: the result of spectrum situation awareness.
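The threshold logic of Algorithm 5 does not depend on the particular detection statistic, so it can be illustrated with a cheap stand-in for the maximum E-eigenvalue. In the Monte-Carlo sketch below (ours; the mean-energy statistic, tensor size, sample counts, and SNR are illustrative choices, not the paper's setup), the threshold is the (1 − P_α) quantile of the statistic under H_0:

```python
import numpy as np

rng = np.random.default_rng(0)
shape, n_noise, n_signal = (3, 3, 3), 1000, 1000
snr_db = 5.0
sig_std = 10 ** (snr_db / 20)        # signal amplitude relative to unit-variance noise

def statistic(T):
    """Stand-in detector: mean energy of the tensor entries (a cheap surrogate for
    the maximum E-eigenvalue modulus used in Algorithm 5)."""
    return np.mean(T ** 2)

# Steps 0-1: estimate the noise statistic distribution and set the threshold
# as its (1 - P_alpha) quantile.
p_alpha = 0.10
noise_stats = np.array([statistic(rng.standard_normal(shape)) for _ in range(n_noise)])
threshold = np.quantile(noise_stats, 1 - p_alpha)

# Steps 2-3: declare H1 whenever the statistic exceeds the threshold.
h1_stats = np.array([statistic(sig_std * rng.standard_normal(shape)
                               + rng.standard_normal(shape)) for _ in range(n_signal)])
p_d = np.mean(h1_stats > threshold)
p_fa = np.mean(noise_stats > threshold)   # empirical false-alarm rate on H0 data
print(f"P_fa ~ {p_fa:.3f}, P_d ~ {p_d:.3f}")
```

Raising p_alpha lowers the threshold and raises both P_fa and P_d, which is exactly the trade-off discussed around Figure 3.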
Based on the above findings, the detection probability increases if the tolerated probability of false alarm increases, and vice versa. To illustrate this effect, Figure 3b plots the probability of detection P_d as a function of the tolerated probability of false alarm P_α. When P_α < 0.15, P_d increases rapidly; when P_α > 0.15, P_d increases gradually, and the curve for the larger SNR lies above that for the smaller. The former is because, in the hypothesis test, the threshold distribution follows a Gaussian distribution: in the first part, the threshold decreases rapidly as P_α increases, so the detection probability at a given SNR increases rapidly; in the second part, because the distribution is concentrated, the threshold changes little as P_α increases, and the detection probability changes slowly. The latter is because a larger SNR allows P_d = 100% to be achieved at a higher threshold, which is consistent with the conclusion drawn from Figure 3a.
Figure 3c,d mainly illustrate the effect of tensor size on the algorithm. Instead of the tensors X ∈ R^[3,3] used in Figure 3a,b, we use tensors Y ∈ R^[3,5] in Figure 3c,d, so each tensor grows from 27 elements to 125 elements. From the comparison, it is easy to see that the performance for Y is significantly better than that for X: when P_α = 0.01, the SNR only has to reach 2 dB for P_d to reach 100%, and the slope of the initial part of the curve is significantly steeper than in Figure 3a.
Subsequent simulation experiments on tensors of different orders and dimensions suggest that, for tensors of the same order, the larger the dimension, the better the algorithm performs, while for tensors of the same dimension, the larger the order, the worse it performs. Explaining the cause of this phenomenon is a key problem for future work. In summary, the eigenvalue detection method can effectively solve the spectrum sensing problem.

5. Conclusions

A novel tensor-based spectrum situation awareness scheme has been proposed, in which tensors are used to model and process the multidimensional, heterogeneous, and massive spectral big data of space–air–ground integrated networks. In particular, tensor eigenvalues have been utilized to construct the spectrum situation awareness scheme, with the distributions of E-eigenvalues used to formulate spectrum sensing algorithms. Two tensor eigenvalue calculation schemes have also been provided to generate tensor eigenvalues. Simulation results have verified the feasibility of the proposed scheme, which can detect the spectrum state quickly, accurately, and at a coarse granularity, providing valuable support for subsequent understanding and prediction. However, as the complexity of tensor eigenvalue computation grows dramatically with tensor size, the scheme can currently handle only local, small-scale portions of the spectral big data. Future research will explore the correlation between tensor decomposition and eigenvalues, aiming to enable large-scale tensor eigenvalue computation, and will further investigate tensor computing to enhance the subsequent operations of situation awareness. Overall, the novel tensor-based spectrum situation awareness scheme provides a new application paradigm for tensor theory.

Author Contributions

Conceptualization, B.Q.; formal analysis, B.Q. and W.Z.; investigation, L.Z.; methodology, B.Q.; software, W.Z.; supervision, W.Z.; validation, L.Z.; writing—original draft preparation, B.Q.; writing—review and editing, W.Z. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China under Grant 2022YFF0604903.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Niu, Z.; Shen, X.S.; Zhang, Q.; Tang, Y. Space-air-ground integrated vehicular network for connected and automated vehicles: Challenges and solutions. Intell. Converg. Netw. 2020, 1, 142–169. [Google Scholar] [CrossRef]
  2. Guo, F.; Yu, F.R.; Zhang, H.; Li, X.; Ji, H.; Leung, V.C. Enabling massive IoT toward 6G: A comprehensive survey. IEEE Internet Things J. 2021, 8, 11891–11915. [Google Scholar] [CrossRef]
  3. Guo, H.; Li, J.; Liu, J.; Tian, N.; Kato, N. A survey on space-air-ground-sea integrated network security in 6G. IEEE Commun. Surv. Tutor. 2021, 24, 53–87. [Google Scholar] [CrossRef]
  4. Mitola, J.; Maguire, G.Q. Cognitive radio: Making software radios more personal. IEEE Pers. Commun. 1999, 6, 13–18. [Google Scholar] [CrossRef]
  5. Wu, Q.; Ren, J. New paradigm of electromagnetic spectrum space: Spectrum situation. J. Nanjing Univ. Aeronaut. Astronaut. 2016, 48, 625–633. [Google Scholar]
  6. Lathauwer, L.D.; Moor, B.D. From matrix to tensor: Multilinear algebra and signal processing. In Institute of Mathematics and Its Applications Conference Series; Citeseer: Pittsburgh, PA, USA, 1998; Volume 67, pp. 1–16. [Google Scholar]
  7. Sidiropoulos, N.D.; Lathauwer, L.D.; Fu, X.; Huang, K.; Papalexakis, E.E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582. [Google Scholar] [CrossRef]
  8. Song, Q.; Ge, H.; Caverlee, J.; Hu, X. Tensor completion algorithms in big data analytics. ACM Trans. Knowl. Discov. Data 2019, 13, 1–48. [Google Scholar] [CrossRef]
  9. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  10. Anandkumar, A.; Ge, R.; Hsu, D.; Kakade, S.M.; Telgarsky, M. Tensor decompositions for learning latent variable models. J. Mach. Learn. Res. 2014, 15, 2773–2832. [Google Scholar]
  11. Hitchcock, F.L. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys. 1927, 6, 164–189. [Google Scholar] [CrossRef]
  12. Cattell, R.B. ‘Parallel proportional profiles’ and other principles for determining the choice of factors by rotation. Psychometrika 1944, 9, 267–283. [Google Scholar] [CrossRef]
  13. Harshman, R.A. Foundations of the Parafac Procedure: Models and Conditions for an ‘Explanatory’ Multimodal Factor Analysis; University of California: Los Angeles, CA, USA, 1970. [Google Scholar]
  14. Carroll, J.D.; Chang, J.-J. Analysis of individual differences in multidimensional scaling via an n-way generalization of ‘Eckart–Young’ decomposition. Psychometrika 1970, 35, 283–319. [Google Scholar] [CrossRef]
  15. Kiers, H.A. Towards a standardized notation and terminology in multiway analysis. J. Chemom. J. Chemom. Soc. 2000, 14, 105–122. [Google Scholar] [CrossRef]
  16. Tucker, L.R. Implications of factor analysis of three-way matrices for measurement of change. Probl. Meas. Chang. 1963, 15, 122–137. [Google Scholar]
  17. Kroonenberg, P.M.; Leeuw, J.D. Principal component analysis of three-mode data by means of alternating least squares algorithms. Psychometrika 1980, 45, 69–97. [Google Scholar] [CrossRef]
  18. Lathauwer, L.D.; Moor, B.D.; Vandewalle, J. An introduction to independent component analysis. J. Chemom. J. Chemom. Soc. 2000, 14, 123–149. [Google Scholar] [CrossRef]
  19. Geng, X.; Smith-Miles, K.; Zhou, Z.-H.; Wang, L. Face image modeling by multilinear subspace analysis with missing values. In Proceedings of the 17th ACM International Conference on Multimedia, Beijing, China, 19–24 October 2009; pp. 629–632. [Google Scholar]
  20. Li, X. Compressed sensing and matrix completion with constant proportion of corruptions. Constr. Approx. 2013, 37, 73–99. [Google Scholar] [CrossRef]
  21. Yokota, T.; Zhao, Q.; Cichocki, A. Smooth parafac decomposition for tensor completion. IEEE Trans. Signal Process. 2016, 64, 5423–5436. [Google Scholar] [CrossRef]
  22. Gandy, S.; Recht, B.; Yamada, I. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Probl. 2011, 27, 025010. [Google Scholar] [CrossRef]
  23. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 208–220. [Google Scholar] [CrossRef]
  24. Acar, E.; Dunlavy, D.M.; Kolda, T.G.; Mørup, M. Scalable tensor factorizations for incomplete data. Chemom. Intell. Lab. Syst. 2011, 106, 41–56. [Google Scholar] [CrossRef]
  25. Chen, Y.-L.; Hsu, C.-T.; Liao, H.-Y.M. Simultaneous tensor decomposition and completion using factor priors. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 577–591. [Google Scholar] [CrossRef] [PubMed]
  26. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324. [Google Scholar] [CrossRef]
  27. Lim, L.-H. Singular values and eigenvalues of tensors: A variational approach. In Proceedings of the 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Puerto Vallarta, Mexico, 13–15 December 2005; pp. 129–132. [Google Scholar]
  28. Qi, L.; Wang, F.; Wang, Y. Z-eigenvalue methods for a global polynomial optimization problem. Math. Program. 2009, 118, 301–316. [Google Scholar] [CrossRef]
  29. Song, Y.; Qi, L. Eigenvalue analysis of constrained minimization problem for homogeneous polynomial. J. Glob. Optim. 2016, 64, 563–575. [Google Scholar] [CrossRef]
  30. Li, G.; Qi, L.; Yu, G. The Z-eigenvalues of a symmetric tensor and its application to spectral hypergraph theory. Numer. Linear Algebra Appl. 2013, 20, 1001–1029. [Google Scholar] [CrossRef]
  31. Surana, A.; Chen, C.; Rajapakse, I. Hypergraph Similarity Measures. IEEE Trans. Netw. Sci. Eng. 2023, 10, 658–674. [Google Scholar] [CrossRef]
  32. Chen, C. Explicit solutions and stability properties of homogeneous polynomial dynamical systems. IEEE Trans. Autom. Control. 2023, 68, 4962–4969. [Google Scholar] [CrossRef]
  33. Cui, C.-F.; Dai, Y.-H.; Nie, J. All real eigenvalues of symmetric tensors. SIAM J. Matrix Anal. Appl. 2014, 35, 1582–1601. [Google Scholar] [CrossRef]
  34. Chen, L.; Han, L.; Zhou, L. Computing tensor eigenvalues via homotopy methods. SIAM J. Matrix Anal. Appl. 2016, 37, 290–319. [Google Scholar] [CrossRef]
  35. Hillar, C.J.; Lim, L.-H. Most tensor problems are NP-hard. J. ACM 2013, 60, 1–39. [Google Scholar] [CrossRef]
  36. Ng, M.; Qi, L.; Zhou, G. Finding the largest eigenvalue of a nonnegative tensor. SIAM J. Matrix Anal. Appl. 2010, 31, 1090–1099. [Google Scholar] [CrossRef]
  37. Hu, S.; Huang, Z.-H.; Qi, L. Finding the extreme z-eigenvalues of tensors via a sequential semidefinite programming method. Numer. Linear Algebra Appl. 2013, 20, 972–984. [Google Scholar] [CrossRef]
  38. Hao, C.L.; Cui, C.F.; Dai, Y.H. A sequential subspace projection method for extreme Z-eigenvalues of supersymmetric tensors. Numer. Linear Algebra Appl. 2015, 22, 283–298. [Google Scholar] [CrossRef]
  39. Kolda, T.G.; Mayo, J.R. Shifted power method for computing tensor eigenpairs. SIAM J. Matrix Anal. Appl. 2011, 32, 1095–1124. [Google Scholar] [CrossRef]
  40. Han, L. An unconstrained optimization approach for finding real eigenvalues of even order symmetric tensors. arXiv 2012, arXiv:1203.5150. [Google Scholar] [CrossRef]
  41. Lv, C.-Q.; Ma, C.-F. A Levenberg–Marquardt method for solving semi-symmetric tensor equations. J. Comput. Appl. Math. 2018, 332, 13–25. [Google Scholar] [CrossRef]
  42. Jaffe, A.; Weiss, R.; Nadler, B. Newton correction methods for computing real eigenpairs of symmetric tensors. SIAM J. Matrix Anal. Appl. 2018, 39, 1071–1094. [Google Scholar] [CrossRef]
  43. Dehdezi, E.K.; Karimi, S. A fast and efficient Newton–Schulz-type iterative method for computing inverse and Moore–Penrose inverse of tensors. J. Math. Model. 2021, 9, 645–664. [Google Scholar]
  44. Nie, J.; Zhang, X. Real eigenvalues of nonsymmetric tensors. Comput. Optim. Appl. 2018, 70, 1–32. [Google Scholar] [CrossRef]
  45. Liang, M.; Zheng, B.; Zhao, R. Alternating iterative methods for solving tensor equations with applications. Numer. Algorithms 2019, 80, 1437–1465. [Google Scholar] [CrossRef]
  46. Chen, Z.; Wang, X.; Yu, S.; Li, Z.; Guo, H. Continuous-mode quantum key distribution with digital signal processing. NPJ Quantum Inf. 2023, 9, 28. [Google Scholar] [CrossRef]
  47. Lasserre, J.B. Global optimization with polynomials and the problem of moments. SIAM J. Optim. 2001, 11, 796–817. [Google Scholar] [CrossRef]
  48. Henrion, D.; Lasserre, J.-B. Detecting global optimality and extracting solutions in gloptipoly. In Positive Polynomials in Control; Springer: Berlin/Heidelberg, Germany, 2005; pp. 293–310. [Google Scholar]
  49. Sturm, J.F. Using SeDuMi 1.02, A Matlab toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11, 625–653. [Google Scholar] [CrossRef]
  50. Li, T. Solving polynomial systems by the homotopy continuation methods. In Handbook of Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 1993; pp. 209–304. [Google Scholar]
  51. Huber, B.; Sturmfels, B. A polyhedral method for solving sparse polynomial systems. Math. Comput. 1995, 64, 1541–1555. [Google Scholar] [CrossRef]
  52. Lee, T.-L.; Li, T.-Y.; Tsai, C.-H. HOM4PS-2.0: A software package for solving polynomial systems by the polyhedral homotopy continuation method. Computing 2008, 83, 109–133. [Google Scholar] [CrossRef]
  53. Zeng, Z.; Li, T.-Y. NAClab: A Matlab toolbox for numerical algebraic computation. ACM Commun. Comput. Algebra 2014, 47, 170–173. [Google Scholar] [CrossRef]
  54. Davis, T.A.; Davidson, E.S. PSolve: A Concurrent Algorithm for Solving Sparse Systems of Linear Equations; Technical Report; Center for Supercomputing Research and Development: Urbana, IL, USA, 1987. [Google Scholar]
  55. Sadayappan, P.; Visvanathan, V. Efficient sparse matrix factorization for circuit simulation on vector supercomputers. IEEE Trans. Computer-Aided Des. Integr. Circuits Syst. 1989, 8, 1276–1285. [Google Scholar] [CrossRef]
  56. Hadfield, S.M. On the Lu Factorization of Sequences of Identically Structured Sparse Matrices within a Distributed Memory Environment. Ph.D. Thesis, Citeseer, Pittsburgh, PA, USA, 1994. [Google Scholar]
  57. Cartwright, D.; Sturmfels, B. The number of eigenvalues of a tensor. Linear Algebra Its Appl. 2013, 438, 942–952. [Google Scholar] [CrossRef]
Figure 1. Space–air–ground integrated network model.
Figure 2. Tensor representation model and tensor-based spectrum situation awareness model.
Figure 3. Probability of detection of spectrum sensing algorithms based on tensor eigenvalue, as a function of the SNR and the tolerated probability of false-alarm. (a) Performance against SNR; (b) performance against P_α; (c) performance against SNR; (d) performance against P_α.
Table 1. Z-eigenvalues of A ∈ R^[4,3] by the semidefinite relaxation algorithm.

k      1             2             3             4
λ_k    −1.1878 (2)   −0.5609 (2)   −0.4543 (2)   −0.3269 (4)
k      5             6             7             8
λ_k    0.0584 (2)    0.6287 (2)    1.6377 (2)    2.7482 (2)
Table 2. Z-eigenvalues of A ∈ R^[4,3] by homotopy continuation.

k      1             2             3             4
λ_k    −1.1878 (2)   −0.5609 (2)   −0.4543 (2)   −0.3269 (4)
k      5             6             7             8
λ_k    0.0584 (2)    0.6287 (2)    1.6377 (2)    2.7482 (2)
Table 3. E-eigenvalues of B ∈ R^[3,4] by the homotopy continuation algorithm.

k      1                     2                     3                     4
λ_k    ±(1.5909 + 0.0000i)   ±(1.3095 + 0.0000i)   ±(1.2097 + 0.0000i)   ±(1.1606 + 1.4460i)
k      5                     6                     7                     8
λ_k    ±(1.1606 − 1.4460i)   ±(1.0632 + 1.0201i)   ±(1.0632 − 1.0201i)   ±(0.8579 + 1.1735i)
k      9                     10                    11                    12
λ_k    ±(0.8579 − 1.1735i)   ±(0.8360 + 0.0260i)   ±(0.8360 − 0.0260i)   ±(0.6803 + 1.6365i)
k      13                    14                    15
λ_k    ±(0.6803 − 1.6365i)   ±(0.6331 + 0.0000i)   ±(0.3802 + 0.0000i)
Table 4. The non-convergence number in 5000 sets of tensors by the homotopy continuation algorithm.

[m, n]   [3, 2]   [3, 3]   [3, 4]   [3, 5]   [4, 2]
Num      0        0        2        0        0
[m, n]   [4, 3]   [4, 4]   [4, 5]   [5, 2]   [5, 3]
Num      1        0        0        1        0
Table 5. The accuracy in 5000 sets of tensors by the homotopy continuation algorithm.

(m, n)     (3, 2)          (3, 3)          (3, 4)          (3, 5)          (4, 2)
residual   5.30 × 10^−16   3.54 × 10^−16   3.12 × 10^−16   1.03 × 10^−15   3.30 × 10^−16
E_ac       6               14              30              62              8
N          6               14              30              62              8
(m, n)     (4, 3)          (4, 4)          (4, 5)          (5, 2)          (5, 3)
residual   1.03 × 10^−15   1.90 × 10^−15   8.81 × 10^−15   5.85 × 10^−16   1.64 × 10^−15
E_ac       26              80              242             10              42
N          26              80              242             10              42
Table 6. The time of computing in 5000 sets of tensors by the homotopy continuation algorithm.

(m, n)     (3, 2)          (3, 3)          (3, 4)          (3, 5)          (4, 2)
Time (s)   8.79 × 10^−2    1.38 × 10^−1    2.59 × 10^−1    6.40 × 10^−1    9.86 × 10^−2
(m, n)     (4, 3)          (4, 4)          (4, 5)          (5, 2)          (5, 3)
Time (s)   2.46 × 10^−1    8.16 × 10^−1    4.58            1.18 × 10^−1    4.11 × 10^−1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Qi, B.; Zhang, W.; Zhang, L. Spectrum Situation Awareness for Space–Air–Ground Integrated Networks Based on Tensor Computing. Sensors 2024, 24, 334. https://doi.org/10.3390/s24020334

