1. Introduction
In software implementations, massively parallel correlation is performed by exploiting the Fourier transform: mathematically, a convolution in the time domain is a multiplication in the frequency domain. With all the IF samples in memory, we can transform them to the frequency domain, perform a simple multiplication by the Fourier transform of the pseudorandom noise (PRN) code, and then perform an inverse transform back to the time domain. This approach requires a large amount of random access memory (RAM) to store the data received from the IF, and it is more of a store-and-process approach [1]. This research explores the use of compressed sensing (CS) to reduce the number of samples and, therefore, the amount of RAM required, which could enable new signal processing technologies in which the signal is processed where more computational resources are available.
Due to advances in digital processing technology and the implementation of software-based GNSS receivers, researchers are motivated to explore new acquisition and tracking methods for GNSS signals, with advantages in robustness, sensitivity, and anti-jamming capability [2]. With the development of GNSS systems featuring more robust signals and multiple constellations, GNSS receivers face a considerable amount of data processing, and receiver hardware is growing larger, which has a dramatic impact on the development of consumer- and professional-grade GNSS receivers. Receiver manufacturers are busily developing and implementing unique signal acquisition and tracking algorithms, advanced integrity monitoring algorithms, advanced multipath mitigation algorithms, and a host of other enhancements in an effort to improve the performance of GNSS receivers and make their products stand out in a crowded field [
3]. The primary objective of this paper is to develop a technique that reduces the number of samples, with a secondary goal of improving the computational requirements of GNSS signal acquisition by optimizing its computational complexity. For the purposes of this paper, acquisition is understood as the process of estimating code phase and Doppler values of GNSS signals from the IF that are accurate enough to start tracking [4]. Thus, this paper focuses on a GNSS receiver in the cold start state, when the receiver does not rely on stored information [5]; specifically, Global Positioning System (GPS) receivers and the application to other constellations, such as the European constellation Galileo.
GPS receivers must observe and measure GNSS navigation signals from at least four satellites to obtain three-dimensional position, velocity, and user clock error estimates. Using more than the minimum four satellites improves the accuracy of the user solution through an overdetermined solution [6]. GPS satellites simultaneously transmit several ranging codes and navigation data using binary phase-shift keying (BPSK). However, only a limited number of central frequencies are used. Satellites using the same frequency are distinguished by using different ranging codes, also called chipping codes. Satellites are uniquely identified by a serial number called the space vehicle number (SVN), which does not change during the satellite's lifetime [7]. Additionally, all operating satellites have a pseudo-random noise (PRN) number, which uniquely identifies the ranging codes that a satellite uses. The GPS satellite generates the signal as follows: a frequency synthesizer driven by an atomic clock on the satellite produces a sinusoidal carrier at 1575.42 MHz. This carrier is then modulated with a repeating code known as the C/A (coarse/acquisition) code. The C/A code is a binary sequence of 1023 bits that multiplies the carrier to form a BPSK-modulated signal; the C/A code repeats every millisecond. The signal is further modulated by a 50-bps data stream containing the ephemeris data. One chip lasts roughly 1 microsecond, in which the signal travels about 300 m, and one epoch (1023 bits of PRN code) lasts 1 ms, in which the signal travels about 300 km (see Figure 1).
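To make the C/A code structure concrete, the sketch below generates one 1023-chip period using the standard two-LFSR Gold code construction. The G2 output tap pair (2, 6), which corresponds to PRN 1, is an assumption for the example; other PRNs use different tap pairs.

```python
import numpy as np

def ca_code(prn_taps=(2, 6)):
    """Generate one period (1023 chips) of a GPS C/A Gold code.

    prn_taps are the two G2 output taps; (2, 6) corresponds to PRN 1.
    """
    g1 = [1] * 10  # G1 shift register, initialized to all ones
    g2 = [1] * 10  # G2 shift register, initialized to all ones
    code = []
    for _ in range(1023):
        g2_out = g2[prn_taps[0] - 1] ^ g2[prn_taps[1] - 1]
        code.append(g1[9] ^ g2_out)  # C/A chip = G1 output XOR delayed G2
        # Feedback polynomials: G1 = 1 + x^3 + x^10,
        # G2 = 1 + x^2 + x^3 + x^6 + x^8 + x^9 + x^10
        g1_fb = g1[2] ^ g1[9]
        g2_fb = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
        g1 = [g1_fb] + g1[:9]
        g2 = [g2_fb] + g2[:9]
    return np.array(code)

chips = ca_code()
print(len(chips), chips[:10])  # 1023 [1 1 0 0 1 0 0 0 0 0]
```

The first ten chips match the documented octal 1440 prefix of PRN 1, a common sanity check for C/A generators.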
Galileo satellites transmit the E1 (L1) signal on the center frequency 1575.42 MHz, the same as GPS, with a reference bandwidth of 24.5520 MHz. The E1 signal contains pilot and data channels, and both use composite binary offset carrier (CBOC) modulation (see Figure 2), which multiplexes BOC(1,1) and BOC(6,1).
The received power level of r(t) at the Earth's surface is extremely weak, well below the noise floor. The minimum received power on the ground, defined at the output of an ideally-matched right-hand circularly-polarized 0 dBi user receiving antenna when the satellite elevation angle is higher than 10 degrees, is −157 dBW, considering 50%/50% E1B/E1C power-sharing [8].
The E1 ranging codes have a 1.023 Mchip/s chipping rate, and the data channel carries a navigation message at a 250 bps rate. The pilot channel is called E1-C, and the data channel is called E1-B. This kind of modulation allows GPS and Galileo signals to occupy the same frequency while avoiding mutual interference, which makes building receivers that use both GPS and Galileo simpler.
A distinction is made between signals containing navigation data (the data channels) and signals carrying no data (pilot channels) [
9]: the signals of the data and pilot channels are shifted by 90 degrees in phase, which allows for their separation in the receivers. Galileo allows the receiver to estimate the ionospheric delay error. This error is due to the delay that the navigation signals suffer when they travel through the ionosphere. This delay makes the distance from the satellite to the user, as measured by the receiver, appear longer than it actually is and, if not corrected, would lead to large positioning errors. Fortunately, this delay is inversely proportional to the square of the signal frequency, with lower frequency signals experiencing a longer delay than higher frequency signals. Therefore, by combining measurements from the same satellite at two different frequencies, it is possible to produce another measurement in which the ionospheric delay error has been canceled out. This cancellation becomes more effective as the separation between the two frequencies increases. This is the reason why Galileo services are generally realized using pairs of signals [9].
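Because the first-order ionospheric delay scales as 1/f², the dual-frequency cancellation described above can be verified numerically. The sketch below uses the Galileo E1/E5a frequency pair; the simulated range and delay values are purely illustrative.

```python
# Ionosphere-free combination of pseudoranges on two frequencies f1 > f2:
#   P_IF = c1*P1 - c2*P2, with c1 = f1^2/(f1^2 - f2^2), c2 = f2^2/(f1^2 - f2^2).
# Since c1 - c2 = 1, the geometric range is preserved while the 1/f^2
# ionospheric term cancels.
f1 = 1575.42e6   # Galileo E1 carrier frequency (Hz)
f2 = 1176.45e6   # Galileo E5a carrier frequency (Hz)

c1 = f1**2 / (f1**2 - f2**2)
c2 = f2**2 / (f1**2 - f2**2)

rho, iono = 22_000_000.0, 5.0          # geometric range (m), iono delay at f1 (m)
P1 = rho + iono                        # pseudorange on f1
P2 = rho + iono * (f1 / f2) ** 2       # delay is larger on the lower frequency

P_IF = c1 * P1 - c2 * P2
print(abs(P_IF - rho) < 1e-6)  # True: ionospheric term cancels
```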
Basic GPS receiver architecture is shown in
Figure 3. The satellite's binary phase-shift keyed (BPSK) signal arrives at the antenna at radio frequency (RF) together with noise. The purpose of the receiver front end is to filter, amplify, and down-convert the incoming signal to an intermediate frequency (IF) or lower frequency, and to digitize it (analog-to-digital, A–D, conversion) so that it is easier to process and sample in the receiver baseband. It is important to note that the RF front end contains analog components that generate thermal noise, and in the majority of satellite receiver designs the noise comes not from the satellites or any external source, but from the receiver itself [1]. After the front end comes the baseband section of the receiver. The IF-to-baseband mixer removes the carrier from the signal, leaving the original binary sequence that was created at the satellite and the 50-bps data, but also noise.
At the correlator, the receiver takes a replica of the PRN code, multiplies it by the received signal, and then integrates. When the correlators are aligned with the incoming signal, a correlation peak is observed, and a hit is declared if the integrated value crosses a predetermined threshold. Moreover, the baseband block is repeated once per channel so that each channel can acquire a different satellite; therefore, a standard receiver has more than one channel.
One aspect to notice is that, until the correlation peak is found, there are two unknowns: the actual frequency, offset by the Doppler shift and by the offset of the receiver's local oscillator, and the code delay. The acquisition search space is therefore two-dimensional: one axis is frequency (kHz) and the other is code delay (chips). The search is typically done in frequency bins and is called a frequency and code-delay search. The traditional approach convolves the received signal with the code division multiple access (CDMA) code of each satellite in the time domain, and the correct alignment corresponds to the one that maximizes the convolution. This approach has a computational complexity of O(n²).
In the frequency domain, the receiver takes the FFT of the received signal, multiplies the output of this Fourier transform by the FFT of the CDMA code, and then performs the inverse fast Fourier transform (IFFT) on the resulting signal; the output spikes at the correct shift that synchronizes the code with the received signal. The computational complexity of this approach is O(n log n) [10].
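A minimal sketch of this frequency-domain code-phase search is shown below, using a random ±1 sequence as a stand-in for a real PRN code; the shift and noise level are illustrative, and a real acquisition would repeat this correlation for each Doppler bin.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1023
code = rng.choice([-1.0, 1.0], size=n)   # stand-in for a ±1 PRN code

true_shift = 317
# Received signal: circularly shifted code plus noise.
received = np.roll(code, true_shift) + 0.5 * rng.standard_normal(n)

# Circular correlation via the frequency domain:
# corr = IFFT( FFT(received) * conj(FFT(code)) ), O(n log n) instead of O(n^2).
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code)))
est = int(np.argmax(np.abs(corr)))
print(est)  # 317
```

The single call to `argmax` evaluates all 1023 code phases at once, which is exactly the parallelism the FFT-based approach provides over a serial time-domain search.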
Hassanieh et al. presented an FFT-based GPS locking algorithm, called QuickSync, that builds on recent developments in sparse recovery and has the lowest computational complexity to date. The algorithm was tested on two datasets: one collected in the US using an SDR and a second collected in Europe. Their design reduces the number of multiplications for detecting the correct shift by a median of 2.2×. The algorithm aliases the received signal in the time domain before taking its FFT, performs a subsampled FFT on the aliased signal, subsamples the FFT of the satellite CDMA code, and multiplies the resulting samples with the aliased subsampled FFT. It then performs the IFFT, whose output is aliased in the time domain, and picks the shift that maximizes the correlation [11]. The algorithm developed in this research does not compete with the algorithms already in the market, as its main focus is on compressing the signal that is then used by those other algorithms.
Three contributors to the frequency offset must be considered in the acquisition search: the frequency uncertainty and noise of the TCXO-generated frequency, the Doppler effect due to satellite motion (including rising and setting GPS satellites), and the receiver motion. For a receiver under static conditions, the most significant contributor to frequency offset is the satellite motion, which is about 4.2 kHz [1]. However, under high dynamic conditions, signals exhibit significant Doppler frequency shifts, which hinders fast acquisition of signals; when the maximum velocity of the satellite is combined with a very high user velocity, values approach as high as 10 kHz [12].
The signal search and acquisition become important when the receiver is looking for several satellites at the same time, i.e., in parallel. A typical standalone GPS receiver can acquire signals down to about −160 decibel-milliwatts (dBm) and might require a minute or more to obtain a position from a cold start. GPS receivers usually include some degree of parallelism; consider a receiver having N channels, in which each channel is dedicated to searching for signals with a different PRN sequence. Within a channel, the frequency and code-phase search spaces are further divided into several windows [6].
Parallelism can be implemented in hardware using massively parallel correlators, or in software using fast Fourier transform-based techniques [13], where the massively parallel correlation is done by exploiting a property of the Fourier transform. This approach requires having all the IF samples in RAM, where they can be transformed to the frequency domain, multiplied, and finally inverse-transformed back to the time domain. This yields the same results as the standard hardware approach. However, due to the large amount of data received from the IF, this store-and-process approach requires a large amount of hardware or sufficient central processing unit (CPU) capacity.
Teixeira and Miralles developed a basic correlator using MATLAB and Simulink to validate the results and performance when actual GPS satellite signal records are used, and formulated and implemented alternative parallel architectures that perform a circular correlation by decomposing the initial circular correlation into several smaller ones, which are independent and can be processed in parallel. When applied to GNSS signals, FFT-based parallel code-phase search (PCS) has advantages for hardware-based implementations using field-programmable gate arrays. The parallel architectures implemented include radix FFTs, multipliers, adders, and NCOs. They also coded the QuickSync algorithm, which exploits the sparse nature of the synchronization problem and relies on an important property: aliasing a signal in the time domain is equivalent to subsampling its spectrum [10]. The authors are in favor of software-defined radio (SDR), and the work presented provides a set of functional tools that allow pretesting initial prototypes of the GNSS-SVD-C algorithm.
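The aliasing property that QuickSync relies on — folding a signal in the time domain is equivalent to subsampling its spectrum — can be checked directly; the signal length and folding factor below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 1024, 4          # signal length and aliasing (folding) factor
B = n // d              # aliased length (number of "buckets")

x = rng.standard_normal(n)

# Alias (fold) the signal in time: sum the d segments of length B.
y = x.reshape(d, B).sum(axis=0)

# The FFT of the aliased signal equals the d-fold subsampled FFT of x.
lhs = np.fft.fft(y)
rhs = np.fft.fft(x)[::d]
print(np.allclose(lhs, rhs))  # True
```

This is why an algorithm can trade frequency resolution for speed: a length-B FFT of the folded signal costs O(B log B) yet still exposes every d-th spectral sample of the original signal.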
The development of software-based GNSS receivers is rapidly revolutionizing satellite-based navigation applications, and receiver technology needs to be updated efficiently to meet high positional accuracy requirements in noisy environments. As discussed before, acquisition based on spread spectrum technology is an essential process for identifying satellites; with the development of GNSS and the emergence of multisystem joint positioning, receiver design is moving towards more data processing and, therefore, the hardware scale needs to grow. The fundamental cause is that most of the sampled data are obtained by applying the Nyquist-Shannon sampling theorem [14]. The theorem states that a signal can be exactly reproduced if it is sampled at a frequency F greater than twice the maximum frequency in the signal [15]. However, even though this is a sufficient condition for accurate recovery, it is not a necessary one. This condition increases the computation time and cost of modern wideband receivers, and in real applications, sampling at the Nyquist rate usually produces a high number of samples. Additionally, the front-end design of future GNSS receivers must meet the needs of multi-navigation-signal reception; thus, the instantaneous bandwidth of the RF front end increases, which in turn increases the complexity of baseband signal processing [16]. The bandwidth of the receiver should be large enough to avoid signal-to-noise ratio (SNR) loss. This generally requires higher sampling rates with an attendant increase in power consumption and processing loads, a factor that is detrimental to low-cost and low-power consumer applications [6].
Song proposed a faster acquisition algorithm via subsampled FFT. The algorithm first downsamples by a factor 'd', then multiplies the FFT of the received signal with the FFT of the locally-generated PRN code, and takes the IFFT of the resulting signal, which produces a single spike at the correct time shift [17]. The problem with this algorithm is that the downsampling factor 'd' increases the noise contamination linearly, even though the computation time decreases considerably. The truncation of PRN sequences leads to a reduction in the correlation of the GPS signals and may not be an appropriate solution. Fortin and Landry identified GNSS signal characteristics and addressed them with a universal acquisition and tracking channel, proposing an architecture that allows sequential acquisition and tracking of any chipping rate, carrier frequency, FDMA channel, modulation—i.e., BPSK(q) or QPSK(q), sin/cos BOC(p, q), CBOC(r, p, Pr), and TMBOC(r, p, )—or constellation, where a mobile device could integrate fewer universal channels, securing signal availability and minimizing power consumption and chip size; the results show a 66% increase in power consumption compared with the established reference [18]. The design principles align very well with this research in the sense that they identify the need to design new receivers to accommodate the increasing demands of new GNSS signals.
In recent years, the CS approach has been proven to effectively reduce the number of measurement samples required for digital signal acquisition systems. Compressed sensing, also known as compressive sensing, is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Shannon-Nyquist sampling theorem [
19]. This research recommends an efficient method to acquire a GNSS signal using compressed sensing. Fortunately, the GPS signal, like any wireless RF signal, is relatively sparse [20]. The method proposed in this paper is a novel CS technique that requires low computation and regular hardware size, completes the acquisition process faster, and acquires weak signals down to about −160 dBm.
An extensive description of CS theory is given in Section 2 and Section 3. The central problem of compressed sensing is the reconstruction of the high-dimensional sparse signal representation x from a low-dimensional linear observation b = Ax.
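As a toy instance of this central problem, the sketch below solves basis pursuit (minimize the ℓ1 norm of x subject to Ax = b) as a linear program by splitting x = u − v with u, v ≥ 0. The dimensions and the random Gaussian matrix are illustrative, and this is a generic CS demonstration, not the algorithm proposed in this paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 30, 15, 2                 # ambient dimension, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true                      # low-dimensional linear observation

# LP form: minimize sum(u) + sum(v) = ||x||_1 subject to A(u - v) = b, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(A @ x_hat - b) < 1e-6)   # True: solution is feasible
```

By construction, the LP solution is feasible and its ℓ1 norm is no larger than that of the true sparse vector; for sufficiently many random measurements relative to the sparsity, CS theory predicts exact recovery.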
A study by Hansen and Li performed a preliminary exploration of CS theory applied to GPS systems in 2012 [21]. They utilized the classic random binary matrix to observe the GPS signal and then adopted the reduced multiple-measurement-vector boost algorithm to reconstruct the signal. However, the signal reconstruction algorithm is very complex, as the scheme is based on multiple-measurement CS theory. Also in 2012, Kong proposed a two-stage compressed sensing algorithm taking a specifically structured matrix as the measurement matrix and employing multiple Walsh-Hadamard transforms as the signal reconstruction algorithm [22], though the two-stage approach leads to much higher algorithmic complexity. Additionally, the algorithm can be used only to acquire strong GPS signals, which is not always the case.
Ou et al. developed a novel scheme based on CS that achieves the transform sparsity of GNSS signals by utilizing the Gaussian random matrix and recovers the signal by using the single-measurement orthogonal matching pursuit (OMP) algorithm [23]. This scheme has an extra carrier-to-noise ratio (CNR) loss problem, and the extra CNR loss caused by the CS algorithm is inversely proportional to the compression ratio. The research is useful in the sense that it indicates how to select a measurement matrix with better anti-noise performance and how to choose the best-performing signal reconstruction algorithm for different compression ratios, increasing the coherent integration and the number of non-coherent integrations.
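For context, a compact sketch of the OMP idea used in that scheme is shown below. This is generic textbook OMP, not Ou et al.'s exact implementation; the trivial orthonormal-dictionary check at the end is only an illustration.

```python
import numpy as np

def omp(A, b, k):
    """Greedy orthogonal matching pursuit: select k columns of A to explain b."""
    residual = b.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit b on the selected columns (least squares) and update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Trivial check with an orthonormal dictionary: recovery is exact.
b = np.array([0.0, 3.0, 0.0, -2.0, 0.0])
x_hat = omp(np.eye(5), b, k=2)
print(np.allclose(x_hat, b))  # True
```

Each iteration adds at most one atom to the support, so the returned estimate has at most k nonzero entries, which is what makes OMP attractive when the sparsity level is known in advance.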
To solve the problems mentioned previously, a novel GNSS signal acquisition scheme based on compressed sensing is proposed in this research. The main focus is on ℓ1-minimization decoding models because ℓ1 minimization has the following two advantages: (a) the flexibility to incorporate prior information into decoding models; and (b) uniform recoverability [24]. A critical aspect regarding uniform recoverability is that recoverability is essentially invariant with respect to different types of random matrices. This means that the random matrix does not have to be a random Gaussian or a random Bernoulli matrix with rather restrictive conditions, such as zero mean, which are computationally expensive [22].
In real applications, either the measurements are noisy, the signal sparsity is inexact, or both. Here, inexact sparsity refers to the situation where a signal contains a small number of components that are significant in magnitude, while the magnitudes of the rest are small but not necessarily zero. Such approximately sparse signals are compressible, too [24]. CS is an emerging methodology with a solid theoretical foundation that is still evolving. Most previous analyses in CS theory relied on the restricted isometry property (RIP) of the measurement matrix A; these analyses can be called matrix-based. The non-RIP analysis, on the other hand, is subspace-based and utilizes the classic KGG (Kashin, Garnaev, and Gluskin) inequality to supply the order of recoverable sparsity [24].
Chang proposed a CS method to enhance GNSS signal acquisition performance in the presence of interference. The interference is mitigated through the orthogonality between the interference and the desired signal using the subspace projection method. Meanwhile, the RIP can be preserved by projecting the Toeplitz-structured sensing matrix to ensure that the linear projection of the signal retains its original structure and allows the recovery of the correlation output (sparse signal) [16]. This method is aligned with the compressive sensing topic of this research in the sense that it is subspace-based, but it still uses the RIP approach for theoretical soundness.
The proposed CS model for the GNSS signal includes the three aspects shown in
Figure 4. The first part is the sparse representation of the signal, which consists of Toeplitz matrix design and sparse decomposition via matrix multiplication. The second part of this model is the compressed transmission: by linearly transforming the observation vector, the dimension can be reduced to far less than the original signal dimension. The third part is the reconstruction of the GNSS signal; since the observation vector can be calculated from the left and right singular vectors, the essence of the reconstruction is completed by using the convex relaxation method to match the original GNSS signal. As part of this research, the GNSS-SVD-Convex algorithm is proposed to compress and reconstruct the signal.
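The data flow of the three parts can be sketched abstractly as follows. This is a hypothetical illustration only: the matrix sizes, the ±1 generating sequence, and the variable names are assumptions for the example, not the GNSS-SVD-Convex algorithm itself.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(7)
n, m = 256, 64                       # original and compressed dimensions

# Part 1: Toeplitz sensing matrix built from a +/-1 sequence (illustrative).
seq = rng.choice([-1.0, 1.0], size=n + m - 1)
Phi = toeplitz(seq[m - 1::-1], seq[m - 1:]) / np.sqrt(m)   # m x n Toeplitz

# A sparse stand-in for the correlation output to be sensed.
x = np.zeros(n)
x[[10, 100, 200]] = [1.0, -0.5, 2.0]

# Part 2: compressed transmission — the observation lives in dimension m << n.
y = Phi @ x

# Part 3 (setup): SVD of the sensing matrix; its singular vectors would feed
# the convex-relaxation reconstruction stage.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
print(y.shape, U.shape, s.shape, Vt.shape)  # (64,) (64, 64) (64,) (64, 256)
```

The reconstruction itself would then recover x from y by convex relaxation, e.g., an ℓ1-minimization solver as sketched earlier in this section.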