1. Introduction
Synthetic Aperture Lidar (SAL) is an advanced imaging technology that extends synthetic aperture techniques from the microwave band to the laser band. By utilizing the “motion” of a small optical aperture to synthesize an equivalent “large aperture”, SAL overcomes the resolution limitations imposed by traditional optical imaging systems. The shorter operating wavelength of SAL enables shorter imaging times and higher spatial resolution compared to Synthetic Aperture Radar (SAR), offering significant potential for applications such as military target detection.
As early as 1994, the Lincoln Laboratory at MIT demonstrated two-dimensional SAL imaging [1]. In 2002, the U.S. Navy Laboratory achieved a range resolution of 170 μm and an azimuth resolution of 90 μm for cooperative targets at a distance of 30 cm [2]. Similarly, the Aerospace Information Research Institute successfully conducted two-dimensional SAL imaging of fixed diffuse targets at a distance of 2 m, achieving a range resolution of 60 μm and an azimuth resolution better than 50 μm [3]. Lockheed Martin Corporation obtained imaging results with a 1 m field of view and a resolution better than 3.3 cm for targets at a distance of 1.6 km [4]. Additionally, Raytheon and the U.S. Air Force Laboratory launched experimental satellites equipped with SAL systems.
In China, several institutions are leading research efforts in SAL technology, including the Xi’an University of Electronic Science and Technology; the Shanghai Institute of Optics and Fine Mechanics; and the Aerospace Information Research Institute, Chinese Academy of Sciences (formerly the Institute of Electronics, Chinese Academy of Sciences). In 2011, the Shanghai Institute of Optics and Fine Mechanics achieved SAL imaging with a resolution of 1.4 mm (azimuth) × 1.2 mm (range) for targets at a distance of 14 m [5]. In the same year, the Aerospace Information Research Institute achieved an imaging resolution of 233 μm (azimuth) × 170 μm (range) for targets at a distance of 2.4 m [6]. In 2017, the Aerospace Information Research Institute conducted an airborne SAL experiment with cooperative targets at a distance of 2.5 km, achieving centimeter-scale resolution [7].
During the development of SAL, the frequency modulation nonlinearity of the transmitted signal was identified as a significant factor adversely affecting the imaging quality of Frequency Modulated Continuous Wave (FMCW) Lidar. To address this challenge, it is necessary to estimate the frequency modulation nonlinearity errors of the transmitted signal and subsequently correct the nonlinear errors in the difference frequency signal. The concept of phase recovery algorithms for such problems dates back to the 1950s, when Sayre et al. proposed methods to mitigate blurriness in optical imaging caused by light wave interference [8]. The core idea involves constrained replacement and transformation in both the spatial and frequency domains to recover the original signal from amplitude values in a transformed domain [9].
Currently, researchers employ two main categories of methods to correct nonlinear errors. The first method involves setting up a reference channel to monitor time-varying optical frequencies in real time, using its output as a clock signal to sample the difference frequency signal at equal optical frequency intervals [10,11]. This approach effectively avoids frequency nonlinearity by replacing equal-time sampling with equal optical frequency sampling. However, this method must adhere to the Nyquist sampling theorem, and the maximum detectable range is limited by the reference channel’s distance [12,13].
The second method estimates and corrects the nonlinear components of the difference frequency signal after acquisition using information from the reference channel. This approach is advantageous because it is not constrained by the reference channel’s maximum detection range. In 2007, Tae-Jung Ahn and colleagues from the Gwangju Institute of Science and Technology proposed a nonlinear compensation method based on the Hilbert transform, utilizing time-varying frequency information from the reference channel to correct nonlinear errors in the difference frequency signal [14,15]. While effective in eliminating nonlinear components, this method requires complex calculations to address phase wrapping issues [16]. In 2009, Kivilcim Yuksel and colleagues from the Faculty of Engineering in Belgium converted the time-varying phase of the reference channel into amplitude changes, using envelope detection to extract time-varying frequencies and perform equal optical frequency resampling of the difference frequency signal [17]. Although this method reduces computational complexity, it performs poorly in estimating nonlinear noise errors.
While these conventional methods have demonstrated effectiveness in certain scenarios, they exhibit inherent limitations when dealing with complex, high-order nonlinear phase distortions due to their reliance on linear approximations and local optimization frameworks.
Recent advances in phase compensation have introduced orthonormal complete basis functions as a powerful mathematical tool for representing arbitrary wavefront distortions. Notable developments include methods based on orthogonal polynomial inverses [18] and orthogonal basis expansion-based phase noise suppression [19]. These basis function methods decompose phase errors into orthogonal components, enabling efficient representation and compensation of high-order aberrations. More sophisticated approaches have further enhanced performance by integrating basis function expansions with compressive sensing theory [20] or deep learning architectures [21].
However, critical challenges remain in current implementations: (1) the selection of basis functions often lacks physical justification grounded in SAL imaging principles, and (2) fixed-order truncation strategies may either underfit complex distortions or overfit noise.
When performing heterodyne coherent detection of FMCW Lidar signals, the difference frequency signal contains multiple components from the reference signal. Accurately extracting the reference signal from the multi-component difference frequency signal is critical for compensating frequency modulation nonlinearity errors. Therefore, it is essential to develop a robust nonlinear error reconstruction method compatible with existing coherent detection systems to enhance imaging performance.
This paper proposes a physics-informed adaptive method to measure the nonlinearities of both the transmitted and reference signals, compensating for the reference signal’s nonlinearities in the echo difference frequency signal. The key contributions can be summarized as follows:
This study innovatively proposes an initial phase reconstruction and compensation method based on orthonormal complete basis functions, which establishes a theoretical connection between the physical origins of SAL phase errors and the mathematical properties of the basis functions, enabling physically meaningful mode selection.
Experimental data demonstrate its effectiveness in improving system range resolution and reducing the peak side lobe ratio by 3 dB across various target scenarios.
2. Theoretical Principles
2.1. Analysis of Nonlinear Phase Errors
In long-distance coherent detection, various nonlinear effects, such as laser linewidth and frequency instability, inevitably introduce nonlinear phase noise [22]. This phase noise disrupts the phase relationship between the echo signal and the local oscillator light, degrading the imaging performance [23]. To achieve equivalent coherence between the echo and the local oscillator in the digital domain and to extract image information from distant targets, it is crucial to accurately measure the nonlinear phase noise of the emitted light and compensate for it in the echo data [24].
Specifically, in the digital domain, advanced signal processing techniques are employed to compensate for nonlinear phase noise. This compensation focuses the energy of the interference signal, achieving detection sensitivity and resolution equivalent to analog coherence. At the same time, it provides noise-free laser phase information, enabling high-resolution imaging in the azimuth direction. The following section provides a theoretical analysis of digital coherent imaging. Taking linear frequency modulation as an example, the emitted signal can be expressed as
Similarly, the local oscillator light is represented as
where f0 is the laser frequency, K is the frequency modulation rate, and et(t) and eref(t) represent the nonlinear phase noise of the emitted laser and the local oscillator laser, respectively. For simplicity, the signal amplitude is omitted since it does not affect the analysis. The echo signal from any scattering center on the target can be expressed as follows:
where R is the distance from the scattering center to the Lidar phase center, and c is the speed of light. After coherent detection, the photoelectric current is given by
where “*” denotes the complex conjugate and τ = 2R/c is the time delay. The time-domain sampled data of the photoelectric current are arranged into a two-dimensional matrix according to the signal modulation period. The time t can be expressed as the sum of the range time tr and the azimuth time ta, i.e., t = tr + ta. Considering the displacement of the scattering center during azimuth time, the expression for I(tr, ta) becomes
where v is the velocity of the scattering center in azimuth time. By further simplifying and ignoring fixed initial phases and smaller phase variations, the expression reduces to
where 2KR/c represents the range frequency, 2v/λ represents the azimuth frequency, and λ is the laser wavelength. A two-dimensional Fourier transform applied to the above expression in range and azimuth yields the image, Img(fr, fa), of the scattering center:
In this expression, ⊗ denotes convolution, and the term following the convolution symbol represents the error term. The function corresponding to all scattering centers provides the distance–velocity information of the target. Due to the generation mechanism of laser nonlinear phase noise, the spectrum of the error term exhibits a Lorentzian shape with a bandwidth approximately twice the laser linewidth. When convolved with the two-dimensional delta function, this error term causes the image to become completely defocused, severely impacting imaging quality. Accurate compensation of the nonlinear phase noise is therefore critical to achieving high-resolution SAL imaging, particularly for long-distance targets.
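For orientation, the block below is a compact sketch of a standard LFM heterodyne signal model consistent with the definitions above (amplitudes omitted); the conjugation convention and exact normalization are assumptions and may differ from the forms used in the original equations.

```latex
% Sketch of a standard LFM (FMCW) heterodyne model consistent with the
% definitions above; amplitudes omitted, conventions assumed.
\begin{align}
  s_t(t)    &= \exp\!\left\{ j\!\left[ 2\pi f_0 t + \pi K t^2 + e_t(t) \right] \right\}
             && \text{transmitted signal} \\
  s_{lo}(t) &= \exp\!\left\{ j\!\left[ 2\pi f_0 t + \pi K t^2 + e_{ref}(t) \right] \right\}
             && \text{local oscillator} \\
  s_r(t)    &= s_t(t-\tau), \qquad \tau = \frac{2R}{c}
             && \text{echo of a scatterer at range } R \\
  I(t)      &= s_{lo}(t)\, s_r^{*}(t)
             = \exp\!\left\{ j\!\left[ 2\pi K \tau\, t + 2\pi f_0 \tau - \pi K \tau^2
               + e_{ref}(t) - e_t(t-\tau) \right] \right\}
             && \text{beat signal, range frequency } K\tau = \tfrac{2KR}{c}
\end{align}
```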
2.2. Nonlinear Reconstruction Analysis
Based on the derivation above, an external interference method can be employed to reconstruct the nonlinear phase noise of the system. Since the reference signal and the transmitted signal undergo different linear modulation processes, their nonlinear errors cannot cancel each other out. As shown in Figure 1, a self-calibrating system is established to indirectly measure the laser phase. In this system, the laser output from the seed source is divided into two beams. One beam enters the modulator (shown as an orange square in Figure 1) for linear modulation, and the modulated laser is amplified by the optical amplifier before being transmitted to the target. The other beam, serving as the reference laser, is modulated in a second modulator (also an orange square in Figure 1), amplified, and then quadrature-demodulated with the target-reflected light before entering the balanced detector, followed by data acquisition and processing. The self-calibrating signal is generated by the coherent mixing of the transmitted signal and the reference signal.
To compensate for the error terms, it is necessary to simultaneously measure both the self-calibrating signal and the target echo. The nonlinear phases of the self-calibrating signal and the echo signal can be expressed as follows:
where φref is the nonlinear phase of the self-calibrating signal, φif is the nonlinear phase of the echo signal, τref is the reference delay, eref is the reference nonlinearity, and et is the transmitted nonlinearity.
The nonlinear phase noise of the reference signal and the transmitted signal can be reconstructed using a set of orthonormal complete functions {ψk(x)}, where k = 0…∞. In practice, the nonlinear phase is expressed as a truncated expansion of the form φ(x) ≈ ∑k ckψk(x), where ψk(x) represents the orthogonal basis functions, ck represents the coefficients to be fitted, and K is the number of basis functions retained.
Nonlinear disturbances during laser transmission can cause echo energy to disperse over different ranges, thereby degrading imaging quality. The frequency modulation nonlinear errors in the transmitted signal are characterized by deviations from the ideal linear frequency–time relationship, leading to distortions or fluctuations in the instantaneous frequency curve. For the reference signal, these nonlinear errors primarily stem from the nonlinear response of the modulator during signal generation or instability in the laser source itself.
The nonlinear phase errors of the reference and transmitted signals typically contain multiple frequency components, including low-frequency trend errors and high-frequency random disturbances. These errors are generally smooth and continuous on a global scale but can exhibit drastic and complex variations locally. To accurately describe such characteristics, a combination of mixed orthogonal basis functions is required, capable of capturing both smooth and oscillatory features.
For the low-frequency, smooth part of the nonlinear phase, polynomial basis functions (e.g., Legendre or Chebyshev polynomials) are suitable for describing slowly varying trends. For high-frequency oscillatory components, Fourier basis functions, such as ψk(x) = exp(j2πkx), or trigonometric basis functions, cos(ωjx) and sin(ωjx), are used to represent periodic or oscillatory errors. Considering the spectral distribution of the error (as shown in Figure 2), an appropriate orthogonal complete basis function set {ψk(x)} can be selected.
Specifically, the orthogonal basis is constructed as a mixed combination of polynomial and trigonometric basis functions. The nonlinear phase of the transmitted signal can be approximated as Equation (7), a mixed expansion of the form et(t) ≈ ∑k ckPk(t) + ∑j [aj cos(ωjt) + bj sin(ωjt)]. Here, Pk(t) represents the polynomial basis functions, such as Legendre or Chebyshev polynomials, while cos(ωjt) and sin(ωjt) are the trigonometric basis functions; ck, aj, and bj are the coefficients to be estimated; K1 and K2 are the orders of the polynomial and trigonometric expansions (tunable parameters); and ωj denotes the frequency parameters of the trigonometric basis functions (equidistant or chosen from the data distribution). By substituting Equation (7) into the difference of the two nonlinear phase expressions in Equation (6), we obtain the following:
To reconstruct the nonlinear phase, the coefficients aj, bj, ωj, and ck are estimated using the least squares (LS) method. The estimated coefficients are then substituted back into Equation (7) to obtain the transmitted nonlinear phase estimate, and this estimate is in turn substituted into Equation (6) to obtain the reference nonlinear phase error estimate.
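For illustration, the following Python sketch shows one way to carry out this least-squares step: it builds a mixed Legendre-plus-trigonometric design matrix of the form assumed in Equation (7) and fits the coefficients to a measured nonlinear phase. The function names, the choice of Legendre polynomials, and the equidistant frequency grid are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np
from numpy.polynomial import legendre

def mixed_basis_matrix(t_norm, K1, K2, omegas):
    """Design matrix of Legendre polynomials plus cos/sin terms (assumed basis)."""
    cols = [legendre.Legendre.basis(k)(t_norm) for k in range(K1 + 1)]
    for j in range(K2):
        cols.append(np.cos(omegas[j] * t_norm))
        cols.append(np.sin(omegas[j] * t_norm))
    return np.column_stack(cols)

def fit_nonlinear_phase(t, phase, K1=5, K2=8):
    """Least-squares estimate of the mixed-basis coefficients; returns the fitted phase."""
    # Normalize time to [-1, 1], the natural domain of the Legendre polynomials.
    t_norm = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    omegas = 2.0 * np.pi * np.arange(1, K2 + 1)      # equidistant frequencies (assumption)
    A = mixed_basis_matrix(t_norm, K1, K2, omegas)
    coeffs, *_ = np.linalg.lstsq(A, phase, rcond=None)
    return A @ coeffs                                 # reconstructed nonlinear phase

# Toy example: recover a synthetic nonlinear phase from a noisy measurement.
t = np.linspace(0.0, 32e-6, 4800)                     # one 32 us sweep
true_phase = 40.0 * (t / t.max())**2 + 1.5 * np.sin(2 * np.pi * 3 * (2 * t / t.max() - 1))
measured = true_phase + 0.05 * np.random.randn(t.size)
recon = fit_nonlinear_phase(t, measured)
print("RMS reconstruction error (rad):", np.sqrt(np.mean((recon - true_phase) ** 2)))
```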
The reconstructed nonlinear phase can then be used to directly compensate for the reference nonlinearity. The compensated signal is expressed as
To address the remaining transmitted nonlinear error, which is related to the target distance, an RVP (Residual Video Phase) filter is applied. Its correction function is given by exp(jπf²/K), where f is the frequency. The corrected beat frequency signal becomes
Compensation for etRVP(t) is performed using the following formulas:
By multiplying Equation (11) by Equation (10), the nonlinear phase related to the fast time t is compensated, and the target’s distance information can be extracted via FFT, as shown in Equation (13), where the reference distance appears as a known term.
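The compensation chain described above can be sketched in Python as follows; the FFT-based application of the exp(jπf²/K) RVP correction and the variable names are illustrative assumptions rather than the authors’ processing code.

```python
import numpy as np

def compensate_beat_signal(beat, fs, K, e_ref):
    """Sketch: reference-nonlinearity removal, RVP filtering, and range FFT.

    beat  : complex beat (difference-frequency) signal of one sweep
    fs    : sampling frequency [Hz]
    K     : frequency modulation rate [Hz/s]
    e_ref : reconstructed reference nonlinear phase [rad], same length as beat
    """
    # (i) Remove the reconstructed reference nonlinear phase.
    sig = beat * np.exp(-1j * e_ref)

    # (ii) Residual video phase correction exp(j*pi*f^2/K), applied in the frequency domain.
    f = np.fft.fftfreq(sig.size, d=1.0 / fs)
    sig = np.fft.ifft(np.fft.fft(sig) * np.exp(1j * np.pi * f**2 / K))

    # (iii) Range profile via FFT; a beat frequency f_b maps to range R = c*f_b/(2K).
    spectrum = np.fft.fftshift(np.fft.fft(sig * np.hanning(sig.size)))
    ranges = 3e8 * np.fft.fftshift(f) / (2.0 * K)
    profile_db = 20 * np.log10(np.abs(spectrum) / np.abs(spectrum).max() + 1e-12)
    return ranges, profile_db
```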
2.3. Simulation Experiment
In long-distance imaging using frequency-modulated continuous-wave (FMCW) Lidar, the short wavelength of laser signals makes them highly sensitive to various error sources. The nonlinear phase errors and noise in a system can arise from various factors, such as the non-ideal characteristics of hardware devices, external environmental interference, and inaccuracies in signal processing.
Nonlinear errors typically originate from nonlinear behaviors in the system, such as nonlinear phase distortion in optical systems, nonlinear effects in amplifiers within electronic circuits, and nonlinear computational errors in signal processing algorithms. Nonlinear errors can be expressed in the form of a polynomial, such as
where a1, a2, etc., are the coefficients of the nonlinear error terms, and t is the time or another relevant parameter. Noise typically originates from the following sources: thermal noise, which is random noise caused by thermal motion in hardware devices; quantization noise, which comprises errors introduced during signal digitization due to limited resolution; and environmental noise, which includes interference from external environments, such as electromagnetic interference.
Noise is often assumed to be a random signal and can be described using probability distributions. For example, Gaussian noise can be described as n(t) ∼ N(0, σ²), where σ² is the variance of the noise, representing the noise intensity. Nonlinear errors and noise often coexist, so the total phase of the signal can be expressed as ϕ(t) = ϕideal(t) + ϕerror(t) + n(t), where ϕideal(t) is the ideal phase, ϕerror(t) represents the nonlinear error, and n(t) is the noise.
We assumed that the FMCW Lidar system has the following parameters: laser wavelength (λ = 1.55 μm), sweep period (PRT = 32 μs), emission bandwidth (Br = 5 GHz), and sampling frequency (Fs = 150 MHz). Using the nonlinear error reconstruction and compensation algorithm, we performed nonlinear phase reconstruction and compensation for a point target located at 15 m within the same scene. By tracking the phases of the internal calibration signal and the indoor echo signal, the model proposed in this article was utilized to separately reconstruct the nonlinear components of the transmitted signal and the reference signal.
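As a quick sanity check on these parameters (a sketch, not the simulation code itself), the beat frequency of the 15 m target and the theoretical range resolution can be computed directly:

```python
c, Br, PRT, Fs, R = 3e8, 5e9, 32e-6, 150e6, 15.0   # stated system parameters and target range

K   = Br / PRT          # frequency modulation rate ≈ 1.56e14 Hz/s
f_b = 2 * K * R / c     # beat frequency of the 15 m target ≈ 15.6 MHz (< Fs/2 = 75 MHz)
dR  = c / (2 * Br)      # theoretical range resolution = 3 cm

print(f"K = {K:.3e} Hz/s, beat frequency = {f_b / 1e6:.2f} MHz, resolution = {dR * 100:.1f} cm")
```

The resulting 3 cm figure is consistent with the post-compensation resolution reported below, and the 15.6 MHz beat frequency lies well within the 75 MHz Nyquist limit.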
The reconstruction results are compared with the ideal values, as shown in Figure 3. Figure 3a shows the transmitted signal: the actual simulated error is represented by the black line, while the reconstructed nonlinearity is indicated by the red curve. The reconstruction closely matches the actual error, demonstrating the accuracy of the proposed method. Figure 3b shows the reference signal; again, the actual simulated error is represented by the black line and the reconstructed nonlinearity by the red curve. The reconstructed results align very well with the actual error, further validating the effectiveness of the algorithm.
The method proposed in this article directly compensates for the reference nonlinear phase error in the echo signal. Additionally, a Residual Video Phase (RVP) filter is applied to remove the range-dependent nonlinear error in the transmitted signal, enabling precise correction. The comparison of range resolution is shown in Figure 4. The blue line represents the focusing results in the range direction before compensation, while the red line represents the distribution after compensation. The range resolution before compensation is approximately 3 m, whereas the range resolution after compensation improves to 3 cm, indicating a dramatic enhancement in range imaging performance. This simulation demonstrates the effectiveness of the proposed nonlinear error reconstruction and compensation algorithm, achieving a substantial improvement in range resolution and overall imaging quality.
3. Results
A cooperative target was positioned at a distance of 4.3 km from the Lidar, as illustrated in Figure 5. The target comprises a rectangular metal plate, equipped with fixed mounting holes, onto which five optical corner cube reflectors with black borders are securely attached. These corner cube reflectors are designed to significantly enhance the reflection intensity of the laser signal, thereby enabling the Lidar system to achieve high-precision detection and imaging. In Figure 5, the background corresponds to the metal plate, the white dots represent the screws on the plate, and the black objects depict the mechanical housings of the optical corner cubes. To provide a clear reference for the target’s dimensions, a scale bar indicating the size of the target is included in the figure.
Figure 6 illustrates the nonlinear phase measurements and reconstructions. Figure 6a,b display the measured internal calibration nonlinear phase and the echo nonlinear phase, respectively. Figure 6c,d show the reconstructed nonlinear phases of the reference signal and the transmitted signal. It is evident from the figures that there are noticeable differences between the nonlinear phase errors of the reference signal and the transmitted signal.
The reconstructed nonlinear phase errors were subsequently used to compensate the echoes from different targets. Figure 7a shows the range-Doppler (RD) imaging results for the optical cones depicted in Figure 5. The target exhibits clear defocusing in both the range and azimuth directions. Using the orthonormal complete function reconstruction method, the nonlinear phase was obtained and applied for compensation. The imaging results after phase compensation are shown in Figure 7b, where defocusing in the range direction is effectively eliminated. Figure 7c further shows the result of applying azimuth phase gradient autofocus (PGA) to Figure 7b, demonstrating significant improvements in azimuth focusing as well.
To further validate the effectiveness of the proposed algorithm, the resolution and peak side lobe ratio for each cone were calculated. For cone 1, the range slices before and after nonlinear compensation are shown on the left and right sides of Figure 8, respectively. The analysis in Figure 8 indicates that the orthonormal complete function nonlinear reconstruction method performs effectively for range compensation, resulting in a narrower main lobe, reduced side lobes, and a significant improvement in resolution.
The resolution and peak side lobe ratio for each cone are summarized in Table 1. The comparative analysis of imaging results before and after phase compensation (Figure 8) revealed significant performance enhancements. Prior to compensation, the target range profile exhibited noticeable broadening, with dispersed main lobe energy and elevated side lobe levels (approximately −9 dB PSLR), primarily caused by coherence degradation due to nonlinear phase noise. Following compensation, the range profile demonstrated three key improvements. First, the average range resolution improved significantly, while the average peak side lobe ratio decreased by 3 dB. Second, substantial side lobe suppression was achieved, with the highest side lobe level reduced to −30 dB (an 8 dB improvement). Third, the target signal-to-noise ratio (SNR) increased by 7 dB. These improvements validate the effectiveness of the proposed phase compensation method in maintaining signal coherence, particularly demonstrating excellent correction capability for quadratic phase errors induced by laser frequency instability.
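For reference, the resolution and peak side lobe ratio reported in Table 1 can be estimated from a range slice with a simple routine such as the sketch below; it is a generic estimator under assumed conventions (−3 dB width, main-lobe masking), not the authors’ evaluation code.

```python
import numpy as np

def resolution_and_pslr(profile_db, dr):
    """Estimate the -3 dB resolution and peak side lobe ratio of a range slice.

    profile_db : range profile in dB, normalized so the main-lobe peak is 0 dB
    dr         : range-bin spacing [m]
    """
    peak = int(np.argmax(profile_db))
    above = profile_db >= -3.0
    # Grow the main-lobe region outward from the peak while samples stay above -3 dB.
    left, right = peak, peak
    while left > 0 and above[left - 1]:
        left -= 1
    while right < profile_db.size - 1 and above[right + 1]:
        right += 1
    resolution = (right - left + 1) * dr
    # Highest side lobe outside the main lobe, in dB relative to the peak.
    mask = np.ones_like(profile_db, dtype=bool)
    mask[left:right + 1] = False
    pslr = profile_db[mask].max()
    return resolution, pslr
```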
For a cooperative satellite model target located at 4 km, as shown in Figure 9a, the system directly obtained the RD imaging results shown in Figure 9b. These results exhibit defocusing in both the range and azimuth directions.
The impact of the atmosphere on SAL imaging primarily manifests in aspects such as scattering, absorption, cloud and fog obstruction, turbulence, and background noise. These factors can reduce the signal-to-noise ratio, resolution, and accuracy of imaging. In practical applications, the atmospheric effects can be mitigated, and the performance and adaptability of SAL imaging can be improved by selecting appropriate wavelengths, employing atmospheric correction algorithms, introducing adaptive optics technology, and integrating multi-sensor collaborative observation.
1. Scattering
Molecules and aerosol particles in the atmosphere scatter laser signals, reducing the signal-to-noise ratio (SNR) and causing image blurring or distortion, especially in regions with high concentrations of atmospheric particles. Scattering effects can be categorized into two main types. The intensity of Rayleigh scattering is inversely proportional to the fourth power of the wavelength (1/λ⁴); as a result, shorter wavelengths (e.g., visible light) are more susceptible to Rayleigh scattering, leading to signal attenuation and reduced spatial resolution. Mie scattering has a broader impact range and depends on both the laser wavelength and the particle size; it can cause signal deviation and uneven intensity, thereby affecting imaging accuracy.
2. Clouds and Haze
Clouds and haze severely obstruct the propagation of laser signals, increasing scattering and absorption effects, which further degrade imaging quality.
3. Atmospheric Turbulence
Atmospheric turbulence causes random fluctuations in the phase and amplitude of laser signals, which negatively impact imaging precision.
Using the proposed method, the reconstructed nonlinear phases of the transmitted and reference signals are sampled, and the imaging results after compensation are shown in Figure 9c. After compensation, the range image of the target is compressed, and the side lobes are significantly reduced.
Following nonlinear compensation in the range direction, pulse-by-pulse compensation for azimuth phase function errors is necessary. The specific compensation process includes phase gradient filtering, phase gradient error estimation, motion parameter estimation, construction of the phase compensation matrix, and phase error compensation imaging. This iterative processing improves the degree of imaging focus, resulting in the final imaging results for the satellite model, as shown in Figure 10. The results demonstrate that the proposed method achieves a high degree of focus in the range direction, significantly enhancing the imaging quality.
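As a rough illustration of this azimuth autofocus stage, the sketch below outlines a minimal phase gradient autofocus loop (center-shifting the dominant scatterer in each range bin, windowing, phase gradient estimation, integration, and compensation). It is a generic PGA outline under simplifying assumptions, not the processing chain actually used to produce Figure 10.

```python
import numpy as np

def pga_autofocus(img, n_iter=5, win=64):
    """Minimal phase gradient autofocus (PGA) sketch.

    img : complex range-compressed image, shape (n_range, n_azimuth)
    """
    data = img.copy()
    n_az = data.shape[1]
    for _ in range(n_iter):
        # 1. Circularly shift the brightest azimuth sample of each range bin to the center.
        shifted = np.empty_like(data)
        for r in range(data.shape[0]):
            peak = int(np.argmax(np.abs(data[r])))
            shifted[r] = np.roll(data[r], n_az // 2 - peak)

        # 2. Window around the center and transform to the azimuth phase-history domain.
        w = np.zeros(n_az)
        w[n_az // 2 - win // 2 : n_az // 2 + win // 2] = 1.0
        hist = np.fft.ifft(shifted * w, axis=1)

        # 3. Estimate the phase-gradient error (summed over range bins) and integrate it.
        corr = np.sum(np.conj(hist[:, :-1]) * hist[:, 1:], axis=0)
        phi = np.concatenate(([0.0], np.cumsum(np.angle(corr))))
        # Remove the linear trend so the compensation does not shift the image.
        phi -= np.polyval(np.polyfit(np.arange(n_az), phi, 1), np.arange(n_az))

        # 4. Compensate the phase error in the phase-history domain and transform back.
        data = np.fft.fft(np.fft.ifft(data, axis=1) * np.exp(-1j * phi), axis=1)
    return data
```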