Communication

Nonlinear Phase Reconstruction and Compensation Method Based on Orthonormal Complete Basis Functions in Synthetic Aperture Ladar Imaging Technology

1 National Key Laboratory of Microwave Imaging, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(8), 1480; https://doi.org/10.3390/rs17081480
Submission received: 18 February 2025 / Revised: 26 March 2025 / Accepted: 10 April 2025 / Published: 21 April 2025
(This article belongs to the Section Engineering Remote Sensing)

Abstract

By extending synthetic aperture technology from the microwave band to laser wavelengths, synthetic aperture ladar (SAL) achieves extremely high spatial resolution independent of the target distance in long-range imaging. Nonlinear phase correction is a critical challenge in SAL imaging. To address the issue of phase noise during the imaging process, we first analyze the theoretical impact of nonlinear phase noise on imaging performance. Subsequently, a reconstruction and compensation method based on orthonormal complete basis functions is proposed to mitigate nonlinear phase noise in SAL imaging. The simulation results validate the accuracy and robustness of the proposed method, while experimental data demonstrate its effectiveness in improving system range resolution and reducing the peak side lobe ratio by 3 dB across various target scenarios. This advancement establishes a solid foundation for the application of SAL technology in ground-based remote sensing and space target observation.

1. Introduction

SAL is an advanced imaging technology that extends synthetic aperture technology from the microwave band to the laser band. By utilizing the “motion” of a small optical aperture to synthesize an equivalent “large aperture”, SAL overcomes the resolution limitations imposed by traditional optical imaging systems. The shorter operating wavelength of SAL enables faster imaging times and higher spatial resolution compared to Synthetic Aperture Radar (SAR), offering significant potential for applications such as military target detection.
As early as 1994, the MIT Lincoln Laboratory demonstrated two-dimensional SAL imaging [1]. In 2002, the U.S. Naval Research Laboratory achieved a range resolution of 170 μm and an azimuth resolution of 90 μm for cooperative targets at a distance of 30 cm [2]. Similarly, The Aerospace Corporation conducted two-dimensional SAL imaging of fixed diffuse targets at a distance of 2 m, achieving a range resolution of 60 μm and an azimuth resolution better than 50 μm [3]. Lockheed Martin Corporation achieved imaging results with a 1 m field of view and a resolution better than 3.3 cm for targets at a distance of 1.6 km [4]. Additionally, Raytheon and the U.S. Air Force Research Laboratory launched experimental satellites equipped with SAL systems.
In China, several institutions are leading research efforts in SAL technology, including the Xi’an University of Electronic Science and Technology; Shanghai Institute of Optics and Fine Mechanics; and the Aerospace Information Research Institute, Chinese Academy of Sciences (formerly the Institute of Electronics, Chinese Academy of Sciences). In 2011, the Shanghai Institute of Optics and Fine Mechanics achieved SAL imaging with a resolution of 1.4 mm (azimuth) × 1.2 mm (range) for targets at a distance of 14 m [5]. In the same year, the Aerospace Information Research Institute achieved an imaging resolution of 233 μm (azimuth) × 170 μm (range) for targets at a distance of 2.4 m [6]. In 2017, the Aerospace Information Research Institute conducted an airborne SAL experiment with cooperative targets at a distance of 2.5 km, achieving centimeter-scale resolution [7].
During the development of SAL, the frequency modulation nonlinearity of the transmitted signal was identified as a significant factor adversely affecting the imaging quality of Frequency Modulated Continuous Wave (FMCW) Lidar. To address this challenge, it is necessary to estimate the frequency modulation nonlinearity errors of the transmitted signal and subsequently correct the nonlinear errors in the difference frequency signal. The concept of phase recovery algorithms to address such issues dates back to the 1950s when Sayre et al. proposed methods to mitigate blurriness in optical imaging caused by light wave interference [8]. The core idea involves constrained replacement and transformation in both the spatial and frequency domains to recover the original signal from amplitude values in a transformed domain [9].
Currently, researchers employ two main categories of methods to correct nonlinear errors. The first method involves setting up a reference channel to monitor time-varying optical frequencies in real time, using its output as a clock signal to sample the difference frequency signal at equal optical frequency intervals [10,11]. This approach effectively avoids frequency nonlinearity by replacing equal-time sampling with equal optical frequency sampling. However, this method must adhere to the Nyquist sampling theorem, and the maximum detectable range is limited by the reference channel’s distance [12,13].
The second method estimates and corrects the nonlinear components of the difference frequency signal after acquisition using information from the reference channel. This approach is advantageous because it is not constrained by the reference channel’s maximum detection range. In 2007, Tae-Jung Ahn and colleagues from the Gwangju Institute of Science and Technology proposed a nonlinear compensation method based on the Hilbert transform, utilizing time-varying frequency information from the reference channel to correct nonlinear errors in the difference frequency signal [14,15]. While effective in eliminating nonlinear components, this method requires complex calculations to address phase wrapping issues [16]. In 2009, Kivilcim Yuksel and colleagues from the Faculty of Engineering in Belgium converted the time-varying phase of the reference channel into amplitude changes, using envelope detection to extract time-varying frequencies and perform equal optical frequency resampling of the difference frequency signal [17]. Although this method reduces computational complexity, it performs poorly in estimating nonlinear noise errors.
While these conventional methods have demonstrated effectiveness in certain scenarios, they exhibit inherent limitations when dealing with complex, high-order nonlinear phase distortions due to their reliance on linear approximations and local optimization frameworks.
Recent advances in phase compensation have introduced orthonormal complete basis functions as a powerful mathematical tool for representing arbitrary wavefront distortions. Notable developments include the methods of orthogonal polynomial inverses [18] and orthogonal basis expansion-based phase noise suppression [19]. These basis function methods decompose phase errors into orthogonal components, enabling efficient representation and compensation of high-order aberrations. More sophisticated approaches have further enhanced performance by integrating basis function expansions with compressive sensing theory [20] or deep learning architectures [21].
However, critical challenges remain in current implementations: (1) the selection of basis functions often lacks physical justification from SAL imaging principles, and (2) fixed-order truncation strategies may either underfit complex distortions or overfit to noise.
When performing heterodyne coherent detection of FMCW Lidar signals, the difference frequency signal contains multiple components from the reference signal. Accurately extracting the reference signal from the multi-component difference frequency signal is critical for compensating frequency modulation nonlinearity errors. Therefore, it is essential to develop a robust nonlinear error reconstruction method compatible with existing coherent detection systems to enhance imaging performance.
This paper proposes a physics-informed adaptive method to measure the nonlinearities of both the transmitted and reference signals, compensating for the reference signal’s nonlinearities in the echo difference frequency signal. The key contributions can be summarized as follows:
  • This study proposes a nonlinear phase reconstruction and compensation method based on orthonormal complete basis functions, which establishes a theoretical connection between the physical origins of SAL phase errors and the mathematical properties of the basis functions, enabling physically meaningful mode selection.
  • Experimental data demonstrate its effectiveness in improving system range resolution and reducing the peak side lobe ratio by 3 dB across various target scenarios.

2. Theoretical Principles

2.1. Analysis of Nonlinear Phase Errors

In long-distance coherent detection, various nonlinear effects, such as laser linewidth and frequency instability, inevitably introduce nonlinear phase noise [22]. This phase noise disrupts the phase relationship between the echo signal and the local oscillator light, degrading the imaging performance [23]. To achieve equivalent coherence between the echo and the local oscillator in the digital domain and to extract image information from distant targets, it is crucial to accurately measure the nonlinear phase noise of the emitted light and compensate for it in the echo data [24].
Specifically, in the digital domain, advanced signal processing techniques are employed to compensate for nonlinear phase noise. This compensation focuses the energy of the interference signal, achieving detection sensitivity and resolution equivalent to analog coherence. At the same time, it provides noise-free laser phase information, enabling high-resolution imaging in the azimuth direction. The following section provides a theoretical analysis of digital coherent imaging. Taking linear frequency modulation as an example, the emitted signal can be expressed as
$E_T(t) = \exp\left[j2\pi f_0 t + j\pi K t^2 + j e_t(t)\right]$
Similarly, the local oscillator light is represented as
$E_I(t) = \exp\left[j2\pi f_0 t + j\pi K t^2 + j e_{ref}(t)\right]$
where f0 is the laser frequency, K is the frequency modulation rate, and et(t) and eref(t) represent the nonlinear phase noise of the emitted laser and local oscillator laser. For simplicity, the signal amplitude is omitted since it does not affect the analysis. The echo signal from any scattering center on the target can be expressed as follows:
$E_R(t) = \exp\left[j2\pi f_0\left(t - \frac{2R}{c}\right) + j\pi K\left(t - \frac{2R}{c}\right)^2 + j e_t\left(t - \frac{2R}{c}\right)\right]$
where R is the distance from the scattering center to the Lidar phase center, and c is the speed of light. After coherent detection, the photoelectric current is given by
$I(t) = E_I(t)E_R^*(t) = \exp\left[j\frac{4\pi K R}{c}t + j\frac{4\pi f_0 R}{c} - j\frac{4\pi K R^2}{c^2} + j e_{ref}(t) - j e_t\left(t - \frac{2R}{c}\right)\right]$
where “*” denotes the complex conjugate, and $\tau = 2R/c$ is the time delay. The time-domain sampled data of the photoelectric current are arranged into a two-dimensional matrix according to the signal modulation period. The time, t, can be expressed as the sum of range time tr and azimuth time ta, t = tr + ta. Considering the displacement of the scattering center during azimuth time, the expression for I(tr, ta) becomes
$I(t_r, t_a) = \exp\left[j\frac{4\pi K (R + v t_a)}{c}t_r + j\frac{4\pi f_0 (R + v t_a)}{c} - j\frac{4\pi K (R + v t_a)^2}{c^2} + j e_{ref}(t) - j e_t\left(t - \frac{2R}{c}\right)\right]$
where v is the velocity of the scattering center in azimuth time. By further simplifying and ignoring fixed initial phases and smaller phase variations, the expression reduces to
$I(t_r, t_a) = \exp\left[j2\pi\frac{2RK}{c}t_r + j2\pi\frac{2v}{\lambda}t_a + j e_{ref}(t) - j e_t\left(t - \frac{2R}{c}\right)\right]$
where 2KR/c represents the range frequency, 2v/λ represents the azimuth frequency, and λ is the laser wavelength. A two-dimensional Fourier transform applied to the above expression in range and azimuth yields the image, Img(fr, fa), of the scattering center:
$Img(f_r, f_a) = \mathcal{F}\left[I(t_r, t_a)\right] = \delta\left(f_r - \frac{2RK}{c},\, f_a - \frac{2v}{\lambda}\right) \otimes \mathcal{F}\left\{\exp\left[j e_{ref}(t) - j e_t\left(t - \frac{2R}{c}\right)\right]\right\}$
In this expression, ⊗ denotes convolution, and the term following the convolution symbol represents the error term. The function corresponding to all scattering centers provides the distance–velocity information of the target. Due to the generation mechanism of laser nonlinear phase noise, the spectrum of the error term exhibits a Lorentzian shape with a bandwidth approximately twice the laser linewidth. When convolved with the two-dimensional delta function, this error term causes the image to become completely defocused, severely impacting imaging quality. Accurate compensation of the nonlinear phase noise is therefore critical to achieving high-resolution SAL imaging, particularly for long-distance targets.
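To make the defocusing mechanism concrete, the short Python sketch below simulates the dechirped photocurrent of a single scatterer with and without a slowly varying phase-noise term and compares a simple focus metric of the two-dimensional FFT outputs. It is an illustration only; all numerical values (sample counts, noise level, target range and velocity) are assumptions, not parameters of the authors' system.

```python
import numpy as np

# Illustrative parameters (assumed, not the authors' system values)
c = 3e8
K = 5e9 / 32e-6              # chirp rate: 5 GHz swept over 32 us
lam = 1.55e-6                # laser wavelength
R, v = 15.0, 0.01            # scatterer range (m) and azimuth velocity (m/s)
Nr, Na = 1024, 256           # fast-time samples per sweep, number of sweeps
tr = np.arange(Nr) / 150e6   # fast time within one sweep
ta = np.arange(Na) * 32e-6   # slow time, one sample per sweep

# Ideal dechirped phase: range frequency 2KR/c, azimuth frequency 2v/lambda
phase_ideal = (2 * np.pi * (2 * K * R / c) * tr[None, :]
               + 2 * np.pi * (2 * v / lam) * ta[:, None])

# Stand-in for e_ref(t) - e_t(t - 2R/c): a slowly drifting random phase
rng = np.random.default_rng(0)
phase_noise = np.cumsum(rng.normal(0.0, 0.05, size=(Na, Nr)), axis=1)

img_clean = np.abs(np.fft.fft2(np.exp(1j * phase_ideal)))
img_noisy = np.abs(np.fft.fft2(np.exp(1j * (phase_ideal + phase_noise))))

# The peak-to-total-energy ratio collapses when the error term is present,
# i.e. the point response is smeared by convolution with the error spectrum.
print("focus metric, clean:", img_clean.max() / img_clean.sum())
print("focus metric, noisy:", img_noisy.max() / img_noisy.sum())
```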

2.2. Nonlinear Reconstruction Analysis

Based on the derivation above, an external interference method can be employed to reconstruct the nonlinear phase noise of the system. Since the reference signal and the transmitted signal undergo different linear modulation processes, their nonlinear errors cannot cancel each other out. As shown in Figure 1, a self-calibrating system is established to indirectly measure the laser phase. In this system, the laser output from the seed source is divided into two beams. One beam enters the orange modulator for linear modulation, and the modulated laser is amplified by the optical amplifier before being transmitted to the target. The other beam, serving as the reference laser, undergoes modulation in the orange square modulator, is amplified, and then undergoes quadrature demodulation with the target-reflected light. It then enters the balanced detector, followed by data acquisition and processing. The self-calibrating signal is generated by the coherent mixing of the transmitted signal and the reference signal.
To compensate for the error terms, it is necessary to simultaneously measure both the self-calibrating signal and the target echo. The nonlinear phases of the self-calibrating signal and the echo signal can be expressed as follows:
$\varphi_{ref} = e_{ref}(t - \tau_{ref}) - e_t(t)$
$\varphi_{if} = e_{ref}(t - \tau_{ref}) - e_t(t - \tau - \tau_{ref})$
where φref is the nonlinear phase of the self-calibrating signal, φif is the nonlinear phase of the echo signal, τref is the reference delay, eref is the reference nonlinearity, and et is the transmitted nonlinearity.
The nonlinear phase noise of the reference signal and the transmitted signal can be reconstructed using a set of orthonormal complete functions {ψk(x)}, where k = 0…∞. The nonlinear phase can be expressed as
$\varphi(x) = \varphi_0 + \sum_{k=0}^{K} c_k \psi_k(x)$
Here, ψk(x) represents a set of orthogonal basis functions, ck represents the coefficients to be fitted, and K is the number of basis functions.
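As a quick numerical illustration of the orthonormality property that makes the coefficients ck separable, the following Python sketch normalizes the first few Legendre polynomials on [−1, 1] and checks that their Gram matrix is the identity under Gauss–Legendre quadrature. The choice of Legendre polynomials and the numpy polynomial utilities used here are only one possible example of a basis set {ψk(x)}, not the specific basis of the paper.

```python
import numpy as np
from numpy.polynomial import legendre

# Normalized Legendre polynomials: P_k scaled by sqrt((2k+1)/2), so that the
# inner product over [-1, 1] equals 1 for k = m and 0 otherwise.
K = 6
nodes, weights = legendre.leggauss(64)        # exact for the polynomial products used here
psi = np.stack([np.sqrt((2 * k + 1) / 2) * legendre.Legendre.basis(k)(nodes)
                for k in range(K + 1)])

gram = (psi * weights) @ psi.T                # Gram matrix <psi_k, psi_m>
print(np.allclose(gram, np.eye(K + 1)))       # True: the basis is orthonormal
```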
Nonlinear disturbances during laser transmission can cause echo energy to disperse over different ranges, thereby degrading imaging quality. The frequency modulation nonlinear errors in the transmitted signal are characterized by deviations from the ideal linear frequency–time relationship, leading to distortions or fluctuations in the instantaneous frequency curve. For the reference signal, these nonlinear errors primarily stem from the nonlinear response of the modulator during signal generation or instability in the laser source itself.
The nonlinear phase errors of the reference and transmitted signals typically contain multiple frequency components, including low-frequency trend errors and high-frequency random disturbances. These errors are generally smooth and continuous on a global scale but can exhibit drastic and complex variations locally. To accurately describe such characteristics, a combination of mixed orthogonal basis functions is required, capable of capturing both smooth and oscillatory features.
Polynomial basis functions (e.g., Legendre or Chebyshev polynomials) are suitable for describing the low-frequency, smooth trends of the nonlinear phase. For high-frequency oscillatory components, Fourier basis functions, such as ψk(x) = e^{j2πkx}, or trigonometric basis functions (cos(ωjx) and sin(ωjx)) are used to represent periodic or oscillatory errors. Considering the spectral distribution of the error (as shown in Figure 2), an appropriate orthonormal complete basis function set {ψk(x)} can be selected.
Specifically, orthogonal basis functions are constructed as a combination of mixed polynomial and trigonometric basis functions. The nonlinear phase of the transmitted signal can be approximated as
$e_t(t) \approx \sum_{k=0}^{K_1} c_k P_k(t) + \sum_{j=1}^{K_2}\left(a_j \cos(\omega_j t) + b_j \sin(\omega_j t)\right)$
Here, Pk(t) represents the polynomial basis functions, such as Legendre or Chebyshev polynomials, while cos (ωjt) and sin (ωjt) are the trigonometric basis functions. ck, aj, and bj are the coefficients to be estimated; K1 and K2 are the orders of the polynomial and trigonometric functions (tunable parameters); and ωj represents the frequency parameters of the trigonometric basis functions (equidistant or based on data distribution). By substituting Equation (7) into the difference of the two nonlinear phase expressions in Equation (6), we obtain the following:
$\varphi_{if} - \varphi_{ref} = e_t(t - \tau_{ref}) - e_t(t - \tau - \tau_{ref}) \approx \tau\left[\sum_{k=0}^{K_1} c_k P_k'(t - \tau_{ref}) + \sum_{j=1}^{K_2} \omega_j\left(-a_j \sin\left(\omega_j (t - \tau_{ref})\right) + b_j \cos\left(\omega_j (t - \tau_{ref})\right)\right)\right]$
To reconstruct the nonlinear phase, the coefficients aj, bj, ωj, and ck are estimated using the least squares (LS) method. The estimated coefficients $\tilde{a}_j$, $\tilde{b}_j$, $\tilde{\omega}_j$, and $\tilde{c}_k$ are then substituted back into Equation (7) to obtain the transmitted nonlinear phase estimate.
$\tilde{e}_t(t - \tau_{ref}) = \sum_{k=0}^{K_1} \tilde{c}_k P_k(t - \tau_{ref}) + \sum_{j=1}^{K_2}\left(\tilde{a}_j \cos\left(\tilde{\omega}_j (t - \tau_{ref})\right) + \tilde{b}_j \sin\left(\tilde{\omega}_j (t - \tau_{ref})\right)\right)$
The estimated transmitted nonlinear phase $\tilde{e}_t(t - \tau_{ref})$ is substituted into Equation (6) to obtain the reference nonlinear phase error estimate $\tilde{e}_{ref}(t - \tau_{ref})$.
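The core numerical step of this reconstruction, expanding an unknown nonlinear phase in a mixed Legendre-plus-trigonometric basis and estimating the coefficients by least squares, can be sketched in Python as follows. This is a simplified illustration: the trigonometric frequencies ωj are fixed on an equidistant grid so the fit stays linear, and a synthetic phase stands in for the measured difference φif − φref; the function and variable names are assumptions, not the authors' implementation.

```python
import numpy as np
from numpy.polynomial import legendre

def mixed_design_matrix(t, K1, K2):
    """Columns P_0..P_K1 (Legendre on a normalized axis) plus K2 cos/sin pairs.
    The trigonometric frequencies are fixed on an equidistant grid here, so the
    coefficient estimation reduces to ordinary linear least squares (in the
    paper the omega_j are treated as tunable parameters as well)."""
    x = 2 * (t - t.min()) / (t.max() - t.min()) - 1
    cols = [legendre.legvander(x, K1)]
    for j in range(1, K2 + 1):
        cols.append(np.cos(np.pi * j * x)[:, None])
        cols.append(np.sin(np.pi * j * x)[:, None])
    return np.hstack(cols)

# Synthetic transmitted nonlinear phase (rad): smooth trend + one oscillation,
# chosen so that it lies exactly in the span of the basis below
fs, prt = 150e6, 32e-6
t = np.arange(int(round(fs * prt))) / fs
e_t_true = 4.0 * (t / t.max()) ** 3 + 0.3 * np.sin(2 * np.pi * 5 * t / t.max())

# Least-squares coefficient estimate (c_k, a_j, b_j) and phase reconstruction
A = mixed_design_matrix(t, K1=6, K2=8)
coeffs, *_ = np.linalg.lstsq(A, e_t_true, rcond=None)
e_t_hat = A @ coeffs
print("max reconstruction error [rad]:", np.max(np.abs(e_t_hat - e_t_true)))
```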
The reconstructed nonlinear phase can then be used to directly compensate for the reference nonlinearity. The compensated signal is expressed as
$s_{if1}(t) = s_{if}(t)\, s_{e_{ref}}^{*}(t) = \exp\left\{j2\pi\left[f_c(\tau - \tau_{ref}) + K(\tau - \tau_{ref})t + \frac{1}{2}K\tau_{ref}^{2} - \frac{1}{2}K\tau^{2}\right] - j e_t(t - \tau)\right\}$
To address the remaining transmitted nonlinear error, which is related to the target distance, an RVP (Residual Video Phase) filter is applied. Its correction function is given by $\exp(j\pi f^2/K)$, where f is the frequency. The corrected beat frequency signal becomes
$s_{if2}(t) = \mathcal{F}^{-1}\left\{\mathcal{F}\left[s_{if1}(t)\right]\exp\left(\frac{j\pi f^{2}}{K}\right)\right\} \approx \exp\left\{j2\pi\left[f_c(\tau - \tau_{ref}) + K(\tau - \tau_{ref})t\right]\right\} e_{tRVP}(t)$
Compensation for etRVP(t) is performed using the following formulas:
$s_{e_{tRVP}}(t) = \mathcal{F}^{-1}\left\{\mathcal{F}\left[s_{e_t}(t)\right]\exp\left(\frac{j\pi f^{2}}{K}\right)\right\} \approx \exp\left[j2\pi e_{tRVP}(t)\right]$
$s_{e_t}(t) = \exp\left[j\tilde{e}_t(t - \tau_{ref})\right]$
By multiplying Equation (11) by Equation (10), the nonlinear phase related to fast time t is compensated, and the target’s distance information can be extracted via FFT, as shown in Equation (13), where $R_{ref} = \tau_{ref} c / 2$ is the reference distance.
$s_{if}(t) = \mathrm{sinc}\left[B\left(t - \frac{2(R - R_{ref})}{c}\right)\right]\exp\left(j\frac{4\pi(R - R_{ref})}{\lambda}\right)$
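A minimal sketch of the RVP correction and range compression described above is given below, assuming a single noiseless scatterer and the system parameters quoted in Section 2.3; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def rvp_filter(s, fs, K):
    """Residual-video-phase correction: multiply the beat-signal spectrum by
    exp(j*pi*f^2/K) and transform back (names and interface are assumptions)."""
    f = np.fft.fftfreq(s.size, d=1.0 / fs)      # beat-frequency axis in Hz
    return np.fft.ifft(np.fft.fft(s) * np.exp(1j * np.pi * f ** 2 / K))

# Toy dechirped echo of a single scatterer at R = 15 m (noiseless)
fs, prt, B, c = 150e6, 32e-6, 5e9, 3e8
K = B / prt                                     # frequency-modulation rate
tau = 2 * 15.0 / c                              # two-way delay
t = np.arange(int(round(fs * prt))) / fs
s_if1 = np.exp(1j * (2 * np.pi * K * tau * t - np.pi * K * tau ** 2))

profile = np.abs(np.fft.fft(rvp_filter(s_if1, fs, K)))
print("peak range bin:", profile.argmax())      # ~ bin 500, i.e. beat frequency K*tau
```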

2.3. Simulation Experiment

In long-distance imaging using frequency-modulated continuous-wave (FMCW) Lidar, the short wavelength of laser signals makes them highly sensitive to various error sources. The nonlinear phase errors and noise in a system can arise from various factors, such as the non-ideal characteristics of hardware devices, external environmental interference, and inaccuracies in signal processing.
Nonlinear errors typically originate from nonlinear behaviors in the system, such as nonlinear phase distortion in optical systems, nonlinear effects in amplifiers within electronic circuits, and nonlinear computational errors in signal processing algorithms. Nonlinear errors can be expressed in the form of a polynomial, such as
$\phi_{error}(t) = a_1 t^2 + a_2 t^3 + \dots$
where a1, a2, etc., are the coefficients of the nonlinear error terms, and t is the time or another relevant parameter. Noise typically originates from the following sources: thermal noise, which is random noise caused by thermal motion in hardware devices; quantization noise, which includes errors introduced during signal digitization due to limited resolution; and environmental noise, which includes interference from external environments, such as electromagnetic interference.
Noise is often assumed to be a random signal and can be described using probability distributions. For example, Gaussian noise can be mathematically expressed as
$n(t) \sim \mathcal{N}(0, \sigma^2)$
where σ2 is the variance of the noise, representing the noise intensity. Nonlinear errors and noise often coexist. The total phase of the signal can be expressed as
$\phi_{total}(t) = \phi_{ideal}(t) + \phi_{error}(t) + n(t)$
where ϕideal(t) is the ideal phase, ϕerror(t) represents the nonlinear error, and n(t) is the noise.
We assumed an FMCW Lidar system with the following parameters: laser wavelength (λ = 1.55 μm), sweep period (PRT = 32 μs), emission bandwidth (Br = 5 GHz), and sampling frequency (Fs = 150 MHz). Using the proposed algorithm, we performed nonlinear phase reconstruction and compensation for a point target located at 15 m within the scene. By tracking the phases of the internal calibration signal and the indoor echo signal, the model proposed in this article was utilized to separately reconstruct the nonlinear components of the transmitted signal and the reference signal.
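One possible way to generate such a simulated beat signal, combining the stated system parameters with the phase model φtotal(t) = φideal(t) + φerror(t) + n(t) above, is sketched below; the polynomial error coefficients and noise level are assumed values chosen only to produce a visible distortion.

```python
import numpy as np

# System parameters stated in the text; error and noise magnitudes are assumed.
lam, prt, B, fs = 1.55e-6, 32e-6, 5e9, 150e6
K = B / prt                                   # frequency-modulation rate
c, R = 3e8, 15.0
tau = 2 * R / c                               # two-way delay of the 15 m target

t = np.arange(int(round(fs * prt))) / fs
rng = np.random.default_rng(1)

phi_ideal = 2 * np.pi * K * tau * t           # ideal dechirped (beat) phase
phi_error = 2e9 * t ** 2 + 5e13 * t ** 3      # polynomial error a1*t^2 + a2*t^3 (rad)
n_t = rng.normal(0.0, 0.1, t.size)            # Gaussian phase noise, sigma = 0.1 rad

s = np.exp(1j * (phi_ideal + phi_error + n_t))          # phi_total model
profile_db = 20 * np.log10(np.abs(np.fft.fft(s)) + 1e-12)
print("peak bin (broadened by the error):", profile_db.argmax())
```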
The reconstruction results are compared with the ideal values, as shown in Figure 3. Figure 3a shows the transmitted signal. The actual simulated error is represented by the black line, while the reconstructed nonlinearity is indicated by the red curve. The reconstruction results closely match the actual error, demonstrating the accuracy of the proposed method. Figure 3b shows the reference signal. Similarly, the actual simulated error is represented by the black line, and the reconstructed nonlinearity is indicated by the red curve. The reconstructed results align very well with the actual error, further validating the effectiveness of the algorithm.
The method proposed in this article directly compensates for the reference nonlinear phase error in the echo signal. Additionally, a Residual Video Phase (RVP) filter is applied to remove the range-dependent nonlinear error in the transmitted signal, enabling precise correction. The comparison of range resolution is shown in Figure 4. The blue line represents the focusing results in the range direction before compensation, while the red line represents the distribution after compensation. After calculations, the range resolution before compensation is approximately 3 m, whereas the range resolution after compensation significantly improves to 3 cm, indicating a dramatic enhancement in the range imaging performance. This simulation experiment demonstrates the effectiveness of the proposed nonlinear error reconstruction and compensation algorithm, achieving a substantial improvement in range resolution and overall imaging quality.

3. Results

At a distance of 4.3 km from the Lidar, a cooperative target was positioned, as illustrated in Figure 5. The target comprises a rectangular metal plate, equipped with fixed mounting holes, onto which five optical corner cube reflectors with black borders are securely attached. These corner cube reflectors are designed to significantly enhance the reflection intensity of the laser signal, thereby enabling the Lidar system to achieve high-precision detection and imaging. In Figure 5, the background corresponds to the metal plate, the white dots represent the screws on the plate, and the black objects depict the mechanical housings of the optical corner cubes. To provide a clear reference for the target’s dimensions, a scale bar is included in the figure, clearly indicating the size of the target.
Figure 6 illustrates the nonlinear phase measurements and reconstructions. Figure 6a,b display the measured internal calibration nonlinear phase and the echo nonlinear phase, respectively. Figure 6c,d show the reconstructed nonlinear phases of the reference signal and the transmitted signal. It is evident from the figures that there are noticeable differences in the nonlinear phase errors between the reference signal and the transmitted signal.
The reconstructed nonlinear phase errors were subsequently used to compensate for the echoes from different targets. Figure 7a shows the range-Doppler (RD) imaging results for the optical cones depicted in Figure 5. The target exhibits clear defocusing in both the range and azimuth directions. Using the orthonormal complete function reconstruction method, the nonlinear phase was obtained and applied for compensation. The imaging results after phase compensation are shown in Figure 7b, where defocusing in the range direction is effectively eliminated. Figure 7c shows the result of further applying azimuth phase gradient autofocus (PGA) to Figure 7b, demonstrating a significant improvement in azimuth focusing as well.
To further validate the effectiveness of the proposed algorithm, the resolution and peak side lobe ratio for each cone were calculated. For cone 1, the range slices before and after nonlinear compensation are shown on the left and right sides of Figure 8, respectively. The analysis in Figure 8 indicates that the orthonormal complete function nonlinear reconstruction method performs effectively for range compensation, resulting in a narrower main lobe, reduced side lobes, and a significant improvement in resolution.
The resolution and peak side lobe ratio for each cone are summarized in Table 1. The comparative analysis of imaging results before and after phase compensation (Figure 8) revealed significant performance enhancements. Prior to compensation, the target range profile exhibited noticeable broadening, with dispersed mainlobe energy and elevated sidelobe levels (approximately −9 dB PSLR), primarily caused by coherence degradation due to nonlinear phase noise. Following compensation, the range profile demonstrated three key improvements. First, the average range resolution improved significantly, while the average peak side lobe ratio decreased by 3 dB. Second, substantial sidelobe suppression was achieved, with the highest sidelobe level reduced to −30 dB, an improvement of 8 dB. Third, the target signal-to-noise ratio (SNR) increased by 7 dB. These improvements validate the effectiveness of the proposed phase compensation method in maintaining signal coherence, and in particular demonstrate excellent correction capability for quadratic phase errors induced by laser frequency instability.
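For reference, a simple way to estimate the two quantities reported in Table 1, the range resolution (taken here as the half-power mainlobe width) and the peak side lobe ratio, from a range slice is sketched below. The mainlobe/sidelobe bookkeeping used here is a common convention, not necessarily the exact measurement procedure used by the authors, and the bin spacing in the usage example is an assumed value.

```python
import numpy as np

def resolution_and_pslr(profile, bin_spacing_m):
    """Estimate the -3 dB mainlobe width (m) and the peak sidelobe ratio (dB)
    of a range slice. Sketch only: the mainlobe is the contiguous region above
    half power around the global peak, and the PSLR is the strongest sample
    outside that region relative to the peak."""
    p = np.abs(profile) ** 2
    peak = p.argmax()
    half = p[peak] / 2.0
    lo = peak
    while lo > 0 and p[lo - 1] > half:
        lo -= 1
    hi = peak
    while hi < p.size - 1 and p[hi + 1] > half:
        hi += 1
    width_m = (hi - lo + 1) * bin_spacing_m
    sidelobes = np.r_[p[:lo], p[hi + 1:]]
    pslr_db = 10 * np.log10(sidelobes.max() / p[peak])
    return width_m, pslr_db

# Usage on an ideal sinc-like slice: ~3 cm resolution, PSLR near -13 dB
x = np.arange(-200, 201)
slice_ = np.sinc(x / 8.0)                       # first null 8 bins from the peak
print(resolution_and_pslr(slice_, bin_spacing_m=0.03 / 8))
```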
For a cooperative satellite model target located at 4 km, as shown in Figure 9a, the system directly obtained the RD imaging results, which are shown in Figure 9b. These results exhibit defocusing phenomena in both the range and azimuth directions.
The impact of the atmosphere on SAL imaging primarily manifests in aspects such as scattering, absorption, cloud and fog obstruction, turbulence, and background noise. These factors can reduce the signal-to-noise ratio, resolution, and accuracy of imaging. In practical applications, the atmospheric effects can be mitigated, and the performance and adaptability of SAL imaging can be improved by selecting appropriate wavelengths, employing atmospheric correction algorithms, introducing adaptive optics technology, and integrating multi-sensor collaborative observation.
  1. Scattering Effects
Molecules and aerosol particles in the atmosphere scatter laser signals, reducing the signal-to-noise ratio (SNR) and causing image blurring or distortion, especially in regions with high concentrations of atmospheric particles. Scattering effects can be categorized into two main types.
The intensity of Rayleigh scattering is inversely proportional to the fourth power of the wavelength (1/λ⁴). As a result, shorter wavelengths (e.g., visible light) are more susceptible to Rayleigh scattering, leading to signal attenuation and reduced spatial resolution.
Mie scattering has a broader impact range and is dependent on both the laser wavelength and particle size. It can cause signal deviation and uneven intensity, thereby affecting imaging accuracy.
  2. Clouds and Haze
Clouds and haze severely obstruct the propagation of laser signals, increasing scattering and absorption effects, which further degrade imaging quality.
  3. Atmospheric Turbulence
Atmospheric turbulence causes random fluctuations in the phase and amplitude of laser signals, which negatively impact imaging precision.
Using the proposed method, the reconstructed nonlinear phases of the transmitted and reference signals are sampled, and the imaging results after compensation are shown in Figure 9c. After compensation, the range image of the target is compressed, and the side lobes are significantly reduced.
Following nonlinear compensation in the range direction, pulse-by-pulse compensation for azimuth phase function errors is necessary. The specific compensation process includes phase gradient filtering, phase gradient error estimation, motion parameter estimation, construction of the phase compensation matrix, and phase error compensation imaging. This iterative processing improves the degree of imaging focus, resulting in the final imaging results for the satellite model shown in Figure 10. The results demonstrate that the proposed method achieves a high degree of focus in both the range and azimuth directions, significantly enhancing the imaging quality.
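A heavily simplified sketch of such an iterative azimuth phase-error compensation loop (in the spirit of phase gradient autofocus, with the motion-parameter estimation step omitted) is given below; the axis conventions, the single-scatterer gradient estimate, and all names are assumptions for illustration rather than the authors' processing chain.

```python
import numpy as np

def azimuth_phase_correction(image, n_iter=5):
    """Heavily simplified phase-gradient-style autofocus loop (sketch only).
    `image` is a complex range-compressed image with azimuth along axis 0
    and range along axis 1."""
    img = image.copy()
    for _ in range(n_iter):
        data = np.fft.fft(img, axis=0)                  # back to slow-time (pulse) domain
        # Use the strongest range bin as the phase reference (full PGA would
        # centre-shift and window many strong scatterers and average them).
        ref_bin = np.argmax(np.sum(np.abs(img) ** 2, axis=0))
        g = data[:, ref_bin]
        grad = np.angle(g[1:] * np.conj(g[:-1]))        # pulse-to-pulse phase gradient
        err = np.concatenate(([0.0], np.cumsum(grad)))  # integrated phase error
        err -= np.linspace(err[0], err[-1], err.size)   # drop the linear (shift) component
        data *= np.exp(-1j * err)[:, None]              # apply the phase compensation
        img = np.fft.ifft(data, axis=0)
    return img
```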

4. Discussion

This paper first elucidates the imaging principles of SAL and systematically analyzes the detrimental effects of phase noise on imaging quality. The proposed algorithm demonstrates effective mitigation of both low-frequency noise and high-frequency oscillatory noise, albeit at the cost of increased computational time and processing complexity. A limitation lies in the algorithm’s substantial computational resource requirements, which may hinder its applicability in real-time scenarios.
Future research should focus on two critical directions: (1) developing more robust algorithms to compensate for atmospheric distortion effects and enhance imaging accuracy, and (2) designing computationally efficient signal processing methods to enable real-time imaging and analysis capabilities.

5. Conclusions

This paper proposes a physics-informed adaptive method to measure the nonlinearities of both the transmitted and reference signals, compensating for the reference signal’s nonlinearities in the echo difference frequency signal. The paper first analyzes the theoretical impact of nonlinear phase noise on imaging performance. Subsequently, a theoretical model for nonlinear phase reconstruction and compensation using orthonormal complete functions is proposed. This model leverages internal calibration and dual-phase measurements of the echo signal. Through simulations, the proposed method is shown to accurately reconstruct the system’s nonlinear phase, demonstrating strong robustness in phase reconstruction and significantly improving range resolution.
The experimental results further confirm that this method enhances the system’s range resolution while reducing the peak side lobe ratio by 3 dB, achieving effective compensation for various long-range targets. Theoretical simulations and experimental data collectively verify that the orthonormal complete function nonlinear phase reconstruction and compensation method can effectively improve imaging quality. This lays a solid foundation for the application of SAL imaging in target detection and remote sensing for Earth observation.

Author Contributions

Conceptualization, R.S. and J.Z.; methodology, D.W.; software, W.L.; validation, B.W.; formal analysis, M.X.; data curation, Y.W.; writing—original draft preparation, J.Z.; supervision, J.Z.; project administration, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China “Earth Observation and Navigation” (No. 2022YFB3902504).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors thank the staff of the National Key Laboratory of Microwave Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, for their valuable conversations and comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Green, T.J.; Marcus, S.; Colella, B.D. Synthetic-aperture-radar imaging with a solid-state laser. Appl. Opt. 1995, 34, 6941–6949. [Google Scholar] [CrossRef] [PubMed]
  2. Bashkansky, M.; Lucke, R.L.; Funk, E.; Rickard, L.J.; Reintjes, J. Two-dimensional synthetic aperture imaging in the optical domain. Opt. Lett. 2002, 27, 1983–1985. [Google Scholar] [CrossRef] [PubMed]
  3. Beck, S.M.; Buck, J.R.; Buell, W.F.; Dickinson, R.P.; Kozlowski, D.A.; Marechal, N.J.; Wright, T.J. Synthetic-aperture imaging laser radar: Laboratory demonstration and signal processing. Appl. Opt. 2005, 44, 7621–7629. [Google Scholar] [CrossRef] [PubMed]
  4. Krause, B.W.; Buck, J.; Ryan, C.; Hwang, D.; Kondratko, P.; Malm, A.; Gleason, A.; Ashby, S. Synthetic aperture ladar flight demonstration. In Proceedings of the Conference on Lasers and Electro-Optics: Laser Applications to Photonic Applications, Baltimore, MD, USA, 1–6 May 2011. [Google Scholar]
  5. Li-ren, L.; Yu, Z.; Ya-nan, Z.; Jianfeng, S.; Yapeng, W.; Zhu, L.; Aimin, Y.; Lijuan, W.; Enwen, D.; Wei, L. A large-aperture synthetic aperture imaging ladar demonstrator and its verification in laboratory space. Acta Opt. Sin. 2011, 31, 112–116. [Google Scholar]
  6. Jin, W.; Zhao-sheng, Y.; Zhi-long, Z.; Feifei, L.; Donglei, W.; Yongxin, T.; Yuanyuan, S.; Na, L. Synthetic aperture ladar imaging with one way far-field diffraction. J. Infrared Millim. Waves 2013, 32, 514–518. [Google Scholar]
  7. Guangzuo, L.; Ning, W.; Ran, W.; Keshu, Z.; Yirong, W. Imaging method for airborne SAL data. Electron. Lett. 2017, 53, 351–353. [Google Scholar]
  8. Sayre, D. Some Implications of a Theorem Due to Shannon. Acta Crystallogr. 1952, 5, 843. [Google Scholar] [CrossRef]
  9. Fatima, G.; Babu, P. PGPAL: A monotonic iterative algorithm for phase-retrieval under the presence of Poisson-Gaussian noise. IEEE Signal Process. Lett. 2022, 29, 533–537. [Google Scholar] [CrossRef]
  10. Glombitza, U.; Brinkmeyer, E. Coherent frequency-domain reflectometry for characterization of single-mode integrated-optical waveguides. J. Light. Technol. 1993, 11, 1377–1384. [Google Scholar] [CrossRef]
  11. Soller, B.J.; Gifford, D.K.; Wolfe, M.S.; Froggatt, M.E. High resolution optical frequency domain reflectometry for characterization of components and assemblies. Opt. Express 2005, 13, 666–674. [Google Scholar] [CrossRef] [PubMed]
  12. Moore, E.D.; Mcleod, R.R. Correction of sampling errors due to laser tuning rate fluctuations in swept-wavelength interferometry. Opt. Express 2008, 16, 13139–13149. [Google Scholar] [CrossRef] [PubMed]
  13. Iiyama, K.; Yasuda, M.; Takamiya, S. Extended-range high-resolution FMCW reflectometry by means of electronically frequency-multiplied sampling signal generated from auxiliary interferometer. IEICE Trans. Electron. 2006, 89, 823–829. [Google Scholar] [CrossRef]
  14. Ahn, T.J.; Kim, D.Y. Analysis of nonlinear frequency sweep in high-speed tunable laser sources using a self-homodyne measurement and Hilbert transformation. Appl. Opt. 2007, 46, 2394–2400. [Google Scholar] [CrossRef] [PubMed]
  15. Roos, P.A.; Reibel, R.R.; Berg, T.; Kaylor, B.; Barber, Z.W.; Wm, R.B. Ultrabroadband optical chirp linearization for precision metrology applications. Opt. Lett. 2009, 34, 3692–3694. [Google Scholar] [CrossRef] [PubMed]
  16. Ahn, T.J.; Lee, J.Y.; Kim, D.Y. Suppression of nonlinear frequency sweep in an optical frequency-domain reflectometer by use of Hilbert transformation. Appl. Opt. 2005, 44, 7630–7634. [Google Scholar] [CrossRef] [PubMed]
  17. Yuksel, K.; Wuilpart, M.; Mégret, P. Analysis and suppression of nonlinear frequency modulation in an optical frequency-domain reflectometer. Opt. Express 2009, 17, 5845–5851. [Google Scholar] [CrossRef] [PubMed]
  18. Tsimbinos, J.; Lever, K.V. Nonlinear system compensation based on orthogonal polynomial inverses. IEEE Trans. Circuits Syst. I-Regul. Pap. 2001, 48, 406–417. [Google Scholar] [CrossRef]
  19. Fang, X.; Yang, C.; Zhang, T.; Zhang, F. Orthogonal basis expansion-based phase noise suppression for PDM CO-OFDM system. IEEE Photonics Technol. Lett. 2014, 26, 376–379. [Google Scholar] [CrossRef]
  20. Güngör, A.; Çetin, M.; Güven, H.E. Compressive synthetic aperture radar imaging and autofocusing by augmented lagrangian methods. IEEE Trans. Comput. Imaging 2022, 8, 273–285. [Google Scholar] [CrossRef]
  21. Kazemi, S.; Yonel, B.; Yazici, B. Deep learning based synthetic aperture imaging in the presence of phase errors via decoding priors. In Proceedings of the IEEE Radar Conference (RadarConf23), San Antonio, TX, USA, 1–5 May 2023. [Google Scholar]
  22. Agrawal, G.P. Fundamentals of nonlinear fiber optics. In Nonlinear Fiber Optics, 5th ed.; Academic Press: New York, NY, USA, 2013. [Google Scholar]
  23. Killey, R.I.; Watts, P.M.; Glick, M.; Bayvel, P. Electronic dispersion compensation by signal predistortion. In Proceedings of the 2006 Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference, Anaheim, CA, USA, 5–10 March 2006. [Google Scholar]
  24. Lavery, D.; Maher, R.; Millar, D.S.; Thomsen, B.C.; Bayvel, P.; Savory, S. Digital coherent receivers for long-reach optical access networks. J. Light. Technol. 2013, 31, 609–620. [Google Scholar] [CrossRef]
Figure 1. A schematic diagram of the system. The square orange box represents the seed source, the rectangular orange box indicates the linear modulation, the green triangle signifies optical amplification, the green square denotes quadrature demodulation processing, and the gray squares represent the balanced detector, data acquisition, and data processing. The large dashed box represents the radar system, and the small rectangular dashed box indicates the internal calibration signal.
Figure 2. The spectral distribution of the transmitted signal error.
Figure 3. The nonlinear reconstruction of the transmitted signal and the reference signal: (a) a comparison of the nonlinear estimates and the real error of the transmitted signal and (b) a comparison of the nonlinear estimates and the error of the reference signal.
Figure 4. The range compression results before and after nonlinear phase compensation at a 15 m distance.
Figure 5. A multi-cone combination located at a distance of 4.3 km. The numbers 1–5 represent the serial numbers of the optical corner cube reflectors mounted on the metal plate.
Figure 6. Nonlinear phase error. (a,b) are the measured internal calibration nonlinear phase and echo nonlinear phase, respectively. (c,d) are the reconstructed nonlinear phases of the reference signal and the transmitted signal.
Figure 7. The imaging results for the four channels and the stitched imaging results from the four channels. (a) shows the range-Doppler (RD) imaging results for the optical cones. (b) shows the imaging results after phase compensation. (c) shows the result after azimuth phase gradient autofocus (PGA).
Figure 8. The range slices of cone 1 before and after nonlinear compensation.
Figure 9. The physical satellite model, the RD imaging result, and the imaging result after compensation with the internal calibration signal. (a) illustrates a cooperative satellite model target located at 4 km. (b) presents the RD imaging results, and (c) displays the imaging results after compensation.
Figure 10. The imaging results after compensation with the azimuth phase function error.
Table 1. The compared parameters before and after compensation.
Parameters | Range Resolution Before Compensation (m) | Range PSLR Before Compensation (dB) | Range Resolution After Compensation (m) | Range PSLR After Compensation (dB)
cone 1 | 0.03 | −9.5598 | 0.0288 | −12.8698
cone 2 | 0.0288 | −6.9994 | 0.0282 | −12.8331
cone 3 | 0.03 | −7.8423 | 0.0288 | −11.2202
cone 4 | 0.0288 | −9.9853 | 0.0282 | −11.3834
cone 5 | 0.03 | −9.1979 | 0.0294 | −10.9558
