Article

Reconstruction for Scanning LiDAR with Array GM-APD on Mobile Platform

1 National Key Laboratory of Laser Spatial Information, Harbin Institute of Technology, Harbin 150001, China
2 Zhengzhou Research Institute of Harbin Institute of Technology, Zhengzhou 450000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(4), 622; https://doi.org/10.3390/rs17040622
Submission received: 6 January 2025 / Revised: 7 February 2025 / Accepted: 10 February 2025 / Published: 11 February 2025

Abstract

Array Geiger-mode avalanche photodiode (GM-APD) Light Detection and Ranging (LiDAR) has the advantages of high sensitivity and long imaging range. However, due to its operating principle, GM-APD LiDAR requires processing based on multiple-laser-pulse data to complete the target reconstruction. Therefore, the influence of the device’s movement or scanning motion during GM-APD LiDAR imaging cannot be ignored. To solve this problem, we designed a reconstruction method based on coordinate system transformation and the Position and Orientation System (POS). The position, attitude, and scanning angles provided by POS and angular encoders are used to reduce or eliminate the dynamic effects in multiple-laser-pulse detection. Then, an optimization equation is constructed based on the negative-binomial distribution detection model of GM-APD. The spatial distribution of photons in the scene is ultimately computed. This method avoids the need for field-of-view registration, improves data utilization, and reduces the complexity of the algorithm while eliminating the effect of LiDAR motion. Moreover, with sufficient data acquisition, this method can achieve super-resolution reconstruction. Finally, numerical simulations and imaging experiments verify the effectiveness of the proposed method. For a 1.95 km building scene with SBR ~0.137, the 2 × 2-fold super-resolution reconstruction results obtained by this method reduce the distance error by an order of magnitude compared to traditional methods.

1. Introduction

As an active detection remote sensing technology, LiDAR is nowadays widely used in civil and military fields, such as autonomous driving [1,2], mapping [3,4], ecological monitoring [5,6,7,8], and target detection [9,10,11]. In recent decades, GM-APD LiDAR [12,13,14], as a new type of system, has gained attention and developed rapidly because of its advantages, such as high photon sensitivity and long detection distance.
The single-photon sensitivity of the GM-APD makes this kind of LiDAR relatively sensitive to background light noise. Therefore, it needs to be combined with suitable noise filtering and reconstruction algorithms [15,16,17,18,19,20] to enhance the detection capability of the GM-APD LiDAR. According to the operating principle of the GM-APD detector, these algorithms require data from multiple-laser-pulse detection. Although the GM-APD can operate at a high repetition frequency, the reconstruction will be affected when the LiDAR moves or rotates during the acquisition. To meet this “dynamic” reconstruction requirement, Altmann et al. proposed a fast online estimation method [21]. They applied a Bayesian model to estimate the target’s distance and intensity information pixel by pixel and achieved real-time 3D reconstruction on individual photon detection events. Jiang et al. presented a GM-APD imager with dynamic imaging capability [22]. Likewise, they avoid the dynamic effects in reconstruction by a fast and efficient processing method: an algorithm based on solving a 3D optimization equation. Neither method deals with the actual movement of the LiDAR or the target; both instead rely on the fast-processing nature of the algorithm itself to accomplish the “dynamic” reconstruction. Such methods therefore cannot fully remove the impact of platform motion on GM-APD LiDAR reconstruction when applied to high-speed motion platforms such as vehicle-borne and airborne ones.
Each image element of the GM-APD array can detect the echo photons simultaneously after a single laser pulse emission, significantly improving the operational efficiency of LiDAR compared to point- and line-scanning modes. However, due to process limitations, the element number of the GM-APD array is relatively small [23,24,25]. Furthermore, the GM-APD LiDAR’s instantaneous field-of-view cannot be designed to be very large if sufficient laser echo energy density is to be maintained in each image element. Therefore, the array GM-APD LiDAR commonly scans to extend the field-of-view (FOV; for array GM-APD systems the FOV is two-dimensional, and in the following descriptions we use this abbreviation to refer to the two-dimensional imaging angular range of the array GM-APD LiDAR), which also introduces the need for related reconstruction algorithms. Henriksson et al. built a transverse scanning LiDAR system with a 128 × 32 pixel GM-APD [26]. In the reconstruction, they assume that the scanning speed is a fixed constant and derive the orientation information of the data points based on the scanning speed, the column number, and the pixel angle. Finally, the panoramic reconstruction is completed by applying peak detection to the modified histogram. In addition to transverse scanning, conical scanning is also an option for array LiDAR on airborne platforms. A single-photon LiDAR with conical scanning and its processing chain was presented in [27]. Combined with the general noise filtering and reconstruction process, they apply the position and orientation provided by the GNSS/IMU subsystem (the main component of the POS) to geolocate the data and then further implement registration to reduce the misalignments between the different datasets. The same team as in [26] demonstrated their multi-position GM-APD scanning experiments in [28]. They were also equipped with the IMU and applied registration based on the ICP algorithm. For vehicle-borne or airborne scanning LiDAR systems, this scheme with the GNSS/IMU and the implementation of registration is relatively mature, while its application to the GM-APD LiDAR is still in the exploratory stage.
In this paper, a novel GM-APD LiDAR reconstruction method based on GNSS/IMU is proposed, which solves the influence of LiDAR’s own motion on vehicle-borne and airborne platforms and accomplishes the reconstruction of the scanned FOV without registration. In this method, we establish complete coordinate systems for each part of the LiDAR. To confirm the transformation relationship between the coordinate systems, we use GNSS, IMU, and angular encoders to measure position, orientation, and scanning angle. Then, the multiple-laser-pulse data are transformed into a unified coordinate system in the form of point clouds, eliminating the movement, rotation, and scanning of the LiDAR. Finally, the transformed point clouds are spatially rasterized, and an optimization solution based on the detection model of the GM-APD is performed. After generating the 3D photon distribution of the whole scene, the reconstruction is completed. Since the final optimization solution takes all the point clouds in the entire scanned FOV as input, the reconstruction result is generated directly at once, thus avoiding the registration of multiple FOVs and reducing the algorithm’s complexity.
The proposed method is applied to simulations and imaging experiments to verify the effect. Furthermore, the super-resolution reconstruction capability of the method is also demonstrated under sufficient data acquisition.

2. System Composition

The composition of the GM-APD LiDAR system is shown in Figure 1.
The structural layout of each component is shown in Figure 2. The two-axis scanning platform operates in a dual-frame configuration, with each frame equipped with one scanning motor. The rotation axes of the two motors are perpendicular to each other and intersect at one point. The outer frame’s scanning motor drives the inner frame to rotate uniformly, while the inner frame’s scanning motor drives the payload, achieving two-axis rotation. In this design, the outer scanning axis remains stationary relative to the LiDAR system, while the inner scanning axis rotates around the outer scanning axis due to the movement of the outer motor. Each scanning axis is equipped with an angular measurement device to measure the rotation angle. A gyroscope measures the angular velocity and acceleration of the GM-APD detector’s attitude changes during scanning or system movement. Therefore, it is installed on the innermost payload alongside the detector.
In the system, the GM-APD detector and the pulsed laser each have their own optical systems, forming two subsystems for reception and transmission. These subsystems are installed coaxially on the innermost payload of the scanning structure. To ensure image stabilization and optimize subsequent system modeling, the projection center of the receiving subsystem must coincide with the rotation centers of the two scanning motors. The optical axis of the receiving subsystem serves as the system optical axis of the GM-APD LiDAR system. The GNSS and IMU, combined as an integrated unit, provide the scanning controller with accurate spatial position and attitude data of the moving platform. Meanwhile, the two-axis scanning platform transmits the scanning angles, spatial acceleration, and velocity data—acquired by the internal angular measurement device and gyroscope—to the scanning controller. Based on these data, the scanning controller calculates the control parameters of the scanning motors and issues control commands to ensure the payload’s attitude stability in inertial space.
The information processor receives repeated detection experimental data uploaded by the GM-APD detector and reconstructs the field-of-view scene using algorithms. It identifies the pixel position of the target in the reconstructed image, converts it into scanning angles, and sends these angles as commands to the scanning controller to achieve lateral tracking of the target. During the target identification process, the relative distance of the target can also be determined from the reconstructed range image. This distance is then converted into gate control commands and gating width and transmitted as instructions to the detector within the two-axis scanning platform. Finally, when wide-field detection imaging is required, the information processor sends scanning commands to the scanning controller. The controller, integrating all relevant data, controls the scanning motors to achieve stable two-dimensional scanning of the payload in inertial space.
As shown in Figure 2, we established coordinate systems to support the design of the method, which included the detector coordinate system (CS. d), the optical coordinate system (CS. o), the scanning structure coordinate system (CS. s), the aircraft coordinate system (CS. ac), the local geographic coordinate system (CS. l), the geodetic coordinate system (CS. g), and the reconstruction coordinate system (CS. re). Each coordinate system is described in detail as follows.
  • CS. d is a two-dimensional coordinate system, whose origin is located at the initial pixel in the corner of the GM-APD detector, and whose m-axis and n-axis are parallel to the column and row directions of the array, respectively.
  • CS. o’s origin is located at the “projection center” of the optical system. The x-axis is defined to coincide with the line of sight (LOS), and the rest of the axes are defined according to the right-hand rule (ideally, the y-axis and the z-axis of CS. o are parallel to the m-axis and the n-axis of CS. d, respectively).
  • The origin of CS. s is the rotation center of the two-axis scanning structure. When the scanning structure is in the zero position, the y-axis coincides with the pitch scanning axis, and the z-axis coincides with the yaw scanning axis. The x-axis is defined by the right-hand rule (ideally, when the scanning structure is in the zero position, CS. s coincides with CS. o).
  • CS. ac’s origin is located at the center of IMU equipment in POS, whose x-axis, y-axis, and z-axis are defined by the IMU. In the absence of installation error, the x-axis points to the front of the aircraft, the y-axis points to the right side of the fuselage, and the z-axis points to the bottom of the fuselage (ideally, CS. ac and CS. s are parallel to each other in three axes).
  • To make it easier to calculate, CS. l’s origin is located at the center of the GNSS antenna. The x-axis points due north, the y-axis points due east, and the z-axis coincides with the plumb direction.
  • The WGS-84 standard geodetic coordinate system is chosen as CS. g, an intermediate coordinate system for calculation. This coordinate system associates and unifies the CS. l corresponding to the detection of different instantaneous fields of view (IFOVs).
  • CS. re’s origin is located at the center of IMU, corresponding to the last moment in the acquisition process. Its x-axis, y-axis, and z-axis are parallel to CS. o’s. This definition helps ensure that the reconstructed target is more closely aligned with the actual state at that specific moment and reduces occlusion between the original point clouds during coordinate transformation and the reconstruction process.

3. Reconstruction Method

Combining the multiple-laser-pulse acquisition requirement of the GM-APD LiDAR and the adequate definition of the coordinate system of each part, we propose a convex optimized method based on coordinate system transformation with POS data to achieve the reconstruction of moving GM-APD LiDAR. Its schematic is shown in Figure 3.
The main problem faced in the reconstruction of the motion GM-APD LiDAR is that the scene of the pixel changes during the acquisition, and the multiple-laser-pulse data acquired by a single imaging element no longer corresponds to a fixed point of the scene. The method designed in Figure 3 transforms the data obtained by each laser pulse into a unified coordinate system to eliminate the effects generated by LiDAR’s motion.

3.1. Coordinate Transformation

The data are written as $u_{i,m,n}$, representing that a photon detection event occurred at pixel (m, n) in the $u_{i,m,n}$th bin of the ith laser pulse. If this photon originates from the echo light of the signal, the target distance of this point can be expressed by
$$d_{i,m,n} = \frac{\left( t_{\mathrm{delay}} + t_{\mathrm{bin}} u_{i,m,n} \right) c}{2},$$
where $t_{\mathrm{delay}}$ is the delay time of the LiDAR’s ranging gate, $t_{\mathrm{bin}}$ is the duration of the time-to-digital converter (TDC) bin, and $c$ is the speed of light.
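As a minimal illustration, Equation (1) can be evaluated as follows (the function and variable names are ours, with times in seconds and distances in meters):

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def bin_to_range(u, t_delay, t_bin, c=C):
    """Equation (1): map a TDC bin index u to a target distance,
    given the ranging-gate delay t_delay and the bin width t_bin (seconds)."""
    return (t_delay + t_bin * np.asarray(u)) * c / 2.0

# e.g. a 13 us gate delay with 1 ns bins puts bin 0 at about 1.95 km:
# bin_to_range(0, 13e-6, 1e-9)  -> ~1948.6 m
```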
When the installation errors of each step are neglected, and the displacement of the platform is relatively small during the multiple-laser-pulse acquisition, the transformation of the data $u_{i,m,n}$ to CS. re is given by
$$\begin{bmatrix} X_{re} \\ Y_{re} \\ Z_{re} \end{bmatrix} = \left( \bar{C}_{ac}^{\,l}\, \bar{C}_{o}^{\,s} \right)^{-1} C_{ac}^{\,l}\, C_{o}^{\,s}\, \frac{\left( t_{\mathrm{delay}} + t_{\mathrm{bin}} u_{i,m,n} \right) c}{2f} \begin{bmatrix} f \\ m d_s - d_h \\ n d_s - d_h \end{bmatrix} + \begin{bmatrix} \Delta X_{l}^{s} \\ \Delta Y_{l}^{s} \\ \Delta Z_{l}^{s} \end{bmatrix} + C_{g}^{l} \begin{bmatrix} \Delta \bar{X}_{ac}^{\,g} \\ \Delta \bar{Y}_{ac}^{\,g} \\ \Delta \bar{Z}_{ac}^{\,g} \end{bmatrix},$$
where $d_s$ is the size of the GM-APD’s image element, $d_h$ is the horizontal (or vertical) distance from the initial image element at the corner to the center of the array, and $f$ is the focal length of the optical system.
The rotation matrices $C_{o}^{s}$, $C_{ac}^{l}$, and $C_{g}^{l}$ are calculated from the scan angles, orientation, and position measured by the encoders and POS, respectively. They are given by
$$C_{o}^{s} = \begin{bmatrix} \cos\theta_p \cos\theta_y & \cos\theta_p \sin\theta_y & -\sin\theta_p \\ -\sin\theta_y & \cos\theta_y & 0 \\ \sin\theta_p \cos\theta_y & \sin\theta_p \sin\theta_y & \cos\theta_p \end{bmatrix},$$
$$C_{ac}^{l} = \begin{bmatrix} \cos\Phi\cos\Theta & \cos\Phi\sin\Theta\sin\Psi - \sin\Phi\cos\Psi & \cos\Phi\sin\Theta\cos\Psi + \sin\Phi\sin\Psi \\ \sin\Phi\cos\Theta & \sin\Phi\sin\Theta\sin\Psi + \cos\Phi\cos\Psi & \sin\Phi\sin\Theta\cos\Psi - \cos\Phi\sin\Psi \\ -\sin\Theta & \cos\Theta\sin\Psi & \cos\Theta\cos\Psi \end{bmatrix},$$
and
$$C_{g}^{l} = \begin{bmatrix} -\sin\theta_{lat}\cos\theta_{lon} & -\sin\theta_{lat}\sin\theta_{lon} & \cos\theta_{lat} \\ -\sin\theta_{lon} & \cos\theta_{lon} & 0 \\ -\cos\theta_{lat}\cos\theta_{lon} & -\cos\theta_{lat}\sin\theta_{lon} & -\sin\theta_{lat} \end{bmatrix},$$
where $\theta_p$ and $\theta_y$ are the pitch and yaw scanning angles measured by the encoders, respectively. $\Psi$, $\Theta$, and $\Phi$ are the aircraft’s roll, pitch, and yaw orientation measured by the IMU. $\theta_{lon}$ and $\theta_{lat}$ are the aircraft’s longitude and latitude measured by the GNSS.
Referring to Equations (3) and (4), the rotation matrices $\bar{C}_{o}^{\,s}$ and $\bar{C}_{ac}^{\,l}$ are calculated from the scan angles $(\bar{\theta}_p, \bar{\theta}_y)$ and the orientation $(\bar{\Psi}, \bar{\Theta}, \bar{\Phi})$ corresponding to the last moment of the multiple-laser-pulse acquisition.
$\Delta X_{l}^{s}$, $\Delta Y_{l}^{s}$, and $\Delta Z_{l}^{s}$ are the linear offsets between the origins of CS. l and CS. s, i.e., the displacement of the GNSS antenna relative to the scanning structure, measured and calibrated in the laboratory.
$(\Delta \bar{X}_{ac}^{\,g}, \Delta \bar{Y}_{ac}^{\,g}, \Delta \bar{Z}_{ac}^{\,g})$ is the coordinate of CS. ac’s origin in CS. g at the last moment of the multiple-laser-pulse acquisition, calculated by
$$\begin{cases} \Delta \bar{X}_{ac}^{\,g} = \left( \nu + \bar{H} \right) \cos\bar{\theta}_{lat} \cos\bar{\theta}_{lon} \\ \Delta \bar{Y}_{ac}^{\,g} = \left( \nu + \bar{H} \right) \cos\bar{\theta}_{lat} \sin\bar{\theta}_{lon} \\ \Delta \bar{Z}_{ac}^{\,g} = \left[ \left( 1 - e^2 \right)\nu + \bar{H} \right] \sin\bar{\theta}_{lat}, \end{cases}$$
where $\bar{H}$ is the aircraft’s altitude at the last moment of the multiple-laser-pulse acquisition, and $e$ and $\nu$ are parameters related to the Earth’s ellipsoid. They are calculated by
$$e^2 = \frac{a^2 - b^2}{a^2}, \qquad \nu = \frac{a}{\sqrt{1 - e^2 \sin^2\theta_{lat}}},$$
where $a$ and $b$ are the semi-major and semi-minor axes of the Earth’s ellipsoid, respectively.
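As an illustrative sketch of the building blocks in Equations (3)–(7) and of the bracketed pixel vector in Equation (2), the following NumPy functions may be used; the function names, argument layout, and WGS-84 constants are our own choices, and the sign conventions assume the standard aerospace and ECEF definitions adopted in the reconstruction above:

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def rot_scan(theta_p, theta_y):
    """C_o^s built from the pitch/yaw scan angles (Equation (3) form)."""
    cp, sp = np.cos(theta_p), np.sin(theta_p)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    return np.array([[cp * cy, cp * sy, -sp],
                     [    -sy,      cy, 0.0],
                     [sp * cy, sp * sy,  cp]])

def rot_attitude(roll, pitch, yaw):
    """C_ac^l built from the IMU roll/pitch/yaw (Equation (4) form)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
                     [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
                     [    -sp,                cp * sr,                cp * cr]])

def rot_geo(lat, lon):
    """C_g^l built from latitude/longitude (Equation (5) form, ECEF to local level)."""
    cla, sla = np.cos(lat), np.sin(lat)
    clo, slo = np.cos(lon), np.sin(lon)
    return np.array([[-sla * clo, -sla * slo,  cla],
                     [      -slo,        clo,  0.0],
                     [-cla * clo, -cla * slo, -sla]])

def geodetic_to_ecef(lat, lon, h, a=6378137.0, b=6356752.3142):
    """Equations (6)-(7): WGS-84 geodetic coordinates (rad, rad, m) to ECEF."""
    e2 = (a**2 - b**2) / a**2
    nu = a / np.sqrt(1.0 - e2 * np.sin(lat)**2)
    return np.array([(nu + h) * np.cos(lat) * np.cos(lon),
                     (nu + h) * np.cos(lat) * np.sin(lon),
                     ((1.0 - e2) * nu + h) * np.sin(lat)])

def pixel_vector(m, n, u, f, ds, dh, t_delay, t_bin):
    """Bracketed range vector of Equation (2): detection (m, n, u) expressed
    in CS. o, scaled by (t_delay + t_bin * u) * c / (2 f)."""
    r = (t_delay + t_bin * u) * C / (2.0 * f)
    return r * np.array([f, m * ds - dh, n * ds - dh])
```

In practice, these matrices would be evaluated per laser pulse from the interpolated POS and encoder measurements and composed according to Equation (2).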

3.2. Optimization Reconstruction

Each original datum $u_{i,m,n}$ is transformed into a point $(X_{re}, Y_{re}, Z_{re})$ in CS. re, which is applied to estimate the spatial distribution of photons. Before applying the optimization algorithm for the estimation, CS. re needs to be rasterized into voxels to generate a digital matrix for the spatial distribution of photons. Referring to the longitudinal and transverse resolutions of the GM-APD LiDAR, the rasterization parameters of CS. re are given as
$$s_x = \frac{t_{\mathrm{bin}}\, c}{2}$$
and
$$s_{yz} = \frac{d_s\, \bar{t}_{\mathrm{delay}}\, c}{2f},$$
where $\bar{t}_{\mathrm{delay}}$ is the delay time of the LiDAR’s ranging gate at the last moment of the multiple-laser-pulse acquisition.
After rasterization, each voxel is labeled with the indices m, n, and k. The set of coordinate points in such a voxel is
$$U_{m,n,k} = \left\{ \left( X_{re}, Y_{re}, Z_{re} \right) \,\middle|\, (k-1) s_x \le X_{re} < k s_x,\ (n-1) s_{yz} \le Y_{re} < n s_{yz},\ (m-1) s_{yz} \le Z_{re} < m s_{yz} \right\}.$$
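A minimal sketch of this voxelization step is given below, assuming the transformed points are stored as an (N, 3) array in CS. re and that the voxel grid starts at the coordinate origin; the function name and the axis-to-index convention (k for range, n and m for the two lateral axes) follow Equation (10):

```python
import numpy as np

def voxelize(points, s_x, s_yz, shape):
    """Count how many transformed points fall into each voxel of CS. re.
    points: (N, 3) array of (X_re, Y_re, Z_re); shape: (m_max, n_max, k_max).
    Voxel indices are obtained by integer division by the voxel sizes of
    Equations (8) and (9)."""
    k = np.floor(points[:, 0] / s_x).astype(int)    # range (X_re) index
    n = np.floor(points[:, 1] / s_yz).astype(int)   # lateral (Y_re) index
    m = np.floor(points[:, 2] / s_yz).astype(int)   # lateral (Z_re) index
    m_max, n_max, k_max = shape
    keep = (m >= 0) & (m < m_max) & (n >= 0) & (n < n_max) & (k >= 0) & (k < k_max)
    counts = np.zeros(shape, dtype=int)
    np.add.at(counts, (m[keep], n[keep], k[keep]), 1)
    return counts
```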
With each dataset of a voxel, reconstruction can be achieved by applying the optimization algorithm based on the negative log-likelihood.
The ranging gate of the GM-APD LiDAR is divided into a finite number of bins by the TDC, and the photon detection events within each bin conform to a negative-binomial distribution [29]. The detection probability of any pixel in the kth bin can be expressed as
$$P_k = \prod_{j=1}^{k-1} \left( 1 - N_j \right) N_k,$$
where $N_k$ denotes the average number of photons entering this pixel in the kth bin. In practice, the number of photons satisfies $0 \le N_k \le 1$, ensuring that the probability is non-negative. The negative log-likelihood function is constructed as
$$L(N) = -\ln \prod_{i=1}^{i_{\max}} P_{k_i} = -\sum_{i=1}^{i_{\max}} \sum_{j=1}^{k_i - 1} \ln\left( 1 - N_j \right) - \sum_{i=1}^{i_{\max}} \ln N_{k_i},$$
where $i_{\max}$ is the total number of laser pulses.
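For one voxel column, the detection model and its negative log-likelihood can be sketched as follows (0-based bin indices; `N` is the column of average photon numbers and `k_detected` holds the detected bin index of each pulse that fired; the names are ours):

```python
import numpy as np

def detection_prob(N):
    """Equation (11): probability that the first detection falls in bin k,
    P_k = N_k * prod_{j<k} (1 - N_j), for a column of mean photon numbers N."""
    no_fire = np.cumprod(1.0 - N)                    # prob of surviving bins 0..k
    prior = np.concatenate(([1.0], no_fire[:-1]))    # prob of surviving bins before k
    return N * prior

def neg_log_likelihood(N, k_detected):
    """Equation (12): negative log-likelihood of the detected bin indices."""
    k_detected = np.asarray(k_detected)
    # log_survive[k] = sum_{j<k} ln(1 - N_j)
    log_survive = np.cumsum(np.concatenate(([0.0], np.log(1.0 - N[:-1]))))
    return -np.sum(log_survive[k_detected] + np.log(N[k_detected]))
```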
Letting $m = m_0$ and $n = n_0$, the negative log-likelihood function corresponding to one column of voxels can be obtained by
$$L\left( N_{m_0,n_0} \mid U_{m_0,n_0} \right) = -\sum_{\left( X_{re},\, Y_{re},\, Z_{re} \right) \in U_{m_0,n_0}} \left[ \sum_{j=1}^{k-1} \ln\left( 1 - N_{m_0,n_0,j} \right) + \ln N_{m_0,n_0,k} \right],$$
where
$$U_{m_0,n_0} = \bigcup_{k} U_{m_0,n_0,k}.$$
Then, Equation (13) can be abbreviated as
$$L(N \mid U) = -\sum_{U} \left[ \sum_{j=1}^{k-1} \ln\left( 1 - N_j \right) + \ln N_k \right] = -\sum_{k=1}^{k_{\max}} \left[ \sum_{j=1}^{k-1} \ln\left( 1 - N_j \right) + \ln N_k \right] Y_k,$$
where $Y_k$ is the number of points in set $U_{m,n,k}$, and $k_{\max}$ is the total number of CS. re’s voxels in dimension k.
The first and second partial derivatives of Equation (15) are
$$\frac{\partial L}{\partial N_k} = \sum_{j=k+1}^{k_{\max}} \frac{Y_j}{1 - N_k} - \frac{Y_k}{N_k}$$
and
$$\frac{\partial^2 L}{\partial N_k\, \partial N_{k'}} = \begin{cases} \displaystyle \sum_{j=k+1}^{k_{\max}} \frac{Y_j}{\left( 1 - N_k \right)^2} + \frac{Y_k}{N_k^2}, & k' = k \\ 0, & k' \ne k. \end{cases}$$
So, the negative log-likelihood function is convex since its second partial derivatives are non-negative. The TV regularization is introduced as a constraint, and the solution of the spatial photon distribution is transformed into an optimization problem with a globally optimal solution, which is shown as
$$\hat{N} = \underset{N,\; N \ge 0}{\arg\min}\; L(N \mid U) + \lambda_x R_x(N) + \lambda_{yz} R_{yz}(N),$$
where $\hat{N}$ is the estimated result of N, which is the three-dimensional matrix of the average photon distribution in space. $R_x$ and $R_{yz}$ are the longitudinal and lateral TV regularization functions, while $\lambda_x$ and $\lambda_{yz}$ are the longitudinal and lateral regularization coefficients.
As can be seen, we chose an anisotropic double regularization form to construct the optimization equation, aiming to account for the differences in spatial resolution in the longitudinal and lateral directions of the GM-APD LiDAR. The definitions of the two regularization functions can be expressed as
$$R_x(N) = \sum_{k=1}^{k_{\max}} \left| N_{m,n,k} - N_{m,n,k+1} \right|$$
and
$$R_{yz}(N) = \sum_{m=1}^{m_{\max}} \sum_{n=1}^{n_{\max}} \left( \left| N_{m,n,k} - N_{m+1,n,k} \right| + \left| N_{m,n,k} - N_{m,n+1,k} \right| \right).$$
For edge voxels, the out-of-range terms $N_{m_{\max}+1,n,k}$, $N_{m,n_{\max}+1,k}$, and $N_{m,n,k_{\max}+1}$ are handled at the boundary (e.g., by replicating the adjacent boundary value so that the corresponding difference term vanishes).
Applying the method given in [30], the three-dimensional matrix of the spatial photon distribution corresponding to the scene can finally be solved. Since the matrix N is highly sparse, we can apply a detection-based reconstruction strategy, which allows us to generate the intensity and range images of the scene from it. For example, using the maximum value method, the formula can be expressed as
$$D_{m,n} = \arg\max_{k} \hat{N}_{m,n,k}, \qquad I_{m,n} = \max_{k} \hat{N}_{m,n,k},$$
where $D_{m,n}$ is the value at position (m, n) in the reconstructed distance image matrix D, and $I_{m,n}$ is the value at position (m, n) in the reconstructed intensity image matrix I.
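A simplified sketch of this optimization step is shown below. It uses a plain projected-gradient iteration with a smoothed version of the anisotropic TV terms rather than the SPIRAL-type solver of [30], so it should be read as an illustration of the negative log-likelihood, regularization, and maximum-value extraction above, not as the exact implementation:

```python
import numpy as np

def nll_grad(N, Y):
    """First partial derivative of the column negative log-likelihood above.
    N, Y: (m_max, n_max, k_max) arrays of mean photon numbers and voxel counts."""
    # sum_{j > k} Y_j along the range axis (reverse cumulative sum, exclusive)
    tail = np.flip(np.cumsum(np.flip(Y, axis=2), axis=2), axis=2) - Y
    return tail / (1.0 - N) - Y / N

def smoothed_tv_grad(N, lam_x, lam_yz, eps=1e-6):
    """Gradient of a smoothed anisotropic TV (|d| replaced by sqrt(d^2 + eps))."""
    g = np.zeros_like(N)
    for axis, lam in ((2, lam_x), (0, lam_yz), (1, lam_yz)):
        d = np.diff(N, axis=axis)               # N[i+1] - N[i] along `axis`
        s = lam * d / np.sqrt(d * d + eps)
        pad = [(0, 0)] * 3
        pad[axis] = (0, 1)
        g -= np.pad(s, pad)                     # contribution to the left voxel
        pad[axis] = (1, 0)
        g += np.pad(s, pad)                     # contribution to the right voxel
    return g

def reconstruct(Y, lam_x, lam_yz, step=1e-3, iters=200):
    """Projected-gradient sketch of the constrained problem above:
    minimize NLL + anisotropic TV subject to 0 < N < 1."""
    N = np.clip(Y / (Y.max() + 1.0), 1e-3, 0.5)      # crude initialization
    for _ in range(iters):
        g = nll_grad(N, Y) + smoothed_tv_grad(N, lam_x, lam_yz)
        N = np.clip(N - step * g, 1e-6, 1.0 - 1e-6)  # projection onto the box
    return N

def range_intensity_images(N_hat):
    """Maximum-value extraction of the range and intensity images."""
    D = np.argmax(N_hat, axis=2)        # peak bin index along the range axis
    I = np.max(N_hat, axis=2)
    return D, I
```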

3.3. Super-Resolution Reconstruction

When sufficient data have been acquired, the voxels may contain redundant data points after coordinate transformation. In such cases, selecting smaller rasterization parameters $s_x$ and $s_{yz}$ in the longitudinal and lateral directions, respectively, can enable super-resolution reconstruction.
Due to manufacturing limitations, the effective photosensitive area of the GM-APD detector’s detection unit is often smaller than the physical size of the detection unit itself. This ratio is referred to as the fill factor, as shown in Figure 4a. Consequently, each detection unit can only effectively detect targets within the field of view corresponding to its central position. When the LiDAR system moves laterally relative to the target, this motion can be equivalently regarded as a lateral ‘scanning’ by the detection units.
As shown in Figure 4b, the equivalent lateral scanning process densely covers the area corresponding to the size of the detector unit pixels. By combining the measurements of position, attitude, and scanning angles with coordinate system transformations, the equivalent scanning process can be accurately described. Therefore, lateral super-resolution is feasible. During reconstruction, an appropriate super-resolution voxel size can be determined based on the amount of data collected and the motion characteristics, as shown in Figure 4c, enabling lateral super-resolution reconstruction.
On the other hand, the TDC timing circuit of the GM-APD is responsible for the discrete division of the LiDAR imaging gating domain, which limits the range resolution of GM-APD LiDAR. When longitudinal motion occurs between the carrier and the target, it is equivalent to the GM-APD detector performing a longitudinal ‘scan’ with the current timing interval. During the equivalent scanning process, the photon distribution within the gating domain over time (or distance) densely covers the TDC timing intervals. This means that in different repeated detection experiments, each timing interval may record different photon distance distribution information with finer granularity than the LiDAR’s inherent range resolution. This provides feasibility for longitudinal super-resolution reconstruction.
Raw data collected at different times retain unified longitudinal range information, as shown in Figure 5a. After coordinate transformation, the data points are mapped to accurate spatial positions, and voxelization of the transformed data results in changes to the data points within each voxel, as shown in Figure 5b. By selecting a smaller timing interval for data statistics, as shown in Figure 5c, longitudinal super-resolution reconstruction can be achieved.
Finally, by selecting smaller parameters, $s_x' < s_x$ and $s_{yz}' < s_{yz}$, to specify the voxel size, super-resolution reconstruction can be achieved. At this point, the dataset calculation formula becomes
$$D_{M,N,K} = \left\{ \left( X_{re}, Y_{re}, Z_{re} \right) \,\middle|\, (K-1) s_x' \le X_{re} < K s_x',\ (N-1) s_{yz}' \le Y_{re} < N s_{yz}',\ (M-1) s_{yz}' \le Z_{re} < M s_{yz}' \right\}.$$
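As a sketch, and assuming the transformed point cloud lies in the positive octant of CS. re, the super-resolution re-voxelization only differs from the voxelization sketch of Section 3.2 by the reduced voxel size:

```python
import numpy as np

def super_resolution_voxelize(points, s_x, s_yz, factor=2):
    """Re-voxelize the transformed point cloud with the lateral voxel size
    reduced by `factor` (2 gives the 2 x 2-fold case used in Section 4).
    points: (N, 3) array of (X_re, Y_re, Z_re); indices follow the
    partition above (k: range, n: Y_re, m: Z_re)."""
    s = s_yz / factor
    k = np.floor(points[:, 0] / s_x).astype(int)
    n = np.floor(points[:, 1] / s).astype(int)
    m = np.floor(points[:, 2] / s).astype(int)
    valid = (k >= 0) & (n >= 0) & (m >= 0)       # keep the positive octant only
    m, n, k = m[valid], n[valid], k[valid]
    counts = np.zeros((m.max() + 1, n.max() + 1, k.max() + 1), dtype=int)
    np.add.at(counts, (m, n, k), 1)
    return counts
```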

4. Simulation-Based Quantitative Evaluation

In the experiment, the complete information (distance, reflectance, etc.) of the target in the scene is difficult to obtain. In that case, we cannot provide a ground truth for comparison for the algorithm evaluation. Therefore, to verify the effectiveness of the proposed method, we conducted a simulation based on the Monte Carlo method [31], and its implementation process is shown in Figure 6. In the simulation, the array size of the GM-APD is 64 × 64, the pixel’s angular resolution is 0.5 mrad, the detector’s time resolution is 1 ns, corresponding to a distance resolution of $d_D = t_{\mathrm{bin}} c / 2 = 0.15$ m, and the laser worked at 2 kHz. A scene of urban buildings is selected as the simulation input, shown in Figure 7.
Under a signal-to-background ratio (SBR, the ratio of the total signal photon intensity to the background photon intensity in the simulation) of ~0.137, the simulated LiDAR performs uniform whiskbroom scanning of the scene. The average distance to the targets is about 1.95 km, and the scanning speed is 11.1°/s. Based on the accuracy of commonly used attitude and position measurement devices, an orientation disturbance of ±0.1° and a position disturbance of ±0.5 m were added in the simulation to emulate the measurement errors in a real operating environment. Finally, the entire simulated imaging area corresponds to an 80 m × 220 m region on the ground, equivalent to 0.4 s of scanning (corresponding to 800 Monte Carlo simulations under the 2 kHz laser repetition rate setting), with the lateral scanning direction perpendicular to the motion of the platform. The reconstruction results obtained with different methods are shown in Figure 8 and Figure 9. The voxel size is calculated based on Equations (8) and (9).
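For reference, the per-pulse detection draw underlying this kind of Monte Carlo simulation can be sketched as follows (one pixel, a uniform background plus a single target bin; the full simulation of Figure 6 and [31] additionally models the optics, laser pulse shape, and scene geometry, and all names and numbers below are illustrative only):

```python
import numpy as np

def simulate_pulse(N, rng):
    """Draw the detection outcome of one laser pulse for one pixel.
    N: per-bin average photon numbers (signal + background) inside the gate.
    Returns the 0-based index of the first bin that fires, or -1 if none fires."""
    fired = rng.random(N.shape) < N       # each bin fires with probability N_k
    hits = np.flatnonzero(fired)
    return int(hits[0]) if hits.size else -1

rng = np.random.default_rng(0)
N = np.full(1500, 2e-4)                   # uniform background over a 1500-bin gate
N[700] += 0.05                            # a single target return (illustrative values)
detections = [simulate_pulse(N, rng) for _ in range(800)]   # 800 pulses = 0.4 s at 2 kHz
```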
In Figure 8 and Figure 9, (f) represents the 2 × 2-fold super-resolution reconstruction results, corresponding to a lateral resolution of $s_{yz}' = s_{yz} / 2$.
In Figure 8b–d, the reconstruction is performed by the peak-picking algorithm [32]. The high-level noise in Figure 8b is due to the small number of pulses for each instantaneous FOV. As a result of the significant orientation disturbance during the scanning, the targets are seriously distorted and shifted in Figure 8c. The results with transformation, such as those in Figure 8d–f, reconstruct the spatial relationship of the scene much more accurately. The optimization-based result in Figure 8e has lower noise than the peak-picking result in Figure 8d. The super-resolution reconstruction result (Figure 8f), on the other hand, further improves the spatial resolution on top of the low-noise character of Figure 8e.
In the reconstruction result shown in Figure 9b, the target is almost indistinguishable. This is due to the same issue as in Figure 8b, where insufficient data correspond to the instantaneous FOV. In Figure 9c, because only the scanning speed was used for position estimation, positional errors occurred for each pixel’s data points, ultimately causing severe blurring of the target.
Figure 9d–f all show reconstruction results with accurate target positions; however, the noise is noticeably more severe in Figure 9d. By comparison, the reconstruction results of the proposed method, as shown in Figure 9e,f, not only ensure the accurate recovery of the target’s position but also effectively reduce noise. Notably, Figure 9f, which achieves super-resolution intensity image reconstruction under the same data conditions, further enhances the details of the target reconstruction.
For the quantitative comparison, the root mean square error (RMSE) of each reconstructed distance result was calculated against the ground truth in Figure 8a. The formula for the RMSE is
$$Q_{\mathrm{RMSE}} = \sqrt{ \frac{ \left\| D_{\mathrm{truth}} - D \right\|^2 }{ n_{\mathrm{pixels}} } },$$
where $Q_{\mathrm{RMSE}}$ is the computed value of the RMSE, $D_{\mathrm{truth}}$ is the distance ground truth at the same resolution, and $n_{\mathrm{pixels}}$ is the number of pixels.
For the reconstructed intensity results, the peak signal-to-noise ratio (PSNR) relative to the ground truth in Figure 9a was calculated. The formula for the PSNR is
$$Q_{\mathrm{PSNR}} = 10 \log_{10} \frac{ \left\| I_{\mathrm{truth}} \right\|^2 }{ \left\| I_{\mathrm{truth}} - I \right\|^2 },$$
where $Q_{\mathrm{PSNR}}$ is the computed value of the PSNR, and $I_{\mathrm{truth}}$ is the intensity ground truth at the same resolution.
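Both metrics can be computed directly from the reconstructed and ground-truth images, e.g. as follows (assuming NumPy arrays of the same shape; the PSNR form follows the expression above):

```python
import numpy as np

def rmse(D_truth, D):
    """Root mean square error of the reconstructed range image."""
    return np.sqrt(np.mean((D_truth - D) ** 2))

def psnr(I_truth, I):
    """PSNR of the reconstructed intensity image (dB), following the form above."""
    return 10.0 * np.log10(np.sum(I_truth ** 2) / np.sum((I_truth - I) ** 2))
```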
The evaluation metrics for each reconstruction result are shown in Table 1.
Table 1 generally reflects evaluation results consistent with the subjective assessment of the reconstruction effects in Figure 8 and Figure 9. The reconstructed range image after coordinate transformation contains excessive noise, resulting in the largest RMSE. Reconstruction based on position estimation using scanning speed reduces noise but introduces errors in the target’s spatial position, leading to a significant RMSE as well.
Compared to the range resolution of the LiDAR, the RMSE of the reconstruction results is relatively large. This is primarily due to errors in the recovery of the target’s lateral position during the reconstruction process. In scenes with clusters of houses, where there are frequent range discontinuities, even small lateral position errors can lead to significant differences in the range values between the reconstructed range image and the reference image at the same pixel location (especially at the edges of the houses). Additionally, the noticeable noise in other reconstruction methods is another reason why the RMSE is much larger than the LiDAR’s range resolution.
In contrast, the reconstruction results using the histogram maximum method after coordinate transformation visually resemble those of the proposed method. However, due to the influence of high background noise, it achieves a range image reconstruction RMSE of 12.82 m. The proposed method, on the other hand, reduces the RMSE to 2.03 m, showing significant improvement.
At the same time, the 3D point clouds of the scanning FOV can be generated without registration, as shown in Figure 10.
Figure 10 uses an ‘East–North–Up’ coordinate system, with a reference point on the ground plane as the origin, to display the 3D point cloud. The altitude above the ground plane is used for color encoding. By applying the proposed method, the detection data are restored to their absolute positions in space.
Through comparison, it can be observed that the reconstruction results exhibit good agreement with the reference point cloud of the simulation input in terms of the target’s position, shape, and posture. The application of 3D point clouds with absolute positions allows for further identification tasks by incorporating prior knowledge of the target.
In the above simulation experiments, the position and angle measurement devices recorded data at a frequency of 100 Hz. However, in practical applications, the operating frequencies of the measurement components may vary. To verify the impact of different operating frequencies of auxiliary measurement components on the reconstruction results, a controlled variable comparison experiment was conducted.
Under the same simulation parameters as the previous experiments, the simulated carrier motion disturbance was modified to a high-frequency disturbance of 400 Hz with a positional variation of ±0.1 m and an attitude angle disturbance of ±0.03°. By varying the simulated measurement frequency of the auxiliary data, the measurement frequency-evaluation metric curves were obtained, as shown in Figure 11. The error bars on the curves represent the standard deviations of multiple simulation results.
To simplify the analysis, only two reconstruction methods, the peak-picking method based on coordinate transformation and the proposed method, are compared. As shown in Figure 11, with the exponential increase in the auxiliary data measurement frequency, the RMSE of the reconstructed range image for both methods exhibits a nearly linear decreasing trend. However, the rate of decrease for the photon 3D distribution reconstruction method is nearly twice that of the histogram maximum method. Under the same conditions, compared to the histogram method, the photon 3D distribution reconstruction method reduces RMSE by 77.64% to 78.39%.
For the reconstructed intensity image, the increase in auxiliary data measurement frequency has little impact on the PSNR of the histogram maximum method but significantly improves the results of the photon 3D distribution reconstruction method. Compared to the histogram method, the photon 3D distribution reconstruction method achieves an average PSNR improvement of 1.96 dB, with an improvement of up to 2.37 dB under sufficient data measurement conditions. The comparative experiments highlight the superior reconstruction performance of the proposed method under the same coordinate transformation conditions.

5. Experimental Validation and Application

The quantitative performance validation experiments through simulations have demonstrated the effective role of the proposed method in scanning LiDAR imaging reconstruction on mobile platforms. To further validate the proposed method, we utilized data obtained from an airborne LiDAR imaging experiment. Due to the experimental conditions, this dataset was not collected specifically for surveying or remote sensing purposes. However, it includes complete auxiliary data such as attitude and position information, which is sufficient to verify our method. The array size of the GM-APD is 64 × 64, the angular resolution is 0.25 mrad, the distance resolution is 0.1875 m, and the laser worked at 20 kHz.

5.1. Airborne LiDAR Imaging Experiment

In the experiment, a cluster of houses, located approximately 800 m from the LiDAR, was selected as the target, and the experiment scene is shown in Figure 12. The average size of the houses is approximately 15 m × 8 m × 5 m. The experiment was conducted at sunset with an SBR of ~5.67. When the aircraft was about to fly away from the target, a segment of passive scanning data was recorded, based on which we validated the reconstruction algorithm. Here, ‘passive scanning’ refers to the data acquisition process where the LiDAR’s field of view (FOV) naturally covers the target area as the aircraft moves away, without active scanning planning or adjustment of the LiDAR system. During data collection, the aircraft’s relative flight altitude averaged approximately 240 m, with an average flight speed of about 65 m/s. The passive scanning speed reached up to 8.3°/s, with the scanning direction approximately perpendicular to the motion direction. The scanning FOV was about 1.12° × 3.55° (covering approximately 10 houses), and the data acquisition time was about 1 s. Since the LiDAR optical axis was oriented in a forward-downward direction during passive scanning, the target cluster of houses experienced mutual occlusion, and the collected data corresponded to an area of approximately 50 m × 60 m on the ground.
Applying the proposed method, we reconstructed the scene from the passive scanning data and obtained the following results as shown in Figure 13. The voxel size is calculated based on Equations (8) and (9).
In Figure 13b, we can see blurring in the reconstruction result. This is because the mechanical scanning structure used in this experiment is less stable, and there are speed fluctuations when the scanning speed is relatively high. Moreover, the acquisition frequency and attitude accuracy of the POS and encoders are insufficient for precise and stable detection requirements. Specifically, the acquisition frequency is 100 Hz, and the attitude accuracy is 0.03° (GNSS) and 0.015° (post-processed). In the experiment, the parameters needed for the coordinate system transformation can only be obtained by interpolation (e.g., cubic spline interpolation), which leads to a certain degradation in reconstruction performance. These problems can be mitigated by optimizing the scanning structure and selecting a POS and encoders with higher operating frequency and accuracy.
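For completeness, a minimal sketch of such an interpolation of the 100 Hz POS samples to the laser pulse timestamps is given below (the names are ours; the Euler angles are assumed smooth and free of wrap-around over the acquisition window, otherwise they should be unwrapped or interpolated as quaternions):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_pos(t_pos, positions, attitudes, t_pulse):
    """Cubic-spline interpolation of POS samples (e.g. 100 Hz) to the laser
    pulse timestamps.  positions: (N, 3) array (e.g. lat, lon, alt);
    attitudes: (N, 3) Euler angles in radians.  Both are interpolated
    component-wise."""
    pos = CubicSpline(t_pos, positions, axis=0)(t_pulse)
    att = CubicSpline(t_pos, attitudes, axis=0)(t_pulse)
    return pos, att
```

The interpolated values then feed the per-pulse rotation matrices of Section 3.1.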
The reconstructed 3D point clouds of the scene and their comparison with the aerial map are shown in Figure 14.
Through comparison, it can be seen that after the reconstruction, the 3D point clouds are consistent with the actual scene in terms of orientation and proportion. The application of the proposed method in scanning imaging makes up for the shortcomings of this type of LiDAR system in terms of instantaneous FOV and array size. At the same time, it avoids registration and improves reconstruction efficiency.

5.2. Imaging Under Vibration

A local part of a building was selected as the target, as shown in Figure 15. In the experiment, we introduced a jitter to simulate the vibration disturbance of the LiDAR on a vehicle-borne or airborne platform and verify the adaptability of the proposed method in such a situation. Limited by the operating frequency of the measuring equipment we used, the jitter was set with a small amplitude of about 0.03° and a low frequency of 1 Hz. The SBR was ~0.21, and the ranging gate (the distance window within which the LiDAR system measures distances) was set between 360 m and 510 m.
The proposed method was applied to the reconstruction. The results are shown in Figure 16.
Under the measurement of the POS and encoders, LiDAR’s vibration corresponds to a sub-pixel scanning of the target. After coordinate system transformation, the proposed method can provide reconstruction results with a higher spatial resolution (with an equivalent angular resolution of 0.125 mrad). The SBR was low at the time of the acquisition, i.e., the background light was relatively strong. The total echo intensity image and the background light intensity image can be obtained after further calculation based on the photon distribution, as shown in Figure 17.
After applying the proposed method, the data collected by the 64 × 64 array GM-APD LiDAR can generate a 256 × 256 intensity image, as shown in Figure 17b. This result is already very close to that of a conventional infrared camera. To some extent, this makes up for the lack of pixels in the GM-APD detector. Combined with conventional recognition and monitoring algorithms, these results can provide more scene information. Together with the high-resolution distance image, the 3D point clouds colored by the background light intensity (i.e., the sunlight reflection) can be generated, as shown in Figure 18.
The proposed method avoids the vibration influence in the imaging process of the GM-APD LiDAR and provides support for super-resolution reconstruction. Especially under strong daylight, the background light information, which was once regarded as noise, can be used by the proposed method to obtain the sunlight reflection of the scene. This feature greatly enhances the application efficiency of the GM-APD LiDAR.

6. Discussion

This paper presents a reconstruction method for GM-APD LiDAR based on coordinate system transformation and convex optimization, successfully addressing the imaging reconstruction problem caused by LiDAR motion and active scanning. By accurately recording the motion of the detector using GNSS, IMU, and encoders, our method unifies the data collected from multiple laser pulses into a common coordinate system and uses voxelization to obtain the input for the optimization process. Combining the negative log-likelihood function and anisotropic TV regularization, an optimization equation is constructed to solve for the spatial photon distribution of the scene, thereby reconstructing both the range and intensity images. Unlike traditional methods that rely on field-of-view stitching, our method innovatively avoids this operation, significantly reducing noise in the reconstruction results and improving data utilization and reconstruction accuracy. The output of our method is a three-dimensional matrix of spatial photon distribution, which retains more scene information compared to directly outputting 2D intensity and range images, thus providing more complete and reliable data for subsequent applications.
In traditional dynamic imaging methods, after reconstructing each instantaneous field of view (FOV), field-of-view stitching is typically performed through coordinate transformation or feature point matching using POS data. However, this process may lead to error accumulation, which can impact the final reconstruction results. In contrast, our method first restores the original data to its absolute spatial position, then performs reconstruction based on the optimization equation, treating the entire dynamic dataset as a whole to solve for the complete scene. This approach not only increases data utilization efficiency and reduces error propagation, but also fully leverages the spatial correlations within the scene. By incorporating the deconvolution capabilities of the optimization equation and considering the changes in the spatial absolute positions of the original data, we also achieve super-resolution reconstruction, further extending the application value of GM-APD LiDAR.
In the performance evaluation simulation, for the 1.95 km building scene’s super-resolution reconstruction, our method achieved a significant precision improvement under the condition of SBR ~0.137, with errors reduced by an order of magnitude compared to traditional field-of-view stitching methods. The improvement in super-resolution reconstruction is attributed to the optimization of the photon spatial distribution during data acquisition. Through precise coordinate system transformation and negative log-likelihood optimization, we are able to better restore the photon distribution, avoiding spatial position errors caused by measurement and attitude errors. Additionally, our method also demonstrated good reconstruction results in airborne imaging and vibration-affected imaging applications, further validating its practicality.
Despite the significant achievements, there are still limitations to our method. For example, the measurement frequency and encoder precision used in the experiments may limit performance when the LiDAR sensor is moving at high speeds with the mobile platform. Particularly, under low-frequency measurements, although coordinate system transformation and deconvolution operations effectively reduce errors, lower measurement frequencies may still introduce reconstruction errors. Additionally, instability in the scanning structure and mechanical errors in the optical system could affect the imaging quality in high-dynamic scenes. Future research can focus on improving the measurement frequency of POS devices and encoders to further optimize data acquisition accuracy, particularly in complex environments.
Further improvements could focus on enhancing the stability and precision of the hardware system to reduce interference from motion during data collection. In addition, combining advanced algorithms such as deep learning could be an effective research direction. Future work can also explore how to integrate our method with other sensor technologies (such as visual sensors) to further improve the overall accuracy and real-time performance of the imaging system.
In conclusion, the reconstruction method proposed in this paper provides an efficient and reliable solution for GM-APD LiDAR applications on mobile platforms, demonstrating potential in dynamic 3D detection, super-resolution reconstruction, and 3D reconstruction. Future research will explore ways to improve measurement accuracy, optimize hardware systems, and integrate new technologies such as machine learning to enhance the performance and applicability of this method in real-world scenarios.

Author Contributions

Conceptualization, D.L.; Methodology, D.L.; Software, D.L.; Investigation, W.L.; Data curation, X.Z.; Writing—original draft, D.L.; Writing—review & editing, S.L.; Supervision, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eom, J.; Kim, G.; Park, Y. Concurrent firing LIDAR for self-driving car. In Proceedings of the 2021 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 20–22 October 2021; pp. 1226–1229. [Google Scholar]
  2. Liu, H.; Wu, C.; Wang, H. Real time object detection using LiDAR and camera fusion for autonomous driving. Sci. Rep. 2023, 13, 8056. [Google Scholar] [CrossRef] [PubMed]
  3. Wei, C.; Jian, Z. Application of intelligent UAV onboard LiDAR measurement technology in topographic mapping. In Proceedings of the 2021 IEEE International Conference on Emergency Science and Information Technology (ICESIT), Chongqing, China, 22–24 November 2021; pp. 942–945. [Google Scholar]
  4. Yue, X.; Zhang, Y.; Chen, J.; Chen, J.; Zhou, X.; He, M. LiDAR-based SLAM for robotic mapping: State of the art and new frontiers. Ind. Robot. Int. J. Robot. Res. Appl. 2024, 51, 196–205. [Google Scholar] [CrossRef]
  5. Li, Y.; Guo, Q.; Wan, B.; Qing, H.; Wang, D.; Xu, K.; Yang, M. Current status and prospect of three-dimensional dynamic monitoring of natural resources based on LiDAR. Natl. Remote Sens. Bull. 2021, 25, 381–402. [Google Scholar] [CrossRef]
  6. Coman, C.M.; Toma, B.C.; Constantin, M.A.; Florescu, A. Ground level LiDAR as a contributing indicator in an environmental protection application. IEEE Access 2023, 11, 106277–106288. [Google Scholar] [CrossRef]
  7. Grishin, M.Y.; Lednev, V.N.; Sdvizhenskii, P.A.; Pavkin, D.Y.; Nikitin, E.A.; Bunkin, A.F.; Pershin, S.M. Lidar monitoring of moisture in biological objects. In Doklady Physics; Pleiades Publishing: Moscow, Russia, 2021; Volume 66, pp. 273–276. [Google Scholar]
  8. An, S.; Yuan, L.; Xu, Y.; Wang, X.; Zhou, D. Ground subsidence monitoring in based on UAV-LiDAR technology: A case study of a mine in the Ordos, China. Geomech. Geophys. Geo-Energy Geo-Resour. 2024, 10, 57. [Google Scholar] [CrossRef]
  9. Pan, W.; Li, C.; Yuan, X.; Long, T.; Zeng, X.; Xie, G. Research on lidar target detection for metro scene. Electr. Drive Locomot. 2022, 2, 89–96. [Google Scholar]
  10. Vrba, M.; Walter, V.; Pritzl, V.; Pliska, M.; Báča, T.; Spurný, V.; Saska, M. On onboard LiDAR-based flying object detection. IEEE Trans. Robot. 2024, 41, 593–611. [Google Scholar] [CrossRef]
  11. Wang, H.; Peng, Y.; Liu, L.; Liang, J. Study on target detection and tracking method of UAV based on lidar. In Proceedings of the 2021 Global Reliability and Prognostics and Health Management (PHM-Nanjing), Nanjing, China, 15–17 October 2021; pp. 1–6. [Google Scholar]
  12. Aull, B.F.; Loomis, A.H.; Young, D.J.; Heinrichs, R.M.; Felton, B.J.; Daniels, P.J.; Landers, D.J. Geiger-mode avalanche photodiodes for three-dimensional imaging. Linc. Lab. J. 2002, 13, 335–349. [Google Scholar]
  13. Marino, R.M.; Davis, W.R. Jigsaw: A foliage-penetrating 3D imaging laser radar system. Linc. Lab. J. 2005, 15, 23–36. [Google Scholar]
  14. Albota, M.A.; Aull, B.F.; Fouche, D.G.; Heinrichs, R.M.; Kocher, D.G.; Marino, R.M.; Zayhowski, J.J. Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays. Linc. Lab. J. 2002, 13, 351–370. [Google Scholar]
  15. Tachella, J.; Altmann, Y.; Ren, X.; McCarthy, A.; Buller, G.S.; Mclaughlin, S.; Tourneret, J.Y. Bayesian 3D reconstruction of complex scenes from single-photon lidar data. SIAM J. Imaging Sci. 2019, 12, 521–550. [Google Scholar] [CrossRef]
  16. Kirmani, A.; Venkatraman, D.; Shin, D.; Colaço, A.; Wong, F.N.; Shapiro, J.H.; Goyal, V.K. First-photon imaging. Science 2014, 343, 58–61. [Google Scholar] [CrossRef] [PubMed]
  17. Shin, D.; Xu, F.; Venkatraman, D.; Lussana, R.; Villa, F.; Zappa, F.; Shapiro, J.H. Photon-efficient imaging with a single-photon camera. Nat. Commun. 2016, 7, 12046. [Google Scholar] [CrossRef] [PubMed]
  18. Li, Z.P.; Ye, J.T.; Huang, X.; Jiang, P.Y.; Cao, Y.; Hong, Y.; Pan, J.W. Single-photon imaging over 200 km. Optica 2021, 8, 344–349. [Google Scholar] [CrossRef]
  19. Tobin, R.; Halimi, A.; McCarthy, A.; Laurenzis, M.; Christnacher, F.; Buller, G.S. Three-dimensional single-photon imaging through obscurants. Opt. Express 2019, 27, 4590–4611. [Google Scholar] [CrossRef]
  20. Maccarone, A.; McCarthy, A.; Ren, X.; Warburton, R.E.; Wallace, A.M.; Moffat, J.; Buller, G.S. Underwater depth imaging using time-correlated single-photon counting. Opt. Express 2015, 23, 33911–33926. [Google Scholar] [CrossRef]
  21. Altmann, Y.; McLaughlin, S.; Davies, M.E. Fast online 3D reconstruction of dynamic scenes from individual single-photon detection events. IEEE Trans. Image Process. 2019, 29, 2666–2675. [Google Scholar] [CrossRef]
  22. Jiang, P.Y.; Li, Z.P.; Xu, F. Compact long-range single-photon imager with dynamic imaging capability. Opt. Lett. 2021, 46, 1181–1184. [Google Scholar] [CrossRef]
  23. Wu, J.; Qian, Z.; Zhao, Y.; Yu, X.; Zheng, L.; Sun, W. 64 × 64 GM-APD array-based readout integrated circuit for 3D imaging applications. Sci. China Inf. Sci. 2019, 62, 62407. [Google Scholar] [CrossRef]
  24. Xu, W.; Zhen, S.; Xiong, H.; Zhao, B.; Liu, Z.; Zhang, Y.; Zhang, B. Design of 128 × 32 GM-APD array ROIC with multi-echo detection for single photon 3D LiDAR. In Proceedings of the Seventh Symposium on Novel Photoelectronic Detection Technology and Applications, Kunming, China, 5–7 November 2021; Volume 11763, pp. 1171–1176. [Google Scholar]
  25. Smith, G.M.; McIntosh, K.A.; Donnelly, J.P.; Funk, J.E.; Mahoney, L.J.; Verghese, S. Reliable InP-based Geiger-mode avalanche photodiode arrays. In Proceedings of the Advanced Photon Counting Techniques III, Orlando, FL, USA, 14–16 April 2009; Volume 7320, pp. 163–172. [Google Scholar]
  26. Henriksson, M.; Jonsson, P. Photon-counting panoramic three-dimensional imaging using a Geiger-mode avalanche photodiode array. Opt. Eng. 2018, 57, 093104. [Google Scholar] [CrossRef]
  27. Gluckman, J. Design of the processing chain for a high-altitude, airborne, single-photon lidar mapping instrument. In Proceedings of the Laser Radar Technology and Applications XXI, Baltimore, MD, USA, 19–20 April 2016; Volume 9832, pp. 20–28. [Google Scholar]
  28. Jonsson, P.; Axelsson, M.; Allard, L.; Bissmarck, F.; Henriksson, M.; Rahm, M.; Sjöqvist, L. Photon counting 3D imaging from multiple positions. In Proceedings of the Emerging Imaging and Sensing Technologies for Security and Defence V and Advanced Manufacturing Technologies for Micro-and Nanosystems in Security and Defence III, Online, 21–25 September 2020; Volume 11540, pp. 86–101. [Google Scholar]
  29. Fouche, D.G. Detection and false-alarm probabilities for laser radars that use Geiger-mode detectors. Appl. Opt. 2003, 42, 5388–5398. [Google Scholar] [CrossRef] [PubMed]
  30. Harmany, Z.T.; Marcia, R.F.; Willett, R.M. This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms—Theory and practice. IEEE Trans. Image Process. 2011, 21, 1084–1096. [Google Scholar] [CrossRef] [PubMed]
  31. Sullivan, R.; Franklin, J.; Heagy, J. Flash Lidar: Monte Carlo Estimates of Ballistic, Diffuse, and Noise Photons as Recorded by Linear and Geiger Detector Arrays; Tech. Rep.; Institute for Defense Analyses: Alexandria, VA, USA, 2004; DA Paper P-3833. [Google Scholar]
  32. Tolt, G.; Grönwall, C.; Henriksson, M. Peak detection approaches for time-correlated single-photon counting three-dimensional lidar systems. Opt. Eng. 2018, 57, 031306. [Google Scholar] [CrossRef]
Figure 1. Schematic of the GM-APD LiDAR’s composition. The arrows in the figure indicate the flow of commands and data.
Figure 2. Structural Layout and Coordinate Frames of the GM-APD LiDAR. The arrows indicate the directions of the coordinate axes. The dashed hollow arrows represent the flow of commands and data.
Figure 3. Schematic of the proposed reconstruction method. During the movement of the GM-APD LiDAR, the scene was acquired by the array detector at different positions and orientations. The data, as the original point clouds, are unified into the same coordinate system after the transformation. Then, the space in which the transformed point clouds are located is rasterized in voxels. After a spatial distribution matrix of photons is constructed, the convex optimization is applied to solve the problem and complete the reconstruction of the target scene. The red solid arrows represent the direction of the optical signal. The black solid arrows represent the processing flow. The colored dots indicate the point cloud representation.
Figure 4. Schematic diagram of the detector array and transverse relative motion. (a) Detection unit grid and fill factor. (b) Equivalent scanning with lateral motion. (c) Super-resolution grid. In (b), the arrows indicate the direction of the equivalent scan.
Figure 5. Schematic diagram of the timing interval and longitudinal relative motion. (a) Original data statistics. (b) After motion preprocessing. (c) Super-resolution time intervals. The different colored dots in the figure represent point cloud samples detected at different time moments.
Figure 6. Flowchart of the Monte Carlo-based simulation.
Figure 7. The 3D model of the simulation scene.
Figure 8. The reconstruction distance results of the scanning simulation. (a) Simulation input. (b) Reconstruction with registration. (c) Reconstruction using the scanning speed. (d) Reconstruction with coordinate system transformation. (e) Reconstruction using the proposed method. (f) 2 × 2-fold super-resolution reconstruction by the proposed method.
Figure 9. The reconstruction intensity results of the scanning simulation. (a) Simulation input. (b) Reconstruction with registration. (c) Reconstruction using the scanning speed. (d) Reconstruction with coordinate system transformation. (e) Reconstruction using the proposed method. (f) 2 × 2-fold super-resolution reconstruction by the proposed method.
Figure 10. The reconstructed 3D point clouds of the scanning simulation. (a) Simulation input. (b) Reconstructed 3D point clouds. The blank area near the building in (b) results from the building occluding part of the LiDAR's FOV.
Figure 11. Evaluation metrics versus data measurement frequency. (a) Variation curve of RMSE. (b) Variation curve of PSNR. The horizontal (frequency) axis uses a logarithmic scale.
Figure 12. The experiment scene of airborne imaging. The dashed box shows the range of passive scanning FOV.
Figure 13. The reconstruction results of passive scanning imaging. (a) Distance image. (b) Intensity image.
Figure 14. The reconstructed 3D point clouds of the scanning imaging. (a) 3D point clouds. (b) 3D point clouds (top view). (c) Aerial map of the imaged area. The dashed box shows the scanning range, and the point clouds within it are reconstructed at the same absolute positions as in the aerial map.
Figure 15. The experiment scene of the imaging under vibration. The dashed box shows the position of the FOV in the scene.
Figure 16. The reconstruction results of the imaging under vibration. (a) Distance image (64 × 64) with peak-picking. (b) Super-resolution distance image (256 × 256) with the proposed method. (c) Intensity image (64 × 64) with peak-picking. (d) Super-resolution intensity image (256 × 256) with the proposed method. The inhomogeneity in the intensity image is caused by laser speckle.
Figure 17. Super-resolution intensity image results of the imaging under vibration. (a) Total echo intensity image (256 × 256). (b) Background light intensity image (256 × 256).
Figure 18. The reconstructed 3D point clouds of the imaging under vibration. For display convenience, the 3D point clouds of the two buildings are divided into (a,b). Both sets of point clouds are in the same coordinate system, with the position of the LiDAR as the origin and the three axes pointing along the view axis, to the left of the LiDAR, and above the LiDAR, respectively. The 3D point clouds are colored by the normalized background light intensity.
Table 1. RMSE and PSNR with different methods.
Method | RMSE (m) | PSNR (dB)
--- | --- | ---
With registration | 39.07 | 2.66
Using the scanning speed | 20.89 | 2.13
With coordinate system transformation | 12.82 | 2.89
Proposed | 2.74 | 5.41
Proposed (super-resolution) | 2.03 | 5.01
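The RMSE and PSNR reported in Table 1 (and plotted against measurement frequency in Figure 11) are standard error and fidelity metrics computed between the reconstructed and reference distance images. The conventional definitions are sketched below; the peak value used for PSNR and any masking of empty pixels are assumptions here and may differ from the authors' exact computation.

```python
import numpy as np

def rmse(recon, truth):
    """Root-mean-square error between reconstructed and reference distance images."""
    return np.sqrt(np.mean((recon - truth) ** 2))

def psnr(recon, truth, peak=None):
    """Peak signal-to-noise ratio in dB; 'peak' defaults to the maximum reference value."""
    if peak is None:
        peak = truth.max()
    mse = np.mean((recon - truth) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example with synthetic 64 x 64 distance images.
rng = np.random.default_rng(4)
truth = rng.uniform(1900.0, 1960.0, size=(64, 64))
recon = truth + rng.normal(0.0, 2.0, size=(64, 64))
print(rmse(recon, truth), psnr(recon, truth))
```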