1. Introduction
As an active detection remote sensing technology, LiDAR is now widely used in civil and military fields, such as autonomous driving [1,2], mapping [3,4], ecological monitoring [5,6,7,8], and target detection [9,10,11]. In recent decades, GM-APD LiDAR [12,13,14], as a new type of system, has gained attention and developed rapidly owing to advantages such as high photon sensitivity and long detection distance.
The single-photon sensitivity of the GM-APD makes this kind of LiDAR relatively sensitive to background light noise. Therefore, it needs to be combined with suitable noise filtering and reconstruction algorithms [15,16,17,18,19,20] to enhance the detection capability of the GM-APD LiDAR. According to the operating principle of the GM-APD detector, these algorithms require data from multiple-laser-pulse detection. Although the GM-APD can operate at a high repetition frequency, the reconstruction is degraded when the LiDAR moves or rotates during the acquisition. To meet this "dynamic" reconstruction requirement, Altmann et al. proposed a fast online estimation method [21]. They applied a Bayesian model to estimate the target's distance and intensity information pixel by pixel and achieved real-time 3D reconstruction on individual photon detection events. Jiang et al. presented a GM-APD imager with dynamic imaging capability [22]. Likewise, they avoided the dynamic effects in reconstruction through a fast and efficient processing method: an algorithm based on solving a 3D optimization equation. Neither method deals with the actual movement of the LiDAR or the target; both rely on the speed of the algorithm itself to accomplish the "dynamic" reconstruction. These methods therefore cannot fully eliminate the impact of motion on GM-APD LiDAR reconstruction when applied to high-speed platforms such as vehicles and aircraft.
Each image element of the GM-APD array can detect the echo photons simultaneously after a single laser pulse emission, significantly improving the operational efficiency of LiDAR compared with point- and line-scanning modes. However, due to process limitations, the element count of GM-APD arrays remains relatively small [23,24,25]. Furthermore, the instantaneous field-of-view of a GM-APD LiDAR cannot be designed to be very large if each image element is to receive sufficient laser echo energy density. Therefore, the array GM-APD LiDAR commonly scans to extend the field-of-view (FOV; for array GM-APD systems the FOV is two-dimensional, and in the following we use this abbreviation for the two-dimensional imaging angular range of the array GM-APD LiDAR), which also introduces the need for related reconstruction algorithms. Henriksson et al. built a transverse-scanning LiDAR system with a 128 × 32-pixel GM-APD [26]. In the reconstruction, they assume a constant scanning speed and derive the orientation of the data points from the scanning speed, the column number, and the pixel angle. Finally, the panoramic reconstruction is completed by peak detection on the modified histogram. In addition to transverse scanning, conical scanning is also an option for array LiDAR on airborne platforms. A single-photon LiDAR with conical scanning and its processing chain was presented in [27]. Combined with the general noise filtering and reconstruction process, they apply the position and orientation provided by the GNSS/IMU subsystem (the main component of the POS) to geolocate the data and then perform registration to further reduce the misalignments between the different datasets. The same team as in [26] demonstrated multi-position GM-APD scanning experiments in [28]. They were likewise equipped with an IMU and applied registration based on the ICP algorithm. For vehicle-borne or airborne scanning LiDAR systems, this scheme combining GNSS/IMU with registration is relatively mature, while its application to GM-APD LiDAR is still at the exploratory stage.
In this paper, a novel GM-APD LiDAR reconstruction method based on GNSS/IMU is proposed, which removes the influence of the LiDAR's own motion on vehicle-borne and airborne platforms and accomplishes the reconstruction of the scanned FOV without registration. In this method, we establish complete coordinate systems for each part of the LiDAR. To determine the transformation relationships between the coordinate systems, we use GNSS, IMU, and angular encoders to measure position, orientation, and scanning angle. The multiple-laser-pulse data are then transformed into a unified coordinate system in the form of point clouds, which eliminates the effects of the LiDAR's movement, rotation, and scanning. Finally, the transformed point clouds are spatially rasterized, and an optimization solution based on the detection model of the GM-APD is performed. Once the 3D photon distribution of the whole scene has been generated, the reconstruction is complete. Since the final optimization takes all the point clouds in the entire scanned FOV as input, the reconstruction result is generated directly in one pass, avoiding the registration of multiple FOVs and reducing the algorithm's complexity.
The proposed method is applied to simulations and imaging experiments to verify its effectiveness. Furthermore, the super-resolution reconstruction capability of the method is demonstrated under sufficient data acquisition.
2. System Composition
The composition of the GM-APD LiDAR system is shown in Figure 1.
The structural layout of each component is shown in Figure 2. The two-axis scanning platform operates in a dual-frame configuration, with each frame equipped with one scanning motor. The rotation axes of the two motors are perpendicular to each other and intersect at one point. The outer frame's scanning motor drives the inner frame to rotate uniformly, while the inner frame's scanning motor drives the payload, achieving two-axis rotation. In this design, the outer scanning axis remains stationary relative to the LiDAR system, while the inner scanning axis rotates around the outer scanning axis with the movement of the outer motor. Each scanning axis is equipped with an angular measurement device to measure the rotation angle. A gyroscope measures the angular velocity and acceleration of the GM-APD detector's attitude changes during scanning or system movement; it is therefore installed on the innermost payload alongside the detector.
In the system, the GM-APD detector and the pulsed laser each have their own optical systems, forming two subsystems for reception and transmission. These subsystems are installed coaxially on the innermost payload of the scanning structure. To ensure image stabilization and optimize subsequent system modeling, the projection center of the receiving subsystem must coincide with the rotation centers of the two scanning motors. The optical axis of the receiving subsystem serves as the system optical axis of the GM-APD LiDAR system. The GNSS and IMU, combined as an integrated unit, provide the scanning controller with accurate spatial position and attitude data of the moving platform. Meanwhile, the two-axis scanning platform transmits the scanning angles, spatial acceleration, and velocity data—acquired by the internal angular measurement device and gyroscope—to the scanning controller. Based on these data, the scanning controller calculates the control parameters of the scanning motors and issues control commands to ensure the payload’s attitude stability in inertial space.
The information processor receives the repeated-detection data uploaded by the GM-APD detector and reconstructs the field-of-view scene using the algorithms. It identifies the pixel position of the target in the reconstructed image, converts it into scanning angles, and sends these angles as commands to the scanning controller to achieve lateral tracking of the target. During target identification, the relative distance of the target can also be determined from the reconstructed range image. This distance is then converted into gate-control commands and a gating width and transmitted as instructions to the detector within the two-axis scanning platform. Finally, when wide-field detection imaging is required, the information processor sends scanning commands to the scanning controller. The controller, integrating all relevant data, drives the scanning motors to achieve stable two-dimensional scanning of the payload in inertial space.
As shown in Figure 2, we established coordinate systems to support the design of the method, which include the detector coordinate system (CS. d), the optical coordinate system (CS. o), the scanning structure coordinate system (CS. s), the aircraft coordinate system (CS. ac), the local geographic coordinate system (CS. l), the geodetic coordinate system (CS. g), and the reconstruction coordinate system (CS. re). Each coordinate system is described in detail as follows.
CS. d is a two-dimensional coordinate system, whose origin is located at the initial pixel in the corner of the GM-APD detector, and whose m-axis and n-axis are parallel to the column and row directions of the array, respectively.
CS. o’s origin is located at the “projection center” of the optical system. The x-axis is defined to coincide with the line of sight (LOS), and the rest of the axes are defined according to the right-hand rule (ideally, the y-axis and the z-axis of CS. o are parallel to the m-axis and the n-axis of CS. d, respectively).
The origin of CS. s is the rotation center of the two-axis scanning structure. When the scanning structure is in the zero position, the y-axis coincides with the pitch scanning axis, and the z-axis coincides with the yaw scanning axis. The x-axis is defined by the right-hand rule (ideally, when the scanning structure is in the zero position, CS. s coincides with CS. o).
CS. ac's origin is located at the center of the IMU in the POS, and its x-axis, y-axis, and z-axis are defined by the IMU. In the absence of installation error, the x-axis points to the front of the aircraft, the y-axis points to the right side of the fuselage, and the z-axis points to the bottom of the fuselage (ideally, CS. ac and CS. s are parallel to each other along all three axes).
For ease of calculation, CS. l's origin is located at the center of the GNSS antenna. The x-axis points due north, the y-axis points due east, and the z-axis coincides with the plumb direction.
The WGS-84 standard geodetic coordinate system is chosen as CS. g, an intermediate coordinate system for calculation. This coordinate system associates and unifies the CS. l corresponding to the detection of different instantaneous fields of view (IFOVs).
CS. re’s origin is located at the center of IMU, corresponding to the last moment in the acquisition process. Its x-axis, y-axis, and z-axis are parallel to CS. o’s. This definition helps ensure that the reconstructed target is more closely aligned with the actual state at that specific moment and reduces occlusion between the original point clouds during coordinate transformation and the reconstruction process.
3. Reconstruction Method
Combining the multiple-laser-pulse acquisition requirement of the GM-APD LiDAR and the definitions of the coordinate systems above, we propose a convex optimization method based on coordinate system transformation with POS data to achieve the reconstruction of moving GM-APD LiDAR. Its schematic is shown in Figure 3.
The main problem in the reconstruction of a moving GM-APD LiDAR is that the scene observed by each pixel changes during the acquisition, so the multiple-laser-pulse data acquired by a single imaging element no longer correspond to a fixed point in the scene. The method designed in Figure 3 transforms the data obtained by each laser pulse into a unified coordinate system to eliminate the effects generated by the LiDAR's motion.
3.1. Coordinate Transformation
The data are written as $(m, n, u)_i$, representing that a photon detection event occurred at pixel $(m, n)$ in the $u$th bin of the $i$th laser pulse. If this photon originates from the echo light of the signal, the target distance of this point can be expressed by

$$R = \frac{c\,(t_d + u\,\tau)}{2}, \quad (1)$$

where $t_d$ is the delay time of the LiDAR's ranging gate, $\tau$ is the duration of the time-to-digital converter (TDC) bin, and $c$ is the speed of light.
When the installation errors of each step are neglected and the displacement of the platform during the multiple-laser-pulse acquisition is relatively small, the transformation of the data $(m, n, u)_i$ to CS. re is obtained by composing the transformations through CS. d, CS. o, CS. s, CS. ac, CS. l, and CS. g (Equation (2)), where $p$ is the size of the GM-APD's image element, $l_0$ is the horizontal (or vertical) distance from the initial image element at the corner to the center of the array, and $f$ is the focal length of the optical system.
The rotation matrices $\mathbf{R}_s$, $\mathbf{R}_{ac}$, and $\mathbf{R}_l$ are calculated from the scan angles, the orientation, and the position measured by the encoders and the POS, respectively, as shown in Equations (3)–(5), where $\theta_p$ and $\theta_y$ are the pitch and yaw scanning angles measured by the encoders, respectively; $\Psi$, $\Theta$, and $\Phi$ are the aircraft's roll, pitch, and yaw measured by the IMU; and $L$ and $B$ are the aircraft's longitude and latitude measured by the GNSS.
Referring to Equations (3) and (4), the rotation matrices $\mathbf{R}_s'$ and $\mathbf{R}_{ac}'$ are calculated from the scan angles ($\theta_p'$, $\theta_y'$) and the orientation ($\Psi'$, $\Theta'$, $\Phi'$) corresponding to the last moment of the multiple-laser-pulse acquisition.
$\Delta x$, $\Delta y$, and $\Delta z$ are the linear offsets between the origins of CS. l and CS. s, i.e., the displacement of the GNSS antenna relative to the scanning structure, measured and calibrated in the laboratory.
$(X', Y', Z')$ is the coordinate of CS. ac's origin in CS. g at the last moment of the multiple-laser-pulse acquisition, calculated by

$$X' = (N + H')\cos B' \cos L', \quad Y' = (N + H')\cos B' \sin L', \quad Z' = \left[N\left(1 - e^2\right) + H'\right]\sin B', \quad (6)$$

where $H'$ is the aircraft's altitude at the last moment of the multiple-laser-pulse acquisition, and $e$ and $N$ are parameters related to the Earth's ellipsoid. They are calculated by

$$e = \frac{\sqrt{a^2 - b^2}}{a}, \quad N = \frac{a}{\sqrt{1 - e^2 \sin^2 B'}}, \quad (7)$$

where $a$ and $b$ are the semi-major and semi-minor axes of the Earth's ellipsoid, respectively.
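To make the geolocation step concrete, the following is a minimal numpy sketch of the WGS-84 conversion in Equations (6) and (7), together with one common Euler-angle rotation construction. The function names, the z-y-x rotation convention, and the example coordinates are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# WGS-84 ellipsoid semi-axes (meters), per the standard chosen for CS. g.
A_WGS84 = 6378137.0
B_WGS84 = 6356752.314245

def geodetic_to_ecef(lon_deg, lat_deg, alt_m):
    """Convert GNSS longitude/latitude/altitude to Earth-fixed (CS. g) coordinates.

    Implements the standard WGS-84 conversion sketched in Equations (6) and (7):
    first the eccentricity e and prime-vertical radius N, then (X, Y, Z).
    """
    L = np.radians(lon_deg)
    B = np.radians(lat_deg)
    e2 = (A_WGS84**2 - B_WGS84**2) / A_WGS84**2          # squared eccentricity
    N = A_WGS84 / np.sqrt(1.0 - e2 * np.sin(B)**2)       # prime-vertical radius
    X = (N + alt_m) * np.cos(B) * np.cos(L)
    Y = (N + alt_m) * np.cos(B) * np.sin(L)
    Z = (N * (1.0 - e2) + alt_m) * np.sin(B)
    return np.array([X, Y, Z])

def euler_to_matrix(roll, pitch, yaw):
    """Rotation matrix from IMU roll/pitch/yaw (radians), z-y-x convention.

    The actual axis convention depends on the IMU definition of CS. ac;
    this is one common choice, not necessarily the paper's Equation (4).
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Example: Earth-fixed position of the aircraft at the last acquisition moment.
p_ac = geodetic_to_ecef(lon_deg=116.39, lat_deg=39.91, alt_m=1500.0)
```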
3.2. Optimization Reconstruction
The original data $(m, n, u)_i$ are transformed into points $(x, y, z)$ in CS. re, which are applied to estimate the spatial distribution of photons. Before applying the optimization algorithm for the estimation, CS. re needs to be rasterized into voxels to generate a digital matrix of the spatial photon distribution. Referring to the longitudinal and transverse resolutions of the GM-APD LiDAR, the rasterization parameters of CS. re (the lateral voxel size $\delta_{yz}$ and the longitudinal voxel size $\delta_x$) are given by Equations (8) and (9), where $t_d'$ is the delay time of the LiDAR's ranging gate at the last moment of the multiple-laser-pulse acquisition.
After rasterization, each voxel is labeled with the parameters $m$, $n$, and $k$. The set of coordinate points falling in such a voxel is denoted $S_{m,n,k}$ (Equation (10)). With the dataset of each voxel, reconstruction can be achieved by applying the optimization algorithm based on the negative log-likelihood.
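As an illustration of this rasterization step, the sketch below bins transformed points into voxels and accumulates the per-voxel counts $|S_{m,n,k}|$. The function signature, the axis convention (x along the line of sight), and the dictionary representation are illustrative assumptions.

```python
import numpy as np

def voxelize(points, lateral_size, range_size, origin):
    """Bin transformed points in CS. re into voxels labeled (m, n, k).

    points       : (P, 3) array of (x, y, z) coordinates, x along the LOS
    lateral_size : lateral voxel edge length (y and z directions)
    range_size   : longitudinal voxel edge length (x direction, dimension k)
    origin       : (x0, y0, z0) corner of the rasterized volume
    Returns a dict mapping (m, n, k) -> |S_{m,n,k}|, the per-voxel photon
    detection counts that feed the negative log-likelihood below.
    """
    rel = points - np.asarray(origin)
    m = np.floor(rel[:, 1] / lateral_size).astype(int)   # lateral index
    n = np.floor(rel[:, 2] / lateral_size).astype(int)   # lateral index
    k = np.floor(rel[:, 0] / range_size).astype(int)     # range-bin index
    counts = {}
    for key in zip(m, n, k):
        counts[key] = counts.get(key, 0) + 1
    return counts
```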
The ranging gate of the GM-APD LiDAR is divided into a finite number of bins by the TDC, and the photon detection events within each bin conform to a negative-binomial distribution [29]. The detection probability of any pixel in the $k$th bin can be expressed as

$$P_k = \exp\!\left(-\sum_{j=1}^{k-1} n_j\right)\left(1 - e^{-n_k}\right), \quad (11)$$

where $n_k$ denotes the average number of photons entering this pixel in the $k$th bin. In practice, the number of photons satisfies $n_k \ge 0$, ensuring that the probability is non-negative. The negative log-likelihood function is constructed accordingly (Equation (12)), where $N_p$ is the total number of laser pulses.
Letting $m = m_0$ and $n = n_0$, the negative log-likelihood function $f_{m_0,n_0}$ corresponding to one column of voxels can be obtained by Equation (13), with its auxiliary terms defined in Equation (14). Then, Equation (13) can be abbreviated as Equation (15), where $|S_{m,n,k}|$ is the number of points in the set $S_{m,n,k}$, and $K$ is the total number of CS. re's voxels in dimension $k$.
The first and second partial derivatives of Equation (15) are shown in Equations (16) and (17).
Since its second partial derivatives are non-negative, the negative log-likelihood function is convex. TV regularization is introduced as a constraint, and the solution of the spatial photon distribution is transformed into an optimization problem with a globally optimal solution:

$$\hat{N} = \operatorname*{arg\,min}_{N \ge 0} \; \sum_{m,n} f_{m,n}(N) + \beta_x \, \mathrm{TV}_x(N) + \beta_{yz} \, \mathrm{TV}_{yz}(N), \quad (18)$$

where $\hat{N}$ is the estimate of $N$, the three-dimensional matrix of the average photon distribution in space; $\mathrm{TV}_x$ and $\mathrm{TV}_{yz}$ are the longitudinal and lateral TV regularization functions; and $\beta_x$ and $\beta_{yz}$ are the longitudinal and lateral regularization coefficients.
As can be seen, we chose an anisotropic double-regularization form to construct the optimization equation, aiming to account for the differences between the longitudinal and lateral spatial resolutions of the GM-APD LiDAR. The two regularization functions can be expressed as

$$\mathrm{TV}_x(N) = \sum_{m,n,k} \left| N_{m,n,k+1} - N_{m,n,k} \right| \quad (19)$$

and

$$\mathrm{TV}_{yz}(N) = \sum_{m,n,k} \left( \left| N_{m+1,n,k} - N_{m,n,k} \right| + \left| N_{m,n+1,k} - N_{m,n,k} \right| \right). \quad (20)$$

For edge pixels, the out-of-range difference terms are defined separately (Equation (21)).
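A compact sketch of this objective is given below: the per-column negative log-likelihood under the detection model of Equation (11), including a no-detection term for pulses that never fire (our assumption about how Equations (12)–(15) group terms), plus the anisotropic TV penalties of Equations (19) and (20). Here the per-voxel counts are held in a dense (M, N, K) array, the dense form of the $|S_{m,n,k}|$ dictionary from the earlier sketch; function names are illustrative.

```python
import numpy as np

def column_nll(n_col, counts_col, n_pulses):
    """Negative log-likelihood of one voxel column under the detection model.

    n_col      : (K,) nonnegative mean photon numbers n_k for one (m, n) column
    counts_col : (K,) per-bin detection counts |S_{m,n,k}|
    n_pulses   : total number of laser pulses N_p
    Uses P_k = exp(-sum_{j<k} n_j) * (1 - exp(-n_k)) for a detection in bin k,
    and the no-detection probability exp(-sum_j n_j) for the remaining pulses.
    """
    cum = np.concatenate(([0.0], np.cumsum(n_col)[:-1]))   # sum_{j<k} n_j
    log_pk = -cum + np.log1p(-np.exp(-np.maximum(n_col, 1e-12)))
    n_detected = counts_col.sum()
    log_p_miss = -n_col.sum()                              # no firing in gate
    return -(counts_col @ log_pk) - (n_pulses - n_detected) * log_p_miss

def objective(N, counts, n_pulses, beta_x, beta_yz):
    """Full objective of Equation (18): summed column NLLs plus anisotropic TV.

    N, counts : (M, N, K) arrays of mean photon numbers and detection counts.
    """
    nll = sum(column_nll(N[m, n], counts[m, n], n_pulses)
              for m in range(N.shape[0]) for n in range(N.shape[1]))
    tv_x = np.abs(np.diff(N, axis=2)).sum()                # longitudinal TV
    tv_yz = np.abs(np.diff(N, axis=0)).sum() + np.abs(np.diff(N, axis=1)).sum()
    return nll + beta_x * tv_x + beta_yz * tv_yz
```

Because the objective is convex in $N$ on the nonnegative orthant, any projected-gradient or proximal solver converges to the global optimum.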
Applying the method given in [30], the three-dimensional matrix of the spatial photon distribution corresponding to the scene can finally be solved. Since the matrix $N$ has high sparsity, we can apply a detection-based reconstruction strategy, which allows us to generate the intensity and range images of the scene from it. For example, using the maximum-value method, the formula can be expressed as

$$D_{m,n} = \left( \operatorname*{arg\,max}_{k} \hat{N}_{m,n,k} \right) \delta_x, \qquad I_{m,n} = \max_{k} \hat{N}_{m,n,k}, \quad (22)$$

where $D_{m,n}$ is the value at position $(m, n)$ in the reconstructed range image matrix $D$, and $I_{m,n}$ is the value at position $(m, n)$ in the reconstructed intensity image matrix $I$.
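A possible readout consistent with Equation (22) is sketched below; the explicit gate-start offset is our assumption (the equation may fold it into the bin index).

```python
import numpy as np

def detect_max(N_hat, range_size, gate_start):
    """Detection-based readout of range and intensity images (Equation (22)).

    N_hat      : (M, N, K) estimated mean photon distribution
    range_size : longitudinal voxel edge length (meters per bin)
    gate_start : distance corresponding to bin k = 0 (assumed offset term)
    """
    k_star = np.argmax(N_hat, axis=2)                 # strongest bin per pixel
    D = gate_start + (k_star + 0.5) * range_size      # range image
    I = np.take_along_axis(N_hat, k_star[..., None], axis=2)[..., 0]  # intensity
    return D, I
```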
3.3. Super-Resolution Reconstruction
When sufficient data have been acquired, the voxels may contain redundant data points after the coordinate transformation. In such cases, selecting smaller rasterization parameters in the horizontal and vertical directions can enable super-resolution reconstruction.
Due to manufacturing limitations, the effective photosensitive area of each detection unit of the GM-APD detector is often smaller than the physical size of the unit itself; the ratio of the two is referred to as the fill factor, as shown in Figure 4a. Consequently, each detection unit can only effectively detect targets within the field of view corresponding to its central position. When the LiDAR system moves laterally relative to the target, this motion can be regarded as an equivalent lateral 'scanning' by the detection units.
As shown in Figure 4b, the equivalent lateral scanning process densely covers the area corresponding to the size of the detector unit pixels. By combining the measurements of position, attitude, and scanning angles with coordinate system transformations, the equivalent scanning process can be accurately described. Therefore, lateral super-resolution is feasible. During reconstruction, an appropriate super-resolution voxel size can be determined based on the amount of data collected and the motion characteristics, as shown in Figure 4c, enabling lateral super-resolution reconstruction.
On the other hand, the TDC timing circuit of the GM-APD performs the discrete division of the LiDAR's ranging gate, which limits the range resolution of the GM-APD LiDAR. When longitudinal motion occurs between the carrier and the target, it is equivalent to the GM-APD detector performing a longitudinal 'scan' at the current timing interval. During this equivalent scanning process, the photon distribution within the gate over time (or distance) densely covers the TDC timing intervals. This means that, across different repeated detection experiments, each timing interval may record photon distance information at a finer granularity than the LiDAR's inherent range resolution, which makes longitudinal super-resolution reconstruction feasible.
Raw data collected at different times retain unified longitudinal range information, as shown in Figure 5a. After coordinate transformation, the data points are mapped to accurate spatial positions, and voxelization of the transformed data results in changes to the data points within each voxel, as shown in Figure 5b. By selecting a smaller timing interval for data statistics, as shown in Figure 5c, longitudinal super-resolution reconstruction can be achieved.
Finally, by selecting smaller parameters, such as $\delta_{yz}'$ and $\delta_x'$, to specify the voxel size, super-resolution reconstruction can be achieved. At this point, the dataset calculation formula becomes the analog of Equation (10) with the refined voxel sizes (Equation (23)).
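In terms of the rasterization sketch from Section 3.2, super-resolution is simply re-voxelization of the same transformed point cloud with refined edge lengths; the 2-fold factor below matches the 2 × 2 lateral refinement used in the simulations, and the variable names continue the earlier illustrative sketch.

```python
# Re-voxelize the same transformed point cloud with halved lateral voxel
# edges (the 2 x 2-fold case); the longitudinal edge could be refined the
# same way when enough longitudinal motion has been recorded.
sr = 2
counts_sr = voxelize(points,
                     lateral_size=lateral_size / sr,
                     range_size=range_size,
                     origin=origin)
```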
4. Simulation-Based Quantitative Evaluation
In real experiments, the complete information (distance, reflectance, etc.) of the targets in the scene is difficult to obtain, so no ground truth is available for evaluating the algorithm. Therefore, to verify the effectiveness of the proposed method, we conducted a simulation based on the Monte Carlo method [31], whose implementation process is shown in Figure 6. In the simulation, the array size of the GM-APD is 64 × 64, the pixel's angular resolution is 0.5 mrad, the detector's time resolution is 1 ns, corresponding to a distance resolution of $c\tau/2 = 0.15$ m, and the laser operates at 2 kHz. A scene of urban buildings is selected as the simulation input, as shown in Figure 7.
Under a signal-to-background ratio (SBR, used here to describe the ratio of the total photon intensity of the signal to that of the background in the simulation) of ~0.137, the simulated LiDAR performs uniform whiskbroom scanning over the scene. The average distance to the targets is about 1.95 km, and the scanning speed is 11.1°/s. Based on the accuracy of commonly used attitude and position measurement devices, an orientation disturbance of ±0.1° and a position disturbance of ±0.5 m were added to simulate the measurement errors encountered in practice. The entire simulated imaging area corresponds to an 80 m × 220 m region on the ground, equivalent to 0.4 s of scanning (800 Monte Carlo simulations at the 2 kHz laser repetition rate), with the lateral scanning direction perpendicular to the motion of the platform. The reconstruction results of the different methods are shown in Figure 8 and Figure 9. The voxel size is calculated based on Equations (8) and (9).
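For intuition, a minimal Monte Carlo model of the per-pulse GM-APD detection process is sketched below: photon arrivals per bin are drawn as Poisson counts, and the first arriving photon triggers the pixel. The bin count, photon levels, and function names are illustrative choices, not the parameters of the simulator in [31].

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pulse(n_mean, rng):
    """One Monte Carlo trial of GM-APD detection for a single pixel and pulse.

    n_mean : (K,) mean photon numbers per TDC bin (signal + background)
    Returns the index of the first bin with at least one arriving photon,
    or -1 if the pixel never fires within the gate. Illustrative only; the
    full simulator also models optics, reflectance, and scanning geometry.
    """
    photons = rng.poisson(n_mean)          # photon arrivals per bin
    fired = np.nonzero(photons)[0]         # Geiger mode: first arrival triggers
    return fired[0] if fired.size else -1

# Example: 100-bin gate, weak signal in bin 40 over a uniform background;
# accumulating 2000 pulses yields the per-bin detection histogram.
n_mean = np.full(100, 0.001)
n_mean[40] += 0.05
hist = np.bincount([b for _ in range(2000)
                    if (b := simulate_pulse(n_mean, rng)) >= 0], minlength=100)
```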
In Figure 8 and Figure 9, panel (f) represents the 2 × 2-fold super-resolution reconstruction result, corresponding to a lateral resolution of 0.25 mrad.
In Figure 8b–d, the reconstruction is performed by the peak-picking algorithm [32]. The high noise level in Figure 8b is due to the small number of pulses for each instantaneous FOV. As a result of the significant orientation disturbance during the scanning, the targets are seriously distorted and shifted in Figure 8c. The results with transformation, such as those in Figure 8d–f, reconstruct the spatial relationships of the scene much more accurately. The optimization-based result in Figure 8e has lower noise than the peak-picking result in Figure 8d. The super-resolution reconstruction result (Figure 8f), on the other hand, further improves the spatial resolution on top of the low-noise character of Figure 8e.
In the reconstruction result shown in Figure 9b, the target is almost indistinguishable. This is due to the same issue as in Figure 8b: insufficient data for each instantaneous FOV. In Figure 9c, because only the scanning speed was used for position estimation, positional errors occurred for each pixel's data points, ultimately causing severe blurring of the target. Figure 9d–f all show reconstruction results with accurate target positions; however, the noise is noticeably more severe in Figure 9d. By comparison, the reconstruction results of the proposed method, shown in Figure 9e,f, not only ensure the accurate recovery of the target's position but also effectively reduce noise. Notably, Figure 9f, which achieves super-resolution intensity image reconstruction under the same data conditions, further enhances the details of the target reconstruction.
For the quantitative comparison, the root mean square error (RMSE) of each reconstructed range image was calculated against the ground truth in Figure 8a. The formula for the RMSE is

$$E_{\mathrm{RMSE}} = \sqrt{\frac{1}{N_{pix}} \sum_{m,n} \left( D_{m,n} - D_{m,n}^{gt} \right)^2}, \quad (24)$$

where $E_{\mathrm{RMSE}}$ is the computed value of the RMSE, $D^{gt}$ is the distance ground truth with the same resolution, and $N_{pix}$ is the number of pixels.
For the reconstructed intensity results, the peak signal-to-noise ratio (PSNR) relative to the ground truth in Figure 9a was calculated. The formula for the PSNR is

$$E_{\mathrm{PSNR}} = 10 \log_{10} \frac{\max\left(I^{gt}\right)^2}{\frac{1}{N_{pix}} \sum_{m,n} \left( I_{m,n} - I_{m,n}^{gt} \right)^2}, \quad (25)$$

where $E_{\mathrm{PSNR}}$ is the computed value of the PSNR and $I^{gt}$ is the intensity ground truth with the same resolution.
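Both metrics are straightforward to compute; a short numpy rendering of Equations (24) and (25), with our own function names, is:

```python
import numpy as np

def rmse(D, D_gt):
    """Root mean square error of the reconstructed range image (Equation (24))."""
    return np.sqrt(np.mean((D - D_gt) ** 2))

def psnr(I, I_gt):
    """Peak signal-to-noise ratio of the intensity image, in dB (Equation (25))."""
    mse = np.mean((I - I_gt) ** 2)
    return 10.0 * np.log10(I_gt.max() ** 2 / mse)
```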
The evaluation metrics for each reconstruction result are shown in Table 1. Table 1 generally reflects evaluation results consistent with the subjective assessment of the reconstruction effects in Figure 8 and Figure 9. The reconstructed range image after coordinate transformation alone contains excessive noise, resulting in the largest RMSE. Reconstruction based on position estimation using the scanning speed reduces noise but introduces errors in the target's spatial position, leading to a significant RMSE as well.
Compared to the range resolution of the LiDAR, the RMSE of the reconstruction results is relatively large. This is primarily due to errors in the recovery of the target’s lateral position during the reconstruction process. In scenes with clusters of houses, where there are frequent range discontinuities, even small lateral position errors can lead to significant differences in the range values between the reconstructed range image and the reference image at the same pixel location (especially at the edges of the houses). Additionally, the noticeable noise in other reconstruction methods is another reason why the RMSE is much larger than the LiDAR’s range resolution.
In contrast, the reconstruction results using the histogram maximum method after coordinate transformation visually resemble those of the proposed method. However, due to the influence of high background noise, it achieves a range image reconstruction RMSE of 12.82 m. The proposed method, on the other hand, reduces the RMSE to 2.03 m, showing significant improvement.
At the same time, the 3D point cloud of the scanned FOV can be generated without registration, as shown in Figure 10. Figure 10 uses an 'East–North–Up' coordinate system, with a reference point on the ground plane as the origin, to display the 3D point cloud, and the altitude above the ground plane is used for color encoding. By applying the proposed method, the detection data are restored to their absolute positions in space.
Through comparison, it can be observed that the reconstruction results exhibit good agreement with the reference point cloud of the simulation input in terms of the target’s position, shape, and posture. The application of 3D point clouds with absolute positions allows for further identification tasks by incorporating prior knowledge of the target.
In the above simulation experiments, the position and angle measurement devices recorded data at a frequency of 100 Hz. However, in practical applications, the operating frequencies of the measurement components may vary. To verify the impact of different operating frequencies of auxiliary measurement components on the reconstruction results, a controlled variable comparison experiment was conducted.
Under the same simulation parameters as the previous experiments, the simulated carrier motion disturbance was changed to a high-frequency (400 Hz) disturbance with a positional variation of 0.1 m and a corresponding attitude-angle disturbance. By varying the simulated measurement frequency of the auxiliary data, the measurement frequency versus evaluation metric curves were obtained, as shown in Figure 11. The error bars on the curves represent the standard deviations over multiple simulation runs.
To simplify the analysis, only two reconstruction methods are compared: the peak-picking (histogram maximum) method based on coordinate transformation and the proposed method. As shown in Figure 11, as the auxiliary data measurement frequency increases exponentially, the RMSE of the reconstructed range image for both methods exhibits a nearly linear decreasing trend. However, the rate of decrease for the photon 3D-distribution reconstruction method is nearly twice that of the histogram maximum method. Under the same conditions, the photon 3D-distribution reconstruction method reduces the RMSE by 77.64% to 78.39% compared to the histogram method.
For the reconstructed intensity image, the increase in auxiliary data measurement frequency has little impact on the PSNR of the histogram maximum method but significantly improves the results of the photon 3D distribution reconstruction method. Compared to the histogram method, the photon 3D distribution reconstruction method achieves an average PSNR improvement of 1.96 dB, with an improvement of up to 2.37 dB under sufficient data measurement conditions. The comparative experiments highlight the superior reconstruction performance of the proposed method under the same coordinate transformation conditions.
6. Discussion
This paper presents a reconstruction method for GM-APD LiDAR based on coordinate system transformation and convex optimization, successfully addressing the imaging reconstruction problem caused by LiDAR motion and active scanning. By accurately recording the motion of the detector using GNSS, IMU, and encoders, our method unifies the data collected from multiple laser pulses into a common coordinate system and uses voxelization to obtain the input for the optimization process. Combining the negative log-likelihood function and anisotropic TV regularization, an optimization equation is constructed to solve for the spatial photon distribution of the scene, thereby reconstructing both the range and intensity images. Unlike traditional methods that rely on field-of-view stitching, our method innovatively avoids this operation, significantly reducing noise in the reconstruction results and improving data utilization and reconstruction accuracy. The output of our method is a three-dimensional matrix of spatial photon distribution, which retains more scene information compared to directly outputting 2D intensity and range images, thus providing more complete and reliable data for subsequent applications.
In traditional dynamic imaging methods, after reconstructing each instantaneous field of view (FOV), field-of-view stitching is typically performed through coordinate transformation or feature point matching using POS data. However, this process may lead to error accumulation, which can impact the final reconstruction results. In contrast, our method first restores the original data to its absolute spatial position, then performs reconstruction based on the optimization equation, treating the entire dynamic dataset as a whole to solve for the complete scene. This approach not only increases data utilization efficiency and reduces error propagation, but also fully leverages the spatial correlations within the scene. By incorporating the deconvolution capabilities of the optimization equation and considering the changes in the spatial absolute positions of the original data, we also achieve super-resolution reconstruction, further extending the application value of GM-APD LiDAR.
In the performance evaluation simulation, for the 1.95 km building scene’s super-resolution reconstruction, our method achieved a significant precision improvement under the condition of SBR ~0.137, with errors reduced by an order of magnitude compared to traditional field-of-view stitching methods. The improvement in super-resolution reconstruction is attributed to the optimization of the photon spatial distribution during data acquisition. Through precise coordinate system transformation and negative log-likelihood optimization, we are able to better restore the photon distribution, avoiding spatial position errors caused by measurement and attitude errors. Additionally, our method also demonstrated good reconstruction results in airborne imaging and vibration-affected imaging applications, further validating its practicality.
Despite these results, our method still has limitations. For example, the measurement frequency and encoder precision used in the experiments may limit performance when the LiDAR moves at high speed with the mobile platform. In particular, although the coordinate system transformation and deconvolution operations effectively reduce errors, low measurement frequencies may still introduce reconstruction errors. Additionally, instability of the scanning structure and mechanical errors in the optical system could affect imaging quality in highly dynamic scenes. Future research can focus on increasing the measurement frequency of the POS devices and encoders to further improve data acquisition accuracy, particularly in complex environments.
Further improvements could focus on enhancing the stability and precision of the hardware system to reduce interference from motion during data collection. In addition, combining advanced algorithms such as deep learning could be an effective research direction. Future work can also explore how to integrate our method with other sensor technologies (such as visual sensors) to further improve the overall accuracy and real-time performance of the imaging system.
In conclusion, the reconstruction method proposed in this paper provides an efficient and reliable solution for GM-APD LiDAR applications on mobile platforms, demonstrating potential in dynamic 3D detection, super-resolution reconstruction, and 3D reconstruction. Future research will explore ways to improve measurement accuracy, optimize hardware systems, and integrate new technologies such as machine learning to enhance the performance and applicability of this method in real-world scenarios.