Article

High Accuracy Reconstruction of Airborne Streak Tube Imaging LiDAR Using Particle Swarm Optimization

National Key Laboratory of Laser Spatial Information, Harbin Institute of Technology, Harbin 150080, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(15), 6843; https://doi.org/10.3390/app14156843
Submission received: 6 June 2024 / Revised: 30 July 2024 / Accepted: 30 July 2024 / Published: 5 August 2024
(This article belongs to the Special Issue Application of Signal Processing in Lidar)

Abstract

Airborne streak tube imaging LiDAR (STIL) consists of several different data-generating subsystems and introduces system errors each time it is installed on an aircraft. These errors change with each installation, which makes the parametric calibration of the LiDAR meaningless. In this study, we propose a high-precision reconstruction method for point clouds that can be used without calibrating the system parameters. In essence, after each remote sensing measurement, a self-checking process is performed with experimental data to replace the fixed system parameters. In this process, the splicing error of the same region measured under different conditions is used as a criterion to optimize the reconstruction parameters via a particle swarm optimization (PSO) algorithm. For a detection distance of 3000 m, the elevation error of the point cloud reconstruction reaches more than 1 m if the placement parameters are not optimized; after optimization, the elevation error can be controlled within 0.3 m.

1. Introduction

Airborne LiDAR is an active remote sensing method that has the benefits of 24 h operation, high penetration, precise range finding, and a quick turnaround. These characteristics have made it very popular for surveying a variety of regions, including urban areas [1,2], powerlines [3,4], forestry resources [5,6], glaciers and snow [7,8], and other areas [9,10,11].
Streak tube imaging LiDAR (STIL) is a line array detection system [12,13,14,15,16,17]. The array operating mode increases efficiency, and the streak tube sensor provides a higher resolution. Based on these two advantages, it can work at mid-high altitudes and be applied in underwater detection [18,19,20].
We have developed an STIL as an airborne laser system (ALS) to assess the local landscape and structures at a working distance of 3000 m in Zhaodong, China. According to the mapping mission requirements, the altitude accuracy should be better than 0.3 m. However, the reconstruction becomes more sensitive to error as the operating distance increases; at this range, for instance, a 0.02° angular deviation can cause a one-meter offset.
In the reconstruction process, there are three categories of error: computational, random, and systematic [21]. During our data processing, the computational errors were consistently smaller than 10⁻⁶ m. The random error for an ALS was also less than 0.2 m within a range of 30° scan angles at the operational height of 3000 m [22]. Both error effects are well within the accuracy requirements. The primary element affecting the mapping mission goal is therefore systematic error.
If installation errors were not taken into account, the elevation error of the point cloud would exceed one meter during data processing, which falls far short of the required outcome.
There are two essential steps to correcting the effects of installation. First, the whole point cloud generated by a single experiment is too large to be processed within a limited time, so we have to select some representative points for optimization. There are generally two options for picking points: one uses the overlapping bands [23], and the other requires laying out homonymy points on the ground [24].
On the one hand, it is impossible to find a one-to-one correspondence on the ground from the rough point cloud that has not been checked and calibrated; on the other hand, measuring and averaging all points in an area is too time-consuming. Given these two factors, we decided to calibrate the system using data from overlapping bands.
However, the whole set of data from a flight test is too large to be processed in a given period of time. Thus, we need to match the same regions for different point clouds using target features and average them as control points to check parameters.
The second step is solving all installation errors using the points we selected. The classic algorithm for this problem is the least squares method (LSM) [25]. Unfortunately, this method did not work for our airborne STIL, so we had to use nonlinear optimization with global search capabilities.
Particle swarm optimization (PSO) is one of the best-known global search algorithms [26]; it is widely used in many fields, such as path planning [27] and neural network training [28].
This algorithm, based on particle competition, is insensitive to initial values and monotonicity and thus has a strong global solution search capability. These features make it ideal for parametric calibration [29,30,31,32,33].
Based on PSO, we finally obtained multiple sets of checking parameters, reducing the elevation error from more than 1 m to 0.3 m.

2. Background Knowledge

2.1. Streak Tube Imaging LiDAR

The STIL workflow is depicted in Figure 1a. The laser is triggered by the sequential pulse system, and the echo is received by a streak tube image sensor. In this process, four subsystems generate data for reconstruction. The streak tube image detector records the time when echo photons come back, the scanning system provides the scanning angle using a grating ruler, the positioning and orientation system (POS) records the position and posture information at the same time, and the pulse sequential system triggers the other three subsystems to start with a delay after the laser beam is emitted.
The STIL works as an ALS, which is operated as shown in Figure 1b. It is located by the POS, which integrates a GPS and an inertial navigation system (INS). The POS computes its own position from satellite GPS signals and obtains posture information from the INS.
In addition, the scanning mirror swings back and forth along the direction perpendicular to the flight path, and the laser footprint is formed on the ground, as shown in Figure 1c, where the green lines represent the laser streaks, and red indicates the scan trajectory. We designed different scanning speeds for the forward and reverse directions and only used the forward scanning data. There are some overlapping areas between valid scans, which play an important role when checking the parameters.
The POS, delay, and angle data can be calculated for reconstruction, but the streak image should be converted to distance and field angle. The working principle of the streak tube detector is shown in Figure 2.
At this point, the impulse delay, incident time, incidence angle, scanning angle, and POS data may all be used to reconstruct the point cloud.
However, there are inclinations among the scanner, the streak tube detector, and the POS introduced during installation. These installation errors make the point cloud reconstruction more challenging and less accurate.
For low-altitude ALS, installation errors have a low impact on the inversion accuracy, such that a rough measurement is sufficient to ensure that the effect of systematic errors is smaller than the effect of random errors. Unfortunately, as the working height increases, the installation errors are amplified during the point cloud-inversion process. The placement parameters obtained from ground measurements cannot meet the accuracy requirements of point cloud reconstruction.
Therefore, we had to check the system parameters with experimental flight data. These parameters are included in the coordinate transformation matrixes, under which the raw data are reconstructed into a point cloud. This is clearly a global nonlinear problem.

2.2. Particle Swarm Optimization

The PSO algorithm is a computational optimization technique inspired by the social behavior of birds flocking or fish schooling. It was first introduced by James Kennedy and Russell Eberhart in 1995 [26].
Figure 3 shows an overview of how the PSO works. First, it initializes a swarm of particles with random positions and velocities within the search space. Second, it evaluates each particle’s position and determines its fitness value based on the objective function. It then enters a loop that updates each particle’s personal best and the swarm’s global best, followed by the velocities and positions, as long as the best solution remains worse than the cut-off precision.

3. Point Cloud Reconstruction

3.1. Reconstruction Process

To convert from a laser signal to a real point in the WGS-84 system, the target point is determined using several coordinate transitions. The process of point cloud reconstruction can be abstracted as coordinate transformations of a vector. According to Figure 1, the streak signal undergoes a reflection as it travels from the target to the streak tube detector. When data such as stripe signals and POS are synthesized, the rotation of coordinates is inevitably involved due to placement errors.
A vector should be multiplied by a rotation matrix while being transformed between different Cartesian coordinates. The three-dimensional rotation matrix can be expressed by Euler angles as R ( A , B , C ) , whose matrix form is shown as
R(A,B,C) = \begin{pmatrix}
\cos A\cos B & -\sin A\cos C + \cos A\sin B\sin C & \sin A\sin C + \cos A\sin B\cos C \\
\sin A\cos B & \cos A\cos C + \sin A\sin B\sin C & -\cos A\sin C + \sin A\sin B\cos C \\
-\sin B & \cos B\sin C & \cos B\cos C
\end{pmatrix}.
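As a minimal sketch (not part of the original paper), the Euler-angle matrix R(A, B, C) can be built by composing elementary rotations; the composition order (about the z, then y, then x axes) is our assumption, chosen to be consistent with the matrix form above:

```python
import numpy as np

def rotation(A, B, C):
    """Euler-angle rotation matrix R(A, B, C): yaw A about z,
    pitch B about y, roll C about x, composed as Rz @ Ry @ Rx."""
    Rz = np.array([[np.cos(A), -np.sin(A), 0.0],
                   [np.sin(A),  np.cos(A), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[ np.cos(B), 0.0, np.sin(B)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(B), 0.0, np.cos(B)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(C), -np.sin(C)],
                   [0.0, np.sin(C),  np.cos(C)]])
    return Rz @ Ry @ Rx
```

The result is orthogonal with determinant one, as any proper rotation must be.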
A vector should be multiplied by a reflection matrix while reflected by a mirror. The three-dimensional reflection matrix can be calculated from the unit mirror normal vector u. We define M as the mirror reflection matrix, which can be expressed by
M = I - 2\,\mathbf{u}\mathbf{u}^{\mathsf{T}}.
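A short sketch of the Householder reflection (the function name is ours; the normal is normalized defensively, assuming u may not be unit length):

```python
import numpy as np

def reflection(u):
    """Mirror reflection matrix M = I - 2 u u^T for a mirror
    with (unit) normal vector u."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)   # ensure a unit normal
    return np.eye(3) - 2.0 * np.outer(u, u)
```

Reflecting twice returns the original vector, and the determinant is −1, which distinguishes a reflection from a rotation.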
We define the normalized vector of incident light as L S , the photon travel distance as d, and the whole transformation from this vector to the point in world frame as
P = R_{WP}\left( d\, R_{PL}\, M\, R_{LS}\, \mathbf{L}_S \right) + P_P.
This equation contains one reflection and three rotations: R L S is the rotation matrix between the sensor and laser footprint, R P L is the rotation matrix from the POS to the footprint, and the matrix R W P expresses the rotation from the POS frame to the real world. The parameter P p is the POS position calculated from the GPS and IMU. The specific form of the position is
P = \begin{pmatrix} x_p & y_p & z_p \end{pmatrix}^{\mathsf{T}}.

3.2. Conversions of Coordinates

3.2.1. The Sensor Coordinate

The first-hand data lie in the pixel coordinates of the streak images shown in Figure 4a. We need to extract the location information from them using the center-of-mass method. The centroid in row i of the image can be calculated using the following formula:
x_i = \frac{\sum_{j=0}^{N} j\, I_{ij}}{\sum_{j=0}^{N} I_{ij}},
where I_{ij} is the grayscale value at position (i, j), N is the column count, and the centroid array x_i is shown in Figure 4b. We rewrite x_i in the coordinate form (x, y), where the variable y is simply the row number i.
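The row-wise center-of-mass computation can be sketched as follows (a vectorized NumPy version; the function name is ours):

```python
import numpy as np

def row_centroids(image):
    """Center-of-mass column position for every row of a streak image.
    Rows with zero total intensity yield NaN."""
    image = np.asarray(image, dtype=float)
    cols = np.arange(image.shape[1])          # column indices j
    with np.errstate(invalid="ignore", divide="ignore"):
        return (image * cols).sum(axis=1) / image.sum(axis=1)
```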
After calculating the gray centroid horizontally, the image is converted to points with pixel units. It is the first coordinate transition that maps pixel points from an image to a distance-angle coordinate using calibration matrixes [34]. Then, the y-axis data are mapped to the incident angle θ , and the x-axis is converted to the time of flight (TOF) t s .
The calibration matrixes whose pseudo-color figure is shown in Figure 4c,d are data-mapping tables between the ( x , y ) pixel frame and the t s , θ system. We denote T and Θ as the matrixes of the time delay and incident angle, respectively.
The subscripts of the elements in the matrix are integers, while the centroid coordinates in the x-direction are often fractional, so their corresponding delay parameters need to be obtained via interpolation. We define x_L = ⌊x⌋ as the floor of x and x_R = ⌈x⌉ as the ceiling of x. The correspondence from the image coordinate system to the sensor coordinate system can be expressed as
\begin{aligned}
t_s &= T_{x_L,y}\,(x_R - x) + T_{x_R,y}\,(x - x_L) \\
\theta &= \Theta_{x_L,y}\,(x_R - x) + \Theta_{x_R,y}\,(x - x_L).
\end{aligned}
Its accuracy meets the inversion requirements: one pixel corresponds to a maximum of 0.004° in the spatial calibration matrix and 1.1 ns in the temporal calibration matrix. After using the center-of-mass method and interpolating the calibrated matrixes, the accuracy can reach 0.1 pixel. Thus, the accuracies of the t_s and θ data calibrated from the two matrixes are 0.11 ns and 0.0004°, respectively. Both cause an error of less than 0.02 m, which is negligible for a plotting accuracy of 0.3 m.
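The interpolated table lookup can be sketched as below (an illustration with our own naming, assuming the calibration table is indexed as table[x, y] with x the fractional column coordinate and y the integer row):

```python
import numpy as np

def lookup(table, x, y):
    """Linearly interpolate a calibration table (T or Θ) at a
    fractional column coordinate x on integer row y."""
    xL = int(np.floor(x))
    xR = int(np.ceil(x))
    if xL == xR:                 # x is already integral
        return table[xL, y]
    return table[xL, y] * (xR - x) + table[xR, y] * (x - xL)
```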
In the sensor frame (t_s, θ), t_s denotes the time between when the camera starts working and when it receives the light signal, and θ is the incident angle. The normalized vector of the incident light can then be expressed through θ as follows:
\mathbf{L}_S = \frac{1}{\sqrt{1+\tan^2\theta}} \begin{pmatrix} 0 & \tan\theta & 1 \end{pmatrix}^{\mathsf{T}}.
t_s is only one part of the TOF; the other part, t_d, is the delay between when the laser and the camera were triggered. Thus, the whole TOF is t = t_s + t_d, and the distance measurement value d can be written as
d = \frac{1}{2}\, c \left( t_s + t_d \right),
where c is the speed of light.
At this point, the first coordinate transition is complete, and the next goal is the position in the scanning coordinate system. The errors introduced by this transition from the streak image and time delay to the incident vector L_S and ranging value d are smaller than the accuracy requirements for mapping, so in the subsequent calculations, we simply treated the vector L_S and the distance d as definite values.

3.2.2. The Scanning Coordinate

Laser streaks on the target object are reflected by the scanner mirror and are incident on the sensor, as shown in Figure 5, so the distance and angle inverted using the calibration matrixes represent the image point P_i instead of the target P_t.
Ideally, there would be a simple relationship between P i and P t . However, the scanning motor and image sensor are not coaxial, which leads to rotation between the two axes systems. In addition, the normal oscillating mirror and the spin axis are not at a 45° angle, leading to a more complex reflection between image and object systems.
We define the Euler angles between the sensor and mirror as r , s ,   and t and the normal space angles as Φ and Ω . The parameter Φ is the angle between the normal mirror and the motor’s rotational axis, and Ω is the placement error in the direction of motor rotation.
A high-precision circular grating ruler was fixed on the motor, from which we could read the mirror rotation angle. Not surprisingly, there was also an error between the grating ruler and the scanning system. Thus, in addition to the grating reading φ, we define another angle Ψ to correct it.
Now, the matrix R ( r , s , t ) can be used to describe the first rotation transformation R L S , and R ( 0 , 0 , Ψ + φ ) can indicate the second. Then, the norm vector can be deduced as
\mathbf{u} = R(0, 0, \Psi + \varphi) \begin{pmatrix} \cos\Phi\cos\Omega \\ \cos\Phi\sin\Omega \\ \sin\Phi \end{pmatrix}.
This step converts the coordinate from the sensor to the scanning system and contains two rotations and one reflection transformation.
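Assuming R(0, 0, Ψ+φ) denotes a rotation about the motor (z) axis, the mirror normal can be computed as in this sketch (function name ours):

```python
import numpy as np

def mirror_normal(phi, Psi, Phi, Omega):
    """Scanner mirror normal: the nominal normal tilted by the
    placement angles Phi and Omega, then rotated about the motor
    axis by the corrected scan angle Psi + phi."""
    a = Psi + phi
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    n0 = np.array([np.cos(Phi) * np.cos(Omega),
                   np.cos(Phi) * np.sin(Omega),
                   np.sin(Phi)])
    return Rz @ n0
```

Since n0 is a unit vector and the rotation is orthogonal, the result is always a unit normal, which is what the reflection matrix of Section 3.1 requires.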

3.2.3. The POS Coordinate

At this point, we should migrate the coordinate from the scanning system to the POS via rotation and translation.
First, we define u, v, and w as the installation angles of the POS, and the rotation matrix R_PL = R(u, v, w) is used to transform the scanning system to the navigational system. These three angles are difficult to measure directly, so we include them in the optimization.
Second, we define o , p , and q as the sensor’s x, y, and z axis position in the POS coordinate system. We use this position to shift values from the sensor to the navigational system, which connects the real world with longitude, latitude, and altitude. The shift vector is simpler than those matrixes and can be written as the vector P p = o , p , q T .
As the last process in the local relative coordinate system, we should solve the position P with those transition matrixes and vectors. The distance d plays a key role as the length component.

3.2.4. The World Coordinate

The POS outputs position and posture data, both located in the real world, or rather, in the WGS-84 coordinate system.
We define α, β, and γ as the POS angles, which express the real posture of the POS relative to the ground reference plane. These angles are formulated as the rotation matrix R(α, β, γ).
We use L 0 , B 0 , and H 0 to indicate the longitude, latitude, and altitude of the POS, respectively. L 0 and B 0 continue to rotate the POS axes using the following matrix:
R_e(L, B) = \begin{pmatrix}
-\sin L & -\sin B\cos L & \cos B\cos L \\
\cos L & -\sin B\sin L & \cos B\sin L \\
0 & \cos B & \sin B
\end{pmatrix}.
To combine with the POS coordinate, longitude and latitude should be converted from angle to length variables using the following conversion:
\begin{pmatrix} x \\ y \\ z \end{pmatrix} = f_T(L, B, H) = \begin{pmatrix}
(N + H)\cos B \cos L \\
(N + H)\cos B \sin L \\
\left( N(1 - e^2) + H \right)\sin B
\end{pmatrix}, \qquad N = \frac{a}{\sqrt{1 - e^2 \sin^2 B}}.
The function f_T converts the position (L, B, H) from the WGS-84 system to an ECEF (Earth-centered, Earth-fixed) point (x, y, z), where a is the semi-major axis of the Earth, and e is the first eccentricity in the Earth model.
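A sketch of f_T with the standard WGS-84 constants (the constant values are conventional ellipsoid parameters, not taken from the paper):

```python
import numpy as np

# WGS-84 ellipsoid constants
A_WGS84 = 6378137.0              # semi-major axis a [m]
E2_WGS84 = 6.69437999014e-3      # first eccentricity squared e^2

def geodetic_to_ecef(L, B, H):
    """f_T: longitude L and latitude B in radians, altitude H in meters,
    to an Earth-centered, Earth-fixed (x, y, z) point."""
    N = A_WGS84 / np.sqrt(1.0 - E2_WGS84 * np.sin(B) ** 2)
    x = (N + H) * np.cos(B) * np.cos(L)
    y = (N + H) * np.cos(B) * np.sin(L)
    z = (N * (1.0 - E2_WGS84) + H) * np.sin(B)
    return np.array([x, y, z])
```

At the equator the result reduces to (a, 0, 0), and at the pole the z component equals the semi-minor axis, a√(1−e²).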
With the same coordinate format, the point x p , y p , z p is converted to the position on Earth by
\begin{pmatrix} x_e \\ y_e \\ z_e \end{pmatrix} = R_e(L_0, B_0)\, R(\alpha, \beta, \gamma) \begin{pmatrix} x_p \\ y_p \\ z_p \end{pmatrix} + f_T(L_0, B_0, H_0).
Finally, we convert the ECEF coordinate back to the WGS-84 system using the iterative process f W x , y , z shown in (13) [35], which is the standard conversion relationship between ECEF and WGS84 systems.
f_W(x, y, z): \quad
\begin{aligned}
L &= \tan^{-1}\!\left(\frac{y}{x}\right) \\
B &= \tan^{-1}\!\left(\frac{z + N e^2 \sin B}{\sqrt{x^2 + y^2}}\right) \\
H &= \frac{\sqrt{x^2 + y^2}}{\cos B} - N \\
N &= \frac{a}{\sqrt{1 - e^2 \sin^2 B}}
\end{aligned}
The end condition is that the difference between the old and new values is less than 1 × 10⁻⁹°. Since a difference of 1° in latitude or longitude corresponds to no more than about 111 km on the ground, this process results in an error of less than 1 mm.
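The fixed-point iteration of f_W can be sketched as follows; the spherical initial guess and the loop structure are our assumptions (the paper only specifies the termination tolerance):

```python
import numpy as np

A_WGS84 = 6378137.0              # WGS-84 semi-major axis a [m]
E2_WGS84 = 6.69437999014e-3      # first eccentricity squared e^2

def ecef_to_geodetic(x, y, z, tol=np.deg2rad(1e-9)):
    """f_W: iterate the latitude B until it changes by less than
    10^-9 degrees, then recover longitude L and altitude H."""
    L = np.arctan2(y, x)
    p = np.hypot(x, y)
    B = np.arctan2(z, p)                       # spherical first guess
    while True:
        N = A_WGS84 / np.sqrt(1.0 - E2_WGS84 * np.sin(B) ** 2)
        B_new = np.arctan2(z + N * E2_WGS84 * np.sin(B), p)
        if abs(B_new - B) < tol:
            B = B_new
            break
        B = B_new
    N = A_WGS84 / np.sqrt(1.0 - E2_WGS84 * np.sin(B) ** 2)
    H = p / np.cos(B) - N
    return L, B, H
```

The iteration contracts quickly (the correction is of order e² per step), so a handful of passes reaches the 10⁻⁹° tolerance.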

3.3. Variable Types

There are many variables in the point cloud-reconstruction process, some of which are experimental data, some of which are reconfiguration parameters that can be measured, and others of which are parameters that cannot be measured well enough. The parameters that cannot be measured need to be searched using the PSO algorithm.
In Table 1, these parameters are classified into three types: experimental data (E), measurable parameters (M), and parameters that need to be checked and calibrated (C).
In addition to the calibration matrixes, the distance between two subsystems is a measurable parameter, because even with the coarsest measuring methods, an accuracy better than 1 mm can be obtained.
However, the angular placement error behaves quite differently, and it varies with the working distance. The mounting angle error causes distance errors in two directions with the form of
\begin{aligned}
\Delta_1 &= D\left( \sin(\phi + \epsilon) - \sin\phi \right) \\
\Delta_2 &= D\left( \cos\phi - \cos(\phi + \epsilon) \right)
\end{aligned}, \qquad \phi \in [0^\circ, 90^\circ],\ \epsilon > 0,
where D is the ranging value in the current coordinate system, ϕ is the measured mounting angle, ϵ is the angular placement error, and Δ₁ and Δ₂ are the distance errors due to the angular placement. Setting Δ₁ = Δ₂ = 0.3 m and D = 3000 m, ϵ takes its minimum value of 0.3/3000 = 1 × 10⁻⁴ rad when ϕ equals 0 or π/2. This means that an angular accuracy of 10⁻⁴ rad must be achieved; otherwise, a distance accuracy of 0.3 m cannot be reached.
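The worst-case numbers can be verified in a few lines (the function name is ours):

```python
import numpy as np

def placement_offsets(D, phi, eps):
    """Distance errors caused by an angular placement error eps on top
    of a measured mounting angle phi, at working distance D."""
    d1 = D * (np.sin(phi + eps) - np.sin(phi))
    d2 = D * (np.cos(phi) - np.cos(phi + eps))
    return d1, d2

# worst case from the text: D = 3000 m, eps = 1e-4 rad
d1_0, d2_0 = placement_offsets(3000.0, 0.0, 1e-4)          # phi = 0
d1_90, d2_90 = placement_offsets(3000.0, np.pi / 2, 1e-4)  # phi = 90 deg
```

At ϕ = 0 the error appears almost entirely in Δ₁ (≈ 0.3 m), and at ϕ = 90° it migrates to Δ₂, matching the endpoints discussed above.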
Thus, the main parameters to be checked are from the placement errors between the image, scanning, and POS coordinate systems.

4. Parameter Optimization

4.1. Point Selection

The whole point cloud is too big for an intelligent algorithm to optimize over. As a result, we must choose a few checking points for the iterative calculation. First, the point cloud is reconstructed using the uncalibrated parameters, and then the required points are selected from it. After target point selection, the selected points need to be denoised via radius filtering. Finally, the raw data are matched one-to-one with the point cloud using subscripts and selected for self-checking.
Different areas play different roles in the self-checking process, and these points need to be selected using different methods. Target buildings have definite edges that can be boxed to an area with determinant latitudes and longitudes. Further, the flat ground’s altitude slowly changes, so we can use it to adjust the vertical accuracy.
Target buildings are real buildings in a city, which means they appear randomly in the point cloud. It is difficult to find a complete one-to-one mapping building in adjacent strips, but it is easy in two air lines. Therefore, we have three sets of control points:
P b : target building points between different air routes.
P f : flat area points among different air routes.
P a : flat area points between adjacent scanning bands.

4.1.1. Target Buildings

To ensure that the altitude accuracy is meaningful, it is important to have the control points aligned in the latitude and longitude directions. The role of the target buildings is to align the two sets of point clouds in the horizontal direction.
In the inspection area, the buildings suitable for a parametric check have a regular geometry and stand at a certain height above the ground. A typical building suitable for checking, a school, is shown in Figure 6b.
In order to choose the target building points, a rectangle should be boxed around the building. Then, the points higher than the altitude threshold in the rectangular area should be selected. It must be ensured that the rectangular selection box is flat except for the target building.
Target buildings are used to match the points from different air lines, so the strip splicing error should be eliminated through point-selection work. We have to select buildings that stand on one strip completely, which are always small, as shown in Figure 6.
There are three sets of targets: the first contains points generated from east to west, the second from north to south, and the third from west to east. The three airlines constitute two control groups: the white-numbered buildings in the first picture are compared against the second, and the black-numbered ones against the third.
We checked the accuracy of the latitude and longitude directions using the two groups, one of which is a reverse check and the other a crosscheck. Both require four vertexes to be selected manually from the imprecise point cloud.
From the points of the target buildings, the error in the horizontal direction can be evaluated as
\bar{e}_b = \frac{1}{18}\sum_{i=1}^{9}\left( \sqrt{\left(x_i^{c1} - x_i^{d}\right)^2 + \left(y_i^{c1} - y_i^{d}\right)^2} + \sqrt{\left(x_i^{c2} - x_i^{e}\right)^2 + \left(y_i^{c2} - y_i^{e}\right)^2} \right),
where the subscript i denotes the average over the i-th set of point-cloud targets; c1, c2, d, and e correspond to the four different labeled buildings in the three figures; x is the relative distance in longitude; and y is that in latitude.

4.1.2. Flat Areas among Different Air Routes

There is a further characteristic of the flat mark point that distinguishes it from its building counterpart. A flat object has no clear edge by which to locate its horizontal position, and we cannot even make sure that the selected patch covers exactly the same area as another. However, because its height changes slowly, horizontal misalignment does not affect the altitude accuracy.
Thus, we selected a square area on streets, especially on crossroads, and made four points in every cloud. The square is more convenient than a target building because it is small enough to stay within the same scanning band. We can find the same area among the three clouds generated from different heading airlines, as shown in Figure 7.
There are also three sets in Figure 7, but we painted them in different pseudo colors to show the street contours. Every subgraph has eight marked areas, each containing approximately thirty points that are averaged to a single height. We subtracted the second and third heights from the first and obtained two groups of altitude difference values as follows:
\bar{e}_f = \frac{1}{18}\sum_{i=1}^{9}\left( \left| h_i^{a} - h_i^{b} \right| + \left| h_i^{a} - h_i^{c} \right| \right),
where h i a , h i b , and h i c are the altitude averages of the points in the i th target area in Figure 7a–c.

4.1.3. Flat Area for Adjacent Scanning Bands

Erroneous checking parameters not only lead to misalignment between buildings generated from different airlines but also cause altitude differences between adjacent scanning bands. The points across two strips have a greater field angle, so their photons are incident closer to the margin of the sensor. Sometimes, half of them are wasted in the reconstruction process, but the continuous height trend from the middle to the sides is still useful for checking the installation error.
The area is not wide enough to accommodate complete buildings on overlapped areas, so we have to select flat targets as in the second selection method. Figure 8 shows the selection positions across scanning bands beneath the east–west airline.
We select a point that spans both strips, draw a circle horizontally with that point as the center, and then average the heights of all the points within the circle. For the i-th circle, we obtain an average value on each of the front and back strips, denoted as h_i^f and h_i^b; the error criterion can be expressed as
\bar{e}_a = \frac{1}{9}\sum_{i=1}^{9}\left| h_i^{f} - h_i^{b} \right|.
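The three criteria can be sketched for a single pair of matched point clouds (a simplification: in the paper, the building and flat-area criteria each average over two cloud pairs; the array shapes and names are our assumptions):

```python
import numpy as np

def matching_errors(b1, b2, f1, f2, a1, a2):
    """Splicing-error criteria from already matched control points.
    b1, b2: (n, 2) horizontal positions of building targets in two clouds;
    f1, f2: (n,) altitude averages of flat areas in two clouds;
    a1, a2: (n,) altitude averages of the front/back adjacent bands."""
    e_b = np.mean(np.linalg.norm(np.asarray(b1) - np.asarray(b2), axis=1))
    e_f = np.mean(np.abs(np.asarray(f1) - np.asarray(f2)))
    e_a = np.mean(np.abs(np.asarray(a1) - np.asarray(a2)))
    return e_b, e_f, e_a
```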

4.2. Parameter Optimization Using the PSO Algorithm

There are nine parameters that should be optimized, as listed in Table 1, and we defined them as a checking vector p = ( r , s , t , Ψ , Φ , Ω , u , v , w ) .
It is obvious that the reconstruction of the STIL is a nonlinear process whose inverse problem cannot be solved using the matrix-form least squares method (LSM). Furthermore, it is a global optimization problem, whereas the nonlinear LSM falls into a local optimum by following the gradient-descent direction.
One of the best ways to avoid local optimums is to introduce randomness, and most intelligent heuristic algorithms work with it. If we set a random step size for the nonlinear LSM, the program evolves into an intelligent algorithm.
The PSO not only uses random steps but also introduces a competition mechanism. It is a population-based stochastic optimization algorithm inspired by the social behavior of bird flocking or fish schooling and is commonly used to find the optimal solution in a search space.
During the optimization of the placement parameters, the checking vector p is abstracted into a particle. When the PSO starts running, 50 checking vectors are generated randomly and comprise the particle swarm. The three types of checking points are reconstructed with every particle, and the weighted average is computed to obtain a matching error:
e = w_b \bar{e}_b + w_f \bar{e}_f + w_a \bar{e}_a.
The average value e i ¯ is the distance between two groups of check points in point set P i , and w i is its weight.
The particle p^i records the position of its own minimum matching error as p_h^i, and there is a best particle q_j in the j-th iteration. Every particle has a velocity v in every iteration; on its own, it would move along a straight line regardless of the matching error. However, it exists in a swarm and is burdened with the task of finding an optimal solution, so it must move toward its historical optimum p_h^i and the swarm optimum q_j. Thus, the particle p_j^i, the i-th particle in the swarm at its j-th iteration, takes the new speed v_{j+1}^i:
v_{j+1}^{i} = v_j^{i} + c_0 r_0 \left( q_j - p_j^{i} \right) + c_1 r_1 \left( p_h^{i} - p_j^{i} \right).
The vectors p_j^i, p_h^i, and q_j are the positions of the corresponding particles; c_0 and c_1 are the global and local learning constants; and r_0 and r_1 are random numbers. The pseudocode is shown in Algorithm 1.
Algorithm 1: The PSO for installation checking
  Input: 45 pairs of marking points
  Output: 9 checking parameters
  function pso(points, max_iter, target_precision)
    Initialize particles p_1 = random(50, 9)
    Initialize speeds v_1 = zeros(50, 9)
    Initialize personal bests p_h = p_1
    e = zeros(50)
    for j = 1 to max_iter
      for i = 1 to 50
        e_i = test_one_parameter(points, p_j^i)
        if test_one_parameter(points, p_h^i) > e_i
          p_h^i = p_j^i
        end if
      end for
      g = argmin_i e_i
      q_j = p_h^g
      if e_g < target_precision
        return q_j
      end if
      v_{j+1} = v_j + c_0 r_0 (q_j − p_j) + c_1 r_1 (p_h − p_j)
      p_{j+1} = p_j + v_{j+1}
    end for
    return q_j
  end function

  // calculate the error of one group of parameters
  function test_one_parameter(points, p)
    reconstruct the points with the parameter vector p
    compute the criteria e̅_b, e̅_f, and e̅_a defined in Section 4.1
    e = w_b·e̅_b + w_f·e̅_f + w_a·e̅_a
    return e
  end function
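Algorithm 1 can be made runnable as a small sketch. The true objective requires flight data, so a hypothetical quadratic surrogate stands in for test_one_parameter; the inertia weight is our addition beyond the paper's velocity update, included only to keep the textbook update numerically stable:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim=9, n_particles=50, max_iter=200,
        target_precision=1e-6, c0=1.5, c1=1.5, inertia=0.7, bound=1.0):
    """Particle swarm search over a dim-dimensional checking vector."""
    p = rng.uniform(-bound, bound, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                     # speeds
    p_h = p.copy()                                       # personal bests
    e_h = np.array([objective(x) for x in p])            # personal best errors
    for _ in range(max_iter):
        for i in range(n_particles):
            e = objective(p[i])
            if e < e_h[i]:                               # update personal best
                e_h[i], p_h[i] = e, p[i].copy()
        g = int(np.argmin(e_h))                          # swarm best q_j
        if e_h[g] < target_precision:
            break
        r0 = rng.random((n_particles, dim))
        r1 = rng.random((n_particles, dim))
        v = inertia * v + c0 * r0 * (p_h[g] - p) + c1 * r1 * (p_h - p)
        p = p + v
    g = int(np.argmin(e_h))
    return p_h[g], e_h[g]

# hypothetical surrogate for the splicing error: minimum at a known vector
truth = np.linspace(-0.5, 0.5, 9)
best, err = pso(lambda x: float(np.sum((x - truth) ** 2)))
```

On this surrogate, the swarm collapses onto the known minimum within the iteration budget, illustrating the global search behavior that the LSM lacks.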

4.3. Optimization Result

4.3.1. Coupling of Placement Parameters

Fifteen optimizations were performed, and the nine checking parameters from each optimization are shown in Figure 9. The maximum angle error is approximately 5.5°, which is physically implausible. These large angular errors imply coupling between the nine parameters. Coupling, rather than simple merging, is the biggest difficulty in checking the error of an airborne STIL.
There are three directions in the rotation operation, corresponding to the three sets of coupling relationships of the checking parameters, shown as
\begin{aligned}
w &= 0.9979\, r - 0.0355 \\
\Phi &= 0.5632\,(s + v) + 45.228 \\
\Psi + \Omega &= 0.4965\,(t - u) + 0.5391.
\end{aligned}
Their linear fitting diagram is shown as Figure 10.
The parameter coupling phenomenon indicates a correlation between different placement angles and verifies that the particle swarm algorithm can solve the multi-parameter optimization problem.
Based on the definitions in Table 1, the coupling parameters can be seen to be related in orientation. The nine parameters come from three devices: the imaging system, the scanning system, and the POS. Although the orientation of each parameter was not specified during modelling, the optimization results show that Φ, s, and v represent the same orientation in the three coordinate systems. The same is true for the other parameters. In addition, the scanning system was modelled with redundancy, resulting in Ψ and Ω playing the same role.
The coefficient between the parameters w and r in these three sets of couplings converges to 1, suggesting that the two may be condensed. The linear coupling coefficients for the other two sets of relationships are not 1, indicating that those parameters cannot be merged simply by fitting the relationships. However, according to the model’s definition, the parameters Ψ and Ω both describe the error of the motor rotation angle. The grating ruler is installed coaxially with the rotating shaft, so its zero position is virtually the same physical quantity as the motor placement error in the tangential direction of the rotating shaft.
When one or more of these parameters are fixed, the parameters associated with them will also converge to fixed values during the optimization process. In traditional calibration methods, it is necessary to obtain the exact placement error. The phenomenon of parameter coupling, on the other hand, makes the number of parameters that have to be obtained by measurements much lower, which is very beneficial for the traditional checking methods.
Considering that the purpose of checking is to improve the accuracy of the point cloud, the non-uniqueness of the optimum caused by parameter coupling is not a major issue. Even the exact physical meaning of the nine parameters seems less important, as long as they are substituted into the conformal equations to be able to invert a highly accurate point cloud.

4.3.2. Reconstruction Accuracy

An ALS topographic mapping task is essentially a measurement of the topographic relief of an area. In the reconstructed point clouds, latitude and longitude act as the coordinate frame, while elevation is the measured value. Therefore, we locate proximity points in different point clouds by their latitude and longitude and use the elevation difference between them as the reconstruction error.
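The proximity-point comparison above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `elevation_error` and its `max_dist` threshold are hypothetical, and horizontal coordinates are assumed to be projected to metres.

```python
import numpy as np

def elevation_error(cloud_a, cloud_b, max_dist=1.0):
    """Mean absolute elevation difference between proximity points.

    cloud_a, cloud_b: (N, 3) arrays of (east, north, elevation) in
    metres. Brute-force nearest-neighbour search, fine for small N.
    """
    # Horizontal distance from every point in cloud_a to every point in cloud_b.
    d = np.linalg.norm(cloud_a[:, None, :2] - cloud_b[None, :, :2], axis=2)
    idx = d.argmin(axis=1)                          # nearest neighbour per point
    ok = d[np.arange(len(cloud_a)), idx] < max_dist # keep true proximity pairs
    return np.mean(np.abs(cloud_a[ok, 2] - cloud_b[idx[ok], 2]))
```

For example, two clouds sampling the same flat area with a 0.3 m vertical offset would report an error of 0.3 m.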
Although three types of points are used in the global optimization, the error criterion is evaluated on elevation.
The resulting vertical errors are around 0.3 m, varying slightly between PSO runs, with a minimum as low as 0.271 m, as shown in Figure 11. In contrast, whether the placement errors are assumed to be 0 or optimized with a nonlinear LSM initialized with random numbers, the elevation errors of the generated point clouds all exceed 1 m.
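A minimal PSO loop in the style of Kennedy and Eberhart illustrates how such a parameter search proceeds. This is a toy stand-in for the authors' optimizer: the hyper-parameters (`w`, `c1`, `c2`, swarm size, iteration count) and the 9-dimensional sphere objective in the usage note are hypothetical; in the paper the objective is the splicing error of overlapping strips.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds = (lo, hi) with a basic PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull toward pbest + social pull toward g.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For instance, minimizing a 9-dimensional sphere function (one dimension per placement parameter) drives the objective close to zero within a few hundred iterations.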
On the one hand, the optimized placement errors fluctuate considerably and are coupled with each other; on the other hand, the point cloud reconstruction results are relatively stable, with errors confined to a small range. In fact, although we have been referring to the optimized parameters as placement errors, they are clearly not the true installation angles. In other words, we have achieved highly accurate point cloud reconstruction without calibrating the parameters.
The placement errors obtained by the PSO algorithm have a pronounced effect on point cloud reconstruction. In the horizontal direction, they resolve the misalignment between strips, which reflects the validity of the building-target criterion, as shown in Figure 12.
The difference before and after PSO is shown more intuitively in Figure 13: a clear misalignment before the nonlinear optimization is visible in Figure 13b, which the PSO smooths out, as shown in Figure 13c.

5. Conclusions

In this study, we propose a high-accuracy reconstruction method for airborne STIL. Three types of points are selected from different air routes or scanning bands and iterated as weighted variables using the PSO algorithm. The optimization results show that the elevation accuracy always remains within a small range despite the complex coupling between parameters. This means that high-precision reconstruction of 3D point clouds can be achieved without measuring the placement errors of the LiDAR system during mapping missions. Among the tested optimization settings, the best improves the elevation accuracy to within 0.3 m.

Author Contributions

Conceptualization, R.F.; methodology, X.W.; software, X.W.; validation, X.W., Z.C. and C.D.; formal analysis, X.W. and R.F.; data curation, Z.D.; writing—original draft preparation, X.W.; writing—review and editing, Z.C.; visualization, X.W.; project administration, D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62192774 and No. 62305085).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

According to the requirements of the project regulatory department, streak images and point-cloud data cannot be provided.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. STIL working mechanism; (a) system compositions and workflow of STILs; (b) work status of an ALS; (c) laser line footprints.
Figure 2. Schematic of the STIL system. (a) Schematic diagram of the data collection process. (b) The working principle of the streak array detector. (c) Streak image on CCD.
Figure 3. Diagram of the PSO algorithm.
Figure 4. Factors of conversion from a pixel coordinate to a distance-angle coordinate: (a) is the original streak image, (b) is the centroid of the streak, (c) is the calibration matrix T mapping the time delay, and (d) is the calibration matrix Θ mapping the incident angle.
Figure 5. Specular reflection diagram.
Figure 6. Target buildings: (a) top view of target buildings; (b) side view of the building marked in red in (a); (c–e) selected buildings, with (c) showing the airline from east to west, (d) from north to south, and (e) from west to east.
Figure 7. Target flat areas from different air routes; (a) the east–west airline; (b) the south–north airline; (c) the west–east airline.
Figure 8. (a) Marked positions across strips; (b) diagram of the strip overlap.
Figure 9. The distribution of each parameter after multiple PSO optimizations. In this boxplot, the boxes represent the range of data from the first to the third quartile.
Figure 10. The coupling diagram of placement parameters.
Figure 11. The altitude errors in fifteen PSO solutions.
Figure 12. A comparison of misalignment with and without PSO parameters; (a) before PSO; (b) after PSO.
Figure 13. Coincidence of point clouds under different air routes; (a) the test zone; (b) before PSO; (c) after PSO.
Table 1. Variables in the reconstruction process.

Variable Name | Physical Meaning | Variable Type
x, y | The centroids of streak images | E
T | The delay matrix | M
Θ | The angle matrix | M
t_d | The delay between sending and receiving lasers | E
r, s, t | The Euler angles between the image and the scanning coordinate | C
φ | The scanner’s raster reading | E
Ψ | The raster ruler’s zero error | C
Φ | The scanner’s installation error | C
Ω | The direction of motor rotation | C
u, v, w | The Euler angles between the image and POS coordinate | C
o, p, q | The installation position of the POS | M
α, β, γ | The roll, pitch, and head angle of the POS | E
L_0, B_0, H_0 | The longitude, latitude, and altitude of the POS | E