1. Introduction
Airborne laser scanning (ALS) is a widely used active remote sensing technique for recording the surface and terrain of the Earth. It generates accurate digital elevation models and supports fundamental applications such as civilian surveying and mapping, biomass measurement, bathymetry, and military surveillance [1,2,3,4,5]. Commonly, ALS mapping lidars include digitized waveform lidars, single-photon lidar (SPL), and Geiger-mode lidar (GML) [6]. In digitized waveform lidars (or linear-mode lidars), the received photons are converted into an electric signal proportional to the incident laser intensity by a detector working in linear mode. The target distance and the reflected intensity are contained in the echo signal, which can be used to generate a three-dimensional (3D) point cloud and to provide target reflectance or material properties via radiometric calibration. In contrast to linear-mode lidar (LML), which typically requires hundreds of detected photons, single-photon and Geiger-mode lidars can achieve range measurements with just a few returning photons per pulse, owing to the detector's sensitivity to individual photons. A typical single-photon system is the multibeam Single Photon LiDAR (SPL) developed by the Sigma Space Corporation, which uses a photomultiplier tube with a very short dead time [7]. A typical Geiger-mode system is the Harris IntelliEarth™ Geospatial Solutions Geiger-mode LiDAR sensor, which utilizes a large-pixel-format focal plane array (FPA) detector [8].
Comparisons between LML, SPL, and GML have been reported in [9,10,11,12,13,14]. The results showed that the measurement precision of SPL and GML is lower than that of LML on rough surfaces, while the precision of these systems is the same on smooth surfaces [15]. SPL and GML have higher area collection efficiency than LML, due to the high sensitivity of the detector array [16]. In addition, the ultra-high point density images generated by SPL and GML can improve foliage penetration to better sample bare earth and reveal infrastructure details [8]. In recent decades, Lincoln Laboratory has developed a series of lidar systems based on arrays of Geiger-mode avalanche photodiode (GMAPD) detectors [17,18,19]. These systems validate the excellent utility of GML in foliage-penetrating imaging and defense operations mapping. Recently, the characteristics of a high area coverage rate and high point density have opened up new applications for GML in humanitarian aid and disaster relief [20].
A proper simulation model of airborne GML is essential for predicting lidar performance and optimizing lidar design. The performance of GML in point density and ranging accuracy is significant for the quality of the point cloud product. The authors in [21] proposed a simulation of a GML system to assess system performance and generate sample data; however, the simulation has not yet been experimentally verified. In addition, GML datasets are much larger than those collected by LML systems. For a 256 × 64 pixels GMAPD array capable of readout rates in excess of 8 kHz, the raw data recording rate is about 250 MB/s [18]. With the increasing pixel formats of GMAPD arrays, the data recording rate will become extremely high in the future. The high recording rate is challenging for real-time 3D imaging, and huge storage space is required for data curation. Therefore, the establishment of a data compression algorithm and the airborne verification of the simulation model are key to improving the performance of GML systems. However, these aspects have not yet been fully studied.
In this paper, we present a simulation model of a circular scanning airborne Geiger-mode FPA imaging lidar. The point density and the optimal rotational speed of the scanner can be predicted, which is significant in designing an airborne GML system. The circular scanning airborne GML system we developed can operate at above-ground levels (AGLs) between 0.35 km and 3 km. The lidar system employs a 64 × 64 pixels InGaAs/InP detector array to obtain the ranging profiles of surfaces. In addition, we developed a real-time data compression algorithm that stores only a small range of data containing the target within the range gate. Applied in the lidar, this algorithm halves the data transmission rate and storage space compared to the uncompressed situation. We organized several initial flight tests in Wuhu City and Qionghai City to validate the simulation model and the real-time data compression method.
The remainder of this paper is organized as follows. Section 2 briefly introduces the principle of circular scanning airborne Geiger-mode lidar. Section 3 presents an overview of our circular scanning airborne GML system, including the details of the real-time data compression method and the point cloud generation method. Section 4 presents the airborne experimental results and the performance assessments. Finally, Section 5 summarizes the main findings and provides an outlook for future research.
2. Principle of Airborne Geiger-Mode FPA Imaging Lidar
Figure 1 shows the basic principle of a Geiger-mode FPA lidar. The scene is illuminated by a laser pulse, and each pixel of the Geiger-mode FPA detector precisely measures the time of flight (ToF) of the photons reflected off the surface within its instantaneous field of view (IFOV) to determine the three-dimensional coordinates.
Figure 2 illustrates the timing sequence of a Geiger-mode lidar imaging cycle. Tbin is the timing resolution of the ToF counter integrated with the GMAPD. The lidar triggers the laser to emit a pulse and, at the same time, triggers the ToF counter to start timing. The range gate delay time (Tdelay) is set based on a priori knowledge of the approximate distance to the target. Following the range gate delay, the range gate opens, and the range gate duration (Tgate) is chosen to completely encompass the volume of interest. During the range gate duration, each GMAPD in the detector array is operated at a reverse-bias voltage above the avalanche breakdown voltage to detect the reflected photons. Once a pixel detects photons, the ToF counter integrated with that pixel stops timing. Finally, when the range gate duration is over, the over-bias voltage is removed, the array ToF values are read out, and the array is ready for the next cycle.
For the airborne Geiger-mode FPA imaging lidar, assume that the terrain within the receiver field of view (FOV) is illuminated by the laser pulse and is imaged onto a solid-state N × N pixels FPA detector. The mean number of signal photoelectrons per laser fire detected by the receiver can be calculated from the lidar range equation, as shown in Equation (1) [22]:

$$n_s = \frac{\eta_t \eta_r \eta_q T_a^2 \, \rho \cos\theta \, E_t \lambda A_r}{\pi h c R^2} \qquad (1)$$

where $T_a$ is the one-way atmospheric transmission, $\eta_t$ and $\eta_r$ are the transmission efficiencies of the transmitter and the receiver optics, respectively, $\eta_q$ is the detector average photon detection efficiency (PDE), $\rho$ is the target reflectivity, $\theta$ is the target slope within the laser footprint, $E_t$ is the laser pulse energy, $\lambda$ is the laser wavelength, $h$ is Planck's constant, $c$ is the velocity of light in vacuum, $A_r$ is the collecting area of the receiving telescope, and $R$ is the distance from the lidar to the surface.
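The range equation above can be evaluated numerically. The sketch below is a minimal illustration; the function name and the example parameter values (pulse energy, optics efficiencies, aperture, range) are assumptions for demonstration, not the system's published specifications.

```python
import math

# Hedged sketch of Equation (1): mean signal photoelectrons per pulse.
def mean_signal_photoelectrons(E_t, wavelength, eta_t, eta_r, eta_q,
                               T_a, rho, theta, A_r, R):
    h = 6.626e-34   # Planck's constant (J*s)
    c = 3.0e8       # speed of light in vacuum (m/s)
    photons_tx = E_t * wavelength / (h * c)   # transmitted photons per pulse
    return (photons_tx * eta_t * eta_r * eta_q * T_a**2
            * rho * math.cos(theta) * A_r / (math.pi * R**2))

# Illustrative example: 13 uJ pulse at 1545 nm, 7.5 cm aperture, 1 km range.
n_s = mean_signal_photoelectrons(E_t=13e-6, wavelength=1545e-9,
                                 eta_t=0.9, eta_r=0.6, eta_q=0.2,
                                 T_a=0.9, rho=0.2, theta=0.0,
                                 A_r=math.pi * 0.0375**2, R=1000.0)
```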
The one-way atmospheric transmission is determined by Equation (2) [23]:

$$T_a = e^{-kR} \qquad (2)$$

where the scattering coefficient $k$ can be expressed according to visibility and wavelength by the following expression:

$$k = \frac{3.91}{V}\left(\frac{\lambda}{550}\right)^{-q} \qquad (3)$$

where $V$ is the visibility in km and $\lambda$ is the wavelength in nm. The coefficient $q$ can be calculated by:

$$q = \begin{cases} 1.6, & V > 50\ \text{km} \\ 1.3, & 6\ \text{km} < V \le 50\ \text{km} \\ 0.585\,V^{1/3}, & V < 6\ \text{km} \end{cases} \qquad (4)$$
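The visibility-based transmission model of Equations (2)-(4) can be sketched as follows (a minimal illustration using the standard Kruse-type empirical relation; the function name is illustrative):

```python
import math

# Hedged sketch of Equations (2)-(4): one-way atmospheric transmission
# from visibility via the empirical scattering-coefficient model.
def one_way_transmission(R_km, visibility_km, wavelength_nm):
    V = visibility_km
    if V > 50.0:
        q = 1.6
    elif V > 6.0:
        q = 1.3
    else:
        q = 0.585 * V ** (1.0 / 3.0)
    k = (3.91 / V) * (wavelength_nm / 550.0) ** (-q)  # scattering coeff (1/km)
    return math.exp(-k * R_km)

# Example: 1 km slant path, 15 km visibility, 1545 nm wavelength.
T_a = one_way_transmission(R_km=1.0, visibility_km=15.0, wavelength_nm=1545.0)
```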
To calculate the mean number of signal photons returned to each GMAPD in the FPA camera, several system configurations should be considered, including: (1) the far-field intensity distribution of the transmitted laser beam; (2) the ratio of the detected area to the receiver field coverage area; and (3) the effective detector fill factor of the GMAPD elements. Generally, the transmitted laser beam can be shaped as a Gaussian beam, a super-Gaussian beam, or a flat-top beam to illuminate the surface. Recently, Kim et al. provided a detailed description of a GML system that used an unshaped Gaussian beam as the transmitter [21]. In addition, the ASOTB employed a pair of commercial off-the-shelf cylindrical lenses to shape the circular laser beam to match the GMAPD receiver's 4-to-1 aspect ratio [19]. In most available GMAPD devices, microlenses are used to ensure that every returned photon incident on the array efficiently couples to the detector; the effective detector fill factor can exceed 60%.
Here, to simplify the model, we assumed that the intensity distribution of the laser footprint is uniform and that the transmitter FOV is approximately the same as the receiver FOV. Therefore, for an N × N pixels GMAPD array, the mean number of photons returned to each pixel can be calculated by Equation (5):

$$n_{pix} = \frac{\eta_a \eta_{ff}}{N^2}\, n_s \qquad (5)$$

where $\eta_a$ is the ratio of the detected area to the receiver field coverage area and $\eta_{ff}$ is the effective detector fill factor of the Geiger-mode FPA detector.
The noise photons generated in the single-photon detector mainly come from the solar background and the dark count, which pollute the 3D point cloud. For each GMAPD in the array, the received mean number of background noise photons within a range gate duration is described by Equation (6):

$$n_b = \frac{\rho \cos\theta_s \, I_\lambda \Delta\lambda \, A_r \theta_p^2 \, T_a \eta_r \eta_q \, T_{gate} \lambda}{4 h c} \qquad (6)$$

where $\Delta\lambda$ is the bandwidth of the optical filter, $\theta_p$ is each individual pixel's IFOV, $\theta_s$ is the subtended angle between the sun and the surface normal, and $I_\lambda$ is the wavelength-dependent solar spectral illuminance on the surface. For the wavelength of 1545 nm, the value of $I_\lambda$ is about 0.27 W/(m²·nm).
The PDE and the dark count rate (DCR) of every pixel in the GMAPD camera depend on the integrated circuit technology, the reverse-bias voltage, and the thermal distribution in the array, but most of the pixels have almost the same PDE and DCR [24]. Hence, the total number of noise photoelectrons detected by an individual GMAPD can be determined by Equation (7):

$$n_n = n_b + n_d = n_b + f_{DCR}\, T_{gate} \qquad (7)$$

where $n_d$ is the number of noise photoelectrons caused by the dark count in every pixel and $f_{DCR}$ is the FPA detector average DCR in Hz.
As the echo signal is weak, the number of detected photoelectrons follows Poisson statistics [21,22]. The distribution is determined by Equation (8):

$$P(m) = \frac{n^m}{m!}\, e^{-n} \qquad (8)$$

where $n$ is the mean number of photoelectrons in the Tbin and $P(m)$ is the probability that $m$ photoelectrons are detected during the imaging cycle.
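The Poisson statistics of Equation (8), and the resulting probability of at least one detection, can be sketched as (function names illustrative):

```python
import math

# Hedged sketch of Equation (8): Poisson photoelectron statistics.
def poisson_pmf(m, n_mean):
    # Probability of exactly m photoelectrons given mean n_mean.
    return n_mean**m * math.exp(-n_mean) / math.factorial(m)

def p_detect(n_mean):
    # Probability of at least one photoelectron (complement of m = 0).
    return 1.0 - poisson_pmf(0, n_mean)
```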
Normally, the dead time of a GMAPD is about 50 ns when the single-photon avalanche diode is actively quenched, so a lidar could in principle detect the surface many times during the range gate. However, the GMAPD array detector implemented in our lidar system has only one measurement opportunity during the whole range gate, due to the limit of the readout integrated circuit. Referring to the timing sequence in Figure 2, $T_o$ is the time delay before the target signal is detected. The probability that zero noise photons are detected before the target is determined by Equation (9):

$$P_0 = \exp\left(-\frac{T_o}{T_{gate}}\, n_n\right) \qquad (9)$$

Therefore, the detection probability of the signal photons is determined by Equation (10):

$$P_d = P_0 \left(1 - e^{-n_{pix}}\right) = \exp\left(-\frac{T_o}{T_{gate}}\, n_n\right)\left(1 - e^{-n_{pix}}\right) \qquad (10)$$
As was meticulously described in [1], to map unobstructed solid surfaces in daylight, there are only three possible outcomes for a given GMAPD pixel per imaging cycle: (1) a surface photon is detected; (2) no photons are detected; or (3) a noise count is detected. The probabilities of these three situations were strictly derived by Degnan [1] for a range gate approximately centered on the surface. For the range gate duration of 4096 ns, we usually set an appropriate range gate delay time to ensure that three-quarters of the range gate duration lies above the local ground altitude. Therefore, the probabilities of the three situations are determined by the following equations:

$$P_s = e^{-\frac{3}{4} n_n}\left(1 - e^{-n_{pix}}\right) \qquad (11)$$
$$P_{zero} = e^{-n_n}\, e^{-n_{pix}} \qquad (12)$$
$$P_{noise} = 1 - P_s - P_{zero} \qquad (13)$$

where $P_s$ is the probability that a surface photon is detected, $P_{zero}$ is the probability that zero photons are detected, and $P_{noise}$ is the probability of detecting a noise count.
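The three mutually exclusive per-pixel outcomes can be sketched as follows. This is a minimal illustration assuming Poisson statistics, noise counts uniform over the gate, and three-quarters of the gate preceding the surface return, per the assumption stated above; the exponential forms follow Degnan's analysis.

```python
import math

# Hedged sketch of the three per-pixel outcome probabilities per imaging
# cycle: surface detection, no detection, and noise detection.
def outcome_probabilities(n_sig, n_noise):
    # Surface photon detected: no noise fires in the 3/4 of the gate
    # before the surface, and at least one signal photoelectron arrives.
    p_surface = math.exp(-0.75 * n_noise) * (1.0 - math.exp(-n_sig))
    # Nothing detected: neither noise nor signal triggers the pixel.
    p_empty = math.exp(-n_noise) * math.exp(-n_sig)
    # Noise count detected: the only remaining outcome.
    p_noise = 1.0 - p_surface - p_empty
    return p_surface, p_empty, p_noise
```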
For a circular scanning airborne Geiger-mode FPA imaging lidar, the scanner covers the full swath width at least twice per flight line with a fore and aft look, as shown in Figure 3 [25]. The back and front arcs of the scan are treated as individual swaths during point cloud generation. The single-frame data are the ranging data acquired by the GMAPD array detector during one imaging cycle. As simulated in [7], for the circular scanning pattern, although the density increases significantly at the lateral edges of a strip, the density over much of the swath width is homogeneous. Hence, the average point density can represent the point density of the majority of the scanning area. It can be predicted by dividing the total number of detected points by the total surface area scanned during the same time interval, i.e.,

$$D = \frac{N^2 F P_s}{2 v H \tan\theta_c} \qquad (14)$$

where $F$ is the pulse repetition frequency, $H$ is the flying height, $v$ is the flying speed, and $\theta_c$ is the conical scan angle to nadir. In Equation (14), the numerator and the denominator are the number of detected ground points and the swept area in one second, respectively.
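The points-per-second over area-per-second reasoning can be sketched numerically. The example values below (array size, flying parameters, per-pixel surface-detection probability) are illustrative assumptions:

```python
import math

# Hedged sketch of Equation (14): average ground point density as
# detected points per second divided by swept area per second.
def average_point_density(N, F, p_surface, H, v, theta):
    points_per_s = N * N * F * p_surface      # detected ground points / s
    swath_width = 2.0 * H * math.tan(theta)   # full swath width (m)
    area_per_s = swath_width * v              # swept area (m^2 / s)
    return points_per_s / area_per_s

# Example: 64 x 64 array, 20 kHz PRF, 1 km AGL, 61 m/s, 15 deg scan angle.
density = average_point_density(N=64, F=20e3, p_surface=0.3,
                                H=1000.0, v=61.0, theta=math.radians(15.0))
```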
An appropriate rotational speed must be set at different flying altitudes to reduce the scanning voids. To simplify the model, we assumed that the lidar optical system is ideal. Thus, the imaging area of the N × N APD array on the ground is quadrate, and the side length can be determined by Equation (15):

$$L = \frac{N \theta_p H}{\cos\theta_c} \qquad (15)$$
For the circular scanner, the back and front arcs of the scan can be treated as individual swaths. For the backward swath, as shown in Figure 3 and Figure 4, frame I and frame J are adjacent along the scan line in the center of the swath, while frame I and frame K are adjacent across the scan line. $X_h$ is the horizontal space length between frame I and frame K, and $X_v$ is the vertical space length between frame I and frame J. To avoid scanning voids in the backward swath, when the $X_v$ value is as large as $L$, we can obtain the maximum number of rotations of the scanner per second from Equation (16):

$$n_{max} = \frac{L F}{2 \pi H \tan\theta_c} \qquad (16)$$

When the $X_h$ value is equivalent to $L$, we can obtain the minimum number of rotations of the scanner per second from Equation (17):

$$n_{min} = \frac{v}{L} \qquad (17)$$

Therefore, when the values of $X_h$ and $X_v$ are equivalent, the mapping area is evenly covered. The optimal rotational speed of the scanner is determined by Equation (18):

$$n_{opt} = \sqrt{n_{min}\, n_{max}} = \sqrt{\frac{v F}{2 \pi H \tan\theta_c}} \qquad (18)$$
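The rotation-rate bounds for a gap-free backward swath can be sketched as follows. This is a minimal illustration under the geometric assumptions of this section (footprint side length L, scan-circle radius H·tanθ); the example values are not the system's actual settings.

```python
import math

# Hedged sketch of the rotation-rate bounds for a gap-free backward swath.
def rotation_bounds(L, F, H, v, theta):
    r = H * math.tan(theta)                  # scan-circle radius on ground (m)
    n_max = L * F / (2.0 * math.pi * r)      # along-scan spacing Xv equals L
    n_min = v / L                            # across-scan spacing Xh equals L
    n_opt = math.sqrt(v * F / (2.0 * math.pi * r))  # Xv equals Xh
    return n_min, n_opt, n_max

# Example: 3.84 m footprint, 20 kHz PRF, 1 km AGL, 61 m/s, 15 deg angle.
n_min, n_opt, n_max = rotation_bounds(L=3.84, F=20e3, H=1000.0,
                                      v=61.0, theta=math.radians(15.0))
```

Note that the optimal rate is the geometric mean of the two bounds, which is a useful sanity check on the derivation.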
Considering that the forward swath covers the same area, the rotational speed range of the scanner can be wider than the range calculated by Equations (16) and (17), but the optimal rotational speed remains the same.
3. System Overview
The airborne GML is one of the key components of our multispectral lidar system for civilian surveying and mapping, biomass measurement, and bathymetry. As shown in Figure 5a, the multispectral lidar system was fitted to the belly pod of a twin-engine Diamond DA42 multipurpose platform aircraft. This relatively low-operating-cost platform offered flight endurance in excess of 3 h at speeds of 200–365 km/h and supplied electrical power for research activities.
This multispectral lidar system consists of the GML, a multi-wavelength ocean lidar (MWOL), a high-resolution aerial camera, and an integrated navigation system (INS), as shown in Figure 5b. The INS includes a global navigation satellite system (GNSS) and an inertial measurement unit (IMU) and provides the high-accuracy position and attitude of the multispectral lidar. It also offers triggering via a pulse generator with a pulse-per-second (PPS) signal, which synchronizes the MWOL and GML sensors. The MWOL was designed to work in linear mode and is mainly applied to intertidal zone surveying and bathymetric mapping. The MWOL's solid-state laser generates pulses at wavelengths of 1064 nm, 532 nm, 486 nm, and 355 nm, so that the target intensity at different wavelengths can be obtained for point cloud classification and other applications. In addition, a Palmer scanner was utilized in the MWOL with a scanning FOV of 30°, and the nominal AGL range was 0.3 km to 0.5 km. In combination, the GML operating at a wavelength of 1545 nm was designed to acquire a high-density lidar image to compensate for the MWOL's low-density point cloud.
A block diagram of the detailed components of our GML system is presented in Figure 6. A host computer sends control instructions to the GML system through a user datagram protocol and displays the raw ranging data in real time. The primary technical specifications of the GML system are listed in Table 1. The details of the system, the simulation results of the proposed system, and the method of point cloud generation are described in the following sections.
3.1. Laser and Detector Modules
The electro-optical receiver is a 64 × 64-pixel, 50-micron-pitch, photon-counting GMAPD array. Normally, the Geiger-mode APD operates at a reverse-bias voltage above the avalanche breakdown voltage and generates large currents that can be detected by digital timing circuitry when just a few photons hit the APD sensing area. The mean photon detection efficiency of the APD reaches 20%. The dark count rate of each APD is about 5 kHz, achieved by cooling the APD array to −20 °C during normal airborne operations. The solar noise per pixel is minimized through a narrow bandpass spectral filter, a small receive telescope 7.5 cm in diameter, and a small receive IFOV.
The GMAPD receiver array is sensitive to single photons, thereby relaxing the requirements of the transmitted laser power. A low-power pulsed fiber laser could be chosen to replace the high-power solid-state laser. Therefore, the requirements of overall system power, weight, and size would be reduced, which is important for airborne lidar applications. In this GML system, a diminutive, pigtailed, pulsed fiber laser source with an operating wavelength of 1545 nm was employed. The pulse width of the fiber laser was approximately 700 ps. The fiber laser emitted a Gaussian beam with a pulse repetition rate of 20 kHz and average power of 260 mW.
3.2. Optical System Design and Scanner Module
In the transceiver unit, the laser pulse is directed to the ground by a beam expander and a scanner. The returning light is collected by a custom-built primary lens and collimated by a collimating lens. The light then passes through a 25.4 cm diameter, 3 nm wide narrow-band optical filter to reduce the background light that affects the detector during daytime operations. Finally, the returning light is focused on the GMAPD array by a lens. To ensure that the laser path is coaxial with the optical axis of the main telescope, a 15 cm diameter hole was bored in the middle of the primary lens, as shown in Figure 6. Moreover, the mounting space of the transmitting optics was reduced, which is of great benefit for space-starved airborne lidar operations. A further advantage is that the coaxial optical system is easier to adjust and align than an off-axis optical system.
The IFOV of each pixel in the detector array was approximately 60 μrad. The far-field divergence of the transmitted beam was fixed at 5.5 mrad after the laser passed through the transmitting optics, which is larger than the receive FOV and offers a high coaxial tolerance. The GML can be regarded as a range camera; hence, each single frame of data can be seen as an image with distance. As with conventional imaging systems, aberrations caused by the GML receiver optics, such as spherical aberration, coma, astigmatism, and field curvature, cannot be ignored. In this GML, all the lenses were chosen to be aspherical to minimize the influence of aberrations. The focal length of the imaging optical system was 833.33 mm. The diameter of the circle of confusion was optimized to 45 μm, which is within the pixel size.
The scanner is a scanning wedge driven by a high-performance brushless motor, whose rotational speed exceeded 2000 revolutions per minute (RPM). Because the actual laser location on the surface is significant for point cloud generation, the installation angle and slope of the wedge prism should be measured with high accuracy. The rolling axis of the scanner was set collinear with the optical axis of the transceiver optics. As shown in Figure 6, the upper side of the wedge prism was perpendicular to the transceiver optical axis, while the sloped side faced the ground to reduce the number of refractions in the prism. The wedge angle of the prism was 19.536°, as measured by a bridge coordinate measuring machine. The scanning angle could then be accurately calculated based on the refraction equation.
3.3. Data Sampling and Real-Time Data Compression
The FPGA triggers the laser to emit a pulse. Then, the timers behind each GMAPD pixel are triggered to count after a delay time that depends on the flying altitude. These timers adopt 12-bit counters to record the time of flight (ToF) of echo events with a timing resolution of 1 ns. Therefore, the data recorder/acquisition server needs 2 bytes to store the ToF for each pixel, and the storage rate reaches at least 156 MB/s at a frame rate of 20 kHz. This storage rate may be challenging for the SATA2 interface, and a huge storage space is required for data curation. Therefore, developing real-time data compression is necessary to reduce the storage rate.
For the GML system with a small FOV, we assumed that there is only one target in a single frame. Then, the reflected photons of the target can be detected by most pixels in the array, and the ToF of these pixels is approximately the same within this frame. We recorded a small distance range that contained the primary target in 8 bits of memory space, while the other ToF values were discarded. In this way, the 12-bit ToF was compressed to 8 bits; the recorder server needed only one byte to store the ToF for each pixel, which meant that the data storage rate could be reduced to 78 MB/s. Compared to the uncompressed situation, the data storage space was reduced by half. The real-time compression algorithm can be easily realized in an FPGA by the following steps:
Step 1: For the single-frame ranging data acquired by the GMAPD, the peak count in the time-correlated histogram is found and taken as the reference value A.
Step 2: Subtracting A from each pixel's ToF value (B1, B2, …, B4096) gives the difference values (C1, C2, …, C4096).
Step 3: If the difference value lies in [−63, 64], the flag bit is set to one to tag this pixel's ToF as valid, and 7 bits record the difference value calculated in Step 2. If the difference value is beyond [−63, 64], the flag bit is set to zero to tag this pixel's ToF as invalid.
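The three steps above can be modeled in software as follows. This is a hedged reference sketch of the scheme, not the FPGA implementation itself; the function name and the tuple-based output encoding are illustrative.

```python
from collections import Counter

# Hedged Python model of the 3-step compression: each 12-bit ToF becomes
# a 1-bit validity flag plus a 7-bit offset from the histogram peak.
def compress_frame(tof_values):
    # Step 1: reference value A = the peak bin of the ToF histogram.
    A = Counter(tof_values).most_common(1)[0][0]
    compressed = []
    for B in tof_values:
        C = B - A                        # Step 2: offset from the peak
        if -63 <= C <= 64:               # Step 3: within the 7-bit window
            compressed.append((1, C))    # valid: flag = 1, keep offset
        else:
            compressed.append((0, 0))    # invalid: flag = 0, discard
    return A, compressed
```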
We created a scenario in which the lidar images a building at a fixed distance of 1.834 km. As shown in Figure 7, the pixels' ToF values near the target were retained, while the others were discarded by the real-time data compression algorithm. For single-frame data, the root mean square (RMS) range precision of the building was 0.09 m after noise filtering.
Moreover, in order to match the timestamps between the scanning system, the detector system, and the INS, the GPS's PPS signal was used as the synchronization pulse, and the FPGA's internal oscillator clock was used for finer timing between pulses. Finally, the synchronized information, including the lidar system position, pose, encoder value, and the compressed counting values, was saved to the SSD. The storage structure of the data is shown in Figure 8.
3.4. System Analysis and Simulation Results
The laser power requirements of a GML system depend on several parameters, which were analyzed in Section 2. For our GML system, the major technical specifications are shown in Table 1. We assumed that the plane's flying speed, the target reflectivity, the two-way atmospheric transmission, and the visibility were 220 km/h, 20%, 81%, and 15 km, respectively. By substituting these parameters into the simulation model, we obtained the simulation results shown in Table 2, including the mean number of detected signal photons per pixel per pulse, the mean number of noise photons during the range gate per detection cycle, the point density, and the scanner rotational speed.
3.5. Point Cloud Generation
For an airborne lidar, a series of coordinate frame transformations are executed to calculate the geo-referencing of points. First, the 3D coordinates of the laser footprint in the lidar reference frame (LRF) were obtained based on the lidar geometric model. Then, the points in the LRF were transformed to the INS reference frame (IRF) according to the boresight angles. After that, the points in the IRF were transformed to the local geodetic frame (LGF) through the three attitude angles (roll, pitch, and yaw) provided by the IMU. Finally, the coordinates of the points in the LGF were transformed to the Earth-centered, Earth-fixed (ECEF) reference frame (ERF). The parameters of the transformation from the LGF to the ERF (including the latitude, the longitude, and the ellipsoidal height) were provided by the GNSS. Referring to the literature [26,27,28,29], the generic georeferencing of a typical lidar survey system can be calculated as follows:

$$\mathbf{r}^{ERF} = \mathbf{t}_{IRF}^{ERF} + \mathbf{R}_{IRF}^{ERF}\left(\mathbf{R}_{LRF}^{IRF}\,\mathbf{r}^{LRF} + \mathbf{t}_{LRF}^{IRF}\right) \qquad (19)$$

where $\mathbf{r}^{ERF}$ is the coordinates of the point on the ground in the ERF, $\mathbf{t}_{IRF}^{ERF}$ is the offset vector from the IRF to the ERF, $\mathbf{R}_{IRF}^{ERF}$ is the rotation matrix from the IRF to the ERF, $\mathbf{R}_{LRF}^{IRF}$ is the rotation matrix from the LRF to the IRF, $\mathbf{r}^{LRF}$ is the position of the laser footprint on the ground in the LRF, and $\mathbf{t}_{LRF}^{IRF}$ is the offset vector from the LRF to the IRF.
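The chain of rigid-body transforms can be sketched with NumPy. The rotation matrices and offsets below are placeholders; in practice they come from boresight calibration, the IMU attitude, and the GNSS position.

```python
import numpy as np

# Hedged sketch of the georeferencing chain: lidar frame (LRF) -> INS
# frame (IRF) -> ECEF frame (ERF), each step a rotation plus an offset.
def georeference(p_lrf, R_lrf_to_irf, t_lrf_to_irf,
                 R_irf_to_erf, t_irf_to_erf):
    p_irf = R_lrf_to_irf @ p_lrf + t_lrf_to_irf   # boresight + lever arm
    return R_irf_to_erf @ p_irf + t_irf_to_erf    # attitude + position
```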
For a lidar employing a GMAPD array, the establishment of the lidar geometric model differs from that of a lidar system with a single detector. In our GML system, the laser is only used to illuminate the surface, and the detector captures an image with distance, as shown in Figure 2. According to the reversibility of light, we assumed that the light is emitted from the pixels in the detector array, so the light-emitting unit vector can be easily determined through a ray-tracing model. As described in [21], we defined the lidar's coordinate system Oxyz, as shown in Figure 9. The origin O of the lidar coordinate system is the perspective center of the lidar imaging system. The Oz-axis is perpendicular to the detector focal plane and points to the ground along the optical axis of the imaging system. The Ox-axis and Oy-axis are parallel to the sides of the array, forming a right-handed system.
As shown in Figure 9b, the focal length of the imaging system is $f$, the array center is denoted by coordinate A (0, 0, $-f$), and each pixel coordinate is denoted by B ($x$, $y$, $-f$). The values of $x$ and $y$ depend on the pixel pitch of the array and the pixel location in the array. Therefore, the ray unit vector can be calculated by:

$$\mathbf{u} = \frac{(-x, -y, f)^T}{\sqrt{x^2 + y^2 + f^2}} \qquad (20)$$
When the ray propagates to the scanner, its path through the wedge prism can be traced by Snell's law. A diagram of the prism-scanner assembly is shown in Figure 10. In this circumstance, we defined the scanner coordinate system to be the same as the lidar coordinate system, and the rotation angle given by the encoder was defined as zero. Therefore, the initial surface normal vectors of the upper and lower facets of the wedge prism can be given by:

$$\mathbf{n}_1 = (0, 0, 1)^T \qquad (21)$$
$$\mathbf{n}_2 = (\sin\alpha, 0, \cos\alpha)^T \qquad (22)$$

where $\alpha$ is the slope of the wedge prism.
However, the normal vectors of the upper and lower faces of the wedge prism change as the scanner rotates. Because the scanner rotates the prism about its Z-axis, the scanner rotation transform matrix can be written as:

$$\mathbf{R}(\varphi) = \begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (23)$$

where $\varphi$ is the instantaneous angular position of the wedge prism measured by the encoder.
Hence, the normal vectors of the upper and lower facets of the wedge prism after rotation can be determined by Equations (24) and (25):

$$\mathbf{n}_1' = \mathbf{R}(\varphi)\,\mathbf{n}_1 \qquad (24)$$
$$\mathbf{n}_2' = \mathbf{R}(\varphi)\,\mathbf{n}_2 \qquad (25)$$
Based on the vector version of Snell's law [26], the direction of the laser exiting the prism $\mathbf{u}_2$ can be calculated by Equations (26) and (27):

$$\mathbf{u}_1 = \frac{n_a}{n_p}\,\mathbf{u} + \left(\frac{n_a}{n_p}\cos\theta_i - \cos\theta_t\right)\mathbf{n}_1' \qquad (26)$$
$$\mathbf{u}_2 = \frac{n_p}{n_a}\,\mathbf{u}_1 + \left(\frac{n_p}{n_a}\cos\theta_i' - \cos\theta_t'\right)\mathbf{n}_2' \qquad (27)$$

where $n_a$ is the refractive index of air, $n_p$ is the refractive index of the prism, $\theta_i$ and $\theta_t$ ($\theta_i'$ and $\theta_t'$) are the incidence and refraction angles at the upper (lower) facet, and $\mathbf{u}_1$ is the ray direction inside the prism.
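A single refraction step of the vector form of Snell's law can be sketched as follows; applying it once at each facet (air to prism, then prism to air) traces the ray through the wedge. The function name is ours, and total internal reflection is deliberately not handled in this sketch.

```python
import numpy as np

# Hedged sketch of one vector-Snell refraction: ray direction d crosses a
# surface with unit normal n (oriented against the incoming ray), passing
# from refractive index n1 into n2.
def refract(d, n, n1, n2):
    d = d / np.linalg.norm(d)
    mu = n1 / n2
    cos_i = -float(np.dot(n, d))             # cosine of incidence angle
    sin_t2 = mu**2 * (1.0 - cos_i**2)        # squared sine of refraction angle
    cos_t = np.sqrt(1.0 - sin_t2)            # assumes no total internal reflection
    return mu * d + (mu * cos_i - cos_t) * n
```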
The ranging information measured at the pixel is $r$, and the point position in the lidar coordinate system can be determined by Equation (28):

$$\mathbf{r}^{LRF} = r\,\mathbf{u}_2 \qquad (28)$$
Finally, the point geo-referencing in the ECEF reference frame can be calculated from Equations (19)–(28).