Article

Intrinsic Calibration of Multi-Beam LiDARs for Agricultural Robots

Na Sun, Quan Qiu, Zhengqiang Fan, Tao Li, Chao Ji, Qingchun Feng and Chunjiang Zhao

1 College of Engineering and Technology, Southwest University, Chongqing 400715, China
2 Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
3 Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
4 College of Mechanical and Electronic Engineering, Northwest A&F University, Xianyang 712100, China
5 Institute of Mechanical Equipment, Xinjiang Academy of Agricultural and Reclamation Science, Shihezi 832000, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2022, 14(19), 4846; https://doi.org/10.3390/rs14194846
Submission received: 19 July 2022 / Revised: 6 September 2022 / Accepted: 26 September 2022 / Published: 28 September 2022
(This article belongs to the Topic Artificial Intelligence in Sensors)

Abstract

With the advantages of high measurement accuracy and a wide detection range, LiDARs have been widely used in information perception research for the development of agricultural robots. However, the internal configuration of the laser transmitter layout changes as the sensor's working duration increases, which makes it difficult to obtain accurate measurements with calibration files based on factory settings. To solve this problem, we investigate the intrinsic calibration of multi-beam laser sensors. Specifically, we calibrate the five intrinsic parameters of LiDAR (measured distance, rotation angle, pitch angle, horizontal distance, and vertical distance) with a nonlinear optimization strategy based on static planar models. Firstly, we establish a mathematical model based on the physical structure of LiDAR. Secondly, we calibrate the internal parameters according to the mathematical model and evaluate the measurement accuracy after calibration. Here, we illustrate the parameter calibration in three steps: planar model estimation, objective function construction, and nonlinear optimization. We also introduce the ranging accuracy evaluation metrics, including the standard deviation of the distance from the laser scanning points to the planar models and the 3σ criterion. Finally, the experimental results show that the ranging error of the calibrated sensors can be maintained within 3 cm, which verifies the effectiveness of the laser intrinsic calibration.

1. Introduction

Light Detection and Ranging (LiDAR) sensors, cameras, and other information perception sensors are essential components in current robotics and automation systems [1,2]. The performance of such systems highly depends on the quality of the intrinsic and extrinsic calibration parameters of these sensors [3,4,5,6]. Accurate intrinsic parameters ensure that the data obtained by the sensors are meaningful and valid. Currently, the intrinsic calibration techniques for cameras are relatively mature, and many open-source packages are available [7,8]. However, the intrinsic calibration of LiDAR needs further investigation due to its complex manufacturing process. Typically, a LiDAR undergoes rigorous internal calibration before leaving the factory, which provides the initial parameter values. As the service time of a LiDAR increases, these initial parameters may drift from their optimal values due to shock-induced changes in the internal mechanical parts of the sensor. Especially in agriculture, rough farmland exacerbates the loosening of the internal components of LiDARs mounted on ground vehicles.
At present, LiDARs have a wide range of applications in the agricultural field. Some common LiDAR-based research hotspots include 3D reconstruction [9,10], crop phenotyping [11], and yield estimation [12]. In these applications, the observed objects (such as crops and fruits) are characterized by small measurement targets and fine information perception. This characteristic requires LiDAR to meet centimeter-level or even millimeter-level measurement accuracy. For example, LiDAR-based crop organ-level phenotyping requires measuring fine morphological parameters such as leaf length, leaf angle, and stem diameter [11]. Therefore, it is especially critical to study the intrinsic calibration of LiDARs for agricultural application scenarios.
To ensure a high measurement accuracy of LiDARs, a secondary calibration of the sensor parameters is required. In this paper, we investigate the intrinsic calibration of the multi-beam laser sensors, and the main contributions are as follows:
(1) We established a mathematical model based on the physical structure of HDL-64E_S3, determined the objective function of sensor intrinsic calibration, and solved it based on nonlinear optimization.
(2) We verified the feasibility of an intrinsic calibration strategy by field experiments in unstructured agricultural environments.

2. Related Work

2.1. LiDAR Application in Agriculture

Compared with cameras, LiDARs have the advantages of higher measurement accuracy, better robustness, and richer 3D environment information. These advantages give LiDARs great potential for applications in agriculture [13]. The relevant literature indicates that LiDARs have been widely used in research on agricultural robot navigation [14,15,16,17], target identification [18,19,20,21], and high-throughput crop phenotyping [22,23,24,25]. Figure 1 shows some examples of LiDAR applications in agricultural environments. For example, Ref. [15] proposed an algorithm called VineSLAM for localization and mapping in a woody-crop vineyard. This approach used both point and semiplane features extracted from 3D LiDAR data to map the environment and localize the robot with a novel particle filter that considers both feature modalities. Crop discrimination at the plant or patch level is vital for modern technology-enabled agriculture. Ref. [21] used an advanced deep learning framework to improve and apply object-level classification to three vegetable crops (cabbage, tomato, and eggplant) using high-resolution LiDAR point clouds. Ref. [11] proposed a new field sensing solution for high-throughput phenotyping, in which a robot moves around the parcel to collect the point cloud and an open-source library, the Point Cloud Library, is used to acquire plant height and row spacing. Although LiDARs have extremely attractive applications, as mentioned earlier, their measurement accuracy may gradually decrease during long-term use. Therefore, it is necessary to investigate the intrinsic calibration of LiDARs.

2.2. LiDAR Intrinsic Calibration

The intrinsic calibration of LiDAR is directly related to its type and working principle. Different types of LiDARs have different working principles, so their corresponding mathematical models are also different. At present, LiDAR can be roughly divided into two categories according to its working principles: multi-beam LiDAR and solid-state LiDAR.
Multi-beam LiDAR scans by spinning a macroscopic component, either the whole sensor or an optical element such as a prism or a galvanometer mirror. It therefore offers a larger horizontal field of view. However, this mechanical construction results in moving parts, large enclosures, and poor tolerance to vibration and shock. Thus, it is necessary to calibrate the intrinsic parameters before use. Normally, the intrinsic parameters and mathematical models also differ across LiDAR versions with different beam counts: 64-beam LiDARs have five intrinsic parameters, whereas 16-beam and 32-beam LiDARs have three [3,26,27,28,29,30,31]. According to a literature search, the target models used for LiDAR intrinsic calibration include both planar and columnar models. Ref. [29] proposed a measurement calibration method based on the condition-adjustment equation and compared the calibration results with the original factory values to prove the effectiveness of the method. Ref. [32] estimated the intrinsic parameters of a 32-beam LiDAR based on constrained point clouds of columnar surface features (e.g., light poles) and fitted 3D column surface models. Some studies also demonstrate intrinsic calibration methods without feature targets. Ref. [33] put forward an unsupervised calibration method, which assumes that points in space are located on adjacent surfaces; based on this assumption, an energy function was defined to calibrate the intrinsic and extrinsic parameters of LiDARs.
Solid-state LiDAR uses electronic components such as optical phased arrays, photonic ICs, and far-field radiation patterns instead of mechanically rotating components to steer the emitted laser beam. It avoids large mechanical parts in its physical structure and has generated great interest because it also provides scalability, reliability, and embeddability. At present, there are few studies on the intrinsic calibration of solid-state LiDAR [3,28]. Ref. [28] introduced a geometrical model for the scanning system of a solid-state LiDAR based only on Snell's law and the sensor's specific mechanics. Compared with multi-beam LiDAR, however, solid-state LiDAR is limited in its horizontal field of view and cannot cover a 360° panorama.
In summary, multi-beam LiDAR is still the mainstream sensor in many application studies due to its wide horizontal field of view. In this paper, we select the Velodyne HDL-64E_S3 multi-beam 3D LiDAR to study intrinsic calibration. Firstly, we introduce the data transmission format of HDL-64E_S3 and establish a mathematical model based on its physical structure and working principle to determine the relationship between the internal parameters and the point cloud data. Secondly, according to the measurement accuracy evaluation metrics of LiDARs presented in this paper, the objective function for sensor intrinsic calibration is derived. Finally, a nonlinear optimization algorithm is used to find the optimal solution of the system of overdetermined equations formed by the objective function.

3. Materials and Methods

3.1. HDL-64E_S3 LiDAR

The Velodyne HDL-64E_S3 LiDAR is shown in Figure 2a. This LiDAR consists of 64 laser emitters that fire laser beams outward at different pitch angles. These emitters are driven by a high-speed rotating motor to achieve a panoramic scan with a 30° pitch-view angle range and a 360° horizontal-view angle range. In addition, the sensor is equipped with 64 laser receivers for measuring the distance information returned by target reflection. The laser emitters are divided into 4 groups of 16. The serial number and arrangement position of each group are shown in Figure 2b (each group is indicated by a different color). The two groups of emitters located in the upper and lower regions are called the upper block sequences (laser sequences 0 to 31) and the lower block sequences (laser sequences 32 to 63), respectively. After the sensor is powered on, the upper and lower block sequences fire laser beams in pairs, following the order of arrangement. In Figure 2b, for example, laser emitters 0 and 32 form a simultaneous light-emitting pair.

3.1.1. Data Transmission Format

Data transmission of HDL-64E_S3 is based on UDP Ethernet packets. Each packet contains a header, a data payload of firing data, and status data. Data packets are assembled from the firing data of six upper block sequences and six lower block sequences. The upper and lower block sequences are distinguished by block identification bits (0xEEFF and 0xDDFF, respectively). The upper block laser distance and intensity data are collected first, followed by the lower block laser data. The data packet is then combined with the status and header data in a UDP packet transmitted over Ethernet. The overall structure of the packets is shown in Figure 3.
As can be seen from Figure 3, each block of laser data contains only one 2-byte block identification bit (the block id) and one 2-byte rotation angle. The remaining data are 2 bytes of distance and 1 byte of intensity information collected by each laser in the corresponding block sequence. Here, the block id indicates the block sequence, that is, the upper block or the lower block. Different block sequences correspond to different laser serial numbers, as shown in Figure 2b. By identifying the block id, we can retrieve the data (rotation angle, distance, and intensity) collected by the lasers of the corresponding block sequence. The rotation angle represents the instantaneous rotation angle of the LiDAR, not that of the 32 lasers in the block sequence. Since all the lasers are mounted on the same plane, the actual rotation angle of each laser consists of the instantaneous angle and a compensation angle.
As described above, a data packet has 12 block sequences, of which 6 are upper block sequences and 6 are lower block sequences. Each block includes a block id, a rotation angle, and the data collected by the lasers, namely distance and intensity. In addition, the status data always contains a 4-byte GPS timestamp. The status data also carries one rotating field that cycles through different pieces of information; this is not the focus of this paper and will not be elaborated here.
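To make the packet layout concrete, the following Python sketch parses one HDL-64E_S3 data payload. It is a minimal illustration, not Velodyne's reference parser: the 100-byte block layout follows the description above, while the little-endian byte order and the scaling factors (0.01° per rotation count, 2.0 mm per distance count) are common Velodyne conventions that should be checked against the sensor manual.

```python
import struct

UPPER_BLOCK_ID = 0xEEFF  # upper block identification bits
LOWER_BLOCK_ID = 0xDDFF  # lower block identification bits

def parse_payload(payload: bytes):
    """Parse the 12 firing blocks and the GPS timestamp of one packet.

    Assumed block layout (100 bytes): 2-byte block id, 2-byte rotation
    angle, then 32 x (2-byte distance + 1-byte intensity).
    """
    readings = []
    for b in range(12):
        base = b * 100
        block_id, rotation = struct.unpack_from('<HH', payload, base)
        # Lasers 0-31 report in upper blocks, lasers 32-63 in lower blocks.
        first_laser = 0 if block_id == UPPER_BLOCK_ID else 32
        theta_deg = rotation * 0.01              # assumed 0.01 deg per count
        for i in range(32):
            dist, intensity = struct.unpack_from('<HB', payload, base + 4 + 3 * i)
            readings.append((first_laser + i, theta_deg,
                             dist * 0.002,       # assumed 2.0 mm per count
                             intensity))
    # Status data: the block section is followed by a 4-byte GPS timestamp.
    timestamp, = struct.unpack_from('<I', payload, 12 * 100)
    return readings, timestamp
```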

3.1.2. Mathematical Model for Point Coordinates Calculation

Based on the physical structure of the sensor's eccentric rotation, we established a mathematical model of point coordinate calculation for HDL-64E_S3, as shown in Figure 4 [27,29,31]. In the model, we constructed two Cartesian coordinate systems, where the origins of the blue and black coordinate systems are the center point of the LiDAR and the laser emitter inside the LiDAR, respectively. The point cloud output of the HDL-64E_S3 LiDAR is determined by solving the mapping relationship between these two coordinate systems. The main parameters of the mathematical model are as follows:
  • $l_i$: Measured distance returned by lasers;
  • $\Delta l_i$: Distance compensation value for the laser-measured distance $l_i$;
  • $\theta_i$: Rotation angle in the X-Y plane (counterclockwise rotation is the positive direction);
  • $\Delta \theta_i$: Compensation angle for the horizontal rotation angle $\theta_i$;
  • $\varphi_i$: Pitch angle in the Y-Z plane;
  • $h_{osc\_i}$: Horizontal compensation distance of lasers (the red line in Figure 4b);
  • $v_{osc\_i}$: Vertical compensation distance of lasers (the red line in Figure 4a).
Here, the letter i is the serial number of the laser emitters; $l_i + \Delta l_i$ and $\theta_i + \Delta \theta_i$ are the actual distance and actual rotation angle collected by the lasers, respectively.
Typically, long-term wear and tear on the LiDAR causes slight changes in the pitch angles of the laser emitters, so the factory-set pitch angle $\varphi_i$ no longer maintains its optimal value. This, in turn, causes the other internal parameters associated with the pitch angle to change as well. To compensate for the changes in the pitch angles, we redefine the pitch angle of the laser emitters as $\varphi_i + \Delta \varphi_i$, where $\Delta \varphi_i$ is the change in the pitch angle of the laser emitter with serial number i. Therefore, the coordinate values (x, y, z) of the measured target can be expressed as:
$$
\begin{aligned}
x &= \left( (l_i + \Delta l_i)\cos(\varphi_i + \Delta \varphi_i) - v_{osc\_i}\sin(\varphi_i + \Delta \varphi_i) \right)\sin(\theta_i + \Delta \theta_i) - h_{osc\_i}\cos(\theta_i + \Delta \theta_i) \\
y &= \left( (l_i + \Delta l_i)\cos(\varphi_i + \Delta \varphi_i) - v_{osc\_i}\sin(\varphi_i + \Delta \varphi_i) \right)\cos(\theta_i + \Delta \theta_i) + h_{osc\_i}\sin(\theta_i + \Delta \theta_i) \\
z &= (l_i + \Delta l_i)\sin(\varphi_i + \Delta \varphi_i) + v_{osc\_i}\cos(\varphi_i + \Delta \varphi_i)
\end{aligned}
\tag{1}
$$
Equation (1) can be simplified as:
$$ f(x, y, z) = g(l, \Delta l, \theta, \Delta \theta, \varphi, \Delta \varphi, h_{osc}, v_{osc}) \tag{2} $$
where $\Delta l$, $\Delta \theta$, $\Delta \varphi$, $h_{osc}$, and $v_{osc}$ are the five intrinsic parameters to be calibrated; $l$, $\theta$, and $\varphi$ are known parameters, which can be obtained from the UDP packets and the factory calibration files, respectively. Thus, Equation (2) can be further expressed as:
$$ f(x, y, z) = g(\Delta l, \Delta \theta, \Delta \varphi, h_{osc}, v_{osc}) \tag{3} $$
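As a sanity check, Equation (1) maps directly to code. The sketch below (our own helper, with angles in radians) converts one raw reading and its five compensation terms into Cartesian coordinates:

```python
import numpy as np

def laser_point(l, theta, phi, dl, dtheta, dphi, h_osc, v_osc):
    """Equation (1): one raw reading -> Cartesian point (x, y, z).

    l: measured distance [m]; theta: rotation angle [rad];
    phi: pitch angle [rad]; dl, dtheta, dphi, h_osc, v_osc:
    the five intrinsic compensation parameters of this laser.
    """
    d = l + dl                                 # actual distance
    th = theta + dtheta                        # actual rotation angle
    ph = phi + dphi                            # actual pitch angle
    r = d * np.cos(ph) - v_osc * np.sin(ph)    # range projected into X-Y
    x = r * np.sin(th) - h_osc * np.cos(th)
    y = r * np.cos(th) + h_osc * np.sin(th)
    z = d * np.sin(ph) + v_osc * np.cos(ph)
    return np.array([x, y, z])
```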

3.2. Parameter Calibration

An offline calibration approach based on static plane models is used to compensate for the measurement errors of the sensors [36,37]. First, we scan a surrounding environment containing planar surfaces with a LiDAR placed at a fixed position. Then, we fit estimation planes to the point clouds returned by the LiDAR. Finally, we correct the internal parameters using the estimated planar models. The specific steps of the intrinsic calibration are as follows.
(i) Estimating the Plane Model. We estimate the plane models using the raw data collected by our LiDAR and calculate the parameters of the plane models.
(ii) Constructing the Objective Function. We assume that the lower the dispersion of the distances from the laser-scanned points to the plane model, the higher the accuracy of the parameter calibration. Here, we substitute the mathematical model of the LiDAR point coordinates into the estimated plane model and derive the distance function from a scanned point to the plane model. Under this assumption, minimizing this distance function yields the objective function of the parameter calibration.
(iii) Solving the Overdetermined Equations. We construct a system of overdetermined equations based on the objective function, taking the tuples of sensor measurement values and the corresponding plane model parameters as input values. To find the optimal solution of this system, a nonlinear optimization method based on the Levenberg–Marquardt (LM) algorithm is used [38].

3.2.1. Plane Model

Theoretically, the points q(x, y, z) measured on a plane by the lasers should satisfy the equation of the scanned plane. Thus, we formulate the plane equation as:
$$ a_k x + b_k y + c_k z + d_k = 0 \tag{4} $$
where the letter k is the index of the scanned plane and $a_k$, $b_k$, $c_k$, $d_k$ are the plane model parameters, which can be estimated by the Random Sample Consensus (RANSAC) algorithm. RANSAC is a parameter estimation algorithm for mathematical models that performs well in terms of the efficiency and accuracy of plane detection. We implement the detection of multiple planes by calling the SACSegmentation class of the Point Cloud Library (PCL). The distance threshold was set to 0.002 m and the maximum number of iterations to 10,000.
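The authors use PCL's SACSegmentation in C++; as an illustrative substitute (not the paper's implementation), the same multi-plane extraction can be sketched in Python with Open3D's RANSAC plane segmentation, reusing the thresholds quoted above:

```python
import open3d as o3d

def extract_planes(pcd, k=4, dist_thresh=0.002, iters=10000):
    """Extract k dominant planes by RANSAC; return their (a, b, c, d)."""
    planes, rest = [], pcd
    for _ in range(k):
        model, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                            ransac_n=3,
                                            num_iterations=iters)
        planes.append(model)                               # a*x + b*y + c*z + d = 0
        rest = rest.select_by_index(inliers, invert=True)  # drop this plane's inliers
    return planes

# Usage (hypothetical input file):
# pcd = o3d.io.read_point_cloud("parking_lot_scan.pcd")
# wall_planes = extract_planes(pcd)
```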

3.2.2. Objective Function

To construct the objective function, we take the number of noise points near the planar point cloud as the evaluation metric of measurement accuracy, which visualizes the dispersion of the point cloud on the plane scanned by the LiDAR, as shown in Figure 5. The figure shows the plane model and the related point cloud collected by the LiDAR, where points of different colors indicate the measured values scanned by lasers with different serial numbers. There are some significant noise points around the plane, indicating that the point clouds obtained by the LiDAR have a large dispersion. Here, we quantify the dispersion of the point cloud by the Standard Deviation (SD) of the Point-to-Plane Distance (P2P-D). Ideally, all points should lie on the plane, i.e., the P2P-D is 0. In practice, this ideal condition is difficult to satisfy, so we estimate the dispersion of the points by solving for the minimum of the SD. The equation for the minimum of the SD is:
$$ \min \left\{ \operatorname{Var}_{q \in Q}\left( \operatorname{dist}(q, p) \right) \,\middle|\, p \in P_k \right\} \tag{5} $$
where k is the scanned plane index, P is the set of parameters of all fitted planes, p is a group of plane parameters within P, Q is the set of tuples of raw measured values returned by all laser beams (i.e., distance, rotation angle, and laser serial number), and q is a tuple of raw measured values returned by one of the laser beams. The function $\operatorname{dist}(q, p)$ calculates the distance from the point determined by the tuple q to the corresponding plane p in the Cartesian coordinate system. Thus, the P2P-D is:
$$ \operatorname{dist}(q, p) = \frac{\left| a_k x_i + b_k y_i + c_k z_i + d_k \right|}{\sqrt{a_k^2 + b_k^2 + c_k^2}} \tag{6} $$
We know that Equation (5) is a monotonic function with respect to Equation (6). Therefore, we can infer that solving for the minimum of SD is equivalent to solving for the minimum of P2P-D. The objective function used for intrinsic calibration can be expressed as:
$$ \min \left\{ \sum_{q \in Q} \operatorname{dist}(q, p) \,\middle|\, p \in P_k \right\} \tag{7} $$
From the working principle of HDL-64E_S3, it is known that the sensor has 64 laser emitters and each laser emitter has 5 internal parameters. Therefore, if there are k scan planes, we need to calibrate 64 × 5 × k internal parameters according to the objective function.
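In vectorized form, Equation (6) amounts to one line; the sketch below (our own helper) evaluates the P2P-D for an N × 3 array of points against one fitted plane:

```python
import numpy as np

def p2p_distance(points, plane):
    """Equation (6): point-to-plane distances for an N x 3 point array."""
    a, b, c, d = plane
    n = np.array([a, b, c])
    return np.abs(points @ n + d) / np.linalg.norm(n)
```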

3.2.3. Nonlinear Optimization

The previous section showed that the objective function forms a system of overdetermined equations. To obtain the optimal solution of this system, we use the LM algorithm for the nonlinear optimization of the objective function. The LM algorithm modifies the Gauss–Newton method by adding a trust region to the increments of the function variables, which limits the range over which the variables change during the iterative process. The LM algorithm treats an increment approximation within the trust region as valid; otherwise, the approximation is considered inaccurate. The LM algorithm effectively avoids the problems of singular and ill-conditioned matrices in the linear system of equations, and, by adjusting the damping coefficient, it combines the advantages of the gradient descent method and the Newton method.
We assume that v is a column vector consisting of the intrinsic parameters. From Equations (1) and (7), the least squares problem can be constructed as:

$$ \arg\min_{v} G(v) = \frac{1}{2} \sum_{m=1}^{N} \left\| g(v) \right\|^{2} \tag{8} $$

where m is the index of the scanned point, N is the total number of points around the scanned planes, $v = (\Delta l, \Delta \theta, \Delta \varphi, h_{osc}, v_{osc})^{T}$, and $g(v)$ is the distance from a scanned point to the plane, that is, the P2P-D.
According to the LM algorithm, the iteration increment $\Delta v$ of the variable v can be expressed as:

$$ \Delta v = -\left( J(v_n)^{T} J(v_n) + \mu I \right)^{-1} J(v_n)^{T} g(v_n) \tag{9} $$

where I is the identity matrix, $g(v_n)$ is the value of $g(v)$ at $v = v_n$, and $J(v_n)$ is the Jacobian matrix obtained by differentiating $g(v)$ with respect to the column vector v at $v = v_n$.
Equation (9) can be further abbreviated as:

$$ \Delta v^{*} = -\left( H + \mu I \right)^{-1} \mathbf{g} \tag{10} $$

where H is the Hessian matrix, $H = J(v_n)^{T} J(v_n)$; $\mathbf{g}$ is the closure error vector, $\mathbf{g} = J(v_n)^{T} g(v_n)$; and $\mu$ is the damping coefficient, which is determined by the gain ratio $\rho$. The mathematical expression for the gain ratio $\rho$ is:
$$ \rho = \frac{G(v) - G(v + \Delta v)}{M(0) - M(\Delta v)} \tag{11} $$
where $M(\Delta v)$ is assumed to describe the behavior of G in the current iteration, and is defined as:

$$ M(\Delta v) \approx G(v) + \left( J^{T} g(v) \right)^{T} \Delta v + \frac{1}{2} (\Delta v)^{T} J^{T} J (\Delta v) \tag{12} $$
The gain ratio $\rho$ measures the similarity between the second-order approximation $M(\Delta v)$ and G. Its numerator and denominator represent the change in the function G and in the approximation M, respectively, for the increment $\Delta v$.
We determine the damping coefficient $\mu$ based on the value of the gain ratio $\rho$. The damping coefficient $\mu$ is updated as a piecewise function:

$$ \mu = \begin{cases} 2\mu, & \rho < 0.25 \\ \mu / 3, & \rho > 0.75 \end{cases} \tag{13} $$
As a result, the iterative increment $\Delta v$ can be obtained from Equations (10) and (13). The optimal solution of the intrinsic parameters can then be calculated from the iteration $v_n + \Delta v$. It is worth mentioning that, like the Gauss–Newton algorithm, the LM algorithm needs initial parameter values; we set the initial values of the LM algorithm to the factory parameters of the LiDAR.
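In practice, Equations (8)-(13) need not be coded by hand: SciPy's least_squares with method='lm' wraps a Levenberg–Marquardt implementation with the damping update built in. A minimal sketch for calibrating the five parameters of a single laser, reusing the laser_point helper sketched earlier (the samples and factory_params variables are placeholders, not the authors' code), is:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(v, samples):
    """Stacked plane-distance residuals g(v); v = (dl, dtheta, dphi, h_osc, v_osc).

    Each sample is (l, theta, phi, (a, b, c, d)): a raw reading paired with
    the coefficients of its fitted plane. The residual is the signed
    point-to-plane distance; squaring in the cost makes it match Eq. (6).
    """
    dl, dth, dph, h_osc, v_osc = v
    res = []
    for l, theta, phi, (a, b, c, d) in samples:
        x, y, z = laser_point(l, theta, phi, dl, dth, dph, h_osc, v_osc)
        res.append((a * x + b * y + c * z + d) / np.sqrt(a * a + b * b + c * c))
    return np.asarray(res)

# Start the iteration from the factory parameters, as the paper does.
v0 = np.asarray(factory_params)   # placeholder: this laser's 5 factory values
fit = least_squares(residuals, v0, args=(samples,), method='lm')
v_calibrated = fit.x              # refined (dl, dtheta, dphi, h_osc, v_osc)
```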

4. Results

4.1. Experimental Scheme

The experimental scheme for the intrinsic calibration of LiDAR consists of three steps: data acquisition, intrinsic calibration, and experimental validation.
(i) Data Acquisition. This step extracts the raw data collected by the lasers and parses it. The raw data include the rotation angle, measured distance, and intensity collected by the laser emitters with corresponding serial numbers. Data parsing is the process of obtaining the Cartesian coordinate representation of the raw data carried in the UDP Ethernet packets. The UDP data transfer format is described in Section 3.1.1 and Section 3.1.2.
(ii) Intrinsic Calibration. We performed a secondary calibration of the internal parameters based on the results of the parsed data. The calibration steps include plane estimation, objective function construction, and nonlinear optimization. The details of the implementation of intrinsic calibration can be found in Section 3.2.
(iii) Experimental Verification. To verify the intrinsic calibration results of the sensors, we used the SD of the P2P-D and the 3sigma criterion (including the σ, 2σ, and 3σ criteria) to judge the effectiveness of the sensor intrinsic calibration. In general, the higher the calibration accuracy, the smaller the SD and the larger the percentages under the 3sigma criterion.
Finally, we compared the differences of the five internal parameters before and after calibration. The experimental scheme is shown in Figure 6.

4.2. Experimental Settings

To sufficiently demonstrate the feasibility of the calibration scheme, we conducted calibration experiments and verification experiments. In the calibration experiments, we determined the optimal solution for the five internal parameters of the LiDAR with the help of the relationship between the planar models and the point cloud obtained by the LiDAR, which yielded the LiDAR calibration file. In the verification experiments, we compared the ranging performance before and after calibration in agricultural scenarios.

4.2.1. Calibration Experiments

We selected the Velodyne HDL-64E_S3 LiDAR with 360° panoramic scanning for the intrinsic calibration. The experimental site was an underground parking lot surrounded by flat walls. The point cloud data of the walls scanned by the laser emitters can be used to estimate the plane models for our calibration algorithm. To ensure accurate calibration of the entire sensor system, we selected four walls that together cover 360° of panoramic information for plane estimation. Figure 7 and Figure 8 show the flat walls and the corresponding point cloud data used for plane model estimation, respectively. Note that the LiDAR is mounted on the mobile platform in a tilted manner. This arrangement ensures that sufficient point cloud data are acquired and also prevents the estimated normals of the plane models from coinciding with the coordinate axes of the sensor itself. If a scanned plane normal were aligned with a sensor coordinate axis, some parameters would drop out of the objective function G, and we could not solve for its optimal result.

4.2.2. Verification Experiments

Furthermore, we performed field experiments to verify the accuracy of the calibration results for ranging in agricultural scenarios. Due to the lack of suitable planes in the agricultural scene, we used two 30 cm × 40 cm planar plates as verification targets. As shown in Figure 9, the LiDAR was placed at a fixed position, and the planar plates, marked with rectangles, were erected in two orientations relative to the sensor. During the experiments, the planar plates were placed at different distances from the LiDAR: 2.5 m, 5 m, 7.5 m, and 10 m. In this way, we can verify the feasibility of the calibration scheme by calculating the standard deviation of the P2P-D and the 3σ criteria based on the plane plates.

4.3. Experimental Results

Figure 10 shows the projection results of the planar point clouds before and after the LiDAR calibration, where the internal parameters before calibration are the factory default values. The x, y, and z axes in the figure represent the position information of the laser scanning points in the Cartesian coordinate system. Here, Figure 10a,b show the point cloud projection results of the scanned wall 2 before and after calibration, respectively. Similarly, Figure 10c,d show the point cloud projection results of the scanned wall 4 before and after calibration, respectively. It is obvious from Figure 10 that the point cloud dispersion after parameter calibration is smaller than that before parameter calibration.

4.3.1. Standard Deviation Verification

According to the principles of numerical statistical analysis, we assume that the planar point clouds acquired by the LiDAR follow a normal distribution. The SD of the P2P-D can be used to judge the dispersion of the point clouds before and after the parameter calibration, as shown in Figure 11. In the figure, the horizontal and vertical axes indicate the laser beam sequence and the SD of the P2P-D, respectively. The blue and orange lines indicate the SD of the P2P-D before and after intrinsic calibration, respectively. We found that the point clouds returned by the sensor before calibration have high dispersion, whereas the dispersion decreased after calibration. Calculating the mean SD over the 64 laser beams shows that the mean SD before calibration was 2.76 cm, with a maximum SD of 5.64 cm, whereas the mean SD after calibration was 1.58 cm, and the SD of most laser beams stayed within 3 cm. Therefore, the mean SD of the laser sensor was reduced by 1.18 cm after calibration.
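For reference, the per-beam statistic plotted in Figure 11 reduces to a few lines of NumPy. A sketch under the assumption that dists holds the P2P-D of every point and beam_ids the serial number of the emitting laser:

```python
import numpy as np

def per_beam_sd(dists, beam_ids, n_beams=64):
    """SD of the P2P-D for each of the 64 laser beams."""
    dists, beam_ids = np.asarray(dists), np.asarray(beam_ids)
    return np.array([dists[beam_ids == i].std() for i in range(n_beams)])
```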

4.3.2. Sigma Criterion Verification

To further quantify the effectiveness of the calibration results, we used the 3sigma criterion (the σ, 2σ, and 3σ criteria) to analyze the distribution of the laser point clouds. The 3sigma criterion is often used to characterize the probability distribution of random variables under the normal distribution. Figure 12, Figure 13 and Figure 14 show the proportion of P2P-D values within a certain range for all points around the fitted plane before and after calibration. The range refers to the σ, 2σ, and 3σ confidence intervals, that is, the proportion of P2P-D values in the ranges $(\mu \pm \sigma)$, $(\mu \pm 2\sigma)$, and $(\mu \pm 3\sigma)$, respectively. In these figures, the horizontal and vertical axes indicate the laser beam sequence and the percentage under the 3sigma criterion, respectively. The blue and orange lines indicate the verification results before and after intrinsic calibration, respectively. From these three figures, it can be seen that the dispersion of the point clouds after calibration was significantly lower than before calibration. Table 1 quantifies the mean percentages of P2P-D for the 64 laser emitters in the three intervals $(\mu \pm \sigma)$, $(\mu \pm 2\sigma)$, and $(\mu \pm 3\sigma)$ before and after calibration, using the data from Figure 12, Figure 13 and Figure 14. Within the σ confidence interval, the mean values for the 64 lasers before and after calibration were 70.81% and 79.87%, respectively. Within the 2σ confidence interval, the mean values before and after calibration were 95.68% and 95.93%, respectively. Within the 3σ confidence interval, the mean values before and after calibration were 98.29% and 99.09%, respectively. As a result, the mean percentages of P2P-D for the 64 lasers improved by 9.06%, 0.25%, and 0.80% after calibration in the σ, 2σ, and 3σ confidence intervals, respectively. Table 1 also shows that the improvement within the 2σ confidence interval was not significant; this is because the sensor already had a high measurement accuracy before calibration.
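The 3sigma percentages reported here reduce to a simple counting operation. A sketch of how such values can be computed from the P2P-D samples of one beam (our own helper, not the authors' code):

```python
import numpy as np

def sigma_percentages(dists):
    """Share of P2P-D values inside mu±sigma, mu±2sigma, mu±3sigma [%]."""
    d = np.asarray(dists)
    mu, sd = d.mean(), d.std()
    return [100.0 * np.mean(np.abs(d - mu) <= k * sd) for k in (1, 2, 3)]
```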

4.3.3. Differences in Calibration Parameters

Based on the calibration results presented in the previous section, we further analyzed the differences in the five intrinsic parameters (measured distance, rotation angle, pitch angle, horizontal distance, and vertical distance) before and after sensor calibration, as shown in Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19. In these figures, the horizontal axis represents the laser beam sequence, and the vertical axis represents the difference between the values before and after calibration. Table 2 shows the mean values of and differences in the five intrinsic parameters for the 64 laser beams before and after calibration. The differences were as follows: the measured distance $\Delta l_i$ decreased by 0.00932 m; the rotation angle $\Delta \theta_i$ decreased by 0.00785°; the vertical angle $\Delta \varphi_i$ increased by 0.00658°; the horizontal distance $h_{osc}$ increased by 0.01579 m; and the vertical distance $v_{osc}$ increased by 0.00469 m.

4.3.4. Verification Experiments in the Agricultural Scene

We verified the feasibility of the intrinsic calibration using the standard deviation of the P2P-D and the 3σ criterion in an agricultural scene, as shown in Table 3. We found that the increase values are generally positive within the σ and 3σ confidence intervals. Nevertheless, there was a notable decrease of 2.33% within the 2σ confidence interval when the distance was 10 m. The reason may be that the LM algorithm was trapped in a local rather than global optimum. At the same time, we found that the standard deviation after calibration decreased, which implies less dispersion of the point cloud on the planar plates. These results demonstrate the effectiveness of the calibration strategy.

5. Discussion and Conclusions

LiDAR offers high measurement accuracy, a large ranging area, and open-source information sensing algorithms, and it is therefore increasingly used on agricultural robots. However, loosening of the internal components of the sensor can reduce its ranging accuracy. According to the surveyed literature, research on improving the measurement accuracy of LiDAR still does not attract sufficient attention; researchers focus more on data processing algorithms based on laser point cloud information. In this paper, we studied the intrinsic calibration of LiDAR from the perspective of the ranging principle of laser sensors. A nonlinear optimization strategy based on static plane models is proposed for the calibration of the five intrinsic parameters (distance, rotation angle, pitch angle, horizontal distance, and vertical distance) of a multi-beam LiDAR.
First, we established the mathematical model by analyzing the working principle of LiDAR with the Velodyne HDL-64E_S3 LiDAR as an example. Then, a nonlinear optimization strategy based on the planar models was used to correct the internal parameters of laser sensors. We concentrated on the three stages (planar model estimation, objective function construction, and nonlinear optimization) of parameter correction. Finally, we demonstrated the effectiveness of the intrinsic calibration by analyzing the standard deviation of the point-to-plane distance and the 3sigma criterion. The experimental results illustrate that:
(1) The dispersion of the laser point clouds after calibration was significantly lower than that before calibration, indicating that the calibrated LiDAR has a higher measurement accuracy.
(2) The maximum standard deviation of the distance from the laser scanning points to the calibration plane before calibration was 5.64 cm, whereas the standard deviation after calibration stayed within the range of 3 cm.
(3) The percentages of points within σ, 2σ, and 3σ confidence intervals to the total number of points increased by 9.06%, 0.25%, and 0.80%, respectively.
The intrinsic calibration of the multi-beam LiDAR can solve the problem of measurement accuracy degradation caused by vibration of the internal components of LiDAR sensors. We are optimistic that our work can improve the detection accuracy of agricultural robots in applications such as path planning, obstacle avoidance, target recognition, and phenotype observation. It can also provide inspiration for researchers to develop agricultural intelligent equipment with higher accuracy, a wider application range, and improved robustness.

Author Contributions

Conceptualization, N.S., Q.Q., and C.Z.; methodology, N.S., Q.Q., and C.Z.; software, N.S. and Z.F.; validation, N.S., Q.Q., and C.Z.; formal analysis, Q.Q.; investigation, N.S., Q.Q., and C.Z.; resources, Q.Q., C.J., Q.F., and C.Z.; writing—original draft preparation, N.S.; writing—review and editing, N.S., Z.F., Q.Q., T.L., C.J., Q.F., and C.Z.; visualization, N.S.; supervision, Q.Q. and C.Z.; project administration, Q.Q., C.J., Q.F., and C.Z.; funding acquisition, Q.Q., C.J., Q.F., and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (grant number 2019YFE0125200), the Science and Technology Cooperation Project of the Xinjiang Production and Construction Corps (grant number 2022BC007), and the National Natural Science Foundation of China (grant number 61973040).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dang, X.; Rong, Z.; Liang, X. Sensor Fusion-Based Approach to Eliminating Moving Objects for SLAM in Dynamic Environments. Sensors 2021, 21, 230.
  2. He, G.; Yuan, X.; Zhuang, Y.; Hu, H. An Integrated GNSS/LiDAR-SLAM Pose Estimation Framework for Large-Scale Map Building in Partially GNSS-Denied Environments. IEEE Trans. Instrum. Meas. 2021, 70, 1–9.
  3. Huang, J.-K.; Feng, C.; Achar, M.; Ghaffari, M.; Grizzle, J.W. Global Unifying Intrinsic Calibration for Spinning and Solid-State LiDARs. arXiv 2020, arXiv:2012.03321.
  4. Yuan, C.; Liu, X.; Hong, X.; Zhang, F. Pixel-Level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments. IEEE Robot. Autom. Lett. 2021, 6, 7517–7524.
  5. Muñoz-Bañón, M.Á.; Candelas, F.A.; Torres, F. Targetless Camera-LiDAR Calibration in Unstructured Environments. IEEE Access 2020, 8, 143692–143705.
  6. Mishra, S.; Osteen, P.R.; Pandey, G.; Saripalli, S. Experimental Evaluation of 3D-LIDAR Camera Extrinsic Calibration. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 9020–9026.
  7. Oth, L.; Furgale, P.; Kneip, L.; Siegwart, R. Rolling Shutter Camera Calibration. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1360–1367.
  8. Wang, Q.; Fu, L.; Liu, Z. Review on camera calibration. In Proceedings of the 2010 Chinese Control and Decision Conference, Xuzhou, China, 26–28 May 2010; pp. 3354–3358.
  9. Pan, Y.; Han, Y.; Wang, L.; Chen, J.; Meng, H.; Wang, G.; Zhang, Z.; Wang, S. 3D Reconstruction of Ground Crops Based on Airborne LiDAR Technology. IFAC-PapersOnLine 2019, 52, 35–40.
  10. Liu, G.; Si, Y.; Feng, J. 3D reconstruction of agriculture and forestry crops. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2014, 45, 38–46+19.
  11. Qiu, Q.; Sun, N.; Bai, H.; Wang, N.; Fan, Z.; Wang, Y.; Meng, Z.; Li, B.; Cong, Y. Field-Based High-Throughput Phenotyping for Maize Plant Using 3D LiDAR Point Cloud Generated With a "Phenomobile". Front. Plant Sci. 2019, 10, 554.
  12. Gené-Mola, J.; Gregorio, E.; Auat Cheein, F.; Guevara, J.; Llorens, J.; Sanz-Cortiella, R.; Escolà, A.; Rosell-Polo, J.R. Fruit detection, yield prediction and canopy geometric characterization using LiDAR with forced air flow. Comput. Electron. Agric. 2020, 168, 105121.
  13. Jin, S.; Sun, X.; Wu, F.; Su, Y.; Li, Y.; Song, S.; Xu, K.; Ma, Q.; Baret, F.; Jiang, D.; et al. Lidar sheds new light on plant phenomics for plant breeding and management: Recent advances and future prospects. ISPRS J. Photogramm. Remote Sens. 2021, 171, 202–223.
  14. Iqbal, J.; Xu, R.; Sun, S.P.; Li, C.Y. Simulation of an Autonomous Mobile Robot for LiDAR-Based In-Field Phenotyping and Navigation. Robotics 2020, 9, 19.
  15. Aguiar, A.S.; dos Santos, F.N.; Sobreira, H.; Boaventura-Cunha, J.; Sousa, A.J. Localization and Mapping on Agriculture Based on Point-Feature Extraction and Semiplanes Segmentation From 3D LiDAR Data. Front. Robot. AI 2022, 9, 14.
  16. Choudhary, A.; Kobayashi, Y.; Arjonilla, F.J.; Nagasaka, S.; Koike, M. Evaluation of mapping and path planning for non-holonomic mobile robot navigation in narrow pathway for agricultural application. In Proceedings of the IEEE/SICE International Symposium on System Integration (SII), Iwaki, Fukushima, Japan, 11–14 January 2021; pp. 17–22.
  17. Emmi, L.; Le Flecher, E.; Cadenat, V.; Devy, M. A hybrid representation of the environment to improve autonomous navigation of mobile robots in agriculture. Precis. Agric. 2021, 22, 524–549.
  18. Koenig, K.; Hofle, B.; Hammerle, M.; Jarmer, T.; Siegmann, B.; Lilienthal, H. Comparative classification analysis of post-harvest growth detection from terrestrial LiDAR point clouds in precision agriculture. ISPRS J. Photogramm. Remote Sens. 2015, 104, 112–125.
  19. Kragh, M.; Jorgensen, R.N.; Pedersen, H. Object Detection and Terrain Classification in Agricultural Fields Using 3D Lidar Data. In Proceedings of the 10th International Conference on Computer Vision Systems (ICVS), Copenhagen, Denmark, 6–9 July 2015; pp. 188–197.
  20. Kragh, M.; Underwood, J. Multimodal obstacle detection in unstructured environments with conditional random fields. J. Field Robot. 2020, 37, 53–72.
  21. Jayakumari, R.; Nidamanuri, R.R.; Ramiya, A.M. Object-level classification of vegetable crops in 3D LiDAR point cloud using deep learning convolutional neural networks. Precis. Agric. 2021, 22, 1617–1633.
  22. Su, Y.; Wu, F.; Ao, Z.; Jin, S.; Qin, F.; Liu, B.; Pang, S.; Liu, L.; Guo, Q. Evaluating maize phenotype dynamics under drought stress using terrestrial lidar. Plant Methods 2019, 15, 11.
  23. Wu, S.; Wen, W.; Xiao, B.; Guo, X.; Du, J.; Wang, C.; Wang, Y. An Accurate Skeleton Extraction Approach from 3D Point Clouds of Maize Plants. Front. Plant Sci. 2019, 10, 248.
  24. Wang, K.; Zhou, J.; Zhang, W.; Zhang, B. Mobile LiDAR Scanning System Combined with Canopy Morphology Extracting Methods for Tree Crown Parameters Evaluation in Orchards. Sensors 2021, 21, 339.
  25. Zhou, M.; Jiang, H.; Bing, Z.; Su, H.; Knoll, A. Design and evaluation of the target spray platform. Int. J. Adv. Robot. Syst. 2021, 18, 1729881421996146.
  26. Yuwen, X.; Chen, L.; Yan, F.; Zhang, H.; Tang, J.; Tian, B.; Ai, Y. Improved Vehicle LiDAR Calibration With Trajectory-Based Hand-Eye Method. IEEE Trans. Intell. Transp. Syst. 2022, 23, 215–224.
  27. Zhao, C.; Zhang, Y.; Du, J.; Guo, X.; Wen, W.; Gu, S.; Wang, J.; Fan, J. Crop Phenomics: Current Status and Perspectives. Front. Plant Sci. 2019, 10, 714.
  28. García-Gómez, P.; Royo, S.; Rodrigo, N.; Casas, J.R. Geometric Model and Calibration Method for a Solid-State LiDAR. Sensors 2020, 20, 2898.
  29. Zalud, L.; Kocmanova, P.; Burian, F.; Jilek, T.; Kalvoda, P.; Kopecny, L. Calibration and Evaluation of Parameters in A 3D Proximity Rotating Scanner. Elektron. Elektrotechnika 2015, 21, 3–12.
  30. Zeng, Y.; Yu, H.; Dai, H.; Song, S.; Lin, M.; Sun, B.; Jiang, W.; Meng, M. An improved calibration method for a rotating 2D LIDAR system. Sensors 2018, 18, 497.
  31. Chen, C.-Y.; Chien, J.; Huang, P.-S.; Hong, W.-B.; Chen, C.-F. Intrinsic parameters calibration for multi-beam LiDAR using the Levenberg-Marquardt algorithm. In Proceedings of the 27th Conference on Image and Vision Computing New Zealand, Dunedin, New Zealand, 26–28 November 2012; pp. 19–24.
  32. Chan, T.O.; Lichti, D.D. Automatic In Situ Calibration of a Spinning Beam LiDAR System in Static and Kinematic Modes. Remote Sens. 2015, 7, 10480–10500.
  33. Levinson, J.; Thrun, S. Unsupervised Calibration for Multi-beam Lasers. In Experimental Robotics: The 12th International Symposium on Experimental Robotics; Khatib, O., Kumar, V., Sukhatme, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 179–193.
  34. Guevara, J.; Auat Cheein, F.A.; Gené-Mola, J.; Rosell-Polo, J.R.; Gregorio, E. Analyzing and overcoming the effects of GNSS error on LiDAR based orchard parameters estimation. Comput. Electron. Agric. 2020, 170, 105255.
  35. Elnashef, B.; Filin, S.; Lati, R.N. Tensor-based classification and segmentation of three-dimensional point clouds for organ-level plant phenotyping and growth analysis. Comput. Electron. Agric. 2019, 156, 51–61.
  36. Atanacio-Jiménez, G.; González-Barbosa, J.-J.; Hurtado-Ramos, J.B.; Ornelas-Rodríguez, F.J.; Jiménez-Hernández, H.; García-Ramirez, T.; González-Barbosa, R. LIDAR Velodyne HDL-64E Calibration Using Pattern Planes. Int. J. Adv. Robot. Syst. 2011, 8, 59.
  37. Bergelt, R.; Khan, O.; Hardt, W. Improving the intrinsic calibration of a Velodyne LiDAR sensor. In Proceedings of the 2017 IEEE SENSORS, Glasgow, UK, 29 October–1 November 2017; pp. 1–3.
  38. Marquardt, D.W. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441.
Figure 1. Examples of LiDAR applications in agricultural environments: (a) shrimp robot platform [20]; (b) AgRob V16 platform [15]; (c) tractor platform [19]; (d) acquisition of orchard canopy parameters [34]; (e) apple identification and yield estimation [12]; (f) plant organ segmentation [35].
Figure 2. The multi-beam LiDAR: (a) Velodyne HDL-64E_S3; (b) distribution of the 64 laser emitters.
Figure 3. The data parsing format of the LiDAR.
Figure 4. Mathematical model of HDL-64E_S3: (a) side view; (b) top view.
Figure 5. The noise points around the plane model.
Figure 6. The experimental scheme.
Figure 7. The experimental environment: (a) the flat walls scanned by the LiDAR; (b) the pose of the LiDAR mounted on our mobile platform.
Figure 8. The point cloud of the experimental scene.
Figure 9. Verification experiment in the agricultural scene.
Figure 10. The projection results of the planar point clouds before and after calibration: (a) point cloud projection of wall 2 before calibration; (b) point cloud projection of wall 2 after calibration; (c) point cloud projection of wall 4 before calibration; (d) point cloud projection of wall 4 after calibration.
Figure 11. The SD of the P2P-D before and after calibration.
Figure 12. The percentage of points within the σ confidence interval of the distance from the points to the calibration plane.
Figure 13. The percentage of points within the 2σ confidence interval of the distance from the points to the calibration plane.
Figure 14. The percentage of points within the 3σ confidence interval of the distance from the points to the calibration plane.
Figure 15. Differences in the measured distance $\Delta l_i$.
Figure 16. Differences in the rotation angle $\Delta \theta_i$.
Figure 17. Differences in the vertical angle $\Delta \varphi_i$.
Figure 18. Differences in the horizontal distance $h_{osc\_i}$.
Figure 19. Differences in the vertical distance $v_{osc\_i}$.
Table 1. The mean percentages of P2P-D for the 64 laser beams according to the 3σ criterion.

Mean Percentage (%)     μ ± σ    μ ± 2σ    μ ± 3σ
Before calibration      70.81    95.68     98.29
After calibration       79.87    95.93     99.09
Increase values         9.06     0.25      0.80
Table 2. The mean values and differences in the five intrinsic parameters for the 64 laser beams.

Mean Values           Measured Distance   Rotation Angle   Vertical Angle   Horizontal Distance   Vertical Distance
                      Δl_i (m)            Δθ_i (°)         Δφ_i (°)         h_osc_i (m)           v_osc_i (m)
Before calibration    1.44342             0.01089          −0.17410         0.00000               0.18180
After calibration     1.43410             0.00304          −0.16752         0.01579               0.18649
Differences           −0.00932            −0.00785         0.00658          0.01579               0.00469
Table 3. The standard deviation of P2P-D and the percentages within the σ, 2σ, and 3σ confidence intervals at different distances.

Distance                        2.5 m      5 m        7.5 m      10 m
Standard deviation σ (m)
  Before calibration            0.0401     0.0572     0.0433     0.0427
  After calibration             0.0275     0.0427     0.0193     0.0244
  Increase values (m)           −0.0126    −0.0145    −0.0240    −0.0183
μ ± σ (%)
  Before calibration            71.12      72.12      73.91      74.24
  After calibration             75.84      78.84      76.95      75.64
  Increase values (%)           4.72       6.72       3.04       1.40
μ ± 2σ (%)
  Before calibration            93.58      95.54      92.63      95.42
  After calibration             97.41      95.49      93.90      93.09
  Increase values (%)           3.83       −0.05      1.27       −2.33
μ ± 3σ (%)
  Before calibration            97.95      98.79      96.83      97.36
  After calibration             97.65      99.52      96.76      98.25
  Increase values (%)           −0.30      0.73       −0.07      0.89
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
