Figure 1.
Flowchart of the construction and evaluation of the 2D lidar-based perception system.
Figure 2.
System overview. (a) System hardware overview. 1. Servo motors (for driving wheels). 2. Lower computer. 3. Battery (48 V). 4. Voltage conversion modules. 5. DC motor driver. 6. Inertial Measurement Unit (IMU). 7. DC motor. 8. Servo controller (built-in magnetic encoder). 9. Two-dimensional lidar. 10. Industrial Personal Computer (IPC). All sensors and mechanical structures in this study follow a right-handed Cartesian coordinate system. Or-xyz represents the robot reference coordinate system, where the x-axis points in the direction of the robot’s forward motion, the y-axis points to the left of the robot’s forward direction, the z-axis points directly upwards from the robot, and the origin is the projection of the robot’s geometric center onto the ground (the x-Or-y plane). Ol-xyz is the reference coordinate system of the 2D point clouds; its origin is the center of the laser emitter and its x-axis aligns with the optical axis of the 2D lidar. O3d-xyz denotes the reference coordinate system of the constructed 3D point clouds; its origin coincides with the origin of Ol-xyz, and its axes align with those of Or-xyz and Oi-xyz (the IMU coordinate system). (b) System data flow.
Figure 3.
Schematic diagram of roll motion parameters and modern orchard structural parameters. (a) Modern orchard inter-row structural parameters. (b) Geometric relationship between the roll angular range and fruit tree size parameters.
Figure 4.
Schematic diagram of the maximum coverage range of the 3D point cloud and the maximum area of the tree row that can be covered under static and moving states. (a) The maximum area covered by the point cloud generated by the system in a stationary state during one roll cycle, as well as the maximum area within the point cloud that can include the trunk area of fruit trees in the current tree row. (b) The maximum area covered by the point cloud generated by the system in a moving state during one roll cycle, as well as the maximum area within the point cloud that can include the trunk area of fruit trees in the current tree row.
Figure 5.
Maximum coverage area of two consecutive 3D point clouds generated during two consecutive roll cycles when the robot moves at a constant speed of 1 m/s in a straight line, and the lidar rolls at a minimum speed of 4.863°/s between −30° and 30°, including the region that fully encompasses the main trunks of the fruit trees within the row.
Figure 6.
Schematic diagram of converting the 2D point cloud to the 3D point cloud. (a) The 2D lidar coordinate system and the 2D scanning range. (b) Schematic diagram of converting the 2D point cloud from a polar to a Cartesian coordinate system. (c) Initial lidar coordinate system before rotation, aligning with the 3D point cloud reference coordinate system (O3d-xyz). (d) The 2D point cloud data in O2-xyz after clockwise rotation of O-xyz around the x-axis by θ2. (e) The 2D point cloud data in O1-xyz after anticlockwise rotation of O-xyz around the x-axis by θ1. (f) Registration of the 2D point clouds from O1-xyz and O2-xyz with the T1 and T2 coordinate transformations to generate 3D point cloud data in O3d-xyz.
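As a minimal sketch of the conversion described in Figure 6 (not the authors’ implementation), a single 2D scan taken at a known roll angle can be mapped into the 3D frame O3d-xyz by a polar-to-Cartesian conversion in the scan plane followed by a rotation about the x-axis; array names, units, and the sign convention below are assumptions.

```python
import numpy as np

def scan_to_3d(ranges, bearings, roll_angle_rad):
    """Convert one 2D scan, taken at a known roll angle, into 3D points.

    ranges:         (N,) distances returned by the 2D lidar [m]
    bearings:       (N,) beam angles within the scan plane [rad]
    roll_angle_rad: roll angle of the scan plane about the x-axis [rad]
    Returns (N, 3) points expressed in the 3D point cloud frame O3d-xyz.
    """
    # Polar -> Cartesian inside the scan plane (z = 0 in the lidar frame Ol-xyz).
    x = ranges * np.cos(bearings)
    y = ranges * np.sin(bearings)
    pts_2d = np.stack([x, y, np.zeros_like(x)], axis=1)

    # Rotation about the x-axis by the roll angle (anticlockwise positive).
    c, s = np.cos(roll_angle_rad), np.sin(roll_angle_rad)
    R_x = np.array([[1, 0, 0],
                    [0, c, -s],
                    [0, s,  c]])
    return pts_2d @ R_x.T

# Accumulating scans over one roll cycle yields the 3D point cloud, e.g.:
# cloud = np.vstack([scan_to_3d(r, b, theta) for r, b, theta in scans])
```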
Figure 7.
Flowchart of time synchronization between lidar and encoder.
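The synchronization flowchart itself is not reproduced here. Assuming the lidar and encoder streams are timestamped against a common clock, a minimal sketch of associating a roll angle with each scan is linear interpolation between the two nearest encoder samples; the function and argument names are illustrative, not the paper’s API.

```python
import numpy as np

def roll_angle_at(scan_timestamps, enc_timestamps, enc_angles):
    """Estimate the roll angle at each lidar timestamp by linear interpolation
    between the nearest encoder samples.

    scan_timestamps: (M,) lidar timestamps [s], on the same clock as the encoder
    enc_timestamps:  (K,) encoder timestamps [s], monotonically increasing
    enc_angles:      (K,) encoder roll angles [rad]
    """
    return np.interp(scan_timestamps, enc_timestamps, enc_angles)
```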
Figure 8.
Pipeline of motion distortion compensation of the 3D point cloud using wheel odometry and IMU.
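A hedged sketch of the de-skewing idea behind such a pipeline, not the paper’s exact formulation: if a sensor pose (interpolated from wheel odometry and the IMU) is available at each point’s capture time, every point can be re-expressed in one common reference pose of the frame. The array names and the choice of reference pose are assumptions.

```python
import numpy as np

def deskew(points, point_poses, ref_pose):
    """Remove motion distortion from one 3D frame.

    points:      (N, 3) points, each expressed in the sensor frame at its own
                 capture time (the raw, distorted cloud).
    point_poses: (N, 4, 4) sensor pose in the odometry frame at each point's
                 capture time (interpolated from wheel odometry and IMU).
    ref_pose:    (4, 4) sensor pose chosen as the reference for the whole frame
                 (e.g., the pose at the end of the roll cycle).
    Returns (N, 3) points expressed consistently in the reference pose.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])     # homogeneous coords
    world = np.einsum('nij,nj->ni', point_poses, pts_h)        # sensor -> odometry frame
    corrected = (np.linalg.inv(ref_pose) @ world.T).T          # odometry -> reference pose
    return corrected[:, :3]
```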
Figure 9.
Schematic diagram of 6-DOF pose estimation based on IMU and wheel odometry. (a) Principle of wheel odometry based on the differential-drive model and schematic of the additional translation [∆x, ∆y, 0]T between O3d′-x′y′z′ and O3d″-x″y″z″ caused by the spatial offset x3d2r between Or-xyz and O3d-xyz in the x-O-y plane. (b) Schematic of the displacement along the z-axis calculated by combining wheel odometry and the IMU.
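A minimal sketch of a planar differential-drive odometry update with an IMU-based out-of-plane (z) correction, under the usual small-time-step assumption; the paper’s exact 6-DOF formulation and the lever-arm term for the offset x3d2r are not reproduced here, and all names and the z-update approximation are illustrative.

```python
import numpy as np

def odom_step(x, y, z, yaw, v_left, v_right, wheel_base, dt, pitch=0.0):
    """One differential-drive odometry step with a simple z update from pitch.

    v_left, v_right: wheel linear speeds [m/s]
    wheel_base:      distance between the driving wheels [m]
    pitch:           IMU pitch angle [rad], positive nose-up
    """
    v = 0.5 * (v_right + v_left)              # forward speed of the robot
    omega = (v_right - v_left) / wheel_base   # yaw rate
    ds = v * dt                               # distance travelled this step

    x += ds * np.cos(yaw)                     # planar translation
    y += ds * np.sin(yaw)
    yaw += omega * dt                         # heading update
    z += ds * np.sin(pitch)                   # out-of-plane displacement from IMU pitch
    return x, y, z, yaw
```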
Figure 10.
Navigation line extraction pipeline.
Figure 11.
Schematic diagram of point cloud filtering and point set division. The red dashed rectangle is the passthrough filter area for vehicle point removal, where WR and LR are the width and length of the vehicle, respectively. The green rectangles L and R are the regions for left and right tree row extraction, where Wtr is the width of the tree row. The black dashed line lo is the mid-line detected in the previous point cloud frame. The red line lp is the mid-line predicted from lo and the robot’s motion state.
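A hedged sketch of the filtering and division step, assuming a robot-centered frame (x forward, y left) and extraction regions centered on nominal row positions; in the paper the regions follow the predicted mid-line lp, which is omitted here, and all parameter and function names are illustrative.

```python
import numpy as np

def split_rows(points, W_R, L_R, W_tr, row_spacing):
    """Remove vehicle points with a box (passthrough) filter, then split the
    remaining points into left/right tree-row candidate sets.

    points:      (N, 3) cloud in a robot-centered frame (x forward, y left)
    W_R, L_R:    vehicle width and length [m]
    W_tr:        tree row width [m]
    row_spacing: nominal distance between the left and right rows [m]
    """
    x, y = points[:, 0], points[:, 1]

    # Passthrough filter: drop points that fall on the vehicle itself.
    on_vehicle = (np.abs(x) <= L_R / 2) & (np.abs(y) <= W_R / 2)
    pts = points[~on_vehicle]
    y = pts[:, 1]

    # Left / right extraction regions of width W_tr around the nominal rows.
    half = row_spacing / 2
    left = pts[np.abs(y - half) <= W_tr / 2]
    right = pts[np.abs(y + half) <= W_tr / 2]
    return left, right
```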
Figure 12.
Trunk point extraction and navigation line detection. (a) Point cloud local geometric feature extraction, where (v0, v1, v2) are the first three eigenvectors of the local point cluster. (b) Navigation line fitting based on RANSAC. Red points represent the outliers, while yellow points are the inliers. Ll, Lm, and Lr represent the left tree row line, the tree row midline, and the right tree row line, respectively.
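A minimal sketch of the two ingredients named in Figure 12, with assumed thresholds and names: eigen-decomposition of a local cluster’s covariance to test whether it is trunk-like (elongated and roughly vertical), and a simple RANSAC line fit on the retained points projected onto the ground plane.

```python
import numpy as np

def local_eigenvectors(cluster):
    """Eigen-decomposition of a local point cluster's covariance.
    Returns eigenvalues (descending) and eigenvectors (v0, v1, v2) as columns."""
    vals, vecs = np.linalg.eigh(np.cov(cluster.T))
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

def looks_like_trunk(cluster, min_linearity=0.7, min_vertical=0.8):
    """Heuristic trunk test (illustrative thresholds): the cluster should be
    elongated and its dominant direction v0 roughly aligned with the z-axis."""
    vals, vecs = local_eigenvectors(cluster)
    linearity = (vals[0] - vals[1]) / (vals[0] + 1e-9)
    verticality = abs(vecs[2, 0])             # |z component| of v0
    return linearity > min_linearity and verticality > min_vertical

def ransac_line_2d(xy, n_iter=200, tol=0.05, rng=np.random.default_rng(0)):
    """Fit a 2D line to trunk points projected onto the ground plane;
    returns a boolean inlier mask."""
    best_inliers = np.zeros(len(xy), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(xy), size=2, replace=False)
        d = xy[j] - xy[i]
        n = np.array([-d[1], d[0]])           # normal of the candidate line
        n /= np.linalg.norm(n) + 1e-12
        dist = np.abs((xy - xy[i]) @ n)       # point-to-line distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```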
Figure 13.
Comparison results before and after internal parameter calibration. (a) The 3D point cloud of the laboratory generated in the first and the second half of the revolution before calibration. 1. The enlarged view of the uncalibrated 3D point cloud of the screen and the desk. 2. The enlarged view of the uncalibrated 3D point cloud of the corner of the walls. (b) The 3D point cloud of the laboratory generated in the first and the second half of the revolution after calibration. 1. The enlarged view of the calibrated 3D point cloud of the screen and the desk. 2. The enlarged view of the calibrated 3D point cloud of the corner of the walls.
Figure 14.
Outdoor test scenario. (a) Experimental scenario where the robot is located at the centerline of a tree row, where points a and b are near the head and the end of the tree row. (b) Experimental scenario where the robot is close to the left tree row, where points a’ and b’ are at the head and the end of the tree row. (c) Point a’ is the projection of the rolling lidar coordinate system origin onto the ground plane (the x-O-y plane).
Figure 15.
The 3D point cloud of the orchard, indicating varying heights through color, collected at observation points a and a’ under the static state. (a) The top view of both the left and right tree row point cloud at a. (b) The enlarged view of the left tree row point cloud at a. (c) The enlarged view of the right tree row point cloud at a. (d) The top view of both the left and right tree row point cloud at a’. (e) The enlarged view of the left tree row point cloud at a’. (f) The enlarged view of the right tree row point cloud at a’.
Figure 16.
Results of the 3D point cloud motion distortion compensation, where colors represent variations in height. (a) The 3D point cloud of the left tree row with distortion caused by a motion speed of 0.5 m/s. (b) The 3D point cloud of the left tree row after motion distortion compensation at 0.5 m/s. (c) The 3D point cloud at the end of the tree row with distortion caused by a U-turn operation. (d) The next frame after the point cloud in (c), with distortion caused by the U-turn operation. (e) The 3D point cloud at the end of the tree row after motion distortion compensation during the U-turn operation. (f) The next frame after the point cloud in (e), after motion distortion compensation.
Figure 17.
Filtering results of tree row point clouds. (a) Original 3D point clouds, indicating varying heights through color. (b) Three-dimensional point clouds after ground removal, indicating varying heights through color. (c) Extraction and distinction of point clouds from the left and right adjacent tree rows for the robot. The red point cloud within the left white dashed box represents the left tree row, while the blue point cloud within the right white dashed box represents the right tree row. The yellow point cloud represents point clouds outside the current tree row. (d) Segmentation result of point clouds from the left and right tree rows when the robot is at a certain point on the centerline within the row. (e) Segmentation result of point clouds from the left and right tree rows during the robot rotation. (f) Segmentation result of point clouds from the left and right tree rows when the robot stops alongside the left tree row after completing its rotation.
Figure 18.
Results of point cloud segmentation for fruit tree trunks, where colors represent variations in height. (a) Unprocessed point clouds of the left and right tree rows, where the point clouds in the white dashed region are residual ground points, and the rectangular-shaped point clouds in the right tree row represent display boards in the orchard. (b) Tree row point clouds after segmenting the point clouds in (a) using local geometric features. (c) Inliers extracted by RANSAC from the point clouds in (b).
Figure 19.
Tree-row-detection results: (a) Results of directly applying RANSAC for tree row detection to the left and right tree row point clouds. (b) Results of tree row detection using a combination of local geometric feature constraints and RANSAC. (c) Inliers extracted directly from the tree row point clouds using RANSAC, where the white point cloud represents inliers, the green point cloud represents outliers, and the point clouds within the white boxes are those that are actually side branches and leaf points but have been incorrectly detected as inliers; the point clouds within the red boxes are those that are actually trunk points but have not been recognized as inliers. (d) Inliers extracted using a combination of local geometric feature constraints and RANSAC, where the white point cloud represents inliers, the green point cloud represents outliers, and the point clouds within the red boxes are trunk points that were not detected in (c) but are now correctly identified as inliers.
Figure 20.
Results of detected navigation line accuracy verification. (a) Navigation line angle error at a moving speed of 0.2 m/s. (b) Navigation line distance error at a moving speed of 0.2 m/s. (c) Navigation line angle error at a moving speed of 0.5 m/s. (d) Navigation line distance error at a moving speed of 0.5 m/s.
Figure 21.
Navigation-line-detection results under four different combinations of vertical FoV and vertical resolution. (a) Angular errors of the extracted navigation lines under four different combinations. (b) Distance errors of the extracted navigation lines under four different combinations. (c) Extracted navigation lines and tree trunk feature points under four different combinations.
Figure 22.
Results of navigation line detection in a 12 m row width scenario. (a) Angular error of the navigation line. (b) Distance error of the navigation line. (c) Navigation line detection and tree trunk feature point extraction results.
Figure 23.
The scene of dense lateral branches and leaves in summer.
Table 1.
The main technical parameters of the sensors used in the rolling 2D lidar system.
| Sensors | Technical Parameters | Values |
|---|---|---|
| IMU (Yahboom Technology Co., Ltd., Shenzhen, China) | Data frequency/(Hz) | 200 |
| SM100 servo motor (Feetech RC Model Co., Ltd., Shenzhen, China) | Resolution/(°) | 0.08 |
| | Rated torque/(Nm) | 12 |
| | Data frequency/(Hz) | 180 |
| Lakibeam1 2D lidar (Richbeam Co., Ltd., Beijing, China) | Horizontal angular resolution/(°) | 0.25 |
| | Horizontal field of view/(°) | 270 |
| | Detection range/(m @70% reflectivity) | 25 |
| | Scanning frequency/(Hz) | 30 |
| | Sampling frequency/(kHz) | 43.2 |
Table 2.
The main technical parameters and prices of common commercially available 3D lidars and the rolling 2D lidar system.
| Lidar Types | Resolution (H × V) | FoV (H × V) | Price | Accuracy | Range | Scanning Rate |
|---|---|---|---|---|---|---|
| LeiShen C16 [30] | 0.18° × 2° | 360° × 30° | $1865 | ±0.01 m | 150 m | 5, 10, 20 Hz |
| LeiShen C32 [31] | 0.18° × 1° | 360° × 30° | $2215 | ±0.01 m | 150 m | 5, 10, 20 Hz |
| OS1-128 [32] | 0.18° × 0.35° | 360° × 45° | $16,575 | ±0.005~0.03 m | 200 m | 5, 10, 20 Hz |
| Rolling 2D lidar | 0.25° × 0.25° | 270° × 60° | $235 | ±0.02 m | 25 m | 0.125 Hz |
Table 3.
The average angular error and average distance error of the extracted navigation lines under four different combinations of vertical FoV and vertical resolution.
| Vertical FoV (°)/Vertical Resolution (°) | Average Angle Error (°) | Average Distance Error (m) |
|---|---|---|
| 30/2.0 | 0.814 | 0.065 |
| 60/2.0 | 0.681 | 0.059 |
| 30/0.25 | 0.198 | 0.018 |
| 60/0.25 | 0.272 | 0.023 |