#### *3.4. Results of Centerline Extraction*

As shown in Figure 12, the green lines are the extracted centerlines along which the orchard machinery travels between the fruit tree rows. The combination of deep learning and least-squares fitting yields a substantial improvement in efficiency and accuracy compared with traditional methods.

**Figure 12.** Calculated centerlines of fruit tree rows in the orchard.

To evaluate the accuracy of the centerline generation, a benchmark line is selected manually for each image, and the difference between the algorithm-generated centerline and this best navigation line is analyzed. Table 4 shows the fitting results of the centerlines in the fruit tree rows: 27 of the 30 test images yield properly extracted centerlines, corresponding to an extraction accuracy of 90.00%.

**Table 4.** Fitting results of the centerlines in fruit tree rows.


Han et al. proposed a U-Net-based approach for visual navigation path recognition in orchards [22]. Table 5 compares the maximum and mean pixel errors of the fruit tree row centerlines calculated by U-Net and by DL\_LS. Under weak light, the maximum pixel error is 19 pixels for U-Net and 8 pixels for DL\_LS, and the mean pixel error is 11.8 pixels for U-Net and 5.2 pixels for DL\_LS. Under normal light, the maximum pixel error is 10 pixels for U-Net and 5 pixels for DL\_LS, and the mean pixel error is 6.5 pixels for U-Net and 3.4 pixels for DL\_LS. Under strong light, the maximum pixel error is 7 pixels for U-Net and 4 pixels for DL\_LS, and the mean pixel error is 2.9 pixels for U-Net and 2.1 pixels for DL\_LS. From Table 5, we can conclude that DL\_LS extracts centerlines more accurately than U-Net under all three illumination conditions.
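The exact computation of these pixel errors is not restated here. As a rough illustration, the sketch below measures the maximum and mean horizontal deviation between a generated centerline and a manually selected benchmark line, assuming both are straight lines parameterized as x = k*y + b in image coordinates and sampled at every image row; the parameterization, image height, and function name are assumptions for illustration, not the evaluation code used in this study.

```python
import numpy as np

def centerline_pixel_error(gen_line, ref_line, img_height=480):
    """Horizontal pixel error between an algorithm-generated centerline and a
    manually selected benchmark line, both given as (k, b) with x = k*y + b."""
    k_g, b_g = gen_line
    k_r, b_r = ref_line
    ys = np.arange(img_height)            # sample every image row
    err = np.abs((k_g * ys + b_g) - (k_r * ys + b_r))
    return err.max(), err.mean()          # maximum and mean error in pixels

# Example with made-up line parameters
max_err, mean_err = centerline_pixel_error((0.02, 315.0), (0.015, 318.0))
print(f"max: {max_err:.1f} px, mean: {mean_err:.1f} px")
```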

**Table 5.** Comparison of maximum and mean centerline pixel errors of different methods.


#### *3.5. Discussion*

Although our method can extract the centerline of two adjacent orchard tree rows with high accuracy, it still has some limitations. First, some of the trunks detected by the deep learning algorithm are side views or only parts of whole trunks, which introduces pixel error when determining the reference points; the centerline extraction accuracy could therefore be improved with a smarter reference point selection strategy. Second, fruit tree trunks from other rows may also be captured in the images, so the extracted feature points are distributed in a zigzag pattern, which degrades the accuracy of the generated fruit tree row centerlines. A reference or feature point selection or filtering strategy should therefore be developed to improve our algorithm; one possible form of such a strategy is sketched below.
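The following is a hedged sketch only, not part of the current method: fit a row line once, discard reference points whose horizontal residual exceeds a threshold (these are likely trunks leaking in from a neighboring row), and then refit with the remaining points. The threshold value and function name are illustrative assumptions.

```python
import numpy as np

def refit_without_outliers(points, threshold_px=20.0):
    """points: Nx2 array of (x, y) trunk reference points belonging to one row.
    Fits x = k*y + b, drops points whose horizontal residual exceeds
    threshold_px (likely trunks from another row), then refits."""
    def lsq(p):
        A = np.column_stack([p[:, 1], np.ones(len(p))])
        coef, *_ = np.linalg.lstsq(A, p[:, 0], rcond=None)
        return coef  # array([k, b])

    k, b = lsq(points)
    residuals = np.abs(points[:, 0] - (k * points[:, 1] + b))
    inliers = points[residuals <= threshold_px]
    # Refit only if enough points remain; otherwise keep the initial fit.
    return lsq(inliers) if len(inliers) >= 2 else np.array([k, b])
```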

The trained network can identify tree trunks and fruit trees accurately. The single-target average accuracies for trees and trunks are 92.7% and 91.51%, respectively. Trunks and fruit trees are well identified under different sunlight conditions and in weed-rich environments. The model is robust, and it takes about 50 milliseconds to process an image, which meets the real-time requirements of the algorithm.

### **4. Conclusions**

A centerline extraction algorithm for orchard rows was proposed based on the YOLO V3 network, which can detect fruit trees and the trunk regions in contact with the ground regardless of light intensity, shade, and disturbances. The average detection accuracy for tree trunks and fruit trees was 92.11%, with the bounding box coordinates output to a text file at the same time.

From the trunk bounding boxes, the reference points of the fruit tree trunks were extracted, and the least-squares method was applied to fit the fruit tree rows on both sides of the walking route of the agricultural machinery. The centerline of the orchard row was then fitted from the two row lines. According to the experimental results, the average accuracy of fruit tree row centerline extraction was 90%.
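As a rough illustration of this pipeline, the sketch below takes the bottom center of each detected trunk bounding box as a reference point (the trunk-ground contact), fits the left and right tree rows with least-squares lines, and returns the midline of the two fitted rows as the navigation centerline. The bounding-box format, the left/right split at the image midpoint, and the x = k*y + b parameterization are illustrative assumptions, not the exact DL\_LS implementation.

```python
import numpy as np

def fit_navigation_centerline(trunk_boxes, img_width=640):
    """trunk_boxes: list of (x_min, y_min, x_max, y_max) trunk detections.
    Returns (k, b) of the navigation centerline, parameterized as x = k*y + b."""
    # Reference point: bottom center of each trunk box (trunk-ground contact).
    pts = np.array([((x0 + x1) / 2.0, y1) for x0, y0, x1, y1 in trunk_boxes])

    # Assign reference points to the left or right tree row.
    left = pts[pts[:, 0] < img_width / 2]
    right = pts[pts[:, 0] >= img_width / 2]

    def lsq_line(p):
        # Least-squares fit of x = k*y + b, with y as the independent variable
        # because tree rows run roughly vertically in the image.
        A = np.column_stack([p[:, 1], np.ones(len(p))])
        (k, b), *_ = np.linalg.lstsq(A, p[:, 0], rcond=None)
        return k, b

    (k_l, b_l), (k_r, b_r) = lsq_line(left), lsq_line(right)
    # Navigation centerline: midway between the two row lines at every image row.
    return (k_l + k_r) / 2.0, (b_l + b_r) / 2.0
```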

In the future, our research will consider fusing multiple sensors to acquire richer environmental information and enable automated navigation in complex and changing orchard environments.

**Author Contributions:** Conceptualization, J.Z.; methodology, J.Z. and S.G.; validation, J.Z., S.G. and Q.Q.; formal analysis, Q.Q., S.G. and J.Z.; writing—original draft preparation, J.Z. and S.G.; writing—review and editing, J.Z., M.Z. and Q.Q.; visualization, Y.S., S.G. and J.Z.; supervision, M.Z.; funding acquisition, Q.Q. All authors have read and agreed to the published version of the manuscript.

**Funding:** This study was supported by the National Natural Science Foundation of China (No. 61973040 and No. 31101088).

**Institutional Review Board Statement:** Not applicable.

**Data Availability Statement:** The data that support the findings of this study are available from the first author and the second author, upon reasonable request.

**Acknowledgments:** The authors would like to thank Senliu Chen for his instructive suggestions and kind help in the experiment implementation and paper writing.

**Conflicts of Interest:** The authors declare no conflict of interest.
