Search Results (775)

Search Parameters:
Keywords = agricultural robot

14 pages, 3896 KB  
Article
Liming-Induced Nitrous Oxide Emissions from Acidic Soils Dominated by Stimulative Nitrification
by Xiaoxiao Xiang, Hongyang Gong, Waqar Ahmed, Rodney B. Thompson, Wenxuan Shi, Junhui Yin and Qing Chen
Biology 2025, 14(9), 1110; https://doi.org/10.3390/biology14091110 - 22 Aug 2025
Viewed by 103
Abstract
Nitrous oxide (N2O) is a potent greenhouse gas, with emissions occurring mostly from agricultural soils, especially acidic soils. This research aimed to elucidate the response of soils dominated by nitrification-driven N2O production to alkaline amendments, given that nitrification is a key process in N2O emission. This study investigated the impact of an alkaline mineral amendment (CSMP) on N2O emission, nitrification rate, and functional gene abundance. Using a robotic automated incubation system, CSMP was applied both alone and in combination with urea to two acidic soils (CL: pH 5.81; WS: pH 4.91). The results demonstrated that, relative to the untreated control (CK), the CSMP-only treatment significantly increased N2O emissions by 18.4-fold in these acidic soils, with a 61.6-fold increase in the U + CSMP treatment. This very large increase was driven by a rise in AOB-amoA abundance and a concurrent decline in AOA-amoA, as confirmed by structural equation modeling, which showed that the increase in pH strongly influenced N2O emission primarily through AOB-amoA. Although CSMP is effective for reversing soil acidification, its use must be carefully managed to prevent stimulation of N2O emissions. Future strategies should explore combining CSMP with approaches that can mitigate nitrification while maintaining its soil improvement benefits. This study provides critical insights for developing balanced management practices that address both soil health and climate change mitigation in acidic agricultural systems. Full article

20 pages, 16392 KB  
Article
PCC-YOLO: A Fruit Tree Trunk Recognition Algorithm Based on YOLOv8
by Yajie Zhang, Weiliang Jin, Baoxing Gu, Guangzhao Tian, Qiuxia Li, Baohua Zhang and Guanghao Ji
Agriculture 2025, 15(16), 1786; https://doi.org/10.3390/agriculture15161786 - 21 Aug 2025
Viewed by 160
Abstract
With the development of smart agriculture, the precise identification of fruit tree trunks by orchard management robots has become a key technology for achieving autonomous navigation. To address the low contrast between tree trunks and their background in orchards, this study introduces PCC-YOLO (PENet, CoT-Net, and Coord-SE attention-based YOLOv8), a new trunk detection model based on YOLOv8. A pyramid enhancement network (PENet) is introduced into YOLOv8 to strengthen feature extraction under low-contrast conditions, a context transformer (CoT-Net) module is used to enhance global perception capabilities, and a combination of coordinate attention (Coord-Att) and SENetV2 is employed to optimize target localization accuracy. Experimental results show that PCC-YOLO achieves a mean average precision (mAP) of 82.6% on a self-built orchard dataset (5000 images) and a detection speed of 143.36 FPS, a 4.8% improvement over the baseline YOLOv8 model, while maintaining a low computational load (7.8 GFLOPs). The model demonstrates a superior balance of accuracy, speed, and computational cost compared to the baseline YOLOv8 and other common YOLO variants, offering an efficient solution for the real-time autonomous navigation of orchard management robots. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
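A rough idea of the Coord-SE attention used above (coordinate attention combined with SE-style channel re-weighting) can be sketched in PyTorch. This is a minimal illustrative module with assumed shapes and reduction settings, not the PCC-YOLO implementation itself.

```python
# Minimal sketch of a combined coordinate + SE-style channel attention block
# (illustrative only; not the PCC-YOLO implementation).
import torch
import torch.nn as nn

class CoordSEAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        # Coordinate attention: pool along H and W separately, then re-weight per axis.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.SiLU()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)
        # SE-style channel attention: global pooling followed by a bottleneck MLP.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, 1), nn.SiLU(),
            nn.Conv2d(mid, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        xh = self.pool_h(x)                          # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        y = self.act(self.conv1(torch.cat([xh, xw], dim=2)))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # (B, C, H, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        x = x * ah * aw          # coordinate re-weighting
        return x * self.se(x)    # channel re-weighting

x = torch.randn(1, 64, 40, 40)
print(CoordSEAttention(64)(x).shape)  # torch.Size([1, 64, 40, 40])
```

In a YOLOv8-style network, a block like this would typically be inserted after a backbone or neck stage; where PCC-YOLO places it is not specified here.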

22 pages, 17979 KB  
Article
AFBF-YOLO: An Improved YOLO11n Algorithm for Detecting Bunch and Maturity of Cherry Tomatoes in Greenhouse Environments
by Bo-Jin Chen, Jun-Yan Bu, Jun-Lin Xia, Ming-Xuan Li and Wen-Hao Su
Plants 2025, 14(16), 2587; https://doi.org/10.3390/plants14162587 - 20 Aug 2025
Viewed by 251
Abstract
Accurate detection of cherry tomato clusters and their ripeness stages is critical for the development of intelligent harvesting systems in modern agriculture. In response to the challenges posed by occlusion, overlapping clusters, and subtle ripeness variations under complex greenhouse environments, an improved YOLO11-based deep convolutional neural network detection model, called AFBF-YOLO, is proposed in this paper. First, a dataset comprising 486 RGB images and over 150,000 annotated instances was constructed and augmented, covering four ripeness stages and fruit clusters. Then, based on YOLO11, the ACmix attention mechanism was incorporated to strengthen feature representation under occluded and cluttered conditions. Additionally, a novel neck structure, FreqFusion-BiFPN, was designed to improve multi-scale feature fusion through frequency-aware filtering. Finally, a refined loss function, Inner-Focaler-IoU, was applied to enhance bounding box localization by emphasizing inner-region overlap and focusing on difficult samples. Experimental results show that AFBF-YOLO achieves a precision of 81.2%, a recall of 81.3%, and an mAP@0.5 of 85.6%, outperforming multiple mainstream YOLO-series models. Its high accuracy across ripeness stages and low computational complexity indicate that it excels at the simultaneous detection of cherry tomato fruit bunches and fruit maturity, supporting automated maturity assessment and robotic harvesting in precision agriculture. Full article
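The "inner-region overlap" idea behind the Inner-Focaler-IoU loss can be illustrated, under the common Inner-IoU formulation, by computing IoU on auxiliary boxes shrunk by a ratio around each box centre. The sketch below is a generic illustration with an assumed ratio; it omits the Focaler re-weighting and is not the loss defined in the paper.

```python
# Hedged sketch of an "inner" IoU: compute IoU on auxiliary boxes shrunk by a ratio
# around each box centre, emphasising inner-region overlap. Generic illustration only;
# not the Inner-Focaler-IoU loss defined in the AFBF-YOLO paper.
import torch

def inner_iou(box1, box2, ratio: float = 0.75, eps: float = 1e-7):
    """box1, box2: (N, 4) tensors in (cx, cy, w, h) format."""
    def shrink(b):
        cx, cy, w, h = b.unbind(-1)
        w, h = w * ratio, h * ratio
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    x1a, y1a, x2a, y2a = shrink(box1)
    x1b, y1b, x2b, y2b = shrink(box2)
    iw = (torch.min(x2a, x2b) - torch.max(x1a, x1b)).clamp(min=0)
    ih = (torch.min(y2a, y2b) - torch.max(y1a, y1b)).clamp(min=0)
    inter = iw * ih
    union = (x2a - x1a) * (y2a - y1a) + (x2b - x1b) * (y2b - y1b) - inter
    return inter / (union + eps)

a = torch.tensor([[5.0, 5.0, 4.0, 4.0]])
b = torch.tensor([[6.0, 5.0, 4.0, 4.0]])
print(inner_iou(a, b))        # inner-box IoU, ≈ 0.50
print(inner_iou(a, b, 1.0))   # plain IoU of the full boxes, ≈ 0.60
```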

20 pages, 28680 KB  
Article
SN-YOLO: A Rotation Detection Method for Tomato Harvest in Greenhouses
by Jinlong Chen, Ruixue Yu, Minghao Yang, Wujun Che, Yi Ning and Yongsong Zhan
Electronics 2025, 14(16), 3243; https://doi.org/10.3390/electronics14163243 - 15 Aug 2025
Viewed by 276
Abstract
Accurate detection of tomato fruits is a critical component in vision-guided robotic harvesting systems, which play an increasingly important role in automated agriculture. However, this task is challenged by variable lighting conditions and background clutter in natural environments. In addition, the arbitrary orientations of fruits reduce the effectiveness of traditional horizontal bounding boxes. To address these challenges, we propose a novel object detection framework named SN-YOLO. First, we introduce the StarNet backbone to enhance the extraction of fine-grained features, thereby improving the detection performance in cluttered backgrounds. Second, we design a Color-Prior Spatial-Channel Attention (CPSCA) module that incorporates red-channel priors to strengthen the model’s focus on salient fruit regions. Third, we implement a multi-level attention fusion strategy to promote effective feature integration across different layers, enhancing background suppression and object discrimination. Furthermore, oriented bounding boxes improve localization precision by better aligning with the actual fruit shapes and poses. Experiments conducted on a custom tomato dataset demonstrate that SN-YOLO outperforms the baseline YOLOv8 OBB, achieving a 1.0% improvement in precision and a 0.8% increase in mAP@0.5. These results confirm the robustness and accuracy of the proposed method under complex field conditions. Overall, SN-YOLO provides a practical and efficient solution for fruit detection in automated harvesting systems, contributing to the deployment of computer vision techniques in smart agriculture. Full article
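The Color-Prior Spatial-Channel Attention (CPSCA) module reportedly injects a red-channel prior to steer attention toward ripe-fruit regions. A much-simplified stand-in for that idea, a spatial gate mixing channel statistics with a crude "redness" map, might look like the following; the prior formula, kernel size, and fusion scheme are assumptions, not the paper's design.

```python
# Rough sketch of a red-channel-prior spatial attention gate
# (simplified stand-in; not the actual CPSCA module from SN-YOLO).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RedPriorSpatialGate(nn.Module):
    def __init__(self):
        super().__init__()
        # Fuse learned spatial statistics with the red-channel prior map.
        self.fuse = nn.Conv2d(3, 1, kernel_size=7, padding=3)

    def forward(self, feat, rgb):
        # rgb: original image (B, 3, H, W) in [0, 1]; feat: backbone feature (B, C, h, w).
        r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
        prior = torch.clamp(r - 0.5 * (g + b), min=0.0)        # crude "redness" prior
        prior = F.interpolate(prior, size=feat.shape[-2:], mode="bilinear", align_corners=False)
        avg = feat.mean(dim=1, keepdim=True)                    # per-pixel channel average
        mx, _ = feat.max(dim=1, keepdim=True)                   # per-pixel channel max
        attn = torch.sigmoid(self.fuse(torch.cat([avg, mx, prior], dim=1)))
        return feat * attn

feat = torch.randn(1, 128, 20, 20)
rgb = torch.rand(1, 3, 160, 160)
print(RedPriorSpatialGate()(feat, rgb).shape)  # torch.Size([1, 128, 20, 20])
```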

42 pages, 10178 KB  
Article
Grid-Based Path Planning of Agricultural Robots Driven by Multi-Strategy Collaborative Evolution Honey Badger Algorithm
by Yunyu Hu and Peng Shao
Biomimetics 2025, 10(8), 535; https://doi.org/10.3390/biomimetics10080535 - 14 Aug 2025
Viewed by 259
Abstract
To address the limitations of mobile robots in path planning within farmland-specific environments, this paper proposes a biomimetic model, the Multi-Strategy Collaborative Evolution Honey Badger Algorithm (MCEHBA). MCEHBA achieves improvements through the following strategies: firstly, integrating a sinusoidal function-based nonlinear convergence factor to dynamically balance global exploration and local exploitation; secondly, combining the differential evolution strategy to enhance population diversity and utilizing gravity-centred opposition-based learning to improve solution space search efficiency; finally, constructing a good point set initialization and a decentralized boundary constraint handling strategy to further increase convergence accuracy and speed. This paper validates the effectiveness of the optimization strategy and the performance of MCEHBA through the CEC2017 benchmark function set, and assesses the statistical significance of the results using the Friedman test and Nemenyi test. The findings demonstrate that MCEHBA exhibits excellent optimization capabilities. Additionally, this study applied MCEHBA to solve three engineering application problems and compared its results with six other algorithms, showing that MCEHBA achieved the minimum objective function values in all three cases. Finally, simulation experiments were conducted in three farmland scenarios of varying scales, with comparative tests against three state-of-the-art algorithms. The results indicate that MCEHBA generates paths with minimized total costs, demonstrating superior global convergence and engineering applicability. Full article
(This article belongs to the Section Biological Optimisation and Management)
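One of the listed MCEHBA ingredients is a sinusoidal function-based nonlinear convergence factor that balances exploration and exploitation over iterations. A plausible shape for such a factor is sketched below; the constant c and the exact curve are assumptions, since the paper's formula is not reproduced here.

```python
# Sketch of a sinusoidal nonlinear convergence (decay) factor for a
# honey-badger-style metaheuristic. The exact formula used by MCEHBA may differ;
# this only illustrates how such a factor trades exploration for exploitation.
import math

def sinusoidal_convergence_factor(t: int, t_max: int, c: float = 2.0) -> float:
    """Decay from ~c to 0 along a sine curve instead of a straight line,
    keeping the factor high (exploration) longer before dropping (exploitation)."""
    return c * math.sin((math.pi / 2.0) * (1.0 - t / t_max))

if __name__ == "__main__":
    t_max = 100
    for t in (0, 25, 50, 75, 100):
        print(t, round(sinusoidal_convergence_factor(t, t_max), 3))
    # 0 -> 2.0, 25 -> 1.848, 50 -> 1.414, 75 -> 0.765, 100 -> 0.0
```

Compared with a linear decay, this curve keeps the factor (and hence exploration pressure) high for longer before falling off near the end of the run.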

54 pages, 2856 KB  
Review
Applications, Trends, and Challenges of Precision Weed Control Technologies Based on Deep Learning and Machine Vision
by Xiangxin Gao, Jianmin Gao and Waqar Ahmed Qureshi
Agronomy 2025, 15(8), 1954; https://doi.org/10.3390/agronomy15081954 - 13 Aug 2025
Viewed by 597
Abstract
Advanced computer vision (CV) and deep learning (DL) are essential for sustainable agriculture via automated vegetation management. This paper methodically reviews advancements in these technologies for agricultural settings, analyzing their fundamental principles, designs, system integration, and practical applications. The amalgamation of transformer topologies with convolutional neural networks (CNNs) in models such as YOLO (You Only Look Once) and Mask R-CNN (Region-Based Convolutional Neural Network) markedly enhances target recognition and semantic segmentation. The integration of LiDAR (Light Detection and Ranging) with multispectral imagery significantly improves recognition accuracy in intricate situations. Moreover, the integration of deep learning models with control systems, which include laser modules, robotic arms, and precision spray nozzles, facilitates the development of intelligent robotic mowing systems that significantly diminish chemical herbicide consumption and enhance operational efficiency relative to conventional approaches. Significant obstacles persist, including restricted environmental adaptability, real-time processing limitations, and inadequate model generalization. Future directions entail the integration of varied data sources, the development of streamlined models, and the enhancement of intelligent decision-making systems, establishing a framework for the advancement of sustainable agricultural technology. Full article
(This article belongs to the Special Issue Research Progress in Agricultural Robots in Arable Farming)

17 pages, 5705 KB  
Article
Cherry Tomato Bunch and Picking Point Detection for Robotic Harvesting Using an RGB-D Sensor and a StarBL-YOLO Network
by Pengyu Li, Ming Wen, Zhi Zeng and Yibin Tian
Horticulturae 2025, 11(8), 949; https://doi.org/10.3390/horticulturae11080949 - 11 Aug 2025
Viewed by 433
Abstract
For fruit harvesting robots, rapid and accurate detection of fruits and picking points is one of the main challenges for their practical deployment. Several fruits typically grow in clusters or bunches, such as grapes, cherry tomatoes, and blueberries. For such clustered fruits, it is preferable to pick them by the bunch rather than individually. This study proposes utilizing a low-cost off-the-shelf RGB-D sensor mounted on the end effector and a lightweight improved YOLOv8-Pose neural network to detect cherry tomato bunches and picking points for robotic harvesting. The problem of occlusion and overlap is alleviated by merging RGB and depth images from the RGB-D sensor. To enhance detection robustness in complex backgrounds and reduce the complexity of the model, the Starblock module from StarNet and the coordinate attention mechanism are incorporated into the YOLOv8-Pose network, termed StarBL-YOLO, to improve the efficiency of feature extraction and reinforce spatial information. Additionally, we replaced the original OKS loss function with the L1 loss function for keypoint loss calculation, which improves the accuracy of picking point localization. The proposed method has been evaluated on a dataset with 843 cherry tomato RGB-D image pairs acquired by a harvesting robot at a commercial greenhouse farm. Experimental results demonstrate that the proposed StarBL-YOLO model achieves a 12% reduction in model parameters compared to the original YOLOv8-Pose while improving detection accuracy for cherry tomato bunches and picking points. Specifically, the model shows significant improvements across all metrics: for computational efficiency, model size (−11.60%) and GFLOPs (−7.23%); for pickable bunch detection, mAP50 (+4.4%) and mAP50-95 (+4.7%); for non-pickable bunch detection, mAP50 (+8.0%) and mAP50-95 (+6.2%); and for picking point detection, mAP50 (+4.3%), mAP50-95 (+4.6%), and RMSE (−23.98%). These results validate that StarBL-YOLO substantially enhances detection accuracy for cherry tomato bunches and picking points while improving computational efficiency, which is valuable for resource-constrained edge-computing deployment on harvesting robots. Full article
(This article belongs to the Special Issue Advanced Automation for Tree Fruit Orchards and Vineyards)
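The abstract notes that occlusion and overlap are alleviated by merging the RGB and depth images from the RGB-D sensor. One generic way to do early fusion is to stack normalized depth as a fourth input channel and widen the detector's first convolution; the sketch below illustrates that pattern only and is not the StarBL-YOLO fusion scheme.

```python
# Generic sketch of early RGB-D fusion: stack depth as a 4th channel and widen
# the first convolution accordingly (not the StarBL-YOLO fusion scheme itself).
import torch
import torch.nn as nn

def fuse_rgbd(rgb: torch.Tensor, depth: torch.Tensor, max_depth_m: float = 2.0) -> torch.Tensor:
    """rgb: (B, 3, H, W) in [0, 1]; depth: (B, 1, H, W) in metres. Returns (B, 4, H, W)."""
    depth = torch.clamp(depth / max_depth_m, 0.0, 1.0)  # normalize depth to [0, 1]
    return torch.cat([rgb, depth], dim=1)

def widen_first_conv(conv: nn.Conv2d) -> nn.Conv2d:
    """Widen a 3-channel stem to accept the extra depth channel, initializing the
    new input weights with the mean of the RGB weights."""
    new = nn.Conv2d(4, conv.out_channels, conv.kernel_size, conv.stride,
                    conv.padding, bias=conv.bias is not None)
    with torch.no_grad():
        new.weight[:, :3] = conv.weight
        new.weight[:, 3:] = conv.weight.mean(dim=1, keepdim=True)
        if conv.bias is not None:
            new.bias.copy_(conv.bias)
    return new

stem = widen_first_conv(nn.Conv2d(3, 32, 3, 2, 1))
x = fuse_rgbd(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(stem(x).shape)  # torch.Size([1, 32, 32, 32])
```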

20 pages, 16638 KB  
Article
GIA-YOLO: A Target Detection Method for Nectarine Picking Robots in Facility Orchards
by Longlong Ren, Yuqiang Li, Yonghui Du, Ang Gao, Wei Ma, Yuepeng Song and Xingchang Han
Agronomy 2025, 15(8), 1934; https://doi.org/10.3390/agronomy15081934 - 11 Aug 2025
Viewed by 304
Abstract
The complex and variable environment of facility orchards poses significant challenges for intelligent robotic operations. To address issues such as nectarine fruit occlusion by branches and leaves, complex backgrounds, and the demand for high real-time detection performance, this study proposes a target detection model for nectarine fruit based on the YOLOv11 architecture: Ghost–iEMA–ADown You Only Look Once (GIA-YOLO). We introduce the GhostModule to reduce the model size and the floating-point operations, adopt the fusion attention mechanism iEMA to enhance the feature extraction capability, and further optimize the network structure through the ADown lightweight downsampling module. The test results show that GIA-YOLO achieves 93.9% precision, 88.9% recall, and 96.2% mAP, which are 2.2, 1.1, and 0.7 percentage points higher than YOLOv11, respectively; the model size is reduced to 5.0 MB and the floating-point operations are reduced to 5.2 G, reductions of 9.1% and 17.5% compared to the original model, respectively. The model was deployed in the picking robot system and field-tested in a nectarine facility orchard; the results show that GIA-YOLO maintains high detection precision and stability at different picking distances, with a comprehensive missed detection rate of 6.65% and a false detection rate of 8.7%, and supports real-time detection at 41.6 FPS. These results provide an important reference for optimizing the design and application of nectarine detection models in facility agriculture environments. Full article
(This article belongs to the Section Precision and Digital Agriculture)
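The GhostModule referenced above follows the GhostNet idea: generate part of the output channels with a regular convolution and the rest with a cheap depthwise operation on those features, cutting parameters and FLOPs. A minimal sketch is shown below; channel ratios and kernel sizes are illustrative assumptions rather than GIA-YOLO's configuration.

```python
# Minimal Ghost module sketch (after GhostNet): half the output channels come from a
# regular convolution, the rest from a cheap depthwise operation on those features.
# Illustrative only; GIA-YOLO's exact GhostModule configuration is not reproduced here.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_kernel: int = 3):
        super().__init__()
        init_ch = out_ch // ratio                    # "intrinsic" feature maps
        cheap_ch = out_ch - init_ch                  # "ghost" feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),   # depthwise: one filter group per channel
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

m = GhostModule(64, 128)
print(m(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 32, 32])
```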

31 pages, 4333 KB  
Review
Research Progress and Development Trend of Visual Detection Methods for Selective Fruit Harvesting Robots
by Wenbo Wang, Chenshuo Li, Yidan Xi, Jinan Gu, Xinzhou Zhang, Man Zhou and Yuchun Peng
Agronomy 2025, 15(8), 1926; https://doi.org/10.3390/agronomy15081926 - 10 Aug 2025
Viewed by 556
Abstract
The rapid development of artificial intelligence technologies has promoted the emergence of Agriculture 4.0, where the machines participating in agricultural activities are made smart with the capacities of self-sensing, self-decision-making, and self-execution. As representative implementations of Agriculture 4.0, intelligent selective fruit harvesting robots demonstrate significant potential to alleviate labor-intensive demands in modern agriculture, where visual detection serves as the foundational component. However, the accurate detection of fruits remains a challenging issue due to the complex and unstructured nature of fruit orchards. This paper comprehensively reviews the recent progress in visual detection methods for selective fruit harvesting robots, covering cameras, traditional detection based on handcrafted feature methods, detection based on deep learning methods, and tree branch detection methods. Furthermore, the potential challenges and future trends of the visual detection system of selective fruit harvesting robots are critically discussed, facilitating a thorough comprehension of contemporary progress in this research area. The primary objective of this work is to highlight the pivotal role of visual perception in intelligent fruit harvesting robots. Full article
(This article belongs to the Section Precision and Digital Agriculture)

20 pages, 19463 KB  
Article
Enhanced Visual Detection and Path Planning for Robotic Arms Using Yolov10n-SSE and Hybrid Algorithms
by Hongjun Wang, Anbang Zhao, Yongqi Zhong, Gengming Zhang, Fengyun Wu and Xiangjun Zou
Agronomy 2025, 15(8), 1924; https://doi.org/10.3390/agronomy15081924 - 9 Aug 2025
Viewed by 334
Abstract
Pineapple harvesting in natural orchard environments faces challenges such as high occlusion rates caused by foliage and the need for complex spatial planning to guide robotic arm movement in cluttered terrains. This study proposes an innovative visual detection model, Yolov10n-SSE, which integrates split convolution (SPConv), squeeze-and-excitation (SE) attention, and efficient multi-scale attention (EMA) modules. These improvements enhance detection accuracy while reducing computational complexity. The proposed model achieves notable performance gains in precision (93.8%), recall (84.9%), and mAP (91.8%). Additionally, a dimensionality-reduction strategy transforms 3D path planning into a more efficient 2D image-space task using point clouds from a depth camera. Combining the artificial potential field (APF) method with an improved RRT* algorithm mitigates randomness, ensures obstacle avoidance, and reduces computation time. Experimental validation demonstrates the superior stability of this approach and its generation of collision-free paths, while robotic arm simulation in ROS confirms real-world feasibility. This integrated approach to detection and path planning provides a scalable technical solution for automated pineapple harvesting, addressing key bottlenecks in agricultural robotics and fostering advancements in fruit-picking automation. Full article
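The artificial potential field (APF) component mentioned above treats the goal as an attractor and obstacles as repellers, so the planner follows the combined field. The toy 2D step below shows the standard APF force balance with assumed gains; it is not the paper's APF/RRT* hybrid.

```python
# Toy 2D artificial potential field (APF) step: attraction toward the goal plus
# repulsion from nearby obstacles. A generic illustration of the APF idea used
# alongside the improved RRT*; not the paper's planner.
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                       # attractive force
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                              # repulsion only inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    n = np.linalg.norm(force)
    return pos if n < 1e-9 else pos + step * force / n  # fixed-length move along the net force

p = np.array([0.0, 0.0])
for _ in range(200):
    p = apf_step(p, goal=[3.0, 2.0], obstacles=[[1.5, 0.6]])
print(np.round(p, 2))  # approaches the goal while skirting the obstacle
```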

19 pages, 7742 KB  
Article
Three-Dimensional Point Cloud Reconstruction of Unstructured Terrain for Autonomous Robots
by Wei Chen, Xiufang Lin and Xiangpan Zheng
Sensors 2025, 25(16), 4890; https://doi.org/10.3390/s25164890 - 8 Aug 2025
Viewed by 242
Abstract
In scenarios such as field exploration, disaster relief, and agricultural automation, LiDAR-based reconstructed terrain models can largely contribute to robot activities such as passable area identification and path planning optimization. However, unstructured terrain environments typically lack artificially labeled features and are poorly characterized, which makes it difficult to find reliable feature correspondences in the point cloud between two consecutive LiDAR scans. Meanwhile, the persistent noise that accompanies unstructured terrain also hampers the search for reliable feature correspondences between consecutively scanned point clouds, which in turn leads to lower matching accuracy and larger offsets between neighboring frames. Therefore, this paper proposes an unstructured terrain reconstruction algorithm that combines the LOAM algorithm with graph optimization theory and further introduces the robot's motion information provided by an IMU. Experimental results show that the proposed method can achieve accurate and effective reconstruction in unstructured terrain environments. Full article
(This article belongs to the Section Sensors and Robotics)

34 pages, 3764 KB  
Review
Research Progress and Applications of Artificial Intelligence in Agricultural Equipment
by Yong Zhu, Shida Zhang, Shengnan Tang and Qiang Gao
Agriculture 2025, 15(15), 1703; https://doi.org/10.3390/agriculture15151703 - 7 Aug 2025
Viewed by 623
Abstract
With the growth of the global population and the increasing scarcity of arable land, traditional agricultural production is confronted with multiple challenges, such as efficiency improvement, precision operation, and sustainable development. The progressive advancement of artificial intelligence (AI) technology has created a transformative opportunity for the intelligent upgrade of agricultural equipment. This article systematically presents recent progress in computer vision, machine learning (ML), and intelligent sensing. The key innovations are highlighted in areas such as object detection and recognition (e.g., a K-nearest neighbor (KNN) achieved 98% accuracy in distinguishing vibration signals across operation stages); autonomous navigation and path planning (e.g., a deep reinforcement learning (DRL)-optimized task planner for multi-arm harvesting robots reduced execution time by 10.7%); state perception (e.g., a multilayer perceptron (MLP) yielded 96.9% accuracy in plug seedling health classification); and precision control (e.g., an intelligent multi-module coordinated control system achieved a transplanting efficiency of 5000 plants/h). The findings reveal a deep integration of AI models with multimodal perception technologies, significantly improving the operational efficiency, resource utilization, and environmental adaptability of agricultural equipment. This integration is catalyzing the transition toward intelligent, automated, and sustainable agricultural systems. Nevertheless, intelligent agricultural equipment still faces technical challenges regarding data sample acquisition, adaptation to complex field environments, and the coordination between algorithms and hardware. Looking ahead, the convergence of digital twin (DT) technology, edge computing, and big data-driven collaborative optimization is expected to become the core of next-generation intelligent agricultural systems. These technologies have the potential to overcome current limitations in perception and decision-making, ultimately enabling intelligent management and autonomous decision-making across the entire agricultural production chain. This article aims to provide a comprehensive foundation for advancing agricultural modernization and supporting green, sustainable development. Full article
(This article belongs to the Section Agricultural Technology)

25 pages, 4021 KB  
Article
A Hybrid Path Planning Algorithm for Orchard Robots Based on an Improved D* Lite Algorithm
by Quanjie Jiang, Yue Shen, Hui Liu, Zohaib Khan, Hao Sun and Yuxuan Huang
Agriculture 2025, 15(15), 1698; https://doi.org/10.3390/agriculture15151698 - 6 Aug 2025
Viewed by 356
Abstract
Due to the complex spatial structure, dense tree distribution, and narrow passages in orchard environments, traditional path planning algorithms often struggle with large path deviations, frequent turning, and reduced navigational safety. In order to overcome these challenges, this paper proposes a hybrid path planning algorithm based on improved D* Lite for narrow forest orchard environments. The proposed approach enhances path feasibility and improves the robustness of the navigation system. The algorithm begins by constructing a 2D grid map reflecting the orchard layout and inflates the tree regions to create safety buffers for reliable path planning. For global path planning, an enhanced D* Lite algorithm is used with a cost function that jointly considers centerline proximity, turning angle smoothness, and directional consistency. This guides the path to remain close to the orchard row centerline, improving structural adaptability and path rationality. Narrow passages along the initial path are detected, and local replanning is performed using a Hybrid A* algorithm that accounts for the kinematic constraints of a differential tracked robot. This generates curvature-continuous and directionally stable segments that replace the original narrow-path portions. Finally, a gradient descent method is applied to smooth the overall path, improving trajectory continuity and execution stability. Field experiments in representative orchard environments demonstrate that the proposed hybrid algorithm significantly outperforms traditional D* Lite and KD* Lite-B methods in terms of path accuracy and navigational safety. The average deviation from the centerline is only 0.06 m, representing reductions of 75.55% and 38.27% compared to traditional D* Lite and KD* Lite-B, respectively, thereby enabling high-precision centerline tracking. Moreover, the number of hazardous nodes, defined as path points near obstacles, was reduced to five, marking decreases of 92.86% and 68.75%, respectively, and substantially enhancing navigation safety. These results confirm the method’s strong applicability in complex, constrained orchard environments and its potential as a foundation for efficient, safe, and fully autonomous agricultural robot operation. Full article
(This article belongs to the Special Issue Perception, Decision-Making, and Control of Agricultural Robots)
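The improved D* Lite cost function described above jointly weighs centerline proximity, turning-angle smoothness, and directional consistency. The sketch below shows one way such a composite per-move cost could be written on a grid; the weights, the straight-row direction, and the exact terms are assumptions, not the published cost function.

```python
# Sketch of a composite grid-move cost in the spirit of the improved D* Lite
# cost function (path length + centerline proximity + turning smoothness +
# directional consistency). Weights and terms are illustrative assumptions.
import math

def move_cost(curr, nxt, prev, row_centerline_y, w_len=1.0, w_center=0.5,
              w_turn=0.3, w_dir=0.2, row_direction=(1.0, 0.0)):
    step = math.dist(curr, nxt)                               # base path length
    center_dev = abs(nxt[1] - row_centerline_y)               # distance from row centerline
    v_new = (nxt[0] - curr[0], nxt[1] - curr[1])
    turn = 0.0
    if prev is not None:
        v_old = (curr[0] - prev[0], curr[1] - prev[1])
        dot = v_old[0] * v_new[0] + v_old[1] * v_new[1]
        norm = math.hypot(*v_old) * math.hypot(*v_new)
        turn = math.acos(max(-1.0, min(1.0, dot / norm))) if norm > 0 else 0.0
    n_new = math.hypot(*v_new)
    align = (v_new[0] * row_direction[0] + v_new[1] * row_direction[1]) / n_new if n_new > 0 else 0.0
    return (w_len * step + w_center * center_dev
            + w_turn * turn + w_dir * (1.0 - align))          # penalize deviation, turning, misalignment

# Moving straight along the row centerline is cheaper than cutting diagonally off it:
print(move_cost((0, 5), (1, 5), prev=(-1, 5), row_centerline_y=5))   # = 1.0
print(move_cost((0, 5), (1, 6), prev=(-1, 5), row_centerline_y=5))   # ≈ 2.21, higher
```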

19 pages, 19040 KB  
Article
Multi-Strategy Fusion RRT-Based Algorithm for Optimizing Path Planning in Continuous Cherry Picking
by Yi Zhang, Xinying Miao, Yifei Sun, Zhipeng He, Tianwen Hou, Zhenghan Wang and Qiuyan Wang
Agriculture 2025, 15(15), 1699; https://doi.org/10.3390/agriculture15151699 - 6 Aug 2025
Viewed by 202
Abstract
Automated cherry harvesting presents a significant opportunity to overcome the high costs and inefficiencies of manual labor in modern agriculture. However, robotic harvesting in dense canopies requires sophisticated path planning to navigate cluttered branches and selectively pick target fruits. This paper introduces a complete robotic harvesting solution centered on a novel path-planning algorithm: the Multi-Strategy Integrated RRT for Continuous Harvesting Path (MSI-RRTCHP) algorithm. Our system first employs a machine vision system to identify and locate mature cherries, distinguishing them from unripe fruits, leaves, and branches, which are treated as obstacles. Based on this visual data, the MSI-RRTCHP algorithm generates an optimal picking trajectory. Its core innovation is a synergistic strategy that enables intelligent navigation by combining probability-guided exploration, goal-oriented sampling, and adaptive step size adjustments based on local obstacle density. To optimize the picking sequence for multiple targets, we introduce an enhanced traversal algorithm (σ-TSP) that accounts for obstacle interference. Field experiments demonstrate that our integrated system achieved a 90% picking success rate. Compared with established algorithms, the MSI-RRTCHP algorithm reduced the path length by up to 25.47% and the planning time by up to 39.06%. This work provides a practical and efficient framework for robotic cherry harvesting, showcasing a significant step toward intelligent agricultural automation. Full article
(This article belongs to the Section Agricultural Technology)
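Two of the MSI-RRTCHP ingredients named above, goal-oriented sampling and obstacle-density-adaptive step sizes, can be illustrated in a few lines of Python. The bias probability, neighbourhood radius, and shrinking rule below are assumptions for illustration, not the published parameters.

```python
# Toy sketch of two MSI-RRTCHP-style ingredients: goal-biased sampling and a step
# size that shrinks as local obstacle density rises. Thresholds and formulas are
# illustrative assumptions, not the published algorithm.
import random
import math

def sample_point(goal, bounds, goal_bias=0.2):
    """With probability goal_bias sample the goal itself, otherwise sample uniformly."""
    if random.random() < goal_bias:
        return goal
    (xmin, xmax), (ymin, ymax) = bounds
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))

def adaptive_step(node, obstacles, base_step=0.5, radius=1.0, min_step=0.1):
    """Shrink the tree-extension step when many obstacles lie within `radius` of the node."""
    near = sum(1 for o in obstacles if math.dist(node, o) < radius)
    return max(min_step, base_step / (1.0 + near))

random.seed(0)
obstacles = [(1.0, 1.0), (1.2, 0.8), (2.5, 2.5)]
node = (1.1, 0.9)
print(sample_point((4.0, 4.0), bounds=((0, 5), (0, 5))))
print(adaptive_step(node, obstacles))   # 0.5 / (1 + 2) ≈ 0.167 near the obstacle cluster
```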

5 pages, 180 KB  
Proceeding Paper
Design of Automatic Generation Platform for Agricultural Robot
by Zhaowei Wang, Yurong Wang and Fangji Zhang
Eng. Proc. 2025, 98(1), 45; https://doi.org/10.3390/engproc2025098045 - 4 Aug 2025
Viewed by 175
Abstract
The design of robots is highly dependent on their applications. For agricultural robots, terrain, weather, and crop diversity need to be considered, and work efficiency, cost, and reliability must be evaluated. These factors are important in determining the design of agricultural robots. In this study, we identified the constraint factors of agricultural robots from the perspectives of navigation, movement, control, cost, and reliability. The orthogonal defect classification (ODC) method was used to classify and grade these factors and explore the relationships among them. Based on the results, design rules for agricultural robots were created, and an automatic production knowledge base for agricultural robot design was constructed. The results contribute to the automatic generation of agricultural robot design frameworks for specific environments, effectively improving the design level and quality of agricultural robots and promoting their adoption. Full article