Search Results (419)

Search Parameters:
Keywords = orchard environment

28 pages, 3284 KB  
Article
An Attention-Enhanced Bottleneck Network for Apple Segmentation in Orchard Environments
by Imran Md Jelas, Nur Alia Sofia Maluazi and Mohd Asyraf Zulkifley
Agriculture 2025, 15(17), 1802; https://doi.org/10.3390/agriculture15171802 - 23 Aug 2025
Viewed by 57
Abstract
As global food demand continues to rise, conventional agricultural practices face increasing difficulty in sustainably meeting production requirements. In response, deep learning-driven automated systems have emerged as promising solutions for enhancing precision farming. Nevertheless, accurate fruit segmentation remains a significant challenge in orchard environments due to factors such as occlusion, background clutter, and varying lighting conditions. This study proposes the Depthwise Asymmetric Bottleneck with Attention Mechanism Network (DABAMNet), an advanced convolutional neural network (CNN) architecture composed of multiple Depthwise Asymmetric Bottleneck Units (DABou), specifically designed to improve apple segmentation in RGB imagery. The model incorporates the Convolutional Block Attention Module (CBAM), a dual attention mechanism that enhances channel and spatial feature discrimination by adaptively emphasizing salient information while suppressing irrelevant content. Furthermore, the CBAM attention module employs multiple global pooling strategies to enrich feature representation across varying spatial resolutions. Through comprehensive ablation studies, the optimal configuration was identified as early CBAM placement after DABou unit 5, using a reduction ratio of 2 and combined global max-min pooling, which significantly improved segmentation accuracy. DABAMNet achieved an accuracy of 0.9813 and an Intersection over Union (IoU) of 0.7291, outperforming four state-of-the-art CNN benchmarks. These results demonstrate the model’s robustness in complex agricultural scenes and its potential for real-time deployment in fruit detection and harvesting systems. Overall, these findings underscore the value of attention-based architectures for agricultural image segmentation and pave the way for broader applications in sustainable crop monitoring systems. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
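The abstract specifies the CBAM configuration (dual channel and spatial attention, reduction ratio 2, combined global max-min pooling) but not its implementation; a minimal PyTorch sketch of such a block, with illustrative layer choices rather than DABAMNet's actual code, could look as follows.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention with combined global max-min pooling and reduction
    ratio r=2, as described in the abstract; layer sizes are illustrative."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        max_pool = x.amax(dim=(2, 3))   # global max pooling
        min_pool = x.amin(dim=(2, 3))   # global min pooling (in place of the usual avg)
        attn = torch.sigmoid(self.mlp(max_pool) + self.mlp(min_pool))
        return x * attn.view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """Spatial attention over channel-wise max/min maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        max_map = x.amax(dim=1, keepdim=True)
        min_map = x.amin(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([max_map, min_map], dim=1)))
        return x * attn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in CBAM."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```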

18 pages, 1956 KB  
Article
FCNet: A Transformer-Based Context-Aware Segmentation Framework for Detecting Camouflaged Fruits in Orchard Environments
by Ivan Roy Evangelista, Argel Bandala and Elmer Dadios
Technologies 2025, 13(8), 372; https://doi.org/10.3390/technologies13080372 - 20 Aug 2025
Viewed by 176
Abstract
Fruit segmentation is an essential task due to its importance in accurate disease prevention, yield estimation, and automated harvesting. However, accurate object segmentation in agricultural environments remains challenging due to visual complexities such as background clutter, occlusion, small object size, and color–texture similarities that lead to camouflaging. Traditional methods often struggle to detect partially occluded or visually blended fruits, leading to poor detection performance. In this study, we propose a context-aware segmentation framework designed for orchard-level mango fruit detection. We integrate multiscale feature extraction based on PVTv2 architecture, a feature enhancement module using Atrous Spatial Pyramid Pooling (ASPP) and attention techniques, and a novel refinement mechanism employing a Position-based Layer Normalization (PLN). We conducted a comparative study against established segmentation models, employing both quantitative and qualitative evaluations. Results demonstrate the superior performance of our model across all metrics. An ablation study validated the contributions of the enhancement and refinement modules, with the former yielding performance gains of 2.43%, 3.10%, 5.65%, 4.19%, and 4.35% in S-measure, mean E-measure, weighted F-measure, mean F-measure, and IoU, respectively, and the latter achieving improvements of 2.07%, 1.93%, 6.85%, 4.84%, and 2.73%, in the said metrics. Full article
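The FCNet abstract names Atrous Spatial Pyramid Pooling (ASPP) as its feature enhancement component without giving details; below is a generic ASPP sketch in PyTorch whose dilation rates and channel widths are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions capture
    context at several receptive-field sizes, then the branches are fused.
    Dilation rates and channel widths are illustrative."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Example: fuse a 64-channel backbone feature map into 128 enhanced channels.
feat = torch.randn(1, 64, 32, 32)
enhanced = ASPP(64, 128)(feat)   # -> (1, 128, 32, 32)
```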

17 pages, 3569 KB  
Article
A Real-Time Mature Hawthorn Detection Network Based on Lightweight Hybrid Convolutions for Harvesting Robots
by Baojian Ma, Bangbang Chen, Xuan Li, Liqiang Wang and Dongyun Wang
Sensors 2025, 25(16), 5094; https://doi.org/10.3390/s25165094 - 16 Aug 2025
Viewed by 312
Abstract
Accurate real-time detection of hawthorn by vision systems is a fundamental prerequisite for automated harvesting. This study addresses the challenges in hawthorn orchards—including target overlap, leaf occlusion, and environmental variations—which lead to compromised detection accuracy, high computational resource demands, and poor real-time performance in existing methods. To overcome these limitations, we propose YOLO-DCL (group shuffling convolution and coordinate attention integrated with a lightweight head based on YOLOv8n), a novel lightweight hawthorn detection model. The backbone network employs dynamic group shuffling convolution (DGCST) for efficient and effective feature extraction. Within the neck network, coordinate attention (CA) is integrated into the feature pyramid network (FPN), forming an enhanced multi-scale feature pyramid network (HSPFN); this integration further optimizes the C2f structure. The detection head is designed utilizing shared convolution and batch normalization to streamline computation. Additionally, the PIoUv2 (powerful intersection over union version 2) loss function is introduced to significantly reduce model complexity. Experimental validation demonstrates that YOLO-DCL achieves a precision of 91.6%, recall of 90.1%, and mean average precision (mAP) of 95.6%, while simultaneously reducing the model size to 2.46 MB with only 1.2 million parameters and 4.8 GFLOPs computational cost. To rigorously assess real-world applicability, we developed and deployed a detection system based on the PySide6 framework on an NVIDIA Jetson Xavier NX edge device. Field testing validated the model’s robustness, high accuracy, and real-time performance, confirming its suitability for integration into harvesting robots operating in practical orchard environments. Full article
(This article belongs to the Section Sensors and Robotics)
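YOLO-DCL integrates coordinate attention (CA) into its feature pyramid; as a reference point, a standard CA block pools features along height and width separately so the attention map keeps positional information along each axis. The sketch below uses an illustrative reduction ratio and layer layout, not the paper's.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: direction-aware pooling followed by per-axis
    attention maps. Reduction ratio and layer details are illustrative."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                      # (b, c, h, 1): pool over width
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (b, c, w, 1): pool over height
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w)).permute(0, 1, 3, 2)    # attention along width
        return x * a_h * a_w
```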

20 pages, 16638 KB  
Article
GIA-YOLO: A Target Detection Method for Nectarine Picking Robots in Facility Orchards
by Longlong Ren, Yuqiang Li, Yonghui Du, Ang Gao, Wei Ma, Yuepeng Song and Xingchang Han
Agronomy 2025, 15(8), 1934; https://doi.org/10.3390/agronomy15081934 - 11 Aug 2025
Viewed by 304
Abstract
The complex and variable environment of facility orchards poses significant challenges for intelligent robotic operations. To address issues such as nectarine fruit occlusion by branches and leaves, complex backgrounds, and the demand for high real-time detection performance, this study proposes a target detection model for nectarine fruit based on the YOLOv11 architecture—Ghost–iEMA–ADown You Only Look Once (GIA-YOLO). We introduce the GhostModule to reduce the model size and floating-point operations, adopt the fusion attention mechanism iEMA to enhance feature extraction, and further optimize the network structure through the ADown lightweight downsampling module. Test results show that GIA-YOLO achieves 93.9% precision, 88.9% recall, and 96.2% mAP, which are 2.2, 1.1, and 0.7 percentage points higher than YOLOv11, respectively; the model size is reduced to 5.0 MB and the floating-point operations to 5.2 G, which are 9.1% and 17.5% less than the original model, respectively. The model was deployed in a picking robot system and field tested in a facility nectarine orchard; the results show that GIA-YOLO maintains high detection precision and stability at different picking distances, with an overall missed detection rate of 6.65% and a false detection rate of 8.7%, while supporting real-time detection at 41.6 FPS. These results provide an important reference for the design and application of nectarine detection models in facility agriculture environments. Full article
(This article belongs to the Section Precision and Digital Agriculture)
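The GhostModule mentioned in the abstract reduces size and FLOPs by generating part of the feature maps with cheap depthwise convolutions; a generic sketch of that idea follows, with the channel ratio and kernel size as assumptions rather than GIA-YOLO's settings.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module: a 1x1 convolution produces a few 'intrinsic' feature maps,
    then cheap depthwise convolutions generate the remaining 'ghost' maps,
    roughly halving parameters and FLOPs. Sizes here are illustrative."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, out_ch - init_ch, dw_kernel,
                      padding=dw_kernel // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)          # cheap depthwise "ghost" features
        return torch.cat([intrinsic, ghost], dim=1)
```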

20 pages, 19463 KB  
Article
Enhanced Visual Detection and Path Planning for Robotic Arms Using Yolov10n-SSE and Hybrid Algorithms
by Hongjun Wang, Anbang Zhao, Yongqi Zhong, Gengming Zhang, Fengyun Wu and Xiangjun Zou
Agronomy 2025, 15(8), 1924; https://doi.org/10.3390/agronomy15081924 - 9 Aug 2025
Viewed by 334
Abstract
Pineapple harvesting in natural orchard environments faces challenges such as high occlusion rates caused by foliage and the need for complex spatial planning to guide robotic arm movement in cluttered terrains. This study proposes an innovative visual detection model, Yolov10n-SSE, which integrates split convolution (SPConv), squeeze-and-excitation (SE) attention, and efficient multi-scale attention (EMA) modules. These improvements enhance detection accuracy while reducing computational complexity. The proposed model achieves notable performance gains in precision (93.8%), recall (84.9%), and mAP (91.8%). Additionally, a dimensionality-reduction strategy transforms 3D path planning into a more efficient 2D image-space task using point clouds from a depth camera. Combining the artificial potential field (APF) method with an improved RRT* algorithm mitigates randomness, ensures obstacle avoidance, and reduces computation time. Experimental validation demonstrates the superior stability of this approach and its generation of collision-free paths, while robotic arm simulation in ROS confirms real-world feasibility. This integrated approach to detection and path planning provides a scalable technical solution for automated pineapple harvesting, addressing key bottlenecks in agricultural robotics and fostering advancements in fruit-picking automation. Full article
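The paper combines an artificial potential field (APF) with an improved RRT* in a 2D image-space plan; the sketch below illustrates only the APF half (attractive pull toward the goal, repulsive push from nearby obstacles), with gains and influence radius chosen for illustration rather than taken from the paper.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, rho0=15.0, step=1.0):
    """One gradient step of a 2D artificial potential field: attraction toward
    the goal plus repulsion from obstacles inside the influence radius rho0.
    All gains are illustrative assumptions."""
    force = k_att * (goal - pos)                 # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < rho0:                      # repel only within rho0
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

# Example: walk from (0, 0) toward (80, 60) past two obstacle points.
pos, goal = np.array([0.0, 0.0]), np.array([80.0, 60.0])
obstacles = [np.array([40.0, 30.0]), np.array([60.0, 45.0])]
path = [pos]
for _ in range(200):
    pos = apf_step(pos, goal, obstacles)
    path.append(pos)
    if np.linalg.norm(goal - pos) < 1.0:
        break
```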

24 pages, 79369 KB  
Article
A Study on Tree Species Recognition in UAV Remote Sensing Imagery Based on an Improved YOLOv11 Model
by Qian Wang, Zhi Pu, Lei Luo, Lei Wang and Jian Gao
Appl. Sci. 2025, 15(16), 8779; https://doi.org/10.3390/app15168779 - 8 Aug 2025
Viewed by 298
Abstract
Unmanned aerial vehicle (UAV) remote sensing has become an important tool for high-resolution tree species identification in orchards and forests. However, irregular spatial distribution, overlapping canopies, and small crown sizes still limit detection accuracy. To overcome these challenges, we propose YOLOv11-OAM, an enhanced one-stage object detection model based on YOLOv11. The model incorporates three key modules: omni-dimensional dynamic convolution (ODConv), adaptive spatial feature fusion (ASFF), and a multi-point distance IoU (MPDIoU) loss. A class-balanced augmentation strategy is also applied to mitigate category imbalance. We evaluated YOLOv11-OAM on UAV imagery of six fruit tree species—walnut, prune, apricot, pomegranate, saxaul, and cherry. The model achieved a mean Average Precision (mAP@0.5) of 93.1%, an 11.4% improvement over the YOLOv11 baseline. These results demonstrate that YOLOv11-OAM can accurately detect small and overlapping tree crowns in complex orchard environments, offering a reliable solution for precision agriculture and smart forestry applications. Full article
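The abstract adopts an MPDIoU bounding-box loss; one published MPDIoU formulation penalizes the normalized squared distances between corresponding box corners on top of the IoU term. A PyTorch sketch of that formulation follows — the exact variant used in YOLOv11-OAM may differ.

```python
import torch

def mpdiou_loss(pred, target, img_w, img_h):
    """IoU loss with corner-distance penalties in the spirit of MPDIoU.
    Boxes are (x1, y1, x2, y2); squared distances between the top-left and
    bottom-right corners are normalized by the image diagonal."""
    # Intersection area
    ix1 = torch.max(pred[..., 0], target[..., 0])
    iy1 = torch.max(pred[..., 1], target[..., 1])
    ix2 = torch.min(pred[..., 2], target[..., 2])
    iy2 = torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + 1e-7)

    # Squared corner distances, normalized by the image diagonal
    diag2 = img_w ** 2 + img_h ** 2
    d1 = ((pred[..., 0] - target[..., 0]) ** 2 + (pred[..., 1] - target[..., 1]) ** 2) / diag2
    d2 = ((pred[..., 2] - target[..., 2]) ** 2 + (pred[..., 3] - target[..., 3]) ** 2) / diag2

    return 1.0 - (iou - d1 - d2)
```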

21 pages, 8998 KB  
Article
YOLOv8n-FDE: An Efficient and Lightweight Model for Tomato Maturity Detection
by Xin Gao, Jieyuan Ding, Mengxuan Bie, Hao Yu, Yang Shen, Ruihong Zhang and Xiaobo Xi
Agronomy 2025, 15(8), 1899; https://doi.org/10.3390/agronomy15081899 - 7 Aug 2025
Viewed by 250
Abstract
To address the challenges of tomato maturity detection in natural environments—such as interference from complex backgrounds and the difficulty in distinguishing adjacent fruits with similar maturity levels—this study proposes a lightweight tomato maturity detection model, YOLOv8n-FDE. Four maturity stages are defined: mature, turning-mature, color-changing, and immature. The model incorporates a newly designed C3-FNet feature extraction and fusion module to enhance target feature representation, and integrates the DySample operator to improve adaptability under complex conditions. Furthermore, the detection head is optimized as the parameter-sharing lightweight detection head (PSLD), which boosts the accuracy of multi-scale tomato fruit feature prediction and precisely focuses on tomato color characteristics. A novel PIoUv2 loss function is also introduced to further improve localization performance and accelerate convergence. Experimental results demonstrate that the improved YOLOv8n-FDE model achieves a parameter count of 1.56 × 10⁶, computational complexity of 4.5 GFLOPs, and a model size of 3.20 MB. The model attains an mAP@0.5 of 97.6%, representing reductions of 46%, 21%, and 60% in parameter count, computation, and size, respectively, compared to YOLOv8n, with a 1.8 percentage point increase in mAP@0.5. This study significantly reduces model complexity and improves the accuracy of tomato maturity detection, providing a more robust data foundation for subsequent orchard yield prediction. Full article
(This article belongs to the Section Precision and Digital Agriculture)

25 pages, 4021 KB  
Article
A Hybrid Path Planning Algorithm for Orchard Robots Based on an Improved D* Lite Algorithm
by Quanjie Jiang, Yue Shen, Hui Liu, Zohaib Khan, Hao Sun and Yuxuan Huang
Agriculture 2025, 15(15), 1698; https://doi.org/10.3390/agriculture15151698 - 6 Aug 2025
Viewed by 356
Abstract
Due to the complex spatial structure, dense tree distribution, and narrow passages in orchard environments, traditional path planning algorithms often struggle with large path deviations, frequent turning, and reduced navigational safety. In order to overcome these challenges, this paper proposes a hybrid path planning algorithm based on improved D* Lite for narrow forest orchard environments. The proposed approach enhances path feasibility and improves the robustness of the navigation system. The algorithm begins by constructing a 2D grid map reflecting the orchard layout and inflates the tree regions to create safety buffers for reliable path planning. For global path planning, an enhanced D* Lite algorithm is used with a cost function that jointly considers centerline proximity, turning angle smoothness, and directional consistency. This guides the path to remain close to the orchard row centerline, improving structural adaptability and path rationality. Narrow passages along the initial path are detected, and local replanning is performed using a Hybrid A* algorithm that accounts for the kinematic constraints of a differential tracked robot. This generates curvature-continuous and directionally stable segments that replace the original narrow-path portions. Finally, a gradient descent method is applied to smooth the overall path, improving trajectory continuity and execution stability. Field experiments in representative orchard environments demonstrate that the proposed hybrid algorithm significantly outperforms traditional D* Lite and KD* Lite-B methods in terms of path accuracy and navigational safety. The average deviation from the centerline is only 0.06 m, representing reductions of 75.55% and 38.27% compared to traditional D* Lite and KD* Lite-B, respectively, thereby enabling high-precision centerline tracking. Moreover, the number of hazardous nodes, defined as path points near obstacles, was reduced to five, marking decreases of 92.86% and 68.75%, respectively, and substantially enhancing navigation safety. These results confirm the method’s strong applicability in complex, constrained orchard environments and its potential as a foundation for efficient, safe, and fully autonomous agricultural robot operation. Full article
(This article belongs to the Special Issue Perception, Decision-Making, and Control of Agricultural Robots)
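Two steps the abstract describes — inflating tree regions to create safety buffers, and an edge cost that weighs centerline proximity, turning smoothness, and directional consistency — can be sketched generically as below; the structuring element and cost weights are assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def inflate_obstacles(grid, radius_cells):
    """Inflate occupied cells (1 = tree) on a 2D grid map so the planner keeps
    a safety buffer around trunks; the square structuring element is illustrative."""
    size = 2 * radius_cells + 1
    return binary_dilation(grid.astype(bool), np.ones((size, size))).astype(np.uint8)

def edge_cost(cur, nxt, prev_dir, centerline_x,
              w_center=1.0, w_turn=0.5, w_dir=0.3):
    """Composite edge cost in the spirit of the improved D* Lite: step length plus
    penalties for straying from the row centerline, for sharp heading changes,
    and for reversing direction. Weights are assumptions."""
    move = np.subtract(nxt, cur)
    dist = np.linalg.norm(move)
    center_pen = abs(nxt[1] - centerline_x)        # lateral offset from the row centerline
    new_dir = move / (dist + 1e-9)
    turn_pen = 0.0 if prev_dir is None else 1.0 - float(np.dot(prev_dir, new_dir))
    dir_pen = 1.0 if (prev_dir is not None and np.dot(prev_dir, new_dir) < 0) else 0.0
    return dist + w_center * center_pen + w_turn * turn_pen + w_dir * dir_pen
```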

21 pages, 8731 KB  
Article
Individual Segmentation of Intertwined Apple Trees in a Row via Prompt Engineering
by Herearii Metuarea, François Laurens, Walter Guerra, Lidia Lozano, Andrea Patocchi, Shauny Van Hoye, Helin Dutagaci, Jeremy Labrosse, Pejman Rasti and David Rousseau
Sensors 2025, 25(15), 4721; https://doi.org/10.3390/s25154721 - 31 Jul 2025
Viewed by 481
Abstract
Computer vision is of wide interest for high-throughput phenotyping of horticultural crops such as apple trees. In orchards constructed specifically for variety testing or breeding programs, computer vision tools should be able to extract phenotypic information from each tree separately. We focus on segmenting individual apple trees as the main task in this context. Segmenting individual apple trees in dense orchard rows is challenging because of the complexity of outdoor illumination and intertwined branches. Traditional methods rely on supervised learning, which requires a large amount of annotated data. In this study, we explore an alternative approach using prompt engineering with the Segment Anything Model and its variants in a zero-shot setting. Specifically, we first detect the trunk and then position a prompt (five points in a diamond shape) above the detected trunk to feed to the Segment Anything Model. We evaluate our method on apple REFPOP, a new large-scale European apple tree dataset, and on another publicly available dataset. On these datasets, our trunk detector, which utilizes a trained YOLOv11 model, achieves a detection rate of 97%; the prompt placed above the detected trunk yields a Dice score of 70% on the REFPOP dataset and 84% on the publicly available dataset, in both cases without training on the target dataset. We demonstrate that our method equals or even outperforms purely supervised segmentation approaches and non-prompted foundation models. These results underscore the potential of foundation models guided by well-designed prompts as scalable and annotation-efficient solutions for plant segmentation in complex agricultural environments. Full article
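The prompting scheme (five points in a diamond above a detected trunk, fed to the Segment Anything Model zero-shot) maps naturally onto the segment-anything predictor API; the sketch below assumes a local SAM ViT-B checkpoint, an image file name, a hard-coded trunk location, and pixel offsets for the diamond, none of which come from the paper.

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

def diamond_prompt(trunk_x, trunk_top_y, spread=60, lift=40):
    """Five foreground points in a diamond placed above a detected trunk;
    the pixel offsets (spread, lift) are illustrative assumptions."""
    cx, cy = trunk_x, trunk_top_y - lift
    coords = np.array([
        [cx, cy - spread],   # top
        [cx - spread, cy],   # left
        [cx + spread, cy],   # right
        [cx, cy + spread],   # bottom
        [cx, cy],            # centre
    ], dtype=np.float32)
    labels = np.ones(len(coords), dtype=np.int32)   # 1 = foreground point
    return coords, labels

# Hypothetical inputs: an orchard-row image and a trunk location from a
# YOLOv11-style detector (hard-coded here for illustration).
rgb_image = np.asarray(Image.open("orchard_row.jpg").convert("RGB"))
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(rgb_image)

coords, labels = diamond_prompt(trunk_x=512, trunk_top_y=700)
masks, scores, _ = predictor.predict(point_coords=coords,
                                     point_labels=labels,
                                     multimask_output=False)
tree_mask = masks[0]   # boolean H x W mask of the segmented tree
```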

31 pages, 3855 KB  
Article
Exploring Sidewalk Built Environment Design Strategies to Promote Walkability in Tropical Humid Climates
by Pakin Anuntavachakorn, Purinat Pawarana, Tarid Wongvorachan, Chaniporn Thampanichwat and Suphat Bunyarittikit
Buildings 2025, 15(15), 2659; https://doi.org/10.3390/buildings15152659 - 28 Jul 2025
Viewed by 576
Abstract
The world is facing a state of “global boiling,” causing damage to various sectors. Developing pedestrian systems is a key to mitigating it, especially in tropical and humid cities where the climate discourages walking and increases the need for shaded walkways. Recent research shows a lack of data and in-depth studies on the built environment promoting walkability in such climates, creating a research gap this study aims to fill. Using Singapore as a case study, four locations—Marina Bay, Orchard Road, Boat Quay, and Chinatown—were surveyed and analyzed through visual decoding and questionnaires. Results show that natural light is the most frequently observed and important element in pedestrian pathway design in tropical and humid areas. Trees and sidewalks are also important in creating a walk-friendly environment. Green spaces significantly influence the desire to walk, though no clear positive outcomes were found. Additionally, “Other Emotions” negatively affect the decision to walk, suggesting these should be avoided in future pedestrian pathway designs to encourage walking. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

17 pages, 1728 KB  
Article
The Impact of Colony Deployment Timing on Tetragonula carbonaria Crop Fidelity and Resource Use in Macadamia Orchards
by Claire E. Allison, James C. Makinson, Robert N. Spooner-Hart and James M. Cook
Plants 2025, 14(15), 2313; https://doi.org/10.3390/plants14152313 - 26 Jul 2025
Viewed by 421
Abstract
Crop fidelity is a desirable trait for managed pollinators and is influenced by factors like competing forage sources and colony knowledge of the surrounding environment. In European honey bees (Apis mellifera L.), colonies deployed when the crop is flowering display the highest fidelity. We tested for a similar outcome using a stingless bee species that is being increasingly used as a managed pollinator in Australian macadamia orchards. We observed Tetragonula carbonaria (Smith) colonies deployed in macadamia orchards at three time points: (1) before crop flowering (“permanent”), (2) early flowering (“early”), and (3) later in the flowering period (“later”). We captured returning pollen foragers weekly and estimated crop fidelity from the proportion of macadamia pollen they collected, using light microscopy. Pollen foraging activity was also assessed via weekly hive entrance filming. The early and later introduced colonies initially exhibited high fidelity, collecting more macadamia pollen than the permanent colonies. In most cases, the permanent colonies were already collecting diverse pollen species from the local environment and took longer to shift over to macadamia. Pollen diversity increased over time in all colonies, which was associated with an increase in the proportion of pollen foragers. Our results indicate that stingless bees can initially prioritize a mass-flowering crop, even when flowering levels are low, but that they subsequently reduce fidelity over time. Our findings will inform pollinator management strategies to help growers maximize returns from pollinator-dependent crops like macadamia. Full article

30 pages, 92065 KB  
Article
A Picking Point Localization Method for Table Grapes Based on PGSS-YOLOv11s and Morphological Strategies
by Jin Lu, Zhongji Cao, Jin Wang, Zhao Wang, Jia Zhao and Minjie Zhang
Agriculture 2025, 15(15), 1622; https://doi.org/10.3390/agriculture15151622 - 26 Jul 2025
Viewed by 378
Abstract
During the automated picking of table grapes, the automatic recognition and segmentation of grape pedicels, along with the positioning of picking points, are vital components for all subsequent operations of the harvesting robot. In the actual scene of a grape plantation, however, it is extremely difficult to accurately and efficiently identify and segment grape pedicels and then reliably locate the picking points. This is attributable to the low distinguishability between grape pedicels and the surrounding environment such as branches, as well as the impacts of other conditions like weather, lighting, and occlusion, coupled with the requirements for model deployment on edge devices with limited computing resources. To address these issues, this study proposes a novel picking point localization method for table grapes based on an instance segmentation network called Progressive Global-Local Structure-Sensitive Segmentation (PGSS-YOLOv11s) and a simple combination strategy of morphological operators. More specifically, the PGSS-YOLOv11s network is composed of the original backbone of YOLOv11s-seg, a spatial feature aggregation module (SFAM), an adaptive feature fusion module (AFFM), and a detail-enhanced convolutional shared detection head (DE-SCSH). PGSS-YOLOv11s was trained with a new grape segmentation dataset called Grape-⊥, which includes 4455 grape pixel-level instances annotated with ⊥-shaped regions. After PGSS-YOLOv11s segments the ⊥-shaped regions of grapes, morphological operations such as erosion, dilation, and skeletonization are combined to effectively extract grape pedicels and locate picking points. Finally, several experiments were conducted to confirm the validity, effectiveness, and superiority of the proposed method. Compared with other state-of-the-art models, the main metrics F1 score and mask mAP@0.5 of PGSS-YOLOv11s reached 94.6% and 95.2% on the Grape-⊥ dataset, as well as 85.4% and 90.0% on the Winegrape dataset. Multi-scenario tests indicated that the success rate of positioning the picking points reached up to 89.44%. In orchards, real-time tests on an edge device demonstrated the practical performance of our method. Nevertheless, for grapes with short or occluded pedicels, the designed morphological algorithm sometimes failed to compute picking points. In future work, we will enrich the grape dataset by collecting images under different lighting conditions, from various shooting angles, and including more grape varieties to improve the method’s generalization performance. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
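The abstract describes the post-segmentation step only at the level of operator names (erosion, dilation, skeletonization); a minimal OpenCV/scikit-image sketch of that kind of pipeline is shown below, where the kernel size and the rule for choosing the skeleton pixel are assumptions, not the paper's algorithm.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def picking_point_from_mask(region_mask):
    """Locate a picking point from a segmented pedicel-region mask using
    morphological cleanup and skeletonization; kernel sizes and the
    'topmost skeleton pixel' rule are illustrative assumptions."""
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(region_mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
    cleaned = cv2.dilate(cleaned, kernel, iterations=1)   # close small gaps
    skeleton = skeletonize(cleaned > 0)                   # 1-pixel-wide centreline
    ys, xs = np.nonzero(skeleton)
    if len(ys) == 0:
        return None                                       # e.g., short or occluded pedicel
    i = np.argmin(ys)                                     # skeleton pixel nearest the top
    return int(xs[i]), int(ys[i])
```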

31 pages, 11649 KB  
Article
Development of Shunt Connection Communication and Bimanual Coordination-Based Smart Orchard Robot
by Bin Yan and Xiameng Li
Agronomy 2025, 15(8), 1801; https://doi.org/10.3390/agronomy15081801 - 25 Jul 2025
Viewed by 297
Abstract
This research addresses the enhancement of operational efficiency in apple-picking robots through the design of a bimanual spatial configuration enabling obstacle avoidance in contemporary orchard environments. A parallel coordinated harvesting paradigm for dual-arm systems was introduced, leading to the construction and validation of a six-degree-of-freedom bimanual apple-harvesting robot. Leveraging the kinematic architecture of the AUBO-i5 manipulator, three spatial layout configurations for dual-arm systems were evaluated, culminating in the adoption of a “workspace-overlapping Type B” arrangement. A functional prototype of the bimanual apple-harvesting system was subsequently fabricated. The study further involved developing control architectures for two end-effector types: a compliant gripper and a vacuum-based suction mechanism, with corresponding operational protocols established. A networked communication framework for parallel arm coordination was implemented via Ethernet switching technology, enabling both independent and synchronized bimanual operation. Additionally, an intersystem communication protocol was formulated to integrate the robotic vision system with the dual-arm control architecture, establishing a modular parallel execution model between visual perception and motion control modules. A coordinated bimanual harvesting strategy was formulated, incorporating real-time trajectory and pose monitoring of the manipulators. Kinematic simulations were executed to validate the feasibility of this strategy. Field evaluations in modern Red Fuji apple orchards assessed multidimensional harvesting performance, revealing 85.6% and 80% success rates for the suction and gripper-based arms, respectively. Single-fruit retrieval averaged 7.5 s per arm, yielding an overall system efficiency of 3.75 s per fruit. These findings advance the technological foundation for intelligent apple-harvesting systems, offering methodologies for the evolution of precision agronomic automation. Full article
(This article belongs to the Special Issue Smart Farming: Advancing Techniques for High-Value Crops)

18 pages, 2469 KB  
Article
Neural Network-Based SLAM/GNSS Fusion Localization Algorithm for Agricultural Robots in Orchard GNSS-Degraded or Denied Environments
by Huixiang Zhou, Jingting Wang, Yuqi Chen, Lian Hu, Zihao Li, Fuming Xie, Jie He and Pei Wang
Agriculture 2025, 15(15), 1612; https://doi.org/10.3390/agriculture15151612 - 25 Jul 2025
Viewed by 338
Abstract
To address the loss of control of agricultural robots caused by GNSS signal degradation or loss in complex agricultural environments such as farmland and orchards, this study proposes a neural network-based SLAM/GNSS fusion localization algorithm that aims to enhance the robot’s localization accuracy and stability in GNSS-degraded or GNSS-denied environments. The algorithm unifies the coordinate systems of multi-sensor pose observations through alignment preprocessing, optimizes SLAM poses via outlier filtering and drift correction, and dynamically adjusts the weights of poses from the distinct coordinate systems via a neural network according to the GDOP. Experimental results on the robotic platform demonstrate that, compared to the SLAM algorithm without pose optimization, the proposed SLAM/GNSS fusion localization algorithm reduced the whole-process average position deviation by 37%. Compared to a fixed-weight fusion localization algorithm, the proposed algorithm achieved a 74% reduction in average position deviation during transitional segments with GNSS signal degradation or recovery. These results validate the superior positioning accuracy and stability of the proposed algorithm in GNSS-degraded or GNSS-denied environments. Orchard experiments demonstrate that, at an average speed of 0.55 m/s, the proposed algorithm achieves an overall average position deviation of 0.12 m, with 0.06 m in zones of high GNSS signal quality, 0.11 m in transitional sections under signal degradation or recovery, and 0.14 m in fully GNSS-denied environments. These results confirm that the proposed SLAM/GNSS fusion localization algorithm maintains high localization accuracy and stability even under low and highly fluctuating GNSS signal quality, meeting the operational requirements of most agricultural robots. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
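The fusion step — a neural network that weights SLAM and GNSS poses according to GDOP after the coordinate frames are aligned — can be sketched as a small weighting network plus a convex combination of the two pose estimates; the input features, network size, and untrained weights below are assumptions, not the paper's trained model.

```python
import numpy as np
import torch
import torch.nn as nn

class GdopWeightNet(nn.Module):
    """Tiny MLP mapping a GNSS quality indicator (here just GDOP) to a fusion
    weight in [0, 1]; input features and layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, gdop):
        return torch.sigmoid(self.net(gdop))   # weight given to the GNSS pose

def fuse_pose(slam_xy, gnss_xy, gdop, weight_net):
    """Convex combination of SLAM and GNSS positions, both assumed to be
    already aligned to a common frame by the preprocessing step."""
    w = weight_net(torch.tensor([[gdop]], dtype=torch.float32)).item()
    return w * gnss_xy + (1.0 - w) * slam_xy

# Usage: after training, a large GDOP (poor satellite geometry) should push
# the learned weight toward the SLAM estimate.
net = GdopWeightNet()
fused = fuse_pose(np.array([12.3, 4.1]), np.array([12.6, 4.0]),
                  gdop=6.5, weight_net=net)
```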

25 pages, 6123 KB  
Article
SDA-YOLO: An Object Detection Method for Peach Fruits in Complex Orchard Environments
by Xudong Lin, Dehao Liao, Zhiguo Du, Bin Wen, Zhihui Wu and Xianzhi Tu
Sensors 2025, 25(14), 4457; https://doi.org/10.3390/s25144457 - 17 Jul 2025
Viewed by 558
Abstract
To address the challenges of leaf–branch occlusion, fruit mutual occlusion, complex background interference, and scale variations in peach detection within complex orchard environments, this study proposes an improved YOLOv11n-based peach detection method named SDA-YOLO. First, in the backbone network, the LSKA module is embedded into the SPPF module to construct an SPPF-LSKA fusion module, enhancing multi-scale feature representation for peach targets. Second, an MPDIoU-based bounding box regression loss function replaces CIoU to improve localization accuracy for overlapping and occluded peaches. The DyHead Block is integrated into the detection head to form a DMDetect module, strengthening feature discrimination for small and occluded targets in complex backgrounds. To address insufficient feature fusion flexibility caused by scale variations from occlusion and illumination differences in multi-scale peach detection, a novel Adaptive Multi-Scale Fusion Pyramid (AMFP) module is proposed to enhance the neck network, improving flexibility in processing complex features. Experimental results demonstrate that SDA-YOLO achieves precision (P), recall (R), mAP@0.5, and mAP@0.5:0.95 of 90.8%, 85.4%, 90%, and 62.7%, respectively, surpassing YOLOv11n by 2.7%, 4.8%, 2.7%, and 7.2%. This verifies the method’s robustness in complex orchard environments and provides effective technical support for intelligent fruit harvesting and yield estimation. Full article
