Search Results (2,706)

Search Parameters:
Keywords = autonomous robots

25 pages, 6045 KB  
Article
Energy-Aware Sensor Fusion Architecture for Autonomous Channel Robot Navigation in Constrained Environments
by Mohamed Shili, Hicham Chaoui and Khaled Nouri
Sensors 2025, 25(21), 6524; https://doi.org/10.3390/s25216524 - 23 Oct 2025
Abstract
Navigating autonomous robots in confined channels is inherently challenging due to limited space, dynamic obstacles, and energy constraints. Existing sensor fusion strategies often consume excessive power because all sensors remain active regardless of environmental conditions. This paper presents an energy-aware adaptive sensor fusion framework for channel robots that deploys RGB cameras, laser range finders, and IMU sensors according to environmental complexity. Sensor data are fused using an adaptive Extended Kalman Filter (EKF), which selectively integrates multi-sensor information to maintain high navigation accuracy while minimizing energy consumption. An energy management module dynamically adjusts sensor activation and computational load, enabling significant reductions in power consumption while preserving navigation reliability. The proposed system is implemented on a low-power microcontroller and evaluated through simulations and prototype testing in constrained channel environments. Results show a 35% reduction in energy consumption with minimal impact on navigation performance, demonstrating the framework’s effectiveness for long-duration autonomous operations in pipelines, sewers, and industrial ducts.
(This article belongs to the Section Sensors and Robotics)
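A minimal sketch of the gating idea behind such an energy-aware adaptive EKF: sensors are activated only when an environment-complexity score demands them, and the update step fuses whatever subset is active. The complexity heuristic, sensor models, and power figures below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Constant-velocity state [position, velocity]; dt-discretized motion model.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
Q = np.diag([1e-4, 1e-3])               # process noise

# Hypothetical sensor suite: name -> (measurement row H, noise var, power in W)
SENSORS = {
    "laser": (np.array([[1.0, 0.0]]), 0.02**2, 1.5),
    "camera": (np.array([[1.0, 0.0]]), 0.10**2, 2.5),
}

def active_set(complexity):
    """Gate sensors by a scene-complexity score in [0, 1] (assumed heuristic)."""
    if complexity < 0.3:
        return ["laser"]              # simple scene: cheap ranging only
    return ["laser", "camera"]        # cluttered scene: fuse everything

x, P = np.zeros(2), np.eye(2)
energy = 0.0
rng = np.random.default_rng(0)
for k in range(100):
    true_pos = 0.5 * k * dt                     # ground truth for the demo
    complexity = 0.8 if 30 <= k < 60 else 0.1   # synthetic clutter burst
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with each currently active sensor only; idle sensors cost nothing.
    for name in active_set(complexity):
        H, r, power = SENSORS[name]
        z = true_pos + rng.normal(0.0, np.sqrt(r))
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        energy += power * dt
print(f"final pos estimate: {x[0]:.3f}, energy used: {energy:.1f} J")
```

The design point the abstract makes is visible here: accuracy degrades only slightly in simple scenes while the per-step energy roughly halves whenever the camera is gated off.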

20 pages, 2618 KB  
Article
TBC-HRL: A Bio-Inspired Framework for Stable and Interpretable Hierarchical Reinforcement Learning
by Zepei Li, Yuhan Shan and Hongwei Mo
Biomimetics 2025, 10(11), 715; https://doi.org/10.3390/biomimetics10110715 - 22 Oct 2025
Abstract
Hierarchical Reinforcement Learning (HRL) is effective for long-horizon and sparse-reward tasks by decomposing complex decision processes, but its real-world application remains limited due to instability between levels, inefficient subgoal scheduling, delayed responses, and poor interpretability. To address these challenges, we propose Timed and Bionic Circuit Hierarchical Reinforcement Learning (TBC-HRL), a biologically inspired framework that integrates two mechanisms. First, a timed subgoal scheduling strategy assigns a fixed execution duration τ to each subgoal, mimicking rhythmic action patterns in animal behavior to improve inter-level coordination and maintain goal consistency. Second, a Neuro-Dynamic Bionic Circuit Network (NDBCNet), inspired by the neural circuitry of C. elegans, replaces conventional fully connected networks in the low-level controller. Featuring sparse connectivity, continuous-time dynamics, and adaptive responses, NDBCNet models temporal dependencies more effectively while offering improved interpretability and reduced computational overhead, making it suitable for resource-constrained platforms. Experiments across six dynamic and complex simulated tasks show that TBC-HRL consistently improves policy stability, action precision, and adaptability compared with traditional HRL, demonstrating the practical value and future potential of biologically inspired structures in intelligent control systems.
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
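A minimal sketch of the timed subgoal scheduling mechanism: the high-level policy emits a new subgoal only every τ steps, and the low-level controller pursues the current subgoal in between. The 1D point-mass task and the greedy "policies" are illustrative stand-ins for the learned networks described in the paper.

```python
import numpy as np

TAU = 10          # fixed execution duration tau assigned to each subgoal
GOAL = 10.0       # final task goal (assumed toy task)

def high_level_policy(state, goal):
    """Propose a subgoal partway toward the goal (stand-in for a learned net)."""
    return state + np.clip(goal - state, -3.0, 3.0)

def low_level_policy(state, subgoal):
    """Move toward the current subgoal (stand-in for NDBCNet)."""
    return np.clip(subgoal - state, -0.5, 0.5)

state, t, subgoal = 0.0, 0, None
while abs(state - GOAL) > 0.1 and t < 200:
    if t % TAU == 0:                            # re-plan only on tau boundaries
        subgoal = high_level_policy(state, GOAL)
    state += low_level_policy(state, subgoal)   # low level acts every step
    t += 1
print(f"reached {state:.2f} in {t} steps with subgoal horizon tau={TAU}")
```

Holding each subgoal for a fixed τ is what stabilizes the interface between levels: the low-level controller optimizes against a stationary target rather than one that shifts every step.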

24 pages, 6113 KB  
Article
Vision-Based Reinforcement Learning for Robotic Grasping of Moving Objects on a Conveyor
by Yin Cao, Xuemei Xu and Yazheng Zhang
Machines 2025, 13(10), 973; https://doi.org/10.3390/machines13100973 - 21 Oct 2025
Abstract
This study introduces an autonomous framework for grasping moving objects on a conveyor belt, enabling unsupervised detection, grasping, and categorization. The work focuses on two common object shapes—cylindrical cans and rectangular cartons—transported at a constant speed of 3–7 cm/s on the conveyor, emulating typical scenarios. The proposed framework combines a vision-based neural network for object detection, a target localization algorithm, and a deep reinforcement learning model for robotic control. Specifically, a YOLO-based neural network was employed to detect the 2D position of target objects. These positions are then converted to 3D coordinates, followed by pose estimation and error correction. A Proximal Policy Optimization (PPO) algorithm was then used to provide continuous control decisions for the robotic arm. A tailored reinforcement learning environment was developed using the Gymnasium interface. Training and validation were conducted on a 7-degree-of-freedom (7-DOF) robotic arm model in the PyBullet physics simulation engine. By leveraging transfer learning and curriculum learning strategies, the robotic agent effectively learned to grasp multiple categories of moving objects. Simulation experiments and randomized trials show that the proposed method enables the 7-DOF robotic arm to consistently grasp conveyor belt objects, achieving an approximately 80% success rate at conveyor speeds of 0.03–0.07 m/s. These results demonstrate the potential of the framework for deployment in automated handling applications.
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)
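A minimal Gymnasium-style environment sketch for this kind of conveyor grasping setup: the object drifts along the belt at the paper's 0.03–0.07 m/s and the agent moves the gripper toward it. The kinematic dynamics, grasp radius, and reward shaping here are illustrative assumptions standing in for the paper's PyBullet scene and 7-DOF arm model.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ConveyorGraspEnv(gym.Env):
    def __init__(self):
        # Observation: gripper xyz + object xyz; action: gripper velocity command.
        self.observation_space = spaces.Box(-2.0, 2.0, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-0.1, 0.1, shape=(3,), dtype=np.float32)
        self.dt = 0.05

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.gripper = np.array([0.0, 0.3, 0.3], dtype=np.float32)
        self.obj = np.array([-0.5, 0.0, 0.05], dtype=np.float32)
        # Belt speed sampled uniformly in the paper's 0.03-0.07 m/s range.
        self.belt = self.np_random.uniform(0.03, 0.07)
        self.t = 0
        return self._obs(), {}

    def _obs(self):
        return np.concatenate([self.gripper, self.obj]).astype(np.float32)

    def step(self, action):
        self.obj[0] += self.belt * self.dt            # object rides the belt
        self.gripper += np.asarray(action) * self.dt  # velocity control
        dist = float(np.linalg.norm(self.gripper - self.obj))
        self.t += 1
        grasped = dist < 0.03                         # assumed grasp radius
        reward = -dist + (10.0 if grasped else 0.0)   # shaped + sparse bonus
        return self._obs(), reward, grasped, self.t >= 400, {}

env = ConveyorGraspEnv()
obs, _ = env.reset(seed=0)
obs, r, term, trunc, _ = env.step(env.action_space.sample())
print(obs.shape, r)
```

An environment of this shape can be handed directly to an off-the-shelf PPO implementation; curriculum learning as described in the abstract would amount to starting with a slower belt and widening the speed range over training.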

28 pages, 5802 KB  
Review
AI and Robotics in Agriculture: A Systematic and Quantitative Review of Research Trends (2015–2025)
by Abderrachid Hamrani, Amin Allouhi, Fatma Zohra Bouarab and Krish Jayachandran
Crops 2025, 5(5), 75; https://doi.org/10.3390/crops5050075 - 21 Oct 2025
Abstract
The swift integration of AI, robotics, and advanced sensing technologies has transformed agriculture into a data-centric, autonomous, and sustainable sector. This systematic study examines the interplay between artificial intelligence and agricultural robotics in intelligent farming systems. Artificial intelligence, machine learning, computer vision, swarm robotics, and generative AI are analyzed for crop monitoring, precision irrigation, autonomous harvesting, and post-harvest processing. PRISMA was employed to categorize more than 10,000 high-impact publications from Scopus, WoS, and IEEE. Drones and vision-based models predominate the industry, while IoT integration, digital twins, and generative AI are on the rise. Insufficient field validation rates, inadequate crop and regional representation, and the implementation of explainable AI continue to pose significant challenges. Inadequate model generalization, energy limitations, and infrastructural restrictions impede scalability. We identify solutions in federated learning, swarm robotics, and climate-smart agricultural artificial intelligence. This paper presents a framework for inclusive, resilient, and feasible AI-robotic agricultural systems.

45 pages, 1074 KB  
Systematic Review
A Systematic Review of Sustainable Ground-Based Last-Mile Delivery of Parcels: Insights from Operations Research
by Nima Moradi, Fereshteh Mafakheri and Chun Wang
Vehicles 2025, 7(4), 121; https://doi.org/10.3390/vehicles7040121 - 21 Oct 2025
Abstract
The importance of Last-Mile Delivery (LMD) in the current economy cannot be overstated, as it is the final and most crucial step in the supply chain between retailers and consumers. In major cities, absent intervention, urban LMD emissions are projected to rise by >30% by 2030 as e-commerce grows (top-100-city “do-nothing” baseline). Sustainable, innovative ground-based solutions for LMD, such as Electric Vehicles, autonomous delivery robots, parcel lockers, pick-up points, crowdsourcing, and freight-on-transit, can revolutionize urban logistics by reducing congestion and pollution while improving efficiency. However, developing these solutions presents challenges in Operations Research (OR), including problem modeling, optimization, and computations. This systematic review aims to provide an OR-centric synthesis of sustainable, ground-based LMD by (i) classifying these innovative solutions across problem types and methods, (ii) linking technique classes to sustainability goals (cost, emissions/energy, service, resilience, and equity), and (iii) identifying research gaps and promising hybrid designs. We support this synthesis by systematically screening 283 records (2010–2025) and analyzing 265 eligible studies. Based on the gap analysis, researchers and practitioners are encouraged to explore new combinations of innovative solutions for ground-based LMD. While these solutions offer benefits, their complexity requires advanced solution algorithms and decision-making frameworks.

22 pages, 4351 KB  
Article
A Deployment-Oriented Benchmarking of You Look Only Once (YOLO) Models for Orange Detection and Segmentation in Agricultural Robotics
by Caner Beldek, Emre Sariyildiz and Gursel Alici
Agriculture 2025, 15(20), 2170; https://doi.org/10.3390/agriculture15202170 - 20 Oct 2025
Abstract
The deployment of autonomous robots is critical for advancing sustainable agriculture, but their effectiveness hinges on visual perception systems that can reliably operate in natural, real-world environments. Selecting an appropriate vision model for these robots requires a practical evaluation that extends beyond standard accuracy metrics to include critical deployment factors such as computational efficiency, energy consumption, and robustness to environmental disturbances. To address this need, this study presents a deployment-oriented benchmark of state-of-the-art You Look Only Once (YOLO)-based models for orange detection and segmentation. Following a systematic process, the selected models were evaluated on a unified public dataset, annotated to rigorously assess real-world challenges. Performance was compared across five key dimensions: (i) identification accuracy, (ii) robustness, (iii) model complexity, (iv) execution time, and (v) energy consumption. The results show that the YOLOv5 variants achieved the most accurate detection and segmentation. Notably, YOLO11-based models demonstrated strong and consistent results under all disturbance levels, highlighting their robustness. Lightweight architectures proved well-suited for resource-constrained operations. Interestingly, custom models did not consistently outperform their baselines, while nanoscale models showed demonstrable potential for meeting real-time and energy-efficient requirements. These findings offer valuable, evidence-based guidelines for the vision systems of precision agriculture robots.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
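A rough sketch of the kind of deployment-oriented latency benchmark the paper describes, using the ultralytics API. The weight file names, image size, and warmup/measurement split are assumptions; energy measurement (e.g., via an external power meter or on-board telemetry on a Jetson-class device) is out of scope here.

```python
import time
import numpy as np
from ultralytics import YOLO

candidates = ["yolov5nu.pt", "yolo11n.pt"]       # assumed model weights
frame = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)

for name in candidates:
    model = YOLO(name)
    for _ in range(5):                           # warmup runs (JIT, caching)
        model.predict(frame, verbose=False)
    t0 = time.perf_counter()
    n = 50
    for _ in range(n):                           # timed runs
        model.predict(frame, verbose=False)
    latency = (time.perf_counter() - t0) / n
    print(f"{name}: {latency*1e3:.1f} ms/frame ({1/latency:.1f} FPS)")
```

Pairing a loop like this with accuracy metrics on a held-out set gives the accuracy-versus-latency trade-off curve on which model selection for a field robot ultimately rests.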

21 pages, 5019 KB  
Article
Real-Time Parking Space Detection Based on Deep Learning and Panoramic Images
by Wu Wei, Hongyang Chen, Jiayuan Gong, Kai Che, Wenbo Ren and Bin Zhang
Sensors 2025, 25(20), 6449; https://doi.org/10.3390/s25206449 - 18 Oct 2025
Abstract
In the domain of automatic parking systems, parking space detection and localization represent fundamental challenges that must be addressed. As a core research focus within the field of intelligent automatic parking, they constitute the essential prerequisite for the realization of fully autonomous parking, and accurate, effective detection of parking spaces remains the core unsolved problem. In this study, building upon existing public parking space datasets, a comprehensive panoramic parking space dataset named PSEX (Parking Slot Extended) with complex environmental diversity was constructed by applying GAN (Generative Adversarial Network)-based image style transfer. Meanwhile, an improved algorithm based on PP-Yoloe (PaddlePaddle Yoloe) detects the state (free or occupied) and angle type (T-shaped or L-shaped) of parking spaces in real time. To handle the numerous small parking space labels, the ResSpp module is replaced by a ResSimSppf module, a SimSppf structure is introduced at the neck, SiLU is replaced by ReLU in the basic CBS (Conv-BN-SiLU) structure, and an auxiliary detection head is added at the prediction head. Experimental results show that the proposed SimSppf_mepre-Yoloe model achieves an average improvement of 4.5% in mAP50 and 2.95% in mAP50:95 over the baseline PP-Yoloe across various parking space detection tasks. In terms of efficiency, the model maintains inference latency comparable to the baseline, reaching up to 33.7 FPS on the Jetson AGX Xavier platform under TensorRT optimization, and the improved enhancement algorithm greatly enriches the diversity of parking space data. These results demonstrate that the proposed model achieves a better balance between detection accuracy and real-time performance, making it suitable for deployment in intelligent vehicle and robotic perception systems.
(This article belongs to the Special Issue Robot Swarm Collaboration in the Unstructured Environment)
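A sketch of a SimSPPF-style block of the kind the abstract references: the SPPF fast spatial pyramid pooling structure with ReLU in place of SiLU inside the Conv-BN-activation (CBS) unit. Channel sizes and kernel choices are illustrative; this is not the authors' exact module.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """CBS block with SiLU swapped for ReLU, as the abstract describes."""
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SimSPPF(nn.Module):
    """SPPF: three chained 5x5 max-pools, concatenated, then fused by 1x1 conv."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = ConvBNReLU(c_in, c_mid)
        self.pool = nn.MaxPool2d(k, stride=1, padding=k // 2)
        self.cv2 = ConvBNReLU(c_mid * 4, c_out)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)       # chaining pools emulates larger receptive fields
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

out = SimSPPF(256, 256)(torch.randn(1, 256, 20, 20))
print(out.shape)  # torch.Size([1, 256, 20, 20])
```

The ReLU swap trades a small amount of representational smoothness for faster inference on embedded hardware, which is consistent with the paper's Jetson deployment target.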

24 pages, 14492 KB  
Article
Design and Control of a Bionic Underwater Collector Based on the Mouth Mechanism of Stomiidae
by Zexing Mo, Ping Ren, Lei Zhang, Jisheng Zhou, Yaru Li, Bowei Cui and Luze Wang
J. Mar. Sci. Eng. 2025, 13(10), 2001; https://doi.org/10.3390/jmse13102001 - 18 Oct 2025
Abstract
Deep-sea mining has gradually emerged as a core domain in global resource exploitation. Underwater autonomous robots, characterized by low cost, high flexibility, and lightweight properties, demonstrate significant advantages in deep-sea mineral development. To address the limitations of traditional deep-sea mining equipment, such as large volume, high energy consumption, and insufficient flexibility, this paper proposes an innovative Underwater Vehicle Collector System (UVCS). Integrating bionic design with autonomous robotic technology, this system features a collection device mimicking the large opening–closing kinematics of the mouth of the deep-sea dragonfish (Stomiidae). A dual-rocker mechanism realizes the mouth opening–closing function, and the collection process is driven by the pitching motion of the vehicle without additional motors, achieving high flexibility, low energy consumption, and light weight. The system is capable of collecting seabed polymetallic nodules with diameters ranging from 1 to 12 cm, providing a new solution for sustainable deep-sea mining. Based on the dynamics of the UVCS, this paper verifies its attitude stability and collection efficiency in planar motions through single-cycle and multi-cycle simulation analyses. The simulation results indicate that the system operates stably with reliable collection actions. Furthermore, water tank tests demonstrate the opening and closing functions of the UVCS collection device, fully confirming its design feasibility and application potential. In conclusion, the UVCS, through its integration of bionic design, opens up a new path for practical applications in deep-sea resource exploitation.
(This article belongs to the Section Ocean Engineering)

23 pages, 2114 KB  
Review
A Conceptual Framework for Sustainable AI-ERP Integration in Dark Factories: Synthesising TOE, TAM, and IS Success Models for Autonomous Industrial Environments
by Md Samirul Islam, Md Iftakhayrul Islam, Abdul Quddus Mozumder, Md Tamjidul Haq Khan, Niropam Das and Nur Mohammad
Sustainability 2025, 17(20), 9234; https://doi.org/10.3390/su17209234 - 17 Oct 2025
Abstract
This study explores a conceptual framework for integrating Artificial Intelligence (AI) into Enterprise Resource Planning (ERP) systems, emphasising its transformative potential in highly automated industrial environments, often referred to as ‘dark factories’, where operations are carried out with minimal human intervention using robotics, AI, and IoT. These lights-out manufacturing environments demand intelligent, autonomous systems that go beyond traditional ERP functionalities to deliver sustainable enterprise operations and supply chain management. Drawing from secondary data and a comprehensive review of the existing literature, the study identifies significant gaps in current AI-ERP research and practice, namely, the absence of a unified adoption framework, limited focus on AI-specific implementation challenges, and a lack of structured post-adoption evaluation metrics. In response, this paper proposes a novel integrated conceptual framework that combines the Technology–Organisation–Environment (TOE) framework, the Technology Acceptance Model (TAM), and the Information Systems (IS) Success Model. The model incorporates dark-factory-specific factors, such as AI autonomy, human–machine collaboration, operational agility, and sustainability, by optimising resource efficiency, enabling predictive maintenance, enhancing supply chain resilience, and supporting circular economy practices. The primary aim of the current study is to provide a theoretical foundation for further empirical research on the impact of AI-ERP systems in autonomous industrial settings. The framework provides a robust theoretical foundation and actionable guidance for researchers, technology leaders, and policy-makers navigating the integration of AI and ERP in sustainable enterprise operations and supply chain management.
(This article belongs to the Special Issue Sustainable Enterprise Operation and Supply Chain Management)

24 pages, 9665 KB  
Article
Achieving Accurate Turns with LEGO SPIKE Prime Robots
by Attila Körei, Szilvia Szilágyi and Ingrida Vaičiulyté
Robotics 2025, 14(10), 145; https://doi.org/10.3390/robotics14100145 - 17 Oct 2025
Abstract
LEGO SPIKE Prime robots (The LEGO Group, Billund, Denmark) are widely used in educational settings to foster STEM skills and develop problem-solving competencies. A common task in robotics classes and competitions is moving and controlling wheeled vehicles where precise manoeuvrability, especially turning, is essential for successful navigation. This study aims to provide a comprehensive analysis of the turning mechanisms of LEGO SPIKE Prime robots to facilitate more accurate and effective control. This research combines theoretical analysis with experimental validation. We mathematically derived formulas to relate wheel speeds and steering parameters to the turning radius, and we used regression analysis to refine our models. Additionally, we developed a method where the robot itself collects data on its turning performance, enabling autonomous regression modelling. We found that directly adjusting wheel speeds offers greater precision in turning than using a steering parameter. This finding is supported by the results of the Wilcoxon test, which was performed on a random sample of 30 elements and showed that the effect size is significant (r = 0.7) at a significance level of 0.05. This study provides educators and students with a detailed understanding of turning mechanisms and offers guidance on practical and effective means for achieving the accuracy and consistency needed in educational robotics and robot competitions.
(This article belongs to the Section Educational Robotics)
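The standard differential-drive relation behind the wheel-speed approach: for left/right wheel speeds v_l, v_r and track width b, the robot-center turning radius is R = (b/2)(v_r + v_l)/(v_r − v_l). A small sketch follows with an assumed SPIKE Prime track width; the paper's fitted regression models refine this idealized formula to account for slip and tire geometry.

```python
def turning_radius(v_left, v_right, track_width=0.112):
    """Radius (m) of the robot-center arc; the track width is an assumption."""
    if v_left == v_right:
        return float("inf")                     # equal speeds: straight line
    return (track_width / 2) * (v_right + v_left) / (v_right - v_left)

# Example: inner wheel at half the outer wheel's speed.
print(f"R = {turning_radius(0.10, 0.20):.3f} m")   # -> R = 0.168 m
```

Setting wheel speeds directly fixes both terms of the ratio, which is why it yields tighter control of R than a single steering parameter that maps to speeds through firmware.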

51 pages, 4751 KB  
Review
Large Language Models and 3D Vision for Intelligent Robotic Perception and Autonomy
by Vinit Mehta, Charu Sharma and Karthick Thiyagarajan
Sensors 2025, 25(20), 6394; https://doi.org/10.3390/s25206394 - 16 Oct 2025
Abstract
With the rapid advancement of artificial intelligence and robotics, the integration of Large Language Models (LLMs) with 3D vision is emerging as a transformative approach to enhancing robotic sensing technologies. This convergence enables machines to perceive, reason, and interact with complex environments through natural language and spatial understanding, bridging the gap between linguistic intelligence and spatial perception. This review provides a comprehensive analysis of state-of-the-art methodologies, applications, and challenges at the intersection of LLMs and 3D vision, with a focus on next-generation robotic sensing technologies. We first introduce the foundational principles of LLMs and 3D data representations, followed by an in-depth examination of 3D sensing technologies critical for robotics. The review then explores key advancements in scene understanding, text-to-3D generation, object grounding, and embodied agents, highlighting cutting-edge techniques such as zero-shot 3D segmentation, dynamic scene synthesis, and language-guided manipulation. Furthermore, we discuss multimodal LLMs that integrate 3D data with touch, auditory, and thermal inputs, enhancing environmental comprehension and robotic decision-making. To support future research, we catalog benchmark datasets and evaluation metrics tailored for 3D-language and vision tasks. Finally, we identify key challenges and future research directions, including adaptive model architectures, enhanced cross-modal alignment, and real-time processing capabilities, which pave the way for more intelligent, context-aware, and autonomous robotic sensing systems.
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)

27 pages, 5279 KB  
Article
Concept-Guided Exploration: Building Persistent, Actionable Scene Graphs
by Noé José Zapata Cornejo, Gerardo Pérez, Alejandro Torrejón, Pedro Núñez and Pablo Bustos
Appl. Sci. 2025, 15(20), 11084; https://doi.org/10.3390/app152011084 - 16 Oct 2025
Abstract
The perception of 3D space by mobile robots is rapidly moving from flat metric grid representations to hybrid metric-semantic graphs built from human-interpretable concepts. While most approaches first build metric maps and then add semantic layers, we explore an alternative, concept-first architecture in which spatial understanding emerges from asynchronous concept agents that directly instantiate and manage semantic entities. Our robot employs two spatial concepts—room and door—implemented as autonomous processes within a cognitive distributed architecture. These concept agents cooperatively build a shared scene graph representation of indoor layouts through active exploration and incremental validation. The key architectural principle is hierarchical constraint propagation: room instantiation provides geometric and semantic priors to guide and support door detection within wall boundaries. The resulting structure is maintained by a complementary functional principle based on prediction-matching loops. This approach is designed to yield an actionable, human-interpretable spatial representation without relying on any pre-existing global metric map, supporting scalable operation and persistent, task-relevant understanding in structured indoor environments.
(This article belongs to the Special Issue Advances in Cognitive Robotics and Control)
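A minimal sketch of the concept-first idea: "room" and "door" concept agents instantiate semantic nodes in a shared graph, with room geometry constraining where doors may be proposed. The data structures and the containment check are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                 # "room" or "door"
    attrs: dict

@dataclass
class SceneGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (parent_idx, child_idx)

    def add(self, node, parent=None):
        self.nodes.append(node)
        idx = len(self.nodes) - 1
        if parent is not None:
            self.edges.append((parent, idx))
        return idx

def room_agent(graph, wall_span):
    """Instantiate a room node from an exploration detection (toy 1D span)."""
    return graph.add(Node("room", {"span": wall_span}))

def door_agent(graph, room_idx, opening_x):
    """Propose a door only if it lies on the room's wall span: the
    hierarchical constraint propagated down from the room concept."""
    x_min, x_max = graph.nodes[room_idx].attrs["span"]
    if not (x_min <= opening_x <= x_max):
        return None               # rejected: violates the room prior
    return graph.add(Node("door", {"x": opening_x}), parent=room_idx)

g = SceneGraph()
room = room_agent(g, (0.0, 5.0))
print(door_agent(g, room, 2.5), door_agent(g, room, 9.0))  # 1 None
```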

37 pages, 1690 KB  
Review
Advances in Crop Row Detection for Agricultural Robots: Methods, Performance Indicators, and Scene Adaptability
by Zhen Ma, Xinzhong Wang, Xuegeng Chen, Bin Hu and Jingbin Li
Agriculture 2025, 15(20), 2151; https://doi.org/10.3390/agriculture15202151 - 16 Oct 2025
Abstract
Crop row detection, one of the key technologies enabling agricultural robots to achieve autonomous navigation and precise operations, directly affects the precision and stability of agricultural machinery operations, and its development will significantly shape the progress of intelligent agriculture. This paper first summarizes the mainstream technical methods, performance evaluation systems, and adaptability to typical agricultural scenes of crop row detection, explaining the technical principles and characteristics of traditional methods based on visual sensors, point cloud preprocessing based on LiDAR, line structure extraction and 3D feature calculation methods, and multi-sensor fusion methods. Second, performance evaluation criteria such as accuracy, efficiency, robustness, and practicality are reviewed, and the applicability of different methods is analyzed and compared in typical scenarios such as open fields, facility agriculture, orchards, and special terrains. This multidimensional analysis shows that any single technology has specific limits to its environmental adaptability; multi-sensor fusion can improve robustness in complex scenarios, and the fusion advantage grows with the number of sensors. Suggestions on the development of agricultural robot navigation technology are made based on the state of technological applications over the past five years and the needs of future development. This review systematically summarizes crop row detection technology, providing a clear technical framework and scenario adaptation reference for research in this field, and striving to promote precision and efficiency in agricultural production.
(This article belongs to the Section Agricultural Technology)

25 pages, 2727 KB  
Article
Berthing State Estimation for Autonomous Surface Vessels Using Ship-Based 3D LiDAR
by Haichao Wang, Yong Yin, Qianfeng Jing and Chen-Liang Zhang
J. Mar. Sci. Eng. 2025, 13(10), 1975; https://doi.org/10.3390/jmse13101975 - 15 Oct 2025
Abstract
Automated berthing remains a critical challenge for autonomous surface vessels (ASVs), necessitating precise berthing state estimation as a fundamental prerequisite. In this paper, we present a novel berthing state estimation method tailored for ASVs and based on 3D LiDAR technology. First, a berthing plane acquisition scheme based on point cloud plane fitting is proposed, and its feasibility is verified experimentally. A point cloud registration algorithm is then used to estimate the ship's pose; before registration, preprocessing filters out noise and outliers in the point cloud data to improve the accuracy of pose estimation. A detailed method for calculating the berthing state information is proposed that accounts for ship roll, pitch, and yaw during berthing, ensuring the accuracy of the obtained state information. Finally, a real-time ship berthing perception framework was constructed using the Robot Operating System (ROS), enabling the continuous output of vital berthing state information, including berthing distance, velocity, approaching angle, and yaw rate, at a frequency of 10 Hz. To validate the effectiveness of our algorithm, extensive real ship experiments were conducted, yielding highly promising results: the average angle error was less than 0.26°, and the average distance error was below 0.023 m.
(This article belongs to the Special Issue New Technologies in Autonomous Ship Navigation)
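A sketch of the two geometric steps the abstract describes, using Open3D: RANSAC plane fitting to recover the berthing (quay) plane, and point-to-point ICP registration between scans for pose estimation. The synthetic point clouds and thresholds are placeholders for real ship-based LiDAR data, and the paper's actual pipeline may differ.

```python
import numpy as np
import open3d as o3d

rng = np.random.default_rng(1)
# Synthetic "quay wall": a noisy vertical plane at x ~= 5 m.
wall = np.column_stack([
    5.0 + rng.normal(0, 0.01, 2000),   # x: plane with sensor noise
    rng.uniform(-10, 10, 2000),        # y: along the berth
    rng.uniform(0, 4, 2000),           # z: wall height
])
scan = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(wall))

# Step 1: berthing plane acquisition via RANSAC plane segmentation.
(a, b, c, d), inliers = scan.segment_plane(
    distance_threshold=0.05, ransac_n=3, num_iterations=500)
print(f"plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")

# Step 2: pose change between scans via ICP (here: a known 0.2 m surge).
moved = o3d.geometry.PointCloud(scan)
moved.translate((0.2, 0.0, 0.0))
icp = o3d.pipelines.registration.registration_icp(
    scan, moved, max_correspondence_distance=0.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(icp.transformation.round(3))   # translation column recovers ~0.2 m in x
```

With the plane normal in hand, the perpendicular distance from the hull to the plane gives berthing distance, and differencing registered poses over time yields velocity, approaching angle, and yaw rate.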

19 pages, 3837 KB  
Article
RTK-GNSS Increment Prediction with a Complementary “RTK-SeqNet” Network: Exploring Hybridization with State-Space Systems
by Hassan Ali, Malik Muhammad Waqar, Ruihan Ma, Sang Cheol Kim, Yujun Baek, Jongrin Kim and Haksung Lee
Sensors 2025, 25(20), 6349; https://doi.org/10.3390/s25206349 - 14 Oct 2025
Abstract
Accurate and reliable localization is crucial for autonomous systems operating in dynamic and semi-structured environments, such as precision agriculture and outdoor robotics. Advances in Global Navigation Satellite System (GNSS) technologies, particularly Differential GPS (DGPS) and Real-Time Kinematic (RTK) positioning, have significantly enhanced position estimation precision, achieving centimeter-level accuracy. However, GNSS-based localization continues to encounter inherent limitations due to signal degradation and intermittent data loss, known as GNSS outages. This paper proposes a novel complementary RTK-like position increment prediction model to mitigate the challenges posed by GNSS outages and RTK signal discontinuities. The model can be integrated with a Dual Extended Kalman Filter (Dual EKF) sensor fusion framework, widely utilized in robotic navigation. It uses time-synchronized inertial measurement data combined with velocity inputs to predict GNSS position increments during periods of outage and RTK disengagement, effectively substituting for missing GNSS measurements. The model demonstrates high accuracy: the total aDTW across 180 s trajectories averages 1.6 m, while the RMSE averages 3.4 m, and the 30 s test shows errors below 30 cm. We leave the actual Dual EKF fusion to future work; here, we evaluate the standalone deep network.
(This article belongs to the Section Navigation and Positioning)
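A minimal sketch of the kind of sequence model the abstract describes: a recurrent network mapping a window of time-synchronized IMU and velocity inputs to the RTK position increment for the step. The GRU architecture, window length, and feature layout are assumptions; the paper's RTK-SeqNet design may differ.

```python
import torch
import torch.nn as nn

class RTKIncrementNet(nn.Module):
    def __init__(self, in_dim=8, hidden=64, out_dim=3):
        super().__init__()
        # in_dim = 3 accel + 3 gyro + 2 planar velocity (assumed feature layout)
        self.gru = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)   # d(east, north, up) increment

    def forward(self, x):                 # x: (batch, window, in_dim)
        out, _ = self.gru(x)
        return self.head(out[:, -1])      # increment at the window's end

model = RTKIncrementNet()
window = torch.randn(4, 50, 8)            # 4 sequences of 50 IMU/velocity steps
print(model(window).shape)                # torch.Size([4, 3])
# During an outage, predicted increments would be accumulated from the last
# valid RTK fix to substitute for the missing GNSS measurements.
```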
