

Control Systems, Vision Technology and Sensor Fusion for Unmanned Robotic Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 25 April 2025 | Viewed by 13288

Special Issue Editors


Guest Editor
Assistant Professor, Department of Informatics & Telecommunications, University of Thessaly, 35100 Lamia, Greece
Interests: unmanned vehicles (underwater, aerial, mobile); visual servo control; sensor fusion; system identification; computer vision; robotics; motion planning and control

Guest Editor
Division of Systems and Automatic Control, Department of Electrical and Computer Engineering, University of Patras, Greece
Interests: prescribed performance control; multi-agent systems; robotics

Special Issue Information

Dear Colleagues,

We invite you to submit to the Special Issue “Control Systems, Vision Technology and Sensor Fusion for Unmanned Robotic Vehicles” in Sensors.

Unmanned robotic vehicles (marine, aerial, ground) have recently been established as a popular solution for a variety of autonomous or semi-autonomous tasks. As sensor technology and actuators rapidly advance, unmanned robots have concurrently evolved with improved endurance. In critical missions such as search and rescue, load transportation, precision agriculture, field surveillance and monitoring, the deployment of single or multi-agent unmanned systems provides safe, robust, fast and efficient solutions.

This Special Issue aims to highlight innovative research on computer vision, visual servo control, sensor fusion, control systems and their application in unmanned robotic vehicles. We welcome contributions from all fields related to unmanned vehicles and robotics, including, but not limited to, the following:

  • Unmanned robotic vehicles (marine, aerial, ground);
  • Multi-agent systems;
  • Motion planning and control;
  • Computer vision;
  • Visual servo control;
  • Sensor fusion;
  • Sensing technology.

Dr. George C. Karras
Dr. Charalampos P. Bechlioulis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

32 pages, 11087 KiB  
Article
Path Planning and Motion Control of Robot Dog Through Rough Terrain Based on Vision Navigation
by Tianxiang Chen, Yipeng Huangfu, Sutthiphong Srigrarom and Boo Cheong Khoo
Sensors 2024, 24(22), 7306; https://doi.org/10.3390/s24227306 - 15 Nov 2024
Viewed by 649
Abstract
This article delineates the enhancement of an autonomous navigation and obstacle avoidance system for a quadruped robot dog. Part one of this paper presents the integration of a sophisticated multi-level dynamic control framework, utilizing Model Predictive Control (MPC) and Whole-Body Control (WBC) from MIT Cheetah. The system employs an Intel RealSense D435i depth camera for depth vision-based navigation, which enables high-fidelity 3D environmental mapping and real-time path planning. A significant innovation is the customization of the EGO-Planner to optimize trajectory planning in dynamically changing terrains, coupled with the implementation of a multi-body dynamics model that significantly improves the robot’s stability and maneuverability across various surfaces. The experimental results show that the RGB-D system exhibits superior velocity stability and trajectory accuracy to the SLAM system, with a 20% reduction in the cumulative velocity error and a 10% improvement in path tracking precision. The experimental results also show that the RGB-D system achieves smoother navigation, requiring 15% fewer iterations for path planning, and a 30% faster success rate recovery in challenging environments. The successful application of these technologies in simulated urban disaster scenarios suggests promising future applications in emergency response and complex urban environments. Part two of this paper presents the development of a robust path planning algorithm for a robot dog on rough terrain based on attached binocular vision navigation. We use a commercial off-the-shelf (COTS) robot dog. An optical CCD binocular vision dynamic tracking system is used to provide environment information. Likewise, the pose and posture of the robot dog are obtained from the robot’s own sensors, and a kinematics model is established.
Then, a binocular vision tracking method is developed to determine the optimal path, provide a proposal (commands to actuators) for the position and posture of the bionic robot, and achieve stable motion on tough terrains. The terrain is assumed to be gently uneven to begin with and subsequently becomes rougher. This work consists of four steps: (1) pose and position data are acquired from the robot dog’s own inertial sensors, (2) terrain and environment information is input from onboard cameras, (3) the information is fused (integrated), and (4) path planning and motion control proposals are made. Ultimately, this work provides a robust framework for future developments in the vision-based navigation and control of quadruped robots, offering potential solutions for navigating complex and dynamic terrains.
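The planning pipeline summarized above (build an occupancy map from vision, then search it for a path) can be illustrated with a generic grid-based A* sketch. This is not code from the paper; the grid, unit step costs, and Manhattan heuristic are illustrative assumptions:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* shortest path over a 2-D occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tiebreaker so the heap never compares node tuples
    frontier = [(h(start), 0, next(tie), start, None)]
    parents, best_g = {}, {start: 0}
    while frontier:
        _, g, _, cur, parent = heapq.heappop(frontier)
        if cur in parents:
            continue  # already expanded at equal or lower cost
        parents[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None  # no path exists
```

In a real system the grid cells would come from the depth camera's occupancy map and the cost function would penalize rough or steep cells rather than treating all free cells equally.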

24 pages, 6023 KiB  
Article
Probability-Based LIDAR–Camera Calibration Considering Target Positions and Parameter Evaluation Using a Data Fusion Map
by Ryuhei Yamada and Yuichi Yaguchi
Sensors 2024, 24(12), 3981; https://doi.org/10.3390/s24123981 - 19 Jun 2024
Viewed by 764
Abstract
The data fusion of a 3-D light detection and ranging (LIDAR) point cloud and a camera image during the creation of a 3-D map is important because it enables more efficient object classification by autonomous mobile robots and facilitates the construction of a fine 3-D model. The principle behind data fusion is the accurate estimation of the LIDAR–camera’s external parameters through extrinsic calibration. Although several studies have proposed the use of multiple calibration targets or poses for precise extrinsic calibration, no study has clearly defined the relationship between the target positions and the data fusion accuracy. Here, we strictly investigated the effects of the deployment of calibration targets on data fusion and proposed the key factors to consider in the deployment of the targets in extrinsic calibration. Thereafter, we applied a probability method to perform a global and robust sampling of the camera external parameters. Subsequently, we proposed an evaluation method for the parameters, which utilizes the color ratio of the 3-D colored point cloud map. The derived probability density confirmed the good performance of the deployment method in estimating the camera external parameters. Additionally, the evaluation quantitatively confirmed the effectiveness of our deployments of the calibration targets in achieving high-accuracy data fusion compared with the results obtained using the previous methods.
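The extrinsic calibration problem described above rests on projecting LIDAR points into the image using the external parameters (R, t) and the camera intrinsic matrix K. A minimal sketch of that projection step, with all matrices assumed for illustration rather than taken from the paper:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project N x 3 LIDAR points into pixel coordinates using the
    extrinsics (R, t) and camera intrinsic matrix K. Returns the pixel
    coordinates and a mask of points lying in front of the camera."""
    cam = points @ R.T + t             # LIDAR frame -> camera frame
    in_front = cam[:, 2] > 0           # only positive-depth points project
    uvw = cam @ K.T                    # homogeneous pixel coordinates
    pixels = uvw[:, :2] / uvw[:, 2:3]  # perspective divide
    return pixels, in_front
```

Coloring each projected point with the pixel it lands on is what produces the colored point cloud whose color statistics the paper's evaluation method exploits; a poor (R, t) scatters points onto the wrong pixels.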

26 pages, 1421 KiB  
Article
Trajectory Following Control of an Unmanned Vehicle for Marine Environment Sensing
by Tegen Eyasu Derbew, Nak Yong Ko and Sung Hyun You
Sensors 2024, 24(4), 1262; https://doi.org/10.3390/s24041262 - 16 Feb 2024
Cited by 1 | Viewed by 1207
Abstract
An autonomous surface vehicle is indispensable for sensing marine environments owing to their challenging and dynamic conditions. To accomplish this task, the vehicle has to navigate along a desired trajectory. However, because a marine environment is complex and dynamic, affected by factors such as ocean currents, waves, and wind, a robust controller is of paramount importance for keeping the vehicle on the desired trajectory by minimizing the trajectory error. To this end, in this study, we propose a robust discrete-time super-twisting second-order sliding mode controller (DSTA). In addition, this control method effectively suppresses the chattering effect. To start with, the vehicle’s model is discretized using an integral approximation, with nonlinear terms, including environmental disturbances, treated as perturbation terms. Then, the perturbation is estimated using a time delay estimator (TDE), which further enhances the robustness of the proposed method and allows us to choose smaller controller gains. Moreover, we employ a genetic algorithm (GA) to tune the controller gains based on a quadratic cost function that considers the tracking error and control energy. The stability of the proposed sliding mode controller (SMC) is rigorously demonstrated using a Lyapunov approach. The controller is implemented in Simulink®. Finally, a conventional discrete-time SMC based on the reaching law (DSMR) and a heuristically tuned DSTA controller are used as benchmarks to compare the tracking accuracy and chattering attenuation capability of the proposed GA-based DSTA (GA-DSTA). Simulation results are presented both with and without external disturbances, and they demonstrate that the proposed controller drives the vehicle along the desired trajectory successfully and outperforms the other two controllers.
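For readers unfamiliar with the super-twisting algorithm at the heart of the DSTA, a minimal discrete-time sketch on a toy first-order plant is shown below. The gains, plant, and disturbance are illustrative assumptions and are unrelated to the paper's GA-tuned, TDE-augmented controller:

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One step of the discrete super-twisting algorithm acting on the
    sliding variable s; v is the integral (second-order) state. The
    continuous law is u = -k1*|s|^(1/2)*sign(s) + v, v' = -k2*sign(s)."""
    sign = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sign + v
    v = v - k2 * sign * dt
    return u, v

# regulate a toy first-order plant x' = u + d under a slow disturbance
x, v, dt = 1.0, 0.0, 1e-3
for k in range(20000):
    d = 0.3 * math.sin(0.5 * k * dt)   # matched disturbance, |d'| <= 0.15
    u, v = super_twisting_step(x, v, k1=1.5, k2=1.1, dt=dt)
    x += (u + d) * dt
# x is driven to, and held near, zero despite the disturbance
```

Because the discontinuous sign term is hidden inside an integrator, the applied control u is continuous, which is the mechanism behind the chattering suppression the abstract mentions.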

23 pages, 8215 KiB  
Article
Detection and Control Framework for Unpiloted Ground Support Equipment within the Aircraft Stand
by Tianxiong Zhang, Zhiqiang Zhang and Xinping Zhu
Sensors 2024, 24(1), 205; https://doi.org/10.3390/s24010205 - 29 Dec 2023
Cited by 5 | Viewed by 1424
Abstract
The rapid advancement in Unpiloted Robotic Vehicle technology has significantly influenced ground support operations at airports, marking a critical shift towards future development. This study presents a novel Unpiloted Ground Support Equipment (GSE) detection and control framework, comprising virtual channel delineation, boundary line detection, object detection, and navigation and docking control, to facilitate automated aircraft docking within the aircraft stand. Firstly, we developed a bespoke virtual channel layout for Unpiloted GSE, aligning with operational regulations and accommodating a wide spectrum of aircraft types. This layout employs turning induction markers to define essential navigation points, thereby streamlining GSE movement. Secondly, we integrated cameras and Lidar sensors to enable rapid and precise pose adjustments during docking. The introduction of a boundary line detection system, along with an optimized, lightweight YOLO algorithm, ensures swift and accurate identification of boundaries, obstacles, and docking sites. Finally, we formulated a unique control algorithm for effective obstacle avoidance and docking in varied apron conditions, guaranteeing meticulous management of vehicle pose and speed. Our experimental findings reveal an 89% detection accuracy for the virtual channel boundary line, a 95% accuracy for guiding markers, and an F1-Score of 0.845 for the YOLO object detection algorithm. The GSE achieved an average docking error of less than 3 cm and an angular deviation under 5 degrees, corroborating the efficacy and advanced nature of our proposed approach in Unpiloted GSE detection and aircraft docking.
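The navigation-and-docking control idea (reduce speed as the dock approaches, steer to cancel heading error) can be sketched with a generic proportional guidance law on a unicycle model. All gains and geometry below are illustrative assumptions, not the paper's control algorithm:

```python
import math

def docking_control(x, y, yaw, goal_x, goal_y, v_max=0.5, k_v=0.8, k_w=2.0):
    """Proportional guidance toward a docking point: forward speed from
    range (capped at v_max), turn rate from heading error."""
    dx, dy = goal_x - x, goal_y - y
    rho = math.hypot(dx, dy)                        # range to the dock
    err = math.atan2(dy, dx) - yaw                  # heading error
    err = math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
    return min(v_max, k_v * rho), k_w * err

# roll a unicycle model toward a dock at (3, 2) from the origin
x, y, yaw, dt = 0.0, 0.0, 1.0, 0.05
for _ in range(600):
    v, w = docking_control(x, y, yaw, 3.0, 2.0)
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += w * dt
# the final position error is within a few centimetres
```

A real apron controller would add the obstacle-avoidance and final-alignment logic the paper describes; this sketch only shows the range-proportional slowdown that makes centimetre-level docking errors plausible.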

32 pages, 9318 KiB  
Article
Vibration-Based Recognition of Wheel–Terrain Interaction for Terramechanics Model Selection and Terrain Parameter Identification for Lugged-Wheel Planetary Rovers
by Fengtian Lv, Nan Li, Haibo Gao, Liang Ding, Zongquan Deng, Haitao Yu and Zhen Liu
Sensors 2023, 23(24), 9752; https://doi.org/10.3390/s23249752 - 11 Dec 2023
Viewed by 982
Abstract
Identifying terrain parameters is important for high-fidelity simulation and high-performance control of planetary rovers. The wheel–terrain interaction classes (WTICs) are usually different for rovers traversing various types of terrain. Every terramechanics model corresponds to its wheel–terrain interaction class (WTIC). Therefore, for terrain parameter identification of the terramechanics model when rovers traverse various terrains, terramechanics model switching corresponding to the WTIC needs to be solved. This paper proposes a speed-independent vibration-based method for WTIC recognition to switch the terramechanics model and then identify its terrain parameters. In order to switch terramechanics models, wheel–terrain interactions are divided into three classes. Three vibration models of wheels under the three WTICs have been built and analyzed. Vibration features in the models are extracted and non-dimensionalized to be independent of wheel speed. A vibration-feature-based recognition method for the WTIC is proposed. Then, the terrain parameters of the terramechanics model corresponding to the recognized WTIC are identified. Experimental results obtained using a Planetary Rover Prototype show that the identification method of terrain parameters is effective for rovers traversing various terrains. The relative errors of the estimated wheel–terrain interaction force with identified terrain parameters are less than 16%, 12%, and 9% for rovers traversing hard, gravel, and sandy terrain, respectively.
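The speed-independent recognition idea can be sketched generically: compute vibration statistics from a window of acceleration samples, normalize them by wheel speed so they are dimensionless, and assign the nearest terrain-class centroid. The feature choices and centroid values below are illustrative assumptions, not the paper's vibration models:

```python
import math

def vibration_features(accel, speed):
    """Speed-normalized vibration features from a window of vertical
    acceleration samples: (RMS / speed, crest factor). Both choices are
    illustrative stand-ins for the paper's non-dimensionalized features."""
    mean = sum(accel) / len(accel)
    centered = [a - mean for a in accel]
    rms = math.sqrt(sum(a * a for a in centered) / len(centered))
    crest = max(abs(a) for a in centered) / rms if rms else 0.0
    return (rms / speed, crest)

def classify_terrain(features, centroids):
    """Assign the nearest terrain-class centroid (e.g. hard/gravel/sand)."""
    return min(centroids, key=lambda name: math.dist(features, centroids[name]))
```

Once the class is recognized, the matching terramechanics model is selected and only its parameters need to be identified, which is the switching step the abstract describes.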

19 pages, 7055 KiB  
Article
Research on Attitude Detection and Flight Experiment of Coaxial Twin-Rotor UAV
by Deyi You, Yongping Hao, Jiulong Xu and Liyuan Yang
Sensors 2022, 22(24), 9572; https://doi.org/10.3390/s22249572 - 7 Dec 2022
Cited by 1 | Viewed by 1958
Abstract
To address the problem that a single sensor on a coaxial UAV cannot accurately measure attitude information, a pose estimation algorithm based on unscented Kalman filter information fusion is proposed. The kinematics and dynamics of the coaxial folding twin-rotor UAV are studied, and a mathematical model is established. Common attitude estimation methods are analyzed, and extended Kalman filter (EKF) and unscented Kalman filter (UKF) algorithms are established. To test the prototype of a small coaxial twin-rotor UAV, a platform for measuring the dynamic performance and attitude angle of the UAV in semi-physical flight was established. The platform can analyze the mechanical vibration, attitude angle and noise of the aircraft. It can also test and analyze the characteristics of the mechanical vibration and noise produced by the UAV at different rotor speeds. Furthermore, the static and time-varying trends of the pitch angle and yaw angle of the Kalman filter attitude estimation algorithm are analyzed through static and dynamic experiments. The analysis results show that the attitude estimation of the UKF is better than that of the EKF when the throttle is between 0.2σ and 0.9σ. The error of the algorithm is less than 0.6°. The experiment and analysis provide a reference for optimizing the control parameters and flight control strategy of the coaxial folding dual-rotor aircraft.
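The difference between the EKF and UKF compared above hinges on the unscented transform, which propagates a small set of sigma points through the nonlinearity instead of linearizing it. A scalar sketch of that transform (illustrative only, not the paper's attitude filter):

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinear function f
    using sigma points, as the UKF does, instead of linearizing like the EKF."""
    n = 1  # state dimension (scalar case for clarity)
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var
```

For a linear f the transform reproduces the exact mean and variance; for a nonlinear f it captures second-order effects that a first-order EKF linearization drops, which is why the UKF tends to win on strongly nonlinear attitude dynamics.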

21 pages, 5249 KiB  
Article
Adverse Weather Target Detection Algorithm Based on Adaptive Color Levels and Improved YOLOv5
by Jiale Yao, Xiangsuo Fan, Bing Li and Wenlin Qin
Sensors 2022, 22(21), 8577; https://doi.org/10.3390/s22218577 - 7 Nov 2022
Cited by 20 | Viewed by 3448
Abstract
With the continuous development of artificial intelligence and computer vision technology, autonomous vehicles have developed rapidly. Although self-driving vehicles have achieved good results in normal environments, driving in adverse weather can still pose a challenge to driving safety. To improve the detection ability of self-driving vehicles in harsh environments, we first construct a new color levels offset compensation model to perform adaptive color levels correction on images, which can effectively improve the clarity of targets in adverse weather and facilitate the detection and recognition of targets. Then, we compare several common one-stage target detection algorithms and improve on the best-performing YOLOv5 algorithm. We optimize the parameters of the Backbone of the YOLOv5 algorithm by increasing the number of model parameters and incorporating the Transformer and CBAM modules into the YOLOv5 algorithm. At the same time, we replace the original CIOU loss function with the EIOU loss function. Finally, ablation experiments show that the improved algorithm increases the target detection rate, with the mAP reaching 94.7% and the FPS reaching 199.86.
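The EIOU loss mentioned above augments 1 - IoU with a normalized center-distance term plus separate width and height penalties measured against the smallest enclosing box. A plain-Python sketch of the standard formulation (not the authors' implementation):

```python
def eiou_loss(box_a, box_b):
    """EIoU loss between two boxes given as (x1, y1, x2, y2):
    1 - IoU, plus center-distance and width/height penalty terms."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # smallest box enclosing both
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    # squared center distance over squared enclosing diagonal
    center = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    diag = cw ** 2 + ch ** 2
    # width and height penalties against the enclosing box dimensions
    w_term = ((ax2 - ax1) - (bx2 - bx1)) ** 2 / cw ** 2
    h_term = ((ay2 - ay1) - (by2 - by1)) ** 2 / ch ** 2
    return 1 - iou + center / diag + w_term + h_term
```

Penalizing width and height mismatches directly, rather than the aspect ratio as CIOU does, gives a non-vanishing gradient when the predicted box has the right ratio but the wrong size.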

15 pages, 3671 KiB  
Article
Cooperative Location Method for Leader-Follower UAV Formation Based on Follower UAV’s Moving Vector
by Xudong Zhu, Jizhou Lai and Sheng Chen
Sensors 2022, 22(19), 7125; https://doi.org/10.3390/s22197125 - 20 Sep 2022
Cited by 9 | Viewed by 1820
Abstract
The traditional leader-follower Unmanned Aerial Vehicle (UAV) formation cooperative positioning (CP) algorithm, based on relative ranging, requires the positions of at least four leader UAVs to be known accurately, using the relative distances to the leader UAVs to achieve high-precision positioning of a follower UAV whose position is unknown. When the number of known-position leader UAVs is limited, the traditional CP algorithm is not applicable. Aiming at the minimum cooperative unit, which consists of one known-position leader UAV and one unknown-position follower UAV, this paper proposes a CP method based on the follower UAV’s moving vector. Because the follower UAV can only acquire a single distance to the leader UAV in each distance-sampling period, it is difficult to determine the follower UAV’s spatial location. The follower UAV’s moving vector is therefore used to construct a position observation for the follower UAV’s inertial navigation system (INS), and high-precision positioning is achieved by combining this moving vector with the range measurements. In the process of CP, the leader UAV obtains a high-precision position from an INS/Global Positioning System (GPS) loosely integrated navigation system and transmits its position information to the follower UAV. Based on accurate modeling of the follower UAV’s INS, the position, velocity and heading observation equations of the follower UAV’s INS are constructed. An improved extended Kalman filter is designed to estimate the state vector and improve the follower UAV’s positioning accuracy. In addition, considering that the datalink system based on radio signals may be interfered with by the external environment, it can be difficult for the follower UAV to obtain relative distance information from the leader UAV in real time; in this paper, the availability of the relative distance information is judged by a two-state Markov chain. Finally, a real flight test is conducted to validate the performance of the proposed algorithm.
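The core geometric idea, constraining an unknown position with single-beacon ranges taken along a known moving vector, can be sketched as a small Gauss-Newton least-squares problem. This is a generic batch illustration of the geometry, not the paper's INS-integrated Kalman filter; the leader position, displacements, and ranges are assumed inputs:

```python
import math

def locate_from_ranges(leader, displacements, ranges, guess, iters=50):
    """Gauss-Newton estimate of a follower's initial 2-D position from
    single-beacon ranges measured along a known sequence of moving vectors."""
    # offsets of each range epoch relative to the unknown start position
    offsets = [(0.0, 0.0)]
    for dx, dy in displacements:
        ox, oy = offsets[-1]
        offsets.append((ox + dx, oy + dy))
    x, y = guess
    for _ in range(iters):
        # accumulate the 2x2 normal equations J^T J * delta = J^T r
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ox, oy), r in zip(offsets, ranges):
            dx, dy = x + ox - leader[0], y + oy - leader[1]
            dist = math.hypot(dx, dy)
            ux, uy = dx / dist, dy / dist  # gradient of the range function
            res = r - dist                 # range residual
            a11 += ux * ux; a12 += ux * uy; a22 += uy * uy
            b1 += ux * res; b2 += uy * res
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det
        y += (a11 * b2 - a12 * b1) / det
    return x, y
```

With only one range per epoch a single measurement leaves the follower anywhere on a circle; it is the known displacement between epochs that makes the intersection of successive range circles, and hence the position, observable.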
