
Applications of Intelligent Robots: Sensing, Interaction, Navigation and Control Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 August 2024) | Viewed by 18329

Special Issue Editors


Dr. Daniel Galan
Guest Editor
Department of Automatic Control, Electrical and Electronics Engineering and Industrial Computing, Universidad Politécnica de Madrid, Madrid, Spain
Interests: control education; machine learning; autonomous systems; fuzzy control; social robots; human–robot interaction

Dr. Ramon A. Suarez Fernandez
Guest Editor
Department of Automatic Control, Electrical and Electronics Engineering and Industrial Computing, Universidad Politécnica de Madrid, Madrid, Spain
Interests: robots; underwater robots; robot control; nonlinear control

Dr. Francisco Javier Badesa
Guest Editor
Department of Automatic Control, Electrical and Electronics Engineering and Industrial Computing, Universidad Politécnica de Madrid, Madrid, Spain
Interests: rehabilitation robotics; assistive robotics; exoskeletons; human–robot interaction; bio-signal processing; machine learning

Special Issue Information

Dear Colleagues,

The latest advances in artificial intelligence are revolutionizing every sector, including robotics. On the one hand, industrial robotics has seen significant advances in sensing, perception, manipulation, and collaboration with humans. On the other hand, social robotics has benefited from these developments through better conversational agents, improved emotion recognition, and a wider range of tasks that robots can perform.

This Special Issue aims to showcase research in which robotics is enriched by integrating the latest trends in artificial intelligence. Authors are invited to submit high-quality papers on topics including (but not limited to) the following:

  • Robot localization and navigation.
  • Rehabilitation robotics.
  • Assistive robotics.
  • Exoskeletons.
  • Autonomous robots (aerial, ground, surface, underwater, or aerospace) operating in unstructured environments.
  • Social robotics.
  • Emotion recognition.
  • Human–robot interaction.
  • Bio-signal processing.
  • Machine learning.
  • Cognitive robotics.
  • System control.
  • Human–robot collaboration technologies.
  • Multi-agent robotic systems.

Dr. Daniel Galan
Dr. Ramon A. Suarez Fernandez
Dr. Francisco Javier Badesa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)

Research

19 pages, 6835 KiB  
Article
Development and Investigation of a Grasping Analysis System with Two-Axis Force Sensors at Each of the 16 Points on the Object Surface for a Hardware-Based FinRay-Type Soft Gripper
by Takahide Kitamura, Kojiro Matsushita, Naoki Nakatani and Shunsei Tsuchiyama
Sensors 2024, 24(15), 4896; https://doi.org/10.3390/s24154896 - 28 Jul 2024
Viewed by 642
Abstract
The FinRay soft gripper achieves passive enveloping grasping through its functional flexible structure, adapting to the contact configuration of the object to be grasped. However, variations in beam position and thickness lead to different behaviors, making it important to research the relationship between structure and force. Conventional research using FEM simulations has tested various virtual FinRay models, but replicating phenomena such as buckling and slipping has been challenging. While hardware-based methods that involve installing sensors on the gripper and the object to analyze their states have been attempted, no studies have focused on the tangential contact force related to slipping. Therefore, we developed a 16-way object contact force measurement device incorporating two-axis force sensors into each of the 16 segmented objects and compared the normal and tangential components of the enveloping grasping force of the FinRay soft gripper under two types of contact friction conditions. In the first experiment, the proposed device was compared with a device containing a six-axis force sensor in one segmented object, confirming that the proposed device has no issues with measurement performance. In the second experiment, comparisons of the proposed device were made under various conditions: two contact friction states, three object contact positions, and two object motion states. The results demonstrated that the proposed device could decompose and analyze the grasping force into its normal and tangential components for each segmented object. Moreover, low friction conditions result in a wide contact area with lower tangential frictional force and a uniform normal pushing force, achieving effective enveloping grasping.
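
For readers unfamiliar with this kind of analysis, the sketch below shows how a two-axis force reading at one contact segment can be resolved into components normal and tangential to the local object surface. It is a minimal illustration, not the authors' measurement pipeline; the function name, sign conventions, and sample values are all assumptions.

    import numpy as np

    def decompose_contact_force(fx, fy, surface_angle_rad):
        # Project the two-axis reading onto assumed unit vectors normal and
        # tangential to a surface inclined at surface_angle_rad.
        normal_dir = np.array([np.cos(surface_angle_rad), np.sin(surface_angle_rad)])
        tangent_dir = np.array([-np.sin(surface_angle_rad), np.cos(surface_angle_rad)])
        f = np.array([fx, fy])
        return float(f @ normal_dir), float(f @ tangent_dir)

    # Illustrative readings from two of the 16 segments (newtons, made up).
    readings = [(0.8, 0.1), (0.7, 0.3)]
    angles = [np.deg2rad(10), np.deg2rad(25)]   # assumed local surface angles
    for (fx, fy), a in zip(readings, angles):
        fn, ft = decompose_contact_force(fx, fy, a)
        print(f"normal: {fn:.2f} N, tangential: {ft:.2f} N")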

21 pages, 9152 KiB  
Article
Robotic Grasping of Unknown Objects Based on Deep Learning-Based Feature Detection
by Kai Sherng Khor, Chao Liu and Chien Chern Cheah
Sensors 2024, 24(15), 4861; https://doi.org/10.3390/s24154861 - 26 Jul 2024
Viewed by 705
Abstract
In recent years, the integration of deep learning into robotic grasping algorithms has led to significant advancements in this field. However, one of the challenges faced by many existing deep learning-based grasping algorithms is their reliance on extensive training data, which makes them less effective when encountering unknown objects not present in the training dataset. This paper presents a simple and effective grasping algorithm that addresses this challenge through the utilization of a deep learning-based object detector, focusing on oriented detection of key features shared among most objects, namely straight edges and corners. By integrating these features with information obtained through image segmentation, the proposed algorithm can logically deduce a grasping pose without being limited by the size of the training dataset. Experimental results on actual robotic grasping of unknown objects over 400 trials show that the proposed method can achieve a higher grasp success rate of 98.25% compared to existing methods.
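
The core geometric idea — placing a grasp across a detected straight edge — can be sketched in a few lines. This toy version is ours, not the paper's algorithm; the names and coordinates are hypothetical, and the real method also fuses corner detections with segmentation information.

    import numpy as np

    def grasp_from_edge(p1, p2):
        # Midpoint of the detected edge becomes the grasp center; the gripper
        # closing axis is set perpendicular to the edge direction.
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        center = (p1 + p2) / 2.0
        d = p2 - p1
        edge_angle = np.arctan2(d[1], d[0])
        grasp_angle = edge_angle + np.pi / 2.0   # close across the edge
        return center, grasp_angle

    center, angle = grasp_from_edge((120, 80), (200, 80))
    print(center, np.rad2deg(angle))   # [160. 80.] 90.0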

22 pages, 7752 KiB  
Article
Research on the Collision Risk of Fusion Operation of Manned Aircraft and Unmanned Aircraft at Zigong Airport
by Longyang Huang, Chi Huang, Chao Zhou, Chuanjiang Xie, Zerong Zhao and Tao Huang
Sensors 2024, 24(15), 4842; https://doi.org/10.3390/s24154842 - 25 Jul 2024
Viewed by 625
Abstract
Low-altitude airspace is developing rapidly, but airspace resources remain underutilized. To address the safe fusion operation of large UAVs and manned aircraft in the same airspace, this paper analyzes the theoretical collision risk of the fusion operation of manned aircraft and UAVs at Feng Ming Airport in Zigong. After verifying the 10 km lateral safety spacing, it simulates whether a smaller theoretical safety spacing is feasible. The study proposes a new error-spacing safety margin and incorporates it into the traditional Event collision risk model, establishing an improved collision model better suited to the fusion operation of manned and unmanned aircraft while reducing computational redundancy. The error factors affecting manned and unmanned aircraft at Zigong Airport are analyzed, and theoretical calculations are carried out using the airport's actual data. Finally, the Monte Carlo method is used to simulate the errors, substitute the results into the calculations, and simulate a segment of the fusion-operation trajectory as a cross-check. The theoretical results show that collision risk at lateral separations from 10 km down to 8 km satisfies the lateral target level of safety specified by ICAO under both the traditional and improved models. The collision risk calculated by the improved model incorporating the error-spacing safety margin is smaller, which enhances the safety of the model calculations. The results of the study can provide theoretical references for the fusion operation of manned and unmanned aircraft.
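
To give a feel for this kind of calculation, the sketch below computes a lateral overlap probability for two aircraft with independent Gaussian lateral deviations. It is a textbook-style simplification, not the paper's improved Event model, and every parameter value is illustrative.

    from math import erf, sqrt

    def lateral_overlap_probability(sep_km, sigma1_km, sigma2_km, overlap_km=0.05):
        # Probability that two aircraft nominally separated by sep_km laterally
        # come within overlap_km, given zero-mean Gaussian lateral deviations.
        sigma = sqrt(sigma1_km ** 2 + sigma2_km ** 2)   # combined deviation
        cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
        return cdf((overlap_km - sep_km) / sigma) - cdf((-overlap_km - sep_km) / sigma)

    for sep_km in (10.0, 8.0):   # the separations examined in the paper
        p = lateral_overlap_probability(sep_km, sigma1_km=1.0, sigma2_km=1.5)
        print(f"{sep_km} km -> overlap probability {p:.3e}")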

17 pages, 4383 KiB  
Article
Discrete-Time Visual Servoing Control with Adaptive Image Feature Prediction Based on Manipulator Dynamics
by Chenlu Liu, Chao Ye, Hongzhe Shi and Weiyang Lin
Sensors 2024, 24(14), 4626; https://doi.org/10.3390/s24144626 - 17 Jul 2024
Viewed by 530
Abstract
In this paper, a practical discrete-time control method with adaptive image feature prediction for the image-based visual servoing (IBVS) scheme is presented. In the discrete-time IBVS inner-loop/outer-loop control architecture, the time delay caused by image capture and computation is explicitly considered. Considering the dynamic characteristics of a 6-DOF manipulator velocity input system, we propose a linear dynamic model to describe the motion of the robot end effector. Furthermore, to better estimate image features and smooth the robot's velocity input, we propose an adaptive image feature prediction method that uses past image feature data and real robot velocity data to adapt the prediction parameters. The experimental results on a 6-DOF robotic arm demonstrate that the proposed method can ensure system stability and accelerate system convergence.
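
The outer loop of a classical IBVS scheme drives the image-feature error to zero through the interaction matrix; a minimal version of that standard law is sketched below. The paper's contributions (discrete-time delay handling and adaptive feature prediction) are not reproduced here, and the names and stand-in matrix are assumptions.

    import numpy as np

    def ibvs_velocity(features, targets, L, lam=0.5):
        # Classical IBVS law: v = -lam * pinv(L) @ e, where e is the stacked
        # image-feature error and L the interaction (image Jacobian) matrix.
        e = (np.asarray(features) - np.asarray(targets)).reshape(-1)
        return -lam * np.linalg.pinv(L) @ e

    # One tracked point (2 image coordinates) and a 6-DOF camera twist.
    L = np.random.default_rng(1).normal(size=(2, 6))   # stand-in interaction matrix
    v = ibvs_velocity([0.12, -0.03], [0.0, 0.0], L)
    print(v)   # [vx, vy, vz, wx, wy, wz]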

16 pages, 9644 KiB  
Article
FF3D: A Rapid and Accurate 3D Fruit Detector for Robotic Harvesting
by Tianhao Liu, Xing Wang, Kewei Hu, Hugh Zhou, Hanwen Kang and Chao Chen
Sensors 2024, 24(12), 3858; https://doi.org/10.3390/s24123858 - 14 Jun 2024
Cited by 2 | Viewed by 865
Abstract
This study presents the Fast Fruit 3D Detector (FF3D), a novel framework that contains a 3D neural network for fruit detection and an anisotropic Gaussian-based next-best view estimator. The proposed one-stage 3D detector, which utilizes an end-to-end 3D detection network, shows superior accuracy and robustness compared to traditional 2D methods. The core of the FF3D is a 3D object detection network based on a 3D convolutional neural network (3D CNN), followed by an anisotropic Gaussian-based next-best view estimation module. The innovative architecture combines point cloud feature extraction and object detection tasks, achieving accurate real-time fruit localization. The model is trained on a large-scale 3D fruit dataset that includes data collected from an apple orchard. Additionally, the proposed next-best view estimator improves accuracy and lowers the collision risk for grasping. Thorough assessments on the test set and in a simulated environment validate the efficacy of our FF3D. The experimental results show an AP of 76.3%, an AR of 92.3%, and an average Euclidean distance error of less than 6.2 mm, highlighting the framework’s potential to overcome challenges in orchard environments.

28 pages, 9285 KiB  
Article
Toward Fully Automated Inspection of Critical Assets Supported by Autonomous Mobile Robots, Vision Sensors, and Artificial Intelligence
by Javier Sanchez-Cubillo, Javier Del Ser and José Luis Martin
Sensors 2024, 24(12), 3721; https://doi.org/10.3390/s24123721 - 7 Jun 2024
Viewed by 1134
Abstract
Robotic inspection is advancing in performance capabilities and is now being considered for industrial applications beyond laboratory experiments. As industries increasingly rely on complex machinery, pipelines, and structures, the need for precise and reliable inspection methods becomes paramount to ensure operational integrity and mitigate risks. AI-assisted autonomous mobile robots offer the potential to automate inspection processes, reduce human error, and provide real-time insights into asset conditions. A primary concern is the necessity to validate the performance of these systems under real-world conditions. While laboratory tests and simulations can provide valuable insights, the true efficacy of AI algorithms and robotic platforms can only be determined through rigorous field testing and validation. This paper aligns with this need by evaluating the performance of one-stage models for object detection in tasks that support and enhance the perception capabilities of autonomous mobile robots. The evaluation addresses both the execution of assigned tasks and the robot’s own navigation. Our benchmark of classification models for robotic inspection considers three real-world transportation and logistics use cases, as well as several generations of the well-known YOLO architecture. The performance results from field tests using real robotic devices equipped with such object detection capabilities are promising, and expose the enormous potential and actionability of autonomous robotic systems for fully automated inspection and maintenance in open-world settings.
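
A benchmark of this kind can be reproduced in outline with off-the-shelf tooling. The sketch below assumes the ultralytics Python package and a pretrained YOLOv8 checkpoint; the image path and confidence threshold are placeholders, and the paper's actual models, data, and robot integration are its own.

    from ultralytics import YOLO   # assumes the ultralytics package is installed

    model = YOLO("yolov8n.pt")     # small pretrained checkpoint, auto-downloaded
    results = model("inspection_frame.jpg", conf=0.25)   # one-stage detection
    for box in results[0].boxes:   # class id, confidence, pixel box per detection
        print(int(box.cls), float(box.conf), box.xyxy.tolist())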

27 pages, 22018 KiB  
Article
Planning Socially Expressive Mobile Robot Trajectories
by Philip Scales, Olivier Aycard and Véronique Aubergé
Sensors 2024, 24(11), 3533; https://doi.org/10.3390/s24113533 - 30 May 2024
Viewed by 515
Abstract
Many mobile robotics applications require robots to navigate around humans who may interpret the robot’s motion in terms of social attitudes and intentions. It is essential to understand which aspects of the robot’s motion are related to such perceptions so that we may design appropriate navigation algorithms. Current works in social navigation tend to strive towards a single ideal style of motion defined with respect to concepts such as comfort, naturalness, or legibility. These algorithms cannot be configured to alter trajectory features to control the social interpretations made by humans. In this work, we firstly present logistic regression models based on perception experiments linking human perceptions to a corpus of linear velocity profiles, establishing that various trajectory features impact human social perception of the robot. Secondly, we formulate a trajectory planning problem in the form of a constrained optimization, using novel constraints that can be selectively applied to shape the trajectory such that it generates the desired social perception. We demonstrate the ability of the proposed algorithm to accurately change each of the features of the generated trajectories based on the selected constraints, enabling subtle variations in the robot’s motion to be consistently applied. By controlling the trajectories to induce different social perceptions, we provide a tool to better tailor the robot’s actions to its role and deployment context to enhance acceptability.
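
As a sketch of the first step — linking trajectory features to binary perception judgments with logistic regression — consider the following. The feature names, synthetic data, and use of scikit-learn are our assumptions, not the paper's corpus or pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    # Each row describes a linear velocity profile: [peak velocity, mean
    # acceleration, time to peak]; y holds a binarized human judgment.
    X = rng.normal(size=(80, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 80) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    print(model.coef_)               # which trajectory features drive perception
    print(model.predict_proba(X[:2]))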

15 pages, 6793 KiB  
Article
Consensus-Based Information Filtering in Distributed LiDAR Sensor Network for Tracking Mobile Robots
by Isabella Luppi, Neel Pratik Bhatt and Ehsan Hashemi
Sensors 2024, 24(9), 2927; https://doi.org/10.3390/s24092927 - 4 May 2024
Viewed by 1103
Abstract
A distributed state observer is designed for state estimation and tracking of mobile robots amidst dynamic environments and occlusions within distributed LiDAR sensor networks. The proposed novel framework enhances three-dimensional bounding box detection and tracking utilizing a consensus-based information filter and a region of interest for state estimation of mobile robots. The framework enables the identification of the input to the dynamic process using remote sensing, enhancing the state prediction accuracy for low-visibility and occlusion scenarios in dynamic scenes. Experimental evaluations in indoor settings confirm the effectiveness of the framework in terms of accuracy and computational efficiency. These results highlight the benefit of integrating stationary LiDAR sensors’ state estimates into a switching consensus information filter to enhance the reliability of tracking and to reduce estimation error in the sense of mean square and covariance.
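
In information-form consensus filtering, each node repeatedly averages its information matrix and vector with those of its neighbors. A single such step is sketched below with scalar states and made-up weights; it omits the paper's switching logic, region-of-interest handling, and measurement updates.

    import numpy as np

    def consensus_step(info_mats, info_vecs, W):
        # Each node i replaces (Y_i, y_i) with a weighted average over the
        # network; rows of W sum to 1 (here: three fully connected nodes).
        Y = np.tensordot(W, np.stack(info_mats), axes=1)
        y = W @ np.stack(info_vecs)
        return list(Y), list(y)

    Y0 = [np.array([[2.0]]), np.array([[1.0]]), np.array([[0.5]])]  # info matrices
    y0 = [np.array([4.0]), np.array([1.5]), np.array([0.4])]        # info vectors
    W = np.full((3, 3), 1.0 / 3.0)
    Y1, y1 = consensus_step(Y0, y0, W)
    print(y1[0] / Y1[0][0, 0])   # fused state estimate recovered at node 0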

17 pages, 7168 KiB  
Article
Fast 50 Hz Updated Static Infrared Positioning System Based on Triangulation Method
by Maciej Ciężkowski and Rafał Kociszewski
Sensors 2024, 24(5), 1389; https://doi.org/10.3390/s24051389 - 21 Feb 2024
Viewed by 993
Abstract
One of the important issues being explored in Industry 4.0 is collaborative mobile robots. This collaboration requires precise navigation systems, especially indoor navigation systems where GNSS (Global Navigation Satellite System) cannot be used. To enable the precise localization of robots, different variations of navigation systems are being developed, mainly based on trilateration and triangulation methods. Triangulation systems are distinguished by the fact that they allow for the precise determination of an object’s orientation, which is important for mobile robots. An important feature of positioning systems is the frequency of position updates based on measurements. For most systems, it is 10–20 Hz. In our work, we propose a high-speed 50 Hz positioning system based on the triangulation method with infrared transmitters and receivers. In addition, our system is completely static, i.e., it has no moving/rotating measurement sensors, which makes it more resistant to disturbances (caused by vibrations, wear and tear of components, etc.). In this paper, we describe the principle of the system as well as its design. Finally, we present tests of the built system, which show a beacon bearing accuracy of Δφ = 0.51°, which corresponds to a positioning accuracy of ΔR = 6.55 cm, with a position update frequency of f_update = 50 Hz.
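
The underlying triangulation step — a least-squares position fix from bearings to beacons at known locations — can be written compactly. The sketch below is generic textbook geometry, not the authors' implementation, and the beacon layout is invented.

    import numpy as np

    def triangulate(beacons, bearings_rad):
        # Each measured bearing theta constrains the unknown position p to
        # the line through beacon b with direction [cos(theta), sin(theta)]:
        # sin(theta)*px - cos(theta)*py = sin(theta)*bx - cos(theta)*by.
        A, c = [], []
        for (bx, by), th in zip(beacons, bearings_rad):
            A.append([np.sin(th), -np.cos(th)])
            c.append(np.sin(th) * bx - np.cos(th) * by)
        p, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(c), rcond=None)
        return p

    beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    true_p = np.array([3.0, 4.0])
    bearings = [np.arctan2(by - true_p[1], bx - true_p[0]) for bx, by in beacons]
    print(triangulate(beacons, bearings))   # ~ [3. 4.]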

20 pages, 8194 KiB  
Article
Novel near E-Field Topography Sensor for Human–Machine Interfacing in Robotic Applications
by Dariusz J. Skoraczynski and Chao Chen
Sensors 2024, 24(5), 1379; https://doi.org/10.3390/s24051379 - 21 Feb 2024
Viewed by 1115
Abstract
This work investigates a new sensing technology for use in robotic human–machine interface (HMI) applications. The proposed method uses near E-field sensing to measure small changes in the limb surface topography due to muscle actuation over time. The sensors introduced in this work provide a non-contact, low-computational-cost, and low-noise method for sensing muscle activity. By evaluating the key sensor characteristics, such as accuracy, hysteresis, and resolution, the performance of this sensor is validated. Then, to understand the potential performance in intention detection, the unmodified digital output of the sensor is analysed against movements of the hand and fingers. This is done to demonstrate the worst-case scenario and to show that the sensor provides highly targeted and relevant data on muscle activation before any further processing. Finally, a convolutional neural network is used to perform joint angle prediction over nine degrees of freedom, achieving high-level regression performance with an RMSE value of less than six degrees for thumb and wrist movements and 11 degrees for finger movements. This work demonstrates the promising performance of this novel approach to sensing for use in human–machine interfaces.

21 pages, 6387 KiB  
Article
A Novel Robotic Controller Using Neural Engineering Framework-Based Spiking Neural Networks
by Dailin Marrero, John Kern and Claudio Urrea
Sensors 2024, 24(2), 491; https://doi.org/10.3390/s24020491 - 12 Jan 2024
Cited by 1 | Viewed by 1755
Abstract
This paper investigates spiking neural networks (SNN) for novel robotic controllers with the aim of improving accuracy in trajectory tracking. By emulating the operation of the human brain through the incorporation of temporal coding mechanisms, SNN offer greater adaptability and efficiency in information processing, providing significant advantages in the representation of temporal information in robotic arm control compared to conventional neural networks. Exploring specific implementations of SNN in robot control, this study analyzes neuron models and learning mechanisms inherent to SNN. Based on the principles of the Neural Engineering Framework (NEF), a novel spiking PID controller is designed and simulated for a 3-DoF robotic arm using Nengo and MATLAB R2022b. The controller demonstrated good accuracy and efficiency in following designated trajectories, showing minimal deviations, overshoots, or oscillations. A thorough quantitative assessment, utilizing performance metrics like root mean square error (RMSE) and the integral of the absolute value of the time-weighted error (ITAE), provides additional validation for the efficacy of the SNN-based controller. Competitive performance was observed, surpassing a fuzzy controller by 5% in terms of the ITAE index and a conventional PID controller by 6% in the ITAE index and 30% in RMSE performance. This work highlights the utility of NEF and SNN in developing effective robotic controllers, laying the groundwork for future research focused on SNN adaptability in dynamic environments and advanced robotic applications.
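
The two metrics used in the quantitative assessment are easy to state precisely; the sketch below computes RMSE and ITAE from a sampled tracking-error signal. The decaying-oscillation error series is invented purely for illustration.

    import numpy as np

    def rmse(error):
        # Root mean square of the tracking error.
        return float(np.sqrt(np.mean(np.square(error))))

    def itae(error, t):
        # Integral of time-weighted absolute error, approximated with a
        # Riemann sum over uniformly sampled times t.
        dt = t[1] - t[0]
        return float(np.sum(t * np.abs(error)) * dt)

    t = np.linspace(0.0, 5.0, 501)
    error = 0.2 * np.exp(-1.5 * t) * np.cos(8.0 * t)   # illustrative error signal
    print(f"RMSE = {rmse(error):.4f}, ITAE = {itae(error, t):.4f}")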

17 pages, 16981 KiB  
Article
Human–Robot Interaction Using Learning from Demonstrations and a Wearable Glove with Multiple Sensors
by Rajmeet Singh, Saeed Mozaffari, Masoud Akhshik, Mohammed Jalal Ahamed, Simon Rondeau-Gagné and Shahpour Alirezaee
Sensors 2023, 23(24), 9780; https://doi.org/10.3390/s23249780 - 12 Dec 2023
Cited by 2 | Viewed by 1595
Abstract
Human–robot interaction is of the utmost importance as it enables seamless collaboration and communication between humans and robots, leading to enhanced productivity and efficiency. It involves gathering data from humans, transmitting the data to a robot for execution, and providing feedback to the human. To perform complex tasks, such as robotic grasping and manipulation, which require both human intelligence and robotic capabilities, effective interaction modes are required. To address this issue, we use a wearable glove to collect relevant data from a human demonstrator for improved human–robot interaction. Accelerometer, pressure, and flexi sensors were embedded in the wearable glove to measure motion and force information for handling objects of different sizes, materials, and conditions. A machine learning algorithm is proposed to recognize grasp orientation and position, based on the multi-sensor fusion method.

12 pages, 2378 KiB  
Article
Analysis of Differences in Single-Joint Movement of Dominant and Non-Dominant Hands for Human-like Robotic Control
by Samyoung Kim, Kyuengbo Min, Yeongdae Kim, Shigeyuki Igarashi, Daeyoung Kim, Hyeonseok Kim and Jongho Lee
Sensors 2023, 23(23), 9443; https://doi.org/10.3390/s23239443 - 27 Nov 2023
Cited by 1 | Viewed by 1082
Abstract
Although several previous studies on the laterality of upper limb motor control have reported functional differences, no consensus has been reached. The inconsistent results may arise because upper limb motor control was observed in multi-joint tasks, which can elicit different inter-joint motor coordination in each arm. To resolve this, we employed a single wrist-joint tracking task to reduce the effect of multi-joint dynamics and examined the differences between the dominant and non-dominant hands in terms of motor control. Specifically, we defined two sections to induce feedback (FB) and feedforward (FF) controls: the first section involved a visible target for FB control, and the other section involved an invisible target for FF control. We examined the differences in the position errors of the tracer and the target. Fourteen healthy participants performed the task. As a result, we found that during FB control, the dominant hand performed better than the non-dominant hand, while we did not observe significant differences in FF control. In other words, in a single-joint movement that is not under the influence of multi-joint coordination, only FB control showed laterality, not FF control. Furthermore, we confirmed that the dominant hand outperformed the non-dominant hand in responding to situations that required a change in control strategy.

15 pages, 2830 KiB  
Article
Particle Swarm Algorithm Path-Planning Method for Mobile Robots Based on Artificial Potential Fields
by Li Zheng, Wenjie Yu, Guangxu Li, Guangxu Qin and Yunchuan Luo
Sensors 2023, 23(13), 6082; https://doi.org/10.3390/s23136082 - 1 Jul 2023
Cited by 21 | Viewed by 4251
Abstract
Path planning is an important part of the navigation control system of a mobile robot, since it largely determines whether the robot can achieve autonomy and intelligence. The particle swarm algorithm can effectively solve the path-planning problem of a mobile robot, but the traditional particle swarm algorithm suffers from overly long paths, poor global search ability, and weak local exploitation ability. Moreover, the presence of obstacles makes real environments more complex, placing more stringent requirements on a mobile robot's environmental adaptability, path-planning accuracy, and path-planning efficiency. In this study, an artificial potential field-based particle swarm algorithm (apfrPSO) was proposed. First, the method generates robot planning paths by adjusting the inertia weight parameter and ranking the position vectors of particles (rPSO); second, the artificial potential field method is introduced. Comparative numerical experiments against other state-of-the-art algorithms show that the proposed algorithm is highly competitive.
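
The combination can be pictured as PSO searching over waypoint coordinates while the cost function carries an artificial-potential-field penalty near obstacles. The sketch below is a deliberately simplified stand-in for apfrPSO: the cost weights, inertia schedule, and scenario are all assumptions, and the particle-ranking step is omitted.

    import numpy as np

    rng = np.random.default_rng(3)

    def apf_cost(path, goal, obstacles, k_rep=50.0, d0=1.0):
        # Path length plus an artificial-potential-field repulsion term that
        # grows as waypoints approach circular obstacles (ox, oy, radius).
        pts = np.vstack([path, goal])
        length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
        rep = 0.0
        for ox, oy, r in obstacles:
            d = np.linalg.norm(path - np.array([ox, oy]), axis=1) - r
            d = np.clip(d, 1e-3, None)
            rep += np.sum(np.where(d < d0, k_rep * (1.0 / d - 1.0 / d0) ** 2, 0.0))
        return length + rep

    def pso_plan(start, goal, obstacles, n_way=5, n_particles=30, iters=200):
        # Plain PSO over flattened waypoint coordinates with a decaying
        # inertia weight.
        x = rng.uniform(0.0, 10.0, (n_particles, n_way * 2))
        v = np.zeros_like(x)
        cost = lambda flat: apf_cost(np.vstack([start, flat.reshape(-1, 2)]),
                                     goal, obstacles)
        pbest = x.copy()
        pbest_c = np.array([cost(p) for p in x])
        g = pbest[np.argmin(pbest_c)]
        for it in range(iters):
            w = 0.9 - 0.5 * it / iters
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
            x = x + v
            c = np.array([cost(p) for p in x])
            better = c < pbest_c
            pbest[better], pbest_c[better] = x[better], c[better]
            g = pbest[np.argmin(pbest_c)]
        return np.vstack([start, g.reshape(-1, 2), goal])

    path = pso_plan(np.array([0.0, 0.0]), np.array([10.0, 10.0]),
                    obstacles=[(5.0, 5.0, 1.0)])
    print(np.round(path, 2))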
