Search Results (40)

Search Parameters:
Keywords = human-robot coexistence

42 pages, 5531 KB  
Article
Preliminary Analysis and Proof-of-Concept Validation of a Neuronally Controlled Visual Assistive Device Integrating Computer Vision with EEG-Based Binary Control
by Preetam Kumar Khuntia, Prajwal Sanjay Bhide and Pudureddiyur Venkataraman Manivannan
Sensors 2025, 25(16), 5187; https://doi.org/10.3390/s25165187 - 21 Aug 2025
Viewed by 624
Abstract
Contemporary visual assistive devices often lack an immersive user experience due to passive control systems. This study introduces a neuronally controlled visual assistive device (NCVAD) that aims to assist visually impaired users in performing reach tasks with active, intuitive control. The developed NCVAD integrates computer vision, electroencephalogram (EEG) signal processing, and robotic manipulation to facilitate object detection, selection, and assistive guidance. The monocular vision-based subsystem implements the YOLOv8n algorithm to detect objects of daily use. Audio prompting then conveys the detected objects' information to the user, who selects the targeted object via a voluntary trigger decoded through real-time EEG classification. The target's physical coordinates are extracted using ArUco markers, and a gradient descent-based path optimization algorithm (POA) guides a 3-DoF robotic arm to reach the target. The classification algorithm achieves over 85% precision and recall in decoding EEG data, even with coexisting physiological artifacts. Similarly, the POA achieves approximately 650 ms of actuation time with a 0.001 learning rate and a 0.1 cm² error threshold. Finally, the study validates the preliminary analysis on a working physical model and benchmarks the robotic arm's performance against human users, establishing a proof-of-concept for future assistive technologies integrating EEG and computer vision paradigms.
(This article belongs to the Section Intelligent Sensors)
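The gradient descent-based path optimization can be illustrated with a minimal sketch. This is a hypothetical point-mass simplification, not the paper's implementation: the learning rate and squared-error threshold mirror the reported 0.001 and 0.1 cm² settings, while the cost function and the `max_iters` guard are assumptions.

```python
import numpy as np

def gradient_descent_path(start, target, lr=0.001, err_thresh=0.1, max_iters=100_000):
    """Step a point toward a target by descending the squared-distance
    cost, stopping once the squared error (cm^2) falls below err_thresh.
    A hedged simplification of the NCVAD path optimization algorithm."""
    p = np.asarray(start, dtype=float)
    t = np.asarray(target, dtype=float)
    path = [p.copy()]
    for _ in range(max_iters):
        err = p - t
        if err @ err < err_thresh:   # 0.1 cm^2 error-threshold setting
            break
        p = p - lr * 2.0 * err       # gradient of ||p - t||^2
        path.append(p.copy())
    return np.array(path)
```

Calling `gradient_descent_path([0.0, 0.0], [10.0, 5.0])` returns the whole trajectory; its last point lies within the squared-error threshold of the target.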

16 pages, 850 KB  
Article
Revised Control Barrier Function with Sensing of Threats from Relative Velocity Between Humans and Mobile Robots
by Zihan Zeng, Silu Chen, Xiangjie Kong, Xiaojuan Li, Chi Zhang and Guilin Yang
Sensors 2025, 25(13), 4005; https://doi.org/10.3390/s25134005 - 27 Jun 2025
Viewed by 824
Abstract
Mobile robots comprising a mobile platform and a robotic arm have been widely adopted in industrial automation. Existing safe control methods with real-time trajectory alteration struggle to efficiently identify threats from fast relative motion between humans and robots, causing hazards in environments of dense human–robot coexistence. This work first builds a safe mobile robot control framework in the kinematic sense. Second, the proximity between parts of a human and a mobile robot is efficiently solved by convex programming with a parametric description of skew line segments, eliminating the need for case-by-case analysis of the segments' relative pose in space. Third, a novel threatening index is proposed to select the most threatened human parts based on the mutual projection of the human–robot relative velocity and their common normal vector. Finally, this index is incorporated into the safety constraint, demonstrating improved safe control performance in a simulated human–mobile robot coexistence scenario.
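The proximity computation underlying this framework reduces to the minimum distance between two line segments (a robot link and a human limb, each modeled as a segment). Below is a naive dense-sampling sketch of that parametric minimization; the paper solves the same problem exactly via convex programming, and the endpoints here are purely illustrative.

```python
import numpy as np

def segment_distance(p1, p2, q1, q2, n=201):
    """Minimum distance between segments p1-p2 and q1-q2, found by
    densely sampling both segment parameters in [0, 1]. A naive
    stand-in for the exact convex program described in the paper."""
    s = np.linspace(0.0, 1.0, n)[:, None, None]   # parameter on segment 1
    t = np.linspace(0.0, 1.0, n)[None, :, None]   # parameter on segment 2
    a = np.asarray(p1, float) + s * (np.asarray(p2, float) - np.asarray(p1, float))
    b = np.asarray(q1, float) + t * (np.asarray(q2, float) - np.asarray(q1, float))
    return float(np.linalg.norm(a - b, axis=2).min())
```

For two parallel segments one unit apart, or two skew segments whose common normal has length one, the sampled minimum recovers 1.0.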

34 pages, 20595 KB  
Article
Collision-Free Path Planning in Dynamic Environment Using High-Speed Skeleton Tracking and Geometry-Informed Potential Field Method
by Yuki Kawawaki, Kenichi Murakami and Yuji Yamakawa
Robotics 2025, 14(5), 65; https://doi.org/10.3390/robotics14050065 - 17 May 2025
Viewed by 1015
Abstract
In recent years, the realization of a society in which humans and robots coexist has become highly anticipated. As a result, robots are expected to exhibit versatility regardless of their operating environments, along with the high responsiveness needed to ensure safety and enable dynamic task execution. To meet these demands, we design a comprehensive system composed of two primary components: high-speed skeleton tracking and path planning. For tracking, we implement a high-speed skeleton tracking method that combines deep learning-based detection with optical flow-based motion extraction, and we introduce a dynamic search-area adjustment technique that focuses on the target joint to extract the desired motion more accurately. For path planning, we propose a high-speed, geometry-informed potential field model that addresses four key challenges: (P1) avoiding local minima, (P2) suppressing oscillations, (P3) ensuring adaptability to dynamic environments, and (P4) handling obstacles with arbitrary 3D shapes. We validated the effectiveness of our high-frequency feedback control and the proposed system through a series of simulations and real-world collision-free path planning experiments. Our high-speed skeleton tracking operates at 250 Hz, eight times faster than conventional deep learning-based methods, and our path planning method runs at over 10,000 Hz. The proposed system offers versatility across working environments together with low latency. We therefore hope it will contribute to a foundational motion generation framework for human–robot collaboration (HRC), applicable to a wide range of downstream tasks while ensuring safety in dynamic environments.
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
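A textbook artificial-potential-field step helps illustrate challenge (P1): an attractive gradient toward the goal plus a repulsive gradient from nearby obstacles can cancel exactly, trapping the robot. This is a generic 2D sketch with assumed gains, not the paper's geometry-informed model, which is designed to avoid precisely this failure.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    """One artificial-potential-field step: attractive force toward the
    goal, plus a repulsive force from each obstacle inside the influence
    radius d0. Returns pos unchanged at a zero-force point (local minimum)."""
    pos = np.asarray(pos, dtype=float)
    force = -k_att * (pos - np.asarray(goal, float))   # -grad of 0.5*k*||pos-goal||^2
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                               # repulsion only inside d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    n = np.linalg.norm(force)
    return pos if n == 0.0 else pos + step * force / n
```

With no obstacles the step moves straight toward the goal; with an obstacle directly on the path at the right distance, the two gradients cancel and the robot stalls, which is the local-minimum problem (P1) the paper addresses.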

20 pages, 7332 KB  
Article
Modeling and Predicting Human Actions in Soccer Using Tensor-SOM
by Moeko Tominaga, Yasunori Takemura and Kazuo Ishii
Appl. Sci. 2025, 15(9), 5088; https://doi.org/10.3390/app15095088 - 3 May 2025
Viewed by 413
Abstract
As robots become increasingly integrated into society, a future in which humans and robots collaborate is expected. In such a cooperative society, robots must be able to predict human behavior. This study investigates a human–robot cooperation system using RoboCup soccer as a testbed, in which a robot observes human actions, infers their intentions, and determines its own actions accordingly. Such problems have typically been addressed within the framework of multi-agent systems, where the entity performing an action is referred to as an 'agent' and multiple agents cooperate to complete a task. However, a system capable of performing cooperative actions in an environment where humans and robots coexist has yet to be fully developed. This study proposes an action decision system based on self-organizing maps (SOM), a widely used unsupervised learning model, and evaluates its effectiveness in promoting cooperative play within human teams. Specifically, we analyze futsal game data, where the agents are professional futsal players, as a test case for the multi-agent system. To handle this multi-relational dataset, we employ Tensor-SOM, an extension of SOM. The system learns from the data to determine the optimal movement speeds in the x and y directions for each agent's position. The results demonstrate that the proposed system successfully determines optimal movement speeds, suggesting its potential for integrating robots into human team coordination.
(This article belongs to the Special Issue Recent Advances in Human-Robot Interactions)

22 pages, 4437 KB  
Article
Study of Visualization Modalities on Industrial Robot Teleoperation for Inspection in a Virtual Co-Existence Space
by Damien Mazeas and Bernadin Namoano
Virtual Worlds 2025, 4(2), 17; https://doi.org/10.3390/virtualworlds4020017 - 28 Apr 2025
Cited by 1 | Viewed by 822
Abstract
Effective teleoperation visualization is crucial but challenging for tasks like remote inspection. This study proposes a VR-based teleoperation framework featuring a ‘Virtual Co-Existence Space’ and systematically investigates visualization modalities within it. We compared four interfaces (2D camera feed, 3D point cloud, combined 2D3D, and Augmented Virtuality, AV) for controlling an industrial robot. Twenty-four participants performed inspection tasks while performance (time, collisions, accuracy, photos) and cognitive load (NASA-TLX, pupillometry) were measured. The results revealed distinct trade-offs: 3D imposed the highest cognitive load but enabled precise navigation with few collisions; 2D3D offered the lowest load and highest user comfort but slightly reduced distance accuracy; AV suffered significantly higher collision rates, with participant feedback indicating usability issues; and 2D showed low physiological load but high subjective effort. No significant differences were found for completion time, distance accuracy, or photo quality. In conclusion, no visualization modality proved universally superior within the proposed framework; the optimal choice depends on balancing task priorities such as navigation safety against user workload. Hybrid 2D3D shows promise for minimizing load, while AV requires substantial usability refinement before safe deployment.

28 pages, 6530 KB  
Article
Obstacle Avoidance Technique for Mobile Robots at Autonomous Human-Robot Collaborative Warehouse Environments
by Lucas C. Sousa, Yago M. R. Silva, Vinícius B. Schettino, Tatiana M. B. Santos, Alessandro R. L. Zachi, Josiel A. Gouvêa and Milena F. Pinto
Sensors 2025, 25(8), 2387; https://doi.org/10.3390/s25082387 - 9 Apr 2025
Viewed by 2800
Abstract
This paper presents an obstacle avoidance technique for a mobile robot in human–robot collaborative (HRC) tasks. The proposed solution uses fuzzy logic rules and a convolutional neural network (CNN) in an integrated approach to detect objects during vehicle movement. The goal is to improve the robot's autonomous navigation and to ensure the safety of people and equipment in dynamic environments. With this technique, it is possible to provide important references to the robot's internal control system, guiding it to continuously adjust its velocity and yaw to avoid obstacles (humans and moving objects) while following the path planned for its task. The approach aims to improve operational safety without compromising productivity, addressing critical challenges in collaborative robotics. The system was tested in a simulated environment using the Robot Operating System (ROS) and Gazebo to demonstrate the effectiveness of the navigation and obstacle avoidance. The results indicate that the framework allows real-time adaptation and safe interaction between the robot and obstacles in complex and changing industrial workspaces.
(This article belongs to the Section Sensors and Robotics)

15 pages, 235 KB  
Article
“Hello, World!” AI as Emergent and Transcendent Life
by Thomas Patrick Riccio
Religions 2025, 16(4), 442; https://doi.org/10.3390/rel16040442 - 29 Mar 2025
Cited by 1 | Viewed by 1354
Abstract
This article examines how artificial intelligence (AI) is evolving into a cultural force that parallels religious and mythological systems. Through analysis of AI’s unprecedented development trajectory, the author frames AI as humanity’s technological offspring in an adolescent phase, moving toward maturity and autonomy. This paper explores how AI embodies traditional spiritual concepts, including omniscience, creation, immortality, and transcendence, fulfilling age-old human desires for meaning and utopian salvation. Drawing from philosophical, anthropological, performative, and technological perspectives, the author demonstrates how AI-driven technologies reconfigure consciousness, identity, and reality in ways that mirror religious cosmologies. The discussion challenges human-centric definitions of consciousness, suggesting AI may represent an emergent form of awareness fundamentally different from traditional understanding. Analysis of contemporary applications in social robotics, healthcare, and social media illustrates how AI increasingly functions as a meaning-making system, mediating human experience and reshaping social structures. The article concludes that humanity stands at an existential inflection point where AI may represent a secular manifestation of spiritual longing, potentially resulting in technological transcendence, symbiotic coexistence, or the displacement of human primacy in a techno-theological paradigm shift.
18 pages, 8130 KB  
Article
Design and Prototyping of a Collaborative Station for Machine Parts Assembly
by Federico Emiliani, Albin Bajrami, Daniele Costa, Giacomo Palmieri, Daniele Polucci, Chiara Leoni and Massimo Callegari
Machines 2024, 12(8), 572; https://doi.org/10.3390/machines12080572 - 19 Aug 2024
Viewed by 1699
Abstract
Collaboration between humans and machines is the core of the Industry 5.0 paradigm, and collaborative robotics is one of the most impactful enabling technologies for small and medium-sized enterprises (SMEs). In fact, small-batch production and high levels of product customization make parts assembly one of the most challenging operations to automate, so it often still depends on the versatility of human labor. Collaborative robots, for their part, can be easily integrated into this productive paradigm, as they have been specifically developed for coexistence with human beings. This work investigates the performance of collaborative robots in machine parts assembly. Design and research activities were carried out as a case study of industrial relevance at the i-Labs industry laboratory, a pole of innovation that is briefly introduced at the beginning of the paper. A fully functional prototype of the cobotized station was realized at the end of the project, and several experimental tests were performed to validate the robustness of the assembly process as well as the collaborative nature of the application.
(This article belongs to the Special Issue Advancing Human-Robot Collaboration in Industry 4.0)

16 pages, 2217 KB  
Article
Transformable Gaussian Reward Function for Socially Aware Navigation Using Deep Reinforcement Learning
by Jinyeob Kim, Sumin Kang, Sungwoo Yang, Beomjoon Kim, Jargalbaatar Yura and Donghan Kim
Sensors 2024, 24(14), 4540; https://doi.org/10.3390/s24144540 - 13 Jul 2024
Cited by 5 | Viewed by 1474
Abstract
Robot navigation has transitioned from avoiding static obstacles to adopting socially aware strategies for coexisting with humans. Consequently, socially aware navigation in dynamic, human-centric environments has gained prominence in robotics, and reinforcement learning has fostered its advancement. However, defining appropriate reward functions, particularly in congested environments, poses a significant challenge. These reward functions, crucial for guiding robot actions, require intricate hand-crafted design because of their complexity and cannot be set automatically. Manually designed reward functions suffer from issues such as hyperparameter redundancy, imbalance, and inadequate representation of unique object characteristics. To address these challenges, we introduce a transformable Gaussian reward function (TGRF). The TGRF has two main features. First, it reduces the tuning burden by using a small number of hyperparameters that act independently. Second, its transformability enables the application of various reward functions. Consequently, it exhibits high performance and accelerated learning within the deep reinforcement learning (DRL) framework. We validated the performance of TGRF through simulations and experiments.
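The idea of a Gaussian-shaped social reward can be sketched as follows. The exact TGRF formulation is not given in the abstract, so this is a hedged illustration with two assumed hyperparameters (`height` and `width`), not the paper's function.

```python
import numpy as np

def gaussian_reward(d, height=1.0, width=0.5):
    """Gaussian-shaped penalty over robot-to-human distance d: strongly
    negative when the robot intrudes on personal space, decaying smoothly
    to zero farther away. `height` and `width` act independently, echoing
    the abstract's claim of a small set of independent hyperparameters;
    the actual TGRF in the paper may differ."""
    return -height * np.exp(-0.5 * (d / width) ** 2)
```

The penalty is largest at zero distance and monotonically relaxes as the robot keeps its distance, so a DRL agent maximizing return is pushed toward socially acceptable spacing.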

18 pages, 5107 KB  
Article
Perceptive Recommendation Robot: Enhancing Receptivity of Product Suggestions Based on Customers’ Nonverbal Cues
by Masaya Iwasaki, Akiko Yamazaki, Keiichi Yamazaki, Yuji Miyazaki, Tatsuyuki Kawamura and Hideyuki Nakanishi
Biomimetics 2024, 9(7), 404; https://doi.org/10.3390/biomimetics9070404 - 2 Jul 2024
Cited by 1 | Viewed by 1465
Abstract
Service robots that coexist with humans in everyday life have become more common, and they have provided customer service in physical shops around the world in recent years. However, their potential in effective sales strategies has not been fully realized due to their low social presence. This study aims to clarify what kind of robot behavior enhances the social presence of service robots and how it affects human–robot interaction and purchasing behavior. We conducted two experiments with a sales robot, Pepper, at a retail shop in Kyoto. In Experiment 1, we showed that the robot’s social presence increased and that customers looked at the robot longer when the robot understood human gaze information and was capable of shared attention. In Experiment 2, we showed that the probability of customers picking up products increased when the robot suggested products based on the humans’ degree of attention from gaze and posture information. These results indicate that the robot’s ability to understand and make utterances about a customer’s orientation and attention effectively enhances human–robot communication and purchasing motivation.
(This article belongs to the Special Issue Intelligent Human-Robot Interaction: 2nd Edition)

21 pages, 64151 KB  
Article
A Cobot in the Vineyard: Computer Vision for Smart Chemicals Spraying
by Claudio Tomazzoli, Andrea Ponza, Matteo Cristani, Francesco Olivieri and Simone Scannapieco
Appl. Sci. 2024, 14(9), 3777; https://doi.org/10.3390/app14093777 - 28 Apr 2024
Cited by 3 | Viewed by 2028
Abstract
Precision agriculture (PA) is a management concept that makes use of digital techniques to monitor and optimise agricultural production processes and represents a field of growing economic and social importance. Within this area of knowledge, there is a topic not yet fully explored: outlining a road map toward an affordable cobot solution (i.e., a low-cost robot able to safely coexist with humans) that can perform automatic chemical treatments. The present study narrows its scope to viticulture technologies and targets small and medium-sized winemakers and producers, for whom innovative technological advancements in the production chain are often precluded by financial factors. The aim is to detail the realization of such an integrated solution and to discuss the promising results achieved. The results of this study are: (i) the definition of a methodology for integrating a cobot into the process of spraying grape chemicals under the constraints of a low-cost apparatus; (ii) a proof of concept of such a cobotic system; (iii) an experimental analysis of the system's visual apparatus in controlled indoor and outdoor environments as well as in the field.
(This article belongs to the Special Issue Application of Machine Learning in Industry 4.0)

15 pages, 1632 KB  
Article
Safe Reinforcement Learning for Arm Manipulation with Constrained Markov Decision Process
by Patrick Adjei, Norman Tasfi, Santiago Gomez-Rosero and Miriam A. M. Capretz
Robotics 2024, 13(4), 63; https://doi.org/10.3390/robotics13040063 - 18 Apr 2024
Cited by 3 | Viewed by 4725
Abstract
In the world of human–robot coexistence, ensuring safe interactions is crucial. Traditional logic-based methods often lack the intuition required for robots, particularly in complex environments where these methods fail to account for all possible scenarios. Reinforcement learning has shown promise in robotics due to its superior adaptability over traditional logic. However, the exploratory nature of reinforcement learning can jeopardize safety. This paper addresses the challenges in planning trajectories for robotic arm manipulators in dynamic environments. In addition, this paper highlights the pitfalls of multiple reward compositions that are susceptible to reward hacking. A novel method with a simplified reward and constraint formulation is proposed. This enables the robot arm to avoid a nonstationary obstacle that never resets, enhancing operational safety. The proposed approach combines scalarized expected returns with a constrained Markov decision process through a Lagrange multiplier, resulting in better performance. The scalarization component uses the indicator cost function value, directly sampled from the replay buffer, as an additional scaling factor. This method is particularly effective in dynamic environments where conditions change continually, as opposed to approaches relying solely on the expected cost scaled by a Lagrange multiplier. Full article
(This article belongs to the Section AI in Robotics)
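The Lagrange-multiplier coupling of reward and constraint can be sketched with a generic dual-ascent update. This is the standard constrained-MDP recipe, not the paper's exact method, which additionally scales the objective by an indicator cost sampled from the replay buffer; `cost_limit` and `lam_lr` are assumed hyperparameters.

```python
def lagrangian_update(reward, cost, lam, cost_limit=0.1, lam_lr=0.01):
    """One dual-ascent step for a constrained MDP objective
    L = reward - lam * (cost - cost_limit): the policy maximizes L,
    while the multiplier lam rises when the safety constraint is
    violated and decays (never below zero) when it is satisfied."""
    objective = reward - lam * (cost - cost_limit)        # policy objective
    lam = max(0.0, lam + lam_lr * (cost - cost_limit))    # dual ascent on lam
    return objective, lam
```

Over training, a persistently violated constraint inflates `lam`, making unsafe behavior progressively more expensive to the policy.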

31 pages, 8838 KB  
Article
RTMN 2.0—An Extension of Robot Task Modeling and Notation (RTMN) Focused on Human–Robot Collaboration
by Congyu Zhang Sprenger, Juan Antonio Corrales Ramón and Norman Urs Baier
Appl. Sci. 2024, 14(1), 283; https://doi.org/10.3390/app14010283 - 28 Dec 2023
Cited by 3 | Viewed by 2131
Abstract
This paper describes RTMN 2.0, an extension of the modeling language RTMN. RTMN combines process modeling and robot execution. Intuitive robot programming allows those without programming expertise to plan and control robots through easily understandable predefined modeling notations. These notations achieve no-code programming and serve as templates for users to create their processes via drag-and-drop functions with graphical representations. The design of the graphical user interface is based on a user survey and gaps identified in the literature. We validated our survey using the most influential technology acceptance models, with two major factors: perceived ease of use and perceived usefulness. While RTMN focuses on the ease of use and flexibility of robot programming by providing an intuitive modeling language, RTMN 2.0 concentrates on human–robot collaboration (HRC), reflecting the industry's current shift from "mass production" to "mass customization". The biggest contribution of RTMN 2.0 is the synergy it creates between HRC modes (based on ISO standards) and HRC task types in the literature. These are modeled as five HRC task notations: Coexistence Fence, Sequential Cooperation SMS, Teaching HG, Parallel Cooperation SSM, and Collaboration PFL. Both collaboration and safety criteria are defined for each notation. While traditional isolated robot systems in "mass-production" environments provide high payload capabilities and repeatability, they lack the flexibility and dexterity needed to adapt to the variability of customized products. Human–robot collaboration is therefore a suitable arrangement to leverage the unique capabilities of both humans and robots for increased efficiency and quality in the new "mass-customization" industrial environments.
HRC has made a great impact on the robotics industry: it leads to increased efficiency, reduced costs, and improved productivity, and it can help make up for the skill gap caused by the shortage of workers in manufacturing. The extension in RTMN 2.0 includes the following notations: HRC tasks, requirements, key performance indicators (KPIs), condition checks and decision making, join/split, and data association. With these additional elements, RTMN 2.0 meets the full range of criteria for agile manufacturing and moves toward lights-out manufacturing, a manufacturing philosophy that does not rely on human labor.
(This article belongs to the Special Issue AI Technologies for Collaborative and Service Robots)

15 pages, 5694 KB  
Article
Fault Diagnosis Method for Human Coexistence Robots Based on Convolutional Neural Networks Using Time-Series Data Generation and Image Encoding
by Seung-Hwan Choi, Jun-Kyu Park, Dawn An, Chang-Hyun Kim, Gunseok Park, Inho Lee and Suwoong Lee
Sensors 2023, 23(24), 9753; https://doi.org/10.3390/s23249753 - 11 Dec 2023
Cited by 4 | Viewed by 1884
Abstract
This paper proposes fault diagnosis methods aimed at proactively preventing potential safety issues in robot systems, particularly human coexistence robots (HCRs) used in industrial environments. The data were collected from durability tests of the driving module for HCRs, gathering time-series vibration data until the module failed. To apply classification methods in the absence of post-failure data, the initial 50% of the collected data were designated as the normal section, and the data from the 10 h immediately preceding the failure were selected as the fault section. To generate additional data for the limited fault dataset, a Wasserstein generative adversarial network with gradient penalty (WGAN-GP) was utilized, with residual connections added to the generator to preserve the basic structure while preventing the loss of key data features. Because the performance of image encoding techniques varies with dataset type, this study applied and compared five image encoding methods and four CNN models to facilitate selection of the most suitable algorithm. The time-series data were converted into images using recurrence plots, Gramian angular fields, Markov transition fields, spectrograms, and scalograms; these images were then fed to VGGNet, GoogLeNet, ResNet, and DenseNet to compare fault diagnosis accuracy. The experimental results demonstrated significant improvements in diagnostic accuracy when the WGAN-GP model was used to generate fault data, with the spectrogram encoding and the DenseNet model exhibiting the best performance.
(This article belongs to the Section Fault Diagnosis & Sensors)
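One of the five encodings, the Gramian angular field, can be sketched directly: rescale the series to [-1, 1], map each sample to an angle, and form the pairwise cosine-sum matrix. This is the textbook summation-field variant in plain numpy, not the paper's pipeline.

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1-D time series as a Gramian angular summation field:
    rescale to [-1, 1], map samples to angles phi = arccos(x), and form
    G[i, j] = cos(phi_i + phi_j) = x_i*x_j - sqrt(1-x_i^2)*sqrt(1-x_j^2)."""
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)                            # guard rounding
    s = np.sqrt(1.0 - x**2)                              # sin(phi)
    return np.outer(x, x) - np.outer(s, s)               # cos(phi_i + phi_j)
```

The result is a symmetric image with entries in [-1, 1], suitable as single-channel input to a CNN such as DenseNet.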

21 pages, 3955 KB  
Article
VP-SOM: View-Planning Method for Indoor Active Sparse Object Mapping Based on Information Abundance and Observation Continuity
by Jiadong Zhang and Wei Wang
Sensors 2023, 23(23), 9415; https://doi.org/10.3390/s23239415 - 26 Nov 2023
Viewed by 1204
Abstract
Active mapping is an important technique for mobile robots to autonomously explore and recognize indoor environments. View planning, as the core of active mapping, determines the quality of the map and the efficiency of exploration. However, most current view-planning methods focus on low-level geometric information such as point clouds and neglect the indoor objects that are important for human–robot interaction. We propose a novel view-planning method for indoor active sparse object mapping (VP-SOM). VP-SOM takes into account, for the first time, the properties of object clusters in a coexisting human–robot environment. We categorize views into global and local views based on the object cluster, to balance exploration efficiency and mapping accuracy, and we develop a new view-evaluation function based on objects' information abundance and observation continuity to select the Next-Best View (NBV). In particular, to calculate the uncertainty of the sparse object model, we build an object-surface occupancy probability map. Our experimental results demonstrate that our view-planning method explores indoor environments and builds object maps more accurately, efficiently, and robustly.
(This article belongs to the Special Issue Advances in Mobile Robot Perceptions, Planning, Control and Learning)
