Search Results (209)

Search Parameters:
Keywords = industrial and robotic vision systems

20 pages, 2979 KB  
Article
Computer Vision-Enabled Construction Waste Sorting: A Sensitivity Analysis
by Xinru Liu, Zeinab Farshadfar and Siavash H. Khajavi
Appl. Sci. 2025, 15(19), 10550; https://doi.org/10.3390/app151910550 - 29 Sep 2025
Abstract
This paper presents a comprehensive sensitivity analysis of the pioneering real-world deployment of computer vision-enabled construction waste sorting in Finland, implemented by a leading provider of robotic recycling solutions. Building upon and extending prior field research, the study analyzes an industry flagship case to examine the financial feasibility of computer vision-enabled robotic sorting compared to conventional sorting. The sensitivity analysis covers cost parameters related to labor, wages, personnel training, machinery (including AI software, hardware, and associated components), and maintenance operations, as well as capital expenses. We further expand the existing cost model by integrating the net present value (NPV) of investments. The results indicate that the computer vision-enabled automated system (CVAS) achieves cost competitiveness over conventional sorting (CS) under conditions of higher labor-related costs, such as increased headcount, wages, and training expenses. For instance, when annual wages exceed EUR 20,980, CVAS becomes more cost-effective. Conversely, CS retains cost advantages in scenarios dominated by higher machinery and maintenance costs or extremely elevated discount rates. For example, when the average machinery cost surpasses EUR 512,000 per unit, CS demonstrates greater economic viability. The novelty of this work lies in the pioneering real-world case study, the improvements to a comprehensive comparative cost model for CVAS and CS, and the clarification of how key cost variables drive solution (CVAS or CS) selection.
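The break-even logic behind these figures can be illustrated with a short sketch. The discount rate, cost figures, and horizon below are placeholder assumptions, not values from the paper; only the method (compare discounted lifetime costs of CVAS and CS) follows the abstract.

```python
# Illustrative NPV cost comparison between a computer vision-enabled
# automated system (CVAS) and conventional sorting (CS).
# All numbers are hypothetical placeholders, not values from the paper.

def npv(cash_flows, rate):
    """Net present value of a series of annual cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

YEARS = 10
RATE = 0.08            # assumed discount rate

# CVAS: high upfront machinery cost, low annual labor cost (assumed).
cvas_costs = [-500_000] + [-60_000] * YEARS
# CS: low upfront cost, high annual labor cost (assumed headcount * wage).
cs_costs = [-50_000] + [-8 * 25_000] * YEARS

npv_cvas = npv(cvas_costs, RATE)
npv_cs = npv(cs_costs, RATE)

# The less negative NPV is the cheaper option over the horizon.
print(f"NPV CVAS: {npv_cvas:,.0f} EUR")
print(f"NPV CS:   {npv_cs:,.0f} EUR")
print("Cheaper:", "CVAS" if npv_cvas > npv_cs else "CS")
```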

35 pages, 6625 KB  
Review
Industrial Robotic Setups: Tools and Technologies for Tracking and Analysis in Industrial Processes
by Mantas Makulavičius, Juratė Jolanta Petronienė, Ernestas Šutinys, Vytautas Bučinskas and Andrius Dzedzickis
Appl. Sci. 2025, 15(18), 10249; https://doi.org/10.3390/app151810249 - 20 Sep 2025
Viewed by 331
Abstract
Since their introduction, industrial robots have been used to enhance efficiency and reduce the need for manual labor. They have become a universal tool across all economic sectors, with software integration playing a critical role in the effective operation of machines and processes. The accuracy of robotic action is developing rapidly across all robot-assisted activities, with significant breakthroughs in algorithm modification and robot control, as well as in monitoring and planning software-hardware compatibility to prevent errors in real time. The integration of the Internet of Things, machine learning, and other advanced techniques has enhanced the intelligent features of industrial robots. As industrial automation advances, there is an increasing demand for precise control in a variety of robotic arm applications, and current solutions must be refined to address the challenges posed by high connectivity, complex computations, and the variety of scenarios involved. This review examines the application of vision-based models, particularly YOLO (You Only Look Once) variants, in object detection within industrial robotic environments, as well as other machine learning models for tasks such as classification and localization. Finally, this review summarizes the results presented in selected publications, compares the methods represented, identifies challenges in prospective object-tracking technologies, and suggests future research directions.
(This article belongs to the Special Issue Multimodal Robot Intelligence for Grasping and Manipulation)
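For readers unfamiliar with the YOLO family the review surveys, a minimal detection sketch follows. It assumes the ultralytics package and a pretrained yolov8n.pt checkpoint; the image path is a placeholder, and the review itself is not tied to this particular library.

```python
# Minimal YOLO object-detection sketch (assumes `pip install ultralytics`).
# Model choice and image path are illustrative, not from the review.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # small pretrained COCO model
results = model("workcell_image.jpg")  # placeholder image of a robot cell

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        conf = float(box.conf)
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # pixel corner coordinates
        print(f"{cls_name}: {conf:.2f} at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```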

8 pages, 1451 KB  
Proceeding Paper
Development of a System for Flexible Feeding of Parts with Robot and Machine Vision
by Penko Mitev
Eng. Proc. 2025, 104(1), 84; https://doi.org/10.3390/engproc2025104084 - 6 Sep 2025
Viewed by 1775
Abstract
This article presents a design solution for feeding cylindrical parts with axial orientation. A working algorithm was developed to control and synchronize the main components, which was verified via a simulation. The pneumatic and electrical circuits were designed using a software platform for engineering purposes. Based on the CAD project created, a real prototype was built; its energy consumption was tested and evaluated, and the results verified the solution. This article emphasizes the use of specific sensors for detecting part orientation and their role in improving process reliability. The system is suitable for industrial implementation due to its functionality, stable operation, low energy consumption, and ability to be integrated into automated production systems.
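The paper does not detail its control algorithm, so the following is a purely hypothetical sketch of how such a feeding cycle is commonly synchronized: a small state machine that gates release to the robot on an orientation sensor. Every state, sensor, and actuator name here is an illustrative assumption.

```python
# Hypothetical feeding-cycle state machine for axially oriented cylindrical
# parts. The Part flag stands in for an orientation sensor, and the flip
# stands in for a pneumatic reorientation actuator; none of this is from
# the paper.
from enum import Enum, auto


class State(Enum):
    CHECK_ORIENTATION = auto()
    REORIENT = auto()
    FEED_TO_ROBOT = auto()


class Part:
    def __init__(self, oriented: bool):
        self.oriented = oriented   # what the orientation sensor would report


def feed_cycle(part: Part) -> None:
    state = State.CHECK_ORIENTATION
    while state is not State.FEED_TO_ROBOT:
        if state is State.CHECK_ORIENTATION:
            state = State.FEED_TO_ROBOT if part.oriented else State.REORIENT
        elif state is State.REORIENT:
            part.oriented = True   # stub: pneumatic flipper corrects the part
            state = State.CHECK_ORIENTATION
    print("part released to robot")   # stub: gate/robot handshake


feed_cycle(Part(oriented=False))
```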

17 pages, 4980 KB  
Article
Deep Reinforcement Learning-Based Autonomous Docking with Multi-Sensor Perception in Sim-to-Real Transfer
by Yanyan Dai and Kidong Lee
Processes 2025, 13(9), 2842; https://doi.org/10.3390/pr13092842 - 5 Sep 2025
Viewed by 564
Abstract
Autonomous docking is a critical capability for enabling fully automated operations in industrial and logistics environments using Autonomous Mobile Robots (AMRs). Traditional rule-based docking approaches often struggle with generalization and robustness in complex, dynamic scenarios. This paper presents a deep reinforcement learning-based autonomous docking framework that integrates Proximal Policy Optimization (PPO) with multi-sensor fusion. It includes YOLO-based vision detection, depth estimation, and LiDAR-based orientation correction. A concise 4D state vector, comprising relative position and angle indicators, is used to guide a continuous control policy. The outputs are linear and angular velocity commands for smooth and accurate docking. The training is conducted in a Gym-compatible Gazebo simulation, acting as a digital twin of the real-world system, and incorporates realistic variations in lighting, obstacle placement, and marker visibility. A designed reward function encourages alignment accuracy, progress, and safety. The final policy is deployed on a real robot via a sim-to-real transfer pipeline, supported by a ROS-based transfer node. Experimental results demonstrate that the proposed method achieves robust and precise docking behavior under diverse real-world conditions, validating the effectiveness of PPO-based learning and sensor fusion for practical autonomous docking applications.
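As a rough illustration of the 4D-state, continuous-action setup described above, here is a minimal Gymnasium environment and PPO training call using stable-baselines3. The toy dynamics, reward shaping, and velocity limits are assumptions; the paper's actual training runs in a Gazebo digital twin.

```python
# Sketch of a PPO docking policy over a 4D state (relative x, y, heading
# error, marker-visibility flag) emitting (linear, angular) velocity.
# Dynamics and reward are toy assumptions, not the paper's Gazebo twin.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyDockingEnv(gym.Env):
    def __init__(self):
        self.observation_space = spaces.Box(-5.0, 5.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(
            np.array([0.0, -1.0], dtype=np.float32),   # v >= 0, omega in [-1, 1]
            np.array([0.5, 1.0], dtype=np.float32),
        )
        self.state = None

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # start pose relative to the dock; last entry = marker visible
        self.state = np.array([2.0, 1.0, 0.5, 1.0], dtype=np.float32)
        return self.state, {}

    def step(self, action):
        v, omega = action
        x, y, theta, vis = self.state
        x -= v * np.cos(theta) * 0.1          # crude unicycle update, dt = 0.1 s
        y -= v * np.sin(theta) * 0.1
        theta += omega * 0.1
        self.state = np.clip(
            np.array([x, y, theta, vis], dtype=np.float32), -5.0, 5.0
        )
        dist = float(np.hypot(x, y))
        reward = -dist - 0.1 * abs(theta)     # encourage alignment and progress
        done = dist < 0.05 and abs(theta) < 0.1
        return self.state, reward, done, False, {}


model = PPO("MlpPolicy", ToyDockingEnv(), verbose=0)
model.learn(total_timesteps=10_000)           # short demo run
```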

16 pages, 3657 KB  
Article
Development and Performance Evaluation of a Vision-Based Automated Oyster Size Classification System
by Jonghwan Baek, Seolha Kim, Chang-Hee Lee, Myeongsu Jeong, Jin-Ho Suh and Jaeyoul Lee
Inventions 2025, 10(5), 76; https://doi.org/10.3390/inventions10050076 - 27 Aug 2025
Viewed by 416
Abstract
This study presents the development and validation of an automated oyster classification system designed to classify oysters by size and place them into trays for freezing. Addressing limitations in conventional manual processing, the proposed system integrates a vision-based recognition algorithm and a delta robot (parallel robot) equipped with a soft gripper. The vision system identifies oyster size and optimal grasp points using image moment calculations, enhancing the accuracy of classification for irregularly shaped oysters. Experimental tests demonstrated classification and grasping success rates of 99%. A process simulation based on real industrial conditions revealed that seven units of the automated system are required to match the daily output of 7 tons achieved by 60 workers. When compared with a theoretical 100% success rate, the system showed a marginal production loss of 715 oysters and 15 trays. These results confirm the potential of the proposed system to improve consistency, reduce labor dependency, and increase productivity in oyster processing. Future work will focus on gripper design optimization and parameter tuning to further improve system stability and efficiency.
(This article belongs to the Section Inventions and Innovation in Advanced Manufacturing)
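The image-moment step mentioned in the abstract can be sketched with OpenCV: the centroid from first-order moments gives a grasp point, second-order central moments give an orientation, and contour area stands in for size. The threshold method and size boundaries below are illustrative assumptions.

```python
# Sketch: size classification and grasp-point estimation via image moments.
# Threshold and size boundaries are illustrative, not the paper's values.
import cv2
import numpy as np

img = cv2.imread("oyster.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
if img is None:
    raise SystemExit("provide an input image")
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    m = cv2.moments(c)
    if m["m00"] == 0:
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid = grasp point
    area = cv2.contourArea(c)
    size = "small" if area < 3000 else "medium" if area < 8000 else "large"
    # orientation from second-order central moments (useful for gripper yaw)
    angle = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    print(f"{size}: grasp at ({cx:.0f},{cy:.0f}), yaw {np.degrees(angle):.1f} deg")
```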

23 pages, 6098 KB  
Article
Smart Manufacturing Workflow for Fuse Box Assembly and Validation: A Combined IoT, CAD, and Machine Vision Approach
by Carmen-Cristiana Cazacu, Teodor Cristian Nasu, Mihail Hanga, Dragos-Alexandru Cazacu and Costel Emil Cotet
Appl. Sci. 2025, 15(17), 9375; https://doi.org/10.3390/app15179375 - 26 Aug 2025
Viewed by 554
Abstract
This paper presents an integrated workflow for smart manufacturing, combining CAD modeling, Digital Twin synchronization, and automated visual inspection to detect defective fuses in industrial electrical panels. The proposed system connects Onshape CAD models with a collaborative robot via the ThingWorx IoT platform and leverages computer vision with HSV color segmentation for real-time fuse validation. A custom ROI-based calibration method is implemented to address visual variation across fuse types, and a 5-s time-window validation improves detection robustness under fluctuating conditions. The system achieves a 95% accuracy rate across two fuse box types, with confidence intervals reported for statistical significance. Experimental findings indicate an approximate 85% decrease in manual intervention duration. Because of its adaptability and extensibility, the design can be implemented in a variety of assembly processes and provides a foundation for smart factory systems that are more scalable and independent.
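A hedged sketch of the HSV segmentation with ROI calibration and 5-s time-window validation follows. The HSV bounds, ROI coordinates, and pass threshold are invented for illustration; the paper's calibrated values are not given here.

```python
# Sketch of HSV-based fuse validation inside a calibrated ROI, with a
# 5-second time-window vote to ride out lighting fluctuation. The HSV
# range, ROI, and pass threshold are illustrative assumptions.
import time
import cv2
import numpy as np

GOOD_FUSE_LO = np.array([35, 80, 80])    # assumed HSV lower bound (greenish)
GOOD_FUSE_HI = np.array([85, 255, 255])  # assumed HSV upper bound
ROI = (120, 60, 40, 40)                  # assumed x, y, w, h from calibration

def fuse_ok(frame_bgr) -> bool:
    x, y, w, h = ROI
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GOOD_FUSE_LO, GOOD_FUSE_HI)
    return cv2.countNonZero(mask) / mask.size > 0.5  # half the ROI matches

def validate(capture, window_s=5.0) -> bool:
    """Majority vote over a 5 s window of frames."""
    votes, start = [], time.time()
    while time.time() - start < window_s:
        ok, frame = capture.read()
        if ok:
            votes.append(fuse_ok(frame))
    return sum(votes) > len(votes) / 2

cap = cv2.VideoCapture(0)                # placeholder camera index
print("fuse PASS" if validate(cap) else "fuse FAIL")
```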

21 pages, 5469 KB  
Article
Radio Frequency Passive Tagging System Enabling Object Recognition and Alignment by Robotic Hands
by Armin Gharibi, Mahmoud Tavakoli, André F. Silva, Filippo Costa and Simone Genovesi
Electronics 2025, 14(17), 3381; https://doi.org/10.3390/electronics14173381 - 25 Aug 2025
Viewed by 1164
Abstract
Robotic hands require reliable and precise sensing systems to achieve accurate object recognition and manipulation, particularly in environments where vision- or capacitive-based approaches face limitations such as poor lighting, dust, reflective surfaces, or non-metallic materials. This paper presents a novel radiofrequency (RF) pre-touch sensing system that enables robust localization and orientation estimation of objects prior to grasping. The system integrates a compact coplanar waveguide (CPW) probe with fully passive chipless RF resonator tags fabricated using a patented flexible and stretchable conductive ink through additive manufacturing. This approach provides a low-cost, durable, and highly adaptable solution that operates effectively across diverse object geometries and environmental conditions. The experimental results demonstrate that the proposed RF sensor maintains stable performance under varying distances, orientations, and inter-tag spacings, showing robustness where traditional methods may fail. By combining compact design, cost-effectiveness, and reliable near-field sensing independent of object properties or lighting, this work establishes RF sensing as a practical and scalable alternative to optical and capacitive systems. The proposed method advances robotic perception by offering enhanced precision, resilience, and integration potential for industrial automation, warehouse handling, and collaborative robotics.
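At readout level, chipless tags of this kind are typically identified by locating resonance dips in a swept frequency response. The sketch below, on synthetic data, is an assumption about that general readout pattern, not the paper's probe electronics.

```python
# Hypothetical readout sketch: find resonance dips of chipless RF tags in a
# swept |S11| response. The synthetic sweep below stands in for VNA data;
# the paper's actual probe electronics are not described at this level.
import numpy as np
from scipy.signal import find_peaks

freqs = np.linspace(2e9, 6e9, 2001)              # 2-6 GHz sweep (assumed band)
s11_db = np.zeros_like(freqs)
for f0, depth in [(2.8e9, 12.0), (4.5e9, 9.0)]:  # two synthetic tag resonances
    s11_db -= depth / (1 + ((freqs - f0) / 40e6) ** 2)  # Lorentzian dips
s11_db += np.random.normal(0, 0.2, freqs.size)   # measurement noise

# Dips in |S11| are peaks of the negated trace; require a few dB prominence.
peaks, props = find_peaks(-s11_db, prominence=3.0)
for i in peaks:
    print(f"resonance near {freqs[i] / 1e9:.2f} GHz, depth {-s11_db[i]:.1f} dB")
```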

33 pages, 17334 KB  
Review
Scheduling in Remanufacturing Systems: A Bibliometric and Systematic Review
by Yufan Zheng, Wenkang Zhang, Runjing Wang and Rafiq Ahmad
Machines 2025, 13(9), 762; https://doi.org/10.3390/machines13090762 - 25 Aug 2025
Viewed by 742
Abstract
Global ambitions for net-zero emissions and resource circularity are propelling industry from linear “make-use-dispose” models toward closed-loop value creation. Remanufacturing, which aims to restore end-of-life products to a “like-new” condition, plays a central role in this transition. However, its stochastic inputs and complex, multi-stage processes pose significant challenges to traditional production planning methods. This study delivers an integrated overview of remanufacturing scheduling by combining a systematic bibliometric review of 190 publications (2005–2025) with a critical synthesis of modelling approaches and enabling technologies. The bibliometric results reveal five thematic clusters and a 14% annual growth rate, highlighting a shift from deterministic, shop-floor-focused models to uncertainty-aware, sustainability-oriented frameworks. The scheduling problems are formalised to capture features arising from variable core quality, multi-phase precedence, and carbon reduction goals, in both centralised and cloud-based systems. Advances in human–robot disassembly, vision-based inspection, hybrid repair, and digital testing demonstrate feedback-rich environments that increasingly integrate planning and execution. A comparative analysis shows that, while mixed-integer programming and metaheuristics perform well in small static settings, dynamic and large-scale contexts benefit from reinforcement learning and hybrid decomposition models. Finally, future directions for dynamic, collaborative, carbon-conscious, and digital-twin-driven scheduling are outlined and investigated.
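To make the contrast between exact and learning-based methods concrete, here is a toy mixed-integer program of the kind the review says works well for small static instances: sequencing a handful of cores on one remanufacturing station. The job data are invented, and pulp is assumed installed.

```python
# Toy mixed-integer program: sequence disassembly jobs on one remanufacturing
# station to minimize total completion time (assumes `pip install pulp`).
# Job data are invented; the review itself contains no such instance.
import pulp

jobs = {"core_A": 3, "core_B": 7, "core_C": 2, "core_D": 5}  # processing hours
names, positions = list(jobs), range(len(jobs))

prob = pulp.LpProblem("reman_sequencing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (names, positions), cat="Binary")

# each job takes exactly one position, each position holds exactly one job
for j in names:
    prob += pulp.lpSum(x[j][k] for k in positions) == 1
for k in positions:
    prob += pulp.lpSum(x[j][k] for j in names) == 1

# completion time of position k = processing done in positions 0..k;
# summing over all k gives the total flow time to minimize
prob += pulp.lpSum(
    jobs[j] * x[j][kk] for k in positions for kk in range(k + 1) for j in names
)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
order = sorted(names, key=lambda j: next(k for k in positions if x[j][k].value() == 1))
print("optimal order:", order)   # shortest-processing-time first, as expected
```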

31 pages, 34013 KB  
Article
Vision-Based 6D Pose Analytics Solution for High-Precision Industrial Robot Pick-and-Place Applications
by Balamurugan Balasubramanian and Kamil Cetin
Sensors 2025, 25(15), 4824; https://doi.org/10.3390/s25154824 - 6 Aug 2025
Viewed by 882
Abstract
High-precision 6D pose estimation for pick-and-place operations remains a critical problem for industrial robot arms in manufacturing. This study introduces an analytics-based solution for 6D pose estimation designed for a real-world industrial application: it enables the Staubli TX2-60L (manufactured by Stäubli International AG, Horgen, Switzerland) robot arm to pick up metal plates from various locations and place them into a precisely defined slot on a brake pad production line. The system uses a fixed eye-to-hand Intel RealSense D435 RGB-D camera (manufactured by Intel Corporation, Santa Clara, California, USA) to capture color and depth data. A robust software infrastructure developed in LabVIEW (ver.2019) integrated with the NI Vision (ver.2019) library processes the images through a series of steps, including particle filtering, equalization, and pattern matching, to determine the X-Y positions and Z-axis rotation of the object. The Z-position of the object is calculated from the camera’s intensity data, while the remaining X-Y rotation angles are determined using the angle-of-inclination analytics method. It is experimentally verified that the proposed analytical solution outperforms the hybrid-based method (YOLO-v8 combined with PnP/RANSAC algorithms). Experimental results across four distinct picking scenarios demonstrate the proposed solution’s superior accuracy, with position errors under 2 mm, orientation errors below 1°, and a perfect success rate in pick-and-place tasks.
(This article belongs to the Section Sensors and Robotics)
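The hybrid baseline the authors benchmark against couples a detector with PnP/RANSAC; the sketch below shows that step in isolation, recovering a 6D pose from 2D-3D correspondences with OpenCV. The plate geometry, pixel detections, and camera intrinsics are invented values.

```python
# Sketch of the PnP/RANSAC step used in the hybrid baseline the paper
# benchmarks against: recover a 6D pose from 2D-3D correspondences.
# Object geometry, pixel detections, and intrinsics are invented values.
import cv2
import numpy as np

# four corners of a 100 x 60 mm metal plate in its own frame (metres)
object_pts = np.array(
    [[-0.05, -0.03, 0], [0.05, -0.03, 0], [0.05, 0.03, 0], [-0.05, 0.03, 0]],
    dtype=np.float64,
)
# matching pixel coordinates, e.g. from a YOLO corner/keypoint detector
image_pts = np.array([[310, 260], [420, 255], [425, 330], [315, 335]],
                     dtype=np.float64)
K = np.array([[615.0, 0, 320.0],   # assumed RealSense-like intrinsics
              [0, 615.0, 240.0],
              [0, 0, 1.0]])

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)     # rotation matrix from Rodrigues vector
    print("translation (m):", tvec.ravel())
    print("rotation matrix:\n", R)
```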

32 pages, 5560 KB  
Article
Design of Reconfigurable Handling Systems for Visual Inspection
by Alessio Pacini, Francesco Lupi and Michele Lanzetta
J. Manuf. Mater. Process. 2025, 9(8), 257; https://doi.org/10.3390/jmmp9080257 - 31 Jul 2025
Cited by 1 | Viewed by 708
Abstract
Industrial Vision Inspection Systems (VISs) often struggle to adapt to the increasing variability of modern manufacturing due to the inherent rigidity of their hardware architectures. Although the Reconfigurable Manufacturing System (RMS) paradigm was introduced in the early 2000s to overcome these limitations, designing such reconfigurable machines remains a complex, expert-dependent, and time-consuming task. This is primarily due to the lack of structured methodologies and the reliance on trial-and-error processes. In this context, this study proposes a novel theoretical framework to facilitate the design of fully reconfigurable handling systems for VISs, with a particular focus on fixture design. The framework is grounded in Model-Based Definition (MBD), embedding semantic information directly into the 3D CAD models of the inspected product. As an additional contribution, a general hardware architecture for the inspection of axisymmetric components is presented. This architecture integrates an anthropomorphic robotic arm, Numerically Controlled (NC) modules, and adaptable software and hardware components to enable automated, software-driven reconfiguration. The proposed framework and architecture were applied in an industrial case study conducted in collaboration with a leading automotive half-shaft manufacturer. The resulting system, implemented across seven automated cells, successfully inspected over 200 part types from 12 part families and detected more than 60 defect types, with a cycle time below 30 s per part.

20 pages, 3729 KB  
Article
Can AIGC Aid Intelligent Robot Design? A Tentative Research of Apple-Harvesting Robot
by Qichun Jin, Jiayu Zhao, Wei Bao, Ji Zhao, Yujuan Zhang and Fuwen Hu
Processes 2025, 13(8), 2422; https://doi.org/10.3390/pr13082422 - 30 Jul 2025
Viewed by 648
Abstract
Artificial intelligence (AI)-generated content (AIGC) is fundamentally transforming multiple sectors, including materials discovery, healthcare, education, scientific research, and industrial manufacturing. Given the complexities and challenges of intelligent robot design, AIGC has the potential to offer a new paradigm, assisting in conceptual and technical design, functional module design, and the training of perception abilities to accelerate prototyping. Taking the design of an apple-harvesting robot as an example, we demonstrate a basic framework of the AIGC-assisted robot design methodology, leveraging the generation capabilities of available multimodal large language models, together with human intervention to alleviate AI hallucination and hidden risks. We then study the enhancement effect on the robot perception system when generated apple images, produced by large vision-language models, are used to expand the dataset of real apple images. Further, an apple-harvesting robot prototype based on the AIGC-aided design is demonstrated; a pick-up experiment in a simulated scene indicates that it achieves a harvesting success rate of 92.2% and good terrain traversability with a maximum climbing angle of 32°. Although not an autonomous design agent, the AIGC-driven design workflow can, according to this tentative research, alleviate the significant complexities and challenges of intelligent robot design, especially for beginners and young engineers.
(This article belongs to the Special Issue Design and Control of Complex and Intelligent Systems)

22 pages, 6487 KB  
Article
An RGB-D Vision-Guided Robotic Depalletizing System for Irregular Camshafts with Transformer-Based Instance Segmentation and Flexible Magnetic Gripper
by Runxi Wu and Ping Yang
Actuators 2025, 14(8), 370; https://doi.org/10.3390/act14080370 - 24 Jul 2025
Viewed by 640
Abstract
Accurate segmentation of densely stacked and weakly textured objects remains a core challenge in robotic depalletizing for industrial applications. To address this, we propose MaskNet, an instance segmentation network tailored for RGB-D input, designed to enhance recognition performance under occlusion and low-texture conditions. Built upon a Vision Transformer backbone, MaskNet adopts a dual-branch architecture for the RGB and depth modalities and integrates multi-modal features using an attention-based fusion module. Further, spatial and channel attention mechanisms are employed to refine feature representation and improve instance-level discrimination. The segmentation outputs are used in conjunction with regional depth to optimize the grasping sequence. Experimental evaluations on camshaft depalletizing tasks demonstrate that MaskNet achieves a precision of 0.980, a recall of 0.971, and an F1-score of 0.975, outperforming a YOLO11-based baseline. In a real-world scenario, using a self-designed flexible magnetic gripper, the system keeps the maximum grasping error within 9.85 mm and achieves a 98% task success rate across multiple camshaft types. These results validate the effectiveness of MaskNet in enabling fine-grained perception for robotic manipulation in cluttered, real-world scenarios.
(This article belongs to the Section Actuators for Robotics)
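MaskNet itself is not reproduced here, but the dual-branch, attention-fused RGB-D pattern the abstract describes can be sketched in a few lines of PyTorch. Dimensions, patch size, and layer choices are invented; this shows the general pattern only.

```python
# Minimal PyTorch sketch of the dual-branch RGB-D pattern the abstract
# describes: separate encoders whose features are merged by an attention
# fusion module. Dimensions and layer choices are invented; this is the
# general pattern, not the authors' MaskNet.
import torch
import torch.nn as nn


class DualBranchFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # stand-in patch encoders for the RGB and depth modalities
        self.rgb_enc = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.depth_enc = nn.Conv2d(1, dim, kernel_size=16, stride=16)
        # attention-based fusion: RGB tokens attend to depth tokens
        self.fuse = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb, depth):
        r = self.rgb_enc(rgb).flatten(2).transpose(1, 2)    # (B, N, dim) tokens
        d = self.depth_enc(depth).flatten(2).transpose(1, 2)
        fused, _ = self.fuse(query=r, key=d, value=d)       # cross-modal attention
        return self.norm(r + fused)   # residual fused tokens for a mask head


model = DualBranchFusion()
rgb = torch.randn(2, 3, 224, 224)     # dummy RGB batch
depth = torch.randn(2, 1, 224, 224)   # dummy aligned depth batch
print(model(rgb, depth).shape)        # torch.Size([2, 196, 256])
```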

22 pages, 11043 KB  
Article
Digital Twin-Enabled Adaptive Robotics: Leveraging Large Language Models in Isaac Sim for Unstructured Environments
by Sanjay Nambiar, Rahul Chiramel Paul, Oscar Chigozie Ikechukwu, Marie Jonsson and Mehdi Tarkian
Machines 2025, 13(7), 620; https://doi.org/10.3390/machines13070620 - 17 Jul 2025
Viewed by 1734
Abstract
As industrial automation evolves towards human-centric, adaptable solutions, collaborative robots must overcome challenges in unstructured, dynamic environments. This paper extends our previous work on developing a digital shadow for industrial robots by introducing a comprehensive framework that bridges the gap between physical systems and their virtual counterparts. The proposed framework advances toward a fully functional digital twin by integrating real-time perception and intuitive human–robot interaction capabilities. The framework is applied to a hospital test lab scenario, where a YuMi robot automates the sorting of microscope slides. The system incorporates a RealSense D435i depth camera for environment perception, Isaac Sim for virtual environment synchronization, and a locally hosted large language model (Mistral 7B) for interpreting user voice commands. These components work together to achieve bi-directional synchronization between the physical and digital environments. The framework was evaluated through 20 test runs under varying conditions. A validation study measured the performance of the perception module, simulation, and language interface, with a 60% overall success rate. Additionally, synchronization accuracy between the simulated and physical robot joint movements reached 98.11%, demonstrating strong alignment between the digital and physical systems. By combining local LLM processing, real-time vision, and robot simulation, the approach enables untrained users to interact with collaborative robots in dynamic settings. The results highlight its potential for improving flexibility and usability in industrial automation.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
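The command-interpretation step can be sketched as follows: a transcribed voice command is sent to a locally hosted LLM, which must reply with structured JSON a robot node can act on. The endpoint URL, model name, and JSON schema are assumptions (any OpenAI-compatible local server would serve), not the paper's actual interface.

```python
# Hedged sketch of the command-interpretation step: a transcribed voice
# command goes to a locally hosted LLM, which must answer with structured
# JSON a robot node can act on. The endpoint URL and JSON schema are
# assumptions, not the paper's actual interface.
import json
import requests

PROMPT = (
    "Convert the user's command into JSON with keys "
    '"action" (pick|place|sort) and "object". Reply with JSON only.\n'
    "Command: move the stained slide to the reject tray"
)

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",   # assumed local server
    json={
        "model": "mistral-7b",                      # assumed model name
        "messages": [{"role": "user", "content": PROMPT}],
        "temperature": 0.0,
    },
    timeout=30,
)
reply = resp.json()["choices"][0]["message"]["content"]
command = json.loads(reply)        # e.g. {"action": "sort", "object": "slide"}
print("parsed robot command:", command)
```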

22 pages, 3768 KB  
Article
A Collaborative Navigation Model Based on Multi-Sensor Fusion of Beidou and Binocular Vision for Complex Environments
by Yongxiang Yang and Zhilong Yu
Appl. Sci. 2025, 15(14), 7912; https://doi.org/10.3390/app15147912 - 16 Jul 2025
Viewed by 535
Abstract
This paper addresses the issues of Beidou navigation signal interference and blockage in complex substation environments by proposing an intelligent collaborative navigation model based on Beidou high-precision navigation and binocular vision recognition. The model is designed with Beidou navigation providing global positioning references and binocular vision enabling local environmental perception through a collaborative fusion strategy. The Unscented Kalman Filter (UKF) is used to integrate data from multiple sensors to ensure high-precision positioning and dynamic obstacle avoidance capabilities for robots in complex environments. Simulation results show that the Beidou–Binocular Cooperative Navigation (BBCN) model achieves a global positioning error of less than 5 cm in non-interference scenarios, and an error of only 6.2 cm under high-intensity electromagnetic interference, significantly outperforming the single Beidou model’s error of 40.2 cm. The path planning efficiency is close to optimal (with an efficiency factor within 1.05), and the obstacle avoidance success rate reaches 95%, while the system delay remains within 80 ms, meeting the real-time requirements of industrial scenarios. The innovative fusion approach enables unprecedented reliability for autonomous robot inspection in high-voltage environments, offering significant practical value in reducing human risk exposure, lowering maintenance costs, and improving inspection efficiency in power industry applications. This technology enables continuous monitoring of critical power infrastructure that was previously difficult to automate due to navigation challenges in electromagnetically complex environments.
(This article belongs to the Special Issue Advanced Robotics, Mechatronics, and Automation)
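A minimal UKF fusion sketch in the spirit of the abstract is given below, using filterpy: a constant-velocity state is corrected by Beidou-like absolute position fixes. The noise levels and the simulated measurement stream are invented placeholders.

```python
# Minimal UKF fusion sketch (assumes `pip install filterpy`): a constant-
# velocity state [x, y, vx, vy] is corrected by Beidou-like absolute
# position fixes. Noise levels and measurements are invented placeholders.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 0.1  # s

def fx(x, dt):
    """Constant-velocity motion model."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    return F @ x

def hx(x):
    """Measurement model: Beidou reports position only."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=DT, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.0, 1.0, 0.0])       # start at origin, 1 m/s east
ukf.P *= 0.5
ukf.R = np.diag([0.05**2, 0.05**2])          # ~5 cm position-fix noise
ukf.Q = np.eye(4) * 1e-3

for k in range(1, 11):                       # fake fixes along the x-axis
    z = np.array([k * DT * 1.0, 0.0]) + np.random.normal(0, 0.05, 2)
    ukf.predict()
    ukf.update(z)
print("fused position estimate:", ukf.x[:2])
```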

30 pages, 2023 KB  
Review
Fusion of Computer Vision and AI in Collaborative Robotics: A Review and Future Prospects
by Yuval Cohen, Amir Biton and Shraga Shoval
Appl. Sci. 2025, 15(14), 7905; https://doi.org/10.3390/app15147905 - 15 Jul 2025
Cited by 1 | Viewed by 2157
Abstract
The integration of advanced computer vision and artificial intelligence (AI) techniques into collaborative robotic systems holds the potential to revolutionize human–robot interaction, productivity, and safety. Despite substantial research activity, a systematic synthesis of how vision and AI jointly enable context-aware, adaptive cobot capabilities across perception, planning, and decision-making has been lacking, especially in recent years. Addressing this gap, our review unifies the latest advances in visual recognition, deep learning, and semantic mapping within a structured taxonomy tailored to collaborative robotics. We examine foundational technologies such as object detection, human pose estimation, and environmental modeling, as well as emerging trends including multimodal sensor fusion, explainable AI, and ethically guided autonomy. Unlike prior surveys that focus narrowly on either vision or AI, this review uniquely analyzes their integrated use for real-world human–robot collaboration. Highlighting industrial and service applications, we distill best practices, identify critical challenges, and present key performance metrics to guide future research. We conclude by proposing strategic directions, from scalable training methods to interoperability standards, to foster safe, robust, and proactive human–robot partnerships in the years ahead.