Search Results (116)

Search Parameters:
Keywords = laser vision sensor

24 pages, 7207 KB  
Article
YOLO–LaserGalvo: A Vision–Laser-Ranging System for High-Precision Welding Torch Localization
by Jiajun Li, Tianlun Wang and Wei Wei
Sensors 2025, 25(20), 6279; https://doi.org/10.3390/s25206279 - 10 Oct 2025
Viewed by 431
Abstract
A novel closed-loop visual positioning system, termed YOLO–LaserGalvo (YLGS), is proposed for precise localization of welding torch tips in industrial welding automation. The system integrates a monocular camera, an infrared laser distance sensor with a galvanometer scanner, and a customized deep learning detector based on an improved YOLOv11 model. In operation, the vision subsystem first detects the approximate image location of the torch tip using the YOLOv11-based model. Guided by this detection, the galvanometer steers the IR laser beam to that point and measures the distance to the torch tip. The distance feedback is then fused with the vision coordinates to compute the precise 3D position of the torch tip in real time. Experimental evaluation under complex illumination shows that the system outperforms traditional color-marker and ArUco-based methods in accuracy, robustness, and processing speed. This marker-free method provides high-precision torch positioning without requiring structured lighting or artificial markers. Its pedagogical implications in engineering education are also discussed. Potential future work includes extending the method to full 6-DOF pose estimation and integrating additional sensors for enhanced performance. Full article
(This article belongs to the Section Navigation and Positioning)
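As a rough illustration of the vision–laser fusion step, the sketch below back-projects the detected pixel through a pinhole camera model and scales the ray by the laser-measured distance; the intrinsics and measurement values are placeholders, not values from the paper.

```python
import numpy as np

def torch_tip_3d(u, v, K, laser_range):
    """Fuse a 2D detection (u, v) with a laser range measurement.

    Assumes the galvanometer steers the laser along the camera ray
    through the detected pixel, so the measured range is the distance
    along that ray (pinhole model, camera frame).
    """
    K_inv = np.linalg.inv(K)
    ray = K_inv @ np.array([u, v, 1.0])   # back-projected ray direction
    ray /= np.linalg.norm(ray)            # unit vector along the ray
    return ray * laser_range              # 3D point in the camera frame

# Illustrative intrinsics and measurement (not values from the paper)
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
p = torch_tip_3d(u=512.0, v=300.0, K=K, laser_range=0.85)  # metres
print(p)
```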

23 pages, 16731 KB  
Article
WeldLight: A Lightweight Weld Classification and Feature Point Extraction Model for Weld Seam Tracking
by Ang Gao, Anning Li, Fukang Su, Xinqi Yang, Wenping Liu, Fuxin Du and Chao Chen
Sensors 2025, 25(18), 5761; https://doi.org/10.3390/s25185761 - 16 Sep 2025
Viewed by 672
Abstract
To address the issues of intense image noise interference and computational intensity faced by traditional vision-based weld tracking systems, we propose WeldLight, a lightweight and noise-resistant convolutional neural network for precise classification and positioning of welding seam feature points using single-line structured light vision. Our approach includes (1) an online data augmentation method to enhance training samples and improve noise adaptability; (2) a one-stage lightweight network for simultaneous positioning and classification; and (3) an attention module to filter features corrupted by intense noise, thereby improving stability. Experiments show that WeldLight achieves an F1-score of 0.9668 for seam classification on an adjusted test set, with mean absolute positioning errors of 1.639 pixels and 1.736 pixels on low-noise and high-noise test sets, respectively. With an inference time of 29.32 ms on a CPU platform, it meets real-time seam tracking requirements. Full article
(This article belongs to the Section Industrial Sensors)
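The online augmentation idea lends itself to a short sketch: synthetic arc-light glow, spatter-like blobs, and sensor noise are injected into clean stripe images. All parameters below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_stripe_image(img):
    """Add arc-light- and spatter-like noise to a grayscale stripe image.

    Illustrative online augmentation: a global brightness offset mimics
    arc light, random bright blobs mimic spatter, and Gaussian noise
    mimics the sensor. All magnitudes are placeholders.
    """
    out = img.astype(np.float32)
    out += rng.uniform(0, 40)                      # arc-light glow
    for _ in range(rng.integers(5, 20)):           # spatter blobs
        y, x = rng.integers(0, img.shape[0]), rng.integers(0, img.shape[1])
        r = int(rng.integers(1, 4))
        out[max(0, y - r):y + r, max(0, x - r):x + r] = 255
    out += rng.normal(0, 5, img.shape)             # sensor noise
    return np.clip(out, 0, 255).astype(np.uint8)

noisy = augment_stripe_image(np.zeros((480, 640), dtype=np.uint8))
```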

21 pages, 4909 KB  
Article
Rapid 3D Camera Calibration for Large-Scale Structural Monitoring
by Fabio Bottalico, Nicholas A. Valente, Christopher Niezrecki, Kshitij Jerath, Yan Luo and Alessandro Sabato
Remote Sens. 2025, 17(15), 2720; https://doi.org/10.3390/rs17152720 - 6 Aug 2025
Cited by 1 | Viewed by 1003
Abstract
Computer vision techniques such as three-dimensional digital image correlation (3D-DIC) and three-dimensional point tracking (3D-PT) have demonstrated broad applicability for monitoring the conditions of large-scale engineering systems by reconstructing and tracking dynamic point clouds corresponding to the surface of a structure. Accurate stereophotogrammetry measurements require the stereo cameras to be calibrated to determine their intrinsic and extrinsic parameters by capturing multiple images of a calibration object. This image-based approach becomes cumbersome and time-consuming as the size of the tested object increases. To streamline the calibration and make it scale-insensitive, a multi-sensor system embedding inertial measurement units and a laser sensor is developed to compute the extrinsic parameters of the stereo cameras. In this research, the accuracy of the proposed sensor-based calibration method in performing stereophotogrammetry is validated experimentally and compared with traditional approaches. Tests conducted at various scales reveal that the proposed sensor-based calibration enables reconstructing both static and dynamic point clouds, measuring displacements with an accuracy higher than 95% relative to traditional image-based calibration, while being up to an order of magnitude faster and easier to deploy. The novel approach has broad applications for making static, dynamic, and deformation measurements to transform how large-scale structural health monitoring can be performed. Full article
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))
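A simplified sketch of the sensor-based calibration idea, under the assumption that each camera's IMU reports its attitude in a shared world frame and the laser supplies the baseline length; the paper's actual formulation may differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def stereo_extrinsics(rpy_cam1, rpy_cam2, baseline, baseline_dir_world):
    """Extrinsics of camera 2 relative to camera 1 from IMU attitudes
    and a laser-measured baseline.

    rpy_*: roll/pitch/yaw (deg) of each camera from its IMU, in a shared
    world frame; baseline: laser-measured distance between the cameras;
    baseline_dir_world: unit vector from cam1 to cam2 in the world frame.
    A simplified sketch of the idea, not the paper's exact method.
    """
    R1 = R.from_euler("xyz", rpy_cam1, degrees=True).as_matrix()
    R2 = R.from_euler("xyz", rpy_cam2, degrees=True).as_matrix()
    R_rel = R1.T @ R2                    # cam2 orientation in cam1 frame
    t_rel = R1.T @ (baseline * np.asarray(baseline_dir_world, dtype=float))
    return R_rel, t_rel

R_rel, t_rel = stereo_extrinsics([0, 0, 5], [0, 0, -5], 2.0, [1, 0, 0])
```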

30 pages, 4582 KB  
Review
Review on Rail Damage Detection Technologies for High-Speed Trains
by Yu Wang, Bingrong Miao, Ying Zhang, Zhong Huang and Songyuan Xu
Appl. Sci. 2025, 15(14), 7725; https://doi.org/10.3390/app15147725 - 10 Jul 2025
Viewed by 2417
Abstract
From the perspective of intelligent operation and maintenance of high-speed train tracks, this paper examines the recent research status of rail damage detection technology, summarizes damage detection methods for high-speed rail, and compares and analyzes different detection technologies and their application results. The analysis shows that detection methods for high-speed train rail damage focus mainly on non-destructive testing technologies and methods, as well as testing platforms and equipment. Detection platforms and equipment include new eddy current instruments, integrated track recording vehicles, laser rangefinders, thermal sensors, laser vision systems, LiDAR, new ultrasonic detectors, rail inspection vehicles, rail inspection robots, on-board laser rail inspection systems, track recorders, self-moving trolleys, etc. The main research and application methods include electromagnetic detection, optical detection, ultrasonic guided wave detection, acoustic emission detection, radiographic detection, eddy current detection, and vibration detection. In recent years, the most widely studied and applied methods have been rail detection based on LiDAR, ultrasonic, eddy current, and optical detection. The most important optical method is machine vision detection. Ultrasonic detection can detect internal damage of the rail; LiDAR can detect dirt around the rail and on its surface, but both the equipment and its application are very costly. Future rail damage detection for high-speed railways must first comply with the relevant damage standards. For rail geometric parameters, the Chinese standard (TB 10754-2018) requires a gauge deviation of ±1 mm, a track direction deviation of 0.3 mm/10 m, and a height deviation of 0.5 mm/10 m, and some indicators are stricter than the European standard EN 13848. In damage detection, domestic flaw detection vehicles have achieved millimeter-level accuracy for cracks in rail heads, rail webs, and other parts, with a damage detection rate of over 85%; a drone-based detection system identifies track components with 93.6% accuracy and potential safety hazards with an 81.8% identification rate. A gap with international standards remains: standards such as EN 13848 impose stricter requirements on inspection cycles and data storage, especially regarding quantified damage detection requirements, real-time damage data, and safety, which will be key research and development directions in the future. Full article
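As a small illustration of checking measured geometry against the TB 10754-2018 tolerances quoted above; field names and measured values here are hypothetical.

```python
# Illustrative check of measured track geometry against the TB 10754-2018
# tolerances quoted in the review; measurement values are made up.
TOLERANCES = {
    "gauge_deviation_mm": 1.0,             # ±1 mm
    "alignment_mm_per_10m": 0.3,           # track direction deviation
    "longitudinal_level_mm_per_10m": 0.5,  # height deviation
}

def check_geometry(measured: dict) -> dict:
    """Return pass/fail per indicator for a set of measured deviations."""
    return {k: abs(measured[k]) <= tol for k, tol in TOLERANCES.items()}

print(check_geometry({"gauge_deviation_mm": 0.8,
                      "alignment_mm_per_10m": 0.35,
                      "longitudinal_level_mm_per_10m": 0.4}))
```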

16 pages, 8568 KB  
Article
A New Slice Template Matching Method for Full-Field Temporal–Spatial Deflection Measurement of Slender Structures
by Jiayan Zheng, Yongzhi Sang, Haijing Liu, Ji He and Zhixiang Zhou
Appl. Sci. 2025, 15(11), 6188; https://doi.org/10.3390/app15116188 - 30 May 2025
Viewed by 509
Abstract
A sufficient number of sensors installed on all structural components is a prerequisite for obtaining the full-field temporal–spatial displacement and is essential for large-scale structural health monitoring. In this paper, a novel lightweight vision-based temporal–spatial deflection measurement method is proposed to measure the full-field temporal–spatial displacement of slender structures. First, the geometric and mechanical properties of slender members are introduced as a priori information for vision-based measurement. Then, a slice template matching method (STMM) is proposed by deploying a one-dimensional template matching model in every pixel column of each image frame, based on the traditional digital image correlation (DIC) method. An indoor experiment was carried out to verify the proposed method. The results show that the measurement precision of STMM agrees well with theory and with a laser rangefinder reference: for the transient beam deflection curve, the maximum measurement error was 0.03 pixels and the root-mean-square error (RMSE) was 0.052 mm; for the dynamic deflection–time history curves at the mid-span point, the correlation coefficient and coefficient of determination were 0.9994 and 0.9986, respectively. Finally, further investigation reveals that brightness inconstancy is the source of STMM measurement error. Full article
(This article belongs to the Special Issue Advances in Solid Mechanics and Applications to Slender Structures)
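A minimal sketch of the column-wise matching idea: a short 1D template is correlated against every pixel column and the peak is refined to sub-pixel precision with a parabolic fit. The normalization and template handling are assumptions, not the paper's exact model.

```python
import numpy as np

def column_match(frame, template):
    """Locate a 1D template in every pixel column of an image frame.

    Normalized cross-correlation of a short vertical template against
    each column, with a parabolic fit around the peak for sub-pixel
    precision. Returns one sub-pixel row estimate per column.
    """
    h, w = frame.shape
    m = len(template)
    t = (template - template.mean()) / (template.std() + 1e-9)
    rows = np.empty(w)
    for x in range(w):
        col = frame[:, x].astype(np.float64)
        scores = np.array([
            np.dot((col[y:y + m] - col[y:y + m].mean()) /
                   (col[y:y + m].std() + 1e-9), t)
            for y in range(h - m)
        ])
        y0 = int(np.argmax(scores))
        if 0 < y0 < len(scores) - 1:        # sub-pixel parabola fit
            a, b, c = scores[y0 - 1], scores[y0], scores[y0 + 1]
            y0 += 0.5 * (a - c) / (a - 2 * b + c + 1e-12)
        rows[x] = y0 + m / 2.0
    return rows  # deflection profile across the frame
```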

26 pages, 10564 KB  
Article
DynaFusion-SLAM: Multi-Sensor Fusion and Dynamic Optimization of Autonomous Navigation Algorithms for Pasture-Pushing Robot
by Zhiwei Liu, Jiandong Fang and Yudong Zhao
Sensors 2025, 25(11), 3395; https://doi.org/10.3390/s25113395 - 28 May 2025
Cited by 1 | Viewed by 1192
Abstract
To address the scarcity of studies on multi-sensor-fusion autonomous navigation in complex pasture scenarios, the low degree of sensor fusion, and the insufficient path-tracking accuracy achieved in complex outdoor environments, a multimodal autonomous navigation system is proposed based on a loosely coupled Cartographer–RTAB-Map (real-time appearance-based mapping) architecture. Through fusion of laser, vision, and inertial sensor data, the system achieves high-precision mapping and robust path planning in complex scenes. First, the mainstream laser SLAM algorithms (Hector, Gmapping, and Cartographer) are compared in simulation experiments; Cartographer shows a significant memory-efficiency advantage in large-scale scenarios and is therefore chosen as the front-end odometry source. Second, a two-way position optimization mechanism is designed: (1) during mapping, Cartographer processes the laser data together with IMU and odometer data to generate odometry estimates, which provide positioning compensation for RTAB-Map; (2) RTAB-Map fuses the depth camera point cloud and laser data, corrects the global position through visual closed-loop detection, and then uses 2D localization to construct a bimodal environment representation containing a 2D raster map and a 3D point cloud, achieving a complete description of the simulated ranch environment and material morphology and providing a framework for the navigation algorithm of the pushing robot based on the two types of fused data. During navigation, RTAB-Map's global localization is combined with AMCL's local localization, and a smoother, more robust pose estimate is generated by fusing IMU and odometer data with an extended Kalman filter (EKF). Global path planning uses Dijkstra's algorithm, combined with the Timed Elastic Band (TEB) algorithm for local path planning. Finally, experimental validation is performed in a laboratory-simulated pasture environment. The results indicate that when RTAB-Map fuses the multi-source odometry, its performance improves significantly in the laboratory-simulated ranch scenario: the maximum absolute error of the mapped dimensions narrows from 24.908 cm to 4.456 cm, the maximum absolute relative error falls from 6.227% to 2.025%, and the absolute error at each location is significantly reduced. At the same time, introducing multi-source odometry fusion effectively avoids large-scale offset or drift during map construction. On this basis, the robot constructs a fused map containing the simulated pasture environment and material patterns. In the navigation accuracy tests, the proposed method reduces the root-mean-square error (RMSE) by 1.7% and the standard deviation by 2.7% compared with RTAB-Map, and by 26.7% and 22.8%, respectively, compared with the AMCL algorithm. The robot successfully traverses six preset points, with the measured X- and Y-direction and overall position errors at all six points meeting the requirements of the pasture-pushing task, and returns to the starting point after completing multi-point navigation, achieving autonomous navigation. Full article
(This article belongs to the Section Navigation and Positioning)
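The EKF fusion step can be sketched generically: wheel odometry drives the prediction and an IMU yaw reading drives the update. The noise covariances below are placeholders rather than the paper's tuning.

```python
import numpy as np

class PoseEKF:
    """Minimal EKF fusing wheel odometry (prediction) with an IMU yaw
    measurement (update); a generic sketch of the fusion idea, not the
    configuration used in the paper."""

    def __init__(self):
        self.x = np.zeros(3)                     # [x, y, yaw]
        self.P = np.eye(3) * 0.1
        self.Q = np.diag([0.01, 0.01, 0.005])    # process noise (assumed)
        self.R = np.array([[0.002]])             # IMU yaw noise (assumed)

    def predict(self, v, w, dt):
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
        F = np.array([[1, 0, -v * np.sin(th) * dt],
                      [0, 1,  v * np.cos(th) * dt],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_yaw(self, yaw_meas):
        H = np.array([[0.0, 0.0, 1.0]])
        y = np.array([yaw_meas - self.x[2]])     # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x += (K @ y).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PoseEKF()
ekf.predict(v=0.5, w=0.1, dt=0.05)
ekf.update_yaw(0.004)
```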

17 pages, 1160 KB  
Article
Real-Time Seam Extraction Using Laser Vision Sensing: Hybrid Approach with Dynamic ROI and Optimized RANSAC
by Guojun Chen, Yanduo Zhang, Yuming Ai, Baocheng Yu and Wenxia Xu
Sensors 2025, 25(11), 3268; https://doi.org/10.3390/s25113268 - 22 May 2025
Cited by 1 | Viewed by 1176
Abstract
Laser vision sensors for weld seam extraction face critical challenges due to arc light and spatter interference in welding environments. This paper presents a real-time weld seam extraction method. The proposed framework enhances robustness through the sequential processing of historical frame data. First, an initial noise-free laser stripe image of the weld seam is acquired prior to arc ignition, from which the laser stripe region and slope characteristics are extracted. Subsequently, during welding, a dynamic region of interest (ROI) is generated for the current frame based on the preceding frame, effectively suppressing spatter and arc interference. Within the ROI, adaptive Otsu thresholding segmentation and morphological filtering are applied to isolate the laser stripe. An optimized RANSAC algorithm, incorporating slope constraints derived from historical frames, is then employed to achieve robust laser stripe fitting. The geometric center coordinates of the weld seam are derived through rigorous analysis of the optimized laser stripe profile. Experimental results on various weld seam types validated the accuracy and real-time performance of the proposed method. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)
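The slope-constrained RANSAC idea reduces to rejecting sampled candidate lines whose slope strays from the historical stripe slope before counting inliers; a minimal sketch with illustrative thresholds follows.

```python
import numpy as np

def ransac_line_slope_constrained(pts, slope_ref, slope_tol=0.15,
                                  dist_tol=1.5, iters=200, rng=None):
    """Fit a line to laser-stripe centre points with RANSAC, rejecting
    candidates whose slope deviates from the slope observed in
    historical (pre-arc) frames. Thresholds are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-9:
            continue
        k = (y2 - y1) / (x2 - x1)
        if abs(k - slope_ref) > slope_tol:       # slope constraint
            continue
        b = y1 - k * x1
        d = np.abs(k * pts[:, 0] - pts[:, 1] + b) / np.hypot(k, 1.0)
        inliers = d < dist_tol
        if inliers.sum() > best:
            best, best_inliers = inliers.sum(), inliers
    return np.polyfit(*pts[best_inliers].T, 1)   # refit on inliers

pts = np.column_stack([np.arange(50.0), 0.5 * np.arange(50.0) + 3])
k, b = ransac_line_slope_constrained(pts, slope_ref=0.5)
```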

49 pages, 1114 KB  
Review
A Survey on the Main Techniques Adopted in Indoor and Outdoor Localization
by Massimo Stefanoni, Imre Kovács, Peter Sarcevic and Ákos Odry
Electronics 2025, 14(10), 2069; https://doi.org/10.3390/electronics14102069 - 20 May 2025
Cited by 2 | Viewed by 1446
Abstract
In modern engineering applications, localization and orientation play an increasingly crucial role in ensuring the successful execution of assigned tasks. Industrial robots, smart home systems, healthcare environments, nuclear facilities, agriculture, and autonomous vehicles are just a few examples of fields where localization technologies are applied. Over the years, these technologies have evolved significantly, with numerous methods being developed, proposed, and refined. This paper aims to provide a comprehensive review of the primary localization and orientation technologies available in the literature, detailing the fundamental principles on which they are based and the key algorithms used to implement them. To achieve accurate and reliable localization, fusion-based approaches are often necessary, integrating data from multiple sensors and systems or estimating hidden states. For this purpose, algorithms such as Kalman Filters, Particle Filters, or Neural Networks are usually adopted. The first part of this article presents an extensive review of localization technologies, including radio frequency, RFID, laser-based systems, vision-based techniques, light-based positioning, IMU-based methods, odometry, and ultrasound-based solutions. The second part focuses on the most widely used algorithms for localization. Finally, summary tables provide an overview of the best and most consistent accuracies reported in the literature for the investigated technologies and systems. Full article

22 pages, 9648 KB  
Article
Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System
by Yuanrong He, Weijie Yang, Qun Su, Qiuhua He, Hongxin Li, Shuhang Lin and Shaochang Zhu
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983 - 30 Apr 2025
Viewed by 1088
Abstract
With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in MySQL databases. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning. Full article
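The crack-parameter extraction step from inspection-vehicle imagery might look roughly like the OpenCV sketch below, with a hypothetical pixel scale and thresholds; the platform's actual processing is not detailed in the abstract.

```python
import cv2
import numpy as np

def crack_parameters(gray, mm_per_px=0.2):
    """Extract coarse crack parameters from a grayscale inspection image
    using edge detection; scale and thresholds are placeholders."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    params = []
    for c in contours:
        if cv2.contourArea(c) < 20:          # ignore speckle
            continue
        x, y, w, h = cv2.boundingRect(c)
        length = max(w, h) * mm_per_px
        width = (cv2.contourArea(c) / max(w, h)) * mm_per_px  # rough mean
        params.append({"length_mm": length, "mean_width_mm": width})
    return params
```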

23 pages, 10860 KB  
Article
Tea Harvest Robot Navigation Path Generation Algorithm Based on Semantic Segmentation Using a Visual Sensor
by Houqi Tao, Ruirui Zhang, Linhuan Zhang, Danzhu Zhang, Tongchuan Yi and Mingqi Wu
Electronics 2025, 14(5), 988; https://doi.org/10.3390/electronics14050988 - 28 Feb 2025
Cited by 2 | Viewed by 1165
Abstract
During autonomous tea harvesting, tea-harvesting robots must navigate along the tea canopy while obtaining real-time and precise information about the canopy. Because most tea gardens are located in hilly and mountainous areas, GNSS signals are often disturbed and laser sensors provide insufficient information, failing to meet the navigation requirements of tea-harvesting robots. This study develops a vision-based semantic segmentation method for the identification of tea canopies and the generation of navigation paths. The proposed CDSC-Deeplabv3+ model integrates a Convnext backbone network with the DenseASP_SP module for feature fusion and a CFF module for enhanced semantic segmentation. The experimental results demonstrate that the proposed CDSC-Deeplabv3+ model achieves mAP, mIoU, and F1-score values of 96.99%, 94.71%, and 98.66%, respectively, at 5.0 FPS; both the accuracy and speed indicators meet the practical requirements outlined in this study. Among the three compared methods for fitting the navigation central line, RANSAC shows superior performance, with minimum average angle deviations of 2.02°, 0.36°, and 0.46° at camera tilt angles of 50°, 45°, and 40°, respectively, validating the effectiveness of the approach in extracting stable tea canopy information and generating navigation paths. Full article
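One simple way to turn a canopy segmentation mask into a navigation line is to fit a line through per-row mask centroids, sketched below; the paper compares RANSAC and other line-fitting methods, while plain least squares is used here for brevity.

```python
import numpy as np

def navigation_line(mask):
    """Fit a navigation centre line from a binary canopy mask.

    For each image row, take the centroid column of the canopy pixels,
    then least-squares fit a line through the centroids.
    """
    rows, cols = [], []
    for r in range(mask.shape[0]):
        xs = np.flatnonzero(mask[r])
        if xs.size:
            rows.append(r)
            cols.append(xs.mean())
    k, b = np.polyfit(rows, cols, 1)    # col = k * row + b
    return k, b

mask = np.zeros((120, 160), dtype=bool)
mask[:, 70:90] = True                   # toy vertical canopy band
print(navigation_line(mask))            # slope ~0, intercept ~79.5
```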

18 pages, 11193 KB  
Article
A Deep Semantic Segmentation Approach to Accurately Detect Seam Gap in Fixtured Workpiece Laser Welding
by Fotios Panagiotis Basamakis, Dimosthenis Dimosthenopoulos, Angelos Christos Bavelos, George Michalos and Sotiris Makris
J. Manuf. Mater. Process. 2025, 9(3), 69; https://doi.org/10.3390/jmmp9030069 - 20 Feb 2025
Cited by 1 | Viewed by 1076
Abstract
The recent technological advancements in today’s manufacturing industry have extended the quality control operations for welding processes. However, the realm of pre-welding inspection, which significantly influences the quality of the final products, remains relatively uncharted. To this end, this study proposes an innovative vision system designed to extract the seam gap width and centre between two components before welding and make informed decisions regarding the initiation of the welding process. The system incorporates a deep learning semantic segmentation network for identifying and isolating the desired gap area within an acquired image from the vision sensor. Then, additional processing is performed, with conventional computer vision techniques and fundamental Euclidean geometry operations for acquiring the desired width and the centre of that area with a precision of 0.1 mm. Additionally, a control graphical interface has been implemented that allows the operator to initiate and monitor the entire inspection procedure. The overall framework is applied and tested on a manufacturing case study involving the laser welding operations of sheet metal parts, and although it is designed to handle gaps of different shapes and sizes, it is mainly focused on obtaining the characteristics of butt weld gaps. Full article
(This article belongs to the Special Issue Robotics in Manufacturing Processes)
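The width-and-centre computation can be illustrated per image row on the segmented gap mask; the pixel-to-millimetre scale below is hypothetical.

```python
import numpy as np

def gap_width_and_centre(gap_mask, mm_per_px=0.05):
    """Compute seam-gap width and centre line from a segmentation mask.

    Per image row, the gap width is the horizontal extent of gap pixels
    and the centre is their midpoint; a hypothetical pixel scale converts
    to millimetres. A sketch of the geometry step, not the paper's exact
    procedure.
    """
    widths, centres = [], []
    for row in gap_mask:
        xs = np.flatnonzero(row)
        if xs.size:
            widths.append((xs[-1] - xs[0] + 1) * mm_per_px)
            centres.append(0.5 * (xs[0] + xs[-1]))
    return float(np.mean(widths)), np.array(centres)

mask = np.zeros((100, 200), dtype=bool)
mask[:, 98:102] = True                 # 4 px wide toy gap
w, c = gap_width_and_centre(mask)
print(round(w, 2))                     # 0.2 mm with the assumed scale
```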

15 pages, 11058 KB  
Article
Plate Wall Offset Measurement for U-Shaped Groove Workpieces Based on Multi-Line-Structured Light Vision Sensors
by Yaoqiang Ren, Lu Wang, Qinghua Wu, Zhoutao Li and Zheming Zhang
Sensors 2025, 25(4), 1018; https://doi.org/10.3390/s25041018 - 8 Feb 2025
Viewed by 1125
Abstract
To address the challenge of measuring the plate wall offset at the U-shaped groove positions after assembling large cylindrical shell arc segments, this paper proposes a measurement method based on multi-line-structured light vision sensors. The sensor is designed and calibrated to collect U-shaped groove workpiece images containing multiple laser stripes. The central points of the laser stripes are extracted and matched to their corresponding light plane equations to obtain local point cloud data of the measured positions. Subsequently, point cloud data from the plate wall regions on both sides of the groove are separated, and the plate wall offset is calculated using the local distance computation method between planes in space. The experimental results demonstrate that, when measuring a standard sphere with a diameter of 30 mm from multiple angles, the measurement uncertainty is ±0.015 mm within a 95% confidence interval. Within a measurement range of 155 mm × 220 mm × 80 mm, using articulated arm measurements as reference values, the plate wall offset measurement uncertainty of the multi-line-structured light vision sensor is ±0.013 mm within a 95% confidence interval, showing close agreement with reference values. Full article
(This article belongs to the Section Optical Sensors)
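A simplified sketch of the offset computation: fit a least-squares plane to the point cloud on each side of the groove and measure the separation along one wall's normal; the paper's local distance computation may differ in detail.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud via SVD.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]            # smallest singular vector = normal

def wall_offset(cloud_left, cloud_right):
    """Plate-wall offset as the distance between the two fitted wall
    planes, measured along the left wall's normal (simplified)."""
    c_l, n_l = fit_plane(cloud_left)
    c_r, _ = fit_plane(cloud_right)
    return abs(np.dot(c_r - c_l, n_l))

rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, (200, 2))
left = np.column_stack([xy, np.zeros(200)])        # plane z = 0
right = np.column_stack([xy, np.full(200, 0.4)])   # plane z = 0.4
print(wall_offset(left, right))                    # ~0.4
```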

13 pages, 2207 KB  
Article
Inline-Acquired Product Point Clouds for Non-Destructive Testing: A Case Study of a Steel Part Manufacturer
by Michalis Ntoulmperis, Silvia Discepolo, Paolo Castellini, Paolo Catti, Nikolaos Nikolakis, Wilhelm van de Kamp and Kosmas Alexopoulos
Machines 2025, 13(2), 88; https://doi.org/10.3390/machines13020088 - 23 Jan 2025
Cited by 2 | Viewed by 1269
Abstract
Modern vision-based inspection systems are inherently limited by their two-dimensional nature, particularly when inspecting complex product geometries. These systems are often unable to capture critical depth information, leading to challenges in accurately measuring features such as holes, edges, and surfaces with irregular curvature. To address these shortcomings, this study introduces an approach that leverages computer-aided design-oriented three-dimensional point clouds, captured via a laser line triangulation sensor mounted onto a motorized linear guide. This setup facilitates precise surface scanning, extracting complex geometrical features, which are subsequently processed through an AI-based analytical component. Dimensional properties, such as radii and inter-feature distances, are computed using a combination of K-nearest neighbors and least-squares circle fitting algorithms. This approach is validated in the context of steel part manufacturing, where traditional 2D vision-based systems often struggle due to the material’s reflectivity and complex geometries. This system achieves an average accuracy of 95.78% across three different product types, demonstrating robustness and adaptability to varying geometrical configurations. An uncertainty analysis confirms that the measurement deviations remain within acceptable limits, supporting the system’s potential for improving quality control in industrial environments. Thus, the proposed approach may offer a reliable, non-destructive inline testing solution, with the potential to enhance manufacturing efficiency. Full article
(This article belongs to the Special Issue Application of Sensing Measurement in Machining)
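The least-squares circle fitting stage can be sketched with the classic Kåsa formulation, applied to boundary points that a K-nearest-neighbours step would have grouped beforehand; a generic sketch of the fitting stage, not the system's exact pipeline.

```python
import numpy as np

def fit_circle_kasa(pts):
    """Least-squares (Kåsa) circle fit to 2D boundary points of a hole.

    Solves x² + y² = 2·cx·x + 2·cy·y + (r² − cx² − cy²) linearly.
    Returns (cx, cy, r).
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

theta = np.linspace(0, 2 * np.pi, 60)
pts = np.column_stack([5 + 2.5 * np.cos(theta), 1 + 2.5 * np.sin(theta)])
print(fit_circle_kasa(pts))   # ≈ (5.0, 1.0, 2.5)
```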

24 pages, 6038 KB  
Article
Research on Positioning and Tracking Method of Intelligent Mine Car in Underground Mine Based on YOLOv5 Algorithm and Laser Sensor Fusion
by Linxin Zhang, Xiaoquan Li, Yunjie Sun, Junhong Liu and Yonghe Xu
Sustainability 2025, 17(2), 542; https://doi.org/10.3390/su17020542 - 12 Jan 2025
Cited by 2 | Viewed by 1741
Abstract
Precise positioning has become a key technology in the intelligent development of underground mines. To improve the positioning accuracy of mining vehicles, this paper proposes an intelligent underground mining vehicle positioning and tracking method based on the fusion of the YOLOv5 and laser sensor technology. The system utilizes a camera and the YOLOv5 algorithm for real-time identification and precise tracking of mining vehicles, while the laser sensor is used to accurately measure the straight-line distance between the vehicle and the positioning device. By combining the strengths of both vision and laser sensors, the system can efficiently identify mining vehicles in complex environments and accurately calculate their position using geometric principles based on laser distance measurements. Experimental results show that the YOLOv5 algorithm can efficiently identify and track mining vehicles in real time. When integrated with the laser sensor’s distance measurement, the system achieves high-precision positioning, with horizontal and vertical positioning errors of 1.66 cm and 1.96 cm, respectively, achieving centimeter-level accuracy overall. This system significantly improves the accuracy and real-time performance of mining vehicle positioning, effectively reducing operational errors and safety risks, providing essential technical support for the intelligent development of underground mining transportation systems. Full article
(This article belongs to the Special Issue Sustainability for Disaster Mitigation in Underground Engineering)
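A bare-bones illustration of the geometric positioning step, assuming the camera supplies a bearing angle and the laser a range from a fixed station; the paper's exact geometry is not specified in the abstract.

```python
import numpy as np

def vehicle_position(laser_dist, bearing_deg, station_xy=(0.0, 0.0)):
    """Convert a laser range and a camera-derived bearing angle into a
    2D position for the tracked mine car (simplified illustration)."""
    th = np.radians(bearing_deg)
    x = station_xy[0] + laser_dist * np.cos(th)
    y = station_xy[1] + laser_dist * np.sin(th)
    return x, y

print(vehicle_position(12.4, 30.0))   # illustrative range and bearing
```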

16 pages, 5538 KB  
Article
Vision-Based Acquisition Model for Molten Pool and Weld-Bead Profile in Gas Metal Arc Welding
by Gwang-Gook Kim, Dong-Yoon Kim and Jiyoung Yu
Metals 2024, 14(12), 1413; https://doi.org/10.3390/met14121413 - 10 Dec 2024
Viewed by 1717
Abstract
Gas metal arc welding (GMAW) is widely used for its productivity and ease of automation across various industries. However, certain tasks in shipbuilding and heavy industry still require manual welding, where quality depends heavily on operator skill. Defects in manual welding often necessitate costly rework, reducing productivity. Vision sensing has become essential in automated welding, capturing dynamic changes in the molten pool and arc length for real-time defect insights. Laser vision sensors are particularly valuable for their high-precision bead profile data; however, most current models require offline inspection, limiting real-time application. This study proposes a deep learning-based system for the real-time monitoring of both the molten pool and weld-bead profile during GMAW. The system integrates an optimized optical design to reduce arc light interference, enabling the continuous acquisition of both molten pool images and 3D bead profiles. Experimental results demonstrate that the molten pool classification models achieved accuracies of 99.76% with ResNet50 and 99.02% with MobileNetV4, fulfilling real-time requirements with inference times of 6.53 ms and 9.06 ms, respectively. By combining 2D and 3D data through a semantic segmentation algorithm, the system enables the accurate, real-time extraction of weld-bead geometry, offering comprehensive weld quality monitoring that satisfies the performance demands of real-time industrial applications. Full article
(This article belongs to the Special Issue Welding and Fatigue of Metallic Materials)
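A hedged sketch of the real-time classification side, assuming a ResNet50 fine-tuned on molten-pool images; the class names and checkpoint path are hypothetical.

```python
import torch
from torchvision import models, transforms

# Hypothetical class names and checkpoint; the paper's categories and
# trained weights are not given in the abstract.
CLASSES = ["normal", "undercut", "burn_through", "porosity"]

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
# model.load_state_dict(torch.load("molten_pool_resnet50.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@torch.no_grad()
def classify(pil_image):
    """Classify one molten-pool frame (PIL image) into a weld state."""
    logits = model(preprocess(pil_image).unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]
```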
