Search Results (315)

Search Parameters:
Keywords = automated visual inspection

21 pages, 10827 KB  
Article
Smart Monitoring of Power Transformers in Substation 4.0: Multi-Sensor Integration and Machine Learning Approach
by Fabio Henrique de Souza Duz, Tiago Goncalves Zacarias, Ronny Francis Ribeiro Junior, Fabio Monteiro Steiner, Frederico de Oliveira Assuncao, Erik Leandro Bonaldi and Luiz Eduardo Borges-da-Silva
Sensors 2025, 25(17), 5469; https://doi.org/10.3390/s25175469 - 3 Sep 2025
Abstract
Power transformers are critical components in electrical power systems, where failures can cause significant outages and economic losses. Traditional maintenance strategies, typically based on offline inspections, are increasingly insufficient to meet the reliability requirements of modern digital substations. This work presents an integrated multi-sensor monitoring framework that combines online frequency response analysis (OnFRA® 4.0), capacitive tap-based monitoring (FRACTIVE® 4.0), dissolved gas analysis, and temperature measurements. All data streams are synchronized and managed within a SCADA system that supports real-time visualization and historical traceability. To enable automated fault diagnosis, a Random Forest classifier was trained using simulated datasets derived from laboratory experiments that emulate typical transformer and bushing degradation scenarios. Principal Component Analysis was employed for dimensionality reduction, improving model interpretability and computational efficiency. The proposed model achieved perfect classification metrics on the simulated data, demonstrating the feasibility of combining high-fidelity monitoring hardware with machine learning techniques for anomaly detection. Although no in-service failures have been recorded to date, the monitoring infrastructure has already been tested and validated under laboratory conditions, enabling continuous data acquisition. Full article
(This article belongs to the Section Electronic Sensors)
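A minimal sketch of the PCA-plus-Random-Forest stage described in this abstract, using scikit-learn; the feature layout (gas concentrations, FRA indices, temperatures), class labels, and synthetic data are illustrative placeholders, not the authors' dataset or code.

```python
# Sketch of a PCA + Random Forest fault classifier for multi-sensor transformer data
# (feature meanings and synthetic data are assumptions for illustration only).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))       # e.g., DGA gases, FRA indices, tap current, temperatures (assumed)
y = rng.integers(0, 3, size=600)     # e.g., 0 = healthy, 1 = bushing fault, 2 = winding fault (assumed)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),                              # put heterogeneous sensor channels on a common scale
    PCA(n_components=5),                           # dimensionality reduction, as in the described workflow
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```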

24 pages, 973 KB  
Review
Machine Learning in Thermography Non-Destructive Testing: A Systematic Review
by Shaoyang Peng, Sri Addepalli and Maryam Farsi
Appl. Sci. 2025, 15(17), 9624; https://doi.org/10.3390/app15179624 - 1 Sep 2025
Viewed by 5
Abstract
This paper reviews recent advances in machine learning (ML) algorithms to improve the postprocessing and interpretation of thermographic data in non-destructive testing (NDT). While traditional NDT methods (e.g., visual inspection, ultrasonic testing) each have their own advantages and limitations, thermographic techniques (e.g., pulsed thermography, laser thermography) have become valuable complementary tools, particularly in inspecting advanced materials such as carbon fiber-reinforced polymers (CFRPs) and superalloys. These techniques generate large volumes of thermal data, which can be challenging to analyze efficiently and accurately. This review focuses on how ML can accelerate defect detection and automated classification in thermographic NDT. We summarize currently popular algorithms and analyze the limitations of existing workflows. Furthermore, this structured analysis provides an in-depth understanding of how artificial intelligence can assist in processing NDT data, with the potential to enable more accurate defect detection and characterization in industrial applications. Full article

19 pages, 10307 KB  
Review
Advancements in Individual Animal Identification: A Historical Perspective from Prehistoric Times to the Present
by Shiva Paudel and Tami Brown-Brandl
Animals 2025, 15(17), 2514; https://doi.org/10.3390/ani15172514 - 27 Aug 2025
Viewed by 498
Abstract
Precision livestock farming (PLF) is rapidly advancing, with a growing array of technologies being explored and implemented to improve both productivity and animal welfare. One of the major challenges in this field is the identification of individual animals. Despite numerous efforts having been made to automate this process, there remains a lack of holistic reviews that comprehensively integrate and evaluate these technological developments. Historically, humans have employed various techniques to identify individual animals. This article provides an overview of the evolution of animal identification methods, highlighting significant transitions across various time periods. In prehistoric times, identification relied solely on visual inspection. Today, advanced methods are being utilized, such as radio frequency identification (RFID), computer vision-based systems, biometric recognition, and DNA profiling. Each identification method has its own strengths and limitations. Interestingly, early methods such as visual inspection and drawing can still inspire the development of novel automated systems when combined with modern technologies. Full article
(This article belongs to the Section Animal System and Management)

23 pages, 6098 KB  
Article
Smart Manufacturing Workflow for Fuse Box Assembly and Validation: A Combined IoT, CAD, and Machine Vision Approach
by Carmen-Cristiana Cazacu, Teodor Cristian Nasu, Mihail Hanga, Dragos-Alexandru Cazacu and Costel Emil Cotet
Appl. Sci. 2025, 15(17), 9375; https://doi.org/10.3390/app15179375 - 26 Aug 2025
Viewed by 328
Abstract
This paper presents an integrated workflow for smart manufacturing, combining CAD modeling, Digital Twin synchronization, and automated visual inspection to detect defective fuses in industrial electrical panels. The proposed system connects Onshape CAD models with a collaborative robot via the ThingWorx IoT platform and leverages computer vision with HSV color segmentation for real-time fuse validation. A custom ROI-based calibration method is implemented to address visual variation across fuse types, and a 5-s time-window validation improves detection robustness under fluctuating conditions. The system achieves a 95% accuracy rate across two fuse box types, with confidence intervals reported for statistical significance. Experimental findings indicate an approximate 85% decrease in manual intervention duration. Because of its adaptability and extensibility, the design can be implemented in a variety of assembly processes and provides a foundation for smart factory systems that are more scalable and independent. Full article
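A small sketch of the ROI-based HSV color check this abstract describes, using OpenCV; the ROI coordinates, HSV bounds, and pixel-ratio threshold are assumptions for illustration, not the paper's calibrated values.

```python
# Sketch of ROI-based HSV color validation for a fuse slot (OpenCV).
# ROI coordinates, HSV bounds, and the pixel-ratio threshold are illustrative assumptions.
import cv2
import numpy as np

def fuse_present(frame_bgr, roi=(100, 50, 40, 80),
                 hsv_lo=(20, 80, 80), hsv_hi=(35, 255, 255), min_ratio=0.30):
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]                      # crop the calibrated fuse-slot ROI
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)             # segment in HSV, as described above
    mask = cv2.inRange(hsv, np.array(hsv_lo, np.uint8), np.array(hsv_hi, np.uint8))
    ratio = float(cv2.countNonZero(mask)) / mask.size        # fraction of pixels in the target color
    return ratio >= min_ratio

# A 5-second time window could then require consecutive positive frames before a slot
# is declared valid, smoothing out transient lighting fluctuations.
```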

16 pages, 3972 KB  
Article
Solar Panel Surface Defect and Dust Detection: Deep Learning Approach
by Atta Rahman
J. Imaging 2025, 11(9), 287; https://doi.org/10.3390/jimaging11090287 - 25 Aug 2025
Viewed by 457
Abstract
In recent years, solar energy has emerged as a pillar of sustainable development. However, maintaining panel efficiency under extreme environmental conditions remains a persistent hurdle. This study introduces an automated defect detection pipeline that leverages deep learning and computer vision to identify five standard anomaly classes: Non-Defective, Dust, Defective, Physical Damage, and Snow on photovoltaic surfaces. To build a robust foundation, a heterogeneous dataset of 8973 images was sourced from public repositories and standardized into a uniform labeling scheme. This dataset was then expanded through an aggressive augmentation strategy, including flips, rotations, zooms, and noise injections. A YOLOv11-based model was trained and fine-tuned using both fixed and adaptive learning rate schedules, achieving a mAP@0.5 of 85% and accuracy, recall, and F1-score above 95% when evaluated across diverse lighting and dust scenarios. The optimized model is integrated into an interactive dashboard that processes live camera streams, issues real-time alerts upon defect detection, and supports proactive maintenance scheduling. Comparative evaluations highlight the superiority of this approach over manual inspections and earlier YOLO versions in both precision and inference speed, making it well suited for deployment on edge devices. Automating visual inspection not only reduces labor costs and operational downtime but also enhances the longevity of solar installations. By offering a scalable solution for continuous monitoring, this work contributes to improving the reliability and cost-effectiveness of large-scale solar energy systems. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
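A hedged sketch of fine-tuning a YOLOv11 detector with the ultralytics package, in the spirit of the pipeline described above; the dataset YAML, epoch count, learning-rate settings, and augmentation magnitudes are assumptions, not the paper's exact configuration.

```python
# Sketch of fine-tuning a YOLOv11 detector on a five-class panel-anomaly dataset
# (ultralytics). The dataset file and hyperparameters are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                 # pretrained nano checkpoint as a starting point
model.train(
    data="solar_panels.yaml",              # classes: Non-Defective, Dust, Defective, Physical Damage, Snow (assumed file)
    epochs=100,
    imgsz=640,
    lr0=0.01,                              # fixed initial learning rate
    cos_lr=True,                           # adaptive (cosine) learning-rate schedule
    degrees=10.0,                          # rotation augmentation
    fliplr=0.5,                            # horizontal-flip augmentation
)
metrics = model.val()                      # reports mAP@0.5, precision, recall on the validation split
results = model.predict(source="live_stream.mp4", conf=0.5, stream=True)  # live-camera style inference
```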

20 pages, 5528 KB  
Article
Wearable Smart Gloves for Optimization Analysis of Disassembly and Assembly of Mechatronic Machines
by Chin-Shan Chen, Hung Wei Chang and Bo-Chen Jiang
Sensors 2025, 25(17), 5223; https://doi.org/10.3390/s25175223 - 22 Aug 2025
Viewed by 521
Abstract
With the rapid development of smart manufacturing, real-time monitoring and optimization of operating procedures has become a crucial issue in modern industry. Traditional disassembly and assembly (D/A) work relies on human experience and visual inspection, and therefore lacks immediacy and a quantitative basis, which affects operating quality and efficiency. This study develops a wearable device that integrates thin-film force sensors and an inertial measurement unit (IMU) to monitor and analyze operators’ behavioral characteristics during D/A tasks. Operators wearing the self-made smart gloves and 17 IMU sensors performed D/A experiments on a mechatronic machine mounted on work tables at three different heights. Common D/A motions were built into the experiment, and several subjects executed a standardized operating procedure while data on hand gestures and upper-limb movements were collected. The measured data were then used to evaluate a performance measure for the best functional path of machine D/A. The results show that the system can effectively identify various D/A motions and capture differences in operators’ applied force and motion patterns; combined with performance-indicator optimization and data analysis, these observations provide a reference for best-path planning, D/A sequencing, and work-table height design in the machine D/A process. The optimal workbench height for a standing operator is 5 to 10 cm above elbow height, and performing assembly and disassembly tasks at this height can save between 14.3933% and 35.2579% of physical effort. Such outcomes can support D/A behavior monitoring in industry, worker training, and operational optimization, and can be extended to instant-feedback design for automation in smart factories. Full article

36 pages, 9430 KB  
Article
Numerical Method for Internal Structure and Surface Evaluation in Coatings
by Tomas Kačinskas and Saulius Baskutis
Inventions 2025, 10(4), 71; https://doi.org/10.3390/inventions10040071 - 13 Aug 2025
Viewed by 300
Abstract
This study introduces a MATrix LABoratory (MATLAB, version R2024b, update 1 (24.2.0.2740171))-based automated system for the detection and measurement of indication areas in coated surfaces, enhancing the accuracy and efficiency of quality control processes in metal, polymeric and thermoplastic coatings. The developed code identifies various indication characteristics in the image and provides numerical results, assesses the size and quantity of indications and evaluates conformity to ISO standards. A comprehensive testing method, involving non-destructive penetrant testing (PT) and radiographic testing (RT), allowed for an in-depth analysis of surface and internal porosity across different coating methods, including aluminum-, copper-, polytetrafluoroethylene (PTFE)- and polyether ether ketone (PEEK)-based materials. Initial findings indicated a non-homogeneous surface in the obtained coatings, which were manufactured using different technologies and materials. Whereas researchers using non-destructive testing (NDT) methods typically rely on visual inspection and manual counting, the system under study automates this process. Each sample image is loaded into MATLAB and analyzed using the Image Processing Toolbox, Computer Vision Toolbox, and Statistics and Machine Learning Toolbox. The custom code performs essential tasks such as image conversion, filtering, boundary detection, layering operations and calculations. These processes are integral to rendering images with developed indications according to NDT method requirements, providing a detailed visual and numerical representation of the analysis. RT also validated the observations made through surface indication detection, revealing either the absence of hidden defects or, conversely, internal porosity correlating with surface conditions. Matrix and graphical representations were used to facilitate the comparison of test results, highlighting more advanced methods and materials as the superior choice for achieving optimal mechanical and structural integrity. This research contributes to addressing challenges in surface quality assurance, advancing digital transformation in inspection processes and exploring more advanced alternatives to traditional coating technologies and materials. Full article
(This article belongs to the Section Inventions and Innovation in Advanced Manufacturing)
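The paper implements its pipeline in MATLAB; below is only an analogous sketch in Python/scikit-image of detecting and measuring indication areas (threshold, filter, label, measure). The thresholding strategy and minimum indication size are assumptions, not the authors' parameters.

```python
# Analogous sketch (Python/scikit-image, not the paper's MATLAB toolchain) of detecting
# and measuring indication areas on a penetrant-tested coating image.
# The threshold choice and minimum area are illustrative assumptions.
import numpy as np
from skimage import io, color, filters, measure, morphology

img = io.imread("coating_sample.png")                           # placeholder filename
gray = color.rgb2gray(img[..., :3]) if img.ndim == 3 else img

thresh = filters.threshold_otsu(gray)                           # global threshold separating indications
binary = gray > thresh
binary = morphology.remove_small_objects(binary, min_size=20)   # suppress noise specks

labels = measure.label(binary)                                  # boundary/connectivity analysis
regions = measure.regionprops(labels)

areas = [r.area for r in regions]
print(f"indications: {len(areas)}, total area (px): {sum(areas)}")
# Counts and sizes could then be compared against the acceptance limits of the relevant ISO standard.
```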

16 pages, 1693 KB  
Article
Limitations of Transfer Learning for Chilean Cherry Tree Health Monitoring: When Lab Results Do Not Translate to the Orchard
by Mauricio Hidalgo, Fernando Yanine, Renato Galleguillos, Miguel Lagos, Sarat Kumar Sahoo and Rodrigo Paredes
Processes 2025, 13(8), 2559; https://doi.org/10.3390/pr13082559 - 13 Aug 2025
Viewed by 424
Abstract
Chile, which accounts for 27% of global cherry exports (USD 2.26 billion annually), faces a critical industry challenge in crop health monitoring. While automated sensors monitor environmental variables, phytosanitary diagnosis still relies on manual visual inspection, leading to detection errors and delays. Given this reality and the growing use of AI models in agriculture, our study quantifies the theory–practice gap through comparative evaluation of three transfer learning architectures (namely, VGG16, ResNet50, and EfficientNetB0) for automated disease identification in cherry leaves under both controlled and real-world orchard conditions. Our analysis reveals that excellent laboratory performance does not guarantee operational effectiveness: while two of the three models exceeded 97% controlled validation accuracy, their field performance degraded significantly, reaching only 52% in the best-case scenario (ResNet50). These findings identify a major risk in agricultural transfer learning applications: strong laboratory performance does not ensure real-world effectiveness, creating unwarranted confidence in model performance under real conditions that may compromise crop health management. Full article
(This article belongs to the Special Issue Transfer Learning Methods in Equipment Reliability Management)
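A hedged sketch of the kind of transfer-learning setup evaluated above, using a frozen ResNet50 backbone in tf.keras; the number of classes, input size, and training details are assumptions rather than the study's configuration.

```python
# Sketch of a ResNet50 transfer-learning classifier for leaf-health images (tf.keras);
# class count, input size, and hyperparameters are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 4                                        # e.g., healthy plus several disease classes (assumed)

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                 # freeze ImageNet features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()

# As the study's findings warn, high validation accuracy on controlled images does not
# guarantee orchard performance, so a held-out set of field images should be evaluated separately.
```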

30 pages, 10586 KB  
Article
Autonomous UAV-Based System for Scalable Tactile Paving Inspection
by Tong Wang, Hao Wu, Abner Asignacion, Zhengran Zhou, Wei Wang and Satoshi Suzuki
Drones 2025, 9(8), 554; https://doi.org/10.3390/drones9080554 - 7 Aug 2025
Viewed by 472
Abstract
Tactile pavings (Tenji Blocks) are prone to wear, obstruction, and improper installation, posing significant safety risks for visually impaired pedestrians. To address this, an autonomous UAV-based inspection system is developed. The system incorporates a lightweight YOLOv8 (You Only Look Once version 8) model for real-time detection using a fisheye camera to maximize field-of-view coverage, which is highly advantageous for low-altitude UAV navigation in complex urban settings. To enable lightweight deployment, a novel Lightweight Shared Detail Enhanced Oriented Bounding Box (LSDE-OBB) head module is proposed. The design rationale of LSDE-OBB leverages the consistent structural patterns of tactile pavements, enabling parameter sharing within the detection head as an effective optimization strategy without significant accuracy compromise. The feature extraction module is further optimized using StarBlock to reduce computational complexity and model size. Integrated Contextual Anchor Attention (CAA) captures long-range spatial dependencies and refines critical feature representations, achieving an optimal speed–precision balance. The framework demonstrates a 25.13% parameter reduction (2.308 M vs. 3.083 M), 46.29% lower GFLOPs, and achieves 11.97% mAP50:95 on tactile paving datasets, enabling real-time edge deployment. Validated through public/custom datasets and actual UAV flights, the system realizes robust tactile paving detection and stable navigation in complex urban environments via hierarchical control algorithms for dynamic trajectory planning and obstacle avoidance, providing an efficient and scalable platform for automated infrastructure inspection. Full article
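A hedged sketch of oriented-bounding-box inference with a stock ultralytics YOLOv8-OBB checkpoint, as a stand-in for the paper's custom LSDE-OBB head (which is not reproduced here); the weights file, image source, and confidence threshold are placeholders.

```python
# Sketch of oriented-bounding-box (OBB) inference with a stock YOLOv8-OBB model (ultralytics),
# only a generic stand-in for the paper's custom LSDE-OBB head.
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")                          # pretrained OBB checkpoint (placeholder weights)
results = model.predict(source="uav_frame.jpg", conf=0.4)

for r in results:
    if r.obb is None:
        continue
    for poly, conf in zip(r.obb.xyxyxyxy.cpu().numpy(), r.obb.conf.cpu().numpy()):
        # poly holds the four corner points of one rotated box around a detected paving segment
        print(f"confidence {conf:.2f}, corners {poly.reshape(4, 2).tolist()}")
```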

60 pages, 8707 KB  
Review
Automation in Construction (2000–2023): Science Mapping and Visualization of Journal Publications
by Mohamed Marzouk, Abdulrahman A. Bin Mahmoud, Khalid S. Al-Gahtani and Kareem Adel
Buildings 2025, 15(15), 2789; https://doi.org/10.3390/buildings15152789 - 7 Aug 2025
Viewed by 832
Abstract
This paper presents a scientometric review that provides a quantitative perspective on the evolution of Automation in Construction Journal (AICJ) research, emphasizing its developmental paths and emerging trends. The study aims to analyze the journal’s growth and citation impact over time. It also seeks to identify the most influential publications and the cooperation patterns among key contributors. Furthermore, the study explores the journal’s primary research themes and their evolution. Accordingly, 4084 articles were identified using the Web of Science (WoS) database and subjected to a multistep analysis using VOSviewer (version 1.6.18) and Biblioshiny as software tools. First, the growth and citation of the publications over time are inspected and evaluated, in addition to ranking the most influential documents. Second, the co-authorship analysis method is applied to visualize the cooperation patterns between countries, organizations, and authors. Finally, the publications are analyzed using keyword co-occurrence and keyword thematic evolution analyses, revealing five major research clusters: (i) foundational optimization, (ii) deep learning and computer vision, (iii) building information modeling, (iv) 3D printing and robotics, and (v) machine learning. Additionally, the analysis reveals significant growth in publications (54.5%) and citations (78.0%) from 2018 to 2023, indicating the journal’s increasing global influence. This period also highlights the accelerated adoption of digitalization (e.g., BIM, computational design), increased integration of AI and machine learning for automation and predictive analytics, and rapid growth of robotics and 3D printing, driving sustainable and innovative construction practices. The paper’s findings can help readers and researchers gain a thorough understanding of the AICJ’s published work, aid research groups in planning and optimizing their research efforts, and inform editorial boards on the most promising areas in the existing body of knowledge for further investigation and development. Full article
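A minimal illustration of the keyword co-occurrence counting that underlies the clustering analysis described above; the per-article keyword lists are made-up examples, and real tools such as VOSviewer add normalization and clustering on top of these raw counts.

```python
# Minimal sketch of keyword co-occurrence counting (the basis of co-occurrence clustering);
# the article keyword lists below are made-up examples.
from collections import Counter
from itertools import combinations

article_keywords = [
    ["BIM", "deep learning", "computer vision"],
    ["BIM", "3D printing", "robotics"],
    ["deep learning", "computer vision", "defect detection"],
]

cooc = Counter()
for kws in article_keywords:
    for a, b in combinations(sorted(set(kws)), 2):   # count each unordered keyword pair once per article
        cooc[(a, b)] += 1

for pair, n in cooc.most_common(5):
    print(pair, n)
```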

22 pages, 6482 KB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Viewed by 354
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. The experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing other models. The experimental results demonstrate the proposed framework’s strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure. Full article
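A short sketch of how the pixel-level precision, recall, F1, and IoU figures reported above can be computed from a predicted and a ground-truth crack mask; the toy masks are placeholders.

```python
# Sketch of pixel-level segmentation metrics (precision, recall, F1, IoU)
# computed from binary predicted and ground-truth masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-9):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # damage pixels correctly detected
    fp = np.logical_and(pred, ~gt).sum()      # false alarms
    fn = np.logical_and(~pred, gt).sum()      # missed damage pixels
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, f1, iou

# Toy 2x2 example:
p, r, f1, iou = segmentation_metrics(np.array([[1, 0], [1, 1]]), np.array([[1, 0], [0, 1]]))
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f} IoU={iou:.3f}")
```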

32 pages, 5560 KB  
Article
Design of Reconfigurable Handling Systems for Visual Inspection
by Alessio Pacini, Francesco Lupi and Michele Lanzetta
J. Manuf. Mater. Process. 2025, 9(8), 257; https://doi.org/10.3390/jmmp9080257 - 31 Jul 2025
Viewed by 512
Abstract
Industrial Vision Inspection Systems (VISs) often struggle to adapt to the increasing variability of modern manufacturing due to the inherent rigidity of their hardware architectures. Although the Reconfigurable Manufacturing System (RMS) paradigm was introduced in the early 2000s to overcome these limitations, designing such reconfigurable machines remains a complex, expert-dependent, and time-consuming task. This is primarily due to the lack of structured methodologies and the reliance on trial-and-error processes. In this context, this study proposes a novel theoretical framework to facilitate the design of fully reconfigurable handling systems for VISs, with a particular focus on fixture design. The framework is grounded in Model-Based Definition (MBD), embedding semantic information directly into the 3D CAD models of the inspected product. As an additional contribution, a general hardware architecture for the inspection of axisymmetric components is presented. This architecture integrates an anthropomorphic robotic arm, Numerically Controlled (NC) modules, and adaptable software and hardware components to enable automated, software-driven reconfiguration. The proposed framework and architecture were applied in an industrial case study conducted in collaboration with a leading automotive half-shaft manufacturer. The resulting system, implemented across seven automated cells, successfully inspected over 200 part types from 12 part families and detected more than 60 defect types, with a cycle time below 30 s per part. Full article

27 pages, 6715 KB  
Article
Structural Component Identification and Damage Localization of Civil Infrastructure Using Semantic Segmentation
by Piotr Tauzowski, Mariusz Ostrowski, Dominik Bogucki, Piotr Jarosik and Bartłomiej Błachowski
Sensors 2025, 25(15), 4698; https://doi.org/10.3390/s25154698 - 30 Jul 2025
Viewed by 582
Abstract
Visual inspection of civil infrastructure for structural health assessment, as performed by structural engineers, is expensive and time-consuming. Therefore, automating this process is highly attractive and has received significant attention in recent years. With the increasing capabilities of computers, deep neural networks have become a standard tool and can be used for structural health inspections. A key challenge, however, is the availability of reliable datasets. In this work, the U-net and DeepLab v3+ convolutional neural networks are trained on a synthetic Tokaido dataset. This dataset comprises images representative of data acquired by unmanned aerial vehicle (UAV) imagery and corresponding ground truth data. The data includes semantic segmentation masks for both categorizing structural elements (slabs, beams, and columns) and assessing structural damage (concrete spalling or exposed rebars). Data augmentation, including both image quality degradation (e.g., brightness modification, added noise) and image transformations (e.g., image flipping), is applied to the synthetic dataset. The selected neural network architectures achieve excellent performance, reaching values of 97% for accuracy and 87% for Mean Intersection over Union (mIoU) on the validation data. They also demonstrate promising results in the semantic segmentation of real-world structures captured in photographs, despite being trained solely on synthetic data. Additionally, based on the obtained results of semantic segmentation, it can be concluded that DeepLabV3+ outperforms U-net in structural component identification. However, this is not the case in the damage identification task. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
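A hedged sketch of a joint image/mask augmentation pipeline of the kind described above (brightness modification, added noise, flipping), using albumentations; probability and magnitude values are assumptions, not the study's settings.

```python
# Sketch of joint image/mask augmentation for synthetic segmentation data (albumentations);
# probabilities and magnitudes are illustrative assumptions.
import albumentations as A
import numpy as np

augment = A.Compose([
    A.HorizontalFlip(p=0.5),                                            # geometric: applied to image and mask
    A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.2, p=0.5),  # photometric degradation
    A.GaussNoise(p=0.3),                                                # added noise, image only
])

image = np.zeros((512, 512, 3), dtype=np.uint8)   # placeholder synthetic render
mask = np.zeros((512, 512), dtype=np.uint8)       # placeholder segmentation labels

out = augment(image=image, mask=mask)
aug_image, aug_mask = out["image"], out["mask"]   # the mask follows only the geometric transforms
```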

18 pages, 5309 KB  
Article
LGM-YOLO: A Context-Aware Multi-Scale YOLO-Based Network for Automated Structural Defect Detection
by Chuanqi Liu, Yi Huang, Zaiyou Zhao, Wenjing Geng and Tianhong Luo
Processes 2025, 13(8), 2411; https://doi.org/10.3390/pr13082411 - 29 Jul 2025
Viewed by 354
Abstract
Ensuring the structural safety of steel trusses in escalators is critical for the reliable operation of vertical transportation systems. While manual inspection remains widely used, its dependence on human judgment leads to extended cycle times and variable defect-recognition rates, making it less reliable for identifying subtle surface imperfections. To address these limitations, a novel context-aware, multi-scale deep learning framework based on the YOLOv5 architecture is proposed, which is specifically designed for automated structural defect detection in escalator steel trusses. Firstly, a method called GIES is proposed to synthesize pseudo-multi-channel representations from single-channel grayscale images, which enhances the network’s channel-wise representation and mitigates issues arising from image noise and defocused blur. To further improve detection performance, a context enhancement pipeline is developed, consisting of a local feature module (LFM) for capturing fine-grained surface details and a global context module (GCM) for modeling large-scale structural deformations. In addition, a multi-scale feature fusion module (MSFM) is employed to effectively integrate spatial features across various resolutions, enabling the detection of defects with diverse sizes and complexities. Comprehensive testing on the NEU-DET and GC10-DET datasets reveals that the proposed method achieves 79.8% mAP on NEU-DET and 68.1% mAP on GC10-DET, outperforming the baseline YOLOv5s by 8.0% and 2.7%, respectively. Although challenges remain in identifying extremely fine defects such as crazing, the proposed approach offers improved accuracy while maintaining real-time inference speed. These results indicate the potential of the method for intelligent visual inspection in structural health monitoring and industrial safety applications. Full article
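The abstract does not specify how GIES builds its pseudo-multi-channel input; as a generic analogy only, a grayscale frame can be expanded into three complementary channels (raw, denoised, edge-enhanced) before being fed to a standard RGB-input detector. The filter choices below are assumptions, not the paper's method.

```python
# Generic illustration of building a pseudo-three-channel input from a grayscale frame
# (raw, denoised, edge-enhanced); only an analogy to the GIES idea, whose details are not given here.
import cv2

gray = cv2.imread("truss_frame.png", cv2.IMREAD_GRAYSCALE)             # placeholder filename

denoised = cv2.GaussianBlur(gray, (5, 5), 0)                            # suppress sensor noise / defocus blur
edges = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_16S, ksize=3))   # emphasize fine surface detail

pseudo_rgb = cv2.merge([gray, denoised, edges])                         # 3-channel tensor for an RGB-input detector
print(pseudo_rgb.shape)                                                 # (H, W, 3)
```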

23 pages, 7839 KB  
Article
Automated Identification and Analysis of Cracks and Damage in Historical Buildings Using Advanced YOLO-Based Machine Vision Technology
by Kui Gao, Li Chen, Zhiyong Li and Zhifeng Wu
Buildings 2025, 15(15), 2675; https://doi.org/10.3390/buildings15152675 - 29 Jul 2025
Viewed by 372
Abstract
Structural cracks significantly threaten the safety and longevity of historical buildings, which are essential parts of cultural heritage. Conventional inspection techniques, which depend heavily on manual visual evaluations, tend to be inefficient and subjective. This research introduces an automated framework for crack and damage detection using advanced YOLO (You Only Look Once) models, aiming to improve both the accuracy and efficiency of monitoring heritage structures. A dataset comprising 2500 high-resolution images was gathered from historical buildings and categorized into four levels of damage: no damage, minor, moderate, and severe. Following preprocessing and data augmentation, a total of 5000 labeled images were utilized to train and evaluate four YOLO variants: YOLOv5, YOLOv8, YOLOv10, and YOLOv11. The models’ performances were measured using metrics such as precision, recall, mAP@50, mAP@50–95, as well as losses related to bounding box regression, classification, and distribution. Experimental findings reveal that YOLOv10 surpasses other models in multi-target detection and identifying minor damage, achieving higher localization accuracy and faster inference speeds. YOLOv8 and YOLOv11 demonstrate consistent performance and strong adaptability, whereas YOLOv5 converges rapidly but shows weaker validation results. Further testing confirms YOLOv10’s effectiveness across different structural components, including walls, beams, and ceilings. This study highlights the practicality of deep learning-based crack detection methods for preserving building heritage. Future advancements could include combining semantic segmentation networks (e.g., U-Net) with attention mechanisms to further refine detection accuracy in complex scenarios. Full article
(This article belongs to the Special Issue Structural Safety Evaluation and Health Monitoring)