Article

Automatic Defect Detection of Jet Engine Turbine and Compressor Blade Surface Coatings Using a Deep Learning-Based Algorithm

1. School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2. School of Materials, Sun Yat-sen University, and Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Shenzhen 518107, China
3. AI Technology Innovation Group, School of Economics and Management, Communication University of China, Beijing 100024, China
4. School of Mechanical Engineering, Shenyang Aerospace University, Shenyang 110136, China
* Authors to whom correspondence should be addressed.
Coatings 2024, 14(4), 501; https://doi.org/10.3390/coatings14040501
Submission received: 24 March 2024 / Revised: 15 April 2024 / Accepted: 16 April 2024 / Published: 18 April 2024
(This article belongs to the Special Issue Recent Advances in Additive Manufacturing Techniques)

Abstract:
The application of additive manufacturing (AM) in the aerospace industry has enabled the production of highly complex parts, such as jet engine turbine and compressor blades, that are difficult to produce with any conventional manufacturing process. However, defects like nicks, surface irregularities, and edge imperfections can arise during production, potentially affecting the operational integrity and safety of jet engines. To address the poor accuracy and low efficiency of existing methodologies, this study introduces a deep learning approach using the You Only Look Once version 8 (YOLOv8) algorithm to detect surface, nick, and edge defects on jet engine turbine and compressor blades. The proposed method achieves high accuracy and speed, making it a practical solution for detecting surface defects in AM turbine and compressor blade specimens, particularly in the context of quality control and surface treatment processes in AM. The experimental findings confirmed that, compared with earlier automatic defect recognition procedures, the YOLOv8 model effectively detected nicks, edge defects, and surface defects in the turbine and compressor blade dataset, reaching an accuracy of up to 99.5% in just 280 s.

1. Introduction

Additive manufacturing (AM) has gained a massive reputation as a fast-expanding technology within the aerospace industry, acclaimed for its capability to fabricate intricate components such as turbine and compressor blades, thereby enhancing structural efficiency and reducing weight [1,2]. Despite its considerable advantages, the AM process is susceptible to various defects, including nicks, edge inconsistencies, and surface roughness, attributable to variations in material properties, printing parameters, and post-processing techniques. These imperfections could significantly undermine the quality, safety, and performance integrity of high-stakes aerospace components [3,4]. Therefore, maintaining the structural integrity and operational efficacy of turbine and compressor blades in jet engines necessitates the implementation of real-time defect detection strategies, facilitated by advanced deep learning methodologies. This area of study has been extensively researched, underscoring its significance within the aerospace engineering domain.
The aerospace industry has adopted AM technology since the mid-2010s, when companies such as GE Aviation, SpaceX, and Lockheed Martin, as well as government bodies such as the US Air Force, invested heavily in it for its distinctive attributes: diminished material waste, facilitation of lightweight designs, decreased reliance on assembly via component consolidation, and the capacity to manufacture intricately complex components. These advantages collectively result in reduced fuel consumption and cost savings, aided by streamlined certification processes [5].
Traditional defect detection methods, such as visual inspection, are labor-intensive, time-consuming, and subjective; they require skilled personnel and may not always provide precise results. In addition, non-destructive testing (NDT) techniques [6], optical methods (OM) [7], and engine borescope inspection [8], though precise, are significantly costly because they require not only certified tools but also engineers with specialized licenses [9]. In recent years, image-based analytic tools such as deep machine learning (DML) have shown considerable promise in image recognition tasks, including object detection, and have been applied in industry to enhance quality control in production systems, yielding significant outcomes in the quality assurance of surface coating defect detection.
Several studies have delved into the application of deep learning for failure detection across various manufacturing settings, particularly focusing on the outer layers of additive manufacturing (AM) parts, including turbine and compressor blades for jet engines [10,11,12]. However, the majority of these investigations have predominantly utilized traditional convolutional neural networks (CNNs), which are not optimal for spotting subtle and asymmetrical flaws. In the Malta et al. [13] study, a CNN algorithm was trained to detect surface defects in automotive engine component detection. The results demonstrated the model’s ability to accurately identify specific components in live video monitoring. Aust et al. [14] devised a technique for identifying edge defects in high-pressure compressor blades utilizing small datasets, employing traditional computer vision methods, followed by defect feature point calculation and clustering via the DBSCAN algorithm, although this method is primarily limited to edge defect detection.
In a separate study by Matthias et al. [15], a multi-scale technique of various length scales was presented, utilizing the sustained finite element method (FEM) for analyzing surface defects in three-dimensional space. To validate the model, its application was focused on the quadrature area of a jet turbine and compressor blade, resulting in effective computation and analysis of crack growth. In a research study by Yoon et al. [16], an analysis was conducted on the defects found in gas turbine blades within a cogeneration plant. Scanning electron microscopy (SEM) was implemented to obtain photographs, revealing that cracks were initiated as a result of concentrated stresses around preexisting defects. Liu et al. [17] analyzed the FEM-based deep vibration analysis method in the gas turbine rotor blades dataset to detect potential locations of dynamic crack propagation.
He et al. [18] proposed an improved R-CNN cascade mask deep learning-based methodology aimed at achieving accurate edge failure detection in turbine blades. In the field of nick damage detection in rotating 3D blade-like structural elements, Buezas et al. [19] employed genetic algorithms (GAs) to implement multi-layer detection. Advancements in continuum robotics have enabled researchers, as mentioned in references [20,21], to design snake-like robots capable of autonomously exploring the intricate inner spaces of gas turbines. Morini et al. [22] conducted a specialized study on compressor fouling, addressing associated issues and potential solutions. Furthermore, in order to optimize the blade defect analysis process, Li et al. [23] focused on utilizing measurement parameters, while Zhou et al. [24] emphasized the establishment of mapping and fault index relationships.
In the context of object detection using the YOLO network, Redmon et al. [25] approached the issue of defect detection as a regression problem. The method involves direct regression of the target’s bounding box from varied locations within the input dataset, leading to a substantial increase in detection speed while maintaining precision. Concurrently, Hui et al. [26] developed an advanced model leveraging the YOLOv4 framework to detect cracks in jet engine blades. This enhancement incorporated an attention mechanism within the architecture network to augment background differentiation and enhanced multi-scale feature fusion through the application of bilinear interpolation, thereby significantly elevating detection capabilities.
In the pursuit of near-perfect accuracy at high speed, intensive efforts have been devoted to various deep learning-based defect detection algorithms, and the YOLOv8 model has been distinguished as the preeminent version, surpassing its predecessors in efficacy, as corroborated by multiple studies in the literature [10,27,28]. The principal aim was the pragmatic application of the YOLOv8 architecture, which, despite its baseline proficiency, required bespoke alterations to accommodate the defect typologies unique to AM specimens. Concomitantly, the model was further customized to identify defects that manifest post-production, including surface coating flaws, nicks, and edge deformities, particularly in AM-fabricated components such as aero-engine compressor and turbine blades.
In this research, the innovative application of YOLOv8, one of the most widely used state-of-the-art object detection frameworks, was explored within the context of aero-engine turbine and compressor blade defect detection, significantly enhancing inspection speed, accuracy, and reliability over traditional methods. The proposed methodology seeks to enhance the detection of surface, nick, and edge defects on jet engine turbine and compressor blades, as demonstrated in Figure 1. In pursuit of this objective, the study is designed to achieve the following aims:
  • To better understand the relationship between additive manufacturing, surface defect detection in jet engine turbine and compressor blades, and deep learning, we conduct a comprehensive comparative analytical review of deep learning architectures proposed by previous researchers for image analysis of gas engine turbine and compressor blades.
  • To implement a deep learning-based YOLOv8 algorithm for identifying surface flaws on the turbine and compressor blades of jet engines that have been produced using conventional and AM methods.
  • To assess the execution of the proposed approaches in detecting defects, a dataset consisting of turbine and compressor blade images is employed in this research investigation.
Figure 1. Surface, nick, and edge defects observed in the manufacturing of turbine blades using AM.
The paper’s structure is delineated as follows: Section 2 outlines the research methodology utilized in this investigation, Section 3 scrutinizes the defect detection results, and Section 4 concludes the study.

2. Methodology

In this research, a novel deep learning methodology was developed for identifying surface defects in jet engine turbine and compressor blades, utilizing the advanced YOLOv8 algorithm. The approach is distinguished by the incorporation of both conventionally manufactured and additively manufactured blades, enhancing the dataset’s diversity and accuracy. This study marks the first application of YOLOv8 for defect detection in turbine blades, demonstrating the model’s adaptability and effectiveness in a new, high-stakes domain. The novelty of this work lies in the targeted adaptation and optimization of YOLOv8 for aerospace component surface inspection, a critical area previously unexplored by this technology. The contributions include a bespoke dataset tailored for turbine blade defects and significant enhancements to the model’s performance in this specific context. This research advances the field of aerospace manufacturing and maintenance, offering a pioneering approach to defect detection that promises to improve quality assurance and operational safety in jet engines. The methodology section involved three main components: dataset preparation, deep learning architecture and algorithm selection, and training process and evaluation. A schematic research flowchart is illustrated below in Figure 2.

2.1. Dataset Preparation

For this research, an openly accessible compilation of multiple datasets (refer to Table 1) containing 302 augmented images derived from 151 original images of jet engine turbine and compressor blade surfaces featuring various surface defects was utilized. To evaluate the efficacy of the proposed YOLOv8 model, a dataset comprising turbine blade images from diverse origins was employed to assess the model’s proficiency in detecting nick, edge, and surface defects with maximal accuracy while utilizing a minimal dataset within the shortest possible detection time frame. The dataset was prepared by taking high-resolution images of the turbine and compressor blades and manually annotating the surface defects by drawing a bounding box around each defect area using the Roboflow tool (https://roboflow.com/auto-label (accessed on 14 April 2024)). The annotated images were then split into training, validation, and test sets. The training set is used to train the deep learning model, while the validation set is used to select the best architecture based on model performance. The images in the dataset show the three defect types found in turbine and compressor blades as well as non-defective blades, as illustrated in Figure 3. Within the dataset, orange squares mark nicks, red squares mark surface defects, and purple squares mark edge defects.
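Annotation tools such as Roboflow commonly export bounding boxes in the YOLO text-label format: one line per box containing the class index followed by the box center and size, all normalized by the image dimensions. The sketch below illustrates that conversion; the class-index mapping (0 = nick, 1 = surface defect, 2 = edge defect) is assumed for illustration and is not necessarily the mapping used in the study's dataset.

```python
def to_yolo_label(cls_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space bounding box to a YOLO-format label line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A hypothetical nick annotated at pixels (100, 200)-(180, 260) in a 640x640 image:
print(to_yolo_label(0, 100, 200, 180, 260, 640, 640))
```

One such `.txt` file per image, alongside the image itself, is the layout YOLO-family trainers expect.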

2.2. Preprocessing and Dataset Balancing

Preprocessing entails converting the raw data into a layout suitable for deep learning model training. In this investigation, the pixel values were normalized to a mean of 0 and a standard deviation of 1, and the images were scaled to a fixed size of 640 × 640 pixels and converted to the RGB color space. Each stage is essential to ensure a realistic dataset that helps the deep learning model accurately detect surface flaws in the jet engine turbine and compressor blade dataset. In this study, the dataset was split randomly into three subsets: 67% for training, 18% for validation, and 15% for testing. These proportions can vary based on the dataset’s size, the complexity of the model, and the specific requirements of the research or application, as the subsets serve distinct purposes in deep machine learning model development. Although no universally mandated rule exists for specifying precise allocation ratios, standard practice in deep learning reserves approximately 60%–80% of the dataset for training, 10%–20% for validation, and 10%–20% for testing. These percentages are chosen to ensure that the model has sufficient data for effective training while also providing enough unseen data for validation and testing, meeting both the model’s needs and the project’s specific requirements. The larger training set (67%) supports the model’s learning by exposing it to a diverse range of turbine and compressor blade defects, aiding defect recognition and feature extraction. The validation subset (18%) plays a crucial role in adjusting the model’s hyperparameters and preventing overfitting by assessing its performance on unseen data during training. It helps optimize model performance by providing feedback on how well the model generalizes from the training data.
Lastly, the testing dataset (15%) remains entirely separate from both the training and validation sets, allowing an independent evaluation of the model’s generalization to the new dataset, thereby evaluating its real-world performance.
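The 67/18/15 split described above can be sketched as a simple shuffle-and-slice over the image list. This is a minimal illustration, not the study's actual splitting code; the helper name and fixed seed are assumptions, and with 302 images the ratios yield 202/54/46 images.

```python
import random

def split_dataset(items, ratios=(0.67, 0.18, 0.15), seed=42):
    """Shuffle and split a dataset into train/validation/test subsets
    according to the given ratios; the test set takes the remainder."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = split_dataset(range(302))
print(len(train), len(val), len(test))  # 202 54 46
```

Keeping the shuffle seeded makes the split reproducible across experiments, which matters when comparing hyperparameter settings on the same validation subset.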

2.3. Deep Learning Architecture and Algorithm Selection

For this study, the YOLOv8 deep learning architecture was selected as the latest version for real-time surface defect inspection. YOLOv8, released in 2023, combines state-of-the-art (SOTA) real-time detection techniques [38], building upon the principles of cross-stage partial (CSP) networks, the spatial pyramid pooling fast (SPPF) module [39], and the path aggregation network with feature pyramid network (PAN-FPN) feature fusion approach [40,41]. The backbone design of YOLOv8 is substantially reminiscent of YOLOv5, but it adopts the C2f module in place of C3 [42]. It likewise uses a scaling coefficient akin to that of YOLOv5 and YOLOv7 [43], except that YOLOv8 uses binary cross entropy (BCE) loss for classification.
YOLOv8 maintains the PAN-FPN methodology in the neck region, which improves the integration of layer details at various scales and offers more complete representation learning capabilities by merging C3 and ELAN designs. By jointly handling confidence and box regression, YOLOv8’s head builds on the decoupled head concept of YOLOX through the use of several C2f modules, enabling YOLOv8 to attain a higher level of accuracy. The application of this method in the final section of the neck module enhances YOLOv8’s accuracy and performance.
Fundamentally, one important feature of YOLOv8 is its exceptional extensibility, which allows for seamless transition between various versions and complete compatibility with all YOLO variations. More significantly, the concept of the rapid detection YOLOv8 architecture is incorporated as an efficient solution for accurate detection in different turbine and compressor blade defect structures like edge, dent, and nick, enhancing the precision of defect classification. Figure 4 elucidates the advancements in the YOLOv8 architecture, designated as the proposed model. Specifically, Figure 4a delineates the network partition flow chart of the YOLOv8 architecture. Figure 4b elaborates on the backbone design, which is analogous to that of YOLOv5 but integrates the C2f module replacing the traditional C3 module. Lastly, Figure 4c illustrates the decoupled head of the YOLOv8, highlighting its structural modifications.

2.4. Training Process and Performance Evaluation

The final step in the methodology is the training process and evaluation of the model’s performance. Following the development of the YOLOv8 architecture, the subsequent phase is dedicated to training the model on the annotated dataset. This involves using the training set to adjust the weights of the neural network through a process called backpropagation. The primary objective is to minimize the loss function by quantifying the disparity between the predicted and actual output. During each iteration of the training process, a batch of images is processed by the model, and the weights are adjusted based on the error between the predicted outputs and the ground truth annotations. This process is repeated for 25 epochs for the YOLOv8 model or until the model reaches a certain level of performance. The details of the experiment’s initialization model settings and hardware and software environments for turbine and compressor blade defect detection are presented in Table 2. The accuracy of the proposed approach was systematically evaluated using globally recognized established metrics pertinent to computer vision deep learning image processing, specifically within the context of object detection models [44]. In evaluating the performance of the YOLOv8 model on both training and validation sets, several key metrics are employed to provide comprehensive insights into the model’s precision (P), recall (R), and overall efficacy. These metrics include F1_curve, PR_curve, P_curve, and R_curve, as illustrated in Figure 5, which are standard metrics used to evaluate object detection models. During this phase, adjustments to hyperparameters like the number of epochs or model fine-tuning may be undertaken to optimize performance. The choice of learning rate, set at 0.001, was crucial for balancing the speed of convergence and the stability of training. This process is replicated until the model accomplishes satisfactory performance on the validation set. 
Fine-tuning encompasses meticulous adjustments to hyperparameters and potential architectural modifications. These adjustments can encompass variables such as batch size and epoch count in the model’s layers. Following successful fine-tuning and the attainment of satisfactory performance on the validation dataset, the model undergoes comprehensive testing to determine its ultimate efficacy. An essential aspect of this phase involves a comparative analysis of testing results against the model’s performance on the validation dataset, ensuring overfitting is mitigated.
In the case of surface defect analysis of the jet engine compressor and turbine blades, the training process would involve feeding a dataset of annotated images of turbine and compressor blades with both defective and non-defective surfaces into the YOLOv8 architecture. The model would adjust its weights during training in response to the training data, and the process would be repeated for a fixed number of epochs. The process of weight adjustment, a cornerstone of the YOLOv8’s learning algorithm, plays a paramount role in the architecture’s ability to incrementally refine its predictive accuracy. This intricate process is undergirded by the principles of backpropagation and sophisticated optimization algorithms, facilitating a methodical adjustment of weights in accordance with the gradient of the loss function. During the model’s training phase, a forward propagation mechanism is employed to generate predictions based on input data, followed by the computation of loss using a predefined function that quantifies the deviation between these predictions and the actual labels. This recursive process is designed to minimize loss, thereby progressively ameliorating the model’s predictive performance across successive epochs.
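The forward-pass / loss / weight-update cycle described above can be illustrated with a deliberately tiny example: a single weight fitted by gradient descent on a mean-squared-error loss. This toy stands in for the vastly larger YOLOv8 update, but the mechanics are the same; all names, the learning rate, and the data are illustrative assumptions, not taken from the study.

```python
def train_one_weight(xs, ys, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on mean squared error,
    mirroring the forward-pass / loss / weight-adjustment cycle."""
    w = 0.0
    for _ in range(epochs):
        preds = [w * x for x in xs]  # forward propagation
        # gradient of MSE loss with respect to w
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
        w -= lr * grad               # weight adjustment along the negative gradient
    return w

# Data generated from y = 2x, so training should recover w close to 2.0
w = train_one_weight([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))  # 2.0
```

In YOLOv8 the same loop runs over millions of weights with a composite loss (box, classification, and distribution focal terms) and an adaptive optimizer, but each epoch is still this cycle repeated over batches.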
Figure 5a showcases the F1-score, a critical metric derived as the harmonic mean of precision and recall, plotted against varying confidence thresholds for distinct defect classifications: edge defect, nick, and surface defect. This plotting elucidates the model’s performance spectrum across diverse defect types alongside a consolidated assessment of its overall efficacy across all classes. In addition, the precision–recall curve, as shown in Figure 5b, elucidates the model’s precision against recall, devoid of the direct influence of confidence thresholds. The precision–confidence (Figure 5c) and recall–confidence curves (Figure 5d) elaborate on how the model’s precision and recall metrics evolve with fluctuating confidence levels, showcasing a sophisticated balance between these metrics, as highlighted by an aggregate precision score of 1 with a standard deviation of 0.458.
This intricate analysis underscores the model’s commendable performance, yet it also illuminates significant optimization avenues, especially in bolstering the model’s consistency across varying defect classifications and confidence thresholds. After training, the model would be evaluated on a separate validation set of annotated images to determine its performance and identify areas for improvement. This might involve fine-tuning the model or modifying the architecture by changing hyperparameters.
Following the attainment of satisfactory metrics on the validation set, it will be tested on a separate testing set of annotated images to evaluate its final performance. The testing results would be compared to the performance on the validation dataset to verify that the model has not overfitted the validation set and to obtain an estimate of the model’s generalization performance. In the assessment of object detection performance, Intersection over Union (IoU) serves as a critical metric by assessing the ground truth bounding box to the predicted one, as articulated in Equation (1). Within the domain of defect detection, IoU assumes the role of quantifying the congruence between two bounding boxes, thereby evaluating the concurrence of ground truth and prediction regions in defect detection tasks.
Precision (P), denoting the accuracy of the model’s positive predictions, and recall (R), reflecting the model’s ability to detect all positive cases, are mathematically represented by Equations (2) and (3). These metrics are pivotal in gauging the model’s precision–recall trade-off and its effectiveness in capturing relevant instances. Average precision (AP) is utilized as the evaluation metric for turbine and compressor blade defect detection, while the mean average precision (mAP) and mAP@0.5:0.95 [45] are used to assess the model’s overall performance, computed through Equations (4)–(6). Here, a true positive (TP) is a correctly detected defect, a false positive (FP) indicates a defect flagged in a non-defective area, and a false negative (FN) indicates a ground-truth defect present in the dataset that the model failed to detect.
$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|} = \frac{\text{Area of overlap}}{\text{Area of union}} \tag{1}$$

$$P = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} \tag{2}$$

$$R = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \tag{3}$$

$$\mathrm{AP} = \int_{0}^{1} p(r)\,dr \tag{4}$$

$$\mathrm{mAP} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{AP}_i \tag{5}$$

$$\mathrm{mAP}@0.5{:}0.95 = \frac{\mathrm{mAP}_{0.50} + \mathrm{mAP}_{0.55} + \cdots + \mathrm{mAP}_{0.95}}{N} \tag{6}$$
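Equation (1) translates directly into a few lines of code. The sketch below is a minimal axis-aligned IoU implementation for boxes given as (x_min, y_min, x_max, y_max); the function name and box convention are illustrative, not taken from the study's code.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes,
    each given as (x_min, y_min, x_max, y_max)."""
    # coordinates of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A ground-truth box vs. a prediction shifted right by half its width:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```

A detection is typically counted as a TP when its IoU with a ground-truth box exceeds a threshold (0.5 for mAP@0.5, swept from 0.5 to 0.95 for mAP@0.5:0.95).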

3. Results and Discussion

The suggested deep learning strategy with YOLOv8 achieved 99.5% accuracy, 94% precision, 98% recall, and 96% F1-score on the validation set, as shown by the experiments. The proposed approach successfully detected surface flaws on jet engine blade components, and the findings reveal that the proposed method is more accurate and reliable in comparison to other conventional deep learning approaches. With a mAP of 0.99 and a model detection speed of 150 frames per second, it proves to be a valuable tool with excellent defect detection, particularly in the context of quality assurance within the aerospace sector. The detailed results of the surface defect detection model would typically be evaluated based on several metrics, including F1-score, accuracy, recall, and precision, as depicted in Figure 6 below.
In the training dataset results presented in the top row of Figure 6b, an assortment of metrics provides a detailed account of the model’s proficiency through the training phase. The train/box_loss metric, indicative of the model’s accuracy in bounding box predictions, shows a declining trend, suggesting enhanced capability in localizing objects within the dataset. Complementing this is the train/cls_loss, which charts the classification loss; its steady decrease is emblematic of the model’s improving accuracy in object classification. Additionally, the train/dfl_loss, representing the distribution focal loss, exhibits a downward trajectory, further affirming the model’s refinement over the course of training. Fluctuations in the metrics/precision(B) and metrics/recall(B) notwithstanding, both metrics exhibit an overall trend toward improvement. This trend is indicative of an increasing proportion of true positive detections and an amelioration in the model’s ability to identify relevant objects. The metrics/mAP50(B), reflecting the mean average precision at an Intersection Over Union (IoU) threshold of 0.50, remains notably stable and high, denoting a commendable level of performance. Conversely, the metrics/mAP50-95(B), encompassing a range of IoU thresholds from 0.50 to 0.95, presents a lower value, which is consistent with the increased difficulty in maintaining precision across more stringent IoU thresholds. The bottom row, delineating the validation losses val/box_loss, val/cls_loss, and val/dfl_loss, offers insights into the model’s generalizability. These metrics, expectedly higher than their training counterparts, should ideally parallel the downward trends observed in training.
Accuracy is defined as the proportion of correctly identified samples among all samples, while precision represents the percentage of accurately identified faulty surfaces within the set of samples flagged as positive. Recall denotes the percentage of actual defects that the model successfully detects. The F1-score, the harmonic mean of precision and recall, reaches a maximum value of 1. Notably, the confusion matrix demonstrates a 100% detection accuracy for nick, edge, and surface defects (see Figure 6a). Faults in turbine and compressor blades detected by the YOLOv8 algorithm are depicted in Figure 7 below.
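These definitions can be computed directly from confusion counts, as in the sketch below. The TP/FP/FN counts here are hypothetical values chosen only to reproduce figures of the same magnitude as the reported ~94% precision and ~98% recall; they are not the study's actual counts.

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1-score from confusion-matrix counts,
    following the standard definitions used in Equations (2)-(3)."""
    precision = tp / (tp + fp)              # correct detections / all detections
    recall = tp / (tp + fn)                 # correct detections / all real defects
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Illustrative counts: 98 defects found, 6 false alarms, 2 missed defects
p, r, f1 = detection_metrics(tp=98, fp=6, fn=2)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.94 0.98 0.96
```

Note that F1 sits between precision and recall and penalizes an imbalance between the two, which is why it is reported alongside raw accuracy.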
Validation determines the YOLOv8 model’s performance using multiple datasets, where it identifies surface, edge, and nick flaws with an average value of 0.9, derived from the turbine and compressor blade training model. By testing the algorithm on a validation set, one can evaluate its generalization ability and make the required modifications to enhance its performance. Table 3 provides a detailed analysis of the algorithm’s test, where the mean average precision (mAP) was 0.99, and the accuracy values for surface, nick, and edge defects were up to 0.91, 0.91, and 1, respectively. All detected images were 640 × 640 pixels.
Overall, the YOLOv8 defect detection algorithm performed strongly as a detector for metal AM turbine and compressor blade defects such as nicks, edge defects, and surface defects, constituting a novel approach. To extend its applicability to additional surface defect types, a similar methodology can be employed: annotating new defect types in the dataset and subsequently training the YOLOv8 model to identify and classify them.
In evaluating the most effective method for detecting aero-engine blade defects, it is crucial to assess the efficacy, accuracy, and applicability of NDT testing and optical methods in comparison to the proposed deep learning-based YOLOv8 algorithms. While non-destructive testing and optical methods are traditional and reliable, they each have limitations either in the type of defects they can detect or in their operational efficiency. The deep learning-based YOLOv8 algorithm, on the other hand, offers a comprehensive solution that can continuously evolve and adapt, providing high accuracy and real-time detection capabilities. Therefore, for the specific case of inspecting surface coating issues, edges, and nicks in aero-engine blades, the YOLOv8 deep learning approach is generally more advantageous, especially in scenarios where high throughput and precision are required.
This study found that the proposed deep learning strategy based on the YOLOv8 method surpassed existing methodologies in accuracy and reliability. However, the method did have a few limitations, like the small size of the dataset, which may limit the model’s applicability in other contexts. When it comes to machine learning, the quality and amount of data used for training the model are crucial to its success. Overfitting, where the model becomes overly particular to the examples in the dataset and fails to generalize adequately to new, unseen examples, can occur when the dataset is small.
The computational complexity of the YOLOv8 architecture was another limitation of the proposed solution. Computational complexity describes the time and memory an algorithm requires to run, which directly determines whether it can operate in real time. Real-time detection can be crucial for ensuring safety and preventing defects from causing catastrophic failures when identifying surface defects on jet engine turbine and compressor blades.
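Real-time feasibility of the kind discussed above is usually quantified as per-frame latency and frames per second (FPS). The sketch below shows one generic way to benchmark any detector callable; the placeholder detector here is a stand-in assumption, not the YOLOv8 model from this study.

```python
import time

def benchmark(detector, frames, warmup=2):
    """Return (mean latency in seconds, frames per second) for a detector callable."""
    for f in frames[:warmup]:          # warm-up runs, excluded from timing
        detector(f)
    start = time.perf_counter()
    for f in frames:
        detector(f)
    elapsed = time.perf_counter() - start
    mean_latency = elapsed / len(frames)
    return mean_latency, 1.0 / mean_latency

# Placeholder "detector": trivially scans a frame for values above a threshold.
def dummy_detector(frame):
    return [x for x in frame if x > 0.5]

frames = [[0.1, 0.7, 0.4]] * 50
latency, fps = benchmark(dummy_detector, frames)
print(f"mean latency: {latency * 1e6:.1f} us, throughput: {fps:.0f} FPS")
```

For an inspection line, the measured FPS would be compared against the camera frame rate to decide whether the model meets the real-time requirement.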

4. Conclusions

In deep learning, numerous layers of artificial neural networks are used to learn elaborate representations of data. Deep learning has proven remarkably effective in tasks such as image recognition, object detection, and natural language processing. In this research, a YOLOv8-based deep learning approach was developed for identifying surface defects on jet engine turbine and compressor blades. The high precision and speed attained by the proposed method make it a viable option for quality control in the aerospace jet engine industry. The findings show that deep learning-based methods can detect surface defects in additively manufactured jet engine turbine and compressor blades with high accuracy and efficiency.
The experimental findings indicated that the YOLOv8 model effectively detected nick, edge, and surface defects in the turbine and compressor blade dataset, attaining a defect detection precision of up to 99.5% within just 280 s, outperforming earlier automatic defect identification techniques. The proposed approach can substantially reduce the cost and time required for defect identification compared with existing methods. In addition, it can be executed in real time, enabling immediate defect detection and rectification throughout the production cycle.
In the future, researchers may generalize the model to detect flaws in other types of components and expand the dataset to cover a broader spectrum of defects. The proposed technology can also be incorporated into an automated flaw detection system to discover and rectify flaws in the manufacturing process in real time. Despite stringent controls, manual annotation of defects introduces a possibility of human error, potentially affecting the uniformity and dependability of the training inputs. Additionally, the reliance on high-resolution imagery raises concerns about performance degradation when the model is deployed on lower-quality images commonly encountered in practical settings. To further improve the accuracy and speed of the model, alternative deep learning architectures and optimization strategies can be investigated. Performance can also be enhanced through transfer learning from models pre-trained on larger datasets.
Overall, surface defect identification on additively manufactured jet engine turbine and compressor blades using the proposed YOLOv8-based deep learning approach showed encouraging results in terms of precision, efficiency, and speed. Potentially applicable to other defect detection tasks in the manufacturing industry, the proposed method can provide an efficient and precise solution for fault identification, drastically cutting costs while raising product quality during the manufacturing process.

Author Contributions

Conceptualization, supervision, funding acquisition, review and editing, C.Z.; Methodology, validation, software development and formal analysis, M.H.Z.; Review and editing, Y.W.; Review and editing, validation, funding acquisition, W.L.; Image acquisition, re-validation, H.M.I. All authors have read and agreed to the published version of the manuscript.

Funding

The authors are grateful for the financial support from the Innovation Group Project of Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (Grant No. 311021013). This work was also supported by funds from the State Key Laboratory of Clean and Efficient Turbomachinery Power Equipment (Grant No. 16ZR1417100), the State Key Laboratory of Long-Life High Temperature Materials (Grant No. DTCC28EE190933), the National Natural Science Foundation of China (Grant No. 51605287), the Natural Science Foundation of Shanghai (Grant No. 16ZR1417100), the Fundamental Research Funds for the Central Universities (Grant No. E3E40808), and the National Natural Science Foundation of China (Grant No. 71932009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

This study did not involve human participants or human data. However, the research employed datasets consisting of publicly available, de-identified images of jet engine turbine and compressor blades for the purpose of developing and validating a deep learning model.

Data Availability Statement

Data available upon request.

Acknowledgments

The authors would like to acknowledge Yawei He, School of Media and Communication, Shanghai Jiao Tong University, for technical support with the YOLO experiments and for access to the open-access database used in this research.

Conflicts of Interest

The authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or professional relationship related to the subject matter discussed in this manuscript.

References

  1. Uriondo, A.; Esperon-Miguez, M.; Perinpanayagam, S. The present and future of additive manufacturing in the aerospace sector: A review of important aspects. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2015, 229, 2132–2147. [Google Scholar] [CrossRef]
  2. Simpson, T.W. Book Review: Additive Manufacturing for the Aerospace Industry. Am. Inst. Aeronaut. Astronaut. 2020, 58, 1901–1902. [Google Scholar] [CrossRef]
  3. Clijsters, S.; Craeghs, T.; Buls, S.; Kempen, K.; Kruth, J.P. In situ quality control of the selective laser melting process using a high-speed, real-time melt pool monitoring system. Int. J. Adv. Manuf. Technol. 2014, 75, 1089–1101. [Google Scholar] [CrossRef]
  4. Bourell, D.L.; Leu, M.C.; Rosen, D.W. Roadmap for Additive Manufacturing: Identifying the Future of Freeform Processing; The University of Texas at Austin: Austin, TX, USA, 2009; pp. 11–15. [Google Scholar]
  5. Milewski, J.O. Additive Manufacturing Metal, the Art of the Possible; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  6. Brown, M.; Wright, D.; M’Saoubi, R.; McGourlay, J.; Wallis, M.; Mantle, A. Destructive and non-destructive testing methods for characterization and detection of machining-induced white layer: A review paper. CIRP J. Manuf. Sci. Technol. 2018, 23, 39–53. [Google Scholar] [CrossRef]
  7. Xie, L.; Lian, Y.; Du, F.; Wang, Y.; Lu, Z. Optical methods of laser ultrasonic testing technology in the industrial and engineering applications: A review. Opt. Laser Technol. 2024, 176, 110876. [Google Scholar] [CrossRef]
  8. Shang, H.; Sun, C.; Liu, J.; Chen, X.; Yan, R. Deep learning-based borescope image processing for aero-engine blade in-situ damage detection. Aerosp. Sci. Technol. 2022, 123, 107473. [Google Scholar] [CrossRef]
  9. Kim, Y.H.; Lee, J. Videoscope-based inspection of turbofan engine blades using convolutional neural networks and image processing. Struct. Health Monit. 2019, 18, 2020–2039. [Google Scholar] [CrossRef]
  10. Li, X.; Wang, W.; Sun, L.; Hu, B.; Zhu, L.; Zhang, J. Deep learning-based defects detection of certain aero-engine blades and vanes with DDSC-YOLOv5s. Sci. Rep. 2022, 12, 13067. [Google Scholar] [CrossRef]
  11. Shen, Z.; Wan, X.; Ye, F.; Guan, X.; Liu, S. Deep learning based framework for automatic damage detection in aircraft engine borescope inspection. In Proceedings of the 2019 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 18–21 February 2019. [Google Scholar]
  12. Yixuan, L.; Dongbo, W.; Jiawei, L.; Hui, W. Aeroengine Blade Surface Defect Detection System Based on Improved Faster RCNN. Int. J. Intell. Syst. 2023, 2023, 1992415. [Google Scholar] [CrossRef]
  13. Malta, A.; Mendes, M.; Farinha, T. Augmented reality maintenance assistant using yolov5. Appl. Sci. 2021, 11, 4758. [Google Scholar] [CrossRef]
  14. Aust, J.; Shankland, S.; Pons, D.; Mukundan, R.; Mitrovic, A. Automated defect detection and decision-support in gas turbine blade inspection. Aerospace 2021, 8, 30. [Google Scholar] [CrossRef]
  15. Holl, M.; Rogge, T.; Loehnert, S.; Wriggers, P.; Rolfes, R. 3D multiscale crack propagation using the XFEM applied to a gas turbine blade. Comput. Mech. 2014, 53, 173–188. [Google Scholar] [CrossRef]
  16. Yoon, W.N.; Kang, M.S.; Jung, N.K.; Kim, J.S.; Choi, B. Failure analysis of the defect-induced blade damage of a compressor in the gas turbine of a cogeneration plant. Int. J. Precis. Eng. Manuf. 2012, 13, 717–722. [Google Scholar] [CrossRef]
  17. Liu, B.; Tang, L.; Liu, T.; Liu, Z.; Xu, K. Blade health monitoring of gas turbine using online crack detection. In Proceedings of the 2017 Prognostics and System Health Management Conference (PHM-Harbin), Harbin, China, 9–12 July 2017. [Google Scholar]
  18. Zhang, H.; Chen, J.; Yang, D. Fibre misalignment and breakage in 3D printing of continuous carbon fibre reinforced thermoplastic composites. Addit. Manuf. 2021, 38, 101775. [Google Scholar] [CrossRef]
  19. Buezas, F.S.; Rosales, M.B.; Filipich, C. Damage detection with genetic algorithms taking into account a crack contact model. Eng. Fract. Mech. 2011, 78, 695–712. [Google Scholar] [CrossRef]
  20. Wang, Y.; Ju, F.; Cao, Y.; Yun, Y.; Bai, D.; Chen, B. An aero-engine inspection continuum robot with tactile sensor based on EIT for exploration and navigation in unknown environment. In Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Hong Kong, China, 8–12 July 2019. [Google Scholar]
  21. Dong, X.; Axinte, D.; Palmer, D.; Cobos, S.; Raffles, M.; Rabani, A. Development of a slender continuum robotic system for on-wing inspection/repair of gas turbine engines. Robot. Comput. Manuf. 2017, 44, 218–229. [Google Scholar] [CrossRef]
  22. Morini, M.; Pinelli, M.; Spina, P.R.; Venturini, M. Numerical analysis of the effects of nonuniform surface roughness on compressor stage performance. J. Eng. Gas Turbines Power 2011, 133, 072402. [Google Scholar] [CrossRef]
  23. Li, Y.G. Gas turbine performance and health status estimation using adaptive gas path analysis. J. Eng. Gas Turbines Power 2010, 132, 041701. [Google Scholar] [CrossRef]
  24. Zhou, D.; Wei, T.; Huang, D.; Li, Y.; Zhang, H. A gas path fault diagnostic model of gas turbines based on changes of blade profiles. Eng. Fail. Anal. 2020, 109, 104377. [Google Scholar] [CrossRef]
  25. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  26. Hui, T.; Xu, Y.; Jarhinbek, R. Detail texture detection based on Yolov4-tiny combined with attention mechanism and bicubic interpolation. IET Image Process. 2021, 15, 2736–2748. [Google Scholar] [CrossRef]
  27. Hussain, M.J.M. YOLO-v1 to YOLO-v8, the rise of YOLO and its complementary nature toward digital manufacturing and industrial defect detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
  28. Li, S.; Yu, J.; Wang, H. Damages detection of aeroengine blades via deep learning algorithms. IEEE Trans. Instrum. Meas. 2023, 72, 1–11. [Google Scholar] [CrossRef]
  29. Zaretsky, E.V.; Litt, J.S.; Hendricks, R.C.; Soditus, S. Determination of turbine blade life from engine field data. J. Propuls. Power 2012, 28, 1156–1167. [Google Scholar] [CrossRef]
  30. Zhang, X.; Li, W.; Liou, F. Damage detection and reconstruction algorithm in repairing compressor blade by direct metal deposition. Int. J. Adv. Manuf. Technol. 2018, 95, 2393–2404. [Google Scholar] [CrossRef]
  31. Sinha, A.; Swain, B.; Behera, A.; Mallick, P.; Samal, S.K.; Vishwanatha, H.; Behera, A. A review on the processing of aero-turbine blade using 3D print techniques. J. Manuf. Mater. Process. 2022, 6, 16. [Google Scholar] [CrossRef]
  32. Han, P. Additive design and manufacturing of jet engine parts. Engineering 2017, 3, 648–652. [Google Scholar] [CrossRef]
  33. Błachnio, J.; Chalimoniuk, M.; Kułaszka, A.; Borowczyk, H.; Zasada, D. Exemplification of detecting gas turbine blade structure defects using the x-ray computed tomography method. Aerospace 2021, 8, 119. [Google Scholar] [CrossRef]
  34. Aust, J.; Pons, D. Methodology for evaluating risk of visual inspection tasks of aircraft engine blades. Aerospace 2021, 8, 117. [Google Scholar] [CrossRef]
  35. Aust, J.; Pons, D. Assessment of aircraft engine blade inspection performance using attribute agreement analysis. Safety 2022, 8, 23. [Google Scholar] [CrossRef]
  36. Kellner, T. The Blade Runners: This Factory Is 3D Printing Turbine Parts for the World’s Largest Jet Engine. 2018. Available online: https://www.ge.com/additive/stories/cameri-factory-3d-printing-turbine-parts-worlds-largest-jet-engine (accessed on 20 March 2018).
  37. Mishra, R.; Thomas, J.; Srinivasan, K.; Nandi, V.; Bhatt, R. Investigation of HP turbine blade failure in a military turbofan engine. Int. J. Turbo Jet-Engines 2017, 34, 23–31. [Google Scholar] [CrossRef]
  38. Wang, C.Y.; Liao, H.Y.M.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  39. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact: Real-time instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  40. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  41. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  42. Cao, Y.; Chen, K.; Loy, C.C.; Lin, D. Prime sample attention in object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  43. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  44. Everingham, M.; Van, L.; Williams, C.K.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
  45. Zhao, W.; Huang, H.; Li, D.; Chen, F.; Cheng, W. Pointer defect detection based on transfer learning and improved cascade-RCNN. Sensors 2020, 20, 4939. [Google Scholar] [CrossRef] [PubMed]
Figure 2. Schematic experimental flowchart of the research approach.
Figure 3. Sample dataset images for the deep learning training. (a) Orange marks denote nicks; (b) red denotes areas of surface defects; (c) purple squares denote edge defects.
Figure 4. Overall improvements to the custom dataset.
Figure 5. Metrics for the training and validation sets. (a) F1 curve; (b) precision-recall (PR) curve; (c) precision curve; (d) recall curve.
Figure 6. YOLOv8 performance evaluation. (a) Confusion matrix; (b) Performance results.
Figure 7. Surface defects detected by the YOLOv8 algorithm.
Table 1. Dataset source.

Image Acquisition | Study | Reference
16 | Engine blade LLP status from engine data information. | [29]
12 | Compressor blade damage assessment algorithm dataset. | [30]
18 | A survey on the aero-engine blade processing techniques by AM. | [31]
14 | Additive design and manufacturing of jet engine components. | [32]
15 | Gas turbine blade fault detection by the X-ray computed tomography (XCT) model. | [33]
44 | Evaluating risk assessment of engine blade visual inspection. | [34,35]
16 | The Blade Runners investigation. | [36]
16 | Investigation of failure mechanism in a turbofan blade. | [37]
Table 2. Hardware and software environments for turbine and compressor blade defect detection.

Category | Item | Configuration
Hardware | GPU | NVIDIA NVTX 11.6
Hardware | CPU | Intel Core i7-8565
Hardware | Operating system | Linux Ubuntu 16.04
Software | Program environment | Python 3.9
Software | Deep learning framework | PyTorch 1.7
Table 3. Detailed YOLOv8 summarized results.

Class | Box (Precision) | Box (Recall) | mAP | mAP 50-95
All | 0.935 | 0.985 | 0.995 | 0.58
Nick | 0.91 | 1 | 0.995 | 0.57
Surface defects | 1 | 0.95 | 0.995 | 0.54
Edge defects | 0.91 | 1 | 0.995 | 0.67
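The per-class precision and recall figures in Table 3 can be summarized with an F1 score, the harmonic mean of precision and recall plotted in Figure 5a. The short computation below uses values transcribed from the table; the resulting F1 numbers are derived here for illustration and are not reported in the paper.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Per-class box precision and recall transcribed from Table 3.
table3 = {
    "All": (0.935, 0.985),
    "Nick": (0.91, 1.0),
    "Surface defects": (1.0, 0.95),
    "Edge defects": (0.91, 1.0),
}

for cls, (p, r) in table3.items():
    print(f"{cls}: F1 = {f1(p, r):.3f}")
```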
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

