
High-Precision Instance Segmentation Detection of Micrometer-Scale Primary Carbonitrides in Nickel-Based Superalloys for Industrial Applications

College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China
* Author to whom correspondence should be addressed.
Materials 2024, 17(19), 4679; https://doi.org/10.3390/ma17194679
Submission received: 23 July 2024 / Revised: 4 September 2024 / Accepted: 11 September 2024 / Published: 24 September 2024

Abstract

In industrial production, the identification and characterization of micron-sized second phases, such as carbonitrides in alloys, hold significant importance for optimizing alloy compositions and processes. However, conventional methods based on threshold segmentation suffer from drawbacks, including low accuracy, inefficiency, and subjectivity. Addressing these limitations, this study introduced a carbonitride instance segmentation model tailored for various nickel-based superalloys. The model enhanced the YOLOv8n network structure by integrating the SPDConv module and the P2 small target detection layer, thereby augmenting feature fusion capability and small target detection performance. Experimental findings demonstrated notable improvements: the mAP50 (Box) value increased from 0.676 to 0.828, and the mAP50 (Mask) value from 0.471 to 0.644 for the enhanced YOLOv8n model. The proposed model for carbonitride detection surpassed traditional threshold segmentation methods, meeting requirements for precise, rapid, and batch-automated detection in industrial settings. Furthermore, to assess the carbonitride distribution homogeneity, a method for quantifying dispersion uniformity was proposed and integrated into a data processing framework for seamless automation from prediction to analysis.

1. Introduction

Nickel-based superalloys represent a specialized alloy system designed for applications under elevated temperatures and complex stress conditions. Operating in such environments necessitates exceptional mechanical properties, including high-temperature strength, fatigue and creep resistance, fracture toughness, and microstructure stability [1]. Additionally, these alloys must exhibit surface stability to withstand environmental challenges such as high-temperature oxidation and corrosion. Due to their comprehensive performance in high-temperature operational settings, nickel-based superalloys are extensively utilized in critical hot-end components for engines and gas turbines within the aerospace industry.
The superior performance of nickel-based superalloys at elevated temperatures can be attributed to their intricate chemical composition and sophisticated manufacturing processes. Optimization of their properties predominantly involves alloying strategies, such as the introduction of specific elements like Ti and Nb. These elements facilitate the formation of finely dispersed intermetallic compounds such as the γ′ phase Ni3(Al, Ti) or the γ″ phase Ni3(Nb, Al, Ti), which contribute to precipitation strengthening, solid-solution strengthening, and grain-boundary strengthening. Consequently, these alloys exhibit high strength and exceptional creep resistance in high-temperature environments [2]. However, Nb and Ti also act as carbide and nitride formers, leading to the precipitation of carbonitride phases during solidification, in the liquid region or in the liquid–solid two-phase region, due to microscopic segregation [3,4]. These are known as primary carbonitrides, and they can grow quite large, even reaching tens of micrometers. The enhancement of mechanical properties by fine precipitates in alloys is unquestionable. However, when key alloying elements fail to form fine or nanoscale precipitates and instead persist in the material as larger, hard particles, those particles are likely to degrade the mechanical properties of the alloy. The detrimental effects of such large hard particles are comparable to the negative impacts of large, undeformed oxide particles on the fatigue life, impact toughness, and other properties of steel [3,5]. In industrial production, the identification and characterization of micron-sized second phases, such as carbonitrides in alloys, are therefore critical for subsequent composition adjustment and process improvement.
To measure the effect of primary carbonitrides on microstructures, researchers often performed quantitative evaluations of these carbonitrides. The traditional recognition method relied primarily on software such as Image-Pro Plus 6.0 [6]: the image was imported into the software and RGB thresholds were adjusted to identify carbonitrides. However, this method had the following main disadvantages:
(1)
It processed images singly, lacking batch processing capability and necessitating manual intervention, which was time-consuming and labor-intensive.
(2)
Images with significant impurities or noise could hinder effective carbonitride recognition despite parameter adjustments, thereby impacting data processing.
(3)
The method’s reliance on operator expertise introduced subjectivity into results [7].
Moreover, the high demands on picture quality imposed by Image-Pro Plus 6.0 and similar software further complicate alloy sample imaging and preprocessing in industrial environments, where variability and complexity are prevalent. Consequently, such software may only recognize a subset of high-quality images.
With the development of deep learning technology, the application areas of computer vision have become increasingly extensive. Many researchers have started to develop automatic identification models by combining rapid image acquisition techniques with deep learning [8] frameworks. This approach aimed to address the shortcomings of traditional methods and accommodate the unique characteristics of alloy microstructures. Brian et al. [9] investigated and discussed three feature extraction methods: BoW, VLAD coding, and CNN networks. The SVM algorithm was used to test the performance of each feature extraction method in the classification of ultrahigh-carbon steel microstructures. Additionally, the clustering and correlation of the microstructures were observed by reducing the high-dimensional space to a two-dimensional space using the t-SNE algorithm. Li et al. [10] utilized the U-Net architecture as a base network model and employed a deep transfer learning approach to identify the γ′ phase in nickel-based high-temperature alloys at 900 °C and 1000 °C, achieving an identification accuracy of 92%. Azimi et al. [11] employed the MVFCNN network structure for the classification and segmentation of martensite, tempered martensite, bainite, and pearlite in mild steel, achieving high accuracy rates. Ghauri et al. [12] utilized the RF algorithm for the segmentation detection of carbides in HP40-Nb stainless steel, achieving notable segmentation accuracies at intergranular and grain-boundary regions.
Considerable prior work has combined microstructure detection with deep learning models for nickel-based superalloys. For example, Jia et al. [13] used the Unet++ network model to detect the γ′ phase in nickel-based superalloys. They obtained training data through SEM and cropped the images to 512 × 512 to facilitate model training. The Unet++ model segmented the γ′ phase with an mIoU (mean Intersection over Union) of 0.98; that is, the Unet++ network can identify the γ′ phase in nickel-based superalloys well. Senanayake et al. [14] used different methods to identify the γ′ and γ″ phases in IN718 alloys, including digital image processing, the random forest (RF) algorithm, support vector machines (SVM), and convolutional neural networks (CNN). They used scanning electron micrographs of IN718 alloy taken at NASA Glenn Research Center as the training dataset. The experimental results showed that the CNN achieved the fastest recognition speed and the most accurate results, with a recognition accuracy of 0.95. These previous studies indicate that deep learning networks are the best solution for identifying microstructures in nickel-based superalloys.
In this study, a microstructure segmentation model was proposed for large-size primary carbonitrides in nickel-based superalloys based on an improved YOLOv8 framework. The improved carbonitride detection model offered a complete alternative to the threshold segmentation method, satisfying the requirements for high-precision, high-speed, batch-automated detection in industrial scenarios while remaining insensitive to input image quality. Additionally, the article introduced a method for assessing carbonitride distribution homogeneity and integrated it into a data processing program for automated processing from prediction to analysis. Finally, the trained YOLOv8 model has been deployed at industrial sites, applied directly to on-site production processes, reducing operator workload.

2. Improved YOLOv8n Instance Segmentation Algorithm

The YOLOv8 algorithm, released in 2023, was one of the most advanced deep-learning models available. It included the following major improvements:
(1)
Compared with YOLOv5, YOLOv8 replaced the C3 module in the backbone network with the C2f module. The C2f module integrated the CSP structure and the ELAN [15] concept from YOLOv7. This integration enhanced the feature extraction capability of the YOLOv8 network and reduced computation and model complexity. The SPPF [16] module was retained at the end of the backbone network, facilitating multi-scale feature fusion and thereby enhancing the detection capabilities of the model.
(2)
The neck network, similar to YOLOv5, still adopted the PAN-FPN [17,18] feature fusion method. It removed the Conv module in the upsampling stage and replaced the C3 module with the C2f module. These changes maintained the advantages of the YOLOv5 network and improved the detection performance of the model in various scenarios.
(3)
For the detection head, YOLOv8 used a decoupled head structure to separate the classification and detection tasks, along with the Anchor-Free [19] algorithm.
(4)
The practice of using Mosaic [20] data augmentation during training and turning it off for the last 10 epochs was effective in improving model robustness [21].
Despite these advancements, YOLOv8 encountered challenges in effectively detecting small and densely clustered targets. Previous research has proposed several enhancement methods to address these limitations. For instance, Lou et al. [22] introduced the DC-YOLOv8 algorithm for the detection of small targets captured by cameras. This enhancement to the network's learning capability was achieved by revising the original downsampling module to an MDC module, swapping out the C2f module for a DC module, and refining the feature fusion process. Li et al. [23] enhanced the Neck layer of the original network by incorporating the BIFPN concept and replacing the C2f module with the Ghostblock module. They also replaced the original network's CIoU with WIoU, applying these improvements to target detection in UAV aerial imagery. The enhanced model not only reduced model complexity but also lowered the miss rate for small targets. Wang et al. [24] introduced the YOLOv8-QSD network to address the challenge of small target detection in unmanned scenarios. This network incorporated a DBB module in place of the traditional Bottleneck module within the C2f module. It also integrated a BIFPN structure to enhance the original network's Neck layer. Furthermore, the authors added a novel dynamic detection head, termed DyHead, to the network's Head layer. With these enhancements, the YOLOv8-QSD network achieved an accuracy of 64.5% on a dataset specifically designed for small target detection.
In industrial applications, the features of alloy carbonitrides were assessed through the examination of microstructures under an optical microscope. In this study, the carbonitride dataset constructed for nickel-based high-temperature alloys was obtained using a metallurgical microscope with image sizes of 2240 × 1524 pixels. The pixel count for individual carbonitrides ranged from approximately 50 to 5000. The MS COCO [25] dataset categorized targets as small if they were below 32 × 32 pixels in size, with the total pixel count for these small targets typically being around or under 1000. In this research, we initially quantified the size distribution of typical carbonitrides in our dataset, as illustrated in Figure 1. The findings revealed that 77.72% of the carbonitrides were classified as small targets, indicating that the majority of carbonitrides examined in this study fell into this category. For ease of calculation, it was assumed that a single carbonitride measured 30 × 30, occupying 900 pixels. When the training set images were input into the YOLOv8 network, the input image was reduced to 640 × 448, a factor of 3.5 smaller than the original image; at that point, the carbonitride measured about 9 × 9, far below the 32 × 32 size in the definition of a small target. In summary, the carbonitride dataset for nickel-based superalloys predominantly featured small targets.
To illustrate this problem further, we compiled more detailed statistics on the carbonitrides in the images using the thresholding segmentation method. To ensure accurate statistics, the images were preprocessed beforehand, for example by removing noise and adjusting contrast and brightness. The specific statistical results are shown in Table 1. As seen in Table 1, among the 20 carbonitrides, only 9#, 10#, 12#, and 18# have areas (in pixels) exceeding 32 × 32, the small-target size threshold defined in the MS COCO dataset. Thus, it can be clearly seen that most of the targets in the carbonitride dataset are small targets.
Regarding the average area of a carbonitride (μm²), the area of a single carbonitride ranges from a few square micrometers to hundreds of square micrometers; the average area of these 20 carbonitrides is about 41.7 μm², which, for a square, corresponds to a side length of about 6.5 μm.
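The size arithmetic above can be reproduced in a few lines; this sketch simply verifies the downscale factor, the resized target size, and the equivalent square side length quoted in the text.

```python
import math

# Downscale factor when a 2240 x 1524 micrograph is resized to the 640 x 448 network input
scale = 2240 / 640                # 3.5 in width (1524 / 448 is approximately 3.4)

# A nominal 30 x 30-pixel carbonitride shrinks to roughly 9 x 9 pixels at network input,
# far below the 32 x 32 small-target threshold of MS COCO
side_in = 30 / scale              # approximately 8.6 px

# Equivalent square side for the measured mean area of 41.7 um^2
side_um = math.sqrt(41.7)         # approximately 6.5 um

print(round(scale, 2), round(side_in, 1), round(side_um, 1))   # 3.5 8.6 6.5
```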
To enhance the YOLOv8 network's segmentation of carbonitrides, this paper improves mainly on the YOLOv8n network. We chose YOLOv8n as the base model of this study for the following reasons. First, YOLOv8, as one of the most advanced deep learning models currently available, performs well in both detection speed and accuracy. Varghese et al. [26] benchmarked YOLOv8 and earlier YOLO-series models on the COCO dataset, using Average Precision Across Scales (APAS) and Frames Per Second (FPS) as evaluation indexes; the YOLOv8 series showed the best performance, with an APAS score of 52.7 and an FPS of 150, improvements of 2.4 and 30, respectively, over the YOLOv7 series. Second, given the needs of industrial scenarios, the YOLOv8n model is the most suitable choice considering training time and computational cost as well as detection accuracy and speed. Finally, the YOLOv8n model is the most convenient to improve upon. During the research, we tried using a larger model and improving it, but its training and detection speeds were too slow to meet the needs of industrial sites. In addition, the YOLOv8n model makes it easier to apply further adjustments based on subsequent field conditions.
This research proposed several key refinements, depicted in Figure 2 [27]:
  • Adding space-to-depth convolution (SPDConv) to the backbone layer.
  • Adding a small-target detection layer so that the network focuses more on small-target detection.

2.1. Space-to-Deep Convolution (SPDConv)

Space-to-Deep Convolution (SPDConv) was proposed by Sunkara et al. [28], and its purpose was mainly to solve the problem of target detection in the case of small targets and low-resolution images. In general detection scenarios, images possess high resolution, and the targets are of moderate size, meaning that much of the images contain redundant information. These excess data could be effectively filtered through convolutional operations, residual connections, and pooling layers, allowing the model to discern and learn the essential features of the targets. However, in low-resolution images with small targets, the presence of redundant information was minimal. Continuous downsampling by the model could result in the loss of critical feature information for these targets, potentially rendering them undetectable.
SPDConv primarily comprised a space-to-depth layer and a non-strided convolutional layer. It began by taking an input feature map of size (S, S, C1). The space-to-depth layer then rearranged the input X into multiple sub-feature maps, with the transformation calculated as follows:
f(0,0) = X[0:S:scale, 0:S:scale],  f(1,0) = X[1:S:scale, 0:S:scale],  …,
f(scale-1,0) = X[scale-1:S:scale, 0:S:scale];
f(0,1) = X[0:S:scale, 1:S:scale],  f(1,1) = X[1:S:scale, 1:S:scale],  …,
f(scale-1,1) = X[scale-1:S:scale, 1:S:scale];
…
f(0,scale-1) = X[0:S:scale, scale-1:S:scale],  f(1,scale-1) = X[1:S:scale, scale-1:S:scale],  …,
f(scale-1,scale-1) = X[scale-1:S:scale, scale-1:S:scale]
Then, the sub-feature maps were concatenated along the channel direction to obtain a feature map X′ with dimensions (S/scale, S/scale, scale²·C1). If the scale was taken as 2, the calculation process was as illustrated in Figure 3, where the plus sign indicates that all sub-feature maps are concatenated along the channel direction and the five-pointed star indicates a stride-1 convolution applied to the concatenated feature maps. The dimensions of X′ would be (S/2, S/2, 4·C1). Subsequently, a non-strided convolution was applied to transform X′ into a new feature map X″ with dimensions (S/2, S/2, C2), where C2 < scale²·C1. The use of non-strided convolution here primarily aimed to retain all relevant information. While transformations from X′ to X″ were possible with strides greater than 1, they might result in the loss of some features.
To further illustrate the role of the SPDConv module in small-target detection, consider a concrete example. Suppose the input dimension is (640, 640, 3); after passing through the space-to-depth layer, its dimension becomes (320, 320, 12), and the number of channels is then adjusted by a stride-1 convolution module. After this operation, the feature information of small objects in the image is well preserved. That is, the SPDConv module rearranges spatial information into the depth dimension, thereby avoiding the information loss caused by conventional strided convolution. If, instead, a 3 × 3 convolution kernel with a stride greater than 1 were applied to the input, a 320 × 320 feature map could still be obtained, but the feature information of small targets would be lost during the convolution, causing the model to miss small targets and seriously affecting subsequent data processing.
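The space-to-depth rearrangement can be sketched in pure Python for clarity; this is an illustrative toy version operating on nested lists, not the authors' implementation (a real network would use tensor reshape operations). It concatenates the sub-feature maps f(0,0), f(1,0), …, f(scale-1, scale-1) along the channel direction, matching the slicing formulas above.

```python
def space_to_depth(x, scale):
    """Rearrange an H x W x C feature map (nested lists) into
    (H/scale) x (W/scale) x (scale^2 * C) without discarding any pixels."""
    H, W = len(x), len(x[0])
    out = []
    for i in range(0, H, scale):
        row = []
        for j in range(0, W, scale):
            cell = []
            # Gather the pixel of each sub-feature map f(a, b) that lands at
            # this output position, concatenating channel-wise.
            for b in range(scale):
                for a in range(scale):
                    cell.extend(x[i + a][j + b])
            row.append(cell)
        out.append(row)
    return out

# A toy 4 x 4 x 3 input: every pixel survives, none are discarded.
x = [[[r, c, 0] for c in range(4)] for r in range(4)]
y = space_to_depth(x, 2)
print(len(y), len(y[0]), len(y[0][0]))   # 2 2 12
```

With scale = 2, a (640, 640, 3) input would likewise become (320, 320, 12), matching the example in the text.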

2.2. Small Target Detection Layer

The original YOLOv8n network primarily operated on feature maps sized at 80 × 80, 40 × 40, and 20 × 20. However, during downsampling, features representing carbonitrides might diminish to a few pixels or vanish entirely. To address this issue, Zhai et al. [29] investigated and enhanced the Neck and Head layers of the YOLOv8n network. Feature maps at the 160 × 160 scale were introduced in the Neck layer to enhance feature fusion, as this larger scale better preserves feature information for small targets and improves overall detection accuracy. Concurrently, a corresponding detection head was integrated into the Head layer, so that four detection heads collectively optimize model performance.
To further explain the role of the P2 small-target layer, its structure diagram is shown in Figure 4. As can be seen from the diagram, a P2 small-target layer is added to the network structure, and four detection heads are used for multi-scale detection. The feature map of the P2 small-target layer is 160 × 160, which has a higher resolution and can represent small targets more finely.

3. Experiments

3.1. Image Acquisition

In this experiment, metallographic specimens sampled from nickel-based superalloy bars of different dimensions were selected. The specimens were prepared by grinding and polishing without etching. Images were captured using an optical microscope (OM, Olympus, Tokyo, Japan) at 200× magnification, resulting in a dataset where each photograph had a resolution of 2240 × 1524 pixels. This dataset included two types of compounds: TiN and NbC.

3.2. Data Annotation

All images within the dataset underwent preprocessing and were subsequently annotated for carbonitrides using the Labelme v1.8.1 software. During annotation, the delineation precisely matched the contour of the carbonitride. The detailed annotation view and software interface are shown in Figure 5, where the green contour indicates NbC and the red contour indicates TiN. After completion of the annotation process, a JSON file was generated. This file primarily contained the positional information of the carbonitride contours and the category names of the carbonitrides. Subsequently, the JSON file was converted into a TXT file to enable the improved YOLOv8 network to recognize the annotation information, ensuring the normal progression of subsequent training.
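The JSON-to-TXT conversion step can be sketched as follows. This is a minimal illustration, not the authors' converter: it assumes the standard Labelme JSON schema (`imageWidth`, `imageHeight`, `shapes` with `label` and polygon `points`) and the YOLO segmentation label format (one line per instance: class id followed by normalized polygon coordinates); the `CLASS_IDS` mapping is hypothetical.

```python
import json

CLASS_IDS = {"NbC": 0, "TiN": 1}   # assumed class mapping, for illustration

def labelme_to_yolo_seg(json_path, txt_path):
    """Convert a Labelme polygon annotation file to YOLO segmentation format:
    one line per instance, 'class_id x1 y1 x2 y2 ...' with coordinates
    normalized by the image width and height."""
    with open(json_path) as f:
        ann = json.load(f)
    w, h = ann["imageWidth"], ann["imageHeight"]
    lines = []
    for shape in ann["shapes"]:
        cls = CLASS_IDS[shape["label"]]
        coords = []
        for x, y in shape["points"]:
            coords += [f"{x / w:.6f}", f"{y / h:.6f}"]
        lines.append(" ".join([str(cls)] + coords))
    with open(txt_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```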

3.3. Data Amplification

To ensure the effectiveness and enhance the generalization of the model, the original dataset was augmented. Image enhancement techniques were employed, including mirroring, Gaussian noise addition, brightness adjustment, and random point overlay. The augmented dataset comprised 1530 images, split into training, validation, and test sets at a ratio of 8:1:1.
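The four augmentation operations can be sketched in pure Python on a grayscale image represented as nested lists; this is an illustrative version only (a production pipeline would typically use a library such as OpenCV or Albumentations), with the noise sigma and point count as hypothetical parameters.

```python
import random

def mirror(img):
    """Horizontal flip of an H x W grayscale image (nested lists)."""
    return [row[::-1] for row in img]

def add_gaussian_noise(img, sigma=8.0, seed=0):
    """Add zero-mean Gaussian noise, clamping pixel values to [0, 255]."""
    rng = random.Random(seed)
    return [[min(255, max(0, int(p + rng.gauss(0, sigma)))) for p in row]
            for row in img]

def adjust_brightness(img, delta):
    """Shift all pixel values by delta, clamping to [0, 255]."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def overlay_random_points(img, n=10, value=0, seed=0):
    """Stamp n random pixels, imitating dust or impurity specks."""
    rng = random.Random(seed)
    out = [row[:] for row in img]
    h, w = len(out), len(out[0])
    for _ in range(n):
        out[rng.randrange(h)][rng.randrange(w)] = value
    return out
```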

3.4. Model Training

The hyperparameters used in the experiment are shown in Table 2. Upon completion of training, a model file named ‘best.pt’ was generated, which was utilized for subsequent carbonitride prediction tasks.

4. Results and Discussion

4.1. Experimental Environment

For this experiment, the compiler used was Python 3.8, with PyTorch version 2.0.0 and CUDA version 11.8. On the hardware front, the CPU was an Intel (R) Xeon (R) Platinum 8474C, and the graphics card was an RTX 4090D with 24 GB of memory.

4.2. Model Performance Evaluation

To assess model performance, the following key metrics were selected for evaluation: the confusion matrix, Precision (P), Recall (R), and mean Average Precision (mAP).
The confusion matrix is a tabular form used to evaluate the performance of a classification model. It serves as a visualization tool, primarily for comparing classification outcomes with actual measured values, and it can display the accuracy of classification results within the matrix. Taking binary classification as an example, the confusion matrix is shown in Table 3. In it, TP (True Positive) indicates that the sample’s actual value category is the positive class, and the model identification result is also the positive class. FN (False Negative) indicates that the sample’s actual category is the positive class, but the model identification result is the negative class. FP (False Positive) indicates that the sample’s actual category is the negative class, but the model identifies it as the positive class. TN (True Negative) indicates that the sample’s actual value is the negative class, and the model also identifies it as the negative class. Subsequent advanced evaluation metrics are also calculated based on these four parameters of the confusion matrix.
Precision (P) refers to the proportion of data correctly predicted as the positive class among all data predicted as positive by the model. The calculation formula is:
P = TP / (TP + FP)
Recall (R) refers to the proportion of the actual positive instances in the sample that are correctly identified by the model. The calculation formula is:
R = TP / (TP + FN)
Mean Average Precision (mAP) is an important metric for evaluating model performance; a higher mAP value indicates better model performance. Before calculating mAP, the Average Precision (AP) for each class must first be calculated. The calculation formula is:
AP = ∫ P(R) dR
In the formula, P(R) refers to the function curve of Precision (P)–Recall (R) for a single class. From this, the value of mAP can be calculated, with the calculation formula being:
mAP = (1/n) ∑ᵢ₌₁ⁿ APᵢ
In the formula, n represents the total number of classes. In this experiment, n = 2.
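The metrics above can be sketched in a few lines of Python. This is a simplified illustration: here AP is approximated with the trapezoidal rule over given precision-recall points, whereas YOLO's actual mAP50 implementation uses interpolated precision at an IoU threshold of 0.5.

```python
def precision(tp, fp):
    """P = TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """R = TP / (TP + FN)."""
    return tp / (tp + fn)

def average_precision(pr_points):
    """Approximate AP = integral of P(R) dR by the trapezoidal rule over
    (recall, precision) points sorted by increasing recall."""
    return sum((r1 - r0) * (p0 + p1) / 2
               for (r0, p0), (r1, p1) in zip(pr_points, pr_points[1:]))

def mean_average_precision(aps):
    """mAP = mean of per-class AP values (n = 2 classes in this study)."""
    return sum(aps) / len(aps)

print(precision(8, 2), recall(8, 2))   # 0.8 0.8
```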

4.3. Comparison of SPDConv Module Improvement Effects

To verify the adaptability of the SPDConv module to the overall model, comparative experiments were conducted by adding the SPDConv module to different positions within the model. The specific locations of addition are shown in Figure 6. SPD0 indicates that the SPDConv module is added at the 0th layer of the backbone. SPD01 indicates that the SPDConv module is added at the 0th and 3rd layers of the backbone, a method of addition that is consistent with the improvement approach in this research. SPD012 indicates that the SPDConv module is added at the 0th, 3rd, and 6th layers of the backbone. SPD0123 indicates that the SPDConv module is added at the 0th, 3rd, 6th, and 9th layers of the backbone. SPD01234 indicates that the SPDConv modules are added at the 0th, 3rd, 6th, 9th, and 12th layers of the backbone. The comparative experimental results for TiN and NbC are shown in Table 4 and Table 5, respectively. The main evaluation metrics selected are Precision (P), Recall (R), and mAP50.
Data from Table 4 and Table 5 show that, considering detection and segmentation accuracy alongside computational load, the SPD01 structural improvement is the most effective. For TiN, the mAP50 values for the detection box and mask are 0.808 and 0.576, increasing by 4.6% and 5.4% from the unimproved model. For NbC, these values are 0.622 and 0.439, with increases of 3.1% and 1.8%. Consequently, this research selected the SPD01 module for improvement.

4.4. Heatmap Visualization and Analysis

In order to further clarify the mechanism of the SPD01 module, this research used the GradCAM [30] method to visualize the attention region of the model by generating a heat map, and the specific effect is shown in Figure 7, where the closer the color is to red, the more attention the model pays to the region.
Figure 7 illustrates that the unimproved model's attention is focused on the background and edges of the image, or only partially captures the carbonitrides, sometimes failing to focus on them at all. After adding SPD01, the model's attention to carbonitrides was significantly improved, its attention accuracy increased substantially, and it was also able to distinguish between impurities and carbonitrides. In summary, the SPD01 module significantly improved the model's recognition rate and performance for carbonitrides by increasing its attention to small targets.

4.5. Ablation Experiment

To verify the overall effect of the improved model, YOLOv8n was used as the baseline model, and ablation experiments were conducted on each improved module. The effectiveness of the model improvements was determined by comparing evaluation metrics such as Precision, Recall, and mAP50 values.
The ablation experiment results for TiN and NbC are detailed in Table 6 and Table 7, respectively. For clarity, the small target detection layer is referred to as “small”. In Table 6 and Table 7, “√” indicates that the module was added to the model and “×” indicates that the module was not added to the model. As indicated in Table 6, the addition of only the SPDConv module to TiN results in a 4.6% and 5.4% increase in mAP50 values for the detection box and mask, respectively, compared to the baseline model. When the small layer is added alone, the mAP50 values for the detection box and mask rise by 9.3% and 15.9%, respectively. With both modules incorporated, the mAP50 values for the detection box and mask see an increase of 11.5% and 20.8%, respectively. Table 7 reveals that for NbC, the addition of SPDConv alone leads to a 3.1% and 1.8% increase in mAP50 values for the detection box and mask, respectively. When the “small” layer is added alone, the mAP50 values improve by 15.4% for the detection box and 13.5% for the mask. Upon adding both modules, the mAP50 values for the detection box and mask increased by 18.9% and 13.6%, respectively. These findings underscore that the enhanced network significantly bolstered model performance and enhanced the precision of carbonitride detection and segmentation.

4.6. Model Prediction Effect Comparison

After training the model, the batch prediction was programmed using PyCharm 2023.2.1 software. The parameters used for prediction are shown in Table 8. The prediction results before and after the improvement under the same parameter conditions are shown in Figure 8, where the blue contour represents TiN, and the red contour represents NbC. It can be observed that the unimproved model has a high false-negative rate for dense, small-sized carbonitrides, with a large number of NbCs not being correctly identified; the improved model can more easily detect them, reducing the false-negative rate, and also has good discrimination ability for impurities. Overall, the improved model was more effective for detecting carbonitrides in high-temperature alloys.

4.7. Data Processing

To analyze the impact of carbonitrides on nickel-based high-temperature alloys, the masked-region data were processed concurrently with model prediction. The main metrics selected for calculation include the number of carbonitrides, centroid coordinates, region area, area fraction, and dispersion.

4.8. Calculation of Carbonitride Dispersity

During the process improvement of nickel-based superalloys, the uniformity of the distribution of carbonitrides was found to have a critical impact on the differences between the transverse and longitudinal properties of the alloys. It was possible to accurately quantify the average size, number, and other characteristics of carbonitride particles, but there was no suitable metric for the uniformity of their distribution, and a literature search found no reliable research basis.
The objective of this research was to quantitatively evaluate the uniformity of the distribution of carbonitride particles. This question could be generalized into a standard computational measure: the quantification of particle dispersion. Specifically, this means calculating the standard deviation of the distance from each particle to its nearest neighboring particle, as shown in Figure 9. Ideally, if the distribution of particles was completely uniform, the standard deviation of the nearest-neighbor distances would be zero. In that case, the dispersion of the carbonitride particles was zero, indicating an absolutely uniform distribution.
Because only a limited area can be sampled in each image, the true nearest neighbor of a carbonitride near the image edge may not lie within the same image, which would overestimate the measured dispersion. To address this, this study adopted the Moore neighborhood type from cellular automata: the original image was duplicated eight times and translated to form the eight neighborhoods surrounding the original (1 × 1) image, creating a new (3 × 3) image array. Computing the distance from each carbonitride particle in the central (1 × 1) image to its nearest carbonitride particle anywhere in the (3 × 3) array mitigates the bias in the dispersion measurement relative to the actual value. This approach is illustrated in Figure 10.
In Figures 9 and 10, the yellow and orange seven-pointed stars represent carbonitrides, the blue background represents the original image, and the white background represents the duplicated copies.
However, the standard deviation of nearest-neighbor distances alone can still mislead. When the particle distribution is extremely uneven, for example when large primary carbonitrides are only slightly broken up and remain clustered together, the calculated dispersion is nevertheless low, even though the actual distribution is clearly non-uniform. This is an inherent limitation of the nearest-neighbor distance: it considers only the shortest inter-particle distances and ignores the distances between particles and the image edges or corners. Because the number of actual carbonitride particles is not large, the array of shortest distances was therefore supplemented with the nearest distance from the particle group to each of the four corners of the image rectangle, from which a corrected dispersion was calculated.
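Combining the two refinements above, a sketch of the full measure might look as follows: the centroid field is replicated into a 3 × 3 Moore neighborhood so edge particles find their true nearest neighbor, and the four corner distances are appended before taking the standard deviation. Function and variable names are ours, not the authors'.

```python
# Sketch of the refined dispersion measure: Moore-neighbourhood tiling
# plus corner-distance supplement, as described in the text.
import math

def refined_dispersion(points, width, height):
    # 3x3 tiling: shift the centroid set by every combination of +/-W, +/-H
    tiled = [(x + dx * width, y + dy * height)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             for (x, y) in points]
    # nearest-neighbour distance of each central-image particle in the tiling
    # (q != p excludes a particle's own untranslated copy)
    dists = [min(math.dist(p, q) for q in tiled if q != p) for p in points]
    # supplement with the nearest distance from the particle group to each
    # of the four corners of the image rectangle
    for corner in [(0, 0), (width, 0), (0, height), (width, height)]:
        dists.append(min(math.dist(corner, q) for q in points))
    mean = sum(dists) / len(dists)
    return math.sqrt(sum((x - mean) ** 2 for x in dists) / len(dists))
```

A tight cluster in one corner of the field now yields a clearly higher dispersion than the same number of evenly spread particles, which is the behavior the corner supplement is meant to restore.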
The dispersion calculation program was embedded into the carbonitride recognition program; the final data processing results are shown in Table 9, with the corresponding images in Figure 11.

4.9. Advantages in Industrial Scenarios

The complexity and uniqueness of engineering problems mean there is no common public dataset in this field, making direct comparison with other authors' experimental results difficult. Moreover, previous methods in this field rely on threshold segmentation, using software such as Image-Pro Plus 6.0 for segmentation and statistics. For example, Chen et al. [31] analyzed the effects of dual-melting (VIM + VAR) and triple-melting (VIM + ESR + VAR) technologies on the properties of superalloy GH4738 by means of XRD, SEM, EDS, and Image-Pro Plus, with the software used mainly to count the number and size of inclusions in the alloy. Yang et al. [32] used SEM and Image-Pro Plus to observe and quantify the microstructure of dual-phase Ti-6Al-4V alloy and analyze its mechanical properties, with the software used mainly to count the number and size of grains. Neither study reports a specific statistical accuracy.
To verify the validity and generality of our method, we conducted a comparative analysis from two aspects:
(1) Broader literature research. Related work in other fields has shown that computer-vision and deep-learning approaches hold significant advantages over traditional threshold segmentation. For example, Wang et al. [33] counted nuclei during cell proliferation with both a ResNet network and Image-Pro Plus 6.0: the software achieved an average accuracy of 67.2%, versus 90% for the ResNet network. Zhu et al. [34] segmented eutectic silicon in an Al-Si alloy with an improved FCN network and with Image-Pro Plus, computed microstructural feature parameters, and compared both against theoretical results; the improved FCN segmentation was closer to the theoretical values.
(2) Data statistics. In an actual industrial application environment, the improved model showed clear advantages in both accuracy and speed. To demonstrate this, the threshold-segmentation results of Image-Pro Plus 6.0 were compared with the instance segmentation results of the improved model, using manual counts of carbonitrides as the benchmark. In Image-Pro Plus 6.0, the R, G, and B thresholds were set to 200, 190, and 185, respectively, and the minimum recognition area was set to 10 pixels to filter image noise. The comparison in Table 10 shows that the improved model's counts track the manual counts closely, with an error of zero to five particles per image. Despite the minimum-area filter, image-quality issues still caused Image-Pro Plus 6.0 to pick up many noise points and impurities: its counts were roughly two to twelve times the manual counts, which would significantly distort subsequent data processing. In summary, the improved model far exceeded Image-Pro Plus 6.0 in both detection speed and accuracy. Moreover, after processing, the model can serve as a pre-trained weight file for automatic annotation software, enabling automatic segmentation of micron-scale carbonitrides in other steel grades and significantly accelerating their detection and instance segmentation.
To further illustrate the effectiveness of the improved model, a detailed statistical analysis was carried out on actual industrial data. With the manual counts as reference, the deep-learning and threshold-segmentation methods were compared via Precision and Recall; the results for both are given in Table 11. The improved YOLOv8 model averages above 90% in both Precision and Recall. Image-Pro Plus 6.0 achieves high Recall, but because the industrial-site images were shot under uncontrolled conditions and not preprocessed, it also recognizes many impurities, so its Precision is low. In summary, the recognition accuracy of the improved YOLOv8 model far exceeds that of the threshold-segmentation approach.
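The Precision/Recall comparison reduces to the standard definitions over matched detections. A minimal sketch follows; the matching of detections to manually counted particles is assumed to have been done upstream, and the worked numbers are the 2-1# row of Tables 10 and 11 (47 manual particles, 101 Image-Pro Plus detections, of which 45 are correct, consistent with P = 0.446 and R = 0.957).

```python
# Sketch: Precision and Recall from matched detection counts.
# TP = detections matched to a manually counted particle,
# FP = unmatched detections, FN = manual particles missed.

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Worked example with the 2-1# row for Image-Pro Plus 6.0:
p, r = precision_recall(tp=45, fp=101 - 45, fn=47 - 45)
# p ~= 0.446, r ~= 0.957, matching Table 11
```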

4.10. Practical Challenges and Industrial Relevance of Model Implementation

The ultimate goal of this research is to build a system in which the AI model iterates on itself, which brings its own challenges. Encouragingly, the model has shown considerable versatility and room for growth: having demonstrated accuracy, speed, standardization, and low labor cost in industrial mass production, it has been rapidly applied to quantitative carbonitride statistics for other alloy grades in the industrial production process. In the future, more microstructure photographs of additional steel varieties are needed to continuously improve the model's performance and build such a system.
In subsequent research, further improvement methods will be applied, such as adding CBAM and EMA attention mechanisms, trying additional loss functions, changing the feature extraction strategy of the backbone network, and modifying the feature fusion mechanism of the neck network, to raise model performance further toward the needs of the industrial field. Beyond that, the existing improved models can be leveraged to process new data, which can then be added to datasets for training more general models, implementing iterative updates of the AI model.

5. Conclusions

(1) The SPD01 structural improvement mapped spatial-dimension information into the depth dimension, effectively addressing the loss of feature information. Heatmap visualization indicated that the SPD01 module enhanced the model's attention on carbonitrides, with substantial accuracy gains. The mAP50 values for TiN and NbC masks improved by 5.4% and 1.8%, respectively.
(2) By incorporating a specialized P2 layer, the model was endowed with a more refined recognition capability for small targets, thereby significantly enhancing the segmentation accuracy for these targets. The mAP50 values for TiN and NbC masks improved by 15.9% and 13.5%, respectively.
(3) Compared to the original network, integrating SPD01 and the P2 layer further enhanced model performance, increasing precision in carbonitride detection and segmentation while reducing missed detection rates. The mAP50 values for TiN and NbC masks improved by 20.8% and 13.6%, respectively.
(4) A method and program for calculating the dispersity of carbonitrides were developed and embedded into the carbonitride recognition program, enabling batch detection and data processing. The current model is capable of performing high-precision instance segmentation detection of primary carbonitrides in industrial scenarios.

Author Contributions

Conceptualization, J.Z.; Methodology, C.G.; Software, H.Z.; Validation, H.Z.; Formal analysis, H.Z.; Investigation, J.Z.; Resources, J.Z.; Data curation, C.Z.; Writing—original draft, H.Z.; Writing—review & editing, C.Z. and C.G.; Visualization, C.Z.; Supervision, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by [Major Scientific and Technological Innovation Project of CITIC Group] grant number [2022ZXKYA06100].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

  1. Reed, R.C. The Superalloys: Fundamentals and Applications; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  2. Durand-Charre, M. The Microstructure of Superalloys; Routledge: Abingdon-on-Thames, UK, 2017. [Google Scholar]
  3. Cieslak, M.J.; Knorovsky, G.A.; Headley, T.J.; Romig, A.D., Jr. The Solidification Metallurgy of Alloy 718 and Other Nb-Containing Superalloys; Sandia National Lab.: Albuquerque, NM, USA, 1989; Volume 20, pp. 2149–2158. [Google Scholar] [CrossRef]
  4. Leonardo, I.M.; da Hora, C.S.; dos Reis Silva, M.B.; Sernik, K. Production of Nitride-Free 718 by the VIM-VAR Processing Route. In Proceedings of the 9th International Symposium on Superalloy 718 & Derivatives: Energy, Aerospace, and Industrial Applications; Springer: Cham, Switzerland, 2018; pp. 303–315. [Google Scholar] [CrossRef]
  5. Xie, Y.; Cheng, G.G.; Chen, L.; Zhang, Y.D.; Yan, Q.Z. Characteristics and generating mechanism of large precipitates in Nb–Ti-microalloyed H13 tool steel. ISIJ Int. 2016, 56, 995–1002. [Google Scholar] [CrossRef]
  6. Chen, Q.L.; Li, W.; Chen, Z. Analysis of microstructure characteristics of high sulfur steel based on computer image processing technology. Results Phys. 2019, 12, 392–397. [Google Scholar] [CrossRef]
  7. Wang, Y.; Huang, X.X.; Xie, G.L.; Zhang, N.P. A high-precision automatic recognition method based on target detection for nanometer scaled precipitates or carbides in different alloys. J. Mater. Res. Technol. 2023, 26, 7767–7774. [Google Scholar] [CrossRef]
  8. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  9. Brian, L.D.; Francis, T.; Holm, E.A. Exploring the microstructure manifold: Image texture representations applied to ultrahigh carbon steel microstructures. Acta Mater. 2017, 133, 30–40. [Google Scholar] [CrossRef]
  10. Li, W.; Li, W.; Qin, Z.; Tan, L.; Huang, L.; Liu, F.; Xiao, C. Deep Transfer Learning for Ni-Based Superalloys Microstructure Recognition on γ′ Phase. Materials 2022, 15, 4251. [Google Scholar] [CrossRef]
  11. Azimi, S.M.; Britz, D.; Engstler, M.; Fritz, M.; Mucklich, F. Advanced Steel Microstructural Classification by Deep Learning Methods. Sci Rep. 2018, 8, 2128. [Google Scholar] [CrossRef]
  12. Ghauri, H.; Tafreshi, R.; Mansoor, B. Toward automated microstructure characterization of stainless steels through machine learning-based analysis of replication micrographs. J. Mater. Sci. Mater. Eng. 2024, 4, 19. [Google Scholar] [CrossRef]
  13. Jia, K.; Li, W.F.; Wang, Z.L.; Qin, Z.J. Accelerating Microstructure Recognition of Nickel-Based Superalloy Data by UNet++. Int. Symp. Intell. Autom. Soft Comput. (IASC) 2021, 80, 863–870. [Google Scholar] [CrossRef]
  14. Senanayake, N.M.; Carter, J.L.W. Computer Vision Approaches for Segmentation of Nanoscale Precipitates in Nickel-Based Superalloy IN718. Integr. Mater. Manuf. Innov. 2020, 9, 446–458. [Google Scholar] [CrossRef]
  15. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7467–7475. [Google Scholar] [CrossRef]
  16. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
  17. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.M.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar] [CrossRef]
  18. Li, H.C.; Xiong, P.F.; An, J.; Wang, L.X. Pyramid attention network for semantic segmentation. arXiv 2018. [Google Scholar] [CrossRef]
  19. Zhu, C.C.; He, Y.H.; Savvides, M. Feature Selective Anchor-Free Module for Single-Shot Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 840–849. [Google Scholar] [CrossRef]
  20. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020. [Google Scholar] [CrossRef]
  21. Ge, Z.; Liu, S.T.; Wang, F.; Li, Z.M.; Sun, J. Yolox: Exceeding yolo series in 2021. arXiv 2021. [Google Scholar] [CrossRef]
  22. Lou, H.T.; Duan, X.H.; Guo, J.M.; Liu, H.Y.; Gu, J.; Bi, L.Y.; Chen, H.N. DC-YOLOv8: Small-Size Object Detection Algorithm Based on Camera Sensor. Electronics 2023, 12, 2323. [Google Scholar] [CrossRef]
  23. Li, Y.T.; Fan, Q.S.; Huang, H.S.; Han, Z.G.; Gu, Q. A Modified YOLOv8 Detection Network for UAV Aerial Image Recognition. Drones 2023, 7, 304. [Google Scholar] [CrossRef]
  24. Wang, H.; Liu, C.Y.; Cai, Y.F.; Chen, L.; Li, Y.C. YOLOv8-QSD: An Improved Small Object Detection Algorithm for Autonomous Vehicles Based on YOLOv8. IEEE Trans. Instrum. Meas. 2024, 73, 1–16. [Google Scholar] [CrossRef]
  25. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; Volume 8693, pp. 740–755. [Google Scholar] [CrossRef]
  26. Varghese, R.; Sambath, M. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; pp. 1–6. [Google Scholar] [CrossRef]
  27. Zhong, R.; Peng, E.D.; Li, Z.Q.; Ai, Q.; Han, T.; Tang, Y. SPD-YOLOv8: An small-size object detection model of UAV imagery in complex scene. J. Supercomput. 2024, 80, 17021–17041. [Google Scholar] [CrossRef]
  28. Sunkara, R.; Luo, T. No More Strided Convolutions or Pooling: A New CNN Building Block for Low-Resolution Images and Small Objects. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Grenoble, France, 19–23 September 2022; pp. 443–459. [Google Scholar] [CrossRef]
  29. Zhai, X.X.; Huang, Z.H.; Li, T.; Liu, H.Z.; Wang, S.Y. YOLO-Drone: An Optimized YOLOv8 Network for Tiny UAV Object Detection. Electronics 2023, 12, 3664. [Google Scholar] [CrossRef]
  30. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
  31. Chen, Z.Y.; Yang, S.F.; Qu, J.L.; Li, J.S.; Dong, A.P.; Gu, Y. Effects of Different Melting Technologies on the Purity of Superalloy GH4738. Materials 2018, 11, 1838. [Google Scholar] [CrossRef]
  32. Yang, D.; Liu, Z.Q. Quantification of Microstructural Features and Prediction of Mechanical Properties of a Dual-Phase Ti-6Al-4V Alloy. Materials 2016, 9, 628. [Google Scholar] [CrossRef] [PubMed]
  33. Wang, H.; Lv, X.Y.; Wu, G.H.; Lv, G.D.; Zheng, X.X. Cell proliferation detection based on deep learning. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Application (ITCA), Guangzhou, China, 18–20 December 2020; pp. 208–212. [Google Scholar] [CrossRef]
  34. Zhu, L.F.; Luo, Q.; Chen, C.H.; Zhang, Y.; Zhang, L.J.; Hu, B.; Han, Y.X.; Li, Q. Prediction of ultimate tensile strength of Al-Si alloys based on multimodal fusion learning. MGE Adv. 2024, 2, 1. [Google Scholar] [CrossRef]
Figure 1. Carbonitride Size Distribution Map.
Figure 2. Improved Network Structure [28].
Figure 3. SPDConv Structure Diagram.
Figure 4. Small target detection layer.
Figure 5. Labelme Labeling Software Interface.
Figure 6. SPDConv Module Different Add Locations.
Figure 7. Visualization of Heatmap Results. (a–d): Original Image; (e–h): Unimproved Model; (i–l): The Model with SPD01 Layer added.
Figure 8. Comparison of the Effects Before and After Prediction. (a–d): Original Image; (e–h): Unimproved Model; (i–l): Improved Model.
Figure 9. Schematic of Carbonitride Dispersion Statistics.
Figure 10. Schematic of the expanded image (Moore-type neighborhood).
Figure 11. Dispersion calculation results. (a) 1-1#; (b) 1-2#; (c) 1-3#; (d) 1-4#; (e) 1-5#; (f) 1-6#.
Table 1. Carbonitrides statistics.

| Obj. | Type | Area (Pixels) | Area (μm²) | Center-X | Center-Y |
|------|------|---------------|------------|----------|----------|
| 1#  | TiN | 604  | 34.8  | 737.1  | 463.5  |
| 2#  | TiN | 405  | 23.3  | 1943.0 | 584.0  |
| 3#  | TiN | 590  | 34.0  | 803.5  | 717.3  |
| 4#  | NbC | 289  | 16.6  | 314.1  | 895.9  |
| 5#  | NbC | 81   | 4.7   | 1406.2 | 936.9  |
| 6#  | NbC | 456  | 26.3  | 315.0  | 951.9  |
| 7#  | NbC | 122  | 7.0   | 714.4  | 969.0  |
| 8#  | NbC | 766  | 44.1  | 1075.6 | 972.7  |
| 9#  | NbC | 2129 | 122.6 | 1369.2 | 999.0  |
| 10# | NbC | 2344 | 135.0 | 1162.1 | 1000.5 |
| 11# | NbC | 625  | 36.0  | 1098.9 | 1007.0 |
| 12# | NbC | 2043 | 117.7 | 993.8  | 1021.9 |
| 13# | NbC | 54   | 3.1   | 967.6  | 1004.8 |
| 14# | NbC | 858  | 49.4  | 910.4  | 1020.7 |
| 15# | NbC | 51   | 2.9   | 952.5  | 1032.4 |
| 16# | NbC | 122  | 7.0   | 1219.1 | 1041.7 |
| 17# | NbC | 875  | 50.4  | 953.7  | 1065.4 |
| 18# | NbC | 1053 | 60.7  | 1476.4 | 1121.9 |
| 19# | NbC | 584  | 33.6  | 1246.6 | 1121.2 |
| 20# | NbC | 416  | 24.0  | 1170.0 | 1127.0 |
Table 2. Hyperparameter Settings.

| Parameter | Meaning | Value |
|-----------|---------|-------|
| imgsz     | Input Image Size       | 640   |
| epochs    | The Rounds of Training | 300   |
| batch     | Batch of Training Data | 1     |
| optimizer | Training Optimizer     | SGD   |
| lr0       | Initial Learning Rate  | 0.01  |
| momentum  | Momentum Factor        | 0.937 |
| workers   | Worker Thread Count    | 8     |
Table 3. Confusion Matrix.

| Reality \ Prediction | Positive            | Negative            |
|----------------------|---------------------|---------------------|
| Positive             | True Positive (TP)  | False Negative (FN) |
| Negative             | False Positive (FP) | True Negative (TN)  |
Table 4. TiN Test Results.

| Model    | Box (P) | Box (R) | mAP50 (Box) | Mask (P) | Mask (R) | mAP50 (Mask) |
|----------|---------|---------|-------------|----------|----------|--------------|
| YOLOv8n  | 0.951 | 0.641 | 0.762 | 0.701 | 0.475 | 0.522 |
| SPD0     | 0.982 | 0.620 | 0.809 | 0.732 | 0.463 | 0.542 |
| SPD01    | 0.975 | 0.649 | 0.808 | 0.749 | 0.500 | 0.576 |
| SPD012   | 0.954 | 0.633 | 0.792 | 0.716 | 0.478 | 0.538 |
| SPD0123  | 0.975 | 0.641 | 0.798 | 0.743 | 0.491 | 0.526 |
| SPD01234 | 0.962 | 0.644 | 0.797 | 0.754 | 0.506 | 0.549 |
Table 5. NbC Test Results.

| Model    | Box (P) | Box (R) | mAP50 (Box) | Mask (P) | Mask (R) | mAP50 (Mask) |
|----------|---------|---------|-------------|----------|----------|--------------|
| YOLOv8n  | 0.714 | 0.494 | 0.591 | 0.529 | 0.373 | 0.421 |
| SPD0     | 0.704 | 0.492 | 0.581 | 0.518 | 0.365 | 0.396 |
| SPD01    | 0.737 | 0.514 | 0.622 | 0.557 | 0.396 | 0.439 |
| SPD012   | 0.760 | 0.483 | 0.611 | 0.562 | 0.360 | 0.424 |
| SPD0123  | 0.782 | 0.512 | 0.643 | 0.563 | 0.372 | 0.429 |
| SPD01234 | 0.771 | 0.501 | 0.633 | 0.560 | 0.365 | 0.427 |
Table 6. Results of TiN Ablation Experiments.

| Model   | SPD01 | Small | mAP50 (Box) | mAP50 (Mask) |
|---------|-------|-------|-------------|--------------|
| Model01 | ×     | ×     | 0.762 | 0.522 |
| Model02 | ✓     | ×     | 0.808 | 0.576 |
| Model03 | ×     | ✓     | 0.855 | 0.681 |
| Model04 | ✓     | ✓     | 0.877 | 0.730 |
Table 7. Results of NbC Ablation Experiments.

| Model   | SPD01 | Small | mAP50 (Box) | mAP50 (Mask) |
|---------|-------|-------|-------------|--------------|
| Model01 | ×     | ×     | 0.591 | 0.421 |
| Model02 | ✓     | ×     | 0.622 | 0.439 |
| Model03 | ×     | ✓     | 0.745 | 0.556 |
| Model04 | ✓     | ✓     | 0.780 | 0.557 |
Table 8. Image prediction parameter settings.

| Parameter          | Implication                              | Value   |
|--------------------|------------------------------------------|---------|
| yolo predict model | Model File                               | best.pt |
| source             | Image file or path                       | 1.JPG   |
| save               | Save the prediction results              | True    |
| save_crop          | Save the cropped image with the results  | True    |
| imgsz              | Image size                               | 640     |
| retina_masks       | Use high-resolution segmentation mask    | True    |
Table 9. Data processing results.

| Image | Quantities | Area Fraction | Dispersion | Dispersion-X | Dispersion-Y |
|-------|------------|---------------|------------|--------------|--------------|
| 1-1# | 48 | 0.72% | 2959.4372 | 1936.8789 | 2306.4717 |
| 1-2# | 18 | 0.21% | 1059.4677 | 836.2208  | 762.9323  |
| 1-3# | 60 | 1.04% | 1743.0129 | 1722.0014 | 731.2126  |
| 1-4# | 38 | 1.08% | 1935.6972 | 808.1959  | 1863.9304 |
| 1-5# | 11 | 0.32% | 815.4331  | 455.6615  | 773.5230  |
| 1-6# | 7  | 0.27% | 657.2003  | 364.0994  | 622.1773  |
Table 10. Comparison results of different methods (number of carbonitrides per statistical method).

| Image | Manual Statistics | Image-Pro Plus | Modified Model |
|-------|-------------------|----------------|----------------|
| 2-1#  | 47 | 101 | 46 |
| 2-2#  | 52 | 169 | 50 |
| 2-3#  | 24 | 296 | 25 |
| 2-4#  | 31 | 222 | 27 |
| 2-5#  | 57 | 206 | 54 |
| 2-6#  | 35 | 225 | 38 |
| 2-7#  | 37 | 203 | 35 |
| 2-8#  | 22 | 150 | 22 |
| 2-9#  | 67 | 204 | 62 |
| 2-10# | 42 | 159 | 43 |
Table 11. Comparison of calculation results.

| Obj.  | P (Model) | R (Model) | P (IPP) | R (IPP) |
|-------|-----------|-----------|---------|---------|
| 2-1#  | 0.978 | 1.000 | 0.446 | 0.957 |
| 2-2#  | 0.960 | 0.923 | 0.249 | 0.808 |
| 2-3#  | 0.960 | 1.000 | 0.077 | 0.958 |
| 2-4#  | 1.000 | 0.935 | 0.131 | 0.935 |
| 2-5#  | 0.981 | 0.929 | 0.262 | 0.947 |
| 2-6#  | 0.922 | 1.000 | 0.142 | 0.914 |
| 2-7#  | 1.000 | 0.946 | 0.163 | 0.892 |
| 2-8#  | 0.857 | 0.857 | 0.133 | 0.909 |
| 2-9#  | 0.984 | 0.910 | 0.328 | 0.910 |
| 2-10# | 0.953 | 0.976 | 0.226 | 0.857 |

Share and Cite

Zhang, J.; Zheng, H.; Zeng, C.; Gu, C. High-Precision Instance Segmentation Detection of Micrometer-Scale Primary Carbonitrides in Nickel-Based Superalloys for Industrial Applications. Materials 2024, 17, 4679. https://doi.org/10.3390/ma17194679