Article

Inspection Algorithm of Welding Bead Based on Image Projection

Department of Artificial Intelligence Convergence, Pukyong National University, 45, Yongso-ro, Nam-gu, Busan 48513, Republic of Korea
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(11), 2523; https://doi.org/10.3390/electronics12112523
Submission received: 17 April 2023 / Revised: 18 May 2023 / Accepted: 31 May 2023 / Published: 2 June 2023

Abstract

The shear reinforcement of dual-anchorage (SRD) is used to enhance the safety of reinforced concrete structures in construction sites. In SRD, welding is used to create shear reinforcement, and after production, a quality inspection of the welding bead is required. Since the welding bead of SRD is inspected for quality by measuring both horizontal and vertical lengths, it is necessary to obtain this information for quality inspection. However, it is difficult to inspect the quality of welding beads using existing methods based on segmentation, due to the similarity in texture between the welding bead and the base material, as well as discoloration around the welded area after welding. In this paper, we propose an algorithm that detects the welding bead using an image projection algorithm for pixels and classifies the quality of the welding bead. This algorithm detects the position of welding beads using the brightness values of an image. The proposed algorithm reduces the amount of computation time by first specifying the region of interest and then performing the analysis. Results from experiments reveal that the algorithm accurately classifies welding beads into good or bad classes by obtaining all brightness values in the vertical and horizontal directions in the SRD image. Furthermore, comparison tests with conventional algorithms demonstrate that the classification accuracy of the proposed algorithm is the highest. The proposed algorithm will be helpful in the real-time welding bead inspection field where fast and accurate inspection is crucial.

1. Introduction

Shear reinforcement, such as SRD, is used in reinforced concrete structures to prevent shear failure during construction. As shown in Figure 1, the SRD is composed of three base materials that are welded together at the left, right, and center positions. This requires welding to be performed at four positions on the SRD, as illustrated in Figure 2. As shown in Figure 3, the SRD is produced by welding using a robot, and the welding quality is inspected after it is moved to the inspection table. The quality of the welding is determined by several parameters during the welding process, such as optimal voltage, current supply, amount of gas, welding time, and base metal geometry [1]. If any of these factors are not appropriate, they can negatively affect the productivity, competitiveness, and safety of the final product. Therefore, welding quality inspection is essential to determine whether a product is good or bad. There are various methods to inspect the quality of welding beads, including visual inspection, radiographic inspection, liquid penetrant inspection, and ultrasonic inspection [2,3,4,5]. Visual inspection is a direct examination by the human eye. Because the results vary depending on the inspector’s experience, it can reduce the reliability of the product and take a long time; nevertheless, it is still commonly used. Radiographic inspection is a non-destructive technique that is effective in identifying and analyzing internal defects within a welding bead. It has relatively high resolution and can detect small defects inside the welding bead; however, the use of radiation poses safety risks. Ultrasonic inspection is a widely used non-destructive technique that provides high sensitivity and precision, making it capable of detecting even small defects. Its disadvantage is that it requires trained personnel to operate the equipment.
This is because its accuracy may be compromised if performed by an inexperienced inspector. A further limitation of ultrasonic inspection is that some defects may be undetectable depending on the shape or thickness of the welding bead. Recently, various methods have been proposed to analyze the quality of welding beads using image processing techniques such as segmentation and machine learning [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Segmentation algorithms are used to detect and inspect only the welding bead areas in images, and machine learning-based methods evaluate the quality of the welding bead using models trained on various welding bead shapes. However, this task remains challenging because the texture of the welding bead and the base metal is similar, and discoloration occurs around the bead after welding. This interference can compromise the accurate detection of welding beads. Therefore, there are limitations to using segmentation and machine learning algorithms for inspecting the quality of welding beads.
In this paper, we propose an algorithm for inspecting the quality of welding beads using image processing techniques. The algorithm detects the welding bead region using an image projection algorithm that analyzes the brightness of the welding bead, and classifies welding beads as good or bad to inspect their quality. The proposed algorithm reduces the computational cost by testing only the region of interest (ROI) in the image. Additionally, unlike many segmentation methods that require creating a mask followed by a quality test that measures the length of the welding bead, the proposed algorithm omits this step, resulting in relatively faster computation. Furthermore, the algorithm statistically analyzes the brightness values in all directions within the ROI, which improves the detection accuracy of both the horizontal and vertical lengths, critical parameters in welding bead inspection. The remainder of this paper is organized as follows: Section 2 introduces conventional methods of segmenting the welding bead area in the image. Section 3 describes the algorithm proposed in this paper, and Section 4 presents the experimental results obtained using the proposed and existing algorithms. Finally, Section 5 presents the conclusions of this study.

2. Related Works

Various methods, such as segmentation and machine learning, have been proposed to inspect the quality of welding beads. Among these methods, K-means is used to divide similar objects into clusters and has the advantage of being easy to apply. A study has been published on segmenting areas of interest in magnetic resonance imaging (MRI) using iterative Gaussian filtering, Canny edge detection, and Chan–Vese segmentation methods with K-means [6]. Other studies have also used K-means for image segmentation using RGB and HSV color spaces [7]. Khrissi et al. [8] introduced a new clustering-based image segmentation method optimized by the Sine Cosine Algorithm (SCA) meta-heuristic. Qiao et al. [9] applied and compared various segmentation methods, including the Otsu algorithm, K-nearest neighbors (KNN), Grabcut, and K-means, to segment rock core images of geological data, and showed that K-means performed the best. Furthermore, a study proposed a convolution-based modified adaptive K-means (MAKM) approach to divide an image so that performance does not depend on the initially set parameters [10]. Another study proposed an image segmentation method using an adaptive K-means algorithm [11]. However, K-means has the disadvantage of being sensitive to outliers, and the results may vary depending on the number of clusters pre-defined by the user. KNN classifies data according to the labels of their K nearest neighbors. Various studies have proposed using KNN in different fields, such as medicine, marketing, and character and face recognition. For example, one study used S-KNN to detect areas of interest, background, and ambiguous areas [12], and another compared and analyzed image segmentation results using KNN and histograms for leaf image data of crops [13]. Additionally, a study used the texture features of plant leaves for the classification of plant leaf disease images [14].
However, KNN has the disadvantage of requiring a large amount of computation time, making it difficult to apply in real-time inspections in which the quality must be determined within a short time. Grabcut is an algorithm that segments objects and backgrounds by minimizing an energy function over the distribution of a designated area in an image. Recently, there have been studies on image segmentation using Grabcut, such as a study that detected areas of interest using a hybrid segmentation method combining adaptive K-means and Grabcut [15], and a study that segmented cancer in images using Mask R-CNN and Grabcut [16]. However, the accuracy of Grabcut may be poor when the background in the image is complicated or the object and background are similar. In addition, repetitive operations are required to obtain optimal results, resulting in a long processing time. Thus, good results cannot be expected, since the welding bead has a color similar to that of the base metal corresponding to the background, and there are still limitations in computational speed compared to other algorithms [17]. Recently, deep learning methods have been proposed for image segmentation; among them, U-Net belongs to the class of encoder–decoder-based models, such as the auto-encoder [18]. A drawback of U-Net is that the dimension reduction during the encoding phase leads to the loss of detailed positional information on image objects, and since only low-dimensional information is utilized in the decoding phase, this positional information may not be recovered. Research on U-Net has seen significant improvements through the incorporation of shortcut connections. Nevertheless, U-Net still requires a large number of training samples to achieve good performance, which may not always be feasible in some welding bead inspection applications.
The morphological geodesic active contour method is a combination of the morphological snakes introduced in [19] and the geodesic active contour in [20]. This method segments the area of interest by expanding the gradually developing contour. Recently, Mlyahilu et al. in [21] proposed a segmentation method using the morphological geodesic active contour algorithm with the histogram equalization method to normalize the distribution of the welding bead image, as shown in Figure 4. One disadvantage of the morphological geodesic active contour method is that the area parameters need to be adjusted appropriately when creating a bounding box for segmentation. However, if the same parameters are used for all welding bead images, it is possible that the welding bead may not be detected accurately. For example, as shown in Figure 4, in (a) the bright floor is detected, and in (b–d) the welding bead and base metal regions are segmented together. In particular, the entire welding bead area is not detected in (c,d). Therefore, the method may not be suitable for real-time inspection.
Binarization is a segmentation method that converts grayscale images to black and white images based on a threshold value. This method includes various techniques, such as adaptive thresholding and Otsu thresholding. Adaptive thresholding divides an image into several regions and applies binarization by considering the surrounding pixels in each region. Recently, a study proposed an optimal two-dimensional direction filter to improve the contrast between regions of interest, such as thin scratches, and their backgrounds, and to automatically detect regions of interest using adaptive thresholding [22]. Additionally, for images with low contrast or noise, another study proposed determining the threshold value from means obtained using the standard deviation and entropy of pixels within a certain area, and segmenting the area of interest accordingly [23]. Otsu’s thresholding algorithm applies binarization by finding a threshold value that minimizes the within-class dispersion of the image histogram. Zhang et al. [24] proposed a method of segmenting the ROI from real-time navigation images using an improved 2D fuzzy Fisher method based on criteria related to Otsu and entropy. Ma et al. [25] proposed a method of segmenting images with multiple threshold values based on RAV-WOA as an objective function. The Otsu algorithm has also been widely used in medical data segmentation. Recently, Alhasan [26] proposed an EFEHO-OTSU method to increase segmentation accuracy by combining enhanced fuzzy elephant herding optimization (EFEHO) with Otsu segmentation for Alzheimer’s disease diagnostics in MRI images. Fazilov et al. [27] presented results of enhancing image quality in CAD systems to improve cancer detection. However, when applying the adaptive thresholding algorithm, the user must choose appropriate parameter values, such as the block size, because the results can vary greatly depending on these values.
Additionally, the threshold value varies for each area, resulting in mathematical complexity. Moreover, since Otsu’s algorithm requires evaluating all candidate threshold values, it can be difficult to find the optimal threshold in an area with a lot of noise or a large distribution difference from the background. Therefore, there are limitations to using these methods in real-time welding inspections that require high accuracy.
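To make the binarization discussion concrete, the classic Otsu criterion (not the improved variants in [24,25,26]) can be sketched in a few lines of NumPy; the function name and structure here are illustrative, not taken from the cited works:

```python
import numpy as np

def otsu_threshold(gray):
    """Classic Otsu's method: choose the threshold that maximizes the
    between-class variance of the grayscale histogram, which is
    equivalent to minimizing the weighted within-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    mu_total = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum_w = cum_mu = 0.0
    for t in range(256):
        cum_w += hist[t]            # pixels at or below t (class 0)
        cum_mu += t * hist[t]
        w0 = cum_w / total
        if w0 == 0.0 or w0 == 1.0:
            continue                # one class is empty; skip
        mu0 = cum_mu / cum_w
        mu1 = (mu_total * total - cum_mu) / (total - cum_w)
        var_between = w0 * (1.0 - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

As the section notes, a single global threshold like this struggles when the bead and base metal have similar brightness, which is exactly the failure mode reported for the welding bead images.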

3. Proposed Algorithm

As described above, existing algorithms have several disadvantages, such as sensitivity to outliers and results that change significantly with the parameter values. This can lower the accuracy of welding bead inspection. Additionally, their complex computations lengthen the processing time, making them difficult to use for real-time welding quality inspection. Furthermore, as shown in Figure 2, the welding bead area of the SRD in the image is relatively bright compared to the surrounding areas, making it challenging to accurately detect welding beads. To overcome these limitations, this study proposes an algorithm based on the color difference between the welding bead and the base metal. In this section, we describe the proposed algorithm for detecting welding bead areas using the image projection algorithm and determining whether the welding bead is defective.

3.1. Image Projection

We modified the image projection algorithm presented in [28] to identify only the welding bead area in the welding bead images. We define the projections of the color image as $P_V \in \mathbb{R}^3$ and $P_H \in \mathbb{R}^3$ for the $v$-th line in the vertical direction ($P_V$) and the $h$-th line in the horizontal direction ($P_H$). The mean brightness ($MB$) of the ROI in each of the vertical and horizontal directions is computed using the following equations:
$$P_V = \sum_{i = x_{start}}^{x_{end}} pixel(i, j), \tag{1}$$
$$MB_v = \frac{\sum_{k=0}^{2} P_V[k]}{3}, \tag{2}$$
$$P_H = \sum_{j = y_{start}}^{y_{end}} pixel(i, j), \tag{3}$$
$$MB_h = \frac{\sum_{k=0}^{2} P_H[k]}{3}, \tag{4}$$
where $k = 0, 1, 2$ represents the color channels (Red: 0, Green: 1, Blue: 2), $x_{start}$ and $x_{end}$ refer to the starting and ending positions of the x-axis in the image, and $y_{start}$ and $y_{end}$ refer to the starting and ending positions of the y-axis. As shown in Figure 5, we scan the image brightness from left to right in the vertical direction and simultaneously obtain the mean of the RGB values for each vertical line. After obtaining the values in the vertical direction, we repeat the process in the horizontal direction to obtain the image brightness values. From the mean RGB values obtained in Equations (2) and (4), histograms of the values are generated in the vertical and horizontal directions, respectively.
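As a concrete sketch of Equations (1)–(4), the vertical and horizontal projections can be computed with NumPy as follows (the function name and array layout are our own illustration, not the paper's code):

```python
import numpy as np

def mean_brightness_profiles(image):
    """Image projection per Equations (1)-(4): sum the RGB pixel values
    along each vertical line (column) and each horizontal line (row),
    then average over the three color channels (k = 0, 1, 2).

    image: H x W x 3 array (RGB).
    Returns (mb_v, mb_h): mean-brightness profiles of length W and H.
    """
    img = image.astype(np.float64)
    p_v = img.sum(axis=0)           # P_V: per-column channel sums, shape (W, 3)
    mb_v = p_v.sum(axis=1) / 3.0    # MB_v, Eq. (2)
    p_h = img.sum(axis=1)           # P_H: per-row channel sums, shape (H, 3)
    mb_h = p_h.sum(axis=1) / 3.0    # MB_h, Eq. (4)
    return mb_v, mb_h
```

A bright vertical band such as a welding bead then appears as a plateau in `mb_v`, which is what the 50% rule in Section 3.2 thresholds.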

3.2. Inspection Algorithm for Welding Bead

We inspect the quality of the welding bead by following the procedure illustrated in Figure 6. First, we reduce the computational complexity by cropping the welding bead area in the image, as shown in Figure 7. Then, we calculate the average brightness in the vertical direction using the image projection algorithm. During this process, the RGB values of all columns, from left to right of the image, are calculated, and the histogram of the average brightness is obtained, as shown in Figure 8. In this histogram, we can find points where the average brightness changes rapidly; these points correspond to the start and end of the welding bead. To determine these points, we calculate the axis value corresponding to 50% of the average brightness histogram using Equation (5):
$$MB[T] = \left( MB[max(pixel)] - MB[min(pixel)] \right) \times T + MB[min(pixel)], \quad \text{where } T = 0.5, \tag{5}$$
where $MB[T]$ denotes the height corresponding to 50% of the histogram of average brightness; it can be calculated in both the vertical ($ver$) and horizontal ($hor$) directions of the image. The height was set to 50% of the average brightness because, with a value lower than 50%, only part of the bead is segmented rather than the entire welding bead, whereas with a value higher than 50%, various areas of the base material are segmented together with the welding bead. For example, in Figure 7d, the x-axis positions of the upper and lower parts of the welding bead are not the same. Therefore, we determine the 50% midpoint of the brightness change in the histogram of Figure 8 to detect the welding bead area. In Figure 8, we obtain the axis values $WL[min(pixel)]$ and $WL[max(pixel)]$ from the two points where the mean brightness curve intersects this level in the vertical direction; they refer to the starting and ending positions of the welding bead in the image, respectively. Based on these results, we can detect the ROI of the welding bead in the vertical direction, $ROI_v$, which corresponds to the axis range from $WL[min(pixel)]$ to $WL[max(pixel)]$. Next, the average brightness is obtained in the horizontal direction from the $ROI_v$ image. Using the same procedure as for the vertical direction, the welding bead area is calculated for the horizontal direction, yielding $ROI_h$. Since $ROI_h$ results from processing in both the vertical and horizontal directions, only the welding bead region remains as the final output. From the $ROI_h$ image, we calculate the width $H_T$ and the height $V_T$ of the welding bead.
If $H_T$ and $V_T$ are greater than the threshold values $T_{hor}$ and $T_{ver}$, respectively, the welding bead is classified as good, as shown in Equations (6) and (7).
$$H_T = \begin{cases} good, & H_T > T_{hor} \\ bad, & otherwise, \end{cases} \tag{6}$$
$$V_T = \begin{cases} good, & V_T > T_{ver} \\ bad, & otherwise, \end{cases} \tag{7}$$
Considering that the average height and width of the welding bead in the product used for this study were 480 and 70 pixels, respectively, we decided, based on advice from the manufacturer, to regard a welding bead as satisfactory when its length is at least approximately half of these values. Therefore, we set the threshold values $T_{hor}$ and $T_{ver}$ to 40 and 200, respectively.
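Putting Equations (5)–(7) together, a minimal sketch of the boundary search and the good/bad decision might look like this (the function names and the profile-crossing implementation are our own reading of the procedure, not the authors' code):

```python
import numpy as np

T_HOR, T_VER = 40, 200  # thresholds stated in the paper

def bead_bounds(mb, t=0.5):
    """Equation (5): compute the level MB[T] at fraction t between the
    minimum and maximum of the mean-brightness profile, then return the
    first and last positions where the profile exceeds it, i.e. the
    WL[min(pixel)] and WL[max(pixel)] boundaries of the bead."""
    mb = np.asarray(mb, dtype=np.float64)
    level = (mb.max() - mb.min()) * t + mb.min()
    above = np.nonzero(mb > level)[0]
    if above.size == 0:
        return None                      # no bead-like region found
    return int(above[0]), int(above[-1])

def classify_bead(h_t, v_t, t_hor=T_HOR, t_ver=T_VER):
    """Equations (6) and (7): the bead is good only when both its
    measured width H_T and height V_T exceed the thresholds."""
    return "good" if (h_t > t_hor and v_t > t_ver) else "bad"
```

Running `bead_bounds` first on the vertical profile and then, within the resulting crop, on the horizontal profile mirrors the two-pass ROI extraction described above.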

4. Experimental Results

4.1. Data and Experimental Environment

In the experiment, we extracted a total of 480 good welding bead images from 120 SRDs produced by the welding robot. In addition, 12 defective welding bead products were added to verify whether a welding bead in an image is good or bad in the classification experiment. As shown in Figure 9, the defective products are images with a small welding bead or none at all. The input images were 256 by 256 pixels in width and height. We conducted the experiment at an illuminance of approximately 700 lux, with an average welding bead brightness of around 200. The PC specifications were as follows: Windows 10 Pro, an Intel i9 processor at 3.7 GHz, an NVIDIA GeForce RTX 3080 with 10 GB of GDDR6X memory, and Python 3.8.

4.2. Evaluation Metrics

To evaluate the performance of the proposed algorithm, we calculated the evaluation metrics based on the confusion matrix between the good and bad images as follows.
  • The accuracy measures the proportion of predicted results that match the actual results out of all the results:
    Accuracy = (TP + TN) / (TP + TN + FP + FN), (8)
  • The recall is the proportion of actually positive instances that were predicted as positive by the model:
    Recall = TP / (TP + FN), (9)
  • The precision is the proportion of instances predicted as positive by the model that were actually positive:
    Precision = TP / (TP + FP), (10)
  • The F1-score is the harmonic mean of precision and recall; it can accurately evaluate the performance of a model, particularly when the data labels are imbalanced:
    F1-score = 2 × Precision × Recall / (Precision + Recall), (11)
  • The specificity measures the proportion of actually negative instances that were correctly predicted as negative by the model:
    Specificity = TN / (TN + FP), (12)
  • The loss ratio measures the proportion of incorrectly predicted results relative to correctly predicted results:
    Loss ratio = (FP + FN) / (TP + TN), (13)
    where TP is true positive, TN is true negative, FP is false positive, and FN is false negative. Since the number of good images used in this study is greater than the number of bad images, we present several evaluation metrics together. In addition, when evaluating the proposed method, we followed the procedure in Figure 6 and used these evaluation metrics. However, when conducting experiments on the comparison methods for welding quality assessment, we generated masks after segmentation. Then, the horizontal and vertical lengths of the detected welding bead were measured on the mask, and the quality of the welding bead was evaluated according to Equations (6) and (7). Based on these results, we created a confusion matrix and compared the performance using the evaluation metrics described earlier.
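For reference, the metrics of Equations (8)–(13) are straightforward to compute from confusion-matrix counts (a generic sketch, not tied to the paper's implementation):

```python
def confusion_metrics(tp, tn, fp, fn):
    """Evaluation metrics of Equations (8)-(13) from confusion-matrix
    counts: true/false positives and true/false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "recall": recall,
        "precision": precision,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
        "loss_ratio": (fp + fn) / (tp + tn),
    }
```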

4.3. Performance of the Proposed Algorithm

We conducted an experiment to locate the positions of welding bead images in the vertical direction, as shown in Figure 10, following the algorithm procedure proposed in Figure 6. The right ‘Bright’ image in each result of Figure 10 represents the average RGB value along the vertical direction of the left ‘ROI’ image, which is indicated by a black curve. In the ‘Bright’ figure, the red line represents the level corresponding to 50% of the height of the black curve, which is the average RGB value by location. The two intersection points between the red line and the black curve correspond to the welding bead area in the vertical direction, as depicted in the left ‘ROI’ image of Figure 10. Using the proposed algorithm, the welding bead regions in the vertical direction were detected in all images (①–④), as shown in the results of Figure 10. Figure 11 shows the results of applying the proposed algorithm to an image that was divided vertically, as shown in Figure 10, to detect the horizontal welding bead areas. Similar to the previous case, the average RGB value along the horizontal direction of the left ‘ROI’ image of Figure 11 is represented by a black curve in the right ‘Bright’ image. The red line in the histogram corresponds to 50% of the height of the histogram. The values at both ends of the black curve where it intersects the red line correspond to the two red lines on the ‘ROI’ image, which indicate the area of the welding bead detected horizontally. As shown in the results of Figure 11, all welding bead regions were detected in the horizontal direction as well in sub-images ①–④. In Figure 12, the image on the left, labeled ‘ROI-1’, is the image before applying the proposed algorithm, and the image on the right, labeled ‘ROI-2’, is the final welding bead image after applying the proposed algorithm.
As shown in Figure 12, it was confirmed that only the welding bead could be extracted by the segmentation areas in the vertical and horizontal directions of all SRDs’ welding beads using the proposed algorithm. Therefore, the proposed algorithm was found to be capable of detecting the required length of the welding bead for the inspection of the welding bead’s appearance.

4.4. Comparative Experiments

To evaluate the performance of the proposed algorithm, we conducted a comparative experiment using several existing algorithms, including K-means with HSV [7], KNN, improved Grabcut [26], morphological geodesic active contour with Canny [21], adaptive thresholding, and Otsu thresholding. The results obtained using the existing algorithms are shown in Figure 13, where the red line in each image represents the segmented area. In the K-means with HSV results, the welding bead area and some other areas were segmented in images ① and ②, but the welding bead was not effectively segmented in images ③ and ④. In the KNN results, image ① was well segmented, but in images ② and ③, the welding bead area and other areas were segmented together, and in image ④, an area larger than the welding bead was segmented. In the improved Grabcut, adaptive thresholding, and Otsu results, the welding bead area and other areas were also segmented together. For the morphological geodesic active contour with the Canny method, the welding bead area in each image should be set individually to create a bounding box, but because of the large amount of data, we designated regions at the same location for segmenting the welding beads. However, the welding beads were not properly segmented in any of the images, and more areas were segmented than the welding beads. For the U-Net algorithm, the training and test data were split in a ratio of 7:3, and a quality inspection experiment was performed. Figure 13g shows that the first image was successfully segmented, while only some bright areas of the welding bead were segmented in images ②–④. In Clustering with SCA, the welding bead was not segmented at all, similar to the results obtained with the morphological geodesic active contour using the Canny method. Thus, in the segmentation experiment, the KNN, improved Grabcut, and Otsu algorithms showed relatively high accuracy compared to the others.
However, in most cases, the segmentation results of the comparison methods detected more areas that were not related to the welding bead, or divided the welding bead area into more segments than necessary. Based on the results of the comparison methods, it is difficult to confirm the horizontal and vertical lengths of the welding beads in the segmented image obtained using these algorithms. Therefore, when comparing the results of the experiments shown in Figure 12 and Figure 13, the comparison methods did not perform better than the proposed algorithm.
An experiment was conducted to classify welding beads as either good or bad using the proposed algorithm and the comparison methods, as shown in Table 1 and Table 2. Table 1 displays the confusion matrix of the classification results for all methods; for U-Net, the results are on the test data. As shown in Table 1, only the proposed algorithm accurately classified all the bad products, whereas the other methods, except for K-means with HSV, were unable to accurately inspect any bad products. Furthermore, both the K-means with HSV and adaptive thresholding methods misclassified many actual good products as bad. Table 2 was created from the confusion matrix in Table 1 using the evaluation metrics of Equations (8)–(13). As shown in Table 2, the proposed algorithm classified all good and defective welding beads correctly, at 100%. The improved Grabcut, KNN, morphological geodesic active contour with Canny, Otsu thresholding, U-Net, and Clustering with SCA showed high accuracy and precision, at approximately 97%; however, none of them outperformed the proposed algorithm. Notably, U-Net, a machine learning method specializing in image segmentation, did not perform better than the proposed algorithm. As can be seen from the specificity values in Table 2, all comparison methods except K-means with HSV resulted in 0. This implies that the improved Grabcut, KNN, Otsu thresholding, morphological geodesic active contour with Canny, and adaptive thresholding algorithms were unable to classify any of the 12 defective welding bead products accurately, as shown in Table 1. The K-means with HSV algorithm classified 4 of the 12 defective products as bad. However, the accuracy, recall, and loss ratio of this algorithm were 76.22%, 77.29%, and 31.20%, respectively, which differed significantly from those of the proposed algorithm.
The adaptive thresholding algorithm had the lowest accuracy, recall, precision, F1-score and specificity, which were at 37.20%, 38.13%, 93.85%, 54.22%, and 0.00%, respectively. Therefore, the adaptive thresholding algorithm showed the poorest performance.
The Otsu thresholding and adaptive thresholding algorithms had the fastest computation times, at 16.60 and 16.74 s, respectively, followed by our proposed algorithm at 24.79 s. On the other hand, the improved Grabcut, U-Net, and Clustering with SCA had the longest computation times, at 669.56, 358.44, and 1054.85 s, respectively. The reason the proposed algorithm takes longer to compute than the thresholding algorithms is that it cannot perform segmentation simultaneously in both the vertical and horizontal directions. When finding the welding bead location, the proposed algorithm calculates the mean brightness $MB_v = \sum_{k=0}^{2} P_V[k] / 3$ using Equations (1) and (2) vertically from left to right. The algorithm then uses Equation (5) to find the boundary between the background and the welding bead in the histogram of mean brightness and, subsequently, the welding bead area. After that, the algorithm repeats the same process in the horizontal direction. Therefore, finding the location of the welding bead is performed separately in the vertical and horizontal directions, which takes slightly longer than the thresholding algorithms. However, we prioritized accuracy over speed and focused on improving it, even at the cost of some computation time. As shown in Table 2, our algorithm achieved higher accuracy than the other methods.

5. Conclusions and Discussion

In this paper, we proposed an algorithm for the quality inspection of the welding bead region using an image projection algorithm. One limitation of existing methods for detecting welding beads is that they do not accurately delineate the welding bead area due to factors such as image color and noise. To overcome this, we proposed a quality inspection method that uses statistical analysis to classify whether the welding bead is of good or bad quality. The proposed algorithm analyzes the brightness values in the vertical and horizontal directions of the ROI in the entire image, and the welding bead area is found and segmented at the points where the brightness value changes rapidly, improving the inspection accuracy of the welding bead. The proposed algorithm reduces the computation time by designating the welding bead area to be analyzed in the input image and then performing the analysis. Additionally, the proposed algorithm was able to detect both the horizontal and vertical lengths of the welding bead for all data. To compare the performance of the proposed algorithm, we conducted a comparative experiment using existing algorithms, including the improved Grabcut, K-means with HSV, KNN, morphological geodesic active contour with Canny, adaptive thresholding, and Otsu thresholding. With these methods, however, it was difficult to detect the horizontal and vertical lengths of the welding bead. Most of the compared algorithms did not correctly classify the defective welding bead products due to similarities in texture between the base metal and welding bead, as well as discoloration around the bead after welding. Furthermore, the proposed algorithm was much faster than the other comparative algorithms, except for the adaptive thresholding and Otsu thresholding algorithms. Therefore, the experiments confirmed that the proposed algorithm outperforms existing methods in terms of accuracy.
In future work, we plan to expand our research beyond classifying defective welding beads to study various types of welding bead defects and to investigate the geometric and morphological properties of the welding bead. In addition, since our study was limited to 492 images, we plan to propose methods that achieve high performance even with a small amount of data, using deep learning techniques such as few-shot learning.

Author Contributions

Conceptualization, J.L. and J.K.; methodology, J.L. and J.K.; software, J.L. and J.K.; validation, J.L., H.C. and J.K.; formal analysis, J.L. and J.K.; investigation, J.L. and J.K.; resources, J.L. and J.K.; data curation, J.L. and J.K.; writing—original draft preparation, J.L.; writing—review and editing, H.C. and J.K.; visualization, J.K.; supervision, J.K.; project administration, J.K.; funding acquisition, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Research Foundation of Korea (grant number CD202112500001) and the Small and Medium Business Technology Innovation Development Project from TIPA (grant number 00220304).

Data Availability Statement

Not applicable.

Acknowledgments

We would like to extend our heartfelt gratitude to Kyeongmin Yum for providing critical feedback and English language editing, which greatly improved our paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kurt, H.I.; Oduncuoglu, M.; Yilmaz, N.F.; Ergul, E.; Asmatulu, R. A comparative study on the effect of welding parameters of austenitic stainless steels using artificial neural network and Taguchi approaches with ANOVA analysis. Metals 2018, 8, 326.
2. Xu, F.; Xu, Y.; Zhang, H.; Chen, S. Application of sensing technology in intelligent robotic arc welding: A review. J. Manuf. Process. 2022, 79, 854–880.
3. He, Y.; Li, D.; Pan, Z.; Ma, G.; Yu, L.; Yuan, H.; Le, J. Dynamic modeling of weld bead geometry features in thick plate GMAW based on machine vision and learning. Sensors 2020, 20, 7104.
4. Jia, N.; Li, Z.; Ren, J.; Wang, Y.; Yang, L. A 3D reconstruction method based on grid laser and gray scale photo for visual inspection of welds. Opt. Laser Technol. 2019, 119, 105648.
5. Zhang, L.; Basantes-Defaz, A.C.; Ozevin, D.; Indacochea, E. Real-time monitoring of welding process using air-coupled ultrasonics and acoustic emission. J. Adv. Manuf. Technol. 2019, 101, 1623–1634.
6. Nasor, M.; Obaid, W. Segmentation of osteosarcoma in MRI images by K-means clustering, Chan-Vese segmentation, and iterative Gaussian filtering. IET Image Process. 2021, 15, 1310–1318.
7. Hassan, M.R.; Ema, R.R.; Islam, T. Color image segmentation using automated K-means clustering with RGB and HSV color spaces. J. Comput. Sci. Technol. 2017, 17, 26–33.
8. Khrissi, L.; El Akkad, N.; Satori, H.; Satori, K. Clustering method and sine cosine algorithm for image segmentation. Evol. Intell. 2022, 15, 669–682.
9. Qiao, D.; Zhang, X.; Ren, Y.; Liang, J. Comparison of the Rock Core Image Segmentation Algorithm. In Proceedings of the 2022 IEEE 7th International Conference on Image, Vision and Computing (ICIVC), Xi’an, China, 26–28 July 2022; pp. 335–339.
10. Debelee, T.G.; Schwenker, F.; Rahimeto, S.; Yohannes, D. Evaluation of modified adaptive k-means segmentation algorithm. Comput. Vis. Media 2019, 5, 347–361.
11. Zheng, X.; Lei, Q.; Yao, R.; Gong, Y.; Yin, Q. Image segmentation based on adaptive K-means algorithm. EURASIP J. Image Video Process. 2018, 2018, 68.
12. Wazarkar, S.; Keshavamurthy, B.N.; Hussain, A. Region-based segmentation of social images using soft KNN algorithm. Procedia Comput. Sci. 2018, 125, 93–98.
13. Rangel, B.M.S.; Fernández, M.A.A.; Murillo, J.C.; Ortega, J.C.P.; Arreguín, J.M.R. KNN-based image segmentation for grapevine potassium deficiency diagnosis. In Proceedings of the 2016 IEEE International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 24–26 February 2016; pp. 48–53.
14. Hossain, E.; Hossain, M.F.; Rahaman, M.A. A color and texture based approach for the detection and classification of plant leaf disease using KNN classifier. In Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’sBazar, Bangladesh, 7–9 February 2019; pp. 1–6.
15. Prabu, S. Object Segmentation Based on the Integration of Adaptive K-means and GrabCut Algorithm. In Proceedings of the IEEE 2022 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 24–26 March 2022; pp. 213–216.
16. Saranya, M.P.; Praveena, V.; Dhanalakshmi, M.B.; Karpagavadivu, M.K.; Chinnasamy, P. Diagnosis of gastric cancer using mask R–CNN and Grabcut segmentation method. J. Posit. Sch. Psychol. 2022, 6, 203–206.
17. Li, Y.; Zhang, J.; Gao, P.; Jiang, L.; Chen, M. Grabcut image segmentation based on image region. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 311–315.
18. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-net and its variants for medical image segmentation: A review of theory and applications. IEEE Access 2021, 9, 82031–82057.
19. Papolu, J.S.; Prasad, M.B.; Vasavi, S.; Geetha, G. A Framework for Sea Breeze Front Detection from Coastal Regions of India Using Morphological Snake Algorithm. ECS Trans. 2022, 107, 585.
20. Quan, M.; Liu, Q.; Chen, X.; Deng, X.; He, K.; Liu, Y. Application of improved geodesic active contour model in kidney CT image segmentation. Chin. J. Tissue Eng. Res. 2023, 27, 171.
21. Mlyahilu, J.N.; Mlyahilu, J.N.; Lee, J.E.; Kim, Y.B.; Kim, J.N. Morphological geodesic active contour algorithm for the segmentation of the histogram-equalized welding bead image edges. IET Image Process. 2022, 16, 2680–2696.
22. Huang, D.; Liao, S.; Sunny, A.I.; Yu, S. A novel automatic surface scratch defect detection for fluid-conveying tube of Coriolis mass flow-meter based on 2D-direction filter. Measurement 2018, 126, 332–341.
23. Zhang, M.; Cheng, S.; Cao, X.; Chen, H.; Xu, X. Entropy-Based Locally Adaptive Thresholding for Image Segmentation. SSRN 4010416. 2022. Available online: http://dx.doi.org/10.2139/ssrn.4010416 (accessed on 30 May 2023).
24. Zhang, C.; Xie, Y.; Liu, D.; Wang, L. Fast threshold image segmentation based on 2D fuzzy Fisher and random local optimized QPSO. IEEE Trans. Image Process. 2016, 26, 1355–1362.
25. Ma, G.; Yue, X. An improved whale optimization algorithm based on multilevel threshold image segmentation using the Otsu method. Eng. Appl. Artif. Intell. 2022, 113, 104960.
26. Alhassan, A.M.; Alzheimer’s Disease Neuroimaging Initiative; Australian Imaging Biomarkers and Lifestyle Flagship Study of Ageing. Enhanced Fuzzy Elephant Herding Optimization-Based OTSU Segmentation and Deep Learning for Alzheimer’s Disease Diagnosis. Mathematics 2022, 10, 1259.
27. Fazilov, S.K.; Yusupov, O.R.; Abdiyeva, K.S. Mammography image segmentation in breast cancer identification using the otsu method. Web Sci. Int. Sci. Res. J. 2022, 3, 196–205.
28. Zhu, W.; Chen, Q.; Wei, C.; Li, Z. A segmentation algorithm based on image projection for complex text layout. In Proceedings of the AIP Conference (AIP Publishing LLC), Wuhan, China, 27–29 October 2017; p. 030011.
Figure 1. Shear reinforcement of dual anchorage (SRD). (a) SRD products-1 (b) SRD products-2.
Figure 2. SRD welding beads. (a) Welding bead-1; (b) Welding bead-2; (c) Welding bead-3; (d) Welding bead-4.
Figure 3. SRD production and inspection. (a) Production machine; (b) Inspection table.
Figure 4. Segmentation results from morphological geodesic active contour. The red line represents the results of the detected welding bead using the mentioned method. (a) Result-1; (b) Result-2; (c) Result-3; (d) Result-4.
Figure 5. Image projection for welding bead image.
Figure 6. Procedures of the proposed algorithm.
Figure 7. ROI of welding bead in the SRD image. (a) Welding bead-1; (b) Welding bead-2; (c) Welding bead-3; (d) Welding bead-4.
Figure 8. The process of calculating MB[min(pixel)], MB[max(pixel)], and MB[T].
Figure 9. The defective welding bead products. (a) Defective-1; (b) Defective-2.
Figure 10. Four examples of the segmentation results for the vertical direction of the welding bead.
Figure 11. Four examples of the segmentation results for the horizontal direction of the welding bead.
Figure 12. Four examples of the segmentation results using projection algorithm (left: welding bead, right: segmentation result).
Figure 13. Segmentation results of comparison algorithms. The red line represents the results of the detected welding bead using the mentioned method. (a) K-means with HSV; (b) KNN; (c) Improved Grabcut; (d) Morphological geodesic active contour with Canny; (e) Adaptive thresholding; (f) Otsu thresholding; (g) U-Net; (h) Clustering with SCA.
Table 1. Confusion matrix of the proposed and comparison algorithms.

| Algorithm | TN | FP | FN | TP |
|---|---|---|---|---|
| Proposed algorithm | 12 | 0 | 0 | 480 |
| Improved Grabcut [15] | 0 | 12 | 0 | 480 |
| Morphological geodesic active contour with Canny [21] | 0 | 12 | 0 | 480 |
| KNN | 0 | 12 | 1 | 479 |
| Thresholding–Otsu | 0 | 12 | 2 | 478 |
| K-means with HSV [7] | 4 | 8 | 109 | 371 |
| Thresholding–Adaptive | 0 | 12 | 297 | 183 |
| U-Net [18] | 0 | 3 | 0 | 145 |
| Clustering with SCA [8] | 0 | 12 | 0 | 480 |
Table 2. Classification evaluation metrics and computation time of the proposed and comparison algorithms.

| Algorithm | Accuracy | Recall | Precision | F1-Score | Specificity | Loss Ratio | Time (s) |
|---|---|---|---|---|---|---|---|
| Proposed algorithm | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.0000 | 24.79 |
| Improved Grabcut [15] | 0.9756 | 1.0000 | 0.9756 | 0.9877 | 0.0000 | 0.0250 | 691.56 |
| Morphological geodesic active contour with Canny [21] | 0.9756 | 1.0000 | 0.9756 | 0.9877 | 0.0000 | 0.0250 | 54.95 |
| KNN | 0.9735 | 0.9979 | 0.9756 | 0.9866 | 0.0000 | 0.0271 | 187.02 |
| Thresholding–Otsu | 0.9715 | 0.9958 | 0.9755 | 0.9856 | 0.0000 | 0.0293 | 16.60 |
| K-means with HSV [7] | 0.7622 | 0.7729 | 0.9789 | 0.8638 | 0.3333 | 0.3120 | 43.28 |
| Thresholding–Adaptive | 0.3720 | 0.3813 | 0.9385 | 0.5422 | 0.0000 | 0.1689 | 16.74 |
| U-Net [18] | 0.9797 | 1.0000 | 0.9797 | 0.9898 | 0.0000 | 0.0207 | 358.44 |
| Clustering with SCA [8] | 0.9756 | 1.0000 | 0.9756 | 0.9877 | 0.0000 | 0.0250 | 1054.85 |
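As a sanity check, the evaluation metrics follow from the confusion matrix counts by the standard definitions. The KNN counts used below (TN = 0, FP = 12, FN = 1, TP = 479) are our reading of the corresponding row of Table 1.

```python
def metrics(tn, fp, fn, tp):
    """Standard classification metrics from confusion matrix counts."""
    total = tn + fp + fn + tp
    accuracy = (tn + tp) / total
    recall = tp / (tp + fn)                      # sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, recall, precision, f1, specificity

# KNN row: TN=0, FP=12, FN=1, TP=479
acc, rec, prec, f1, spec = metrics(0, 12, 1, 479)
print(round(rec, 4), round(prec, 4), round(f1, 4))  # → 0.9979 0.9756 0.9866
```

These values agree with the KNN row of Table 2 up to rounding.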
