1. Introduction
Welding is a production-based industry that underpins many other industries, such as the automobile, construction, and shipbuilding industries. One example of a welded product is the SRD used at construction sites, as shown in Figure 1a,b. SRDs are used to manage the shear force and shear stress of concrete structures and to improve their strength and safety. In other words, welding work is very important because SRDs are directly related to safety, so inspections to verify product quality are essential. Therefore, when welding is performed by a machine, as shown in Figure 1c, the welded areas can be inspected on an inspection table, as shown in Figure 1d. To this end, various welding quality inspection studies using sensor and image technologies have been published. Sensor-based inspection mainly uses the voltage, current, and gas flow values that determine welding quality [1]. Among sensor-based inspection methods, various studies have been published, including a method that classifies sensor data measured in real time using a deep learning model and a method that analyzes the waveforms of sensor data through deep learning [2,3,4,5,6,7,8]. Among 2D image-based inspection methods, approaches that segment the welding region in images taken with 2D vision cameras and then inspect the products for defects have been presented, and, more recently, methods that inspect quality using KNN (K-nearest neighbor), K-means, improved Grabcut, etc., have been presented [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. Meanwhile, sensor-based inspection has the advantage of high accuracy but the disadvantage that it cannot identify the welding location. In contrast, 2D image inspection can determine the location of the welding bead, but its accuracy decreases when the image contains noise or measurement errors, and some methods are slow due to their high computational costs. Therefore, research is needed to satisfy the dual requirements of fast inspection time and high accuracy.
In this paper, we propose a method that combines sensor-based inspection using average current, average voltage, and mixed gas flow values with 2D image inspection to address these limitations. For the image inspection, we employ an image projection-based method known for its high accuracy and fast inspection speed [16]. The proposed method combines the complementary strengths of sensor and image inspection, enabling precise inspections. Furthermore, it improves the inspection speed: when the sensor inspection classifies a product as bad, the product receives a final defect determination without proceeding to the image inspection stage. The remainder of this paper is organized as follows:
Section 2 introduces the conventional methods for inspecting welding beads using sensor and image data. Section 3 describes the algorithm proposed in this paper, and Section 4 presents the experimental results obtained using the proposed and existing algorithms. Finally, Section 5 presents the conclusions of this study.
2. Related Works
Various inspection methods, such as sensor-based inspection and image inspection, as mentioned in the introduction, have been studied to assess the quality of welding beads. However, many small- and medium-sized enterprises still rely on visual inspection for welding quality assessments. This inspection method can suffer from reduced accuracy and slower inspection times, as the inspection results may vary depending on the inspector’s condition and expertise. Consequently, there are limitations to performing extensive inspections solely through human visual inspection.
In sensor-based inspection, data such as real-time current, voltage, and gas flow rates are primarily used for analysis. This is because welding quality is determined by various parameters during the welding process, including the voltage, current, gas level, welding time, and temperature [1]. Among inspection methods using sensor data, several studies have been conducted, including research using real-time current and voltage data to determine the presence of the welding bead, research establishing criteria for classifying welding bead defects using artificial neural networks (ANNs), algorithms predicting three types of defective welding conditions occurring during resistance spot welding using ANN algorithms, and studies that convert sensor data into images and train ANNs to assess welding quality [2,3,4,5]. Additionally, methods for inspecting welding quality using acoustic signals, which are highly correlated with the current magnitude and gas supply, have been proposed [6,7,8]. Tao et al. proposed a digital twin system for weld sound detection [6]. This system used the SeCNN-LSTM model to recognize and classify welding quality based on the characteristics of strong acoustic signal time sequences. Gao introduced a subjective evaluation model that detects welding process defects based on the hearing of expert welders [7]. Furthermore, Horvat proposed an algorithm for classifying welding quality by analyzing the major acoustic signals of GMAW welding [8]. However, when relying solely on sensor data for welding quality inspection, it may be possible to determine the presence of welding defects, but confirming the location and shape of the welding beads is challenging, potentially allowing defects to go undetected. Additionally, converting sensor data into images involves complex calculations and lengthy inspection times, and acoustic signal extraction and analysis methods can be quite intricate, making such inspections far from straightforward. Therefore, relying solely on sensor-based inspection has its limitations. Recently, research on data analysis using multi-sensor/image fusion has also been presented. Qiu et al. surveyed various recently published machine learning-based multi-sensor information fusion studies [9]. Of note, they introduced wearable sensors, smart wearable devices, and key application areas, and proposed fusion methods for multi-modal and multi-location sensors. Liu et al. presented a multi-modal vision fusion model (MVFM) for comprehensive policy learning using object detection, depth estimation, and semantic segmentation [10]. However, when evaluating its practical suitability beyond the specific experimental setup, its limitations related to platform specificity, data availability, and generalizability must be taken into account. Furthermore, Gasmi et al. explored the integration of satellite data from various sensors at different resolutions to predict topsoil clay content, finding that fused spectral band images consistently outperform spectral indices, yielding a notable 10% improvement in prediction accuracy [11]. This approach highlights the potential of multi-sensor data fusion for soil mapping and precision agriculture, particularly in northeastern Tunisia. However, while this methodology offers valuable insights into soil attribute prediction using satellite data fusion, its scope, complexity, and applicability to a broader range of soil properties and geographic regions remain limited.
Methods for inspecting welding bead quality using image data can be broadly categorized into image segmentation methods and machine learning (ML) methods. Image segmentation divides an image into pixel-level units to differentiate objects. Among these methods, the geodesic active contour segments objects by slowly evolving curves driven by the edge components at the boundaries of the region of interest [9,10]. The morphological snake method segments objects reliably and quickly by performing morphological operations [11]. Recently, methods for welding bead segmentation using the morphological geodesic active contour (MorphGAC) algorithm have been introduced [12]. However, because of changes in object location between images and susceptibility to variations in light intensity and resolution, achieving complete image segmentation is challenging, resulting in significant measurement errors. Mlyahilu et al. proposed a segmentation method using the morphological geodesic active contour algorithm with histogram equalization to normalize the distribution of welding bead images [13]. One drawback of the morphological geodesic active contour algorithm is that it requires an appropriate adjustment of the area parameters when creating bounding boxes for segmentation; using the same parameters for all welding bead images may lead to inaccurate detection of the welding beads. The Grabcut method is an image segmentation algorithm that extracts the foreground, rather than the background, from an image. Recently, research on automatic mask generation using Grabcut and on algorithms based on Mask R-CNN and Grabcut for segmenting cancerous areas in images has been published [14,15]. However, the accuracy of these methods may decrease when dealing with complex backgrounds or with objects that resemble the background. Additionally, they require repetitive operations, resulting in longer computation times. Therefore, in welding bead inspection, the similar colors of the base material and the welding bead can reduce the detection performance. Recently, welding bead image inspection methods based on the image projection technique have been proposed [16]. This method uses image projection to obtain pixel values, draws an image histogram, and finds the x-axis values corresponding to a specific height. It then detects the welding bead's area by cutting the region of interest (ROI) at those x-values. This inspection method is faster than other image inspections, as it examines only the ROI, and it offers high accuracy because it inspects the brightness values of the entire ROI.
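The projection idea described above can be sketched in a few lines of plain Python. This is an illustrative toy, not the implementation from [16]: the ROI is a small synthetic grayscale image, and all names (`roi`, `column_profile`, `bead_span`) are our own.

```python
# Sketch of image-projection-based bead localization: average each
# column's brightness, then keep the columns whose projection exceeds
# 50% of the histogram height.

def column_profile(roi):
    """Mean brightness of each column (vertical projection)."""
    height = len(roi)
    width = len(roi[0])
    return [sum(roi[y][x] for y in range(height)) / height for x in range(width)]

def bead_span(profile):
    """Leftmost/rightmost columns whose projection exceeds 50% of the
    histogram height, i.e. (max + min) / 2."""
    cut = (max(profile) + min(profile)) / 2
    above = [x for x, v in enumerate(profile) if v >= cut]
    return (above[0], above[-1]) if above else None

# Synthetic 6x10 ROI: a bright "bead" occupies columns 3..6.
roi = [[200 if 3 <= x <= 6 else 40 for x in range(10)] for _ in range(6)]
print(bead_span(column_profile(roi)))  # -> (3, 6)
```

Because only one projection pass over the ROI is needed, the cost is linear in the number of ROI pixels, which is consistent with the speed advantage claimed for this family of methods.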
In studies employing machine learning and deep learning (DL), various methods, such as KNN and K-means, have been introduced. KNN is an algorithm that classifies data into clusters based on their similarity, as measured by distance. Recently, research has been published on the use of S-KNN for detecting regions of interest, the background, and ambiguous areas, as well as on segmenting images of crop leaf data using KNN and histograms [17,18,19]. However, KNN is computationally intensive, making it challenging to apply in real-time quality inspection. K-means is an algorithm that groups data with similar features into k clusters and is widely used in image segmentation. Recent research includes image segmentation algorithms combining adaptive K-means and Grabcut, as well as studies on image segmentation in the RGB and HSV color spaces [20,21,22,23,24,25]. Nevertheless, K-means requires users to specify the number of clusters in advance, and the results can vary depending on the chosen number. Additionally, other methods have been proposed, such as a novel image segmentation approach optimized using the sine cosine algorithm (SCA), an approach analyzing welding surfaces using the Faster R-CNN algorithm to determine defects, a method that segments welding areas using an entropy algorithm and evaluates the presence of defects through convolutional neural networks, and an approach that employs deep neural networks (DNNs) to evaluate defects after training on images captured by cameras with CCD (charge-coupled device) image sensors [26,27,28,29]. However, the accuracy of these methods may not reach levels suitable for commercial use, and further experimentation is required to account for various variables.
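To make the K-means limitation discussed above concrete, the following minimal sketch clusters pixel intensities in 1-D. It is a generic illustration (not taken from the cited works), and note that `k` must be supplied up front, which is exactly the drawback mentioned.

```python
# Minimal 1-D K-means on pixel intensities. Pure Python; all names
# (kmeans_1d, pixels, ...) are illustrative.

def kmeans_1d(values, k, iters=20):
    """Cluster scalar values into k groups; returns sorted final centroids."""
    # Spread the initial centroids evenly over the value range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid's bucket.
        buckets = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            buckets[nearest].append(v)
        # Update step: move each centroid to its bucket's mean.
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return sorted(centroids)

# Dark background pixels around 40, bright bead pixels around 200.
pixels = [38, 40, 42, 44, 198, 200, 202, 204]
print(kmeans_1d(pixels, k=2))  # -> [41.0, 201.0]
```

With `k=2` the background/bead split is recovered cleanly here, but on a real welding image where the bead and base material have similar brightness, the resulting clusters can be unstable, which motivates the combined approach proposed in the next section.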
3. Proposed Algorithm
As previously mentioned, among the various welding quality inspections, inspections using sensor data offer speed and high accuracy but cannot determine the exact welding location. On the other hand, inspections using 2D image data can identify the welding location but face several challenges, such as low accuracy when the image contains noise and measurement errors, as well as high computational costs. In this paper, to address these issues, we propose an inspection method that combines sensor data inspection with 2D image data inspection. For the 2D image data inspection, we use the image projection method proposed by Lee et al. [16]. The proposed method combines the complementary strengths of sensor and image data inspection, effectively resolving the existing accuracy issues. Furthermore, if a defect is detected in the sensor data inspection, the inspection skips the image data inspection and directly concludes that the product is bad, reducing unnecessary computation and improving the inspection speed. The procedure of the proposed method is shown in Figure 2.
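The two-stage control flow can be sketched as below. The predicates `sensor_ok` and `image_ok` are stand-ins for the sensor and image inspection stages described in the rest of this section, not the paper's actual implementations; the point is only the early exit when the sensor stage fails.

```python
# Sketch of the two-stage pipeline: the (slower) image stage runs only
# when every sensor check passes.

def inspect(product, sensor_ok, image_ok):
    """Return 'bad' early if the sensor check fails; otherwise defer
    to the image inspection for the final decision."""
    if not sensor_ok(product):
        return "bad"          # early exit: the image stage is skipped
    return "good" if image_ok(product) else "bad"

# Toy predicates for demonstration only.
sensor_ok = lambda p: p["avg_current"] > 0 and p["avg_voltage"] > 0 and p["gas"] > 0
image_ok = lambda p: p["bead_width"] >= 10

print(inspect({"avg_current": 120, "avg_voltage": 22, "gas": 15, "bead_width": 12},
              sensor_ok, image_ok))  # -> good
print(inspect({"avg_current": 0, "avg_voltage": 0, "gas": 0, "bead_width": 12},
              sensor_ok, image_ok))  # -> bad (image stage never runs)
```

The early return is what yields the speed improvement claimed above: defective products identified by the sensors never incur the image-processing cost.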
In the sensor data inspection, welding bead defects are assessed using the average current, average voltage, and mixed gas flow data. The mixed gas used is a combination of argon and carbon dioxide (Ar + CO₂), which prevents contamination. We therefore display the three values generated during one welding operation on a single graph, as shown in Figure 3. When some of the sensor data values measured during the welding of a single product are plotted, as in Figure 3a, the blue line represents the average current, the orange line represents the average voltage, and the gray line represents the mixed gas value. The current average and voltage average are the average values obtained from the current and voltage, respectively. The x-axis represents the welding time, and the y-axis represents the measured values. In Figure 3a, the work took place from 12:05:08.44 to 12:05:11.23; at 12:05:08.44, the average current value was 0, the average voltage was 1.2, and the gas value was 0. Figure 3a shows normal data. In contrast, Figure 3b depicts gas-defective data, in which all values are 0 due to a gas supply problem, while Figure 3c displays current-defective data, in which the current becomes problematic and drops to zero partway through. In other words, when defects occur, the sensor values tend to be either zero or very small.
In each sensor dataset, approximately the first and last three values were unstable, so these values were removed first. Then, the average value A_i was calculated for each of the three sensor datasets, where i indexes the average current, average voltage, and mixed gas flow, respectively. The lower bound LB_i was calculated from each previously computed average value A_i using a threshold value T_l, as follows:

LB_i = A_i × T_l.      (1)

The error count E_i was calculated for each of the three sensor datasets, as shown in Equation (2), by counting the sensor data values s_i,j that were smaller than the sensor's lower bound LB_i obtained from Equation (1):

E_i = |{ j : s_i,j < LB_i }|.      (2)

Next, the error count E_i calculated in Equation (2) was evaluated against a threshold value T_e, as described in Equation (3):

result_i = good if E_i < T_e; bad if E_i ≥ T_e.      (3)

If E_i < T_e, the product was considered good, whereas if E_i ≥ T_e, it was considered bad.
In the sensor data inspection, if any one of the three sensors was determined to be bad, the product was ultimately classified as bad, and the inspection proceeded to the next product. On the other hand, if all three sensors were deemed good, the image data inspection was performed.
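The sensor-side decision can be sketched as follows. The threshold ratio, error threshold, and trim count are illustrative values, not the paper's tuned parameters, and the function names are our own.

```python
# Sketch of the sensor-side decision: trim the unstable edge samples,
# compute the mean, derive a lower bound from a threshold ratio, and
# count how many samples fall below it.

def sensor_check(samples, t_lower=0.5, t_error=3, trim=3):
    """Return True (good) if fewer than t_error trimmed samples fall
    below t_lower times the trimmed mean."""
    stable = samples[trim:-trim]           # drop unstable start/end values
    average = sum(stable) / len(stable)    # per-sensor average
    lower_bound = average * t_lower        # lower bound from the average
    errors = sum(1 for s in stable if s < lower_bound)  # error count
    return errors < t_error                # compare against the threshold

good_current = [0, 0, 1, 118, 120, 122, 119, 121, 2, 1, 0]
bad_current  = [0, 0, 1, 118, 120, 0, 0, 0, 2, 1, 0]
print(sensor_check(good_current))  # -> True
print(sensor_check(bad_current))   # -> False
```

In the combined pipeline, this check would run once per sensor stream (current, voltage, gas), and a single failing stream classifies the product as bad without invoking the image stage.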
In the image data inspection, the welding area was extracted and its major dimensions were measured to detect welding defects using the image inspection method based on the image projection algorithm [16]. For color image data, each pixel value was represented as P(x, y, c), where c corresponds to the color channel (red: 0, green: 1, blue: 2). We calculated the mean brightness V(x) in the vertical direction within the ROI of the image using Equation (4):

V(x) = (1 / (3(y_e − y_s + 1))) × Σ_{y = y_s}^{y_e} Σ_{c = 0}^{2} P(x, y, c),      (4)

where y_s represents the starting position of the y-axis in the image and y_e represents its ending position. From the mean brightness in the vertical direction, as shown in Figure 4, we constructed an image histogram and then calculated the maximum value V_max and the lowest value V_min on the y-axis. Next, we calculated the position H_50 corresponding to 50% of the height, as described in Equation (5):

H_50 = (V_max + V_min) / 2.      (5)
Using the calculated H_50 value obtained above, we found the minimum value x_min and the maximum value x_max on the x-axis and then cut the ROI based on these values. Here, x_min and x_max represent the x-axis positions between which the welding bead is presumed to exist within the ROI. The same procedure was followed for the horizontal direction; in this case, the mean brightness H(y) in the horizontal direction was calculated following Equation (6):

H(y) = (1 / (3(x_e − x_s + 1))) × Σ_{x = x_s}^{x_e} Σ_{c = 0}^{2} P(x, y, c),      (6)
where x_s represents the starting position of the x-axis in the image and x_e represents its ending position. Using the values calculated in both the vertical and horizontal directions, the dimensions of the welding bead were measured in the extracted bead region. Then, the threshold values T_w and T_h were applied in Equations (7) and (8) below. If the calculated values exceed these threshold values, the product was considered good; otherwise, it was classified as bad:

x_max − x_min ≥ T_w,      (7)
y_max − y_min ≥ T_h.      (8)
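The full projection-based dimension check can be sketched end to end on a synthetic grayscale ROI. The 50% cut level follows the description above; the thresholds `t_w` and `t_h`, the test image, and all function names are illustrative, not the paper's calibrated values.

```python
# End-to-end sketch of the projection-based dimension check:
# project vertically and horizontally, cut each profile at 50% of its
# height, and compare the resulting bead span against thresholds.

def projection_span(profile):
    """Indices where the profile exceeds 50% of its height."""
    cut = (max(profile) + min(profile)) / 2
    hits = [i for i, v in enumerate(profile) if v >= cut]
    return hits[0], hits[-1]

def bead_dimensions(roi):
    """Bead width and height from the vertical/horizontal projections."""
    h, w = len(roi), len(roi[0])
    vert = [sum(roi[y][x] for y in range(h)) / h for x in range(w)]   # vertical projection
    horiz = [sum(roi[y][x] for x in range(w)) / w for y in range(h)]  # horizontal projection
    x_min, x_max = projection_span(vert)
    y_min, y_max = projection_span(horiz)
    return x_max - x_min, y_max - y_min

def is_good(roi, t_w, t_h):
    """Good only if both measured dimensions meet their thresholds."""
    width, height = bead_dimensions(roi)
    return width >= t_w and height >= t_h

# Bright bead spanning columns 2..7 and rows 1..4 of an 8x10 ROI.
roi = [[220 if 2 <= x <= 7 and 1 <= y <= 4 else 30 for x in range(10)]
       for y in range(8)]
print(bead_dimensions(roi))         # -> (5, 3)
print(is_good(roi, t_w=4, t_h=2))   # -> True
```

A bead that is too short or too narrow (e.g. from an interrupted arc) would fail one of the two dimension checks and be classified as bad at this final stage.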