Article

Accurate Inspection and Super-Resolution Reconstruction for Additive Manufactured Defects Based on Stokes Vector Method and Deep Learning

1 College of Intelligent Science and Technology, National University of Defense Technology, Changsha 410073, China
2 Hunan Provincial Key Laboratory of Ultra-Precision Machining Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Photonics 2024, 11(9), 874; https://doi.org/10.3390/photonics11090874
Submission received: 15 August 2024 / Revised: 12 September 2024 / Accepted: 14 September 2024 / Published: 18 September 2024
(This article belongs to the Special Issue New Perspectives in Optical Design)

Abstract:
Defects in additive manufacturing processes are closely related to the mechanical and physical properties of the components. However, the extreme conditions of high temperature, intense light, and powder during the manufacturing process present significant challenges for defect detection. Additionally, the high reflectivity of metallic components can cause pixels in image sensors to become overexposed, resulting in the loss of many defect signals. This paper therefore proposes an accurate inspection and super-resolution reconstruction method for additive manufactured defects based on the Stokes vector method and deep learning, in which the Stokes vectors, polarization degree, and polarization angles of the inspected defects are effectively utilized to suppress the high reflectivity of metallic surfaces, enhance the contrast of defect regions, and highlight the boundaries of defects. Furthermore, a modified SRGAN model designated SRGAN-H is presented, employing an additional convolutional layer and the Hardswish and Tanh activation functions to accelerate the convergence of the network and improve the reconstruction of the additive manufactured defect region. The experimental results demonstrate that the SRGAN-H model outperformed SRGAN and traditional SR reconstruction algorithms on the images of Stokes vectors, polarization degree, and polarization angles. For the scratch and hole test sets, the PSNR values were 33.405 and 31.159, respectively, and the SSIM values were 0.890 and 0.896, respectively, reflecting the effectiveness of the SRGAN-H model in super-resolution reconstruction of scratch and hole images. For the scratch and hole images chosen in this study, the PSNR values of SRGAN-H for single-image super-resolution reconstruction ranged from 31.86786 to 43.82374, higher than the results obtained by the unimproved SRGAN algorithm.

1. Introduction

Additive manufacturing is a highly competitive, cost-effective, and flexible manufacturing technology widely used in fields such as aerospace, military, medical equipment, energy, and automotive manufacturing. However, its production process still faces significant limitations in quality and repeatability, both of which can be severely affected by defects (such as cracks and spheroidization) arising during the additive manufacturing process [1]. Defects in the selective laser melting (SLM) process are closely related to the mechanical and physical properties of the parts, and extreme conditions such as high temperature, intense light, and powder splashing during processing pose great challenges to defect detection. In response to these problems and detection requirements, this paper adopts visual inspection to enhance defect detection capabilities and improve the quality of target images, focusing on defect image detection and defect image super-resolution reconstruction. This provides solutions and references for defect detection and the optimization of processing parameters for laser additive manufacturing workpieces, which are of great significance for both the theory and application of defect detection.
So far, SLM has successfully processed various alloys and metals, including aluminum alloys [2], stainless steel [3], nickel-based superalloys [4], and titanium alloys [5]. Many factors affect part quality in the SLM process, including powder size, laser power, and scanning speed; improper parameter control can lead to defect formation and serious degradation of the physical and mechanical properties of parts [6,7,8,9]. Because defects seriously degrade the physical and mechanical properties of the workpiece, defect detection technology is of great significance for improving the quality and repeatability of processed parts. Many defect detection methods used in the SLM process have also been applied in fields such as the nuclear industry, aerospace, mechanical manufacturing, and the petrochemical industry [10].
Depending on the sensor type, additive manufacturing defect detection technologies include high-speed cameras, infrared thermal imagers, photodiodes and thermocouples, X-ray microimaging, and acoustic techniques. Valeev et al. found that any change in input thermal parameters during the SLM process leads to a direct change in thermal radiation [11]; photodiodes and thermocouples are therefore used to measure radiation conditions. Zhang et al. employed a high-speed camera to monitor the dynamic changes in the melt pool and proposed a novel approach based on a wavelength-selective filter (350–800 nm) to enhance the contrast between the melt pool and the surrounding fluid. This was further developed into an image processing method that extracts features from the melt pool, the fluid, and the particles ejected from the melt pool [12], designed to provide a comprehensive understanding of the SLM process. Bisht et al. [13] introduced a novel SLM melt pool monitoring tool, designated DMP (developed melt pool), which employs a Ge photodiode to monitor the melt pool together with a manual data analysis method to assess the quality of the manufactured parts. They demonstrated that the DMP tool can detect and mark changes in the signal during the manufacturing process, thereby enabling effective prediction of part quality.
Yamamoto et al. [14] investigated the melting and solidification processes, employing a dual-color thermometer to assess the energy equilibrium and the quality of the solidifying material during SLM. Their findings indicated that the temperature distribution in the heat-affected zone was asymmetrical, with a more gradual temperature gradient observed in the direction of material solidification than in the direction of powder solidification.
Krauss et al. [15] investigated the impact of insufficient heat dissipation on the precision of manufacturing processes, employing thermal distribution analysis to identify porosity and irregularities. Bartlett [16] developed a novel and full-field monitoring system based on infrared thermography, which was utilized to assess the quality of AlSi10Mg components during SLM processing, demonstrating that infrared thermography was capable of detecting 82% of the identified defects in the SLM-fabricated parts.
However, the aforementioned detection methods employ single-signal measurement techniques, which are limited in terms of precision, scope, and information content. This renders them inadequate for the detection of surface defects in complex environments. In response to these challenges, a detection system based on polarization imaging has been proposed. By optimizing the optical configuration of the imaging system, clear images of the defects can be obtained.
In comparison with traditional optical detection techniques, polarization imaging technology offers a distinctive advantage, as it enables the acquisition of information on the spectral characteristics, polarization, and spatial configuration of the target. By leveraging polarization technology for defect detection, it is possible to extract valuable insights, such as structural patterns, surface materials, and surface roughness from the polarization information of the target. This approach can effectively enhance the precision and reliability of the detection process [17,18]. Ref. [19] demonstrated that, in the context of laser-based additive manufacturing, the use of metallic components with high reflectivity can result in the saturation of image sensors, leading to the obfuscation of a significant proportion of the defect information. Consequently, the incorporation of polarization technology within the framework of defect detection systems has been proposed as a means of mitigating the adverse effects of metallic surfaces with high reflectivity, enhancing the contrast between defect regions and facilitating the delineation of defect boundaries.
Characterizing defects on the surface of additive manufactured components is a challenging process, requiring not only hardware improvements but also significant time and resources. Consequently, enhancing image quality through super-resolution software algorithms represents a crucial area of research in this field. The accurate detection and extraction of defect regions, along with their geometric characteristics, are essential for the effective and efficient characterization of manufactured components.
Many scholars have made significant contributions to the field of super-resolution reconstruction. Existing super-resolution reconstruction algorithms can be broadly classified into three categories: interpolation-based, reconstruction-based, and learning-based. Interpolation-based algorithms primarily use the surrounding pixel values of the existing image to insert new pixels, producing the reconstructed image; commonly used interpolation algorithms include nearest neighbor, bilinear, and bicubic interpolation. However, these algorithms often yield reconstructed images with blurred edges and certain geometric distortions, as discussed in [20,21,22]. In the domain of learning-based approaches, Dong et al. [23] pioneered the integration of convolutional neural networks (CNNs) into super-resolution reconstruction, proposing SRCNN, which learns the mapping between low-resolution (LR) and high-resolution (HR) images and thereby reconstructs HR images from LR inputs. In contrast to traditional algorithms and machine learning methods, SRCNN employs a mere three convolutional layers; its superior performance paved the way for subsequent research. RCAN [24] marked the first introduction of channel attention, which exploits channel-specific characteristics and significantly enhances reconstruction quality compared with previous algorithms. Convolution kernels at different scales are complementary: large kernels extract large-scale features, while small kernels capture fine detail.
Li et al. [25] proposed an image super-resolution reconstruction method based on a multi-scale residual network (MSRN), which employs a multi-scale residual block (MSRB) as its fundamental nonlinear mapping unit and uses a dual-path structure with 3 × 3 and 5 × 5 convolutions to extract multi-scale features. Ledig et al. [26] proposed a GAN-based image super-resolution reconstruction network (SRGAN), whose objective was to recover the high-frequency details lost in low-resolution images. It was capable of enlarging low-resolution natural images by a factor of four while enhancing the visual perceptual quality of the resulting SR images. Subsequently, a ranking-based generative adversarial network (RankSRGAN) was proposed by Zhang et al. [27], employing a learning-based approach to simulate perceptual metrics and thereby enhancing the visual quality of generated images without modifying the underlying generative network. A physical GAN framework [28] was proposed to enhance the fidelity of generated images by refining the discriminator network, achieved by introducing a new criterion for evaluating the consistency between HR and SR images.
However, the majority of super-resolution reconstruction algorithms are designed for the recovery of natural images; consequently, there is a significant gap in the research literature on super-resolution reconstruction of SLM defect images. This paper proposes a super-resolution reconstruction algorithm based on an improved SRGAN model for reconstructing defect images on the surface of laser-based additive manufacturing parts. As a continuation of previous work on SRGAN [26], we present a defect detection and deep learning system called SRGAN-H, whose objective is to enhance the quality of real SLM defect images reconstructed as SR images. The model is based on the SRGAN framework and comprises a generator and a discriminator. In the generator, the Hardswish activation function [29] is introduced into the deep residual module to improve the accuracy of the network. As with the existing SRGAN model, SRGAN-H employs multiple convolutional layers for feature extraction, and a skip connection links the input to the network output, ensuring network stability [26]. At the output layer, a convolutional layer is added and the Tanh activation function is introduced to normalize the output to the range [−1, 1], which has been shown to alleviate the gradient vanishing and gradient explosion issues commonly encountered in deep networks, allowing them to converge more rapidly.
In this paper, the detection and representation of defects in SLM processes are discussed. By introducing polarization techniques into defect detection systems, the detection capability for SLM defects is enhanced. Four sets of defect detection images were collected at polarization angles of 0°, 45°, 90°, and 135° and converted into Stokes vectors, polarization degrees, and polarization angles using mathematical formulas. Subsequently, SRGAN-H was applied to these four groups of images. The SR images exhibited enhanced PSNR, SSIM, and SD metrics, providing a theoretical and practical foundation for further improvement of processing parameters and the development of defect classification and prediction capabilities. The paper is organized as follows: Section 2 and Section 3 introduce defect detection and characterization in the SLM process, respectively, establishing the methodology of the research. Section 4 presents the experimental findings, discussing the defect detection images and the SR images produced by SRGAN-H, thereby verifying the effectiveness of the proposed methods. Finally, Section 5 summarizes the research and outlines future work.

2. Defect Detection System Based on Stokes Properties

Common metal materials are strongly reflective, and the high reflectivity of metal workpiece surfaces can cause pixels of the image sensor to become overexposed. To address this issue, polarization technology and a defect detection system based on Stokes properties are introduced. By changing the polarization angle to obtain multiple sets of polarized images, the Stokes vector images, polarization degree images, and polarization angle images were computed. The contrast of defect regions in the target image, the defect contour information, and the suppression of high reflectivity were then analyzed, demonstrating the importance of Stokes properties in defect detection.

2.1. Polarization of Light and Stokes Vector Method

In the polarization-based defect detection process, the Stokes vector method was used to represent polarized light and extract the polarization state of the defect detection images. The Stokes vector method represents both the polarization state and the intensity of polarized light [30]: any polarization state of light can be described by the four Stokes components, defined as:
$$
S(x,y)=\begin{bmatrix} S_0(x,y)\\ S_1(x,y)\\ S_2(x,y)\\ S_3(x,y)\end{bmatrix}
=\begin{bmatrix} I_0(x,y)+I_{90}(x,y)\\ I_0(x,y)-I_{90}(x,y)\\ I_{45}(x,y)-I_{135}(x,y)\\ I_{right}(x,y)-I_{left}(x,y)\end{bmatrix}
\tag{1}
$$
where $S(x,y)$ denotes the Stokes vector. According to the Stokes vector method, any polarized light can be characterized by the angle of polarization (Aop), the degree of polarization (Dop), and the ellipticity of polarization $\varpi$:
$$
\mathrm{Aop}=\frac{1}{2}\arctan\frac{S_2}{S_1},\qquad
\mathrm{Dop}=\frac{\sqrt{S_1^2+S_2^2+S_3^2}}{S_0},\qquad
\varpi=\frac{1}{2}\arcsin\frac{S_3}{S_0}
\tag{2}
$$
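As a concrete illustration, the linear Stokes components and the derived Aop/Dop maps can be computed directly from the four polarizer-angle intensity images. The sketch below is our NumPy illustration, not the authors' implementation: `arctan2` is substituted for `arctan` for quadrant robustness, $S_3$ is omitted (it requires circular-analyzer measurements), and a small epsilon guards the division.

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    """Linear Stokes components from four polarizer-angle images.

    i0, i45, i90, i135 are intensity images captured with the polarizer
    at 0°, 45°, 90°, and 135°. S3 needs circular components and is omitted.
    """
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # 0°/90° linear polarization difference
    s2 = i45 - i135      # 45°/135° linear polarization difference
    return s0, s1, s2

def aop_dop_linear(s0, s1, s2):
    """Angle and degree of (linear) polarization from the Stokes components."""
    aop = 0.5 * np.arctan2(s2, s1)                        # polarization angle
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    return aop, dop
```

For fully horizontally polarized light (all intensity at 0°), this yields Dop = 1 and Aop = 0, as expected.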
When the incident light is imaged by the polarization detection system, the Stokes vector of the incident light is denoted by $S(x,y)$, and the Stokes vector of the outgoing light $S'(x,y)$ is given by:
$$
S'(x,y)=M_u\cdot S(x,y)=
\begin{bmatrix}
M_{11} & M_{12} & M_{13} & M_{14}\\
M_{21} & M_{22} & M_{23} & M_{24}\\
M_{31} & M_{32} & M_{33} & M_{34}\\
M_{41} & M_{42} & M_{43} & M_{44}
\end{bmatrix}
\cdot
\begin{bmatrix} S_0(x,y)\\ S_1(x,y)\\ S_2(x,y)\\ S_3(x,y)\end{bmatrix}
\tag{3}
$$
where $M_u$ is the Mueller matrix of the optical system.
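The Mueller-matrix transform above is a per-pixel 4 × 4 linear map. A minimal NumPy sketch (our illustration, not part of the detection system's software) applies one Mueller matrix across a whole stack of Stokes-component images at once:

```python
import numpy as np

def apply_mueller(mueller, stokes):
    """Apply a 4x4 Mueller matrix to every pixel of a Stokes image stack.

    mueller: (4, 4) Mueller matrix of the optical element.
    stokes:  (4, H, W) stack of Stokes-component images S0..S3.
    Returns the outgoing (4, H, W) Stokes stack S' = M_u . S.
    """
    # Contract the matrix's column axis with the stack's component axis.
    return np.tensordot(mueller, stokes, axes=([1], [0]))
```

With the identity matrix as `mueller`, the output equals the input stack, which is a quick sanity check of the axis ordering.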

2.2. Detection System Based on Polarization Technology

The subject of this study was an SLM workpiece, as illustrated in Figure 1. The instrumentation employed in this investigation included the subject workpiece, a light-emitting diode (LED) light source, a linear polarizer, an electric rotating platform, a complementary metal-oxide-semiconductor (CMOS) camera, and a computer. The linear polarizer utilized in this experiment was a THORLABS LPVISE100-A with a diameter of 25 mm. The CMOS image sensor had a resolution of 2448 × 2448 pixels with a single pixel size of 3.45 μm, and the lens focal length was 50 mm. The motorized rotating platform was a Standa FPSTA-8MPR16-1 (Standa, Wrocław, Poland), which enabled 360° rotation with 0.75 arcmin step resolution for polarization filter rotation control; the controller was an 8SMC4-USB.
The computer utilized in the experiment for image acquisition and analysis was a ThinkPad S2 equipped with an Intel Core i5-8250U processor operating at 1.8 GHz. During the experiment, the test specimen was illuminated by the LED light source, and the reflected light was captured by the CMOS image sensor; the highly reflective metallic specimen was viewed through the polarizing filter mounted on the rotating platform. By controlling the rotation of the platform to adjust the angle of the polarizing filter, the computer captured and analyzed four sets of images with different polarization angles, from which $S_1$, $S_2$, Aop, and Dop were computed using the Stokes vector method. The specimen was built from Ti6Al4V powder on a commercial powder bed fusion (PBF) system (Arcam A2X) designed to withstand process temperatures above 1100 °C. The power of the electron beam could be adjusted between 50 and 3000 W. A layer thickness of 50 µm was chosen, with a spot size of approximately 250 µm at the focal position. The scanning speed was 4530 mm/s and the vacuum level was 0.2 Pa. The specimen was manufactured on a 150 mm × 150 mm stainless steel baseplate mounted on a 200 mm × 200 mm plate. Thus, the preliminary detection of workpiece defects was completed.

3. Fundamentals for the SRGAN-H Defect Reconstruction Model

After defects were detected in the $S_1$, $S_2$, Aop, and Dop images, the objective was to enhance the quality of the obtained images through super-resolution software algorithms, facilitating precise extraction of the boundaries and fundamental characteristics of the defects. This section presents an improved SRGAN [26] model, SRGAN-H, which optimizes the generator structure to effectively reconstruct SLM process defect images. The method employs a generator-discriminator network to achieve super-resolution reconstruction: the generator is trained to deceive the discriminator, while the discriminator is trained to distinguish true images from super-resolution images. Through this self-generating and self-discriminating process, a high-quality SR (super-resolution reconstruction) image is produced.

3.1. Generator

During the adversarial process, the goal of the generator is to produce SR images that confuse the discriminator as much as possible, i.e., to minimize the probability that the discriminator classifies its generated images as fake, while the goal of the discriminator is to distinguish between real and fake images as accurately as possible, i.e., to maximize the probability of correct classification. Ledig et al. [26] solved this minimax problem by alternately optimizing the discriminator and the generator, which can be represented by Formula (4):
$$
\min_{\theta_G}\max_{\theta_D}\;
\mathbb{E}_{I^{HR}\sim p_{train}(I^{HR})}\!\left[\log D_{\theta_D}(I^{HR})\right]
+\mathbb{E}_{I^{LR}\sim p_{G}(I^{LR})}\!\left[\log\!\left(1-D_{\theta_D}\!\left(G_{\theta_G}(I^{LR})\right)\right)\right]
\tag{4}
$$
The formula is used to train a generator G, which is then used to deceive the discriminator D. The discriminator is designed to distinguish between SR images and real images. This approach enables the generator to be trained to produce images that are highly similar to genuine images.
Regarding the generator network structure, SRGAN-H is based on the SRResNet model, comprising three convolutional layers with 3 × 3 kernels and 64 feature maps, and incorporates Parametric ReLU [31] and Hardswish [29] as activation functions to counteract the blurred edges and gradient disappearance caused by the convolutional layers; a batch normalization layer [32] is then incorporated. Ramachandran et al. [33] demonstrated that Swish is more effective than ReLU in deeper models: merely replacing ReLU units with Swish units is sufficient to increase classification accuracy by 0.9% on the ImageNet dataset. Howard et al. [29] further showed that Hardswish surpasses Swish in numerical stability and computation speed. Consequently, this section introduces the Hardswish activation function to enhance the accuracy and speed of the model, which can be expressed as follows:
$$
\mathrm{hswish}(x)=x\cdot\frac{\mathrm{ReLU6}(x+3)}{6}
\tag{5}
$$
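The activation above is simple enough to state in a few lines of NumPy; this is a generic sketch of the Hardswish definition, not the authors' network code:

```python
import numpy as np

def relu6(x):
    """ReLU clipped to the range [0, 6]."""
    return np.clip(x, 0.0, 6.0)

def hardswish(x):
    """Hardswish: x * ReLU6(x + 3) / 6."""
    return x * relu6(x + 3.0) / 6.0
```

Note the piecewise behavior: for x ≥ 3 the function equals x, for x ≤ −3 it is exactly 0, and in between it smoothly interpolates, approximating Swish at a lower computational cost.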
After the completion of the residual module, the image is processed through convolutional layers and batch normalization layers [32], and the sub-pixel convolution module [34] is used to achieve a ×4 magnification of the image. At the image output stage, a convolutional layer and Tanh function are introduced in the generator network to normalize the output to the [−1, 1] range. This not only reduces the differentiation between pixels during upscaling but also prevents gradient explosion during neural network training. This establishes the generator network, with its specific structure shown in Figure 2.
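The sub-pixel convolution step mentioned above rearranges channels into spatial resolution (the "pixel shuffle" operation). The following NumPy sketch of that rearrangement is our illustration under the usual (C·r², H, W) → (C, H·r, W·r) convention; it is not the authors' implementation:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).

    This is the channel-to-space step of sub-pixel convolution: each group
    of r*r channels supplies the r x r sub-pixels of one output pixel.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channel axis into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Applying this twice with r = 2 (or once with r = 4 after a suitably wide convolution) realizes the ×4 magnification described above.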

3.2. Discriminator

Following Radford et al. [35], Ledig et al. [26] trained a discriminator network with 3 × 3 filter kernels, increasing the number of kernels from 64 to 512 and using Leaky ReLU activation (α = 0.2) while avoiding max-pooling throughout the network, to solve the maximization problem in Formula (4). The resulting 512 feature maps are passed through two dense layers with Leaky ReLU and sigmoid activations, ultimately yielding a probability that the input image is real or fake. The network structure is illustrated in Figure 3. In the output, Dense(1024) and Dense(1) are used, where 'Dense(1024)' denotes a fully connected layer with 1024 neurons and 'Dense(1)' a fully connected layer with one neuron.

3.3. Perceptual Loss Function

The definition of the loss function is crucial to the quality of the reconstructed images. SRGAN-H retains Ledig et al.'s design of the perceptual loss function, which is a weighted combination of content loss and adversarial loss [26]:
$$
l^{SR}=l_X^{SR}+10^{-3}\,l_{Gen}^{SR}
\tag{6}
$$
where $l_X^{SR}$ represents the content loss and $l_{Gen}^{SR}$ the adversarial loss. For the training of SRResNet, Ledig et al. utilized the MSE loss function to optimize PSNR, which can be expressed as:
$$
l_X^{SR}=l_{MSE}^{SR}=\frac{1}{r^2WH}\sum_{x=1}^{rW}\sum_{y=1}^{rH}\left(I_{x,y}^{HR}-G_{\theta_G}(I^{LR})_{x,y}\right)^2
\tag{7}
$$
For the training of SRGAN, Ledig et al. defined the VGG loss based on the ReLU activation layers of the 19-layer VGG network pre-trained by Simonyan and Zisserman [36], where $\phi_{i,j}$ denotes the feature map obtained by the j-th convolution (after activation) preceding the i-th max-pooling layer within the VGG19 network. The VGG loss is then defined as the Euclidean distance between the feature representations of the reconstructed image $G_{\theta_G}(I^{LR})$ and the reference image $I^{HR}$:
$$
l_{VGG/i,j}^{SR}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\left(\phi_{i,j}(I^{HR})_{x,y}-\phi_{i,j}\!\left(G_{\theta_G}(I^{LR})\right)_{x,y}\right)^2
\tag{8}
$$
For the adversarial loss, Ledig et al. used Formula (9), where $l_{Gen}^{SR}$ is the generative loss defined over the discriminator's probabilities on all training samples, and $D_{\theta_D}(G_{\theta_G}(I^{LR}))$ is the probability that the reconstructed image $G_{\theta_G}(I^{LR})$ is classified as a real image [26]. Minimizing this loss pushes the generator to produce outputs to which the discriminator assigns high probabilities of being real:
$$
l_{Gen}^{SR}=\sum_{n=1}^{N}-\log D_{\theta_D}\!\left(G_{\theta_G}(I^{LR})\right)
\tag{9}
$$
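Given a batch of discriminator outputs and a precomputed content loss, the perceptual loss combination above reduces to a few lines. This is a minimal NumPy sketch of the loss arithmetic only (the clipping guard against log(0) is our addition, and computing the content term from a real network is out of scope here):

```python
import numpy as np

def adversarial_loss(d_probs):
    """Sum of -log D(G(I_LR)) over the batch (the generator's adversarial term)."""
    d = np.clip(np.asarray(d_probs), 1e-12, 1.0)  # guard against log(0)
    return float(-np.log(d).sum())

def perceptual_loss(content_loss, d_probs):
    """Perceptual loss: content term plus 1e-3 times the adversarial term."""
    return content_loss + 1e-3 * adversarial_loss(d_probs)
```

When the discriminator is fully fooled (all probabilities equal 1), the adversarial term vanishes and the perceptual loss reduces to the content loss alone.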

3.4. Representation of Defective Images of SLM

For scratch-type (#1) and hole-type (#2) defect images, four original images are obtained by varying the polarizer angle (0°, 45°, 90°, and 135°). However, these four images exhibit poor visual quality and cannot display the defect contours and information. Therefore, using Formulas (1) and (2), the transformed images (Aop, Dop, $S_1$, $S_2$) are calculated. The transformed images demonstrate significant improvements in contrast and brightness, though the defect contours remain unclear. Consequently, the SRGAN-H model is employed to perform super-resolution (SR) reconstruction on the images (Aop, Dop, $S_1$, $S_2$). This approach ultimately facilitates the detection and processing of all defect images. The detailed process is illustrated in Figure 4.

4. Experiments

4.1. Quantitative Evaluation Metrics

To quantitatively evaluate the quality of the detected and characterized images, the experimental process involves using the V-channel value to assess the quality of defect detection images in the SLM manufacturing process. Then, PSNR, SSIM, and SD metrics are employed to evaluate the quality of defect image super-resolution reconstruction.

4.1.1. Peak Signal-to-Noise Ratio (PSNR)

PSNR (peak signal-to-noise ratio) is a metric for image quality, typically employed to assess the similarity between a reconstructed image and the original image [37]. It can be represented by Formula (10):
$$
PSNR=10\cdot\log_{10}\!\left(\frac{MAX_I^2}{MSE}\right)=20\cdot\log_{10}\!\left(\frac{MAX_I}{\sqrt{MSE}}\right)
\tag{10}
$$
where MSE represents mean squared error, and a higher PSNR value indicates a higher similarity between the reconstructed image and the original image, resulting in lower image distortion.
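As a sketch of how this metric might be computed, the following NumPy function implements the formula above for 8-bit images (this is a generic illustration, not the evaluation code used in the experiments):

```python
import numpy as np

def psnr(ref, test, max_i=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images: no distortion
    return 10.0 * np.log10(max_i ** 2 / mse)
```

Identical images give infinite PSNR, while maximally different 8-bit images (all-zero vs. all-255) give 0 dB, bracketing the scale of the values reported in Section 4.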

4.1.2. Structural Similarity Degree (SSIM)

SSIM stands for the structural similarity index, which is a metric used to measure image quality. It takes into account factors such as luminance, contrast, and structure to evaluate the similarity between two images [38], and can be represented by Formula (11):
$$
SSIM(x,y)=\frac{\left(2\mu_x\mu_y+C_1\right)\left(2\sigma_{xy}+C_2\right)}{\left(\mu_x^2+\mu_y^2+C_1\right)\left(\sigma_x^2+\sigma_y^2+C_2\right)}
\tag{11}
$$
where $\mu_x$, $\mu_y$ are the means of x and y; $\sigma_x^2$, $\sigma_y^2$ are the variances of x and y; $\sigma_{xy}$ is their covariance; and $C_1$, $C_2$ are constants. An SSIM value closer to 1 indicates higher similarity in structure, luminance, and contrast between the reconstructed image and the original image, and hence better image quality.
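Standard SSIM is evaluated over a sliding window (e.g., an 11 × 11 Gaussian) and averaged; the sketch below instead evaluates the formula once over the whole image, purely to make the roles of the means, variances, covariance, and the stabilizing constants concrete. It is our simplified illustration, with the conventional constants $C_1 = (0.01L)^2$, $C_2 = (0.03L)^2$:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """SSIM formula evaluated globally over the whole image (no sliding window)."""
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

An image compared with itself yields exactly 1, the upper bound of the metric.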

4.1.3. Standard Deviation (SD)

Standard deviation (SD) typically refers to the standard deviation of the brightness or color distribution of an image, serving as a statistical measure of the differences between image pixels. It can be represented by Formula (12), where a larger SD value indicates greater differences between pixels.
$$
SD=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}
\tag{12}
$$
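This population-form (1/N) standard deviation maps directly onto NumPy's default; a one-line illustration (ours, not the authors' code):

```python
import numpy as np

def image_sd(img):
    """Standard deviation of the pixel values, population form (1/N)."""
    return float(np.std(img))  # np.std uses ddof=0, matching the 1/N formula
```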

4.2. Generating Defect Detection Images of SLM

Four sets of defect detection images for scratch-type (#1) and hole-type (#2) were acquired at polarization angles of 0°, 45°, 90°, and 135°, as shown in Figure 5.
According to Formulas (1) and (2), the polarization angle images Aop, the degree of polarization images Dop, and the Stokes vector images $S_1$ and $S_2$ were calculated, as shown in Figure 6 and Figure 7, respectively.
Furthermore, the V-channel values of Aop, Dop, $S_1$, and $S_2$ were evaluated. As shown in Figure 7, the Stokes vector images $S_1$ and $S_2$ represented the linear polarization information of the defect image and had a more uniform grayscale distribution than the original image. The edge contours of some key defect areas were highlighted, but some details were lost.
In contrast, the degree of polarization images Dop and the polarization angle images Aop provided clearer details of the defect areas than the Stokes vector images $S_1$ and $S_2$. The Stokes-based defect detection method thus effectively suppressed the high reflectivity of the metal surface, enhanced contrast, and highlighted defect contours.
Next, applying the SRGAN-H technology, the Aop, Dop, S 1 , and S 2 images were reconstructed through super resolution (SR) to further enhance their detail and edge information.

4.3. Super-Resolution (SR) Image Reconstruction

4.3.1. Dataset and Experiment Details

We randomly selected 25,372 images from the COCO2014 dataset as the training set. The test set included commonly used datasets such as BSD100 [39], Set5 [40], and Set14 [41], along with defect categories of scratch-type (#1) and hole-type (#2) detected by the above detection system. All networks were trained using NVIDIA RTX 3090 24G GPU (NVIDIA Corporation, Santa Clara, CA, USA).
All experiments were conducted with a ×4 scaling factor between low- and high-resolution images, equivalent to a ×16 reduction in image pixels. Following the training conditions of Ledig et al. [26], for each mini-batch we randomly cropped 16 HR sub-images of 96 × 96 pixels from different training images. SRGAN-H was trained on top of an SRResNet pre-trained with MSE loss to accelerate its convergence; the learning rate was set accordingly, and the number of residual modules was set to 16.
The iteration rounds of SRResNet were set to 130, while the iteration rounds of SRGAN-H were set to 50. In SRGAN-H, the generator and discriminator were alternately updated to train the model.
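The mini-batch preparation described above (random 96 × 96 HR crops paired with ×4-downsampled LR inputs) can be sketched as follows. This is an assumption-laden illustration: box-average downsampling stands in for the bicubic downsampling used by Ledig et al., and a single-channel (grayscale) image is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hr_lr_pair(hr_image, crop=96, scale=4):
    """Randomly crop a crop x crop HR patch and build its x`scale` LR twin.

    hr_image: 2D grayscale image array.
    Returns (hr_patch, lr_patch) with shapes (crop, crop) and (crop/scale, crop/scale).
    """
    h, w = hr_image.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    hr = hr_image[top:top + crop, left:left + crop]
    s = crop // scale
    # Box-average downsample: each scale x scale block becomes one LR pixel.
    lr = hr.reshape(s, scale, s, scale).mean(axis=(1, 3))
    return hr, lr
```

Drawing 16 such pairs per step reproduces the mini-batch geometry used in training: 96 × 96 HR targets against 24 × 24 LR inputs.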

4.3.2. The Evaluation of Testing Set

In the experiment, widely recognized datasets including BSD100, Set5, and Set14, along with the datasets of scratch-type (#1) and hole-type (#2) images, were selected to evaluate the reconstruction performance of SRGAN-H. The testing metrics were the average PSNR and SSIM over each entire testing set, as shown in Table 1. It can be observed that the SRGAN-H model demonstrated good reconstruction performance for SLM defect images, making it a suitable improvement for SR reconstruction of SLM defect images. Specifically, the PSNR of scratch-type images was higher than that of hole-type images, indicating lower distortion in the SR reconstruction of scratch-type images. On the other hand, the SSIM of hole-type images was higher than that of scratch-type images, suggesting a higher degree of similarity in brightness, contrast, and structure in the SR reconstruction of hole-type images. For the test sets BSD100, Set5, and Set14, the PSNR values were 24.488, 26.788, and 25.390, respectively, with SSIM values all below 0.850. In contrast, for scratch-type and hole-type images, the PSNR values exceeded 30 and the SSIM values were above 0.850, indicating that SRGAN-H achieves better reconstruction performance on scratch-type and hole-type images.

4.3.3. Single Image SR Reconstruction

To validate the single image super-resolution (SR) reconstruction performance of the SRGAN-H model, we randomly cropped portions of the scratch-type and hole-type Aop, Dop, S1, and S2 images obtained in the previous section as samples for SR reconstruction. The improved SRGAN-H model was compared against the original SRGAN model and the nearest-neighbor and bicubic interpolation methods to demonstrate its role in enhancing image quality. As shown in Figure 8, for scratch-type images, the image quality improved in terms of PSNR, SSIM, and SD after processing with the SRGAN-H model, with these metrics outperforming the other super-resolution methods.
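The interpolation baselines can be sketched as follows; nearest-neighbour replication corresponds to the "nearest" baseline, while bilinear interpolation is shown in place of bicubic for brevity. Both operate on (H, W, C) arrays, and the names are illustrative.

```python
import numpy as np

def upsample_nearest(lr, scale=4):
    """Replicate each pixel into a scale x scale block (nearest-neighbour)."""
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def upsample_bilinear(lr, scale=4):
    """Separable linear interpolation on an (H, W, C) array, align-corners
    style (a stand-in for the bicubic baseline)."""
    h, w = lr.shape[:2]
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]   # fractional row weights
    wx = (xs - x0)[None, :, None]   # fractional column weights
    top = lr[y0][:, x0] * (1 - wx) + lr[y0][:, x1] * wx
    bot = lr[y1][:, x0] * (1 - wx) + lr[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```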
Comparing the reconstruction results of the four groups of images in Figure 8, the Stokes vector images of the scratch-type defects showed the best reconstruction performance, with higher PSNR values, SSIM closer to 1, and smaller SD values. This confirmed that, after SR processing, the image was closer to the original, with lower distortion and brightness and contrast that more closely resembled the original, indicating higher image quality. The PSNR values of the regions reconstructed by the SRGAN-H model all exceeded 30, with SSIM values above 0.88, demonstrating the effectiveness of the model.
For the hole-type images, the image quality was also improved in terms of PSNR, SSIM, and SD after processing with the SRGAN-H model; the results are shown in Figure 9. Here, the polarization angle images Aop showed the best reconstruction performance, with higher PSNR, SSIM closer to 1, and smaller SD, indicating lower distortion and an SR image that better matched the original in contrast, brightness, and similarity. For super-resolution reconstruction of the selected regions, the PSNR values for SRGAN-H were all above 36 and the SSIM values were all above 0.97, confirming the effectiveness of the SRGAN-H model on partial defect regions.
From Figure 10, SRGAN-H outperformed the SRGAN, nearest, and bicubic methods in the PSNR, SSIM, and SD evaluations for both scratch-type (#1) and hole-type (#2) images, delivering the best reconstruction quality. Its SD values were smaller than those of the other three methods, indicating that the images reconstructed by SRGAN-H were more similar to the originals.
This indicated that the polarization angle images Aop, degree of polarization images Dop, and Stokes vector images S1 and S2 obtained through polarization imaging, after SR reconstruction with SRGAN-H, highlighted edge information more prominently and enriched the detail information. This compensated for the limitations of polarization detection technology, making the characterization of SLM process defects more comprehensive.
However, performing super-resolution reconstruction on only a limited region does not result in a significant improvement in visual effect. Therefore, we conducted super-resolution reconstruction on the entire defect area and examined the enlarged reconstructed images, as shown in Figure 11 and Figure 12. It was evident that, compared with the CNN algorithm, the SRGAN-H algorithm showed notable improvements in edge extraction, image contrast, and brightness. However, as indicated in Table 2 and Table 3, when performing super-resolution reconstruction on the entire defect area, SRGAN-H’s PSNR and SSIM values were significantly lower than those of CNN, and the overall image evaluation was poorer. Specifically, the PSNR values for SRGAN-H in reconstructing the entire scratch defect ranged from 23.62629 to 32.55771, which was lower than the CNN range of 25.89608 to 32.88573. The SSIM values for SRGAN-H were also considerably lower than those for CNN. Similar observations can be seen in Table 4 and Table 5. This was because SRGAN-H was not designed according to the PSNR and SSIM evaluation metrics but aimed to enhance visual quality. Thus, when performing super-resolution reconstruction on the entire defect image, SRGAN-H processed images showed a better visual effect compared with CNN, as illustrated in Figure 11 and Figure 12.
However, for the reconstruction of the entire Aop images with holes, the SSIM value of SRGAN-H dropped as low as 0.16376, and for the entire Dop images, the SSIM value also dropped to 0.48044. This indicates that the SRGAN-H algorithm still has room for improvement in handling images with holes.

5. Conclusions

This study introduced Stokes characteristics into the defect detection system and built a dedicated polarization channel imaging system. By rotating the polarization angle to acquire multiple sets of polarization images, the Stokes vector, degree of polarization, and polarization angle images were computed. Evaluating these images using the v-channel value showed a significant improvement in the contrast of defect areas, defect contour information, and suppression of high reflection, demonstrating the value of incorporating polarization technology into defect detection systems. Experimental results indicated that this method effectively extracted and characterized surface defects such as cracks, scratches, and pores, providing solutions and references for defect detection and processing-parameter optimization of laser additive manufacturing components.
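The Stokes quantities used throughout this work follow the standard linear relations for intensities measured behind a polarizer at 0°, 45°, 90°, and 135°. The sketch below illustrates them; the function and variable names are ours, and the paper's exact normalization may differ.

```python
import numpy as np

def stokes_from_polarizer_images(i0, i45, i90, i135):
    """Compute the linear Stokes components, degree of polarization (DoP),
    and polarization angle (AoP) from four polarizer-angle intensity images."""
    i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64)
                          for a in (i0, i45, i90, i135))
    s0 = i0 + i90                    # total intensity
    s1 = i0 - i90                    # 0 deg vs. 90 deg preference
    s2 = i45 - i135                  # 45 deg vs. 135 deg preference
    dop = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)   # polarization angle in radians
    return s0, s1, s2, dop, aop
```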
Subsequently, this study applied the SRGAN-H model to perform super-resolution (SR) reconstruction on the target images. For SR reconstruction of defect images cropped from specific regions, the PSNR, SSIM, and SD values all showed good performance: depending on the selected region, the PSNR for scratch-type (#1) images reached 36.05476 with an SSIM of 0.97888, while for hole-type (#2) images the PSNR reached 43.82374 with an SSIM of 0.98935, although the visual improvement over traditional methods was not significant. Conversely, when performing SR reconstruction on entire defect images, SRGAN-H yielded poorer PSNR and SSIM values but a notable enhancement in visual quality compared with CNN-based methods: the processed images exhibited clearer contrast and edge contours upon magnification, with richer detail, making them more suitable for visual inspection. The lower PSNR and SSIM values can be attributed to the fact that the SRGAN architecture was not designed to optimize these metrics but rather to enhance perceptual quality for the human eye.
However, the above method still has certain limitations, which can be summarized as follows.
While this study achieved SR reconstruction of defect images in the SLM process using the SRGAN-H model, the exploration of network adjustments to SRGAN-H was limited, and the architecture was not applied to other algorithms, so its portability requires further investigation.
Training SRGAN-H demanded substantial hardware resources and long training times, and with small datasets the SR reconstruction quality could even fall below that of traditional interpolation methods. In the evaluations on the BSD100, Set5, and Set14 test sets, the PSNR and SSIM values of SRGAN-H were relatively low; because these metrics reward over-smoothed outputs, a perceptually oriented model such as SRGAN-H scores poorly on them, leading to less-than-ideal objective evaluation results.
The instability of SRGAN training and the occurrence of artifacts were not fully addressed, and the stability and convergence speed of SRGAN-H training still need improvement. While the introduction of the perceptual loss function enhanced the representation of texture details, it also increased the occurrence of artifacts.

Author Contributions

Conceptualization, S.S.; methodology, S.S.; validation, H.C.; writing—original draft preparation, S.S.; writing—review and editing, X.P.; supervision, X.P.; funding acquisition, X.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 52305594); China Postdoctoral Science Foundation Grant (2024M754299); Natural Science Foundation of Hunan Province (Grant Nos. 2024JJ6460, 2023JJ30079); Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA25020317).

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, X.; Ren, Y.-J.; Liu, S.-F.; Wang, Q.-J.; Shi, M.-J. Microstructure and tensile property of SLM 316L stainless steel manufactured with fine and coarse powder mixtures. J. Cent. South Univ. 2020, 27, 334–343. [Google Scholar] [CrossRef]
  2. Aboulkhair, N.T.; Simonelli, M.; Parry, L.; Ashcroft, I.; Tuck, C.; Hague, R. 3D printing of Aluminium alloys: Additive manufacturing of aluminium alloys using selective laser melting. Prog. Mater. Sci. 2019, 106, 100578. [Google Scholar] [CrossRef]
  3. Yan, X.; Chen, C.; Chang, C.; Dong, D.; Zhao, R.; Jenkins, R.; Wang, J.; Ren, Z.; Liu, M.; Liao, H.; et al. Study of the microstructure and mechanical performance of C-X stainless steel processed by selective laser melting (SLM). Mater. Sci. Eng. 2020, 781, 139227. [Google Scholar] [CrossRef]
  4. Al-Rubaie, K.S.; Melotti, S.; Rabelo, A.; Paiva, J.M.; Elbestawi, M.A.; Veldhuis, S.C. Machinability of SLM-produced Ti6Al4V titanium alloy parts. J. Manuf. Process. 2020, 57, 768–786. [Google Scholar] [CrossRef]
  5. Chan, Y.F.; Chen, C.J.; Zhang, M. Review of on-line monitoring research on metal additive manufacturing process. Mater. Rep. 2019, 33, 2839–2867. [Google Scholar]
  6. Tapia, G.; Elwany, A.A. Review on process monitoring and control in metal-based additive manufacturing. J. Manuf. Sci. Eng. 2014, 13, 060801. [Google Scholar] [CrossRef]
  7. Marco, G.; Bianca, M.C. Process defects and in situ monitoring methods in metal powder bed fusion: A review. Meas. Sci. Technol. 2017, 28, 044005. [Google Scholar]
  8. Charalampous, P.; Kostavelis, I.; Tzovaras, D. Non-destructive quality control methods in additive manufacturing: A survey. Rapid Prototyp. J. 2020, 26, 777–790. [Google Scholar] [CrossRef]
  9. Chua, Z.Y.; Ahn, I.H.; Moon, S.K. Process monitoring and inspection systems in metal additive manufacturing: Status and applications. Int. J. Precis. Eng. Manuf.-Green Technol. 2017, 4, 235–245. [Google Scholar] [CrossRef]
  10. Agyapong, J.; Mateos, D.; Czekanski, A.; Boakye-Yiadom, S. Investigation of effects of process parameters on microstructure and fracture toughness of SLM CoCrFeMnNi. J. Alloys Compd. 2024, 987, 173998. [Google Scholar] [CrossRef]
  11. Valeev, S.I.; Kharlamov, I.E. Determination of powerful active zones of petrochemical equipment. IOP Conf. Ser. Mater. Sci. Eng. 2019, 537, 032059. [Google Scholar] [CrossRef]
  12. Zhang, Y.; Fuh, J.Y.; Ye, D.; Hong, G.S. In-situ monitoring of laser-based PBF via off-axis vision and image processing approaches. Addit. Manuf. 2019, 25, 263–274. [Google Scholar] [CrossRef]
  13. Bisht, M.; Ray, N.; Verbist, F.; Coeck, S. Correlation of selective laser melting-melt pool events with the tensile properties of Ti-6Al-4V ELI processed by laser powder bed fusion. Addit. Manuf. 2018, 22, 302–306. [Google Scholar] [CrossRef]
  14. Rezaeifar, H.; Elbestawi, M.A. On-line melt pool temperature control in L-PBF additive manufacturing. Int. J. Adv. Manuf. 2021, 112, 2789–2804. [Google Scholar] [CrossRef]
  15. Krauss, H.; Eschey, C.; Zaeh, M.F. Thermography for monitoring the selective laser melting process. In Proceedings of the 23rd International Solid Freeform Fabrication Symposium, Austin, TX, USA, 6–8 August 2012. [Google Scholar]
  16. Bartlett, J.L.; Heim, F.M.; Murty, Y.V.; Li, X. In situ defect detection in selective laser melting via full-field infrared thermography. Addit. Manuf. 2018, 24, 595–605. [Google Scholar] [CrossRef]
  17. Lu, R.S.; Wu, A.; Zhang, T.D.; Wang, Y.H. Overview of automatic optical (visual) inspection technology and its application in defect detection. Acta Opt. Sin. 2018, 38, 23–58. [Google Scholar]
  18. Li, X.; Liu, F.; Shao, X.P. Principles and research progress of polarized three-dimensional imaging technology. J. Infrared Millim. Waves 2021, 40, 248–262. [Google Scholar]
  19. Peng, X.; Kong, L. Development of a multi-sensor system for defects detection in additive manufacturing. Opt. Express 2022, 30, 30640–30665. [Google Scholar] [CrossRef]
  20. Wang, L.; Kim, T.K.; Yoon, K.J. EventSR: From asynchronous events to image reconstruction, restoration, and super-resolution via end-to-end adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 8315–8325. [Google Scholar]
  21. Kong, X.; Zhao, H.; Qiao, Y.; Dong, C. ClassSR: A general framework to accelerate super-resolution networks by data characteristic. arXiv 2021, arXiv:2103.04039v1. [Google Scholar]
  22. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using swin transformer. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1833–1844. [Google Scholar]
  23. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2014; pp. 184–199. [Google Scholar]
  24. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2018; pp. 294–310. [Google Scholar]
  25. Li, J.; Fang, F.; Mei, K.; Zhang, G. Multi-scale residual network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2018; pp. 527–542. [Google Scholar]
  26. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 105–114. [Google Scholar]
  27. Zhang, W.; Liu, Y.; Dong, C.; Qiao, Y. RankSRGAN: Generative adversarial networks with ranker for image super-resolution. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3096–3105. [Google Scholar]
  28. Pan, J.; Dong, J.; Liu, Y.; Zhang, J.; Ren, J.; Tang, J.; Tai, Y.W.; Yang, M.H. Physics-based generative adversarial models for image restoration and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2449–2462. [Google Scholar] [CrossRef]
  29. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar] [CrossRef]
  30. Xue, Q.S.; Chen, W. Optical system design of spaceborne ultraviolet panoramic imager. Infrared Laser Eng. 2014, 43, 517–522. [Google Scholar]
  31. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  32. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  33. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for Activation Functions. arXiv 2018, arXiv:1710.05941. [Google Scholar]
  34. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar] [CrossRef]
  35. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  36. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  37. Poobathy, D.; Chezian, R.M. Edge detection operators: Peak signal to noise ratio based comparison. Int. J. Image Graph. Signal Process. 2014, 10, 55–61. [Google Scholar] [CrossRef]
  38. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2366–2369. [Google Scholar]
  39. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Vancouver, BC, Canada, 7–14 July 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 2, pp. 416–423. [Google Scholar]
  40. Bevilacqua, M.; Roumy, A.; Guillemot, C.; Alberi-Morel, M.L. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In Proceedings of the British Machine Vision Conference (BMVC), Surrey, UK, 3–7 September 2012; BMVA Press: Surrey, UK, 2012; pp. 1–10. [Google Scholar]
  41. Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In Curves and Surfaces; Springer: Berlin/Heidelberg, Germany, 2012; pp. 711–730. [Google Scholar]
Figure 1. Polarization-based visual detection system.
Figure 2. The network architecture of the generator.
Figure 3. The network architecture of the discriminator.
Figure 4. Defect detection and characterization system based on Stokes property and SRGAN-H.
Figure 5. Detection images of scratch-type and hole-type at 0°, 45°, 90°, and 135°.
Figure 6. Transformed polarization angle image Aop, degree of polarization image Dop, and Stokes vector S1 and S2 images.
Figure 7. The detection results of scratch-type and hole-type defects in the Aop, Dop, S1, and S2 images.
Figure 8. Scratch-type polarization angle images Aop, degree of polarization images Dop, and Stokes vector images S1 and S2 after processing with SRGAN-H.
Figure 9. Hole-type polarization angle images Aop, degree of polarization images Dop, and Stokes vector images S1 and S2 after processing with SRGAN-H.
Figure 10. Evaluation results for scratch-type and hole-type images after processing with SRGAN-H.
Figure 11. Comparison of the scratch-type (#1) images after super-resolution reconstruction using SRGAN-H and CNN, observed under magnification.
Figure 12. Comparison of the hole-type (#2) images after super-resolution reconstruction using SRGAN-H and CNN, observed under magnification.
Table 1. The evaluation of the testing set.

Indicator | BSD100 | Set5   | Set14  | #1     | #2
PSNR      | 24.488 | 26.788 | 25.390 | 33.405 | 31.159
SSIM      | 0.641  | 0.803  | 0.693  | 0.890  | 0.896

Table 2. Reconstruction of scratch-type (#1) images using the CNN algorithm.

Indicator | Aop      | Dop      | S1       | S2
PSNR      | 25.89608 | 31.38389 | 32.07515 | 32.88573
SSIM      | 0.86139  | 0.92353  | 0.90521  | 0.93046

Table 3. Reconstruction of scratch-type (#1) images using the SRGAN-H algorithm.

Indicator | Aop      | Dop      | S1       | S2
PSNR      | 23.62629 | 27.15678 | 28.97817 | 32.55771
SSIM      | 0.492767 | 0.85298  | 0.84497  | 0.92549

Table 4. Reconstruction of hole-type (#2) images using the CNN algorithm.

Indicator | Aop      | Dop      | S1       | S2
PSNR      | 24.90034 | 31.04520 | 32.13501 | 31.49505
SSIM      | 0.86139  | 0.82485  | 0.87432  | 0.83833

Table 5. Reconstruction of hole-type (#2) images using the SRGAN-H algorithm.

Indicator | Aop      | Dop      | S1       | S2
PSNR      | 15.81080 | 23.02324 | 28.35896 | 26.49653
SSIM      | 0.16376  | 0.48044  | 0.81744  | 0.68581