Article

Super-Resolution Reconstruction of Particleboard Images Based on Improved SRGAN

Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(9), 1842; https://doi.org/10.3390/f14091842
Submission received: 15 August 2023 / Revised: 31 August 2023 / Accepted: 7 September 2023 / Published: 10 September 2023
(This article belongs to the Section Wood Science and Forest Products)

Abstract
As an important forest product, particleboard can greatly save forestry resources and promote low-carbon development by reusing wood processing residues. Because the whole particleboard is large, the captured surface images carry little feature information and the defect outlines are blurred. Super-resolution reconstruction can improve the quality of the particleboard surface images and make the defects clearer. In this study, SRGAN was improved into the super-resolution dense attention generative adversarial network (SRDAGAN) to address the artifacts produced by the super-resolution generative adversarial network (SRGAN) and to raise its reconstruction performance. The Batch Normalization (BN) layer was removed, the convolutional block attention module (CBAM) was optimized to construct the dense block, and the dense blocks were linked via dense skip connections. The corresponding 52,400 high-resolution/low-resolution image block pairs were then split into training, validation, and test sets at a ratio of 3:1:1. The model was evaluated comprehensively from the reconstruction results and from three indexes: PSNR, SSIM, and LPIPS. Compared with BICUBIC, SRGAN, and SWINIR, the PSNR of SRDAGAN increased by 4.88 dB, 3.25 dB, and 2.68 dB, respectively; SSIM increased by 0.1629, 0.1122, and 0.0648, respectively; and LPIPS improved by 0.1948, 0.1065, and 0.0639, respectively. The reconstructed images not only had clearer texture but also expressed the various surface features more realistically, and the performance of the model improved greatly. This study also specifically discusses the reconstruction of image blocks containing defects. The results show that the proposed SRDAGAN can perform high-quality super-resolution reconstruction of particleboard images. In the future, it can be combined with defect detection in actual production to improve the quality of forestry products and increase economic benefits.

1. Introduction

Particleboard is a kind of wood-based nonstructural composite manufactured from a compressed mixture of wood particles of various sizes and resin [1,2]. As an important forestry product, particleboard can greatly save forest resources, reduce deforestation, and promote low-carbon development by reusing wood processing residues. Since particleboard has the advantages of low price, easy processing, and good sound and thermal insulation [3,4], it is used in a variety of applications, including furniture manufacturing and construction [5]. According to the IMARC Group [6], the global particleboard market reached 22.2 billion USD in 2022 and is expected to achieve a compound annual growth rate of 3.7% over the next six years.
However, defects often appear on the panel surface due to problems in manufacturing or handling, including glue spots, sanding defects, staining (from oil or rust), indentations, and scratches [7]. At present, many researchers use machine vision technology [8] to collect particleboard surface images and replace manual screening with intelligent defect detection. Zhao et al. [9] used the Ghost Bottleneck, SELayer, and DWConv modules to improve the You Only Look Once v5 (YOLOv5) model, obtaining a lightweight model for detecting particleboard defects. The same team [10] later proposed a YOLO v5-Seg-Lab-4 model for particleboard defect detection; the results showed a Mean Average Precision of 93.20% and a Mean Intersection over Union of 76.63%, improving detection performance. Wang et al. [11] used an improved Capsule Network model to solve the small-sample classification problem of particleboard defects. Research on particleboard defect detection has not yet been applied in enterprise production, and defect detection on the whole particleboard still needs further exploration.
In actual production, a full particleboard measures 1220 mm × 2440 mm. Its surface area is large and its texture details are complex and diverse, while the surface defects are mostly small; some cover only about 10 mm². When the whole board is photographed with a 2K industrial camera, 1 mm of the surface corresponds to only 1–2 pixels, so the images carry little feature information and the defect contours are blurred, making identification and detection difficult. A higher-resolution camera, however, would greatly increase the equipment cost for the enterprise. Accurately identifying the small, fine defects of varied shapes on a full particleboard has therefore become a research difficulty. Applying image super-resolution reconstruction to particleboard can extract image detail features and improve the quality of the surface images, which is of great significance for studying particleboard image information and serves subsequent defect detection.
Image super-resolution reconstruction maps a low-resolution image to the corresponding high-resolution image via a specific algorithm, improving image quality and bringing it closer to the actual scene. Traditional interpolation algorithms suffer from blurred image edges and poor expression of feature information, so their reconstruction results are not ideal. For this reason, many researchers have used deep learning algorithms for super-resolution reconstruction, such as SRResNet, EDSR, EPSR, and DBPN [12]. Dong et al. [13] proposed SRCNN, a three-layer convolutional neural network and the first deep learning method in the field of super-resolution reconstruction. Ledig et al. [14] proposed a residual network structure used to construct SRGAN; adversarial training between the generator and the discriminator improves the reconstruction ability and makes the images more realistic, but it also produces artifacts. Zhang et al. [15] then introduced the channel attention module to super-resolution reconstruction for the first time, easing the difficulty of training deep networks. Many researchers have continued to explore SRGAN in depth. Xiong et al. [16] modified the loss function and network structure to apply it to remote sensing images, improving model stability and reconstruction performance. To address reconstructions that are too smooth and lose details, Zhong et al. [17] introduced a residual channel attention module into an improved SRGAN to reconstruct infrared images of insulators and better classify insulator defects. Liang et al. [18] proposed the SwinIR model for restoring high-quality images, consisting of shallow feature extraction, deep feature extraction, and high-quality image reconstruction. SRGAN performs well in many fields and yields more realistic images, but it has not yet been explored on particleboard images. This task therefore has exploratory significance and broad prospects.
The main contributions of this study are as follows: (1) to address the artifacts produced by SRGAN and its reconstruction performance, which needed improvement, we constructed SRDAGAN by optimizing it: we removed the BN layer, improved CBAM to build the dense block, and used dense skip connections. By fully extracting image feature information, the network removes artifacts and improves the super-resolution reconstruction performance of the model; (2) the improved SRDAGAN was applied to super-resolution reconstruction of particleboard images, so that visual inspection research on particleboard is no longer limited to defect detection alone. We further mined the detailed information in the particleboard images, which is an interesting and meaningful research task.

2. Materials and Methods

2.1. Image Acquisition

During the experiment, the surface images of the particleboards were obtained via a self-built image acquisition system, shown in Figure 1. The specific parameters of the line-scan camera, lens, adapter ring, and light source selected for the system are shown in Table 1. The particleboard specimens were provided by China Dare Wood Industrial (Jiangsu) Co., Ltd. (Suqian, China); their product parameters are shown in Table 2.
We placed the particleboard specimens on the conveyor belt, set up the light source, and adjusted the camera so that the lens was 1100 mm from the board surface, allowing the whole particleboard to be photographed. Lighting was provided only by the linear light sources. A pair of photoelectric trigger switches was installed, and the conveyor belt was then turned on to transport the particleboards. On receiving the trigger signal, the color line-scan camera collected the surface images of the particleboards line by line and saved them. The camera transmitted the data via the GigE interface, finally producing RGB images of the whole particleboards [19].

2.2. Establishment of Dataset

The original particleboard images were 8192 × 16,800 pixels. They were preprocessed: the black edges were cut off via threshold segmentation, and the tilt caused by uneven placement of the boards was corrected via rectangular correction. Since overly large images increase computational complexity and memory usage, all images were cut into blocks of 128 × 128 pixels. Bicubic interpolation down-sampling was then used to obtain the corresponding low-resolution blocks at a scale factor of ×4, i.e., 32 × 32 pixels; in total, 52,400 pairs of high-resolution and low-resolution image blocks were obtained. They were divided into training, validation, and test sets at a ratio of 3:1:1, giving 31,440 training images and 10,480 images each for validation and testing.
In the established dataset, the particleboard image blocks contained big shavings, scratches, staining, and other defects; some of the image blocks are shown in Figure 2.
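
As a minimal sketch of this preparation pipeline (assuming PNG output and the paths and filenames shown, which are illustrative and not the authors' original script), the cropping, bicubic ×4 down-sampling, and 3:1:1 split could look as follows:

```python
import random
from pathlib import Path

from PIL import Image

PATCH, SCALE = 128, 4  # HR patch size and down-sampling factor used above


def make_pairs(board_path: Path, hr_dir: Path, lr_dir: Path) -> None:
    """Cut one preprocessed board image into 128 x 128 HR patches and
    create the matching 32 x 32 LR patches by bicubic down-sampling."""
    img = Image.open(board_path).convert("RGB")
    w, h = img.size
    for top in range(0, h - PATCH + 1, PATCH):
        for left in range(0, w - PATCH + 1, PATCH):
            hr = img.crop((left, top, left + PATCH, top + PATCH))
            lr = hr.resize((PATCH // SCALE, PATCH // SCALE), Image.BICUBIC)
            stem = f"{board_path.stem}_{top}_{left}"
            hr.save(hr_dir / f"{stem}.png")
            lr.save(lr_dir / f"{stem}.png")


def split_3_1_1(patches: list, seed: int = 0):
    """Shuffle and split the patch list 3:1:1 into train/val/test sets."""
    random.Random(seed).shuffle(patches)
    n = len(patches) // 5
    return patches[: 3 * n], patches[3 * n : 4 * n], patches[4 * n :]
```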

2.3. Improvement of Image Processing Algorithm

Traditional interpolation methods often produce unclear edge information and smooth away details when reconstructing images. Therefore, this study adopted an improved SRGAN structure comprising two main parts: the generator and the discriminator. The algorithm flow is shown in Figure 3. The down-sampled low-resolution image I_LR was input into the trained generator to produce the super-resolution image I_SR; the generated I_SR and the corresponding high-resolution image I_HR were then input into the discriminator together. The discriminator output lies between 0 and 1, and the closer the value is to 1, the more realistic the restored image. During adversarial training, the model iteratively adjusted the parameters according to the discrimination results: the generator's parameters were adjusted and optimized when the discriminator judged correctly, and the discriminator's parameters were adjusted and optimized when it judged incorrectly. Through this adversarial learning and alternate training, both networks improved, so that the super-resolution images produced by the generator came ever closer to the real images, while the discriminator's ability to judge image authenticity also improved greatly [20].
To fully mine the features of the particleboard images and enhance their expression, we proposed SRDAGAN, which removes the Batch Normalization (BN) layers to improve reconstruction performance and reduce training time, and adopts an improved CBAM [21]. Fully mining the detailed image features and using dense skip connections enable the network to learn more high-frequency features.
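
For orientation, the alternating adversarial update described above can be sketched as follows in PyTorch; the concrete SRDAGAN losses appear in Section 2.3.5, so placeholder loss functions (and generic G, D, and optimizer arguments) are assumed here:

```python
def train_step(G, D, opt_G, opt_D, lr_img, hr_img, g_loss_fn, d_loss_fn):
    """One alternating update: first the discriminator, then the generator."""
    # Discriminator step: judge real HR patches against generated SR patches.
    sr_img = G(lr_img).detach()  # detached so that only D is updated here
    opt_D.zero_grad()
    d_loss = d_loss_fn(D(hr_img), D(sr_img))
    d_loss.backward()
    opt_D.step()

    # Generator step: push D's verdict on the SR patch toward "real".
    opt_G.zero_grad()
    sr_img = G(lr_img)
    g_loss = g_loss_fn(D(hr_img), D(sr_img), sr_img, hr_img)
    g_loss.backward()
    opt_G.step()
    return g_loss.item(), d_loss.item()
```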

2.3.1. Removed Batch Normalization

In a neural network, the BN layer mainly normalizes the data, which speeds up the convergence of model training and reduces the network's dependence on the initial data values [22]. However, for super-resolution reconstruction of particleboard images, the surface images contain many texture details. Because particleboard products have a certain density deviation within the allowable process range, the feature information inevitably differs between positions on the whole board, and the fineness and color of the particle fragments also vary slightly. This places higher demands on the network's ability to extract information. When the BN layer normalizes the image data, it applies the same operation to the output of the previous layer and, to a certain extent, ignores the feature information in the image, which greatly reduces the network's feature extraction ability; the BN layer thus plays a limited role in image super-resolution reconstruction.
Therefore, we removed the BN layer. On the one hand, this greatly reduces the computational cost and improves the stability of the model; on the other hand, it prevents the smoothing of color, texture, contrast, and other information in the image and restrains the generation of artifacts. In addition, without the BN layer, the network structure is easier to design and improve, and attention modules can be added to raise model performance and make the reconstructed particleboard images clearer and more realistic.

2.3.2. Improved Convolutional Block Attention Module

The attention module, inspired by human vision, focuses on specific information in the image and selectively extracts and processes its high-frequency content. In this study, CBAM was optimized to extract the channel and spatial information of the particleboard images simultaneously, focusing on image details. CBAM mainly comprises the channel attention module (CAM) [23] and the spatial attention module (SAM) [24]. In this study, CAM and SAM were arranged in series, in that order, as shown in Figure 4.
In the CAM part, only the spatial dimension is compressed, and the channel information of the feature map is fully extracted. The input F_0 is a feature map of size C × H × W (C is the number of channels, H the height, and W the width of the image). It is subjected to average pooling and maximum pooling [25], respectively; a convolution layer [26] then compresses the number of channels from C to C/R (with R = 16), using ReLU as the activation function, and the two branches are combined by element-wise addition to form a feature mapping [27]. A further convolution layer and a sigmoid layer map the result to the range 0 to 1, which enhances the expression of high-frequency information and suppresses useless information. The calculation is given in Equation (1), where W_0 and W_1 are the weights and σ is the sigmoid function. The output is then multiplied by the original feature map F_0 to obtain F_2, restoring the size C × H × W.
$$F_1 = \sigma\big(W_1(W_0(\mathrm{AvgPool}(F_0))) + W_1(W_0(\mathrm{MaxPool}(F_0)))\big) \tag{1}$$
In the SAM part, only the channel dimension is compressed, and the spatial information of the feature map is fully extracted. The input F_2 computed above is subjected to average pooling and maximum pooling, and the two results are spliced by concatenation. A convolution with a 7 × 7 kernel then reduces the feature map to 1 × H × W, and the mapping is completed via the convolution layer and a sigmoid layer. The calculation is given in Equation (2), where f^{7×7} denotes a convolution with a 7 × 7 kernel. The output is multiplied by the feature map F_2 to obtain F_4, again restoring the size C × H × W.
$$F_3 = \sigma\big(f^{7\times 7}\big([\mathrm{AvgPool}(F_2);\, \mathrm{MaxPool}(F_2)]\big)\big) \tag{2}$$
CBAM not only fully learns the channel and spatial information of the image and extracts its important features, but also works as a lightweight attention module with little computation and strong generality, so it can easily be embedded in the network structure to improve the overall reconstruction performance of the model.
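
A sketch of this serial CAM + SAM module in PyTorch, following Equations (1) and (2) with R = 16 and a 7 × 7 spatial convolution as stated in the text (other details, such as bias-free convolutions, are assumptions):

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP (W0, W1) realized as 1x1 convolutions, Equation (1).
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)  # F2 = F0 * sigma(...)


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2,
                              bias=False)

    def forward(self, x):
        # Channel-wise average and max pooling, concatenated: Equation (2).
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn  # F4 = F2 * sigma(...)


class CBAM(nn.Module):
    """CAM followed by SAM in series, as in Figure 4."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.cam = ChannelAttention(channels, reduction)
        self.sam = SpatialAttention()

    def forward(self, x):
        return self.sam(self.cam(x))
```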

2.3.3. Construction of Generator

As shown in Figure 5a, the common dense block consists of a BN layer, a convolution layer, and a ReLU layer; it requires a large amount of computation and cannot fully extract image details [28]. As shown in Figure 5b, an improved dense block was designed in this study. We combined a convolution layer, a ReLU layer, and a CBAM into a Bottleneck, and connected the features extracted by each layer to every subsequent layer via concatenation; four Bottlenecks connected in this way form a dense block. This structure fully extracts the complex detail features of the particleboard surface, mines the image information deeply, and restores it better.
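
A possible PyTorch rendering of the improved dense block in Figure 5b is sketched below; the growth rate and the final 1 × 1 fusion convolution are assumptions, and the CBAM module from Section 2.3.2 is passed in as a factory:

```python
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    """Conv + ReLU + CBAM, as in the improved dense block (Figure 5b)."""

    def __init__(self, in_ch: int, growth: int, cbam):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, growth, 3, padding=1),
            nn.ReLU(inplace=True),
            cbam(growth),
        )

    def forward(self, x):
        return self.body(x)


class ImprovedDenseBlock(nn.Module):
    """Four Bottlenecks; each receives the concatenation of the block input
    and all earlier Bottleneck outputs (dense connectivity)."""

    def __init__(self, channels: int, growth: int, cbam):
        super().__init__()
        self.bottlenecks = nn.ModuleList(
            Bottleneck(channels + i * growth, growth, cbam) for i in range(4)
        )
        # 1x1 convolution to fuse the concatenated features back to `channels`
        # (an assumption; the paper does not spell out the fusion step).
        self.fuse = nn.Conv2d(channels + 4 * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for b in self.bottlenecks:
            feats.append(b(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))
```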
As shown in Figure 6, in the SRDAGAN generator, the low-resolution image block I_LR is first input into a convolution layer. Each image block originally has size 3 × W × H; after the input convolution layer, the size becomes C × W × H. The information of the three RGB channels of the particleboard image block is mapped to C channel components, and the low-frequency features of the image are extracted.
Then, n dense blocks (n = 8) process the image data via dense skip connections, extract the high-frequency features, and let the network fully learn the color, texture, and other feature information of the particleboard images. Because the dense-block outputs are concatenated in series, the amount of computation is large, so a 1 × 1 convolution layer is used to reduce the number of channels. A deconvolution layer learns the up-sampling and outputs a feature map with 256 channels, the reconstruction layer magnifies the image proportionally, and the output layer, consisting of a convolution layer, reduces the number of channels to 3 and finally outputs the super-resolution image I_SR.
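
A structural sketch of this generator in PyTorch is given below. The text leaves some details open, so several choices here are assumptions: the ×4 magnification is realized with a transposed convolution to 256 channels followed by pixel shuffle, and a global residual connection is added:

```python
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Structural sketch of the SRDAGAN generator (Figure 6)."""

    def __init__(self, dense_block, channels: int = 64, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)  # RGB -> C channels
        self.blocks = nn.ModuleList(
            dense_block(channels) for _ in range(n_blocks)
        )
        # 1x1 convolution to compress the densely concatenated block outputs.
        self.compress = nn.Conv2d(channels * n_blocks, channels, 1)
        # Learned up-sampling features (256 channels), then x4 magnification.
        self.up = nn.ConvTranspose2d(channels, 256, 3, padding=1)
        self.shuffle = nn.PixelShuffle(4)  # 256 channels -> 16, spatial x4
        self.tail = nn.Conv2d(16, 3, 3, padding=1)  # back to 3 channels

    def forward(self, x):
        feat = self.head(x)
        outs, cur = [], feat
        for blk in self.blocks:  # densely skip-connected dense blocks
            cur = blk(cur)
            outs.append(cur)
        fused = self.compress(torch.cat(outs, dim=1)) + feat  # residual (assumed)
        return self.tail(self.shuffle(self.up(fused)))
```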

2.3.4. Construction of Discriminator

In the SRGAN structure, the standard GAN discriminator is used to judge the generated super-resolution image I_SR [29]. The closer the value is to 1, the closer I_SR is to the real high-resolution image I_HR, as shown in Equation (3):
$$D(x) = \sigma\big(C(x)\big) \tag{3}$$
where D(x) is the evaluation value of the discrimination result and C(x) is the untransformed discriminator output.
In this study, the SRDAGAN discriminator uses the relativistic average discriminator D_Ra, as shown in Equations (4) and (5):
$$D_{Ra}(x_r, x_f) = \sigma\big(C(x_r) - \mathbb{E}_{x_f}[C(x_f)]\big) \tag{4}$$
$$D_{Ra}(x_f, x_r) = \sigma\big(C(x_f) - \mathbb{E}_{x_r}[C(x_r)]\big) \tag{5}$$
where x_r and x_f denote the real high-resolution image and the reconstructed super-resolution image, respectively, and E_x denotes the mean over the corresponding images in a mini-batch.
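
In code, the relativistic average score of Equations (4) and (5) reduces to a one-line transform of the raw discriminator outputs, as in this PyTorch sketch:

```python
import torch


def d_ra(c_first: torch.Tensor, c_second: torch.Tensor) -> torch.Tensor:
    """sigma(C(first) - E[C(second)]): how much more realistic `first`
    looks than the mini-batch average of `second`."""
    return torch.sigmoid(c_first - c_second.mean())

# With raw outputs c_r = C(x_r) and c_f = C(x_f) for one mini-batch:
#   D_Ra(x_r, x_f) = d_ra(c_r, c_f)   (Equation (4))
#   D_Ra(x_f, x_r) = d_ra(c_f, c_r)   (Equation (5))
```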

2.3.5. Loss Function

In the super-resolution task, the loss function [30] guides model training by evaluating the difference between the super-resolution image I_SR output by the model and the original high-resolution image I_HR. The loss function of the generator is the comprehensive loss L_G, given by Equation (6):
$$L_G = L_{percep} + \lambda L_{G}^{Ra} + \eta L_1 \tag{6}$$
where L_percep is the perceptual loss, L_G^{Ra} is the generative adversarial loss, L_1 is the content loss, and λ and η are the balance coefficients between the loss terms.
The loss function L_D of the discriminator is shown in Equation (7):
$$L_D = -\mathbb{E}_{x_r}\big[\log D_{Ra}(x_r, x_f)\big] - \mathbb{E}_{x_f}\big[\log\big(1 - D_{Ra}(x_f, x_r)\big)\big] \tag{7}$$
where E_{x_r} and E_{x_f} denote the mean over all real high-resolution images and over all reconstructed super-resolution images in a mini-batch, respectively.
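
A sketch of how these two losses could be assembled in PyTorch follows. The perceptual loss here uses VGG19 conv5_4 features, which is common for SRGAN-family models but an assumption for this paper, as are the small epsilon terms and the omitted input normalization:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen VGG19 feature extractor for L_percep (conv5_4, before activation).
vgg = vgg19(weights="DEFAULT").features[:35].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

EPS = 1e-8  # numerical stability inside the logs


def generator_loss(c_r, c_f, sr, hr, lam=5e-3, eta=1e-2):
    """L_G = L_percep + lambda * L_GRa + eta * L_1  (Equation (6))."""
    l_percep = F.l1_loss(vgg(sr), vgg(hr))
    d_rf = torch.sigmoid(c_r - c_f.mean())  # D_Ra(x_r, x_f)
    d_fr = torch.sigmoid(c_f - c_r.mean())  # D_Ra(x_f, x_r)
    l_gra = -(torch.log(1 - d_rf + EPS).mean()
              + torch.log(d_fr + EPS).mean())
    l_1 = F.l1_loss(sr, hr)
    return l_percep + lam * l_gra + eta * l_1


def discriminator_loss(c_r, c_f):
    """L_D of Equation (7)."""
    d_rf = torch.sigmoid(c_r - c_f.mean())
    d_fr = torch.sigmoid(c_f - c_r.mean())
    return -(torch.log(d_rf + EPS).mean()
             + torch.log(1 - d_fr + EPS).mean())
```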

3. Results

3.1. Evaluation Index

For the super-resolution reconstruction of particleboard images, in order to accurately analyze the quality of the generated super-resolution images, we used three evaluation indexes: peak signal-to-noise ratio (PSNR) [31], structural similarity index measure (SSIM) [32], and learned perceptual image patch similarity (LPIPS) [33]. Together, the three indexes allow the image quality to be analyzed and the super-resolution reconstruction performance of the model to be evaluated objectively.
The PSNR index is calculated from the mean squared error (MSE) [34] and represents the ratio of the maximum pixel value MAX of the image to its MSE; the higher the value, the higher the image quality. The evaluation is based mainly on pixel-wise differences, as shown in Equations (8) and (9):
$$MSE = \frac{1}{K}\sum_{i=1}^{K}\big(I(i) - \hat{I}(i)\big)^2 \tag{8}$$
$$PSNR = 10 \log_{10}\left(\frac{MAX^2}{MSE}\right) \tag{9}$$
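
Equations (8) and (9) translate directly into code; a minimal NumPy sketch, assuming 8-bit images (MAX = 255):

```python
import numpy as np


def psnr(img: np.ndarray, ref: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR in dB between an image and its reference, per Eqs. (8)-(9)."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```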
SSIM evaluates the structural similarity between the reference image and the target image via the correlation of pixel-level statistics. It combines the mean for luminance, the standard deviation for contrast, and the covariance for structure into one result. The SSIM value lies between 0 and 1; the closer it is to 1, the higher the quality of the reconstruction. It mainly evaluates the similarity between image pixels and agrees better with human visual perception, as shown in Equations (10)–(13):
$$L(X, Y) = \frac{2\mu_X \mu_Y + C_1}{\mu_X^2 + \mu_Y^2 + C_1} \tag{10}$$
$$C(X, Y) = \frac{2\sigma_X \sigma_Y + C_2}{\sigma_X^2 + \sigma_Y^2 + C_2} \tag{11}$$
$$S(X, Y) = \frac{\sigma_{XY} + C_3}{\sigma_X \sigma_Y + C_3} \tag{12}$$
$$SSIM(X, Y) = L(X, Y) \times C(X, Y) \times S(X, Y) \tag{13}$$
where X and Y are the reference image and the target image, respectively; L, C, and S are the luminance, contrast, and structure terms, respectively; μ_X and μ_Y are the means of X and Y; σ_X and σ_Y are the standard deviations of X and Y; σ_XY is the covariance of X and Y; and C_1, C_2, and C_3 are constants.
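
In practice, a library implementation of Equations (10)–(13) can be used; a sketch with scikit-image (assuming 8-bit RGB arrays and scikit-image ≥ 0.19 for the channel_axis argument):

```python
import numpy as np
from skimage.metrics import structural_similarity


def ssim_rgb(img: np.ndarray, ref: np.ndarray) -> float:
    """Mean SSIM over an 8-bit RGB image pair (H, W, 3)."""
    return structural_similarity(img, ref, channel_axis=2, data_range=255)
```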
LPIPS measures the perceptual difference between two images. The generated image and the real image are input into the l layers of a convolutional neural network; features are extracted by each layer and normalized, the L2 distance [35] is computed after weighting the features of each layer, and the LPIPS value is obtained by averaging. The smaller the LPIPS value, the smaller the difference between the two images and the more realistic the generated image, as shown in Equation (14):
$$d(x, x_0) = \sum_{l}\frac{1}{H_l W_l}\sum_{h,w}\big\|\omega_l \odot \big(\hat{y}^{\,l}_{hw} - \hat{y}^{\,l}_{0,hw}\big)\big\|_2^2 \tag{14}$$
where d is the distance between x and x_0, ω_l is the vector that re-weights the active channels, and ŷ^l and ŷ_0^l are the normalized feature maps of the l-th layer for the two images, respectively.
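
Equation (14) corresponds to the reference lpips package; a sketch assuming the AlexNet backbone and inputs scaled to [−1, 1]:

```python
import lpips
import torch

loss_fn = lpips.LPIPS(net="alex")  # learned perceptual metric, Eq. (14)


def lpips_distance(img: torch.Tensor, ref: torch.Tensor) -> float:
    """img, ref: (1, 3, H, W) RGB tensors scaled to [-1, 1]."""
    with torch.no_grad():
        return loss_fn(img, ref).item()
```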

3.2. Experimental Results

In this study, the experiments were carried out under the same hardware and software conditions; the specific configuration parameters are shown in Table 3. We used the Adam optimizer [36] to update the network weights, with an initial learning rate of 1 × 10⁻⁴, halved every 50,000 epochs. The attenuation rates were set to β₁ = 0.9 and β₂ = 0.999. The feature weight was 1, the number of feature maps was 64, the batch size was 32, and the balance coefficients λ and η were 5 × 10⁻³ and 1 × 10⁻², respectively.
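
For reference, these optimizer settings map directly onto PyTorch; a minimal sketch, with the halving interval expressed as a StepLR schedule counted in optimizer steps (an interpretation of the schedule stated above):

```python
import torch


def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    """Adam with lr = 1e-4 and (beta1, beta2) = (0.9, 0.999), as above."""
    return torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))


def make_scheduler(opt: torch.optim.Adam):
    """Halve the learning rate every 50,000 updates."""
    return torch.optim.lr_scheduler.StepLR(opt, step_size=50_000, gamma=0.5)
```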
The particleboard images were input into the trained model for reconstruction to obtain super-resolution images. To compare the restored details more clearly, we show contrast images of the same part of the particleboard in Figure 7; from left to right are LR, BICUBIC, SRGAN, SWINIR, and SRDAGAN. Clearly, as the image information is mined more deeply [37], the super-resolution reconstruction improves gradually: the texture details become richer and the feature expression more distinct. SRDAGAN fully reproduces texture and color information that BICUBIC, SRGAN, and SWINIR cannot restore well. To study how each improvement point affects model performance, we carried out ablation experiments, modifying the model step by step and comparing the evaluation indexes, as shown in Table 4. The average PSNR, SSIM, and LPIPS results on the test set are shown in Table 5.
From the perspective of the reconstruction effect, the particleboard images generated by traditional BICUBIC were of poor quality and could not represent the characteristic information well; the simple data processing left the restored details blurred and the texture insufficiently clear, which could not meet the requirements of particleboard reconstruction. Compared with BICUBIC, SRGAN represented the texture details of the particleboard image better, with sharper, more distinct edges and better-restored features, but it inevitably produced artifacts, which suppress the true information of the image and degrade its appearance; from the point of view of human vision, the reconstruction quality still needed improvement. The images reconstructed by SRDAGAN in this study made up for the shortcomings of the existing network structures and efficiently reconstructed the color, texture distribution, fineness of the particle fragments, and other feature information while avoiding quality-reducing artifacts. Moreover, for the few wood particles that differ greatly in color, SRDAGAN restored and characterized the differently colored fragments well, keeping the differences distinct with a better sensory effect. Among the state-of-the-art methods, SWINIR restored the feature information of the particleboard fairly well, but still not as well as SRDAGAN. In this study, the image information was fully mined by removing BN, improving CBAM, and using dense skip connections; on comparison, the images reconstructed by SRDAGAN were more realistic, with clearer details.
In terms of the PSNR index, BICUBIC achieved 25.83 dB, a poor result; SRGAN improved on BICUBIC by 1.63 dB, achieving a certain effect with more reasonable pixel restoration. Traditional BICUBIC, with its simple processing and calculation, is clearly limited in mining image feature information. SRDAGAN reached 30.71 dB; compared with BICUBIC, SRGAN, and SWINIR, its PSNR increased by 4.88 dB, 3.25 dB, and 2.68 dB, respectively. The pixel-level details of the particleboard images were better restored: the low-frequency information was preserved and repaired, and the high-frequency information was deeply mined.
For the SSIM index, BICUBIC, SRGAN, SWINIR, and SRDAGAN scored 0.6517, 0.7024, 0.7498, and 0.8146, respectively. SRGAN was 0.0507 higher than BICUBIC; although this is an improvement, the artifacts left room for better similarity to the real image. By improving the network structure, SRDAGAN fully extracted the important information; compared with the three other methods, its SSIM increased by 0.1629, 0.1122, and 0.0648, respectively, a significant improvement in super-resolution reconstruction performance. With the gradual optimization of data processing, the reconstructed images came ever closer to the real high-resolution images, and the differences caused by reconstruction became smaller and smaller.
For the LPIPS index, BICUBIC, SRGAN, SWINIR, and SRDAGAN scored 0.4829, 0.3946, 0.3530, and 0.2881, respectively; our method improves on the other three by 0.1948, 0.1065, and 0.0639. Compared with the unmodified SRGAN, the images restored by SRDAGAN also improved considerably, and their differences from the real images were greatly reduced. The LPIPS index further illustrates the excellent performance of the SRDAGAN model.

4. Discussion

In actual factory production, particleboards often exhibit small defects of various shapes, which affect the quality of the board [38]. Through super-resolution reconstruction of the particleboard images, we can explore how it helps subsequent defect detection. For the particleboard images with defective parts, we compared, analyzed, and discussed the reconstructions in depth. The defects are shown in Figure 8.
In Figure 8, from top to bottom are big shaving, handwriting, glue spot, staining, and scratch, and from left to right are the images corresponding to LR, BICUBIC, SRGAN, SWINIR, SRDAGAN, and HR. The comparison shows that traditional BICUBIC clearly lacks the ability to reconstruct the various defects; the reconstructed defect contours are neither clear nor sharp. SRGAN could already extract the contour features of each defect to a certain extent: the edges of the reconstructed defects were clearer and sharper, especially for big shaving, handwriting, and scratch, where the contour details performed better. However, this method has a shortcoming that cannot be ignored: owing to the BN layer and the network structure, artifacts arise and the image color information is severely suppressed, so the reconstructed particleboard images cannot accurately restore the true color features. This would inevitably interfere with subsequent detection: during intelligent detection, glue spots could easily be recognized as dust spots and staining as sand leakage, causing errors in particleboard detection and grading. For the state-of-the-art SWINIR, the model performance improved and the detail recovery was better, but the contour restoration of big shavings, handwriting, and scratches was still not clear enough, and the performance needed further improvement.
The SRDAGAN algorithm proposed in this study remedies the shortcomings of SRGAN. It not only fully learns the defect details but also improves the model through the mutual training of the generator and the discriminator, finally generating artifact-free, high-quality super-resolution images. Compared with the other methods, the color features, contour shapes, and information expression of the various defects reconstructed by SRDAGAN were all significantly improved. Not only was the reconstruction effect better, but the generated super-resolution images were also closer to the real high-resolution images, and the defect features could be distinguished from normal particle features, so the method can be further applied in actual engineering.
In terms of method innovation, the SRDAGAN proposed in this study removes the BN layer, improves CBAM to form the dense block, builds the generator through dense skip connections, and combines the discriminator with the generative adversarial idea. The experimental results show that SRDAGAN greatly enhances the expressiveness and authenticity of the super-resolution reconstructed images; compared with the common methods and the state-of-the-art method, its performance improved considerably. After removing the BN layer, the model expresses the true color information correctly and eliminates artifacts, while the improved generator mines and extracts texture details, defect features, and other information, finally completing high-quality image reconstruction. Moreover, by applying a suitable super-resolution method in the industrial inspection of particleboard products, a lower-resolution industrial camera can be used for image acquisition, and super-resolution images can be obtained for defect detection at reduced economic cost, improving the level of intelligence in the forestry field.

5. Conclusions

Aiming at problems such as the poor reconstruction effect of traditional interpolation methods, the artifacts in SRGAN reconstructions, and the reconstruction performance that needed improvement, this study designed an improved SRDAGAN method that raises the super-resolution reconstruction quality of particleboard images and makes them clearer and more realistic.
Firstly, we acquired the surface images of whole particleboards via the self-built image acquisition system and, after image processing, divided the data into training, validation, and test sets at a ratio of 3:1:1. For the SRDAGAN model, the BN layer was removed, CBAM was optimized to build the dense block, and the dense blocks were formed into the generator via dense skip connections; the generator and the discriminator were trained together. The particleboard images were then input into the trained model for testing, and the performance of the different algorithms was compared on the test set. The proposed SRDAGAN performed best, with PSNR, SSIM, and LPIPS reaching 30.71 dB, 0.8146, and 0.2881, respectively: improvements of 4.88 dB, 0.1629, and 0.1948 over BICUBIC; 3.25 dB, 0.1122, and 0.1065 over SRGAN; and 2.68 dB, 0.0648, and 0.0639 over SWINIR. At the same time, the texture details, color features, and other information of the reconstructed images were evaluated comprehensively from the perspective of image quality, with particular attention to the reconstruction of particleboard images with defects. We found that SRDAGAN overcomes the problems of low quality and artifacts and generates clearer, more realistic, high-quality particleboard super-resolution images.
In further work, our team will expand the dataset with additional particleboard defect images, such as soft spots and edge breakage, to make the dataset more diverse and to verify the reconstruction performance of the model on images with other defects. We will also combine the super-resolution reconstruction algorithm with a defect detection algorithm, use economical cameras to realize particleboard defect detection, and build a super-resolution reconstruction detection system suited to actual production, improving the quality of forestry products and increasing economic benefits.

Author Contributions

Conceptualization, W.Y. and Y.L.; methodology, W.Y. and Y.L.; software, W.Y. and Y.L.; validation, W.Y., H.Z. and Y.L.; formal analysis, Y.Y. and Y.S.; investigation, W.Y.; resources, Y.L.; data curation, Y.L.; writing—original draft preparation, W.Y.; writing—review and editing, Y.L.; visualization, Y.Y. and Y.S.; supervision, Y.L.; project administration, Y.L. and W.Y.; funding acquisition, Y.L. and W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Postgraduate Research and Practice Innovation Program of Jiangsu Province ‘Research on Intelligent Detection System of Particleboard Surface Defects’, grant number SJCX23_0321, and the Central Financial Forestry Science and Technology Promotion Demonstration Project ‘Demonstration and Promotion of Key Technology of Particleboard Appearance Quality Inspection’, grant number Su[2023]TG06. It was also funded by the 2019 Jiangsu Province Key Research and Development Plan by the Jiangsu Province Science and Technology, grant number BE2019112.

Data Availability Statement

The data are not publicly available because this study is still in progress with the company.

Acknowledgments

The authors would like to express their most sincere thanks for the support of experimental materials and consultation given by China Dare Wood Industrial (Jiangsu) Co., Ltd.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baharuddin, M.; Zain, N.M.; Harun, W.; Roslin, E.N.; Ghazali, F.A.; Som, S.N.M. Development and performance of particleboard from various types of organic waste and adhesives: A review. Int. J. Adhes. Adhes. 2023, 124, 103378. [Google Scholar] [CrossRef]
  2. Lee, S.H.; Lum, W.C.; Boon, J.G.; Kristak, L.; Antov, P.; Pędzik, M.; Rogoziński, T.; Taghiyari, H.R.; Lubis, M.A.R.; Fatriasari, W.; et al. Particleboard from agricultural biomass and recycled wood waste: A review. J. Mater. Res. Technol. 2022, 20, 4630–4658. [Google Scholar] [CrossRef]
  3. Ferrández-García, C.-E.; Ferrández-García, A.; Ferrández-Villena, M.; Hidalgo-Cordero, J.F.; García-Ortuño, T.; Ferrández-García, M.-T. Physical and Mechanical Properties of Particleboard Made from Palm Tree Prunings. Forests 2018, 9, 755. [Google Scholar] [CrossRef]
  4. Copak, A.; Jirouš-Rajković, V.; Španić, N.; Miklečić, J. The Impact of Post-Manufacture Treatments on the Surface Characteristics Important for Finishing of OSB and Particleboard. Forests 2021, 12, 975. [Google Scholar] [CrossRef]
  5. Owodunni, A.A.; Lamaming, J.; Hashim, R.; Taiwo, O.F.A.; Hussin, M.H.; Kassim, M.H.M.; Bustami, Y.; Sulaiman, O.; Amini, M.H.M.; Hiziroglu, S. Adhesive application on particleboard from natural fibers: A review. Polym. Compos. 2020, 41, 4448–4460. [Google Scholar] [CrossRef]
  6. Particle Board Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2023–2028. Available online: https://www.imarcgroup.com/particle-board-market (accessed on 20 July 2023).
  7. Iswanto, A.H.; Sucipto, T.; Suta, T.F. Effect of Isocyanate Resin Level on Properties of Passion Fruit Hulls (PFH) Particleboard. IOP Conf. Series Earth Environ. Sci. 2019, 270, 012021. [Google Scholar] [CrossRef]
  8. Shu, Y.; Xiong, C.; Fan, S. Interactive design of intelligent machine vision based on human-computer interaction mode. Microprocess. Microsyst. 2020, 75, 103059. [Google Scholar] [CrossRef]
  9. Zhao, Z.; Yang, X.; Zhou, Y.; Sun, Q.; Ge, Z.; Liu, D. Real-time detection of particleboard surface defects based on improved YOLOV5 target detection. Sci. Rep. 2021, 11, 21777. [Google Scholar] [CrossRef]
  10. Zhao, Z.; Ge, Z.; Jia, M.; Yang, X.; Ding, R.; Zhou, Y. A Particleboard Surface Defect Detection Method Research Based on the Deep Learning Algorithm. Sensors 2022, 22, 7733. [Google Scholar] [CrossRef]
  11. Wang, C.; Liu, Y.; Wang, P.; Lv, Y. Research on the Identification of Particleboard Surface Defects Based on Improved Capsule Network Model. Forests 2023, 14, 822. [Google Scholar] [CrossRef]
  12. Ye, S.; Zhao, S.; Hu, Y.; Xie, C. Single-Image Super-Resolution Challenges: A Brief Review. Electronics 2023, 12, 2975. [Google Scholar] [CrossRef]
  13. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. Lect. Notes Comput. Sci. 2014, 8692, 184–199. [Google Scholar] [CrossRef]
  14. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.P.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 105–114. [Google Scholar] [CrossRef]
  15. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
  16. Xiong, Y.; Guo, S.; Chen, J.; Deng, X.; Sun, L.; Zheng, X.; Xu, W. Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors. Remote Sens. 2020, 12, 1263. [Google Scholar] [CrossRef]
  17. Zhong, Z.; Chen, Y.; Hou, S.; Wang, B.; Liu, Y.; Geng, J.; Fan, S.; Wang, D.; Zhang, X. Super-resolution reconstruction method of infrared images of composite insulators with abnormal heating based on improved SRGAN. IET Gener. Transm. Distrib. 2022, 16, 2063–2073. [Google Scholar] [CrossRef]
  18. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 1833–1844. [Google Scholar] [CrossRef]
  19. Xie, W.; Wei, S.; Yang, D. Morphological measurement for carrot based on three-dimensional reconstruction with a ToF sensor. Postharvest Biol. Technol. 2023, 197, 112216. [Google Scholar] [CrossRef]
  20. Çelik, G.; Talu, M.F. Resizing and cleaning of histopathological images using generative adversarial networks. Phys. A Stat. Mech. Its Appl. 2020, 554, 122652. [Google Scholar] [CrossRef]
  21. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the 15th European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; Volume 11211, pp. 3–19. [Google Scholar] [CrossRef]
  22. Song, Y.; Li, J.; Hu, Z.; Cheng, L. DBSAGAN: Dual Branch Split Attention Generative Adversarial Network for Super-Resolution Reconstruction in Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 3266325. [Google Scholar] [CrossRef]
  23. Zhang, W.; Ke, W.; Yang, D.; Sheng, H.; Xiong, Z. Light field super-resolution using complementary-view feature attention. Comput. Vis. Media 2023, 9, 843–858. [Google Scholar] [CrossRef]
  24. Liu, J.; Zhang, W.; Tang, Y.; Tang, J.; Wu, G. Residual Feature Aggregation Network for Image Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2356–2365. [Google Scholar] [CrossRef]
  25. Qian, H.; Zheng, J.; Wang, Y.; Jiang, D. Fatigue Life Prediction Method of Ceramic Matrix Composites Based on Artificial Neural Network. Appl. Compos. Mater. 2023, 30, 1251–1268. [Google Scholar] [CrossRef]
  26. Jin, X.; McCullough, P.E.; Liu, T.; Yang, D.; Zhu, W.; Chen, Y.; Yu, J. A smart sprayer for weed control in bermudagrass turf based on the herbicide weed control spectrum. Crop Prot. 2023, 170, 106270. [Google Scholar] [CrossRef]
  27. Lu, E.; Hu, X. Image super-resolution via channel attention and spatial attention. Appl. Intell. 2021, 52, 2260–2268. [Google Scholar] [CrossRef]
  28. Brahimi, S.; Ben Aoun, N.; Benoit, A.; Lambert, P.; Ben Amar, C. Semantic segmentation using reinforced fully convolutional densenet with multiscale kernel. Multimed. Tools Appl. 2019, 78, 22077–22098. [Google Scholar] [CrossRef]
  29. Esmaeilpour, M.; Chaalia, N.; Abusitta, A.; Devailly, F.-X.; Maazoun, W.; Cardinal, P. Bi-discriminator GAN for tabular data synthesis. Pattern Recognit. Lett. 2022, 159, 204–210. [Google Scholar] [CrossRef]
  30. Gnanha, A.T.; Cao, W.; Mao, X.; Wu, S.; Wong, H.-S.; Li, Q. αβ-GAN: Robust generative adversarial networks. Inf. Sci. 2022, 593, 177–200. [Google Scholar] [CrossRef]
  31. Mohammad-Rahimi, H.; Vinayahalingam, S.; Mahmoudinia, E.; Soltani, P.; Bergé, S.J.; Krois, J.; Schwendicke, F. Super-Resolution of Dental Panoramic Radiographs Using Deep Learning: A Pilot Study. Diagnostics 2023, 13, 996. [Google Scholar] [CrossRef]
  32. Gourdeau, D.; Duchesne, S.; Archambault, L. On the proper use of structural similarity for the robust evaluation of medical image synthesis models. Med. Phys. 2022, 49, 2462–2474. [Google Scholar] [CrossRef]
  33. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595. [Google Scholar] [CrossRef]
  34. Li, Z.; Wang, D.; Zhu, T.; Ni, C.; Zhou, C. SCNet: A deep learning network framework for analyzing near-infrared spectroscopy using short-cut. Infrared Phys. Technol. 2023, 132, 104731. [Google Scholar] [CrossRef]
  35. Xue, F.; Zhou, M.; Zhang, C.; Shao, Y.; Wei, Y.; Wang, M. RT-SwinIR: An improved digital wallchart image super-resolution with attention-based learned text loss. Vis. Comput. 2023, 39, 3467–3479. [Google Scholar] [CrossRef]
  36. Chang, Z.; Zhang, Y.; Chen, W. Electricity price prediction based on hybrid model of adam optimized LSTM neural network and wavelet transform. Energy 2019, 187, 115804. [Google Scholar] [CrossRef]
  37. Xie, C.; Tang, H.; Fei, L.; Zhu, H.; Hu, Y. IRNet: An Improved Zero-Shot Retinex Network for Low-Light Image Enhancement. Electronics 2023, 12, 3162. [Google Scholar] [CrossRef]
  38. Rofii, M.N.; Yumigeta, S.; Kojima, Y.; Suzuki, S. Utilization of High-density Raw Materials for Panel Production and Its Performance. Procedia Environ. Sci. 2014, 20, 315–320. [Google Scholar] [CrossRef]
Figure 1. Image acquisition system for particleboard.
Figure 2. Particleboard image blocks: (a) without defects; (b) big shaving; (c) scratch; and (d) staining.
Figure 3. Algorithm flow.
Figure 4. Improved convolutional block attention module.
Figure 5. Comparison of dense blocks: (a) common dense block; and (b) improved dense block.
Figure 6. Generator of SRDAGAN.
Figure 7. Comparison of super-resolution reconstruction images of particleboards (the boxed regions are enlarged for comparison).
Figure 8. Comparison of the reconstruction effect of particleboard images with defective parts.
Table 1. Equipment parameters of image acquisition system.

Device | Item | Parameter
Camera | Product Model | HIKROBOT MV-CL086-91GC
 | Resolution | 8192 × 6 pixels
 | Pixel Size | 5 µm
 | Maximum Line Frequency | 4.7 kHz
 | Sensor Type | CMOS
 | Spectrum | Color
 | Exposure Time | 3 μs–10 ms
 | Data Interface | GigE
Lens | Product Model | LD21S01
 | Focus Distance | 35 mm ± 5%
 | Aperture | F2.8–F16
Adapter Ring | Product Model | M72-F T34.5
Light Source | Product Model | HIKROBOT MV-LTHS-1300-W
 | Overall Dimension | 1370 mm × 58 mm × 90.1 mm
 | Type | Linear light source
 | Power | 576 W
 | Color Temperature | 6000–7000 K
Table 2. Particleboard product parameters.

Item | Parameter
Size | 1220 mm × 2440 mm × 18 mm
Raw Material Tree Species | Pinus
Adhesive | Urea-formaldehyde resin
Density Deviation | <4%
Hot-pressing Temperature | 160–200 °C
Table 3. Experimental configuration parameters.

Configuration Platform | Item | Parameter
Hardware Configuration | System | Windows 10 × 64
 | CPU | Intel(R) Core(TM) i9 […] GHz
 | GPU | NVIDIA GeForce RTX 2080 Ti
 | Memory | KHX2666C16/16G × 2
Software Configuration | IDE | PyCharm Community Edition
 | Programming Language | Python 3.7
 | Computing Platform | CUDA 10.1
 | GPU Acceleration Library | cuDNN 7604
Table 4. Ablation experiments (√: this part exists, ×: this part does not exist, ↑: higher is better, ↓: lower is better).

Improvements and Indexes | 1st | 2nd | 3rd | 4th
BN | √ | × | × | ×
Improved CBAM | × | × | √ | √
Densely skip connection | × | × | × | √
PSNR (dB) ↑ | 27.46 | 28.12 | 29.66 | 30.71
SSIM ↑ | 0.7024 | 0.7371 | 0.7832 | 0.8146
LPIPS ↓ | 0.3946 | 0.3527 | 0.3035 | 0.2881
Table 5. Evaluation index results of different algorithms (↑: higher is better, ↓: lower is better).

Algorithm | PSNR (dB) ↑ | SSIM ↑ | LPIPS ↓
BICUBIC | 25.83 | 0.6517 | 0.4829
SRGAN | 27.46 | 0.7024 | 0.3946
SWINIR | 28.03 | 0.7498 | 0.3530
SRDAGAN | 30.71 | 0.8146 | 0.2881