Article

Fv-AD: F-AnoGAN Based Anomaly Detection in Chromate Process for Smart Manufacturing

1
Department of Smart Factory Convergence, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon 16419, Korea
2
Department of Applied Data Science, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon 16419, Korea
3
Department of Advanced Materials Science & Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon 16419, Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(15), 7549; https://doi.org/10.3390/app12157549
Submission received: 12 June 2022 / Revised: 19 July 2022 / Accepted: 25 July 2022 / Published: 27 July 2022
(This article belongs to the Special Issue AI Applications in the Industrial Technologies)

Abstract

Anomaly detection for quality prediction has become increasingly important as data collection has expanded in fields such as smart factories and healthcare systems. In existing manufacturing processes, various attempts have been made to improve discrimination accuracy despite the data imbalance inherent in anomaly detection. Predicting the quality of a chromate process strongly influences the completeness of the process, so anomaly detection is essential. However, obtaining image data, such as monitoring images from the manufacturing process, is difficult, and prediction is challenging owing to data imbalance. Accordingly, the proposed model employs an unsupervised Generative Adversarial Network (GAN), trains on normal data images only, and augments the Fast Unsupervised Anomaly Detection with GAN (F-AnoGAN) base with a visualization component that provides a more intuitive judgment of defects in chromate process data. In addition, anomaly scores are calculated based on mapping into the latent space, and new data are applied to confirm anomaly detection and the corresponding defect locations. This paper thus presents a GAN architecture that detects anomalies from chromate facility data in a smart manufacturing environment; it achieves meaningful performance and adds a visualization part for explainable interpretation. Experiments on chromate process data show that the loss value, anomaly score, and anomaly position clearly distinguish abnormal images.

1. Introduction

Supervised learning is difficult to apply to anomaly prediction and is limited to areas where abnormal data have been collected. Recently, unsupervised and semi-supervised learning models have therefore been increasingly used for anomaly detection. The GAN, an unsupervised learning model, can mitigate data imbalance problems and is widely used for anomaly detection in abnormal areas [1,2]. GANs have also successfully demonstrated learning with deep generative models. The GAN-based anomaly detection method considered here detects anomalies using a model trained only on normal data. GAN-based anomaly detection models continue to be developed, and the GAN is actively studied in the image field. Recently, semi-supervised learning has also been used to improve classification problems [3].
Anomaly detection distinguishes normal from abnormal (defective) data, and making this distinction is a critical issue in every domain. During the manufacturing process, AI can preferentially classify products that are expected to be defective, shortening working hours by reviewing products first and flagging those deemed defective, thereby compensating for workers' lack of expertise and for human error. Anomaly detection predictions are available in a variety of domains but require an annotation process for the unusual data. Furthermore, data collection in the manufacturing industry is difficult because defective samples occur far less frequently than normal samples [4].
Therefore, domains with severe data imbalance require unsupervised learning-based anomaly detection. In this paper, we train a GAN on normal data only, input new data into the trained GAN model, compare the generated image with the original image, and predict anomalies [5]. We also propose augmenting the discriminator portion of the model with a visualization of the output so that the outlier position is more intuitive and interpretable.
Data imbalance impedes anomaly detection in the manufacturing process. Various studies address this issue using batch sampling [6] and augmentation techniques. Among these approaches, training only on normal data is expected to solve the problem [7]. Therefore, this study applies the fast anomaly detection with GAN (F-AnoGAN) technique to manufacturing process data that are difficult to label. In the absence of abnormal data in the manufacturing process, the GAN technique can be used with normal data alone by generating and learning from arbitrary data. AnoGAN was the first study to use a GAN for anomaly detection; we apply F-AnoGAN, a method developed from AnoGAN, to manufacturing process data. Localization additionally highlights damaged areas of an image, making them visible [8].
Among GAN models, we propose detecting and localizing anomalies in the chromate process using the F-AnoGAN technique, an anomaly detection GAN. The anomalous part of the image is rendered as an image, and performance is verified through the loss value and anomaly score. Furthermore, the F-AnoGAN model proposed in this paper is intended for real-world anomaly detection and outperforms the earlier Anomaly GAN (AnoGAN) model on quantitative indicators [9].
This paper proposes a method for predicting outliers in the quality check process using image data generated by the chromate process. Both the predictive performance of the proposed Fv-AnoGAN and the visualization it provides proved significant. The results suggest that image data from the manufacturing process can contribute to outlier detection prediction.
The rest of this paper is structured as follows. Section 2 introduces the anomaly detection cases and existing anomaly detection models used in the smart factory. Section 3 elaborates on the proposed F-AnoGAN model’s components and approaches, as well as its key ideas. Section 4 presents the experimental environment, structure, and evaluation results of the dataset. Finally, Section 5 concludes the paper with a summary of the evaluation results and directions for future research.

2. Related Work

2.1. Anomaly Detection in Smart Factory

The total amount of data collected in the process field grows exponentially as technology advances. Accordingly, as Big-Data-based Smart Factories have been introduced and applied in the recent manufacturing industry, various analysis methods have been attempted to detect process abnormalities using the data that can be collected at each stage of the production process. In [10], for example, the semiconductor manufacturing industry used various machine-learning models to detect abnormalities, including SVM, random forest, XGBoost, FDA, and C5.0.
Furthermore, industrial applications with imbalanced time-series data have motivated a new GAN-based anomaly detection approach. Imbalanced data in the manufacturing sector are steadily collected and generated, but experimental studies using them remain scarce, and studies validating them through outlier scores are underway [11]. Figure 1 shows the anomaly detection method, from data collection to the final model, proposed in [10].
This paper presents an approach to anomaly detection using image data from the manufacturing process with a GAN, in contrast to approaches that use time-series data, and adds a visualization part to the model for an intuitive anomaly detection analysis. Anomaly detection techniques are also being studied for industrial robots in the manufacturing process: a Mode Seeking Generative Adversarial Networks (MSGAN) model using data from robot sensors enhances anomaly detection and improves accuracy through synthetic samples [12].

2.2. Unsupervised Learning for Anomaly Detection

Anomaly detection is the task of identifying test data that do not match the normal data distribution identified during training. Approaches to anomaly detection derive from image analysis [13], and research papers exist on various application cases [10]. There are numerous challenges in detecting anomalies in a changing manufacturing process image. In general, the algorithms are based on the assumption that there are structural differences between images with and without anomalies [14]. Anomaly detection algorithms are broadly classified into supervised and unsupervised learning. Supervised learning is often used when the presence or absence of anomalies in a given dataset is known, but in a real factory manufacturing environment there are few ways to objectively prove which datasets actually contain anomalies and how many they contain. As a result, anomaly detection techniques based on unsupervised learning have gained significance, and numerous studies have recently been conducted in this regard. A related study, “DeepAnT: A Deep Learning Approach for Unsupervised Anomaly Detection in Time Series” [15], applies unsupervised learning to difficult-to-determine anomalous time-series data [16]. Other work replaces the structure with an LSTM-RNN to process time-series data, although applying the adversarial training method of the GAN adds computational complexity [17].
One unsupervised learning-based anomaly detection method is GANomaly [18], a methodology similar to AnoGAN. For problems with a large amount of unlabeled data, it outperforms traditional approaches to semi-supervised anomaly detection. The GANomaly model performs image generation and latent-space learning in a single step. Compared with the present paper, that model improves performance using an adversarial autoencoder. Figure 2 [15] shows “DeepAnT”, a CNN-based unsupervised learning model that successfully detects anomalies in time-series data.

2.3. GAN & AnoGAN

GAN is a model from the paper “Generative Adversarial Nets” [1] that captures the patterns or features of a dataset and then creates new data with a similar distribution. A GAN can learn generative models that produce realistic, detailed images; training a GAN system that uses noise for image generation is also possible [19]. A GAN comprises two models: the generator and the discriminator. The generator generates “fake” images similar to “real” images, while the discriminator tries to distinguish the generated “fake” images from “real” ones. As learning progresses, each model attempts to outperform the other; eventually the discriminator’s accuracy approaches 50% and an equilibrium is maintained. The formula below expresses this competitive learning between the generator and the discriminator.
$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$
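As an illustration, the value function above is easy to evaluate for one batch of discriminator outputs. The following minimal sketch (plain Python, not the paper's PyTorch implementation; the function name and toy values are ours) shows that at the equilibrium, where the discriminator outputs 0.5 everywhere, V(D, G) = −log 4:

```python
import math

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of V(D, G) for one batch.

    d_real: discriminator outputs D(x) on real samples (probabilities).
    d_fake: discriminator outputs D(G(z)) on generated samples.
    """
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# At equilibrium the discriminator outputs 0.5 everywhere,
# giving V(D, G) = log 0.5 + log 0.5 = -log 4 ≈ -1.386.
v_eq = gan_value([0.5, 0.5], [0.5, 0.5])
```

A confident discriminator (high D(x) on real data, low D(G(z)) on fakes) raises the value, which is exactly what the maximization over D seeks.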
The original paper, published in 2014, has been followed by numerous follow-up studies through 2022. AnoGAN uses this GAN for unsupervised anomaly detection. Anomaly detection with AnoGAN targets normal images, for example from MNIST: a generative model is trained using the normal MNIST images. After building a discriminator model that determines whether an image created by this generative model is real or fake, the anomaly score is obtained quantitatively. Based on this anomaly score, unlabeled data are then passed through the model to determine whether they are anomalous. The Wasserstein GAN (WGAN) was used in that study, and an improved WGAN training procedure [20] was proposed in 2017 [21].
$\mathcal{L}(z_\gamma) = (1 - \lambda) \cdot \mathcal{L}_R(z_\gamma) + \lambda \cdot \mathcal{L}_D(z_\gamma), \quad \lambda = 0.1$
The loss function in AnoGAN combines the residual loss and the discrimination loss; the two losses are added with the weight λ, and z is updated accordingly. This approach is taken because existing supervised learning has limitations. The generator and discriminator must already be trained before the latent vector z can be learned. However, as explained in Section 2.2, a large amount of field data is unlabeled, and the time and cost of creating additional datasets are limited. A GAN, by contrast, is an unsupervised method and can be trained without the labels used in supervised learning; it also has the significant advantage of generating data directly. Since then, new models based on various GAN variants, such as the Deep Convolutional Generative Adversarial Network (DCGAN) [22], Least Squares Generative Adversarial Network (LSGAN) [23], Progressive Growing of Generative Adversarial Network (PGGAN) [24], and Super Resolution Generative Adversarial Network (SRGAN) [25], have been proposed, and AnoGAN is one such model. Figure 3 [9] shows the AnoGAN architecture on which the Fv-AnoGAN model proposed in this paper is based.
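For illustration, the weighted combination in the AnoGAN mapping loss above can be sketched as follows (a minimal sketch: the residual and discrimination loss values would come from the image comparison and the discriminator features, and the toy numbers are ours):

```python
def anogan_mapping_loss(residual_loss, discrimination_loss, lam=0.1):
    """AnoGAN mapping loss: (1 - λ)·L_R + λ·L_D, with λ = 0.1 as in the text."""
    return (1.0 - lam) * residual_loss + lam * discrimination_loss

# Toy values: with λ = 0.1 the residual loss dominates the update of z.
loss = anogan_mapping_loss(residual_loss=1.0, discrimination_loss=2.0)
```

Because λ is small, the search for z is driven mainly by image-space similarity, with the discriminator features acting as a regularizer.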

2.4. Localization & Threshold

We borrowed an unsupervised learning-based outlier localization technique [26] for the visualization proposed in this paper. Outliers must be located to understand why they occur and how they affect the data, and most are found using visualization techniques. In the case of localization with a threshold and a GAN, differences between images can be used to check the location of outliers. Current diagnostic methods for outliers employ visualization or residuals, but no visualization work on F-AnoGAN with chromate process data could be found. Several visual methods have been proposed, such as an anomaly detection technique that constructs a clustering-based ensemble model [27] and a visualization study of changes resulting from nonstationary occurrences in time-series data [28]. Visual localization estimates the current position, indoors or outdoors, using only images. We developed this study by testing outlier detection with a GAN, extracting outlier images, and adding an intuitive, accessible visualization part to the model.

3. Fv-AD: F-AnoGAN Based Anomaly Detection

3.1. System Architecture

As the output of anomaly detection, we aim to present anomalous images together with localization and anomaly scores. The proposed anomaly detection model is named fast visualization-anomaly detection GAN (Fv-AnoGAN).
Figure 4 shows an overview of the Fv-AnoGAN model and its localization section. As in F-AnoGAN, a WGAN and an encoder are used, and the model comprises the input data, layers, generator, discriminator, encoder layer, and output. We train on images of the normal data collected in the chromate process. In the training phase the input data are used to train the WGAN model, and encoder training is then performed using normal images, since it builds on the already-trained model. The encoder is trained against the middle layer of the discriminator using the residual loss in feature space. After training the GAN and encoder, in the detection step the model calculates the anomaly score for unseen data through the trained model and presents the localized anomaly image. The F-AnoGAN model, which performs well in anomaly detection, was used in this study. F-AnoGAN was developed to improve the performance of AnoGAN, the first GAN-based anomaly detection model, and Figure 5 shows the overall structure of the F-AnoGAN model applied to the image data generated by the chromate process.
The visualization of the abnormal part is then produced with the localization technique, alongside the anomaly score, as the output of anomaly detection. F-AnoGAN is trained in two stages: WGAN and encoder [20]. The GAN trains two models, generator G and discriminator D, which describe the distribution of the training data: the generator attempts to produce similar images while the discriminator categorizes the fake data generated by the generator, and the two compete in an adversarial process. The WGAN improves output performance by redefining the metric used for convergence, replacing the Jensen-Shannon divergence with the Wasserstein distance and thereby simplifying the model. The first step in building the model is to train the WGAN so that generator G learns only the distribution of normal data, producing only normal images. The encoder is then trained to map images to the latent space. In AnoGAN the mapping of images to the latent space starts from a random point and is often poorly mapped; to address this, F-AnoGAN employs the encoder model of an autoencoder [29]. Encoder models compress the input data into latent vectors. The encoder is used because, when test data are input to the WGAN, the generator would otherwise only generate normal data: the GAN weights are fixed at this point, so if a query image is entered into the GAN during testing, the generator produces only normal data unrelated to the query. Mapping to the latent space is therefore based on the encoder, which is trained to perform the inverse mapping x → z. Anomaly detection is performed with the trained model using the discriminator feature loss and the image reconstruction error against G(z).

3.2. Calculating Anomaly Score

GAN training produces the generator G: z → x, which maps the latent space to the image space, but not the inverse mapping x → z required for outlier detection. AnoGAN solves this by iteratively searching the latent space, whereas F-AnoGAN trains an encoder, enabling the inverse mapping E: x → z. F-AnoGAN proposes three encoder training methods; this study uses the last, the izi_f method, which adds a feature-space loss to the image → z → image (izi) mapping. During training, the mapping from a real image to the latent vector z is performed by the encoder model, so that when a query image x arrives, a G(E(x)) resembling x can be created quickly. The izi_f architecture has the same structure as an autoencoder or CAE [30], in which the encoder is followed by the decoder. During training, the encoder learns the mapping from a real image to z together with G, which maps z back to the image space. This architecture is similar to an image-to-image autoencoder, and training proceeds by minimizing the MSE between x and G(E(x)). The izi training objective enforces similarity in the image space. The loss function of the izi encoder training architecture is as follows: it places the generator behind the encoder and minimizes the MSE loss.
$\mathcal{L}_{izi}(x) = \frac{1}{n} \, \lVert x - G(E(x)) \rVert^2$
Minimizing the pixel-by-pixel difference alone does not guarantee a normal reconstruction: an image with a small residual may be produced even for an abnormal input, so the image-space residual is not reliable by itself. This leads to an architecture that additionally computes statistics from the real and reconstructed images. The statistics of the input are calculated using the discriminator feature f(·) of an intermediate layer, whose feature representation has dimension n_d [5]. The discriminator feature in F-AnoGAN is inspired by the feature matching technique proposed in [31] and is related to the loss used in the initial outlier detection work for iteratively mapping z values. The izi objective constrains image generation, but its shortcoming is that the mapping to the latent space is unconstrained, so the exact result cannot be predicted from the residual loss alone. Statistics of the actual and generated images are therefore added to the architecture. The izi_f loss function is as follows:
$\mathcal{L}_{izi_f}(x) = \frac{1}{n} \, \lVert x - G(E(x)) \rVert^2 + \frac{\kappa}{n_d} \, \lVert f(x) - f(G(E(x))) \rVert^2$
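The izi_f loss above combines an image-space term with a discriminator-feature term. A minimal sketch on flattened images (plain Python; the reconstruction G(E(x)) and the features f(·) are taken as given inputs here, and κ = 1 is only a placeholder weight):

```python
def izif_loss(x, x_rec, f_x, f_rec, kappa=1.0):
    """izi_f loss: (1/n)·||x - G(E(x))||² + (κ/n_d)·||f(x) - f(G(E(x)))||².

    x, x_rec  : flattened query image and its reconstruction G(E(x)).
    f_x, f_rec: intermediate discriminator features f(x) and f(G(E(x))).
    """
    n, n_d = len(x), len(f_x)
    image_term = sum((a - b) ** 2 for a, b in zip(x, x_rec)) / n
    feature_term = kappa * sum((a - b) ** 2 for a, b in zip(f_x, f_rec)) / n_d
    return image_term + feature_term

# A perfect reconstruction with matching features gives zero loss.
zero = izif_loss([1.0, 2.0], [1.0, 2.0], [0.5], [0.5])
```

Since the same quantity doubles as the anomaly score at test time, an abnormal query whose reconstruction and features deviate from the input receives a strictly larger value.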
From this formula it can be seen that the discriminator features are related to the loss used in AnoGAN, and the discriminator feature reuses the parameters obtained when training the GAN. This loss provides a good mapping between the image and latent spaces. The parameters learned during WGAN training [5] are fixed during encoder training, and the Figure 6 architecture, the encoder training architecture selected from the F-AnoGAN model, guides encoder training in the image and latent spaces simultaneously. The discriminator parameters are learned during WGAN training and fixed during encoder training.
After training the GAN and encoder, the anomaly score is calculated by entering a query image x. The anomaly score is calculated as in the following equation [5].
$A(x) = A_R(x) + \kappa \cdot A_D(x)$
The anomaly score represents the deviation between the query and reconstruction images during outlier detection. Observing the formula in [5], it is the same as the izi_f loss formula and usually yields a high score for an abnormal image and a low score for a normal image. The residual loss equation used for training is also employed to calculate the anomaly score. As the model learns only normal images, the encoder reconstructs only images similar to normal inputs.

4. Experiment and Results

4.1. Experiment Environment

Python 3.7 and PyTorch 1.7.0 were used for the model configurations proposed in this paper. We defined the models and used GPU acceleration in a Google Colaboratory cloud environment, using the torch.nn and torchvision modules, to avoid conflicts caused by differing versions across training and experiments. Google Colaboratory Pro provides a graphics processing unit (GPU) such as a T4 or P100. We obtained good results in this environment; however, the training performance and convergence of the neural networks may vary depending on the GPU.

4.2. Datasets

In this paper, experiments were conducted using manufacturing process images collected from an actual chromate process [32]. Chromate has a large impact on process completeness, and the data were collected for quality prediction. We used the data to predict normal and poor quality. Figure 7 depicts part of the collected dataset.
Chromate treatment is used as a post-treatment after galvanizing or cadmium plating; in fact, chromate treatment after galvanizing is essential. The process of coating a rust-preventive film using bichromate is used for products that require gloss. In the main film-forming reaction, zinc dissolves, the hydrogen ion concentration decreases at the zinc interface, bichromate ions are reduced, and the film precipitates on the zinc surface. In the chromate process, if the solution concentration is low, the result is not white; if the concentration is high, the thickness is reduced. Various plating methods exist; among them, the chromate process in this dataset follows an electroplating process.
The data collection method receives real-time data through a PLC and sensors, stores it in CSV files keyed by collection time, and captures product images in PNG form through an image collection device for visual quality evaluation. The datasets are CSV and PNG, and the image data are learned to automate visual quality inspection and detect defective products more accurately. The training data contain 1103 normal and 76 abnormal samples, enabling experiments on imbalanced data. The input images are then converted to the sizes <64 × 64, 128 × 128, 256 × 256> that showed optimal performance across various experiments.

4.3. Performance Metrics

This subsection uses two tables to validate the numerical values of the anomaly scores. Separating normal and abnormal data, the tables report the image distance, the z value learned by the encoder, the anomaly score, and the loss value. The values in the two tables are the anomaly score figures for the test data, with label 0 representing anomalous data and label 1 representing normal data.
image_distance is the MSE loss term of the loss function; MSE loss is often used for differences between images or between segmentation masks. It is obtained by subtracting the target value from the generated image value, and its average value is 0.06. The average anomaly score of the normal data in Table 1 is 0.01, indicating a difference between the two classes.
z_distance also uses the MSE loss function; it differs from image_distance in that it is computed on the encoding learned by the encoder, indicating the difference between the generated value and the target. Table 2 shows an average z_distance of 0.11, while Table 1 confirms that the z_distance of the normal data is 0.005, a significant numerical difference between the two classes.
The anomaly score is described in Section 3. The difference in discriminator features between the generated value and the target value is referred to as the loss value.
Table 1 presents the experimental outlier detection scores for the normal data. There is a clear difference in all figures when compared with the experimental results for the abnormal data in Table 2.

4.4. Results and Analysis

When training the WGAN, the final hyperparameter values were: Epoch = 100, Lr = 0.0002, batch_size = 32, b1 = 0.6, b2 = 0.999, latent_dim = 100, sample_interval = 400. The hyperparameter values set when training the encoder were: Epoch = 200, batch_size = 32, Lr = 0.0002, b1 = 0.5, b2 = 0.999, latent_dim = 100, sample_interval = 400. Training was performed with these settings, and the following performance was confirmed.
Figure 8 shows a histogram of the scores for normal and abnormal data. In the histogram, the two distributions separate, with one class at 0.06 or lower and the other at 0.15 or higher; the threshold for the anomaly score was therefore set to a value between 0.06 and 0.15. The results for the normal and abnormal data are clearly distinguishable, and the anomaly score serves as an important indicator of the anomaly detection results in the process. In the graph, the x-axis represents the outlier score and the y-axis the count. Furthermore, the anomaly score proposed in F-AnoGAN distinguishes defective from normal data, and the comparison image obtained from the difference between the test and generated images accurately identifies the abnormal part.
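Given the gap between the two score distributions, turning anomaly scores into labels is a single comparison. A minimal sketch (the midpoint 0.10 is our placeholder inside the reported gap (0.06, 0.15), not a value fixed by the paper; labels follow Section 4.3: 1 = normal, 0 = anomalous):

```python
def classify_scores(scores, threshold=0.10):
    """Label anomaly scores: 1 = normal (at or below threshold), 0 = anomalous."""
    return [1 if s <= threshold else 0 for s in scores]

# Scores at 0.06 or below are normal; scores at 0.15 or above are anomalous.
labels = classify_scores([0.01, 0.20, 0.05, 0.18])
```

Because no observed score falls inside the gap, any threshold in (0.06, 0.15) yields the same labels on this data.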
Figure 9 shows the loss curves obtained using the F-AnoGAN model. The x-axis represents the number of data points and the y-axis the loss value. The figure shows that F-AnoGAN outperformed AnoGAN: the loss value of F-AnoGAN is lower than that of AnoGAN, and the images created by F-AnoGAN are more refined than those created by AnoGAN.
The image on the left side of Figure 10 shows the image data of a defective product that occurred during the chromate process. The product was found to be defective due to a crack at the bottom, and anomaly detection was performed using the generated image. The middle image was created by the generator; although it does not attain the resolution of the real image, it produces a plausible fake image. The final image compares the real image learned through the encoder with the fake image generated by the generator; the darkened part of the image indicates the abnormal region, which can be identified visually.
The leftmost image in Figure 11 shows the image data of a defective product from the chromate process. These data were found to be defective due to a crack at the top, and anomaly detection was performed with the generated image. The middle image is the fake image generated by the generator; compared with AnoGAN, it shows a higher resolution. The last image compares the real image learned through the encoder with the fake image generated by the generator. The darkened regions show that the crack at the top in Figure 11 and the crack at the bottom in Figure 10 were each correctly detected.
Figure 12 shows the result of anomaly detection on a real image without a crack. The image on the left shows a normal product obtained during the chromate process. The middle image is generated by the generator; the normal image does not reach the resolution of the real image but yields an acceptable fake image. The final image compares the real image learned through the encoder with the fake image generated by the generator. In contrast to Figures 10 and 11, no dark black abnormal location is found, demonstrating that the normal image was correctly classified.
Localization is used to present detected anomalies: after training with F-AnoGAN, the location of the anomaly can be checked. As the difference image is binarized using a threshold, the position of the abnormal part is rendered in dark black. Figure 13 shows that, after the proposed F-AnoGAN learns the chromate process data, abnormalities can be judged and predicted using localization and anomaly scores.
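The binarization step described above can be sketched as a per-pixel threshold on the residual between the real image and its reconstruction (plain Python on nested lists; the pixel range [0, 1] and the 0.3 threshold are illustrative assumptions, not values from the paper):

```python
def localize_anomaly(real, fake, threshold=0.3):
    """Binarize the per-pixel residual |x - G(E(x))|.

    real, fake: 2-D images as nested lists of floats in [0, 1].
    Returns a mask with 1 where the residual exceeds the threshold,
    marking the region that would be rendered dark in the figures.
    """
    return [[1 if abs(r - f) > threshold else 0 for r, f in zip(row_r, row_f)]
            for row_r, row_f in zip(real, fake)]

# A pixel the generator cannot reproduce (0.9 real vs 0.1 fake) is flagged.
mask = localize_anomaly([[0.0, 0.9]], [[0.0, 0.1]])
```

Because the generator reproduces only normal structure, large residuals concentrate exactly where the defect lies, which is what makes the mask a usable localization.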
Table 3 compares the approaches according to encoder training. The same WGAN training was used with an unrestricted encoder with a linear output layer. Three versions are compared, and the izi_f encoder model proposed in this paper shows the best performance. We compare the ziz, izi, and izi_f encoder training architectures based on the WGAN.
Although various evaluation indicators exist for assessing the model, in this study its performance was confirmed and verified using the five items shown in Table 4. Data refined through preprocessing has a large impact on accuracy, which can vary depending on data quality, and the image data collected in the chromate process yield high performance in all models. The proposed Fv-AnoGAN model exhibited the highest performance.
The fast visualization anomaly detection (Fv-AD) model was newly defined in this study, and positive, useful results were obtained using data generated in the chromate process for outlier detection. The abnormal location in a defective image was determined using the unsupervised learning-based F-AnoGAN, and the approach was validated with outlier scores. Because post-processing greatly affects quality, the proposed method is expected to mitigate the problems of worker inexperience and defects missed by humans at the chromate manufacturing site, enabling accurate quality prediction. The various experiments with chromate process data across unsupervised learning techniques are also considered meaningful. In future work, we will continue to develop F-AnoGAN by studying a combined model with XAI [33] so that the model can produce explainable results during training.

5. Conclusions

Real-time monitoring methods that detect process anomalies early to achieve product homogeneity have long been studied in manufacturing [34]. In existing processes, various attempts have been made to identify outliers accurately in spite of data imbalance; in addition, anomaly detection is difficult because of technical limitations or limited worker proficiency. Quality issues are not confined to the mass-production stage, so product planning, R&D, mass production, and service must be managed as a single pipeline. To address these problems, we proposed a manufacturing-process model based on F-AnoGAN, which offers excellent outlier-detection performance. Across various experiments, data from an actual manufacturing process yielded optimal performance.
Regarding the expected impact, process completeness in the chromate process significantly affects quality, and chromium, an environmentally regulated substance, is being substituted. The method is therefore expected to be in demand among manufacturers that require cost reduction and reliable quality prediction. Furthermore, it is anticipated to transfer to processes similar to electroplating, such as hot-dip plating and chemical plating. In the future, by comparing more models, we may be able to build one that discriminates outliers at the pixel level, and we intend to improve efficiency by optimizing parameters and reducing training and generation time to supplement these results.
Because PGAN currently uses the WGAN-GP loss and further increases reliability and robustness, adopting PGAN in future models is also considered a promising complement [22], as is optimization toward lightweight models suitable for use in a variety of domains.

Author Contributions

Conceptualization, C.P. and S.L.; methodology, C.P.; software, C.P. and S.L.; validation, C.P. and S.L.; formal analysis, D.C.; investigation, D.C.; resources, C.P.; data curation, C.P.; writing—original draft preparation, C.P., S.L. and D.C.; writing—review and editing, S.L.; visualization, S.L.; supervision, J.J.; project administration, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01417) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). Also, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1060054).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Acknowledgments

This research was supported by Sungkyunkwan University and the BK21 FOUR (Graduate School Innovation) funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF), and by the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01417) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
  2. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  3. Pham, H.; Dai, Z.; Xie, Q.; Le, Q.V. Meta pseudo labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 21–25 June 2021; pp. 11557–11568. [Google Scholar]
  4. Chalapathy, R.; Chawla, S. Deep learning for anomaly detection: A survey. arXiv 2019, arXiv:1901.03407. [Google Scholar]
  5. Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Langs, G.; Schmidt-Erfurth, U. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Med. Image Anal. 2019, 54, 30–44. [Google Scholar] [CrossRef] [PubMed]
  6. Lee, K.; Lim, J.; Bok, K.; Yoo, J. Handling method of imbalance data for machine learning: Focused on sampling. J. Korea Contents Assoc. 2019, 19, 567–577. [Google Scholar]
  7. Li, Z.; Kamnitsas, K.; Glocker, B. Analyzing overfitting under class imbalance in neural networks for image segmentation. IEEE Trans. Med. Imaging 2020, 40, 1065–1077. [Google Scholar] [CrossRef] [PubMed]
  8. Huang, Y.; Juefei-Xu, F.; Guo, Q.; Liu, Y.; Pu, G. FakeLocator: Robust localization of GAN-based face manipulations. IEEE Trans. Inf. Forensics Secur. 2022; Early Access. [Google Scholar] [CrossRef]
  9. Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Schmidt-Erfurth, U.; Langs, G. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In Proceedings of the International Conference on Information Processing in Medical Imaging, Boone, NC, USA, 25–30 June 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 146–157. [Google Scholar]
  10. Nam, H. A case study on the application of process abnormal detection process using big data in smart factory. Korean J. Appl. Stat. 2021, 34, 99–114. [Google Scholar]
  11. Jiang, W.; Hong, Y.; Zhou, B.; He, X.; Cheng, C. A GAN-based anomaly detection approach for imbalanced industrial time series. IEEE Access 2019, 7, 143608–143619. [Google Scholar] [CrossRef]
  12. Lu, H.; Du, M.; Qian, K.; He, X.; Wang, K. GAN-based data augmentation strategy for sensor anomaly detection in industrial robots. IEEE Sens. J. 2021. [Google Scholar] [CrossRef]
  13. Kiran, B.R.; Thomas, D.M.; Parakkal, R. An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos. J. Imaging 2018, 4, 36. [Google Scholar] [CrossRef] [Green Version]
  14. Oz, M.A.N.; Mercimek, M.; Kaymakci, O.T. Anomaly localization in regular textures based on deep convolutional generative adversarial networks. Appl. Intell. 2022, 52, 1556–1565. [Google Scholar] [CrossRef]
  15. Munir, M.; Siddiqui, S.A.; Dengel, A.; Ahmed, S. DeepAnT: A deep learning approach for unsupervised anomaly detection in time series. IEEE Access 2018, 7, 1991–2005. [Google Scholar] [CrossRef]
  17. Li, D.; Chen, D.; Jin, B.; Shi, L.; Goh, J.; Ng, S. MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks. In Proceedings of the International Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; pp. 703–716. [Google Scholar]
  17. Li, D.; Chen, D.; Jin, B.; Shi, L.; Goh, J.; Ng, S. Madgan: Multivariate anomaly detection for time series data with generative adversarial networks. In Proceedings of the International Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; pp. 703–716. [Google Scholar]
  18. Akcay, S.; Atapour-Abarghouei, A.; Breckon, T.P. Ganomaly: Semi-supervised anomaly detection via adversarial training. In Proceedings of the Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 622–637. [Google Scholar]
  19. Bae, S.; Kim, M.; Jung, H. GAN system using noise for image generation. J. Korea Inst. Inf. Commun. Eng. 2020, 24, 700–705. [Google Scholar]
  20. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 214–223. [Google Scholar]
  21. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of wasserstein gans. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  22. Berg, A.; Ahlberg, J.; Felsberg, M. Unsupervised learning of anomaly detection from contaminated image data using simultaneous encoder training. arXiv 2019, arXiv:1905.11034. [Google Scholar]
  23. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.; Wang, Z.; Paul Smolley, S. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2794–2802. [Google Scholar]
  24. Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive growing of gans for improved quality, stability, and variation. arXiv 2017, arXiv:1710.10196. [Google Scholar]
  25. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
  26. Venkataramanan, S.; Peng, K.C.; Singh, R.V.; Mahalanobis, A. Attention guided anomaly localization in images. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 485–503. [Google Scholar]
  27. Park, C.H.; Kim, T.; Kim, J.; Choi, S.; Lee, G.H. Outlier Detection By Clustering-Based Ensemble Model Construction. KIPS Trans. Softw. Data Eng. 2018, 7, 435–442. [Google Scholar]
  28. Yoo, J.; Choo, J. A study on the test and visualization of change in structures associated with the occurrence of non-stationary of long-term time series data based on unit root test. KIPS Trans. Softw. Data Eng. 2019, 8, 289–302. [Google Scholar]
  29. Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103. [Google Scholar]
  30. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544. [Google Scholar]
  31. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training gans. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016. [Google Scholar]
  32. Platform, K.A.M.P. (Korea AI Manufacturing Platform) CNC Machine AI Dataset, KAIST (UNIST, EPM SOLUTIONS). 2022. Available online: https://www.kamp-ai.kr/front/dataset/AiData.jsp (accessed on 11 June 2020).
  33. Das, A.; Rad, P. Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv 2020, arXiv:2006.11371. [Google Scholar]
  34. Bentley, K.H.; Kleiman, E.M.; Elliott, G.; Huffman, J.C.; Nock, M.K. Real-time monitoring technology in single-case experimental design research: Opportunities and challenges. Behav. Res. Ther. 2019, 117, 87–96. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Anomaly detection procedure in smart factory.
Figure 2. Anomaly Detection for Time series.
Figure 3. AnoGAN.
Figure 4. Fv-AD Model.
Figure 5. System architecture for chromate process.
Figure 6. izi_f Encoder training.
Figure 7. Datasets.
Figure 8. Histogram of anomaly score.
Figure 9. Loss function.
Figure 10. Bottom Crack Image Data Comparison.
Figure 11. Top Crack Image Data Comparison.
Figure 12. Normal Image Data Comparison.
Figure 13. Localization images.
Table 1. Performance matrix of anomaly detection score for normal data.

| Type  | Label | Image_distance | Anomaly_score | Z_distance | Loss        |
|-------|-------|----------------|---------------|------------|-------------|
| Class | 1     | 0.0113228      | 0.015835      | 0.001151   | 1171.768912 |
|       | 1     | 0.013555       | 0.020455      | 0.008917   | 1290.782983 |
|       | 1     | 0.012377       | 0.017536      | 0.003320   | 1012.679232 |
|       | 1     | 0.014349       | 0.021915      | 0.011179   | 1266.791052 |
|       | 1     | 0.013614       | 0.018974      | 0.008290   | 1387.901923 |
|       | 1     | 0.018190       | 0.028164      | 0.005597   | 1206.969482 |
|       | 1     | 0.016685       | 0.024101      | 0.003862   | 1124.322021 |
|       | 1     | 0.020770       | 0.032192      | 0.002772   | 1307.966553 |
|       | 1     | 0.020538       | 0.031241      | 0.005769   | 1238.942871 |
|       | 1     | 0.017792       | 0.030411      | 0.002449   | 1274.199707 |
| Mean  | 1     | 0.013291       | 0.021593      | 0.005569   | 1143.691237 |
Table 2. Performance matrix of anomaly detection score for anomaly data.

| Type  | Label | Image_distance | Anomaly_score | Z_distance | Loss        |
|-------|-------|----------------|---------------|------------|-------------|
| Class | 0     | 0.050823       | 0.228023      | 0.076672   | 2012.108765 |
|       | 0     | 0.048244       | 0.194100      | 0.060309   | 1544.944214 |
|       | 0     | 0.041436       | 0.118608      | 0.090345   | 1597.007935 |
|       | 0     | 0.091974       | 0.452780      | 0.209052   | 2004.321533 |
|       | 0     | 0.095337       | 0.395791      | 0.115832   | 1816.836914 |
|       | 0     | 0.036689       | 0.167070      | 0.069618   | 1936.427153 |
|       | 0     | 0.036439       | 0.133632      | 0.084261   | 1824.568182 |
|       | 0     | 0.058436       | 0.251603      | 0.114561   | 1907.987691 |
|       | 0     | 0.097947       | 0.491510      | 0.250416   | 1638.791923 |
|       | 0     | 0.056836       | 0.229287      | 0.065458   | 1574.176541 |
| Mean  | 0     | 0.061416       | 0.266240      | 0.113652   | 1831.658193 |
Table 3. Encoder training performance.

| Encoder | Precision | Sensitivity | Specificity | F1-Score | AUC   |
|---------|-----------|-------------|-------------|----------|-------|
| ziz     | 0.923     | 0.957       | 0.966       | 0.927    | 0.942 |
| izi     | 0.908     | 0.921       | 0.951       | 0.894    | 0.931 |
| izi_f   | 0.978     | 0.981       | 0.982       | 0.971    | 0.997 |
Table 4. Model performance.

| Model  | Precision | Sensitivity | Specificity | F1-Score | AUC   |
|--------|-----------|-------------|-------------|----------|-------|
| CNN    | 0.952     | 0.969       | 0.971       | 0.963    | 0.993 |
| AE     | 0.964     | 0.974       | 0.981       | 0.951    | 0.978 |
| AnoGAN | 0.928     | 0.983       | 0.980       | 0.931    | 0.964 |
| Fv-AD  | 0.978     | 0.981       | 0.982       | 0.971    | 0.997 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
