Article

Various Generative Adversarial Networks Model for Synthetic Prohibitory Sign Image Generation

Christine Dewi, Rung-Ching Chen, Yan-Ting Liu and Hui Yu

1 Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
2 Faculty of Information Technology, Satya Wacana Christian University, Salatiga 50711, Indonesia
3 School of Creative Technologies, University of Portsmouth, Portsmouth PO1 2UP, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(7), 2913; https://doi.org/10.3390/app11072913
Submission received: 28 February 2021 / Revised: 16 March 2021 / Accepted: 20 March 2021 / Published: 24 March 2021

Abstract

A synthetic image is a critical issue for computer vision. Traffic sign images synthesized from standard templates are commonly used to build recognition algorithms, providing varied training data at low cost. A Convolutional Neural Network (CNN) achieves excellent detection and recognition of traffic signs when sufficient annotated training data are available, and the consistency of the entire vision system depends on the neural networks. However, traffic sign datasets from most countries in the world are difficult to obtain. This work uses various generative adversarial network (GAN) models to construct intricate images, namely Least Squares Generative Adversarial Networks (LSGAN), Deep Convolutional Generative Adversarial Networks (DCGAN), and Wasserstein Generative Adversarial Networks (WGAN). In particular, this paper discusses the quality of the images produced by the various GANs with different parameters: the number of input images, the number of training epochs, and the image size. The Structural Similarity Index (SSIM) and Mean Squared Error (MSE) are used to measure image consistency, and the SSIM values are compared between each generated image and the corresponding real image. As a result, the generated images display a strong similarity to the real images when more training images are used. LSGAN outperformed the other GAN models in the experiment, with maximum SSIM values achieved using 200 images as inputs, 2000 epochs, and a size of 32 × 32.

1. Introduction

Neural networks with more layers have been adopted in the latest developments in deep learning [1]. These deeper models are far more capable but need far more training data. Nonetheless, obtaining a correct and reliable dataset with manual labeling is often costly, which has been a general problem in machine learning as well as computer vision. Synthesizing images is an effective way to enlarge the training set and thus improve image recognition accuracy, and data augmentation for enlarging the training set in image classification has been employed in various research [2]. Traffic sign detection (TSD) and traffic sign recognition (TSR) technology have been thoroughly researched and discussed in recent years [3,4]. Many TSD and TSR systems rely on large quantities of training data. In recent years, a few traffic sign datasets have been published: the German Traffic Sign Recognition Benchmark (GTSRB) [5], the Chinese Traffic Sign Database (TSRD), and Tsinghua-Tencent 100K (TT100K) [6]. Traffic signs differ from country to country and, in various circumstances, an interesting option is to apply synthetically generated training data; synthetic images save time and effort in data collection [7,8]. Synthetic training data have not yet been commonly used in the TSR field but are worth exploring because very few datasets are available from other countries, in particular from Taiwan. In this research, we focus on Taiwan's prohibitory signs. Our motivation arises from the current unavailability of such a Taiwan traffic sign database, imagery, and research system.
A generative adversarial network (GAN) [9] is a deep learning framework consisting of two models, a generative model and a discriminative model, which are trained together. GANs have brought benefits to several specific tasks, such as image synthesis [10,11,12], image-to-image translation [13,14], and image restoration [15]. Image synthesis is a fundamental problem in computer vision [16,17,18]. In order to obtain more diverse and low-cost training data, traffic sign images synthesized from standard templates have been widely used to train classification algorithms based on machine learning [12,19]. Radford et al. [20] proposed the Deep Convolutional Generative Adversarial Network (DCGAN) in 2016. DCGAN combines the GAN with a CNN so that better and more stable training results can be obtained. Other variants of GAN are Least Squares Generative Adversarial Networks (LSGAN) and Wasserstein Generative Adversarial Networks (WGAN) [21,22]; both better address the training instability of GANs. Each of these GANs has achieved excellent results in producing synthetic imagery. Therefore, given the lack of an existing training dataset, our experiments apply DCGAN, LSGAN, and WGAN to generate synthetic images.
Traffic sign images compiled from standard templates are commonly used to collect additional training data at low cost and with flexibility for training classification algorithms [19]. In this paper, DCGAN, LSGAN, and WGAN are used to generate complex images. Synthetic images offer a solution when only a small amount of data is available, and GANs have delivered outstanding results in image data generation. Our experiment therefore favors using GAN-generated synthetic images to obtain image data, because this approach does not depend on a vast dataset for training.
This work's main contributions can be summarized as follows. First, high-quality synthetic Taiwan prohibitory sign images (Classes T1–T4) are obtained using various GAN models. Second, the performance of DCGAN, LSGAN, and WGAN in generating synthetic images is analyzed and evaluated with different numbers of epochs (1000 and 2000), numbers of input images, and sizes (64 × 64 and 32 × 32). Next, we propose an experimental setting with various GAN styles to generate synthetic images, and we evaluate the synthetic images using SSIM and MSE. The remainder of this article is structured as follows. Section 2 covers materials and methods, Section 3 describes the experimental results, Section 4 provides the discussion, and Section 5 offers preliminary conclusions and suggests future work.

2. Materials and Methods

Synthetic images are used to broadly expand datasets. A well-known method is the combination of original and synthetic data for better detection performance. Multiple approaches such as [23,24] have confirmed the advantage of adding synthetic data when real data are limited. Lately, particular approaches [25] have been proposed to bridge the domain gap between real and synthetic data by applying generative adversarial networks (GANs), obtaining more reliable results than training with real data alone. However, GANs are challenging to train and have shown their value primarily in regression tasks.

2.1. Deep Convolutional Generative Adversarial Networks (DCGAN)

Radford et al. evaluated the architectural and topological constraints of the convolutional GAN in 2016. The resulting method is more stable in most settings and is named Deep Convolutional GAN (DCGAN) [20,26]. DCGAN is a paradigm for image generation consisting of a generative network G and a discriminative network D [20,27]. Figure 1 displays the G and D network diagram. The G network is a de-convolutional neural network that creates images from d-dimensional vectors using de-convolutional layers. The D network, on the other hand, has the same structure as a traditional CNN and discriminates whether an input is a real image from a predefined dataset or an image produced by G [28]. The training of DCGAN is expressed in Formula (1) as follows [9]:
min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]
where x is a real image, z is a d-dimensional vector of random numbers, and p_data(x) and p_z(z) are the probability distributions of x and z. D(x) is the probability that the input is a real image drawn from p_data(x), and 1 − D(G(z)) is the probability that the input was generated from p_z(z). D is trained to increase the rate of correct classification, and G is trained to decrease log(1 − D(G(z))) in order to deceive D.
Consequently, optimizing D, we obtain maximum V (D, G), and when optimizing G, we obtain minimum V (D, G). Lastly, the optimization problem is displayed in Formula (2) and Formula (3):
D*_G = arg max_D V(G, D)
G* = arg min_G V(G, D*_G)
G captures the data distribution and generates samples resembling the real training data from noise z drawn from a certain distribution, such as a uniform or Gaussian distribution; the aim is to make the generated samples as convincing as the actual samples. The discriminator D estimates the probability that a sample comes from the training data rather than from the generated data. If the sample comes from the real training data, D outputs a large probability; otherwise, D outputs a small probability [29,30].
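As an illustration of how this minimax objective is typically implemented, the following minimal PyTorch sketch (an assumption on our part; the paper does not list its exact implementation) expresses Formula (1) through binary cross-entropy losses for D and G. The networks netG and netD, the latent vectors z, and the output shapes are placeholders; netD is assumed to end with a sigmoid so its output is a probability.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the DCGAN adversarial losses (Formula (1)).
# `netG` and `netD` stand in for the generator and discriminator networks;
# `netD` is assumed to output a probability of shape (batch, 1).
bce = nn.BCELoss()

def discriminator_loss(netD, netG, real_images, z):
    real_labels = torch.ones(real_images.size(0), 1)      # label 1 for real images
    fake_labels = torch.zeros(z.size(0), 1)               # label 0 for generated images
    loss_real = bce(netD(real_images), real_labels)       # corresponds to -E[log D(x)]
    loss_fake = bce(netD(netG(z).detach()), fake_labels)  # corresponds to -E[log(1 - D(G(z)))]
    return loss_real + loss_fake

def generator_loss(netD, netG, z):
    real_labels = torch.ones(z.size(0), 1)
    # Non-saturating form: maximize log D(G(z)) instead of minimizing log(1 - D(G(z))).
    return bce(netD(netG(z)), real_labels)
```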

2.2. Least Squares Generative Adversarial Networks (LSGAN)

The discriminator in LSGANs uses least squares as its cost function [31,32]. LSGANs are used to generate samples that can represent the real data. Least Squares Generative Adversarial Networks (LSGANs) have two advantages over regular GANs. First, LSGANs can produce higher-quality images than conventional GANs. Second, LSGANs perform more stably during the learning process [33,34]. Training GANs is a difficult problem in practice because of the instability of GAN learning.
Recent research has pointed out that the instability of GAN learning is caused by the objective function [35]. In particular, minimizing the standard GAN objective function can suffer from vanishing gradients, which makes it difficult to update the generator. LSGAN relieves this problem, since penalizing samples according to their distance from the decision boundary generates more gradients when the generator is updated. In comparison, the training instability of standard GANs is attributed to the mode-seeking behavior of the objective function, and LSGANs display less mode-seeking behavior. The cost functions of an LSGAN are shown in Formulas (4) and (5) [36].
min_D V_LSGAN(D) = (1/2) E[(D(X_real,i) − 1)²] + (1/2) E[(D(G(X_fake,i)))²]
min_G V_LSGAN(G) = (1/2) E[(D(G(X_fake,i)) − 1)²]
LSGANs can generate new data with high similarity to the original data through the mutual benefits of discriminator and generator in the model [37]. Therefore, this paper chooses LSGAN to augment the dataset and generate more realistic data.
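For comparison with the cross-entropy objective of Section 2.1, the following minimal sketch (again an assumed PyTorch implementation; in LSGAN the discriminator output layer is linear, not a sigmoid) expresses the least-squares losses of Formulas (4) and (5).

```python
import torch
import torch.nn as nn

# Illustrative LSGAN losses (Formulas (4) and (5)).
# `netD` here has a linear (unbounded) scalar output, as LSGAN requires.
mse = nn.MSELoss()

def lsgan_discriminator_loss(netD, netG, real_images, z):
    # (1/2) E[(D(x_real) - 1)^2] + (1/2) E[(D(G(z)))^2]
    d_real = netD(real_images)
    d_fake = netD(netG(z).detach())
    return 0.5 * mse(d_real, torch.ones_like(d_real)) + \
           0.5 * mse(d_fake, torch.zeros_like(d_fake))

def lsgan_generator_loss(netD, netG, z):
    # (1/2) E[(D(G(z)) - 1)^2]
    d_fake = netD(netG(z))
    return 0.5 * mse(d_fake, torch.ones_like(d_fake))
```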

2.3. Wasserstein Generative Adversarial Networks (WGANs)

WGAN [22] was developed to solve the instability of GAN training [38], which is believed to be related to the vanishing gradients of the GAN discriminator function. Yang et al. [39] applied WGAN to denoising low-dose CT images and achieved a successful application in medical imaging reconstruction. WGAN has also been used in synthetic data generation modules to generate virtual damage signals, increasing the number of minority defect samples and stabilizing the training dataset with synthetic signals.
Two important contributions of WGAN [40] are as follows: (1) WGAN showed no sign of mode collapse in experiments; (2) the generator can still learn when the critic performs well. To estimate the Wasserstein distance, a 1-Lipschitz function is needed, and a deep network is built to learn it. This network is very similar to the discriminator D, but without the sigmoid function, and its output is a scalar score rather than a probability; the score can be interpreted as how real the input images are. In WGAN the discriminator is renamed the critic to reflect its new role. The difference between a standard GAN and WGAN is therefore the change from discriminator to critic, along with the cost function; the network designs are almost the same except that the critic has no output sigmoid function. The cost functions of the critic and the generator in WGAN are shown in Formulas (6) and (7), respectively.
∇_w (1/m) Σ_{i=1}^{m} [f(x^(i)) − f(G(z^(i)))]
∇_θ (1/m) Σ_{i=1}^{m} f(G(z^(i)))
However, f has to be a 1-Lipschitz function. To enforce this constraint, WGAN applies simple weight clipping that bounds the largest weight value in f: the weights of the critic must be restricted by a hyperparameter c to a fixed range [−c, c]. The architecture of WGAN is shown in Figure 2, where z represents random noise, G the generator, G(z) the samples produced by the generator, C the critic, and C* an approximate expression of the Wasserstein-1 distance.
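A minimal sketch of the critic and generator updates with weight clipping follows; it assumes a PyTorch implementation in which critic and netG are defined elsewhere, and the clipping hyperparameter c = 0.01 is an illustrative choice rather than the value used in this work.

```python
import torch

# Illustrative WGAN critic/generator losses and weight clipping (Formulas (6) and (7)).
# `critic` has a linear (scalar) output with no sigmoid.
def wgan_critic_loss(critic, netG, real_images, z):
    # Maximize E[f(x)] - E[f(G(z))]  <=>  minimize the negative.
    return -(critic(real_images).mean() - critic(netG(z).detach()).mean())

def wgan_generator_loss(critic, netG, z):
    # Maximize E[f(G(z))]  <=>  minimize the negative.
    return -critic(netG(z)).mean()

def clip_critic_weights(critic, c=0.01):
    # Approximately enforce the 1-Lipschitz constraint by clipping weights to [-c, c].
    for p in critic.parameters():
        p.data.clamp_(-c, c)
```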

2.4. SSIM and MSE

The structural similarity (SSIM) index is a good indicator of perceived image quality. The SSIM assessment approach distinguishes the brightness and contrast of local image detail and incorporates structural information into image quality evaluation [41,42]. The structural similarity measurement is split into three parts: the luminance function l(x,y), the contrast function c(x,y), and the structure comparison function s(x,y) [43]. These three factors indicate how similar the structures are: the mean value serves as an estimate of brightness, the standard deviation as an estimate of contrast, and the covariance as a measure of structural resemblance. The SSIM functions are given in Formulas (8)–(11) as follows [44,45].
l(x, y) = (2 μ_x μ_y + C_1) / (μ_x² + μ_y² + C_1)
c(x, y) = (2 σ_x σ_y + C_2) / (σ_x² + σ_y² + C_2)
s(x, y) = (σ_xy + C_3) / (σ_x σ_y + C_3)
SSIM(x, y) = [(2 μ_x μ_y + C_1)(2 σ_xy + C_2)] / [(μ_x² + μ_y² + C_1)(σ_x² + σ_y² + C_2)]
where μ_x is the average of x, μ_y is the average of y, σ_x² is the variance of x, σ_y² is the variance of y, and σ_xy is the covariance of x and y. The input of SSIM [46] is a pair of images, one undistorted and the other distorted. The structural similarity between the two images can be used as an image quality indicator for the distorted image. Compared with traditional image quality metrics, such as the Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) [47], structural similarity is more in line with the human eye's judgment of image quality. The relation between SSIM and more conventional quality metrics can be demonstrated geometrically in a vector space of the image components, which might be pixels or other derived elements, for example, linear coefficients [48].
Mean Squared Error (MSE) is adopted to determine the discrepancy between the estimated values and the original values of the quantity being estimated, computed as the squared difference of pixels. The error is the amount by which the value implied by the estimator differs from the quantity being estimated, as shown in Formula (12) [49].
MSE = (1/n) Σ_{i=1}^{n} (P_i − Q_i)²
where P_i represents the observed value, Q_i represents the predicted value, and n is the number of data points. In our work, the synthetic images generated by DCGAN, LSGAN, and WGAN are evaluated using SSIM and MSE. The value of SSIM lies between −1 and 1, and higher is better; in contrast, smaller MSE values indicate a more favorable result.
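As a concrete illustration of this evaluation step, the following sketch uses the scikit-image metrics module to compute MSE and SSIM for one real/synthetic image pair; the function name and the grayscale assumption are ours, not taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

def evaluate_pair(real_image: np.ndarray, synthetic_image: np.ndarray):
    """Compare a generated image with its corresponding real image.

    Both inputs are assumed to be grayscale arrays of identical shape
    (color images could be handled with channel_axis=-1).
    """
    mse = mean_squared_error(real_image, synthetic_image)
    ssim = structural_similarity(
        real_image, synthetic_image,
        data_range=real_image.max() - real_image.min(),
    )
    return mse, ssim

# An identical pair gives MSE = 0 and SSIM = 1, which matches the reference
# values reported for the original images in Figure 9.
```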

2.5. Image Preprocessing

Traditional data augmentation relies on fundamental transformations such as horizontal flipping, changes in color space, and automatic cropping. These transformations encode several of the invariances discussed previously that pose challenges for image classification. The augmentations surveyed in [50] include geometric transformations, color space transformations, kernel filters, image mixing, random erasing, feature-space augmentation, adversarial training, neural style transfer, and meta-learning schemes. While these data augmentation methods are designed manually, recent work has focused on deep neural network models that automatically create new training samples [49,51].
Cropping can be performed by extracting a central patch from each image and is a reasonable preprocessing step for image data with mixed width and height dimensions. Random cropping can also be used to obtain an effect similar to translation; the difference is that cropping reduces the size of the image, while translation preserves its spatial dimensions. Depending on the cropping threshold chosen, this may not be a label-preserving transformation. To obtain better results, we cropped each image to focus on the sign, and we use 200, 100, and 50 images as input in each group. Rotation augmentation is accomplished by rotating the image right or left around its axis by roughly 1° to 359°, and its usefulness depends heavily on the rotation degree parameter. Slight rotations, such as between 1° and 20° or −1° and −20°, may be useful for digit recognition tasks, but as the rotation degree rises the data label is no longer preserved after transformation. Therefore, during data augmentation, these experiments use the following parameters: rotation range = 20, zoom range = 0.10, width shift range = 0.2, height shift range = 0.2, and shear range = 0.15 (see the sketch below).
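A minimal sketch of such an augmentation configuration, assuming the Keras ImageDataGenerator API (the exact augmentation code used in this work is not given in the paper), is:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings matching the parameters listed above.
augmenter = ImageDataGenerator(
    rotation_range=20,        # rotate up to +/-20 degrees
    zoom_range=0.10,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    fill_mode="nearest",      # assumption: how newly exposed pixels are filled
)

# Example usage: generate augmented batches from an array of sign crops `x_train`.
# batches = augmenter.flow(x_train, batch_size=25)
```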

2.6. Research Workflow

In this section, we describe our proposed method for generating traffic sign images using different GAN methods. Figure 3 illustrates the workflow of this research. In addition, we conducted experiments with different settings to create realistic synthetic images with DCGAN, LSGAN, and WGAN. We focus only on Taiwan prohibitory signs, which consist of no-entry images (Class T1), no-stopping images (Class T2), no-parking images (Class T3), and speed-limit images (Class T4); see Table 1.
This research divides the experiments into categories based on the total number of images used for training. The first category uses 200 images with sizes 64 × 64 and 32 × 32 and then produces 1000 images for each combination of the same size. The second category uses 100 images of 64 × 64 and 32 × 32 dimensions; again, 1000 images of the same size are generated for each combination. The last category uses 50 images of 64 × 64 and 32 × 32 dimensions, and likewise 1000 images of the same size are produced for each combination. The image sizes were selected because traffic signs are usually small. Table 2 describes the experimental settings for the various GANs in our work. A detailed description of the advantages and disadvantages of DCGAN, LSGAN, and WGAN is shown in Table 3.

3. Results

Data Generation Results

This stage describes the training environment. The experiment used an Nvidia GTX 970 GPU accelerator with 16 GB memory and an Intel E3-1231 v3 Central Processing Unit (CPU) with 16 GB DDR3-1866 memory. Our approach is implemented in Torch and TensorFlow. The generative and discriminative networks are trained with the Adam optimizer [20] with β1 = 0.5, β2 = 0.999, and a learning rate of 0.0002. The batch size is 25, and the hyperparameter λ is set to 0.5. The numbers of iterations for pre-training and training are set to 1000 and 2000, the total numbers of input images are 200, 100, and 50, and the input and output image sizes are 32 × 32 and 64 × 64, respectively. The steps in discriminator training are as follows [55,56]: (1) the discriminator classifies both real data and fake data from the generator; (2) the discriminator loss penalizes misclassification, such as labeling a real instance as fake or a fake as real; (3) the discriminator updates its weights through backpropagation of the discriminator loss through the discriminator network. The steps in generator training are as follows [57,58]: (1) sample random noise; (2) produce generator output from the sampled noise; (3) obtain the discriminator's "real" or "fake" classification for the generator output; (4) compute the loss from the discriminator classification; (5) backpropagate through both the discriminator and the generator to obtain gradients; (6) apply the gradients to update only the generator weights. A minimal sketch of this loop is given below.
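The sketch reuses the loss functions sketched in Section 2.1 and the optimizer settings listed above; netG, netD, dataloader, and latent_dim are assumed to be defined elsewhere, and the latent shape is a placeholder.

```python
import torch

# Illustrative adversarial training loop with the settings above
# (Adam with beta1 = 0.5, beta2 = 0.999, learning rate 0.0002, batch size 25).
opt_d = torch.optim.Adam(netD.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(2000):
    for real_images in dataloader:
        z = torch.randn(real_images.size(0), latent_dim)

        # Discriminator step (1)-(3): classify real and fake, backpropagate the D loss.
        opt_d.zero_grad()
        d_loss = discriminator_loss(netD, netG, real_images, z)
        d_loss.backward()
        opt_d.step()

        # Generator step (1)-(6): sample noise, score the fakes with D,
        # backpropagate through D and G, and update only the generator weights.
        opt_g.zero_grad()
        g_loss = generator_loss(netD, netG, z)
        g_loss.backward()
        opt_g.step()
```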
Furthermore, we measure the G loss value and the D loss value in each experiment. Both loss functions involve the discriminator: during discriminator training the D loss is used, and during generator training the G loss is used. The discriminator's goal is to estimate the probability that an image is real or fake. The training time increases with the number of epochs. The LSGAN training process is shown in Figure 4.

4. Discussion

Figure 5 displays the synthetic traffic sign images generated by DCGAN, LSGAN, and WGAN with 2000 epochs and size 32 × 32. Figure 6 shows the realistic synthetic images generated by DCGAN, LSGAN, and WGAN for all classes with 2000 epochs and size 64 × 64. Figure 7 and Figure 8 show the synthetic images generated using 1000 epochs and sizes 32 × 32 and 64 × 64, respectively. The generated images are quite realistic: it is hard to distinguish which image is fake and which is real, and the images appear sharp and natural. The worst generated images occur when using 50 input images and 1000 epochs, as seen in Figure 7 and Figure 8; these images appear blurry, unclear, and much noisier than the others.
Our experiments empirically evaluated the data generated by the various GANs by calculating the similarity between each synthesized image and its corresponding real image. We measured SSIM values between generated images and real images of the same class. SSIM includes masking of luminance and contrast, the error calculation accounts for the strong correlations between neighboring pixels, and the metric is computed over small image windows. Figure 9 shows examples of the SSIM and MSE calculation for original images and synthetic images generated by LSGAN. All original images in Figure 9 have MSE = 0 and SSIM = 1. We calculated the SSIM and MSE values for each synthetic image and compared them with the original image to evaluate which GAN model performs best. For example, Figure 9b shows MSE = 2.11 and SSIM = 0.81 for class T1.
The detailed performance evaluation of synthetic images by various GANs using 1000 and 2000 epochs is presented in Table 4.
Table 4 presents the complete SSIM and MSE calculations for the various GANs, including the average SSIM and MSE for each model (DCGAN, LSGAN, and WGAN). For Group 2 (200 input images, size 32 × 32), the results are as follows: with 1000 epochs, LSGAN exhibits the maximum SSIM value of 0.473 and the minimum MSE value of 4.851; WGAN achieves the second-highest performance with SSIM and MSE scores of 0.452 and 4.963, respectively; DCGAN obtains the minimum SSIM value of 0.315 and the maximum MSE value of 6.912 under the same setting. Similarly, with 2000 epochs, LSGAN obtains the best SSIM value of 0.498, followed by WGAN at 0.482 and DCGAN at 0.468. In contrast, Group 5, which uses 50 input images and dimensions of 64 × 64, obtained the worst experimental results: with 1000 epochs, LSGAN has an SSIM value of 0.165 and an MSE value of 14.377, WGAN reaches an SSIM value of 0.336 and an MSE value of 11.603, and DCGAN achieves an SSIM of 0.282 and an MSE of 11.943. All MSE scores were higher than 10, and the SSIM values were lower than in the other groups.
LSGAN outperforms the other GANs because it offers certain advantages over standard GANs. First, LSGANs produce images of better quality than standard GANs. Second, LSGANs perform more stably during the learning process. To evaluate image quality, we conducted qualitative and quantitative experiments, and the results show that LSGANs can generate higher-quality images than regular GANs. Moreover, LSGAN modifies the original GAN loss function by substituting the cross-entropy loss with the least-squares loss, which addresses the two major problems of traditional GANs. LSGAN improves the quality of the generated images, makes the training process more robust, and speeds up convergence. The synthetic images that LSGAN creates look clear, realistic, and genuine.
The Least Squares GAN (LSGAN) is designed to make the generator more effective. Intuitively, LSGAN requires the discriminator's target label for real images to be 1 and for generated images to be 0; for the generator, the target label for generated images is 1. LSGAN can be implemented with a minor change to the output of the discriminator layer and the adoption of the least-squares, or L2, loss function: the output layer of the discriminator model must use a linear activation function.

5. Conclusions

This paper mainly discusses how synthetic images are produced by various GANs (DCGAN, LSGAN, and WGAN). We analyze and evaluate the performance of DCGAN, LSGAN, and WGAN in generating synthetic images with different numbers of epochs (1000 and 2000), numbers of input images, and sizes (64 × 64 and 32 × 32). We then evaluate the generated synthetic images using SSIM and MSE.
Based on our experimental results, we can summarize as follows: (1) the MSE value tends to increase with the image size, the number of epochs, and the training time; (2) the optimum SSIM values are reached when using many images (200) of small size (32 × 32) as training input; (3) a larger image size produces a higher MSE value and requires a longer training time; (4) LSGAN achieves the best synthetic image generation performance with 200 input images, dimensions of 32 × 32, and 2000 epochs, obtaining a maximum SSIM value of 0.498 and a minimum MSE value of 4.453; with 1000 epochs, LSGAN exhibits an average SSIM value of 0.473 and an MSE value of 4.851. LSGAN beats the other GAN models in both SSIM and MSE.
In the future, the synthetic images generated by the various GANs will be used for training and combined with real images to enhance traffic sign recognition systems. So far, only images with a total input of 200 and 2000 epochs have been used. By training a model on synthetic images of different sizes, we will study which characteristics of the synthetic images affect the method. In future work, we will design a new optimized GAN to generate traffic sign images and compare it with the existing GANs. Future research will also trial other synthetic image generation methods blended with Explainable AI (XAI).

Author Contributions

Conceptualization, C.D. and R.-C.C.; data curation, C.D. and Y.-T.L.; formal analysis, C.D.; funding acquisition, R.-C.C. and H.Y.; investigation, C.D.; methodology, C.D.; project administration, C.D., R.-C.C. and H.Y.; resources, C.D. and Y.-T.L.; software, C.D. and Y.-T.L.; supervision, R.-C.C. and H.Y.; validation, C.D.; visualization, C.D.; writing—original draft, C.D.; Writing—review and editing, C.D. and R.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology, Taiwan. The Nos are MOST-107-2221-E-324 -018 -MY2 and MOST-106-2218-E-324 -002, Taiwan. This research is also partially sponsored by Chaoyang University of Technology (CYUT) and the Higher Education Sprout Project, Ministry of Education (MOE), Taiwan, under the project name: “The R&D and the cultivation of talent for health-enhancement products”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Acknowledgments

The authors would like to thank all colleagues from Chaoyang University of Technology, Taiwan, and Satya Wacana Christian University, Indonesia, and all who were involved in this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  2. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  3. Dewi, C.; Chen, R.C.; Liu, Y.-T. Taiwan Stop Sign Recognition with Customize Anchor. In Proceedings of the ICCMS 20, Brisbane, QLD, Australia, 26–28 February 2020; pp. 51–55. [Google Scholar]
  4. Chen, R.C.; Dewi, C.; Huang, S.W.; Caraka, R.E. Selecting critical features for data classification based on machine learning methods. J. Big Data 2020, 7, 1–26. [Google Scholar] [CrossRef]
  5. Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In Proceedings of the International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 1453–1460. [Google Scholar]
  6. Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-Sign Detection and Classification in the Wild. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2110–2118. [Google Scholar]
  7. Mogelmose, A.; Trivedi, M.M.; Moeslund, T.B. Learning to detect traffic signs: Comparative evaluation of synthetic and real-world datasets. In Proceedings of the Proceedings—International Conference on Pattern Recognition, Tsukuba, Japan, 11–15 November 2012; pp. 3452–3455. [Google Scholar]
  8. Vinayakumar, R.; Alazab, M.; Soman, K.P.; Poornachandran, P.; Al-Nemrat, A.; Venkatraman, S. Deep Learning Approach for Intelligent Intrusion Detection System. IEEE Access 2019, 7, 41525–41550. [Google Scholar] [CrossRef]
  9. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  10. Li, Y.; Xiao, N.; Ouyang, W. Improved boundary equilibrium generative adversarial networks. IEEE Access 2018, 6, 11342–11348. [Google Scholar] [CrossRef]
  11. Zhang, H.; Xu, T.; Li, H.; Zhang, S.; Wang, X.; Huang, X.; Metaxas, D.N. StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1947–1962. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Dewi, C.; Chen, R.-C.; Hendry; Liu, Y.-T. Similar Music Instrument Detection via Deep Convolution YOLO-Generative Adversarial Network. In Proceedings of the 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST), Morioka, Japan, 23–25 October 2019; pp. 1–6.
  13. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar]
  14. Wang, Z.; Chen, Z.; Wu, F. Thermal to visible facial image translation using generative adversarial networks. IEEE Signal Process. Lett. 2018, 25, 1161–1165. [Google Scholar] [CrossRef]
  15. Kim, D.; Jang, H.U.; Mun, S.M.; Choi, S.; Lee, H.K. Median Filtered Image Restoration and Anti-Forensics Using Adversarial Networks. IEEE Signal Process. Lett. 2018, 25, 278–282. [Google Scholar] [CrossRef]
  16. Zhang, H.; Goodfellow, I.; Metaxas, D.; Odena, A. Self-attention generative adversarial networks. In Proceedings of the 36th International Conference on Machine Learning, ICML, Long Beach, CA, USA, 9–15 June 2019; pp. 12744–12753. [Google Scholar]
  17. Tai, S.-K.; Dewi, C.; Chen, R.-C.; Liu, Y.-T.; Jiang, X.; Yu, H. Deep learning for traffic sign recognition based on spatial pyramid pooling with scale analysis. Appl. Sci. 2020, 10, 6997. [Google Scholar] [CrossRef]
  18. Chen, R.-C.; Dewi, C.; Zhang, W.-W.; Liu, J.-M. Integrating Gesture Control Board and Image Recognition for Gesture Recognition Based on Deep Learning. Int. J. Appl. Sci. Eng. (IJASE) 2020, 17, 237–248. [Google Scholar]
  19. Luo, H.; Yang, Y.; Tong, B.; Wu, F.; Fan, B. Traffic Sign Recognition Using a Multi-Task Convolutional Neural Network. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1100–1111. [Google Scholar] [CrossRef]
  20. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation learning with Deep Convolutional GANs. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016; pp. 1–16. [Google Scholar]
  21. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.K.; Wang, Z.; Smolley, S.P. Least Squares Generative Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2813–2821. [Google Scholar]
  22. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, Australia, 6–11 August 2017; pp. 298–321. [Google Scholar]
  23. Dwibedi, D.; Misra, I.; Hebert, M. Cut, Paste and Learn: Surprisingly Easy Synthesis for Instance Detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1310–1319. [Google Scholar]
  24. Georgakis, G.; Mousavian, A.; Berg, A.C.; Košecká, J. Synthesizing training data for object detection in indoor scenes. In Proceedings of the Robotics: Science and Systems, Cambridge, MA, USA, 12–16 July 2017; p. 13. [Google Scholar]
  25. Bousmalis, K.; Silberman, N.; Dohan, D.; Erhan, D.; Krishnan, D. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 95–104. [Google Scholar]
  26. Dewi, C.; Chen, R.-C.; Tai, S.-K. Evaluation of Robust Spatial Pyramid Pooling Based on Convolutional Neural Network for Traffic Sign Recognition System. Electronics 2020, 9, 889. [Google Scholar] [CrossRef]
  27. Li, Q.; Qu, H.; Liu, Z.; Zhou, N.; Sun, W.; Sigg, S.; Li, J. AF-DCGAN: Amplitude Feature Deep Convolutional GAN for Fingerprint Construction in Indoor Localization Systems. In IEEE Transactions on Emerging Topics in Computational Intelligence; IEEE: Piscataway, NJ, USA, 2019; pp. 1–13. [Google Scholar]
  28. Abe, K.; Iwana, B.K.; Holmer, V.G.; Uchida, S. Font creation using class discriminative deep convolutional generative adversarial networks. In Proceedings of the 4th Asian Conference on Pattern Recognition, ACPR 2017, Nanjing, China, 26–29 November 2017; pp. 238–243. [Google Scholar]
  29. Du, Y.; Zhang, W.; Wang, J.; Wu, H. DCGAN based data generation for process monitoring. In Proceedings of the 2019 IEEE 8th Data Driven Control and Learning Systems Conference, DDCLS 2019, Dali, China, 24–27 May 2019; pp. 410–415. [Google Scholar]
  30. Liu, S.; Yu, M.; Li, M.; Xu, Q. The research of virtual face based on Deep Convolutional Generative Adversarial Networks using TensorFlow. Phys. A Stat. Mech. Appl. 2019, 521, 667–680. [Google Scholar] [CrossRef]
  31. Anas, E.R.; Onsy, A.; Matuszewski, B.J. CT Scan Registration with 3D Dense Motion Field Estimation Using LSGAN. In Proceedings of the Communications in Computer and Information Science, Chennai, India, 14–17 October 2020; pp. 195–207. [Google Scholar]
  32. Mardani, M.; Gong, E.; Cheng, J.Y.; Vasanawala, S.S.; Zaharchuk, G.; Xing, L.; Pauly, J.M. Deep generative adversarial neural networks for compressive sensing MRI. IEEE Trans. Med. Imaging 2019, 38, 167–179. [Google Scholar] [CrossRef]
  33. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.K.; Wang, Z.; Smolley, S.P. On the Effectiveness of Least Squares Generative Adversarial Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2947–2960. [Google Scholar] [CrossRef] [Green Version]
  34. He, X.; Fang, L.; Rabbani, H.; Chen, X.; Liu, Z. Retinal optical coherence tomography image classification with label smoothing generative adversarial network. Neurocomputing 2020, 405, 37–47. [Google Scholar] [CrossRef]
  35. Qi, G.J. Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities. Int. J. Comput. Vision 2020, 128, 1118–1140. [Google Scholar] [CrossRef] [Green Version]
  36. Xue, H.; Teng, Y.; Tie, C.; Wan, Q.; Wu, J.; Li, M.; Liang, G.; Liang, D.; Liu, X.; Zheng, H.; et al. A 3D attention residual encoder–decoder least-square GAN for low-count PET denoising. Nuclear Instrum. Methods Physics Res. Sec. A Accel. Spectrometers Detect. Assoc. Equip. 2020, 983, 164638. [Google Scholar] [CrossRef]
  37. Sun, D.; Yang, K.; Shi, Z.; Chen, C. A new mimicking attack by LSGAN. In Proceedings of the International Conference on Tools with Artificial Intelligence, ICTAI, Boston, MA, USA, 5–7 November 2018. [Google Scholar]
  38. Wang, W.; Wang, C.; Cui, T.; Li, Y. Study of Restrained Network Structures for Wasserstein Generative Adversarial Networks (WGANs) on Numeric Data Augmentation. IEEE Access 2020, 8, 89812–89821. [Google Scholar] [CrossRef]
  39. Yang, Q.; Yan, P.; Zhang, Y.; Yu, H.; Shi, Y.; Mou, X.; Kalra, M.K.; Zhang, Y.; Sun, L.; Wang, G. Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss. IEEE Trans. Med. Imaging 2018, 37, 1348–1357. [Google Scholar] [CrossRef]
  40. Panwar, S.; Rad, P.; Jung, T.P.; Huang, Y. Modeling EEG Data Distribution with a Wasserstein Generative Adversarial Network to Predict RSVP Events. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1720–1730. [Google Scholar] [CrossRef] [PubMed]
  41. Yu, J.; Li, J.; Yu, Z.; Huang, Q. Multimodal Transformer with Multi-View Visual Representation for Image Captioning. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 4467–4480. [Google Scholar] [CrossRef] [Green Version]
  42. Hell, S.W.; Sahl, S.J.; Bates, M.; Zhuang, X.; Heintzmann, R.; Booth, M.J.; Bewersdorf, J.; Shtengel, G.; Hess, H.; Tinnefeld, P.; et al. The 2015 super-resolution microscopy roadmap. J. Phys. D Appl. Phys. 2015, 48, 443001. [Google Scholar] [CrossRef]
  43. Joemai, R.M.S.; Geleijns, J. Assessment of structural similarity in CT using filtered backprojection and iterative reconstruction: A phantom study with 3D printed lung vessels. Br. J. Radiol. 2017, 90, 20160519. [Google Scholar] [CrossRef] [PubMed]
  44. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Deshmukh, A.; Sivaswamy, J. Synthesis of optical nerve head region of fundus image. In Proceedings of the International Symposium on Biomedical Imaging, Venice, Italy, 8–11 April 2019; pp. 583–586. [Google Scholar]
  46. Zhou, Y.; Yu, M.; Ma, H.; Shao, H.; Jiang, G. Weighted-to-spherically-Uniform SSIM objective quality evaluation for panoramic video. In Proceedings of the International Conference on Signal Processing Proceedings, ICSP, Weihai, China, 28–30 September 2019; pp. 54–57. [Google Scholar]
  47. Mathieu, M.; Couprie, C.; LeCun, Y. Deep multi-scale video prediction beyond mean square error. In Proceedings of the 4th International Conference on Learning Representations, ICLR 2016—Conference Track Proceedings, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  48. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  49. Dewi, C.; Chen, R.C.; Yu, H. Weight analysis for various prohibitory sign detection and recognition using deep learning. Multimed. Tools Appl. 2020, 79, 32897–32915. [Google Scholar] [CrossRef]
  50. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  51. Lemley, J.; Bazrafkan, S.; Corcoran, P. Smart Augmentation Learning an Optimal Data Augmentation Strategy. IEEE Access 2017, 5, 5858–5869. [Google Scholar] [CrossRef]
  52. Fang, W.; Ding, Y.; Zhang, F.; Sheng, J. Gesture recognition based on CNN and DCGAN for calculation and text output. IEEE Access 2019, 7, 28230–28237. [Google Scholar] [CrossRef]
  53. Kim, S.; Jang, J.; Kim, C.O. A run-to-run controller for a chemical mechanical planarization process using least squares generative adversarial networks. J. Intell. Manuf. 2020. [Google Scholar] [CrossRef]
  54. Zhang, Y.; Ai, Q.; Xiao, F.; Hao, R.; Lu, T. Typical wind power scenario generation for multiple wind farms using conditional improved Wasserstein generative adversarial network. Int. J. Electr. Power Energy Syst. 2020, 114, 105388. [Google Scholar] [CrossRef]
  55. Lu, S.; Sirojan, T.; Phung, B.T.; Zhang, D.; Ambikairajah, E. DA-DCGAN: An Effective Methodology for DC Series Arc Fault Diagnosis in Photovoltaic Systems. IEEE Access 2019, 7, 45831–45840. [Google Scholar] [CrossRef]
  56. Cheng, M.; Fang, F.; Pain, C.C.; Navon, I.M. Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Comput. Methods Appl. Mech. Eng. 2020, 365, 113000. [Google Scholar] [CrossRef] [Green Version]
  57. Salehinejad, H.; Colak, E.; Dowdell, T.; Barfett, J.; Valaee, S. Synthesizing Chest X-Ray Pathology for Training Deep Convolutional Neural Networks. IEEE Trans. Med. Imaging 2019, 38, 1197–1206. [Google Scholar] [CrossRef] [PubMed]
  58. Turner, R.; Hung, J.; Frank, E.; Saatci, Y.; Yosinski, J. Metropolis-Hastings generative adversarial networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA, 9–15 June 2019; pp. 11044–11052. [Google Scholar]
Figure 1. The generative network G (left) and the discriminative network D (right) topology.
Figure 2. Schematic of the Wasserstein Generative Adversarial Network (WGAN).
Figure 3. The system architecture.
Figure 4. LSGAN training process (a) d_loss, and (b) g_loss.
Figure 5. Synthetic traffic sign images of all classes with epoch 2000 and size 32 × 32 generated by (a) DCGAN, (b) LSGAN, and (c) WGAN.
Figure 6. Synthetic traffic sign images of all classes with epoch 2000 and size 64 × 64 generated by (a) DCGAN, (b) LSGAN, and (c) WGAN.
Figure 7. Synthetic traffic sign images of all classes with epoch 1000 and size 32 × 32 generated by (a) DCGAN, (b) LSGAN, and (c) WGAN.
Figure 8. Synthetic traffic sign images of all classes with epoch 1000 and size 64 × 64 generated by (a) DCGAN, (b) LSGAN, and (c) WGAN.
Figure 9. Structural Similarity Index (SSIM) and Mean Squared Error (MSE) calculation. (a) Class T1 original image, (b) Class T1 synthetic image, (c) Class T2 original image, (d) Class T2 synthetic image, (e) Class T3 original image, (f) Class T3 synthetic image, (g) Class T4 original image, and (h) Class T4 synthetic image.
Table 1. Taiwan Prohibitory Signs.

ID    Name           Sign
T1    No entry       (sign image)
T2    No stopping    (sign image)
T3    No parking     (sign image)
T4    Speed limit    (sign image)
Table 2. GANs' Experimental Setting.

No    Total Images    Input/Output Image Size (px)    Total Generated Images
1     200             64 × 64                         1000
2     200             32 × 32                         1000
3     100             64 × 64                         1000
4     100             32 × 32                         1000
5     50              64 × 64                         1000
6     50              32 × 32                         1000
Table 3. Advantages and disadvantages of various GANs.

Deep Convolutional Generative Adversarial Network (DCGAN)
Advantages: (1) DCGAN applies strided convolutions in the discriminator and fractional-strided convolutions in the generator to replace pooling layers; features are typically extracted with a CNN. (2) To resolve gradient problems, DCGAN uses batch normalization (BN), which fixes poor initializations, carries the gradient to each layer, and restricts the generator from collapsing all samples to the same point. (3) DCGAN uses Adam optimization together with the Rectified Linear Unit (ReLU) and Leaky ReLU activation functions. (4) The results show the good performance of DCGAN and confirm the capability of the GAN structure in generating samples; DCGAN is generally considered the standard when compared with other GAN models [52].
Disadvantages: (1) The model parameters can oscillate, destabilize, and never converge. (2) The generator can collapse, producing a limited variety of samples, and is highly sensitive to hyperparameter selection. (3) The discriminator can become so successful that the generator gradient vanishes and learns nothing; an imbalance between the generator and the discriminator causes overfitting.

Least Squares Generative Adversarial Network (LSGAN)
Advantages: (1) LSGAN improves the original GAN loss function by replacing the cross-entropy loss with the least-squares loss, which fixes the two major problems of traditional GANs. (2) LSGAN improves the quality of the generated images, makes training more robust, and speeds up convergence [53].
Disadvantages: (1) Excessive penalties for outliers lead to reduced sample diversity.

Wasserstein Generative Adversarial Network (WGAN)
Advantages: (1) WGAN solves the problem of training instability through its network architecture; the sigmoid function is removed from the discriminator's last layer in this model [54]. (2) The loss value of WGAN corresponds to the generated image quality: a lower loss means better images and a steadier training process.
Disadvantages: (1) Longer training time.
Table 4. Various GAN performance evaluations. Values are given as MSE/SSIM for each model at 1000 and 2000 epochs.

Group  Total Image  Size (px)  1000 epochs: DCGAN | LSGAN | WGAN            2000 epochs: DCGAN | LSGAN | WGAN

Class T1
1      200          64 × 64    7.906/0.525  | 8.219/0.459  | 8.124/0.48     9.342/0.483  | 8.156/0.497  | 7.485/0.509
2      200          32 × 32    7.367/0.269  | 4.184/0.521  | 4.182/0.511    3.502/0.558  | 4.019/0.529  | 4.009/0.533
3      100          64 × 64    18.822/0.177 | 8.842/0.418  | 8.762/0.456    8.285/0.502  | 8.569/0.475  | 7.653/0.504
4      100          32 × 32    9.197/0.094  | 4.734/0.49   | 4.648/0.488    5.126/0.449  | 4.102/0.557  | 4.081/0.531
5      50           64 × 64    9.785/0.365  | 17.763/0.075 | 10.439/0.352   9.776/0.366  | 9.93/0.336   | 9.39/0.41
6      50           32 × 32    4.876/0.471  | 6.819/0.236  | 5.382/0.406    3.924/0.562  | 8.619/0.26   | 4.877/0.453

Class T2
1      200          64 × 64    10.025/0.322 | 9.112/0.419  | 9.514/0.442    8.969/0.385  | 8.96/0.436   | 8.385/0.475
2      200          32 × 32    5.366/0.308  | 4.615/0.416  | 4.909/0.401    4.724/0.383  | 4.213/0.423  | 4.639/0.432
3      100          64 × 64    9.44/0.387   | 9.361/0.331  | 9.55/0.402     8.907/0.391  | 8.943/0.362  | 8.094/0.449
4      100          32 × 32    4.694/0.383  | 4.731/0.364  | 4.886/0.355    7.999/0.123  | 4.408/0.402  | 4.425/0.393
5      50           64 × 64    11.567/0.219 | 11.761/0.181 | 10.375/0.315   9.549/0.394  | 9.313/0.272  | 9.013/0.377
6      50           32 × 32    4.954/0.339  | 7.225/0.132  | 5.193/0.269    4.29/0.375   | 4.999/0.243  | 4.601/0.336

Class T3
1      200          64 × 64    10.444/0.399 | 10.435/0.382 | 10.319/0.413   9.966/0.452  | 9.222/0.461  | 8.977/0.469
2      200          32 × 32    6.984/0.351  | 5.121/0.462  | 5.257/0.423    5.239/0.478  | 4.644/0.504  | 4.865/0.459
3      100          64 × 64    9.944/0.423  | 10.063/0.358 | 9.335/0.408    9.895/0.392  | 9.941/0.38   | 8.321/0.478
4      100          32 × 32    4.768/0.487  | 5.369/0.364  | 4.838/0.436    4.651/0.494  | 4.571/0.469  | 4.579/0.47
5      50           64 × 64    13.91/0.19   | 13.303/0.134 | 11.94/0.313    10.503/0.339 | 12.537/0.233 | 9.936/0.377
6      50           32 × 32    5.354/0.439  | 8.683/0.215  | 5.885/0.348    5.484/0.434  | 5.874/0.363  | 5.503/0.391

Class T4
1      200          64 × 64    12.928/0.428 | 10.449/0.451 | 10.449/—       10.055/0.463 | 11.888/0.362 | 9.649/0.48
2      200          32 × 32    7.93/0.332   | 5.482/0.494  | 5.502/0.471    5.698/0.453  | 4.934/0.535  | 4.89/0.504
3      100          64 × 64    13.029/0.451 | 12.554/0.382 | 12.06/0.419    11.181/0.431 | 11.255/0.39  | 10.834/0.459
4      100          32 × 32    5.97/0.486   | 6.132/0.421  | 5.95/0.447     7.358/0.459  | 5.762/0.469  | 5.527/0.47
5      50           64 × 64    12.511/0.352 | 14.681/0.27  | 13.656/0.365   16.311/0.326 | 13.637/0.313 | 13.035/0.405
6      50           32 × 32    6.625/0.457  | 8.421/0.269  | 6.898/0.416    6.428/0.456  | 6.399/0.362  | 6.102/0.445

Average
1      200          64 × 64    10.326/0.419 | 9.554/0.428  | 9.739/0.446    9.583/0.446  | 9.557/0.439  | 8.624/0.483
2      200          32 × 32    6.912/0.315  | 4.851/0.473  | 4.963/0.452    4.791/0.468  | 4.453/0.498  | 4.601/0.482
3      100          64 × 64    12.809/0.360 | 10.205/0.372 | 9.927/0.421    9.567/0.429  | 9.677/0.402  | 8.726/0.473
4      100          32 × 32    6.157/0.363  | 5.242/0.410  | 5.081/0.432    6.284/0.381  | 4.711/0.474  | 4.653/0.466
5      50           64 × 64    11.943/0.282 | 14.377/0.165 | 11.603/0.336   11.535/0.356 | 11.354/0.289 | 10.344/0.392
6      50           32 × 32    5.452/0.427  | 7.787/0.213  | 5.840/0.360    5.032/0.457  | 6.473/0.307  | 5.271/0.406
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
