*3.2. 3D-IWGAN*

In the original GAN formulation, training effectively minimizes the Jensen–Shannon divergence between the data and generated distributions, which leads to slow and unstable training. A recent study introduced the 3D-improved Wasserstein GAN (3D-IWGAN) [25], which addresses this issue and aims to generate more realistic 3D objects by instead minimizing the Wasserstein distance between the data and generated distributions. The distance is defined as

$$\mathcal{W}(p\_r, p\_g) = \inf\_{\gamma \in \Pi(p\_r, p\_g)} \mathbb{E}\_{(x, y) \sim \gamma} [\|x - y\|] \tag{2}$$

where Π(*pr*, *pg*) is the set of all joint probability distributions whose marginals are *pr* and *pg*, and *γ* ∈ Π(*pr*, *pg*) is one such joint distribution (a transport plan). The infimum (greatest lower bound) in the definition of the Wasserstein distance indicates that we are interested only in the smallest transport cost.
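As an illustrative sketch (not part of the paper's system): for two equal-size one-dimensional samples, the Wasserstein-1 distance of Equation (2) reduces to the mean absolute difference of the sorted samples, because the optimal coupling *γ* simply matches order statistics. The function name `wasserstein_1d` is our own.

```python
# Toy sketch: Wasserstein-1 distance between two equal-size 1D samples.
# The optimal transport plan pairs the i-th smallest value of one sample
# with the i-th smallest value of the other.
def wasserstein_1d(xs, ys):
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

Shifting every sample point by a constant shifts the distance by exactly that constant, which is why the Wasserstein distance provides useful gradients even when the two distributions do not overlap.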

In 3D-IWGAN, the deviation of the norm of the discriminator's gradients from unity is penalized. This provides a softer way of enforcing the Lipschitz constraint: a differentiable function is 1-Lipschitz if and only if the norm of its gradient is at most one everywhere. Formally, the loss function for the discriminator in 3D-IWGAN is written as

$$\mathbb{E}\_{\tilde{x} \sim p\_g}[D(\tilde{x})] - \mathbb{E}\_{x \sim p\_r}[D(x)] + \lambda\, \mathbb{E}\_{\hat{x} \sim p\_{\hat{x}}}[(\|\nabla\_{\hat{x}} D(\hat{x})\|\_2 - 1)^2] \tag{3}$$

where *λ* is the gradient-penalty coefficient, *pg* is the generator distribution, *pr* is the target distribution, and *px̂* is the distribution of points *x̂* sampled uniformly along straight lines between pairs of points drawn from *pr* and *pg*.
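As a toy sketch (our own, not the paper's implementation), the penalty term of Equation (3) can be evaluated in closed form for a linear critic *D*(*x*) = *w* · *x*, whose gradient is *w* at every point, so the penalty is *λ*(‖*w*‖₂ − 1)² for every interpolated sample. The names `gradient_penalty` and `lam` are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of the gradient-penalty term in Eq. (3) for a linear
# critic D(x) = w . x. Its gradient with respect to x is w everywhere, so
# the penalty reduces to lam * (||w||_2 - 1)^2 per interpolated sample.
def gradient_penalty(w, x_hat_batch, lam=10.0):
    grads = np.tile(w, (len(x_hat_batch), 1))  # dD/dx = w for each sample
    norms = np.linalg.norm(grads, axis=1)      # ||grad_x_hat D(x_hat)||_2
    return lam * np.mean((norms - 1.0) ** 2)

rng = np.random.default_rng(0)
x_hat = rng.uniform(size=(4, 2))               # stand-in interpolated points
w = np.array([0.6, 0.8])                       # ||w||_2 = 1: penalty vanishes
print(gradient_penalty(w, x_hat))
```

With ‖*w*‖₂ = 1 the critic is exactly 1-Lipschitz and the penalty is (numerically) zero; scaling *w* to norm 2 would give a penalty of *λ* · 1² = 10.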

The instability of GAN training can thus be tackled by employing the IWGAN method. Our system uses IWGAN with an enhanced gradient penalty for 3D microstructure reconstruction and, in comparison to previous approaches, achieved more stable GAN training. Table 2 compares our enhanced gradient penalty with the gradient penalties of other GAN methods. A detailed explanation of this gradient penalty is given in Section 4.

**Table 2.** Gradient penalty comparison of different GAN methods. In all cases, the penalty measures the squared difference between 1 and the norm of the gradient of the predictions with respect to the input images.


The generator network in the 3D-IWGAN system takes a latent vector as input. Typically, this latent vector is drawn randomly from a normal or uniform distribution. Our system instead generates it using a variational autoencoder (VAE). A VAE [36] consists of two parts, an encoder and a decoder; in our system, the encoder maps material microstructures into a lower-dimensional latent space, and the decoder maps samples from the latent space back into microstructures. A full schematic view of the VAE architecture is given in Figure 2. Our VAE's decoder network is simultaneously used by the generator network to reproduce the original sample.
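A minimal sketch of how a VAE produces such a latent vector, assuming the standard reparameterization trick (the function and variable names here are our own, not the paper's): the encoder outputs a mean and log-variance, and the latent vector *z* = *μ* + *σ* · *ε* is then fed to the generator.

```python
import numpy as np

# Minimal sketch (assumed standard VAE sampling, not the paper's exact
# network): the encoder outputs mu and log_var for each latent dimension;
# the reparameterization trick draws z = mu + sigma * eps with eps ~ N(0, I).
def reparameterize(mu, log_var, rng):
    eps = rng.standard_normal(mu.shape)       # noise sample, eps ~ N(0, I)
    return mu + np.exp(0.5 * log_var) * eps   # z ~ N(mu, sigma^2)

rng = np.random.default_rng(42)
mu = np.zeros(8)                              # 8-dimensional latent space
log_var = np.zeros(8)                         # sigma = 1 in every dimension
z = reparameterize(mu, log_var, rng)
print(z.shape)  # (8,)
```

Sampling through *ε* rather than directly from *N*(*μ*, *σ*²) keeps the path from encoder outputs to *z* differentiable, which is what allows the encoder to be trained by backpropagation.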

**Figure 2.** VAE encoder architecture. It encodes data into a lower-dimensional latent space.

### **4. Proposed System**

### *4.1. System Architecture*

The proposed system for the virtual experiment consists of three components: preprocessing, 3D model generation, and pavement analysis. The architecture of the system is illustrated in Figure 3.

In this section, we provide a brief overview of each component of the system. The preprocessing component performs several tasks, such as cropping, converting, downsizing, and resampling the images. In the model-generation component, an adversarial network generates a 3D image from the 2D images of the porous pavement microstructure. This 3D image serves as the input for pavement analysis, which provides numerical values of physical properties such as porosity, permeability, and hydraulic conductivity.
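As a toy sketch of the simplest quantity in the pavement-analysis stage (the function name and voxel convention are our assumptions, not the paper's code): if the generated 3D image is binarized so that 1 marks pore voxels and 0 marks solid material, porosity is simply the pore-voxel fraction.

```python
import numpy as np

# Hypothetical sketch of the porosity computation in the pavement-analysis
# stage: the 3D image is a binary voxel volume (1 = pore, 0 = solid), and
# porosity is the fraction of pore voxels.
def porosity(volume):
    return float(volume.mean())   # mean of a 0/1 array = pore fraction

vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[:2] = 1                       # half of the volume is pore space
print(porosity(vol))  # 0.5
```

Permeability and hydraulic conductivity require flow simulation through the connected pore network and are therefore considerably more involved than this voxel count.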

**Figure 3.** System architecture. This consists of three steps: preprocessing, 3D model generation, and pavement analysis.
