*2.7. Model Evaluation*

The final models established in this study were evaluated both visually and numerically using visual predictive checks (VPC), non-parametric bootstrap analysis, and goodness-of-fit diagnostics (including the distribution of residuals). Goodness-of-fit was assessed with the following diagnostic scatter plots: (a) population-predicted concentrations (PRED) versus observed concentrations (DV), (b) individual-predicted concentrations (IPRED) versus DV, (c) PRED versus conditional weighted residuals (CWRES), (d) time (IVAR) versus CWRES, and (e) a quantile–quantile plot of the CWRES.
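As an illustration only, the five diagnostic panels can be assembled from the standard model-output columns. The sketch below assumes DV, PRED, IPRED, CWRES, and IVAR are available as NumPy arrays; the values here are simulated placeholders, since the study data and Phoenix NLME output are not reproduced.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical stand-ins for the model-output columns
rng = np.random.default_rng(0)
n = 200
PRED = rng.uniform(1, 10, n)                  # population predictions
IPRED = PRED * np.exp(rng.normal(0, 0.1, n))  # individual predictions
DV = IPRED * np.exp(rng.normal(0, 0.15, n))   # observed concentrations
CWRES = rng.normal(0, 1, n)                   # conditional weighted residuals
IVAR = rng.uniform(0, 24, n)                  # time after dose (h)

fig, axes = plt.subplots(2, 3, figsize=(12, 7))
panels = [
    ("PRED vs DV", PRED, DV),
    ("IPRED vs DV", IPRED, DV),
    ("PRED vs CWRES", PRED, CWRES),
    ("Time (IVAR) vs CWRES", IVAR, CWRES),
]
for ax, (title, x, y) in zip(axes.flat, panels):
    ax.scatter(x, y, s=10)
    ax.set_title(title)

# Panel (e): quantile-quantile plot of CWRES against the standard normal
stats.probplot(CWRES, dist="norm", plot=axes.flat[4])
axes.flat[4].set_title("Q-Q plot of CWRES")
axes.flat[5].axis("off")  # unused sixth panel
fig.tight_layout()
```

For well-specified models, the points in panels (a) and (b) scatter around the line of identity, and CWRES scatter symmetrically around zero in panels (c) and (d).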

The stability of the final model was assessed by non-parametric bootstrap analysis using the bootstrap option of Phoenix NLME. A total of 1000 replicate datasets were generated by repeated random sampling with replacement from the original dataset. The medians and confidence intervals of the parameters estimated across the bootstrap replicates were compared with the parameter estimates and standard errors (SE) obtained from the original dataset.
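The resampling step can be sketched as follows. This is a minimal illustration, not the Phoenix NLME implementation: `fit()` is a hypothetical placeholder for the population-model fit, reduced here to a toy one-parameter estimator so the bootstrap mechanics are visible.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit(dataset):
    # Placeholder for the population-model fit performed by Phoenix NLME;
    # here a toy estimator returning a single hypothetical parameter "CL".
    return {"CL": float(np.mean(dataset))}

# Hypothetical per-subject data standing in for the original dataset
original = rng.lognormal(mean=1.0, sigma=0.3, size=50)
original_est = fit(original)

# 1000 replicates by random sampling with replacement, refitting each one
n_reps = 1000
boot = np.empty(n_reps)
for i in range(n_reps):
    sample = rng.choice(original, size=original.size, replace=True)
    boot[i] = fit(sample)["CL"]

# Bootstrap median and percentile 95% confidence interval, to be compared
# against the estimate (and SE) from the original dataset
median = float(np.median(boot))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

If the bootstrap medians are close to the original estimates and the confidence intervals are reasonably narrow, the model is considered stable.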

VPCs of the final models were performed using the VPC option of Phoenix NLME. The observed concentration–time data (DV) were graphically superimposed on the median and the 5th and 95th percentiles of the simulated concentration–time profiles. If the observed concentrations fell largely within the 5th–95th percentile prediction interval, the model was considered to have adequate predictive performance.

Normalized prediction distribution errors (NPDE) were also used to evaluate the predictive performance of the model, on the basis of Monte Carlo simulation with the R package [16]. The NPDE results were summarized graphically as (1) a quantile–quantile plot of the NPDE, (2) a histogram of the NPDE, (3) a scatterplot of NPDE versus time, and (4) a scatterplot of NPDE versus PRED. If the predictive performance is adequate, the NPDE follow a standard normal distribution, i.e., they pass tests for normality (Shapiro–Wilk test), a mean of zero (*t*-test), and a variance of one (Fisher's test).
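The three numerical NPDE criteria can be sketched as below. This assumes an array `npde` of computed NPDE values (simulated here for illustration); as a simplification, a chi-square test of unit variance is used as a stand-in for the Fisher variance test applied in the NPDE methodology.

```python
import numpy as np
from scipy import stats

# Hypothetical NPDE values; in practice these come from the NPDE computation
rng = np.random.default_rng(1)
npde = rng.normal(0.0, 1.0, 500)

# Mean = 0: one-sample t-test against zero
t_stat, p_mean = stats.ttest_1samp(npde, 0.0)

# Normality: Shapiro-Wilk test
w_stat, p_norm = stats.shapiro(npde)

# Variance = 1: two-sided chi-square test on (n - 1) * s^2
# (simplified stand-in for the Fisher variance test)
n = npde.size
chi2_stat = (n - 1) * npde.var(ddof=1)
p_var = 2 * min(stats.chi2.cdf(chi2_stat, n - 1),
                stats.chi2.sf(chi2_stat, n - 1))

# Predictive performance is considered adequate if none of the three
# null hypotheses (mean 0, normality, variance 1) is rejected
adequate = all(p > 0.05 for p in (p_mean, p_norm, p_var))
```

Each test addresses one aspect of the standard-normal assumption; rejecting any of the three indicates model misspecification.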
