### *4.2. MGEP Preclassification Settings*

We used the MGEP algorithm to preclassify the training image set before deep learning, selecting the subset of samples related to the input image in the color and texture feature categories. Equation (16) served as the fitness function of the MGEP classification algorithm, which was then applied to sample preclassification in the SR image restoration process.
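
The selection step can be illustrated with a minimal sketch. This is not the MGEP fitness function of Equation (16), which is not reproduced here; it only shows the filtering idea of keeping training images whose color statistics correlate with the input LR image (the histogram representation, threshold, and function names are our assumptions):

```python
import numpy as np

def color_histogram(img, bins=16):
    """Normalized per-channel color histogram of an RGB image (H, W, 3)."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def preselect_samples(lr_image, training_set, threshold=0.5):
    """Keep only training images whose color histogram correlates with
    that of the input LR image (illustrative stand-in for the MGEP
    fitness-driven preclassification)."""
    ref = color_histogram(lr_image)
    selected = []
    for img in training_set:
        # Pearson correlation between the two normalized histograms
        corr = np.corrcoef(ref, color_histogram(img))[0, 1]
        if corr > threshold:
            selected.append(img)
    return selected
```

Samples falling below the threshold, such as the unrelated categories mentioned in Section 4.2, would simply be dropped from the training set.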

The function set was {+, −, ×, /}, the number of generations *gen* was set to 1000, and the population size *N* to 100. The total number of genes was set to 10, comprising 7 common genes and 3 homologous genes. For the input LR image block, the three homologous genes output, respectively, the color category calculated according to Equation (1), the texture category calculated according to Equation (18), and the texture category calculated according to Equation (19).

$$\text{T}(j,k) = \sum\_{\varepsilon = -T}^{T} \sum\_{\eta = -T}^{T} \varepsilon^2 \eta^2 C(\varepsilon, \eta, j, k) \tag{18}$$

$$\text{MEAN} = \frac{1}{m} \sum\_{i} i p\_{\Delta}(i) \tag{19}$$
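
Equation (19) can be computed directly once the difference histogram is defined. Below is a minimal numpy sketch, assuming p<sub>Δ</sub>(i) is the normalized histogram of absolute gray-level differences between neighboring pixels and m is the number of gray levels; the displacement parameters and function names are our assumptions (Equation (18) additionally requires the co-occurrence function C and is not sketched here):

```python
import numpy as np

def difference_histogram(img, dx=1, dy=0, levels=256):
    """Normalized histogram p_delta(i) of absolute gray-level differences
    between each pixel and its (dy, dx)-shifted neighbor."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    diff = np.abs(a.astype(int) - b.astype(int)).ravel()
    hist = np.bincount(diff, minlength=levels).astype(float)
    return hist / hist.sum()

def texture_mean(img, dx=1, dy=0, levels=256):
    """Equation (19): MEAN = (1/m) * sum_i i * p_delta(i)."""
    p = difference_histogram(img, dx, dy, levels)
    i = np.arange(levels)
    return (i * p).sum() / levels
```

A flat image block yields MEAN = 0, while strongly textured blocks yield larger values, which is what makes the statistic usable as a texture category feature.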

Each gene had a head length of 5, giving a chromosome length of 110. The probability of each genetic operator was set to 0.1.
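
The stated chromosome length is consistent with standard GEP gene sizing; a quick check, assuming the usual tail-length formula t = h(n − 1) + 1:

```python
# Sanity check of the stated chromosome length under standard GEP sizing.
h = 5                      # head length per gene (from the text)
n = 2                      # maximum arity of the function set {+, -, *, /}
t = h * (n - 1) + 1        # tail length = 6
gene_length = h + t        # 11 symbols per gene
chromosome_length = 10 * gene_length  # 10 genes in total
print(chromosome_length)   # 110, matching the stated chromosome length
```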

The MGEP preclassification effect is shown in Figure 4. The training samples "Car", "Building", and "Cobblestone" in ImageNet-91, which have little correlation with the color and texture features of the input image "Flowers", were excluded, thereby improving the training effect and matching accuracy.

### *4.3. Image Restoration Results*

CNN deep learning was performed after MGEP preclassification. All convolutional layer filters were 3 × 3 in size, and the number of filters was 64. We used the method of He et al. [25] to initialize the convolutional layers. The convolution kernel moved with a stride of 1. To keep all feature maps the same size as the input of each layer, zero-padding was applied around the boundary before each convolution. The learning rate of all layers was initialized to 5 × 10<sup>−4</sup> and halved every 15 epochs until it fell below 5 × 10<sup>−9</sup>.
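
The learning-rate schedule above is a standard step decay; a minimal sketch (the function and parameter names are ours):

```python
def learning_rate(epoch, lr0=5e-4, drop=0.5, period=15, lr_min=5e-9):
    """Step-decay schedule: halve the rate every `period` epochs,
    never going below `lr_min`."""
    lr = lr0 * (drop ** (epoch // period))
    return max(lr, lr_min)
```

For example, epochs 0–14 train at 5 × 10⁻⁴, epochs 15–29 at 2.5 × 10⁻⁴, and so on, until the floor of 5 × 10⁻⁹ is reached.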

Tables 1–4 show the PSNR/SSIM values and running times of the six algorithms on the Set5, Set14, BSD100, and Urban100 test sets when the upscale factors were 2, 3, and 4, respectively.

**Table 1.** Average Peak Signal-to-Noise Ratio (PSNR)/Structural Similarity (SSIM) values for the 2× scale. Red indicates the best and blue the second-best performance.


**Table 2.** Average PSNR/SSIM values for the 3× scale. Red indicates the best and blue the second-best performance.


**Table 3.** Average PSNR/SSIM values for the 4× scale. Red indicates the best and blue the second-best performance.



**Table 4.** Comparison of the running times (sec) for scales 2×, 3×, and 4×. Red indicates the best performance.

As can be seen from Tables 1–3, the MGEP-SRCNN algorithm achieved the best PSNR at all magnifications on the four test sets. Compared with SRCNN, PSNR increased by 0.44–1.69 dB, with the largest improvement on the Set5 dataset. For SSIM, except for the second-best values obtained under the 2× and 3× conditions on the BSD100 dataset, all other results were the best; compared with SRCNN, SSIM improved by about 0.005–0.062, with the largest gain at 4×. In addition, Table 4 shows that the MGEP-SRCNN algorithm also achieved the shortest running time while preserving accuracy.
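
For reference, PSNR is a simple function of the mean squared error between the restored image and the ground truth; a minimal sketch (SSIM is more involved and is omitted here):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Because PSNR is logarithmic in MSE, a gain of 1.69 dB corresponds to roughly a 32% reduction in MSE (10^(1.69/10) ≈ 1.48).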

We then assessed the quality of the output images subjectively, comparing the performance of the six SR algorithms by the visual quality of the restored images. For comparison, given a 3× upscale factor, the restoration results of the different SR algorithms on the Set5, Set14, BSD100, and Urban100 test sets are shown in Figures 5–8.

**Figure 5.** Super-resolution restoration results of the image "Baby" in Set5 [21]. (**a**) Bicubic; (**b**) SRCNN [5]; (**c**) SCN [9]; (**d**) VDSR [8]; (**e**) DRCN [10]; (**f**) MGEP-SRCNN; (**g**) Ground truth.

**Figure 6.** Super-resolution restoration results of the image "Flowers" in Set14 [22]. (**a**) Bicubic; (**b**) SRCNN [5]; (**c**) SCN [9]; (**d**) VDSR [8]; (**e**) DRCN [10]; (**f**) MGEP-SRCNN; (**g**) Ground truth.

**Figure 7.** Super-resolution restoration results of the image "016" in BSD100 [23]. (**a**) Bicubic; (**b**) SRCNN [5]; (**c**) SCN [9]; (**d**) VDSR [8]; (**e**) DRCN [10]; (**f**) MGEP-SRCNN; (**g**) Ground truth.

**Figure 8.** Super-resolution restoration results of the image "002" in Urban100 [24]. (**a**) Bicubic; (**b**) SRCNN [5]; (**c**) SCN [9]; (**d**) VDSR [8]; (**e**) DRCN [10]; (**f**) MGEP-SRCNN; (**g**) Ground truth.

It can be intuitively seen from the figures that the images reconstructed by both traditional Bicubic and SRCNN exhibit aliasing, while the image restored by our algorithm is clearer and sharper, with better reconstruction quality. In the details, such as the baby's eye in Figure 5, the petal color in Figure 6, the branch texture in Figure 7, and the building glass in Figure 8, the MGEP-SRCNN reconstructions have clearer features, no sawtooth artifacts, and better match the visual perception of the human eye.

Figure 9 shows the PSNR/SSIM values of the six algorithms on the Set5, Set14, BSD100, and Urban100 test sets for a 3× upscale factor. The MGEP-SRCNN algorithm achieved the best PSNR on all four datasets. Except for the second-best result on BSD100, its SSIM values were also the best.

**Figure 9.** Image restoration results of the six SR algorithms. (**a**) PSNR; (**b**) SSIM.
