**Algorithm 1** Training procedure of NSRN (only the steps recoverable from the source are shown).

**Require:**

*B*: number of MSDB and SADB training iterations;

*D* = {*D*<sub>1</sub>, *D*<sub>2</sub>, ..., *D<sub>n</sub>*}: NSRN training set;

*w*<sub>0</sub>: weight initialization.

**Ensure:** the trained NSRN.

...

2: MSDB learning: *s<sub>m</sub>*(*w<sub>s</sub>*) = *S<sub>m</sub>*(*f<sub>p</sub>*(*B*)), where *w<sub>s</sub>* is the parameter set of MSDB

...

8: **end for**

9: Initialize NSRN with *w<sub>n</sub>*

... **do**

#### **4. Experiments and Discussions**

#### *4.1. Dataset*

There are few public images of real space targets, and ground-truth labels for degraded images are also difficult to obtain. Therefore, degraded-image simulation is used to generate training data to verify the effectiveness of the proposed method. The 3D models used to render simulated space objects are from STK (Satellite Tool Kit) [69], which provides various satellite models and turbulence degradation models. The sunlight reflected by space objects is refracted by atmospheric turbulence, which blurs the images observed by ground-based telescopes. This turbulence blur can be represented by the following model [28]:

$$h(u, v) = e^{-3.44\left(\frac{\lambda f U}{r}\right)^{5/3}} \tag{13}$$

where *U* = √(*u*<sup>2</sup> + *v*<sup>2</sup>) is the spatial frequency, (*u*, *v*) are the frequency-domain coordinates, *λ* is the wavelength, *f* is the optical focal length, and *r* is the atmospheric coherence length. It can be seen from Equation (13) that the smaller the *r*, the stronger the atmospheric turbulence and the blurrier the image. Therefore, blurred images with different degrees of turbulence can be generated by changing the value of *r*.
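As a concrete illustration, the following minimal Python sketch evaluates Equation (13) on a discrete frequency grid. The function name, the pixel-pitch parameter, and the use of `numpy.fft.fftfreq` to build the grid are our own choices for the sketch, not details given in the paper.

```python
import numpy as np

def turbulence_otf(shape, wavelength, focal_length, r, pixel_pitch=1.0):
    """Long-exposure turbulence transfer function of Equation (13):
    h(u, v) = exp(-3.44 * (wavelength * focal_length * U / r)**(5/3)),
    with U = sqrt(u**2 + v**2). All units are assumed consistent."""
    rows, cols = shape
    u = np.fft.fftfreq(cols, d=pixel_pitch)   # horizontal frequencies
    v = np.fft.fftfreq(rows, d=pixel_pitch)   # vertical frequencies
    U = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    return np.exp(-3.44 * (wavelength * focal_length * U / r) ** (5.0 / 3.0))
```

Smaller `r` enlarges the exponent and attenuates high frequencies more strongly, which matches the observation above.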

To obtain more diverse training data, clear satellite images with different attitude angles are obtained by rotating the 3D satellite model from STK. The acquired images are augmented by rotating them 90, 180, and 270 degrees and flipping them horizontally and vertically. The images are then blurred using the atmospheric turbulence long-exposure degradation function shown in Equation (13). By setting different *r* values in [0, 0.02], blurred image datasets with three levels can be obtained, contained in three subsets: mildly degraded (*r* ∈ [0.005, 0.01)), moderately degraded (*r* ∈ [0.005, 0.015)), and severely degraded (*r* ∈ [0.005, 0.02]). During atmospheric turbulence imaging, the turbulence blurring is also mixed with photon noise, dark noise, reset noise, and readout noise. These noises mainly obey Gaussian and Poisson distributions, so we add Gaussian noise and Poisson noise to the blurred images. The parameter of the Gaussian noise ranges over [35, 42], and the parameter of the Poisson noise ranges over [4, 7]. The full degradation model is expressed as:

$$f(x, y) = g(x, y) \ast h(x, y) + n(x, y) + p(x, y), \tag{14}$$

where *f* is the observed image, *g* is the original image, *h* is the spatial-domain PSF of the atmospheric turbulence corresponding to Equation (13), *n* represents Gaussian noise, *p* represents Poisson noise, and ∗ denotes convolution. To ensure the generalization ability of the model and encourage the restoration model to learn the blur degradation modes and the corresponding restoration modes, we adopt the strategy of training on small image patches and validating and testing on large images.
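A sketch of the full simulation pipeline of Equation (14) might look as follows, applying the blur in the frequency domain and then adding the two noise terms. How the paper's noise parameters map to a Gaussian standard deviation and a Poisson scaling factor is our assumption.

```python
import numpy as np

def degrade(img, wavelength, focal_length, r, gauss_sigma, poisson_scale):
    """Simulate Equation (14): blur with the turbulence transfer function
    (turbulence_otf from the previous sketch), then add Poisson and
    Gaussian noise. `img` is a float array in [0, 255]."""
    otf = turbulence_otf(img.shape, wavelength, focal_length, r)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
    blurred = np.clip(blurred, 0.0, None)
    # Poisson (signal-dependent) noise, then additive Gaussian noise.
    noisy = np.random.poisson(blurred * poisson_scale) / poisson_scale
    noisy = noisy + np.random.normal(0.0, gauss_sigma, img.shape)
    return np.clip(noisy, 0.0, 255.0)
```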

We cut the images at 20-pixel intervals to generate 32 × 32 image patches and then discarded patches in which more than 90% of the area was black background, resulting in 117,300 image patches for training the model. Some of the generated training samples are shown in Figure 7. A total of 56 large images that are not used for the training set serve as the simulated test set; some test samples are shown in Figure 8. We also collected 17 real-world turbulence-degraded images from public sources as a second test set, as shown in Figure 9. Detailed information about the dataset is shown in Table 1. The spatial resolutions of the large images in the table are not uniform and range from 256 × 256 to 1024 × 1024.
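The patch-generation step can be sketched as below. The gray-level threshold used to decide which pixels count as black background is a hypothetical choice, since the paper only states the 90% criterion.

```python
import numpy as np

def extract_patches(img, size=32, stride=20, max_black_frac=0.9, black_level=10):
    """Cut `size` x `size` patches at `stride`-pixel intervals, discarding
    patches whose black-background fraction exceeds `max_black_frac`.
    `black_level` (the gray value below which a pixel counts as background)
    is an assumed parameter, not taken from the paper."""
    patches = []
    h, w = img.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patch = img[y:y + size, x:x + size]
            if np.mean(patch < black_level) <= max_black_frac:
                patches.append(patch)
    return patches
```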

**Figure 7.** Some training data. From left to right: clear; mildly degraded; moderately degraded; and severely degraded.

**Table 1.** Composition details of dataset.


**Figure 8.** Some simulated data for testing. From left to right: clear; mildly degraded; moderately degraded; and severely degraded.

**Figure 9.** Some real-world turbulence-degraded data for testing.

#### *4.2. Metrics for Evaluation and Methods for Comparison*

The simulated images have ground-truth labels, so the performance of each algorithm can be evaluated by combining subjective assessment with objective metrics. For the objective metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to evaluate the restoration performance of each algorithm. For the subjective assessment, the quality of the restored image is judged by human vision against the reference images. For real images, due to the lack of reference images, only subjective evaluation and no-reference metrics can be used. In this paper, the no-reference evaluation metrics are Brenner, Laplacian, SMD, Variance, Energy, Vollath, and Entropy; their calculation methods can be found in [70].
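For reference, minimal implementations of three of these no-reference measures under their common textbook definitions are sketched below. The paper follows the formulations in [70], which may differ in details such as normalization; the Vollath variant shown is the F4 form.

```python
import numpy as np

def brenner(img):
    """Brenner gradient: sum of squared two-pixel differences (img as float)."""
    return float(np.sum((img[2:, :] - img[:-2, :]) ** 2))

def vollath_f4(img):
    """Vollath autocorrelation measure (F4 variant)."""
    return float(np.sum(img[:-1, :] * img[1:, :])
                 - np.sum(img[:-2, :] * img[2:, :]))

def entropy(img):
    """Shannon entropy of the gray-level histogram of an 8-bit image."""
    p, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```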

Gao [2] conducted an extensive analysis of traditional restoration methods for space-object images. The experimental results showed that traditional methods are not effective at removing turbulence blur, so the proposed method is not compared against them. To better analyze and evaluate the performance of our method, representative deep learning methods are selected for comparative experiments: Gao [2], Chen [38], Mao-30 [49], MemNet [50], CBDNet [48], ADNet [31], DPDNN [53], and DPIR [36]. For fairness, every comparison method uses the hyperparameters given in its original paper and is trained on the training set of this paper.

#### *4.3. Ablation Experiment*

Our proposed model (Figure 1) uses an asymmetric U-NET as the backbone. To verify the effectiveness of the proposed components, an ablation experiment is performed. In this experiment, the backbone U-NET is named Model1, and Model2 to Model6 are formed by plugging MSDB, SADB, FBPR, and the curriculum learning strategy (TNRS) into Model1, as shown in Table 2. When training Model1 to Model5, the three training subsets with different blur degrees are directly merged into the final training set. Model6 is trained using the steps shown in Algorithm 1, following an easy-to-hard curriculum (sketched after this paragraph). The trained models are tested on the three differently degraded test sets; the objective evaluation results are shown in Table 2, and some restored images are shown in Figure 10.
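To make the training difference concrete, here is a heavily simplified PyTorch sketch of an easy-to-hard curriculum over the three blur subsets. The placeholder network, the accumulation of subsets across stages, and all hyperparameters are illustrative assumptions, not the actual Algorithm 1.

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Toy stand-in for NSRN; the real model is the asymmetric U-NET of
# Figure 1 with the MSDB, SADB, and FBPR modules plugged in.
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def dummy_subset(n=8):
    """Dummy (degraded, clear) patch pairs standing in for one blur subset."""
    x = torch.rand(n, 1, 32, 32)
    return TensorDataset(x, x.clone())

# Subsets ordered easy -> hard: mild, moderate, severe.
subsets = [dummy_subset(), dummy_subset(), dummy_subset()]

for stage in range(len(subsets)):
    # Assumed curriculum: each stage trains on all subsets seen so far,
    # gradually exposing the network to harder degradations.
    pool = ConcatDataset(subsets[: stage + 1])
    for degraded, clear in DataLoader(pool, batch_size=4, shuffle=True):
        opt.zero_grad()
        loss = loss_fn(net(degraded), clear)
        loss.backward()
        opt.step()
```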

**Table 2.** Performance of models with different components (The best results are shown in bold fonts).


**Figure 10.** Restoration of severe turbulence blur using different modules (The red boxes represent the focus region).

It can be seen from Table 2 that: (1) Model1, which only contains the backbone U-NET, lacked sufficient representation power to learn intrinsic features from degraded images and reconstruct them well. (2) The PSNR of Model2, obtained by plugging MSDB into Model1, improved significantly because MSDB gives U-NET better global and local information representation capabilities. The PSNR of Model3, obtained by plugging SADB into Model1, decreased, but the image details are richer. (3) Model4 was obtained by plugging both MSDB and SADB into Model1. Compared with Model1, Model2, and Model3, both the PSNR and the SSIM of Model4 improved significantly because Model4 has stronger noise suppression. (4) Model5, obtained by plugging FBPR into Model4, produced more consistent results. (5) Model6 (NSRN) added the curriculum learning algorithm to Model5 to train the network. Its performance improved further over Model5, which shows that the proposed model has better generalization ability and captures the mapping between degraded images and sharp images more easily. Moreover, among the restored images of each model in Figure 10, the results of Model6 have the best visual quality, with clearer edges and textures.

#### *4.4. Experiments and Comparative Analysis of Simulated Images*

(1) Model for mild degradation

We use the trained models for restoration experiments on the mildly degraded test data; the average objective evaluation metrics are shown in Table 3. For PSNR, Mao, CBDNet, ADNet, DPDNN, DPIR, and the proposed method all achieve very good results. These methods all have more complex network models and therefore stronger representation ability. For SSIM, DPDNN, DPIR, and our method perform significantly better than the remaining methods, which shows that methods based on noise suppression restore texture details better. Compared to the second-ranked method, our method improves PSNR by 0.16 dB and SSIM by 0.036. An example set of restored results is shown in Figure 11; for mildly degraded images, almost all methods achieve good visual quality.

**Table 3.** Average PSNR and SSIM of different state-of-the-art methods on mild degradation (The best results are shown in bold fonts).


**Figure 11.** Restoration using different state-of-the-art methods on mild turbulence blur (The red boxes represent the focus region).

(2) Model for moderate degradation

The test results of all models on the moderately degraded dataset are shown in Table 4. For PSNR, DPIR, DPDNN, and Mao achieve competitive results. However, our method performs best and is nearly 0.3 dB higher than the second-ranked method, indicating that the proposed method gains strong representation ability from modules such as FBPR. On SSIM, the best method is DPDNN, and our method is close behind. The restoration results of different methods on a typical moderately degraded image are shown in Figure 12. The visual quality of the images restored by DPDNN, DPIR, Mao, and our method is similar; DPDNN gives sharper edges in some regions, while our method is more consistent.

**Figure 12.** Restoration using different state-of-the-art methods on moderate turbulence blur (The red boxes represent the focus region).

**Table 4.** Average PSNR and SSIM of different state-of-the-art methods on moderate degradation (The best results are shown in bold fonts).


(3) Model for severe degradation

The objective evaluation results of all restoration models on the severely degraded test set are shown in Table 5. Our method has an obvious advantage on this dataset: its PSNR is nearly 0.2 dB higher than the second-ranked method, and its SSIM is 0.007 higher. Furthermore, for PSNR, our method is the only one exceeding 28 dB, and it is the only method that performs best on both metrics. This shows that for severely degraded images with heavy noise and heavy blur, a method that explicitly deals with the noise is more competitive. The restoration results of different methods on a typical severely degraded image are shown in Figure 13. Visually, our method restores more texture details and has an obvious advantage.

**Figure 13.** Restoration using different methods on severe turbulence blur (The red boxes represent the focus region).

**Table 5.** Average PSNR and SSIM of different state-of-the-art methods on severe degradation (The best results are shown in bold fonts).


In general, the proposed method, DPDNN, and DPIR are the most competitive methods, while the Gao, Mao, and Chen models are too small to represent the huge sample space spanned by severely degraded images. This shows that a network that can restore heavily noisy, severely blurred images needs not only sufficient representation ability but also mechanisms for learning features, such as attention. Moreover, as the model becomes more complex, the generalization and restoration ability of the network can be further improved by processing blur and noise separately.

To better compare the performance of each algorithm under different noise levels, an image is randomly selected from the test set and mixed with different levels of noise for restoration experiments. As seen in Figure 14, DPIR and our method perform similarly on SSIM, and DPDNN also performs well when the noise intensity is greater than 35. Moreover, our method has the best PSNR at almost all noise levels.
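The sweep behind Figure 14 can be reproduced in outline with a few lines; the noise-level range and the identity placeholder standing in for a trained restoration model are assumptions for illustration.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
clean = rng.random((256, 256)) * 255.0   # stand-in for the selected test image

def restore(img):
    """Placeholder for a trained restoration model under test."""
    return img

for sigma in range(25, 46, 5):           # assumed range of noise levels
    noisy = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 255.0)
    restored = restore(noisy)
    psnr = peak_signal_noise_ratio(clean, restored, data_range=255)
    ssim = structural_similarity(clean, restored, data_range=255)
    print(f"sigma={sigma}: PSNR={psnr:.2f} dB, SSIM={ssim:.4f}")
```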

**Figure 14.** Results of different noise levels: (**a**) Test image; (**b**) SSIM; (**c**) PSNR.

#### *4.5. Experiments and Comparative Analysis of Real Images*

The no-reference evaluation results of the restorations produced by all compared methods on real data are shown in Table 6, and the restored images are shown in Figure 15. There is still a large gap between the simulated training data and the real image distribution, so all methods encounter cross-domain problems. Nevertheless, under the same conditions, our method is the best on these evaluation metrics. Of course, the reliability of no-reference evaluation and its consistency with human vision require further research [24]. The proposed method enhances texture and edges to a certain degree, so it holds a slight advantage over the other methods on edge- and gradient-based metrics. As shown in Figure 15, due to the weak network representation ability of the methods of Gao [2] and Mao [49], their restored images remain blurred. The remaining methods produce visually pleasing restorations. The visual quality of the method of Chen [38] is close to that of our method. Our method performs well on severely degraded images because it treats additive noise and blur degradation separately and uses dedicated modules for denoising and blur deconvolution.

**Table 6.** Results of no-reference evaluation metrics on real test data (The best results are shown in bold fonts).


**Figure 15.** Restoration using different methods on real turbulence blur (The red boxes represent the focus region).

#### **5. Conclusions**

Atmospheric turbulence-blurred images are usually observed at long distances and contain severe noise. Therefore, the restoration of atmospheric turbulence-degraded images comprises two tasks: deblurring and denoising. Although deblurring and denoising are both low-level vision tasks, their internal principles are different: denoising removes high-frequency noise from images, while deblurring uses deconvolution to recover high-frequency information from blurred images. Based on this knowledge, we design a deep neural network model for the restoration of atmospheric turbulence-degraded images based on curriculum learning. Noise suppression of degraded images is achieved by a dedicated denoiser, without enforcing fully decoupled denoising and deblurring. The experimental results demonstrate the effectiveness of our method. However, the restoration of real turbulence-degraded images is still an open problem. Designing a GAN [71] model based on the ideas proposed in this paper to improve the restoration of real images will be the focus of future research.

**Author Contributions:** Conceptualization, C.X.; methodology, C.X.; software, J.S.; validation, J.S.; formal analysis, Z.G.; investigation, J.S.; resources, J.S.; data curation, J.S.; writing—original draft preparation, J.S.; writing—review and editing, C.X.; visualization, C.X.; supervision, C.X.; project administration, C.X.; funding acquisition, C.X. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work has been partially supported by the Sichuan Science and Technology Program (grant nos. 2021YFG0022, 2022YFG0095).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We thank anonymous reviewers and academic editors for their valuable comments.

**Conflicts of Interest:** The authors declare no conflict of interest.
