
Atmospheric Turbulence Phase Reconstruction via Deep Learning Wavefront Sensing

1 School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
2 The Key Laboratory for Special Fiber and Fiber Sensor of Hebei Province, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(14), 4604; https://doi.org/10.3390/s24144604
Submission received: 24 May 2024 / Revised: 1 July 2024 / Accepted: 15 July 2024 / Published: 16 July 2024
(This article belongs to the Section Optical Sensors)

Abstract:
The fast and accurate reconstruction of the turbulence phase is crucial for compensating atmospheric disturbances in free-space coherent optical communication. Traditional methods suffer from slow convergence and inadequate phase reconstruction accuracy. This paper introduces a deep learning-based approach for atmospheric turbulence phase reconstruction that uses light intensity images affected by turbulence as the basis for feature extraction. The method employs extensive light intensity-phase sample pairs across varying turbulence intensities for training, enabling phase reconstruction directly from light intensity images. The trained U-Net model reconstructs phases for strong, medium, and weak turbulence with an average processing time of 0.14 s. Simulation outcomes indicate an average loss function value of 0.00027 after convergence, with a mean squared error of 0.0003 for individual turbulence reconstructions. Experimental validation yields a mean squared error of 0.0007 for single turbulence reconstruction. The proposed method demonstrates rapid convergence, robust performance, and strong generalization, offering a novel solution for atmospheric disturbance correction in free-space coherent optical communication.

1. Introduction

Free-space coherent optical communication is recognized as a critical technology to surmount current high-speed communication limitations due to its superior data rates and information capacity [1,2,3]. During the transmission of a signal beam through free space, atmospheric turbulence-induced refractive index fluctuations lead to amplitude and phase distortions, substantially impairing the communication quality of free-space coherent optical communication systems [4,5]. This phenomenon is particularly pronounced in long-distance space-to-ground communications, where the signal beam undergoes long-distance atmospheric channel propagation, leading to an incomplete phase, distorted wavefronts, and thus an increased bit error rate during signal demodulation at the receiving end. Consequently, the impact of atmospheric turbulence on space-to-ground communication is particularly severe [6]. The principle behind the effect of atmospheric turbulence on the signal beam is the addition of a random turbulence phase to the signal beam. Therefore, the accurate reconstruction of the turbulence phase is crucial for addressing atmospheric disturbance challenges in the field of free-space coherent optical communication.
Adaptive optic methods are a common and effective approach to compensate for spatial phase distortions. Mature adaptive optic technologies are categorized into two main types based on the utilization of wavefront sensing: conventional adaptive optics and wavefront sensor-less adaptive optics [7,8]. The fundamental approach of the conventional adaptive optic method entails the use of wavefront sensing instruments, such as Shack–Hartmann wavefront sensors or curvature wavefront sensors, to measure the wavefront aberrations in the signal beam. Subsequently, these aberrations are corrected by a wavefront compensation system composed of deformable mirrors. Nevertheless, scintillation effects caused by intense turbulence can disrupt the wavefront of the signal beam, thereby compromising the accuracy of wavefront sensor measurements [9,10].
The wavefront sensor-less adaptive optics approach does not directly measure wavefront information. Consequently, optimization algorithms such as GA, PSO, RUN, or SPGD are employed to iteratively optimize the control signal of the wavefront corrector, using metrics such as image clarity, the Strehl ratio, or coupling efficiency as the objective function to achieve optimal phase compensation [11,12,13,14]. Nonetheless, the iterative nature of these optimization algorithms constrains the control bandwidth of wavefront sensor-less adaptive optic systems. The phase inversion method directly reconstructs the distorted wavefront from the far-field image. However, conventional phase inversion methods based on the Gerchberg–Saxton algorithm [15] or the phase diversity method [16] require multiple iterations to resolve the wavefront phase information, which fails to satisfy the real-time demands of spatial phase distortion compensation.
Owing to their outstanding ability to extract deep feature information from images, deep neural networks are expected to enable the direct calculation of the wavefront phase from far-field images, improving the real-time performance and accuracy of phase inversion [17,18,19,20]. In this study, a method is presented for reconstructing the turbulence phase based on the U-Net network. The method uses intensity images affected by turbulence as feature extraction objects and utilizes a large dataset of intensity-phase samples under different turbulence intensities for training, achieving turbulence phase reconstruction based on intensity images. To better deal with complex and diverse atmospheric conditions, the turbulence phase was extracted from Gaussian beams affected by different turbulence intensities (Cn2 = 5 × 10−17 m−2/3, Cn2 = 5 × 10−15 m−2/3, Cn2 = 1 × 10−13 m−2/3). Moreover, this paper further verifies the effectiveness of the proposed method with an experimental system. The results demonstrate that the U-Net-based approach offers a rapid and accurate method for reconstructing atmospheric turbulence phases, presenting a novel strategy for wavefront phase compensation. The proposed method exhibits strong generalization ability and robustness, enabling accurate and rapid wavefront phase reconstruction across diverse turbulence intensities, which is advantageous for the deployment of free-space coherent optical communication systems.

2. Atmospheric Turbulence Model

To evaluate the detrimental impact of atmospheric turbulence on coherent optical communication systems, a quantitative analysis of the stochastic distribution properties of both the amplitude and phase after the transmission of the signal beam through atmospheric turbulent channels is essential. The refractive index fluctuations encountered by light propagating through the atmosphere can be described as a stochastic process with a known power spectral density. An atmospheric turbulence phase screen is therefore created with the power spectrum inversion method, which generates an independent stochastic process that mirrors the refractive index power spectral density of atmospheric turbulence, facilitating numerical simulations of the spatial phase distortion caused by atmospheric turbulence [21,22]; the method's deficiency at low spatial frequencies is addressed by third-order subharmonic compensation, as detailed below. The process of generating phase screens using the power spectral method can be expressed as follows:
$$\varphi(j\Delta x, l\Delta y) = \sum_{m=0}^{N_x}\sum_{n=0}^{N_y} R(k_x, k_y)\sqrt{\Delta k_x\,\Delta k_y}\,\sqrt{2\pi k^2 \Delta z\,\Phi_n(m\Delta k_x, n\Delta k_y)}\,\exp\!\left[2\pi i\left(\frac{jm}{N_x}+\frac{ln}{N_y}\right)\right], \tag{1}$$
where Δx and Δy represent the sampling intervals of the phase screen in the x and y directions; R(kx, ky) is a complex Gaussian random matrix; Φn(mΔkx, nΔky) is the power spectrum function; the wave number k is given by k = 2π/λ; Δkx = 2π/(NxΔx) and Δky = 2π/(NyΔy) are the wave number increments; and Δz represents the distance between phase screens. However, the power spectrum inversion method produces an insufficient number of low-frequency components. To solve this problem, we added third-order subharmonic compensation for all turbulence intensities. The subharmonics were generated as follows:
$$\varphi_{SH}(j\Delta x, l\Delta y) = \sum_{p=1}^{N_p}\sum_{m=-1}^{1}\sum_{n=-1}^{1} R(k_{xp}, k_{yp})\sqrt{\Delta k_{xp}\,\Delta k_{yp}}\,\sqrt{2\pi k^2 \Delta z\,\Phi_n(m\Delta k_{xp}, n\Delta k_{yp})}\,\exp\!\left[2\pi i\left(\frac{jm}{3^p N_x}+\frac{ln}{3^p N_y}\right)\right], \tag{2}$$
where Np is the number of subharmonic orders, p is the subharmonic index, and the frequency intervals of the subharmonics are Δkxp = Δkx/3^p and Δkyp = Δky/3^p, respectively. The phase screen after subharmonic compensation is obtained by adding (1) and (2).
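As a concrete illustration, the following Python sketch generates a single phase screen with the power spectrum inversion method of Equation (1). It is a minimal sketch under stated assumptions, not the authors' code: the Kolmogorov refractive-index spectrum Φn(κ) = 0.033·Cn²·κ^(−11/3), the layer thickness dz, and the FFT normalization are our assumptions, and the third-order subharmonic compensation of Equation (2) is omitted for brevity.

```python
import numpy as np

def phase_screen(N=128, width=0.4, Cn2=5e-15, wavelength=532e-9, dz=1000.0):
    """Single Kolmogorov phase screen via power spectrum inversion (Eq. (1)).

    Grid size, screen width, and wavelength follow the paper; dz is an assumed
    layer thickness. Subharmonic compensation (Eq. (2)) is omitted.
    """
    dx = width / N                       # sampling interval (m)
    k = 2 * np.pi / wavelength           # optical wave number
    dk = 2 * np.pi / width               # wave number increment
    # Angular spatial-frequency grid (rad/m)
    kx = np.fft.fftfreq(N, d=dx) * 2 * np.pi
    KX, KY = np.meshgrid(kx, kx)
    KR = np.hypot(KX, KY)
    KR[0, 0] = 1.0                       # placeholder to avoid division by zero
    # Assumed Kolmogorov spectrum: Phi_n = 0.033 * Cn2 * kappa^(-11/3)
    phi_n = 0.033 * Cn2 * KR ** (-11.0 / 3.0)
    # Phase power spectrum for one layer of thickness dz
    phi_phase = 2 * np.pi * k ** 2 * dz * phi_n
    phi_phase[0, 0] = 0.0                # remove the piston (DC) term
    # Filter a complex Gaussian random field by the spectrum and invert
    R = np.random.randn(N, N) + 1j * np.random.randn(N, N)
    return np.real(np.fft.ifft2(R * np.sqrt(phi_phase) * dk)) * N ** 2
```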
Based on the methodology outlined above, the impact of atmospheric turbulence on the beam can be likened to introducing random phase noise, with the extent of spatial phase distortion induced by atmospheric turbulence being quantified by the atmospheric refractive index structure constant Cn2. The simulation results of atmospheric turbulence phase screens under varying Cn2 values are illustrated in Figure 1. The simulation assumes an optical wavelength of 532 nm, with a grid point count of N = 128 and a phase screen width of 0.4 m. Figure 1a–c depict phase screens corresponding to weak (Cn2 = 5 × 10−17 m−2/3), moderate (Cn2 = 5 × 10−15 m−2/3), and strong (Cn2 = 1 × 10−13 m−2/3) turbulence, respectively. As the Cn2 value increases, the severity of spatial phase distortion induced by atmospheric turbulence escalates. While weak turbulence results in relatively mild phase distortion, strong turbulence can induce profound spatial phase distortion in the beam, significantly impairing the light’s wavefront.
To simulate the transmission of Gaussian light through the atmosphere, this paper adopts the fundamental mode, the most important mode of the Gaussian beam, and assumes that the beam propagates in the z direction. The optical field of the Gaussian beam can be expressed as follows [23]:
$$U(x, y, z) = U_0\,\frac{\omega_0}{\omega(z)}\exp\!\left(-\frac{r^2}{\omega^2(z)}\right)\times\exp\!\left\{-i\left[kz - \tan^{-1}\frac{z}{z_R} + \frac{kr^2}{2R(z)}\right]\right\}, \tag{3}$$
where U0 is a constant; r² = x² + y² is the squared radial distance from the beam axis; ω(z) represents the beam radius at position z; ω0 is the waist radius of the fundamental-mode Gaussian beam; R(z) is the wavefront curvature radius at position z; and zR is the Rayleigh range.
Assuming that the spatial phase distortion induced by a random continuous atmosphere is ascribed to an infinitely thin atmospheric turbulence phase screen positioned at the midpoint of the propagation path, the transmission of Gaussian beams through atmospheric turbulence can be modeled as two sequential processes: propagation in vacuum and modulation by the phase screen. Equation (3) can be divided into two parts: the amplitude factor before the multiplication sign and the phase factor after the multiplication sign. The main impact of atmospheric turbulence on the Gaussian beam is on the phase. After propagating a certain distance, the accumulated phase fluctuations caused by turbulence will result in changes in the complex amplitude, leading to distortion in the light intensity. The optical field expression of the Gaussian beam affected by turbulence is given by
$$U(x, y, z) = U_0\,\frac{\omega_0}{\omega(z)}\exp\!\left(-\frac{r^2}{\omega^2(z)}\right)\times\exp\!\left\{-i\left[kz - \tan^{-1}\frac{z}{z_R} + \frac{kr^2}{2R(z)}\right]\right\}\times\exp\!\left[i\varphi(x, y)\right]. \tag{4}$$
In this study, the power spectrum inversion method was used to simulate the phase screens with the following parameters: the wavelength of the light wave was λ = 532 nm, the initial radius of the Gaussian beam was ω0 = 20 mm, the number of grid points was N = 128, and the width of the phase screens was 0.4 m. Before network training, each image was converted to grayscale. The intensity distributions simulated for the different turbulence intensities are shown in Figure 2.
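The simulation of this section can be sketched in a few lines of Python. The snippet below evaluates the fundamental-mode Gaussian field of Equation (3) and modulates it with a turbulence phase screen as in Equation (4); it assumes the phase_screen helper sketched above and an illustrative propagation distance z, and it omits the subsequent vacuum propagation to the receiver plane (e.g., by the angular-spectrum method).

```python
import numpy as np

def gaussian_field(N=128, width=0.4, w0=20e-3, wavelength=532e-9, z=1000.0):
    """Fundamental-mode Gaussian field at distance z (Equation (3)).

    Parameter values follow the paper where stated; the propagation distance z
    is an illustrative assumption.
    """
    x = np.linspace(-width / 2, width / 2, N)
    X, Y = np.meshgrid(x, x)
    r2 = X ** 2 + Y ** 2
    k = 2 * np.pi / wavelength
    zR = np.pi * w0 ** 2 / wavelength         # Rayleigh range
    wz = w0 * np.sqrt(1 + (z / zR) ** 2)      # beam radius at z
    Rz = z * (1 + (zR / z) ** 2)              # wavefront curvature radius at z
    amplitude = (w0 / wz) * np.exp(-r2 / wz ** 2)
    phase = k * z - np.arctan(z / zR) + k * r2 / (2 * Rz)
    return amplitude * np.exp(-1j * phase)

# Modulate the vacuum field with a turbulence phase screen (Equation (4));
# phase_screen is the helper sketched in the previous block.
U = gaussian_field() * np.exp(1j * phase_screen(Cn2=5e-15))
intensity_image = np.abs(U) ** 2              # grayscale input to the network
```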
Figure 2 illustrates that the distortion of the light beam escalates as the turbulence intensity increases. At a turbulence intensity of Cn2 = 5 × 10−17 m−2/3, the light spot displayed only slight drift. When the turbulence intensity was Cn2 = 5 × 10−15 m−2/3, the light spot showed slight spreading without discernible distortion. However, at a turbulence intensity of Cn2 = 1 × 10−13 m−2/3, the light spot center experienced severe drift, and the initially circular shape transformed into an irregular form. This transformation is attributed to the increase in irregular turbulent phase contributions to the light beam as the turbulence intensity grows, which enhances the phase and intensity fluctuations and results in substantial beam distortion.

3. U-Net Model

The U-Net architecture employed in this study is illustrated in Figure 3. It consists of four downsampling stages and four upsampling stages. Each stage contains two convolutional layers. In contrast to the standard U-Net, this network utilizes padding to prevent a reduction in the image size during convolution. Consequently, the horizontal gray arrows directly concatenate two feature maps using the “cat” operation. During the downsampling process, which is represented by the red arrows, a 2 × 2 max pooling operation was applied to reduce the size of the feature maps. Conversely, during the upsampling process, which is represented by the green arrows, a 2 × 2 transpose-convolution operation was used.
The network input was a single-channel two-dimensional image representing a Gaussian beam affected by atmospheric turbulence. To reduce the computational complexity, the image size was initially reduced to 128 × 128 pixels. The downsampling process involved applying a convolution (conv1) with a 3 × 3 kernel, stride of 1, and padding of 1 to obtain 64 feature maps with a size of 128 × 128. These feature maps were then downsampled to 64 feature maps with a size of 64 × 64 using a 2 × 2 max pooling operation. This process was repeated with conv2, resulting in 128 feature maps of size 64 × 64 being downsampled to 128 feature maps of size 32 × 32. Subsequently, conv3 generated 256 feature maps of size 32 × 32, which were further downsampled to 256 feature maps of size 16 × 16. Conv4 produced 512 feature maps with a size of 16 × 16, which were downsampled to 512 feature maps with a size of 8 × 8. Finally, conv5 generated 1024 feature maps of size 8 × 8, indicating the completion of the downsampling process. The network then proceeded with an upsampling process to restore the original image size.
During the first upsampling stage, the 1024 feature maps produced by conv5 were upsampled to 512 feature maps with dimensions of 16 × 16. These upsampled feature maps were concatenated (cat) with the corresponding feature maps from conv4, forming 1024 feature maps of size 16 × 16, which then passed through conv6 to generate 512 feature maps of size 16 × 16. A similar process occurred in the subsequent upsampling stages. The output of conv6 was upsampled to 256 feature maps of size 32 × 32 and concatenated with the corresponding feature maps from conv3; the resulting 512 feature maps passed through conv7 to yield 256 feature maps of size 32 × 32. The output of conv7 was then upsampled to 128 feature maps of size 64 × 64 and concatenated with the corresponding feature maps from conv2; the resulting 256 feature maps passed through conv8 to yield 128 feature maps of size 64 × 64. Finally, the output of conv8 was upsampled to 64 feature maps of size 128 × 128 and concatenated with the corresponding feature maps from conv1; the resulting 128 feature maps passed through conv9. The prediction layer then outputs the turbulence phase that affects the Gaussian beam transmission. This end-to-end structure allows the U-Net to accurately predict the turbulence phase from intensity images affected by atmospheric turbulence.
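The following PyTorch sketch reproduces the architecture described above: four downsampling stages with 2 × 2 max pooling, four upsampling stages with 2 × 2 transposed convolutions, skip connections via the "cat" operation, and padded 3 × 3 convolutions so that feature map sizes are preserved within each stage. It is an illustrative reconstruction from the text, not the authors' implementation; the ReLU activations and the final sigmoid (which maps the output to the normalized phase range assumed by the cross-entropy loss below) are our assumptions.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions (stride 1, padding 1), as in each U-Net stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class UNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = DoubleConv(1, 64)
        self.conv2 = DoubleConv(64, 128)
        self.conv3 = DoubleConv(128, 256)
        self.conv4 = DoubleConv(256, 512)
        self.conv5 = DoubleConv(512, 1024)
        self.pool = nn.MaxPool2d(2)                          # 2x2 max pooling
        self.up6 = nn.ConvTranspose2d(1024, 512, 2, stride=2)
        self.conv6 = DoubleConv(1024, 512)
        self.up7 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.conv7 = DoubleConv(512, 256)
        self.up8 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.conv8 = DoubleConv(256, 128)
        self.up9 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.conv9 = DoubleConv(128, 64)
        self.out = nn.Conv2d(64, 1, 1)   # prediction layer: single-channel phase

    def forward(self, x):
        c1 = self.conv1(x)                                   # 64 x 128 x 128
        c2 = self.conv2(self.pool(c1))                       # 128 x 64 x 64
        c3 = self.conv3(self.pool(c2))                       # 256 x 32 x 32
        c4 = self.conv4(self.pool(c3))                       # 512 x 16 x 16
        c5 = self.conv5(self.pool(c4))                       # 1024 x 8 x 8
        u6 = self.conv6(torch.cat([self.up6(c5), c4], 1))    # 512 x 16 x 16
        u7 = self.conv7(torch.cat([self.up7(u6), c3], 1))    # 256 x 32 x 32
        u8 = self.conv8(torch.cat([self.up8(u7), c2], 1))    # 128 x 64 x 64
        u9 = self.conv9(torch.cat([self.up9(u8), c1], 1))    # 64 x 128 x 128
        return torch.sigmoid(self.out(u9))                   # normalized phase
```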
The U-Net network uses a loss function computed from the predicted and actual turbulent phases to evaluate its performance. The sample error loss function used in this study was
$$\mathrm{Loss} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\right], \tag{5}$$
where N represents the number of samples, yi represents the actual turbulent phase, and ŷi represents the turbulent phase predicted by the U-Net model. For each sample, we calculated the loss between the true phase and the predicted result; we then summed the losses of all the samples and divided the sum by the number of samples to obtain the average loss. The MSE was used to measure the quality of the model predictions:
$$\mathrm{MSE} = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2, \tag{6}$$
where m represents the total number of pixels in the predicted and actual turbulence phase images.
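Under the assumption that both phase maps are normalized to [0, 1], the two metrics can be computed as in the following sketch (the function names are illustrative):

```python
import torch
import torch.nn.functional as F

def sample_loss(pred, target):
    # Binary cross-entropy averaged over samples, as in Equation (5);
    # assumes both phase maps are normalized to [0, 1].
    return F.binary_cross_entropy(pred, target)

def mse(pred, target):
    # Mean squared error over all pixels, as in Equation (6).
    return torch.mean((target - pred) ** 2)
```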

4. Phase Reconstruction of Atmospheric Turbulence

To verify the effectiveness of the method, we used atmospheric turbulence phase screens to modulate the Gaussian beam along the transmission path and then used the U-Net network to reconstruct the turbulent phase from the distorted Gaussian beam. To ensure that the model had good generalization ability, 6000 training samples and 2400 test samples were constructed across the three turbulence intensities of Cn2 = 5 × 10−17 m−2/3, Cn2 = 5 × 10−15 m−2/3, and Cn2 = 1 × 10−13 m−2/3. The training set was used to train the U-Net model, and the test set was used to evaluate the trained model. The method was considered effective if, when a distorted Gaussian beam affected by atmospheric turbulence was input into the U-Net model, the turbulence phase output by the model was consistent with the actual turbulence phase.
The training process encompasses two primary stages: forward propagation and backward propagation. During forward propagation, the network computes the predicted phase and the corresponding loss; during backward propagation, the network parameters are updated according to the gradients of the loss function. The U-Net network was trained using Python on a system equipped with an NVIDIA RTX 4080 GPU, employing an Adam optimizer with an initial learning rate of 10−3, a batch size of 50, and a total of 2500 training epochs. Once the loss function converged after a sufficient number of training rounds, training was stopped and the weights were saved for the atmospheric turbulence phase reconstruction task.
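A minimal training loop consistent with these settings (Adam, learning rate 10−3, batch size 50, 2500 epochs, cross-entropy loss) might look as follows; the random tensors stand in for the actual intensity-phase dataset, and UNet refers to the sketch in Section 3.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Random stand-ins for the 6000 intensity-phase training pairs (1 x 128 x 128).
intensity = torch.rand(6000, 1, 128, 128)
phase = torch.rand(6000, 1, 128, 128)        # phases normalized to [0, 1]
loader = DataLoader(TensorDataset(intensity, phase), batch_size=50, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = UNet().to(device)                    # the U-Net sketched in Section 3
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2500):                    # total training epochs in the paper
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = F.binary_cross_entropy(model(x), y)  # forward pass + Eq. (5) loss
        loss.backward()                      # backward propagation (gradients)
        optimizer.step()                     # Adam parameter update
```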
Figure 4 presents the convergence curves of the loss function for the three turbulence intensities. As shown, the U-Net model had almost the same reconstruction performance for the different turbulence intensities, indicating that the model has a strong generalization ability. Furthermore, small spikes can be observed in the enlarged curve. The reason is that the loss function value during training is calculated by comparing the reconstructed turbulence phase with the actual turbulence phase at each iteration, and the phase reconstructed in a given iteration is not guaranteed to be closer to the actual phase than that of the previous iteration. The loss function value therefore fluctuates locally while still decreasing overall.
The test results of the model for the three turbulence intensities are shown in Figure 5. The first, second, and third rows present the simulation results for turbulence intensities of Cn2 = 5 × 10−17 m−2/3, Cn2 = 5 × 10−15 m−2/3, and Cn2 = 1 × 10−13 m−2/3, respectively; the last image in each row shows the difference between the reconstructed and the actual phase screens. Under the three turbulence intensities, the MSEs from the first row to the third row are 0.00033, 0.00032, and 0.00029, respectively. According to the network reconstruction results, the difference between the reconstructed and actual phase screens was very small. The proposed model had the best reconstruction effect for strong turbulence, presumably because the images in the strong-turbulence case had more obvious features for the network to identify and extract.
To demonstrate the advantages of the U-Net-based method in turbulent phase reconstruction, we compared it with a CNN [24] and AlexNet [25]. The decline curves of the loss functions of the three models after training under different turbulence intensities are shown in Figure 6. All three networks were trained on the same dataset, and the number of iterations was set to 2500. Figure 6a–c display the loss function descent curves of the three network models under the Cn2 = 5 × 10−17 m−2/3, Cn2 = 5 × 10−15 m−2/3, and Cn2 = 1 × 10−13 m−2/3 turbulence intensities, respectively. The red, blue, and green curves represent the loss function values of the CNN, AlexNet, and the proposed model, respectively, decreasing as the number of iterations increases. Lower loss values indicate more accurate phase reconstruction.
The CNN and AlexNet converged more slowly than the proposed model; even after 2500 iterations, their loss function values still exhibited a downward trend, indicating that these two networks require more iterations to reach convergence. In contrast, the proposed model had already converged within 2500 iterations, and its loss function value was lower than those of the other two networks. These results indicate that the proposed model achieves excellent turbulence phase reconstruction with significantly fewer iterations.
In addition, the time required to predict the turbulent phase is one of the criteria for evaluating the ability to extract turbulent information. Therefore, we recorded the average time required by each of the three models to predict the turbulent phase. As shown in Table 1, the CNN and AlexNet models required approximately 0.31 and 0.29 s, respectively, whereas the proposed model took approximately 0.14 s to reconstruct the turbulent phase. In summary, compared with the other two models, the converged loss function value of the proposed model is more than 20 times lower and the phase reconstruction time is reduced by more than half; the reconstruction rate is also significantly faster than the turbulence change rate as characterized by the Greenwood frequency. This indicates that the reconstruction time meets the requirement for fast reconstruction.
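For reference, the average prediction time per image can be measured as in the sketch below; the timing protocol (warm-up pass, CUDA synchronization, per-image loop) is our assumption rather than the paper's stated procedure.

```python
import time
import torch

@torch.no_grad()
def average_reconstruction_time(model, images, device="cuda"):
    """Average per-image inference time (images: N x 1 x 128 x 128 tensor)."""
    model.eval().to(device)
    images = images.to(device)
    _ = model(images[:1])                 # warm-up pass
    if device == "cuda":
        torch.cuda.synchronize()          # ensure the GPU queue is empty
    start = time.perf_counter()
    for img in images:
        _ = model(img.unsqueeze(0))       # single-image reconstruction
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / len(images)
```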

5. Experiment Results

To evaluate the effectiveness of the proposed method for turbulence phase reconstruction, an experimental platform was established to gather empirical data. As illustrated in Figure 7, the light emitted from the laser source (532 nm) passes through a series of optical components, including a neutral density filter (NDF) and a field lens (FL), resulting in optical power attenuation and beam expansion. Subsequently, the light beam traverses a polarizing filter (PF) before impinging upon the spatial light modulator (SLM). The SLM is programmed with an atmospheric turbulence phase screen of varying turbulence intensity to induce wavefront aberrations [26]. Ultimately, the distorted beam, manipulated by the SLM, is captured by the camera. The images captured by the camera are acquired using dedicated data acquisition software on the computer, whose interface is depicted in Figure 7. On the left-hand side of the computer screen, the atmospheric turbulence phase screen loaded onto the SLM is displayed, while the corresponding distorted light intensity image is presented on the right-hand side. These images, collected by the computer, constitute the dataset used for training and validating the proposed model.
This paper utilizes a dataset comprising 2000 images for training across three distinct turbulence intensities, supplemented by an additional 300 images allocated for testing purposes. Throughout the training phase, all other training parameters remained unchanged. The loss curves of the proposed model under the varied turbulence intensities are shown in Figure 8. Specifically, the loss function curves corresponding to the weak, moderate, and strong turbulence intensities are represented by the blue, red, and green curves, respectively. Notably, the red box highlights the magnified loss function curves after 1200 iterations. Consistent with the simulation results, the loss function demonstrates a downward trend with increasing iterations under experimental conditions.
The training loss values are 0.0005, 0.001, and 0.003 under the weak, moderate, and strong turbulence intensities, respectively. Moreover, the spikes observed in the loss function curves are more prominent than their simulated counterparts, a phenomenon attributed to the noise introduced by external environmental factors and the experimental equipment during testing. The outcomes of model testing are depicted in Figure 9, where panels (a)–(c) display the test results under weak, moderate, and strong turbulence intensities; the corresponding test loss values are 0.00052, 0.0013, and 0.0035, respectively. In Figure 9, the first column shows the distorted light intensity images under the various turbulence intensities; the second column shows the real phase screen images; the third column shows the phase screen images reconstructed by the proposed network model; and the fourth column shows the discrepancy between the second and third columns. These discrepancy images reveal subtle phase fluctuations, indicating the network model's difficulty in effectively capturing features within regions exhibiting significant phase variations and its susceptibility to the noise inherent in the experimental setup. The experimental findings demonstrate the network model's capability to reconstruct phase screen images from distorted light intensity images across diverse turbulence intensities, underscoring its robust generalization capability.

6. Conclusions

This paper presents a method that uses the U-Net model to reconstruct the atmospheric turbulence phase and verifies its effectiveness through experiments. Compared with previous deep learning-based turbulence reconstruction schemes, the proposed scheme decreases the loss function value by more than 20 times and reduces the time consumption by more than half. The detailed experimental results indicate that the proposed method can effectively extract turbulence phases of different intensities. Moreover, the reconstruction performance of the model remained stable and did not fluctuate with changes in the turbulence intensity, indicating the model's strong generalization capability. The proposed method allows for the accurate and rapid reconstruction of the wavefront phase under varying turbulence intensities with strong generalization ability and robustness, thereby benefiting the application of free-space coherent optical communication.

Author Contributions

Conceptualization, Y.L. and M.Z.; methodology, Y.L.; software, M.Z. and X.W.; validation, Y.L., M.Z. and X.W.; formal analysis, X.W.; investigation, M.Z.; resources, Y.L.; data curation, M.Z.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L. and M.Z.; visualization, Y.L.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Hebei Natural Science Foundation (F2022203011) and Science Research Project of Hebei Education Department (QN2022112).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, T.; Zeng, W.; Liu, Y.; Song, S.; Zhao, L.; Chen, C.; Zhang, C.; Guo, L. Secure turbulence-resistant coherent free-space optical communications via chaotic region-optimized probabilistic constellation shaping. Opt. Lett. 2023, 48, 684.
  2. Zhou, H.; Duan, Y.; Song, H.; Su, X.; Zhao, Z.; Zou, K.; Song, H.; Zhang, R.; Boyd, R.W.; Tur, M.; et al. Automatic turbulence mitigation for coherent free-space optical links using crystal-based phase conjugation and fiber-coupled data modulation. Opt. Lett. 2023, 48, 2194.
  3. Wang, L.; Wang, J.; Tang, X.; Chen, H.; Chen, X. Performance analysis of a spatial diversity coherent free-space optical communication system based on optimal branch block phase correction. Opt. Express 2022, 30, 7854.
  4. Wang, K.; Song, L.; Wang, C.; Ren, Z.; Zhao, G.; Dou, J.; Di, J.; Barbastathis, G.; Zhou, R.; Zhao, J.; et al. On the use of deep learning for phase recovery. Light Sci. Appl. 2024, 13, 4.
  5. Li, X.; Liu, X.; Wei, W.; Zhong, X.; Ma, H.; Chu, J.A. DeturNet-Based Method for Recovering Images Degraded by Atmospheric Turbulence. Remote Sens. 2023, 15, 5071.
  6. Ge, H.; Zhou, X.; Chen, Y.; Zhang, J.; Ni, W.; Wang, X.; Zheng, L. Shot-noise-limited photon-counting precoding scheme for MIMO ultraviolet communication in atmospheric turbulence. Opt. Express 2023, 31, 426.
  7. Wang, A.; Pan, A.; Ma, C.; Yao, B. Phase retrieval algorithms: Principles, developments and applications (invited). Infrared Laser Eng. 2022, 51, 20220402.
  8. Christou, J.; Girkin, J.; Kulcsar, C.; Young, L. Feature issue introduction: Applications of adaptive optics. Opt. Express 2021, 29, 11533–11537.
  9. Liu, W.; Yao, K.; Huang, D.; Lin, X.; Wang, L.; Lv, Y. Performance evaluation of coherent free space optical communications with a double-stage fast-steering-mirror adaptive optics system depending on the Greenwood frequency. Opt. Express 2016, 24, 13288–13302.
  10. Wang, M.; Guo, W.; Yuan, X. Single-shot wavefront sensing with deep neural networks for free-space optical communications. Opt. Express 2021, 29, 3465–3478.
  11. Zhang, H.; Xu, L.; Guo, Y.; Cao, J.; Liu, W.; Yang, L. Application of AdamSPGD algorithm to sensor-less adaptive optics in coherent free-space optical communication system. Opt. Express 2022, 30, 7477–7490.
  12. Liu, W.; Jin, D.; Shi, W.; Cao, J. Performance analysis of coherent optical communication based on hybrid algorithm. Opt. Laser Technol. 2022, 149, 107878.
  13. Zhang, Z.; Luo, Y.; Yang, H.; Su, H.; Liu, J. CoolMomentum-SPGD Algorithm for Wavefront Sensor-Less Adaptive Optics Systems. Photonics 2023, 10, 102.
  14. Han, L.; Dai, Y.; Qiu, Y. Compensation for aberrant wavefront in UOWC based on adaptive optics technique employing genetic algorithm. Optik 2023, 281, 170832.
  15. Gerchberg, R.; Saxton, W. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 1972, 35, 274–279.
  16. Gonsalves, R. Phase retrieval and diversity in adaptive optics. Opt. Eng. 1982, 21, 829–832.
  17. You, J.; Gu, J.; Du, Y.; Wan, M.; Xie, C.; Xiang, Z. Atmospheric Turbulence Aberration Correction Based on Deep Learning Wavefront Sensing. Sensors 2023, 23, 9159.
  18. Fu, S.; Zhang, S.; Wang, T.; Gao, C. Pre-turbulence compensation of orbital angular momentum beams based on a probe and the Gerchberg–Saxton algorithm. Opt. Lett. 2016, 41, 3185.
  19. Meng, J.; He, J.; Huang, M.; Li, Y.; Zhu, B.; Kong, X.; Han, Z.; Li, X.; Liu, Y. Predictive correction method based on deep learning for a phase compensation system with frozen flow turbulence. Opt. Lett. 2022, 47, 6417.
  20. Wang, X.; Wu, T.; Dong, C.; Zhu, H.; Zhu, Z.; Zhao, S. Integrating deep learning to achieve phase compensation for free-space orbital-angular-momentum-encoded quantum key distribution under atmospheric turbulence. Photonics Res. 2021, 9, B9.
  21. Abdukirim, A.; Ren, Y.; Tao, Z.; Liu, S.; Li, Y.; Deng, H.; Rao, R. Effects of Atmospheric Coherent Time on Inverse Synthetic Aperture Ladar Imaging through Atmospheric Turbulence. Remote Sens. 2023, 15, 2883.
  22. Liu, Y.; Gao, M.; Zeng, X.; Liu, F.; Bi, W. Factors influencing the applications of active heterodyne detection. Opt. Laser Eng. 2021, 146, 106694.
  23. Wu, Z.; Feng, Z.; Ye, S.; Song, B.; Wei, R.; Yu, C. Beam Properties of a Partially Coherent Beam Propagating Horizontally in Atmospheric Turbulence. Photonics 2023, 10, 477.
  24. Zhang, Y.; Zhou, T.; Fang, L.; Kong, L.; Xie, H.; Dai, Q. Conformal convolutional neural network (CCNN) for single-shot sensorless wavefront sensing. Opt. Express 2020, 28, 19218.
  25. Zhao, Z.; Khan, F.N.; Li, Y.; Wang, Z.; Zhang, Y.; Fu, H.Y. Application and comparison of active and transfer learning approaches for modulation format classification in visible light communication systems. Opt. Express 2022, 30, 16351.
  26. Kudryashov, A.; Gonglewski, J.; Browne, S.; Highland, R. Liquid crystal phase modulator for adaptive optics. Temporal performance characterization. Opt. Commun. 1997, 141, 247–252.
Figure 1. Simulation results of atmospheric turbulence phase screen under different turbulence intensities. (a) Weak turbulence (Cn2 = 5 × 10−17 m−2/3); (b) moderate turbulence (Cn2 = 5 × 10−15 m−2/3); (c) strong turbulence (Cn2 = 1 × 10−13 m−2/3).
Figure 2. The intensity distributions simulated under different turbulence intensities. (a) Weak turbulence (Cn2 = 5 × 10−17 m−2/3); (b) moderate turbulence (Cn2 = 5 × 10−15 m−2/3); (c) strong turbulence (Cn2 = 1 × 10−13 m−2/3).
Figure 3. U-Net network structure.
Figure 4. Loss function curves for different turbulence intensities. The picture in the red dotted box pointed to by the arrow is the enlarged picture of the curve after the 500th iteration.
Figure 5. Actual and reconstructed turbulent phase screens and the difference between them under various turbulence intensities. (a) Cn2 = 5 × 10−17 m−2/3, and the MSE = 0.00033; (b) Cn2 = 5 × 10−15 m−2/3, and the MSE = 0.00032; (c) Cn2 = 1 × 10−13 m−2/3, and the MSE = 0.00029.
Figure 6. Loss function value decline curves under different turbulence intensities of CNN, the AlexNet, and the proposed model. (a) Weak turbulence (Cn2 = 5 × 10−17 m−2/3); (b) moderate turbulence (Cn2 = 5 × 10−15 m−2/3); (c) strong turbulence (Cn2 = 1 × 10−13 m−2/3).
Figure 7. Experimental system structure. NDF, Neutral Density Filter; FL, Field Lens; PF, Polarizing Filter; SLM, Spatial Light Modulator.
Figure 8. The loss function curves under three different turbulence intensities. The blue, red, and green curves correspond to the loss function curves under weak, moderate, and strong turbulence intensities, respectively. The red box indicated by the red arrow shows the amplified loss function curves after 1200 iterations.
Figure 9. The test results of the proposed model under three different turbulence intensities. (a) Cn2 = 5 × 10−17 m−2/3; (b) Cn2 = 5 × 10−15 m−2/3; (c) Cn2 = 1 × 10−13 m−2/3.
Table 1. Comparison of reconstruction times of three methods.

Network              | Dataset                  | Reconstruction Time/s | Average Time/s
---------------------|--------------------------|-----------------------|---------------
CNN model [24]       | Cn2 = 5 × 10−17 m−2/3    | 0.42                  | 0.31
                     | Cn2 = 5 × 10−15 m−2/3    | 0.27                  |
                     | Cn2 = 1 × 10−13 m−2/3    | 0.25                  |
AlexNet model [25]   | Cn2 = 5 × 10−17 m−2/3    | 0.38                  | 0.29
                     | Cn2 = 5 × 10−15 m−2/3    | 0.26                  |
                     | Cn2 = 1 × 10−13 m−2/3    | 0.24                  |
The proposed model   | Cn2 = 5 × 10−17 m−2/3    | 0.18                  | 0.14
                     | Cn2 = 5 × 10−15 m−2/3    | 0.13                  |
                     | Cn2 = 1 × 10−13 m−2/3    | 0.12                  |
