Article

A W-Shaped Self-Supervised Computational Ghost Imaging Restoration Method for Occluded Targets

Yu Wang, Xiaoqian Wang, Chao Gao, Zhuo Yu, Hong Wang, Huan Zhao and Zhihai Yao
1 Department of Physics, Changchun University of Science and Technology, Changchun 130022, China
2 School of Physics and Electronics, Baicheng Normal University, Baicheng 137000, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(13), 4197; https://doi.org/10.3390/s24134197
Submission received: 29 May 2024 / Revised: 18 June 2024 / Accepted: 25 June 2024 / Published: 28 June 2024
(This article belongs to the Special Issue Advanced Sensing and Measurement Control Applications)

Abstract:
We developed a novel method based on self-supervised learning to improve the ghost imaging of occluded objects. In particular, we introduced a W-shaped neural network to preprocess the input image and enhance the overall quality and efficiency of the reconstruction method. We verified the superiority of our W-shaped self-supervised computational ghost imaging (WSCGI) method through numerical simulations and experimental validations. Our results underscore the potential of self-supervised learning in advancing ghost imaging.

1. Introduction

Ghost imaging is a nontraditional imaging method that can capture images of objects in inaccessible environments. It is robust against interference and enables nonlocal image reconstruction by exploiting the features of second-order correlation functions [1,2,3,4,5,6]. The groundbreaking work of Pittman et al. demonstrated the feasibility of ghost imaging using entangled photons [7]. Ghost imaging provides a unique and promising alternative to conventional imaging techniques such as charge-coupled devices (CCDs) and complementary metal–oxide semiconductor (CMOS) cameras. It has numerous applications, including imaging through scattering media [8,9], color object imaging [10], and detection of moving objects [11,12].
Many methods have been developed to improve the practical use of ghost imaging (GI), including differential ghost imaging (DGI) [13], normalized ghost imaging (NGI) [14], and alternating projection ghost imaging (APGI) [15]. These methods enhance the quality of ghost imaging, while computational ghost imaging (CGI) [16] has made experiments simpler. On the other hand, multiple data acquisitions are often required to achieve satisfactory imaging results, because the information collected by the bucket detector in each measurement is limited. For example, if a two-dimensional object has M pixels (M = ab, where a denotes the number of horizontal pixels and b the number of vertical ones), the number of required measurements N corresponds to a sampling rate β = N/M; for a 64 × 64 image (M = 4096), N = 1024 measurements give β = 25%. Traditional ghost imaging often requires N to be much larger than M to obtain high-quality images when random speckles are used for illumination. Compressed sensing ghost imaging [17] and deep learning ghost imaging [18,19,20] have been proposed to address these issues; both have demonstrated the ability to achieve high-quality imaging at low sampling rates. However, reconstruction algorithms based on compressed sensing ghost imaging are highly sensitive to background noise [17,21], which limits their practical applications. Deep learning ghost imaging combines the principles of ghost imaging with the capabilities of deep neural networks to achieve high-quality imaging at low sampling rates [18] and has demonstrated significant performance in noise reduction [22,23,24,25,26,27,28,29]. It can also be applied in various fields [30,31,32,33,34,35], but these deep learning methods require large datasets and lengthy training, which is very time-consuming. To improve the efficiency of deep-learning-based image retrieval, self-supervised learning approaches have been proposed [36,37,38]; these methods can produce high-quality images without the need to collect training data.
In practical applications of ghost imaging, partial occlusion of the object under test may occur. Although previous studies have demonstrated that ghost imaging can, to some extent, image occluded objects [4], that method requires the distance between the target object and the obstacle to be sufficiently large, and the occluding object must lie between the object under test and the bucket detector, which is not easily achievable in practice. Other related studies have focused on scattering media such as atmospheric turbulence [39,40], which also imposes certain limitations on practical applications.
The above methods have certain advantages for ghost imaging with occluding objects, but some issues remain unresolved, such as how to image through nontransparent occluders and how to improve the efficiency of occluded-object ghost imaging. Addressing these issues remains a challenge, prompting us to develop a method called W-shaped self-supervised computational ghost imaging (WSCGI). By combining self-supervised deep learning techniques, this method quickly achieves high-quality imaging at low sampling rates; at the same time, it can also restore the parts obscured by opaque objects. Compared with other occluded-object ghost imaging methods, our approach is simple and practical and greatly improves imaging efficiency.
This paper is structured as follows: Section 2 describes WSCGI, outlining its principles and methods. In Section 3 and Section 4, we discuss the effectiveness of WSCGI in restoring occluded objects using results from numerical simulations and experiments, respectively. Finally, Section 5 summarizes the main research findings and conclusions.

2. W-Shaped Self-Supervised Computational Ghost Imaging

Figure 1a illustrates the setup of our computational ghost imaging experiment. First, a series of random speckle illumination patterns was generated by the computer and loaded onto the projector. The light then passed through the object and was collected by a bucket detector, and the collected intensity signals were sent back to the computer. The set of collected signals $B_m$ can be written as
$$B_m = \iint I_m(x, y)\, T(x, y)\, \mathrm{d}x\, \mathrm{d}y, \tag{1}$$
where $I_m(x, y)$ represents the speckle pattern of the $m$th measurement, with $m = 1, 2, \ldots, N$, and $N$ is the total number of measurements. $T(x, y)$ denotes the transmittance function of the object, and $(x, y)$ denotes the coordinates of the target. By leveraging the principle of normalized ghost imaging, it is possible to effectively reduce noise interference and thereby obtain high-quality images. This corresponds to
$$T_{\mathrm{NGI}}(x, y) = \left\langle B_m I_m(x, y) \right\rangle - \frac{\langle B_m \rangle}{\langle S_m \rangle} \left\langle S_m I_m(x, y) \right\rangle, \tag{2}$$
where $\langle \cdots \rangle$ denotes averaging over the $N$ measurements, and $S_m = \iint I_m(x, y)\,\mathrm{d}x\,\mathrm{d}y$.
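For concreteness, the following is a minimal NumPy sketch of the forward model in Equation (1) and the NGI reconstruction in Equation (2); the array shapes, variable names, and the random binary test object are illustrative assumptions, not the actual experimental data.

```python
import numpy as np

def ngi_reconstruct(patterns, bucket):
    """NGI estimate, Eq. (2). patterns: (N, H, W) speckles I_m; bucket: (N,) signals B_m."""
    S = patterns.sum(axis=(1, 2))                           # S_m: integral of I_m over (x, y)
    BI = np.mean(bucket[:, None, None] * patterns, axis=0)  # <B_m I_m(x, y)>
    SI = np.mean(S[:, None, None] * patterns, axis=0)       # <S_m I_m(x, y)>
    return BI - (bucket.mean() / S.mean()) * SI             # Eq. (2)

# Simulate the bucket signals of Eq. (1) for a 64 x 64 binary object at beta = 25%
rng = np.random.default_rng(0)
obj = (rng.random((64, 64)) > 0.5).astype(float)            # transmittance T(x, y)
patterns = rng.random((1024, 64, 64))                       # N = 1024, M = 4096 -> beta = 25%
bucket = np.einsum('nij,ij->n', patterns, obj)              # B_m: discrete form of Eq. (1)
image = ngi_reconstruct(patterns, bucket)
```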
Although algorithmically optimized computational ghost imaging methods have achieved some success, they still require a large number of measurements and demanding experimental conditions. Deep-learning-based ghost imaging methods can be employed instead, but they often rely on manually designed feature extractors and extensive labeled data, resulting in significant time consumption. Our method, shown in Figure 1b, is designed to avoid these drawbacks.
The input for the WSCGI network consists of signals from the bucket detector and computer-generated speckle patterns. By exploiting deep learning techniques, it autonomously learns image representations, eliminating the need for dataset preparation, and accomplishes the reconstruction of occluded objects. The neural network that we designed facilitates iterative refinement of the final results: we preprocess the input images to enhance their quality and the overall information they provide before proceeding with occlusion recovery. This structure significantly improves the probability and quality of occlusion recovery. The network architecture of WSCGI is illustrated in Figure 2.
Our WSCGI network consists of three main parts. The first part handles the network inputs: it receives the bucket detector signals as well as the speckle signals generated by the physical GI model and computes the initial image input, following Equation (2).
The second part is Net1, which is primarily responsible for enhancing the input image. Net1 consists of multiple convolutional and deconvolutional modules and produces its output through a sigmoid function. Each module includes convolution layers, batch normalization (BN) layers, and LeakyReLU layers. The convolution kernel size is 5 × 5, the learning rate is 0.01, the exponential decay rate is 0.9, and the decay step size is 100. Because Net1 processes the entire image, it must capture more detail and structural information and therefore requires relatively more network layers. By harnessing the fact that untrained neural networks fit natural image structure readily while resisting noise, we can effectively filter out abrupt noise components in degraded images by controlling the number of iterations. Mathematically, this can be expressed as
$$\min_{\theta} \; \frac{1}{2} \left\| T_{\mathrm{NGI}} - f_{\theta}(z) \right\|^2 + \lambda R(\theta), \tag{3}$$
where $T_{\mathrm{NGI}}$ represents the low-quality input image, $f_{\theta}(\cdot)$ denotes the network, and $\theta$ are the network parameters. We incorporate total variation (TV) regularization as the regularization term $\lambda R(\theta)$, where $\lambda$ is the regularization coefficient. The input $z$ is randomly generated, i.e., it does not contain any specific information about the original image. By minimizing the mean square error term and the total variation regularization term in the loss function, we optimize the network parameters to achieve image enhancement. The TV term also smooths the image and enhances edges, reducing noise during reconstruction while preserving image detail. Overall, Net1 simultaneously performs image denoising and edge enhancement.
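As a rough illustration of Equation (3), the PyTorch sketch below fits a small convolutional network to the low-quality NGI image from a fixed random input z, with an anisotropic TV penalty. The layer stack is a simplified stand-in for Net1, and the value of λ and the placeholder input are assumptions.

```python
import torch
import torch.nn as nn

def tv_loss(img):
    # Anisotropic total variation: mean absolute difference between neighbors
    dh = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    dv = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    return dh + dv

net1 = nn.Sequential(                       # simplified conv stack, not the exact Net1
    nn.Conv2d(1, 32, 5, padding=2), nn.BatchNorm2d(32), nn.LeakyReLU(),
    nn.Conv2d(32, 32, 5, padding=2), nn.BatchNorm2d(32), nn.LeakyReLU(),
    nn.Conv2d(32, 1, 5, padding=2), nn.Sigmoid(),
)

t_ngi = torch.rand(1, 1, 64, 64)            # low-quality NGI image (placeholder)
z = torch.rand(1, 1, 64, 64)                # fixed random input carrying no image content
opt = torch.optim.Adam(net1.parameters(), lr=0.01)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100, gamma=0.9)  # 0.9 decay / 100 steps
lam = 1e-4                                  # regularization coefficient (assumed value)

for _ in range(1500):                       # iteration count controls how much noise is fitted
    opt.zero_grad()
    out = net1(z)
    loss = 0.5 * ((t_ngi - out) ** 2).sum() + lam * tv_loss(out)  # Eq. (3)
    loss.backward()
    opt.step()
    sched.step()
```

Because the network fits image structure before it fits noise, stopping the loop early acts as the implicit denoiser described above.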
The third component is Net2, tasked primarily with restoring the occluded parts of the image by further processing the output of Net1. The structure of Net2 is largely similar to that of Net1 and consists of four convolutional modules with downsampling layers and four convolutional modules with upsampling layers, with LeakyReLU as the activation function. The convolution kernel size is 5 × 5, the learning rate is 0.01, the exponential decay rate is 0.9, and the decay step size is 100. Net2 primarily deals with occluded local regions and hence processes more localized contextual information; compared to Net1, it requires relatively fewer network layers. The mathematical expression of its action may be written as follows:
$$E(P; P_0) = \frac{1}{N} \sum_{i=1}^{N} \left( P_i - P_{0i} \right)^2, \tag{4}$$
where $P_0$ represents the input image, $P$ represents the predicted image, $P_i$ and $P_{0i}$ denote the $i$th pixel values of the predicted and input images, respectively, and $N$ is the total number of pixels. The patching process aims at filling the occluded regions of the image while minimizing discontinuities and distortions in the surrounding areas. To achieve this goal, we use the mean squared error as the loss function and minimize the objective $E(P; P_0)$, ensuring that the completed image $P$ closely matches the reference pixel values at each position. Using a gradient descent optimization algorithm, we iteratively update the parameters of the model so that the generated image closely resembles the original image, thereby completing the image patching process.
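Equation (4) is stated as a plain MSE; in DIP-style inpainting the loss is usually evaluated only on unoccluded pixels so that the network fills the hole from the surrounding context, and that masked variant is what this hedged sketch assumes. The mask position and the small encoder-decoder stack are illustrative, not the exact Net2 configuration.

```python
import torch
import torch.nn as nn

def block(cin, cout, down=True):
    # One conv module followed by down- or upsampling, as described in the text
    resample = nn.MaxPool2d(2) if down else nn.Upsample(scale_factor=2)
    return nn.Sequential(nn.Conv2d(cin, cout, 5, padding=2),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(), resample)

net2 = nn.Sequential(                        # 4 downsampling + 4 upsampling modules
    block(1, 16), block(16, 32), block(32, 64), block(64, 64),
    block(64, 64, down=False), block(64, 32, down=False),
    block(32, 16, down=False), block(16, 1, down=False),
)

p0 = torch.rand(1, 1, 64, 64)                # Net1 output P_0 (placeholder)
mask = torch.ones_like(p0)                   # 1 = known pixel, 0 = occluded
mask[..., 28:36, :] = 0                      # e.g., a horizontal occluding stripe (assumed)
z = torch.rand(1, 1, 64, 64)
opt = torch.optim.Adam(net2.parameters(), lr=0.01)

for _ in range(500):
    opt.zero_grad()
    p = net2(z)
    loss = (mask * (p - p0) ** 2).mean()     # Eq. (4), restricted to unoccluded pixels
    loss.backward()
    opt.step()
```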
In evaluating the performance of a self-supervised learning network aimed at reconstructing images, various metrics are commonly employed, including the peak signal-to-noise ratio (PSNR) [41], the structural similarity index (SSIM) [42], and the discriminator of generative adversarial networks (GANs) [43]. However, in the absence of original images, these methods may lack objectivity. To address this issue, we introduce the natural image quality evaluator (NIQE) [44], a no-reference assessment method capable of automatically evaluating image quality. NIQE assesses image quality by analyzing local features and comparing their statistics to those of natural images. The evaluation formula is written as follows:
$$D(\mu_1, \mu_2, \Gamma_1, \Gamma_2) = \sqrt{ (\mu_1 - \mu_2)^{T} \left( \frac{\Gamma_1 + \Gamma_2}{2} \right)^{-1} (\mu_1 - \mu_2) }, \tag{5}$$
where $D$ denotes the final NIQE score, $\mu_1$ and $\Gamma_1$ are the mean vector and covariance matrix of the NIQE natural-image model, and $\mu_2$ and $\Gamma_2$ are the mean vector and covariance matrix of the input image. We used the NIQE score to assess image quality, ensuring that our network effectively identified the highest-quality images during training.
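The distance in Equation (5) is straightforward to compute once the two Gaussian models are available. Below is a minimal NumPy sketch; the random placeholder statistics and the feature dimension stand in for the fitted natural-scene-statistics model, which a full NIQE implementation would extract from image patches first.

```python
import numpy as np

def niqe_distance(mu1, gamma1, mu2, gamma2):
    # Mahalanobis-like distance of Eq. (5) between the natural-image model
    # (mu1, gamma1) and the test image's feature statistics (mu2, gamma2)
    diff = mu1 - mu2
    pooled = (gamma1 + gamma2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

rng = np.random.default_rng(1)
dim = 36                                        # feature dimension chosen for the demo
mu1, mu2 = rng.random(dim), rng.random(dim)     # placeholder mean vectors
a, b = rng.random((dim, dim)), rng.random((dim, dim))
gamma1, gamma2 = a @ a.T, b @ b.T               # symmetric PSD placeholder covariances
print(niqe_distance(mu1, gamma1, mu2, gamma2))  # lower score = higher quality
```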

3. Simulations

The image size used in the numerically simulated experiments was 64 × 64 pixels. The test images comprised both complex images and simple binary images. The sampling rate β is the ratio between the number of sampling points and the total number of pixels.
The results are summarized in Figure 3. A lower NIQE value in Figure 3a indicates higher image quality. In Figure 3c, we show the results obtained using normalized ghost imaging (NGI), where random speckle patterns are used as illumination; gradient descent (GD) and conjugate gradient descent (CGD), which are compressive sensing ghost imaging methods; and alternating projection ghost imaging (APGI). It is evident from the results that even at a sampling rate of 100%, the results of NGI, GD, and APGI are not ideal: there is noise in the background, and the image is blurry, consistent with the previous discussion. It is also worth noting that the image quality of CGD deteriorates significantly as the sampling rate decreases. The imaging performance of WSCGI is similar to that of GIDC, but our method exhibits lower image noise at lower sampling rates. Although WSCGI is not uniformly the best method in this comparison, it can still obtain high-quality images at low sampling rates. Moreover, WSCGI is designed mainly for imaging occluded objects, so this part of the experiment does not fully demonstrate its capabilities. To quantitatively evaluate the results obtained with the different methods, we calculated the structural similarity index (SSIM) of each reconstructed image relative to its reference standard; Table 1 shows the SSIM values.
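For reference, SSIM values like those in Table 1 can be computed with scikit-image's structural_similarity; the arrays below are placeholders, not the actual reconstructions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(2)
reference = rng.random((64, 64))                          # ground-truth object (placeholder)
reconstruction = reference + 0.1 * rng.random((64, 64))   # noisy reconstruction (placeholder)
score = ssim(reference, reconstruction,
             data_range=reconstruction.max() - reconstruction.min())
print(f"SSIM = {score:.2f}")                              # 1.0 means a perfect match
```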
Next, we simulated a ghost imaging experiment to restore occluded objects. After analyzing the data, we set the sampling rate to 25%, since it produced satisfactory results with less data and less computation time.
In Figure 4, the simulation results for the reconstruction of occluded objects for occlusion rates ranging from 10% to 40% are presented. It is apparent that WSCGI outperforms NGI, DIP, and double DIP in terms of image quality. WSCGI effectively restores the images of partially occluded objects at 20% or lower occlusion, partially restores them at 30% occlusion with some blurring, and exhibits less noticeable recovery at 40% occlusion. To assess the imaging performance of WSCGI under different occlusion conditions, we conducted SSIM and PSNR analyses. The SSIM and PSNR curves for different occlusion rates are shown in Figure 5a,b, respectively. Our quantitative results indicate that WSCGI performs better in recovering images with low occlusion rates, while its performance decreases with increasing occlusion rate. Clearly, recovering images with larger occluded areas poses a challenge, which we plan to investigate in future work.

4. Optical Experiment

We also conducted optical experiments, collecting data with the computational ghost imaging system illustrated in Figure 1. The size of the reconstructed images was set to 64 × 64 pixels. Here, we compared the performance of NGI, GIDC, and WSCGI at different sampling rates. The experimental results, shown in Figure 6, indicate that WSCGI successfully reconstructed the target object using only 256 measurements (β = 6.25%). Although the resulting image appears slightly blurry, it still retains the overall outline of the object. The image quality improved significantly as the sampling rate increased. In all test scenarios (different objects and β values), WSCGI outperformed NGI and GIDC in terms of visual appearance and quantitative evaluation metrics (especially SSIM). When β exceeded 25%, the SSIM was larger than 0.90, which is consistent with our simulation results. Furthermore, the experimental results show that the images reconstructed by NGI are compromised by noise-induced degradation, resulting in lower contrast.
Furthermore, we conducted experiments at a sampling rate of 25%, introducing black opaque stripes as occluding objects. After feeding the collected bucket detection signals and speckle signals into the neural network, we obtained the following experimental results.
Figure 7(b1) shows horizontally occluded letters, Figure 7(b2) shows a resolution chart under diagonal occlusion, Figure 7(b3) shows Chinese characters occluded by a central square, and Figure 7(b4) shows multiple segments of discontinuous occlusion, in which occluding segments of different sizes and angles cover the target object. The restoration in Figure 7(c1) is not as satisfactory as that in Figure 7(c2,c3), which may be due to the relatively high occlusion rate in this case. The results in Figure 7(c4) further verify the effectiveness of WSCGI in occlusion imaging. Overall, our experimental results are highly consistent with the simulation results: WSCGI is effective in imaging occluded objects, but there is still room for improvement in high-occlusion scenes.

5. Conclusions

In this paper, we proposed an innovative approach termed WSCGI, which exploits deep learning to restore occluded objects by ghost imaging without requiring large-scale datasets. WSCGI can also achieve high-quality restoration of partially occluded objects at low sampling rates. Numerical simulations and optical experiments verified the effectiveness and feasibility of WSCGI in practical applications, offering new insights and solutions for the advancement of ghost imaging technology. Future work will focus on further improving the image restoration performance of WSCGI under high occlusion rates.

Author Contributions

Conceptualization, Y.W. and X.W.; Methodology, Y.W.; Software, C.G. and H.Z.; Validation, Z.Y. (Zhuo Yu); Formal analysis, C.G.; Investigation, Y.W. and Hong Wang; Resources, C.G.; Data curation, Y.W. and Hong Wang; Writing—original draft, Y.W.; Writing—review & editing, Y.W., X.W. and Z.Y. (Zhihai Yao); Supervision, X.W. and Z.Y. (Zhihai Yao); Project administration, Z.Y. (Zhihai Yao). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science & Technology Development Project of Jilin Province (No. YDZJ202101ZYTS030).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Wang, L.; Zhao, S. Fast reconstructed and high-quality ghost imaging with fast Walsh–Hadamard transform. Photonics Res. 2016, 4, 240–244.
2. Yu, H.; Lu, R.; Han, S.; Xie, H.; Du, G.; Xiao, T.; Zhu, D. Fourier-transform ghost imaging with hard X rays. Phys. Rev. Lett. 2016, 117, 113901.
3. Dong, S.; Zhang, W.; Huang, Y.; Peng, J. Long-distance temporal quantum ghost imaging over optical fibers. Sci. Rep. 2016, 6, 26022.
4. Gao, C.; Wang, X.; Gou, L.; Feng, Y.; Cai, H.; Wang, Z.; Yao, Z. Ghost imaging for an occluded object. Laser Phys. Lett. 2019, 16, 065202.
5. Sun, M.J.; Zhang, J.M. Single-pixel imaging and its application in three-dimensional reconstruction: A brief review. Sensors 2019, 19, 732.
6. Zhu, W.; Ma, W.; Su, Y.; Chen, Z.; Chen, X.; Ma, Y.; Bai, L.; Xiao, W.; Liu, T.; Zhu, H.; et al. Low-dose real-time X-ray imaging with nontoxic double perovskite scintillators. Light Sci. Appl. 2020, 9, 112.
7. Pittman, T.B.; Shih, Y.H.; Strekalov, D.V.; Sergienko, A.V. Optical imaging by means of two-photon quantum entanglement. Phys. Rev. A 1995, 52, R3429.
8. Wan, W.; Luo, C.; Guo, F.; Zhou, J.; Wang, P.; Huang, X. Demonstration of asynchronous computational ghost imaging through strong scattering media. Opt. Laser Technol. 2022, 154, 108346.
9. Lin, L.X.; Cao, J.; Zhou, D.; Cui, H.; Hao, Q. Ghost imaging through scattering medium by utilizing scattered light. Opt. Express 2022, 30, 11243–11253.
10. Gholami-milani, S.; Olyaeefar, B.; Ahmadi-kandjani, S.; Kheradmand, R. Grayscale and color ghost-imaging of moving objects by memory-enabled, memoryless and compressive sensing algorithms. J. Opt. 2019, 21, 085709.
11. Yang, D.; Chang, C.; Wu, G.; Luo, B.; Yin, L. Compressive ghost imaging of the moving object using the low-order moments. Appl. Sci. 2020, 10, 7941.
12. Li, H.; Xiong, J.; Zeng, G. Lensless ghost imaging for moving objects. Opt. Eng. 2011, 50, 127005.
13. Ferri, F.; Magatti, D.; Lugiato, L.A.; Gatti, A. Differential ghost imaging. Phys. Rev. Lett. 2010, 104, 253603.
14. Sun, B.; Welsh, S.S.; Edgar, M.P.; Shapiro, J.H.; Padgett, M.J. Normalized ghost imaging. Opt. Express 2012, 20, 16892–16901.
15. Bian, L.; Suo, J.; Dai, Q.; Chen, F. Experimental comparison of single-pixel imaging algorithms. J. Opt. Soc. Am. A 2018, 35, 78–87.
16. Shapiro, J.H. Computational ghost imaging. Phys. Rev. A 2008, 78, 061802.
17. Katz, O.; Bromberg, Y.; Silberberg, Y. Compressive ghost imaging. Appl. Phys. Lett. 2009, 95, 131110.
18. Lyu, M.; Wang, W.; Wang, H.; Wang, H.; Li, G.; Chen, N.; Situ, G. Deep-learning-based ghost imaging. Sci. Rep. 2017, 7, 17865.
19. Shimobaba, T.; Endo, Y.; Nishitsuji, T.; Takahashi, T.; Nagahama, Y.; Hasegawa, S.; Sano, M.; Hirayama, R.; Kakue, T.; Shiraki, A.; et al. Computational ghost imaging using deep learning. Opt. Commun. 2018, 413, 147–151.
20. Liu, H.; Chang, X.; Yan, J.; Guo, P.; Xu, D.; Bian, L. Masked autoencoder for highly compressed single-pixel imaging. Opt. Lett. 2023, 48, 4392–4395.
21. Jiying, L.; Jubo, Z.; Chuan, L.; Shisheng, H. High-quality quantum-imaging algorithm and experiment based on compressive sensing. Opt. Lett. 2010, 35, 1206–1208.
22. Rizvi, S.; Cao, J.; Zhang, K.; Hao, Q. DeepGhost: Real-time computational ghost imaging via deep learning. Sci. Rep. 2020, 10, 11400.
23. Song, H.; Nie, X.; Su, H.; Chen, H.; Zhou, Y.; Zhao, X.; Peng, T.; Scully, M.O. 0.8% Nyquist computational ghost imaging via non-experimental deep learning. Opt. Commun. 2022, 520, 128450.
24. Zhai, X.; Cheng, Z.; Liang, Z.; Chen, Y.; Hu, Y.; Wei, Y. Computational ghost imaging via adaptive deep dictionary learning. Appl. Opt. 2019, 58, 8471–8478.
25. Zhang, H.; Duan, D. Computational ghost imaging with compressed sensing based on a convolutional neural network. Chin. Opt. Lett. 2021, 19, 101101.
26. Wu, H.; Wang, R.; Zhao, G.; Xiao, H.; Liang, J.; Wang, D.; Tian, X.; Cheng, L.; Zhang, X. Deep-learning denoising computational ghost imaging. Opt. Lasers Eng. 2020, 134, 106183.
27. Wang, F.; Wang, H.; Wang, H.; Li, G.; Situ, G. Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging. Opt. Express 2019, 27, 25560–25572.
28. Wu, H.; Wang, R.; Zhao, G.; Xiao, H.; Wang, D.; Liang, J.; Tian, X.; Cheng, L.; Zhang, X. Sub-Nyquist computational ghost imaging with deep learning. Opt. Express 2020, 28, 3846–3853.
29. Wu, H.; Zhao, G.; Chen, M.; Cheng, L.; Xiao, H.; Xu, L.; Wang, D.; Liang, J.; Xu, Y. Hybrid neural network-based adaptive computational ghost imaging. Opt. Lasers Eng. 2021, 140, 106529.
30. Gao, Z.; Cheng, X.; Chen, K.; Wang, A.; Hu, Y.; Zhang, S.; Hao, Q. Computational ghost imaging in scattering media using simulation-based deep learning. IEEE Photonics J. 2020, 12, 1–15.
31. Li, F.; Zhao, M.; Tian, Z.; Willomitzer, F.; Cossairt, O. Compressive ghost imaging through scattering media with deep learning. Opt. Express 2020, 28, 17395–17408.
32. Li, Y.; Xue, Y.; Tian, L. Deep speckle correlation: A deep learning approach toward scalable imaging through scattering media. Optica 2018, 5, 1181–1190.
33. Hu, H.K.; Sun, S.; Lin, H.Z.; Jiang, L.; Liu, W.T. Denoising ghost imaging under a small sampling rate via deep learning for tracking and imaging moving objects. Opt. Express 2020, 28, 37284–37293.
34. Liu, H.; Chen, Y.; Zhang, L.; Li, D.H.; Li, X. Color ghost imaging through the scattering media based on A-cGAN. Opt. Lett. 2022, 47, 569–572.
35. Ni, Y.; Zhou, D.; Yuan, S.; Bai, X.; Xu, Z.; Chen, J.; Li, C.; Zhou, X. Color computational ghost imaging based on a generative adversarial network. Opt. Lett. 2021, 46, 1840–1843.
36. Wang, F.; Wang, C.; Chen, M.; Gong, W.; Zhang, Y.; Han, S.; Situ, G. Far-field super-resolution ghost imaging with a deep neural network constraint. Light Sci. Appl. 2022, 11, 1.
37. Liu, S.; Meng, X.; Yin, Y.; Wu, H.; Jiang, W. Computational ghost imaging based on an untrained neural network. Opt. Lasers Eng. 2021, 147, 106744.
38. Chang, X.; Wu, Z.; Li, D.; Zhan, X.; Yan, R.; Bian, L. Self-supervised learning for single-pixel imaging via dual-domain constraints. Opt. Lett. 2023, 48, 1566–1569.
39. Zhang, P.; Gong, W.; Shen, X.; Han, S. Correlated imaging through atmospheric turbulence. Phys. Rev. A 2010, 82, 033817.
40. Dixon, P.B.; Howland, G.A.; Chan, K.W.C.; O'Sullivan-Hale, C.; Rodenburg, B.; Hardy, N.D.; Shapiro, J.H.; Simon, D.S.; Sergienko, A.V.; Boyd, R.W. Quantum ghost imaging through turbulence. Phys. Rev. A 2011, 83, 051803.
41. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451.
42. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
43. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27.
44. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
45. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9446–9454.
46. Gandelsman, Y.; Shocher, A.; Irani, M. "Double-DIP": Unsupervised image decomposition via coupled deep-image-priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11026–11035.
Figure 1. Schematic diagram of the experimental setup for computational ghost imaging and WSCGI: (a) computational ghost imaging; (b) WSCGI.
Figure 2. WSCGI network architecture diagram. The neural network consists of three main parts: (1) processing the input data using the NGI method to obtain low-quality images; (2) Net1, which is composed of convolutional layers, downsampling layers, deconvolutional layers, batch normalization, and the LeakyReLU activation function; (3) Net2, which is composed of convolutional layers, upsampling layers, downsampling layers, and batch normalization, utilizing the LeakyReLU activation function.
Figure 3. Results of numerically simulated reconstruction of an unobstructed object. (a) The selection method for WSCGI network results, which aims to determine the optimal solution among 1500 output results. The selection process is based on NIQE, with lower NIQE scores indicating higher image quality. (b) Original object; (c) output results when β = 6.25%, β = 12.5%, β = 25%, β = 50%, β = 100%, using various methods including GI, normalized ghost imaging (NGI), gradient descent (GD), conjugate gradient descent (CGD), alternating projection (APGI), GIDC [36].
Figure 4. Comparison among the different reconstruction methods for various occlusion rates. DIP [45] and double DIP [46] are two self-supervised deep learning methods for image deocclusion. The occlusion rate is the ratio of the occluded area to the area of the object.
Figure 5. Objective evaluation of reconstruction results for different occlusion rates. (a) SSIM curve. (b) PSNR curve.
Figure 6. Experimental reconstruction by NGI, GIDC, and WSCGI. Each row represents the reconstruction results of the same object using different methods, while each column represents the results of different sampling rates using the same method.
Figure 7. Experimental results obtained by partially occluding the target object. Each arrow points to the corresponding locally magnified area in the box. (a) Images processed by Net1 without local occlusion; (b) the partially occluded targets, from left to right: horizontally occluded (single-layer letters), obliquely occluded (resolution board), centrally occluded (Chinese characters), and multi-segment discontinuous occlusion (double-layer letters); (c) the images processed by the complete neural network.
Table 1. SSIM obtained with different GI reconstruction methods and at different sampling rates.

Object "Peppers":

| Method \ β | 6.25% | 12.5% | 25% | 50% | 100% |
|---|---|---|---|---|---|
| WSCGI | 0.66 | 0.79 | 0.89 | 0.91 | 0.96 |
| NGI | 0.09 | 0.10 | 0.11 | 0.12 | 0.12 |
| GIDC | 0.61 | 0.80 | 0.89 | 0.93 | 0.97 |
| GD | 0.26 | 0.30 | 0.46 | 0.63 | 0.81 |
| CGD | 0.22 | 0.30 | 0.46 | 0.63 | 0.81 |
| AP | 0.20 | 0.27 | 0.45 | 0.65 | 0.80 |

Object "Triangle":

| Method \ β | 6.25% | 12.5% | 25% | 50% | 100% |
|---|---|---|---|---|---|
| WSCGI | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 |
| NGI | 0.09 | 0.14 | 0.18 | 0.29 | 0.36 |
| GIDC | 0.56 | 0.76 | 0.96 | 0.97 | 0.99 |
| GD | 0.12 | 0.17 | 0.22 | 0.37 | 0.58 |
| CGD | 0.12 | 0.17 | 0.22 | 0.37 | 0.98 |
| AP | 0.17 | 0.27 | 0.36 | 0.54 | 0.85 |