Article

Robust Holographic Reconstruction by Deep Learning with One Frame

College of Science, China University of Petroleum (East China), Qingdao 266580, China
*
Author to whom correspondence should be addressed.
Photonics 2023, 10(10), 1155; https://doi.org/10.3390/photonics10101155
Submission received: 3 September 2023 / Revised: 7 October 2023 / Accepted: 12 October 2023 / Published: 15 October 2023
(This article belongs to the Special Issue Holographic Information Processing)

Abstract:
A robust method is proposed to reconstruct images from only one hologram in digital holography by introducing a deep learning (DL) network. A U-net neural network is designed according to DL principles and trained on a data set of thousands of reconstructed images collected using phase-shifting digital holography (PSDH). The proposed method completes the holographic reconstruction with only a single hologram, which benefits the space bandwidth product and relaxes the storage load of PSDH. Compared with the results of PSDH, the deep learning results are immune to most disturbances, including reference tilt, phase-shift errors, and speckle noise. Assisted by a GPU processor, the proposed reconstruction method reduces the computation time to about one percent of that needed by two-step PSDH. This method is expected to enable efficient, high-capacity holographic imaging from a single hologram in digital holography applications.

1. Introduction

With the development of holography [1,2,3], one vital requirement remains: separating the original object image from the twin image and autocorrelation terms in holographic reconstruction. Lasers with high coherence benefit the recording of off-axis holograms and have boosted their application in practice [4,5,6]. Off-axis holography can recover the original image with high quality, but only by sacrificing the system space bandwidth product. Because of the limited resolution of current CCD recording devices, digital holography imposes strict restrictions on the reference tilt [6]. Phase-shifting digital holography (PSDH) recovers the original object wave by calculating the complex amplitude of the object wave on the recording plane from two or more holograms with different reference phases and the phase-shift values between adjacent frames generated during recording [7]. In particular, the precision of the phase-shift value is crucial for high-quality reconstruction. However, due to environmental interference and the limited precision of the phase shifter, there is an error between the phase-shift value set before recording and the actual value shifted in the reference beam. In addition, the reference tilt in practice also introduces a phase error in the reconstructed complex wave front. To obtain the actual phase-shift values, many generalized phase-shifting digital holography (GPSDH) algorithms have been proposed [8,9,10]. One way to accomplish this task is to obtain the unknown phase shifts either iteratively or non-iteratively based on statistical averages [8,9], reconstruct the object wave on the recording plane, and finally obtain the original object wave front on the object plane by inverse Fresnel diffraction.
This GPSDH method ensures high-quality reconstruction and a low requirement on CCD resolution at the same time, but storing multiple holograms and the resulting large data set imposes a heavy burden on computer storage and processing, which is adverse to the dynamic display of stereoscopic video. Another kind of method employs a weak off-axis recording setup [9]. By performing Fourier transform operations on the holograms, the unknown phase shift is acquired, the wave front is restored, and the object image is rebuilt. Although this weak-reference-tilt method can reduce the number of holograms to the extreme case of two, with no other measurement needed for phase-shift extraction, the recovered wave requires further steps to remove the phase error caused by the reference tilt, which encumbers the whole recovery process, especially in dynamic holographic display applications.
Additionally, these algorithms have obvious drawbacks, such as dependence on the high acquisition quality and accurate physical constraints of multiple holograms, and longer computation times caused by iterations.
Here, a robust holographic reconstruction by deep learning (RHRDL) method is proposed, which can quickly complete the holographic reconstruction with a single coaxial hologram or one frame with a weak reference tilt. This method avoids not only the complex calculation of phase-shift extraction and object-wave reconstruction in traditional PSDH [8] but also the correction of the phase error in an algorithm with a weak reference tilt [9]. The availability and efficiency of this method were tested by applying it to optical experiments.

2. Establishment of Data Set and Model for Network Training

The DL process is essentially a process of training the neural network structure using datasets to obtain a general fitting function [11,12,13,14,15]. The training image is input to the neural network, the output of the network is obtained by forward transmission, the loss (difference) between the output image and the real image is compared, and the loss function is transmitted back to the neural network [16]. During training, the loss function gradient is used to guide the optimization direction of the neural network and update the parameters of the neural network, as shown in Figure 1. The general logic of DL is to repeat this process until an optimal or local optimal solution is obtained [17]. Because of the excellent performance of DL, it is more frequently employed for optical information processing, including rapid generation of computer-generated holograms (CGH) and digital holography reconstruction [18,19].
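The training cycle described above (forward pass, loss evaluation, backpropagated gradient, parameter update) can be sketched on a toy one-layer linear "network". This is a minimal illustration of the loop in Figure 1, not the paper's U-net; the batch size, dimensions, random seed, and learning rate are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))            # a batch of training inputs
W_true = rng.normal(size=(4, 1))        # the mapping the network should learn
Y = X @ W_true                          # labels ("real images" in the text)

W = np.zeros((4, 1))                    # trainable parameters of the "network"
lr = 0.1                                # learning rate
for _ in range(200):
    out = X @ W                                   # forward transmission
    loss = np.mean((out - Y) ** 2)                # loss between output and label
    grad = 2.0 * X.T @ (out - Y) / len(X)         # gradient of the loss w.r.t. W
    W -= lr * grad                                # update guided by the gradient
```

After enough iterations the loss settles near an optimal solution, which is exactly the stopping logic described above.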
The quality of the dataset determines the upper limit of the network's DL ability: high-quality, low-repetition, and complex datasets often yield higher-quality training parameters [20,21,22,23,24]. Therefore, a high-quality two-step PSDH algorithm is used to reconstruct the object wave [9]. The interferometric configuration shown in Figure 2 is used to collect the holograms. The target object is a resolution plate, and one laser beam with a wavelength of 532 nm, emitted from a semiconductor laser (MSL-FN-532, CNI, Changchun, China), is divided into two beams by a beam splitter (BS1). A uniform plane wave is obtained after passing through a micro-objective, a pinhole, and a collimating lens (a convex lens with a focal length of 15 cm). The reference wave is reflected by BS1 and then by a mirror mounted on a piezoelectric transducer (PZT), which generates the phase shifts. The object wave, which carries the object information after diffraction, overlaps with the reference wave, which carries the reference phase information, forming a hologram that is collected by a CCD (DH-SV1410FM, IMAVISION, Beijing, China).
For the collected holograms, the spectral analysis method is used to extract the phase-shift values between the two holograms and reconstruct the object wave. The two holograms recorded with the reference tilt phase φxy are represented by Equations (1) and (2):
$I_1 = A_o^2 + A_r^2 + 2A_oA_r\cos(\varphi_o - \varphi_{xy})$ (1)
$I_2 = A_o^2 + A_r^2 + 2A_oA_r\cos(\varphi_o - \varphi_{xy} - \delta)$ (2)
In the two equations, Ao, Ar are the amplitudes of object wave and reference wave, respectively, the first two terms on the right sides are intensities of the objective wave and the reference wave, respectively, φo, φxy are the phases of object wave and reference wave, and δ is the phase difference of reference between two holograms. In fact, the additional reference phase φxy is contributed by the angles between the reference and objective waves represented by α and β along x and y axes, respectively. We have the formula of the additional reference phase φxy with the parameters of the reference tilt angles α and β
$\varphi_{xy} = 2\pi(x\sin\alpha + y\sin\beta)/\lambda$ (3)
where λ is the wavelength of the reference wave, which carries a small tilt during the procedure of hologram recording. To separate the phase-shift parameter δ more easily, we can rewrite Equations (1) and (2) as
$I_1 = A_o^2 + A_r^2 + A_oA_r\exp(-i\varphi_o)\exp(i\varphi_{xy}) + A_oA_r\exp(i\varphi_o)\exp(-i\varphi_{xy})$ (4)
$I_2 = A_o^2 + A_r^2 + A_oA_r\exp(-i\varphi_o)\exp(i\varphi_{xy})\exp(i\delta) + A_oA_r\exp(i\varphi_o)\exp(-i\varphi_{xy})\exp(-i\delta)$ (5)
Applying Fourier transforms to Equations (4) and (5), their distributions in the frequency domain can be expressed as:
$F_1(u,v) = F_A(u,v) + A_r^2\delta(u,v) + A_rF_o(u+u_p, v+v_p) + A_rF_o^*(-u+u_p, -v+v_p)$ (6)
$F_2(u,v) = F_A(u,v) + A_r^2\delta(u,v) + A_rF_o(u+u_p, v+v_p)\exp(i\delta) + A_rF_o^*(-u+u_p, -v+v_p)\exp(-i\delta)$ (7)
Here F1(u, v) and F2(u, v) are the Fourier transforms of I1 and I2 in Equations (4) and (5), respectively, FA(u, v) is the Fourier transform of the object intensity, Fo(u + up, v + vp) is the spatial spectrum of the object wave term Ao exp(−iφo) shifted by the reference tilt, and δ(u, v) is the delta function produced by the Fourier transform of the unit constant. In Equation (7), the last two terms on the right side carry exp(iδ) and exp(−iδ), where δ is the phase-shift value between the reference phases of the two holograms. Because δ is a constant for all pixels of the holograms, it can be calculated by subtracting the argument angles of either of the corresponding last two terms in Equations (6) and (7).
This phase-shift extraction method is simple and efficient, and it works without any additional measurements. Furthermore, it is suitable for PSDH with only two frames, which is the extreme case for PSDH, and it also fits cases with three [9] or more frames if a tilted reference wave is introduced. In most conventional phase-shift extraction methods, the phase-shift values are calculated from three or more frames and, in some algorithms, iterations are even required; these methods need more computation time for the complex calculations, especially the iterative ones. The conventional two-step method for phase-shift extraction reported recently [8] is time saving, but it needs measurements of the object and reference intensities. Compared with these conventional methods, this phase-shift calculation method is convenient because it needs only two Fourier transforms of the holograms and one subtraction operation. In the following, the whole procedure of phase-shift extraction and object-wave reconstruction is described briefly.
In Equations (6) and (7), the first two terms on the right side are located at the center of the spectrum, and the third and fourth terms are origin-symmetric spectra. Following a similar algorithm [9], the reference tilt angles α and β can be obtained by determining up and vp from the coordinates of the spectral peak at the (u + up, v + vp) position,
$\sin\alpha = \lambda u_p/(M d_x), \quad \sin\beta = \lambda v_p/(N d_y)$ (8)
where M and N are the pixel numbers and dx and dy the pixel pitches along the two axes of the CCD chip. The phase-shift value δ can be obtained after the subtraction
$\delta = \arg[A_rF_o(u+u_p, v+v_p)\exp(i\delta)] - \arg[A_rF_o(u+u_p, v+v_p)]$ (9)
Here, arg[·] denotes the operation of taking the argument angle. Finally, the object wave on the recording plane is reconstructed by the formula
$O_1 = (I_1 - I_o - I_r)/(2A_r) + i[I_2 - I_1\cos\delta - (1-\cos\delta)(I_o+I_r)]/(2A_r\sin\delta)$ (10)
where Io and Ir are the intensities of the objective wave and reference wave. Obviously, the object wave in Equation (10) contains the tilt phase error φxy. This tilt phase error causes the phase error of the reconstructed object wave on the recording plane and then the corresponding phase error of the object wave on the image plane after the inverse Fresnel transform. This phase error caused by reference tilt can be corrected by equation
$O = O_1\exp(i\varphi_{xy}) = O_1\exp[i2\pi(x\sin\alpha + y\sin\beta)/\lambda]$ (11)
The original image on the original plane can be obtained by the inverse Fresnel diffraction of O.
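The pipeline of Equations (1)–(11) can be sketched numerically, in the spirit of the simulation shown in Figure 3. This is a toy model under stated assumptions: the grid size, the Gaussian object phase, the tilt frequencies u_p, v_p (expressed directly in fringe periods per frame, so that Equations (3) and (8) collapse into pixel units), and the phase-shift value are all illustrative choices, not the paper's experimental values:

```python
import numpy as np

# Toy two-step PSDH simulation, Equations (1)-(11); all values illustrative.
N = 256
y, x = np.mgrid[0:N, 0:N].astype(float)

# Object wave: unit amplitude with a smooth Gaussian phase
A_o, A_r = 1.0, 1.0
phi_o = 0.6 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 30.0 ** 2))

# Tilted plane-wave reference, Equation (3), tilt given in fringe periods
u_p, v_p = 32, 24
phi_xy = 2 * np.pi * (u_p * x + v_p * y) / N

# Two holograms with an unknown phase shift, Equations (1) and (2)
delta_true = 0.7
I1 = A_o**2 + A_r**2 + 2 * A_o * A_r * np.cos(phi_o - phi_xy)
I2 = A_o**2 + A_r**2 + 2 * A_o * A_r * np.cos(phi_o - phi_xy - delta_true)

# Equations (6)-(9): the +1-order sideband of F2 equals that of F1 times
# exp(i*delta), so one argument subtraction at the peak recovers delta.
F1, F2 = np.fft.fft2(I1), np.fft.fft2(I2)
search = np.zeros((N, N), bool)
search[1:N // 2, 1:N // 2] = True            # half-plane holding the +1 order
peak = np.unravel_index(np.argmax(np.abs(F1) * search), (N, N))
delta_est = np.angle(F2[peak] * np.conj(F1[peak]))

# Equation (10): object wave on the recording plane (Io and Ir are known
# here because the simulated amplitudes are constant), then Equation (11):
# multiplication by exp(i*phi_xy) removes the tilt phase error.
Io, Ir = A_o**2, A_r**2
O1 = (I1 - Io - Ir) / (2 * A_r) + 1j * (
    I2 - I1 * np.cos(delta_est) - (1 - np.cos(delta_est)) * (Io + Ir)
) / (2 * A_r * np.sin(delta_est))
O = O1 * np.exp(1j * phi_xy)
```

In the experiment, by contrast, Io and Ir are measured separately (Figure 4c–e), and the original image follows from the inverse Fresnel diffraction of O.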
One case of the computer simulation results is shown in Figure 3. The amplitude and phase of the object wave are assumed to be Gaussian and spheroid distributions, respectively, and a plane wave is employed as the reference wave. The interference fringes between the object wave and the reference wave are recorded as hologram I1 and, after the reference phase is shifted, another hologram I2 is generated. In Figure 3e, to make the spectrum easier to observe, the energy of the zero-frequency part has been suppressed. Finally, the object wave on the recording plane is obtained and corrected by Equations (10) and (11), and the original image is obtained after the inverse Fresnel diffraction of O.
Figure 4 is one experimental example of the two-step PSDH method. Figure 4a,b are two holograms with reference tilt and phase shift, and Figure 4c–e are the intensities of the background wave, object wave, and reference wave, respectively. Using the two-step PSDH algorithm, the reference phase difference δ between the two holograms is calculated and the result is 0.4356 rad. Figure 4f is the reconstructed image with 1392 × 1040 pixels.
To improve the efficiency of neural network training, the hologram images are divided into several parts: each group of holograms produces eight data items, each of 688 × 512 pixels. The holograms are the network input, and the reconstructed image is the label image for training. By repeating the whole recording and reconstruction process, a total of 1500 data sets were produced, which were divided into 1200 training sets and 300 test sets according to a 4:1 ratio.
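The patching and splitting step might look like the following sketch. The exact tiling used in the paper is not specified, so `extract_patches` is a hypothetical helper that crops an evenly spaced (overlapping) grid of 688 × 512-pixel parts from one 1392 × 1040 frame:

```python
import numpy as np

def extract_patches(img, patch_hw=(512, 688), grid=(2, 2)):
    """Crop an evenly spaced grid of (possibly overlapping) patches."""
    H, W = img.shape
    ph, pw = patch_hw
    tops = np.linspace(0, H - ph, grid[0]).astype(int)
    lefts = np.linspace(0, W - pw, grid[1]).astype(int)
    return [img[t:t + ph, l:l + pw] for t in tops for l in lefts]

hologram = np.zeros((1040, 1392))       # one recorded frame (cf. Figure 4f)
patches = extract_patches(hologram)     # four 688 x 512-pixel parts

# 4:1 split of the 1500 collected sets into training and test data
n_total = 1500
n_train = n_total * 4 // 5              # 1200 training sets, 300 test sets
```

A 2 × 2 grid yields four crops per frame; how the paper reaches eight items per hologram group (for example, by cropping each of the two recorded frames) is not stated, so this is only one plausible reading.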
During network training, as the structure of the neural network becomes more complex and the number of hidden layers increases, the network update speed drops and training becomes difficult. In this regard, the idea of batch normalization from Sergey Ioffe et al. [25] is introduced, taking normalization as a part of the model architecture and performing normalization for each training mini-batch to speed up training. Batch normalization permits a higher learning rate and reduces sensitivity to initialization; at the same time, it also acts as a regularizer and reduces the requirements on training data. On the other hand, network depth is directly related to network performance and potential, but with increasing depth come inevitable problems: vanishing and exploding gradients. The training result of an overly deep network can even be worse than that of a slightly shallower one, which hinders convergence. The residual network solves this problem and unlocks the potential of deep structures by establishing residual mappings in the network [26].
The loss function in the model is the mean square error of the reconstructed images (MSE loss). Our experiments show that, as a loss function, the mean square error converges faster for hologram reconstruction. If the input image is represented as x1 and the real (ground truth) image as y1, both of size M × N pixels, with pixels amn and bmn (1 ≤ m ≤ M, 1 ≤ n ≤ N), then the loss between the two images can be expressed as:
$\mathrm{loss}(x_1, y_1) = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}(a_{mn} - b_{mn})^2$ (12)
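Equation (12) translates into a one-line numpy transcription (for actual training, the framework's built-in MSE loss would be used instead):

```python
import numpy as np

def mse_loss(x1, y1):
    """Equation (12): mean squared error between output and label image."""
    x1, y1 = np.asarray(x1, float), np.asarray(y1, float)
    return float(np.mean((x1 - y1) ** 2))
```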
In addition, the ReLU function is selected as the activation function and the Adam optimizer is chosen as the model optimizer [27]. Although some works have reported that adaptive-learning-rate optimization algorithms may not match the stochastic gradient descent algorithm in the final result, they obviously optimize faster, and the convergence can be improved by controlling the Adam learning rate [28].
The U-net model structure used here is shown in Figure 5, which mainly includes down-sampling and up-sampling parts. Residual mapping structures are established symmetrically between corresponding down- and up-sampling stages. The purple arrow represents a combination of two two-dimensional convolutions, batch normalization (BN), and ReLU activation functions. The green arrow denotes the purple operation with an added max-pooling layer to achieve down-sampling. The red arrow denotes the purple operation with an added two-dimensional deconvolution (transposed convolution) layer to achieve up-sampling [29,30,31]. Both the input and output of the model are 1 × 512 × 688 pixels in size, and the model contains 31,042,369 parameters, requiring about 2170.16 MB of storage.

3. Deep Learning Network Training Results

The established data set was used to train the parameters of the network structure. At the beginning of training, in order to pursue faster convergence, we set the learning rate to Lr = 0.01 for 50 cycles, and then a smaller rate of Lr = 0.001 was used for the next 50 cycles. Figure 6 shows the loss curves, in which the black curve corresponds to the loss between the input image and the output image of the training set, and the red curve to that of the test set. It can be observed that the training result after 50 cycles gradually approaches the optimal (or a locally optimal) solution, so we stopped the training process after 100 training cycles.
To test the reconstruction ability of the trained network, the reconstruction results of two-step PSDH are compared with those of the RHRDL model in Figure 7, where Figure 7a,d,g are the collected holograms, Figure 7b,e,h are the images reconstructed by the corresponding PSDH algorithm, and Figure 7c,f,i are the corresponding RHRDL reconstructions. Comparing Figure 7a–c, although the RHRDL result is not as good as the PSDH reconstruction in some respects (in the yellow circle in Figure 7b, the image shows higher resolution), the image reconstructed by RHRDL has higher reconstruction accuracy, excellent arc restoration (the arc of the character '5' in the blue circle in Figure 7e is well recovered), and good noise resistance (the imaging area of the red circle in Figure 7h is a pure white image with little noise). The trained network structure has thus learned to reconstruct holograms with high quality, and its reconstruction ability is convincing.
A pseudo-color image can highlight information that insufficient contrast may hide. Here, the jet pseudo-color map is used to process the original grayscale images, as shown in Figure 8. Figure 8a shows the reconstruction result of the PSDH algorithm, and Figure 8b shows the result of RHRDL. The comparison shows that the latter method suppresses ghost images: for example, the fuzzy numbers "2, 3, 4, 5" appear to the left of the numbers "4, 5" in Figure 8a. In theory, the weak edge-diffraction information outside the CCD recording chip should also be reconstructed; due to the limitation of the recording chip, the part to the right of the hologram "wraps around" to the left of the display.
Another concern about the performance of RHRDL is the running time required for reconstruction by the DL model. The timing test was carried out on a personal computer with a Core i5-5490 CPU and a GTX 1050Ti GPU. For comparison, the same tests were performed on the same computer with the two- and three-step PSDH [9] methods. All results are shown in Table 1. The second column gives the time needed with the CPU only, the third column the time needed with GPU assistance during reconstruction, and the last column the number of holograms needed for information processing.
It can be seen that, with the help of the GPU, the time for all three methods decreased significantly, but the RHRDL model shows the best time reduction. With GPU assistance, it has the fastest reconstruction speed of 0.335 s, and only one hologram frame is required for reconstruction, which greatly compresses the image acquisition process. In fact, the CUDA framework, with the high-speed parallel computing ability integrated in a GPU, has become a necessary tool for DL methods at present. To further test the performance of the method, the reconstruction was also performed on an NVIDIA GeForce RTX 3090, and the time decreased to only 0.013 s, which is applicable to dynamic holographic display. Because only one frame is used, the RHRDL method decreases not only the storage burden but also the reconstruction time, showing its robustness.

4. Image Similarity Evaluation and Model Stability Analysis

Besides the efficiency of the proposed method RHRDL, the stability of the model is also an important indicator to ensure its practical value. To further describe the performance of our U-net network under different conditions and its robustness in object image reconstruction, it is investigated in two aspects: the structural similarity distribution of the test set and the noise resistance of hologram reconstruction [32,33].
Mean square error (MSE) and peak signal-to-noise ratio (PSNR) have been used in function optimization and evaluation because they are easy to use and have clear physical significance [34,35,36]. However, these two metrics do not match human visual perception well, and more complex loss functions cannot provide effective gradient guidance. To objectively evaluate the difference between the image reconstructed by the RHRDL method and the image reconstructed by the classical algorithm, we use the structural similarity index (SSIM) to evaluate image quality [35]. SSIM provides a more effective objective evaluation through the parameters of image brightness, contrast, and structure.
The SSIM is generally expressed as
$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$ (13)
where x and y are the two normalized images to be compared, and μx, μy are their mean values:
$\mu_x = \frac{1}{N}\sum_{i=1}^{N} x_i, \quad \mu_y = \frac{1}{N}\sum_{i=1}^{N} y_i$ (14)
N is the number of pixels in the image, and σx, σy are the standard deviations
$\sigma_x = \left(\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)^2\right)^{1/2}, \quad \sigma_y = \left(\frac{1}{N-1}\sum_{i=1}^{N}(y_i - \mu_y)^2\right)^{1/2}$ (15)
σxy is the covariance of x and y images
$\sigma_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)(y_i - \mu_y)$ (16)
c1 = (k1L)² and c2 = (k2L)² are two constants, where k1 = 0.01, k2 = 0.03, and L = 2^B − 1 (B is the bit depth), so L = 255 for 8-bit images. The SSIM value lies between 0 and 1; the closer the value is to 1, the higher the similarity. The structural similarity of two identical images is 1.
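Equations (13)–(16) translate directly into code. Note that, as the equations above define it, SSIM is computed here globally over the whole image; common library implementations instead average SSIM over local sliding windows, which gives somewhat different values:

```python
import numpy as np

def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global SSIM of Equations (13)-(16), evaluated over the whole image."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    sig_x2 = x.var(ddof=1)                                   # Equation (15), squared
    sig_y2 = y.var(ddof=1)
    sig_xy = ((x - mu_x) * (y - mu_y)).sum() / (x.size - 1)  # Equation (16)
    return ((2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (sig_x2 + sig_y2 + c2))
```

For two identical images the function returns exactly 1, as stated above.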
Figure 9 shows the structural similarity between the deep-learning reconstructions and the label images, described by the SSIM function. It can be seen that the structural similarity of the test set is higher than 0.998.
During hologram acquisition, the interferogram is highly sensitive to disturbances: slight vibration of the experimental platform or strong air disturbance changes the fringe pattern, reduces the hologram contrast, and blurs the image.
Here, Gaussian blur is used to simulate the insufficient resolution of holograms caused by such factors. Holograms with different degrees of blur are obtained by adjusting the size and variance of the Gaussian filter, as shown in Figure 10. Figure 10a is the original hologram, Figure 10b is the result of filtering with a Gaussian filter of 5 × 5 pixels and a variance of 2, Figure 10c uses a filter of 10 × 10 pixels and a variance of 2, and Figure 10d uses a filter of 10 × 10 pixels and a variance of 4. Figure 10d shows obvious information loss and sharpness reduction compared with Figure 10a.
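The blurring step can be reproduced with a separable Gaussian filter. This sketch uses plain numpy rather than an image-processing library, and it reads the paper's "variance" as σ², which is an interpretation rather than something the paper states explicitly:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 1-D Gaussian kernel; 'variance' is taken as sigma squared."""
    r = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-r ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, size, sigma):
    """Separable Gaussian filtering: convolve columns, then rows ('same')."""
    k = gaussian_kernel(size, sigma)
    out = np.apply_along_axis(np.convolve, 0, np.asarray(img, float), k, mode='same')
    out = np.apply_along_axis(np.convolve, 1, out, k, mode='same')
    return out
```

Convolving separably along each axis is equivalent to a full 2-D Gaussian filter but cheaper, since the 2-D Gaussian kernel factorizes into two 1-D kernels.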
The four holograms in Figure 10 are reconstructed using the RHRDL method and the results are shown in Figure 11. It can be seen that the blur of the hologram does not cause a decline in reconstruction quality, but only a small decrease in brightness. The mean square errors between the four reconstructed images and the standard label images are 0.016%, 0.030%, 0.030%, and 0.046%, respectively. Therefore, the network structure has strong noise resistance.

5. Conclusions

A DL method is provided here to improve the imaging quality and efficiency of digital holography; the RHRDL network model can reconstruct the image without considering the small reference tilt and phase-shift error. The U-net network is used to reconstruct an image from a single hologram with a size of 688 × 512 pixels. A total of about 1800 sets of experimental images were used as data sets, the structural similarity of the reconstruction results was higher than 0.998, and the structure is noise-immune. Thus, the migration of DL to digital holography is feasible. This method is not limited by strict physical constraints and achieves satisfactory resolution. For a single hologram, it takes only 0.013 s to complete the reconstruction, which far exceeds the conventional methods. Here we use PSDH to train the network so that the quality of the reconstructed image is improved; the performance was limited by the data we used. In future work, a larger database can be used to train a universal network for different holograms. Furthermore, a new network with faster processing, full color depth, and higher resolution should be designed and trained. This method will promote the development of digital holography towards intelligence and big data.

Author Contributions

Conceptualization, X.X.; methodology, X.X.; software, X.X. and X.W.; validation, H.W. and W.L.; formal analysis, X.X. and W.L.; investigation, W.L.; resources, X.W.; data curation, X.W.; writing—original draft preparation, X.X. and X.W.; writing—review and editing, X.X. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Fundamental Research Funds for the Central Universities of China, grant number 22CX03027A.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodman, J.W.; Lawrence, R.W. Digital image formation from electronically detected holograms. Appl. Phys. Lett. 1967, 11, 77–79. [Google Scholar] [CrossRef]
  2. Bryngdahl, O.; Frank, W.I. Digital holography–computer-generated holograms. Prog. Opt. 1990, 28, 1–86. [Google Scholar]
  3. Poon, T.-C.; Liu, J.-P. Introduction to Modern Digital Holography with MATLAB; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  4. Leith, E.N.; Upatnieks, J. Reconstructed wavefronts and communication theory. J. Opt. Soc. Am. A 1962, 52, 1123–1130. [Google Scholar] [CrossRef]
  5. Xu, X.F.; Wang, X.W.; Wang, H. Accurate Image Locating by Hologram Multiplexing in Off-Axis Digital Holography Display. Appl. Sci. 2022, 12, 1437. [Google Scholar] [CrossRef]
  6. Xu, X.F.; Zhang, Z.W.; Wang, Z.C.; Wang, J.; Zhan, K.Y.; Jia, Y.L.; Jiao, Z.Y. Robust digital holography design with monitoring setup and reference tilt error elimination. Appl. Opt. 2018, 57, B205–B211. [Google Scholar] [CrossRef] [PubMed]
  7. Yamaguchi, I.; Zhang, T. Phase-shifting digital holography. Opt. Lett. 1997, 22, 1268–1270. [Google Scholar] [CrossRef]
  8. Xu, X.; Cai, L.; Wang, Y.; Yang, X.; Meng, X.; Dong, G.; Shen, X.; Zhang, H. Generalized phase-shifting interferometry with arbitrary unknown phase shifts: Direct wave-front reconstruction by blind phase shift extraction and its experimental verification. Appl. Phys. Lett. 2007, 90, 121124. [Google Scholar] [CrossRef]
  9. Xu, X.F.; Ma, T.Y.; Jiao, Z.Y.; Xu, L.; Dai, D.J.; Qiao, F.L.; Poon, T.-C. Novel Generalized Three-Step Phase-Shifting Interferometry with a Slight-Tilt Reference. Appl. Sci. 2019, 9, 5015. [Google Scholar] [CrossRef]
  10. Okada, K.; Sato, A.; Tsujiuchi, J. Simultaneous calculation of phase distribution and scanning phase shift in phase shifting interferometry. Opt. Commun. 1991, 84, 118–124. [Google Scholar] [CrossRef]
  11. Xu, X.F.; Wang, X.W.; Luo, W.L.; Wang, H.; Sun, Y.T. Efficient Computer-generated Holography Based on Mixed Linear Convolutional Neural Networks. Appl. Sci. 2022, 12, 4177. [Google Scholar] [CrossRef]
  12. Qian, K.; Seah, H.S. Sequential demodulation of a single fringe pattern guided by local frequencies. Opt. Lett. 2007, 2, 127–129. [Google Scholar]
  13. Bengio, Y. Deep learning of representations: Looking forward. In International Conference on Statistical Language and Speech Processing; Carlos, M., Senja, P., Matthew, P., Eds.; Springer Nature Switzerland AG: Cham, Switzerland, 2013; pp. 1–37. [Google Scholar]
  14. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  15. Madali, N.; Gilles, A.; Gioia, P.; Morin, L. Automatic depth map retrieval from digital holograms using a deep learning approach. Opt. Express 2023, 31, 4199–4215. [Google Scholar] [CrossRef]
  16. Zeng, T.; Zhu, Y.; Lam, E.Y. Deep learning for digital holography: A review. Opt. Express 2021, 29, 40572–40593. [Google Scholar] [CrossRef] [PubMed]
  17. Rivenson, Y.; Zhang, Y.; Günaydın, H.; Teng, D.; Ozcan, A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci. Appl. 2018, 7, 17141. [Google Scholar] [CrossRef] [PubMed]
  18. Barbastathis, G.; Ozcan, A.; Situ, G.H. On the use of deep learning for computational imaging. Optica 2019, 6, 921–943. [Google Scholar] [CrossRef]
  19. Zheng, H.; Hu, J.; Zhou, C.; Wang, X. Computing 3D Phase-Type Holograms Based on Deep Learning Method. Photonics 2021, 8, 280. [Google Scholar] [CrossRef]
  20. Nguyen, T.; Bui, V.; Lam, V.; Raub, C.B.; Chang, L.C.; Nehmetallah, G. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection. Opt. Express 2017, 25, 15043–15057. [Google Scholar] [CrossRef]
  21. Wang, K.; Li, Y.; Kemao, Q.; Di, J.; Zhao, J. One-step robust deep learning phase unwrapping. Opt. Express 2019, 27, 15100–15115. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, K.; Kemao, Q.; Di, J.; Zhao, J. Deep learning spatial phase unwrapping: A comparative review. Adv. Photonics Nexus 2022, 1, 014001. [Google Scholar] [CrossRef]
  23. Wu, Z.; Wang, T.; Wang, Y.; Wang, R.; Ge, D. Deep learning for the detection and phase unwrapping of mining-induced deformation in large-scale interferograms. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5216318. [Google Scholar] [CrossRef]
  24. Shang, R.; Hoffer-Hawlik, K.; Wang, F.; Situ, G.; Luke, G.P. Two-step training deep learning framework for computational imaging without physics priors. Opt. Express 2021, 29, 15239–15254. [Google Scholar] [CrossRef] [PubMed]
  25. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; PMLR: New York, NY, USA, 2015; pp. 448–456. [Google Scholar]
  26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Washington, DC, USA, 2016. [Google Scholar]
  27. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  28. Yan, K.; Yu, Y.; Sun, T.; Asundi, A.; Kemao, Q. Wrapped phase denoising using convolutional neural networks. Opt. Lasers Eng. 2020, 128, 105999. [Google Scholar] [CrossRef]
  29. Sun, X.; Mu, X.; Xu, C.; Pang, H.; Deng, Q.; Zhang, K.; Jiang, H.; Du, J.; Yin, S.; Du, C. Dual-task convolutional neural network based on the combination of the U-Net and a diffraction propagation model for phase hologram design with suppressed speckle noise. Opt. Express 2022, 30, 2646–2658. [Google Scholar] [CrossRef] [PubMed]
  30. Zhang, G.; Guan, T.; Shen, Z.; Wang, X.; Hu, T.; Wang, D.; He, Y.; Xie, N. Fast phase retrieval in off-axis digital holographic microscopy through deep learning. Opt. Express 2018, 26, 19388–19405. [Google Scholar] [CrossRef]
  31. Li, J.; Zhang, Q.; Zhong, L.; Lu, X. Hybrid-net: A two-to-one deep learning framework for three-wavelength phase-shifting interferometry. Opt. Express 2021, 29, 34656–34670. [Google Scholar] [CrossRef] [PubMed]
  32. Li, Y.; Miao, Z.; Zhang, R.; Wang, J. DenoisingNet: An Efficient Convolutional Neural Network for Image Denoising. In Proceedings of the 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 1 May 2019. [Google Scholar]
  33. Fang, Q.; Xia, H.; Song, Q.; Zhang, M.; Guo, R.; Montresor, S.; Picart, P. Speckle denoising based on a deep learning via conditional generative adversarial network in digital holographic interferometry. Opt. Express 2022, 30, 20666–20683. [Google Scholar] [CrossRef] [PubMed]
  34. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
  35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  36. Lu, W.; Shi, Y.; Ou, P.; Zheng, M.; Tai, H.; Wang, Y.; Duan, R.; Wang, M.; Wu, J. High quality of an absolute phase reconstruction for coherent digital holography with an enhanced anti-speckle deep neural unwrapping network. Opt. Express 2022, 30, 37457–37469. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The neural network learning process.
Figure 2. Configuration for holography data acquisition.
Figure 3. One case of computer simulation results. (a,b) Amplitude and phase of object wave; (c) hologram I1; (d) hologram I2; (e) spectral distribution with zero-frequency suppressed; (f) reconstructed object image.
Figure 4. One case of the experimental results using the two-step PSDH method. (a,b) Two holograms; (c) background intensity; (d) object intensity; (e) reference intensity; (f) reconstructed image.
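The two-step PSDH reconstruction in Figure 4 combines the two holograms with the separately recorded object and reference intensities. A minimal NumPy sketch of the standard two-step recovery (not the authors' exact implementation; phase shifts of 0 and π/2 and a reference amplitude of √Ir are assumed here):

```python
import numpy as np

def reconstruct_two_step(I1, I2, Io, Ir):
    """Recover the complex object wave O = sqrt(Io) * exp(i*phi)
    from two phase-shifted holograms plus the object and reference
    intensities (two-step PSDH, shifts 0 and pi/2)."""
    real = I1 - Io - Ir          # = 2*sqrt(Io*Ir)*cos(phi)
    imag = I2 - Io - Ir          # = 2*sqrt(Io*Ir)*sin(phi)
    return (real + 1j * imag) / (2.0 * np.sqrt(Ir))

# Synthetic self-check on a known object wave:
rng = np.random.default_rng(2)
phi = rng.uniform(-3.0, 3.0, (64, 64))       # object phase
Io = rng.uniform(0.5, 1.0, (64, 64))          # object intensity
Ir = np.full((64, 64), 1.0)                   # reference intensity
I1 = Io + Ir + 2 * np.sqrt(Io * Ir) * np.cos(phi)   # hologram, shift 0
I2 = Io + Ir + 2 * np.sqrt(Io * Ir) * np.sin(phi)   # hologram, shift pi/2
O = reconstruct_two_step(I1, I2, Io, Ir)
```

On this synthetic data, the recovered amplitude and phase match √Io and phi to numerical precision.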
Figure 5. Structure for U-net network model.
Figure 6. The loss varies with training epochs.
Figure 7. Comparison of reconstructing results. (a,d,g) Holograms; (b,e,h) reconstruction results by PSDH; (c,f,i) reconstruction results by RHRDL. The three colored circles in (b,e,h) show a comparison of the details with the same areas as (c,f,i).
Figure 8. Comparison among pseudo color images. (a) Reconstruction results by PSDH; (b) reconstruction results by RHRDL.
Figure 9. SSIM of the reconstructed images.
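Figure 9 reports SSIM [35] between the reconstructions; PSNR [34] is the companion metric. A hedged sketch of computing both with scikit-image on synthetic stand-in images (not the paper's data):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
reference = rng.random((256, 256))   # stand-in for the PSDH reconstruction
# Stand-in for a network output: the reference plus mild additive noise.
degraded = np.clip(reference + 0.05 * rng.standard_normal((256, 256)), 0.0, 1.0)

ssim = structural_similarity(reference, degraded, data_range=1.0)
psnr = peak_signal_noise_ratio(reference, degraded, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```

Both metrics need the `data_range` of the images; for float images normalized to [0, 1] this is 1.0.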
Figure 10. Hologram after Gaussian blur. (a) Original hologram; (b) hologram blurred using a Gaussian filter with a size of 5 × 5 pixels and a variance of 2; (c) hologram blurred using a Gaussian filter with a size of 10 × 10 pixels and a variance of 2; (d) hologram blurred using a Gaussian filter with a size of 10 × 10 pixels and a variance of 4.
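The blurred holograms in Figure 10 can be generated with a standard Gaussian filter. A hedged sketch using scipy (the paper's exact tooling is not stated; scipy parameterizes the filter by sigma rather than an explicit kernel size, and the caption's "variance of 2" is read here as sigma² = 2):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
hologram = rng.random((512, 512))    # stand-in for a recorded hologram

# Variance 2 -> sigma = sqrt(2); variance 4 -> sigma = 2.
blurred_var2 = gaussian_filter(hologram, sigma=np.sqrt(2.0))
blurred_var4 = gaussian_filter(hologram, sigma=2.0)
```

An explicit kernel size (5 × 5 or 10 × 10 pixels, as in the caption) would instead be set directly in a library such as OpenCV via `cv2.GaussianBlur(img, (5, 5), sigma)`.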
Figure 11. Reconstructed images of the four holograms (a–d) in Figure 10 using the RHRDL method.
Table 1. Comparison of the time for different methods.
Method | CPU (Core i5-5490) | GPU (GTX 1050Ti) | Number of Holograms Required
RHRDL | 3.529758 s | 0.334988 s | 1
PSDH (2 steps) | 1.218556 s | 0.694618 s | 2
PSDH (3 steps) | 1.343408 s | 1.200886 s | 3

Share and Cite

MDPI and ACS Style

Xu, X.; Luo, W.; Wang, H.; Wang, X. Robust Holographic Reconstruction by Deep Learning with One Frame. Photonics 2023, 10, 1155. https://doi.org/10.3390/photonics10101155
