Article

HoloForkNet: Digital Hologram Reconstruction via Multibranch Neural Network

by Andrey S. Svistunov, Dmitry A. Rymov, Rostislav S. Starikov and Pavel A. Cheremkhin *

Laser Physics Department, Institute for Laser and Plasma Technologies, National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Kashirskoe Shosse 31, 115409 Moscow, Russia

* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(10), 6125; https://doi.org/10.3390/app13106125
Submission received: 6 April 2023 / Revised: 14 May 2023 / Accepted: 15 May 2023 / Published: 17 May 2023

Abstract

Reconstruction of 3D scenes from digital holograms is an important task in different areas of science, such as biology, medicine, ecology, etc. Many parameters, such as the objects’ shape, number, position, rate and density, can be extracted. However, reconstruction of off-axis and especially inline holograms can be challenging due to the presence of optical noise, the zero-order image and the twin image. We used a deep multibranch neural network model, which we call HoloForkNet, to reconstruct different 2D sections of a 3D scene from a single inline hologram. This paper describes the proposed method and analyzes its performance for different types of objects. Both computer-generated and optically registered digital holograms with resolutions up to 2048 × 2048 pixels were reconstructed. High-quality image reconstruction for scenes consisting of up to eight planes was achieved. The average structural similarity index (SSIM) for 3D test scenes with eight object planes was 0.94. HoloForkNet can be used to reconstruct 3D scenes consisting of micro- and macro-objects.

1. Introduction

Digital holography is a technique that allows for recording, processing and reconstructing data about 2D and 3D scenes [1,2]. This became possible due to the development of optical–digital methods of image registration and processing. At present, digital holography is widely used in medicine [3,4,5], biology [6,7,8,9], ecology [10,11], metrology [6,12], astronomy [13] and other fields. A digital hologram is an interference pattern registered from reference and object beams coherent with each other. Object images are typically reconstructed numerically using light propagation modeling [14,15]. The reconstructed object image can be viewed as a 2D section or as a 3D scene. Images can also be reconstructed optically by displaying holograms on spatial light modulators (SLMs) [16,17]. Digital holography has great potential for application in various fields of science and technology [2], including artificial-intelligence-based methods [18,19,20,21].
The main problems of digital holography include the presence of zero-order and twin images [22,23,24] and optical noise [1,25,26,27] in reconstructed images. These factors can reduce the quality of reconstruction and, therefore, the accuracy of the obtained object images. Optical noise may occur due to imperfections in the optical elements, phase shifts, or non-uniform intensity of the illuminating light. The zero-order and twin images are parasitic diffraction orders. They are a significant negative factor in the case of inline digital holography [28]. Off-axis holography [29] reduces the effect of the zero-order and twin images through the spatial separation of diffraction orders. However, this scheme reduces the amount of registered object information. To reduce the effect of noise in inline holography, the most popular method is the registration of several (usually four) holograms with different phase shifts [1,28]. Another option is iterative procedures, such as regularized inversion, model fitting and compressive sensing [30,31,32,33], which are time-consuming and often difficult to implement.
Advances in computer technology have made it possible to process large datasets, which led to the development of neural network-based algorithms for various tasks. In particular, in holography, neural network-based methods can be used for reconstruction acceleration [34], denoising [35,36,37], aberration compensation [38,39], suppression of parasitic diffraction orders [40,41], data compression [42], particle localization, positioning [43,44] and classification [45], depth prediction and autofocusing [46,47], etc.
For holographic image reconstruction, most work focuses on obtaining a single amplitude or phase image [48,49,50], or two images: amplitude and phase [36,51,52,53,54,55,56,57] or an extended-focus image and a depth map [57,58], in single-wavelength [36,51,52,53,54,56,57,58] or multiwavelength [55] schemes. Typically, neural networks for 3D hologram reconstruction contain one decoder branch and reconstruct not the object planes separately but a depth map in which objects from different planes are separated by their intensity. However, reconstructing the individual planes of a 3D scene from a single hologram would speed up the reconstruction, extend the capabilities of deep-learning holography and improve the image quality of the whole scene. This is especially useful for inline holograms, where the overlapping of objects from different 2D planes and noise significantly reduce the reconstruction quality.
HoloForkNet, the proposed neural network for holographic image reconstruction, is a deep convolutional neural network with skip connections. We propose the use of a multibranch architecture, a design used in various fields of science [59,60]. To our knowledge, the idea of using a multibranch neural network for digital hologram reconstruction was first described in [51], where the amplitude and phase of the original object were reconstructed simultaneously.
Before this research, the depth map reconstruction approach was used for 3D scene reconstruction. An example of such a model is Dense-U-net [58]. However, this approach may have a number of limitations associated with the size of objects and their overlapping across different object planes. Thus, this paper proposes an alternative method less affected by these issues: the HoloForkNet.
HoloForkNet is able to separately reconstruct each of the 2D planes forming a 3D scene from a single hologram, without the use of a depth map: the network processes the hologram and generates the full set of reconstructed images at once. These images are 2D sections of the original 3D scene, so the set of reconstructed 2D sections can be interpreted as the equivalent of the 3D scene. An example of a 3D scene represented by its sections is shown in Figure 1. The method will be tested in the most complex case of inline digital holograms.
The rest of the paper is organized as follows: In Section 2, the proposed method is described. In Section 3, numerical experiments and their results are presented. The optical experiments and their results are given and discussed in Section 4. The main results are provided in the conclusions.

2. The Proposed Method

Convolutional neural networks can be effectively used for a variety of problems [61]. The base of the proposed HoloForkNet architecture is the U-Net network [62]. U-Net is a deep convolutional neural network with skip connections that was originally designed for image segmentation and low-resolution image reconstruction but has been widely used in various image transformation tasks [63,64,65,66], defect detection [67], etc. U-Net uses an encoder–decoder architecture, meaning the network has two pathways: an encoding pathway, which compresses the image and extracts its features, and a decoding pathway, which expands the image and creates the transformed image based on the extracted features. Skip connections [68] preserve information in deep architectures by transferring it from earlier layers to deeper layers.
Usually, only a single 2D scene is reconstructed from a hologram, and a single decoder pathway is used for the output. In this paper, we propose reconstructing 3D scenes consisting of a set of separate 2D object planes. In this case, the neural network has several independent decoder pathways, each responsible for a separate reconstructed plane. The architecture of the neural network is presented in Figure 2, where A, B and C denote the skip connections for each branch. The figure shows the network used for holograms of 3D scenes consisting of N object planes.
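To make the forked architecture concrete, the following is a minimal Keras sketch of a shared encoder followed by several independent decoder branches with skip connections. The layer counts, filter numbers, input size and function name (build_forked_unet) are illustrative assumptions; they do not reproduce the exact published HoloForkNet configuration.

```python
# Minimal sketch of a forked (multibranch) encoder-decoder network in Keras.
# Layer counts, filter numbers, the input size and names are illustrative
# assumptions, not the exact HoloForkNet configuration from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Convolution (5x5, stride 1) + batch normalization + ReLU, as in the text."""
    x = layers.Conv2D(filters, 5, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_forked_unet(input_shape=(512, 512, 4), num_planes=2, base_filters=32):
    """Shared encoder, one independent decoder branch per reconstructed plane."""
    inputs = layers.Input(shape=input_shape)   # interleaved hologram channels

    # Encoder pathway (two levels shown for brevity).
    e1 = conv_block(conv_block(inputs, base_filters), base_filters)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = conv_block(conv_block(p1, base_filters * 2), base_filters * 2)
    p2 = layers.MaxPooling2D(2)(e2)
    bottleneck = conv_block(conv_block(p2, base_filters * 4), base_filters * 4)

    # Independent decoder branches with skip connections from the encoder.
    outputs = []
    for i in range(num_planes):
        d2 = layers.UpSampling2D(2)(bottleneck)
        d2 = layers.Concatenate()([d2, e2])          # skip connection, level 2
        d2 = conv_block(conv_block(d2, base_filters * 2), base_filters * 2)
        d1 = layers.UpSampling2D(2)(d2)
        d1 = layers.Concatenate()([d1, e1])          # skip connection, level 1
        d1 = conv_block(conv_block(d1, base_filters), base_filters)
        # Each branch outputs one plane in the same interleaved-channel format.
        outputs.append(layers.Conv2D(input_shape[-1], 1, name=f"plane_{i + 1}")(d1))

    return Model(inputs, outputs, name="forked_unet_sketch")
```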
The main elements of convolutional neural networks are convolutional layers. These layers allow for the extraction of features from the input data, which makes them a particularly useful tool for image processing. Convolutional layers consist of filters; each filter is a set of weights corresponding to a small region of the input data. The filters move over the input data and perform a convolution, that is, the calculation of a weighted sum of the pixels in that region. The result of the convolution is a feature map [69].
Each convolutional layer is followed by a batch normalization layer, which is necessary for the effective training of neural networks with a large number of layers. Batch normalization accelerates the convergence of the gradient descent algorithm and reduces the risk of overfitting. The idea is to normalize the output of each layer by centering and scaling the data within the mini-batch. The normalization is followed by scaling and shifting, performed using two additional parameters that are learned during training. This allows the model to learn faster and with better stability.
An activation function usually completes the standard neural network block [70]. In this case, the ReLU function was used. ReLU helps avoid the vanishing gradient problem, which occurs when the traditional sigmoid or hyperbolic tangent activations are used in neural networks with a large number of layers.
The HoloForkNet encoder consists of blocks of sequentially applied convolutional layers with a 5 × 5 kernel size and a stride of 1, a batch normalization layer and the ReLU activation function. Every second block is followed by a MaxPool layer, which decreases the size of the input tensor by a factor of 2 by taking the maximum value over a 2 × 2 window for each channel. The decoder pathway is almost identical to the encoder, but an upsampling layer is used instead of MaxPool. In addition, the tensors from the encoder pathway are concatenated with the corresponding tensors in the decoder pathway (skip connections). Since the images used are large (1024 × 1024 pixels), each image is split into multiple channels by interleaving adjacent pixels in order to use memory more efficiently during training [71]. In this approach, an image with a resolution of X × Y pixels is divided into channels by selecting pixels with a step S. The image is, therefore, converted into S² channels of X/S × Y/S pixels each. Each channel contains information from the whole field of the original image, which allows the image size to be reduced without losing information. Figure 3 shows an example of image interleaving with step 2.
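The pixel-interleaving step can be illustrated with the following NumPy sketch, assuming a single-channel image whose sides are divisible by the step S; the function names are hypothetical.

```python
# Possible implementation of the pixel interleaving described above: an
# X-by-Y image is split into S*S channels of (X/S)-by-(Y/S) pixels, each
# channel sampling the full field with step S.
import numpy as np

def interleave(image: np.ndarray, step: int) -> np.ndarray:
    """Convert an (X, Y) image into an (X//step, Y//step, step*step) tensor."""
    x, y = image.shape
    assert x % step == 0 and y % step == 0, "image sides must be divisible by step"
    channels = [image[i::step, j::step] for i in range(step) for j in range(step)]
    return np.stack(channels, axis=-1)

def deinterleave(tensor: np.ndarray, step: int) -> np.ndarray:
    """Inverse operation: reassemble the original image from the channels."""
    xs, ys, _ = tensor.shape
    image = np.empty((xs * step, ys * step), dtype=tensor.dtype)
    for c, (i, j) in enumerate((i, j) for i in range(step) for j in range(step)):
        image[i::step, j::step] = tensor[..., c]
    return image

# Example: a 1024x1024 hologram with step 2 becomes a 512x512x4 tensor.
hologram = np.random.rand(1024, 1024).astype(np.float32)
assert np.allclose(deinterleave(interleave(hologram, 2), 2), hologram)
```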
Training was performed using the standard Adam optimizer [72]. Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first- and second-order moments. The mean squared error (MSE) loss function was used:
$$\mathrm{MSE} = \frac{1}{ML}\sum_{x=1}^{M}\sum_{y=1}^{L}\bigl(O(x,y) - R(x,y)\bigr)^2,$$
where O(x, y) is the ground truth, R(x, y) is the reconstructed image generated by the neural network, M × L is the image dimensionality and x and y are the coordinates of image pixels.
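As an illustration of this training setup, a possible Keras compilation step is shown below, reusing the build_forked_unet sketch from above; the learning rate, batch size and epoch count are assumptions and are not values reported in the paper.

```python
# Sketch of the training setup described above: Adam optimizer with an MSE
# loss applied to each decoder output. Hyperparameter values are illustrative.
import tensorflow as tf

model = build_forked_unet(num_planes=2)          # forked model sketched earlier
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")                        # one MSE term per output branch
# holograms: interleaved hologram tensors; planes: list of per-plane ground truths
# model.fit(holograms, planes, batch_size=4, epochs=50, validation_split=0.2)
```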

3. Numerical Experiments

3.1. Numerical Experiment Conditions

The neural network was trained on sets of 3D computer-generated holograms. The scenes consisted of different numbers of object planes (3D scene sections); from 2 to 20 planes were used. The size of each object plane was 1024 × 1024 pixels. Two types of phase images were used. The first type (object 1) consisted of digits from the MNIST dataset [73]. In this case, each plane contained one object (one digit) in the center of the field. Examples of object planes are shown in Figure 4a,b. The phase range was from −π/2 (black dots) to π/2 (white dots). The second type (object 2) consisted of images of circles, which can be considered similar to various biomedical microparticles. Examples of these objects are shown in Figure 4c,d. The radius of the circles in these cases was 32 (Figure 4c) and 64 (Figure 4d) pixels.
These specific object examples were chosen because they represent several large groups of objects and illustrate the ability of the neural network to handle them. Randomly distributed particles have great variability in their mutual arrangement, which makes them good training data for a neural network. In addition, they can be seen as model examples of biomedical particles or fine particles in a gas. Images of digits are examples of large grayscale shapes, so it is possible to estimate the applicability of the neural network to other grayscale images.
The propagation from each object plane was calculated using the Fast Fourier Transform (FFT)-based Fresnel approximation method (single-FFT approach) [14,15]:
$$O_H(\xi,\eta,z) = \frac{\exp(ikz)}{i\lambda z}\exp\!\left(\frac{i\pi(\xi^2+\eta^2)}{\lambda z}\right)\mathrm{FFT}\!\left\{O(x,y,0)\exp\!\left(\frac{i\pi(x^2+y^2)}{\lambda z}\right)\right\},$$
where FFT{…} is the fast Fourier transform, λ is the wavelength, k is the wavenumber, z is the distance from the object plane to the hologram, (ξ, η, z) are the coordinates in the hologram plane, (x, y, 0) are the coordinates in the object plane, O(x, y, 0) is the object plane and O_H(ξ, η, z) is the object wave distribution in the hologram plane. The distance between the hologram plane and the nearest object plane was usually 23 cm. The distance between neighboring object planes was usually 1 µm. The wavelength λ was 532 nm. The SLM pixel size was 9 × 9 µm².
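A numerical sketch of this single-FFT Fresnel propagation is given below, assuming equal pixel pitch in both directions; variable names follow the text, and the implementation is an illustration rather than the authors' code.

```python
# Sketch of single-FFT Fresnel propagation of a complex object field to the
# hologram plane. Assumes a square pixel pitch; not the authors' implementation.
import numpy as np

def fresnel_single_fft(obj_field, wavelength, z, pixel_size):
    """Propagate a complex object field over distance z (single-FFT Fresnel)."""
    n, m = obj_field.shape
    k = 2 * np.pi / wavelength
    # Object-plane coordinates (x, y).
    x = (np.arange(m) - m / 2) * pixel_size
    y = (np.arange(n) - n / 2) * pixel_size
    X, Y = np.meshgrid(x, y)
    # Hologram-plane coordinates (xi, eta); pitch follows the single-FFT scaling.
    d_xi = wavelength * z / (m * pixel_size)
    d_eta = wavelength * z / (n * pixel_size)
    xi = (np.arange(m) - m / 2) * d_xi
    eta = (np.arange(n) - n / 2) * d_eta
    XI, ETA = np.meshgrid(xi, eta)

    inner = obj_field * np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(inner)))
    prefactor = (np.exp(1j * k * z) / (1j * wavelength * z)
                 * np.exp(1j * np.pi * (XI**2 + ETA**2) / (wavelength * z)))
    return prefactor * spectrum

# Example: propagate a flat phase object (532 nm, 23 cm, 9 um pixels).
phase = np.zeros((1024, 1024))
field_at_hologram = fresnel_single_fft(np.exp(1j * phase), 532e-9, 0.23, 9e-6)
```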
Inline Fresnel holograms with a resolution of 1024 × 1024 pixels were generated. The hologram was calculated as:
$$H(\xi,\eta) = \bigl|O_H(\xi,\eta,z) + R(\xi,\eta)\bigr|^2,$$
where R(ξ, η) is the reference beam in the hologram plane. The object beam in the hologram plane is the sum of the waves from each object plane. The images were also reconstructed using the single-FFT approach [14,15].
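Building on the propagation sketch above, one possible way to synthesize an inline hologram of a multi-plane scene according to this formula is shown below; the plane distances and the unit-amplitude reference wave are illustrative assumptions.

```python
# Sketch of inline hologram synthesis for a multi-plane scene: the object beam
# is the sum of the fields propagated from every plane, and a plane reference
# wave is added before taking the intensity. Uses fresnel_single_fft from above;
# distances and the unit-amplitude reference are illustrative assumptions.
import numpy as np

plane_phases = [np.zeros((1024, 1024)) for _ in range(2)]   # per-plane phase images
distances = [0.23, 0.24]                                     # metres, illustrative

object_beam = sum(fresnel_single_fft(np.exp(1j * p), 532e-9, z, 9e-6)
                  for p, z in zip(plane_phases, distances))
reference_beam = np.ones_like(object_beam)                   # unit-amplitude inline reference
hologram = np.abs(object_beam + reference_beam) ** 2         # H = |O_H + R|^2
```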
The HoloForkNet was trained in a way where each hologram was matched with a set of corresponding object planes. This neural network was implemented in the Python programming language using the TensorFlow library. The model was trained with Nvidia RTX 3060 and Nvidia RTX 3070 GPUs.

3.2. Assessing Image Quality

Two metrics were used to assess image quality: the structural similarity index (SSIM) and the correlation coefficient (CC). The original O(x, y) and reconstructed R(x, y) object planes were compared.
CC shows how strongly two variables are linearly related to each other. For images, it measures how strongly the pixel values of the two images are correlated. However, the correlation coefficient does not take into account other factors that can affect image quality, such as contrast and brightness. The CC is defined as follows [74]:
$$\mathrm{CC} = \frac{\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl(O(x,y)-\mu_O\bigr)\bigl(R(x,y)-\mu_R\bigr)}{\sqrt{\left(\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl(O(x,y)-\mu_O\bigr)^2\right)\left(\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl(R(x,y)-\mu_R\bigr)^2\right)}},$$
where μO and μR are the mean values for the images O(x, y) and R(x, y). If the CC value is 1, the images are identical. The closer the CC is to zero, the lower the similarity of the images.
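For reference, a direct NumPy computation of this correlation coefficient for two equally sized images might look as follows.

```python
# Straightforward NumPy computation of the correlation coefficient (CC)
# defined above, for two images of identical size.
import numpy as np

def correlation_coefficient(o: np.ndarray, r: np.ndarray) -> float:
    o_centered = o - o.mean()
    r_centered = r - r.mean()
    numerator = np.sum(o_centered * r_centered)
    denominator = np.sqrt(np.sum(o_centered**2) * np.sum(r_centered**2))
    return float(numerator / denominator)
```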
SSIM [75] considers image structure and contrast as well as brightness: it compares not only pixel brightness values but also their location and structure. The structural similarity index can therefore assess image quality more accurately, especially when there are distortions in the image structure, such as blurring. In a simple form, it is defined as follows:
$$\mathrm{SSIM} = \frac{\bigl(2\mu_O\mu_R + c_1\bigr)\bigl(2\sigma_{OR} + c_2\bigr)}{\bigl(\mu_O^2 + \mu_R^2 + c_1\bigr)\bigl(\sigma_O^2 + \sigma_R^2 + c_2\bigr)},$$
where σO and σR are the standard deviations, σOR is the cross-covariance for the images O(x, y) and R(x, y), and c1 and c2 are constants. The SSIM value is equal to 1 when the images are identical.
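A sketch of this simplified global SSIM is given below; the constants c1 and c2 follow the common choice from Wang et al. [75] with data range L, and in practice SSIM is usually computed over local windows (for example, with skimage.metrics.structural_similarity), so this is only an illustration.

```python
# Sketch of the simplified global SSIM given above. The constants c1 and c2
# follow the common choice (0.01*L)^2 and (0.03*L)^2 for data range L; local
# windowed SSIM is normally used in practice.
import numpy as np

def ssim_simple(o: np.ndarray, r: np.ndarray, data_range: float = 1.0) -> float:
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_o, mu_r = o.mean(), r.mean()
    sigma_o2, sigma_r2 = o.var(), r.var()
    sigma_or = np.mean((o - mu_o) * (r - mu_r))
    return float(((2 * mu_o * mu_r + c1) * (2 * sigma_or + c2))
                 / ((mu_o**2 + mu_r**2 + c1) * (sigma_o2 + sigma_r2 + c2)))
```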
The average values of the metrics were estimated. To characterize the possible variation in these values and better understand how the neural network performs, the standard deviation (STD) was also determined.

3.3. Reconstructing Images of Objects 1

As an example, let us consider training the neural network on holograms containing two planes with phase images of digits. Figure 5 shows examples of objects (b,d) reconstructed by the neural network. The object planes closer to the hologram are presented on the left side. For comparison, the images of the ground truth objects are given (a,c). The training set consisted of 25,000 holograms; each hologram was matched with two images representing the two corresponding object planes. The validation dataset consisted of 5000 holograms. The model produces two images corresponding to the object planes for each hologram.
Visually, the resulting images are extremely close to the ground truth. A quantitative evaluation of the results was also performed. The structural similarity index (SSIM), the correlation coefficient (CC) and their STDs were calculated for 100 reconstructed images to assess the performance of the neural network. The results are presented in Table 1. It can be seen that the neural network reconstructs such scenes with good quality.
The method generates reconstructed object images. In contrast to standard propagation modeling methods, the images are not contaminated by parasitic diffraction orders. This is especially useful when inline digital holograms are considered.

3.4. Reconstructing Images of Objects 2

The second type of phase objects (circles) was placed randomly on each object plane. Holograms of 3D scenes consisting of 2–20 planes were generated. Each of the planes contained 1–16 circular objects. The neural network was trained for each number of object planes. An example of quality evaluation for each case is shown in Figure 6. There were 10 objects in each object plane. For two planes, a total of 20 objects per 3D scene are recorded; eight planes, for example, correspond to 80 objects in a 3D scene. The training set consisted of 25,000 holograms, and the validation set consisted of 5000 holograms. Each additional object plane increases the size of the dataset and, therefore, the training time. For eight planes, the training set consisted of (25,000 + 8 × 25,000) = 225,000 images. In the test sample, there were (5000 + 8 × 5000) = 45,000 images.
It can be seen that as the number of planes grows, the quality of reconstruction decreases. This is explained by the increasing amount of information contained in the hologram and the correspondingly increasing reconstruction complexity. For object 1 (digits), the more important metric is SSIM: these objects are grayscale, contrast is important for them, and the SSIM metric captures it better. For object 2 (circles), CC is more useful: the objects are binary, so a clean background and correct object locations matter most, and these are better handled by CC. Both metrics are given for a more complete description of all images. In Figure 6, CC decreases more rapidly than SSIM. For example, as the number of objects increases, the neural network does not fully render them even when they are positioned correctly; the CC is more sensitive to such changes. The STD value also increases, suggesting that the number of artifacts in the reconstructed object images may increase. At the same time, the quality assessment metrics, even within the STD, remain above 0.8 for each set of planes, which indicates good performance of the model.
An example of images reconstructed by the neural network is shown in Figure 7c,d. The 3D scene consisted of eight object planes. There were five objects in each plane. The images are arranged from left to right as they move away from the hologram plane. For comparison, Figure 7a,b show examples of the ground truth.
From a visual point of view, the quality of the reconstruction is high. The vast majority of the objects are located in the correct places and have the correct characteristics. Only a small number of artifacts were introduced in the form of a false image, traces of objects from another plane, or incomplete objects. However, in this example, the artifacts are insignificant and do not influence the perception of the reconstructed images.
The results were compared against those generated by the Dense-U-net [58] model. This method was chosen because it is designed for 3D scene reconstruction, similar to HoloForkNet. Other methods were not used in the comparison because they require more initial data for reconstruction (for example, some standard methods involve recording four holograms for further reconstruction of information from them). The results of the quality assessment metrics for the five objects on each of the eight planes are shown in Table 2.
Several factors contribute to the difference in reconstruction quality. The most important ones are the overlapping of objects from different object planes and the non-uniformity of intensity over the reconstructed object area. Dense-U-net was originally designed to work with smaller, non-overlapping objects, and the reconstructed 3D scene was not separated into individual 2D planes; in that setting, object overlapping and intensity non-uniformity are not as impactful. Thus, it is possible to say that HoloForkNet shows better performance in cases of larger, overlapping objects spread across fewer object planes.
Figure 8 shows a plot describing the quality of reconstruction for each of the eight object planes. There were five objects in each plane. It can be seen that all planes are approximately equal in quality. This is explained by the fact that all planes have the same complexity of elements: number, shape and density of objects.
In experimental data, the number of objects is rarely strictly fixed. Therefore, to extend the evaluation of the method’s capabilities, the model was tested with samples with different numbers of objects in planes. At the same time, within one sample, the scenes contained the same number of objects in each plane. The plot for reconstruction quality in relation to the number of objects in each plane is shown in Figure 9. Holograms of 3D scenes containing eight object planes were used. The total number of objects for the left point of the graph is 8 × 1 = 8 and for the right one, 8 × 16 = 128.
It can be seen that an increase in the number of objects in the plane leads to a decrease in the quality of reconstruction. This can be explained by the increased reconstruction complexity. In this case, the neural network model works well up to approximately 10 objects per plane; for this reason, Figure 6 shows exactly this number of objects per plane. With a further increase in the number of objects, the quality begins to fall faster. The reduction in the CC for the lowest scene complexities (one and two objects per plane) can be explained by suboptimal conditions for the neural network, since it was trained on a dataset with five objects in each plane. This could be enough of a deviation from the optimal conditions to cause one of the metrics to suffer slightly.
Experiments were also conducted with a random number of objects in each plane, ranging from 1 to 20; hence, the average number of objects per plane in a large sample was 10.5. The number of planes in the 3D scenes was eight. The estimated quality of neural network reconstruction was SSIM = 0.94 ± 0.01 and CC = 0.90 ± 0.09. This is consistent with the values obtained for the case where each plane contained 10 objects. It can be concluded that, for the model, the total number of objects in the scene is more important than the number of objects in each plane.
Thus, HoloForkNet trained on computer-generated data has shown good performance. Next, the HoloForkNet architecture is verified on experimentally recorded digital holograms in order to confirm its applicability not only to data obtained by computer modeling but also to optically recorded holograms.

4. Optical Experiments

4.1. Conditions for Optical Experiments

For experimental testing of the proposed HoloForkNet network, digital holograms of 3D scene phase objects were recorded. The scheme of the experimental setup is shown in Figure 10.
A 200 mW Nd:YAG laser with a wavelength of 532 nm was used. The beam passed through a filtration system (lens L1 and a pinhole) and was collimated using lens L2. Beam splitter BS1 divided the light into reference and object beams. Beam splitter BS2 divided the object beam into two directions: towards spatial light modulators SLM1 and SLM2. SLM1 is a phase-only LCoS SLM HoloEye PLUTO-2 (1920 × 1080 pixels; pixel size 8 × 8 µm²). SLM2 is a phase-only LCoS SLM HoloEye GAEA-2 (4160 × 2464 pixels; pixel size 3.74 × 3.74 µm²). The images of the objects were displayed on the SLMs. The beams were reflected from the SLMs, with each pixel imparting its own phase shift. After that, the beams were combined by BS2 and passed through beam splitter BS3. The reference beam was reflected from mirror M and combined with the object beam using BS3. The resulting interference pattern was recorded by a Flare 48MP monochrome camera (7920 × 6004 pixels; pixel size 4.6 × 4.6 µm²). The digital camera and SLMs were controlled by a computer.
This setup allows for the recording of digital holograms of 2D and 3D scenes. The minimum distance used from the SLM to the camera was 67 cm. If the distances from BS2 to SLM1 and SLM2 are almost identical, then quasi-2D scene holograms are recorded. If one of the modulators is off, a hologram of one 2D phase object is recorded. If the distances from BS2 to SLM1 and SLM2 are unequal, then 3D scene holograms are recorded. The 3D scene consists of two different object planes, which were displayed on SLM1 and SLM2. Holograms of phase objects were experimentally recorded in this mode. The depth of the recorded 3D scene was adjusted by moving SLM2.

4.2. Results of Optical Experiments

Digital holograms of phase objects of both types (digits and circles) were optically registered. The holograms had a resolution of 2048 × 2048 pixels. Images of 1024 × 1024 pixels were displayed on SLM1. The size of object 1 was interpolated to 1024 × 1024 pixels. The circle radius for object 2 was 64 pixels. The size of the image displayed on SLM2 was increased so that it occupied a similar area to SLM1 during reconstruction. Registration was performed in two cases: when the distance between the modulators, and therefore between the planes, was small (1 cm) and when it was significantly larger (20 cm). Randomly distributed circles with no intersection between them were used as object 2.
A total of 4944 holograms were registered for object 1, of which 4500 were used for training and 444 for validation. Each training hologram was matched with two corresponding original images, forming a scene. For object 2, 10,522 holograms were recorded: 9500 were used for training and 1022 for validation. The model was trained on the obtained data. Since the input image (hologram) was four times larger in area, the interleaving step in each linear direction was doubled. Figure 11 shows examples of ground truth (a,c,e,g) and reconstructed (b,d,f,h) images from holograms containing two object planes. The distances from the camera to the modulators were different; the left image was closer to the camera plane. The distance between the object planes of the 3D scenes was 1 cm for Figure 11b,d and 20 cm for Figure 11f,h.
Visually, the reconstruction quality is high for both 3D scene depth options. The model works well regardless of the distance between the planes. It successfully reconstructs even objects located side by side along the optical axis (see the upper objects in the right plane of Figure 11c,d). There are no visible artifacts in the reconstructed images. The quality of the images was evaluated by the SSIM and CC metrics, given in Table 3. The values are averaged over 100 test images. Object 1 is presented in Figure 11a,e and object 2 in Figure 11c,g.
It can be seen that HoloForkNet produces high-quality reconstructed images. At the same time, it provides reconstruction without parasitic diffraction orders, which is very useful for inline digital holography. The HoloForkNet method can work with media with different particle densities; the considered reconstruction example corresponds to a density of 1059 particles per mm³.

5. Analysis of the Obtained Results

For a small number of objects and planes, almost perfect reconstruction quality was achieved. For complex scenes (for example, eight planes with 16 objects each), artifacts appear. Possibly, this problem can be solved by training the neural network on scenes with a large number of objects and by optimizing the architecture.
Another direction of the network’s development is improving its performance for holograms with a large number of object planes. The first problem is the increase in the total number of objects in the hologram’s 3D scene. The second problem is a significant increase in training time with an increased number of object planes, because each additional object plane requires adding a new decoder pathway. Thus, when going from the reconstruction of holograms with one plane to holograms of scenes containing eight object planes, the number of decoder parameters of the neural network increases by a factor of 3.8. Solving this problem requires modification of the neural network architecture.
To increase the versatility of this method, it is necessary to further modify the training dataset. This is aimed at ensuring that the neural network can successfully distinguish objects with different characteristics and sizes. In order to obtain the maximum reconstruction quality, it might be necessary to create an optimized dataset for each specific problem.

6. Conclusions

In this paper, we propose a neural network-based method that allows for the reconstruction of images of phase objects from 3D scene holograms. This makes it possible to practically eliminate the noise in the reconstructed images. In addition, zero-order and twin images do not appear even in the case of inline digital holograms.
The proposed HoloForkNet model effectively handles the reconstruction of phase-only objects. We obtained high-quality reconstruction of eight object planes with a random number of circular objects (from 1 to 20) on each of them, with a correlation coefficient of 0.90 ± 0.09.
The method showed good performance on optically registered digital holograms of 3D phase scenes. The result is practically independent of the distance between the object planes of the scene. The average reconstruction quality for all types of holograms at different distances, measured by the structural similarity index, was 0.88 ± 0.02.
For method acceleration and improvement, a number of the discussed modifications can be applied. The method can be useful in the analysis of dynamic and fast processes where microparticles or other objects are located in different planes of the medium and their images should be reconstructed.

Author Contributions

Conceptualization, P.A.C. and R.S.S.; methodology, D.A.R. and P.A.C.; software, D.A.R. and A.S.S.; validation, A.S.S.; formal analysis, R.S.S.; investigation, A.S.S., D.A.R. and P.A.C.; resources, R.S.S.; data curation, A.S.S.; writing—original draft preparation, A.S.S. and D.A.R.; writing—review and editing, A.S.S. and P.A.C.; visualization, A.S.S.; supervision, P.A.C.; project administration, R.S.S.; funding acquisition, P.A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Russian Science Foundation (RSF), Grant No. 22-79-10340.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schnars, U.; Falldorf, C.; Watson, J.; Jüptner, W. Digital Holography and Wavefront Sensing; Springer: Berlin/Heidelberg, Germany, 2015; ISBN 9783662446928. [Google Scholar]
  2. Javidi, B.; Carnicer, A.; Anand, A.; Barbastathis, G.; Chen, W.; Ferraro, P.; Goodman, J.; Horisaki, R.; Khare, K.; Kujawinska, M.; et al. Roadmap on digital holography [Invited]. Opt. Express 2021, 29, 35078. [Google Scholar] [CrossRef]
  3. Lam, V.K.; Nguyen, T.C.; Chung, B.M.; Nehmetallah, G.; Raub, C.B. Quantitative assessment of cancer cell morphology and motility using telecentric digital holographic microscopy and machine learning. Cytom. Part A 2017, 93, 334–345. [Google Scholar] [CrossRef]
  4. Wu, Y.; Rivenson, Y.; Zhang, Y.; Wei, Z.; Günaydin, H.; Lin, X.; Ozcan, A. Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery. Optica 2018, 5, 704–710. [Google Scholar] [CrossRef]
  5. Li, H.; He, G.; Song, Q.; Xia, H.; Liu, Z.; Liang, J.; Li, T. The Study of Tooth Erosion Tested by the Color Digital Holography (CDH) Detection System. Appl. Sci. 2022, 12, 8613. [Google Scholar] [CrossRef]
  6. Balasubramani, V.; Kujawińska, M.; Allier, C.; Anand, V.; Cheng, C.-J.; Depeursinge, C.; Hai, N.; Juodkazis, S.; Kalkman, J.; Kuś, A.; et al. Roadmap on Digital Holography-Based Quantitative Phase Imaging. J. Imaging 2021, 7, 252. [Google Scholar] [CrossRef]
  7. Petrov, V.; Pogoda, A.; Sementin, V.; Sevryugin, A.; Shalymov, E.; Venediktov, D.; Venediktov, V. Advances in Digital Holographic Interferometry. J. Imaging 2022, 8, 196. [Google Scholar] [CrossRef]
  8. Dyomin, V.; Semiletov, I.; Chernykh, D.; Chertoprud, E.; Davydova, A.; Kirillov, N.; Konovalova, O.; Olshukov, A.; Osadchiev, A.; Polovtsev, I. Study of Marine Particles Using Submersible Digital Holographic Camera during the Arctic Expedition. Appl. Sci. 2022, 12, 11266. [Google Scholar] [CrossRef]
  9. Galande, A.S.; Gurram, H.P.R.; Kamireddy, A.P.; Venkatapuram, V.S.; Hasan, Q.; John, R. Quantitative phase imaging of biological cells using lensless inline holographic microscopy through sparsity-assisted iterative phase retrieval algorithm. J. Appl. Phys. 2022, 132, 243102. [Google Scholar] [CrossRef]
  10. Zhu, Y.; Yeung, C.H.; Lam, E.Y. Microplastic pollution monitoring with holographic classification and deep learning. J. Phys. Photonics 2021, 3, 024013. [Google Scholar] [CrossRef]
  11. Kim, J.; Go, T.; Lee, S.J. Accurate real-time monitoring of high particulate matter concentration based on holographic speckles and deep learning. J. Hazard. Mater. 2020, 409, 124637. [Google Scholar] [CrossRef]
  12. Kemper, B.; von Bally, G. Digital holographic microscopy for live cell applications and technical inspection. Appl. Opt. 2007, 47, A52–A61. [Google Scholar] [CrossRef]
  13. Muslimov, E.R.; Sakhabutdinov, A.Z.; Morozov, O.G.; Pavlycheva, N.K.; Akhmetov, D.M.; Kharitonov, D.Y. Digital Holographic Positioning Sensor for a Small Deployable Space Telescope. Appl. Sci. 2022, 12, 4427. [Google Scholar] [CrossRef]
  14. Schnars, U.; Jüptner, W.P.O. Digital recording and numerical reconstruction of holograms. Meas. Sci. Technol. 2002, 13, R85–R101. [Google Scholar] [CrossRef]
  15. Verrier, N.; Atlan, M. Off-axis digital hologram reconstruction: Some practical considerations. Appl. Opt. 2011, 50, H136–H146. [Google Scholar] [CrossRef]
  16. Memmolo, P.; Bianco, V.; Paturzo, M.; Ferraro, P. Numerical Manipulation of Digital Holograms for 3-D Imaging and Display: An Overview. Proc. IEEE 2016, 105, 892–905. [Google Scholar] [CrossRef]
  17. Cheremkhin, P.A.; Kurbatova, E.A.; Evtikhiev, N.N.; Krasnov, V.V.; Rodin, V.G.; Starikov, R.S. Adaptive Digital Hologram Binarization Method Based on Local Thresholding, Block Division and Error Diffusion. J. Imaging 2022, 8, 15. [Google Scholar] [CrossRef]
  18. Cheremkhin, P.; Evtikhiev, N.; Krasnov, V.; Rodin, V.; Rymov, D.; Starikov, R. Machine learning methods for digital holography and diffractive optics. Procedia Comput. Sci. 2020, 169, 440–444. [Google Scholar] [CrossRef]
  19. Zeng, T.; Zhu, Y.; Lam, E.Y. Deep learning for digital holography: A review. Opt. Express 2021, 29, 40572. [Google Scholar] [CrossRef]
  20. Situ, G. Deep holography. Light. Adv. Manuf. 2022, 3, 1–23. [Google Scholar] [CrossRef]
  21. Montresor, S.; Tahon, M.; Picart, P. Deep learning speckle de-noising algorithms for coherent metrology: A review and a phase-shifted iterative scheme [Invited]. J. Opt. Soc. Am. A 2022, 39, A62. [Google Scholar] [CrossRef]
  22. Cuche, E.; Marquet, P.; Depeursinge, C. Spatial filtering for zero-order and twin-image elimination in digital off-axis holography. Appl. Opt. 2000, 39, 4070–4075. [Google Scholar] [CrossRef]
  23. Stoykova, E.S.E.; Kang, H.K.H.; Park, J.P.J. Twin-image problem in digital holography—A survey (Invited Paper). Chin. Opt. Lett. 2014, 12, 060013–60024. [Google Scholar] [CrossRef]
  24. Li, Y.-L.; Li, N.-N.; Di Wang, D.; Chu, F.; Lee, S.-D.; Zheng, Y.-W.; Wang, Q.-H. Tunable liquid crystal grating based holographic 3D display system with wide viewing angle and large size. Light Sci. Appl. 2022, 11, 188. [Google Scholar] [CrossRef]
  25. Cheremkhin, P.A.; Evtikhiev, N.; Kozlov, A.V.; Krasnov, V.; Rodin, V.; Starikov, R. An optical-digital method of noise suppression in digital holography. J. Opt. 2022, 24, 115702. [Google Scholar] [CrossRef]
  26. Wang, D.; Li, N.-N.; Li, Y.-L.; Zheng, Y.-W.; Wang, Q.-H. Curved hologram generation method for speckle noise suppression based on the stochastic gradient descent algorithm. Opt. Express 2021, 29, 42650. [Google Scholar] [CrossRef]
  27. Wang, D.; Li, Z.-S.; Zheng, Y.-W.; Li, N.-N.; Li, Y.-L.; Wang, Q.-H. High-Quality Holographic 3D Display System Based on Virtual Splicing of Spatial Light Modulator. ACS Photonics 2022. [Google Scholar] [CrossRef]
  28. Yamaguchi, I.; Zhang, T. Phase-shifting digital holography. Opt. Lett. 1997, 22, 1268–1270. [Google Scholar] [CrossRef]
  29. Leith, E.N.; Upatnieks, J. Wavefront Reconstruction with Diffused Illumination and Three-Dimensional Objects. J. Opt. Soc. Am. 1964, 54, 1295–1301. [Google Scholar] [CrossRef]
  30. Denis, L.; Lorenz, D.; Thiébaut, E.; Fournier, C.; Trede, D. Inline hologram reconstruction with sparsity constraints. Opt. Lett. 2009, 34, 3475–3477. [Google Scholar] [CrossRef]
  31. Momey, F.; Denis, L.; Olivier, T.; Fournier, C. From Fienup’s phase retrieval techniques to regularized inversion for in-line holography: Tutorial. J. Opt. Soc. Am. A 2019, 36, D62–D80. [Google Scholar] [CrossRef]
  32. Berdeu, A.; Flasseur, O.; Méès, L.; Denis, L.; Momey, F.; Olivier, T.; Grosjean, N.; Fournier, C. Reconstruction of in-line holograms: Combining model-based and regularized inversion. Opt. Express 2019, 27, 14951–14968. [Google Scholar] [CrossRef]
  33. Zhang, W.; Cao, L.; Brady, D.J.; Zhang, H.; Cang, J.; Zhang, H.; Jin, G. Twin-Image-Free Holography: A Compressive Sensing Approach. Phys. Rev. Lett. 2018, 121, 093902. [Google Scholar] [CrossRef]
  34. Zhang, G.; Guan, T.; Shen, Z.; Wang, X.; Hu, T.; Wang, D.; He, Y.; Xie, N. Fast phase retrieval in off-axis digital holographic microscopy through deep learning. Opt. Express 2018, 26, 19388–19405. [Google Scholar] [CrossRef]
  35. Tahon, M.; Montrésor, S.; Picart, P. Deep Learning Network for Speckle De-Noising in Severe Conditions. J. Imaging 2022, 8, 165. [Google Scholar] [CrossRef]
  36. Galande, A.S.; Thapa, V.; Gurram, H.P.R.; John, R. Untrained deep network powered with explicit denoiser for phase recovery in inline holography. Appl. Phys. Lett. 2023, 122, 133701. [Google Scholar] [CrossRef]
  37. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning Deep CNN Denoiser Prior for Image Restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2808–2817. [Google Scholar] [CrossRef]
  38. Nguyen, T.; Bui, V.; Lam, V.; Raub, C.B.; Chang, L.-C.; Nehmetallah, G. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection. Opt. Express 2017, 25, 15043–15057. [Google Scholar] [CrossRef]
  39. Pan, F.; Dong, B.; Xiao, W.; Ferraro, P. Stitching sub-aperture in digital holography based on machine learning. Opt. Express 2020, 28, 6537–6551. [Google Scholar] [CrossRef]
  40. Wang, H.; Li, K.; Jiang, X.; Wang, J.; Zhang, X.; Liu, X. Zero-order term suppression in off-axis holography based on deep learning method. Opt. Commun. 2023, 537, 129264. [Google Scholar] [CrossRef]
  41. Rymov, D.A.; Starikov, R.S.; Cheremkhin, P.A. Neural-network-enabled holographic image reconstruction via amplitude and phase extraction. J. Opt. Technol. 2022, 89, 511–516. [Google Scholar] [CrossRef]
  42. Jiao, S.; Jin, Z.; Chang, C.; Zhou, C.; Zou, W.; Li, X. Compression of phase-only holograms with JPEG standard and deep learning. Appl. Sci. 2018, 8, 1258. [Google Scholar] [CrossRef]
  43. Zhang, X.; Wang, H.; Wang, W.; Yang, S.; Wang, J.; Lei, J.; Zhang, Z.; Dong, Z. Particle field positioning with a commercial microscope based on a developed CNN and the depth-from-defocus method. Opt. Lasers Eng. 2022, 153, 106989. [Google Scholar] [CrossRef]
  44. Shao, S.; Mallery, K.; Kumar, S.S.; Hong, J. Machine learning holography for 3D particle field imaging. Opt. Express 2020, 28, 2987–2999. [Google Scholar] [CrossRef]
  45. Terbe, D.; Orzó, L.; Zarándy, A. Classification of Holograms with 3D-CNN. Sensors 2022, 22, 8366. [Google Scholar] [CrossRef]
  46. Pitkäaho, T.; Manninen, A.; Naughton, T.J. Focus prediction in digital holographic microscopy using deep convolutional neural networks. Appl. Opt. 2019, 58, A202–A208. [Google Scholar] [CrossRef]
  47. Cuenat, S.; Andréoli, L.; André, A.N.; Sandoz, P.; Laurent, G.J.; Couturier, R.; Jacquot, M. Fast autofocusing using tiny transformer networks for digital holographic microscopy. Opt. Express 2022, 30, 24730. [Google Scholar] [CrossRef]
  48. Jaferzadeh, K.; Fevens, T. HoloPhaseNet: Fully automated deep-learning-based hologram reconstruction using a conditional generative adversarial model. Biomed. Opt. Express 2022, 13, 4032. [Google Scholar] [CrossRef]
  49. Ju, Y.-G.; Choo, H.-G.; Park, J.-H. Learning-based complex field recovery from digital hologram with various depth objects. Opt. Express 2022, 30, 26149. [Google Scholar] [CrossRef]
  50. Pirone, D.; Sirico, D.G.; Miccio, L.; Bianco, V.; Mugnano, M.; Ferraro, P.; Memmolo, P. Speeding up reconstruction of 3D tomograms in holographic flow cytometry via deep learning. Lab Chip 2022, 22, 793–804. [Google Scholar] [CrossRef]
  51. Wang, K.; Dou, J.; Kemao, Q.; Di, J.; Zhao, J. Y-Net: A one-to-two deep learning framework for digital holographic reconstruction. Opt. Lett. 2019, 44, 4765–4768. [Google Scholar] [CrossRef]
  52. Rivenson, Y.; Zhang, Y.; Günaydın, H.; Teng, D.; Ozcan, A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci. Appl. 2018, 7, 17141. [Google Scholar] [CrossRef]
  53. Chen, H.; Huang, L.; Liu, T.; Ozcan, A. Fourier Imager Network (FIN): A deep neural network for hologram reconstruction with superior external generalization. Light Sci. Appl. 2022, 11, 254. [Google Scholar] [CrossRef]
  54. Li, H.; Chen, X.; Chi, Z.; Mann, C.; Razi, A. Deep DIH: Single-Shot Digital In-Line Holography Reconstruction by Deep Learning. IEEE Access 2020, 8, 202648–202659. [Google Scholar] [CrossRef]
  55. Wang, K.; Kemao, Q.; Di, J.; Zhao, J. Y4-Net: A deep learning solution to one-shot dual-wavelength digital holographic reconstruction. Opt. Lett. 2020, 45, 4220–4223. [Google Scholar] [CrossRef]
  56. Niknam, F.; Qazvini, H.; Latifi, H. Holographic optical field recovery using a regularized untrained deep decoder network. Sci. Rep. 2021, 11, 10903. [Google Scholar] [CrossRef]
  57. Ren, Z.; Xu, Z.; Lam, E.Y. End-to-end deep learning framework for digital holographic reconstruction. Adv. Photonics 2019, 1, 016004. [Google Scholar] [CrossRef]
  58. Wu, Y.; Wu, J.; Jin, S.; Cao, L.; Jin, G. Dense-U-net: Dense encoder–decoder network for holographic imaging of 3D particle fields. Opt. Commun. 2021, 493, 126970. [Google Scholar] [CrossRef]
  59. Cai, S.; Wu, Y.; Chen, G. A Novel Elastomeric UNet for Medical Image Segmentation. Front. Aging Neurosci. 2022, 14, 31. [Google Scholar] [CrossRef]
  60. Neven, R.; Goedemé, T. A Multi-Branch U-Net for Steel Surface Defect Type and Severity Segmentation. Metals 2021, 11, 870. [Google Scholar] [CrossRef]
  61. Dhillon, A.; Verma, G.K. Convolutional neural network: A review of models, methodologies and applications to object detection. Prog. Artif. Intell. 2019, 9, 85–112. [Google Scholar] [CrossRef]
  62. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; Volume 9351, pp. 234–241. [Google Scholar]
  63. Falk, T.; Mai, D.; Bensch, R.; Çiçek, Ö.; Abdulkadir, A.; Marrakchi, Y.; Böhm, A.; Deubner, J.; Jäckel, Z.; Seiwald, K.; et al. U-Net: Deep learning for cell counting, detection, and morphometry. Nat. Methods 2018, 16, 67–70. [Google Scholar] [CrossRef]
  64. Li, L.; Li, X.; Jiang, L.; Su, X.; Chen, F. A review on deep learning techniques for cloud detection methodologies and challenges. Signal Image Video Process. 2021, 15, 1527–1535. [Google Scholar] [CrossRef]
  65. Xiao, H.; Li, L.; Liu, Q.; Zhu, X.; Zhang, Q. Transformers in medical image segmentation: A review. Biomed. Signal Process. Control 2023, 84, 104791. [Google Scholar] [CrossRef]
  66. Hao, X.; Yin, L.; Li, X.; Zhang, L.; Yang, R. A Multi-Objective Semantic Segmentation Algorithm Based on Improved U-Net Networks. Remote Sens. 2023, 15, 1838. [Google Scholar] [CrossRef]
  67. Wang, A.; Togo, R.; Ogawa, T.; Haseyama, M. Defect Detection of Subway Tunnels Using Advanced U-Net Network. Sensors 2022, 22, 2330. [Google Scholar] [CrossRef]
  68. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  69. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  70. Ding, B.; Qian, H.; Zhou, J. Activation functions and their characteristics in deep neural networks. In Proceedings of the 2018 Chinese Control And Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 1836–1841. [Google Scholar] [CrossRef]
  71. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. arXiv 2016, arXiv:1609.05158. [Google Scholar] [CrossRef]
  72. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  73. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 1 March 2021).
  74. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Pearson: New York, NY, USA, 2018; ISBN 9781292223049. [Google Scholar]
  75. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
Figure 1. Example of a 3D scene recorded on a hologram.
Figure 2. Neural network architecture.
Figure 3. Example of image partitioning by pixel position.
Figure 4. Examples of objects used in training datasets. (a,b)—digits (objects 1), (c,d)—circles (objects 2).
Figure 5. Ground truth (a,c) and neural network-reconstructed images (b,d). The holograms contained scenes consisting of two object planes.
Figure 6. Correlation between the reconstruction quality and the number of object planes in the hologram. There were 10 objects in each plane.
Figure 7. Ground truth (a,b) and neural network-reconstructed (c,d) images. The hologram contained a 3D scene consisting of eight object planes.
Figure 8. Correlation between the reconstruction quality and the object plane index. The hologram contained a 3D scene consisting of eight object planes.
Figure 9. Correlation between the reconstruction quality and the number of objects in the planes. The holograms contained 3D scenes consisting of eight object planes.
Figure 10. Schematic of the experimental setup for recording digital holograms.
Figure 11. Ground truth (a,c,e,g) and neural network-reconstructed (b,d,f,h) images of scenes from recorded digital holograms. The distances between the planes are 1 cm (a,c) and 20 cm (e,g).
Table 1. Evaluation of reconstruction quality for synthesized holograms of scenes consisting of two object planes (objects—digits).

Metric | 1st Plane | 2nd Plane
SSIM | 0.86 ± 0.04 | 0.85 ± 0.04
CC | 0.97 ± 0.02 | 0.95 ± 0.02
Table 2. Numerical evaluation of reconstruction quality for computer-generated holograms of 3D scenes consisting of eight object planes (five objects in each plane).

Metric | HoloForkNet | Dense-U-net
SSIM | 0.972 ± 0.004 | 0.93 ± 0.09
CC | 0.98 ± 0.04 | 0.7 ± 0.2
Table 3. Metrics for evaluating the quality of images reconstructed from optically recorded holograms.

Metric | Objects 1, 1st Plane | Objects 1, 2nd Plane | Objects 2, 1st Plane | Objects 2, 2nd Plane
Small distance between planes
SSIM | 0.84 ± 0.04 | 0.85 ± 0.03 | 0.84 ± 0.04 | 0.930 ± 0.009
CC | 0.994 ± 0.003 | 0.994 ± 0.003 | 0.994 ± 0.003 | 0.99 ± 0.01
Large distance between planes
SSIM | 0.85 ± 0.03 | 0.80 ± 0.03 | 0.85 ± 0.03 | 0.923 ± 0.007
CC | 0.993 ± 0.003 | 0.989 ± 0.006 | 0.993 ± 0.003 | 0.992 ± 0.002
