Article

Multi-Object Deep-Field Digital Holographic Imaging Based on Inverse Cross-Correlation

1. Key Laboratory of Luminescence and Optical Information, Ministry of Education, Beijing Jiaotong University, Beijing 100044, China
2. Laboratory of Optics in Free Space (LOFS), Key Laboratory of In-Fiber Integrated Optics, Ministry of Education, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(20), 11430; https://doi.org/10.3390/app132011430
Submission received: 28 August 2023 / Revised: 16 October 2023 / Accepted: 17 October 2023 / Published: 18 October 2023
(This article belongs to the Special Issue Digital Holography and Its Application)

Abstract

To address the complexity of selecting small or unique reconstruction distances in digital holography, we propose an inverse cross-correlation-based algorithm for the digital holographic imaging of multiplanar objects with a large depth of field. In this method, a planar output map that closely surrounds the objects is established by calculating the inverse cross-correlation matrix of images reconstructed at neighboring reconstruction distances, with the object edges serving as the guide. By combining this search for edge planes with a depth estimation operator, the depth of field of digital holography is extended, allowing it to meet the requirements of the holographic imaging of multiplanar objects. Compared with the traditional depth estimation operator method, the proposed method resolves the reconstruction ambiguity across multiple planes with a simple optical path and without additional optical or mechanical devices, thus greatly improving the reconstruction quality. Numerical calculations and experiments with multiplanar samples validate the effectiveness of the proposed method.

1. Introduction

Digital holography can simultaneously record the amplitude and phase information of an object's light field on a photodetector under non-contact conditions [1,2]. The image of the object can then be numerically reconstructed through interference recording and diffraction reconstruction [3]. Because it captures small phase variations, digital holography offers high sensitivity in detecting subtle morphological changes and deformations, which makes it well suited for applications requiring precise measurements. Furthermore, digital holography is highly suitable for recording multi-dimensional information, thereby enabling the measurement of dynamic behaviors under specific conditions [4,5]. Its advantages also extend to quantitative analysis through the computer-based processing of amplitude and phase information: by integrating image processing and analysis techniques, additional valuable information can be extracted from the data. As a result, digital holography has found extensive use in diverse domains, including industrial manufacturing, biology, medicine, and materials science. Its applications encompass three-dimensional topography measurement [6,7], particle field characterization [8], deformation analysis [9], stress distribution mapping, and other areas [10], and its key issues have been widely discussed [11].
In digital holography, diffraction reconstruction is the key step for realizing the reconstruction of three-dimensional scenes, and the selection of an appropriate diffraction distance has a critical influence on the quality of the three-dimensional reconstruction. For the digital reconstruction of a single-plane object, the assumption of a well-defined focal plane usually holds: a range of plausible measurement distances can be explored, and image quality can be evaluated at various diffraction reconstruction distances to identify the optimal one [12,13,14]. However, the situation becomes complicated when the measured object is multiplanar, because the image quality is then affected by multiple planes and the single focal-plane assumption no longer applies. Therefore, optimizing the selection of the diffraction distance is a crucial technique for realizing multiplanar digital holographic imaging with a large depth of field.
In order to accomplish the holographic reconstruction of multiplanar objects, different structures and algorithms have recently been proposed. Lauren Wolbromsky et al. proposed a spectral multiplexing method under low-coherence light sources [15]. This method exploits the fact that reference beams at different angles are separated in the spectrum, so objects at different locations are recorded in interference fringes of different orientations, allowing multiple depths to be imaged in a single acquisition. However, this method has a complex and bulky optical path, and it requires a precise mechanical displacement stage to adjust the axial distance between the different reference paths. In addition, to achieve spectral separation, the holographic optical path must be set up off-axis, which sacrifices spatial bandwidth, and the relative angle of each reflector is subject to strict mutual-exclusion requirements. Tang Ming et al. proposed an autofocusing algorithm [16] based on the area criterion for monitoring microscopic plankton. By dividing the hologram into regions and focusing them sequentially, the best reconstruction distance for an object is identified as the distance at which the reconstructed object occupies the smallest area, following the principle that the dispersion spot is minimized when the object is in focus. A limitation of this method is that many a priori conditions are needed to determine the existence of objects in a region, so the algorithm is not widely applicable. With the development of deep learning, a dense encoder–decoder network was proposed by Yufeng Wu et al. [17] for 3D particle field measurement. This method still suffers from a long training time, the use of a single recognition model, and the need for a priori knowledge. In addition, entropy-minimization-based methods are gradually emerging [18,19,20]. These methods calculate the local minimum of the entropy value in a reconstructed image sequence and take it as the focused section. However, such approaches require the reconstruction of an entire image sequence, which can be computationally intensive. A. Anand et al. also utilized correlation algorithms for cell classification and reconstruction [21]; there, the correlation algorithms focus on classifying healthy and non-healthy cells, and cellular localization is determined by thresholding the resolution and thickness distribution of the system.
In this article, we propose an algorithm based on image inverse cross-correlation, which is designed to realize the large-depth-of-field digital holographic reconstruction of multiplanar objects. The fundamental idea is to apply inverse cross-correlation to adjacent reconstructed images in order to capture the edge information of the object in an image, thus allowing the edge position of the object to be located accurately and the plane where the object lies to be segmented. Through the integration of a depth estimation operator, the optimal reconstruction distance is then calculated for each segmented plane, realizing multi-plane digital holographic imaging with a large depth of field. Additionally, the reconstruction of overlapping objects can be achieved. Compared to traditional methods, the advantages of this algorithm are its simplicity and efficiency: it capitalizes on the synergy of image inverse correlation and depth estimation, eliminating the need for intricate optical systems or mechanical devices. A potential application of the proposed method lies in multiplanar large-depth-of-field digital holographic imaging, where it provides a new solution for realizing high-quality multi-object three-dimensional scene reproduction.

2. Principle

2.1. Digital Holographic Reconstruction

Digital holography can be divided into two major steps: interference recording and diffraction reconstruction. In the diffraction reconstruction process, the reconstructed image is reproduced in digital form through numerical calculations carried out on a computer [2], as described in Equation (1). Here, $U_0(x_0, y_0)$ represents the field distribution of the object before diffraction, $U(x, y)$ represents the light field distribution of the reconstructed image, and $z$ is the diffraction reconstruction distance. The symbols $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the two-dimensional fast Fourier transform and its inverse, respectively. $H$ is the transfer function in the frequency domain, expressed in Equation (2); $u$ and $v$ represent the frequency-domain coordinates in the vertical and horizontal directions, respectively; $\lambda$ denotes the wavelength; and $k$ represents the wave number.
$$U(x, y) = \mathcal{F}^{-1}\left\{ \mathcal{F}\left\{ U_0(x_0, y_0) \right\} \cdot H \right\} \tag{1}$$

$$H = \exp\left( jkz \sqrt{1 - \lambda^2 \left( u^2 + v^2 \right)} \right) \tag{2}$$
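For concreteness, the following sketch implements the angular spectrum reconstruction of Equations (1) and (2). The paper's simulations were carried out in MATLAB R2023a; this Python/NumPy translation, including the function name and interface, is illustrative only, and the suppression of evanescent components is one common convention.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Reconstruct U(x, y) from U0(x0, y0) at distance z via Eqs. (1)-(2)."""
    ny, nx = u0.shape
    # Frequency-domain coordinates u, v (cycles per unit length).
    u = np.fft.fftfreq(nx, d=dx)
    v = np.fft.fftfreq(ny, d=dx)
    U, V = np.meshgrid(u, v)
    # Transfer function H = exp(jkz * sqrt(1 - lambda^2 (u^2 + v^2))),
    # with evanescent components (negative square-root argument) zeroed.
    k = 2 * np.pi / wavelength
    arg = 1.0 - wavelength**2 * (U**2 + V**2)
    H = np.where(arg >= 0, np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```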
In the case where $U_0(x_0, y_0)$ is known, the key factor for reconstructing the light field distribution of the image $U(x, y)$ is to find the appropriate diffraction distance $z$. Figure 1 shows the difference in the reconstructed image at different reconstruction distances. The quality of the reconstructed images varies across different reconstruction distances: when the numerical diffraction distance equals the measured length from the object to the photodetector, the reconstructed image of the object is the clearest, while at other distances it is blurred.
Through numerical simulations, the diffraction reconstruction process can be realized on a computer. Depending on the properties of an object and the diffraction reconstruction distance, we can obtain different reconstructed images. As such, in digital holography, a suitable diffraction reconstruction distance is essential for achieving high-quality and clear reconstructed images.
In the case of single-plane objects, determining the appropriate diffraction reconstruction distance z can often be achieved by employing a focusing evaluation function, which is used to explore a suitable range of measurement distances for the experimental subject. However, when dealing with multiplanar samples, the presence of out-of-focus information from different planes results in the smudging of indicators within the evaluation function, which significantly deteriorates the effectiveness of the search process.
However, when researching the reconstruction of multiplanar objects, we noticed that the edge diffraction fringes of the reconstructed objects varied significantly. Based on this observation, we can adopt a new method to determine the edge information of an object, thereby locking onto the position of its plane. Specifically, it is possible to search for alterations in the diffraction fringes between neighboring reconstruction frames, which helps to accurately capture the position of an object's borders. Once the boundary information of the planes has been ascertained, it becomes feasible to allocate distinct diffraction reconstruction distances to each plane, thus facilitating multiplanar digital holographic imaging with a large depth of field.
With this approach, instead of relying on the traditional focusing evaluation function to determine the diffraction reconstruction distance, the edge information of an object is used as an anchor point to accurately locate the positions of the various planes. This method surmounts the challenges posed by multiplanar samples during the reconstruction process, allowing each plane to be reconstructed at an appropriate distance and resulting in more accurate and clearer large-depth-of-field digital holographic imaging of multiplanar objects. In addition to improving imaging quality, this method also introduces a fresh perspective for tackling digital holographic imaging challenges involving multiplanar objects.

2.2. Introduction of Image Cross-Correlation

Image cross-correlation is a prevalent technique in the realms of signal processing and image analysis. It can be used to compare and analyze the degree of similarity or correlation between two signals or images, and it is widely utilized in various domains, such as object detection and recognition [22,23,24,25,26], image registration [27,28,29,30], motion tracking [31,32,33,34], image matching [28,34,35,36,37], and numerous other applications. It is also widely used in incoherent coded aperture correlation holographic imaging [38,39,40].
Image cross-correlation is based on the cross-correlation algorithm used in signal processing: a sliding window (the template image) traverses the image, and the sum of products between the pixels in the overlapping portion of the window and the image is calculated at each position. The outcome is a newly derived image matrix termed the cross-correlation image or cross-correlation matrix. Each pixel's value within this matrix represents the similarity between the original image and the template image within the overlapping region at the corresponding location; a greater resemblance in pixel distribution between the template region and the original image leads to higher values in the corresponding pixels of the cross-correlation matrix.
In two-dimensional images, similarity is defined as in Equation (3), where $\gamma$ is the cross-correlation matrix of the image; $I(x, y)$ and $T(u, v)$ are the image under test and the template image, respectively; $I(x, y)$ denotes the light intensity value of the image at pixel $(x, y)$; $\bar{I}_{u,v}$ is the average value of $I(x, y)$ over the region covered by the template $T(u, v)$; and $\bar{t}$ is the template average.
$$\gamma(u, v) = \frac{\sum_{x,y} \left[ I(x, y) - \bar{I}_{u,v} \right] \left[ T(x - u, y - v) - \bar{t} \right]}{\left\{ \sum_{x,y} \left[ I(x, y) - \bar{I}_{u,v} \right]^2 \sum_{x,y} \left[ T(x - u, y - v) - \bar{t} \right]^2 \right\}^{1/2}} \tag{3}$$
Generally, finding the maximum value of the cross-correlation matrix $\gamma$ is used for image alignment; in this paper, by contrast, we find the position of object edges by locating the pixel points below a certain threshold of $\gamma$, and we call this the inverse cross-correlation operation. In contrast to direct edge extraction algorithms, this method proves more suitable for cases where the diffraction information of an object is intertwined and mixed with the background diffraction fringes, so that the edges of the object cannot be easily extracted directly.
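As an illustration of this thresholding step, the sketch below computes a full cross-correlation matrix for two equal-sized reconstructed images and returns the pixel coordinates falling below a threshold ζ. For brevity it normalizes globally rather than with the local window statistics of Equation (3); the function name and interface are assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import fftconvolve

def inverse_cross_correlation(img1, img2, zeta):
    """Full cross-correlation of two reconstructed images; pixels whose
    correlation falls below zeta are taken as object-edge candidates."""
    a = img1 - img1.mean()
    b = img2 - img2.mean()
    # Cross-correlation via convolution with the flipped second image;
    # 'full' mode yields the (2W-1) x (2H-1) matrix described in the text.
    gamma = fftconvolve(a, b[::-1, ::-1], mode="full")
    gamma /= np.sqrt((a**2).sum() * (b**2).sum())  # global normalization
    edge_rows, edge_cols = np.nonzero(gamma < zeta)
    return gamma, np.column_stack([edge_rows, edge_cols])
```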

2.3. Calculation Flow Chart

The flowchart presented in Figure 2 outlines the process of the inverse cross-correlation digital holographic imaging of multiplanar objects with large depth of field.
The entire process can be divided into four main parts: diffraction reconstruction, cross-correlation, multiplanar reconstruction, and image fusion.
In digital holographic imaging, diffraction reconstruction is a crucial step in realizing three-dimensional image reproduction. Prior to diffraction reconstruction, the hologram must first be processed with a +1-order frequency-domain filter; this is an indispensable step for all off-axis holograms.
Diffraction reconstruction. The filtered hologram is reconstructed twice using the angular spectrum reconstruction algorithm. Each reconstruction generates a reconstructed image; the purpose of the two reconstructions is to produce two images on which the cross-correlation can be computed. It should be noted that the diffraction distance z differs between the two reconstructions, which changes the transfer function H. As a result, two different reconstructed images are obtained, named Reconstructed Image 1 and Reconstructed Image 2.
Cross-correlation. In this process, Reconstructed Image 1 and Reconstructed Image 2 are subjected to an image cross-correlation operation, as described in Equation (3), to obtain the inverse cross-correlation matrix. Subsequently, the image edges are divided into multiple labeled regions, named Area 1 to Area N, by setting a threshold.
Multi-plane reconstruction. Depth estimation is performed separately for each delineated area, and the area is reconstructed according to the position of the extreme point in the estimation curve. This step contributes to a more accurate reconstruction of the structures and characteristics of objects within each distinct area.
Image fusion. Ultimately, the images obtained from the reconstruction are integrated based on their respective regional positions. This step involves amalgamating the reconstructed information from various regions to generate a comprehensive composite image.
This sequence of processes enables large-depth-of-field reconstruction for multiplanar objects in digital holographic imaging. Through synergizing multiple steps, including filtering, reconstruction, cross-correlation, depth estimation, and image fusion, the obstacles encountered when imaging multiplanar objects can be effectively overcome, thus leading to enhanced image quality and reconstruction precision. This method holds potential applications in the field of digital holographic imaging, and it provides a new way of realizing finer and more realistic object reconstruction.
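Putting the four stages together, a minimal sketch of the Figure 2 pipeline might look as follows. It reuses the two functions sketched above, assumes the hologram has already passed the +1-order filter, and substitutes amplitude variance for the paper's depth estimation operator; all names and the region-labeling strategy are illustrative.

```python
import numpy as np
from scipy import ndimage

def reconstruct_multiplane(filtered_holo, wavelength, dx, z1, z2, zeta, z_scan):
    """Sketch of Figure 2: reconstruct twice, correlate, segment, refocus, fuse."""
    r1 = np.abs(angular_spectrum_propagate(filtered_holo, wavelength, dx, z1))
    r2 = np.abs(angular_spectrum_propagate(filtered_holo, wavelength, dx, z2))
    gamma, _ = inverse_cross_correlation(r1, r2, zeta)
    # Crop the full correlation matrix back to the image grid, then label
    # the low-correlation (edge) pixels into Area 1 ... Area N.
    H, W = r1.shape
    center = gamma[H // 2:H // 2 + H, W // 2:W // 2 + W]
    labels, _ = ndimage.label(center < zeta)
    fused = np.zeros_like(r1)
    for sl in ndimage.find_objects(labels):
        # Depth estimation per area: amplitude variance stands in for the
        # depth estimation operator; pick the z giving the sharpest patch.
        best_z = max(z_scan, key=lambda z: np.var(np.abs(
            angular_spectrum_propagate(filtered_holo, wavelength, dx, z))[sl]))
        fused[sl] = np.abs(angular_spectrum_propagate(
            filtered_holo, wavelength, dx, best_z))[sl]
    return fused
```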

3. Numerical Calculation Results

3.1. Viability Verification

Figure 3 illustrates the numerical calculation process of the large-depth-of-field digital holography of multiplanar objects under inverse cross-correlation. Figure 3a shows the proposed optical path system: the coherent light from the laser is collimated by a spatial filter (SF) and a lens (L) and is then separated into an object path and a reference path by a beam splitter, $BS_1$. After reflection at mirror $M_2$, the object beam interacts with the three-plane object shown in Figure 3b, is then incident on the beam combiner $BS_2$, and is finally received by the photodetector. The reference beam is reflected from mirror $M_1$ to the beam combiner $BS_2$, whose angle is fine-tuned to separate the reference light from the object light. After the two beams interfere, the off-axis hologram can be observed on the photodetector. Assuming that the length and width of the image output by the photodetector are W and H, respectively, after the off-axis digital holographic filtering, the diffraction-reconstructed image at different reconstruction distances z is calculated on a computer using the angular spectrum diffraction method. Two adjacent reconstruction distances, $z_1$ and $z_2$, are selected (the color bar values indicate the normalized amplitudes), and a cross-correlation operation is performed. Since the reconstructed images have the same size, a cross-correlation coefficient matrix of size $(2W-1) \times (2H-1)$ is obtained, as shown in Figure 3c. The pixel coordinates below a specific threshold $\zeta$ in the matrix denote the edge coordinates of the object to be measured, and neighboring coordinates can be used as vertices to delimit the planes of the different measured objects, as shown in Figure 3d.
In the simulation, the size of the object and the photosensitive surface of the CCD were the same, the diffraction distance from the farthest object to the CCD was 0.93 m, and the three measured objects were uniformly distributed in the z-direction within the depth-of-field range of 0.62 m. The sample size of the CCD was 512 × 512 pixels, and the whole sensitive area was 5 × 5 mm. Image reconstruction was performed in steps of 0.062 m, so the actual positions of the reconstructed images should lie at the coordinates with step index values of 50, 55, and 60. The simulations were performed in MATLAB R2023a.
With the initial condition that the different measured objects were located in different planes, the exact location of each object in the plane could be found by integrating the depth estimation operator, as shown in Figure 4. Figure 4a presents the intercepted pixel ranges of different objects, which are respectively distinguished by color into four different planes: blue, red, gray, and orange. Figure 4b illustrates the normalized reconstruction distance coefficients, which are calculated using the depth estimation operator in the plane range corresponding to Figure 4a. The results are the same as those presented in the simulation data, thus verifying the feasibility of the proposed algorithm. Figure 4c shows the reconstructed images of each object at their optimal reconstruction distances.
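The normalized curves of Figure 4b can be produced, for instance, by scanning a depth estimation operator over candidate distances within each intercepted plane; the gradient-energy (EOG-like) operator below is one of the type-1 choices mentioned in Section 3.2 and stands in for whichever operator the authors used.

```python
import numpy as np

def depth_curve(filtered_holo, region, z_list, wavelength, dx):
    """Normalized focus scores over candidate distances for one plane region
    (cf. Figure 4b); `region` is a tuple of slices delimiting the plane."""
    scores = []
    for z in z_list:
        amp = np.abs(angular_spectrum_propagate(filtered_holo, wavelength, dx, z))
        gy, gx = np.gradient(amp[region])
        scores.append(np.sum(gx**2 + gy**2))  # gradient energy (EOG)
    scores = np.asarray(scores)
    return (scores - scores.min()) / (scores.max() - scores.min())
```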
After fusing the images of the extreme value points, the results shown in Figure 5 were obtained. The color bar values indicate the normalized amplitude. It is evident that each measured object is clearly displayed without blurring. The depth of field under this simulation can reach 2.497 m according to our calculations, thus highlighting the significant effect of multiplane depth-of-field expansion.
By following the aforementioned computational procedures, the inverse cross-correlation technique can be used to realize accurate numerical solutions in the large-depth-of-field digital holographic imaging of multiplanar objects. This method combines optical principles with computerized image processing to obtain information about an object from multiple aspects so as to reconstruct a high-quality image of a multiplanar object, thus providing a powerful solution for achieving a more accurate and detailed imaging of multiplanar objects.

3.2. Comparison with Single-Depth Estimation Algorithms

During the reconstruction process, a single-depth estimation operator is commonly employed to determine the numerical reconstruction distance. However, for large-depth-of-field multiplanar objects, several challenges arise when employing such approaches.
Figure 6 illustrates the estimation results of various estimation operators, which are discussed and analyzed herein. The operators are divided into three types of depth estimation operators: type 1 is based on image gradient operators, type 2 comprises frequency-domain evaluation operators, and type 3 is an autocorrelation operator based on image statistics theory. The red cross section indicates the actual planes where the three subjects are located. After a comprehensive comparison, the single-depth estimation operator shows two major disadvantages compared with the inverse cross-correlation operator proposed in this article: (1) The reconstruction distance normalization coefficient of the single-depth estimation operators appears disorganized. It can be clearly seen in Figure 6a that the ideal quadratic function curve is not present; even where a minimum appears in the interval, it lies only at the plane position of the second measured object, as with the Laplace, EOG, Brenner, Roberts, and Scharr operators in type 1 and the DCT and Wave operators in type 2. (2) The reconstructed image corresponding to the distance at the extremum point is blurred, and only the general outline of the object can be seen; the detailed information of the subject is lost. Figure 6b shows the reconstructed images corresponding to the extreme value points of the different single-depth estimation operators.
Comparing Figure 4 with Figure 6, it is obvious that the reconstructed images of each object produced using the inverse cross-correlation algorithm are very clear and that their planes are accurate, indicating that the limitations of the traditional single-depth estimation operator have been overcome. Therefore, the inverse cross-correlation algorithm proposed in this article provides a valuable reference for the data processing of large-depth-of-field digital holographic imaging in multiple planes.

4. Experiment

This experiment used the same optical path as the simulation. The measured objects in the optical path were the letters "B" and "J" in the same plane and the letters "T" and "U" in two further, different planes. All the letters were printed on a polyester mask plate and did not transmit light. The thickness of the polyester mask plate was 0.18 mm, and the dimensions of the letters B, J, T, and U were 0.5 × 0.5, 0.9 × 0.9, 0.8 × 0.8, and 0.8 × 0.8 mm², respectively.
In this experiment, we employed a semiconductor laser (model MSL-FN-532-200mW, manufactured by the Changchun Institute of Optical Machinery) with a central wavelength of 532 nm. The photodetector was a MER-125-30UC CCD, featuring a pixel size of 3.75 μm × 3.75 μm and a pixel count of 1292 × 964. No microscope or other lens sets were used, so the resolution is directly determined by the CCD, which is theoretically able to resolve objects at pixel-size resolution.

4.1. Large Depth-of-Field Reconstruction in the Non-Overlapping Case

Unlike the simulation data, the holograms captured during the experiment contained substantial noise, such as scattering noise; thus, each point that met the threshold was reconstructed separately, and all the reconstructed images were merged together. The specific process was as follows: Assuming that there are a total of $i$ extreme points within the threshold selected from the inverse cross-correlation coefficient $C_c$, and taking the position of each extreme point as the center coordinate, rectangular boxes with dimensions S × S, M × M, and L × L are constructed (S < M < L), yielding $3i$ rectangular boxes. The rectangular boxes are used as the boundaries to find the reconstruction distances $z_{\mu\sigma}$ corresponding to the $3i$ regions ($\mu = S, M, L$; $\sigma = 1, \dots, i$). After applying $z_{\mu\sigma}$ globally, all the reconstructed images are fused and processed to obtain the final results.
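A minimal sketch of the box construction is given below, clipped to the sensor grid; the default box sizes follow the values quoted later in this section, and the helper name and interface are illustrative.

```python
def boxes_around_points(points, sizes=(30, 50, 80), shape=(964, 1292)):
    """Build S x S, M x M, and L x L boxes centred on each extreme point,
    yielding 3*i candidate regions for the per-region depth search."""
    H, W = shape
    boxes = []
    for r, c in points:
        for s in sizes:
            half = s // 2
            boxes.append((slice(max(r - half, 0), min(r + half, H)),
                          slice(max(c - half, 0), min(c + half, W))))
    return boxes
```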
The cross-correlation results are shown in Figure 7: Figure 7a shows the hologram obtained by the CCD, and Figure 7b shows the inverse cross-correlation matrix $C_c$ under $z_1 = 10.0$ cm and $z_2 = 10.5$ cm. Since the four measured objects do not overlap with each other in this case, only the M × M box (M = 50 pixels) needed to be chosen for reconstruction. Figure 7c shows a schematic diagram of the rectangular box. The color bar values indicate the normalized amplitude.
The results are shown in Figure 8. Figure 8a shows the reconstructed phase distribution at the extreme point of a single-depth estimation operator, where the reconstruction distance is z = 21.0 cm. The color bars indicate the phase values. It can be seen that, owing to the mutual interference of the various planes, the phase is blurred, and the actually different positions of the measured objects are forced to be reproduced in the same plane and thus cannot be accurately distinguished; the reconstructed phase is therefore erroneous. Figure 8b shows the reconstructed image under the method proposed in this paper. The phase of each plane is clearly visible, and the relative positional distance between planes can be obtained: BJ and T are 7.2 cm apart, and T and U are 6.1 cm apart.
To better characterize the depth-of-field enhancement, the quality of the reconstructed image was quantified using the Fourier transform criterion. First, the Fourier transform is applied to the reproduced image to obtain the sum of the spatial frequency components, $S_{FT}$, whose expression is given in Equation (4). $M$ and $N$ denote the number of horizontal and vertical pixels in the reconstructed image, and $\operatorname{Re}$ and $\operatorname{Im}$ are the real and imaginary parts of the spectrum, respectively. The higher the value of $S_{FT}$, the more information is available in the reconstructed image, and the higher the accuracy of the reconstruction.
$$S_{FT} = \sum_{m=1}^{M} \sum_{n=1}^{N} \sqrt{\operatorname{Re}^2\left( \mathcal{F}\{ I(m, n) \} \right) + \operatorname{Im}^2\left( \mathcal{F}\{ I(m, n) \} \right)} \tag{4}$$
The $S_{FT}$ of the single-depth operator, calculated using Equation (4) for the reconstructed image in Figure 8a, was $0.53 \times 10^8$, while the $S_{FT}$ of the proposed algorithm in Figure 8b was $1.28 \times 10^8$. Combining the visual comparison and the quantitative calculation in Figure 8, it can be seen that the digital holographic depth of field under the inverse cross-correlation calculation was greatly extended in the non-overlapping case, and the measurement of multiplanar objects was realized.
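Equation (4) amounts to summing the magnitudes of the image's Fourier spectrum, as the following short sketch shows (function name assumed):

```python
import numpy as np

def sft(image):
    """Sum of spatial frequency components, Eq. (4)."""
    F = np.fft.fft2(image)
    return np.sum(np.sqrt(F.real**2 + F.imag**2))  # equivalently np.abs(F).sum()
```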

4.2. Large-Depth-of-Field Reconstruction in an Overlapping Case

In real applications, overlapping between measured objects is frequent, and this is the key problem in multiplanar reproduction. The experimental results based on this algorithm are shown in Figure 9.
Figure 9a shows the hologram obtained via the CCD, and Figure 9b shows the inverse cross-correlation coefficient matrix $C_c$ under $z_3 = 12.0$ cm and $z_4 = 18.0$ cm. Since T and U among the four measured objects overlapped with each other in this case, three different rectangular box sizes were chosen and reconstructed in turn, with S, M, and L equal to 30, 50, and 80 pixels, respectively. Figure 9c shows the schematic diagram of the M × M rectangular box. The color bar values indicate the normalized amplitude.
Figure 10 shows the reconstruction results, where Figure 10a,b show the phase distribution under a single-plane reconstruction versus that obtained with this algorithm. The color bars indicate the phase values. In this case, each plane can be reproduced even though the measured objects overlap. The overlapping region can still cause interference across the different planes, but its value is low and negligible, as shown by the red box. The $S_{FT}$ of the single-depth operator was $0.58 \times 10^8$ (calculated using Equation (4)), while the $S_{FT}$ of the proposed algorithm was $1.20 \times 10^8$. Even with overlapping measured objects, this algorithm can still accurately reproduce the depth of field of the different planes and calculate the relative positions between them.

5. Discussion

5.1. Factors Affecting Depth of Field

In discussing the maximum achievable depth of field using this method, we summarize the following factors that influence the depth of field.
The multi-plane depth of field is initially constrained by the single-plane depth of field; hence, the single-plane depth of field is discussed first. The influencing factors include the sampling number $N$ of the photodetector, the ideal position $z_i$ of the object reconstruction, the reconstruction algorithm, and the parameters of the optical instruments in the system (such as microscope parameters). Since the angular spectrum reconstruction algorithm was used in this article and no additional optical instrumentation was employed, only the first two factors are discussed herein.
Sampling number ($N$) of the photodetector. In the angular spectrum reconstruction algorithm, if the energy concentration of the point source within $n_\delta$ pixels in the reconstruction plane is taken as the limiting condition for the depth of field, then the relationship between the actual reconstruction plane position $z_m$ and the ideal position $z_i$ is given by Equation (5):
$$z_m - z_i = \frac{3 z_i}{n_\delta \cdot N} \tag{5}$$
This equation indicates that the larger the sampling number (N) of the photodetector, the smaller the depth of field of its digital holography.
Ideal position ($z_i$) for object reconstruction. Based on the above equation, the depth of field of a single-plane object increases as the ideal object reconstruction position $z_i$ increases.
The following section addresses the factors affecting the depth of field during the inverse cross-correlation calculation and experimental processes, which include the shape of the object itself, its position characteristics, threshold, and noise.
The morphology and position characteristics of the object itself. The reconstruction of multiplanar objects is affected by various factors, such as the shape, size, and position of the object being measured. In the case of complex diffraction profiles, an algorithm is more likely to capture the object’s edge characteristics. Moreover, the size difference between the measured objects affects the accuracy of an algorithm. The larger the difference, the easier it is for the algorithm to ignore the edges of smaller objects, thus affecting their reconstruction. Position discrepancy refers to cases where the same measured object is placed at the same total distance but where the relative distance between the objects is inconsistent. In such cases, although the measured object is the same, the incidence of diffraction stripes on the photodetector differs, and the difference in the diffraction shape also affects the size of the depth of field.
Threshold. In the inverse cross-correlation calculation, the selection of the threshold value is a critical parameter for determining the coordinates of object edge positions. The number of location coordinates increases as the threshold value decreases. A larger threshold tends to result in missing the measured target, while a smaller threshold increases the difficulty of clustering and reduces accuracy.
Noise. Speckles or other types of noise often appear in interferometric experiments, and they can affect depth-of-field results. In practical measurements, it is recommended to employ different noise reduction processes to mitigate the effects of noise.
By considering these factors, a deeper understanding of the challenges and limitations associated with the multiplanar large-depth-of-field digital holographic imaging process can be gained. Simultaneously, this knowledge enables the implementation of appropriate measures for optimizing reconstruction outcomes, thereby facilitating the achievement of higher-quality multiplanar large-depth-of-field digital holographic imaging results.

5.2. Axial Resolution

The axial resolution in 3D imaging refers to the ability to distinguish objects along the optical axis. In digital holography, it can be quantified as the minimum axial distance required for two planes to be distinguished clearly. This resolution can be expressed using Equation (6) [41], where λ represents the wavelength; z is the diffraction distance between planes; M and N are the number of samples in the transverse and longitudinal directions of the photodetector, respectively; and δ represents the pixel size.
$$R = \frac{8 \lambda z^2}{M N \cdot \delta^2} \tag{6}$$
In this section, we investigate whether the current algorithm impacts the axial resolution. The experimental results are presented in Figure 11. Figure 11a,c show the schematic diagram of the rectangular box and the inverse cross-correlation matrix when two closely spaced planes are considered (i.e., when the planes are separated by approximately the thickness of the polyester mask plate). Notably, even when the two planes are in close proximity, the algorithm can effectively distinguish between them, clearly indicating that they belong to separate planes. However, it should be noted that the diffraction distance in this case is 15.7 cm, which, according to Equation (6), gives an axial resolution of 5.99 mm; this is greater than the mask plate thickness, so the two planes cannot be resolved.
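As a quick check of the quoted value, plugging the experimental parameters of Section 4 into Equation (6) reproduces the 5.99 mm figure:

```python
wavelength = 532e-9   # m (Section 4 laser)
z = 0.157             # m (diffraction distance, 15.7 cm)
M, N = 1292, 964      # CCD pixel counts
delta = 3.75e-6       # m (pixel size)
R = 8 * wavelength * z**2 / (M * N * delta**2)
print(f"R = {R * 1e3:.2f} mm")  # -> R = 5.99 mm
```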
When the two planes are positioned with a separation of approximately 6 mm, and the far plane's diffraction distance to the CCD remains 15.7 cm, the precise separation of the two planes can be achieved. The respective diffraction distances of the two planes are 15.77 cm and 15.16 cm, with a spacing of 0.61 cm, making them resolvable. Figure 11b,d illustrate the corresponding schematic diagram of the rectangular boxes and the inverse cross-correlation matrix. The phase distribution is depicted in Figure 12. The color bar values indicate the phase magnitude.
The experimental results revealed that the proposed algorithm accurately localizes the object’s position when the two planes are in close proximity. Consequently, its axial resolution is solely determined by the characteristics of the digital holographic optical path itself, and the algorithm does not impose any limitations on its size.

6. Conclusions

In this article, an inverse cross-correlation digital holographic reconstruction method was proposed for the high-quality imaging of multiplanar targets with a large depth of field. Our method calculates the image inverse cross-correlation matrices of images reconstructed at similar reconstruction distances and identifies the object edge information using a clustering algorithm. Then, by combining the depth estimation operator with an image fusion algorithm, the digital holographic reconstruction of multiplanar objects with a large depth of field was successfully completed. This method can provide a valuable reference for numerous application scenarios, such as holographic microscopy, 3D morphometry, etc.

Author Contributions

Conceptualization, J.Z. and Z.G.; methodology, J.Z.; software, Y.N.; validation, J.Z.; investigation, Y.S.; resources, L.D.; data curation, S.W.; writing—original draft preparation, J.Z.; funding acquisition, Z.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 52075034).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Please send any requests for data to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yaroslavskii, L.P.; Merzlyakov, N.S. Methods of Digital Holography; Springer: Berlin/Heidelberg, Germany, 1980. [Google Scholar]
  2. Goodman, J.W. Introduction to Fourier Optics; Roberts and Company Publishers: Greenwood Village, CO, USA, 2005. [Google Scholar]
  3. Goodman, J.W.; Lawrence, R. Digital image formation from electronically detected holograms. Appl. Phys. Lett. 1967, 11, 77–79. [Google Scholar] [CrossRef]
  4. Wang, X.L.; Zhai, H.C.; Wang, Y.; Mu, G.G. Spatially angular multiplexing in ultra-short pulsed digital holography. Acta Phys. Sin. 2006, 55, 1137–1142. [Google Scholar] [CrossRef]
  5. Curtis, J.E.; Koss, B.A.; Grier, D.G. Dynamic holographic optical tweezers. Opt. Commun. 2002, 207, 169–175. [Google Scholar] [CrossRef]
  6. Many, G.; de Madron, X.D.; Verney, R.; Bourrin, F.; Renosh, P.; Jourdin, F.; Gangloff, A. Geometry, fractal dimension and settling velocity of flocs during flooding conditions in the Rhône ROFI. Estuar. Coast. Shelf Sci. 2019, 219, 1–13. [Google Scholar] [CrossRef]
  7. Zhang, Z.; Zheng, Y.; Xu, T.; Upadhya, A.; Lim, Y.J.; Mathews, A.; Xie, L.; Lee, W.M. Holo-UNet: Hologram-to-hologram neural network restoration for high fidelity low light quantitative phase imaging of live cells. Biomed. Opt. Express 2020, 11, 5478–5487. [Google Scholar] [CrossRef]
  8. Kim, J.; Go, T.; Lee, S.J. Volumetric monitoring of airborne particulate matter concentration using smartphone-based digital holographic microscopy and deep learning. J. Hazard. Mater. 2021, 418, 126351. [Google Scholar] [CrossRef] [PubMed]
  9. Seebacher, S.; Osten, W.; Baumbach, T.; Jüptner, W. The determination of material parameters of microcomponents using digital holography. Opt. Lasers Eng. 2001, 36, 103–126. [Google Scholar] [CrossRef]
  10. Kumar, R.; Dwivedi, G. Emerging scientific and industrial applications of digital holography: An overview. Eng. Res. Express 2023, 5, 032005. [Google Scholar] [CrossRef]
  11. Osten, W.; Faridian, A.; Gao, P.; Körner, K.; Naik, D.; Pedrini, G.; Singh, A.K.; Takeda, M.; Wilke, M. Recent advances in digital holography. Appl. Opt. 2014, 53, G44–G63. [Google Scholar] [CrossRef]
  12. Fonseca, E.S.; Fiadeiro, P.T.; Pereira, M.; Pinheiro, A. Comparative analysis of autofocus functions in digital in-line phase-shifting holography. Appl. Opt. 2016, 55, 7663–7674. [Google Scholar] [CrossRef]
  13. Zhang, X.; Wu, H.; Ma, Y. A new auto-focus measure based on medium frequency discrete cosine transform filtering and discrete cosine transform. Appl. Comput. Harmon. Anal. 2016, 40, 430–437. [Google Scholar] [CrossRef]
  14. Xie, H.; Rong, W.; Sun, L. Wavelet-based focus measure and 3-d surface reconstruction method for microscopy images. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 229–234. [Google Scholar]
  15. Wolbromsky, L.; Turko, N.A.; Shaked, N.T. Single-exposure full-field multi-depth imaging using low-coherence holographic multiplexing. Opt. Lett. 2018, 43, 2046–2049. [Google Scholar] [CrossRef] [PubMed]
  16. Tang, M.; Liu, C.; Wang, X.P. Autofocusing and image fusion for multi-focus plankton imaging by digital holographic microscopy. Appl. Opt. 2020, 59, 333–345. [Google Scholar] [CrossRef] [PubMed]
  17. Wu, Y.; Wu, J.; Jin, S.; Cao, L.; Jin, G. Dense-U-net: Dense encoder–decoder network for holographic imaging of 3D particle fields. Opt. Commun. 2021, 493, 126970. [Google Scholar] [CrossRef]
  18. Ren, Z.; Chen, N.; Lam, E.Y. Extended focused imaging and depth map reconstruction in optical scanning holography. Appl. Opt. 2016, 55, 1040–1047. [Google Scholar] [CrossRef]
  19. Jiao, A.; Tsang, P.; Poon, T.; Liu, J.; Lee, C.; Lam, Y. Automatic decomposition of a complex hologram based on the virtual diffraction plane framework. J. Opt. 2014, 16, 075401. [Google Scholar] [CrossRef]
  20. Tsang, P.W.M.; Poon, T.C.; Liu, J.P. Fast extended depth-of-field reconstruction for complex holograms using block partitioned entropy minimization. Appl. Sci. 2018, 8, 830. [Google Scholar] [CrossRef]
  21. Anand, A.; Chhaniwal, V.; Patel, N.; Javidi, B. Automatic identification of malaria-infected RBC with digital holographic microscopy using correlation algorithms. IEEE Photonics J. 2012, 4, 1456–1464. [Google Scholar] [CrossRef]
  22. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
  23. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, Proceedings of the 29th Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
  24. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Computer Vision–ECCV 2016, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016, Proceedings, Part I 14; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  26. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
  27. Avants, B.B.; Epstein, C.L.; Grossman, M.; Gee, J.C. Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 2008, 12, 26–41. [Google Scholar] [CrossRef]
  28. Briechle, K.; Hanebeck, U.D. Template matching using fast normalized cross correlation. In Proceedings of the Optical Pattern Recognition XII, SPIE, Orlando, FL, USA, 16–20 April 2001; Volume 4387, pp. 95–102. [Google Scholar]
  29. Luo, J.; Konofagou, E.E. A fast normalized cross-correlation calculation method for motion estimation. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2010, 57, 1347–1357. [Google Scholar]
  30. Sarvaiya, J.N.; Patnaik, S.; Bombaywala, S. Image registration by template matching using normalized cross-correlation. In Proceedings of the IEEE 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies, Bangalore, India, 28–29 December 2009; pp. 819–822. [Google Scholar]
  31. Zahiri-Azar, R.; Salcudean, S.E. Motion estimation in ultrasound images using time domain cross correlation with prior estimates. IEEE Trans. Biomed. Eng. 2006, 53, 1990–2000. [Google Scholar] [CrossRef]
  32. Komarov, A.S.; Barber, D.G. Sea ice motion tracking from sequential dual-polarization RADARSAT-2 images. IEEE Trans. Geosci. Remote. Sens. 2013, 52, 121–136. [Google Scholar] [CrossRef]
  33. Deville, S.; Penjweini, R.; Smisdom, N.; Notelaers, K.; Nelissen, I.; Hooyberghs, J.; Ameloot, M. Intracellular dynamics and fate of polystyrene nanoparticles in A549 Lung epithelial cells monitored by image (cross-) correlation spectroscopy and single particle tracking. Biochim. Biophys. Acta (BBA)-Mol. Cell Res. 2015, 1853, 2411–2419. [Google Scholar] [CrossRef]
  34. Zhao, F.; Huang, Q.; Gao, W. Image matching by normalized cross-correlation. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006; Volume 2, p. II. [Google Scholar]
  35. Debella-Gilo, M.; Kääb, A. Sub-pixel precision image matching for measuring surface displacements on mass movements using normalized cross-correlation. Remote Sens. Environ. 2011, 115, 130–142. [Google Scholar] [CrossRef]
  36. Hisham, M.; Yaakob, S.N.; Raof, R.; Nazren, A.A.; Wafi, N. Template matching using sum of squared difference and normalized cross correlation. In Proceedings of the 2015 IEEE student conference on research and development (SCOReD), Kuala Lumpur, Malaysia, 13–14 December 2015; pp. 100–104. [Google Scholar]
  37. Rao, Y.R.; Prathapani, N.; Nagabhooshanam, E. Application of normalized cross correlation to image registration. Int. J. Res. Eng. Technol. 2014, 3, 12–16. [Google Scholar]
  38. Wan, Y.; Liu, C.; Ma, T.; Qin, Y. Incoherent coded aperture correlation holographic imaging with fast adaptive and noise-suppressed reconstruction. Opt. Express 2021, 29, 8064–8075. [Google Scholar] [CrossRef]
  39. Kumar, R.; Anand, V.; Rosen, J. 3D single shot lensless incoherent optical imaging using coded phase aperture system with point response of scattered airy beams. Sci. Rep. 2023, 13, 2996. [Google Scholar] [CrossRef]
  40. Rosen, J.; Anand, V.; Rai, M.R.; Mukherjee, S.; Bulbul, A. Review of 3D imaging by coded aperture correlation holography (COACH). Appl. Sci. 2019, 9, 605. [Google Scholar] [CrossRef]
  41. Latychevskaia, T. Lateral and axial resolution criteria in incoherent and coherent optics and holography, near-and far-field regimes. Appl. Opt. 2019, 58, 3597–3603. [Google Scholar] [CrossRef]
Figure 1. The influence of different reconstruction distances on the image.
Figure 2. Flowchart of inverse cross-correlation digital holographic imaging.
Figure 3. Numerical calculation process of a large depth-of-field digital holography of multiplanar objects when subjected to inverse cross-correlation. (a) Optical setup. (b) Multi-plane objects. (c) Cross-correlation coefficient matrix. (d) Different plane ranges of different objects.
Figure 4. Intercept plane reconstruction results. (a) Different intercept planes. (b) Normalized reconstruction distance coefficients. (c) The reconstructed image at the maximum point.
Figure 5. Fused image.
Figure 6. The result of single-depth operators. (a) Normalized reconstruction distance coefficients. (b) The reconstructed image at the maximum point.
Figure 7. (a) Hologram. (b) Inverse cross-correlation coefficient matrix. (c) Schematic diagram of the rectangular box.
Figure 8. Phase distribution. (a) Single-depth estimation operator. (b) Proposed method.
Figure 9. (a) Hologram. (b) Inverse cross-correlation coefficient matrix. (c) Schematic diagram of the rectangular box of M × M pixels.
Figure 10. Phase distribution. (a) Single-depth estimation operator. (b) Proposed method.
Figure 11. Inverse cross-correlation matrices and their schematic diagram of the rectangular box. (a) Schematic diagram of the rectangular box at close proximity. (b) Schematic diagram of the rectangular box at 0.61 cm apart. (c) Inverse cross-correlation matrix at close proximity. (d) Inverse cross-correlation matrix at 0.61 cm apart.
Figure 12. Phase distribution.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
