Article

Analysis of the Image Magnification Produced by Inline Holographic Systems Based on the Double-Sideband Filter

Claudio Ramírez, Irene Estévez, Ángel Lizana, Juan Campos and Luisa García-Canseco
1 Departamento de Óptica, Microondas y Acústica, Instituto de Ciencias Aplicadas y Tecnología ICAT, Universidad Nacional Autónoma de México UNAM, Ciudad de México 04510, Mexico
2 Grupo de Óptica, Departamento de Física, Universitat Autònoma de Barcelona UAB, 08193 Bellaterra, Spain
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(12), 5118; https://doi.org/10.3390/app14125118
Submission received: 2 May 2024 / Revised: 4 June 2024 / Accepted: 10 June 2024 / Published: 12 June 2024

Abstract

In-line digital holography is a powerful tool widely used for imaging microscopic objects. Holographic systems are usually implemented in in-line or off-axis configurations, with in-line set-ups being preferable as they are less sensitive to mechanical vibrations and refractive index variations. However, in in-line systems an undesired blurred conjugate image is superimposed on the reconstructed object image. One strategy to remove the conjugate image contribution is to include a double-sideband filter at the Fourier plane of the system. After filtering, the data recorded at the CCD are processed to retrieve the magnitude and phase (hologram) of the diffracted wavefront while removing the conjugate image. Afterwards, a diffraction integral is used to digitally propagate the hologram. Despite the usefulness of this approach, there is no thorough analysis in the literature of the magnification of the final reconstructed image, an aspect that is crucial for the experimental application of the above-stated method. Under this scenario, a theoretical analysis of the longitudinal and transverse magnifications of the reconstructed images is provided in this work. The method is validated through simulations and experimental results obtained with different microscopic objects: glass microspheres, a micrometric reticle, and a USAF 1951 resolution test chart. The obtained results show that the combination of the magnification relations with methods for hologram propagation and optimal focused-image identification is effective for object position determination. This approach could be useful for 3D microparticle localization and monitoring with optimized magnification in real-time applications.

1. Introduction

Holography is an optical tool widely used for imaging microscopic objects in a wide range of applications and research disciplines [1,2,3,4,5,6,7]. In 1948, Dennis Gabor introduced a novel microscope that utilized the interference between the wavefront diffracted by microscopic objects and the illumination wavefront [8]. This configuration is known as an in-line or common-path interferometer because both the wavefront under test and the reference wavefront propagate along the same path to the recording medium (hologram). This interferometer is less sensitive to air flows and vibrations when compared with other optical configurations. Nevertheless, both the reconstructed image of the object and the conjugated defocused image appear superimposed, which notably degrades the final observation of the object. An option to eliminate the influence of the conjugated image is the off-axis interferometry technique, in which the object wavefront and the reference wavefront impinge on the recording medium from different angles [9,10]. In this case, the conjugated images are laterally shifted and do not superimpose. In an off-axis interferometer, the wavefronts propagate along separate paths, necessitating the use of additional optical components. However, these interferometers are more sensitive to changes in the refractive index of the medium and to component vibrations. The off-axis interferometers commonly used in digital holography are the Mach–Zehnder [11,12] and Twyman–Green [13] configurations.
In in-line interferometry, digital image processing methods are employed to minimize the influence of the superimposed conjugated image [14,15,16,17]. Other techniques involve blocking specific parts of the spatial frequency spectrum of the superposition of the object and reference wavefronts to eliminate one of the conjugated images [18,19]. A more recent approach implements an in-line interferometer that removes the degrading effect of the conjugated images by means of a polarization-based double-sideband filter (DSB) placed in the Fourier domain [20,21,22], arising as a useful tool for in-line holography. However, there is no analysis in the literature of the magnification of the resulting images, and this information is crucial for the method to be applied in real metrology or microparticle-characterization scenarios. In this framework, the current work aims to address this lack of information and to complete previous studies by presenting a thorough analysis of the image magnification associated with the in-line interferometer presented in [20,21,22].
The outline of this manuscript is as follows: in Section 2, we present the mathematical formulation related to the transverse and longitudinal magnifications of the reconstructed images obtained using an in-line interferometer with the DSB filter and a convergent lens. Next, in Section 3, we numerically simulate the imaging of microscopic objects with the in-line interferometer and the DSB filter. Furthermore, we introduce the evaluation criteria employed to identify the best-focused reconstructed image. Afterwards, we present the numerical results for the transverse and longitudinal magnifications. Subsequently, in Section 4, we describe the experimental setup implemented in the laboratory to validate the theoretical analysis of the transverse and longitudinal magnifications of the in-line interferometer with the DSB filter. We analyze the experimental results for the longitudinal and transverse magnifications of the reconstructed images of three microscopic objects: a USAF 1951 resolution test, glass microparticles, and a micrometric reticle (this latter object is provided in the Supplementary Materials). Finally, we present the main conclusions of the work in Section 5.

2. Theory

The in-line interferometer (IL) studied in this work uses a convergent lens to create the magnified image of the superimposed wavefronts on the recording medium. First, the transverse ($M_T$) and longitudinal ($M_L$) magnifications are calculated in terms of the focal length of the convergent lens, $f$, and the image distance, $s_i$. Afterwards, the magnifications $M_T$ and $M_L$ of the digitally reconstructed images obtained with the in-line interferometer are examined. In the following, we summarize the derivation of the main equations required in the presented method, but a complete derivation of such relations is presented in the Supplementary Materials.

2.1. The Magnifications ML and MT of the Images in the In-Line Interferometer

In an in-line interferometer, a microscopic object is illuminated with a collimated wavefront. Both the wavefront diffracted by the object and the collimated wavefront follow the same path and overlap with each other. In accordance with Figure 1a, the microscopic object is located at the plane $(x_0, y_0)$, and the superposition between the diffracted and collimated wavefronts takes place at a distance $\Delta z_o$, in the plane $(x_1, y_1)$. The lens L forms the magnified image of the wavefront superposition at the plane $(x_2, y_2)$, where a recording medium, from now on a CCD camera, is positioned.
The transverse magnification, $M_T$, and the longitudinal magnification, $M_L$, associated with the planes $(x_1, y_1)$ and $(x_2, y_2)$ are calculated as:
$$M_T = \frac{y_2}{y_1} = \frac{x_2}{x_1} = -\frac{s_i - f}{f},$$ (1)
$$M_L = \frac{dz_i}{dz_o} = -\frac{(s_i - f)^2}{f^2} = -M_T^2.$$ (2)
Equations (1) and (2) express the transverse and longitudinal magnifications of the images, respectively, as a function of the focal length $f$ and the distance $s_i$, and establish a relation between $M_T$ and $M_L$. Note that if the distance between the lens L and the CCD camera placed at the plane $(x_2, y_2)$ remains unaltered, both magnifications remain unchanged.
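As a quick numerical check of Equations (1) and (2), the short Python sketch below evaluates both magnifications for the lens and camera distances used later in Section 3 ($f = 85$ mm, $s_i = 347$ mm); it is only illustrative and assumes the sign convention written above.

```python
# Minimal numerical check of Equations (1) and (2).
# f and s_i below are the values used later in Section 3; any other pair works.
f = 85.0      # focal length of lens L, in mm
s_i = 347.0   # distance between the lens and the CCD plane (x2, y2), in mm

M_T = -(s_i - f) / f            # Eq. (1): transverse magnification
M_L = -(s_i - f) ** 2 / f ** 2  # Eq. (2): longitudinal magnification, M_L = -M_T**2

print(f"M_T = {M_T:.3f}, M_L = {M_L:.3f}")
# -> approximately -3.08 and -9.50, consistent with the -3.079 and -9.480 quoted in
#    Section 3 (small differences come from rounding of the quoted distances)
```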
The spatial–frequency spectrum of the superimposed wavefronts is observable at the focal length of the lens, in the Fourier plane $(\eta, \xi)$. Thus, the double-sideband filter (DSB) is positioned in this Fourier plane. The DSB filter first blocks one half of the frequency spectrum and, subsequently, the other half. Consequently, each of the two images formed by the lens in the plane $(x_2, y_2)$ possesses half of the frequency spectrum. The experimental scheme used to implement the DSB is shown in Figure 1b. The collimated wavefront is linearly polarized at 45° by the linear polarizer LP1. In front of the CCD, another linear polarizer, LP2, is positioned with its transmission axis at the same angle. The spatial light modulator SLM-LC is situated in the focal plane of the lens L to block half of the frequency spectrum in the following manner: a binary distribution of grey-level values is addressed to the SLM-LC such that one half of the SLM-LC screen induces a phase retardance of $\delta_1 = 0°$, while the other half induces a phase retardance of $\delta_2 = 180°$, as shown in part (I) of Figure 1b. In this way, the polarization state of the beam linearly polarized at 45° remains unchanged when passing through the half of the SLM-LC with zero retardance, and this beam is fully transmitted by the linear polarizer LP2. However, the beam that passes through the half of the SLM-LC screen with a retardance of 180° undergoes a rotation of 90° in its polarization state, resulting in linear polarization at 135°; as a result, this beam is blocked by the polarizer LP2. The frequency spectrum is centered at the boundary between the phase retardances $\delta_1$ and $\delta_2$ on the SLM-LC, which creates a knife-edge effect, blocking half of the frequency spectrum. Subsequently, a similar procedure is followed, where the phase retardances are interchanged to $\delta_1 = 180°$ and $\delta_2 = 0°$ and addressed to the SLM-LC, as shown in part (II) of Figure 1b, to block the other half of the frequency spectrum. More details about the implementation of the polarization-based DSB filter can be consulted in Refs. [20,21]. The DSB filter does not impact either the transverse or the longitudinal magnification, as these are solely determined by the focal length of the lens and the position of the CCD camera with respect to this lens. Finally, the CCD records the corresponding intensity images, which are later digitally processed on the computer to retrieve the magnitude and phase information (hologram) of the magnified wavefront in the plane $(x_2, y_2)$.
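To make the polarization mechanism of the DSB filter more concrete, the following sketch models it with Jones calculus. It is a simplified, idealized model (the SLM-LC half is treated as a pure retarder with its fast axis along the horizontal, and the helper names are ours), not the actual control code of the setup:

```python
import numpy as np

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer with its transmission axis at angle theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def retarder(delta):
    """Jones matrix of a retarder with retardance delta (rad) and fast axis along the horizontal."""
    return np.array([[1, 0], [0, np.exp(1j * delta)]])

e_in = np.array([1, 1]) / np.sqrt(2)        # beam linearly polarized at 45 deg by LP1
lp2 = linear_polarizer(np.deg2rad(45))      # analyzer LP2 in front of the CCD

for delta_deg in (0, 180):                  # the two halves of the SLM-LC screen
    e_out = lp2 @ retarder(np.deg2rad(delta_deg)) @ e_in
    transmitted = float(np.sum(np.abs(e_out) ** 2))
    print(f"retardance {delta_deg:3d} deg -> transmitted intensity {transmitted:.2f}")
# -> 1.00 for the 0 deg half (passes LP2) and 0.00 for the 180 deg half (blocked by LP2)
```

The complementary grey-level distribution of part (II) simply swaps which half is transmitted and which is blocked.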
In the computer, the digital hologram is numerically propagated by a distance $\Delta z_r$ to an arbitrary plane $(x_r, y_r)$, where the magnified and focused image is reconstructed. The longitudinal magnification $M_{L,0r}$ is determined by the relationship between the propagation distance of the diffracted object, $\Delta z_o$, and the reconstruction distance of the digitally focused image, $\Delta z_r$, as follows:
$$M_L = M_{L,0r},$$ (3)
$$\frac{dz_i}{dz_o} = \frac{\Delta z_r}{\Delta z_o}.$$ (4)
Considering that the longitudinal magnification remains constant between the object and image regions, and in accordance with Equation (2), the transverse magnification associated with the reconstructed image and the shifted object corresponds to the transverse magnification between the planes $(x_2, y_2)$ and $(x_1, y_1)$:
$$M_T = M_{T,0r},$$ (5)
$$\frac{x_r}{x_0} = \frac{x_2}{x_1}, \qquad \frac{y_r}{y_0} = \frac{y_2}{y_1}.$$ (6)
Equations (1)–(3) and (5) indicate that the transverse and longitudinal magnifications remain unchanged by the displacement $\Delta z_o$ of the object as long as the object is illuminated with a collimated beam and the distance $s_i$ between the lens and the CCD camera remains constant. The distance at which the digitally reconstructed focused image is located is:
$$\Delta z_r = M_L\,\Delta z_o.$$ (7)
The transverse coordinates of the digitally reconstructed image with respect to the object are:
$$x_r = M_T\,x_0,$$ (8)
$$y_r = M_T\,y_0.$$ (9)
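For illustration, Equations (7)–(9) map an object displacement and object coordinates directly onto the reconstruction plane. The sketch below uses the magnification values quoted in Section 3 and an arbitrary 5 mm displacement as an example:

```python
# Where the focused image of a displaced object reappears, Eqs. (7)-(9).
M_T, M_L = -3.079, -9.480    # magnifications quoted in Section 3

dz_o = 5.0                   # example axial displacement of the object, in mm
dz_r = M_L * dz_o            # Eq. (7): reconstruction distance of the focused image
x0 = 0.1392                  # example transverse coordinate, in mm (the 139.2 um bar group)
x_r = M_T * x0               # Eq. (8): transverse coordinate in the reconstructed image

print(f"dz_r = {dz_r:.2f} mm, x_r = {x_r:.4f} mm")  # -> dz_r = -47.40 mm, x_r = -0.4286 mm
```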

2.1.1. Registration of the Superimposed Wavefronts

The process of recovering the focused image is conducted in two stages. The first stage comprises the filtering of the spatial–frequency spectrum with the DSB filter, the magnification of the filtered image, and the intensity recording. The second stage involves the digital processing of the intensities to retrieve the complex amplitude of the diffracted image and its numerical propagation to the reconstruction plane to obtain the magnified focused image.
The diffracted wavefront propagates from the plane $(x_0, y_0)$ to the plane $(x_1, y_1)$, at a distance $d = \Delta z_o$. This propagation can be described by using the Fresnel diffraction equation [23,24,25]:
$$u_o(x_1, y_1) = \frac{\exp(ik\Delta z_o)}{i\lambda\,\Delta z_o} \iint u(x_0, y_0)\, \exp\!\left\{\frac{ik}{2\Delta z_o}\left[(x_0 - x_1)^2 + (y_0 - y_1)^2\right]\right\} dx_0\, dy_0,$$ (10)
where $k = 2\pi/\lambda$ is the wave number. The superposition between the diffracted and collimated wavefronts in the plane $(x_1, y_1)$, as is shown in Figure 2, can be written as:
$$u(x_1, y_1) = 1 + u_o(x_1, y_1).$$ (11)
Note that the assumption given in Equation (11) is a good approximation when the analyzed objects are small in comparison with the size of the illuminating beam, as is the case in the experiments provided in the following sections, but it is not a suitable approximation in other scenarios, for instance, quantitative phase imaging.
The frequency spectrum of the superposition of wavefronts is observable in the focal plane of the lens, $(\eta, \xi)$, and can be mathematically represented as the Fourier transform of Equation (11):
$$U(\eta, \xi) = \delta(\eta, \xi) + U_o(\eta, \xi).$$ (12)
At the focal length of the lens, the double-sideband filter blocks the first half of the frequency spectrum, corresponding to $\eta > 0$. The superposition of the wavefronts, with half of the frequency spectrum blocked, can be computed as the inverse Fourier transform of Equation (12) with the integration boundaries in the $\eta$ direction ranging from $-\infty$ to 0. The resulting image $u_i(x_2, y_2)$ is a scaled and inverted version of $u_o(x_1, y_1)$:
$$\left. u_i(x_2, y_2)\right|_{\eta>0} = \exp\!\left[ik\left(s_o + s_i\right)\right] \frac{1}{M_T}\, \left. u_o\!\left(\frac{x_2}{M_T}, \frac{y_2}{M_T}\right)\right|_{\eta>0}.$$ (13)
Here, $u_o(x_2/M_T, y_2/M_T)$ is the magnified and inverted ideal image of the diffracted wavefront $u_o(x_1, y_1)$, under the assumption that lens aberrations are negligible and the diameter of the lens pupil is larger than the transverse section of the wavefront.
The recorded intensity in the plane $(x_2, y_2)$ corresponds to the squared modulus of $u_i(x_2, y_2)$:
$$\left|\left. u_i(x_2, y_2)\right|_{\eta>0}\right|^2 = \frac{1}{M_T^2} \left|\left. u_o\!\left(\frac{x_2}{M_T}, \frac{y_2}{M_T}\right)\right|_{\eta>0}\right|^2.$$ (14)
Analogously, the other half of the frequency spectrum, corresponding to $\eta < 0$, is blocked at the Fourier plane to obtain the subsequent intensity:
$$\left|\left. u_i(x_2, y_2)\right|_{\eta<0}\right|^2 = \frac{1}{M_T^2} \left|\left. u_o\!\left(\frac{x_2}{M_T}, \frac{y_2}{M_T}\right)\right|_{\eta<0}\right|^2.$$ (15)
Finally, both intensities, given by Equations (14) and (15), are combined to obtain the intensity corresponding to the object's full spectrum. In this way, the magnitude and phase information (digital hologram) of the diffracted wavefront is retrieved:
$$u_i(x_2, y_2) \approx \frac{1}{2 M_T^2}\, u_o\!\left(\frac{x_2}{M_T}, \frac{y_2}{M_T}\right).$$ (16)
Importantly, note how the DSB filter does not alter either the transverse or longitudinal magnifications of the diffracted wavefront.
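The filtering stage described by Equations (12)–(15) can be mimicked numerically by masking half of the FFT of the superposed field. The sketch below is an idealized simulation of the recording stage (it ignores the lens scaling and phase factors, takes $\eta$ as the horizontal frequency axis, and uses a synthetic small object, so all values are illustrative):

```python
import numpy as np

def half_spectrum_intensity(u, block_positive_eta=True):
    """Blocks one half-plane of the spatial-frequency spectrum of the field u
    (the DSB filtering step) and returns the intensity that reaches the CCD."""
    U = np.fft.fftshift(np.fft.fft2(u))      # spectrum with the zero frequency at the center
    ny, nx = U.shape
    mask = np.zeros_like(U)
    if block_positive_eta:
        mask[:, : nx // 2] = 1.0             # keep eta < 0 (the eta > 0 half is blocked)
    else:
        mask[:, nx // 2 :] = 1.0             # keep eta > 0 (the eta < 0 half is blocked)
    u_filtered = np.fft.ifft2(np.fft.ifftshift(U * mask))
    return np.abs(u_filtered) ** 2           # the CCD records intensity only

# Synthetic example: unit illumination plus a weak, small absorbing disc (Eq. (11) assumption)
n = 256
y, x = np.mgrid[-n // 2 : n // 2, -n // 2 : n // 2]
u = 1.0 - 0.5 * (x**2 + y**2 < 10**2)        # superposition of collimated beam and small object

I_block_pos = half_spectrum_intensity(u, block_positive_eta=True)   # intensity of Eq. (14)
I_block_neg = half_spectrum_intensity(u, block_positive_eta=False)  # intensity of Eq. (15)
print(I_block_pos.shape, I_block_neg.shape)  # two recorded frames of the same size
```

In the experiment, the two recorded intensities are then combined to retrieve the complex hologram of Equation (16); that retrieval step follows Refs. [20,21] and is not reproduced here.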

2.1.2. Image Reconstruction

The hologram $u_i(x_2, y_2)$ is propagated a distance $\Delta z_r$ to reconstruct the focused image $u_r(x_r, y_r)$, as is shown in Figure 3. The Fresnel diffraction integral is used to describe the propagation of $u_i(x_2, y_2)$ as follows:
$$u_r(x_r, y_r) = \frac{\exp(ik\Delta z_r)}{i\lambda\,\Delta z_r} \iint u_i(x_2, y_2)\, \exp\!\left\{\frac{ik}{2\Delta z_r}\left[(x_2 - x_r)^2 + (y_2 - y_r)^2\right]\right\} dx_2\, dy_2.$$ (17)
Afterwards, the magnified diffracted image given by Equation (16) is substituted into Equation (17), leading to the wavefront of the reconstructed image as follows:
$$u_r(x_r, y_r) = \exp\!\left[ik\left(\Delta z_o + \Delta z_r\right)\right] \frac{1}{2}\, u_o\!\left(\frac{x_r}{M_T}, \frac{y_r}{M_T}\right).$$ (18)
The complete derivation of the wavefront propagation from the object plane to the focused image plane is shown in the Supplementary Materials.
The numerically reconstructed focused image is a version of the object, scaled and inverted. The coordinates of the object and image are given by Equations (7)–(9).
Based on the relations obtained above, the following main conclusions can be stated:
(1) Both the transverse and the longitudinal magnification can be determined from the focal length $f$ of the lens and the distance $s_i$ between the lens and the CCD camera.
(2) The reconstruction distance $\Delta z_r$ depends on the longitudinal magnification $M_L$ and the object displacement $\Delta z_o$. Meanwhile, the transverse magnification $M_T$ of the reconstructed image remains unchanged with the axial displacement of the object.

2.2. Wavefront Propagation

To simulate the propagation of the diffracted wavefront from the displaced object plane $(x_0, y_0)$ to the plane $(x_1, y_1)$, at a distance $\Delta z_o$, we used the Rayleigh–Sommerfeld diffraction equation [26]:
$$u_o(x_1, y_1) = \iint U(f_{x_0}, f_{y_0})\, \exp\!\left[i\frac{2\pi \Delta z_o}{\lambda}\sqrt{1 - \lambda^2\left(f_{x_0}^2 + f_{y_0}^2\right)}\right] \exp\!\left[i2\pi\left(f_{x_0} x_1 + f_{y_0} y_1\right)\right] df_{x_0}\, df_{y_0},$$ (19)
where $U(f_{x_0}, f_{y_0})$ is the numerical Fourier transform of the distribution of transmittance of the object, $u(x_0, y_0)$:
$$U(f_{x_0}, f_{y_0}) = \iint u(x_0, y_0)\, \exp\!\left[-i2\pi\left(x_0 f_{x_0} + y_0 f_{y_0}\right)\right] dx_0\, dy_0.$$ (20)
Note that, according to the Shannon sampling theorem [24], to implement Equation (19), the distance between the object and the propagated plane should satisfy $\Delta z_o \leq N_{pix}\,\delta x^2/\lambda$, where $\delta x$ is the sensor's pixel pitch, $N_{pix}$ is the number of pixels, and $\lambda$ is the illuminating wavelength.
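Equations (19) and (20) are the angular-spectrum form of the Rayleigh–Sommerfeld propagator and map directly onto a pair of FFTs. The sketch below is a minimal implementation under the stated sampling condition (the function and variable names are ours, and evanescent components are simply suppressed); it is the kind of routine used both for the forward propagation and for the hologram reconstruction of Equation (17):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dz, dx):
    """Propagates a sampled complex field u0 by a distance dz (Eq. (19)),
    with pixel pitch dx, using the angular-spectrum (Rayleigh-Sommerfeld) kernel."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)                 # spatial frequencies f_x0
    fy = np.fft.fftfreq(ny, d=dx)                 # spatial frequencies f_y0
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength ** 2) * (FX ** 2 + FY ** 2)
    arg[arg < 0] = 0.0                            # keep propagating waves only
    H = np.exp(1j * 2.0 * np.pi * dz / wavelength * np.sqrt(arg))  # transfer function
    U0 = np.fft.fft2(u0)                          # Eq. (20): spectrum of the input field
    return np.fft.ifft2(U0 * H)                   # back to the spatial domain

# Example with the sampling used in Section 3: 3.45 um pixels, 632.8 nm, 5 mm propagation
dx, lam = 3.45e-6, 632.8e-9
u0 = np.ones((512, 512), dtype=complex)           # plane wave as a trivial test field
u1 = angular_spectrum_propagate(u0, lam, 5e-3, dx)
print(np.allclose(np.abs(u1), 1.0))               # a plane wave keeps unit amplitude -> True
```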
Finally, we also analyze the optimal reconstruction distance for the magnified and focused image, which is determined by evaluating the sparsity of the reconstructed image $u_r(x_r, y_r)$. In this study, three criteria were employed to this aim: the Tamura coefficient, the Gini Index, and the entropy. For each measurement, the image was reconstructed at various distances, $\Delta z_r$, until reaching a maximum or minimum of the sparsity criteria, which corresponds to the distance of the best focused image.
The Tamura coefficient (TC) is calculated based on the mean value $\bar{u}_r$ and standard deviation $\sigma_{u_r}$ of the pixels in the reconstructed image $u_r(x_r, y_r)$, as given by the following equation [27,28]:
$$\mathrm{TC}(\Delta z_r) = \sqrt{\frac{\sigma_{u_r}}{\bar{u}_r}}.$$ (21)
At the best focused distance, the standard deviation and the Tamura coefficient values have their minimum value. However, for other reconstruction distances, Δzr, both the standard deviation and TC values increase.
The Gini Index (GI) is a measure of the sparsity level of the grey levels in the image [29,30]. On the one hand, when the image intensity is concentrated in a small region (few pixels), the GI value increases. On the other hand, when the intensity is sparse throughout the whole image, the GI value decreases. The Gini Index is calculated using the following equation:
$$\mathrm{GI}(\underline{f}) = 1 - 2\sum_{k=1}^{N} \frac{f_k}{\left\|\underline{f}\right\|_1} \left(\frac{N - k + \tfrac{1}{2}}{N}\right),$$ (22)
where $\underline{f}$ is a vector containing the intensity values of the image pixels, arranged in ascending order from the least to the greatest values, $\underline{f} = [f_1, \ldots, f_N]$, and $\left\|\underline{f}\right\|_p = \sqrt[p]{f_1^p + \cdots + f_N^p}$ is the $p$-norm.
The entropy criterion (Ent) measures the level of randomness in an image. The entropy is calculated based on the image histogram, which represents the occurrence frequency of different grey levels present in the image [31]. When the image is focused, the randomness of the pixel values is minimized. Conversely, in a defocused image, the randomness of the pixel values increases. The entropy of an image is calculated as follows:
$$\mathrm{Ent} = -\sum_{n=1}^{N} P_n \log(P_n),$$ (23)
where $P_n$ represents the probability of occurrence (relative frequency) of grey level $n$ in the image. In this case, the minimum value of the entropy indicates the optimal focused distance $\Delta z_r$.
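For reference, the three criteria of Equations (21)–(23) can be written compactly in NumPy. The sketch below is our own minimal implementation, with the reconstructed amplitude treated as a non-negative array and the entropy histogram taken over 256 grey-level bins (an assumption, since the binning is not specified here):

```python
import numpy as np

def tamura(img):
    """Tamura coefficient, Eq. (21): sqrt(std / mean) of the reconstructed amplitude."""
    return np.sqrt(np.std(img) / np.mean(img))

def gini(img):
    """Gini Index, Eq. (22): sparsity of the sorted grey-level vector."""
    f = np.sort(np.abs(img).ravel())              # ascending order, f_1 <= ... <= f_N
    N = f.size
    l1 = np.sum(f)                                # l1 norm of the vector
    k = np.arange(1, N + 1)
    return 1.0 - 2.0 * np.sum(f / l1 * (N - k + 0.5) / N)

def entropy(img):
    """Entropy, Eq. (23), from a 256-bin histogram of the grey levels."""
    hist, _ = np.histogram(img, bins=256)
    p = hist / hist.sum()                         # occurrence probabilities P_n
    p = p[p > 0]                                  # avoid log(0)
    return -np.sum(p * np.log(p))
```

As a sanity check, a perfectly uniform image gives a Gini Index of 0, while an image whose intensity is concentrated in a single pixel gives a value close to 1.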
In each measurement, the image is reconstructed at several propagation distances, $\Delta z_r$, around the theoretical reconstruction distance $\Delta z_r = M_L\,\Delta z_o$. The images are evaluated using the three sparsity criteria to determine the reconstruction distance that produces the best focused image, as is shown in Figure 4. Note how the three criteria lead to very similar results in terms of the best focused plane for the evaluated image.
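Combining the propagation routine and the focus metrics sketched above, the search for the best-focused plane reduces to a one-dimensional scan around the theoretical distance; the scan range and step below are arbitrary illustrative choices:

```python
import numpy as np

# Assumes angular_spectrum_propagate() and tamura() from the previous sketches,
# a retrieved complex hologram `hologram`, and the pixel pitch dx / wavelength lam.
def autofocus(hologram, lam, dx, dz_theory, span=2e-3, step=0.1e-3):
    """Scans reconstruction distances around dz_theory and returns the distance
    that minimizes the Tamura coefficient (taken here as the focus criterion)."""
    distances = np.arange(dz_theory - span, dz_theory + span + step, step)
    scores = [tamura(np.abs(angular_spectrum_propagate(hologram, lam, dz, dx)))
              for dz in distances]
    return distances[int(np.argmin(scores))]
```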

3. Simulation

We calculated the theoretical magnification of the image formed by the lens L in Figure 1a using Equation (1) for the transverse magnification and Equation (2) for the longitudinal magnification. The simulated focal length was $f = 85$ mm and the distance between the lens and the registration plane $(x_2, y_2)$ was $s_i = 347$ mm. The obtained transverse magnification was $M_T = -3.079$ and the calculated longitudinal magnification was $M_L = -9.480$. In the simulations, $x_o$ was the width of a small area of the simulated microscopic objects (see Figure 5).
The microscopic objects selected for the simulations were (a) a USAF 1951 resolution test and (b) a microsphere, both shown in Figure 5. An extra simulation was also conducted with a microscope reticle, but the corresponding analysis is provided in the Supplementary Materials to keep the manuscript concise. In the case of the resolution test chart, the line widths in micrometers (element number in parentheses) are, for group 4: 31.25 (1), 27.84 (2), 24.80 (3), 22.10 (4), 19.69 (5), 17.54 (6); for group 5: 15.63 (1), 13.92 (2), 12.40 (3), 11.05 (4), 9.84 (5), 8.77 (6); for group 6: 7.81 (1), 6.96 (2), 6.20 (3), 5.52 (4), 4.92 (5), 4.38 (6); and for group 7: 3.91 (1), 3.48 (2), 3.10 (3), 2.76 (4), 2.46 (5), 2.19 (6). The width of the five vertical bars (three dark bars and two bright bars) corresponding to element 2 of group 4 of the resolution test is $x_o = 139.2\ \mu m$, as is shown in Figure 5a. Finally, in Figure 5b, the radius of the microsphere is $x_o = 50\ \mu m$.
In the simulation, the objects are shifted in steps of 1 mm in the axial direction, covering a range from $\Delta z_o = -10$ mm to $\Delta z_o = +10$ mm. At each displacement, the focused images were reconstructed, and the longitudinal and transverse magnifications were calculated according to Equations (7) and (8), respectively. As noted for Equation (19), its validity is ensured by satisfying the Shannon sampling condition. In our case, the experimental conditions were $\delta x = 3.45\ \mu m$ for the sensor's pixel pitch, $N_{pix} = 1500$ pixels, and $\lambda = 632.8$ nm for the wavelength, thus ensuring the use of the Rayleigh–Sommerfeld propagation in adequate conditions ($N_{pix}\,\delta x^2/\lambda = 28.21\ \mathrm{mm} > 20\ \mathrm{mm} = \Delta z_o$ range).
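The quoted sampling bound can be reproduced in one line with the sensor and wavelength values given above:

```python
# Shannon-type sampling bound for the propagation of Eq. (19)
N_pix, dx, lam = 1500, 3.45e-6, 632.8e-9      # pixels, pixel pitch (m), wavelength (m)
print(f"{N_pix * dx**2 / lam * 1e3:.2f} mm")  # -> 28.21 mm, above the 20 mm scan range
```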

3.1. Simulation Results

The simulation results of the transverse and longitudinal magnifications of the reconstructed images are presented for the two objects under test: the USAF 1951 resolution test (Section 3.1.1) and the microsphere (Section 3.1.2). Simulations related to an extra example, a microscope reticle, can be found in the Supplementary Materials.

3.1.1. Resolution Test USAF 1951

A reconstructed focused image of the simulated resolution test chart is shown in Figure 6a. Note that, compared with Figure 5a, the object is reasonably well reconstructed, but some artifacts can be observed in the reconstructed image. These artifacts are related to the sampling of the original object: the simulated objects are binary and, therefore, not band-limited. As will be seen in the following subsections, this same situation is repeated for the two objects studied in this work. However, we have chosen these examples because they are useful for relating the magnification results obtained from simulations and experiments, which is one of the main goals of this work.
At each displacement, Equation (8) is used to calculate $M_T$; here, $x_r$ is the width of the five vertical bars (three dark bars and two bright bars) corresponding to element 2 of group 4 in the reconstructed image, as is shown in Figure 6b.
Figure 7a shows that the calculated value of $M_L$ remains practically unchanged and close to the theoretical value regardless of the displacement distance $\Delta z_o$ of the resolution test chart. Similarly, the values of $M_T$ calculated at the same displacement distances $\Delta z_o$ remain close to the theoretical value, as is shown in Figure 7b. The transverse magnification $M_T$ was calculated after determining the reconstruction distance $\Delta z_r$ of the best focused image.

3.1.2. Microspheres

Figure 8a shows the reconstructed and focused image of a glass microsphere. As shown in Figure 8b, $x_r = \varnothing_{img}/2$ is the radius of the microsphere in the reconstructed image, where $\varnothing_{img}$ is its diameter measured as the full width at half maximum.
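The diameter in the reconstructed image is obtained from the full width at half maximum of a line profile across the microsphere (Figure 8b). A minimal sketch of such a measurement is given below (the synthetic profile and the helper name are ours; a dark sphere would use the inverted intensity profile in practice):

```python
import numpy as np

def fwhm_diameter(profile, pixel_size):
    """Estimates a diameter from a 1-D intensity profile across the object,
    as the full width at half maximum, in the same units as pixel_size."""
    profile = np.asarray(profile, dtype=float)
    half = (profile.max() + profile.min()) / 2.0   # half-maximum level
    above = np.where(profile >= half)[0]           # indices above the half level
    return (above[-1] - above[0]) * pixel_size     # width in physical units

# Illustrative, synthetic bell-shaped profile sampled on the CCD pixel pitch
profile = np.exp(-np.linspace(-3, 3, 61) ** 2)
print(fwhm_diameter(profile, pixel_size=3.45e-6))  # width of the synthetic peak, in meters
```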
Figure 9a shows the calculated values of $M_L$ obtained with different object displacements. The values of $M_T$ as a function of the object displacement $\Delta z_o$ are plotted in Figure 9b. The higher variation in the transverse magnification values is due to the low number of pixels used to draw the microsphere in the simulation; the region of interest of the simulated glass sphere comprised an area of only 115 × 115 pixels.
The average and standard deviation of the transverse and longitudinal magnification values presented in Figure 7 and Figure 9 were calculated, and the results are summarized in Table 1. The values with the least standard deviation are shaded.
The Tamura coefficient and the entropy exhibit the smallest standard deviation, indicating that they are the most effective evaluation criteria for obtaining the best focused images. Additional processing is required for the reconstructed images before they can be evaluated using focusing criteria.

4. Experimental Validation

The in-line interferometer with the DSB filter shown in Figure 10 was implemented to experimentally validate the theoretical analysis of the transverse and longitudinal magnifications of the reconstructed images.
The microscopic objects were illuminated using an expanded and collimated laser beam, with a wavelength of $\lambda = 632.8$ nm, manufactured by Lumentum. The lens L2, with a focal length $f = 85$ mm, creates an image of the object plane, located at $s_o = 113$ mm, at the image plane, located at $s_i = 347$ mm, where a CCD (model Blackfly BFS-U3-31S4M-C, manufactured by Flir) was positioned. The intensity recorded by the CCD was proportional to the square of the complex amplitude in the image plane, resulting in the loss of phase information. Consequently, during the reconstruction of the focused image, the conjugated defocused image was superimposed. To reconstruct the focused image without the influence of the conjugated image, it was essential to have complex amplitude data, which included both magnitude and phase information.
The DSB filter was employed in the in-line interferometer to obtain magnitude and phase information (digital hologram). The experimental scheme used to implement the DSB followed the proposal given in [25]. A description of the experimental arrangement used to implement the DSB filter is provided in the Supplementary Materials.
The transverse and longitudinal magnifications of the reconstructed images were calculated for the following microscopic objects: (a) a USAF 1951 resolution test and (b) glass microspheres (diameter: 14.5 µm ± 1 µm), as shown in Figure 11. In agreement with the simulation section, an extra example, a microscope reticle, was also experimentally studied, and the obtained results are provided in the Supplementary Materials. The microscopic objects were placed on a linear translation stage equipped with a vernier micrometer with a resolution of 10 µm. The objects were shifted in steps of 1 mm in the axial direction, covering a range from $\Delta z_o = -10$ mm to $\Delta z_o = +10$ mm. At each displacement, the focused images were reconstructed.

4.1. Experimental Results

Hereafter, the experimental results for the transverse and longitudinal magnifications of the reconstructed images are presented for the two objects under test: the USAF 1951 resolution test (Section 4.1.1) and the glass microspheres (Section 4.1.2).

4.1.1. Resolution Test USAF 1951

The magnifications $M_L$ and $M_T$ as a function of the displacement distance are shown in Figure 12. The notable variation in the $M_L$ values observed in Figure 12a, compared with the simulation values shown in Figure 7a, could be attributed to the error introduced by the micrometer controlling the axial shifts of the linear translation stage; consequently, variations become evident in the $\Delta z_o$ displacements. In contrast, the $M_T$ values exhibit greater stability, as is shown in Figure 12b. These values were calculated by dividing the width of the five vertical bars (belonging to element 2, group 4) measured in the reconstructed image by the width of the same five bars specified in the resolution test specifications.
Note that, on the one hand, the error bars in Figure 12a are associated with the micrometer resolution of the linear translation stage ($\Delta z \pm 20\ \mu m$). These error bars are centered around the theoretical value of the longitudinal magnification, $M_L = -9.480 \pm 0.1896$. On the other hand, the error bars in Figure 12b are attributed to the resolution of the precision ruler utilized for measuring the image distance ($s_i \pm 1$ mm). In this case, the error bars are centered on the theoretical value of the transverse magnification, $M_T = -3.079 \pm 0.01176$. For the sake of quantification, Table 2 presents the maximum and minimum values of the longitudinal and transverse magnifications, accounting for experimental errors.
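The size of these error bars follows from first-order error propagation under the stated resolutions (±20 µm on each 1 mm axial step for $M_L$, and ±1 mm on $s_i$ for $M_T$); the short sketch below reproduces the two uncertainties:

```python
# Error bars of Figure 12, reproduced by first-order error propagation
f, s_i = 85.0, 347.0          # mm
M_T, M_L = -3.079, -9.480     # theoretical magnifications (Section 3)

dz_step, dz_err = 1.0, 0.020  # 1 mm axial steps read with a +/-20 um micrometer, in mm
dM_L = abs(M_L) * dz_err / dz_step      # relative error of the step maps onto M_L
print(f"M_L = {M_L} +/- {dM_L:.4f}")    # -> +/- 0.1896

s_i_err = 1.0                 # +/-1 mm ruler resolution on the lens-CCD distance
dM_T = s_i_err / f                      # from Eq. (1), |dM_T/ds_i| = 1/f
print(f"M_T = {M_T} +/- {dM_T:.5f}")    # -> +/- 0.01176
```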

4.1.2. Glass Microspheres

The glass microspheres were shifted over a range of displacement distances $\Delta z_o$ from −10 mm to +7 mm, in increments of 1 mm. Figure 13a shows the experimental values of $M_L$ obtained for different object displacements. In the calculation of the transverse magnification, $x_0 = \varnothing_{sph}/2$ represents the radius of the microspheres, with $\varnothing_{sph} = 14.5\ \mu m$, and $x_r = \varnothing_{img}/2$ is the corresponding radius of a microsphere in the reconstructed image, with $\varnothing_{img}$ its measured diameter. The experimental values of $M_T$ as a function of $\Delta z_o$ are plotted in Figure 13b. In this case, the variation in the experimental values of the transverse magnification was greater than that calculated in the simulation. This was attributed to the fact that the magnified image of the wavefront diffracted by the microsphere was very small and was recorded in just a few pixels of the CCD. As a result, tiny variations in the number of pixels of the reconstructed image represent a significant change in the calculated size of the image.
The average and standard deviation of the transverse and longitudinal magnification values were calculated for the same two microscopic objects studied. The corresponding results are summarized in Table 3. In almost all cases, the average values fall within the maximum and minimum boundaries of the expected magnifications, considering experimental errors. Note that the values with the least standard deviation for each object are shaded in Table 3. The Tamura coefficient and the entropy exhibit the smallest standard deviation, indicating that they are the most effective evaluation criteria for obtaining the best focused images. In the case of the image reconstruction of the glass microsphere, the standard deviation of the transverse magnification was higher, as observed in Figure 13b.

5. Discussion and Conclusions

Holographic interferometry is a mature optical tool of interest in multiple imaging applications. In-line holographic configurations are especially interesting when certain error sources, such as those related to mechanical vibrations or refractive index fluctuations, need to be minimized. However, these configurations have the disadvantage of presenting the conjugated images superimposed on the reconstructed ones, which degrades the quality of the final reconstructed images. Under this scenario, some authors have presented a combination of the in-line interferometer with the DSB filter, demonstrating the usefulness of the technique once the conjugated images are removed. However, no analysis of the magnification associated with such systems had been presented in the literature, this information being crucial for efficiently applying the method in real applications.
In this work, we provide a thorough study of the magnification relations (the transverse and longitudinal magnifications) associated with an in-line interferometer system based on a DSB filter. This analysis is combined with methods for hologram propagation and optimal focused image identification. Results provided in this work highlight the suitability of the approach for object position determination.
In particular, the presented simulations include computing the diffracted wavefront by microscopic objects, applying the Rayleigh–Sommerfeld diffraction equation to model the propagation of the diffracted wavefront. The numerical Fourier and the inverse Fourier transforms were used to block half of the spatial–frequency spectrum of the diffracted wavefront. Additional computing processes were used for simulating the transverse magnification produced by the optical system. Finally, the magnified focused image was reconstructed using the Rayleigh–Sommerfeld equation to propagate the digital hologram to the reconstruction plane. The reconstruction distance, related to longitudinal magnification, was estimated using three focusing criteria: the Tamura coefficient, the Gini Index, and the entropy. The Tamura coefficient and the entropy demonstrated the lowest standard deviation, suggesting that they are the most effective evaluation criteria for achieving the best focused images. The simulation results have confirmed the validity of our theoretical analysis. The fluctuations of the magnification values are not greater than 1%.
The presented methods were experimentally validated by analyzing the images obtained of three different micro-objects: glass microspheres, a micrometric reticle, and a resolution test chart USAF 1951. Experimental average values of the magnifications fall within the expected range, considering the experimental errors. Additional processing is required for the reconstructed images before they can be evaluated using focusing criteria.
The proposed configuration ensures that the reconstruction of 3D objects and the trajectory of moving objects will be free from distortions.
In summary, the provided method is able to determine, with precision, the magnification of the reconstructed image obtained with an inline holography system based on the double-sideband filter. We have demonstrated that the longitudinal and transverse magnifications depend on the focal length of the lens and the distance between the lens and the camera. We used different focusing criteria to obtain the reconstructed and focused images; the Tamura criterion was the most accurate, both in the simulations and in the experimental validation. This study is useful for real-time measurements of the movement of microparticles, for the 3D reconstruction of microscopic objects, for the characterization of flows in microfluidic systems, etc.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app14125118/s1.

Author Contributions

Conceptualization, C.R., J.C. and Á.L.; methodology, C.R.; software, C.R.; validation, C.R. and L.G.-C.; formal analysis, C.R.; investigation, C.R. and L.G.-C.; resources, C.R. and Á.L.; data curation, Á.L.; writing—original draft preparation, C.R.; writing—review and editing, Á.L., J.C., I.E. and L.G.-C.; visualization, C.R.; funding acquisition, C.R. and Á.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by DGAPA-UNAM through the PAPIIT projects TA101020 and IG100121, by the Ministerio de Ciencia e Innovación and Fondos FEDER (PID2021-126509OB-C21 and PDC2022-133332-C21), and by the Generalitat de Catalunya (2021SGR00138).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

Luisa García-Canseco acknowledges her master’s scholarship from CONAHCyT Mexico. IE acknowledges financial support from a Beatriu de Pinós Fellowship (2021-BP-00206).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Birdi, J.; Rajora, S.; Butola, M.; Khare, K. True 3D reconstruction in digital holography. J. Phys. Photonics 2020, 2, 044004. [Google Scholar] [CrossRef]
  2. Snyder, K.; Grier, D.G. Aberration compensation for enhanced holographic particle characterization. Opt. Express 2023, 31, 35200–35207. [Google Scholar] [CrossRef]
  3. Huang, L.; Liu, T.; Yang, X.; Luo, Y.; Rivenson, Y.; Ozcan, A. Holographic Image Reconstruction with Phase Recovery and Autofocusing Using Recurrent Neural Networks. ACS Photonics 2021, 8, 1763–1774. [Google Scholar] [CrossRef]
  4. Dyomin, V.; Davydova, A.; Polovtsev, I. Geometric-Optical Model of Digital Holographic Particle Recording System and Features of Its Application. Photonics 2024, 11, 73. [Google Scholar] [CrossRef]
  5. Norazman, S.H.B.; Nakamura, T.; Yamaguchi, M. Digital holography-assisted 3-D bright-field image reconstruction and refocusing. Opt. Rev. 2020, 27, 455–464. [Google Scholar] [CrossRef]
  6. Memmolo, P.; Miccio, L.; Finizio, A.; Netti, P.A.; Ferraro, P. Holographic tracking of living cells by three-dimensional reconstructed complex wavefronts alignment. Opt. Lett. 2014, 39, 2759–2762. [Google Scholar] [CrossRef]
  7. Paturzo, M.; Pagliarulo, V.; Bianco, V.; Memmolo, P.; Miccio, L.; Merola, F.; Ferraro, P. Digital Holography, a metrological tool for quantitative analysis: Trends and future applications. Opt. Lasers Eng. 2018, 104, 32–47. [Google Scholar] [CrossRef]
  8. Gabor, D. A new microscopic principle. Nature 1948, 4098, 777–778. [Google Scholar] [CrossRef]
  9. Leith, E.N.; Upatnieks, J. Reconstructed wavefronts and communication theory. J. Opt. Soc. Am. 1962, 52, 1123–1130. [Google Scholar] [CrossRef]
  10. Zhang, Y.; Yao, Y.; Zhang, J.; Liu, J.P.; Poon, T.C. Off-axis optical scanning holography. J. Opt. Soc. Am. 2022, 39, A44–A51. [Google Scholar] [CrossRef]
  11. Shaked, N.T.; Mico, V.; Trusiak, M.; Kus, A.; Mirsky, S.K. Off-axis digital holographic multiplexing for rapid wavefront acquisition and processing. Adv. Opt. Photonics 2020, 12, 556–611. [Google Scholar] [CrossRef]
  12. Anand, A.; Chhaniwal, V.K.; Javidi, B. Real-time digital holographic microscopy for phase contrast 3D imaging of dynamic phenomena. J. Disp. Technol. 2010, 6, 500–505. [Google Scholar] [CrossRef]
  13. Frenklach, I.; Girshovitz, P.; Shaked, N.T. Off-axis interferometric phase microscopy with tripled imaging area. Opt. Lett. 2014, 39, 1525–1528. [Google Scholar] [CrossRef]
  14. De Almeida, J.L.; Comunello, E.; Sobieranski, A.; Fernandes, A.M.R.; Cardoso, G.S. Twin-image suppression in digital in-line holography based on wave-front filtering. Pattern Anal. Appl. 2021, 24, 907–914. [Google Scholar] [CrossRef]
  15. Oe, K.; Nomura, T. Twin-image reduction method using a diffuser for phase imaging in-line digital holography. Appl. Opt. 2018, 57, 5652–5656. [Google Scholar] [CrossRef]
  16. Zhang, W.; Cao, L.; Brady, D.J.; Zhang, H.; Cang, J.; Zhang, H.; Jin, G. Twin-Image-Free Holography: A Compressive Sensing Approach. Phys. Rev. Lett. 2018, 121, 093902. [Google Scholar] [CrossRef]
  17. Singh, D.K.; Panigrahi, P.K. Improved digital holographic reconstruction algorithm for depth error reduction and elimination of out-of-focus particles. Opt. Exp. 2010, 18, 2426–2448. [Google Scholar] [CrossRef]
  18. Bryngdahl, O.; Lohmann, A. Single-sideband holography. JOSA 1968, 58, 620–624. [Google Scholar] [CrossRef]
  19. Kreis, T.M. Frequency analysis of digital holography. Opt. Eng. 2002, 41, 771–778. [Google Scholar] [CrossRef]
  20. Ramírez, C.; Lizana, A.; Iemmi, C.; Campos, J. Inline digital holographic movie based on a double-sideband filter. Opt. Lett. 2015, 40, 4142–4145. [Google Scholar] [CrossRef]
  21. Ramírez, C.; Lizana, A.; Iemmi, C.; Campos, J. Method based on the double sideband technique for the dynamic tracking of micrometric particles. J. Opt. 2016, 18, 065603. [Google Scholar] [CrossRef]
  22. Zhang, H.; Monroy-Ramirez, F.A.; Lizana, A.; Iemmi, C.; Bennis, N.; Morawiak, P.; Piecek, W.; Campos, J. Wavefront imaging by using an inline holographic microscopy system based on a double-sideband filter. Opt. Lasers Eng. 2019, 113, 71–76. [Google Scholar] [CrossRef]
  23. Li, J.C.; Peng, Z.; Tankam, P.; Song, Q.; Picart, P. Digital holographic reconstruction of a local object field using an adjustable magnification. J. Opt. Soc. Am. A 2011, 28, 1291–1296. [Google Scholar] [CrossRef] [PubMed]
  24. Goodman, J.W. Introduction to Fourier Optics, 3rd ed.; Roberts & Company Publishers: Greenwood Village, CO, USA, 2005. [Google Scholar]
  25. Zhang, W.; Zhang, H.; Sheppard, C.J.R.; Jin, G. Analysis of numerical diffraction calculation methods: From the perspective of phase space optics and the sampling theorem. J. Opt. Soc. Am. A 2020, 37, 1748–1766. [Google Scholar] [CrossRef] [PubMed]
  26. Heurtley, J.C. Scalar Rayleigh-Sommerfeld and Kirchhoff diffraction integrals: A comparison of exact evaluations for axial points. J. Opt. Soc. Am. 1973, 63, 1003–1008. [Google Scholar] [CrossRef]
  27. Tamamitsu, M.; Zhang, Y.; Wang, H.; Wu, Y.; Ozcan, A. A robust holographic autofocusing criterion based on edge sparsity: Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront. In Quantitative Phase Imaging IV; SPIE: San Francisco, CA, USA, 2018; Volume 10503, pp. 22–31. [Google Scholar] [CrossRef]
  28. Memmolo, P.; Iannone, M.; Ventre, M.; Netti, P.A.; Finizio, A.; Paturzo, M.; Ferraro, P. On the holographic 3D tracking of in vitro cells characterized by a highly-morphological change. Opt. Exp. 2012, 20, 28485–28493. [Google Scholar] [CrossRef] [PubMed]
  29. Zonoobi, D.; Kassim, A.A. Gini Index as Sparsity Measure for Signal Reconstruction from Compressive Samples. IEEE J. Sel. Top. Signal Proc. 2011, 5, 927–932. [Google Scholar] [CrossRef]
  30. Zhang, Y.; Wang, H.; Wu, Y.; Tamamitsu, M.; Ozcan, A. Edge sparsity criterion for robust holographic autofocusing. Opt. Lett. 2017, 42, 3824–3827. [Google Scholar] [CrossRef] [PubMed]
  31. Jiao, S.; Tsang, P.W.M.; Poon, T.C.; Liu, J.P.; Zou, W.; Lia, X. Enhanced Autofocusing in Optical Scanning Holography Based on Hologram Decomposition. IEEE Trans. Ind. Inform. 2017, 13, 2455–2463. [Google Scholar] [CrossRef]
Figure 1. (a) Scheme of the in-line interferometer and (b) sketch of the double-sideband filter (DSB).
Figure 2. Scheme of the recording stage of the overlap between the diffracted and illumination wavefronts.
Figure 3. Scheme of the magnified focused image reconstruction.
Figure 4. Magnitude of sparsity criteria (Tamura coefficient, Gini Index, and entropy) as a function of the reconstruction distance (Δzr). The critical points correspond to the best focused axial plane for the different functions (criteria).
Figure 5. Simulated microscopic objects: (a) resolution test USAF 1951 and (b) microsphere.
Figure 6. (a) The resolution test image after reconstruction and focusing and (b) profile graph of the horizontal yellow line that crosses the reconstructed image. The red line is the average value of the light intensity that is transmitted through the transparent regions of the resolution test chart. The blue line is the half value of the average intensity. The blue line is used to determine the width of the bright and dark lines.
Figure 7. (a) Longitudinal magnification, $M_L$, as a function of displacement distance $\Delta z_o$. (b) Transverse magnification, $M_T$, as a function of $\Delta z_o$ for three sparsity image criteria: Tamura coefficient (black squares), Gini Index (red circles), and entropy (blue triangles).
Figure 8. (a) Reconstructed and focused image of a glass microsphere and (b) profile graph of the horizontal yellow line that crosses the reconstructed image. $\varnothing_{img}$ represents the diameter of the microsphere, calculated as the full width at half maximum.
Figure 9. (a) $M_L$ as a function of the microsphere displacement $\Delta z_o$ and (b) $M_T$ as a function of $\Delta z_o$.
Figure 10. Experimental setup of the in-line interferometer with the DSB filter.
Figure 11. Image reconstruction of the microscopic objects: (a) resolution test USAF 1951 and (b) glass microspheres (borosilicate).
Figure 12. (a) Experimental longitudinal magnification $M_L$ as a function of displacement distance $\Delta z_o$ and (b) experimental transverse magnification $M_T$ as a function of $\Delta z_o$. The error bars in (a) are associated with the micrometer resolution of the linear translation stage, and the error bars in (b) are associated with the resolution of the precision ruler used for measuring the image distance.
Figure 13. (a) Experimental $M_L$ as a function of the microsphere displacement $\Delta z_o$ and (b) experimental $M_T$ as a function of the microsphere displacement $\Delta z_o$. The error bars are associated with the micrometer resolution of the linear translation stage and the resolution of the precision ruler used for measuring the image distance, respectively.
Table 1. Summary of the average (Ave.) and the standard deviation (S.D.) values of ML and MT for the two simulated microscopic objects for the three sparsity image criteria: Tamura coefficient (TC), Gini Index (GI), and entropy (Ent).
            USAF 1951                        Microsphere
            Ent        TC         GI        Ent        TC         GI
M_L  Ave.   −9.4841    −9.4786    −9.4511   −9.4686    −9.4756    −9.5697
     S.D.    0.0089     0.0058     0.0256    0.0138     0.0524     0.0693
M_T  Ave.   −3.0767    −3.0776    −3.0837   −3.0770    −3.0738    −3.0864
     S.D.    0.0125     0.0120     0.0128    0.0176     0.0103     0.0572
Table 2. Maximum and minimum values of longitudinal magnification ML and transverse magnification MT attributed to experimental errors.
       Theoretical   Min.      Max.
M_L    −9.480        −9.670    −9.290
M_T    −3.079        −3.090    −3.067
Table 3. Summary of average (Ave.) and standard deviation (S.D.) values of the experimental ML and MT magnifications for the tested microscopic objects: the USAF 1951 resolution test and the glass microsphere.
            USAF 1951                        Microsphere
            Ent        TC         GI        Ent        TC         GI
M_L  Ave.   −9.4867    −9.4490    −9.4543   −9.4695    −9.4742    −9.4857
     S.D.    0.4799     0.2184     0.2471    0.4454     0.1191     0.2718
M_T  Ave.   −3.0834    −3.0787    −3.0776   −3.1198    −3.0931    −3.6058
     S.D.    0.0286     0.0324     0.0323    0.2085     0.1586     0.2772