Article

Multi-Field Interference Simultaneously Imaging on Single Image for Dynamic Surface Measurement

by Weiqiang Han, Xiaodong Gao, Zhen Chen, Le Bai, Bo Liu and Rujin Zhao

1 Institute of Optics and Electronics of Chinese Academy of Sciences, Chengdu 610209, China
2 Key Laboratory of Science and Technology on Space Optoelectronic Precision Measurement, CAS, Chengdu 610209, China
3 University of Chinese Academy of Sciences, Beijing 100149, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(12), 3372; https://doi.org/10.3390/s20123372
Submission received: 19 May 2020 / Revised: 10 June 2020 / Accepted: 11 June 2020 / Published: 15 June 2020
(This article belongs to the Section Optical Sensors)

Abstract

To obtain the dynamic surface of high-frequency vibrating mirrors (VMs), a novel method involving multi-field interference (MFI) pattern imaging on a single image is proposed in this paper. Using multiple reflections and refractions, the proposed method generates three interference patterns at the same time, improving on traditional time-series methods, in which only a single interference pattern can be obtained at one time. Experimental results show that a series of MFI patterns can be obtained on a single image, with the laser repetition frequency (LRF) ranging from 200 Hz down to 10 Hz and the frame rate of the camera at 10 Hz. In particular, when the LRF (10 Hz) equals the frame rate of the camera, crosstalk is avoided completely, which is highly desirable in dynamic surface measurement. In summary, the MFI imaging method provides an effective way to measure the dynamic surface of a VM.

1. Introduction

High-frequency vibrating mirrors (VMs) [1] are commonly utilized to suppress background light during small infrared target detection. For example, Wang’s research on real-time background deduction using VMs shows that background radiation 208 times stronger than the target can be removed in real time [2]. However, the dynamic surface shape of the VM may directly affect the imaging quality of the optical system while the VM is vibrating. The vibration deforms the mirror surface, which in turn aberrates the wave front reflected by the mirror. This wave-front deformation ultimately affects the imaging properties of optical systems that use VMs [3]. Therefore, it is important to measure the dynamic surface shape of the VM at optical-level accuracy, so that its defects can be compensated via digital signal processing in real time. Generally speaking, surface measurements are mainly divided into two types [4,5,6,7,8,9]: contact and non-contact methods. The main advantage of contact methods is that they can achieve micron-level accuracy. Their disadvantages are mainly as follows: (1) they can easily damage the target’s surface; (2) subtle features of complex target surfaces are difficult to obtain; (3) they are slow; and (4) probes wear easily, which degrades the measurement accuracy and shortens their service life. Non-contact methods do not require contact with the target, thereby avoiding surface damage, and are faster than contact methods.
Among the various non-contact surface measurement methods, optical methods [10,11,12,13,14,15,16] are widely used. In general, optical non-contact methods can be classified into laser ranging, structured light and interference methods. Specifically, laser ranging methods obtain an object’s image through direct or indirect measurement of the travel time using a scanning mechanism; the imaging speed is limited by the scanning mechanism, which makes them unsuitable for dynamic measurements. Structured light methods project a prepared pattern onto the object’s surface, and the surface contour modulates this pattern; an image of the object is then reconstructed from the deformed reflection image. Compared with laser ranging and structured light methods, interference methods [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31] are more suitable for dynamic measurements, due to their real-time performance and high accuracy.
Some interference measurement methods use light scattered from the object’s surface to measure vibrating surfaces. Chen et al. used electronic speckle pattern interferometry (ESPI) to measure four vibration modes of a corrugated plate excited by the harmonic wave of a 550 Hz speaker [32]. De Veuster et al. used digital speckle pattern interferometry (DSPI) [33] to measure the amplitude of a diaphragm driven by a speaker at a vibration frequency of 1000 Hz. These speckle interference methods are mainly aimed at rough surfaces, and are suitable for interferometric imaging of vibration nodes in vibration modes, rather than surface shape measurement.
There are three ways to implement interference for surface shape measurements: fringe scanning [34]; multi-interferogram [35,36,37,38,39,40,41,42,43,44,45,46]; and single-interferogram [47] methods. Fringe scanning methods shift the mirror in the reference arm so that the light intensity at any point in the interferogram is modulated by a sine function; the surface to be measured is then derived using the Fourier transform. Multi-interferogram methods acquire interferograms with different reference phase positions by moving a scanning mechanism. In fact, both fringe scanning and multi-interferogram methods require a scanning mechanism to obtain properly timed sequence images. As a result, they cannot measure the surface of fast-moving VMs, because the required multi-frame images cannot be acquired quickly enough. Compared with fringe scanning and multi-interferogram methods, single-interferogram methods can capture one interferogram on a single image quickly, as they do not require a scanning mechanism. However, their accuracy is lower than that of fringe scanning and multi-interferogram methods (the accuracy of single-interferogram methods is λ/10 [48], while the accuracy of fringe scanning reaches λ/100 [34] and multi-interferogram methods usually achieve better than λ/50 [48]). Furthermore, single-interferogram methods cannot handle closely spaced interference fringes.
All the above methods are time-series methods, which can only obtain one interferogram at a time. Due to the shortcomings of single-interferogram methods, it is common to use three interferograms for interferometric measurements. However, for high-frequency VM dynamic surface shape measurements, multiple images obtained at different time points cannot be used for this purpose. In this paper, a novel spatial-sequence method based on single-image multi-field interference (MFI) imaging is proposed, which can effectively deal with the above-mentioned problems of existing methods. Using the proposed method, we can obtain three interference patterns from a single image at the same time, improving on traditional time-series methods, in which a single pattern is obtained at one time. As a result, a new type of spatial sequencing is introduced. This allows us to avoid the time delay present in fringe scanning and multi-interferogram methods, which is helpful in capturing the dynamic surface of the VM.

2. Multi-Field Interference Imaging Setup

The schematic of the setup for applying multi-field interference is shown in Figure 1. First, light emitted from the laser is shaped by the cylindrical lens (CL), and then passes through a linear polarizer, which transforms partially polarized light into linearly polarized light (p-wave). Second, the linear polarizer, a polarization beam splitter (PBS) and a quarter-wave plate (QWP) together act as a spatial light circulator. In this way, the transmitted and returned light can be separated easily, which increases the contrast of the image. Third, the p-wave is expanded by a beam expander (composed of lenses L2 and L3) and is transmitted to the light wedge (WL). The light reflected from surface B of the WL interferes with the light reflected from the VM, so an interference pattern is generated. Fourth, the interference pattern arrives at the PBS. Since the light passes through the quarter-wave plate twice, the polarization direction of the p-wave is changed by 90 degrees. Since the detector plane of the camera is the image plane of surface B of the WL, the interference pattern on surface B is formed on the detector plane of the camera. The distance between the detector plane of the camera and the focal plane of L4 is d, the calculation of which is shown in Appendix A. Finally, the multi-field interference image is acquired using the camera. It should be noted that the QWP is placed near F2 so as to adjust the contrast of the interference image. The WL is the most critical element for generating the MFI, and its principle is described in detail in the next section.

3. Principle of Multi-Field Interference

There are many situations in which multiple-beam interference is involved. Two classic examples are diffraction gratings [49] and plane-parallel plates [50]. In these two cases, only one interferogram is generated as the beams interfere. Our approach uses plane waves. In Figure 2, the red lines represent the plane waves travelling from left to right, while the blue lines represent the plane waves travelling from right to left. When light (Li) is incident on the WL, refraction and reflection take place one or more times on both surfaces of the WL. The plane waves R1, R2 and R3 are reflected, while T1, T2 and T3 pass through the WL. T2 is then reflected by the VM and is again incident on surface B of the WL. Similarly, reflection takes place one or more times on both surface B of the WL and the VM. Finally, R61, R62 and R63 pass back through the WL.
The two surfaces that interfere with each other are surface B of the WL and the VM. In the case of small angles, paraxial approximation is used in the following derivation of the relationship between reflection and refraction.
The reflection angle of R1 is equal to the incident angle of Li ($\theta_1$). The reflection angle of R2 ($\theta_6$) is given by
$$\theta_6 = 2n\beta - \theta_1, \tag{1}$$
where β and n represent the wedge angle and the refractive index of the WL, respectively. Similarly, the reflection angle of R3 ($\theta_{10}$) is given by:
$$\theta_{10} = 4n\beta - \theta_1, \tag{2}$$
$\theta_4$, $\theta_8$ and $\theta_{12}$ are the refraction angles of T1, T2 and T3, respectively, and are calculated as follows:
$$\theta_4 = \theta_1 - n\beta, \tag{3}$$
$$\theta_8 = \theta_1 - 3n\beta, \tag{4}$$
$$\theta_{12} = \theta_1 - 5n\beta. \tag{5}$$
Obviously, the angle between R1 and R2, and between R2 and R3, is $2n\beta$, as is the angle between T1 and T2, and between T2 and T3.
Let R3 be parallel to Li; then we have $\theta_{10} = \theta_1$, so the incidence angle of Li, $\theta_1$, is equal to $2n\beta$. According to Equation (3), the refraction angle of T1 ($\theta_4$) is then equal to $n\beta$, while according to Equation (4), the refraction angle of T2 ($\theta_8$) is equal to $-n\beta$. If the VM is parallel to plane B of the WL, as shown in Figure 2a, the first-order reflection of the T2 rays from the VM leaves at an angle of $n\beta$ and is parallel to T1. These rays propagate back through the WL and form the plane wave R61, so the angle of R61 ($\theta_{61}$) is equal to $\theta_{10}$. The second-order reflection of T2 from the VM is R62, and the third-order reflection is R63, where R61, R62 and R63 are all parallel to Li and R3 and would therefore overlap when imaged on the camera. To obtain separate multi-field interference patterns, the VM needs to be rotated counterclockwise by an angle of α. Then, according to the law of reflection, R61, R62 and R63 rotate counterclockwise by angles of 2α, 4α and 6α, respectively, as shown in Figure 2b, and the opposite holds for clockwise rotation.
Finally, the reflection angles of R61, R62 and R63 are given by:
$$\theta_{61} = \theta_{10} + 2\alpha, \tag{6}$$
$$\theta_{62} = \theta_{10} + 4\alpha, \tag{7}$$
$$\theta_{63} = \theta_{10} + 6\alpha. \tag{8}$$
In other words, they will be separated by an angle of 2α and will interfere with R3 at surface B of the WL (where z = 0), so that multiple MFI patterns are obtained.
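To make the geometry concrete, the short Python sketch below evaluates Equations (1)–(8) numerically, using the wedge parameters quoted later in this section (n = 1.5, β = 1°) and an illustrative tilt α; the variable names are ours and not from the paper.

```python
import numpy as np

# Paraxial wedge-angle relations, Equations (1)-(8).
# n and beta are taken from the values quoted later in this section
# (n = 1.5, wedge angle 1 deg); alpha is an illustrative VM tilt.
n, beta = 1.5, np.deg2rad(1.0)
alpha = np.deg2rad(0.2)

theta_1 = 2 * n * beta               # incidence angle that makes R3 parallel to Li
theta_4 = theta_1 - n * beta         # Eq. (3): T1 refraction angle -> +n*beta
theta_8 = theta_1 - 3 * n * beta     # Eq. (4): T2 refraction angle -> -n*beta
theta_10 = 4 * n * beta - theta_1    # Eq. (2): R3 reflection angle -> theta_1

# Eqs. (6)-(8): the returned fields are fanned out in steps of 2*alpha.
theta_61, theta_62, theta_63 = (theta_10 + k * alpha for k in (2, 4, 6))

for name, val in [("theta_1", theta_1), ("theta_4", theta_4), ("theta_8", theta_8),
                  ("theta_61", theta_61), ("theta_62", theta_62), ("theta_63", theta_63)]:
    print(f"{name} = {np.rad2deg(val):+.3f} deg")
```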
In order to limit the MFI patterns to within one-half to three-quarters of the camera field of view, the following relationship must be satisfied (the derivation is given in Appendix A):
$$\frac{1}{12}\,\frac{f_2}{f_3 f_4}\, p_s \times r_e \;\le\; \alpha \;\le\; \frac{1}{8}\,\frac{f_2}{f_3 f_4}\, p_s \times r_e, \tag{9}$$
where $f_2$, $f_3$ and $f_4$ are the focal lengths of lenses L2, L3 and L4, respectively, $p_s$ is the pixel size of the camera and $r_e$ is the resolution of the camera.
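As a quick numerical check of Equation (9), the sketch below plugs in the system parameters from Table 1 (f2 = 200 mm, f3 = 380 mm, f4 = 180 mm, 5.5 µm pixels, 2048 pixels per row); it reproduces the 0.16°–0.23° range for α derived in Appendix A.

```python
import numpy as np

# Evaluate Equation (9) with the system parameters of Table 1.
f2, f3, f4 = 200.0, 380.0, 180.0   # focal lengths of L2, L3, L4 in mm
ps = 5.5e-3                        # camera pixel size in mm
re = 2048                          # camera resolution (pixels along one row)

alpha_min = (1 / 12) * f2 / (f3 * f4) * ps * re   # rad
alpha_max = (1 / 8) * f2 / (f3 * f4) * ps * re    # rad
# lower/upper bounds in degrees; cf. the 0.16-0.23 deg range quoted in Appendix A
print(np.rad2deg(alpha_min), np.rad2deg(alpha_max))
```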
The details of the interference are described below.
We define the amplitude reflection and transmission coefficients of WL’s surface A as $r_A$ and $r'_A$, $t_A$ and $t'_A$, the corresponding coefficients for surface B as $r_B$ and $r'_B$, $t_B$ and $t'_B$, and the amplitude reflection coefficient of the VM’s surface as $r_{VM}$. The primes indicate reflection or transmission from within the WL.
So, the expression for the complex amplitude of R3 plane waves is
$$E_{R3}(x,y,z,t) = t_A r'_B r'_A r'_B t'_A A\, e^{i[\omega t + \Delta\phi_{R3}(x,y,z)]}, \tag{10}$$
and the expression for the complex amplitude of R6j plane waves is:
$$E_{R6j}(x,y,z,t) = t_A r'_B r'_A t'_B t'_A\, r_{VM}^{\,j} r_B^{\,j-1} t_B A\, e^{i[\omega t + \Delta\phi_{R6j}(x,y,z)]}. \tag{11}$$
The calculations of Equation (10) and Equation (11) are shown in Appendix B.
The interference between R61 and R3, R61 and R62 and R62 and R63 is shown in Figure 3.
(1) In Figure 3a, beam R61 (p1-p2), which is the return of plane wave R61 from point p1 of the VM, interferes at point p2 with beam R3 (p2), which is the return of plane wave R3 from a position near point p2 on plane B of WL. The optical path difference of the two beams represents the local shape of the VM at point p1. The interference pattern formed by R61 and R3 is named S1.
(2) In Figure 3b, beam R62 (p1-p2-p3-p4), which is the reflection of plane wave R62 from point p1 on the VM, and through point p2 on the WL and point p3 on the VM, interferes at point p4 with beam R61 (p2-p3-p4), which is the reflection of plane wave R61 from point p2 of WL, and goes through point p3 on the VM. For these two beams, the optical path difference component caused by the local profile at point p3 of the VM is cancelled, and the remainder is the optical path difference corresponding to the local profile of p1. S2 is the interference pattern formed by R61 and R62.
(3) In Figure 3c, beam R63 (p1-p2-p3-p4-p5-p6), which is the reflection of plane wave R63 from point p1 on the VM and through point p2 on the WL, point p3 on the VM, point p4 on the WL and point p5 on the VM, interferes at point p6 with beam R62 (p2-p3-p4-p5-p6), which is the reflection of plane wave R62 from point p2 on the WL and through point p3 on VM, point p4 on the WL and point p5 on the VM. For these beams, the optical path differences caused by the local surface shapes at point p3 and p5 of the VM are cancelled, and the remainder is the optical path difference corresponding to the local shape of p1. As per previous cases, S3 is the interference pattern formed by R62 and R63.
Compared with S1, S2 misses the part of the surface between p3 and p1, where the distance between p3 and p1 is d1; compared with S2, S3 misses the part between p5 and p3, where the distance between p5 and p3 is d2. If the distance between the VM and the WL is L, then the distance between p3 and p1 is $d_1 = L(2n\beta + 4\alpha)$, and the distance between p5 and p3 is $d_2 = L(2n\beta + 8\alpha)$. According to the value range of α, its maximum is 0.23°. The refractive index n of the WL is 1.5, and the wedge angle is 1°. When L is 10 mm, d1 = 0.68 mm and d2 = 0.84 mm. Since the short side of the VM is 40 mm long, the influence of d1 and d2 on the whole mirror measurement is very small. As L decreases, d1 and d2 decrease accordingly, so the VM should be placed as close to the WL as possible for the measurement.
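The following minimal sketch reproduces these numbers from the small-angle expressions $d_1 = L(2n\beta + 4\alpha)$ and $d_2 = L(2n\beta + 8\alpha)$ given above; it is only a numerical check, not part of the measurement algorithm.

```python
import numpy as np

# Lateral offsets d1, d2 between successive VM reflection points (small-angle model).
L = 10.0                      # WL-to-VM distance, mm
n, beta = 1.5, np.deg2rad(1.0)
alpha = np.deg2rad(0.23)      # maximum tilt allowed by Equation (9)

d1 = L * (2 * n * beta + 4 * alpha)   # offset between p1 and p3
d2 = L * (2 * n * beta + 8 * alpha)   # offset between p3 and p5
print(d1, d2)                 # roughly 0.68 mm and 0.84 mm
```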
The light intensity distribution of interference pattern S1 is calculated as follows:
The net complex amplitude at surface B of WL, where z = 0, is the sum of the complex amplitude of the R3 plane waves and the complex amplitude of R61 plane waves:
$$E_{Sj}(x,y,t) = t_A r'_B r'_A t'_A A \left\{ r_{VM}^{\,j} r_B^{\,j-1} t'_B t_B\, e^{i[\omega t + \Delta\phi_{R61}(x,y)]} + r'_B\, e^{i[\omega t + \Delta\phi_{R3}(x,y)]} \right\}, \tag{12}$$
The resulting field intensity is
$$I_{Sj}(x,y) = E_{Sj}(x,y,t)\, E_{Sj}^{*}(x,y,t), \tag{13}$$
where * denotes a complex conjugate.
$$I_{S1}(x,y) = (t_A r'_B r'_A t'_A A)^2 \left\{ (r_{VM}^{\,1})^2 (r_B^{\,1-1})^2 (t_B t'_B)^2 + (r'_B)^2 + 2 r_{VM}^{\,1} r_B^{\,1-1} r'_B t_B t'_B \cos[\Delta\phi_{R61}(x,y) - \Delta\phi_{R3}(x,y)] \right\} \tag{14}$$
Similarly, the intensity distributions of the interference patterns S2 and S3 are calculated as follows
$$I_{S2}(x,y) = (t_A r'_B r'_A t'_A t_B t'_B A)^2 \left\{ (r_{VM}^{\,2})^2 (r_B^{\,2-1})^2 + (r_{VM}^{\,1})^2 (r_B^{\,1-1})^2 + 2 r_{VM}^{\,2} r_B^{\,2-1} r_{VM}^{\,1} r_B^{\,1-1} \cos[\Delta\phi_{R62}(x,y) - \Delta\phi_{R61}(x,y)] \right\} \tag{15}$$
$$I_{S3}(x,y) = (t_A r'_B r'_A t'_A t_B t'_B A)^2 \left\{ (r_{VM}^{\,3})^2 (r_B^{\,3-1})^2 + (r_{VM}^{\,2})^2 (r_B^{\,2-1})^2 + 2 r_{VM}^{\,3} r_B^{\,3-1} r_{VM}^{\,2} r_B^{\,2-1} \cos[\Delta\phi_{R63}(x,y) - \Delta\phi_{R62}(x,y)] \right\} \tag{16}$$
where the first and second terms are the intensities due to the two interfering fields individually, which form the background of the image. The interference effects are contained in the third term, where each $I_{Sj}$ produces an independent interference fringe pattern $S_j$.
We can draw two important conclusions from this result. First, the components $\Delta\phi_{R61}(x,y) - \Delta\phi_{R3}(x,y)$, $\Delta\phi_{R62}(x,y) - \Delta\phi_{R61}(x,y)$ and $\Delta\phi_{R63}(x,y) - \Delta\phi_{R62}(x,y)$ in the third term contain the information about the VM’s surface shape, which is what we are looking for. Second, we can adjust the stripe contrast ratio (SCR) through the parameters $r_A$ and $r'_A$, $t_A$ and $t'_A$, $r_B$ and $r'_B$, and $t_B$ and $t'_B$. The VM is the object to be measured. For simplification, and since the beams with no primes are all reflected externally and are therefore 180° out of phase, we set $r_{VM} = -1$ and $r'_B = -r_B$. The SCR can then be expressed as:
$$SCR_{Sj} = \begin{cases} \dfrac{2\, t_B t'_B\, r_B}{(t_B t'_B)^2 + (r_B)^2}, & j = 1 \\[2ex] \dfrac{-2 r_B}{(r_B)^2 + 1}, & j = 2 \\[2ex] \dfrac{-2 (r_B)^3}{(r_B)^4 + (r_B)^2}, & j = 3. \end{cases} \tag{17}$$
The minus sign indicates whether we are referring to a black or a white fringe. The amplitude of the fringe in the third term of Equations (14)–(16) is:
$$A_{Sj} = \begin{cases} 2 r_B\, t_B t'_B\, (t_A r'_B r'_A t'_A A)^2, & j = 1 \\ 2 r_B\, (t_A r'_B r'_A t'_A t_B t'_B A)^2, & j = 2 \\ 2 r_B^{\,3}\, (t_A r'_B r'_A t'_A t_B t'_B A)^2, & j = 3. \end{cases} \tag{18}$$
The profiles of $SCR_{Sj}$ and $A_{Sj}$ versus the amplitude transmission coefficient $t'_B$ are shown in Figure 4, where we have also assumed that there is no absorption in the WL, so $t_B t'_B + (r_B)^2 = 1$. We can draw an important conclusion from this figure: when $t'_B$ ranges between 0.6 and 0.9, the interference effect is remarkable. The corresponding intensity transmission coefficients range from 0.36 to 0.81.
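The sketch below evaluates the stripe contrast ratio of Equation (17) over the 0.6–0.9 range of $t'_B$ discussed above, under the stated no-absorption assumption and the additional simplification $t_B = t'_B$ (so that $r_B = \sqrt{1 - t_B'^{\,2}}$); it is meant only to illustrate the trend plotted in Figure 4a.

```python
import numpy as np

# Stripe contrast ratio of Eq. (17) versus the amplitude transmission
# coefficient t_B' of surface B, assuming no absorption and t_B = t_B',
# so that r_B = sqrt(1 - t_B'**2), as in the discussion of Figure 4.
t = np.linspace(0.6, 0.9, 7)
r = np.sqrt(1.0 - t**2)

scr_s1 = 2 * t**2 * r / (t**4 + r**2)
scr_s2 = -2 * r / (r**2 + 1)
scr_s3 = -2 * r**3 / (r**4 + r**2)

for tb, s1, s2, s3 in zip(t, scr_s1, scr_s2, scr_s3):
    print(f"t_B'={tb:.2f}  SCR_S1={s1:+.2f}  SCR_S2={s2:+.2f}  SCR_S3={s3:+.2f}")
```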
The constraint conditions to produce MFI using a laser pulse are summarized in two parts, as given below.
1. The third reflected wave vector R3 must be parallel to the incident wave vector Li, and the VM needs to rotate by an angle of α.
2. The corresponding intensity transmission coefficients of WL’s B plane must be within the range 0.36 to 0.81.
In the following sections, we present the experimental verification of our approach.

4. Results

4.1. Experimental Establishment

As shown in Figure 5, an experimental system was set up for multi-field interference imaging. In order to suppress the background light, all optical components were installed in a blackened box. The light wedge was coated with an antireflection film on side A, with a transmittance of 0.98, while side B was coated with a partially transmitting film, with a transmittance of 0.73. The laser used was a model NPL52C from THORLABS.
The main parameters of the MFI imaging system are shown in Table 1.
The WL was mounted on a two-dimensional tilt adjustment mechanism (TDTAM), which allowed the easy adjustment of the pitch and azimuth. In order to make the reflected wave of R3 approximately parallel to the incident wave of Li, some necessary operations were performed. It should be noted that R1, R2 and R3—the waves in the propagation direction of the plane wave field—are difficult to recognize in the parallel optical path. Fortunately, they are easy to recognize in the convergent optical path, as light with different angles in the convergent optical path will converge on different locations on the focal planes. A paper screen was placed at the common focal plane of lenses L2 and L3 to observe R1, R2 and R3. Three corresponding light spots were produced in a straight line. When WL was rotated, the line connecting the three light spots was rotated accordingly, which represents the main section direction of the WL. A similar operation can be applied on the WL in order to move R3 closer to Li (see Figure 6b).
The VM was fixed on a one-dimensional turntable (ODT), with which horizontal adjustment of the VM was implemented; pitch adjustment was achieved using pads. Then, the S-series light spots, including S1, S2 and S3, could be moved closer to R3 (see Figure 6c). With the ODT fixed to the optical table, the phenomenon of MFI can be observed by fine-tuning the TDTAM, with the viewing screen placed at the out-of-focus position of F2 (see Figure 6d).

4.2. Results and Discussion

The interference patterns obtained using the camera are shown in Figure 7. Three interference patterns, including S1, S2 and S3, were obtained in a single image. It should be noted that FG1 and FG2 in Figure 7a are two inherent patterns introduced by the surface reflection of the PBS. As shown in Figure 7b, when the vibrating mirror is blocked, the interference patterns S1, S2 and S3 disappear, leaving only the inherent patterns. It can be seen from Figure 7a that S3 is not very clear, due to the low image contrast. However, the contrast of the interference patterns S1, S2 and S3 can be adjusted. As shown in Figure 7d, the contrast of S3 is obviously improved after enhancement.
Cross-sections of the interference patterns represented by three white lines in Figure 7c,d were extracted for further analysis. During the acquisition of the cross-sections of the interference patterns, the pulse frequencies of the laser were set to 200 Hz, 100 Hz, 50 Hz, 20 Hz and 10 Hz. As shown in Figure 8, the fluctuations of gray values in the curves are approximately consistent with the distribution in the interference patterns. The bottom of the curves corresponds to the black fringes in the interference pattern, while the peaks correspond to the white fringes.
Here, the contrast of the interference fringes is defined as:
$$SCR = \frac{SG_{\max} - SG_{\min}}{SG_{\max} + SG_{\min} - 2\,\mathrm{offset}}. \tag{19}$$
Statistical results of the contrast are shown in Table 2. It can be concluded that the contrast of S1, S2 and S3 reaches more than 0.7 when the repetition frequency is 100 Hz or 50 Hz, but is relatively low at other repetition frequencies. Hence, the contrast of the interference fringes can be improved by adjusting the repetition frequency of the laser. When the frame rate of the camera is low (10 Hz) and the repetition frequency of the laser is high (100 Hz or even higher), multiple pulses are captured during one image frame, and the brightness of the interference pattern increases accordingly. In static cases, the interference pattern generated by the accumulation of multiple pulses does not cause crosstalk; however, crosstalk will occur under dynamic conditions. To avoid crosstalk, only a single laser pulse should be captured per image frame; in other words, the laser repetition frequency should equal the camera frame rate.
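For reference, a minimal implementation of the contrast definition in Equation (19) is given below; the offset is assumed here to be the sensor's background gray level, since the value actually used for Table 2 is not stated.

```python
# Fringe contrast of Equation (19) from measured gray levels.
# "offset" is assumed to be the sensor's background (dark) gray level in DN;
# the value used for the Table 2 statistics is not stated in the paper.
def stripe_contrast(sg_max: float, sg_min: float, offset: float) -> float:
    return (sg_max - sg_min) / (sg_max + sg_min - 2 * offset)

# Example with the 10 Hz / S1 row of Table 2 (SGmax = 317, SGmin = 155) and an
# assumed offset of 150 DN, which gives a contrast close to the reported 0.94.
print(stripe_contrast(317, 155, 150))
```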
To verify the effectiveness of the MFI interference pattern, the same mirror was used for comparative testing. The interferometer used for comparison was a Fisba μshape2 HR phase-shift digital wave front interferometer, as shown in Figure 9c. As shown in Figure 9d, produced by the instrument’s software, the root mean square (RMS) value measured was 0.03λ, which is 18.9 nm for λ = 632.8 nm, while the RMS value measured using the method of this article was 20.3 nm, as shown in Figure 9b, which was produced by the algorithm presented in Appendix C.
The difference between the two is very small, which indicates that the MFI interference pattern can provide surface measurement information. The advantage of the proposed method is that it can produce three interference patterns simultaneously, which can be used for dynamic surface shape measurement while the VM is vibrating; the μshape2 HR interferometer does not have this ability [51].

5. Discussion and Conclusions

A multi-field interference imaging method is proposed to obtain the dynamic surface of high-frequency vibrating mirrors. Compared with traditional interference methods, which produce only one interference pattern at a time, the proposed method produces three interference patterns simultaneously at surface B of the WL, which can be captured on a single image. A single laser pulse was applied in the MFI system and the corresponding patterns were obtained on a single image. In this case, crosstalk was avoided completely, which is particularly desirable in dynamic applications. In summary, the MFI imaging method provides an effective way to perform dynamic surface measurement.

Author Contributions

Study design, data analysis, and writing: W.H.; literature search: X.G.; data collection: Z.C.; figures: L.B.; data interpretation: B.L.; editing: R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

Thanks to our colleagues working with us in the department of the Institute of Optics and Electronics at the Chinese Academy of Sciences.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In order to limit the MFI patterns to within one-half to three-quarters of the camera field of view, the following relationships must be satisfied.
It can be seen from Figure A1 that when the VM rotates by an angle of α, the three interference patterns are correspondingly separated by 2α. The relationship between the WL and the VM can be adjusted so that the three interference patterns are distributed along the row direction of the camera and occupy a total angular width of 6α when they are tangent to each other. If the resolution of the camera is $r_e$, the pixel size is $p_s$ and the field of view of the camera is “view”, then the following imaging relationship is obtained:
$$\mathrm{view} = \frac{p_s \times r_e}{f_4}. \tag{A1}$$
Figure A1. MFI interferogram imaging constraints in camera field of view: (a) 1/2 view; and (b) 3/4 view.
The angular magnification from L3 to L2 is inversely proportional to the ratio of their focal lengths, so the following relationship can be obtained:
$$\frac{1}{2}\,\mathrm{view} \;\le\; 6\alpha\,\frac{f_3}{f_2} \;\le\; \frac{3}{4}\,\mathrm{view}. \tag{A2}$$
From the above two equations, the following relationships can be obtained:
$$\frac{1}{12}\,\frac{f_2}{f_3 f_4}\, p_s \times r_e \;\le\; \alpha \;\le\; \frac{1}{8}\,\frac{f_2}{f_3 f_4}\, p_s \times r_e. \tag{A3}$$
According to the parameter values in Table 1, the value of α ranges from 0.16° to 0.23°.
The amount of defocus d of the detector plane of the camera determines the size of the three interference images. When the three interference images are tangent to each other, the following equation is satisfied, where $D_4$ is the aperture of lens L4:
$$2\alpha\,\frac{f_3}{f_2}\, f_4 = \frac{D_4}{f_4}\, d, \tag{A4}$$
from which we obtain:
$$d = 2\alpha\,\frac{f_3}{f_2}\,\frac{f_4^{\,2}}{D_4}. \tag{A5}$$
The value range of d is 8.5 mm to 12.7 mm.
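A short sketch that evaluates Equations (A3) and (A5) with the Table 1 parameters is given below; up to rounding, it reproduces the α and d ranges quoted in this appendix.

```python
import numpy as np

# Evaluate Equations (A3) and (A5) with the Table 1 parameters.
f2, f3, f4, D4 = 200.0, 380.0, 180.0, 40.0   # focal lengths and L4 aperture, mm
ps, re = 5.5e-3, 2048                        # pixel size (mm) and pixels per row

for frac in (1 / 12, 1 / 8):                 # lower/upper bounds of Equation (A3)
    alpha = frac * f2 / (f3 * f4) * ps * re  # rad
    d = 2 * alpha * (f3 / f2) * f4**2 / D4   # Equation (A5), mm
    print(f"alpha = {np.rad2deg(alpha):.2f} deg  ->  d = {d:.1f} mm")
# compare with the 0.16-0.23 deg and 8.5-12.7 mm ranges quoted in this appendix
```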

Appendix B

The complex amplitude of the Li plane waves is:
$$E_{Li}(x,y,z,t) = A\, e^{i[\omega t + \Delta\phi_{Li}(x,y,z)]}. \tag{A6}$$
The complex amplitude of the R3 plane waves is
$$E_{R3}(x,y,z,t) = A_{R3}\, e^{i[\omega t + \Delta\phi_{R3}(x,y,z)]}, \tag{A7}$$
while the complex amplitude of the R 6 j ( j = 1 ,   2 ,   3 ) plane waves is:
$$E_{R6j}(x,y,z,t) = A_{R6j}\, e^{i[\omega t + \Delta\phi_{R6j}(x,y,z)]}. \tag{A8}$$
We define the amplitude reflection and transmission coefficients of WL’s surface A as $r_A$ and $r'_A$, $t_A$ and $t'_A$, the corresponding coefficients for surface B as $r_B$ and $r'_B$, $t_B$ and $t'_B$, and the amplitude reflection coefficient of the VM’s surface as $r_{VM}$. The primes indicate reflection or transmission from within the WL. We can now calculate the amplitude of R3 as
$$A_{R3} = t_A r'_B r'_A r'_B t'_A A, \tag{A9}$$
and the amplitude of R6j as:
$$A_{R6j} = t_A r'_B r'_A t'_B t'_A\, r_{VM}^{\,j} r_B^{\,j-1} t_B A. \tag{A10}$$
So, the expression for the complex amplitude of R3 plane waves is rewritten as
$$E_{R3}(x,y,z,t) = t_A r'_B r'_A r'_B t'_A A\, e^{i[\omega t + \Delta\phi_{R3}(x,y,z)]}, \tag{A11}$$
and the expression for the complex amplitude of R6j plane waves is rewritten as:
$$E_{R6j}(x,y,z,t) = t_A r'_B r'_A t'_B t'_A\, r_{VM}^{\,j} r_B^{\,j-1} t_B A\, e^{i[\omega t + \Delta\phi_{R6j}(x,y,z)]}. \tag{A12}$$

Appendix C

For Equations (14) to (16), let $C = t_A r'_B r'_A t'_A A$, $r_{VM} = 0.99$, $t_B = t'_B = 0.85$, $r_B = r'_B = 0.53$, $r_A = r'_A = 0.10$ and $t_A = t'_A = 0.99$. A is the amplitude of the plane wave. Under ideal circumstances, the amplitude A of a plane wave is a constant; in practice, due to factors such as nonuniformity, A has a specific distribution, A(x,y), and varies with time. The light intensity distribution corresponding to C is $I_0(x,y)$. $k = 2\pi/\lambda$ is the wave number, $\lambda = 532$ nm is the wavelength and ω(x,y) is the wave aberration distribution, which represents the shape of the tested surface. Let the initial phase of $I_{S1}$ be 0; then, the three interference patterns can be expressed as:
$$I_{S1}(x,y) = I_0(x,y)\left\{0.89 + 0.88\cos[2k\omega(x,y)]\right\}, \tag{A13}$$
$$I_{S2}(x,y) = I_0(x,y)\left\{0.71 + 0.63\cos[2k\omega(x,y) + \Delta\varphi_1]\right\}, \tag{A14}$$
$$I_{S3}(x,y) = I_0(x,y)\left\{0.26 + 0.23\cos[2k\omega(x,y) + \Delta\varphi_2]\right\}. \tag{A15}$$
The proposed method obtains the phase difference through algorithm optimization. $I_0(x,y)$ changes slowly relative to the interference fringes. Let $I_0(x,y) = C = \frac{1}{0.89} E_{I_{S1}}$, where $E_{I_{S1}}$ is the average value of the image $I_{S1}$; that is, we calculate $C = \frac{1}{0.89}\,\frac{1}{mn}\sum_{x=1}^{m}\sum_{y=1}^{n} I_{S1}(x,y)$, where m and n are the column and row numbers of the resolution of the interference pattern $I_{S1}$. The wave aberration can then be obtained from Equation (A13):
$$2k\omega(x,y) = \arccos\!\left[\frac{I_{S1}(x,y) - 0.89C}{0.88C}\right]. \tag{A16}$$
The target image $H_{S2}$ is formed according to the format of $I_{S2}$, with a trial phase difference φ added. The optimal phase difference $\phi_1$ is obtained by comparing $H_{S2}$ with the reference image $I_{S2}$ over the search range $0 \sim 4\pi$:
$$H_{S2}(x,y,\varphi) = C\left\{0.71 + 0.63\cos[2k\omega(x,y) + \varphi]\right\}. \tag{A17}$$
For discrete digital images, the correlation coefficient $\rho_{H_{S2} I_{S2}}(\varphi)$ is used to reflect the correlation between the two images. According to the definition of the correlation coefficient in probability theory, it is expressed as
$$\rho_{H_{S2} I_{S2}}(\varphi) = \frac{\sum_{x=1}^{m}\sum_{y=1}^{n}\left[H_{S2}(x,y) - E_{H_{S2}}\right]\left[I_{S2}(x,y) - E_{I_{S2}}\right]}{\sqrt{\sum_{x=1}^{m}\sum_{y=1}^{n}\left[H_{S2}(x,y) - E_{H_{S2}}\right]^2}\,\sqrt{\sum_{x=1}^{m}\sum_{y=1}^{n}\left[I_{S2}(x,y) - E_{I_{S2}}\right]^2}}, \tag{A18}$$
where $E_{H_{S2}}$ and $E_{I_{S2}}$ are the average values of $H_{S2}$ and $I_{S2}$, respectively.
The value of $\Delta\varphi_1$ (which is $\phi_1$) corresponding to the maximum value of $\rho_{H_{S2} I_{S2}}$ is obtained. The phase difference $\phi_2$ is obtained using the same method. The calculated values of $\phi_1$ and $\phi_2$ are substituted into Equations (A14) and (A15), and the term $I_0(x,y)$ is eliminated to obtain Equations (A19) and (A20):
$$C_1(x,y) = \frac{I_{S2}(x,y)}{I_{S1}(x,y)} = \frac{0.71 + 0.63\cos[2k\omega(x,y) + \phi_1]}{0.89 + 0.88\cos[2k\omega(x,y)]}, \tag{A19}$$
$$C_2(x,y) = \frac{I_{S3}(x,y)}{I_{S1}(x,y)} = \frac{0.26 + 0.23\cos[2k\omega(x,y) + \phi_2]}{0.89 + 0.88\cos[2k\omega(x,y)]}. \tag{A20}$$
Thus, the wave aberration distribution $\omega(x,y)$ is obtained:
$$\omega(x,y) = \frac{\lambda}{4\pi}\arccos\!\left\{ \frac{\dfrac{0.89\,C_1(x,y) - 0.71}{0.63\sin(\phi_1)} - \dfrac{0.89\,C_2(x,y) - 0.26}{0.23\sin(\phi_2)}}{\dfrac{0.63\cos(\phi_1) - 0.88\,C_1(x,y)}{0.63\sin(\phi_1)} - \dfrac{0.23\cos(\phi_2) - 0.88\,C_2(x,y)}{0.23\sin(\phi_2)}} \right\}. \tag{A21}$$
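A compact Python sketch of the retrieval procedure of Equations (A13)–(A21) is given below. It assumes the three fringe patterns have already been cropped to the same grid and registered; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

LAM = 532e-9          # laser wavelength (m)

def retrieve_surface(I_s1, I_s2, I_s3, n_steps=400):
    """Recover the wave aberration w(x, y) from the three MFI patterns,
    following Appendix C (Equations (A13)-(A21)). Illustrative sketch only."""
    # Estimate the slowly varying background I0 ~ C from the mean of S1.
    C = I_s1.mean() / 0.89

    # Equation (A16): coarse phase from S1 alone (clipped to the arccos domain).
    two_k_w = np.arccos(np.clip((I_s1 - 0.89 * C) / (0.88 * C), -1.0, 1.0))

    def best_phase(I_ref, dc, ac):
        # Equations (A17)-(A18): grid-search the phase offset over 0..4*pi that
        # maximizes the correlation between the synthetic and measured patterns.
        phis = np.linspace(0.0, 4.0 * np.pi, n_steps)
        corr = [np.corrcoef(
                    (C * (dc + ac * np.cos(two_k_w + p))).ravel(),
                    I_ref.ravel())[0, 1] for p in phis]
        return phis[int(np.argmax(corr))]

    phi1 = best_phase(I_s2, 0.71, 0.63)
    phi2 = best_phase(I_s3, 0.26, 0.23)

    # Equations (A19)-(A21): eliminate I0 with the ratios C1, C2 and solve for w.
    C1, C2 = I_s2 / I_s1, I_s3 / I_s1
    num = (0.89 * C1 - 0.71) / (0.63 * np.sin(phi1)) \
        - (0.89 * C2 - 0.26) / (0.23 * np.sin(phi2))
    den = (0.63 * np.cos(phi1) - 0.88 * C1) / (0.63 * np.sin(phi1)) \
        - (0.23 * np.cos(phi2) - 0.88 * C2) / (0.23 * np.sin(phi2))
    return LAM / (4.0 * np.pi) * np.arccos(np.clip(num / den, -1.0, 1.0))
```

A simple grid search over φ is used here for brevity; any one-dimensional maximizer of the correlation coefficient in Equation (A18) would serve equally well.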

References

1. Han, W.; Fan, Z.; Le, B.; Huang, T.; Zhao, R.; Liu, B.; Gao, X. A 3 kHz High-Frequency Vibration Mirror Based on Resonant Mode. Acta Microsc. 2020, 29, 1866–1879.
2. Wang, X.; Liao, S.; Shen, M.; Huang, J. Real time background deduction with a scanning mirror. Opto-Electron. Eng. 2005, 32, 9–11.
3. Antonin, M.; Jiri, N. Analysis of imaging properties of a vibrating thin flat mirror. J. Opt. Soc. Am. A 2004, 21, 1724–1729.
4. Li, Q.; Feng, H.; Xu, Z.; Han, Y.; Huang, H. Review of computer stereo vision technique. Opt. Tech. 1999, 5, 71–73.
5. Hsueh, W.J. 3D Surface Digitizing and Modeling Development at ITRI. Proc. SPIE 2000, 4080, 14–20.
6. Addison, A.C. Virtualized Architectural Heritage: New Tools and Techniques. IEEE Multimedia 2000, 7, 26–31.
7. Hu, Z. Flexible Measuring Equipment 3 Coordinate Measuring Machine. Precise Manuf. Autom. 2006, 2, 57–58.
8. Zhang, X.; Liu, J.; Xia, X. Design of Deep-sea Sonar Altimeter Based on PIC16C74. Microcomput. Inf. 2008, 24, 126–127.
9. Chen, F.; Brown, G.M.; Song, M. Overview of Three-Dimensional Shape Measurement Using Optical Methods. Opt. Eng. 2000, 39, 10–22.
10. Lu, Y. Progress on Laser Phase-shift Interferometry. Laser Infrared 1990, 20, 13–15.
11. John, B.; Haubecker, H.; Geibler, P. Handbook of Computer Vision and Applications; Academic Press: Cardiff, UK, 1999.
12. Gorthi, S.S.; Rastogi, P. Fringe projection techniques: Whither we are? Opt. Lasers Eng. 2010, 48, 133–140.
13. Salvi, J.; Pages, J.; Battle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849.
14. Xian, T.; Su, X. Area modulation grating for sinusoidal structure illumination on phase-measuring profilometry. Appl. Opt. 2001, 40, 1201–1206.
15. Chen, L.; Quan, C.; Tay, C.J.; Fu, Y. Shape measurement using one frame projected saw tooth fringe pattern. Opt. Commun. 2005, 246, 275–284.
16. Frank, O.C.; Ryan, P.; Wellesley, E.P.; John, K.; Jason, C. A passive optical technique to measure physical properties of a vibrating surface. Proc. SPIE 2014, 9219, 1–12.
17. Massie, N.A.; Nelson, R.D.; Holly, S. High-performance real-time heterodyne interferometry. Appl. Opt. 1974, 18, 1797–1803.
18. John, E.G. Sub-Nyquist interferometry. Appl. Opt. 1987, 26, 5245–5258.
19. John, E.G.; Andrew, E.L.; Russell, J.P. Sub-Nyquist interferometry: Implementation and measurement capability. Opt. Eng. 1996, 35, 2962–2969.
20. Manuel, S.; Juan, A.Q.; Jose, M. General n-dimensional quadrature transform and its application to interferogram demodulation. J. Opt. Soc. Am. 2003, 20, 925–934.
21. Marc, T.; Paul, D.; Greg, F. Sub-aperture approaches for asphere polishing and metrology. Proc. SPIE 2005, 5638, 284–299.
22. Liu, H.; Hao, Q.; Zhu, Q.; Sha, D.; Zhang, C. A novel aspheric surface testing method using part-compensating lens. Proc. SPIE 2005, 5638, 324–329.
23. Dumas, P.; Hall, C.; Hallock, B.; Tricard, M. Complete sub-aperture pre-polishing and finishing solution to improve speed and determinism in asphere manufacture. Proc. SPIE 2007, 6671, 667111.
24. Liu, D.; Yang, Y.; Tian, C.; Luo, Y.; Wang, L. Practical methods for retrace error correction in nonnull aspheric testing. Opt. Express 2009, 17, 7025–7035.
25. Garbusi, E.; Baer, G.; Osten, W. Advanced studies on the measurement of aspheres and freeform surfaces with the tilted-wave interferometer. Proc. SPIE 2011, 8082, 80821F.
26. Liu, D.; Shi, T.; Zhang, L.; Yang, Y.; Chong, S.; Shen, Y. Reverse optimization reconstruction of aspheric figure error in a non-null interferometer. Appl. Opt. 2014, 53, 5538–5546.
27. Saif, B.; Chaney, D.; Smith, W.; Greenfield, P.; Hack, W.; Bluth, J.; Feinberg, L. Nanometer level characterization of the James Webb Space Telescope optomechanical systems using high-speed interferometry. Appl. Opt. 2015, 54, 4285–4298.
28. Fortmeier, I.; Stavridis, M.; Wiegmann, A.; Schulz, M.; Osten, W.; Elster, C. Evaluation of absolute form measurements using a tilted-wave interferometer. Opt. Express 2016, 24, 3393–3404.
29. Hao, Q.; Wang, S.; Hu, Y.; Cheng, H.; Chen, M.; Li, T. Virtual interferometer calibration method of a non-null interferometer for freeform surface measurements. Appl. Opt. 2016, 55, 9992–10001.
30. Hao, Q.; Li, T.; Hu, Y.; Wang, S.; Ning, Y.; Tan, Y.; Zhang, X. Vertex radius of curvature error measurement of aspheric surface based on slope asphericity in partial compensation interferometry. Opt. Express 2017, 25, 18107–18121.
31. Hao, Q.; Wang, S.; Hu, Y.; Tan, Y.; Li, T.; Wang, S. Two-step carrier-wave stitching method for aspheric and freeform surface measurement with a standard spherical interferometer. Appl. Opt. 2018, 57, 4743–4750.
32. Chen, F.; Griffen, C.T.; Allen, T.E.; Brown, G.M. Measurement of shape and vibration using a single electronic speckle interferometry. Proc. SPIE 1996, 2860, 150–161.
33. De Veuster, C.; Renotte, Y.L.; Berwart, L.; Lion, Y.F. Quantitative three-dimensional measurements of vibration amplitudes and phases as a function of frequency by Digital Speckle Pattern Interferometry. Proc. SPIE 1998, 3478, 322–333.
34. Bruning, J.H.; Herriott, D.R.; Gallagher, J.E.; Rosenfeld, D.P.; White, A.D.; Brangaccio, D.J. Digital Wave Front Measuring Interferometer for Testing Optical Surfaces and Lenses. Appl. Opt. 1974, 13, 2693–2703.
35. Dorband, B. Die 3-Interferogramm-Methode zur automatischen Streifenauswertung in rechnergesteuerten digitalen Zweistrahlinterferometern. Optik 1982, 60, 161–174.
36. Wingerden, J.; Frankena, H.; Smorenburg, C. Linear approximation for measurement errors in phase shifting interferometry. Appl. Opt. 1991, 30, 2718–2729.
37. Baker, K.; Stappaerts, E. A single-shot pixellated phase-shifting interferometer utilizing a liquid-crystal spatial light modulator. Opt. Lett. 2006, 31, 733–735.
38. Styk, A.; Patorski, K. Analysis of systematic errors in spatial carrier phase shifting applied to interferogram intensity modulation determination. Appl. Opt. 2007, 46, 4613–4624.
39. Xu, J.; Xu, Q.; Chai, L. Tilt-shift determination and compensation in phase-shifting interferometry. J. Opt. A 2008, 10, 075011.
40. Abdelsalam, D.; Yao, B.; Gao, P.; Min, J.; Guo, R. Single-shot parallel four-step phase shifting using on-axis Fizeau interferometry. Appl. Opt. 2012, 51, 4891–4895.
41. Liu, Q.; Wang, Y.; Ji, F.; He, J. A three-step least-squares iterative method for tilt phase-shift interferometry. Opt. Express 2013, 21, 29505–29515.
42. Deck, L. Model-based phase shifting interferometry. Appl. Opt. 2014, 53, 4628–4636.
43. Liu, Q.; Wang, Y.; He, J.; Ji, F. Modified three-step iterative algorithm for phase-shifting interferometry in the presence of vibration. Appl. Opt. 2015, 54, 5833–5841.
44. Pramod, R.; Erwin, H. Phase Estimation in Optical Interferometry; CRC Press: Boca Raton, FL, USA, 2014.
45. Jiri, N.; Pavel, N.; Antonin, M. Multi-step Phase-shifting Algorithms Insensitive to Linear Phase Shift Errors. Opt. Commun. 2008, 281, 5302–5309.
46. Jiri, N. Five-Step Phase-Shifting Algorithms with Unknown Values of Phase Shift. Optik 2003, 114, 63–68.
47. Takeda, M.; Ina, H.; Kobayashi, S. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 1982, 72, 156–160.
48. Wang, W. Contemporary Optical Measurement Technology; China Machine Press: Beijing, China, 2013.
49. Born, M.; Wolf, E. Principles of Optics; Cambridge University Press: Cambridge, UK, 1975.
50. Hariharan, P. Basics of Interferometry; Academic Press: Boston, MA, USA, 1992.
51. Han, W.; Gao, X.; Fan, Z.; Bai, L.; Liu, B. Long Exposure Short Pulse Synchronous Phase Lock Method for Capturing High Dynamic Surface Shape. Sensors 2020, 20, 2550.
Figure 1. Schematic of multi-field interference (MFI) setup.
Figure 2. Light propagation between the light wedge (WL) and the vibrating mirror (VM). (a) VM parallel to plane B of the WL; and (b) VM rotated counterclockwise by an angle of α.
Figure 3. (a) Interference between R61 and R3; (b) interference between R61 and R62; and (c) interference between R62 and R63.
Figure 4. Relationship between $SCR_{Sj}$ and $A_{Sj}$ and the amplitude transmission coefficient $t'_B$; SCR is the stripe contrast ratio and $A_{Sj}$ is the amplitude of the fringe. (a) Profile of $SCR_{Sj}$; and (b) profile of $A_{Sj}$.
Figure 5. Layout of the optical path.
Figure 6. Adjustment process: (a) adjustment of WL and VM; (b) finding the focus point of the three reflected plane wave vectors R1, R2 and R3; (c) finding the focus of the interference fields S1, S2 and S3; and (d) MFI observed at the out-of-focus position.
Figure 7. MFI patterns: (a) original image with resolution of 2048 × 2048; (b) inherent patterns introduced by the PBS; (c) contrast enhancement for S2; and (d) contrast enhancement for S3.
Figure 8. Cross-sections of the interference patterns in Figure 7c,d at different laser pulse frequencies: (a) cross-sections of S1; (b) cross-sections of S2; and (c) cross-sections of S3.
Figure 9. Comparative test. (a) MFI measurement; (b) result of the MFI; (c) μshape2 HR measurement; and (d) result of μshape2 HR.
Table 1. System parameters for MFI imaging.

Parameter            | Value       | Unit
Wavelength           | 532         | nm
Repetition frequency | 10–200      | Hz
Pulse duration       | 6–129       | ns
Pixel size           | 5.5         | µm
Resolution           | 2048 × 2048 | pixels
Frame rate           | 10          | fps
Integration time     | 100         | ms
Lens L1 focal length | 200         | mm
Lens L2 focal length | 200         | mm
Lens L3 focal length | 380         | mm
Lens L4 aperture     | 40          | mm
Lens L4 focal length | 180         | mm
Table 2. Statistical results of the contrast. SGmax: max gray (DN); SGmin: min gray (DN); SCR: stripe contrast ratio; LRF: laser repetition frequency (Hz); DN: digital number (one grayscale unit).

LRF (Hz) | S1 SGmax | S1 SGmin | S1 SCR | S2 SGmax | S2 SGmin | S2 SCR | S3 SGmax | S3 SGmin | S3 SCR
200      | 4095     | 331      | 0.91   | 3743     | 587      | 0.78   | 686      | 210      | 0.80
100      | 2054     | 300      | 0.85   | 2026     | 387      | 0.78   | 307      | 166      | 0.82
50       | 990      | 190      | 0.91   | 991      | 257      | 0.77   | 246      | 155      | 0.90
20       | 492      | 160      | 0.94   | 481      | 176      | 0.85   | 166      | 153      | 0.68
10       | 317      | 155      | 0.94   | 307      | 160      | 0.88   | 158      | 153      | 0.55
