Article

Stereo Bi-Telecentric Phase-Measuring Deflectometry

State Key Laboratory of Precision Measuring Technology and Instruments, Laboratory of Micro/Nano Manufacturing Technology, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(19), 6321; https://doi.org/10.3390/s24196321
Submission received: 19 August 2024 / Revised: 20 September 2024 / Accepted: 27 September 2024 / Published: 29 September 2024
(This article belongs to the Section Optical Sensors)

Abstract

Replacing the endocentric lenses in traditional Phase-Measuring Deflectometry (PMD) with bi-telecentric lenses reduces the number of parameters to be optimized during calibration, which can effectively increase both measurement precision and efficiency. In addition, the low-distortion characteristics of bi-telecentric lenses contribute to improved measurement accuracy. However, calibrating the extrinsic parameters of bi-telecentric lenses normally requires the help of a micro-positioning stage, which makes the calibration process cumbersome and time-consuming. Thus, this study proposes a holistic and flexible calibration solution for which only one flat mirror placed in three poses is needed. To obtain accurate measurement results, the calibration residuals are utilized to construct the inverse distortion map through bicubic Hermite interpolation, which yields accurate anchor point positioning. The calibrated stereo bi-telecentric PMD can achieve 3.5 μm (Peak-to-Valley) accuracy within a 100 mm (width) × 100 mm (height) × 200 mm (depth) domain for various surfaces. This allows reliable measurement results to be obtained without restricting the placement of the surface under test.

1. Introduction

1.1. Typical Measurement Techniques for Specular Surfaces

Optical freeform surfaces, characterized by their complex three-dimensional shapes and high design flexibility, play a pivotal role in modern optical systems [1]. These surfaces break away from the constraints of traditional spherical and aspherical designs, enabling more flexible optical mapping relationships [2]. This contributes to enhanced optical system performance and greater integration capabilities.
In recent years, freeform specular mirrors have been widely used in various fields, such as ultra short throw projectors [3,4], HUD (head-up displays) [5,6], and eyeglass lens molds [7]. Freeform specular mirrors, due to their unique design and manufacturing processes, are capable of maintaining high-precision optical performance even in complex environments. However, their intricate shapes pose challenges for both fabrication and measurement. To ensure surface accuracy, advanced measurement techniques are essential.
In contact measurement techniques, coordinate measuring machines (CMMs) or contact profilometers can perform 3D measurements of freeform surfaces for specific apertures through raster or radial scanning [8,9]. However, the preload force of the probe may scratch the surface of the mirror being measured. The UA3P series equipment from Panasonic, based on atomic force probes, can achieve non-destructive measurement of freeform surfaces with a contact force of no more than 0.3 mN, but its measurement speed is slow, and the spacing between measurement points is usually large, making it difficult to characterize mid-to-high-frequency information such as tool marks or scratches on specular surfaces [10].
In non-contact measurement techniques, the LUPHOScan series of non-contact surface measurement devices from Taylor Hobson (Leicester, UK) uses multi-wavelength interferometry (MWLI) technology, which employs a non-contact point probe to precisely measure the sag (surface height) and the XYZ position of the probe, thereby achieving nanometer-level precision in surface reconstruction. However, when the tangential slope of the mirror exceeds ±8° along the scanning path, the probe cannot obtain valid sag data, limiting its application on surfaces with steep angles [11]. Phase-measuring profilometry (PMP) is a non-contact measurement method that uses a projector light source to project sinusoidal or Gray-code patterns onto the measured surface, allowing for 3D surface measurement without physical contact and with a high dynamic range. However, when measuring high-reflectivity mirrors, specular reflections may distort the measurement results in bright regions. The traditional solution is to coat the mirror with a layer of powder to alter its reflective properties and reduce reflection distortion. However, the thickness and uniformity of the powder layer may affect measurement accuracy, and removing the powder after measurement is time-consuming and tedious. In certain sensitive or specific measurement scenarios, using powder may not be a viable option [12]. Modern Ronchi measurement methods use Ronchi gratings as the light source; by capturing the Ronchi grating fringe images and applying advanced digital image processing techniques, the optical wavefront aberrations can be reconstructed, achieving accurate non-contact measurement of mirror surface shapes. However, Ronchi gratings need to be custom-made and replaced according to the dynamic range of the measured mirror surface, limiting their versatility [13]. Häusler et al. [14] were among the first to replace the projector light source in PMP with an LCD screen, thus proposing a technique called PMD for measuring large-aperture freeform mirrors. The core advantage of this technology lies in its ability to achieve micron-level measurement accuracy for apertures in the range of hundreds of millimeters [15], while offering nanometer-level resolution in the sag direction [16]. Since the LCD screen can project custom-coded information, it offers greater versatility compared with Ronchi measurement methods. The projector light source has high directionality, which can cause overexposure when measuring reflective surfaces, leading to reconstruction failures. In contrast, the LCD screen used in PMD is a non-directional light source, making it more suitable for measuring reflective surfaces.
Figure 1 shows the characteristics of the above typical contact and non-contact measurement techniques. PMD plays a crucial role in measuring freeform mirrors.

1.2. Typical PMD Configurations with Endocentric Lenses

There are three typical PMD setups with endocentric lenses, as shown in Figure 2. P represents the point corresponding to the camera, while Q represents the point on the screen. n is the normal calculated from the PMD.
The first typical configuration of PMD is the single-sensor-single-screen setup, as shown in Figure 2a. Su et al. [17] proposed an innovative idea that treats PMD as an inverse Hartmann measurement principle to measure the freeform mirror. In this method, the theoretical model is treated as the predicted shape of the surface being measured, and surface reconstruction is performed by iterative gradient calculation and integration, leading to the final surface measurement result. Huang et al. [18] used Fourier-transform-based phase extraction algorithms on the single-sensor-single-screen PMD setup to achieve single-frame measurements of dynamic reflective surfaces. Meanwhile, Wang et al. [19] greatly improved the measurement speed of phase deflection techniques through the development of high-speed displays based on LED arrays, enabling dynamic surface measurements while retaining phase detail. This method allows for the rapid capture of surface information in complex and dynamic environments, significantly enhancing measurement efficiency, particularly for precise detection of rapidly changing surfaces. Xu et al. [20] proposed an in situ measurement method based on the single-sensor-single-screen PMD setup for the ultra-precision machining of freeform reflective mirrors. A common drawback of the single-sensor-single-screen PMD setup is that it requires prior knowledge of the measured surface shape.
The second typical configuration of PMD is the single-sensor-multiple-screen setup, as shown in Figure 2b. Guo et al. [21] moved the screen to different positions using a displacement platform for imaging, while Li et al. [22] introduced a transparent LCD, forming a multiple-screen structure. In addition, Tang et al. [23], Liu et al. [24] and Zhang et al. [25] incorporated a beam splitter, allowing the LCD to image at two different positions. The single-sensor-multiple-screen PMD setup eliminates the need for the estimated surface shape required by the single-sensor-single-screen PMD setup. However, the introduction of additional mechanical movement structures, a beam splitter, and a second display increases the system’s size and cost. According to the performance analysis and evaluation of monocular bi-telecentric PMD, the distance between the two LCD screens [26] and the refraction error of the beam splitter [27] should be taken into account.
The third typical configuration of PMD is the multiple-sensors-single-screen setup, as shown in Figure 2c. Liu et al. [28] and Han et al. [29] have built upon Häusler’s method by replacing the flat display with a curved display. This modification significantly enhances the ability of stereo deflectometry to measure the maximum slope of complex surfaces, making the technology more widely applicable and accurate for measuring irregular surfaces. Additionally, Zhang et al. [30] and Wang et al. [31] have further expanded the scope of stereo deflectometry by using camera field stitching and fusion techniques, successfully enlarging the measurable surface aperture. Han et al. [32] proposed a reconstruction procedure for the multiple-sensors-single-screen PMD setup, in which the specular surface under test (SUT) can be reconstructed through anchor point positioning followed by iterative surface reconstruction. Thus, the multiple-sensors-single-screen PMD setup does not require the predicted shape of the SUT.
Table 1 summarizes the characteristics of three typical PMD configurations.

1.3. PMD Configurations with Bi-Telecentric Lenses

Compared to endocentric lenses in high accuracy machine vision [33], bi-telecentric lenses not only exhibit lower image distortion but also simplify the calibration process [34]. The imaging models of endocentric lenses and bi-telecentric lenses are shown in Figure 3.
For a given pixel up = [u, v]T, its corresponding 3D point in the camera’s local coordinate system can be denoted as cp = [xc, yc, zc]T. For the imaging model of endocentric lenses shown in Figure 3a, the principal point can be noted as up0 = [u0, v0]T and the focal lengths as fx and fy in the x and y directions. The projection of an arbitrary point cp to the sensor’s image coordinate in pixels is expressed as
$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}. \tag{1} $$
As shown in Equation (1), the projected pixel position of point cp under the endocentric lens model varies with the depth zc.
For the imaging model of the bi-telecentric lenses shown in Figure 3b, the magnifications can be noted as mx and my in the x and y directions. The projection of an arbitrary point cp to the sensor’s image coordinate in pixels is expressed as
$$ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} m_x & 0 & 0 \\ 0 & m_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix}. \tag{2} $$
As shown in Equation (2), the bi-telecentric lens cannot identify the depth zc.
Apart from distortion, the intrinsic parameters of bi-telecentric lenses include only the magnifications. In contrast, endocentric lenses adhere to the pinhole model, which requires calibration of the optical center u0, v0 and the focal lengths fx and fy [35]. Moreover, a pinhole camera model is far from a true camera model [36,37]. Owing to the lower distortion and fewer intrinsic parameters of bi-telecentric lenses, incorporating them into PMD can readily yield high-precision calibration results, thereby enhancing measurement accuracy. The three typical PMD setups with bi-telecentric lenses are shown in Figure 4.
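To make the difference between Equations (1) and (2) concrete, the short sketch below projects the same 3D point at several depths through both models. The numbers (focal length, principal point, and pixel-per-millimetre magnification) are purely illustrative and are not taken from the paper.

```python
import numpy as np

def project_pinhole(p_cam, fx, fy, u0, v0):
    """Perspective projection of Equation (1): the pixel depends on the depth z_c."""
    x, y, z = p_cam
    return np.array([fx * x / z + u0, fy * y / z + v0])

def project_bitelecentric(p_cam, mx, my):
    """Parallel projection of Equation (2): the pixel is independent of z_c."""
    x, y, _ = p_cam
    return np.array([mx * x, my * y])

# Moving the same (x, y) point in depth changes the pinhole pixel
# but leaves the telecentric pixel unchanged.
for z in (180.0, 200.0, 220.0):
    p = np.array([5.0, -3.0, z])                     # point in camera frame (mm)
    print(project_pinhole(p, 8000.0, 8000.0, 1224.0, 1024.0),
          project_bitelecentric(p, 17.7, 17.7))      # 17.7 px/mm is an assumed magnification
```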
In the single-sensor-single-screen configuration shown in Figure 4a, endocentric lenses have been substituted by bi-telecentric lenses. To further validate the nanometer-level height resolution of PMD, Häusler et al. [38] proposed a simplified method using a single-screen setup in a monocular microscopic system to measure surfaces with reflective properties. Liu et al. [39] replaced the traditional TFT screen light source with a combination of photolithography film, diffusion film, and LED lamps. This improvement effectively avoided the impact of screen refresh rates and grayscale discretization on measurement results, significantly enhancing both the accuracy and reliability of measurements.
As shown in Figure 4b, Niu et al. [40] and Huang et al. [41] position the LCD screen perpendicularly to the optical axis of the bi-telecentric imaging system. This approach eliminates ambiguity in the LCD screen’s rotation matrix, enabling the implementation of a single-sensor-multiple-screens phase-measuring deflectometry (PMD) using bi-telecentric lenses.
According to the literature review summarized in Table 2, there is currently no PMD technique that introduces bi-telecentric lenses into the multiple-sensors-single-screen PMD configuration. Based on Table 1, to improve measurement accuracy, avoid surface shape estimation, and reduce error sources in PMD, it is necessary to propose a stereo bi-telecentric PMD technique.

1.4. Proposed Stereo Bi-Telecentric PMD

This paper presents a stereo bi-telecentric PMD technique, together with a holistic and flexible calibration method for it. In monocular bi-telecentric PMD, the LCD screen can be adjusted to be perpendicular to the optical axis of the bi-telecentric imaging system to avoid ambiguity in the LCD screen rotation matrix [40]. However, in stereo bi-telecentric PMD, the LCD screen cannot be perpendicular to the optical axes of both bi-telecentric imaging systems simultaneously. Self-calibration methods without the assistance of third-party devices are more popular in PMD calibration due to their low operational complexity and high measurement efficiency [42]. Under the same calibration conditions as stereo PMD based on endocentric lenses [43,44], only a markerless flat mirror is needed, which is placed in three different poses to obtain the phase information of the flat LCD screen, thereby completing the calibration. The proposed calibration method does not require the help of a micro-positioning stage [45,46,47], which is otherwise needed in telecentric lens calibration to resolve the ambiguity of the rotation matrix in the extrinsic parameters. Therefore, this stereo bi-telecentric PMD technique and its calibration method are easy to set up in practical implementation.
Stereo PMD requires two steps to obtain the measurement result: the anchor point positioning procedure and the iterative surface reconstruction procedure. To obtain accurate measurement results, the calibration residuals are utilized to construct the inverse distortion map through bicubic Hermite interpolation, which yields an accurate anchor positioning result. Repeatability and reproducibility experiments have been conducted to investigate the accuracy of the proposed stereo bi-telecentric PMD system.

2. Flat-Mirror-Only Calibration Procedure

2.1. Measurement Principle of the Stereo Bi-Telecentric PMD

As shown in Figure 5, similar to stereo PMD with endocentric lenses [32], the reconstruction of the SUT in the stereo bi-telecentric PMD system can be accomplished through anchor point positioning, followed by iterative surface S reconstruction.
(1)
The detailed positioning procedure for anchor point Z will be discussed in Section 3.1, which deals with the triangulation in stereo-PMD setup. Although bi-telecentric lenses exhibit significantly lower distortion compared to endocentric lenses, it is still necessary to accurately handle the distortion mapping relationship during the anchor point positioning process to improve the accuracy of the measurement system. The distortion mapping for the pixel coordinate upA and 3D point pA in camera A will be discussed in Section 3.2, while the inverse distortion mapping for the 3D point pB and pixel coordinate upB in camera B will be discussed in detail in Section 3.4.
(2)
Once the anchor point has been established, the iteration starts with an initial plane S0 in the mono-PMD setup. Then the slope data of the guessed surface can be calculated. The updated iteration surface Sn can be calculated by the integral reconstruction of the slope data [48,49], until the iteration result converges [50]. A minimal one-dimensional sketch of this loop is given below.
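To make the two reconstruction steps concrete, the toy example below runs the iteration of step (2) in one dimension. The simplified geometry (a telecentric camera looking straight down, a screen at height H, a parabolic test surface) and all numerical values are illustrative assumptions, not the paper's setup; the decoded screen coordinate s(x) stands in for the phase-shifting measurement, and the anchor point fixes the integration constant.

```python
import numpy as np

H = 100.0                                  # assumed screen height above the x-axis (mm)
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
h_true = 0.002 * x**2                      # "unknown" surface used only to simulate data

# Forward simulation: reflect the downward camera ray about the true normal
# and intersect it with the screen plane y = H to obtain the decoded point s(x).
slope_true = np.gradient(h_true, dx)
rx = -2.0 * slope_true / (1.0 + slope_true**2)
ry = (1.0 - slope_true**2) / (1.0 + slope_true**2)
s = x + (H - h_true) / ry * rx

def cumtrapz(y, dx):
    """Cumulative trapezoidal integration starting from zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

anchor = h_true[0]                         # the anchor point fixes the integration constant
h = np.zeros_like(x)                       # initial plane S0
for _ in range(20):
    to_cam = np.stack([np.zeros_like(x), np.ones_like(x)])   # direction toward the camera
    to_scr = np.stack([s - x, H - h])                          # direction toward the screen point
    to_scr = to_scr / np.linalg.norm(to_scr, axis=0)
    n = to_cam + to_scr                                        # un-normalized bisector (normal)
    slope = -n[0] / n[1]                                       # surface gradient from the normal
    h = cumtrapz(slope, dx) + anchor                           # integral reconstruction

print("max reconstruction error (mm):", np.max(np.abs(h - h_true)))
```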
The developed hardware (as shown in Figure 6) includes two bi-telecentric lenses with cameras, a flat LCD screen, and a 535 nm, 50 mW laser diode serving as the laser pointer. Since the mirror surface lacks distinct features, it is necessary to utilize its diffuse reflection characteristics during the anchor point positioning process. This can be achieved by using a laser to mark a spot on the surface that exhibits diffuse reflection properties. The model of the bi-telecentric lenses is TC23144, with a nominal magnification of 0.061. The cameras use the Sony IMX264 CMOS sensor with a resolution of 2448 × 2048. The model of the flat LCD screen is a ViewSonic VP2768 with a resolution 3840 × 2160. The LCD coordinate system {L} takes the first row and first column of the LCD screen as the origin, with the x-axis along the horizontal direction of the LCD screen, the y-axis along the vertical direction of the LCD screen, and the z-axis perpendicular to the screen inward.
From the reconstruction procedure, it can be seen that the geometric relationships between the cameras and the LCD, as well as the cameras’ intrinsic parameters with distortion, must be precisely calibrated. As the extrinsic parameters, the rotation matrix ${}^{A}_{L}\mathbf{R}$ and the translation vector ${}^{A}_{L}\mathbf{T}$ represent the relative positions between {L} and camera A’s local coordinate system {A}. The magnifications $m_x^A$ (along the camera horizontal direction) and $m_y^A$ (along the camera vertical direction) are the intrinsic parameters of camera A. The rotation matrix ${}^{B}_{L}\mathbf{R}$ and the translation vector ${}^{B}_{L}\mathbf{T}$ represent the relative positions between {L} and camera B’s local coordinate system {B}. The magnifications $m_x^B$ (along the camera horizontal direction) and $m_y^B$ (along the camera vertical direction) are the intrinsic parameters of camera B.

2.2. Formula Derivation in the Calibration

The calibration process starts by placing a flat mirror in three different poses, allowing camera A and camera B to capture the phase-shifting fringes on the flat LCD screen (as shown in Figure 7). The red dots in Figure 7 are indicated by the laser pointer for anchor point positioning, and the blue frames are the bounding boxes of the red dots. To obtain reliable calibration results, nonlinear errors need to be suppressed [51]. By the optimal phase-shifting technique [52], the corresponding physical position pL = [xL, yL, 0]T in the flat LCD coordinate system {L} of camera pixel up = [u, v]T can be acquired. Because the LCD is flat, the z-value of pL is always zero, so the 3D point pL = [xL, yL, 0]T can be noted as a 2D point pL = [xL, yL]T. Its coordinate in camera B’s local coordinate system {B} is also a 2D point pB = [xB, yB]T. For camera B, the geometric relationship is drawn in Figure 8.
The rotation matrix ${}^{B}_{L}\mathbf{A}$ (whose determinant is −1) and the translation vector ${}^{B}_{L}\mathbf{b}$ represent the position relationship between the flat LCD screen virtual image coordinate system {L′} and camera B’s local coordinate system {B} when the flat mirror is at pose 1. The normal of the flat mirror in {B} can be noted as ${}^{B}\mathbf{n}$ = [nx, ny, nz]T. The distance between the origin of {B} and the flat mirror is noted as L.
According to the geometric relationship [53], the relationship between the 2D coordinates of p in {L} and {B} can be derived as
$$ {}^{B}\mathbf{p} = {}^{B}_{L}\mathbf{R}_{2\times 2}\,{}^{L}\mathbf{p} + {}^{B}_{L}\mathbf{T}_{2\times 1}, \tag{3} $$
where the number in the bottom right corner of a matrix symbol, for example, 2 × 2, denotes the upper-left 2 × 2 submatrix of that matrix. The geometric relationship between pB and its virtual image p′B with respect to the plane mirror is
$$ {}^{B}\mathbf{p}' = {}^{B}\mathbf{p} + 2d\,{}^{B}\mathbf{n}_{2\times 1}, \tag{4} $$
where the number 2 × 1 in the bottom right corner of ${}^{B}\mathbf{n}$ denotes its upper 2 × 1 submatrix, ${}^{B}\mathbf{n}_{2\times 1}$ = [nx, ny]T, and the distance d between the given point pB in {B} and the flat mirror is noted as
$$ d = L - {}^{B}\mathbf{n}_{2\times 1}^{T}\,{}^{B}\mathbf{p}. \tag{5} $$
Combining Equations (3) to (5), the following equation can be obtained:
$$ \left\{ \begin{aligned} {}^{B}\mathbf{p} &= {}^{B}_{L}\mathbf{R}_{2\times 2}\,{}^{L}\mathbf{p} + {}^{B}_{L}\mathbf{T}_{2\times 1} \\ {}^{B}\mathbf{p}' &= {}^{B}\mathbf{p} + 2d\,{}^{B}\mathbf{n}_{2\times 1} \\ d &= L - {}^{B}\mathbf{n}_{2\times 1}^{T}\,{}^{B}\mathbf{p} \end{aligned} \right. \;\Rightarrow\; \begin{bmatrix} {}^{B}\mathbf{p}' \\ 1 \end{bmatrix} = \begin{bmatrix} {}^{B}_{L}\mathbf{A}_{2\times 2} & {}^{B}_{L}\mathbf{b}_{2\times 1} \\ \mathbf{0}_{1\times 2} & 1 \end{bmatrix} \begin{bmatrix} {}^{L}\mathbf{p} \\ 1 \end{bmatrix}, \tag{6} $$
where
$$ \left\{ \begin{aligned} {}^{B}_{L}\mathbf{A}_{2\times 2} &= \left( \mathbf{I}_2 - 2\,{}^{B}\mathbf{n}_{2\times 1}\,{}^{B}\mathbf{n}_{2\times 1}^{T} \right) {}^{B}_{L}\mathbf{R}_{2\times 2} \\ {}^{B}_{L}\mathbf{b}_{2\times 1} &= \left( \mathbf{I}_2 - 2\,{}^{B}\mathbf{n}_{2\times 1}\,{}^{B}\mathbf{n}_{2\times 1}^{T} \right) {}^{B}_{L}\mathbf{T}_{2\times 1} + 2L\,{}^{B}\mathbf{n}_{2\times 1}, \end{aligned} \right. \tag{7} $$
where I2 denotes a 2 × 2 identity matrix. As bi-telecentric lenses perform parallel projection, they cannot identify the “z value” (or depth) of the point p or of the normal n in camera B’s local coordinate system {B}. Thus, these 2D equations in the camera’s local coordinate system can be obtained from the 3D ones by simply taking the first two components.
For the Sony IMX264 CMOS sensor, the pixel size is equal in the x and y directions. Thus, it can be noted that $m = m_x^B = m_y^B$; then up = [u, v]T and pB = [xB, yB]T have the following relationship:
$$ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} m & 0 & 0 \\ 0 & m & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_B \\ y_B \\ 1 \end{bmatrix}. \tag{8} $$
Substituting Equation (6) into Equation (8), the following equation can be obtained as
$$ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} m & 0 & 0 \\ 0 & m & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_L \\ y_L \\ 1 \end{bmatrix} = \begin{bmatrix} m r_{11} & m r_{12} & m t_x \\ m r_{21} & m r_{22} & m t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_L \\ y_L \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_L \\ y_L \\ 1 \end{bmatrix} = \mathbf{H} \begin{bmatrix} x_L \\ y_L \\ 1 \end{bmatrix}, \tag{9} $$
where rij are the elements of the rotation matrix ${}^{B}_{L}\mathbf{A}$ with i, j ∈ {1, 2, 3}, and ${}^{B}_{L}\mathbf{b}$ = [tx, ty]T. Equation (9) can be rewritten as
$$ \begin{bmatrix} x_L & y_L & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_L & y_L & 1 \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix}, \tag{10} $$
where hij can be calculated directly from a least squares solution of Equation (10). Thus, to calculate ${}^{B}_{L}\mathbf{A}$ and ${}^{B}_{L}\mathbf{b}$, m must be determined first.
From the orthogonality of the columns of the reflected rotation matrix ${}^{B}_{L}\mathbf{A}$, rij satisfy
$$ r_{11} r_{12} + r_{21} r_{22} + r_{31} r_{32} = 0. \tag{11} $$
Since each column of the rotation matrix is a unit vector, rij has the second relationship as
$$ r_{31} = \pm\sqrt{1 - r_{11}^2 - r_{21}^2}, \qquad r_{32} = \mp\sqrt{1 - r_{12}^2 - r_{22}^2}. \tag{12} $$
From Equations (11) and (12), the following equation can be obtained as
$$ 1 - \left( r_{11}^2 + r_{21}^2 + r_{12}^2 + r_{22}^2 \right) + \left( r_{11} r_{22} - r_{12} r_{21} \right)^2 = 0. \tag{13} $$
From the relationship between rij and hij in Equation (9), Equation (13) can be rewritten as
$$ m^4 - \left( h_{11}^2 + h_{21}^2 + h_{12}^2 + h_{22}^2 \right) m^2 + \left( h_{11} h_{22} - h_{12} h_{21} \right)^2 = 0, \tag{14} $$
where m is taken as the maximum non-negative root of Equation (14). Then
$$ r_{ij} = h_{ij}\, m^{-1}, \quad i, j \in \{1, 2\}. \tag{15} $$
Now m has been determined. According to Equation (9), the translation vector ${}^{B}_{L}\mathbf{b}$ can be retrieved by
$$ t_x = \frac{h_{13}}{m}, \qquad t_y = \frac{h_{23}}{m}. \tag{16} $$
By Equations (12) and (15), rij with i, j ∈ {1, 2} can be determined, while the signs of r31 and r32 remain uncertain. Since the rotation matrix is orthogonal, the remaining elements can be determined by
$$ \left[ r_{13}, r_{23}, r_{33} \right]^{T} = \left[ r_{11}, r_{21}, r_{31} \right]^{T} \times \left[ r_{12}, r_{22}, r_{32} \right]^{T}. \tag{17} $$
Now, the following equation can be obtained as
$$ {}^{B}_{L}\mathbf{A}_{+} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \qquad {}^{B}_{L}\mathbf{A}_{-} = \begin{bmatrix} r_{11} & r_{12} & -r_{13} \\ r_{21} & r_{22} & -r_{23} \\ -r_{31} & -r_{32} & r_{33} \end{bmatrix}, \qquad {}^{B}_{L}\mathbf{b} = \begin{bmatrix} t_x \\ t_y \end{bmatrix}. \tag{18} $$
It should be noticed that ${}^{B}_{L}\mathbf{A}$ has an ambiguous solution. These two solutions are noted as ${}^{B}_{L}\mathbf{A}_{+}$ and ${}^{B}_{L}\mathbf{A}_{-}$. The reason why there are two solutions of ${}^{B}_{L}\mathbf{A}$ is that bi-telecentric lenses perform parallel projection and cannot identify the depth. Figure 9 shows the geometric relationship of the two possible LCD virtual images, which the bi-telecentric lenses cannot distinguish.
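The derivation from Equation (10) to Equation (18) can be condensed into a short least-squares routine. The NumPy sketch below (the function name and data layout are ours, not the paper's) estimates m, the translation, and the two ambiguous reflected rotation matrices for a single mirror pose from noise-free LCD-pixel correspondences; distortion is ignored.

```python
import numpy as np

def estimate_monocular_pose(p_L, p_uv):
    """Sketch of Section 2.2 for one mirror pose.

    p_L  : (N, 2) LCD points [x_L, y_L] decoded by phase shifting (mm).
    p_uv : (N, 2) corresponding camera pixels [u, v].
    Returns the magnification m, the translation [tx, ty], and the two
    ambiguous reflected rotation matrices of Equation (18).
    """
    N = p_L.shape[0]
    # Stack Equation (10) for all correspondences and solve for h by least squares.
    D = np.zeros((2 * N, 6))
    c = p_uv.reshape(-1)
    D[0::2, 0:2] = p_L
    D[0::2, 2] = 1.0
    D[1::2, 3:5] = p_L
    D[1::2, 5] = 1.0
    h, *_ = np.linalg.lstsq(D, c, rcond=None)
    h11, h12, h13, h21, h22, h23 = h

    # Equation (14): quadratic in m^2; take the maximum non-negative root.
    s = h11**2 + h21**2 + h12**2 + h22**2
    q = (h11 * h22 - h12 * h21)**2
    m2 = np.real(np.roots([1.0, -s, q]))
    m = np.sqrt(np.max(m2[m2 > 0]))

    # Equations (15) and (16).
    r11, r12, r21, r22 = h11 / m, h12 / m, h21 / m, h22 / m
    tx, ty = h13 / m, h23 / m

    # Equation (12); Equation (11) fixes the relative sign of r31 and r32.
    r31 = np.sqrt(max(0.0, 1.0 - r11**2 - r21**2))
    r32 = np.sqrt(max(0.0, 1.0 - r12**2 - r22**2))
    if r11 * r12 + r21 * r22 > 0:
        r32 = -r32

    def build(sign):
        c1 = np.array([r11, r21, sign * r31])
        c2 = np.array([r12, r22, sign * r32])
        c3 = np.cross(c1, c2)                      # Equation (17)
        return np.column_stack([c1, c2, c3])

    return m, np.array([tx, ty]), build(+1.0), build(-1.0)
```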

2.3. Eliminating Ambiguities within Monocular Setup

According to Equation (7), the rotation matrix ${}^{B}_{L}\mathbf{A}_i$ at pose i (i ∈ {1, 2, 3}) can be calculated as
$$ {}^{B}_{L}\mathbf{A}_i = \left( \mathbf{I}_3 - 2\,{}^{B}\mathbf{n}_i\,{}^{B}\mathbf{n}_i^{T} \right) {}^{B}_{L}\mathbf{R} = \begin{bmatrix} 1 - 2 n_{ix} n_{ix} & -2 n_{ix} n_{iy} & -2 n_{ix} n_{iz} \\ -2 n_{ix} n_{iy} & 1 - 2 n_{iy} n_{iy} & -2 n_{iy} n_{iz} \\ -2 n_{ix} n_{iz} & -2 n_{iy} n_{iz} & 1 - 2 n_{iz} n_{iz} \end{bmatrix} {}^{B}_{L}\mathbf{R}. \tag{19} $$
Let
$$ {}^{B}_{L}\mathbf{A}_i\,{}^{B}_{L}\mathbf{A}_j^{T} = \left( \mathbf{I}_3 - 2\,{}^{B}\mathbf{n}_i\,{}^{B}\mathbf{n}_i^{T} \right) \left( \mathbf{I}_3 - 2\,{}^{B}\mathbf{n}_j\,{}^{B}\mathbf{n}_j^{T} \right)^{T}, \quad i, j \in \{1, 2, 3\}\ \text{and}\ i \neq j. \tag{20} $$
It can be noticed that
$$ \det\!\left( {}^{B}_{L}\mathbf{A}_i\,{}^{B}_{L}\mathbf{A}_j^{T} \right) = 1, \tag{21} $$
and ${}^{B}_{L}\mathbf{A}_i\,{}^{B}_{L}\mathbf{A}_j^{T}$ is orthogonal. Therefore, ${}^{B}_{L}\mathbf{A}_i\,{}^{B}_{L}\mathbf{A}_j^{T}$ is a rotation matrix. The eigenvector $\boldsymbol{\psi}_{ij}$ of ${}^{B}_{L}\mathbf{A}_i\,{}^{B}_{L}\mathbf{A}_j^{T}$ corresponding to the unit eigenvalue can be calculated from
$$ \left( {}^{B}_{L}\mathbf{A}_i\,{}^{B}_{L}\mathbf{A}_j^{T} - \mathbf{I}_3 \right) \boldsymbol{\psi}_{ij} = \mathbf{0} \;\Rightarrow\; \boldsymbol{\psi}_{ij} = \begin{bmatrix} \dfrac{n_{iy} n_{jz} - n_{jy} n_{iz}}{n_{ix} n_{jy} - n_{jx} n_{iy}} \\[2pt] \dfrac{n_{jx} n_{iz} - n_{ix} n_{jz}}{n_{ix} n_{jy} - n_{jx} n_{iy}} \\[2pt] 1 \end{bmatrix} = \frac{1}{n_{ix} n_{jy} - n_{jx} n_{iy}} \begin{bmatrix} n_{iy} n_{jz} - n_{jy} n_{iz} \\ n_{jx} n_{iz} - n_{ix} n_{jz} \\ n_{ix} n_{jy} - n_{jx} n_{iy} \end{bmatrix}. \tag{22} $$
It also can be noted that
$$ {}^{B}\mathbf{n}_i \times {}^{B}\mathbf{n}_j = \begin{bmatrix} n_{ix} \\ n_{iy} \\ n_{iz} \end{bmatrix} \times \begin{bmatrix} n_{jx} \\ n_{jy} \\ n_{jz} \end{bmatrix} = \begin{bmatrix} n_{iy} n_{jz} - n_{jy} n_{iz} \\ n_{jx} n_{iz} - n_{ix} n_{jz} \\ n_{ix} n_{jy} - n_{jx} n_{iy} \end{bmatrix}. \tag{23} $$
Comparing Equations (22) and (23) gives the relationship
$$ {}^{B}\mathbf{n}_i \times {}^{B}\mathbf{n}_j = \left( n_{ix} n_{jy} - n_{jx} n_{iy} \right) \boldsymbol{\psi}_{ij}. \tag{24} $$
Here, $\otimes$ is defined as the operator that performs normalization after the cross product. Thus, all the normal vectors of the flat mirror at the three different poses can be obtained by
$$ \left\{ \begin{aligned} {}^{B}\mathbf{n}_1 \otimes {}^{B}\mathbf{n}_2 &= \boldsymbol{\psi}_{12} \\ {}^{B}\mathbf{n}_1 \otimes {}^{B}\mathbf{n}_3 &= \boldsymbol{\psi}_{13} \\ {}^{B}\mathbf{n}_2 \otimes {}^{B}\mathbf{n}_3 &= \boldsymbol{\psi}_{23} \end{aligned} \right. \;\Rightarrow\; \left\{ \begin{aligned} {}^{B}\mathbf{n}_1 &= \boldsymbol{\psi}_{12} \otimes \boldsymbol{\psi}_{13} \\ {}^{B}\mathbf{n}_2 &= \boldsymbol{\psi}_{12} \otimes \boldsymbol{\psi}_{23} \\ {}^{B}\mathbf{n}_3 &= \boldsymbol{\psi}_{13} \otimes \boldsymbol{\psi}_{23} \end{aligned} \right. \tag{25} $$
Derived from Equation (19),
$$ \left\{ \begin{aligned} {}^{B}_{L}\mathbf{R}_1 &= \left( \mathbf{I}_3 - 2\,{}^{B}\mathbf{n}_1\,{}^{B}\mathbf{n}_1^{T} \right) {}^{B}_{L}\mathbf{A}_1 \\ {}^{B}_{L}\mathbf{R}_2 &= \left( \mathbf{I}_3 - 2\,{}^{B}\mathbf{n}_2\,{}^{B}\mathbf{n}_2^{T} \right) {}^{B}_{L}\mathbf{A}_2 \\ {}^{B}_{L}\mathbf{R}_3 &= \left( \mathbf{I}_3 - 2\,{}^{B}\mathbf{n}_3\,{}^{B}\mathbf{n}_3^{T} \right) {}^{B}_{L}\mathbf{A}_3 \end{aligned} \right. \;\Rightarrow\; {}^{B}_{L}\bar{\mathbf{R}} = \frac{{}^{B}_{L}\mathbf{R}_1 + {}^{B}_{L}\mathbf{R}_2 + {}^{B}_{L}\mathbf{R}_3}{3}. \tag{26} $$
Thus, according to [43], the rotation matrix between camera B and the flat LCD screen can be calculated as
$$ {}^{B}_{L}\mathbf{R} = \left[ \left( {}^{B}_{L}\bar{\mathbf{R}}^{T}\,{}^{B}_{L}\bar{\mathbf{R}} \right)^{0.5} \right]^{-1} {}^{B}_{L}\bar{\mathbf{R}}, \tag{27} $$
where $(\,\cdot\,)^{0.5}$ indicates the square root of the elements within the matrix. The translation vector between {B} and {L} can be determined by a least squares solution of
$$ \begin{bmatrix} \mathbf{I}_2 - 2\,{}^{B}\mathbf{n}_{1,2\times 1}\,{}^{B}\mathbf{n}_{1,2\times 1}^{T} & 2\,{}^{B}\mathbf{n}_{1,2\times 1} & \mathbf{0}_{2\times 1} & \mathbf{0}_{2\times 1} \\ \mathbf{I}_2 - 2\,{}^{B}\mathbf{n}_{2,2\times 1}\,{}^{B}\mathbf{n}_{2,2\times 1}^{T} & \mathbf{0}_{2\times 1} & 2\,{}^{B}\mathbf{n}_{2,2\times 1} & \mathbf{0}_{2\times 1} \\ \mathbf{I}_2 - 2\,{}^{B}\mathbf{n}_{3,2\times 1}\,{}^{B}\mathbf{n}_{3,2\times 1}^{T} & \mathbf{0}_{2\times 1} & \mathbf{0}_{2\times 1} & 2\,{}^{B}\mathbf{n}_{3,2\times 1} \end{bmatrix} \begin{bmatrix} {}^{B}_{L}\mathbf{T}_{2\times 1} \\ L_1 \\ L_2 \\ L_3 \end{bmatrix} = \begin{bmatrix} {}^{B}_{L}\mathbf{b}_1 \\ {}^{B}_{L}\mathbf{b}_2 \\ {}^{B}_{L}\mathbf{b}_3 \end{bmatrix}, \tag{28} $$
which has the general form Dx = c (where D is a known 6 × 5 matrix, c is a known 6 × 1 vector, and x is the 5 × 1 vector of unknowns). The reprojection error of the least squares process can be defined as
$$ \mathrm{Error} = \left\| \mathbf{D}\mathbf{x} - \mathbf{c} \right\|_2. \tag{29} $$
It is important to note that the ambiguous solutions of ${}^{B}_{L}\mathbf{A}_i$ (i ∈ {1, 2, 3}) may cause an increase in the error. The rotation matrices at the three different LCD virtual image positions, with two solutions for each position, result in a total of eight possible combinations. Table 3 and Table 4 show the reprojection errors and the solutions of Equation (28) for these eight combinations for cameras A and B, respectively. For every camera, six incorrect solutions can be eliminated by the Error, leaving the two solutions marked in red and blue in Table 3 and Table 4.
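The chain from Equation (20) to Equation (27) can be followed with a few lines of linear algebra. The following NumPy sketch (the function names and the SVD-based orthogonalization are our own choices, not the paper's exact procedure) recovers the three mirror normals and the rotation ${}^{B}_{L}\mathbf{R}$ from three candidate matrices ${}^{B}_{L}\mathbf{A}_i$; the sign ambiguity of each normal does not affect the reflections in Equation (26).

```python
import numpy as np

def normalized_cross(a, b):
    """The cross-then-normalize operator used in Equation (25)."""
    c = np.cross(a, b)
    return c / np.linalg.norm(c)

def mirror_normals_and_rotation(A_list):
    """Sketch of Section 2.3 for three candidate matrices A_i (Equation (19)).

    A_list : list of three 3x3 reflected rotation matrices, one per mirror pose.
    Returns the mirror normals n_i in {B} and an orthogonalized rotation R_LB.
    """
    def unit_eigvec(M):
        # Eigenvector of M associated with the eigenvalue closest to +1 (Equation (22)).
        w, V = np.linalg.eig(M)
        v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
        return v / np.linalg.norm(v)

    psi12 = unit_eigvec(A_list[0] @ A_list[1].T)
    psi13 = unit_eigvec(A_list[0] @ A_list[2].T)
    psi23 = unit_eigvec(A_list[1] @ A_list[2].T)

    # Equation (25): each normal is perpendicular to the two psi vectors that involve it.
    n = [normalized_cross(psi12, psi13),
         normalized_cross(psi12, psi23),
         normalized_cross(psi13, psi23)]

    # Equation (26): undo each reflection and average the three rotation estimates.
    R_bar = sum((np.eye(3) - 2.0 * np.outer(ni, ni)) @ Ai
                for ni, Ai in zip(n, A_list)) / 3.0
    # Equation (27) re-orthogonalizes the average; an SVD projection is used
    # here as a common stand-in for that step.
    U, _, Vt = np.linalg.svd(R_bar)
    R = U @ Vt
    return n, R
```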

2.4. Eliminating Ambiguities within Stereo Setup

Now the stereo system has four solutions. Obviously, the distance d between the given point p and the flat mirror should be the same in the two cameras’ geometric relationships. Here, the given point can be set as the origin pL = [0, 0]T of {L}. Combining Equations (3) and (5), the distance d can be calculated as
$$ d = L - {}^{B}\mathbf{n}_{2\times 1}^{T}\,{}^{B}_{L}\mathbf{T}_{2\times 1}. \tag{30} $$
Here, the calculated d at the three poses can be collected as d = [d1, d2, d3]T. Thus, the remaining four solutions can be filtered using the d from Equation (30). As shown in Table 5, $\left\| \mathbf{d}_A - \mathbf{d}_B \right\|_2$ can be utilized to eliminate the two incorrect solutions with larger differences in the distance d. The remaining two solutions are marked in red (as solution A) and blue (as solution B) in Table 5.
One way to find the last ambiguity is to plot the geometric relationships for each of these two solutions and observe which scenario matches the real physical situation depicted in Figure 6. Figure 10 illustrates the geometric relationships of solution A (Figure 10a) and solution B (Figure 10b). It can be easily noticed that solution B is the solution that matches the real physical situation depicted in Figure 6.
Another way to resolve the ambiguity is to use stereo deflectometry technology. For the anchor point highlighted with the laser shown in Figure 7, the ambiguous solution can be filtered by minimizing the residual of the angle θ between the normals calculated from cameras A and B.
$$ \hat{t} = \arg\min_{t}\; \angle\!\left( \mathrm{normal}_A(t),\ \mathrm{normal}_B(t) \right). \tag{31} $$
As shown in Figure 11, the anchor point Z must lie on the surface under test when the searching depth t makes the angle between the calculated normalA and normalB equal to zero. The ambiguous solution makes the minimization residual very large.
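The distance test of Equation (30) and Table 5 reduces to a small search over the remaining candidate pairs. A minimal sketch, assuming each monocular candidate is stored as a dictionary with our own field names, is given below.

```python
import numpy as np

def pose_distances(L_vals, n_list, T):
    """Equation (30): distance from the camera-frame origin to the mirror at each
    pose, d_i = L_i - n_{i,2x1}^T . T_{2x1}."""
    return np.array([L - n[:2] @ T for L, n in zip(L_vals, n_list)])

def pick_stereo_solution(cands_A, cands_B):
    """Keep the pair of monocular solutions whose d vectors agree best (Table 5).

    cands_A / cands_B : lists of dicts with keys 'L' (three mirror distances),
    'n' (three mirror normals in the camera frame), and 'T' (2D translation);
    the field names are ours, chosen for this sketch.
    """
    best, best_err = None, np.inf
    for a in cands_A:
        for b in cands_B:
            dA = pose_distances(a['L'], a['n'], a['T'])
            dB = pose_distances(b['L'], b['n'], b['T'])
            err = np.linalg.norm(dA - dB)
            if err < best_err:
                best, best_err = (a, b), err
    return best, best_err
```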

2.5. Refine Calibration Results by the Bundle Adjustment

Finally, a bundle adjustment with distortion compensation should be performed by Equation (32) to refine all parameters in the above calibration process for accuracy [54]
$$ \Delta u_p = \sum_{j=1}^{N} \sum_{i=1}^{3} \left\| u_{pd}\!\left( m_x^A, m_y^A, {}^{A}_{L}\mathbf{R}, {}^{A}_{L}\mathbf{T}, m_x^B, m_y^B, {}^{B}_{L}\mathbf{R}, {}^{B}_{L}\mathbf{T}, {}^{L}\mathbf{n}_1, {}^{L}\mathbf{n}_2, {}^{L}\mathbf{n}_3, d_1, d_2, d_3, p_{Lij} \right) - u_p\!\left( p_{Lij} \right) \right\|, \tag{32} $$
where N is the point sampling number at pose i, and ${}^{L}\mathbf{n}$ is the normal in coordinate system {L}. pLij represents the LCD sampling coordinates at pose i and sampling order j. The integer sampling real image coordinates can be represented as up(pLij) according to the phase-shifting method. The optimized image decimal coordinates upd can be calculated from the optimization parameters. Because the phase-shifting method is needed to retrieve the LCD sampling coordinates pLij, the u and v coordinates of up are all integer camera pixel indices (the blue dots in Figure 12). There will always be some deviation Δup between the optimized image coordinates upd and the real image integer coordinates up. Here, upd is noted as the distorted image coordinates upd = [ud, vd]T (the red crosses in Figure 12), so ud and vd are decimal numbers. The relationship between upd and up is
$$ \Delta u_p = u_{pd} - u_p \;\Rightarrow\; \begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} = \begin{bmatrix} u_d \\ v_d \end{bmatrix} - \begin{bmatrix} u \\ v \end{bmatrix}. \tag{33} $$
After the bundle adjustment, the calibration results are shown in Table 6. The calibration procedure for the proposed stereo bi-telecentric phase-measuring deflectometry, which requires only a flat mirror, is illustrated in Figure 13.
The translation vectors ${}^{A}_{L}\mathbf{T}$ and ${}^{B}_{L}\mathbf{T}$ in Table 6 are 2D translation vectors. The physical implication is that the bi-telecentric imaging system cannot perceive depth information, meaning its spatial position can be at any location along its optical axis. To facilitate subsequent surface reconstruction in 3D space, the depth component Tz is added to the 2D translation vector. The schematic positions of the bi-telecentric imaging system in the coordinate system when Tz is 0, 50, or 100 are shown in Figure 14. The value of Tz does not affect the distribution of the incident light rays for imaging. Therefore, to facilitate the subsequent coordinate transformation calculations, the third dimension Tz = 0 is added to the translation vectors ${}^{A}_{L}\mathbf{T}$ and ${}^{B}_{L}\mathbf{T}$ to clearly define the relative spatial position of the bi-telecentric imaging system. Let the previous 2D translation vectors be denoted as ${}^{A}_{L}\mathbf{T}_{2\times 1}$ and ${}^{B}_{L}\mathbf{T}_{2\times 1}$, respectively. After adding the depth component Tz = 0, the resulting 3D translation vectors are
$$ {}^{A}_{L}\mathbf{T} = \begin{bmatrix} 461.28868 \\ 254.18019 \\ 0 \end{bmatrix}, \qquad {}^{B}_{L}\mathbf{T} = \begin{bmatrix} 409.74525 \\ 282.33540 \\ 0 \end{bmatrix}. \tag{34} $$
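The bundle adjustment of Equation (32) is a standard nonlinear least-squares problem. The sketch below shows a simplified single-camera version built from Equations (6)-(8) and solved with scipy.optimize.least_squares; the parameter layout, the rotation-vector parameterization, the camera-frame mirror normals, and the omission of distortion compensation and of the second camera are our simplifications, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproject(params, p_L, pose_idx):
    """Reproject LCD points through mirror pose i into camera pixels.
    Parameter layout (ours, for this sketch):
    [m, rotvec(3), tx, ty, n1(3), L1, n2(3), L2, n3(3), L3]."""
    m = params[0]
    R = Rotation.from_rotvec(params[1:4]).as_matrix()
    T = params[4:6]
    res = []
    for i, pts in zip(pose_idx, p_L):
        n = params[6 + 4 * i: 9 + 4 * i]
        n = n / np.linalg.norm(n)
        L = params[9 + 4 * i]
        refl2 = np.eye(2) - 2.0 * np.outer(n[:2], n[:2])
        A = refl2 @ R[:2, :2]                     # Equation (7), rotation part
        b = refl2 @ T + 2.0 * L * n[:2]           # Equation (7), translation part
        cam = pts @ A.T + b                       # Equation (6): 2D virtual-image points in {B}
        res.append(m * cam)                       # Equation (8): pixels
    return np.vstack(res)

def residuals(params, p_L, p_uv, pose_idx):
    """Stacked reprojection residuals, the single-camera analogue of Equation (32)."""
    return (reproject(params, p_L, pose_idx) - np.vstack(p_uv)).ravel()

# Usage sketch (x0 is an initial guess from Sections 2.2-2.4; p_L_per_pose and
# p_uv_per_pose are lists of (N_i, 2) arrays, one per mirror pose):
# result = least_squares(residuals, x0, args=(p_L_per_pose, p_uv_per_pose, [0, 1, 2]))
```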

3. Accurate Inverse Distortion Mapping

3.1. Anchor Point Positioning Procedure

To achieve accurate surface measurements, it is essential to obtain precise anchor points Z [55,56] from the calibrated parameters. The calculation procedure of anchor point Z is shown in Figure 15. For a given pixel up of camera A:
(a1)
Use the phase-shifting method to retrieve its corresponding LCD point QA.
(a2)
According to the calibration residuals, find up’s corresponding distorted image coordinate upd = [ud, vd]T from the distortion map D(u, v).
(a3)
According to Equation (35), retrieve the x and y coordinates pA = [xA, yA]T of the 2D anchor point Z in camera A’s local coordinate {A}.
$$ \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} = \begin{bmatrix} m_x^A & 0 & 0 \\ 0 & m_y^A & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_A \\ y_A \\ 1 \end{bmatrix}. \tag{35} $$
(a4)
The bi-telecentric lenses cannot identify the depth. By setting the searching depth as t, ZA can be noted as ZA = [xA, yA, t]T.
(a5)
According to Equation (36), transform the 3D coordinates of anchor point Z from the camera local coordinate {A} to the LCD coordinate {L} as
$$ {}^{L}\mathbf{Z} = {}^{A}_{L}\mathbf{R}^{-1} \left( {}^{A}\mathbf{Z} - {}^{A}_{L}\mathbf{T} \right). \tag{36} $$
(a6)
According to the law of reflection, the calculated normalA is the bisector of the lines connecting LZ with pA and LZ with QA.
(b1)
According to Equation (37), retrieve the 2D coordinates of anchor point Z in the camera local coordinate {B} as
$$ {}^{B}\mathbf{Z} = \left( {}^{B}_{L}\mathbf{R}\,{}^{L}\mathbf{Z} \right)_{2\times 1} + {}^{B}_{L}\mathbf{T}_{2\times 1}. \tag{37} $$
(b2)
The corresponding distorted image coordinate upd = [ud, vd]T of ZB can be calculated from
$$ \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} = \begin{bmatrix} m_x^B & 0 & 0 \\ 0 & m_y^B & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_B \\ y_B \\ 1 \end{bmatrix}. \tag{38} $$
(b3)
Find distorted image coordinate upd’s corresponding pixel coordinate up = [u, v]T of camera B from the inverse distortion map D−1(ud, vd).
(b4)
Use the phase-shifting method to retrieve its corresponding LCD point QB.
(b5)
The calculated normalB is the bisector of the lines connecting LZ with pB and LZ with QB.
(c1)
By optimizing the searching depth t, the anchor point Z can be precisely positioned when the angle θ between normalA and normalB is zero.
The schematic positioning procedure of anchor point Z is shown in Figure 15. From this positioning process, it can be seen that accurately building the distortion map D(u, v) used in Procedure (a2) and the inverse distortion map D−1(ud, vd) used in Procedure (b3) is the key to determining the accuracy of the anchor point Z calculation.
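Step (c1) is a one-dimensional search over the depth t. The sketch below illustrates it with scipy.optimize.minimize_scalar; all argument names are ours, the distortion maps of Procedures (a2)/(b3) are omitted, and phase_lookup_B is a hypothetical stand-in for the phase-shifting decoding of Procedures (b3)-(b4).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def unit(v):
    return v / np.linalg.norm(v)

def bisector(a, b):
    """Law-of-reflection normal: unit bisector of the two ray directions."""
    return unit(unit(a) + unit(b))

def locate_anchor(ray_origin_L, dirA_L, Q_A_L, to_camB_L, R_LB, T2_B, m_B,
                  phase_lookup_B, t_bounds=(0.0, 400.0)):
    """Sketch of the depth search of Equation (31) and steps (a4)-(c1).

    ray_origin_L, dirA_L : camera A's viewing ray for the given pixel, in {L}.
    Q_A_L                : its decoded LCD point [x_L, y_L, 0] in {L}.
    to_camB_L            : unit direction from the surface toward camera B in {L}.
    phase_lookup_B(u, v) : hypothetical helper returning the decoded LCD point
                           [x_L, y_L, 0] in {L} for a camera-B pixel.
    """
    def residual(t):
        Z_L = ray_origin_L + t * dirA_L                      # candidate anchor point (a4)/(a5)
        n_A = bisector(-dirA_L, Q_A_L - Z_L)                 # step (a6)
        Z_B2 = (R_LB @ Z_L)[:2] + T2_B                        # Equation (37)
        u, v = m_B * Z_B2                                     # Equation (38), distortion omitted
        Q_B_L = phase_lookup_B(u, v)                          # steps (b3)-(b4)
        n_B = bisector(to_camB_L, Q_B_L - Z_L)                # step (b5)
        return np.arccos(np.clip(np.dot(n_A, n_B), -1.0, 1.0))

    opt = minimize_scalar(residual, bounds=t_bounds, method='bounded')
    return ray_origin_L + opt.x * dirA_L, opt.fun             # anchor point and residual angle
```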

3.2. Build Distortion Maps

As the pixel coordinates of up, u and v are all integer values. There will always be some deviation between the optimized image coordinates upd and the real image integer coordinates up, so ud and vd are decimal values. The relationship between upd and up is
$$ \begin{bmatrix} u_d \\ v_d \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix} + \begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix}. \tag{39} $$
Note Equation (39) as the distortion map D(·) between the integer pixel up and the distorted decimal pixel upd. Therefore, Equation (39) can be expressed as
$$ \left\{ \begin{aligned} u_d &= D_u(u, v) \\ v_d &= D_v(u, v) \end{aligned} \right. \tag{40} $$
Because up is the 2D integer tabulated coordinate, the bicubic Hermite interpolation can be utilized as the distortion map, whose interpolation control grid is composed of the 2D integer tabulated coordinates [u, v]T [57].
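As a concrete illustration of Equations (39) and (40), the forward distortion map can be tabulated on the integer pixel grid and interpolated. The sketch below uses scipy's RegularGridInterpolator with method='cubic' (SciPy ≥ 1.9) as a stand-in for the bicubic Hermite interpolation; the function and argument names are ours.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_distortion_map(u_grid, v_grid, du, dv):
    """Interpolate the calibration residuals (du, dv), tabulated on the integer
    pixel grid (u_grid, v_grid), so that D(u, v) -> (u_d, v_d).

    du, dv : arrays of shape (len(u_grid), len(v_grid)) holding the deviations
             of Equation (39) at each control pixel.
    """
    Du = RegularGridInterpolator((u_grid, v_grid), du, method='cubic')
    Dv = RegularGridInterpolator((u_grid, v_grid), dv, method='cubic')

    def D(u, v):
        p = np.stack([u, v], axis=-1)
        return u + Du(p), v + Dv(p)      # Equation (39): add the interpolated deviation
    return D
```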

3.3. Use Polynomial Fitting Method to Build Inverse Distortion Maps

To build the inverse distortion map D−1() between the distorted decimal pixel [ud, vd]T and the integer pixel [u, v]T, a natural idea is to perform polynomial fitting on the scatter pointset (ud, vd, Δu) shown in Figure 16a. The inverse distortion map Δu = D−1(ud, vd) constructed by a polynomial fitting is shown in Figure 16b. It can be seen that there will be some fitting residuals, as shown in Figure 16c. Thus, the inverse distortion mapping constructed by polynomial fitting has low mapping accuracy due to the presence of fitting residuals.

3.4. Use Bicubic Hermite Interpolation to Build Inverse Distortion Maps

The inverse distortion map D−1 can also be accurately achieved by a smooth triangulation-based interpolation of the 2D look-up table (ud, vd, Δu). (It is not recommended to use linear interpolation because the interpolating function is not differentiable at the control points.) However, applying the analytical derivatives framework or the automatic differentiation framework to this triangulation-based interpolation is almost impossible [58]. The characteristics of the above two common inverse distortion mapping methods are shown in Table 7. To obtain the reconstruction results accurately and efficiently in deflectometry, the inverse distortion mapping should be constructed to be detail-preserving and automatically differentiable. Thus, the bicubic Hermite interpolation is utilized to build the inverse distortion mapping [59].
To further illustrate the construction mechanism of the inverse distortion map, the cubic Hermite interpolation for dimension u is taken as an example. The control points (u, ud) are the red dots in Figure 17a. Thus, the original distortion map D(u) = ud can be built by a cubic Hermite interpolation of (u, ud) (the yellow curve in Figure 17a). The resampled control points (u′, ud′) (the blue dots in Figure 17a) can be retrieved from
$$ u' = \arg\min_{u} \left\| D(u) - \mathrm{round}\!\left[ D(u) \right] \right\|_2, \tag{41} $$
where the resampled ud′ = D(u′) is an integer value. Thanks to the fact that ud′ is monotonically increasing, the inverse distortion map constructed by cubic Hermite interpolation, D−1(ud′) = u′ (shown in Figure 17b), can be constructed directly by swapping the dependent and independent variables of D(u′) = ud′ (the blue curve in Figure 17a). It should be noted that round[D(u′)] may not cover all integers in the interval, so a further interpolation [60] should be employed to fill in the absent control points (the magenta dots in Figure 17a).
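A one-dimensional version of this resampling can be sketched as follows. PchipInterpolator is a shape-preserving piecewise cubic Hermite interpolant used here as a stand-in for the cubic Hermite interpolation of the text; root finding with brentq replaces the arg-min search of Equation (41), and the end-point integers are simply trimmed instead of being filled in as in [60].

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.optimize import brentq

def build_inverse_map_1d(u, u_d):
    """Sketch of Equation (41) in one dimension.

    u   : integer pixel coordinates (control points of the forward map D).
    u_d : their distorted decimal coordinates from calibration (monotone increasing).
    Returns an interpolant D^-1 such that D^-1(u_d') = u'.
    """
    D = PchipInterpolator(u, u_d)                          # forward map D(u) = u_d
    # Resample: find u' such that D(u') is an integer (interior integers only).
    targets = np.arange(np.floor(u_d[0]) + 1, np.ceil(u_d[-1]))
    u_resampled = np.array([brentq(lambda x, k=k: D(x) - k, u[0], u[-1])
                            for k in targets])
    # Swap dependent and independent variables to obtain the inverse map.
    return PchipInterpolator(targets, u_resampled)
```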
Similarly, the distortion maps ud = Du(u, v) and vd = Dv(u, v) for camera A in the u and v directions acquired by the calibration are shown in Figure 18a,d (only the Δu and Δv components are shown). The inverse distortion maps u = Du−1(ud, vd) and v = Dv−1(ud, vd) (shown in Figure 18b,e) can be constructed by the bicubic Hermite interpolation in the same manner as
$$ (u', v') = \arg\min_{u, v} \left\| D_u(u, v) - \mathrm{round}\!\left[ D_u(u, v) \right] \right\|_2 + \left\| D_v(u, v) - \mathrm{round}\!\left[ D_v(u, v) \right] \right\|_2. \tag{42} $$
The accuracy of the constructed inverse distortion maps can be indicated by the reprojection errors Erroru and Errorv according to Equation (43). The reprojection errors are shown in Figure 18c,f. The construction results of the inverse distortion maps for camera B are shown in Figure 19. The majority of the reprojection errors in the u and v directions for both cameras are below 0.1 pixel, which is lower than the reprojection errors of the polynomial fitting method.
$$ \left\{ \begin{aligned} \mathrm{Error}_u &= D_u^{-1}\!\left( D_u(u, v), D_v(u, v) \right) - u \\ \mathrm{Error}_v &= D_v^{-1}\!\left( D_u(u, v), D_v(u, v) \right) - v \end{aligned} \right. \tag{43} $$

4. Accuracy Investigation

4.1. Repeatability

After the holistic calibration of the stereo bi-telecentric PMD system and the construction of the mutual distortion maps, the surface under test can be retrieved by the shape reconstruction algorithms [48,49,50]. In order to quantitatively evaluate the repeatability of the proposed stereo bi-telecentric PMD, a concave mirror (stock #43-553) from Edmund Optics with λ/4 surface accuracy was measured four times at the same position. The experimental setup is shown in Figure 20. The red dots in Figure 20 are indicated by the laser pointer for anchor point positioning, and the blue frames are the bounding boxes of the red dots. The differences between the last three surface measurements and the first one are shown in Figure 21. For the measurement of this mirror, the repeatability error of the proposed stereo bi-telecentric PMD is less than 0.1 μm.

4.2. Reproducibility

In order to quantitatively evaluate the reproducibility of the proposed stereo bi-telecentric PMD, five mirrors with different apertures and curvatures were measured at different positions in the measurement space. The reference radii of all mirrors were calibrated by the 3D optical profilometer LUPHOScan 260 HD from Taylor Hobson. The corresponding measurement errors are summarized in Figure 22.
The geometric relationships between the various measured mirrors and the calibrated stereo bi-telecentric PMD in {L} are shown in Figure 23. The study shows that the proposed approach can achieve a measurement accuracy of less than 3.5 μm (Peak-to-Valley value) within 100 mm (Width) × 100 mm (Height) × 200 mm (Depth) domain for various surfaces. The proposed method does not require the help of a micro-positioning stage in the bi-telecentric lens calibration process to resolve the ambiguity of the rotation matrix in the extrinsic parameters. The accurate calibration results and inverse distortion maps allow for obtaining reliable measurement results without restricting the placement of the SUT.

5. Conclusions

This work combines Hesch’s stereo PMD calibration method (suitable for endocentric lenses) and Li’s telecentric lens calibration method (which requires a translation stage), and develops the 3D geometric relationship among the flat screen, endocentric cameras, and flat mirrors into the 2D geometric relationship among the flat screen, bi-telecentric cameras, and flat mirrors through formula derivation. The detailed derivation process demonstrates the rigor of the method and shows how to avoid using the translation stage and even the chessboard target during the calibration process.
The method proposed in this work successfully avoids the use of a translation stage in any calibration procedure of the stereo bi-telecentric PMD system by filtering the reprojection errors, which significantly enhances the efficiency and reliability of the measurement. In order to obtain accurate measurement results, the calibration residuals of camera pixels are utilized to construct the inverse distortion map through bicubic Hermite interpolation, yielding an accurate anchor positioning result.

Author Contributions

Conceptualization, Y.W. and F.F.; methodology F.F. and Y.W.; software, Y.W.; validation, Y.W.; formal analysis, Y.W.; investigation, Y.W.; resources, F.F.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, F.F.; visualization, Y.W.; supervision, F.F.; project administration, F.F.; funding acquisition, F.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 52035009.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors are grateful to Junzhe Fu for her help with the preparation of figures in this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fang, F.; Zhang, X.; Weckenmann, A.; Zhang, G.; Evans, C. Manufacturing and measurement of freeform optics. Cirp Ann. 2013, 62, 823–846. [Google Scholar] [CrossRef]
  2. Fang, F.; Cheng, Y.; Zhang, X. Design of freeform optics. Adv. Opt. Technol. 2013, 2, 445–453. [Google Scholar] [CrossRef]
  3. Nie, Y.; Mohedano, R.; Benítez, P.; Chaves, J.; Miñano, J.C.; Thienpont, H.; Duerr, F. Optical design of an ultrashort throw ratio projector with two freeform mirrors. In Current Developments in Lens Design and Optical Engineering XVII; SPIE: Bellingham, WA, USA, 2016; Volume 9947. [Google Scholar]
  4. Gao, Y.; Cheng, D.; Xu, C.; Wang, Y. Design of an ultra-short throw catadioptric projection lens with a freeform mirror. In Advanced Optical Design and Manufacturing Technology and Astronomical Telescopes and Instrumentation; SPIE: Bellingham, WA, USA, 2016; Volume 10154. [Google Scholar]
  5. Park, H.S.; Park, M.W.; Won, K.H.; Kim, K.H.; Jung, S.K. In-vehicle AR-HUD system to provide driving-safety information. ETRI J. 2013, 35, 1038–1047. [Google Scholar] [CrossRef]
  6. Pauzie, A. Head up display in automotive: A new reality for the driver. In Design, User Experience, and Usability: Interactive Experience Design, Proceedings of the 4th International Conference, DUXU 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, 2–7 August 2015; Springer International Publishing: Cham, Switzerland, 2015; Part III 4. [Google Scholar]
  7. Chamorro, E.; Cleva, J.M.; Concepción, P.; Subero, M.S.; Alonso, J. Lens design techniques to improve satisfaction in free-form progressive addition lens users. JOJ Ophthalmol. 2018, 6, 555688. [Google Scholar] [CrossRef]
  8. Han, Z.; Wang, Y.; Ma, X.; Liu, S.; Zhang, X.; Zhang, G. T-spline based unifying registration procedure for free-form surface workpieces in intelligent CMM. Appl. Sci. 2017, 7, 1092. [Google Scholar] [CrossRef]
  9. Wang, Y.; Li, Z.; Fu, Z.; Fang, F.; Zhang, X. Radial scan form measurement for freeform surfaces with a large curvature using stylus profilometry. Meas. Sci. Technol. 2019, 30, 045010. [Google Scholar] [CrossRef]
  10. Tsutsumi, H.; Yoshizumi, K.; Takeuchi, H. Ultrahighly accurate 3D profilometer. In Optical Design and Testing II; SPIE: Bellingham, WA, USA, 2005; Volume 5638. [Google Scholar]
  11. Petter, J.; Nicolaus, R.; Noack, A.; Tschudi, T. Multi wavelength interferometry for high precision distance measurement. In Proceedings of the OPTO 2009 Proceedings of SENSOR+ TEST Conference, Nuremberg, Germany, 26–28 May 2009. [Google Scholar]
  12. Li, W.; Liu, T.; Tai, M.; Zhong, Y. Three-dimensional measurement for specular reflection surface based on deep learning and phase-measuring profilometry. Optik 2022, 271, 169983. [Google Scholar] [CrossRef]
  13. Cordero-Dávila, A.; González-García, J.; Robledo-Sánchez, C.I.; Leal-Cabrera, I. Local and global surface errors evaluation using Ronchi test, without both approximation and integration. Appl. Opt. 2011, 50, 4817–4823. [Google Scholar] [CrossRef] [PubMed]
  14. Knauer, M.C.; Kaminski, J.; Hausler, G. Phase-measuring deflectometry: A new approach to measure specular free-form surfaces. In Optical Metrology in Production Engineering; SPIE: Bellingham, WA, USA, 2004; pp. 366–376. [Google Scholar]
  15. Faber, C.; Olesch, E.; Krobot, R.; Häusler, G. Deflectometry challenges interferometry: The competition gets tougher! In Interferometry XVI: Techniques and analysis; SPIE: Bellingham, WA, USA, 2012; Volume 8493. [Google Scholar]
  16. Häusler, G.; Knauer, M.C.; Faber, C.; Richter, C.; Peterhänsel, S.; Kranitzky, C.; Veit, K. Deflectometry challenges interferometry: 3D-metrology from nanometer to meter. In Digital Holography and Three-Dimensional Imaging; Optica Publishing Group: Berlin, Germany, 2009. [Google Scholar]
  17. Su, P.; Parks, R.E.; Wang, L.; Angel, R.P.; Burge, J.H. Software configurable optical test system: A computerized reverse Hartmann test. Appl. Opt. 2010, 49, 4404–4412. [Google Scholar] [CrossRef]
  18. Huang, L.; Ng, C.S.; Asundi, A.K. Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry. Opt. Express 2011, 19, 12809–12814. [Google Scholar] [CrossRef] [PubMed]
  19. Wang, J.; Liu, W.; Guo, J.; Wei, C.; Yang, L.; Peng, R.; Yue, H.; Liu, Y. Ultra high-speed 3D shape measurement technology for specular surfaces based on μPMD. Opt. Express 2024, 32, 34366–34380. [Google Scholar] [CrossRef]
  20. Xu, X.; Zhang, X.; Niu, Z.; Wang, W.; Xu, M. Extra-detection-free monoscopic deflectometry for the in situ measurement of freeform specular surfaces. Opt. Lett. 2019, 44, 4271–4274. [Google Scholar] [CrossRef]
  21. Guo, H.; Tao, T. Specular surface measurement by using a moving diffusive structured light source. In Optical Design and Testing III; SPIE: Bellingham, WA, USA, 2007; Volume 6834. [Google Scholar]
  22. Li, C.; Li, Y.; Xiao, Y.; Zhang, X.; Tu, D. Phase measurement deflectometry with refraction model and its calibration. Opt. Express 2018, 26, 33510–33522. [Google Scholar] [CrossRef] [PubMed]
  23. Tang, Y.; Su, X.; Liu, Y.; Jing, H. 3D shape measurement of the aspheric mirror by advanced phase-measuring deflectometry. Opt. Express 2008, 16, 15090–15096. [Google Scholar] [CrossRef] [PubMed]
  24. Liu, Y.; Huang, S.; Zhang, Z.; Gao, N.; Gao, F.; Jiang, X. Full-field 3D shape measurement of discontinuous specular objects by direct phase-measuring deflectometry. Sci. Rep. 2017, 7, 10293. [Google Scholar] [CrossRef]
  25. Zhang, Z.; Wang, Y.; Huang, S.; Liu, Y.; Chang, C.; Gao, F.; Jiang, X. Three-dimensional shape measurements of specular objects using phase-measuring deflectometry. Sensors 2017, 17, 2835. [Google Scholar] [CrossRef] [PubMed]
  26. Zhao, P.; Gao, N.; Zhang, Z.; Gao, F.; Jiang, X. Performance analysis and evaluation of direct phase-measuring deflectometry. Opt. Lasers Eng. 2018, 103, 24–33. [Google Scholar] [CrossRef]
  27. Li, Y.; Gao, F.; Xu, Y.; Zhang, Z.; Jiang, X. Error analysis of the plate beamsplitter in near optical coaxial phase measurement deflectometry. In MATEC Web of Conferences; EDP Sciences: Les Ulis, France, 2024; Volume 401. [Google Scholar]
  28. Liu, C.; Zhang, Z.; Gao, N.; Meng, Z. Large-curvature specular surface phase-measuring deflectometry with a curved screen. Opt. Express 2021, 29, 43327–43341. [Google Scholar] [CrossRef]
  29. Han, H.; Wu, S.; Song, Z. Curved LCD based deflectometry method for specular surface measurement. Opt. Lasers Eng. 2022, 151, 106909. [Google Scholar] [CrossRef]
  30. Zhang, X.; Ren, Y.; Chen, Y.; Li, S. Large-area measurement with stereo deflectometry. In Optical Fabrication and Testing; Optica Publishing Group: Bellingham, WA, USA, 2021. [Google Scholar]
  31. Wang, R.; Li, D.; Zhang, X.; Zheng, W.; Yu, L.; Ge, R. Marker-free stitching deflectometry for three-dimensional measurement of the specular surface. Opt. Express 2021, 29, 41851–41864. [Google Scholar] [CrossRef]
  32. Han, H.; Wu, S.; Song, Z.; Gu, F.; Zhao, J. 3D reconstruction of the specular surface using an iterative stereoscopic deflectometry method. Opt. Express 2021, 29, 12867–12879. [Google Scholar] [CrossRef]
  33. Williamson, M. Optics for high accuracy machine vision. Quality 2018, 18, 8–11. [Google Scholar]
  34. Li, H.; Liao, Z.; Cai, W.; Zhong, Y.; Zhang, X. Flexible calibration of the telecentric vision systems using only planar calibration target. In IEEE Transactions on Instrumentation and Measurement; IEEE: New York, NY, USA, 2023. [Google Scholar]
  35. Moru, D.; Borro, D. Analysis of different parameters of influence in industrial cameras calibration processes. Measurement 2021, 171, 108750. [Google Scholar] [CrossRef]
  36. Zhang, Z.; Chang, C.; Liu, X.; Li, Z.; Shi, Y.; Gao, N.; Meng, Z. Phase-measuring deflectometry for obtaining 3D shape of specular surface: A review of the state-of-the-art. Opt. Eng. 2021, 60, 020903. [Google Scholar] [CrossRef]
  37. Wang, R.; Li, D.; Zheng, W.; Yu, L.; Ge, R.; Zhang, X. Vision ray model based stereo deflectometry for the measurement of the specular surface. Opt. Lasers Eng. 2024, 172, 107831. [Google Scholar] [CrossRef]
  38. Häusler, G.; Richter, C.; Leitz, K.H.; Knauer, M.C. Microdeflectometry—A novel tool to acquire three-dimensional microtopography with nanometer height resolution. Opt. Lett. 2008, 33, 396–398. [Google Scholar] [CrossRef]
  39. Liu, Y.; Lehtonen, P.; Su, X. High-accuracy measurement for small scale specular objects based on PMD with illuminated film. Opt. Laser Technol. 2012, 44, 459–462. [Google Scholar] [CrossRef]
  40. Niu, Z.; Gao, N.; Zhang, Z.; Gao, F.; Jiang, X. 3D shape measurement of discontinuous specular objects based on advanced PMD with bi-telecentric lens. Opt. Express 2018, 26, 1615–1632. [Google Scholar] [CrossRef]
  41. Huang, L.; Wang, T.; Austin, C.; Lienhard, L.; Hu, Y.; Zuo, C.; Kim, D.W.; Idir, M. Collimated phase-measuring deflectometry. Opt. Lasers Eng. 2024, 172, 107882. [Google Scholar] [CrossRef]
  42. Xu, X.; Zhang, X.; Niu, Z.; Wang, W.; Zhu, Y.; Xu, M. Self-calibration of in situ monoscopic deflectometric measurement in precision optical manufacturing. Opt. Express 2019, 27, 7523–7536. [Google Scholar] [CrossRef]
  43. Xiao, Y.; Su, X.; Chen, W. Flexible geometrical calibration for fringe-reflection 3D measurement. Opt. Lett. 2012, 37, 620–622. [Google Scholar] [CrossRef] [PubMed]
  44. Xu, Y.; Gao, F.; Zhang, Z.; Jiang, X. A holistic calibration method with iterative distortion compensation for stereo deflectometry. Opt. Lasers Eng. 2018, 106, 111–118. [Google Scholar] [CrossRef]
  45. Chen, Z.; Liao, H.; Zhang, X. Telecentric stereo micro-vision system: Calibration method and experiments. Opt. Lasers Eng. 2014, 57, 82–92. [Google Scholar] [CrossRef]
  46. Yao, L.; Liu, H. A flexible calibration approach for cameras with double-sided telecentric lenses. Int. J. Adv. Robot. Syst. 2016, 13, 82. [Google Scholar] [CrossRef]
  47. Li, D.; Tian, J. An accurate calibration method for a camera with telecentric lenses. Opt. Lasers Eng. 2013, 51, 538–541. [Google Scholar] [CrossRef]
  48. Huang, L.; Xue, J.; Gao, B.; Zuo, C.; Idir, M. Zonal wavefront reconstruction in quadrilateral geometry for phase-measuring deflectometry. Appl. Opt. 2017, 56, 5139–5144. [Google Scholar] [CrossRef]
  49. Huang, L.; Xue, J.; Gao, B.; Zuo, C.; Idir, M. Spline based least squares integration for two-dimensional shape or wavefront reconstruction. Opt. Lasers Eng. 2017, 91, 221–226. [Google Scholar] [CrossRef]
  50. Graves, L.R.; Choi, H.; Zhao, W.; Oh, C.J.; Su, P.; Su, T.; Kim, D.W. Model-free deflectometry for freeform optics measurement using an iterative reconstruction technique. Opt. Lett. 2018, 43, 2110–2113. [Google Scholar] [CrossRef]
  51. Dai, G.; Hu, X. Correction of interferometric high-order nonlinearity error in metrological atomic force microscopy. Nanomanufacturing Metrol. 2022, 5, 412–422. [Google Scholar] [CrossRef]
  52. Wang, Y.; Fang, F. Optimal phase-shifting parameter in phase-measuring deflectometry. Meas. Sci. Technol. 2024, submitted.
  53. Hesch, J.A.; Mourikis, A.I.; Roumeliotis, S.I. Mirror-based extrinsic camera calibration. In Algorithmic Foundation of Robotics VIII: Selected Contributions of the Eight International Workshop on the Algorithmic Foundations of Robotics; Springer: Berlin/Heidelberg, Germany, 2010; pp. 285–299. [Google Scholar]
  54. Xu, Y.; Gao, F.; Ren, H.; Zhang, Z.; Jiang, X. An iterative distortion compensation algorithm for camera calibration based on phase target. Sensors 2017, 17, 1188. [Google Scholar] [CrossRef] [PubMed]
  55. Knauer, M.C.; Kaminski, J.; Häusler, G. Absolute Phasenmessende Deflektometrie; Lehrstuhl für Mikrocharakterisierung: Erlangen, Germany, 2006. [Google Scholar]
  56. Wang, R.; Li, D.; Zhang, X. Systematic error control for deflectometry with iterative reconstruction. Measurement 2021, 168, 108393. [Google Scholar] [CrossRef]
  57. Keys, R.G. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160. [Google Scholar]
  58. Flötotto, J. 2D and surface function interpolation. In CGAL User and Reference Manual, 5.6.1 ed; CGAL Editorial Board: Heidelberg, Germany, 2024. [Google Scholar]
  59. Agarwal, S.; Mierle, K. Ceres Solver (Version 2.2). 2023. Available online: https://github.com/ceres-solver/ceres-solver (accessed on 1 July 2024).
  60. Akima, H. A new method of interpolation and smooth curve fitting based on local procedures. J. ACM 1970, 17, 589–602. [Google Scholar]
Figure 1. Characteristics of typical contact and non-contact measurement techniques for freeform mirrors.
Figure 2. Three typical PMD configurations with endocentric lenses. (a) Single sensor and single screen based on predicted shape; (b) Single sensor and multiple screens based on screen movement; (c) Multiple sensors and single screen based on stereo deflectometry.
Figure 3. The imaging models of (a) endocentric lenses and (b) bi-telecentric lenses.
Figure 4. Three typical PMD configurations with bi-telecentric lenses. (a) Single sensor and single screen based on predicted shape; (b) Single sensor and multiple screens based on screen movement; (c) Multiple sensors and single screen based on stereo deflectometry.
Figure 5. The reconstruction procedure for the specular SUT in the stereo bi-telecentric PMD.
Figure 6. Setup of the stereo bi-telecentric PMD.
Figure 7. The reflected phase patterns of the flat LCD screen, as captured by two cameras under three different poses of the flat mirror.
Figure 8. The geometric relationship in the monocular situation.
Figure 9. The geometric relationship between two rotation matrix solutions A + L B and A L B .
Figure 10. The geometric relationship in the remaining (a) solution A and (b) solution B.
Figure 11. The ambiguous solutions can be resolved automatically by minimizing the residual of θ.
Figure 12. The pixel coordinate (u, v) and its corresponding distorted coordinate (u_d, v_d).
Figure 13. The holistic calibration procedure of the proposed stereo bi-telecentric phase-measuring deflectometry, with only a flat mirror required.
Figure 14. The influence of the translation vector z component (Tz) on the spatial position of a bi-telecentric imaging system.
Figure 15. The calculation procedure for anchor point Z. (a) Use the distortion map to search for anchor point Z given a pixel u_p^A in camera A; (b) Use the inverse distortion map to retrieve the corresponding pixel u_p^B of anchor point Z in camera B.
Figure 16. An example of an inverse distortion map constructed by polynomial fitting. (a) The point set acquired from calibration; (b) Polynomial fitting result of (a); (c) Polynomial fitting residuals.
Figure 17. An example of inverse distortion map construction in the u direction by cubic Hermite interpolation. (a) The distortion map D and the resampling process; (b) The constructed inverse distortion map D^−1.
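To make the resampling idea of Figure 17 concrete, the following one-dimensional sketch builds an inverse map in the u direction with a cubic Hermite spline. The distortion curve here is a toy monotone polynomial, not the calibrated map of this system, and the SciPy-based implementation is only one possible realization of the interpolation step; the sensor width and sample count are assumed values.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Toy monotone distortion map in the u direction (for illustration only):
# ideal pixel coordinate u -> distorted coordinate u_d.
u = np.linspace(0.0, 2448.0, 200)            # assumed sensor width in pixels
u_d = u + 1e-5 * (u - 1224.0) ** 2           # hypothetical distortion curve

# Swap the roles of the two axes and interpolate u as a function of u_d.
# Slopes are estimated by finite differences, mimicking a Hermite construction.
slopes = np.gradient(u, u_d)
D_inv = CubicHermiteSpline(u_d, u, slopes)

# Resample the inverse map D^{-1} on a regular grid of distorted coordinates.
u_d_grid = np.linspace(u_d.min(), u_d.max(), 200)
u_recovered = D_inv(u_d_grid)
print(np.abs(D_inv(u_d) - u).max())          # interpolation error at the samples (~0)
```

The bicubic variant used in this work applies the same construction in both the u and v directions, as shown in Figures 18 and 19.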
Figure 18. The construction of the inverse distortion maps for camera A. (a) The distortion map D_u in the u direction; (b) The constructed inverse distortion map D_u^−1 in the u direction; (c) The reprojection error of the constructed inverse distortion map in the u direction; (d) The distortion map D_v in the v direction; (e) The constructed inverse distortion map D_v^−1 in the v direction; (f) The reprojection error of the constructed inverse distortion map in the v direction.
Figure 19. The construction of the inverse distortion maps for camera B. (a) The distortion map D_u in the u direction; (b) The constructed inverse distortion map D_u^−1 in the u direction; (c) The reprojection error of the constructed inverse distortion map in the u direction; (d) The distortion map D_v in the v direction; (e) The constructed inverse distortion map D_v^−1 in the v direction; (f) The reprojection error of the constructed inverse distortion map in the v direction.
Figure 20. Repeatability verification of a 3-inch concave spherical mirror using the proposed stereo bi-telecentric phase-measuring deflectometry.
Figure 21. The surface differences between the measurement results of the repeatability verification.
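For reference, the peak-to-valley (PV) figure quoted for such difference maps is simply the spread between the highest and lowest deviation. A minimal sketch on a synthetic difference map (the array below is random placeholder data, not a measurement from this work):

```python
import numpy as np

# Hypothetical surface difference map in millimetres (placeholder data).
rng = np.random.default_rng(0)
diff_map = rng.normal(scale=0.5e-3, size=(512, 512))

# Peak-to-valley value of the difference map, reported in micrometres.
pv_um = (np.nanmax(diff_map) - np.nanmin(diff_map)) * 1e3
print(f"PV = {pv_um:.2f} um")
```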
Figure 22. Measurement error of various mirrors for the proposed stereo bi-telecentric PMD after calibration.
Figure 23. The geometric relationship between the surfaces-under-test and the calibrated stereo bi-telecentric PMD system.
Table 1. Comparison of three typical PMD configurations.
PMD Configuration | Advantages | Disadvantages
Single-sensor-single-screen | Simple setup | A predicted SUT shape is needed
Single-sensor-multiple-screens | Easy to reconstruct the shape of the SUT | Extra equipment introduces more error sources
Multiple-sensors-single-screen | No predicted SUT shape is needed | Difficult to calibrate
Table 2. The literature review of three PMD setups with endocentric and bi-telecentric lenses.
PMD Configurations | Single Sensor and Single Screen | Single Sensor and Multiple Screens | Multiple Sensors and Single Screen
With endocentric lenses | Häusler et al. [14]; Su et al. [17]; Huang et al. [18]; Wang et al. [19]; Xu et al. [20] | Guo et al. [21]; Li et al. [22]; Tang et al. [23]; Liu et al. [24]; Zhang et al. [25] | Li et al. [28]; Han et al. [29]; Zhang et al. [30]; Wang et al. [31]; Han et al. [32]
With bi-telecentric lenses | Häusler et al. [38]; Liu et al. [39] | Niu et al. [40]; Huang et al. [41] | Proposed in this paper
Table 3. Reprojection error and solutions of Equation (28) for camera A.
sgn(A_i^{L_A}), i ∈ {1, 2, 3} | +++ | ++− | +−+ | +−− | −++ | −+− | −−+ | −−−
L1 | −21.531 | −71.481 | −66.002 | −5.894 | 5.894 | 66.002 | 71.481 | 21.531
L2 | 66.461 | 29.054 | −66.991 | −6.886 | 6.886 | 66.991 | 29.054 | 66.461
L3 | 80.482 | −72.310 | 41.866 | −6.806 | 6.806 | −41.866 | 72.310 | −80.482
Error | 0.0133 | 0.1481 | 0.1364 | 0.0002 | 0.0002 | 0.1364 | 0.1481 | 0.0133
The operator sgn(·) distinguishes the two candidate rotation matrices A_i^{L_A} by + and −; the two possible solutions are marked in red and blue.
Table 4. Reprojection error and solutions of Equation (28) for camera B.
sgn(A_i^{L_B}), i ∈ {1, 2, 3} | +++ | ++− | +−+ | +−− | −++ | −+− | −−+ | −−−
L1 | 121.001 | −340.782 | −291.935 | −123.652 | 123.652 | 291.935 | 340.782 | −121.001
L2 | 121.362 | −344.473 | −272.385 | −191.708 | 191.708 | 272.385 | 344.473 | −121.362
L3 | 121.478 | −356.023 | −295.936 | −194.714 | 194.714 | 295.936 | 356.023 | −121.478
Error | 0.0005 | 4.1532 | 4.1539 | 4.1220 | 4.1220 | 4.1539 | 4.1532 | 0.0005
The operator sgn(·) distinguishes the two candidate rotation matrices A_i^{L_B} by + and −; the two possible solutions are marked in red and blue.
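The pattern in Tables 3 and 4 can be screened programmatically: for each camera, only the two sign combinations with the smallest reprojection error are retained as candidate solutions. A small sketch using the error rows copied from the tables (the variable names are ours, not from the paper):

```python
import numpy as np

# Sign combinations of (A_1, A_2, A_3) in the column order of Tables 3 and 4.
signs = ["+++", "++-", "+-+", "+--", "-++", "-+-", "--+", "---"]
err_A = np.array([0.0133, 0.1481, 0.1364, 0.0002, 0.0002, 0.1364, 0.1481, 0.0133])
err_B = np.array([0.0005, 4.1532, 4.1539, 4.1220, 4.1220, 4.1539, 4.1532, 0.0005])

# Keep the two combinations with the smallest reprojection error per camera.
keep_A = {signs[i] for i in np.argsort(err_A)[:2]}   # {'+--', '-++'}
keep_B = {signs[i] for i in np.argsort(err_B)[:2]}   # {'+++', '---'}
print(keep_A, keep_B)
```

These four survivors are exactly the combinations compared in Table 5.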
Table 5. The remaining two solutions filtered by d and their differences.
sgn(A_i), i ∈ {1, 2, 3} | Camera A | Camera B | Camera A | Camera B | Camera A | Camera B | Camera A | Camera B
 | +−− | +++ | −++ | −−− | +−− | −−− | −++ | +++
d1 | −136.790 | 136.545 | 136.790 | −136.545 | −136.790 | −136.545 | 136.790 | 136.545
d2 | −131.450 | 131.159 | 131.450 | −131.159 | −131.450 | −131.159 | 131.450 | 131.159
d3 | −128.906 | 128.656 | 128.906 | −128.656 | −128.906 | −128.656 | 128.906 | 128.656
‖d^A − d^B‖_2 | 458.272 | 458.272 | 0.455 | 0.455 (one value per Camera A–Camera B column pair)
The operator sgn(·) distinguishes the two candidate rotation matrices A_i by + and −; the two possible solutions are marked in red and blue.
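As a cross-check of the last row of Table 5, the residual ‖d^A − d^B‖_2 can be recomputed directly from the listed distances: the consistent camera pairings give roughly 0.455, the inconsistent ones roughly 458.272. A short sketch with values copied from the table (variable names are ours):

```python
import numpy as np

# Distances (d1, d2, d3) for one candidate solution per camera, from Table 5.
d_A = np.array([-136.790, -131.450, -128.906])        # camera A, sgn = (+, -, -)
d_B_same = np.array([-136.545, -131.159, -128.656])   # camera B, sgn = (-, -, -)
d_B_other = np.array([136.545, 131.159, 128.656])     # camera B, sgn = (+, +, +)

# The consistent pairing minimizes ||d^A - d^B||_2, which resolves the ambiguity.
print(np.linalg.norm(d_A - d_B_same))    # ~0.455
print(np.linalg.norm(d_A - d_B_other))   # ~458.27
```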
Table 6. The calibration results after the bundle adjustment.
Parameters | Camera A | Camera B
Magnification | m_x^A = 0.05790, m_y^A = 0.05628 | m_x^B = 0.05796, m_y^B = 0.05609
Rotation matrix between the flat LCD and the camera | R_L^A = [0.98765, 0.02942, 0.15388; 0.08725, 0.71251, 0.69621; 0.13012, 0.70104, 0.70115] | R_L^B = [0.98798, 0.12017, 0.09725; 0.01597, 0.70510, 0.70892; 0.15376, 0.69885, 0.69855]
Translation vector between the flat LCD and the camera | T_L^A = [461.28868, 254.18019]^T | T_L^B = [409.74525, 282.33540]^T
The normals of the flat mirror | n_1^L = [0.05334, 0.45445, 0.88914]^T, n_2^L = [0.06838, 0.45371, 0.88849]^T, n_3^L = [0.06999, 0.45954, 0.88536]^T
The distances between the origin of {L} and the flat mirror | d_1 = 135.633123, d_2 = 133.37208, d_3 = 130.78636
Table 7. The characteristics of various inverse distortion mapping methods.
Inverse Distortion Mapping Method | Preserves Detailed Information for Accuracy | Easily Applied Within an Automatic Differentiation Framework
Polynomial fitting | No | Yes
Triangulation-based interpolation | Yes | No
Bicubic Hermite interpolation (proposed) | Yes | Yes