Article

Fast 2D Subpixel Displacement Estimation

Min Wan, John J. Healy and John T. Sheridan

1 Department of Electrical Engineering, Eindhoven University of Technology, 5612 AE Eindhoven, The Netherlands
2 School of Electrical and Electronic Engineering, College of Engineering and Architecture, University College Dublin, Belfield, D04 V1W8 Dublin, Ireland
* Author to whom correspondence should be addressed.
Photonics 2024, 11(7), 625; https://doi.org/10.3390/photonics11070625
Submission received: 30 April 2024 / Revised: 19 June 2024 / Accepted: 27 June 2024 / Published: 29 June 2024
(This article belongs to the Special Issue Advanced Photonic Sensing and Measurement II)

Abstract

Fast and simple methods for motion estimation with subpixel accuracy are of interest in a variety of applications. In this paper, we extend a recently proposed method for quantifying 1D displacements with subpixel accuracy, referred to as the subtraction method (SM), to 2D motion. Simulation and experimental results are presented. The results indicate that any general motion in 2D, involving combinations of in-plane motions in x and y, can be determined using SM after a 1D calibration. The errors between the actual and estimated motion are examined.

1. Introduction

Subpixel displacement estimation is a technique used in various fields to measure the shifts or movements of an object at a scale smaller than the pixel resolution of an imaging sensor, based on the images taken before and after the translation. Its importance spans several areas due to its capability to provide higher accuracy and finer details than standard pixel-level methods. For instance, subpixel methods enhance the capability to detect and analyze subtle motions, leading to better performance in tracking and monitoring systems [1,2]. Subpixel estimation also improves the analysis of satellite and aerial images by providing more precise data on land use, environmental changes, and urban development. Other applications include video compression [3], image stabilization [4], super-resolution imaging [5], and computer vision [6].
At present, such displacements are commonly determined using the correlation of images before and after the motion; we will refer to this as the correlation method (CM). Frequency-based correlation is considered more accurate and faster than commonly used spatial correlation methods such as normalized cross-correlation. The principles and algorithms used in digital image correlation for this application of in-plane displacement (along with strain measurement) have been described in [7,8]. The importance of CM is such that there is an extensive literature devoted to reducing the CM calculation time, increasing the lateral resolution, reducing peak-locking bias error and suppressing noise effects in digital images [9,10,11,12,13,14]. Other non-correlation-based techniques have also been used to determine displacement: gradient-based methods [15], which analyze the intensity gradients in the image; block matching [16], which divides the image into blocks and finds the best match for each block in the next frame; phase-based techniques [2,17], which estimate displacement from the phase information of signals using phase gradients or phase unwrapping; and pattern recognition techniques [18], which leverage machine learning or deep learning models trained to recognize and predict displacement patterns from image data. Each of these techniques has its advantages and is suitable for different applications, depending on factors such as computational efficiency, robustness to noise, and the specific characteristics of the displacement being measured.
We have previously introduced a novel fast approach, the subtraction method (SM), to estimate displacement [8,19,20], which employs only subtraction and addition operations. This algorithm is much faster than CM. One-dimensional motion measurement has been demonstrated in detail using both simulated and experimental data, and detailed comparisons of the CM results to the corresponding SM results have been reported. In [21], Ferrer et al. performed an experiment using a portable photo booth with LED strips and reflective walls, and thereby demonstrated the capability of SM to track subpixel displacements. They compared it with an interpolated cross-correlation function. Four different targets with displacements of 0.002 pixels were used, which they claim approaches the theoretical digital resolution limit. They demonstrated that SM can be used for motion estimation without suffering from peak-locking error, and thus is a reliable alternative to the correlation method.
In this paper, SM is analyzed and applied to 2D motion estimation using both simulated and experimental data for the first time. This is a non-trivial extension because of crosstalk between the two dimensions. We present simulation and experimental results that demonstrate how SM can be used to determine 2D motion following appropriate 1D calibration.
This paper is organized as follows: In Section 2, a 2D analysis of SM, using continuous and discrete functions, is presented. In Section 3, simulation results are used to examine in detail the relationship between 2D in-plane motion and the corresponding 1D components (in x and y) of that motion. In Section 4, illustrative experimental results are presented, and the errors examined. Finally, we present our conclusions.

2. Two-Dimensional Theory

In this section, we derive the theoretical model of SM for 2D motion. In Section 2.1 of [8], a theoretical analysis of SM for 1D motion was presented based on the Taylor series expansion, and we follow that presentation. It is also possible to derive the algorithm using Newton's difference quotient or by interpreting it as a digital filter [20], but we omit those derivations for reasons of space. We consider the case of a continuous image in Section 2.1 and a discrete image in Section 2.2.

2.1. Continuous Model 2D Motion

Given an image, $I(x,y)$, over a given region, $a \le x \le b$, $c \le y \le d$, we expand $I(x+\Delta x,\, y+\Delta y)$ around $(x,y)$ using a Taylor series expansion,
$$ I(x+\Delta x,\, y+\Delta y) = I(x,y) + I_x(x,y)\,\Delta x + I_y(x,y)\,\Delta y + R_{2,N}. \tag{1} $$
We assume that the higher-order terms are negligible, i.e., the remainder $R_{2,N} \approx 0$.
Let $I_x(x,y)$ and $I_y(x,y)$ be the partial derivatives of $I(x,y)$ with respect to x and y,
$$ I_x(x,y) = \frac{\partial I(x,y)}{\partial x}, \qquad I_y(x,y) = \frac{\partial I(x,y)}{\partial y}. \tag{2} $$
We now define a difference function between the original image and the translated image,
$$ D(x,y;\Delta x,\Delta y) = I(x+\Delta x,\, y+\Delta y) - I(x,y) \approx I_x(x,y)\,\Delta x + I_y(x,y)\,\Delta y. \tag{3} $$
This difference may be trivially calculated from the images before and after translation; this forms the basis of the SM algorithm. The difference function is used to calculate the following 1D functions,
$$ C(x;\Delta x,\Delta y) = \int D(x,y;\Delta x,\Delta y)\,dy = \Delta x \int I_x(x,y)\,dy + \Delta y \int I_y(x,y)\,dy, \tag{4a} $$
$$ R(y;\Delta x,\Delta y) = \int D(x,y;\Delta x,\Delta y)\,dx = \Delta x \int I_x(x,y)\,dx + \Delta y \int I_y(x,y)\,dx. \tag{4b} $$
The absolute sums of $C(x;\Delta x,\Delta y)$ and $R(y;\Delta x,\Delta y)$ are used to find two further functions, as follows.
$$ T_C(\Delta x,\Delta y) = \int \bigl| C(x;\Delta x,\Delta y) \bigr|\,dx = \int \Bigl| \Delta x \int I_x(x,y)\,dy + \Delta y \int I_y(x,y)\,dy \Bigr|\,dx, \tag{5a} $$
$$ T_R(\Delta x,\Delta y) = \int \bigl| R(y;\Delta x,\Delta y) \bigr|\,dy = \int \Bigl| \Delta x \int I_x(x,y)\,dx + \Delta y \int I_y(x,y)\,dx \Bigr|\,dy. \tag{5b} $$
Equations (5a) and (5b) permit us to bound $T_C$ and $T_R$ as follows.
$$ |\Delta x| \int \Bigl| \int I_x\,dy \Bigr|\,dx - |\Delta y| \int \Bigl| \int I_y\,dy \Bigr|\,dx \le T_C(\Delta x,\Delta y) \le |\Delta x| \int \Bigl| \int I_x\,dy \Bigr|\,dx + |\Delta y| \int \Bigl| \int I_y\,dy \Bigr|\,dx, \tag{6a} $$
$$ |\Delta x| \int \Bigl| \int I_x\,dx \Bigr|\,dy - |\Delta y| \int \Bigl| \int I_y\,dx \Bigr|\,dy \le T_R(\Delta x,\Delta y) \le |\Delta x| \int \Bigl| \int I_x\,dx \Bigr|\,dy + |\Delta y| \int \Bigl| \int I_y\,dx \Bigr|\,dy. \tag{6b} $$
By examining these equations for 1D motion, we can define some useful (image-dependent) constants. For motion only in the x direction ($\Delta y = 0$),
$$ T_C(\Delta x, \Delta y = 0) = |\Delta x| \int \Bigl| \int I_x(x,y)\,dy \Bigr|\,dx = P_{C,1}\,|\Delta x|, \tag{7a} $$
$$ T_R(\Delta x, \Delta y = 0) = |\Delta x| \int \Bigl| \int I_x(x,y)\,dx \Bigr|\,dy = P_{R,1}\,|\Delta x|. \tag{7b} $$
$T_C(\Delta x, \Delta y = 0)$ and $T_R(\Delta x, \Delta y = 0)$ are both linear functions of $|\Delta x|$. From their definitions, we observe that $P_{C,1}$ and $P_{R,1}$ are positive real coefficients.
Similarly, for motion exclusively in the y direction ($\Delta x = 0$), we find the following:
$$ T_C(\Delta x = 0, \Delta y) = |\Delta y| \int \Bigl| \int I_y(x,y)\,dy \Bigr|\,dx = M_{C,1}\,|\Delta y|, \tag{8a} $$
$$ T_R(\Delta x = 0, \Delta y) = |\Delta y| \int \Bigl| \int I_y(x,y)\,dx \Bigr|\,dy = M_{R,1}\,|\Delta y|. \tag{8b} $$
Again, we find positive coefficients, now named $M_{C,1}$ and $M_{R,1}$. The relative sizes of the coefficients $P_{C,1}$, $P_{R,1}$, $M_{C,1}$ and $M_{R,1}$ will be critical to the sensitivity of the method. The coefficients are object-dependent, meaning a calibration step is needed.
We return to the case of 2D motion, i.e., $\Delta x \ne 0$ and $\Delta y \ne 0$. Now, $T_C(\Delta x,\Delta y)$ and $T_R(\Delta x,\Delta y)$ from Equation (6) can be rewritten more compactly as follows.
$$ P_{C,1}|\Delta x| - M_{C,1}|\Delta y| \le T_C(\Delta x,\Delta y) \le P_{C,1}|\Delta x| + M_{C,1}|\Delta y|, \tag{9a} $$
$$ P_{R,1}|\Delta x| - M_{R,1}|\Delta y| \le T_R(\Delta x,\Delta y) \le P_{R,1}|\Delta x| + M_{R,1}|\Delta y|. \tag{9b} $$
Next, we derive the model for discrete images.

2.2. Discrete Model 2D Motion

This section follows the derivation for the continuous case closely. The difference is that we assume our image, $I_{i,j}$, is discrete. Following Equation (1), over the sampled region $1 \le i, j \le N$,
$$ I_{i,j}(\Delta x, \Delta y) \approx I_{i,j}(0,0) + I_{i,j;x}(0,0)\,\Delta x + I_{i,j;y}(0,0)\,\Delta y, \tag{10} $$
where $I_{i,j;x}(0,0)$ and $I_{i,j;y}(0,0)$ are the (finite-difference) partial derivatives of $I_{i,j}(0,0)$ with respect to x and y,
$$ I_{i,j;x}(0,0) \approx \bigl[ I_{i,j}(\Delta x, \Delta y = 0) - I_{i,j}(0,0) \bigr] / \Delta x, \tag{11a} $$
$$ I_{i,j;y}(0,0) \approx \bigl[ I_{i,j}(\Delta x = 0, \Delta y) - I_{i,j}(0,0) \bigr] / \Delta y. \tag{11b} $$
The discrete difference function $D_{i,j}(\Delta x,\Delta y)$ is as follows.
$$ D_{i,j}(\Delta x,\Delta y) = I_{i,j}(\Delta x,\Delta y) - I_{i,j}(0,0) \approx I_{i,j;x}(0,0)\,\Delta x + I_{i,j;y}(0,0)\,\Delta y. \tag{12} $$
We use $D_{i,j}(\Delta x,\Delta y)$ to generate two 1D vectors. The first is the sum of the values in the jth row, yielding $R_j(\Delta x,\Delta y)$, a 1 × N vector. The second is the sum of the values in the ith column, yielding $C_i(\Delta x,\Delta y)$, an N × 1 vector. Explicitly,
$$ C_i(\Delta x,\Delta y) = \sum_{j=1}^{N} D_{i,j}(\Delta x,\Delta y), \qquad R_j(\Delta x,\Delta y) = \sum_{i=1}^{N} D_{i,j}(\Delta x,\Delta y). \tag{13} $$
Hence,
$$ T_C(\Delta x,\Delta y) = \sum_i \bigl| C_i(\Delta x,\Delta y) \bigr|, \qquad T_R(\Delta x,\Delta y) = \sum_j \bigl| R_j(\Delta x,\Delta y) \bigr|. \tag{14} $$
Equations (13) and (14) are illustrated in Figure 1.
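The discrete computation reduces to an image subtraction followed by sums over the two indices and two absolute totals. The following is a minimal Matlab sketch of Equations (12)–(14) (our own illustration; the function and variable names are not from the original implementation):

```matlab
function [TC, TR] = sm_accumulate(I0, I1)
% Subtraction method (SM) accumulators, Equations (12)-(14).
% I0: image before translation; I1: image after translation (same size).
D  = double(I1) - double(I0);  % difference image D_ij, Equation (12)
C  = sum(D, 2);                % C_i = sum over j of D_ij (N x 1), Equation (13)
R  = sum(D, 1);                % R_j = sum over i of D_ij (1 x N), Equation (13)
TC = sum(abs(C));              % accumulated grey level value T_C, Equation (14)
TR = sum(abs(R));              % accumulated grey level value T_R, Equation (14)
end
```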
For the 1D case, $\Delta y = 0$ or $\Delta x = 0$, for motion in the x and y directions, respectively. We define the following:
$$ T_C(\Delta x, \Delta y = 0) = \sum_i \bigl| C_i(\Delta x, \Delta y = 0) \bigr|, \qquad T_R(\Delta x, \Delta y = 0) = \sum_j \bigl| R_j(\Delta x, \Delta y = 0) \bigr|, \tag{15} $$
$$ T_C(\Delta x = 0, \Delta y) = \sum_i \bigl| C_i(\Delta x = 0, \Delta y) \bigr|, \qquad T_R(\Delta x = 0, \Delta y) = \sum_j \bigl| R_j(\Delta x = 0, \Delta y) \bigr|. \tag{16} $$
As before, both $T_C(\Delta x, \Delta y = 0)$ and $T_R(\Delta x, \Delta y = 0)$ are linear functions of $|\Delta x|$. Hence,
$$ T_C(\Delta x, \Delta y = 0) = P_{C,1}\,|\Delta x|, \qquad T_R(\Delta x, \Delta y = 0) = P_{R,1}\,|\Delta x|. \tag{17} $$
Further, if $T_R(\Delta x = 0, \Delta y)$ and $T_C(\Delta x = 0, \Delta y)$ are linear functions of $|\Delta y|$,
$$ T_R(\Delta x = 0, \Delta y) = M_{R,1}\,|\Delta y|, \qquad T_C(\Delta x = 0, \Delta y) = M_{C,1}\,|\Delta y|. \tag{18} $$
From Equations (17) and (18), the linear coefficients $P_{C,1}$, $P_{R,1}$, $M_{C,1}$ and $M_{R,1}$ can be found.
Given these coefficients (the M and P values), and the values of $T_C$ and $T_R$ for any general 2D motion involving any combination of $\Delta x$ and $\Delta y$, can we determine the values of $\Delta x$ and $\Delta y$, and could we even find the direction of motion?
To examine this situation, we return to the 2D case, i.e., when $\Delta x \ne 0$ and $\Delta y \ne 0$. $T_C(\Delta x,\Delta y)$ and $T_R(\Delta x,\Delta y)$, as defined in Equation (14), can be rewritten as,
$$ \mathrm{Low}(T_C) = P_{C,1}|\Delta x| - M_{C,1}|\Delta y| \le T_C(\Delta x,\Delta y) \le P_{C,1}|\Delta x| + M_{C,1}|\Delta y| = \mathrm{Up}(T_C), \tag{19} $$
$$ \mathrm{Low}(T_R) = P_{R,1}|\Delta x| - M_{R,1}|\Delta y| \le T_R(\Delta x,\Delta y) \le P_{R,1}|\Delta x| + M_{R,1}|\Delta y| = \mathrm{Up}(T_R). \tag{20} $$
In these equations, we have introduced the upper and lower limiting values of $T_C$ and $T_R$, i.e., Up(T) and Low(T), which are defined assuming the validity of our linear approximation, i.e., that $\Delta x$ and $\Delta y$ are small and Equation (10) is valid. If $T_C$ and $T_R$ could both be described as linear combinations of $\Delta x$ and $\Delta y$, there would be two simultaneous equations in two unknowns, and $\Delta x$ and $\Delta y$ could be unambiguously determined for any 2D motion. However, Equations (19) and (20) emphasize the ambiguity introduced by the use of the absolute value operation and the resulting loss of directional information.
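If the bounds in Equations (19) and (20) were achieved with equality (e.g., at the upper limits Up(TC) and Up(TR)), the two magnitudes would follow from a 2 × 2 linear solve. The sketch below is our own illustration of that idea, assuming the calibration coefficients PC1, PR1, MC1 and MR1 are already known from Equations (17) and (18); as the text emphasizes, only the magnitudes |Δx| and |Δy| could be recovered in this way, never the direction:

```matlab
% Hypothetical inversion assuming T_C and T_R sit at their upper limits,
% Up(T_C) and Up(T_R), in Equations (19) and (20).
A = [PC1, MC1;        % T_C ~ PC1*|dx| + MC1*|dy|
     PR1, MR1];       % T_R ~ PR1*|dx| + MR1*|dy|
d = A \ [TC; TR];     % solve the 2 x 2 system for the two magnitudes
absDx = d(1);         % |dx| estimate (the sign is lost by Equation (14))
absDy = d(2);         % |dy| estimate
```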

3. Two-Dimensional Simulation Results

In our simulation, we wish to examine the performance of SM for 2D motion estimation, especially the relationship between 2D motion and the corresponding 1D components. To do so, we first present a set of simulations using the standard USAF chart image. In the next section, Section 4, a corresponding set of experimental results illustrating the calibration process is presented.
To start, we define the quadrants, as shown in Figure 2. Any motion in the first quadrant (Q1) can be described as involving components of motion to the right (+x) and up (+y). Similarly, the second quadrant (Q2) involves motion to the right (+x) and down (−y), the third quadrant (Q3) motion to the left (−x) and down (−y), and the fourth quadrant (Q4) to the left (−x) and up (+y). Reference to diagonal motion in Q1, Q2, Q3 and Q4 involves motion along lines at 45°, −45°, −135° and 135°, with respect to the positive x axis.
Recall that SM is based on the calculation of the absolute sums of the accumulated grey level changes, the TC and TR values, and that the direction information has therefore been eliminated. To avoid repetition, we only present the simulation and experimental results for Q1. The corresponding results for Q2–Q4 can be found in [22].
In all the simulations presented below, an 8-bit grey level, 780 × 780-pixel USAF chart is first numerically converted to a 16-bit image, using the 'im2uint16' Matlab 2018a command. This is then used as the input, or original, image, I; see Figure 3a. Next, this image is positioned behind a fixed window, which represents the limiting field of view of the camera. This results in the 760 × 760-pixel windowed image, IW, shown in Figure 3b, which is then zero-padded up to 1024 × 1024 pixels, giving IPW, shown in Figure 3c. To model displacement, the original image, I, is shifted by δx behind the window to form the translated image, Iδx.
In order to simulate subpixel motion, bicubic interpolation is applied to the original image before translation. Simpler interpolation methods, e.g., linear and nearest neighbor, have also been examined. In general, bicubic interpolation, the default setting for image interpolation in Matlab, provides an accurate representation while requiring a computational time similar to that of the linear or nearest-neighbor options. We note that in all simulation results presented here, double-precision calculations are used, and in some cases this was necessary in order to avoid numerical inaccuracy. For each translation step, the same process is followed, i.e., windowing, padding, etc.
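The following Matlab sketch reproduces the main steps of this pipeline under stated assumptions: the file name is hypothetical, and imtranslate with bicubic interpolation stands in for the shift-then-window procedure described above (the Image Processing Toolbox is required):

```matlab
I   = im2uint16(imread('usaf_chart.png'));           % assumed 780 x 780 8-bit input, converted to 16-bit
dx  = 1/6.5; dy = 0;                                 % one subpixel translation step
Is  = imtranslate(im2double(I), [dx, dy], 'cubic');  % bicubic subpixel shift
IW  = Is(11:770, 11:770);                            % fixed 760 x 760 window (camera field of view)
IPW = padarray(IW, [132, 132], 0, 'both');           % zero-pad to 1024 x 1024, cf. Figure 3c
```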
Using this input image, we now examine SM. All calculations are performed using an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz with 32.0 GB of installed RAM.

3.1. Subtraction Method: Object without Noise

SM is now applied to examine the same 1D and 2D motion cases. For 1D motion in the +x{+y} direction, the displacement, δx, varies from 1/6.5 to 4 pixels in 26 steps. Two-dimensional motion occurs simultaneously in Q1, in both the x and y directions along the diagonal, y = x, at 45° to the positive x axis. Each step along the diagonal is made up of equal steps in x and y; therefore, both +x and +y vary from 1/6.5 to 4 pixels in 26 steps. The accumulated grey level values (TC and TR), i.e., Equation (14), as functions of the translated pixel displacements are presented in Figure 4a,b, respectively. In Figure 4c, the TR and TC results shown in Figure 4a,b, and their linear fits are presented. The results for 2D diagonal motion in Q1 are shown in Figure 4d.
We note that the result in Figure 4a corresponds to the case described by Equation (17). The slopes of the line fits to the accumulated grey levels, i.e., the TC values, over the two regions, (i) 0.15–1 pixels and (ii) 1–4 pixels, are listed in Table 1. In the table, for 1D motion along +x, we list the slopes of the linear fits, m, for TC and TR, which correspond to the $P_{C,1}$ and $P_{R,1}$ coefficients, respectively. Similarly, for motion along +y, the slopes for TC and TR are listed as $M_{C,1}$ and $M_{R,1}$; see Equation (18). The larger coefficient values are shown in bold text. The quality of the linear fits is indicated by the R² value. Note that this tabulated 1D motion information is used below, in this section, to calibrate the 2D motion.
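In Matlab, this piecewise calibration amounts to two first-order polyfit calls over the sub- and super-pixel regions. A sketch (the vectors delta and TCvals, holding the known 1D displacements and the corresponding TC values, are our own names):

```matlab
sub  = (delta <= 1);                         % region (i): subpixel displacements
sup  = (delta >= 1);                         % region (ii): super-pixel displacements
pSub = polyfit(delta(sub), TCvals(sub), 1);  % pSub(1) = m (e.g., P_C1), pSub(2) = c
pSup = polyfit(delta(sup), TCvals(sup), 1);  % separate fit for the super-pixel region
```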
From the table, we note that $P_{C,1} \gg P_{R,1} > 0$. Similarly, based on the 1D results in Figure 4b, from Table 1, we see that $M_{R,1} \gg M_{C,1} > 0$. Good quality fits to the 2D y = x results in Figure 4d are also found. To examine this quantitatively, the values at particular points in Figure 4a,b, and their ratios, will be used. For example, from these figures, we see that $T_R(\Delta x = 0, \Delta y = 4)/T_C(\Delta x = 4, \Delta y = 0) = (1.748 \times 10^9/1.312 \times 10^9) = 1.33$. Furthermore, $T_R(4,0)/T_C(4,0) = (9.239 \times 10^5/1.312 \times 10^9) = 7.042 \times 10^{-4}$; similarly, $T_C(0,4)/T_R(0,4) = (1.119 \times 10^6/1.748 \times 10^9) = 6.40 \times 10^{-4}$. As can be seen, the $T_R$ term in Figure 4a and the $T_C$ term in Figure 4b are very small. The 2D $T_C$ and $T_R$ terms in Figure 4d are almost identical to the corresponding 1D terms in Figure 4a,b.
Based on the results for Q1 presented in Figure 4, we now examine the predictions in Section 2.2 regarding (a) the existence of a linear relationship between the accumulated grey level values and translated distance; (b) the possibility of expressing 2D motion in terms of the corresponding 1D motions; and (c) the existence of upper and lower limits on the 2D accumulated grey level values.
In relation to (a), from Table 1, we see that good quality linear fits exist for all the simulation results, i.e., $R^2 \approx 1$.
In relation to (b), we examine the results listed in Table 1. It can be seen that the 1D TC results for +x closely agree with the 2D y = x results. Similarly, the 1D TR results for +y closely agree with the 2D y = x results. This supports the prediction that the 2D motion can be expressed in terms of the 1D motion.
In relation to (c), let us return to the discussion in Section 2.2 regarding inequalities and limiting values. To define these limits, the Up(T) and Low(T) expressions in Equations (19) and (20) were introduced. These place limits on the 2D result using the linear fits to the 1D TR and TC values given in Table 1. Alternatively, the limits on the 2D simulation results in Figure 4d can be directly calculated using the 1D simulation results in Figure 4a,b. In this case, linear fits are not used. In Equations (21) and (22), we define the 1D T Sum and 1D T Diff operations as follows:
$$ 1\text{D } T_C \text{ Sum} = T_C(\Delta x, \Delta y = 0) + T_C(\Delta x = 0, \Delta y), \tag{21a} $$
$$ 1\text{D } T_C \text{ Diff} = T_C(\Delta x, \Delta y = 0) - T_C(\Delta x = 0, \Delta y), \tag{21b} $$
$$ 1\text{D } T_R \text{ Sum} = T_R(\Delta x, \Delta y = 0) + T_R(\Delta x = 0, \Delta y), \tag{22a} $$
$$ 1\text{D } T_R \text{ Diff} = T_R(\Delta x = 0, \Delta y) - T_R(\Delta x, \Delta y = 0). \tag{22b} $$
In Figure 5a, the 2D results for TC in Figure 4d are shown to lie between the sum of the 1D results for TC in Figure 4a,b, using Equation (21a), and the difference between these same results, using Equation (21b). Similarly, in Figure 5b, the results for TR in Figure 4d lie between the sum of the values for TR in Figure 4a,b and their difference; see Equations (22a) and (22b).
For the simulated results presented in Figure 5, if we compare the corresponding T Sum and Up(T) values, or the T Diff and Low(T) values, it is found that they closely agree. For example, for a displacement value of 1.077 pixels, i.e., at the interface between the line fits in regions (i) and (ii), Up(TC) = 4.86 × 10⁸ ≈ Low(TC) = 4.853 × 10⁸ > TC Sum = 4.466 × 10⁸ ≈ TC Diff = 4.459 × 10⁸.
From Figure 5, it can once again be seen that in this case, i.e., when y = x, the SM simulations predict that $T_C(\Delta x,\Delta y) \approx T_C(\Delta x, \Delta y = 0)$ and $T_R(\Delta x,\Delta y) \approx T_R(\Delta x = 0, \Delta y)$.
Therefore, based on the predictions when y = x shown in Figure 5, we next estimate the general 2D motion case in terms of the corresponding 1D calibration; see Table 1. The 2D motion in Q1 along the trajectories (1) y = 2x, (2) y = x and (3) y = x/2 is examined, and the corresponding accumulated grey levels for these three cases are calculated using SM. The 2D motion estimation results are shown in Figure 6a.
To generate Figure 6a, the 1D piecewise calibration process (see Figure 4 and Table 1) is used to estimate the corresponding 2D motion. To examine the 2D motion estimation accurately, a separate 1D calibration process is performed for the 2D motion along y = 2x and y = x/2, respectively. In other words, for estimation along y = 2x, the 1D calibration uses +x from 0 to 2 pixels and +y from 0 to 4 pixels; similarly, for estimation along y = x/2, it uses +x from 0 to 4 pixels and +y from 0 to 2 pixels. The chosen lines are arbitrary and illustrative; we have also examined additional angles, including some with irrational multipliers, with equivalent results, but these are excluded for brevity.
Using the 1D calibration, given the linear coefficients m and c and the values for TC and TR, the linear fit coefficient values for 2D estimation along the trajectories (1) y = 2x, (2) y = x and (3) y = x/2 are listed in Table 2. The root mean square error is defined as $\mathrm{RMSE} = \sqrt{\sum_{i=1}^{N} (\text{Estimated motion}_i - \text{Line fit}_i)^2 / N}$, where N is the number of translation steps. Once again, the 2D motion can be determined. We note that the simulated results in Table 2 indicate that when the ranges over which the 1D calibrations take place are identical to the range of the 2D motion, the RMSE values are reduced. Furthermore, increasing the range of motion over which the 1D calibration takes place improves the fit (lowers the RMSE).
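The estimation step simply inverts the relevant 1D fits, relying on the dominance of $P_{C,1}$ and $M_{R,1}$ shown in Table 1. A sketch, assuming pC and pR hold the [m, c] coefficients of the dominant TC and TR fits from the calibration sketch above, and lineFitX holds the fitted motion values (all names are ours):

```matlab
dxEst = (TCmeas - pC(2)) ./ pC(1);          % invert T_C = P_C1*dx + c for dx
dyEst = (TRmeas - pR(2)) ./ pR(1);          % invert T_R = M_R1*dy + c for dy
rmseX = sqrt(mean((dxEst - lineFitX).^2));  % RMSE against the line fit
```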
To explore the accuracy of SM in determining the displacements of the USAF image in Figure 3, a series of 500 displaced images were numerically generated along y = x, from 0 to 10 pixels, i.e., in steps of 0.02 pixels in x and y. For motion along y = 2x, the image is translated by a maximum of 5 pixels in x (250 steps of 0.02 pixels each) and 10 pixels in y (250 steps of 0.04 pixels each). Similarly, along y = x/2, there is a maximum of 10 pixels translated in x (250 steps, 0.04 pixels each) and 5 pixels translated in y (250 steps, 0.02 pixels each). The TR and TC accumulated values are calculated using SM from the sequenced 2D displaced images. The 2D motion is estimated by applying the 1D calibration, as shown in Figure 4a,b and Table 1.
The error (in pixels) between the estimated and actual motion is calculated as follows: Error = Estimated motion − Actual motion. The resulting errors in x and y motion for up to a 10-pixel displacement along y = x are shown in Figure 6b,d. The smaller inserted figures show the same errors over 0–4 pixels.
Piecewise 1D calibration was performed for motions over the ranges of 0–1 pixels for the subpixel region and 1–4 pixels for the super-pixel region. As can be seen in Figure 6b,d, the extremum errors identified over this range are relatively small, i.e., −0.15 (at 1 pixel) and 0.07 pixels (at 2.76 pixels) in x, and −0.06 (at 1 pixel) and 0.038 pixels (at 2.64 pixels) in y. For motion larger than 4 pixels, the error rapidly increases, i.e., −2.75 pixels at 10 pixels, a ~27.5% error, due to the use of the super-pixel calibration over 1–4 pixels, rather than 1–10 pixels.
A significant error (and discontinuity) appears at one pixel of motion. We know that this is a numerical artifact caused by our choice of piecewise linear fitting during calibration, i.e., the use of the sub- and super-pixel displacement regions, as shown in Figure 4a,b. The error is a function of displacement, i.e., it is not a random process but a systematic one. It could be avoided by employing a quadratic, cubic or higher-order polynomial fit to the TR and TC accumulated values. However, we have intentionally emphasized the linearity of SM, and piecewise linear fitting is simple and effective.
The mean absolute error, MAE = mean(abs(Estimated motion − Actual motion)), was calculated. Small values, e.g., 0.005 pixels in x and 0.004 pixels in y, are found in the subpixel region; see the small inserted figures in Figure 6b,d. In the range of 1–4 pixels, the MAE values are 0.056 and 0.024 pixels in x and y, and 0.948 and 0.588 pixels in the range of 1–10 pixels in x and y, respectively. The error increases significantly beyond 4 pixels, which again shows that calibration is necessary in the use of SM.
Large and small inserted histograms corresponding to the results presented in Figure 6b,d are shown in Figure 6c,e, respectively. The intervals in the two histograms are 0.2 pixels and 0.01 pixels for motion over the 0–10 pixel and 0–4 pixel ranges, respectively. Examining the two smaller inserted histograms indicates the presence of two clusters of error values. The lower error value cluster predominantly arises in the subpixel range, while the higher error value cluster is dominated by the errors arising in the super-pixel range. This characteristic, which reflects the piecewise linear calibration process, also appears in our experimental results in Section 4.
At the start of Section 3, we noted that bicubic interpolation was used to model the translation of the image by subpixel resolution steps. One question that arises is as follows: what is the effect of the type of interpolation used? For the example presented (y = x over the range of 0–4 pixels), the maximum absolute differences between the errors predicted using bilinear and bicubic interpolation were calculated. Maximum values of 0.062 pixels in x and 0.039 pixels in y were found, and we note that the bicubic results were smoother. Clearly, to avoid numerical artifacts, the choice of interpolation algorithm is important.
The results presented here are explored experimentally in Section 4.3.

3.2. Subtraction Method: Simulated Object with Noise

The object used for simulation is a binary USAF chart. To examine the robustness of SM, we add random noise to the USAF chart. The random noise is defined by its standard deviation (σ) and mean value (µ). In the case of SM, the mean value is subtracted; therefore, we choose noise with a mean value equal to zero, i.e., µ = 0. The signal-to-noise ratio (S/N) is calculated from the total powers of the signal and of the noise. As the standard deviation increases linearly, the S/N decreases exponentially. Here, we examine the cases when S/N = 10 (σ = 6619) and S/N = 1 (σ = 6.632 × 10⁴). For 2D motion along y = x, the results of SM for these two cases, S/N = 10 and S/N = 1, are shown in Figure 7a,b, respectively.
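A sketch of this robustness test, reusing the sm_accumulate sketch from Section 2.2 (the zero-mean Gaussian noise model and the fixed seed are our assumptions, the latter for repeatability):

```matlab
rng(1);                                      % fixed seed (our choice)
sigma = 6619;                                % S/N = 10 case from the text
In0 = double(I0) + sigma * randn(size(I0));  % noisy reference frame
In1 = double(I1) + sigma * randn(size(I1));  % noisy translated frame
SN  = sum(double(I0(:)).^2) / sum((In0(:) - double(I0(:))).^2);  % power-ratio check
[TC, TR] = sm_accumulate(In0, In1);          % SM on the noisy pair
```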
As can be seen in Figure 7, for S/N = 10 and S/N = 1, the SM results show good linear performance in both the sub- and super-pixel regions, demonstrating that SM is very robust. Experiments using the USAF chart with a diffuser will be examined in Section 4.4.

4. Two-Dimensional Experiment Results

We now describe our measurement system.

4.1. Image Capturing

To examine the practical performance of our algorithm in determining 2D translations, images were captured using the transmission-type 4-f imaging system with unit magnification shown in Figure 8. The experiments are performed using a 1951 USAF chart. This is illuminated by a collimated plane wave from a diode-pumped crystal laser of wavelength 532 nm. A Zyla 5.5 sCMOS camera with 2560 × 2160 pixels and a 6.5 µm pixel pitch is used to capture the resulting 16-bit digital images. The object position is altered using a 3D translation stage, controlled using LabVIEW.
In this section, 1D and 2D motion are examined in detail. In the case of Q1, we begin by capturing a sequence of 14 images in the +x direction {1}, as shown in Figure 9. The motors in the motion stage are then reversed to move the object back to the starting point (0,0). Next, 14 images are captured with the same displacements in the +y direction {2} and once again we return to (0,0). Finally, combining steps in x and y, 14 images are captured along the diagonal direction, corresponding to trajectory {3} as shown in Figure 9.
Each pair of adjacent images in the sequences, i.e., along the x and y axes, involves the object being translated by 2 µm (0.3077 pixels). Therefore, the first image, I0, and the Mth image, I(M−1)δx, have a relative displacement of (M−1)δx between them. Given M = 14 and δx = 2 µm, then (M−1)δx = 26 µm (~4 pixels), which is the total distance of travel along each axis. In the case of diagonal motion, equal steps are performed in x and y between each image captured. We note that the sample used is a transparent object and the reflection feedback is strong; therefore, during the experiment, the sample was slightly rotated. This leads to a misalignment between the object and camera planes, which can introduce error into the results. As noted, the camera has a total of 2560 × 2160 pixels. In our 1:1 imaging system, the USAF chart only fills a central part of this field of view, 620 × 620 pixels. A selected central region of the image captured by the camera, corresponding to 800 × 800 pixels, is used in our calculations following appropriate zero-padding.
If the illumination intensity varies significantly between image captures, temporal normalization may be necessary. To do so, the intensity in each image is normalized with respect to the sum of the 100 × 100 pixel values from the top left-hand corner of each image. This is calculated as follows,
$$ I'_{(M-1)\delta x} = \frac{S_0}{S_{M-1}}\, I_{(M-1)\delta x}, \tag{23} $$
where $S_0 = \sum_{i,j} I_0(i,j)$, $S_{M-1} = \sum_{i,j} I_{(M-1)\delta x}(i,j)$, and $a \le i, j \le b$. As indicated, we choose a = 1, b = 100. The resulting 800 × 800 data values are next zero-padded up to an array of 1024 × 1024 values. This is the form of the optical image data used to produce the results presented below.
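A sketch of this normalization and padding step (assuming I0 and IM are the selected 800 × 800 central regions of the reference and Mth captured frames; variable names are ours):

```matlab
p0  = I0(1:100, 1:100);                      % top left-hand 100 x 100 corner, a = 1, b = 100
S0  = sum(double(p0(:)));                    % S_0 in Equation (23)
pM  = IM(1:100, 1:100);
SM1 = sum(double(pM(:)));                    % S_{M-1} in Equation (23)
IMn = (S0 / SM1) * double(IM);               % normalized frame, Equation (23)
IMn = padarray(IMn, [112, 112], 0, 'both');  % zero-pad 800 x 800 up to 1024 x 1024
```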
Applying SM, two distinct approaches are used to extract subpixel information. Method 1 refers to the 'M−1' method, and Method 2 refers to the 'M+q−M' method; see [8,19].

4.2. Examination of 2D Experimental Results

In Section 3.1, applying SM, the simulated results for motion in Q1 were presented. Before proceeding to describe the calibration of this method, we first wish to provide some validation of the simulation results.
In Figure 10, experimental results produced using the USAF chart, corresponding to the simulation results presented in Figure 4, are given. We recall that a 16-bit camera is used in our experiments. The line fits to the data, i.e., the $T_C$ and $T_R$ values, over the two regions, (i) 0.3–1 pixels and (ii) 1–4 pixels, are listed in Table 3.
We note that in Figure 10a, unlike in Figure 4a, the $T_R(\Delta x, \Delta y = 0)$ values are relatively large. Similarly, in Figure 10b, the $T_C(\Delta x = 0, \Delta y)$ values are larger than the corresponding simulation results in Figure 4b.
As noted in Section 3.2, we wish to compare our experimental and simulation results. We previously examined the results in Figure 4a,b. It was shown, for example, that $T_R(0,4)/T_C(4,0) = 1.33$. Examining Figure 10a,b, we see that in this case $T_R(0,4)/T_C(4,0) = (6.67 \times 10^8/5.19 \times 10^8) = 1.28$. This indicates reasonably good quantitative agreement between the simulated and experimental results. However, there are clearly significant differences. For example, from Figure 10a,b, we can also see that $T_R(4,0)/T_C(4,0) = (6.53 \times 10^7/5.19 \times 10^8) = 0.126$; similarly, $T_C(0,4)/T_R(0,4) = (4.29 \times 10^7/6.67 \times 10^8) = 0.064$. The values of these ratios are much larger than those of the corresponding simulated results in Figure 4a,b.
In Figure 11, the experimental results corresponding to the predictions of our simulation in Figure 5 are presented. We note that the agreement is not as close as in the simulation; however, the 2D results are consistently bounded by the 1D values (Sum above and Diff below).
In Figure 11a, in agreement with Figure 5a, the measured 2D TC results, which correspond to displacements in Q1 along the y = x line, lie between the 1D TC Sum and TC Diff results, i.e., Equation (21). Similarly, in agreement with Figure 5b, the 2D TR results lie between the 1D TR Sum and TR Diff results, shown in Figure 11b. However, a larger gap separates the 1D T Sum and T Diff results in Figure 11 than in Figure 5. This may be due to experimental errors.
In Figure 12, we compare the 2D results from Figure 10c with the corresponding 1D results in Figure 10a,b.
These experimental results confirm the predictions in Section 2.2 that, in this case, i.e., when y = x, $T_C(\Delta x,\Delta y) \approx T_C(\Delta x, \Delta y = 0)$ and $T_R(\Delta x,\Delta y) \approx T_R(\Delta x = 0, \Delta y)$. However, we again note the larger difference between the 1D and 2D cases than that predicted by our simulations in Figure 4.
In this section, it has been shown that qualitative and quantitative agreements exist between the simulation and experimental results. However, the results presented in Figure 11 are different from those presented in Figure 5. As indicated, some of this difference may arise due to experimental errors. Any calibration of the system will require that both systematic and random errors are included.

4.3. Calibration: 1D and 2D

As discussed in Section 2.2, in order to extract accurate 2D motion information, we wish to examine the relationship between the 1D calibration results and the corresponding 2D results. In the case of Q1, we previously discussed expressing the variations along {3}: y = x, in terms of the results for {1}: +x and {2}: +y; see Figure 11. Before proceeding, we recall that in that case, both our simulations and our experimental results in Section 4.2 indicate that $T_C(\Delta x,\Delta y) \approx T_C(\Delta x, \Delta y = 0)$ and $T_R(\Delta x,\Delta y) \approx T_R(\Delta x = 0, \Delta y)$ for some range of displacement, e.g., $0 < \Delta x < 4$ pixels, when moving along y = x; see Figure 5 and Figure 12 for the simulation and experimental results, respectively. In this case, when y = x, if we measure $T_C(\Delta x,\Delta y)$ we know $\Delta x$, and therefore, we also know $\Delta y$. We therefore estimate the general 2D motion case using the 1D calibration experimental data. The experimental 2D motion in Q1 along the trajectories (1) y = 2x, (2) y = x and (3) y = x/2 is examined, following the same procedure as in Section 3.1. The corresponding accumulated grey levels for the three cases are calculated using SM. The 2D motion estimation results are shown in Figure 13a.
As can be seen in Figure 13a, a general 2D motion along (1) y = 2x, (2) y = x and (3) y = x/2 can be estimated using the 1D calibrated data. The corresponding linear fits to the results in Figure 13a are listed in Table 4.
We note that no attempt is made here to correct the systematic error occurring in the experimental results presented in Figure 13a. Therefore, the line fit to the estimated 2D motion, i.e., along y = x, does not intersect the origin (0,0) or the point (4,4); see the purple line in Figure 13a. This misalignment indicates that SM suffers from systematic error when estimating 2D motion.
In Figure 13b–e, we examine the errors present, following the previous discussion of the simulation results in Figure 6. The errors in the estimated x and y motion with respect to the desired motion from 0–4 pixels along y = x are shown in Figure 13b,d. In Figure 13b, the smallest error value is 0.028 pixels, occurring for a desired motion of 1.077 pixels, corresponding to a 2.9% error. The largest error value is 0.41 pixels for a 2.308-pixel shift (a 17.76% error) in x. In Figure 13d, the smallest error is −0.012 pixels for a 1.077-pixel shift (1.11%) and the largest is 0.14 pixels at a 2.308-pixel shift (a 6.7% error) in y.
In Figure 13b, the MAE values are 0.160 and 0.321 pixels in x for the sub- and super-pixel regions, respectively. In Figure 13d, the corresponding MAE values are 0.064 and 0.081 pixels in y. This indicates the presence of a measurement bias. Systematic errors can arise in the desired motion value due to background lighting fluctuations, system drift (due to temperature variations), or an improperly calibrated motor, i.e., the desired motion not being equal to the actual motion taking place (the real object displacement). Optoelectronic noise and mechanical vibration can introduce random errors. These issues are not discussed further here.
The corresponding histograms of Figure 13b,d, with 0.02-pixel intervals, are shown in Figure 13c,e. The presence of two clusters of error values is again seen in these two histograms, which have characteristics similar to the smaller inserted histograms from the simulations in Figure 6c,e.
From Table 4, good linear fits are found for motion along the three trajectories (1) y = 2x, (2) y = x and (3) y = x/2, with high R² values of 0.9811, 0.9992 and 0.9903; slope values, m, close to the ideal values (2.037 vs. 2, 0.9375 vs. 1, and 0.4453 vs. 0.5); and magnitudes of c < 1, respectively. As in Table 2, the RMSE value is calculated using the estimated motion and line fits.

4.4. Object with Diffuser

Although the experimental data typically include random noise, to further examine the robustness of the SM for a general object, a layer of diffuse material (paper) is inserted just before and attached to the USAF test target. Both are simultaneously moved by the translation stage. The SM results for 2D motion along y = x are shown in Figure 14.
As can be observed in Figure 14, the accumulated grey level values, TC and TR, are linear functions of the desired motion in regions (i) and (ii).

5. Discussion

We have examined a subpixel motion estimation method, called the subtraction method (SM), for 2D motion estimation using 1D calibration. A detailed basis for the use of SM in 2D motion estimation is provided. Both continuous and discrete analyses are presented, and the relationships between the 1D calibration results and the corresponding 2D results are indicated. It is demonstrated that, following careful calibration, in-plane motion in 2D can be determined. The robustness of SM is also examined using both simulated and experimental results. Simulations for different noise levels, i.e., S/N = 10 and S/N = 1, are presented, and experiments involving a diffuse speckle field are performed. In the experiments, the RMSE values followed the same trend observed in the simulations, with the linear fits for the 2D motion y = x being better (having a lower RMSE) than those for the other two, y ≠ x, cases. We note that the results for 1D motion estimation using SM and CM are compared in detail in [8]. The systematic and random errors have been examined. To avoid repetition, the CM results for 2D motion estimation are not shown here but have been presented in [22].
In all cases, the results show the SM results exhibit highly linear variation. It is shown that, given such linear variation, a change from increasing to decreasing accumulated grey level values indicates a change in direction [8]. When measuring in 2D, tracking the direction of motion becomes critically important. It is also worthwhile to note that SM will be unsuitable for use as a standalone method; some calibration process will be necessary.
SM is a fast method to measure in-plane object motion with subpixel accuracy. It can be used as a complement to standard correlation methods and/or in conjunction with a reported extension of SM [20] to determine the direction of motion, permitting an improvement in overall performance. SM offers significant computation time savings (3N² + 2N additions) in comparison to the corresponding CM calculation (O(N² log₂ N)). To quantify this, we note that subtracting two N by N images requires N² operations. Adding across the N rows and down the N columns requires N² + N² operations. Adding the absolute values of the elements of the C and R vectors, to calculate TC and TR, requires dropping the sign bit and 2N operations. Therefore, SM requires only 3N² + 2N addition operations and no multiplications.
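As an indicative worked example (our own arithmetic; the constant factor of the FFT-based CM cost is omitted), for N = 1024 the two counts differ by roughly a factor of three:

```matlab
N     = 1024;
opsSM = 3*N^2 + 2*N;      % SM additions: subtraction, row/column sums, totals
opsCM = N^2 * log2(N);    % leading-order term of the FFT-based CM cost
fprintf('SM: %.3g, CM: %.3g, ratio: %.2f\n', opsSM, opsCM, opsCM/opsSM);
% Prints: SM: 3.15e+06, CM: 1.05e+07, ratio: 3.33
```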
As noted, when performing experimental measurements to calibrate the system, knowing the relative position of the object with respect to the camera is critically important. Any inaccuracy in controlling the position (the object displacement) will degrade the calibration process. Therefore, methods to correct for systematic and random errors during calibration will be further explored [22].

6. Conclusions

In this paper, we proposed an SM-based method for quantifying 2D displacements with subpixel accuracy. The theoretical analysis was developed using both continuous and discrete models. Simulation and experimental results are presented, and the corresponding error analysis (between the actual and estimated motion) for 2D motion has been given. The results indicate that any general motion in 2D involving combinations of in-plane motions in x and y can be determined using SM after a 1D calibration.
Since the method does not indicate the direction of motion, only a change in direction, some calibration process will be necessary. However, in conjunction with standard correlation methods and/or recently proposed methods [20], SM offers significant computation time-saving advantages.
Potential applications of this method would appear to exist; for example, with only slight modifications the accumulated grey level values TR and TC can be used to perform image stitching, subpixel registration, particle tracking and to determine the frequency of vibration.

Author Contributions

Conceptualization, J.T.S.; methodology, M.W., J.J.H. and J.T.S.; software, M.W.; validation, M.W.; formal analysis, M.W. and J.T.S.; investigation, M.W., J.J.H. and J.T.S.; resources, J.T.S. and M.W.; data curation, M.W.; writing—original draft preparation, M.W.; writing—review and editing, J.J.H. and J.T.S.; visualization, M.W.; supervision, J.T.S.; project administration, M.W.; funding acquisition, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the RECENTRE program, grant number OA102070-10, and the National Natural Science Foundation of China (62220106005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

This paper was substantially complete at the time of J.T.S.’s death in late 2022. It has taken us a long time to revisit it because it was deeply painful to close off this chapter of our professional careers under the mentorship of a man of great wit and wisdom. J.T.S. thanks Science Foundation Ireland and Enterprise Ireland for support under the National Development Plan.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pan, Z.; Geng, D.; Owens, A. Self-Supervised Motion Magnification by Backpropagating Through Optical Flow. Adv. Neural Inf. Process. Syst. 2024, 36, 1–21. [Google Scholar]
  2. Konstantinidis, D.; Stathaki, T.; Argyriou, V. Phase amplified correlation for improved sub-pixel motion estimation. IEEE Trans. Image Process. 2019, 28, 3089–3101. [Google Scholar] [CrossRef] [PubMed]
  3. Chi, Y.M.; Tran, T.D.; Etienne-Cummings, R. Optical flow approximation of sub-pixel accurate block matching for video coding. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Honolulu, HI, USA, 15–20 April 2007; Volume 1, p. I-1017. [Google Scholar]
  4. Tico, M.; Alenius, S.; Vehvilainen, M. Method of motion estimation for image stabilization. In Proceedings of the IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006; Volume 2, p. II-277-280. [Google Scholar]
  5. Ruan, H.; Tan, Z.; Chen, L.; Wan, W.; Cao, J. Efficient sub-pixel convolutional neural network for terahertz image super-resolution. Opt. Lett. 2022, 47, 3115–3118. [Google Scholar] [CrossRef] [PubMed]
  6. Feng, Y.; Yang, T.; Niu, Y. Subpixel computer vision detection based on wavelet transform. IEEE Access 2020, 8, 88273–88281. [Google Scholar] [CrossRef]
  7. Yoneyama, S. Basic principle of digital image correlation for in-plane displacement and strain measurement. Adv. Compos. Mater. 2016, 25, 105–123. [Google Scholar] [CrossRef]
  8. Wan, M.; Healy, J.J.; Sheridan, J.T. Fast subpixel displacement measurement: Part I: 1-D Analysis, simulation, and experiment. Opt. Eng. 2022, 61, 043105. [Google Scholar] [CrossRef]
  9. Guizar-Sicairos, M.; Thurman, S.T.; Fienup, J.R. Efficient subpixel image registration algorithms. Opt. Lett. 2008, 33, 156–158. [Google Scholar] [CrossRef] [PubMed]
  10. Karybali, I.; Psarakis, E.; Berberidis, K.; Evangelidis, G. An efficient spatial domain technique for subpixel image registration. Signal Process. Image Commun. 2008, 23, 711–724. [Google Scholar] [CrossRef]
  11. Michaelis, D.; Neal, D.R.; Wieneke, B. Peak-locking reduction for particle image velocimetry. Meas. Sci. Technol. 2016, 27, 104005. [Google Scholar]
  12. Tong, W. Subpixel image registration with reduced bias. Opt. Lett. 2011, 36, 763–765. [Google Scholar] [CrossRef] [PubMed]
  13. Mas, D.; Ferrer, B.; Sheridan, J.T.; Espinosa, J. Resolution limits to object tracking with subpixel accuracy. Opt. Lett. 2012, 37, 4877–4879. [Google Scholar] [CrossRef] [PubMed]
  14. Mas, D.; Perez, J.; Ferrer, B.; Espinosa, J. Realistic limits for subpixel movement detection. Appl. Opt. 2016, 55, 4974–4979. [Google Scholar] [CrossRef] [PubMed]
  15. Liu, G.; Li, M.; Zhang, W.; Gu, J. Subpixel matching using double-precision gradient-based method for digital image correlation. Sensors 2021, 21, 3140. [Google Scholar] [CrossRef] [PubMed]
  16. Wang, K.; Zhang, Y.; Li, Z. Motion estimation of the common carotid artery wall in ultrasound images using an improved sub-pixel block matching method. Optik 2022, 270, 169929. [Google Scholar] [CrossRef]
  17. Marinel, C.; Mathon, B.; Losson, O.; Macaire, L. Comparison of Phase-based Sub-Pixel Motion Estimation Methods. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 561–565. [Google Scholar]
  18. Tu, B.; Ren, Q.; Li, Q.; He, W.; He, W. Hyperspectral image classification using a superpixel-pixel-subpixel multilevel network. IEEE Trans. Instrum. Meas. 2023, 72, 5013616. [Google Scholar] [CrossRef]
  19. Wan, M.; Healy, J.J.; Sheridan, J.T. Fast, sub-pixel accurate, displacement measurement method: Optical and terahertz systems. Opt. Lett. 2020, 45, 6611–6614. [Google Scholar] [CrossRef] [PubMed]
  20. Healy, J.J.; Wan, M.; Sheridan, J.T. Direction-sensitive fast measurement of sub-sampling-period delays. In Proceedings of the 8th International Conference on Signal Processing and Integrated Networks, SPIN-2021, Noida, India, 26–27 August 2021; pp. 228–232. [Google Scholar]
  21. Ferrer, B.; Tomás, M.B.; Wan, M.; Sheridan, J.T.; Mas, D. Comparative Analysis of Discrete Subtraction and Cross-Correlation for Subpixel Object Tracking. Appl. Sci. 2023, 13, 8271. [Google Scholar] [CrossRef]
  22. Wan, M. Optical Imaging Techniques in the THz Regime. PhD Thesis, University College Dublin, Dublin, Ireland, 2021. [Google Scholar]
Figure 1. Illustration of the calculation for Equations (13) and (14) [19].
Figure 2. Motion in the four quadrants. In this paper, simulation and experimental results are presented for Q1.
Figure 3. The windowing and zero-padding processing for image simulation. (a) The original image with 780 × 780 pixels; (b) the central part of (a) having 760 × 760 pixels, IW; (c) zero-padding, IPW, of (b) with 1024 × 1024 pixels.
Figure 4. The SM results for the USAF chart for motion in Q1; (a) 1D motion in +x (1D, +x), i.e., giving $T_C(\Delta x, \Delta y = 0)$ and $T_R(\Delta x, \Delta y = 0)$; (b) 1D, +y, i.e., giving $T_C(\Delta x = 0, \Delta y)$ and $T_R(\Delta x = 0, \Delta y)$; (c) 1D motion, $T_R(\Delta x, \Delta y = 0)$, from (a), and $T_C(\Delta x = 0, \Delta y)$, from (b); (d) 2D motion along the path y = x (2D, y = x), i.e., giving $T_C(\Delta x, \Delta y)$ and $T_R(\Delta x, \Delta y)$. The two regions are separated by the green line: (i) subpixel at 0.15–1 pixels, and (ii) super-pixel at 1–4 pixels.
Figure 5. The 2D results when y = x lie between the sum, T Sum, and the difference, T Diff, of the 1D results: (a) TC and (b) TR. This figure clearly indicates that the 2D TC results and the 2D TR results are in this case practically identical (within a fraction of a percentage point) to the corresponding 1D TC and 1D TR results.
Figure 6. (a) Simulated 2D motion in Q1 along the estimated trajectories (1) y = 2x, (2) y = x and (3) y = x/2. Accumulated grey level values are converted to displacements using the 1D calibration (1D, +x and 1D, +y) shown in Table 1. The linear fits to the estimated motion are found; see Table 2. (b–e) The errors in estimated motion for the y = x case are examined. The errors for (b) the x motion and (d) the corresponding y motion for displacements of 0–10 pixels (inserted figure 0–4 pixels) are presented. (c,e) The corresponding histograms are shown for the results in (b,d).
Figure 7. SM results for the USAF chart with noise for 2D motion in Q1 along y = x. (a) S/N = 10 and (b) S/N = 1.
Figure 8. (a) Schematic diagram of the 4-f optical imaging system, f = 10 cm, M = 1; (b) experimental setup. The red box corresponds to (a).
Figure 9. Trajectories used in the experiments in Q1.
Figure 10. Application of SM to experimental data. These results for the USAF chart correspond to the simulated results in Figure 4. Motion in Q1; (a) 1D motion in +x, i.e., giving $T_C(\Delta x, \Delta y = 0)$ and $T_R(\Delta x, \Delta y = 0)$; (b) 1D motion in +y, i.e., giving $T_C(\Delta x = 0, \Delta y)$ and $T_R(\Delta x = 0, \Delta y)$; (c) 2D motion along y = x, i.e., giving $T_C(\Delta x, \Delta y)$ and $T_R(\Delta x, \Delta y)$.
Figure 11. The 2D experimental results, 2D T, when y = x, lie between the sum, T Sum, and the difference, T Diff, of the 1D results for (a) TC and (b) TR. No corrections for systematic experimental errors are included.
Figure 12. Comparison between the experimental 2D accumulated grey level results and the corresponding 1D results for (a) $T_C$ and (b) $T_R$. No corrections for systematic experimental errors are included. This figure experimentally demonstrates that the 2D TC results and the 2D TR results are practically identical (within a fraction of a percentage point) to the corresponding 1D TC and 1D TR results.
Figure 13. (a) The experimental 2D motion in Q1 along y = 2x, y = x and y = x/2, estimated using the 1D calibration (1D, +x and 1D, +y). The linear fits to the estimated motion are performed; see Table 4. The errors for (b) the x motion and (d) the corresponding y motion for displacements of 0–4 pixels are presented. (c,e) are the corresponding histograms for the results in (b,d).
Figure 14. The SM results for the USAF chart with a diffuser for 2D motion in Q1 along y = x.
Table 1. Piecewise linear fits to simulated data: g = mδ + c, for motion in Q1. The larger coefficient values are shown in bold text. The quality of the linear fits is indicated by the R² value. Note that this tabulated 1D motion information is used in this section to calibrate the 2D motion.

| Q1 | +x (m: P_C,1 or P_R,1), (i) 0.15~1 | +x, (ii) 1~4 | +y (m: M_C,1 or M_R,1), (i) 0.15~1 | +y, (ii) 1~4 | y = x (2D), (i) 0.15~1 | y = x (2D), (ii) 1~4 |
|---|---|---|---|---|---|---|
| TC: m | **4.328 × 10⁸** | **2.934 × 10⁸** | 3.359 × 10⁵ | 2.639 × 10⁵ | **4.328 × 10⁸** | **2.934 × 10⁸** |
| TC: c | −2.67 × 10⁶ | 1.697 × 10⁸ | 2575 | 8.961 × 10⁴ | −2.671 × 10⁶ | 1.697 × 10⁸ |
| TC: R² | 0.9994 | 0.9946 | 0.9996 | 0.9963 | 0.9994 | 0.9946 |
| TR: m | 2.85 × 10⁵ | 2.122 × 10⁵ | **5.053 × 10⁸** | **4.168 × 10⁸** | **5.053 × 10⁸** | **4.167 × 10⁸** |
| TR: c | −632.8 | 9.971 × 10⁵ | −6.998 × 10⁵ | 1.059 × 10⁸ | −7.044 × 10⁵ | 1.059 × 10⁸ |
| TR: R² | 0.9997 | 0.9947 | 0.9996 | 0.9989 | 0.9996 | 0.9989 |
Table 2. The linear fit coefficients for 2D motion estimation along the trajectories: y = 2x, y = x and y = x/2.

| Data (TC & TR) | y = 2x | y = 2x | y = x | y = x/2 | y = x/2 |
|---|---|---|---|---|---|
| 1D calibration range (pixels) | x: 0~2, y: 0~4 | x: 0~4, y: 0~4 | x: 0~4, y: 0~4 | x: 0~4, y: 0~2 | x: 0~4, y: 0~4 |
| m | 1.998 | 2.004 | 0.9991 | 0.5036 | 0.4978 |
| c | 0.0018 | 0.0205 | 0.0017 | −0.0118 | −0.0019 |
| R² | 0.9988 | 0.9931 | 0.9996 | 0.998 | 0.9965 |
| RMSE | 0.0405 | 0.0955 | 0.0237 | 0.0256 | 0.0338 |
Table 3. Piecewise linear fits to experimental data: g = mδ + c, for motion in Q1. The larger coefficient values are shown in bold text.

| Q1 | +x (m: P_C,1 or P_R,1), (i) 0.3~1 | +x, (ii) 1~4 | +y (m: M_C,1 or M_R,1), (i) 0.3~1 | +y, (ii) 1~4 | y = x (2D), (i) 0.3~1 | y = x (2D), (ii) 1~4 |
|---|---|---|---|---|---|---|
| TC: m | **1.592 × 10⁸** | **1.123 × 10⁸** | −1.51 × 10⁷ | 1.533 × 10⁶ | **1.462 × 10⁸** | **1.143 × 10⁸** |
| TC: c | 2.758 × 10⁷ | 8.017 × 10⁷ | 3.573 × 10⁷ | 3.469 × 10⁷ | 3.654 × 10⁷ | 7.018 × 10⁷ |
| TC: R² | 0.9828 | 0.9976 | 0.9914 | 0.0367 | 0.9924 | 0.9989 |
| TR: m | 2.673 × 10⁷ | 5.356 × 10⁶ | **1.825 × 10⁸** | **1.48 × 10⁸** | **1.851 × 10⁸** | **1.511 × 10⁸** |
| TR: c | 2.094 × 10⁷ | 5.805 × 10⁷ | 3.361 × 10⁷ | 8.124 × 10⁷ | 4.833 × 10⁷ | 7.677 × 10⁷ |
| TR: R² | 0.9061 | 0.3275 | 0.9987 | 0.9961 | 0.9975 | 0.9996 |
Table 4. The linear fit coefficients for 2D motion estimation along y = 2x, y = x and y = x/2.

| Data (TC & TR) | y = 2x | y = x | y = x/2 |
|---|---|---|---|
| 1D calibration range | +x: 0~4 pixels, +y: 0~4 pixels | +x: 0~4 pixels, +y: 0~4 pixels | +x: 0~4 pixels, +y: 0~4 pixels |
| m | 2.037 | 0.9375 | 0.4453 |
| c | −0.1316 | −0.05981 | 0.02585 |
| R² | 0.9811 | 0.9992 | 0.9903 |
| RMSE | 0.1386 | 0.0332 | 0.0475 |