Article

Pixelwise Phase Unwrapping Based on Ordered Periods Phase Shift

by Satoshi Tabata 1,*, Michika Maruyama 1, Yoshihiro Watanabe 2 and Masatoshi Ishikawa 3

1 Department of Information Physics and Computing, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
2 Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology, 4259-G2-31, Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan
3 Department of Creative Informatics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
* Author to whom correspondence should be addressed.
Sensors 2019, 19(2), 377; https://doi.org/10.3390/s19020377
Submission received: 20 December 2018 / Revised: 14 January 2019 / Accepted: 14 January 2019 / Published: 17 January 2019
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Abstract: The existing phase-shift methods are effective in achieving high-speed, high-precision, high-resolution, real-time shape measurement of moving objects; however, a phase-unwrapping method that can handle the motion of target objects in a real environment and is also robust against global illumination has yet to be established. Accordingly, this study proposes a robust and highly accurate method for determining the absolute phase using a minimum of three steps. In the proposed method, an order structure that rearranges the projection pattern for each period of the sine wave is introduced, so that the phase-unwrapping problem reduces to identifying the pattern order. Simulation experiments confirmed that the proposed method enables high-speed, high-precision, high-resolution, three-dimensional shape measurement even with fast-moving objects and in the presence of global illumination. In this study, an experimental measurement system was configured with a high-speed camera and projector, and real-time measurements were performed with a processing time of 1.05 ms and a throughput of 500 fps.

1. Introduction

Three-dimensional (3D) measurement is used in various fields. In particular, achieving high-speed, high-precision, high-resolution, real-time shape measurement of moving objects is important in fields requiring high-speed feedback, such as robotics, user interfaces, virtual reality, and augmented reality. It has been observed that high-speed measurement, at about 500–1000 fps, is effective for accurately measuring a moving object [1]. It has also been observed that a structured light method is effective for ensuring a sufficient amount of light under the short exposure times of high-speed measurement and for measuring a wide area at once [1]. There are various structured light methods [2,3], which can be classified into two types: one-shot methods, which project a single pattern, and multi-shot methods, which project many patterns. Of the two, the one-shot type, with its small number of projection patterns, is suitable for measuring moving objects. Although this type increases the computational complexity of pixel correspondence between the projector and the camera, some techniques achieve efficient calculation by encoding the information into the spatial structure of the patterns, reaching speeds exceeding 500 fps [4,5]. However, the spatial code must span a plurality of pixels for each measurement point; thus, the resolution tends to be low. In the multi-shot type, on the other hand, multiple patterns must be projected; thus, the total projection time is longer than in the one-shot type. However, each pixel can be measured independently; thus, high resolution can be achieved. Therefore, methods that speed up the Gray-code method [6] or the phase-shift method [7] have also been proposed. In particular, the phase-shift method requires few patterns, and pixel correspondence can be achieved with subpixel accuracy; therefore, the distance can be measured with high accuracy. In addition, an 8-bit projector running at 1000 fps has been realized in recent years [8]; the projection time can thus be reduced, and environments in which even multi-shot methods can measure moving objects are now available [7].
In the phase-shift method, a minimum of three sine wave patterns are projected, and the phase of each pixel in the image captured by the camera is calculated [9]. However, only the wrapped phase can be obtained, because the inverse tangent function is used. Accordingly, phase unwrapping is required to obtain the absolute phase, and many phase-unwrapping methods corresponding to various conditions have been proposed [3,10].
There are two important problems in the measurement of moving objects using the phase-shift method. The first is motion error, which occurs because multiple shots are used: the error is caused by the shift in the projected position as the object under measurement moves. The second is global illumination. This problem arises when the projected light undergoes secondary reflection within the object under measurement: the intensity at the reflection target changes with the intensity of the reflection source, so a different intensity change occurs for each projection pattern. Both effects cause errors in the phase calculation, and the error becomes large when low-frequency patterns are used [11].
Methods have been proposed to solve the motion error problem by motion compensation [12,13] or by reducing the number of projections [7,14,15]. In particular, the approach of reducing the number of projections is considered promising for achieving high accuracy with a small amount of computation. For example, the two-plus-two method [15] achieved measurement with only four projections, without any additional devices. However, this method is easily affected by global illumination, not only in the calculation of the relative phase but also in the phase unwrapping. On the other hand, to solve the global illumination problem, a method of suppressing the effect by using only high-frequency patterns has been proposed [11]. However, this method requires a large number of projection patterns, making it unsuitable for measuring moving objects. Although methods have been proposed to solve each problem separately, there is no known method that solves both problems and can be used in high-speed measurements of moving objects.
Moreover, reducing the processing time is also important for achieving real-time measurement. A system achieving 500 fps by combining the phase-shift method with a high-speed projector has been reported [7]. However, this system requires an additional coaxially placed camera, and the camera axes must be aligned with high precision.
Accordingly, this study proposes a method that solves these two problems simultaneously, without any additional devices, by applying phase unwrapping based on an ordered structure of patterns. In this method, unlike the widely used conventional approach of repeating the same sine wave patterns, phase unwrapping is achieved by switching the order of the functions in each period ($2\pi$). This approach reduces the number of projection patterns. That is, with $N$ types of functions, the maximum number of orderings is $_N P_N = N!$, and the number of periods that can be expressed depends on the number of steps. This paper proposes two types of projection patterns: the first for the case with the minimum of three steps, and the second for the case with four steps, where a sufficient number of periods can be obtained. Moreover, the proposed method incorporates a robust design to handle the effect of global illumination without using low-frequency patterns.
In the experiments, accuracy comparisons were performed using simulations. The results confirmed that, compared with the conventional method [15], the proposed method can achieve robust measurements even with global illumination. Moreover, a high-speed system achieving a throughput of 500 fps was realized using a high-speed 8-bit projector [8].

2. Related Methods

2.1. Basic Method

In the phase-shift method, using $N$ projected sine wave patterns ($N \geq 3$), the phase of each pixel in the image captured by the camera is calculated, and the shape is measured by triangulation. In particular, for $N = 3$, measurement can be performed with the minimum number of steps. In the following, the method is explained using $N = 3$, although the phase can also be calculated for $N > 3$. Figure 1a shows an example of the projection pattern for $N = 3$; this method is called the three-step method. For a projector pixel $(u, v)$ with phase $\phi^p(u, v)$, the intensities $I_1^p(u,v)$, $I_2^p(u,v)$, and $I_3^p(u,v)$ are expressed as follows:
$$I_1^p(u,v) = I_{DC}^p + I_\phi^p \cos\left(\phi^p(u,v) - \tfrac{2}{3}\pi\right) \tag{1}$$
$$I_2^p(u,v) = I_{DC}^p + I_\phi^p \cos\phi^p(u,v) \tag{2}$$
$$I_3^p(u,v) = I_{DC}^p + I_\phi^p \cos\left(\phi^p(u,v) + \tfrac{2}{3}\pi\right) \tag{3}$$
where $I_{DC}^p$ is the offset value of the DC (direct current) component and $I_\phi^p$ is the amplitude of the sine wave.
At a camera pixel $(x, y)$, the intensities $I_1^c$, $I_2^c$, and $I_3^c$ are captured after being affected by distance attenuation, the object surface reflectance, and ambient light:
$$I_1^c(x,y) = A(x,y)\, I_1^p(u,v) + B(x,y) \tag{4}$$
$$I_2^c(x,y) = A(x,y)\, I_2^p(u,v) + B(x,y) \tag{5}$$
$$I_3^c(x,y) = A(x,y)\, I_3^p(u,v) + B(x,y) \tag{6}$$
where $A$ is the attenuation due to distance and the object surface reflectance, and $B$ is the ambient light. The distance is determined by calculating the phase at each camera pixel from the captured $I_1^c$, $I_2^c$, and $I_3^c$, establishing the correspondence to the projector coordinates through $\phi^p$, and performing triangulation. Although Equation (7) is used to calculate the phase, only the relative phase $\phi'$ ($0 \leq \phi' < 2\pi$) can be obtained:
$$\phi'(x,y) = \arctan\frac{\sqrt{3}\,(I_1^c - I_3^c)}{2 I_2^c - I_1^c - I_3^c} \tag{7}$$
Accordingly, the absolute phase $\phi$ must be obtained by performing phase unwrapping, i.e., by determining the period $k \in \mathbb{Z}$ in Equation (8):
$$\phi(x,y) = \phi'(x,y) + 2\pi k(x,y) \tag{8}$$
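To make Equations (7) and (8) concrete, the following is a minimal NumPy sketch (an illustration, not the authors' implementation); `arctan2` is used in place of `arctan` so that the quadrant is resolved and the wrapped phase falls in $[0, 2\pi)$:

```python
import numpy as np

def wrapped_phase_3step(i1, i2, i3):
    """Relative (wrapped) phase from three captured images, Eq. (7).

    i1, i2, i3: float arrays of camera intensities I_1^c, I_2^c, I_3^c.
    """
    phi = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    return np.mod(phi, 2.0 * np.pi)   # map (-pi, pi] to [0, 2*pi)

def unwrap(phi_rel, k):
    """Absolute phase from the wrapped phase and the period index k, Eq. (8)."""
    return phi_rel + 2.0 * np.pi * k
```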
Various methods have been proposed that use not only sine waves as the projection pattern but also triangular [16] and trapezoidal waves [17]. However, these patterns are prone to defocus blur [18,19], so sine wave patterns are generally preferable.
Moreover, to suppress the motion error, the two-plus-one phase-shift method, in which one step is a steady (uniform) light, has been proposed [14]. In this method, instead of Equations (1)–(3) of the three-step method,
$$I_1^p(u,v) = I_{DC}^p + I_\phi^p \sin\phi^p(u,v) \tag{9}$$
$$I_2^p(u,v) = I_{DC}^p + I_\phi^p \cos\phi^p(u,v) \tag{10}$$
$$I_3^p(u,v) = I_{DC}^p \tag{11}$$
are projected. Figure 1b shows the projection pattern of the two-plus-one phase-shift method. The relative phase of each camera pixel is calculated from these values using the following equation:
$$\phi'(x,y) = \arctan\frac{I_1^c - I_3^c}{I_2^c - I_3^c} \tag{12}$$
In this method, Equation (11) is a uniform pattern over the entire projector; thus, it is not affected by the change in projected position caused by motion, and the effect of motion error can be suppressed. However, like the three-step method, this method also requires phase unwrapping, as in Equation (8).
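A corresponding NumPy sketch of Equation (12) for the two-plus-one method, under the same caveats as the previous snippet:

```python
import numpy as np

def wrapped_phase_2plus1(i1, i2, i3):
    """Relative phase for the two-plus-one method, Eq. (12).

    i1 ~ sine pattern, i2 ~ cosine pattern, i3 ~ flat (DC) pattern.
    """
    phi = np.arctan2(i1 - i3, i2 - i3)
    return np.mod(phi, 2.0 * np.pi)
```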
If only the phase range $0 \leq \phi < 2\pi$ is projected, the relative and absolute phases coincide, and the phase can be identified without phase unwrapping. However, the spatial resolution is then severely limited by the intensity resolution (gradation) of the projector. Therefore, phase unwrapping becomes necessary; the following describes this in detail.

2.2. Phase Unwrapping

In phase unwrapping, the two broad approaches are the spatial and temporal methods [10]. This section summarizes the characteristics, merits, and demerits of these methods. First, in the spatial phase-unwrapping approach, phase unwrapping is performed by exploiting the spatial continuity of the phase, without increasing the number of projections, and various such methods have been proposed [20,21,22,23]. Although spatial phase unwrapping can measure a single continuous shape, it is prone to failure with multiple objects or discontinuous changes; thus, the measurement targets are limited. Furthermore, there are single-shot spatial methods for fringe patterns [24,25]. Although these were not proposed for the phase-shift method, they could be applied to it. These methods can handle multiple objects, because each pixel depends only on a small patch. However, errors and blur occur around discontinuous changes, since they are not pixelwise methods.
In contrast, in the temporal phase-unwrapping approach, pixelwise phase unwrapping can be performed using additional projections on top of the three steps that are the minimum required to calculate the relative phase [26]. There is no limitation on the additional projection patterns, and various patterns, such as multi-frequency and Gray-code patterns, have been proposed [11,15,27,28,29,30,31]. Depending on the pattern type, the number of additional projections varies: in the multi-frequency method, $3F$ additional projections are required if $F$ frequencies are added, and for the Gray-code pattern, $F$ projections are required to express $2^F$ periods. However, if the number of projections increases, projecting all the patterns takes longer, resulting in slower measurement. In particular, when measuring moving objects, the distance moved during projection becomes longer, increasing the motion error. To address this, a measurement method using four steps has been proposed [15]. This method is based on the two-plus-one phase-shift method [14], using the projections of Equations (9)–(11); instead of Equation (11), however, it uses projection patterns that increase or decrease monotonically and identifies the period from the magnitude of the difference. The monotonic projection patterns can be affected by global illumination, leading to period misidentification. Methods have also been proposed that reduce the number of projections by distributing the patterns across the RGB channels and projecting them simultaneously [31,32,33,34], but in all of these, the measurement accuracy depends on the surface color of the target object. Moreover, a method has been proposed in which the phase-shift pattern is expressed in seven bits and the Gray-code pattern in one bit, and, by adjusting the projection time of each projector bit, the patterns are embedded in one image [7]. However, two cameras must be placed coaxially to capture the respective patterns, which makes the system complex and calls for high-precision alignment of the camera axes.
Other than the aforementioned methods, various phase-unwrapping methods have been proposed that incorporate geometric constraints using multiple cameras [35,36] or that limit the measurement range [37]. However, the methods based on multiple cameras need long processing times, because a global search is required to select the correct corresponding point [37], and the methods based on limiting the measurement range can measure only a narrow area.
Moreover, a method has been proposed in which the amplitude of the conventional phase-shift pattern changes in each period, and phase unwrapping is performed by identifying the period from the magnitude of the amplitude [38]. Although the period can be identified from the projected pattern alone, without increasing the number of projections, pixelwise phase unwrapping has not been achieved, because a spatial codeword is necessary owing to the limited amplitude resolution.
In contrast to the aforementioned spatial methods, the proposed method can be applied to multiple objects or discontinuous changes without depth blur, because phase unwrapping is performed pixelwise. In contrast to the aforementioned temporal methods, the proposed method can also be applied to moving objects, because phase unwrapping is performed with as few as three or four steps. Furthermore, by making the period-identification condition used in phase unwrapping robust against intensity changes, stable operation under global illumination is also possible. Finally, because neither extra cameras nor a limited measurement range is necessary, the method can be applied to a wide range of measurements with a simple system.

3. Proposed Method

Many conventional methods use a function that repeats the same values in each period in every image of the projection pattern; that is, the projection pattern satisfies Equation (13):
$$I_n^p(\phi) = I_n^p(\phi + 2\pi k) \quad (n = 1, \dots, N,\; k \in \mathbb{Z}) \tag{13}$$
In contrast to conventional methods, the proposed method rearranges parts of the projection pattern for each period among the images of the projection patterns. In this way, as described in Section 3.1.1, the functions are arranged in a different order in each period. This structure is called "ordered periods" in this paper.
With this structure, phase unwrapping is solved by determining in what order each part of each projection pattern was projected. The next subsections describe the design procedure of the proposed method and the actual projection patterns for the three-step and four-step methods.

3.1. Ordered Periods Phase Shift (OPPS)

3.1.1. Basic Method

First, this subsection introduces the design procedure of the OPPS method. Let $I_n$ ($n = 1, \dots, N$) be one period of the cyclic $N$ patterns used in standard methods such as the three-step or two-plus-one method. Dividing $M$ periods into the 1st period ($[0, 2\pi)$), the 2nd period ($[2\pi, 4\pi)$), ..., and the $M$th period ($[(M-1) \cdot 2\pi, M \cdot 2\pi)$), the $N$ patterns of OPPS are denoted as
$$I_n^p(\phi^p) = \begin{cases} I_{S_n^1}(\phi^p) & (0 \leq \phi^p < 2\pi) \\ \quad\vdots \\ I_{S_n^m}(\phi^p) & ((m-1) \cdot 2\pi \leq \phi^p < m \cdot 2\pi) \\ \quad\vdots \\ I_{S_n^M}(\phi^p) & ((M-1) \cdot 2\pi \leq \phi^p < M \cdot 2\pi) \end{cases} \tag{14}$$
where $S$ is a set of rearrangements of $s = \{1, \dots, N\}$, $S^m$ is the $m$th tuple of $S$, and $S_n^m$ is the $n$th element of $S^m$.
In the case of $N = 3$, let the rearrangement be
$$S = \{(1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1)\} \tag{15}$$
For example, $S_1^1 = 1$, $S_1^2 = 1$, and $S_2^1 = 2$; hence $I_{S_1^1} = I_1$, $I_{S_1^2} = I_1$, and $I_{S_2^1} = I_2$. With patterns designed by this procedure, the phase-unwrapping problem reduces to identifying the tuple at each pixel.
Here, the three-step OPPS pattern is introduced. For example, Equations (1)–(3) of the three-step method, or Equations (9)–(11) of the two-plus-one method, could be considered as the standard pattern. However, $I_1$, $I_2$, and $I_3$ of the three-step method are identical up to a phase shift of $\frac{2}{3}\pi$, and $I_1$ and $I_2$ of the two-plus-one method are identical up to a phase shift of $\frac{\pi}{2}$. Therefore, with the rearrangement $S$ (Equation (15)), large areas in different tuples share the same intensity tuple. These duplicate-intensity areas would have to be omitted to determine the absolute phase appropriately, which is unsuitable because the usable range of absolute phase becomes small. Therefore, in this study, the following patterns are used:
$$I_1(u,v) = I_{DC} + I_\phi \cos\phi(u,v) \tag{16}$$
$$I_2(u,v) = 2 I_{DC} \tag{17}$$
$$I_3(u,v) = 0 \tag{18}$$
Here, $I_{DC}$ is the offset value of the DC (direct current) component and $I_\phi$ is the amplitude of the sine wave. If $I_{DC}$ and $I_\phi$ satisfy $0 < I_\phi < I_{DC}$, this pattern satisfies $I_3 < I_1 < I_2$ everywhere.
An ordered structure is incorporated into this pattern. In the three-step case, because the maximum number of tuples is that shown in Equation (15), up to six periods can be expressed. Therefore, by creating $I_n^p(\phi^p)$ as shown in Equation (14), the range up to $12\pi$ can be used. However, certain problems arise when using this in actual measurements. Accordingly, patterns obtained by rearranging Equation (14) are used; this rearrangement is described in the following subsections.
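The following sketch illustrates how the ordered-period patterns of Equation (14) could be assembled from one-period base functions. The function name `opps_patterns` and its arguments are hypothetical; the example instantiates the base patterns of Equations (16)–(18) with arbitrary $I_{DC}$ and $I_\phi$ values, and it does not yet apply the omission and continuity steps of Sections 3.1.2 and 3.1.3:

```python
import numpy as np

# Tuples S for N = 3, Eq. (15); tuple m gives the order of the base
# patterns within the m-th period [(m-1)*2*pi, m*2*pi).
S = [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]

def opps_patterns(base, samples_per_period):
    """Assemble N ordered-period patterns per Eq. (14).

    base: list of N callables, base[n](phi) = one-period pattern I_{n+1}(phi).
    Returns an array of shape (N, M * samples_per_period) covering [0, 2*pi*M).
    """
    n_funcs, m_periods = len(base), len(S)
    phi = np.linspace(0.0, 2.0 * np.pi, samples_per_period, endpoint=False)
    out = np.empty((n_funcs, m_periods * samples_per_period))
    for m, tup in enumerate(S):
        for n in range(n_funcs):
            seg = base[tup[n] - 1](phi)          # I_{S_n^m} on this period
            out[n, m * samples_per_period:(m + 1) * samples_per_period] = seg
    return out

# Example with the base patterns of Eqs. (16)-(18); I_DC and I_phi are
# free parameters satisfying 0 < I_phi < I_DC.
I_DC, I_phi = 0.5, 0.4
base = [lambda p: I_DC + I_phi * np.cos(p),   # I_1
        lambda p: np.full_like(p, 2 * I_DC),  # I_2
        lambda p: np.zeros_like(p)]           # I_3
patterns = opps_patterns(base, 256)
```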

3.1.2. Realizing Unique Tuple Identification

First, as represented by Equations (4)–(6), the camera pixel intensity is measured only after being affected by distance attenuation, object surface reflectance, and ambient light. Accordingly, it is necessary to uniquely identify the tuple from these values alone, using relative magnitudes or comparisons. The essential condition for uniquely identifying a tuple is that no set $\{I_1^c, I_2^c, I_3^c, \dots, I_N^c\}$ within it appears in any other tuple. Accordingly, among the $I_n^p(\phi^p)$ created using the rearrangement $S$, the areas satisfying Equation (19) are omitted so that the tuple can be identified uniquely:
$$\sum_{n=1}^{N} \left| I_n^c(\phi) - I_n^c(\theta) \right| < \epsilon \quad (0 < \theta < 2M\pi) \tag{19}$$
where $\epsilon$ is the intensity difference below which two intensities are considered identical, set on the basis of the noise intensity observed in the real system. By updating the $N$ projection patterns $I_n^p$ ($n = 1, \dots, N$) in this way, phase unwrapping can be performed by calculating the period, because the tuple can be identified from the pixelwise intensities alone.
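As an illustration of the uniqueness test of Equation (19), the following brute-force sketch flags ambiguous phase samples over sampled patterns. The helper is hypothetical, and the three-step period count $M = 6$ is assumed; comparisons within the same period are excluded, since the relative phase disambiguates those:

```python
import numpy as np

def ambiguous_mask(patterns, eps, m_periods=6):
    """Flag phase samples that violate the uniqueness condition, Eq. (19).

    patterns: array (N, T), the N patterns sampled over [0, 2*pi*M).
    A sample phi is ambiguous if some sample theta in a different period
    has an almost identical intensity N-tuple (L1 distance below eps).
    """
    n, t = patterns.shape
    per = t // m_periods                          # samples per period
    # L1 distance between every pair of intensity tuples, shape (T, T)
    d = np.abs(patterns.T[:, None, :] - patterns.T[None, :, :]).sum(axis=2)
    idx = np.arange(t)
    same_period = (idx[:, None] // per) == (idx[None, :] // per)
    conflict = (d < eps) & ~same_period
    return conflict.any(axis=1)                   # True where the area must be omitted
```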
For Equations (16)–(18) used in this study, the identity
$$\cos\phi = \cos(2\pi - \phi) \tag{20}$$
holds; thus, within each period, only one of the two ranges $0 \leq \phi < \pi$ and $\pi \leq \phi < 2\pi$ can be used. Which range is used in each period is chosen with spatial continuity in mind, as described in the following subsection. The projection patterns designed in this manner are shown in Figure 2c.

3.1.3. Spatial Continuity of Patterns

Let us consider the problem in which an intermediate intensity value is generated because light from two or more projector pixels enters one camera pixel. Different patterns are projected in each period in the proposed method; thus, the pattern can be discontinuous at the points $\phi^p = 2k\pi$ ($k \in \mathbb{Z}$) where the periods are connected (unwrapped). Around such a discontinuity, the intermediate intensity value deviates largely from the original intensity, which may lead to wrong results.
Accordingly, the projection patterns are modified to be continuous everywhere, to eliminate this error. In the proposed phase-unwrapping method, the processing of the projection patterns is done independently at each pixel; thus, even if the rearrangement is done in the internal coordinate system $(u, v)$ of the projection patterns, the tuple can still be identified without any problem. Accordingly, the projection pattern $I_n^p(\phi^p)$ is divided into parts at constant intervals, and a spatial rearrangement is performed so that the intensity change is continuous at the period-connection points. The rearrangement information is stored in a table, and the phase is transformed by referring to this table at measurement time. There are many candidate rearrangements. Although spatial continuity can be achieved by several of them, it is desirable that the average partial intensity be constant over the $N$ projections, so that the effect of global illumination is suppressed.
For Equations (16)–(18) used in this study, spatial continuity is achieved by dividing the entire range at intervals of $\pi$ and rearranging the pieces. First, an index $\kappa$ ($\kappa = 1, \dots, 6$) is assigned to the divided regions so that Equation (21) is satisfied:
$$(\kappa - 1)\pi \leq \phi^p < \kappa\pi \tag{21}$$
The regions assigned index $\kappa$ are then rearranged using the data shown in Table 1. With this processing, the patterns shown in Figure 2d are obtained.
Finally, the entire range $0 \leq \phi^p < 6\pi$ can be projected. Moreover, the motion error can be suppressed, because the total number of projection steps is three, and $I_1(u,v)$ is the only pattern that contains $\phi$ and can therefore cause a phase error due to motion.

3.1.4. Decoding Projected Patterns

This subsection describes the decoding method applied in measurements using the resultant patterns. Even when $I_1^c$, $I_2^c$, and $I_3^c$ are obtained at the camera through Equations (4)–(6), the relative magnitude order $I_3 < I_1 < I_2$ determined by Equations (16)–(18) does not change. Therefore, the relative phase is given by
$$\phi' = \arccos\left[\left(\frac{2\,(I_{\mathrm{mid}}^c - I_{\mathrm{min}}^c)}{I_{\mathrm{max}}^c - I_{\mathrm{min}}^c} - 1\right)\frac{I_{DC}}{I_\phi}\right] \tag{22}$$
where $I_{\mathrm{mid}}^c$, $I_{\mathrm{max}}^c$, and $I_{\mathrm{min}}^c$ are the middle, maximum, and minimum values of $\{I_1^c, I_2^c, I_3^c\}$, respectively. Moreover, from the relative magnitude order of $\{I_1^c, I_2^c, I_3^c\}$, the index $\kappa$ in Figure 2c can be identified, as shown in Table 2.
Referring to Table 1 with the index $\kappa$ obtained from Table 2, the absolute phase is calculated as
$$\phi = \begin{cases} \phi' + (\mathrm{hash}(\kappa) - 1) \cdot \pi & (\kappa = 2, 3, 6) \\ (\pi - \phi') + (\mathrm{hash}(\kappa) - 1) \cdot \pi & (\kappa = 1, 4, 5) \end{cases} \tag{23}$$
where the grouping by $\kappa$ arises because $0 \leq \arccos\theta < \pi$.
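Putting Equations (22) and (23) together with Tables 1 and 2, a per-pixel decoding sketch might look as follows. This is illustrative only, not the authors' implementation: vectorization over the image and noise handling are omitted, and `np.clip` guards `arccos` against noise pushing its argument outside $[-1, 1]$:

```python
import numpy as np

# Table 1: kappa (Fig. 2c) -> destination index hash(kappa) in Fig. 2d
HASH3 = {1: 1, 2: 4, 3: 2, 4: 3, 5: 5, 6: 6}
# Table 2: ascending intensity order (as argsort indices) -> kappa
ORDER_TO_KAPPA = {(2, 0, 1): 1, (1, 0, 2): 2, (2, 1, 0): 3,
                  (0, 1, 2): 4, (1, 2, 0): 5, (0, 2, 1): 6}

def decode_3step(i, I_DC, I_phi):
    """Decode one pixel of three-step OPPS: Eqs. (22)-(23) with Tables 1-2.

    i: sequence (I_1^c, I_2^c, I_3^c) for one camera pixel.
    """
    order = tuple(np.argsort(i))                # e.g. (2, 0, 1) means I3 < I1 < I2
    kappa = ORDER_TO_KAPPA[order]
    i_min, i_mid, i_max = sorted(i)
    cos_val = (2.0 * (i_mid - i_min) / (i_max - i_min) - 1.0) * I_DC / I_phi
    phi_rel = np.arccos(np.clip(cos_val, -1.0, 1.0))     # Eq. (22)
    if kappa in (2, 3, 6):                               # Eq. (23)
        return phi_rel + (HASH3[kappa] - 1) * np.pi
    return (np.pi - phi_rel) + (HASH3[kappa] - 1) * np.pi
```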
When measuring moving objects, the projected position of each pixel changes among $I_1^c$, $I_2^c$, and $I_3^c$. Therefore, phase unwrapping may fail around the connection point of two regions assigned different indices $\kappa$. However, phase unwrapping does not fail inside each region, because the relative magnitude order of $\{I_1^c, I_2^c, I_3^c\}$ is the same within a region.

3.1.5. Measurement Interpolation

In the three-step OPPS, only Equation (16) carries the phase information; thus, for each pixel, the distance corresponding to the timing of the projection of $I_1$ is measured. Accordingly, the distance is measured at timings that differ for different parts of the projection pattern. At a high measurement frame rate, the velocity change before and after a measurement can be considered very small. Accordingly, by interpolating the distance from the preceding and following measurements, measurements can be performed without lowering the frame rate.
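As a simple illustration of this interpolation (a sketch assuming approximately constant velocity between frames, as stated above):

```python
def interpolate_depth(d_prev, d_next, alpha):
    """Linear interpolation of per-pixel depth between two measurements.

    alpha in [0, 1] is the normalized offset of the desired timing between
    the two frames, e.g., derived from when I_1 was projected at this pixel.
    """
    return (1.0 - alpha) * d_prev + alpha * d_next
```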

3.1.6. Projection Period Limitations

With the aforementioned design, no additional projection is required for phase unwrapping in OPPS; thus, measurements can be performed with a minimum of three steps. Fewer steps are preferable for suppressing the motion error. However, only a few periods can be defined with the ordered structure of three steps; thus, when more periods are required, four steps are preferable. Accordingly, the design of the four-step OPPS is described in the following section.

3.2. Four-Step OPPS

3.2.1. Basic Method

First, as the standard pattern for the four-step OPPS, the following extension of Equations (9)–(11) of the two-plus-one method is used (Figure 3a):
$$I_1(u,v) = I_{DC} + I_\phi \sin\phi(u,v) \tag{24}$$
$$I_2(u,v) = I_{DC} + I_\phi \cos\phi(u,v) \tag{25}$$
$$I_3(u,v) = 2 I_{DC} \tag{26}$$
$$I_4(u,v) = 0 \tag{27}$$
where $I_{DC}$ is the offset value of the DC (direct current) component and $I_\phi$ is the amplitude of the sine wave. An ordered structure is incorporated into this pattern. For four steps, as presented in Equation (28), there are at most $_4P_4 = 24$ possible orderings (Figure 3b):
$$\begin{aligned} S = \{ & (1,2,3,4), (2,1,3,4), (1,2,4,3), (2,1,4,3), (4,1,2,3), (4,2,1,3), \\ & (3,1,2,4), (3,2,1,4), (3,4,1,2), (3,4,2,1), (4,3,1,2), (4,3,2,1), \\ & (1,3,4,2), (2,3,4,1), (1,4,3,2), (2,4,3,1), (1,3,2,4), (2,3,1,4), \\ & (1,4,2,3), (2,4,1,3), (4,1,3,2), (4,2,3,1), (3,1,4,2), (3,2,4,1) \} \end{aligned} \tag{28}$$

3.2.2. Realizing Unique Tuple Identification

As with the three-step OPPS, the pattern is refined to enable unique tuple identification. In Equations (24)–(27), $\sin\theta$ and $\cos\theta$ have a phase difference of $\frac{\pi}{2}$. Therefore, in rearranging Equation (28), the rearrangements that merely exchange $I_1$ and $I_2$ are omitted, as sketched below. This enables the expression of up to 12 periods. Accordingly, by creating $I_n^p(\phi^p)$ as shown in Equation (14), the range up to $24\pi$ can be used. The resultant projection patterns are shown in Figure 3c.
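Under this reading of the omission rule, the twelve usable orderings can be enumerated by filtering the 24 permutations so that pattern 1 appears before pattern 2, which removes exactly the orderings that differ only by exchanging $I_1$ and $I_2$; the result matches the tuples listed in Table 4 (a sketch, assuming this interpretation):

```python
from itertools import permutations

# Keep one representative of each pair that differs only by swapping the
# sine pattern (1) and the cosine pattern (2): here, those with 1 before 2.
tuples_12 = [p for p in permutations((1, 2, 3, 4)) if p.index(1) < p.index(2)]
assert len(tuples_12) == 12
```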

3.2.3. Spatial Continuity of Patterns

Further, continuity is enforced, because discontinuities occur at the period-connection points. For Equations (24)–(27), spatial continuity is achieved by dividing the whole range into intervals of $\frac{\pi}{2}$ and rearranging them. First, an index $\kappa$ ($\kappa = 1, \dots, 48$) is assigned to the divided regions so that Equation (29) is satisfied:
$$(\kappa - 1) \cdot \frac{\pi}{2} \leq \phi^p < \kappa \cdot \frac{\pi}{2} \tag{29}$$
The regions assigned index $\kappa$ are then rearranged using the data shown in Table 3. With this processing, the patterns shown in Figure 3d are obtained. There are various rearrangements that enforce continuity; in this study, to suppress the effect of global illumination, the rearrangement was selected for which the differences between the images were minimal after applying a median filter.
Finally, projection can be done over the entire range $0 \leq \phi^p < 24\pi$. Moreover, $I_1(u,v)$ and $I_2(u,v)$ contain $\phi$ and may therefore cause phase errors due to motion; note that the magnitude of the motion error varies depending on the order in which these two are projected.

3.2.4. Decoding Projection Patterns

This subsection describes the decoding method for the four-step OPPS, applied in measurements using the patterns created as described in Section 3.2.1, Section 3.2.2 and Section 3.2.3. For $I_1$, $I_2$, $I_3$, and $I_4$,
$$I_3 = \max\{I_1, I_2, I_3, I_4\} \tag{30}$$
$$I_4 = \min\{I_1, I_2, I_3, I_4\} \tag{31}$$
always hold. Accordingly, based on the relative magnitudes of $\{I_1^c, I_2^c, I_3^c, I_4^c\}$, the tuple can be identified as shown in Table 4, and the corresponding tuple index $\kappa_{\mathrm{tuple}}$ is determined as given there. The relative phase is then given by
$$\phi' = \arctan\frac{2 I_{\sin}^c - (I_{\mathrm{max}}^c + I_{\mathrm{min}}^c)}{2 I_{\cos}^c - (I_{\mathrm{max}}^c + I_{\mathrm{min}}^c)} \tag{32}$$
where $I_{\mathrm{max}}^c$ and $I_{\mathrm{min}}^c$ are the maximum and minimum values, respectively, and $I_{\sin}^c$ and $I_{\cos}^c$ are determined as given in Table 4.
Moreover, the range is divided into intervals of $\frac{\pi}{2}$, as given by Equation (29), and rearranged as presented in Table 3. Accordingly, depending on the quadrant of $\phi'$, the quadrant index $\kappa_\phi$ is defined as follows:
$$\kappa_\phi = \begin{cases} 1 & (0 \leq \phi' < \frac{\pi}{2}) \\ 2 & (\frac{\pi}{2} \leq \phi' < \pi) \\ 3 & (\pi \leq \phi' < \frac{3}{2}\pi) \\ 4 & (\frac{3}{2}\pi \leq \phi' < 2\pi) \end{cases} \tag{33}$$
Here, the index $\kappa$ in Figure 3c, i.e., before the spatial-continuity procedure of Section 3.2.3 is applied, is obtained as
$$\kappa = 4\kappa_{\mathrm{tuple}} + \kappa_\phi \tag{34}$$
Using this index $\kappa$ and Table 3, the absolute phase is calculated as follows:
$$\phi = \phi' - \frac{\pi}{2}\kappa_\phi + \mathrm{hash}(\kappa) \cdot \frac{\pi}{2} \tag{35}$$
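The following per-pixel sketch combines Equations (32)–(35) with Table 4 (encoded by which captured image is the maximum and which is the minimum) and Table 3 (passed in as a $\kappa \mapsto \mathrm{hash}(\kappa)$ dictionary). Names and structure are illustrative, not the authors' implementation:

```python
import numpy as np

# Table 4, keyed by (argmax, argmin) of the captured 4-tuple (0-based):
# value = (kappa_tuple, index of sine image, index of cosine image)
TABLE4 = {(2, 3): (0, 0, 1), (3, 2): (1, 0, 1), (3, 0): (2, 1, 2),
          (0, 3): (3, 1, 2), (0, 1): (4, 2, 3), (1, 0): (5, 2, 3),
          (1, 2): (6, 0, 3), (2, 1): (7, 0, 3), (1, 3): (8, 0, 2),
          (3, 1): (9, 0, 2), (2, 0): (10, 1, 3), (0, 2): (11, 1, 3)}

def decode_4step(i, hash48):
    """Decode one pixel of four-step OPPS, Eqs. (32)-(35).

    i: sequence (I_1^c, ..., I_4^c); hash48: Table 3 as a dict kappa -> hash(kappa).
    """
    i = np.asarray(i, dtype=float)
    imax, imin = int(np.argmax(i)), int(np.argmin(i))
    k_tuple, s, c = TABLE4[(imax, imin)]
    num = 2.0 * i[s] - (i[imax] + i[imin])                # ~ 2*A*I_phi*sin(phi)
    den = 2.0 * i[c] - (i[imax] + i[imin])                # ~ 2*A*I_phi*cos(phi)
    phi_rel = np.mod(np.arctan2(num, den), 2.0 * np.pi)   # Eq. (32), in [0, 2*pi)
    k_phi = int(phi_rel // (np.pi / 2.0)) + 1             # Eq. (33)
    kappa = 4 * k_tuple + k_phi                           # Eq. (34)
    return phi_rel - (np.pi / 2.0) * k_phi + hash48[kappa] * (np.pi / 2.0)  # Eq. (35)
```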

4. Experiments

4.1. Evaluation of Motion Error

The proposed method enables measurement with few steps, and because few of the steps are affected by motion, the effects of motion on moving targets can be suppressed. To verify this, the measurement error under motion was evaluated in a simulation using OpenGL. The comparison targets were the proposed three-step and four-step methods, the two-plus-one method (2 + 1) [14], which projects one period and does not need phase unwrapping, and the two-plus-two method (2 + 2) [15], which performs temporal phase unwrapping with the minimum number of steps. The projection range used in the 2 + 2 method was the same $24\pi$ as in the proposed four-step method. Moreover, in the 2 + 2 method, the error due to motion varies between the two sine wave projection patterns, whose changes are non-monotonic. Since the four patterns are projected cyclically, when measurements are performed at the same frame rate as the projector, the order of the four patterns differs in each frame, which changes the magnitude of the error. Therefore, the errors were calculated for each of the orders $(I_1, I_2, I_3, I_4)$, $(I_2, I_3, I_4, I_1)$, $(I_3, I_4, I_1, I_2)$, and $(I_4, I_1, I_2, I_3)$ (abbreviated as 2 + 2(1234), 2 + 2(2341), 2 + 2(3412), and 2 + 2(4123), respectively), along with their average (2 + 2(ave)).
A planar surface serving as the measurement target was placed at a distance of $d = 600\,\mathrm{mm}$ from the camera. With the camera position as the origin of the coordinate system, the projector was placed at position (0, 200, 50) mm, pointing at position (0, 0, 600) mm. In the experiments, the resolution of both the camera and the projector was set to 1024 × 768, and the respective intrinsic parameters $K_{\mathrm{cam}}$ and $K_{\mathrm{proj}}$ were set as follows:
$$K_{\mathrm{cam}} = \begin{pmatrix} 1600 & 0 & 512 \\ 0 & 1600 & 384 \\ 0 & 0 & 1 \end{pmatrix}, \quad K_{\mathrm{proj}} = \begin{pmatrix} 1911 & 0 & 512 \\ 0 & 1433 & 384 \\ 0 & 0 & 1 \end{pmatrix} \tag{36}$$
Figure 4 shows the simulation setup, including the placement of the camera and the projector. To separate out the effect of global illumination, the rendering assumed the absence of secondary reflection.
Measurements were conducted with this configuration while changing the noise magnitude and motion velocity, and the distance error was calculated under each condition. Noise was applied at three levels, $\sigma = 0\%$, $1\%$, and $5\%$ of the intensity. The direction of motion was set as $v = (0, 0, 1)\,\mathrm{mm/frame}$, and measurements were performed for each noise setting while varying the velocity. The ground truth was taken as the position of the planar surface at the median time ($\frac{N}{2}$) of the $N$ projections used by each method, and the root mean square error (RMSE) from this plane was taken as the error. Outliers caused by erroneous periods were excluded.
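For reproducibility, the noise injection and error metric can be sketched as follows. Two assumptions are made that the paper does not state exactly: the noise is zero-mean Gaussian with standard deviation equal to the stated percentage of the pixel intensity, and outliers are removed by a simple error threshold:

```python
import numpy as np

def add_intensity_noise(img, sigma_ratio, seed=0):
    """Zero-mean Gaussian noise with std = sigma_ratio * intensity (0%, 1%, 5%)."""
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, 1.0, img.shape) * sigma_ratio * img

def plane_rmse(measured_z, true_z, outlier_tol):
    """RMSE of the measured depth against the ground-truth plane.

    Points whose error exceeds outlier_tol (e.g., period misidentifications)
    are excluded, mirroring the outlier removal described above.
    """
    err = measured_z - true_z
    inliers = np.abs(err) < outlier_tol
    return float(np.sqrt(np.mean(err[inliers] ** 2)))
```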
Figure 5 shows the motion error for each of the methods under each of the conditions.
First, as shown in Figure 5a, under the no-noise condition, the error at increasing velocity grows in the following order of methods: proposed (three-step), 2 + 2(4123), proposed (four-step), 2 + 2(ave), 2 + 2(3412), 2 + 2(1234), 2 + 1, and 2 + 2(2341).
In particular, in the proposed method (three-step), the error is constant and does not change with velocity, because this method is not affected by motion.
Moreover, in the 2 + 2 method, $I_1^p$ and $I_2^p$ are the patterns that mainly contain $\phi$ and are thereby affected by motion. For 2 + 2(4123), where only one frame lies between $I_1^p$ and $I_2^p$, the error is the lowest, making it relatively robust against motion. For 2 + 2(3412) and 2 + 2(1234), there is a symmetrical deviation from the ground-truth plane depending on the order of $I_1^p$ and $I_2^p$, and the error levels are similar. In contrast, for 2 + 2(2341), there are three frames between $I_1^p$ and $I_2^p$, and the error is especially large.
Moreover, the proposed method (four-step) and 2 + 2(ave) show similar error levels. This is likely because their resolutions are similar, excluding the effects of phase unwrapping and global illumination, since the number of projected periods is the same. The error of 2 + 2(4123) is lower than that of the proposed method (four-step); however, under cyclic projection, its frame rate is 1/4 that of the proposed method (four-step). In the proposed method (four-step), the order of $I_1^p$ and $I_2^p$ is known for each region; if the measurement timing is corrected using this information, an error level similar to that of 2 + 2(4123) can be achieved. Moreover, since the result can be regarded as measurements of different regions at different points in time, highly accurate reproduction of the shape is likely possible by incorporating spatial and temporal corrections.
As shown in Figure 5b,c, as the noise increases, the errors grow particularly in the proposed method (three-step) and the 2 + 1 method. This is because the smaller the number of periods, the larger the effect on the phase when the intensity varies by one level. Accordingly, for noise of $\sigma = 5\%$, the error caused by noise exceeds the error caused by motion, producing a larger error in the proposed method (three-step).
Finally, these results suggest how to select between the proposed methods: the proposed method (three-step), with its smaller number of steps, is suitable when the effect of motion is dominant, whereas the proposed method (four-step), with its larger number of periods, is suitable when the effect of noise is dominant.

4.2. Evaluation of Global Illumination

Next, simulation experiments were conducted to measure the effect of global illumination. Assuming the absence of motion, the error was evaluated by rendering with the Cycles renderer in Blender 2.79a [39] to simulate global illumination. The comparison targets were the proposed three-step and four-step methods and the 2 + 2 method.
The camera and the projector were placed coaxially, because this experiment mainly evaluates the error in phase unwrapping. The error was therefore calculated from the absolute phases measured by each method, taking the phase difference from the ground truth as the error. Two planar surfaces intersecting at 90 degrees and the Stanford Bunny were placed in front of the camera as measurement targets.
The measurement results for the planar surfaces intersecting at 90 degrees are shown in Figure 6, and those for the Stanford Bunny in Figure 7. The magnitude of the error is shown on a scale from blue ($0\,\mathrm{rad}$) to red ($4\pi\,\mathrm{rad}$) for the 2 + 2 and proposed four-step methods, where $24\pi$ is used, and from blue ($0\,\mathrm{rad}$) to red ($\pi\,\mathrm{rad}$) for the proposed three-step method, where $6\pi$ is used. Areas that could not be measured are excluded from the error calculation and shown in black.
As shown in Figure 6, in the 2 + 2 method, period errors occur in the areas above and below the central part of the image; in the concave part in particular, the errors are larger. In contrast, in both the proposed three-step and four-step methods, no period error occurs except at the period boundaries. However, phase errors occur to some extent, because the intensity changes caused by global illumination cannot themselves be avoided. The phase error of the proposed three-step method, which has fewer periods, is larger than that of the proposed four-step method. As shown in Figure 7, similar errors occur in the concave areas of the Stanford Bunny, with period errors in particular appearing in the 2 + 2 method.

4.3. High-Speed OPPS

To conduct actual measurements with the proposed methods, a high-speed measurement system was configured using a high-speed 8-bit projector and a high-speed camera. The projector was DynaFlash, the camera was an acA640-750um (Basler AG), and the PC was equipped with an Intel® Xeon® CPU E5-2687W v4 and an NVIDIA Quadro M5000 GPU. At a frame rate of 500 fps, the projector (resolution 1024 × 768) and the camera (resolution 640 × 480) were synchronized. Figure 8 shows the implemented system and the experimental environment.
Here, the motion was kept at a level that can be generated manually, and because the environment contained both noise and global illumination, the proposed four-step method was used. The CPU handled only the control of projection by the projector and image capture by the camera, whereas the GPU performed the phase calculation from the captured images, the computation of the 3D point cloud by triangulation, and the display of the obtained point cloud, implemented in shaders using OpenGL. Because the processing time was 1.05 ms, shorter than the camera frame interval (2 ms at 500 fps), real-time measurements at 500 fps could be performed. The measurement results are shown in Figure 9.

5. Discussion

First, as discussed in Section 4.1, it was confirmed that, when measuring moving objects, the fewer the projections, the lower the motion error. In particular, among the $N$ projection patterns $I_n$ ($n = 1, \dots, N$), only a few terms contain the phase $\phi$, and the smaller the time interval between the terms containing $\phi$, the smaller the motion error. The higher the velocity of the moving object, the more prominent this effect becomes. Accordingly, when the object velocity is high, highly accurate measurements can be achieved with the proposed three-step method, which has fewer projections and fewer terms containing $\phi$.
Further, it was confirmed that with fewer projection periods the effect of noise is strong, whereas more projection periods bring greater robustness against noise. This is because, with more periods, a deviation of one intensity level has a smaller effect on the phase or the projector coordinates. Therefore, patterns with more periods lead to better measurement accuracy. However, the projector resolution is limited; thus, a certain number of periods is sufficient. For the proposed three-step method, in which only periods up to $6\pi$ can be used, the effect of noise is strong because the number of periods is insufficient; for the proposed four-step method, in which periods up to $24\pi$ can be used, measurements robust against noise could be performed, suggesting that the number of periods was sufficient.
Moreover, as discussed in Section 4.2, even under the intensity changes caused by global illumination, the proposed method could perform robust measurements, which is not possible with the 2 + 2 method. This is because, in the proposed method, the average partial intensity (over the different regions) of the $N$ projections is designed to be constant. Furthermore, whereas the intensity values are used directly in the 2 + 2 method, the proposed method is based on the relative magnitudes of the intensities, so stable phase unwrapping can be achieved even when the intensities change because of global illumination. However, in the phase-shift method, the calculation of the relative phase is affected by global illumination. As expressed in Equations (7) and (12), differences and ratios are used to eliminate the effects of ambient light and the reflectance of the measured object. However, since the projection pattern changes in each frame, the light reflected from the object also changes in each captured image; total elimination therefore cannot be achieved, resulting in error. In the proposed method as well, where Equations (22) and (32) are used, error in the relative phase cannot be completely avoided.
Moreover, in the proposed method, the absolute phase is calculated from the captured images independently at each pixel. This enables stable measurement regardless of the number of measurement targets or of discontinuities. In addition, the computation is well suited to parallel processing on a GPU; as mentioned in Section 4.3, a processing time of 1.05 ms was achieved on the actual system, realizing real-time measurement with a throughput of 500 fps.
Furthermore, in Figure 6, Figure 7 and Figure 9, there are stripe-like losses and noise that do not occur in the conventional methods. These are caused by the ambiguity of tuple identification at the connection points. It should be straightforward to remove this noise and fill the losses by filtering.
As discussed above, for moving objects under global illumination, the proposed method achieves stable phase unwrapping and high-speed 3D measurement. It should be noted that subtle errors in the relative phase due to global illumination or motion could not be completely suppressed. However, the ordered structure of the proposed method is not limited to Equations (16)–(18) and (24)–(27) used in this study; it can be incorporated into other patterns as well. For example, using coprime period pairs, as in conventional multi-frequency approaches, could be effective. Accordingly, new designs can be created by selecting patterns according to the robustness against motion, the required number of periods, and global illumination. Moreover, by combining the method with conventional motion-error correction techniques, these errors can likely be suppressed further.

6. Conclusions

This paper proposed a phase-shift method for pixelwise phase unwrapping using a minimum of three steps by introducing an ordered structure in which the projection patterns are rearranged in each period. The method requires few steps and can thus be applied to moving objects. Compared with conventional methods applicable to moving objects, the proposed method achieves measurements that are more robust against global illumination, because it uses no low-frequency projection patterns. Through simulations and real-environment experiments, it was confirmed that the proposed method achieves highly accurate measurements of moving objects even under noise and global illumination. Moreover, the processing time was about 1 ms; thus, by implementing the system with a high-speed, high-tone projector and a high-speed camera, real-time measurements at 500 fps were achieved.

Author Contributions

Conceptualization, S.T. and Y.W.; Data curation, S.T.; Funding acquisition, Y.W. and M.I.; Investigation, S.T.; Methodology, S.T. and M.M.; Project administration, Y.W. and M.I.; Resources, S.T.; Software, S.T. and M.M.; Supervision, Y.W. and M.I.; Validation, S.T.; Visualization, S.T.; Writing—original draft, S.T.; Writing—review & editing, S.T. and Y.W.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Watanabe, Y. High-speed optical 3D sensing and its applications. Adv. Opt. Technol. 2016, 5, 367–376.
2. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128–160.
3. Van der Jeught, S.; Dirckx, J.J. Real-time structured light profilometry: A review. Opt. Lasers Eng. 2016, 87, 18–31.
4. Watanabe, Y.; Komuro, T.; Ishikawa, M. 955-fps real-time shape measurement of a moving/deforming object using high-speed vision for numerous-point analysis. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3192–3197.
5. Tabata, S.; Noguchi, S.; Watanabe, Y.; Ishikawa, M. High-speed 3D sensing with three-view geometry using a segmented pattern. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 3900–3907.
6. Gao, H.; Takaki, T.; Ishii, I. GPU-based real-time structured light 3D scanner at 500 fps. In Real-Time Image and Video Processing, Proceedings of the International Society for Optics and Photonics, Brussels, Belgium, 16–19 April 2012; Volume 8437, p. 84370J.
7. Maruyama, M.; Tabata, S.; Watanabe, Y. Multi-pattern Embedded Phase Shifting Using a High-Speed Projector for Fast and Accurate Dynamic 3D Measurement. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Lake Tahoe, NV, USA, 12–15 March 2018; pp. 921–929.
8. Watanabe, Y.; Narita, G.; Tatsuno, S.; Yuasa, T.; Sumino, K.; Ishikawa, M. High-speed 8-bit image projector at 1,000 fps with 3 ms delay. In Proceedings of the 22nd International Display Workshops, Otsu, Japan, 9–11 December 2015; pp. 1064–1065.
9. Srinivasan, V.; Liu, H.C.; Halioua, M. Automated phase-measuring profilometry of 3-D diffuse objects. Appl. Opt. 1984, 23, 3105–3108.
10. Zuo, C.; Huang, L.; Zhang, M.; Chen, Q.; Asundi, A. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2016, 85, 84–103.
11. Gupta, M.; Nayar, S.K. Micro phase shifting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 813–820.
12. Weise, T.; Leibe, B.; Van Gool, L. Fast 3d scanning with automatic motion compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
13. Boisvert, J.; Drouin, M.A.; Dicaire, L.G.; Picard, M.; Godin, G. Motion Compensation for Phase-Shift Structured-Light Systems Based on a Total-Variation Framework. In Proceedings of the IEEE International Conference on 3D Vision, Qingdao, China, 10–12 October 2017; pp. 658–666.
14. Zhang, S.; Yau, S.T. High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm. Opt. Eng. 2007, 46, 113603.
15. Zuo, C.; Chen, Q.; Gu, G.; Feng, S.; Feng, F. High-speed three-dimensional profilometry for multiple objects with complex shapes. Opt. Express 2012, 20, 19493–19510.
16. Jia, P.; Kofman, J.; English, C.E. Two-step triangular-pattern phase-shifting method for three-dimensional object-shape measurement. Opt. Eng. 2007, 46, 083201.
17. Huang, P.S.; Zhang, S.; Chiang, F.P. Trapezoidal phase-shifting method for three-dimensional shape measurement. Opt. Eng. 2005, 44, 123601.
18. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158.
19. Chen, X.; Chen, S.; Luo, J.; Ma, M.; Wang, Y.; Wang, Y.; Chen, L. Modified Gray-Level Coding Method for Absolute Phase Retrieval. Sensors 2017, 17, 2383.
20. Chen, K.; Xi, J.; Yu, Y. Quality-guided spatial phase unwrapping algorithm for fast three-dimensional measurement. Opt. Commun. 2013, 294, 139–147.
21. Kemao, Q.; Gao, W.; Wang, H. Windowed Fourier-filtered and quality-guided phase-unwrapping algorithm. Appl. Opt. 2008, 47, 5420–5428.
22. Bioucas-Dias, J.; Valadao, G. Phase unwrapping via graph cuts. IEEE Trans. Image Process. 2007, 15, 698–709.
23. Zheng, D.; Da, F. A novel algorithm for branch cut phase unwrapping. Opt. Lasers Eng. 2011, 49, 609–617.
24. Li, F.; Gao, S.; Shi, G.; Li, Q.; Yang, L.; Li, R.; Xie, X. Single Shot Dual-Frequency Structured Light Based Depth Sensing. IEEE J. Sel. Top. Signal Process. 2015, 9, 384–395.
25. Yang, L.; Li, F.; Xiong, Z.; Shi, G.; Niu, Y.; Li, R. Single-shot dense depth sensing with frequency-division multiplexing fringe projection. J. Vis. Commun. Image Represent. 2017, 46, 139–149.
26. Saldner, H.O.; Huntley, J.M. Temporal phase unwrapping: Application to surface profiling of discontinuous objects. Appl. Opt. 1997, 36, 2770–2775.
27. Chen, F.; Su, X. Phase-unwrapping algorithm for the measurement of 3D object. Opt.-Int. J. Light Electron Opt. 2012, 123, 2272–2275.
28. Song, L.; Dong, X.; Xi, J.; Yu, Y.; Yang, C. A new phase unwrapping algorithm based on Three Wavelength Phase Shift Profilometry method. Opt. Laser Technol. 2013, 45, 319–329.
29. Sansoni, G.; Corini, S.; Lazzari, S.; Rodella, R.; Docchio, F. Three-dimensional imaging based on Gray-code light projection: Characterization of the measuring algorithm and development of a measuring system for industrial applications. Appl. Opt. 1997, 36, 4463–4472.
30. Chen, X.; Wang, Y.; Wang, Y.; Ma, M.; Zeng, C. Quantized phase coding and connected region labeling for absolute phase retrieval. Opt. Express 2016, 24, 28613–28624.
31. Moreno, D.; Hwang, W.Y.; Taubin, G. Rapid hand shape reconstruction with chebyshev phase shifting. In Proceedings of the 2016 IEEE Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 157–165.
32. Cao, Y.; Su, X. RGB tricolor based fast phase measuring profilometry. In Advanced Materials and Devices for Sensing and Imaging, Proceedings of the International Society for Optics and Photonics, Shanghai, China, 14–18 October 2002; International Society for Optics and Photonics: Bellingham, WA, USA, 2002; Volume 4919, pp. 528–536.
33. Schubert, E.; Rath, H.; Klicker, J. Fast 3D object recognition using a combination of color-coded phase-shift principle and color-coded triangulation. In Sensors and Control for Automation, Proceedings of the International Society for Optics and Photonics, Frankfurt, Germany, 19–24 June 1994; International Society for Optics and Photonics: Bellingham, WA, USA, 1994; Volume 2247, pp. 202–214.
34. Zhang, Z.; Guo, T. Absolute Phase Measurement Based on Combining Binary Color-Code and Phase-Shift Light Projection. AIP Conf. Proc. 2010, 1236, 427–432.
35. Tao, T.; Chen, Q.; Da, J.; Feng, S.; Hu, Y.; Zuo, C. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system. Opt. Express 2016, 24, 20253–20269.
36. Hyun, J.S.; Chiu, G.T.C.; Zhang, S. High-speed and high-accuracy 3D surface measurement using a mechanical projector. Opt. Express 2018, 26, 1474–1487.
37. An, Y.; Hyun, J.S.; Zhang, S. Pixel-wise absolute phase unwrapping using geometric constraints of structured light system. Opt. Express 2016, 24, 18445–18459.
38. Chen, S.Y.; Cheng, N.J.; Su, W.H. Phase-shifting projected fringe profilometry using ternary-encoded patterns. In Photonic Fiber and Crystal Devices: Advances in Materials and Innovations in Device Applications XI, Proceedings of the International Society for Optics and Photonics, San Diego, CA, USA, 21–25 August 2011; International Society for Optics and Photonics: Bellingham, WA, USA, 2011; Volume 10382, p. 103820U.
39. Blender Online Community. Blender—A 3D Modelling and Rendering Package; Blender Foundation, Blender Institute: Amsterdam, The Netherlands, 2017.
Figure 1. Examples of the projection pattern using the phase-shift method. (a) Three-step phase shift and (b) two-plus-one phase shift.
Figure 2. Design of the projection pattern for the three-step OPPS. (a) Standard projection patterns $I_1^p$, $I_2^p$, and $I_3^p$. (b) Incorporating the ordered structure (Section 3.1.1); the three steps are reordered, giving $_3P_3 = 6$ combinations. (c) Constrained to uniquely identify the tuple (Section 3.1.2). (d) Spatial continuity (Section 3.1.3).
Figure 3. Design of the projection pattern for the four-step OPPS. (a) Standard projection patterns $I_1^p$, $I_2^p$, $I_3^p$, and $I_4^p$. (b) Incorporating the ordered structure (Section 3.2.1); the four steps are reordered, giving $_4P_4 = 24$ combinations. (c) Constrained to uniquely identify the tuple (Section 3.2.2). (d) Spatial continuity (Section 3.2.3).
Figure 4. Experimental configuration for simulation.
Figure 5. Motion errors. (a) RMSE for noise $\sigma = 0\%$, (b) RMSE for noise $\sigma = 1\%$, and (c) RMSE for noise $\sigma = 5\%$.
Figure 6. Measurement results for two planar surfaces intersecting at 90 degrees.
Figure 7. Measurement results for the Stanford Bunny.
Figure 8. Experimental environment. The high-speed projector and high-speed camera were synchronized, and the camera was placed vertically above the projector to perform the measurements. The measurement results were obtained in real time and displayed on the monitor screen.
Figure 9. Measurement results in the real environment. Each row shows the 3D shape reconstructed using $I_1^c$, $I_2^c$, $I_3^c$, and $I_4^c$ (starting from the left). From the data measured at 500 fps, frames at 200 ms intervals were extracted. From the top, the measurement results at times t = 0, 200, 400, and 600 ms are shown.
Table 1. Continuity table for the proposed method (three-step). hash($\kappa$) denotes the movement destination in Figure 2d corresponding to index $\kappa$ in Figure 2c.

$\kappa$ in Figure 2c:  1  2  3  4  5  6
hash($\kappa$):         1  4  2  3  5  6
Table 2. Decoding in the proposed method (three-step). The relative magnitude order of the captured intensities determines the tuple and the index $\kappa$ in Figure 2c.

$I_3^c < I_1^c < I_2^c$: tuple (1,2,3), $\kappa$ = 1
$I_2^c < I_1^c < I_3^c$: tuple (1,3,2), $\kappa$ = 2
$I_3^c < I_2^c < I_1^c$: tuple (2,1,3), $\kappa$ = 3
$I_1^c < I_2^c < I_3^c$: tuple (2,3,1), $\kappa$ = 4
$I_2^c < I_3^c < I_1^c$: tuple (3,1,2), $\kappa$ = 5
$I_1^c < I_3^c < I_2^c$: tuple (3,2,1), $\kappa$ = 6
Table 3. Continuity table for the proposed method (four-step). hash($\kappa$) denotes the movement destination in Figure 3d corresponding to index $\kappa$ in Figure 3c.

$\kappa$:        1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
hash($\kappa$):  1  2  3  4 33 14 15 16 13 34 27 24 10  8  5  6

$\kappa$:       17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
hash($\kappa$):  7 11 12  9 21 22 23 28 25 46 47 20 17 30 31 44

$\kappa$:       33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
hash($\kappa$): 41 42 43 32 37 38 39 40 29 18 19 48 45 26 35 36
Table 4. Decoding in the proposed method (four-step). The pair ($I_{\mathrm{max}}^c$, $I_{\mathrm{min}}^c$) determines the tuple, $I_{\sin}^c$, $I_{\cos}^c$, and the tuple index $\kappa_{\mathrm{tuple}}$.

($I_3^c$, $I_4^c$): tuple (1,2,3,4), $I_{\sin}^c = I_1^c$, $I_{\cos}^c = I_2^c$, $\kappa_{\mathrm{tuple}}$ = 0
($I_4^c$, $I_3^c$): tuple (1,2,4,3), $I_{\sin}^c = I_1^c$, $I_{\cos}^c = I_2^c$, $\kappa_{\mathrm{tuple}}$ = 1
($I_4^c$, $I_1^c$): tuple (4,1,2,3), $I_{\sin}^c = I_2^c$, $I_{\cos}^c = I_3^c$, $\kappa_{\mathrm{tuple}}$ = 2
($I_1^c$, $I_4^c$): tuple (3,1,2,4), $I_{\sin}^c = I_2^c$, $I_{\cos}^c = I_3^c$, $\kappa_{\mathrm{tuple}}$ = 3
($I_1^c$, $I_2^c$): tuple (3,4,1,2), $I_{\sin}^c = I_3^c$, $I_{\cos}^c = I_4^c$, $\kappa_{\mathrm{tuple}}$ = 4
($I_2^c$, $I_1^c$): tuple (4,3,1,2), $I_{\sin}^c = I_3^c$, $I_{\cos}^c = I_4^c$, $\kappa_{\mathrm{tuple}}$ = 5
($I_2^c$, $I_3^c$): tuple (1,3,4,2), $I_{\sin}^c = I_1^c$, $I_{\cos}^c = I_4^c$, $\kappa_{\mathrm{tuple}}$ = 6
($I_3^c$, $I_2^c$): tuple (1,4,3,2), $I_{\sin}^c = I_1^c$, $I_{\cos}^c = I_4^c$, $\kappa_{\mathrm{tuple}}$ = 7
($I_2^c$, $I_4^c$): tuple (1,3,2,4), $I_{\sin}^c = I_1^c$, $I_{\cos}^c = I_3^c$, $\kappa_{\mathrm{tuple}}$ = 8
($I_4^c$, $I_2^c$): tuple (1,4,2,3), $I_{\sin}^c = I_1^c$, $I_{\cos}^c = I_3^c$, $\kappa_{\mathrm{tuple}}$ = 9
($I_3^c$, $I_1^c$): tuple (4,1,3,2), $I_{\sin}^c = I_2^c$, $I_{\cos}^c = I_4^c$, $\kappa_{\mathrm{tuple}}$ = 10
($I_1^c$, $I_3^c$): tuple (3,1,4,2), $I_{\sin}^c = I_2^c$, $I_{\cos}^c = I_4^c$, $\kappa_{\mathrm{tuple}}$ = 11
