
A Clustered Adaptive Exposure Time Selection Methodology for HDR Structured Light 3D Reconstruction

Zhuang Li, Rui Ma and Shuyu Duan
1 Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2 School of Electronic Engineering, Guangxi University of Science and Technology, Liuzhou 545616, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(15), 4786; https://doi.org/10.3390/s25154786
Submission received: 22 June 2025 / Revised: 24 July 2025 / Accepted: 25 July 2025 / Published: 3 August 2025

Abstract

Fringe projection profilometry (FPP) has been widely applied in industrial 3D measurement due to its high precision and non-contact nature. However, FPP often encounters difficulties with high-dynamic-range objects, which degrade the phase computation. In this paper, an adaptive exposure time selection method is proposed that calculates the optimal number of exposures and the exposure times by using an improved clustering method to divide the scene into regions with different degrees of reflectivity. Meanwhile, a phase order sharing strategy is adopted in the phase unwrapping stage, so that the same set of complementary Gray code patterns is used to calculate the phase orders under different exposure times. The experimental results demonstrate that the measurement error of the proposed method is reduced by 25.4% with almost the same total exposure time.

1. Introduction

Optical measurement in industrial visual inspection uses light-based technologies to capture, analyze, and quantify the visual features of products or components, such as dimensions, surface defects, and color consistency, thereby enabling non-contact, high-precision, and efficient quality control in manufacturing [1,2,3,4]. Fringe projection profilometry (FPP) is a commonly used method for 3D measurement [5,6]. Compared with other methods such as stereo vision [7,8] and line laser scanning [9,10], FPP offers high accuracy and fast speed [11]. Its main technical process includes coded projection [12], camera acquisition, system calibration [13], phase solving, and point cloud reconstruction [14,15,16,17,18,19]. To enhance measurement precision and efficiency, researchers have worked on optimizing projection devices [20,21,22,23], calibration methods [24,25,26], phase recovery algorithms [15,16,27,28], and deep learning [5,28,29,30]. However, when performing 3D measurements in real production, the effectiveness of structured light systems is often limited by the optical properties of the measured surface [31]. In general, high-dynamic-range (HDR) objects are those with a wide range of surface reflectance variations, such as rusty, oily, or shiny surfaces, whose dynamic range exceeds what traditional low-dynamic-range sensors can capture [32,33]. In industrial inspection, HDR objects are workpieces whose captured images contain both under-modulated and oversaturated pixels because of material differences across the surface.
Adjusting the exposure time is a common way to cope with the 3D measurement of HDR scenes, and researchers have proposed a series of multiple-exposure techniques for measuring high-dynamic-range objects [34,35,36,37]. Feng et al. [34] proposed a fringe projection method that can inspect scenes with a large range of reflectivity variation. Rao et al. [35] carried out high-dynamic-range 3D shape determination based on automatic exposure selection: the images collected at different exposure times are sorted from brightest to darkest, and the stack is then traversed pixel by pixel to select the brightest, yet unsaturated, value, generating a new fused image for subsequent absolute phase recovery. The fused image preserves good fringe quality in both the bright and dark areas, enabling effective measurement of surfaces with different reflectivities. However, the method's multiple exposure parameters are selected from experience rather than a specific calculation; moreover, scenes with a very large dynamic range require many exposures and many projected coded images, so the measurement speed and efficiency are low.
Research has also been conducted on the selection of exposure time, with some researchers developing exposure time selection methods based on pixel analysis [38,39,40,41]. Lin et al. [40] presented an adaptive digital fringe projection technique that avoids image saturation and maintains a high signal-to-noise ratio (SNR) in the three-dimensional (3D) shape measurement of objects whose surface reflectivity varies over a large range. Jiang et al. [41] proposed 180-degree phase-shifted (inverted) fringe patterns to complement the regular fringe patterns. This method measures the reflectance of the target surface pixel by pixel and then constructs the optimal exposure sequence: image sequences are acquired under different projection intensities, a response model between pixel intensity and projection intensity is established, and the optimal exposure parameters are determined by calculating the reflectance gradient. However, its pixel-by-pixel nature significantly increases the algorithm's complexity, making real-time operation difficult, and it does not consider how the number of exposure groups should be chosen.
Existing exposure assessment metrics are mainly designed based on the principles of gradient maximization [42] or information entropy maximization [43], which are theoretically based on the fact that conventional vision algorithms rely on gradient or entropy features for target recognition. In structured light 3D measurement, however, it is the integrity of the phase encoding information that is the core factor in determining the 3D reconstruction accuracy. For this reason, Liu et al. [44] proposed and experimentally verified the effectiveness of a phase information evaluation index based on intensity modulation, which assumes that the quality of phase encoding is positively correlated with the intensity of stripe modulation [45]. However, this scheme is only applicable to short exposure control in low illumination scenes. When facing the wide reflectance distribution (0.05–0.95) of industrial parts, the long exposure strategy will lead to overexposure of the high-dynamic-range region, which seriously affects the phase decoding accuracy.
In this paper, we propose incorporating complementary Gray codes into the multi-exposure fusion method to improve measurement efficiency. Generally, the multi-frequency method is the unwrapping algorithm used in multi-exposure fusion, which leads to a larger number of projected images and lower measurement efficiency when applied to multi-exposure techniques. On the other hand, complementary Gray codes are binary encodings used solely for unwrapping and possess advantages such as high reliability and robustness to noise. More importantly, they only require the projection of one set, thereby improving the efficiency of 3D measurement. This paper discusses the principles and methods for implementing multi-exposure fusion combined with complementary Gray codes. Experimental results demonstrate the effectiveness and speed of this method.

2. Methodology

Structured light 3D measurement generally consists of four parts: coded projection, camera acquisition, phase unwrapping, and point cloud reconstruction. The method proposed in this paper improves the camera acquisition and phase unwrapping stages. The adaptive exposure fusion method determines, through clustering, the optimal number of exposures and the exposure times required in HDR scenes. The phase order sharing method based on complementary Gray codes uses the same set of Gray code patterns to unwrap the phase of phase-shifting images captured under different exposure times, thereby reducing the number of projected images in HDR scenes. The two methods are detailed below.

2.1. Adaptive Exposure Time Selection Method

The grey level of an image captured by the camera is affected by the external light, the camera gain, and the reflectivity of the captured object, as described by Equation (1):
$$ I_c(x, y; t) = t\alpha \left[ \rho(x, y)\, l_p(x, y) + \rho(x, y)\, l_e(x, y) \right] + l_n(x, y) \tag{1} $$
where $I_c(x, y; t)$ is the brightness recorded by the camera sensor at the pixel coordinates $(x, y)$ for exposure time $t$, $\alpha$ is the camera's scale factor for converting the incoming light intensity into grey values, $\rho(x, y)$ is the reflectivity of the illuminated object, $l_p(x, y)$ is the projector's projection intensity at that pixel, $l_e(x, y)$ is the intensity of the ambient light on the surface of the object, and $l_n(x, y)$ comprises the camera's own noise and the ambient light that enters the lens directly. Figure 1a shows our pixel-projection intensity response model.
The projected luminance is linearly related to the image grey level as follows:
$$ I_c(x, y; t) = K(x, y)\, l_p(x, y) + B(x, y), \qquad K(x, y) = t\alpha \rho(x, y), \qquad B(x, y) = t\alpha \rho(x, y)\, l_e(x, y) + l_n(x, y) \tag{2} $$
If $l_p(x, y)$ is known and the exposure time is fixed, $t\alpha$ is constant; therefore, by projecting two pictures of known brightness with the projector and capturing them, the reflectance $\rho(x, y)$ of the object at every pixel within the camera's imaging range can be calculated.
Assume that the projector pre-projects two uniform grey-scale images with intensities $l_p^l = 51$ and $l_p^h = 255$, and the captured image values are $I_c^l(x, y; t)$ and $I_c^h(x, y; t)$. If there are no oversaturated pixels in $I_c^h(x, y; t)$, the reflectance of the object is calculated using Equations (3) and (4):
$$ K(x, y) = \begin{cases} \dfrac{I_c^h(x, y; t) - I_c^l(x, y; t)}{l_p^h(x, y) - l_p^l(x, y)}, & I_c^h(x, y) < 255 \\[1.5ex] \dfrac{I_c^l(x, y; t) - B(x, y)}{l_p^l(x, y)}, & I_c^h(x, y) = 255 \ \text{and} \ I_c^l(x, y) < 255 \end{cases} \tag{3} $$
$$ B(x, y) = I_c^0(x, y; t) = t\alpha \rho(x, y)\, l_e(x, y) + l_n(x, y) \tag{4} $$
Accordingly, if there are saturated pixels in $I_c^h(x, y; t)$, the projected brightness is set to 0 while $t\alpha$ is kept constant, and the corresponding image $I_c^0$ is acquired, for which $I_c^0(x, y) = B(x, y)$.
If the captured image is too dark, the overall signal-to-noise ratio is low and the phase information is easily corrupted by camera noise and ambient light; if the light intensity is too strong, phase information is lost in the grey-saturated regions. The ideal pixel grey value $I_{th}$ is therefore set slightly below the camera's upper grey limit (the maximum unsaturated threshold is below 255 and is generally set to 250, leaving some margin to reduce error), and the optimal exposure time can then be expressed as follows (where $t_0$ is the exposure time used when capturing the pre-projected images):
$$ t_m(x, y) = \begin{cases} \dfrac{I_{th} - B(x, y)}{I_c^h - B(x, y)} \cdot \dfrac{l_p(x, y)}{l_p^h(x, y)}\, t_0, & I_c^h(x, y) < 255 \\[1.5ex] \dfrac{I_{th} - B(x, y)}{I_c^l - B(x, y)} \cdot \dfrac{l_p(x, y)}{l_p^l(x, y)}\, t_0, & I_c^h(x, y) = 255 \ \text{and} \ I_c^l(x, y) < 255 \end{cases} \tag{5} $$
The ideal exposure time is largely determined by the surface reflectance of the object, which in turn depends mainly on its material. For an object whose surface reflectance varies strongly, the reflectance distribution will show several peaks corresponding to the material composition. Therefore, we can automatically determine the top $K$ exposure times that account for the largest share of the per-pixel values $t_m(x, y)$, so that the exposure time sequence can be adaptively selected according to the scene.
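To make the response model concrete, the following sketch (a minimal Python/NumPy implementation written for illustration, not code from the paper) estimates $B(x, y)$ from the ambient image and evaluates Equation (5) branch by branch. The function name, the working projection intensity l_p = 255, and the threshold I_th = 250 are our assumptions.

```python
# A minimal sketch of Eqs. (2)-(5), assuming 8-bit images captured at a
# reference exposure t0 under pre-projected intensities l_pl = 51, l_ph = 255,
# and 0 (ambient). Names and the choice l_p = 255 are illustrative assumptions.
import numpy as np

def ideal_exposure_map(I_ch, I_cl, I_c0, t0, l_ph=255.0, l_pl=51.0,
                       l_p=255.0, I_th=250.0):
    I_ch = I_ch.astype(np.float64)
    I_cl = I_cl.astype(np.float64)
    B = I_c0.astype(np.float64)          # Eq. (4): projector off, so I_c0 = B

    sat_h = I_ch >= 255.0                # saturated under the bright pattern
    eps = 1e-6                           # guard against division by zero

    # Eq. (5), first branch: the bright pre-projection is usable.
    t_bright = (I_th - B) / np.maximum(I_ch - B, eps) * (l_p / l_ph) * t0
    # Eq. (5), second branch: fall back to the dim pre-projection.
    t_dim = (I_th - B) / np.maximum(I_cl - B, eps) * (l_p / l_pl) * t0

    return np.where(sat_h, t_dim, t_bright)
```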
Figure 1b illustrates the flowchart of the adaptive exposure time calculation. According to Equation (5), the optimal exposure time for each pixel can be calculated from the uniform grey-scale images pre-projected at intensities $l_p^l$ and $l_p^h$ together with the ambient image $I_c^0$. By analyzing the resulting intensity values on the object under test, we can find a representative sequence of exposure times. The K-means clustering algorithm is a classical and widely used unsupervised method that divides data into $K$ classes based on similarity under a distance metric. This paper proposes an adaptive exposure time selection strategy based on the K-means principle. The strategy divides the one-dimensional array $t_m = \{ t_m(x_1, y_1), t_m(x_2, y_2), \ldots, t_m(x_{wh}, y_{wh}) \}$ of per-pixel exposure values into $K$ groups $C = \{ C_1, C_2, C_3, \ldots, C_K \}$. To reduce the saturated area in the image, note that for any pixel $(x_i, y_i)$ the captured light intensity will saturate if the exposure time exceeds the corresponding $t_m(x_i, y_i)$. Therefore, the minimum exposure value $t_{ci}$ of each group is chosen as the group's typical value, instead of the group mean used in the conventional clustering algorithm. The specific adaptive exposure time strategy is as follows:
Step 1: Pre-projection image acquisition. Uniform grey-scale images with projection intensities $l_p^l = 51$ and $l_p^h = 255$ are projected onto the surface of the object under test, the corresponding reflected light field images $I_c^l$ and $I_c^h$ are captured by the camera, and the ambient light image $I_c^0$ is captured with no projection.
Step 2: Calculation of the ideal pixel-by-pixel exposure time $t_m(x, y)$ according to Equation (5); these values serve as the sample set for the subsequent cluster analysis.
Step 3: Calculation of the optimal exposure time sequence based on the improved K-means principle (a code sketch follows after the list below).
(1) Set the maximum acceptable number of clusters $K_{\max}$, the maximum number of iterations $iter$, the minimum exposure time $t_{\min}$, the maximum exposure time $t_{\max}$, and the improvement threshold $impro_{th}$.
(2) Randomly select an initial cluster center for each cluster, assign the samples to the nearest cluster according to the minimum distance principle, and update each cluster center with the mean of its samples.
(3) Repeat step (2) until the cluster centers no longer change or the maximum number of iterations is reached.
(4) Calculate and save the sum $Dis_k$ of the distances from each intra-cluster point to its cluster center (when the number of clusters $k > 2$), and calculate the current degree of improvement $impro = (Dis_{k-1} - Dis_k) / Dis_{k-1}$. If $impro > impro_{th}$ and $k < K_{\max}$, increase the current number of clusters $k$ by 1 and repeat from step (2); otherwise, output the final number of clusters $k$ and the minimum value $t_{ci}$ ($i = 1, 2, \ldots, k$) of each cluster.
It is worth noting that, before the actual clustering operation, the pixel-wise ideal exposure times $t_m(x, y)$ are truncated from above and below so that their maximum is $t_{\max}$ and their minimum is $t_{\min}$, where $t_{\max}$ and $t_{\min}$ are set according to the demands of the actual measurement. Empirically, $t_{\min} = 6$ ms, $t_{\max} < 60$ ms, and $impro_{th} = 0.5$.
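The following sketch implements Step 3 under our reading of the improvement criterion: another cluster is added only while the relative drop of $Dis_k$ stays above $impro_{th}$. The helper names are ours, and a practical implementation would subsample the exposure map before clustering.

```python
# A sketch of the clustered exposure-time selection. The representative of
# each cluster is its MINIMUM, so no pixel assigned to it is pushed into
# saturation. All names are illustrative, not from the paper.
import numpy as np

def kmeans_1d(samples, k, iters=100, seed=0):
    """Plain 1-D K-means: returns cluster centers and per-sample labels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(samples, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        new = np.array([samples[labels == i].mean() if np.any(labels == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def select_exposures(t_m, k_max=8, t_min=6.0, t_max=60.0, impro_th=0.5):
    t = np.clip(np.asarray(t_m, dtype=np.float64).ravel(), t_min, t_max)
    prev_dis, exposures = None, None
    for k in range(2, k_max + 1):
        centers, labels = kmeans_1d(t, k)
        dis = np.abs(t - centers[labels]).sum()          # Dis_k
        exposures = sorted(t[labels == i].min()          # cluster minima t_ci
                           for i in range(k) if np.any(labels == i))
        if prev_dis is not None:
            if (prev_dis - dis) / prev_dis <= impro_th:  # gain too small: stop
                break
        prev_dis = dis
    return exposures                                     # exposure time sequence
```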
After the optimal number of exposure groups and their times are determined by the adaptive exposure time selection method, multiple image sequences are acquired at the corresponding exposure times and then fused to generate high-dynamic-range images with a uniform light intensity distribution and a high signal-to-noise ratio. The multi-exposure fusion algorithm used in this paper is similar to the multi-polarization method [46]: multiple sets of phase-shifting images are projected and acquired with different camera exposure times, the maximum unsaturated channel is selected for each pixel from the multiple sets, and every pixel of the same-frequency fringe images uses the same channel to generate the fused image. This reduces the number of saturated points so that the phase in saturated regions can be recovered while the grey level in dark regions is not weakened. The specific formulas are as follows:
$$ p_{fs} = \underset{n}{\arg\max} \left\{ I_{p_t, n}(x, y) \ \text{for} \ n = 1, 2, \ldots, N \right\} \tag{6} $$
$$ I_f(x, y) = I_{p_{fs}, n}(x, y) \tag{7} $$
where $p_t$ is the maximum unsaturated exposure time channel among the different exposure times for the same phase-shifting step, and $p_{fs}$ is the exposure group index at which a position attains the maximum unsaturated luminance in the $n$-th of the $N$ phase-shifting steps, also known as the fusion decision map. After the fused image $I_f$ is obtained, the absolute phase at each position can be solved, and the 3D point cloud information can be calculated by substituting the phase into the system calibration parameters.
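A minimal sketch of the fusion rule of Equations (6) and (7) follows; the array layout (exposure groups × phase steps × image) and the saturation threshold of 250 are illustrative assumptions rather than specifics from the paper.

```python
# For each pixel, choose the exposure group whose fringes are brightest yet
# unsaturated in all N phase-shifting steps, and reuse that group index for
# every step so the fused fringe stays consistent. `stacks`: (K, N, H, W).
import numpy as np

def fuse_multi_exposure(stacks, sat=250):
    stacks = np.asarray(stacks)
    unsaturated = (stacks < sat).all(axis=1)          # (K, H, W) valid groups
    peak = stacks.max(axis=1).astype(np.float64)      # brightness per group
    peak[~unsaturated] = -np.inf                      # exclude saturated groups
    decision = peak.argmax(axis=0)                    # fusion decision map (H, W)
    fused = np.take_along_axis(stacks, decision[None, None], axis=0)[0]
    return fused, decision                            # fused stack: (N, H, W)
```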

2.2. Phase Order Sharing Method

The projection strategy in the experiments is the N-step phase-shifting method. This method is insensitive to abrupt changes or discontinuities on the surface of the object to be measured, and it has been widely applied in fringe projection profilometry (FPP) systems [47]. The grey-value map captured by the camera can be expressed as
$$ I_n(x, y) = A(x, y) + B(x, y) \cos\left[ \phi(x, y) + 2\pi n / N \right] \tag{8} $$
where $n$ is the phase-shifting index ($N = 6$ in the experiments), $(x, y)$ denotes the pixel coordinates of the camera, $A$ and $B$ denote the average intensity and the intensity modulation, respectively, and $\phi$ is the phase value we need to solve. The average intensity $A$, intensity modulation $B$, and phase value $\phi$ can be calculated by the following equations:
$$ A(x, y) = \frac{1}{N} \sum_{n=1}^{N} I_n(x, y) \tag{9} $$
$$ B(x, y) = \frac{2}{N} \sqrt{ \left[ \sum_{n=1}^{N} I_n \sin(2\pi n / N) \right]^2 + \left[ \sum_{n=1}^{N} I_n \cos(2\pi n / N) \right]^2 } \tag{10} $$
$$ \phi(x, y) = \tan^{-1}\!\left[ \frac{ \sum_{n=1}^{N} I_n \sin(2\pi n / N) }{ \sum_{n=1}^{N} I_n \cos(2\pi n / N) } \right] \tag{11} $$
Since the inverse tangent function is used in the calculation, the phase solved by the phase-shifting method alone lies in $(-\pi, \pi]$ and shows a wrapped (truncated) distribution. To extend it over the whole space, the phase order at each pixel position must be uniquely determined and the absolute phase recovered accordingly, a process called phase unwrapping or absolute phase expansion.
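For reference, the sketch below evaluates Equations (9)-(11) on an N-step stack with NumPy; arctan2 is used so the wrapped phase falls in $(-\pi, \pi]$, matching the stated value domain. Names are illustrative.

```python
# A short sketch of Eqs. (9)-(11): average intensity, modulation, and wrapped
# phase from an N-step phase-shifting sequence (N = 6 in the experiments).
# `frames` is an (N, H, W) stack of fringe images.
import numpy as np

def phase_shift_decode(frames):
    N = frames.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    s = (frames * np.sin(2 * np.pi * n / N)).sum(axis=0)
    c = (frames * np.cos(2 * np.pi * n / N)).sum(axis=0)
    A = frames.mean(axis=0)               # Eq. (9): average intensity
    B = 2.0 / N * np.sqrt(s**2 + c**2)    # Eq. (10): intensity modulation
    phi = np.arctan2(s, c)                # Eq. (11): wrapped phase in (-pi, pi]
    return A, B, phi
```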
In this paper, a six-step phase shift is mainly used to calculate the wrapped phase, and the phase orders are then calculated by the complementary Gray code method. However, in high-dynamic-range scenes where multi-exposure image fusion is required, as the number of exposure groups increases, each group would need its own projected Gray code patterns for phase unwrapping, which causes image redundancy.
In contrast, the complementary-Gray-code-based phase order sharing method computes the phase orders from a single set of complementary Gray code images captured at one well-exposed setting and applies them to phase unwrapping at all exposure times. This exploits the fact that, for the same static object and the same phase expansion method, the phase orders are essentially identical across exposure times; hence the number of images projected in actual high-dynamic-range measurements can be greatly reduced and the measurement efficiency improved while the correctness of the phase order solution is preserved. Figure 2 illustrates the difference between multi-exposure with the multi-frequency method and multi-exposure with the complementary Gray code method. The area of the yellow rectangle represents the projection time of the multi-frequency method, while the area of the green rectangle represents the time required by our method. In Figure 2, a 3-frequency, 4-step multi-frequency method is taken as the example for comparison. In the multi-frequency method, Fre1 and Fre2 are the low-frequency fringe patterns used to progressively unwrap the high-frequency wrapped phase; in the Gray code order sharing strategy, the GC patterns perform the same function. The figure visually shows the difference in total exposure time between the two methods when the three exposure times used are exactly the same. Our method uses less exposure time in the phase unwrapping stage, which we verify experimentally in Section 3.
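As a concrete count, the sketch below tallies the projected images of the two strategies using the configurations reported later in Section 3 (4 frequencies and 6 steps for the multi-frequency method; 5 complementary Gray codes plus 2 uniform pre-projection images and 1 ambient shot for ours). The function names and the decomposition are our own; the totals match the ME and Ours rows of Table 1.

```python
# Back-of-envelope projected-image counts behind Figure 2 and Table 1,
# where K is the number of exposure groups.
def images_multi_frequency(K, steps=6, freqs=4):
    # every exposure group must repeat all frequencies
    return K * steps * freqs

def images_order_sharing(K, steps=6, gray_codes=5, extra=3):
    # one shared Gray-code set plus the two uniform pre-projection
    # images and one ambient shot (extra = 3)
    return K * steps + gray_codes + extra

print(images_multi_frequency(8))   # 192, the ME row of Table 1
print(images_order_sharing(2))     # 20, the Ours row of Table 1
```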
The principle of the phase order sharing method follows the complementary Gray code method [48], including binarization, decoding the fringe orders $k_1(x, y)$ and $k_2(x, y)$, and unwrapping the phase over segmented phase ranges, as shown in Equation (12), thereby reducing period order misalignment issues.
$$ \Phi(x, y) = \begin{cases} \phi_f(x, y) + 2\pi k_2(x, y), & \phi_f(x, y) \le -\pi/2 \\ \phi_f(x, y) + 2\pi k_1(x, y), & -\pi/2 < \phi_f(x, y) < \pi/2 \\ \phi_f(x, y) + 2\pi \left[ k_2(x, y) - 1 \right], & \phi_f(x, y) \ge \pi/2 \end{cases} \tag{12} $$
where $\phi_f(x, y)$ is the wrapped phase of the new phase-shifting image sequence obtained after multiple-exposure fusion of the high-frequency sinusoidal fringe sequence, and $k_1(x, y)$ and $k_2(x, y)$ are the orders calculated from the Gray code patterns projected and acquired at one well-exposed time, generally chosen as the group with the largest distribution ratio in the adaptive exposure strategy.
From the above principle, it is clear that complementary Gray code images are only used for the phase unwrapping process. Therefore, when using the multi-exposure fusion algorithm, only one set of patterns is needed to complete phase unwrapping, effectively reducing projection time.
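The rule of Equation (12) translates directly into a few array operations; in this illustrative sketch (our naming), k1 and k2 are decoded once from the shared Gray code set and reused for every fused wrapped phase phi_f.

```python
# A sketch of the order-sharing unwrapping rule in Eq. (12). k1 and k2 are
# the fringe orders decoded from the complementary Gray codes captured at a
# single well-exposed time; phi_f is the wrapped phase of the fused images.
import numpy as np

def unwrap_shared_orders(phi_f, k1, k2):
    Phi = phi_f + 2 * np.pi * k1                     # -pi/2 < phi_f < pi/2
    low = phi_f <= -np.pi / 2
    high = phi_f >= np.pi / 2
    Phi[low] = phi_f[low] + 2 * np.pi * k2[low]      # first branch of Eq. (12)
    Phi[high] = phi_f[high] + 2 * np.pi * (k2[high] - 1)  # third branch
    return Phi
```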

3. Experiments and Validation

3.1. System Setup

The system shown in Figure 3 consists of a DLP (digital light processing) projector (Texas Instruments PRO6500-RGB-235, resolution 1920 × 1080, refresh rate 247 fps @ 8 bit) and a camera (Basler acA2040-120um grayscale camera, resolution 2048 × 1536, frame rate 119 fps). Compared with MEMS projection devices [49,50], DLP projectors offer higher precision and easier operation, making them suitable for projecting different grating patterns. The target is placed at a distance of approximately 930 mm from the camera. For the measurement tasks detailed below, we adopted a unidirectional structured light calibration method for system calibration [51]. We chose a six-step phase-shifting strategy with a fringe frequency of 32. The structured light patterns were projected sequentially along the vertical direction, and there were seven complementary Gray code projection patterns (five complementary-Gray-coded patterns and two pure white and black patterns).

3.2. Phase Order Sharing Experiment

In order to verify the effectiveness of the proposed complementary-Gray-code-based phase order sharing strategy, the experiments described in this section were carried out on the constructed high-dynamic-range measurement system. Three phase unwrapping methods were compared as controls: multi-frequency unwrapping, the complementary Gray code method, and the complementary Gray code method based on phase order sharing. The multi-frequency hierarchical unwrapping used sinusoidal fringe frequencies in the ratio 1:2:8:32, with the six-step phase-shifting method used to calculate the wrapped phase; the complementary Gray code group also used the six-step phase-shifting method. The DLP projection exposure time was set to 100 ms for both the 8-bit sinusoidal fringe patterns and the 1-bit Gray code patterns, and the camera exposure time was divided into three groups, short, medium, and long, of 6 ms, 30 ms, and 48 ms, respectively, all shorter than the projection period.
For this section, the outer spherical bearing housing shown in Figure 4a was selected as the measurement object. The object consists of different materials with large variations in surface reflectance: the outer part is an alloy with a rough surface and low reflectivity, while the outer and inner rings at the center are made of a smooth, highly reflective metal, and intensity saturation can be clearly observed in this region. Figure 4b shows the phase-shifting image at the selected camera exposure time of 30 ms, and Figure 4c shows the wrapped phase map solved from it. Figure 4d–f show the projected complementary Gray code image, the phase orders, and the absolute phase map obtained by the final solution.
In order to further verify the effectiveness of the order-sharing complementary Gray code unwrapping, three different phase unwrapping methods were used in the experiments, namely multi-frequency unwrapping, the complementary Gray code method, and the order sharing method, and the phase solutions were observed under different camera exposure times. The specific settings were as follows:
(1) Multi-frequency hierarchical phase unwrapping: the low-frequency fringe images collected at the current exposure time were used to unwrap the wrapped phase and obtain the absolute phase.
(2) Complementary Gray code phase unwrapping: the complementary Gray code images collected at the current exposure time were used to unwrap the wrapped phase and obtain the absolute phase.
(3) Complementary Gray code phase unwrapping based on order sharing: the complementary Gray code images collected at the well-exposed camera exposure time of 30 ms were used to unwrap the wrapped phase and obtain the absolute phase.
The phase solutions of the different methods at the three exposure times, short, medium, and long, are shown in Figure 5. On the left are the acquired images of the high-frequency fringe projection; on the right are the absolute phase maps and line plots of the phase values along a selected row. At the short exposure time of 6 ms, the acquired image is dark overall and essentially free of intensity saturation. As the exposure time lengthens, overexposure on the surface of the measured object becomes more and more obvious, mainly on the inner and outer axial rings in the middle of the workpiece, where reflection is strong. In terms of the absolute phase solution, both hierarchical unwrapping and the order-sharing complementary Gray code unwrapping succeed at the low exposure, while the absolute phase solved by the plain complementary Gray code method shows phase jumps, indicating errors in that case: the exposure time is too short, so binarization fails at the black-to-white transition edges of the acquired Gray code images. At the medium exposure time, all three methods solve the absolute phase correctly. At the long exposure time, intensity saturation occurs in many regions of the surface, and the order-sharing complementary Gray code method still solves the phase better than the traditional complementary Gray code method. This demonstrates that the phase order sharing method not only reduces the number of projected images but also that decoding with a Gray code set captured at an appropriate exposure outperforms decoding with a set that is under- or overexposed.
Meanwhile, comparing the line plots of a row of phase values under different exposures of the same method shows that the solved absolute phase jitters at shorter exposure times under the influence of noise. This indicates that even when no overexposure occurs, a low overall grey level weakens the fringe contrast of the acquired phase-shifting images, which also affects the accuracy of the final absolute phase.

3.3. Experiments on 3D Measurement of Standard Workpieces

In the experiments comparing the measurement accuracy of different methods, two precision-machined standard metal gauge blocks with thicknesses of 2 mm and 5 mm and different surface reflectivities were used. They are denoted gauge 1 and gauge 2 and served as the measurement objects in this section. The two gauges were stacked and fixed with a fixture during measurement, and the accuracy of the different methods was evaluated by measuring the spacing between the upper and lower planes, i.e., the thickness of gauge 2 (5 mm); the fixture is shown in Figure 6a. Figure 6b shows the reflections on the surfaces of the two gauge blocks while a high-frequency sinusoidal fringe image is projected: gauge block 1 is overexposed, while the surface of gauge block 2 is normally exposed at this exposure time. This experiment simulated a high-dynamic-range scenario in real industrial inspection by combining measurement objects with different reflectivities.
The main methods used for this section were as follows:
(1) Using the phase-shifting method to calculate the wrapped phase at a single exposure time and hierarchical multi-frequency unwrapping to obtain the absolute phase, as a control group without exposure fusion, with 4 frequencies and 6 steps for a total of 24 images; this method is denoted SE (single exposure).
(2) Using eight groups of equal-step exposure times (6 ms, 12 ms, 18 ms, ..., 48 ms) with multiple-exposure fusion to obtain the phase-shifting images and multi-frequency hierarchical unwrapping to obtain the absolute phase, which is the HDR processing approach of common commercial structured light cameras, with 24 images per exposure group; this method is denoted ME (multi-exposure).
(3) Using two, four, or eight groups of equal-step exposure times with multiple-exposure fusion to obtain the phase-shifting images and complementary Gray code unwrapping to obtain the absolute phase, with five projected Gray code images in total, plus one high-intensity uniform grey-scale image and one image captured with no projection, and six phase-shifting images per group; this method is denoted GC_i (Gray code, where i is the number of exposure groups).
(4) The method proposed in this paper, which determines the number of exposure groups and the corresponding times by the adaptive exposure fusion method and solves the phase values by the complementary Gray code method based on phase order sharing; there are five projected Gray code images in total, plus one high and one low uniform grey-scale image and one image captured with no projection (for calculating the surface reflection model during pre-projection), and six phase-shifting images per group; this method is denoted Ours.
Figure 7 shows the process of measuring the gauge blocks with the proposed adaptive exposure fusion method. Figure 7a shows the ideal exposure time distribution map $t_m(x, y)$ on the object surface, calculated pixel by pixel after establishing the surface reflectance model; the distribution can be roughly divided into two intervals. Figure 7b shows that the adaptive exposure time calculation determines the optimal number of exposure groups to be two, with exposure times of 6.3 ms and 41.3 ms. Figure 7c shows the two exposure times assigned to different regions of the object surface, which essentially divides the object into two regions corresponding to the different surface reflectivities of the two gauge blocks. Figure 7d–f show the new phase-shifting image obtained after fusion, the wrapped phase solved by the phase-shifting method, and the absolute phase map unwrapped by the complementary Gray code method with phase order sharing, respectively.
The point cloud reconstructions of the gauge block surfaces for the different methods are shown in Figure 8 and Table 1. When only one exposure time was used (the SE method), enlarging the reconstruction of the gauge block 2 surface reveals a corrugated rather than smooth plane, whereas the surface of gauge block 1 is comparatively smooth. This is because, at this camera exposure time, the image of gauge block 1 is not overexposed while that of gauge block 2 is, which causes phase errors and in turn increases the error of the reconstructed point cloud. When the images acquired at eight fixed-step exposure times are fused and the point cloud is computed with multi-frequency hierarchical unwrapping (ME) or complementary Gray codes (GC_8), the surfaces of both gauge blocks are smoother than with a single exposure and closer to the true geometry. However, when the number of exposure groups is reduced to two, the computed surface of gauge block 2 again appears rippled: even though multiple-exposure fusion is used, the exposure times are chosen empirically and are not optimal, so the fused phase-shifting images still suffer from intensity saturation. Figure 8f shows the point cloud calculated by the adaptive exposure fusion method proposed in this paper; it also fuses two sets of exposure times, as in Figure 8e, but the times are calculated adaptively from the surface reflection model, and the surface of the reconstructed point cloud is noticeably flatter.
In order to quantify the measurement accuracy, the height of gauge block 2 was selected as the measured quantity. Points in fixed regions of the upper and lower surfaces were used for plane fitting; the thickness measurement H was taken as the mean of the distances from the points of the upper-surface region to the plane fitted to the lower surface together with the distances from the points of the lower-surface region to the plane fitted to the upper surface, and the standard deviation (std) of these distances was used to characterize the error distribution. These operations were performed with the point cloud processing software CloudCompare 2.11.0.
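We performed these steps in CloudCompare; for completeness, the sketch below reproduces the same evaluation (a least-squares plane fit via SVD and symmetric point-to-plane distances) in NumPy. The function names and the assumption that the cropped regions are given as (M, 3) arrays are ours.

```python
# Thickness evaluation from two cropped point cloud regions.
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns unit normal n and point c."""
    c = points.mean(axis=0)
    # the smallest singular vector of the centered cloud is the plane normal
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return vt[-1], c

def thickness(upper, lower):
    n_lo, c_lo = fit_plane(lower)
    n_up, c_up = fit_plane(upper)
    d_up = np.abs((upper - c_lo) @ n_lo)   # upper points -> lower plane
    d_lo = np.abs((lower - c_up) @ n_up)   # lower points -> upper plane
    d = np.concatenate([d_up, d_lo])
    return d.mean(), d.std()               # measurement H and its std
```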
The measurement results are shown in Figure 9. Without multi-exposure fusion, the single-exposure method SE has the largest measurement error and large data fluctuation. The error is smallest when eight groups of equal-step exposure times are fused and the phase is solved with the order-sharing complementary Gray codes (GC_8). For the proposed method, which fuses the phase-shifting images of two exposure groups, the measured value is 5.053 mm, the error is 0.053 mm, and the standard deviation is 0.091 mm; compared with the GC_2 method, which also uses two groups of images, the error and standard deviation are reduced by 25.4% and 40.1%, respectively, and the accuracy is essentially the same as that of GC_4 using four groups and ME using eight groups. At the same time, the proposed method reduces the number of images used from 31 to 20 compared with GC_4, a reduction of about 35%, and uses about 79% fewer images than the ME method, which does not use phase order sharing. This verifies that the proposed method can efficiently determine the optimal number of exposure groups and times and offers self-adaptation, fewer projected images, and higher accuracy compared with the traditional methods.

4. Conclusions

In this paper, an adaptive exposure time selection method is proposed that calculates the optimal number of exposures and the exposure times by using an improved clustering method to divide regions with different degrees of reflectivity. At the same time, a phase order sharing strategy is adopted in the phase unwrapping stage, and the same set of complementary Gray code patterns is used to calculate the phase orders under different exposure times. The experimental results verify the effectiveness of the phase order sharing method in multi-exposure image fusion, and the comparison experiments show that the measurement error of the proposed method is reduced by 25.4% with almost the same total exposure time.
Some limitations remain. The current calibration strategy adopts the pixel-by-pixel polynomial equation method; although its accuracy is high, the resulting calibration parameters are only applicable to 3D measurements within the calibrated area. In industrial production, the size range of workpieces to be inspected is relatively large, so future work can adopt calibration strategies that cover the entire measurement field. In addition, deep learning [52,53] and transfer learning algorithms [54,55,56] will be applied to HDR structured light 3D reconstruction.

Author Contributions

Conceptualization, methodology, software, validation, and formal analysis, Z.L.; investigation, resources, data curation, and writing—original draft preparation, Z.L. and R.M.; writing—review and editing, visualization, supervision, and project administration, R.M. and S.D.; funding acquisition, R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shenzhen Science and Technology Program, grant number JCYJ20240813112003005.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Acknowledgments

The authors thank all people who offered help for this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Shimizu, Y.; Chen, L.-C.; Kim, D.W.; Chen, X.; Li, X.; Matsukuma, H. An insight into optical metrology in manufacturing. Meas. Sci. Technol. 2021, 32, 042003.
2. Gao, W.; Kim, S.-W.; Bosse, H.; Haitjema, H.; Chen, Y.; Lu, X.; Knapp, W.; Weckenmann, A.; Estler, W.; Kunzmann, H. Measurement technologies for precision positioning. CIRP Ann. 2015, 64, 773–796.
3. Gao, W.; Haitjema, H.; Fang, F.; Leach, R.; Cheung, C.; Savio, E.; Linares, J.-M. On-machine and in-process surface metrology for precision manufacturing. CIRP Ann. 2019, 68, 843–866.
4. Xu, B.; Jia, Z.; Li, X.; Chen, Y.-L.; Shimizu, Y.; Ito, S.; Gao, W. Surface form metrology of micro-optics. In Proceedings of the International Conference on Optics in Precision Engineering and Nanotechnology (icOPEN2013), Singapore, 9–12 April 2013; p. 876902.
5. Li, Y.; Li, Z.; Chen, W.; Zhang, C.; Wang, H.; Wang, X.; Gui, W.; Gao, W.; Liang, X.; Li, X. Reliable 3D Reconstruction with Single-Shot Digital Grating and Physical Model-Supervised Machine Learning. IEEE Trans. Instrum. Meas. 2025, 74, 5505413.
6. Lei, M.; Fan, J.; Shao, L.; Song, H.; Xiao, D.; Ai, D.; Fu, T.; Lin, Y.; Gu, Y.; Yang, J. Double-Shot 3D Shape Measurement with a Dual-Branch Network for Structured Light Projection Profilometry. IEEE Trans. Circuits Syst. Video Technol. 2024, 35, 3893–3906.
7. Ye, X.C.; Fan, X.; Zhang, M.L.; Xu, R.; Zhong, W. Unsupervised Monocular Depth Estimation via Recursive Stereo Distillation. IEEE Trans. Image Process. 2021, 30, 4492–4504.
8. Han, M.; Zhang, C.; Zhang, Z.; Li, X. Review of MEMS vibration-mirror-based 3D reconstruction of structured light. Opt. Precis. Eng. 2025, 33, 1065–1090.
9. Chen, R.; Li, Y.; Xue, G.; Tao, Y.; Li, X. Laser triangulation measurement system with Scheimpflug calibration based on the Monte Carlo optimization strategy. Opt. Express 2022, 30, 25290–25307.
10. Li, J.X.; Zhou, Q.; Li, X.H.; Chen, R.M.; Ni, K. An Improved Low-Noise Processing Methodology Combined with PCL for Industry Inspection Based on Laser Line Scanner. Sensors 2019, 19, 3398.
11. Bai, Y.; Zhang, Z.; Fu, S.; Zhao, H.; Ni, Y.; Gao, N.; Meng, Z.; Yang, Z.; Zhang, G.; Yin, W. Recent Progress of Full-Field Three-Dimensional Shape Measurement Based on Phase Information. Nanomanufacturing Metrol. 2024, 7, 9.
12. Li, Z.; Li, X. Improving Generalization in Fringe Projection Profilometry Networks through Physics-Informed Data Augmentation. In Proceedings of the 2025 4th International Symposium on Computer Applications and Information Technology (ISCAIT), Xi'an, China, 21–23 March 2025; pp. 420–423.
13. Gao, F.; Xu, Y.; Li, Y.; Zhong, W.; Yu, Y.; Li, D.; Jiang, X. In-Situ Form Metrology of Structured Composite Surfaces Using Hybrid Structured-Light Measurement with a Novel Calibration Method. Nanomanufacturing Metrol. 2024, 7, 23.
14. Zuo, C.; Feng, S.J.; Huang, L.; Tao, T.Y.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59.
15. Zuo, C.; Huang, L.; Zhang, M.L.; Chen, Q.; Asundi, A. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2016, 85, 84–103.
16. Zhang, Q.C.; Su, X.Y.; Xiang, L.Q.; Sun, X.Z. 3-D shape measurement based on complementary Gray-code light. Opt. Lasers Eng. 2012, 50, 574–579.
17. Bai, J.; Wang, Y.; Wang, X.; Zhou, Q.; Ni, K.; Li, X. Three-Probe Error Separation with Chromatic Confocal Sensors for Roundness Measurement. Nanomanufacturing Metrol. 2021, 4, 247–255.
18. Lei, F.; Han, M.; Jiang, H.; Wang, X.; Li, X. A phase-angle inspired calibration strategy based on MEMS projector for 3D reconstruction with markedly reduced calibration images and parameters. Opt. Lasers Eng. 2024, 176, 108078.
19. Li, Y.; Li, Z.; Liang, X.; Huang, H.; Qian, X.; Feng, F.; Zhang, C.; Wang, X.; Gui, W.; Li, X. Global phase accuracy enhancement of structured light system calibration and 3D reconstruction by overcoming inevitable unsatisfactory intensity modulation. Measurement 2024, 236, 114952.
20. Han, M.; Xing, Y.B.; Wang, X.H.; Li, X.H. Projection superimposition for the generation of high-resolution digital grating. Opt. Lett. 2024, 49, 4473–4476.
21. Li, K.; Liang, Y.; Lin, Y.-S. MEMS-based meta-emitter with actively tunable radiation power characteristic. Discov. Nano 2024, 19, 133.
22. Han, M.; Lei, F.; Shi, W.; Lu, S.; Li, X. Uniaxial MEMS-based 3D reconstruction using pixel refinement. Opt. Express 2023, 31, 536–554.
23. Song, J.; Liu, K.; Sowmya, A.; Sun, C. Super-resolution phase retrieval network for single-pattern structured light 3D imaging. IEEE Trans. Image Process. 2022, 32, 537–549.
24. Feng, S.J.; Zuo, C.; Zhang, L.; Tao, T.Y.; Hu, Y.; Yin, W.; Qian, J.M.; Chen, Q. Calibration of fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2021, 143, 106622.
25. Lei, F.X.; Ma, R.J.; Li, X.H. Use of Phase-Angle Model for Full-Field 3D Reconstruction under Efficient Local Calibration. Sensors 2024, 24, 2581.
26. Yu, J.; Da, F.; Li, W. Calibration for camera–projector pairs using spheres. IEEE Trans. Image Process. 2020, 30, 783–793.
27. Zappa, E.; Busca, G. Static and dynamic features of Fourier transform profilometry: A review. Opt. Lasers Eng. 2012, 50, 1140–1151.
28. Zuo, C.; Qian, J.M.; Feng, S.J.; Yin, W.; Li, Y.X.; Fan, P.F.; Han, J.; Qian, K.M.; Chen, Q. Deep learning in optical metrology: A review. Light-Sci. Appl. 2022, 11, 39.
29. Li, Y.; Li, Z.; Zhang, C.; Han, M.; Lei, F.; Liang, X.; Wang, X.; Gui, W.; Li, X. Deep Learning-Driven One-Shot Dual-View 3-D Reconstruction for Dual-Projector System. IEEE Trans. Instrum. Meas. 2024, 73, 5021314.
30. Wang, H.; Lu, Z.Y.; Huang, Z.Y.; Li, Y.M.; Zhang, C.B.; Qian, X.; Wang, X.H.; Gui, W.H.; Liang, X.J.; Li, X.H. A High-Accuracy and Reliable End-to-End Phase Calculation Network and Its Demonstration in High Dynamic Range 3D Reconstruction. Nanomanufacturing Metrol. 2025, 8, 5.
31. Wei, L.; Xiangchao, Z.; Yunuo, C.; Ting, C.; Peide, Y.; Min, X.; Xiangqian, J. Deterministic form-position deflectometric measurement of monolithic multi-freeform optical structures via Bayesian multisensor fusion. Light Adv. Manuf. 2025, 6, 29.
32. Zhang, P.; Zhong, K.; Zhongwei, L.; Xiaobo, J.; Bin, L.; Congjun, W.; Yusheng, S. High dynamic range 3D measurement based on structured light: A review. J. Adv. Manuf. Sci. Technol. 2021, 1, 2021004.
33. Zhao, X.Y.; Yu, T.C.; Liang, D.; He, Z.X. A review on 3D measurement of highly reflective objects using structured light projection. Int. J. Adv. Manuf. Technol. 2024, 132, 4205–4222.
34. Feng, S.J.; Zhang, Y.Z.; Chen, Q.; Zuo, C.; Li, R.B.; Shen, G.C. General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique. Opt. Lasers Eng. 2014, 59, 56–71.
35. Rao, L.; Da, F.P. High dynamic range 3D shape determination based on automatic exposure selection. J. Vis. Commun. Image Represent. 2018, 50, 217–226.
36. Ekstrand, L.; Zhang, S. Autoexposure for three-dimensional shape measurement using a digital-light-processing projector. Opt. Eng. 2011, 50, 123603.
37. Li, Y.; Chen, W.; Li, Z.; Zhang, C.; Wang, X.; Gui, W.; Gao, W.; Liang, X.; Li, X. SL3D-BF: A Real-World Structured Light 3D Dataset with Background-to-Foreground Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2025.
38. Zhong, K.; Li, Z.; Zhou, X.; Li, Y.; Shi, Y.; Wang, C. Enhanced phase measurement profilometry for industrial 3D inspection automation. Int. J. Adv. Manuf. Technol. 2015, 76, 1563–1574.
39. Zhang, S. Rapid and automatic optimal exposure control for digital fringe projection technique. Opt. Lasers Eng. 2020, 128, 106029.
40. Lin, H.; Gao, J.; Mei, Q.; He, Y.; Liu, J.; Wang, X. Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement. Opt. Express 2016, 24, 7703–7718.
41. Jiang, C.F.; Bell, T.; Zhang, S. High dynamic range real-time 3D shape measurement. Opt. Express 2016, 24, 7337–7346.
42. Shim, I.; Lee, J.-Y.; Kweon, I.S. Auto-adjusting Camera Exposure for Outdoor Robotics using Gradient Information. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, USA, 14–18 September 2014; pp. 1011–1017.
43. Lu, H.; Zhang, H.; Yang, S.; Zheng, Z. Camera Parameters Auto-Adjusting Technique for Robust Robot Vision. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–8 May 2010; pp. 1518–1523.
44. Liu, X.; Chen, W.; Madhusudanan, H.; Ge, J.; Ru, C.; Sun, Y. Optical Measurement of Highly Reflective Surfaces From a Single Exposure. IEEE Trans. Ind. Inform. 2021, 17, 1882–1891.
45. Li, J.L.; Hassebrook, L.G.; Guan, C. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2003, 20, 106–115.
46. Salahieh, B.; Chen, Z.Y.; Rodriguez, J.J.; Liang, R.G. Multi-polarization fringe projection imaging for high dynamic range objects. Opt. Express 2014, 22, 10064–10071.
47. Pan, B.; Kemao, Q.; Huang, L.; Asundi, A. Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry. Opt. Lett. 2009, 34, 416–418.
48. Wu, Z.; Zuo, C.; Guo, W.; Tao, T.; Zhang, Q. High-speed three-dimensional shape measurement based on cyclic complementary Gray-code light. Opt. Express 2019, 27, 1283–1297.
49. Han, M.; Shi, W.J.; Lu, S.H.; Lei, F.X.; Li, Y.M.; Wang, X.H.; Li, X.H. Internal-External Layered Phase Shifting for Phase Retrieval. IEEE Trans. Instrum. Meas. 2024, 73, 4501013.
50. Han, M.; Jiang, H.; Lei, F.X.; Xing, Y.B.; Wang, X.H.; Li, X.H. Modeling window smoothing effect hidden in fringe projection profilometry. Measurement 2025, 242, 115852.
51. Zhang, S. Flexible and high-accuracy method for uni-directional structured light system calibration. Opt. Lasers Eng. 2021, 143, 106637.
52. Xue, R.; Hooshmand, H.; Isa, M.; Piano, S.; Leach, R.K. Applying machine learning to optical metrology: A review. Meas. Sci. Technol. 2024, 36, 012002.
53. Liu, H.; Yan, N.; Shao, B.; Yuan, S.; Zhang, X. Deep learning in fringe projection: A review. Neurocomputing 2024, 581, 127493.
54. Chen, B.; Li, Q.; Ma, R.; Qian, X.; Wang, X.; Li, X. Towards the generalization of time series classification: A feature-level style transfer and multi-source transfer learning perspective. Knowl. Based Syst. 2024, 299, 112057.
55. Li, C.; Pan, X.; Zhu, P.; Zhu, S.; Liao, C.; Tian, H.; Qian, X.; Li, X.; Wang, X.; Li, X. Style Adaptation module: Enhancing detector robustness to inter-manufacturer variability in surface defect detection. Comput. Ind. 2024, 157, 104084.
56. Li, C.; Yan, H.; Qian, X.; Zhu, S.; Zhu, P.; Liao, C.; Tian, H.; Li, X.; Wang, X.; Li, X. A domain adaptation YOLOv5 model for industrial defect inspection. Measurement 2023, 213, 112725.
Figure 1. The pipeline of our adaptive exposure time selection method. (a) Pixel-projected intensity response model. (b) The clustered adaptive exposure time selection flowchart.
Figure 2. Comparison of different phase unwrapping strategies.
Figure 3. Diagram of our structured light system.
Figure 4. Raw image acquired with a camera exposure time of 30 ms and the phase solution results. (a) Original object image; (b) phase-shifting image at the selected camera exposure time of 30 ms; (c) wrapped phase map; (d) projected complementary Gray code image; (e) phase order image; (f) absolute phase map.
Figure 5. Comparison of phase solving results under different methods.
Figure 6. Measurement of the metal gauge blocks. (a) The setup of gauge 1 and gauge 2; (b) the reflection of gauge 1 and gauge 2.
Figure 7. Adaptive exposure fusion method used to calculate phase results. (a) The ideal exposure time distribution map tm(x, y) on the object surface calculated pixel by pixel after the establishment of the surface reflectance model; (b) the optimal number of exposure groups is determined to be 2 after the calculation of the adaptive exposure time method; (c) 2 groups of exposure time adapted to different regions of the object surface; (d) the new phase-shifting image obtained after fusion; (e) the wrapped phase obtained by the phase-shifting method for solving the phase; (f) the absolute phase map unfolded by the complementary Gray code method using the phase order sharing method.
Figure 8. Surface point cloud results of standard parts reconstructed by different methods.
Figure 9. Block thickness measurement results for different methods.
Table 1. The comparison of measurement accuracy among different methods. The true value of the gauge block height is 5.000 mm.

Method | Projected Images N_total | Total Exposure Time T_total (ms) | Result H_i (mm) | Standard Deviation std (mm)
SE     | 24  | 240    | 5.187 | 0.179
ME     | 192 | 5184   | 5.064 | 0.110
GC_8   | 55  | 1380   | 5.032 | 0.088
GC_4   | 31  | 660    | 5.053 | 0.094
GC_2   | 19  | 336    | 5.071 | 0.152
Ours   | 20  | 381.6  | 5.053 | 0.091
