Article

Adaptive Weighted Data Fusion for Line Structured Light and Photometric Stereo Measurement System

School of Mechanical Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(13), 4187; https://doi.org/10.3390/s24134187
Submission received: 26 May 2024 / Revised: 19 June 2024 / Accepted: 24 June 2024 / Published: 27 June 2024
(This article belongs to the Section Optical Sensors)

Abstract

Line structured light (LSL) measurement systems can obtain high-accuracy profiles, but the overall clarity depends greatly on the sampling interval of the scanning process. Photometric stereo (PS), on the other hand, is sensitive to tiny features but has poor geometrical accuracy. Cooperative measurement with these two methods is an effective way to obtain results that are both precise and clear. In this paper, an LSL-PS cooperative measurement system is presented. The calibration methods used in the LSL and PS measurement systems are given. Then, a data fusion algorithm with adaptive weights is proposed, in which an error function containing the 3D point cloud matching error and the normal vector error is established. Weights based on the angles of adjacent normal vectors are also added to the error function. Afterward, the fusion result can be obtained by solving a system of linear equations. The experimental results show that the proposed method combines the advantages of the LSL and PS methods, and the 3D reconstruction results have the merits of high accuracy and high clarity.

1. Introduction

Line structured light (LSL) sensors have the advantages of simple structure, high accuracy, and low cost. A typical LSL sensor consists of a camera, a laser line projector, and a frame that connects them together [1,2]. Currently, they are widely used in quality evaluation [3], geometric measurement [4,5], visual tracking [6], railway inspection [7], etc. In the measuring process, a laser line is projected onto the object, and the camera captures the perturbed stripe image that carries the profile information. Camera coordinates of each point on this profile can be solved with camera intrinsic parameters and the laser plane equation [8].
Photometric stereo (PS) measurement has the advantages of fast measurement speed, simple structure, and high clarity. A classical PS system consists of a camera and several point light sources [9]. It has been applied in defect detection [10,11,12,13,14], face recognition [15,16], and cultural heritage digitization [17]. The PS measurement is performed by taking images of the object under each light source in turn. The surface normal vector of the object can be calculated from the resulting intensity variations, and the 3D result is then obtained by gradient integration [18,19].
LSL and PS are two measurement techniques that share the advantages of low cost, a high degree of automation, and simple operation. Although LSL can provide 3D geometrical information with high accuracy, its clarity is strongly affected by the noise introduced when extracting the center of the laser stripe and by the sampling interval of the scanning. On the contrary, PS is sensitive to the details on the object, but its measurement accuracy is low due to the noise accumulated during gradient integration. Therefore, how to obtain high-precision and high-clarity results efficiently is a key issue in 3D measurement research. Cooperative measurement with LSL and PS may be a solution.
Based on the above considerations, Nehab et al. [20] fused the position information obtained from a depth scanner with the normal vectors computed by PS, combining the advantages of both measurements. Haque et al. [21] added Laplace smoothing terms to the optimized surface equations, aiming to make the result smoother at the edges, but the reconstructed surfaces had holes. Zhang et al. [22] constructed an optimized surface equation for data fusion where a Gaussian filter was designed by considering both the neighborhood and depth values, while it required a complex iterative process and was time consuming.
Okatani et al. [23] solved the optimization problem efficiently by using recurrent belief propagation, with the limitation that accurate results can only be obtained when an appropriate confidence level is selected. Bruno et al. [24] proposed a method combining coded structured light and PS for the 3D reconstruction of underwater objects, but the image acquisition time is very long and further improvement is needed for practical applications. Massot [25] and Li [26] also combined structured light and PS for the 3D reconstruction of underwater objects. Riegler et al. [27] combined photometric loss and geometric loss to train a model in a self-supervised way, but the accuracy of their reconstruction results was not high. Lu et al. [28] proposed a multiresolution surface reconstruction scheme that combines low-resolution geometric images with PS data, but the iterative process in their algorithm takes a long time. Li et al. [29] proposed a novel local feature descriptor to fuse neighborhood point cloud coordinates and normal vectors; the accuracy of the results is improved, but the computation time is long, especially when the number of point clouds is large. Antensteiner et al. [30] proposed a fusion method based on the total generalized variance to improve the accuracy, but its computational speed still needs to be improved. Hao et al. [31] corrected the deviation of the PS by fitting an error surface using a 3D point cloud of structured light; however, the PS depth is still obtained by integration, so noise is also accumulated.
In this paper, we propose an adaptive weighted fusion algorithm based on the angles of adjacent normal vectors. First, the PS method is used to calculate the surface normal vectors, which are weighted according to the normal vector angles of the neighboring points. Next, the error function of the fused surface is established, which consists of the error in the 3D point cloud and the error in the normal vector. The fusion result is obtained by assembling a sparse matrix and solving the resulting linear system. Our algorithm combines the advantages of the LSL and PS methods and can achieve a high-accuracy, high-clarity result.

2. Measurement Principle

The LSL-PS system measurement principle is shown in Figure 1. The LSL sensor consists of a camera and a laser line projector. The laser plane is emitted by the laser projector and intersects with the part to be measured. A perturbed laser stripe that carries the geometrical information of the profile can be captured by the camera. Since the relative position between the camera and the laser line projector is fixed, the coordinates of the points on the intersecting profile can be solved using pre-calibrated sensor parameters. As the part moves, the laser plane intersects the part at different positions and a series of intersecting profiles can be calculated. By combining these profiles with the translation distances, the 3D point cloud of the part can be obtained.
The PS sensor uses the same camera and twelve spot light sources (LEDs). The light sources are arranged at equal intervals on a circular plate. Each LED is switched on/off in turn, and the camera captures one image under the corresponding spot light to complete the PS measurement. The surface normal vector is computed from the pre-calibrated sensor parameters, and the depth value is then calculated from the normal vector.
The LSL and PS measurements are carried out sequentially. The 3D measurement results from the LSL are transformed into the pixel coordinate system and matched with the PS results. The LSL data are then interpolated at the pixel coordinates of the PS results so that the two data sets contain the same number of points. The final step is to fuse the 3D point cloud of the LSL with the normal vectors of the PS to achieve a high-precision and high-clarity result.

3. Line Structured Light Measurement

Suppose that P has the camera coordinates of (xc, yc, zc) and the corresponding world coordinates of (Xw, Yw, Zw), then
$[x_c \;\; y_c \;\; z_c]^T = R\,[X_w \;\; Y_w \;\; Z_w]^T + T,$
where R is the rotation matrix and T is the translation vector. Let p (x, y) be the projection point of P on the normalized image plane with coordinates of
$(x, y) = \left( f_x \cdot x_c / z_c, \;\; f_y \cdot y_c / z_c \right),$
The projected coordinates after considering radial and tangential distortions are
$\begin{cases} x = x\left(1 + k_1 r^2 + k_2 r^4\right) + 2 p_1 x y + p_2\left(r^2 + 2 x^2\right) \\ y = y\left(1 + k_1 r^2 + k_2 r^4\right) + p_1\left(r^2 + 2 y^2\right) + 2 p_2 x y \end{cases},$
where k1, k2, p1, and p2 are the distortion coefficients and r^2 = x^2 + y^2. The pixel coordinates of P can be derived from Equation (4):
$[u \;\; v \;\; 1]^T = A\,[x \;\; y \;\; 1]^T,$
$A = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix},$
where A is the camera intrinsic matrix, fx and fy are the focal lengths, and u0 and v0 are the coordinates of the camera principal point. Camera coordinates of the points on the laser stripe can be obtained by taking images of a planar target and the corresponding laser stripe at different positions [2]. The point cloud of the laser stripe is fitted by the random sample consensus (RANSAC) algorithm [32] to obtain a more accurate laser plane equation, as shown in Equation (6).
$B_1 x_c + B_2 y_c + B_3 z_c + B_4 = 0,$
where B1, B2, B3, and B4 are the coefficients of the laser plane equation. For any laser stripe image, the pixel coordinates of the stripe center are extracted using the improved gray gravity method [33]. The normalized image plane coordinates after distortion correction are computed from Equations (3) and (4) in turn. Then, the camera coordinates of the cross-section profile are obtained from Equations (2) and (6). The motion direction is obtained by taking two images of the target at different translation positions [2].
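For illustration, the following Python sketch (not the authors' code) back-projects one stripe-center pixel into the camera frame by inverting Equations (4) and (5), approximately removing the distortion of Equation (3) with a fixed-point iteration, and intersecting the viewing ray with the laser plane of Equation (6). It assumes the standard pinhole convention in which the normalized coordinates satisfy x = xc/zc and y = yc/zc; the function name and the number of iterations are arbitrary choices.

```python
import numpy as np

def pixel_to_camera_point(u, v, fx, fy, u0, v0, dist, plane, n_iter=5):
    """Back-project a laser-stripe pixel (u, v) onto the calibrated laser plane.

    dist  = (k1, k2, p1, p2) -- distortion coefficients of Eq. (3)
    plane = (B1, B2, B3, B4) -- laser plane of Eq. (6): B1*xc + B2*yc + B3*zc + B4 = 0
    Returns the 3D point (xc, yc, zc) in the camera frame.
    """
    k1, k2, p1, p2 = dist
    B1, B2, B3, B4 = plane

    # Pixel -> distorted normalized coordinates (inverse of Eqs. (4)/(5)).
    xd, yd = (u - u0) / fx, (v - v0) / fy

    # Undo the distortion of Eq. (3) by fixed-point iteration (a common approximation).
    x, y = xd, yd
    for _ in range(n_iter):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x, y = (xd - dx) / radial, (yd - dy) / radial

    # The ray (x, y, 1) * zc intersected with the laser plane gives the depth zc.
    zc = -B4 / (B1 * x + B2 * y + B3)
    return np.array([x * zc, y * zc, zc])
```

Applying such a function to every extracted stripe center of one image yields one cross-section profile; stacking the profiles obtained at the different translation positions gives the LSL point cloud.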

4. Photometric Stereo Measurements

A ceramic ball is used to calibrate the direction of each spot light in turn. Let P be the highlight point on the sphere captured by the camera, and H be the surface normal vector at P, as shown in Figure 2a. The image of point P and the corresponding cross-section are shown in Figure 2b. O1 is the pixel coordinate of the sphere center, and the radius of this cross-section is r = ||O1P||. The surface normal vector at P is
$H = \left( u_p - u_c, \;\; v_p - v_c, \;\; \sqrt{R^2 - r^2} \right),$
where R is the radius of the ceramic sphere, and (up, vp) and (uc, vc) are the pixel coordinates of P and the sphere center O1, respectively. The camera view direction is V, and the light source direction can then be obtained from Figure 2c:
$L = 2\,(H \cdot V)\,H - V,$
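As a small illustration of Equations (7) and (8), the sketch below estimates one light source direction from the sphere highlight. It assumes the viewing direction V points along the optical axis and that H is normalized before the reflection; the function name and argument layout are hypothetical.

```python
import numpy as np

def light_direction(up, vp, uc, vc, r, R, V=np.array([0.0, 0.0, 1.0])):
    """Estimate a spot-light direction from the highlight on a calibration sphere.

    (up, vp): highlight pixel, (uc, vc): sphere-centre pixel O1,
    r = ||O1 P|| in pixels, R = sphere radius in pixels (Eq. (7)).
    V is the camera viewing direction (assumed here to lie along the optical axis).
    """
    # Surface normal at the highlight point, Eq. (7), then normalised.
    H = np.array([up - uc, vp - vc, np.sqrt(max(R * R - r * r, 0.0))])
    H = H / np.linalg.norm(H)

    # Mirror reflection of the view direction about H, Eq. (8).
    L = 2.0 * np.dot(H, V) * H - V
    return L / np.linalg.norm(L)
```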
From the Lambert reflection model, the luminance value I at any point on the surface can be expressed as
I = ρ N · L ,
where ρ is the reflectivity. N is the surface normal vector and can be expressed by
$N = \left( L^T L \right)^{-1} L^T I / \rho,$
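Equation (10) is a per-pixel linear least-squares problem. A minimal sketch, assuming the images are stacked into a (K, H, W) array and that ρ is recovered as the norm of ρN, is given below; it is illustrative rather than the authors' implementation.

```python
import numpy as np

def estimate_normals(images, L):
    """Per-pixel least-squares solution of I = rho * (N . L), Eqs. (9)-(10).

    images: (K, H, W) stack of grey images, one per LED.
    L:      (K, 3) unit light directions from the calibration step.
    Returns unit normals (H, W, 3) and albedo rho (H, W).
    """
    K, Hh, Ww = images.shape
    I = images.reshape(K, -1)                    # (K, H*W)
    G = np.linalg.lstsq(L, I, rcond=None)[0]     # solves L @ G = I; G = rho * N, shape (3, H*W)
    rho = np.linalg.norm(G, axis=0)              # albedo
    N = G / np.maximum(rho, 1e-12)               # unit normals
    return N.T.reshape(Hh, Ww, 3), rho.reshape(Hh, Ww)
```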
Based on the normal vectors, the gradients qx and qy in x and y directions can be calculated. The depth Z is obtained by use of the Fourier basis function method, as shown in Equation (11).
$Z = \mathcal{F}^{-1}\left[ \dfrac{-j\left( \dfrac{2\pi u}{N}\,\mathcal{F}\{q_x\} + \dfrac{2\pi v}{M}\,\mathcal{F}\{q_y\} \right)}{\left( \dfrac{2\pi u}{N} \right)^2 + \left( \dfrac{2\pi v}{M} \right)^2} \right],$
where F and F−1 are the two-dimensional fast Fourier transform and its inverse transform, u and v represent the frequency indexes in the row and column directions, and M and N are the number of rows and columns of the image, respectively.
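The integration of Equation (11) can be sketched with the two-dimensional FFT as follows; the −j sign convention and the zeroing of the undefined zero-frequency term (the unknown constant offset) are assumptions of this illustration.

```python
import numpy as np

def integrate_gradients(qx, qy):
    """Depth from gradients with the Fourier basis functions of Eq. (11)."""
    M, N = qx.shape                                # rows, columns
    u = np.fft.fftfreq(N)[None, :] * 2 * np.pi     # 2*pi*u/N, column-direction frequencies
    v = np.fft.fftfreq(M)[:, None] * 2 * np.pi     # 2*pi*v/M, row-direction frequencies
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                              # avoid division by zero at the DC term
    Z_hat = -1j * (u * np.fft.fft2(qx) + v * np.fft.fft2(qy)) / denom
    Z_hat[0, 0] = 0.0                              # the constant depth offset is unknown
    return np.real(np.fft.ifft2(Z_hat))
```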

5. Adaptive Weighted Fusion

A flowchart of the adaptive weighted fusion method is shown in Figure 3. First, the LSL and PS sensors are calibrated. Next, the 3D point cloud obtained from the LSL is fused with the normal vectors obtained from the PS. The fusion is performed by minimizing an error function, which consists of a depth error and a surface normal vector error, to obtain the optimized surface. Adaptive weights are calculated from the angles between adjacent normal vectors. With this method, the depth value no longer needs to be computed by integrating the surface normal vectors.
The fusion principle is shown in Figure 4. ZGT is the true depth, and ZPS is the PS result. Ni,jGT and Ni,jPS are the corresponding normal vectors. ZLSLS is the profile from the LSL, ZOPT is the optimized depth, and Pi,j is the point on it at pixel position (u, v); di is the distance from Pi,j to the corresponding point of the ZLSLS profile in the vertical direction; Ti,jx and Ti,jy are the tangent vectors of ZOPT in the x and y directions at pixel (u, v). Through the fusion, the optimal depth value can be calculated for each pixel (u, v).
The 3D coordinates of Pi,j can then be expressed as
$P_{i,j}(u, v) = \left[ \dfrac{u}{f_x} Z^{OPT}_{i,j}(u, v) \;\; \dfrac{v}{f_y} Z^{OPT}_{i,j}(u, v) \;\; Z^{OPT}_{i,j}(u, v) \right]^T,$
where Zi,jOPT(u, v) is the depth of the surface point at (u, v), and fx and fy are the camera focal lengths. Based on the error between the LSL measured profile and the optimized profile in the depth direction, the depth error function is constructed as
$E_p = \dfrac{1}{M \times N} \sum_{i=1}^{M \times N} \left\| \mu_{i,j} Z^{OPT}_{i,j} - \mu_{i,j} Z^{LSLS}_{i,j} \right\|^2,$
where µi,j = [−u/fx, −v/fy, 1]T and Zi,jLSLS is the depth value obtained from the LSL measurement.
Normal vectors change dramatically within detail-rich regions and only slightly within flat regions. Thus, the weights of the pixel points can be assigned according to the normal vector angles between the current pixel and its neighbors. The computation principle for the weights is shown in Figure 5, where Figure 5a shows the neighborhood of normal vectors and Figure 5b shows the angle change of adjacent normal vectors.
The normal vector Ni,jPS at point Pi,j can be expressed as
$N^{PS}_{i,j} = \left[ N_x, \; N_y, \; N_z \right]^T,$
Assuming that point Pi,j lies in row i and column j of the image, the normal vector at Pi,j can be represented by Equation (15).
$V_{i,j} = \left[ N_x(i, j), \; N_y(i, j), \; N_z(i, j) \right]^T,$
The normal vectors of the neighboring points to the left and right of point Pi,j are Vi,j−1 and Vi,j+1, and the normal vectors of the neighboring points above and below point Pi,j are Vi−1,j and Vi+1,j. The angles between the normal vector at point Pi,j and those of its neighbors in the X and Y directions can then be calculated as
$\theta^x_{i,j} = \arccos\dfrac{V_{i,j} \cdot V_{i,j-1}}{\left\| V_{i,j} \right\| \left\| V_{i,j-1} \right\|} + \arccos\dfrac{V_{i,j} \cdot V_{i,j+1}}{\left\| V_{i,j} \right\| \left\| V_{i,j+1} \right\|}$
$\theta^y_{i,j} = \arccos\dfrac{V_{i,j} \cdot V_{i-1,j}}{\left\| V_{i,j} \right\| \left\| V_{i-1,j} \right\|} + \arccos\dfrac{V_{i,j} \cdot V_{i+1,j}}{\left\| V_{i,j} \right\| \left\| V_{i+1,j} \right\|},$
After calculating the angles between the normal vector at point Pi,j and those of its neighbors, a weight function based on the magnitudes of these angles is obtained as
$W_{i,j} = \theta^x_{i,j} + \theta^y_{i,j},$
The weighted normal vector Ni,jAPS obtained by applying this weight function is
$N^{APS}_{i,j} = N^{PS}_{i,j} \cdot W_{i,j},$
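The weight computation of Equations (15)–(18) can be sketched as follows; replicating the border normals when forming the four neighbors is an assumption, since the paper does not state how border pixels are handled, and the function name is hypothetical.

```python
import numpy as np

def weighted_normals(N):
    """Adaptive weights from adjacent normal-vector angles, Eqs. (15)-(18).

    N: (H, W, 3) normal map from photometric stereo.
    Returns the weighted normals N_APS = N * W and the weight map W.
    """
    def angle(a, b):
        # Angle between two normal fields, as in Eq. (16).
        cos = np.sum(a * b, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # Neighbours obtained by edge-replicating shifts in the four directions.
    left  = np.pad(N, ((0, 0), (1, 0), (0, 0)), mode='edge')[:, :-1]
    right = np.pad(N, ((0, 0), (0, 1), (0, 0)), mode='edge')[:, 1:]
    up    = np.pad(N, ((1, 0), (0, 0), (0, 0)), mode='edge')[:-1, :]
    down  = np.pad(N, ((0, 1), (0, 0), (0, 0)), mode='edge')[1:, :]

    theta_x = angle(N, left) + angle(N, right)   # Eq. (16), X direction
    theta_y = angle(N, up) + angle(N, down)      # Eq. (16), Y direction
    W = theta_x + theta_y                        # Eq. (17)
    return N * W[..., None], W                   # Eq. (18)
```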
From Equation (12), the tangent vectors at Pi,j are
$T^x_{i,j} = \dfrac{\partial P_{i,j}}{\partial u} = \left[ \dfrac{1}{f_x}\left( u \dfrac{\partial Z^{OPT}_{i,j}}{\partial u} + Z^{OPT}_{i,j} \right) \;\; \dfrac{1}{f_y}\, v \dfrac{\partial Z^{OPT}_{i,j}}{\partial u} \;\; \dfrac{\partial Z^{OPT}_{i,j}}{\partial u} \right]^T$
$T^y_{i,j} = \dfrac{\partial P_{i,j}}{\partial v} = \left[ \dfrac{1}{f_x}\, u \dfrac{\partial Z^{OPT}_{i,j}}{\partial v} \;\; \dfrac{1}{f_y}\left( v \dfrac{\partial Z^{OPT}_{i,j}}{\partial v} + Z^{OPT}_{i,j} \right) \;\; \dfrac{\partial Z^{OPT}_{i,j}}{\partial v} \right]^T,$
In the ideal case, the tangent vector is perpendicular to the normal vector, so its projection along the normal vector direction is zero. Based on the relationship between the normal vector of the PS and the tangent vector of the ideal result, the normal vector error function is constructed as
$E_n = \dfrac{1}{M \times N} \sum_{i=1}^{M \times N} \left[ \left( T^x_{i,j}(P_{i,j}) \cdot N^{APS}_{i,j} \right)^2 + \left( T^y_{i,j}(P_{i,j}) \cdot N^{APS}_{i,j} \right)^2 \right],$
Finally, by combining the depth error function of the LSL and the normal vector error function of the PS, the fusion can be achieved by minimization of the error function, and is expressed by
$Z^{OPT} = \arg\min_{Z^{OPT}} \left[ \lambda E_p + (1 - \lambda) E_n \right],$
where λ ∈ [0, 1] controls the relative influence of the point cloud values and the normal vectors on the fused result: the smaller λ is, the more the fusion result is influenced by the normal vectors; the larger λ is, the more it is influenced by the 3D point cloud of the LSL.
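To make the procedure concrete, the sketch below assembles the minimization of Equation (20) as a sparse linear least-squares problem in the unknown depth map, as described in this section: the depth term follows Equation (13) (with the µi,j scaling omitted for brevity), the normal terms follow Equation (19) with forward differences in place of the partial derivatives, and pixel coordinates are taken relative to the principal point (u0, v0). These discretization choices and the use of SciPy's LSQR solver are illustrative assumptions, not necessarily the authors' implementation.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def fuse_depth(Z_lsl, N_aps, fx, fy, u0, v0, lam=0.7):
    """Fuse the interpolated LSL depth map with the weighted PS normals, Eq. (20)."""
    H, W = Z_lsl.shape
    idx = np.arange(H * W).reshape(H, W)
    u = np.arange(W)[None, :] - u0            # column pixel coordinate relative to principal point
    v = np.arange(H)[:, None] - v0            # row pixel coordinate relative to principal point
    Nx, Ny, Nz = N_aps[..., 0], N_aps[..., 1], N_aps[..., 2]
    a = Nx * u / fx + Ny * v / fy + Nz        # common coefficient of dZ/du and dZ/dv in T . N

    rows, cols, vals, rhs = [], [], [], []

    def add(r, c, val):
        rows.append(r); cols.append(c); vals.append(val)

    # Depth term of Eq. (13): sqrt(lam) * (Z - Z_LSL) = 0 at every pixel.
    wd = np.sqrt(lam)
    Zl = Z_lsl.ravel()
    for p in range(H * W):
        add(p, p, wd)
        rhs.append(wd * Zl[p])
    n_rows = H * W

    # Normal terms of Eq. (19): sqrt(1-lam) * (T . N_APS) = 0, forward differences.
    wn = np.sqrt(1.0 - lam)
    for i in range(H - 1):
        for j in range(W - 1):
            p = idx[i, j]
            # X direction: a*(Z[i,j+1] - Z[i,j]) + (Nx/fx)*Z[i,j] = 0
            add(n_rows, idx[i, j + 1], wn * a[i, j])
            add(n_rows, p, wn * (Nx[i, j] / fx - a[i, j]))
            rhs.append(0.0); n_rows += 1
            # Y direction: a*(Z[i+1,j] - Z[i,j]) + (Ny/fy)*Z[i,j] = 0
            add(n_rows, idx[i + 1, j], wn * a[i, j])
            add(n_rows, p, wn * (Ny[i, j] / fy - a[i, j]))
            rhs.append(0.0); n_rows += 1

    A = sparse.coo_matrix((vals, (rows, cols)), shape=(n_rows, H * W)).tocsr()
    Z_opt = lsqr(A, np.asarray(rhs))[0]
    return Z_opt.reshape(H, W)
```

In this form, each pixel contributes one depth row and, away from the image border, two normal rows, so the system stays sparse and can be solved directly without the iterative refinement used in some earlier fusion schemes.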

6. Measurement Results and Discussions

The cooperative measurement system is shown in Figure 6. It consists of a laser line projector (Shengzuan Laser, Shenzhen, China), a camera (MV-UB500M, MindVision, Shenzhen, China), 12 LED light sources, a linear stage, and the components connecting them together. The laser line projector has a wavelength of 650 nm and a power of 5 mW; the minimum line width can reach 0.4 mm at a projection distance of 300 mm. The resolution of the camera is 800 × 600 pixels, and the focal length of the lens is 4–12 mm, adjustable manually. The angle between the camera optical axis and the laser plane is about 60°, and the scanning speed is 10 mm/s in the following experiments. The LED light sources are mounted around the camera at equal intervals on a circular panel. The luminance of each source is the same, and the angle between each light source and the camera's optical axis is about 45°. The image plane of the camera is parallel to the circular plane on which the light sources are located, and the radius of this circular plane is 600 mm. About 1.2 s is needed to acquire the images of the part under the different light sources for the PS measurement. The computer has an Intel i5-8300 CPU and 4 GB RAM.

6.1. Measurement and Evaluation of Stairs

To verify the effectiveness of the system, measurements were carried out on precision-milled stairs, as shown in Figure 7a. The topmost step serves as the reference plane, and the remaining steps are named S1, S2, and S3. The heights between the steps and the reference plane are denoted by H1, H2, and H3. The diffused laser stripe can be seen on the steps; it is fine and bright, which ensures accuracy. Figure 7b shows the point cloud; the boundary points on the steps are excluded before evaluation. The reference plane was first calculated by plane fitting, and then the average distance from each step to the reference plane was calculated. Similarly, the heights of the steps were measured on a CMM (Hexagon GLOBAL 7107, Qingdao, China) using the topmost step as the reference plane, with a measurement error of less than 3 μm. Measurement results and errors are shown in Table 1. The mean absolute error (MAE) of H3 is 0.0735 mm and the relative error (RE) is 0.41%, which indicates the high measurement accuracy of the LSL sensor.
When the laser plane is calibrated with the RANSAC algorithm, the measurement accuracy can be further improved, as shown in Table 2. The MAE of H3 is 0.0328 mm and the relative error is reduced from 0.41% to 0.18%. The REs of H1 and H2 are also reduced significantly.
Afterward, the fusion was performed and the results are shown in Figure 8. Figure 8a–c show the LSL, the PS, and the fusion results, respectively. The LSL measurement results and the fusion results are close to each other, while the PS result has a larger error: the light sources in the photometric stereo measurement are not uniform parallel light sources, which introduces errors into the normal vectors; these errors then accumulate when the depth values are calculated, leading to a larger overall bias in the PS measurement results.
Measurement errors of the LSL, the PS, and the fused results are evaluated by comparison with the CMM results, as shown in Table 3. It can be seen that the absolute error (AE) of the LSL measurement result for H3 is 0.0349 mm, the error of the PS measurement result is 0.9620 mm, and the error of the fused result is 0.0293 mm. The error of the fused result is reduced by 16.0% compared with the LSL measurement result, and by 97.0% compared with the PS measurement result. Therefore, the fusion method can further improve the accuracy.

6.2. Effect of Different Values of λ

Different values of λ were analyzed to show their impact on the fusion results, as shown in Figure 9. The sum of the MAEs of the steps (H1, H2, and H3) varies with λ. When λ is 0.1, the error is the largest. As λ increases, the error value tends to decrease, and when λ is 0.7, the error is the smallest.
The effect of different values of λ on the clarity of the fusion result was also analyzed. Measurement results of an aluminum part were fused, and the results are shown in Figure 10.
For Figure 10a–f, the same position, at the outermost edge of the petal indicated by the arrow, is analyzed. When λ = 0.1 and 0.2, the details in this region are relatively blurred. When λ = 0.3, the details become somewhat clearer. When λ = 0.4, the undulations at the edge of the petals in this region increase further, which matches the actual object more closely. In addition, a small bump starts to appear at the upper left of the arrow. In Figure 10f–h, the small bump no longer changes compared with Figure 10e. Therefore, when λ is greater than 0.5, the clarity of the fusion result has stabilized. Combining these observations with the accuracy results at different values of λ, λ is set to 0.7 when fusion is performed.

6.3. Measurement of Complex Parts

The purpose of this measurement system is to obtain 3D geometric information of complex parts, which can be used for quality inspection and reverse engineering. First, the six letters "HEBUST" were milled on a precision machine. Figure 11a shows the machined part. The measurement result using the LSL sensor is shown in Figure 11b. The normal vectors calculated using PS are shown in Figure 11c, where each letter can be seen. The angles of the adjacent normal vectors in the X and Y directions are calculated using the proposed method, as shown in Figure 11d,e, respectively. The letters can only be clearly observed in the corresponding directions. Figure 11f is the fused result, where each letter becomes very clear, the same as in Figure 11c. The running time for the fusion is about 8 s.
The fusion results of "HEBUST" are shown in Figure 12. The fused result obtained by the Nehab method [20] is shown in Figure 12a, where the six letters can be seen, but the lateral parts of the letters are insufficiently clear. In contrast, with the proposed method all of the letters can be seen clearly, as shown in Figure 12c. Figure 12b,d are the enlargements of the regions marked in Figure 12a,c, respectively. Note that the features of the two letters "BU" in the transverse direction are very fuzzy in Figure 12b; with our method, these letters become very clear and the transverse features can be seen.
To further verify the effectiveness of the proposed method, a coin with rich texture information was measured. These textures include portraits, letters, and numbers. Figure 13b is the measurement result of the LSL where the approximate outline can be seen, but the details are not clear. Figure 13c shows the normal vector calculated from the PS, which clearly shows its detailed features. The angles of the normal vector in X and Y directions are calculated using the proposed method, as shown in Figure 13d,e, respectively. The coin can only be clearly characterized in the corresponding directions. Figure 13f is the fused result where detailed features such as the characters, letters, and numbers on the coin become clear.
The fusion result of the coin is shown in Figure 14. Figure 14a shows the detail achieved by the Nehab method, and the fusion result using the proposed method is shown in Figure 14c. The computing time required for data fusion was about 6 s. The details of the result in the middle position are clearer than those of the Nehab method. Enlargements of the fusion results are also shown: in Figure 14b, the approximate features of the hair can be observed, but they are insufficiently clear; in Figure 14d, they become very clear with our method.
A cross-section profile of the coin is selected for comparative analysis, as shown in Figure 15. This profile was obtained by use of the Nehab method, the proposed method, and a chromatic confocal (CC) sensor, respectively. The CC sensor (Liyi D35A18R8S25, Shenzhen, China) is shown in Figure 15a, with a resolution of 40 nm and a linear accuracy of up to ±2 µm. Measurement accuracy of the CC sensor is very high, so it can be used as the reference for accuracy evaluation of the fused results.
Figure 15b shows the measurement result of the center profile. It can be seen that the peak-to-valley value of the profile is D1 = 0.2598 mm using the Nehab method, D2 = 0.2334 mm with our method, and the reference value D3 is 0.1901 mm. The deviation between the Nehab method and the CC sensor is 0.0697 mm. With the proposed method, the deviation is reduced to 0.0433 mm, a reduction of 37.9%. Therefore, the proposed method improves not only the clarity but also the accuracy.

7. Conclusions

An LSL-PS cooperative measurement system is designed, and an adaptive weighted data fusion method is proposed. The adaptive fusion is based on the normal vectors computed with the PS method. The 3D point cloud obtained from the LSL can be directly fused with the normal vectors from the PS; therefore, the integration process in the PS measurement can be eliminated, which avoids error accumulation. A weight function based on the angles of adjacent normal vectors is added to the normal vector error function, which makes the features of the fusion result clearer. More experiments will be carried out in the future for complex surfaces with fine features.

Author Contributions

Conceptualization, Y.L. and J.Z.; data curation, J.S.; formal analysis, J.S., Y.L. and J.Z.; funding acquisition, Y.L.; methodology, Y.L. and T.L.; project administration, T.L.; resources, Z.Z.; software, J.S. and Z.Z.; supervision, T.L.; validation, Z.Z. and J.Z.; visualization, J.S., Z.Z. and J.Z.; writing—original draft, J.S.; writing—review and editing, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hebei Province, grant number E2022208020, the S&T Program of Hebei, grant number 22311701D, and by the Hebei Provincial Department of Education, grant number JZX2024014, CXZZSS2023092.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wei, Z.; Shao, M.; Wang, Y.; Hu, M. A Sphere-Based Calibration Method for Line Structured Light Vision Sensor. Adv. Mech. Eng. 2013, 5, 580417. [Google Scholar] [CrossRef]
  2. Li, Y.; Zhou, J.; Mao, Q.; Jin, J.; Huang, F. Line Structured Light 3D Sensing with Synchronous Color Mapping. IEEE Sens. J. 2020, 20, 9796–9805. [Google Scholar] [CrossRef]
  3. Zhao, J.; Cheng, Y.; Cai, G.; Feng, C.; Liao, L.; Xu, B. Correction model of linear structured light sensor in underwater environment. Opt. Lasers Eng. 2022, 153, 107013. [Google Scholar] [CrossRef]
  4. Xue, Q.; Ji, W.; Meng, H.; Sun, X.; Ye, H.; Yang, X. Estimating the quality of stripe in structured light 3D measurement. Optoelectron. Lett. 2022, 18, 103–108. [Google Scholar] [CrossRef]
  5. Deng, Z.; Ruan, Y.; Hao, F.; Liu, T. Hand-eye calibration of line structured-light sensor by scanning and reconstruction of a free-placed standard cylindrical target. Measurement 2024, 229, 114487. [Google Scholar] [CrossRef]
  6. Yang, L.; Fan, J.; Huo, B.; Li, E.; Liu, Y. Image denoising of seam images with deep learning for laser vision seam tracking. IEEE Sens. J. 2022, 22, 6098–6107. [Google Scholar] [CrossRef]
  7. Mao, Q.; Cui, H.; Hu, Q.; Ren, X. A rigorous fastener inspection approach for high-speed railway from structured light sensors. ISPRS J. Photogramm. Remote Sens. 2018, 143, 249–267. [Google Scholar] [CrossRef]
  8. Li, Y.; Zhou, J.; Liu, L. Research progress of the line structured light measurement technique. J. Hebei Univ. Sci. Technol. 2018, 39, 116–124. [Google Scholar]
  9. Fan, H.; Qi, L.; Wang, N.; Dong, J.; Chen, Y.; Yu, H. Deviation correction method for close-range photometric stereo with nonuniform illumination. Opt. Eng. 2017, 56, 103102. [Google Scholar] [CrossRef]
  10. Ma, L.; Liu, Y.; Liu, J.; Pei, X.; Sun, F.; Shi, L.; Fang, S. A multi-scale methodology of turbine blade surface recovery based on photometric stereo through fast calibrations. Opt. Lasers Eng. 2022, 150, 106837. [Google Scholar] [CrossRef]
  11. Liu, H.; Wu, X.; Yan, N.; Yuan, S.; Zhang, X. A novel image registration-based dynamic photometric stereo method for online defect detection in aluminum alloy castings. Digit. Signal Process. 2023, 141, 104165. [Google Scholar] [CrossRef]
  12. Wang, S.; Xu, K.; Li, B.; Cao, X. Online micro defects detection for ductile cast iron pipes based on twin light photometric stereo. Case Stud. Constr. Mater. 2023, 19, e02561. [Google Scholar] [CrossRef]
  13. Gould, J.; Clement, S.; Crouch, B.; King, R.S. Evaluation of photometric stereo and elastomeric sensor imaging for the non-destructive 3D analysis of questioned documents—A pilot study. Sci. Justice 2023, 63, 456–467. [Google Scholar] [CrossRef] [PubMed]
  14. Blair, J.; Stephen, B.; Brown, B.; McArthur, S.; Gorman, D.; Forbes, A.; Pottier, C.; McAlorum, J.; Dow, H.; Perry, M. Photometric stereo data for the validation of a structural health monitoring test rig. Data Brief 2024, 53, 110164. [Google Scholar] [CrossRef] [PubMed]
  15. Pattnaik, I.; Dev, A.; Mohapatra, A.K. A face recognition taxonomy and review framework towards dimensionality, modality and feature quality. Eng. Appl. Artif. Intell. 2023, 126, 107056. [Google Scholar] [CrossRef]
  16. Sikander, G.; Anwar, S.; Husnain, G.; Thinakaran, R.; Lim, S. An Adaptive Snake Based Shadow Segmentation for Robust Driver Fatigue Detection: A 3D Facial Feature based Photometric Stereo Perspective. IEEE Access 2023, 11, 99178–99188. [Google Scholar] [CrossRef]
  17. Bornstein, D.; Keep, T.J. New Dimensions in Conservation Imaging: Combining Photogrammetry and Photometric Stereo for 3D Documentation of Heritage Artefacts. AICCM Bull. 2023, 1–15. [Google Scholar] [CrossRef]
  18. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 191139. [Google Scholar] [CrossRef]
  19. Zhou, J.; Shi, J.; Li, Y.; Liu, X.; Yao, W. Data fusion of line structured light and photometric stereo point clouds based on wavelet transformation. In Proceedings of the Third International Computing Imaging Conference (CITA 2023), Sydney, Australia, 1–3 June 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12921, pp. 960–964. [Google Scholar]
  20. Nehab, D.; Rusinkiewicz, S.; Davis, J.; Ramamoorthi, R. Efficiently combining positions and normals for precise 3D geometry. ACM Trans. Graph. (TOG) 2005, 24, 536–543. [Google Scholar] [CrossRef]
  21. Haque, M.; Chatterjee, A.; Madhav Govindu, V. High quality photometric reconstruction using a depth camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2275–2282. [Google Scholar]
  22. Zhang, Q.; Ye, M.; Yang, R.; Matsushita, Y.; Wilburn, B.; Yu, H. Edge-preserving photometric stereo via depth fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 2472–2479. [Google Scholar]
  23. Okatani, T.; Deguchi, K. Optimal integration of photometric and geometric surface measurements using inaccurate reflectance/illumination knowledge. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 254–261. [Google Scholar]
  24. Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A.V. Experimentation of structured light and stereo vision for underwater 3D reconstruction. ISPRS J. Photogramm. Remote Sens. 2011, 66, 508–518. [Google Scholar] [CrossRef]
  25. Massot-Campos, M.; Oliver-Codina, G.; Kemal, H.; Petillot, Y.; Bonin-Font, F. Structured light and stereo vision for underwater 3D reconstruction. In Proceedings of the OCEANS 2015-Genova, Genova, Italy, 18–21 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6. [Google Scholar]
  26. Li, X.; Fan, H.; Qi, L.; Chen, Y.; Dong, J.; Dong, X. Combining encoded structured light and photometric stereo for underwater 3D reconstruction. In Proceedings of the 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), San Francisco, CA, USA, 4–8 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  27. Riegler, G.; Liao, Y.; Donne, S.; Koltun, V.; Geiger, A. Connecting the dots: Learning representations for active monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7624–7633. [Google Scholar]
  28. Lu, Z.; Tai, Y.W.; Ben-Ezra, M.; Brown, M.S. A framework for ultra high resolution 3D imaging. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1205–1212. [Google Scholar]
  29. Li, Q.; Ren, J.; Pei, X.; Ren, M.; Zhu, L.; Zhang, X. High-accuracy point cloud matching algorithm for weak-texture surface based on multi-modal data cooperation. Acta Opt. Sin. 2022, 42, 0810001. [Google Scholar]
  30. Antensteiner, D.; Stolc, S.; Pock, T. A review of depth and normal fusion algorithms. Sensors 2018, 18, 431. [Google Scholar] [CrossRef] [PubMed]
  31. Fan, H.; Rao, Y.; Rigall, E.; Qi, L.; Wang, Z.; Dong, J. Near-field photometric stereo using a ring-light imaging device. Signal Process. Image Commun. 2022, 102, 116605. [Google Scholar] [CrossRef]
  32. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  33. Li, Y.; Zhou, J.; Huang, F.; Liu, L. Sub-Pixel Extraction of Laser Stripe Center Using an Improved Gray-Gravity Method. Sensors 2017, 17, 814. [Google Scholar] [CrossRef]
Figure 1. Illustration of the cooperative measurement system.
Figure 2. Light source direction calibration: (a) computing the spherical normal direction, (b) circular section where P locates, and (c) calibration of light source direction.
Figure 3. Flow chart showing the adaptive weighted fusion method.
Figure 4. Illustration of the fusion principle.
Figure 5. Weights computation using normal vectors: (a) normal vector neighborhood and (b) angle between adjacent normal vectors.
Figure 6. The LSL-PS cooperative measurement system.
Figure 7. Measurement of stairs using the LSL sensor: (a) stairs and (b) measured point cloud.
Figure 8. Measurement results using the different methods: (a) LSL, (b) PS, and (c) fused results.
Figure 9. Mean absolute error for different λ.
Figure 10. Fusion results of an aluminum part for different values of λ: (a) λ = 0.1, (b) λ = 0.2, (c) λ = 0.3, (d) λ = 0.4, (e) λ = 0.5, (f) λ = 0.6, (g) λ = 0.7, and (h) λ = 0.8.
Figure 11. Measurement of a machined part with letters: (a) the part, (b) LSL measurement results, (c) PS normal vectors, (d) angle of the adjacent normal vector in the X direction, (e) angle of the adjacent normal vector in the Y direction, and (f) fused results.
Figure 12. Fusion results of "HEBUST" and its details: (a) Nehab method, (b) enlargement of Nehab method, (c) our method, and (d) enlargement of our method.
Figure 13. Measurement of coin parts: (a) coin, (b) LSL sensor measurement results, (c) PS normal vectors, (d) angle of adjacent normal vector in the X direction, (e) angle of adjacent normal vector in the Y direction, and (f) fused results.
Figure 14. Fusion result of the coin: (a) Nehab method, (b) enlargement of (a), (c) our method, and (d) enlargement of (c).
Figure 15. Comparison of fusion results for coins: (a) chromatic confocal sensor and (b) cross-section profile obtained using the different methods.
Table 1. Measurement results for the stairs (unit: mm).

No.   CMM       1         2         3         4         5         MAE      RE
H1    7.9992    7.9664    7.9703    7.9895    7.9734    7.9741    0.0245   0.31%
H2    13.9982   13.9340   13.9270   13.9491   13.9377   13.9331   0.0620   0.44%
H3    17.9987   17.9987   17.9206   17.9167   17.9421   17.9208   0.0735   0.41%
Table 2. Measured results for aluminum stairs using the RANSAC (unit: mm).

No.   CMM       1         2         3         4         5         MAE      RE
H1    7.9992    8.0009    7.9846    8.0018    7.9862    8.0023    0.0040   0.05%
H2    13.9982   13.9694   13.9664   13.9797   13.9594   13.9809   0.0270   0.19%
H3    17.9987   17.9680   17.9607   17.9702   17.9499   17.9809   0.0328   0.18%
Table 3. Comparison of measured results using different methods (unit: mm).

No.   CMM       LSL       AE        PS        AE        Fusion    AE
H1    7.9992    7.9791    0.0201    6.1526    1.8466    8.0095    0.0103
H2    13.9982   13.9919   0.0063    11.3644   2.6338    14.0121   0.0139
H3    17.9987   18.0336   0.0349    17.0367   0.9620    18.0280   0.0293
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
