Article

A Novel Dynamic Light-Section 3D Reconstruction Method for Wide-Range Sensing

Mengjuan Chen, Qing Li, Kohei Shimasaki, Shaopeng Hu, Qingyi Gu and Idaku Ishii

1 School of Advanced Science and Technology, Hiroshima University, Higashihiroshima 739-8527, Japan
2 Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(12), 3793; https://doi.org/10.3390/s24123793
Submission received: 16 May 2024 / Revised: 6 June 2024 / Accepted: 7 June 2024 / Published: 11 June 2024
(This article belongs to the Section Intelligent Sensors)

Abstract

Existing galvanometer-based laser-scanning systems are challenging to apply in multi-scale 3D reconstruction because of the difficulty in balancing high reconstruction accuracy against a wide reconstruction range. This paper presents a novel method that synchronizes laser scanning with the switching of a camera's field-of-view (FOV) using multiple galvanometers. Beyond the advanced hardware setup, we establish a comprehensive geometric model of the system by modeling the dynamic camera, the dynamic laser, and their combined interaction. Furthermore, since existing calibration methods mainly focus on either dynamic lasers or dynamic cameras and have certain limitations, we propose a novel high-precision and flexible calibration method that constructs an error model and minimizes an objective function. The performance of the proposed method was evaluated by scanning standard components. The results show that the proposed 3D reconstruction system achieves an accuracy of 0.3 mm when the measurement range is extended to 1100 mm × 1300 mm × 650 mm. Thus, sub-millimeter accuracy is achieved over meter-scale reconstruction ranges, demonstrating that the proposed method realizes multi-scale 3D reconstruction and simultaneously enables high-precision and wide-range 3D reconstruction in industrial applications.

1. Introduction

Light-section vision systems are widely used for their adaptability, high accuracy, and cost-effectiveness [1,2,3] in applications such as rail traffic monitoring [4], medical imaging [5], robotics [6], and industrial production [7]. Such systems typically comprise a camera, a line laser projector, and a mechanical scanning platform. The line laser projects laser stripes onto the surface of the object, while the camera captures an image of the object with the laser stripes. The three-dimensional (3D) geometric information of the object is then obtained by triangulation, as extensively reviewed in the literature [8]. The 3D reconstruction of an object is completed by sweeping the laser stripes over the object, or moving the object itself, via the mechanical scanning platform.
Traditional laser scanners rely primarily on mechanical drive shafts, which are large, complex, and slow [9,10]. To overcome these limitations, various scanning mechanisms have been proposed. For instance, Du [2] designed a system that mounts a line laser on the end of a robotic arm to improve scanning flexibility; however, its scanning accuracy is limited by the precision of the robotic arm. Jiang [11] proposed a system that uses gimbals to drive the laser and camera for scanning; however, the system is bulky, and its scanning speed is slow. In recent years, galvanometers have emerged as promising scanning devices because of their small size, fast rotation, and high control accuracy, and galvanometer-based solutions provide a better alternative in terms of laser-scanning accuracy and speed [12]. However, existing galvanometer-based laser-scanning systems are primarily designed to perform laser scanning while leaving the camera fixed. The limited FOV of a fixed camera forces a trade-off between the accuracy and the sensing range of the system, which significantly limits its efficiency.
In this study, we propose a novel dynamic light-section 3D reconstruction system that combines a dynamic laser and a dynamic camera using multiple galvanometers. Our approach uses multiple galvanometers to synchronize laser scanning with the FOV switching of the camera, thereby enabling high-precision and wide-range 3D reconstruction. Achieving this requires calibration, which includes the system calibration of the galvanometer-based dynamic laser and camera as well as their joint calibration. For calibrating galvanometer-based dynamic laser systems, Eisert [13] introduced a geometric model and calibration procedure; however, the model is complicated, and its optimization is difficult, leading to low accuracy. Yu [14] designed a one-mirror galvanometer laser scanner; however, the calibration procedure is complex, and the objective function is difficult to optimize. Similarly, Yang [15] proposed a calibration method based on a precision linear stage, but this approach relies on a precision instrument and lacks flexibility. For calibrating galvanometer-based dynamic camera systems, Ying et al. [16,17,18] introduced self-calibration methods, which are complex in theory and difficult to implement. Kumar [19] proposed a calibration method based on a look-up table (LUT) using simple linear parameters, which requires complex pre-processing. Junejo et al. [16,17,20,21] proposed feature-based calibration methods, which are time-consuming and have low accuracy. Han [22] introduced a calibration method for a galvanometer-based camera using an end-to-end single-hidden-layer feedforward neural network model, but it is computationally intensive. De Boi et al. [23,24] proposed manifold-constrained Gaussian process regression methods for galvanometer setup calibration, which rely on data-driven and complex calibration procedures. Hu [25] built a galvanometer-mirror-based stereo vision measurement system and established a mirror reflection model, but it still lacks an accurate calibration method.
In conclusion, current light-section 3D reconstruction systems cannot simultaneously achieve high accuracy and a wide range. Moreover, existing calibration methods focus on calibrating either dynamic lasers or dynamic cameras and still have the shortcomings mentioned above. To address these limitations, this study proposes a novel dynamic 3D reconstruction system that overcomes the trade-off between accuracy and measurement range by synchronizing laser scanning and the FOV switching of a camera based on multiple galvanometers. Additionally, we propose a novel comprehensive calibration solution for the proposed system, encompassing the calibration of the dynamic camera, the calibration of the dynamic laser, and their joint calibration. The contributions of this study can be summarized as follows:
(1)
A novel dynamic light-section 3D reconstruction system is designed based on multiple galvanometers. To the best of our knowledge, this system is the first to synchronize laser scanning and the FOV switching of the camera, thus enabling high-precision and wide-range 3D reconstruction simultaneously.
(2)
A novel high-precision and flexible calibration method for the dynamic 3D system is proposed by constructing an error model and objective function based on the combined model of the dynamic camera and dynamic laser. This method is not only applicable to the proposed system but also to other single galvanometer-based dynamic laser or camera systems.
(3)
Experiments were conducted to validate the proposed dynamic 3D reconstruction method and demonstrate its accuracy. To the best of our knowledge, compared with all existing galvanometer-based laser-scanning methods, our approach offers the largest measurement range while maintaining the same level of measurement accuracy.
The system design and geometric model are described in Section 2. The proposed calibration method and error compensation methods are described in Section 3. Section 4 presents the validation experiments and results. Finally, Section 5 presents the conclusions.

2. System Design and Geometric Model

2.1. System Design

The dynamic light-section 3D reconstruction system consists of a CMOS camera, a line laser, and two galvanometer mirror systems, as shown in Figure 1. The camera and Galvanometer-1 form a dynamic camera system, whereas the laser and Galvanometer-2 form a dynamic laser system.
Based on the geometric model of the system and pre-calibration, the 3D information of the target can be calculated from the captured laser image and voltage values of the two galvanometers. The working principle is illustrated in Figure 2. A spherical object on a flat plane is employed to demonstrate the process of dynamic 3D reconstruction. First, the system utilizes multi-galvanometer control to scan the target surface. When the system is activated, a line laser projects a laser stripe onto Galvanometer-2, which reflects the stripe onto the surface of the object. By controlling the voltage of Galvanometer-2, the laser stripe can scan the target. Simultaneously, the dynamic camera system captures laser images from different angles by adjusting the voltage of Galvanometer-1. Next, the laser-center-pixel coordinates are obtained using the laser stripe extraction algorithm. The 3D reconstruction of the laser stripe is performed by combining the voltage values of multiple galvanometers, laser pixel coordinates, geometric models of the dynamic 3D reconstruction system, and calibrated parameters. The point clouds of all the laser stripes are converted to the same coordinate frame using the transformation matrix of the dynamic camera. The system performs error correction based on the joint calibration to optimize accuracy. Finally, the system generates a point cloud for the target and completes the dynamic 3D reconstruction. Accurate geometric modeling and calibration methods are essential to ensure the 3D reconstruction accuracy of the system.

2.2. Geometric Model

The geometric model of the system is shown in Figure 2b. When the system is precisely machined, we can assume that, for the dynamic camera system, the optical axis of the camera is perpendicular to the rotation axis of Galvanometer-1's pan mirror and incident at the center of the pan mirror. For the dynamic laser system, the laser plane is perpendicular to the rotation axis of Galvanometer-2's pan mirror and incident on the center of the mirror. Four coordinate frames are established. $\{W\}$ is the world coordinate frame defined on the surface of the planar chessboard target, with the origin $O$ located at the upper-left corner of the chessboard; the X-axis and Y-axis are parallel to the chessboard array, while the Z-axis is perpendicular to the $O$-$XY$ plane, following the right-hand rule. $\{C\}$ represents the camera coordinate frame, where the Z-axis corresponds to the camera's optical axis, and the X-axis and Y-axis are parallel to the image plane, again following the right-hand rule. $\{G\}$ denotes the coordinate frame of Galvanometer-1, with the Z-axis corresponding to the rotation axis of the pan mirror; the X-axis is aligned with the camera's optical axis, while the Y-axis aligns with the line connecting the center points of the pan and tilt mirrors. $\{V_t\}$ is the virtual camera coordinate frame formed by the reflection of $\{C\}$ through the pan and tilt mirrors of Galvanometer-1. According to the operating principle of a galvanometer, the rotation angle of the pan-tilt mirror is proportional to the voltage. For Galvanometer-1, the voltages of the pan and tilt mirrors are denoted by $U_{1pan}$ and $U_{1tilt}$. Therefore, the rotation angle of the pan mirror is $\theta_1 = k_{1pan} U_{1pan}$, and the rotation angle of the tilt mirror is $\theta_2 = k_{1tilt} U_{1tilt}$. As the pan-tilt mirrors rotate, $\{V_t\}$ reflects the change in $U_{1pan}$ and $U_{1tilt}$. When $U_{1pan} = U_{1tilt} = 0$, the initial virtual camera coordinate frame is denoted by $\{V_0\}$. The relationship between $\{V_t\}$ and $\{V_0\}$ is given by Equation (1).
$$ T_{V_0}^{V_t} = T_G^{V_t} \, T_{V_0}^{G}, \tag{1} $$
where $T_{V_0}^{V_t}$ is the transformation matrix between $\{V_0\}$ and $\{V_t\}$, $T_G^{V_t}$ denotes the transformation matrix between $\{G\}$ and $\{V_t\}$, and $T_{V_0}^{G}$ denotes the transformation matrix between $\{V_0\}$ and $\{G\}$. As shown in the geometric model diagram, $\{C\}$ is first reflected by the pan mirror and then by the tilt mirror. The geometry of the two reflections is modeled by Equation (3). For $\{V_0\}$, the rotation angles of the pan-tilt mirrors are $\theta_1 = \theta_2 = 45°$. The transformation matrix $T_{V_0}^{V_t}$ is calculated using Equation (2). Thus, the geometric model of the dynamic camera is established.
$$ T_{V_0}^{V_t} = T_G^{V_t} \left( T_G^{V_t} \big|_{U_{1pan} = U_{1tilt} = 0} \right)^{-1}. \tag{2} $$
$$ T_G^{V_t} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos 2\theta_2 & \sin 2\theta_2 & d(1 - \cos 2\theta_2) \\ 0 & \sin 2\theta_2 & -\cos 2\theta_2 & d \sin 2\theta_2 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos 2\theta_1 & \sin 2\theta_1 & 0 & 0 \\ \sin 2\theta_1 & -\cos 2\theta_1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 & l \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \sin 2\theta_1 & 0 & \cos 2\theta_1 & l \cos 2\theta_1 \\ -\cos 2\theta_1 \cos 2\theta_2 & \sin 2\theta_2 & \sin 2\theta_1 \cos 2\theta_2 & l \sin 2\theta_1 \cos 2\theta_2 + d(1 - \cos 2\theta_2) \\ -\cos 2\theta_1 \sin 2\theta_2 & -\cos 2\theta_2 & \sin 2\theta_1 \sin 2\theta_2 & l \sin 2\theta_1 \sin 2\theta_2 + d \sin 2\theta_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \tag{3} $$
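To make the reflection model concrete, the following is a minimal numpy sketch of Equations (2) and (3). The function names are ours, angles are assumed to be in radians, and the 45° zero-voltage reference follows our reading of the model; it is a sketch, not the authors' implementation.

```python
import numpy as np

def T_G_Vt(theta1, theta2, l, d):
    """Pose of the virtual camera {Vt} in the galvanometer frame {G}
    (Equation (3)) for pan/tilt mirror angles theta1, theta2 in radians."""
    c1, s1 = np.cos(2 * theta1), np.sin(2 * theta1)
    c2, s2 = np.cos(2 * theta2), np.sin(2 * theta2)
    # Reflection by the tilt mirror, whose rotation axis is offset by d.
    M_tilt = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0,  c2,  s2, d * (1 - c2)],
                       [0.0,  s2, -c2, d * s2],
                       [0.0, 0.0, 0.0, 1.0]])
    # Reflection by the pan mirror.
    M_pan = np.array([[ c1,  s1, 0.0, 0.0],
                      [ s1, -c1, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    # Axis relabeling from {C} (optical axis -> X of {G}) with offset l.
    M_cam = np.array([[0.0, 0.0, 1.0,   l],
                      [1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    return M_tilt @ M_pan @ M_cam

def T_V0_Vt(theta1, theta2, l, d):
    """Equation (2): pose of {Vt} relative to the zero-voltage frame {V0},
    where both mirrors sit at 45 degrees."""
    T0 = T_G_Vt(np.deg2rad(45.0), np.deg2rad(45.0), l, d)
    return T_G_Vt(theta1, theta2, l, d) @ np.linalg.inv(T0)
```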
The coordinates of the pixel points on the laser stripe are denoted by $(u, v)$, and the coordinates of the corresponding 3D points in $\{V_t\}$ by $(X_V, Y_V, Z_V)$. Based on the pinhole camera model, the mapping between these coordinates is given by Equation (4).
$$ Z_V \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_V \\ Y_V \\ Z_V \end{bmatrix}, \tag{4} $$
where $(u_0, v_0)$ is the principal point of the image, and $f_x$ and $f_y$ are the focal lengths of the camera in pixels. For Galvanometer-2, the rotation angle of the pan mirror is denoted by $\theta_3 = k_{2pan} U_{2pan}$, and the rotation angle of the tilt mirror by $\theta_4 = k_{2tilt} U_{2tilt}$. When $U_{2pan} = U_{2tilt} = 0$, the dynamic laser is in its initial position. The initial laser plane in $\{V_0\}$ is denoted by $plane_0^{V_0}$, and its equation is $A_0 x + B_0 y + C_0 z + D_0 = 0$. The rotation axis of the dynamic laser in $\{V_0\}$ is denoted as $\mathbf{n} = (n_x, n_y, n_z)$. The laser plane after rotation about this axis is denoted as $plane^{V_0}$, with equation $A_{V_0} x + B_{V_0} y + C_{V_0} z + D_{V_0} = 0$.
The light path of the dynamic laser is reflected by a mirror that rotates about its axis, and the rotation angle of the laser plane is twice that of the mirror. Therefore, the normal of the dynamic laser plane after rotation in $\{V_0\}$ can be solved using Equation (5).
$$ \begin{bmatrix} A_{V_0} \\ B_{V_0} \\ C_{V_0} \end{bmatrix} = R(\mathbf{n}, 2\theta_4) \begin{bmatrix} A_0 \\ B_0 \\ C_0 \end{bmatrix}, \tag{5} $$
where $R(\mathbf{n}, 2\theta_4)$ represents the Rodrigues rotation about axis $\mathbf{n}$ by angle $2\theta_4$. Using a point $P = (X_0, Y_0, Z_0)$ on the rotation axis, $D_{V_0}$ can be calculated as $D_{V_0} = -A_{V_0} X_0 - B_{V_0} Y_0 - C_{V_0} Z_0$. Combining this with the transformation matrix $T_{V_0}^{V_t}$ in Equation (2), the equation of the dynamic laser plane in $\{V_t\}$ can be calculated as
$$ plane^{V_t} = T_{V_0}^{V_t} \, plane^{V_0}. \tag{6} $$
The equation of $plane^{V_t}$ is denoted as $A_V x + B_V y + C_V z + D_V = 0$. By extracting the pixel points $(u, v)$ from the laser stripe, the corresponding 3D point $P^{V_t} = (X_V, Y_V, Z_V)$ can be calculated. Therefore, the relationship between $P^{V_t}$ and the change in the galvanometer mirror angles is expressed by Equation (7).
$$ \begin{cases} Z_V = -D_V \big/ \left[ A_V (u - u_0)/f_x + B_V (v - v_0)/f_y + C_V \right] \\ X_V = Z_V (u - u_0)/f_x \\ Y_V = Z_V (v - v_0)/f_y \\ (A_V, B_V, C_V, D_V) = F\!\left( \theta_1, \theta_2, \theta_4;\; A_0, B_0, C_0, D_0, l, d, \mathbf{n}, P \right) \end{cases} \tag{7} $$
$F$ represents a parameterized mapping function, where $\theta_1, \theta_2, \theta_4$ are variables and the other parameters are constants. Because $\{V_t\}$ changes constantly with the scanning angle, all $P^{V_t}$ must be converted into a coordinate frame that is fixed with respect to $\{W\}$. For ease of calculation, we choose $\{V_0\}$ and obtain $P^{V_0} = (X_{V_0}, Y_{V_0}, Z_{V_0}) = T_{V_t}^{V_0} P^{V_t}$.
Thus, we establish the relationship between $(u, v)$ and $(X_{V_0}, Y_{V_0}, Z_{V_0})$ that formulates the 3D reconstruction. In the geometric model of the dynamic 3D system, Equation (7) shows that $f_x, f_y, u_0, v_0$ can be obtained by calibrating the camera, and $A_0, B_0, C_0, D_0$ can be obtained from the laser plane calibration. The parameters $l$, $d$, $\mathbf{n} = (n_x, n_y, n_z)$, and $P = (X_0, Y_0, Z_0)$ are unknown, and a calibration algorithm must be designed to obtain them.
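As an illustration of Equation (7), the following sketch intersects the camera ray through a pixel with the laser plane in $\{V_t\}$. All variable names are ours, and the plane is assumed to be given as coefficients $(A_V, B_V, C_V, D_V)$.

```python
import numpy as np

def reconstruct_point(u, v, plane, fx, fy, u0, v0):
    """Equation (7): intersect the camera ray through pixel (u, v) with the
    laser plane A*x + B*y + C*z + D = 0 expressed in {Vt}."""
    A, B, C, D = plane
    x, y = (u - u0) / fx, (v - v0) / fy   # normalized image coordinates
    Z = -D / (A * x + B * y + C)          # depth along the optical axis
    return np.array([x * Z, y * Z, Z])

# Hypothetical usage: map the point into {V0} with a 4x4 matrix T_Vt_V0.
# P_V0 = (T_Vt_V0 @ np.append(reconstruct_point(u, v, plane, fx, fy, u0, v0), 1.0))[:3]
```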

3. Calibration Method

The proposed system calibration method is divided into three parts: the dynamic camera calibration, the dynamic laser calibration, and the joint calibration of the dynamic camera and laser for error correction.

3.1. Dynamic Camera Calibration

First, we perform intrinsic parameter calibration using Zhang's method [26] for distortion correction and obtain the camera parameters $f_x, f_y, u_0, v_0$. The dynamic camera geometric model described in Section 2 determines the constraint relationship between $\{V_t\}$ and $\{G\}$, as shown in Equation (3); from it we can obtain the parameters $l$ and $d$. The proposed dynamic camera calibration method uses a large calibration board, as shown in Figure 2b. The board measures 740 mm × 740 mm and comprises 35 × 35 circular markers, organized as 25 individual 7 × 7 sub-patterns. Each sub-pattern features a central larger circular marker with a diameter of 15 mm, while the remaining smaller circular markers have a diameter of 10 mm and a center-to-center spacing of 20 mm. The larger circular markers establish the correspondences between calibration points across different FOVs. The calibration board is scanned by varying the galvanometer voltage to obtain numerous images at different rotation angles; each image corresponds to a virtual coordinate frame. The number of images is denoted by $n$. The transformation matrices between $\{V_t\}$ and $\{W\}$ are obtained through Zhang's extrinsic parameter calibration method [26] and denoted as $T_W^{V_0}, T_W^{V_1}, T_W^{V_2}, \ldots, T_W^{V_n}$. As the relative positions between the calibration points in these images are known, the transformation matrices between the virtual coordinate frames are calculated as $T_{V_0}^{V_1}, T_{V_0}^{V_2}, \ldots, T_{V_0}^{V_n}$. These values are taken as observations, and multiple sets of observations are used to solve for the parameters to be calibrated. The initial pan-tilt angles of Galvanometer-1 are denoted by $\theta_1^{(0)}$ and $\theta_2^{(0)}$; they correspond to $\{V_0\}$ and the first calibration image. The pan-tilt angles of Galvanometer-1 corresponding to $\{V_n\}$ and the $n$th calibration image are denoted by $\theta_1^{(n)}$ and $\theta_2^{(n)}$. Therefore, $\{V_0\}$ and $\{V_n\}$ are defined as follows:
$$ T_G^{V_0} = T_G^{V_t} \big|_{\theta_1 = \theta_1^{(0)},\, \theta_2 = \theta_2^{(0)}} = \left[ k_{ij}\!\left(\theta_1^{(0)}, \theta_2^{(0)}, l, d\right) \right], \qquad T_G^{V_n} = T_G^{V_t} \big|_{\theta_1 = \theta_1^{(n)},\, \theta_2 = \theta_2^{(n)}} = \left[ g_{ij}\!\left(\theta_1^{(n)}, \theta_2^{(n)}, l, d\right) \right], \tag{8} $$
where $k_{ij}$ and $g_{ij}$ represent the elements of $T_G^{V_0}$ and $T_G^{V_n}$ as functions of the mirror angles, and $l$ and $d$ are the parameters of Equation (7) to be calibrated. The transformation matrix between $\{V_0\}$ and $\{V_n\}$ is a 4 × 4 matrix, which can be expressed as follows:
$$ T_{V_0}^{V_n}(n) = \begin{bmatrix} a_{11}^{n} & a_{12}^{n} & a_{13}^{n} & a_{14}^{n} \\ a_{21}^{n} & a_{22}^{n} & a_{23}^{n} & a_{24}^{n} \\ a_{31}^{n} & a_{32}^{n} & a_{33}^{n} & a_{34}^{n} \\ 0 & 0 & 0 & 1 \end{bmatrix} = \left[ a_{ij}^{n} \right]. \tag{9} $$
Simultaneously, $T_G^{V_n}$ can be calculated from $T_G^{V_0}$ in Equation (8) and $T_{V_0}^{V_n}$. For ease of representation, its elements are denoted as $h_{ij}$.
$$ T_G^{V_n} = T_{V_0}^{V_n}(n) \cdot T_G^{V_0} = \left[ \sum_{k=1}^{4} a_{ik}^{n} \, k_{kj}\!\left(\theta_1^{(0)}, \theta_2^{(0)}, l, d\right) \right] = \left[ h_{ij}\!\left(a_{ij}^{n}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) \right]. \tag{10} $$
Equation (10) gives the result for $\{V_t\}$ obtained from multiple observations, whereas Equation (8) gives the values calculated from the geometric model of the dynamic camera. For all measured coordinate frames ($\{V_0\}, \{V_1\}, \{V_2\}, \ldots, \{V_n\}$), the sum of the errors between the theoretical and measured values must be minimized. Therefore, the objective function is formulated as Equation (11).
$$ (l^*, d^*) = \arg\min_{l, d} \sum_{t=1}^{n} \sum_{i=1}^{4} \sum_{j=1}^{4} \left[ h_{ij}\!\left(a_{ij}^{t}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) - g_{ij}\!\left(\theta_1^{(t)}, \theta_2^{(t)}, l, d\right) \right]^2. \tag{11} $$
In Equation (3), the parameters $l$ and $d$ appear only in the translation vector. Based on the objective function, Equation (12) can be obtained. Finally, the parameters $l$ and $d$ are obtained by solving Equation (12) using the least-squares method.
$$ \begin{cases} g_{14}\!\left(\theta_1^{(1)}, \theta_2^{(1)}, l, d\right) - h_{14}\!\left(a_{ij}^{1}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) = 0 \\ g_{24}\!\left(\theta_1^{(1)}, \theta_2^{(1)}, l, d\right) - h_{24}\!\left(a_{ij}^{1}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) = 0 \\ g_{34}\!\left(\theta_1^{(1)}, \theta_2^{(1)}, l, d\right) - h_{34}\!\left(a_{ij}^{1}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) = 0 \\ \quad \vdots \\ g_{14}\!\left(\theta_1^{(n)}, \theta_2^{(n)}, l, d\right) - h_{14}\!\left(a_{ij}^{n}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) = 0 \\ g_{24}\!\left(\theta_1^{(n)}, \theta_2^{(n)}, l, d\right) - h_{24}\!\left(a_{ij}^{n}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) = 0 \\ g_{34}\!\left(\theta_1^{(n)}, \theta_2^{(n)}, l, d\right) - h_{34}\!\left(a_{ij}^{n}, \theta_1^{(0)}, \theta_2^{(0)}, l, d\right) = 0 \end{cases} \tag{12} $$
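One possible way to solve Equation (12) numerically is a standard nonlinear least-squares routine. The sketch below reuses the T_G_Vt helper from the earlier sketch and assumes a hypothetical list of observations; it is an illustration, not the authors' implementation, and the initial guess values are arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, observations):
    """Equation (12): stack the translation residuals g_i4 - h_i4 over all
    observed virtual-camera poses. `observations` is a hypothetical list of
    tuples (T_V0_Vn, theta1_n, theta2_n, theta1_0, theta2_0)."""
    l, d = params
    res = []
    for T_V0_Vn, th1_n, th2_n, th1_0, th2_0 in observations:
        g = T_G_Vt(th1_n, th2_n, l, d)              # model prediction, Eq. (8)
        h = T_V0_Vn @ T_G_Vt(th1_0, th2_0, l, d)    # observation-based value, Eq. (10)
        res.extend((g[:3, 3] - h[:3, 3]).tolist())  # translation components only
    return res

# sol = least_squares(residuals, x0=[80.0, 20.0], args=(observations,))
# l_star, d_star = sol.x
```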

3.2. Dynamic Laser Calibration

Based on the geometric modeling of the dynamic laser presented in Section 2, the equation of $plane_0^{V_0}$ and the rotation axis $\mathbf{n}$ of the dynamic laser must be calibrated while Galvanometer-1 is at its initial position. The calibration of $plane_0^{V_0}$ is conducted using the methodology outlined in [27]: laser images are acquired at various positions using a checkerboard calibration plate, and the laser plane is accurately defined by fitting multiple laser lines. For the extraction of the laser center, the technique described in [28] is employed; this algorithm, based on the Hessian matrix, achieves sub-pixel extraction precision at a processing speed of 1350 frames per second. Following these procedures, we derive the equation $A_0 x + B_0 y + C_0 z + D_0 = 0$.
The Galvanometer-2 tilt voltages $U_{2tilt} = U_1, U_2, \ldots, U_m$ are used to move the laser and obtain multiple laser planes. The respective plane equations are calibrated in the same manner as $plane_0^{V_0}$ and denoted as $plane_1^{V_0}, plane_2^{V_0}, \ldots, plane_m^{V_0}$. The unit normal vectors of these planes are calculated as $\mathbf{n}_0 = (n_{x0}, n_{y0}, n_{z0})$, $\mathbf{n}_1 = (n_{x1}, n_{y1}, n_{z1})$, $\ldots$, $\mathbf{n}_m = (n_{xm}, n_{ym}, n_{zm})$. In the absence of errors, all laser planes intersect along the same straight line; this line is the laser rotation axis $\mathbf{n}$ and is also the rotation axis of the tilt mirror in Galvanometer-2. For the normal vector of any laser plane, $\mathbf{n} \cdot \mathbf{n}_i = 0$ ($i = 0, 1, 2, \ldots, m$). However, because of various errors, $\mathbf{n} \cdot \mathbf{n}_i$ is not exactly zero. Therefore, for all laser planes, the objective function is formulated as shown in Equation (13).
$$ \mathbf{n}^* = \arg\min_{n_x, n_y, n_z} \sum_{i=0}^{m} \left( n_x n_{xi} + n_y n_{yi} + n_z n_{zi} \right)^2. \tag{13} $$
The direction vector $\mathbf{n}$ of the rotation axis is obtained by minimizing this objective function, as sketched below. $P = (X_0, Y_0, Z_0)$, a point on the rotation axis lying in all laser planes, is obtained using the least-squares method. All parameters in Equation (7) are thus obtained by calibrating the camera, the laser plane, the dynamic camera, and the dynamic laser, which completes the calibration of the proposed dynamic 3D system.
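Under the unit-norm constraint, Equation (13) has a closed-form minimizer: the right singular vector of the stacked normal matrix with the smallest singular value. A sketch with hypothetical inputs, assuming the plane coefficients are available as arrays:

```python
import numpy as np

def fit_rotation_axis(normals):
    """Equation (13): the unit n minimizing sum_i (n . n_i)^2 is the right
    singular vector of the stacked normals with the smallest singular value."""
    _, _, Vt = np.linalg.svd(np.asarray(normals))
    n = Vt[-1]
    return n / np.linalg.norm(n)   # axis direction, determined up to sign

def point_on_axis(planes):
    """Least-squares point P with A_i*X + B_i*Y + C_i*Z + D_i = 0 for every
    calibrated plane, passed as rows (A_i, B_i, C_i, D_i)."""
    planes = np.asarray(planes)
    P, *_ = np.linalg.lstsq(planes[:, :3], -planes[:, 3], rcond=None)
    return P
```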

3.3. Joint Calibration for Error Correction

For a well-calibrated dynamic light-section 3D reconstruction system, there are two groups of error sources, the dynamic camera and the dynamic laser, as listed in Table 1.
This study proposes an error-correction method based on the joint calibration of the dynamic camera and dynamic laser. After calibration is completed, error correction is performed on the 3D reconstructed results. Theoretically, when Galvanometer-1 is scanning and Galvanometer-2 is fixed, the reconstructed laser point clouds coincide perfectly. However, as explained in the error source analysis, some deviation remains between the multiple laser point clouds owing to these errors. We correct these errors using point-cloud registration based on the Iterative Closest Point (ICP) algorithm [29] to obtain accurate transformation matrices.
The calibration process is designed based on the following principle. If Galvanometer-2 remains stationary, the laser moves out of the FOV after Galvanometer-1 has scanned a certain range. Therefore, multiple calibration positions must be set in advance to keep the laser within the FOV. These positions are set as $p_1, p_2, \ldots, p_n$, and the corresponding voltages of Galvanometer-2 at these positions are $(s_1, t_1), (s_2, t_2), \ldots, (s_n, t_n)$, respectively. The laser plane equations $plane_{p_1}, plane_{p_2}, \ldots, plane_{p_n}$ at these positions are calibrated using the laser plane calibration method described in Section 3.2. The voltages of Galvanometer-1 are denoted by $(s'_1, t'_1), (s'_2, t'_2), \ldots, (s'_m, t'_m)$, respectively. The error-correction flow is presented in Algorithm 1.
Algorithm 1 Joint Calibration for Error Correction
1: Input: $f_x, f_y, u_0, v_0$; $l, d$; $(A_1, B_1, C_1, D_1), \ldots, (A_n, B_n, C_n, D_n)$; $(s_1, t_1), (s_2, t_2), \ldots, (s_n, t_n)$.
2: Output: $T_{V_1}^{V_0}, T_{V_2}^{V_0}, \ldots, T_{V_m}^{V_0}$.
3: Initialize: $i \leftarrow 1$; $(s, t) \leftarrow (s_1, t_1)$; $(s', t') \leftarrow f(s, t)$; $du \leftarrow 0.1$; $plane^{V_1} \leftarrow plane_{p_1}$; $M \leftarrow$ identity matrix.
4: Use Equation (7) to obtain the laser point cloud $P_1^{V_0}$.
5: while $i < m$ do
6:  Capture the laser image and extract the center curve;
7:  $plane^{V_t} = g(s, t, s', t')$;
8:  Use Equations (1)-(3) to calculate $T_{V_t}^{V}$;
9:  $plane^{V} \leftarrow T_{V_t}^{V} \, plane^{V_t}$;
10:  Use Equation (7) to obtain the point cloud $P_i^{V_0}$;
11:  $T_{V_t}^{V} \leftarrow$ transformation matrix between $P_1^{V_0}$ and $P_i^{V_0}$;
12:  $T_{V_t}^{V_0} \leftarrow M \, T_{V_t}^{V}$;
13:  if $i \bmod 50 == 0$ then
14:   $(s, t) \leftarrow (s_{i/50}, t_{i/50})$; $plane^{V_t} \leftarrow plane^{V}$;
15:   Use Steps 8-11 to obtain the point cloud $P'$;
16:   $M \leftarrow$ transformation matrix between $P'$ and $P_i^{V_0}$;
17:  end if
18:  $i \leftarrow i + 1$;
19:  $(s', t') \leftarrow (s' + du \cdot (i \bmod 200),\; t' + \mathrm{int}(i / 200))$;
20: end while
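The registration steps in lines 11 and 16 of Algorithm 1 can be realized with an off-the-shelf ICP implementation. The following is a minimal sketch using Open3D; the correspondence-distance threshold is our assumption, not a value from the paper.

```python
import numpy as np
import open3d as o3d

def register_clouds(p_src, p_ref, max_dist=5.0):
    """ICP registration of the point cloud p_src onto the reference p_ref,
    both (N, 3) arrays in mm; returns the aligning 4x4 transformation."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(np.asarray(p_src))
    ref = o3d.geometry.PointCloud()
    ref.points = o3d.utility.Vector3dVector(np.asarray(p_ref))
    result = o3d.pipelines.registration.registration_icp(
        src, ref, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```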

4. Experiment

The proposed dynamic 3D reconstruction system was built as shown in Figure 3. The camera model was MV-CA004-10UC, with a pixel size of 6.9 µm × 6.9 µm, a resolution of 720 × 540 pixels, and a frame rate of 500 fps. The camera exposure time for capturing the dark laser images was 500 µs. The laser model was LXL65050-16, with a wavelength of 650 nm. Both Galvanometer-1 and Galvanometer-2 were model TSH8310. Each galvanometer scans a range of ±20° using a control voltage from −10 V to +10 V, with a maximum scan frequency of 1 kHz and an angular resolution of 0.0008°; thus, the system has the potential for high accuracy and resolution.

4.1. Calibration Accuracy Verification

4.1.1. Dynamic Camera Calibration Accuracy

Twenty-five images of the calibration board were collected with $U_{1pan} = U_{1tilt} = 0$. The camera was calibrated using Zhang's method [26] in OpenCV. The calibrated intrinsic parameters were $f_x = 7801.38$, $f_y = 7798.24$, $u_0 = 359.51$, and $v_0 = 269.54$ (in pixels), corresponding to a focal length of 53.83 mm. According to the dynamic camera calibration method presented in Section 3, the system parameters were solved as $l = 83.45$ mm and $d = 22.14$ mm. Based on these calibration results, the geometric model proposed in Section 2 can be used to calculate the theoretical transformation matrix for the pan-tilt mirrors of Galvanometer-1 at different angles.
The transformation matrices corresponding to these angles are directly measured using the calibration board. The matrix error defined in Equation (14) is calculated between the theoretical matrices $A$ and the measured matrices $B$ for calibration accuracy verification. The pan-tilt voltages of Galvanometer-1 are varied from −10 V to +10 V at intervals of 4 V, giving thirty-six measured positions. The errors between the theoretical and measured transformation matrices are obtained, and the error curves are shown in Figure 4a. The results show an RMSE (root mean square error) of 1.231 mm between the theoretical and measured values.
$$ E(A, B) = \sqrt{ \sum_{i=1}^{4} \sum_{j=1}^{4} \left( A_{ij} - B_{ij} \right)^2 } \tag{14} $$
This confirms the accuracy of the dynamic camera calibration. The observed errors originate from the geometric model and the calibration process, as explained in the error analysis. Note that the measured values obtained for the virtual camera using the calibration board may also exhibit slight deviations. Consequently, these findings validate the accuracy of the dynamic camera calibration but cannot be solely relied upon to assess it; a more detailed verification is conducted based on the outcomes of the 3D reconstruction analysis.

4.1.2. Dynamic Laser Calibration Accuracy

With Galvanometer-1 fixed, the calibration board was positioned within the FOV of the virtual camera. The tilt mirror of Galvanometer-2 was rotated 30 times with a step size of 0.1 V, allowing the system to scan the calibration board; the position of the board was randomly changed five times (ensuring clear imaging in the virtual camera), and the same 30 scans were repeated at each position. The laser rotation axis was solved as $(\mathbf{n}, P) = ([0.99, 0.02, 0.0004], [18310.30, 195.93, 257.97])$. Figure 4b visualizes the laser planes and the rotation axis. Notably, the calibrated rotation axis aligns with the intersection of the laser planes, providing evidence for the accuracy of the dynamic laser calibration. A more detailed accuracy assessment is subsequently performed by analyzing the results of the 3D reconstruction.

4.1.3. Joint Calibration Accuracy

A calibration sphere was selected as the 3D reconstruction target for error correction. Galvanometer-2 was controlled to project the laser stripe onto the sphere while Galvanometer-1 was fixed. The virtual camera, controlled by Galvanometer-1, captured images of the laser stripe from different views. The 3D reconstruction of these laser stripe images was performed based on the calibration results and the geometric models of the 3D dynamic system. The reconstructed point clouds, shown in white in the 'Error Correction' panel of Figure 2a, exhibit non-overlapping regions owing to errors. The correction method described in Algorithm 1 is employed to register these white point clouds, and the registration results are shown as colored point clouds in Figure 2a. The distance between the point clouds before and after the correction is calculated to evaluate the error as follows:
$$ d = \frac{1}{|P_s|} \sum_{i=1}^{|P_s|} \left\| p_t^{i} - p_s^{i} \right\|_2, \tag{15} $$
where $p_s$ represents the point cloud of the laser stripe captured in the first virtual camera view, $p_t$ represents the point cloud of the laser stripe captured from another view, and $|P_s|$ denotes the number of points in $p_s$. The error between $p_t$ and $p_s$ determined by a nearest-neighbor search is denoted Error1. After point-cloud registration, the error between $p_t$ and $p_s$ is calculated as Error2. In addition, the error before correction computed using the matched points from the point-cloud registration result is denoted Error3. The error curves are shown in Figure 4c. The RMSEs of Error1 and Error3 before correction are 4.928 mm and 5.475 mm, respectively, whereas after error correction, the RMSE of Error2 is significantly reduced to 0.197 mm. These results indicate a substantial improvement in accuracy following the error-correction process.
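A sketch of the nearest-neighbor variant of Equation (15), assuming (N, 3) numpy arrays in millimeters; Error2 and Error3 would be computed the same way on registered or matched clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_error(p_t, p_s):
    """Equation (15): mean distance from each point of p_s to its nearest
    neighbor in p_t; both clouds are (N, 3) arrays in mm."""
    dists, _ = cKDTree(p_t).query(p_s)
    return dists.mean()
```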

4.2. 3D Reconstruction Accuracy Verification

4.2.1. Standard Blocks Reconstruction Test

A standard stair block was employed to test the stability of the system and analyze its reconstruction accuracy at different angles. The stair block had a distance of 30 mm between its two planes, with machining errors within 1 µm. The 3D dynamic system proposed in this paper was used to reconstruct the stair block. Scanning was performed by synchronously controlling the tilt mirrors of both Galvanometer-1 and Galvanometer-2, rotating each in steps of 0.1°. Once the scanning and reconstruction processes were completed, a point cloud of the stair block was generated. Two planes of the stairs (Plane-1 and Plane-2) were fitted, and the distance between them was calculated: the point cloud belonging to Plane-2 was used to fit a plane equation using the least-squares method, 500 points belonging to Plane-1 were randomly selected, and the average distance between these points and Plane-2 was taken as the fitted plane distance, as sketched below. The difference between the calculated and actual distances is considered the error, which serves as a measure of the reconstruction accuracy achieved by the system.
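The plane-fitting and distance computation described above might look as follows. The sampling parameters follow the text; the paper uses a least-squares fit, whereas this sketch uses an SVD-based total-least-squares fit, which is a common alternative for the same task.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit: returns a unit normal n and the
    centroid c such that n . (p - c) = 0 for points p on the plane."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    return Vt[-1], c

def stair_height(plane1_pts, plane2_pts, n_samples=500, seed=0):
    """Fit Plane-2, then average the distances of randomly chosen Plane-1
    points to it; compare against the nominal 30 mm step height."""
    n, c = fit_plane(plane2_pts)
    rng = np.random.default_rng(seed)
    sample = plane1_pts[rng.choice(len(plane1_pts), n_samples, replace=False)]
    return np.abs((sample - c) @ n).mean()
```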
The reconstruction distance is 650 mm. The FOV for a single virtual camera is 120 mm × 120 mm, while the dynamic camera’s FOV expands to 1300 mm × 1300 mm (including a 10% overlap area for better stitching), thus enlarging the camera’s imaging range by a factor of 117.4. The scanning range of the dynamic laser is 1100 mm × 1640 mm. The measurement range of the system is determined by the overlapping FOV of the dynamic camera and dynamic laser, which measures 1100 mm × 1300 mm. Thirty different positions are selected to analyze the reconstruction accuracy at different angles. The dynamic camera and laser simultaneously scan the target from these positions to complete the 3D reconstruction process. Figure 5a shows examples of reconstructions obtained from four different positions, providing a visual representation of the reconstructed 3D models. The thickness error, which is related to the rotation angles of Galvanometer-1 and Galvanometer-2, is analyzed, as shown in Figure 5b. It is evident from the graph that the error in 3D reconstruction increases as the rotation angles of the galvanometers deviate from their initial positions (calibration position). This is because the camera’s focal length is adapted to the calibration position, and imaging areas far from the calibration position may become blurred due to defocusing, thereby affecting accuracy. Furthermore, as analyzed in Table 1, errors due to various reasons accumulate more as the distance from the calibration position increases. The RMSE for these thirty positions is calculated as 0.165 mm. These values provide evidence of the high precision achieved by the proposed system for 3D reconstruction.

4.2.2. Accuracy and Reconstruction Range Comparison with Existing Methods

To compare the performance of the proposed method with that of existing methods [12,14,15,30,31,32,33], we conducted comparative experiments using the standard component scanning method. The accuracy of the dynamic light-section 3D system depends primarily on the working distance. To perform a fair comparison, we repeated the standard component scanning procedure at various reconstruction distances, namely, 100, 200, 350, 400, and 1000 mm, which are consistent with the working distances employed in existing methods. As analyzed in the previous experiment, the system’s reconstruction accuracy is related to the scanning position. In order to reduce errors other than calibration errors, we placed the target at the center of the scanning area.
The 3D reconstruction accuracy, the measurement ranges of the traditional methods, and the magnification factor of the proposed method’s reconstruction range compared to traditional methods are presented in Table 2. From the obtained results, it can be concluded that the proposed method exhibits smaller errors and larger measurement ranges than the existing methods at the corresponding working distances. This demonstrates the superior performance of our method in terms of accuracy and range compared with existing methods.

4.2.3. Large Object Scanning Test

A large high-precision machined flat plate and a sphere were also used to test the 3D reconstruction accuracy. The plate measured 740 mm × 740 mm, and the sphere had a diameter of 350 mm. As in the test of Section 4.2.1, the system scanned the targets and obtained their point clouds with the scanning distance set at 650 mm and a 3D measurement range of 1100 mm × 1300 mm. The position and angle of each target were arbitrarily changed within the scanning range, and the reconstruction was repeated three times per target. For the plate target's point cloud, a plane equation was fitted using the RANSAC algorithm; the distances between all points and the fitted plane were calculated, and their average was taken as the reconstruction error of the dynamic 3D system. For the sphere target's point cloud, a sphere was fitted based on the RANSAC algorithm and its diameter was calculated; the difference between the calculated diameter and the sphere's actual diameter was taken as the reconstruction error. The RMSE values over the three measurements were 0.281 mm for the plate and 0.226 mm for the sphere.
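Since the RANSAC sphere fit is not spelled out in the paper, the following is a minimal sketch of one common formulation: sample four points, solve the sphere through them linearly, and keep the model with the most inliers. The iteration count and inlier tolerance are our assumptions.

```python
import numpy as np

def sphere_from_points(pts):
    """Solve x^2 + y^2 + z^2 + D*x + E*y + F*z + G = 0 through four points."""
    A = np.hstack([pts, np.ones((4, 1))])
    b = -np.sum(pts ** 2, axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    center = -0.5 * np.array([D, E, F])
    return center, np.sqrt(center @ center - G)

def ransac_sphere(points, n_iter=500, tol=1.0, seed=0):
    """Keep the sphere with the most points within tol (mm) of its surface."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        try:
            c, r = sphere_from_points(points[rng.choice(len(points), 4, replace=False)])
        except np.linalg.LinAlgError:
            continue                      # degenerate four-point sample
        inliers = np.sum(np.abs(np.linalg.norm(points - c, axis=1) - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = (c, r), inliers
    return best   # (center, radius); diameter error = |2 * radius - 350|
```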
This experimental result demonstrates that the system achieves sub-millimeter reconstruction accuracy within a meter-scale reconstruction range, indicating that the system enables high-precision multi-scale 3D reconstruction across a wide-range reconstruction area. Moreover, a limitation of this system is that the dynamic camera extends the FOV using a galvanometer. When the extension angle is large, there can be some defocus, resulting in blurred images and subsequently affecting the accuracy of the 3D reconstruction. To address this issue, image deblurring algorithms can be used to improve imaging quality, thereby enhancing the reconstruction accuracy at the edges of the FOV.

5. Conclusions

A dynamic light-section 3D reconstruction system that overcomes the trade-off between accuracy and measurement range by using multiple galvanometers was proposed in this paper. A geometric model of the system was established, and a flexible and accurate calibration method was developed. The experimental results demonstrate that the proposed system achieves sub-millimeter accuracy over a meter-scale measurement range, indicating its potential for industrial applications where high-precision and wide-range 3D reconstruction is required. Furthermore, the proposed method can be used in conjunction with an active tracking system for the 3D reconstruction of moving targets, which will be introduced in our subsequent work.

Author Contributions

Conceptualization, M.C. and I.I.; methodology, M.C., Q.G. and I.I.; software, M.C. and Q.L.; validation, M.C. and S.H.; formal analysis, M.C., Q.G. and I.I.; investigation, M.C. and K.S.; resources, K.S. and I.I.; data curation, M.C. and Q.L.; writing—original draft preparation, M.C.; writing—review and editing, M.C., S.H. and I.I.; supervision, Q.G. and I.I.; project administration, I.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Wu, J.; Lian, K.; Deng, Y.; Jiang, P.; Zhang, C. Multi-Objective Parameter Optimization of Fiber Laser Welding Considering Energy Consumption and Bead Geometry. IEEE Trans. Autom. Sci. Eng. 2022, 19, 3561–3574.
2. Du, X.; Chen, Q. Dual-Laser Goniometer: A Flexible Optical Angular Sensor for Joint Angle Measurement. IEEE Trans. Ind. Electron. 2021, 68, 6328–6338.
3. Deng, B.; Wu, W.; Li, X.; Wang, H.; He, Y.; Shen, G.; Tang, Y.; Zhou, K.; Zhang, Z.; Wang, Y. Active 3-D Thermography Based on Feature-Free Registration of Thermogram Sequence and 3-D Shape Via a Single Thermal Camera. IEEE Trans. Ind. Electron. 2022, 69, 11774–11784.
4. Gazafrudi, S.M.M.; Younesian, D.; Torabi, M. A High Accuracy and High Speed Imaging and Measurement System for Rail Corrugation Inspection. IEEE Trans. Ind. Electron. 2021, 68, 8894–8903.
5. Zhao, Y.J.; Xiong, Y.X.; Wang, Y. Three-dimensional accuracy of facial scan for facial deformities in clinics: A new evaluation method for facial scanner accuracy. PLoS ONE 2017, 12, e0169402.
6. Wei, C.; Sihai, C.; Dong, L.; Guohua, J. A compact two-dimensional laser scanner based on piezoelectric actuators. Rev. Sci. Instrum. 2015, 86, 013102.
7. Czimmermann, T.; Chiurazzi, M.; Milazzo, M.; Roccella, S.; Barbieri, M.; Dario, P.; Oddo, C.M.; Ciuti, G. An Autonomous Robotic Platform for Manipulation and Inspection of Metallic Surfaces in Industry 4.0. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1691–1706.
8. Xu, X.; Fei, Z.; Yang, J.; Tan, Z.; Luo, M. Line structured light calibration method and centerline extraction: A review. Results Phys. 2020, 19, 103637.
9. Yin, S.; Ren, Y.; Guo, Y.; Zhu, J.; Yang, S.; Ye, S. Development and calibration of an integrated 3D scanning system for high-accuracy large-scale metrology. Measurement 2014, 54, 65–76.
10. Xiao, J.; Hu, X.; Lu, W.; Ma, J.; Guo, X. A new three-dimensional laser scanner design and its performance analysis. Optik 2015, 126, 701–707.
11. Jiang, T.; Cui, H.; Cheng, X. Accurate Calibration for Large-Scale Tracking-Based Visual Measurement System. IEEE Trans. Instrum. Meas. 2021, 70, 5003011.
12. Wang, T.; Yang, S.; Li, S.; Yuan, Y.; Hu, P.; Liu, T.; Jia, S. Error Analysis and Compensation of Galvanometer Laser Scanning Measurement System. Acta Opt. Sin. 2020, 40, 1–13.
13. Eisert, P.; Polthier, K.; Hornegger, J. A mathematical model and calibration procedure for galvanometric laser scanning systems. In Proceedings of the Vision, Modeling, and Visualization, Berlin, Germany, 4–6 October 2011; Volume 591, pp. 207–214.
14. Yu, C.; Chen, X.; Xi, J. Modeling and calibration of a novel one-mirror galvanometric laser scanner. Sensors 2017, 17, 164.
15. Yang, S.; Yang, L.; Zhang, G.; Wang, T.; Yang, X. Modeling and calibration of the galvanometric laser scanning three-dimensional measurement system. Nanomanuf. Metrol. 2018, 1, 180–192.
16. Ying, X.; Peng, K.; Hou, Y.; Guan, S.; Kong, J.; Zha, H. Self-Calibration of Catadioptric Camera with Two Planar Mirrors from Silhouettes. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1206–1220.
17. Wu, Z.; Radke, R.J. Keeping a Pan-Tilt-Zoom Camera Calibrated. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1994–2007.
18. Schmidt, A.; Sun, L.; Aragon-Camarasa, G.; Siebert, J.P. The Calibration of the Pan-Tilt Units for the Active Stereo Head. In Proceedings of the Vision, Modeling, and Visualization, Bayreuth, Germany, 10–12 October 2016; Volume 389, pp. 213–221.
19. Kumar, S.; Micheloni, C.; Piciarelli, C. Stereo localization using dual PTZ cameras. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Münster, Germany, 2–4 September 2009; Volume 5702, pp. 1061–1069.
20. Junejo, I.N.; Foroosh, H. Optimizing PTZ camera calibration from two images. Mach. Vis. Appl. 2012, 23, 375–389.
21. Kumar, S.; Micheloni, C.; Piciarelli, C.; Foresti, G.L. Stereo rectification of uncalibrated and heterogeneous images. Pattern Recognit. Lett. 2010, 31, 1445–1452.
22. Han, Z.; Zhang, L. Modeling and Calibration of a Galvanometer-Camera Imaging System. IEEE Trans. Instrum. Meas. 2022, 71, 5016809.
23. De Boi, I.; Sels, S.; Penne, R. Semidata-Driven Calibration of Galvanometric Setups Using Gaussian Processes. IEEE Trans. Instrum. Meas. 2022, 71, 2503408.
24. De Boi, I.; Sels, S.; De Moor, O.; Vanlanduit, S.; Penne, R. Input and Output Manifold Constrained Gaussian Process Regression for Galvanometric Setup Calibration. IEEE Trans. Instrum. Meas. 2022, 71, 2509408.
25. Hu, S.; Matsumoto, Y.; Takaki, T.; Ishii, I. Monocular stereo measurement using high-speed catadioptric tracking. Sensors 2017, 17, 1839.
26. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1, pp. 666–673.
27. Yi, S.; Min, S. A Practical Calibration Method for Stripe Laser Imaging System. IEEE Trans. Instrum. Meas. 2021, 70, 5003307.
28. Li, Z.; Ma, L.; Long, X.; Chen, Y.; Deng, H.; Yan, F.; Gu, Q. Hardware-Oriented Algorithm for High-Speed Laser Centerline Extraction Based on Hessian Matrix. IEEE Trans. Instrum. Meas. 2021, 70, 5010514.
29. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 14–15 November 1992; Volume 1611, pp. 586–606.
30. Li, X.; Liu, B.; Mei, X.; Wang, W.; Wang, X.; Li, X. Development of an in-situ laser machining system using a three-dimensional galvanometer scanner. Engineering 2020, 6, 68–76.
31. He, K.; Sui, C.; Huang, T.; Zhang, Y.; Zhou, W.; Chen, X.; Liu, Y.H. 3D surface reconstruction of transparent objects using laser scanning with a four-layers refinement process. Opt. Express 2022, 30, 8571–8591.
32. Zexiao, X.; Jianguo, W.; Ming, J. Study on a full field of view laser scanning system. Int. J. Mach. Tools Manuf. 2007, 47, 33–43.
33. Chi, S.; Xie, Z.; Chen, W. A laser line auto-scanning system for underwater 3D reconstruction. Sensors 2016, 16, 1534.
Figure 1. System design.
Figure 2. Framework of dynamic 3D reconstruction method. (a) The flowchart of dynamic light-section 3D reconstruction. (b) Geometric modeling and calibration of dynamic 3D reconstruction system.
Figure 3. The 3D dynamic reconstruction system based on multiple galvanometers and light section.
Figure 4. Calibration accuracy verification. (a) Error of dynamic camera transformation matrix. (b) Visualization of laser planes and the calibrated rotation axis. (c) Error curve before and after correction.
Figure 5. Standard blocks reconstruction test at different angles. (a) The point clouds of the stair at different angles. (b) 3D reconstruction error distribution at different angles.
Table 1. Error sources and analysis.

No. | Error | Source and Analysis
1 | Rotation angle of galvanometer mirrors ($\theta_1, \theta_2$) | Non-linear deviations exist between the voltage and the rotation angle of Galvanometer-1.
2 | Dynamic camera geometric model | The specular reflection geometric model deviates from the actual mechanical structure.
3 | Calibration of parameters ($l, d$) | This error is optimized by the proposed error model and objective function.
4 | Camera intrinsic parameter calibration | This non-linear error is optimized using Zhang's calibration method [26].
5 | Rotation angle of galvanometer mirrors ($\theta_3, \theta_4$) | Non-linear deviations exist between the voltage and the rotation angle of Galvanometer-2.
6 | Dynamic laser geometric model | This error depends on the accuracy of the laser's mechanical installation.
7 | Calibration of the laser rotation axis | This error is optimized by the proposed objective function.
8 | Laser center curve extraction | The center extraction algorithm is based on the Hessian matrix and ensures high extraction accuracy.
Table 2. Accuracy and reconstruction range comparison of traditional and proposed methods.

Working Distance (mm) | Traditional Method | Accuracy (mm) | Range (mm) | Proposed Accuracy (mm) | Proposed Range (mm) | Factor of Expanded Range
100 | NOM-LSS 2017 [14] | 0.01 | 10 × 10 | 0.011 | 130 × 200 | 260
250 | 3DM-LS 2018 [15] | 0.1 | 80 × 80 | 0.057 | 350 × 500 | 27.3
250 | EAC-LSS 2020 [12] | 0.061 | 80 × 80 | 0.057 | 350 × 500 | 27.3
350 | IS-LSS 2020 [30] | 0.08 | 200 × 200 | 0.08 | 500 × 700 | 8.75
400 | FFV-LSS 2007 [32] | 0.222 | 150 × 200 | 0.092 | 650 × 800 | 17.3
400 | FLR-LSS 2022 [31] | 0.1 | 150 × 200 | 0.092 | 650 × 800 | 17.3
1000 | U3D-LSS 2016 [33] | 0.382 | 200 × 300 | 0.314 | 1600 × 2000 | 53.3