Article

The Flatness Error Evaluation of Metal Workpieces Based on Line Laser Scanning Digital Imaging Technology

1 School of Software Engineering, Chengdu University of Information Technology, Chengdu 610225, China
2 China Productivity Center for Machinery Co., Ltd., No. 2 Shouti South Road, Haidian District, Beijing 100044, China
3 School of Computing and Engineering, University of Huddersfield, Huddersfield HD1 3DH, UK
* Author to whom correspondence should be addressed.
Photonics 2023, 10(12), 1333; https://doi.org/10.3390/photonics10121333
Submission received: 23 October 2023 / Revised: 27 November 2023 / Accepted: 28 November 2023 / Published: 30 November 2023
(This article belongs to the Special Issue Emerging Topics in Structured Light)

Abstract

With the development of intelligent manufacturing, the required production and assembly accuracy of components in factories continues to increase. However, traditional manual quality inspection is inefficient, inaccurate, and costly. To this end, digital and optical imaging techniques are used to achieve intelligent quality inspection. During the reconstruction process, however, the high reflectivity of object materials affects the speed and accuracy of the reconstruction results. To overcome these problems, this study investigated three-dimensional (3D) digital imaging techniques based on line laser scanning. It proposes a novel image segmentation method based on deep learning that improves both the accuracy of the reconstruction results and the processing speed. After the reconstruction phase, flatness tolerance is assessed using point cloud registration technology. Finally, we constructed a measurement platform costing less than CNY 100,000 (about USD 14,000) and obtained a measurement accuracy of 30 microns.

1. Introduction

With the growing demand for parts precision, quality control is integrated throughout the product development process, from design to inspection. As the final step of product development, measurement and inspection are significant to the performance of the complex assembly. After the emergence of intelligent workshops, most workshop technicians were assigned to quality inspection positions. To advance intelligent manufacturing, many engineers and researchers have conducted related research to address the slow speed and low accuracy caused by long hours of manual inspection of products [1,2,3,4,5,6]. At present, intelligent measurement is mainly employed in industry, machinery, medical treatment and the digitization of cultural relics, and the measurement methods for products are divided into contact and non-contact measurements. In contact measurement, many scholars use coordinate measuring machines (CMMs) for product measurement. This highly accurate method can accomplish measurements with strict requirements for specific tolerances; however, the measurement process relies on manual intervention. With the development of measurement technology, researchers have developed non-contact measurement methods to overcome the disadvantages of contact measurement, and non-contact methods have gradually become widely used in industrial applications. Reconstruction and measurement via optical methods have recently become popular. The main categories of optical measurement methods include laser-based measurement systems, photogrammetry, and fringe projection [7,8]. Many scholars have used optical measurement methods for two-dimensional (2D) defect detection [9]. Wang et al. combined laser infrared thermography with deep learning to detect the defect shape and size of carbon fiber-reinforced polymer [10]. They proposed a novel method to determine the defect depth via a long short-term memory recurrent neural network (LSTM-RNN) model. Still, this method cannot obtain a measurement result as accurate as traditional measurement methods and does not provide width and length information. Cao et al. reconstructed the 3D surface of a rail using a structured light system and analyzed defects in the reconstruction data [11]. That study proposed line laser scanning for rail surface defect inspection, but it only obtains 2D defect area information. Tao et al. used grating structured light for 3D reconstruction of objects with a six-step phase unwrapping algorithm [12], a cost-effective solution for 3D measurement. However, the photoelastic fringes are influenced by the polycarbonate disk, because the disk may deform after loading and produce unwanted fringes; the reconstruction speed and accuracy are therefore not ideal, with an accuracy of only 60–80 mm. Li et al. used mobile laser scanning to estimate and calculate the total leaf area [13]. That study addressed the accuracy problem by controlling the moving speed. Although the accuracy was improved, the moving distance is large, which suits measuring large objects but not high-precision workpiece measurement.
In the defect detection and measurement research area, Chen et al. proposed a method to identify and classify robotic weld joints using line laser scanning and deep learning [14]. Xiao et al. classified welded joints as discontinuous or continuous using deep learning and point cloud data obtained from line laser scanning [15]. Transistor outline optical devices are discarded when the overflow silver on the avalanche photodiode chips exceeds the qualified height. Liu et al. therefore used the line laser scanning approach to detect overflow silver on avalanche photodiode chips, designing and implementing an optical 3D slice intelligent measurement system based on line laser scanning [16]. Although most of these studies can complete 3D reconstruction and 3D measurement of the object, a common problem is that highly reflective object materials cause strong reflections of the laser light during measurement, which introduces complex noise and greatly degrades the accuracy of the measurement results [17,18].
To deal with the problem of high reflection during the measurement process, related research has also been performed in recent years. He et al. reconstructed objects using a dual monocular structured light system to obtain images from different angles and fused the images to compensate for the reflective area [19]. Zhu et al. proposed a method based on enhanced polarization and Gray-code fringe structured light to reconstruct high dynamic range objects [20]. Karami et al. proposed a fusion method that replaces the low spatial frequencies of photometric stereo with the corresponding photogrammetric frequencies in the Fourier domain [21]. These studies captured many images and used image fusion to handle the reflection problem and realize 3D reconstruction and measurement. However, such methods need to capture many images and spend time fusing images captured under different conditions. Pei et al. proposed a hybrid approach that combines fringe projection profilometry and photometric stereo, feeding the data into a deep neural network that estimates the normal map of the object surface to obtain a fully intact point cloud [22]. This approach can obtain submillimeter-level measurement results, but it needs to perform phase unwrapping, which is time-consuming. He et al. proposed a laser tracking frame-to-frame method to solve the reflection problem and reconstruct transparent objects [23]. Wu et al. used grating fringe structured light to encode the unsaturated luminance of pre-projected multiple grating fringe patterns, so that every position of the measured object and every image pixel are guaranteed unsaturated grayscale values, which avoids reflection during the measurement process [24]. The above studies determined which pixel points to keep either by setting a threshold based on the camera model and the specific camera pixel size, or by generating projected images of different luminance. Although these methods are feasible, they rely on a priori values to determine the threshold range, and the threshold range influences the measurement accuracy. Li et al. proposed a post-processing-based approach to reconstruct three-dimensional shapes [25]. They obtained a complete reconstruction by projecting non-overlapping grating fringe patterns onto the surface of the measured object, ignoring part of the reflective region and processing only the non-reflective region, and then performing 3D point cloud matching after a rough reconstruction of the object contour by moiré profilometry. As a result, this measurement loses shape detail in the reflective areas.
Based on the current state of research above, the methods for dealing with the reflection problem can be summarized in three categories: (1) adjusting the luminance to ensure that the grey values of the pixels in the acquired image are not overexposed; (2) acquiring images from different angles and then performing multi-angle reconstruction of the object shape; and (3) capturing multiple exposures and fusing the images to obtain an image that is not overexposed for reconstruction. These methods require capturing multiple images or adjusting the luminance and light intake. The process is cumbersome and time-consuming, and it cannot satisfy a real industrial assembly line, where images are captured only once. Therefore, this study proposes a deep-learning-based image segmentation method that suppresses reflection effects using a single image instead of multiple images.
Many scholars have also conducted research on flatness detection. The standard method to measure flatness is to obtain every point on the measured object with the help of mechanical precision instruments. Vanrusselt [26] and Pathak [27] reviewed contact flatness measurement methods and concluded that flatness results obtained via contact measurement are always affected by the error of the artefact used. Contact measurement has another disadvantage: it is not only slow but also requires manual assistance to complete the measurement. Xiao et al. [28] analyzed and compensated the surface flatness of thin-walled valve body parts using contact measurement with a wireless touch-trigger probe installed on a three-axis vertical machine. Wang [29] proposed a way to obtain circular saw flatness using a point laser displacement sensor. Although it can extract the point cloud of the measured object via non-contact measurement, the measurement efficiency is low, and the experimental results were only compared with the contact measurement result. Liu et al. [30] proposed a flatness error evaluation method based on the marine predator algorithm, which aimed to reduce the calculation time, and compared it with the particle swarm optimization algorithm and other common plane fitting methods. However, this method cannot be applied to parts with complex surface texture. The present study proposes using point cloud registration to obtain the flatness error of the measured workpiece and determine whether the workpiece manufacturing is qualified according to the specified tolerance. In the studies above, the average measurement accuracy generally reaches 70–130 microns.
To better fit the actual measurement scenario and obtain faster and more accurate measurement results, this study focuses on the reflection of the object and on the overall flatness error assessment against the designed computer-aided design (CAD) model, using the line laser scanning reconstruction system.
The rest of this paper is organized as follows. Section 2 presents the principle of line laser 3D reconstruction. Section 3 describes the calibration process of the line laser system and proposes a method to reduce the time spent on calibrating the stepper motor slide system. Section 4 presents the approach for handling the reflective scattering regions that appear on objects made of reflective materials when the line laser is turned on; it compares the image difference method with the proposed deep-learning-based image segmentation approach and improves the structure of the neural network to obtain more accurate laser region segmentation results. Section 5 presents an overall flatness error assessment method. Finally, the proposed and improved methods are verified through experiments in Section 6, and conclusions are drawn from the experimental results in Section 7.

2. The Principle of Line Laser Scanning Reconstruction System

The line laser scanning reconstruction system consists of five components: a line laser emitter, an industrial camera, a stepper motor slide device, a signal control board and a personal computer that performs the image processing. The schematic diagram is shown in Figure 1. The line laser emitter projects the line laser onto the surface of the measured objects. The industrial camera acts as the acquisition device, capturing the line laser stripe images deformed by the height modulation of the object and storing them on the personal computer. The signal control board receives electrical signals from the PC and controls the movement of the stepper motor slide through the high- and low-frequency cycles of the electrical signals. The stepper motor slide device holds the objects and performs the dynamic scanning. The personal computer completes the algorithm and software development for line laser scanning 3D reconstruction, including the synchronous acquisition of the industrial camera and stepper motor slide module, the system calibration module, the imaging module, etc.
Before scanning the object, the industrial camera should be calibrated, followed by calibration of the stepper motor slide device. In addition, the line laser emitter must be turned on so that images can be captured by the industrial camera and the centerline can be extracted to complete the light plane fitting; this determines the relative position between the industrial camera and the line laser emitter and defines the reference plane for object reconstruction. After the above work is completed, acquisition for 3D reconstruction can start, and one scanned image is obtained for each movement of the stepper motor slide device. A rigid transformation is performed through the pinhole imaging model and the mapping between the camera imaging coordinate systems. With the calibrated parameters, the centerline of the line laser stripe in each acquired image is extracted, and the pixel points on the extracted centerline are converted to 3D coordinates. The transformed point data are then spliced according to the calibration information of the stepper motor slide motion to complete the 3D reconstruction of the measured object. Figure 2 shows the overall flowchart of 3D reconstruction by the line laser scanning method.
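To make the triangulation and splicing step concrete, the following is a minimal sketch in Python/NumPy (the paper does not state its implementation language) of how a single frame's extracted centerline can be converted to 3D points by intersecting the camera rays with the calibrated light plane and then offsetting them by the calibrated slide motion. It assumes the light plane AX + BY + CZ + D = 0 is expressed in the camera frame and that lens distortion has already been removed; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def frame_to_points(centerline_uv, K, plane, frame_index, unit_move):
    """Convert one frame's laser centerline pixels to 3D points and splice them.

    centerline_uv : (N, 2) array of (u, v) pixel coordinates on the centerline
    K             : 3x3 camera intrinsic matrix
    plane         : (A, B, C, D) light plane coefficients in the camera frame
    frame_index   : index of the scanned frame
    unit_move     : 3-vector, slide displacement per frame (Equation (4))
    """
    A, B, C, D = plane
    n = np.array([A, B, C], dtype=float)

    # Back-project each pixel to a viewing ray d = K^-1 [u, v, 1]^T
    uv1 = np.hstack([centerline_uv, np.ones((len(centerline_uv), 1))])
    rays = (np.linalg.inv(K) @ uv1.T).T                     # (N, 3)

    # Intersect the ray s*d with the light plane: s = -D / (n . d)
    s = -D / (rays @ n)
    points = rays * s[:, None]                              # camera-frame 3D points

    # Splice frames using the calibrated unit move distance of the slide
    return points + frame_index * np.asarray(unit_move, dtype=float)

# Usage (illustrative): accumulate the cloud over all scanned frames
# cloud = np.vstack([frame_to_points(extract_centerline(img), K, plane, i, u)
#                    for i, img in enumerate(frames)])
```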

3. The Process of the System Calibration and Light Plane Fitting

There are three steps in the line laser scanning system calibration process, i.e., industrial camera calibration, stepper motor slide device calibration and line laser plane fitting.

3.1. Industrial Camera Calibration

Four coordinate systems are involved in the mapping relationship between the camera imaging and the real-world position, as shown in Figure 3. Due to unavoidable errors in the processing and assembly of the industrial camera's optical lens and CMOS sensor, the camera image is distorted, causing the actual imaging point to deviate from its theoretical position. Therefore, the distortion needs to be eliminated through camera calibration. At the same time, the parameters of the camera imaging model are obtained, which reveal the mapping relationship between the real object and the pixel coordinate system. Then, the 3D coordinate conversion can be completed using the calibration parameters.
In Figure 3, the world coordinate system (Ow-XwYwZw) refers to the coordinate position of the object in real space; each calibrated image has its own world coordinate system. The camera coordinate system (Oc-XcYcZc) takes the center of the optical lens as the origin, and conversion between it and the world coordinate system is achieved by rotation and translation. The image coordinate system (Oxy-xy) is the projection of the image onto the image plane in the camera coordinate system and only requires two-dimensional coordinates. The pixel coordinate system (Ouv-uv) is built on the pixel points of the captured image. The imaging sensor is not guaranteed to be mounted exactly perpendicular to the camera's optical axis, so the coordinate mapping includes a skew angle, which is handled by an affine transformation. According to the camera imaging mapping, the relationship is expressed in Equation (1), where Pi and Po are the internal and external parameter matrices obtained from camera calibration, respectively.
$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=[P_i\mid 0]\,P_o\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}=\begin{bmatrix}\frac{f}{d_x} & \frac{\cot\omega}{d_x} & u_0 & 0\\ 0 & \frac{f}{d_y\sin\omega} & v_0 & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}R_{3\times 3} & T_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}\tag{1}$$
In practice, however, the optical lens design and non-linear factors such as the mounting position of the industrial camera's CMOS sensor cause deviations in the imaging, and the captured image exhibits concave (pincushion) and convex (barrel) distortion. To further improve the accuracy of the calibration results, the elimination of radial distortion was studied [31], and the true mapping relationship was obtained as the following equation:
$$m=x\,(1+k_1 r^2+k_2 r^4+k_3 r^6),\qquad n=y\,(1+k_1 r^2+k_2 r^4+k_3 r^6)\tag{2}$$
where (x, y) is the ideal coordinate, (m, n) is the radially distorted coordinate, k1, k2, and k3 are Taylor expansion coefficients, r is the radial distance of (x, y) from the distortion center, and (Odx, Ody) are the distortion center coordinates. The internal and external parameters of the camera can be obtained by identifying the feature points of the calibration target.
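As an illustration of the radial distortion model in Equation (2), the NumPy sketch below maps ideal coordinates to distorted coordinates given calibrated coefficients. The handling of the distortion center (Odx, Ody) is our assumption, and the names are illustrative; in practice the inverse mapping used to undistort images is solved by calibration tools rather than by this forward model.

```python
import numpy as np

def apply_radial_distortion(xy_ideal, k1, k2, k3, center=(0.0, 0.0)):
    """Map ideal coordinates (x, y) to distorted coordinates (m, n) per Equation (2).

    xy_ideal : (N, 2) ideal (undistorted) coordinates
    k1..k3   : radial distortion coefficients (Taylor expansion terms)
    center   : assumed distortion center (Odx, Ody)
    """
    xy = np.asarray(xy_ideal, dtype=float) - np.asarray(center)
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)          # r^2 measured from the center
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # 1 + k1 r^2 + k2 r^4 + k3 r^6
    return xy * factor + np.asarray(center)
```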

3.2. Improved Stepper Motor Slide Calibration

Since the line laser dynamic scanning can only acquire the reconstruction data of one laser stripe at a time, the motion device needs to be calibrated for direction and distance.
Most traditional motion platform calibration methods rely on high-precision standard 3D targets with characteristic points. However, these targets are expensive, so using them only for calibration makes the budget too high, and the number of characteristic points is limited. Therefore, some scholars have instead calibrated motion stages using the feature points of 2D targets, obtaining the calibration data by detecting the displacement of the upper-left corner point or the center corner point of a checkerboard calibration board several times. This approach first requires multiple movements, which makes the calibration process complicated. Secondly, the calibration is based on a single specific corner point of the 2D target, so the obtained data are not accurate enough.
This study proposed an improved calibration method based on motion vectors and mean values to address these problems. By detecting all the corner points of the 2D chessboard grid at once, this method ensures calibration accuracy and saves time by completing the calibration in one operation. Figure 4 shows the improved stepper motor slide device calibration algorithm proposed in this study.
The motion vector-based calibration method detects the target feature points in the two sets of images. After judging their displacement differences, the pixel coordinates of the feature points at the two locations are converted to the same coordinate system according to the camera calibration parameters. Finally, the world coordinates of the two sets are differenced along the three directional axes by Equation (3), and the unit move distance is calculated from the displacement and the camera frame rate according to Equation (4), where (X0, Y0, Z0) are the world coordinates converted from the feature points acquired before the move, (Xm, Ym, Zm) are the world coordinates converted from the feature points acquired after the move, and n is the number of images acquired by the camera in one second, i.e., the camera frame rate. The obtained calibration result provides the conditions for the subsequent point cloud splicing operation.
$$\delta=\begin{bmatrix}X_m\\ Y_m\\ Z_m\end{bmatrix}-\begin{bmatrix}X_0\\ Y_0\\ Z_0\end{bmatrix}\tag{3}$$
$$u=\frac{\delta}{n}\tag{4}$$
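A minimal sketch of this calibration in Equations (3) and (4): all detected chessboard corners are differenced between the two positions, the displacement vectors are averaged, and the mean is divided by the camera frame rate. The names are illustrative, and the corner coordinates are assumed to have already been converted to world coordinates with the camera calibration parameters.

```python
import numpy as np

def calibrate_slide(corners_before, corners_after, frame_rate):
    """Estimate the per-frame unit move distance of the stepper motor slide.

    corners_before, corners_after : (N, 3) world coordinates of the same
                                    chessboard corners before and after the move
    frame_rate                    : n, images acquired by the camera per second
    """
    # Equation (3): displacement of every corner, averaged over all corners
    delta = np.mean(np.asarray(corners_after) - np.asarray(corners_before), axis=0)

    # Equation (4): unit move distance per captured frame
    return delta / frame_rate
```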

3.3. Light Plane Fitting

The light plane fitting has two purposes. The first is to determine the relative position relationship between the line laser emitter device and the industrial camera, and the second is to determine the reference plane for the 3D reconstruction of the object under measurement. That means the subsequent reconstruction is performed on the light plane equation obtained from the fitting.
The light plane fitting is a two-step operation. First, the centerline of the line laser stripe needs to be extracted, because the light intensity follows a Gaussian distribution under ideal conditions, so the centerline is the brightest, contains the most information, and is the most accurate. The second step is to fit the line laser light plane equation to the pixels extracted from the centerlines of the line laser stripes projected on the calibration plate at different positions. Figure 5 shows the schematic diagram of the light plane fitting.
The most commonly used line laser extraction methods include the extreme value method, the Steger algorithm based on normal vector calculation, the directional template algorithm and the grey centroid algorithm. The extreme value method and the grey centroid algorithm have the advantage of less computation time, but the centerline extraction accuracy is low. The Steger algorithm can yield high-accuracy extraction results, but it takes a long time to calculate due to the Hessian matrix, which consists of second derivatives. The directional template algorithm requires convolution of the pixel points with the template, which is also time-consuming.
Many scholars have studied line laser centerline extraction algorithms in recent years. This study used the hybrid improved line laser centerline extraction algorithm proposed by Mao et al. [32]. The algorithm combines the skeleton thinning algorithm and the grey centroid algorithm: it speeds up the thinning operation of the original skeleton thinning algorithm, and a high-power weighted grey centroid method extends the grey centroid algorithm to ensure accuracy. This algorithm is not as fast as the grey centroid algorithm alone, but it balances the speed and precision of line laser centerline extraction. Figure 6 shows the result of line laser centerline extraction.
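For reference, the sketch below shows only the basic grey centroid step (column-wise intensity-weighted centroid), not the full hybrid skeleton-thinning plus high-power weighted grey centroid algorithm of [32]; the threshold value is illustrative.

```python
import numpy as np

def gray_centroid_centerline(gray, threshold=30):
    """Basic grey centroid centerline extraction, column by column.

    gray      : 2D grayscale image with a roughly horizontal laser stripe
                (one stripe crossing per image column)
    threshold : minimum intensity treated as part of the stripe
    Returns (u, v) sub-pixel centerline coordinates, one per occupied column.
    """
    h, w = gray.shape
    rows = np.arange(h, dtype=float)
    centers = []
    for u in range(w):
        col = gray[:, u].astype(float)
        col[col < threshold] = 0.0
        total = col.sum()
        if total > 0:                            # column intersects the stripe
            v = (col * rows).sum() / total       # intensity-weighted centroid row
            centers.append((u, v))
    return np.array(centers)
```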
Then, the light plane equation, given as Equation (5) below, is fitted by the least-squares method to the extracted line laser centerlines at different positions, and the parameters of the light plane are finally obtained.
A X + B Y + C Z + D = 0
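A minimal least-squares plane fit consistent with the D = 1 normalization that also appears in the fitted result of Section 6 (Equation (8)) can be sketched as follows; it assumes the centerline points from several calibration-plate poses have already been converted to 3D coordinates and that the plane does not pass through the origin. The names are illustrative.

```python
import numpy as np

def fit_light_plane(points):
    """Fit AX + BY + CZ + 1 = 0 to centerline points by linear least squares.

    points : (N, 3) 3D coordinates of extracted centerline points collected
             from the calibration plate at several positions
    Returns (A, B, C, D) with D normalized to 1.
    """
    P = np.asarray(points, dtype=float)
    # Solve P @ [A, B, C]^T = -1 in the least-squares sense
    coeffs, *_ = np.linalg.lstsq(P, -np.ones(len(P)), rcond=None)
    A, B, C = coeffs
    return A, B, C, 1.0
```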

4. Removing the Reflection Affected by the Object Material

After the calibration of the system is completed, the calibration parameters of the camera and the light plane fitting parameters required for 3D reconstruction are obtained. According to these parameters, the next step is to scan and reconstruct the object. Although 3D reconstruction and measurement can be accomplished on some dark plastic objects, on reflective objects made of special materials such as metals the line laser stripe is reflected and scattered, leading to inaccurate reconstruction. Thus, this study set out to solve the problems caused by reflection and scattering.

4.1. Traditional Method

The image difference method is used in highlight image processing to deal with overexposure while minimizing the number of image acquisitions. This algorithm calculates the grey value difference between the foreground image and the background image, as shown in Equation (6), where Gfore is the grey value of a specific pixel in the foreground image, Gback is the grey value of the same pixel in the background image, and Gdiff is the grey value difference of the corresponding pixel. This algorithm highlights the region of the line laser stripe and eliminates the noise from other light sources that do not contain the line laser stripe.
$$G_{diff}=G_{fore}-G_{back}\tag{6}$$
This study also used the image difference method in the 3D measurement of objects made of glass. After the image difference operation, to speed up the line laser centerline extraction and avoid pixel-by-pixel calculation over the whole image, an edge detection operator is applied to extract the edges of the line laser stripe.
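A minimal sketch of this difference-plus-edge-detection step, assuming OpenCV is available; the subtraction clips negative values to zero, which is one possible reading of Equation (6), and the file paths and thresholds are illustrative.

```python
import cv2

def laser_stripe_edges(foreground_path, background_path):
    """Isolate the laser stripe by image differencing, then extract its edges.

    foreground_path : image captured with the line laser turned on
    background_path : image captured with the line laser turned off
    """
    fore = cv2.imread(foreground_path, cv2.IMREAD_GRAYSCALE)
    back = cv2.imread(background_path, cv2.IMREAD_GRAYSCALE)

    # Equation (6): Gdiff = Gfore - Gback (cv2.subtract clips negatives to zero)
    diff = cv2.subtract(fore, back)

    # Edge detection restricts later centerline extraction to the stripe region
    edges = cv2.Canny(diff, 50, 150)
    return diff, edges
```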
Although the influence of other noise can be removed, a scattering phenomenon still remains at the edge of the obtained line laser stripe due to the color and material of the measured object surface, as shown in Figure 7, and this also affects the reconstruction accuracy. Although this algorithm reduces the number of images acquired compared with the multiple exposure method and the photometric stereo method, it still requires one acquisition for the foreground image and one for the background image (with the laser turned on and off, respectively). This does not meet the needs of a real assembly line, where 3D measurement must be completed in a single acquisition.

4.2. A Novel Image Segmentation Method Based on Deep Learning

To further ensure that the speed and accuracy of line laser centerline extraction are not affected by the reflection and scattering phenomenon, the real area of the line laser stripe must be determined. This study proposed a deep-learning image segmentation method to determine the line laser centerline extraction region, reducing both the effect of reflection and scattering and the centerline extraction time.
Deep learning requires a certain amount of data to train the model. During the scanning of the measured object, many images were collected in this study. Figure 8 shows the schematic diagram of the captured data.
Due to the specificity of the measured object and the collected data, no publicly available dataset can be applied in this study, so data enhancement operations were performed on the collected image data. Common image enhancement methods, including the gamma transform, the logarithmic transform, global histogram equalization, contrast-limited adaptive histogram equalization and Laplacian enhancement, were applied to expand the dataset. Figure 9 shows the schematic diagram of the data generated by the different enhancement algorithms.
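The listed enhancement operations can be sketched as follows with OpenCV and NumPy; the parameter values (gamma, CLAHE clip limit, Laplacian sign convention, etc.) are illustrative and not necessarily those used in this study.

```python
import cv2
import numpy as np

def augment(gray):
    """Generate the enhancement variants listed above from one grayscale image."""
    out = {}

    # Gamma transform (illustrative gamma)
    gamma = 0.6
    table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)]).astype(np.uint8)
    out["gamma"] = cv2.LUT(gray, table)

    # Logarithmic transform
    c = 255.0 / np.log1p(255.0)
    out["log"] = (c * np.log1p(gray.astype(np.float32))).astype(np.uint8)

    # Global histogram equalization
    out["hist_eq"] = cv2.equalizeHist(gray)

    # Contrast-limited adaptive histogram equalization (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    out["clahe"] = clahe.apply(gray)

    # Laplacian enhancement: subtract the Laplacian to sharpen edges
    lap = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    out["laplace"] = cv2.convertScaleAbs(gray.astype(np.int16) - lap)

    return out
```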
After the data enhancement was completed, the dataset was annotated. The annotation was divided into three categories: the real laser line, the reflective scattering artifact, and the ambient light. The schematic diagram of the labelled result is shown in Figure 10 below.
Regarding the selection of deep learning network models, current models fall into Vision Transformer models and convolutional neural networks. Vision Transformer models divide the image into multiple patches and learn both the features within the patches and the global features between them, so they can learn image features well compared with traditional convolutional neural networks [33,34]. However, Transformer models require a much larger dataset than convolutional neural networks to obtain good training parameters and models. Although image enhancement algorithms expanded the dataset, this study used a convolutional neural network as the training model, considering the limited amount of data.
The ConvNeXt [35] model drew on the structural advantages of the Swin-Transformer network. ConvNeXt adjusted the stacking ratio of its blocks to be close to that of the Transformer model, adjusted the convolutional kernel size of the original ResNeXt [36], and then introduced the depth-wise convolution of MobileNet [37] to achieve an effect similar to the Swin-Transformer self-attention mechanism, namely the interaction and fusion of feature information in the spatial dimensions. In addition, ConvNeXt enlarged the receptive field of its convolution kernel to match the Swin-Transformer window size, and the inverted bottleneck structure of MobileNetV2 [38] was introduced to reduce the computation. Owing to these improvements, the training results of ConvNeXt exceed those of the Transformer. Figure 11 compares the ConvNeXt, Swin-Transformer and ResNeXt blocks.
During the experiments, further improvements were made to the original ConvNeXt to further increase the segmentation accuracy of the line laser stripe. In this study, the projection area of the line laser stripe is placed in the middle of the field of view so that objects of different heights remain within the imaging range of the industrial camera. The position of the line laser stripe therefore carries definite spatial information.
To make the convolutional neural network attend to the spatial location features of the image, this study introduced the convolutional block attention module (CBAM) [39] into the original ConvNeXt network. This module learns features from the image channels and the image space, respectively, so that it can attend to the color, brightness, position and other features of the image. Moreover, to enable the network model to learn the image features in as few training epochs as possible, this study introduced the dilated convolution kernel proposed in DeepLab [40] into the convolution kernels of ConvNeXt and CBAM. Dilated convolution expands the receptive field of the kernel so that more feature information is integrated when the feature response is computed, without increasing the number of parameters or the computational cost. Figure 12 shows the improved network structure model of this study.
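The following compact PyTorch sketch illustrates the kind of block described above: a ConvNeXt-style block whose 7 × 7 depthwise convolution is dilated, followed by a CBAM module. The layer sizes, the placement of CBAM, and the dilation rate are assumptions for illustration and do not reproduce the exact network of Figure 12.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention."""
    def __init__(self, dim, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv2d(dim, dim // reduction, 1),
                                 nn.ReLU(),
                                 nn.Conv2d(dim // reduction, dim, 1))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        # Channel attention from global average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class DilatedConvNeXtBlock(nn.Module):
    """ConvNeXt-style block with a dilated 7x7 depthwise convolution and CBAM."""
    def __init__(self, dim, dilation=2):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, groups=dim,
                                dilation=dilation, padding=3 * dilation)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # inverted bottleneck (expand)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)   # inverted bottleneck (project)
        self.cbam = CBAM(dim)

    def forward(self, x):
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # channels-last for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)
        x = self.cbam(x)
        return shortcut + x

# Usage (illustrative): block = DilatedConvNeXtBlock(96); y = block(torch.randn(1, 96, 64, 64))
```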

5. An Overall Flatness Inspection of the Measured Object

After the reflection and scattering problem is solved, the reconstruction of the measured object by line laser scanning is improved. In industrial manufacturing and assembly, flatness is the tolerance feature that determines whether the product conforms to the designed standard part model and lies within the designed tolerance range.
Flatness error is the variation between the actual measured surface and its ideal surface. When measuring flatness, two ideal planes must be determined, and these planes should be tangent to the highest and lowest points of the measured surface; these two points define the locations of the two ideal planes, respectively. Figure 13 shows the flatness error schematic, where the red color indicates the actual measured plane, and the upper and lower rectangles indicate the ideal position reference planes.
The existing flatness error measurement methods usually require high-precision optics and auxiliary equipment, and the traditional methods have several shortcomings. First of all, the measurement needs to be completed with the help of high-precision instruments, so the cost of measurement is high. Secondly, some methods measure points by contact or along a linear direction, so the measurement speed and efficiency are very low and the operation is complicated. Also, when the flatness error is determined from the farthest point of the measured surface selected manually, the accuracy of that selection cannot be guaranteed. Finally, the flatness error is judged from only a few characteristic points, which makes the measurement inaccurate.
Therefore, this study proposes a method that uses computer vision to register the scanned point cloud of the measured object with the CAD model of the standard designed part to complete the flatness error evaluation. Instead of determining the flatness error from only a few feature points, this method takes into account the flatness error of the whole surface of the measured object.
For point cloud registration, the CAD model data are first preprocessed to extract geometric information and then sampled to generate a dense point grid, giving the point cloud of the standard CAD model. The measured point cloud must be downsampled so that it retains its general features while matching the standard CAD point cloud. Downsampling also reduces the number of points involved in registration and improves the registration speed. This study used the grid average downsampling algorithm: the point cloud is first divided into very small grid cells, and the points within each cell are averaged or weighted to obtain a single point that replaces them. Figure 14 shows the downsampling result.
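A minimal NumPy sketch of grid average downsampling: points falling in the same grid cell are averaged into one representative point. The cell size parameter is illustrative.

```python
import numpy as np

def grid_average_downsample(points, cell_size):
    """Grid (voxel) average downsampling: one averaged point per occupied cell.

    points    : (N, 3) measured point cloud
    cell_size : edge length of each grid cell
    """
    pts = np.asarray(points, dtype=float)
    idx = np.floor(pts / cell_size).astype(np.int64)           # cell index per point
    # Group points that fall into the same cell and average them
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, pts)
    return sums / counts[:, None]
```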
During the registration process, this study chose the Fast Point Feature Histogram (FPFH) algorithm [41] for rough registration to bring the two far-apart point clouds closer. FPFH is an algorithm used to describe features in point cloud data. For each point, it finds the k nearest neighbors, calculates the normal directions of the points in the neighborhood, and computes their positions relative to the center point. The normal directions and relative positions are then converted into a histogram that represents the distribution of points in the neighborhood. The histogram features of each point are merged with the features of the other points in its neighborhood to describe the geometric structure of the entire neighborhood, and the combined features are normalized to facilitate comparison and matching between different areas. Then, this study used the Iterative Closest Point (ICP) algorithm [42] for fine registration. The principle of the ICP algorithm is to rigidly transform one point cloud by rotation and translation so that it matches another point cloud in space, which is similar to the way the external parameters are obtained in the industrial camera calibration of this study. The transformation of the point cloud is carried out by Equation (7),
$$\begin{bmatrix}x_i'\\ y_i'\\ z_i'\end{bmatrix}=\begin{bmatrix}1 & 0 & 0\\ 0 & \cos\alpha & -\sin\alpha\\ 0 & \sin\alpha & \cos\alpha\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta\\ 0 & 1 & 0\\ -\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\ \sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x_i\\ y_i\\ z_i\end{bmatrix}+\begin{bmatrix}t_x\\ t_y\\ t_z\end{bmatrix}\tag{7}$$
where the parameters α, β, γ are the rotation angles about the X, Y and Z axes, respectively, tx, ty and tz are the translation distances along the X, Y and Z axes, (xi′, yi′, zi′) are the coordinates of the point cloud to be registered, and (xi, yi, zi) are the target point cloud coordinates.
The rotation and translation parameters for each axis are estimated iteratively until the loss function reaches its minimum, and the parameters corresponding to the minimum are used as the transformation parameters to complete the point cloud transformation and registration. After registration, a threshold is set according to the production flatness tolerance of the measured object, and a measured object whose deviation exceeds the threshold is designated as unqualified.
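The fine registration and the tolerance check can be sketched as follows with NumPy and SciPy. This is the textbook point-to-point ICP of [42] combined with a simple nearest-neighbour deviation check against the CAD point cloud; it assumes the FPFH rough registration has already provided an initial alignment, and it is not the authors' exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50):
    """Point-to-point ICP: rigidly align `source` to `target` (cf. Equation (7))."""
    src = np.asarray(source, dtype=float).copy()
    tgt_all = np.asarray(target, dtype=float)
    tree = cKDTree(tgt_all)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, nn = tree.query(src)                        # closest target point per source point
        tgt = tgt_all[nn]
        # Optimal rigid transform between matched sets via SVD (Kabsch)
        cs, ct = src.mean(axis=0), tgt.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (tgt - ct))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                       # avoid a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = ct - R @ cs
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total

def flatness_check(registered, cad_cloud, tolerance):
    """Flag the workpiece as unqualified if any deviation exceeds the tolerance."""
    tree = cKDTree(np.asarray(cad_cloud, dtype=float))
    dist, _ = tree.query(registered)                   # deviation to the CAD surface points
    return dist.max() <= tolerance, dist
```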

6. Experiments

In this section, a 3D measurement platform based on line laser scanning is built. The equipment parameters of this platform are given in Table 1. The experiments in this section include system calibration and light plane fitting, reflection and scattering segmentation, and the measurement of different objects together with flatness error determination.

6.1. System Calibration and Light Plane Fitting

This study first generated the chessboard grid pattern used for camera calibration, shown in Figure 15. The size of each cell in the chessboard grid is 5 mm. The industrial camera then captured the checkerboard grid in different poses, and corner point detection was performed on the captured images. The reprojection error is calculated, and the internal and external parameters of the industrial camera are obtained. Figure 16 shows a schematic diagram of some of the acquired images. The results of the camera calibration are presented in Table 2.
Then, the stepper motor slide device was calibrated with the improved calibration algorithm proposed in this study. The calibration process is shown schematically in Figure 17. The results of the calibration for the first 10 points are presented in Table 3. It can be observed that the calibration method proposed in this study is more accurate than the calibration algorithm based on a single feature point.
Then, the light plane equation, Equation (8), was obtained by fitting the extracted line laser centerlines with the least squares method. The fitted light plane is shown in Figure 18.
0.058701 X + 0.002176 Y + 0.057126 Z + 1 = 0

6.2. Reflection and Scattering Segmentation

In the deep-learning segmentation experiments for the reflective and scattering regions, this study compares the segmentation results of the original ConvNeXt with those of the network model that introduces CBAM and the dilated convolution kernel. Mean intersection over union (mIoU) and mean accuracy (mAcc) were chosen as the evaluation indexes for the two networks. Figure 19 compares the segmentation results of ConvNeXt with those of the improved network of this study. Table 4 and Table 5 show the validation results for the two networks. They demonstrate that the segmentation accuracy of the real line laser stripe region is improved and that the reflective scattering region and ambient light region are also detected, although their accuracy is not very high. This may be due to the irregular positions of the reflective scattering and ambient light and the fact that the labelled areas occupy too small a portion of the image.

6.3. Objects Measurement and Flatness Error Determination

Different types of objects were reconstructed and measured in 3D. The first set of experiments verified the overall accuracy of the reconstruction by measuring a standard 30 × 10 × 10 mm gauge block and standard balls with radii of 5.99 mm and 8.99 mm, respectively. The measurement results are shown in Figure 20. Table 6 and Table 7 list three sets of measurement data. It can be concluded that the error is controlled at about 30 μm.
Subsequent experiments were performed on different metal workpieces, including flatness inspection. Based on the flatness tolerance range of the specific metal workpiece design, it was determined whether the error at the corresponding position of the manufactured product was within the flatness tolerance range of the standard-design CAD model. If the flatness tolerance threshold is exceeded, the product is judged to be unqualified. Figure 21 shows the results of the flatness inspection performed on different metal workpieces.

7. Conclusions

In this study, the 3D measurement of metal workpieces was investigated. Firstly, this study proposed a calibration method based on motion vectors and mean values to address the inaccuracy of calibrating the stepper motor slide device. Compared with the traditional single-corner motion calibration method, the experiments show that the proposed method simplifies the calibration process and improves calibration accuracy; the accuracy of the calibration result improves by 6%. Then, this study proposed a processing method based on deep learning image segmentation to reduce the effect of reflection and scattering on the 3D reconstruction of the object. The feasibility of this method has been verified through experiments, and it reduces the impact of reflection on measurement accuracy. However, the original neural network architecture cannot accurately segment the real laser line from the reflective and ambient light areas, which makes the centerline extraction region too wide and leads to long extraction times and low accuracy. Therefore, this study introduced CBAM to improve the network's attention to the image space and image channels; the segmentation accuracy of the real line improves by 3%. Moreover, to address the high cost of flatness error assessment, which normally requires high-precision instruments, and the problem that flatness is calculated from only a few measured feature points without comparison and visualization against the designed model, a method based on point cloud registration is proposed that compares the measured point cloud with the standard CAD point cloud to judge whether the manufactured workpiece meets the designed flatness tolerance in different areas of the specific part. Compared with the flatness error measurement accuracy mentioned in Section 1 (average accuracy of 70–130 microns), the average accuracy of this study reaches 30–50 microns, an improvement of nearly 50%. Although the measurement accuracy has been improved, some factors still need improvement; for example, the point cloud registration algorithm also introduces matching error. In future work, we will continue to research and improve the point cloud registration algorithm to reduce errors and achieve a measurement accuracy of about 10 microns, and we will deploy lightweight models on AI boards to realize low-cost products.

Author Contributions

Conceptualization, Z.M., Y.X. and C.K.; Methodology, Z.M., Y.X. and C.K.; Software, Z.M., C.Z. and B.G.; Validation, C.K., Y.Z. and B.G.; Formal analysis, C.K. and Z.X.; Data curation, J.J.; Writing—original draft, Z.M., C.Z.; Writing—review and editing, C.K., C.Z., Z.X. and J.J.; Supervision, Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (NSFC) (61203172), the Sichuan Science and Technology Program (2023NSFSC0361, 2022002), the Chengdu Science and Technology Program (2022-YF05-00837-SN), and the Research Foundation of Chengdu University of Information Technology (KYTZ2023032).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

Author Yue Zhu was employed by the company China Productivity Center for Machinery Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Wang, S.; Zhou, Y.; Tang, J.; Tang, K.; Li, Z. Digital tooth contact analysis of face gear drives with an accurate measurement model of face gear tooth surface inspected by CMMs. Mech. Mach. Theory 2022, 167, 104498. [Google Scholar] [CrossRef]
  2. Zanini, F.; Carmignato, S. Reference object for traceability establishment in X-ray computed tomography measurements of fiber length in fiber-reinforced polymeric materials. Precis. Eng. 2022, 77, 33–39. [Google Scholar] [CrossRef]
  3. Kim, B.-S.; Jeong, S.-T.; Ahn, H.-J. The Prediction of the Angular Transmission Error of a Harmonic Drive by Measuring Non-contact Tooth Profile and Considering Three-dimensional Tooth Engagement. Int. J. Precis. Eng. Manuf. 2023, 24, 371–378. [Google Scholar] [CrossRef]
  4. Revilla-León, M.; Att, W.; Özcan, M.; Rubenstein, J. Comparison of conventional, photogrammetry, and intraoral scanning accuracy of complete-arch implant impression procedures evaluated with a coordinate measuring machine. J. Prosthet. Dent. 2021, 125, 470–478. [Google Scholar] [CrossRef] [PubMed]
  5. Khanna, N.; Pusavec, F.; Agrawal, C.; Krolczyk, G.M. Measurement and evaluation of hole attributes for drilling CFRP composites using an indigenously developed cryogenic machining facility. Measurement 2020, 154, 107504. [Google Scholar] [CrossRef]
  6. Taraphdar, P.K.; Thakare, J.G.; Pandey, C.; Mahapatra, M.M. Novel residual stress measurement technique to evaluate through thickness residual stress fields. Mater. Lett. 2020, 277, 128347. [Google Scholar] [CrossRef]
  7. Catalucci, S.; Thompson, A.; Eastwood, J.; Zhang, Z.M.; Branson III, D.T.; Leach, R.; Piano, S. Smart optical coordinate and surface metrology. Meas. Sci. Technol. 2023, 34, 12001. [Google Scholar] [CrossRef]
  8. Pears, N.E.; Liu, Y.; Bunting, P. 3D Imaging, Analysis and Applications; Springer: London, UK, 2012. [Google Scholar]
  9. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf. Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  10. Wang, Q.; Liu, Q.; Xia, R.; Li, G.; Gao, J.; Zhou, H.; Zhao, B. Defect Depth Determination in Laser Infrared Thermography Based on LSTM-RNN. IEEE Access 2020, 8, 153385–153393. [Google Scholar] [CrossRef]
  11. Cao, X.; Xie, W.; Ahmed, S.M.; Li, C.R. Defect detection method for rail surface based on line-structured light. Measurement 2020, 159, 107771. [Google Scholar] [CrossRef]
  12. Tao, B.; Liu, Y.; Huang, L.; Chen, G.; Chen, B. 3D reconstruction based on photoelastic fringes. Concurr. Comput. 2022, 34, e6481. [Google Scholar] [CrossRef]
  13. Li, Q.; Xue, Y. Total leaf area estimation based on the total grid area measured using mobile laser scanning. Comput. Electron. Agric. 2023, 204, 107503. [Google Scholar] [CrossRef]
  14. Chen, S.; Yang, D.; Liu, J.; Tian, Q.; Zhou, F. Automatic weld type classification, tacked spot recognition and weld ROI determination for robotic welding based on modified YOLOv5. Robot. Comput. Integr. Manuf. 2023, 81, 102490. [Google Scholar] [CrossRef]
  15. Xiao, R.; Xu, Y.; Hou, Z.; Chen, C.; Chen, S. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sens. Actuators A Phys. 2019, 297, 111533. [Google Scholar] [CrossRef]
  16. Liu, L.; Ye, Y. Optical 3d Laser Measurement for the Height of Silver Paste Overflow from Apd Chip in Optical Component Packaging. SSRN Electron. J. 2023. [Google Scholar] [CrossRef]
  17. Han, Y.; Fan, J.; Yang, X. A structured light vision sensor for on-line weld bead measurement and weld quality inspection. Int. J. Adv. Manuf. Technol. 2020, 106, 2065–2078. [Google Scholar] [CrossRef]
  18. Wang, N.; Zhong, K.; Shi, X.; Zhang, X. A robust weld seam recognition method under heavy noise based on structured-light vision. Robot. Comput. Integr. Manuf. 2020, 61, 101821. [Google Scholar] [CrossRef]
  19. He, K.; Sui, C.; Lyu, C.; Wang, Z.; Liu, Y. 3D reconstruction of objects with occlusion and surface reflection using a dual monocular structured light system. Appl. Opt. 2020, 59, 9259–9271. [Google Scholar] [CrossRef]
  20. Zhu, Z.; You, D.; Zhou, F.; Wang, S.; Xie, Y. Rapid 3D reconstruction method based on the polarization-enhanced fringe pattern of an HDR object. Opt. Express 2021, 29, 2162–2171. [Google Scholar] [CrossRef]
  21. Karami, A.; Varshosaz, M.; Menna, F.; Remondino, F.; Luhmann, T. Fft-based filtering approach to fuse photogrammetry and photometric stereo 3D data. ISPRS Ann. Photogramm. Remote. Sens. Spat. Inf. Sci. 2023, X-4/W1-2022, 363–370. [Google Scholar] [CrossRef]
  22. Pei, X.; Ren, M.; Wang, X.; Ren, J.; Zhu, L.; Jiang, X. Profile measurement of non-Lambertian surfaces by integrating fringe projection profilometry with near-field photometric stereo. Measurement 2022, 187, 110277. [Google Scholar] [CrossRef]
  23. He, K.; Sui, C.; Huang, T.; Dai, R.; Lyu, C.; Liu, Y.-H. 3D Surface reconstruction of transparent objects using laser scanning with LTFtF method. Opt. Lasers Eng. 2022, 148, 106774. [Google Scholar] [CrossRef]
  24. Wu, K.; Xie, Y.; Lu, L.; Yin, Y.; Xi, J. Three-dimensional reconstruction of moving HDR object based on PSP. Opt. Lasers Eng. 2023, 163, 107451. [Google Scholar] [CrossRef]
  25. Li, M.; Cao, Y.; Wu, H. Three-dimensional reconstruction for highly reflective diffuse object based on online measurement. Opt. Commun. 2023, 533, 129276. [Google Scholar] [CrossRef]
  26. Vanrusselt, M.; Haitjema, H.; Leach, R.; de Groot, P. International comparison of flatness deviation in areal surface topography measurements. CIRP Ann. 2022, 71, 453–456. [Google Scholar] [CrossRef]
  27. Pathak, V.K.; Singh, R. A Comprehensive Review on Computational Techniques for Form Error Evaluation. Arch. Comput. Methods Eng. 2022, 29, 1199–1228. [Google Scholar] [CrossRef]
  28. Xiao, Y.; Ge, G.; Feng, X.; Yang, J.; Du, Z. Analysis and compensation of surface flatness of thin-walled valve body parts. Int. J. Adv. Manuf. Technol. 2022, 123, 1679–1693. [Google Scholar] [CrossRef]
  29. Wang, Q.; Jing, G. Circular saw flatness on machine measurement using a point laser displacement sensor. Electron. Lett. 2022, 58, 879–880. [Google Scholar] [CrossRef]
  30. Changying, L.; Yuanyuan, Y.; Yuguang, H.; Bowen, A. Flatness Error Evaluation Based on Marine Predator Algorithm. In Proceedings of the IEEE Conference on Telecommunications, Optics and Computer Science, TOCS, Virtual, 11–12 December 2022. [Google Scholar]
  31. Ricolfe-Viala, C.; Sánchez-Salmerón, A.J. Correcting non-linear lens distortion in cameras without using a model. Opt. Laser Technol. 2010, 42, 628–639. [Google Scholar] [CrossRef]
  32. Mao, Z.; Xu, Y.; Guo, B.; Li, T.; Jiang, X.; Shi, Y.; Cao, Y.; Xu, Z.; Zhang, C.; Huang, J. A Hybrid Algorithm for the Laser Stripe Centreline Extraction. Procedia CIRP 2022, 114, 30–35. [Google Scholar] [CrossRef]
  33. Li, J.; Chen, J.; Tang, Y.; Wang, C.; Landman, B.A.; Zhou, S.K. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med. Image Anal. 2023, 85, 102762. [Google Scholar] [CrossRef]
  34. He, X.; Zhou, Y.; Zhao, J.; Zhang, D.; Yao, R.; Xue, Y. Swin Transformer Embedding UNet for Remote Sensing Image Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  35. Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the CVPR2022, New Orleans, LA, USA, 19–24 June 2022. [Google Scholar]
  36. Yadav, D.P.; Jalal, A.S.; Garlapati, D.; Hossain, K.; Goyal, A.; Pant, G. Deep learning-based ResNeXt model in phycological studies for future. Algal Res. 2020, 50, 102018. [Google Scholar] [CrossRef]
  37. Sae-Lim, W.; Wettayaprasit, W.; Aiyarak, P. Convolutional Neural Networks Using MobileNet for Skin Lesion Classification. In Proceedings of the 2019 16th International Joint Conference on Computer Science and Software Engineering (JCSSE), Chonburi, Thailand, 10–12 July 2019; pp. 242–247. [Google Scholar]
  38. Chen, J.; Zhang, D.; Suzauddola, M.; Zeb, A. Identifying crop diseases using attention embedded MobileNet-V2 model. Appl. Soft Comput. 2021, 113, 107901. [Google Scholar] [CrossRef]
  39. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Munich, Germany, 8–14 September 2018; Volume 11211 LNCS. [Google Scholar]
  40. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef]
  41. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
  42. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
Figure 1. The line laser scanning platform schematic diagram.
Figure 2. The flowchart of 3D reconstruction based on line laser scanning.
Figure 3. The relationship of the four coordinate systems.
Figure 4. The schematic diagram of the stepper motor slide calibration.
Figure 5. The process of the light plane fitting schematic diagram. (a) The partial images of different positions on the chessboard with the line laser emitter turned on. (b) The result of the light plane fitting schematic diagram. The red lines on the light plane represent the extracted centerlines in different positions.
Figure 6. The result of the line laser centerline extraction.
Figure 7. The image difference method result. (a) The foreground image with the line laser emitter turned on. (b) The background image with the line laser emitter turned off. (c) The result of the image difference method. The red circles represent the reflective areas.
Figure 8. Partial image acquisition by line laser scanning.
Figure 9. Images generated by different image enhancement algorithms. (a) The original image captured by the industrial camera. (b) Global histogram equalization. (c) Gamma transform. (d) Laplacian enhancement. (e) Logarithmic transform. (f) Contrast-limited adaptive histogram equalization (CLAHE).
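The enhancement variants compared in Figure 9 are all standard intensity transforms. The sketch below shows how such images could be produced with OpenCV and NumPy; the parameter values (gamma, CLAHE clip limit and tile size) are illustrative assumptions rather than the settings used in the experiments.

```python
import cv2
import numpy as np

img = cv2.imread("captured.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# (b) Global histogram equalization.
hist_eq = cv2.equalizeHist(img)

# (c) Gamma transform (gamma < 1 brightens dark regions; value is illustrative).
gamma = 0.5
gamma_img = np.uint8(255 * (img / 255.0) ** gamma)

# (d) Laplacian enhancement: subtract the Laplacian response to sharpen edges.
laplacian = cv2.Laplacian(img, cv2.CV_16S, ksize=3)
laplace_img = cv2.convertScaleAbs(img.astype(np.int16) - laplacian)

# (e) Logarithmic transform, scaled back to the 8-bit range.
log_img = np.uint8(255 * np.log1p(img.astype(np.float64)) / np.log1p(255.0))

# (f) Contrast-limited adaptive histogram equalization (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_img = clahe.apply(img)
```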
Figure 10. Data-labeling schematic diagram. (a) The labeling software and the labeled image. (b) The label result generated from the labeled image.
Figure 11. The structures of the Swin-Transformer, ResNeXt, and ConvNeXt blocks.
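For reference, a generic ConvNeXt block as sketched in Figure 11 applies a 7 × 7 depthwise convolution, layer normalization, a pointwise expansion to four times the channel width, GELU, and a pointwise projection, all wrapped in a residual connection. The PyTorch sketch below shows that generic block (LayerScale and stochastic depth omitted); it is not the improved structure of Figure 12.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Generic ConvNeXt block; a reference sketch, not this study's improved block."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)            # applied over the channel dimension
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # pointwise expansion
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)   # pointwise projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # (N, C, H, W) -> (N, H, W, C)
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)                # back to (N, C, H, W)
        return shortcut + x

# Example: a 64-channel block applied to a dummy feature map.
block = ConvNeXtBlock(64)
features = torch.randn(1, 64, 56, 56)
print(block(features).shape)   # torch.Size([1, 64, 56, 56])
```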
Figure 12. The improved structure of the ConvNeXt network proposed by this study.
Figure 13. The flatness schematic diagram of the object.
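Flatness error, as illustrated in Figure 13, is commonly evaluated against a least-squares reference plane: fit a plane to the measured surface points and take the spread between the largest positive and negative signed point-to-plane distances. The NumPy sketch below implements this general definition; it is an illustration, not necessarily the exact evaluation routine of this study.

```python
import numpy as np

def flatness_error(points: np.ndarray) -> float:
    """Flatness of an (N, 3) point cloud: the separation of the two parallel
    planes, oriented like the least-squares plane, that enclose all points."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    distances = (points - centroid) @ normal      # signed point-to-plane distances
    return float(distances.max() - distances.min())

# Example: a nominally flat surface with roughly 10 micrometers of noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(1000, 2))          # mm
z = 0.01 * rng.standard_normal(1000)              # mm
print(flatness_error(np.column_stack([xy, z])))
```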
Figure 14. The result of the point cloud downsampling.
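The downsampling shown in Figure 14 can be reproduced with a standard voxel-grid filter, for example via Open3D; the file name and voxel size below are assumptions for illustration.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("workpiece.ply")          # hypothetical file name
downsampled = pcd.voxel_down_sample(voxel_size=0.5)     # voxel size in mm (assumed)
print(len(pcd.points), "->", len(downsampled.points))   # point counts before/after
o3d.io.write_point_cloud("workpiece_down.ply", downsampled)
```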
Figure 15. The generated chessboard grid pattern.
Figure 16. The partial images of camera calibration.
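Intrinsic parameters such as those later reported in Table 2 are obtained from chessboard images like the ones in Figure 16 using standard calibration. A minimal OpenCV sketch of that procedure follows; the board geometry, square size, and image folder are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed board geometry: inner corners per row/column and square size in mm.
pattern_size = (11, 8)
square_size = 2.0

# Object points of one board view: an (x, y, 0) grid scaled by the square size.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):                  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
print("intrinsics:\n", camera_matrix)
```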
Figure 17. The result of the stepper motor slide device calibration. The different colored dashed lines represent the detected corners in different rows.
Figure 18. The result of light plane fitting. (a) Chessboard calibration plate images captured at different positions. (b) The image of light plane fitting.
Figure 19. The segmentation results of the ConvNeXt model and the improved model proposed in this study. (a) The captured image. (b) The segmentation result of the ConvNeXt network. (c) The segmentation result of the improved network proposed in this study.
Figure 20. The results for the standard block and the standard ball. (a) The captured image of the standard block. (b) The reconstruction and measurement result of the standard block. (c) The captured image of the standard ball. (d) The reconstruction and measurement result of the standard ball. The shade of blue indicates the height of the object.
Figure 21. The flatness inspection results of different metal workpieces. (a) The captured images of the measured objects. (b) The reconstruction and measurement results of the different measured objects. (c) The downsampled point cloud results of the different objects. (d) The point cloud registration results of the different measured objects. (e) The flatness inspection results of the different measured objects.
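The registration step visualized in Figure 21d follows the usual coarse-to-fine pattern referenced above: a global alignment from FPFH features [41] refined by ICP [42]. The Open3D sketch below illustrates such a pipeline; the file names, voxel size, and thresholds are assumptions for illustration.

```python
import open3d as o3d

def preprocess(pcd, voxel):
    # Downsample, estimate normals, and compute FPFH features for coarse matching.
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("measured.ply")    # hypothetical file names
target = o3d.io.read_point_cloud("reference.ply")
voxel = 0.5                                         # mm, illustrative

src_down, src_fpfh = preprocess(source, voxel)
tgt_down, tgt_fpfh = preprocess(target, voxel)

# Coarse alignment from FPFH correspondences using RANSAC.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine alignment with point-to-point ICP, starting from the coarse transform.
fine = o3d.pipelines.registration.registration_icp(
    source, target, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(fine.transformation)
```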
Table 1. The devices used in this study for the line laser scanning method.

Devices | Parameters
Industrial Camera | MV-CA050-11UM; resolution: 2048 × 2448
Line Laser Emitter Device | 650 nm, 1 mW red line laser
Stepper Motor Slide Device | Minimum movement unit: 0.03 mm; slide size: 120 × 100 mm
Auxiliary Light | Blue coaxial light source
Table 2. The calibration results of the industrial camera.

Parameters | Calibration Results
Principal Point | [1208.2, 1007.2]
Focal Length | [7824.0, 7836.5]
Radial Distortion Parameters | [−0.0886, 1.6794]
Rotation Matrix | [0.09885, 0.1486, 0.0283; 0.1481, 0.9888, 0.0185; 0.0307, 0.0141, 0.9994]
Translation Vector | [29.7415, 13.2232, 338.7268]
Table 3. The calibration result of the stepper motor slide device by using the improved calibration method.

Corresponding Corner | X Axis (mm) | Y Axis (mm) | Z Axis (mm)
1 | 0.3094 | 0.0127 | 0.0054
2 | 0.3101 | 0.0128 | 0.0052
3 | 0.3108 | 0.0129 | 0.0051
4 | 0.3110 | 0.0131 | 0.0054
5 | 0.3122 | 0.0132 | 0.0053
6 | 0.3128 | 0.0133 | 0.0053
7 | 0.3135 | 0.0134 | 0.0053
8 | 0.3142 | 0.0135 | 0.0052
9 | 0.3102 | 0.0131 | 0.0045
10 | 0.3109 | 0.0131 | 0.0044
Average Distance | 0.31151 | 0.01311 | 0.00511
Table 4. The segmentation result of the ConvNeXt network model.

Categories | Intersection over Union | Accuracy
Real Laser | 69.02% | 78.27%
Reflection and Scattering | 53.31% | 58.52%
Ambient | 49.91% | 66.89%
Mean Value | 57.41% | 67.89%
Table 5. The segmentation result of the improved network model proposed by this study.

Categories | Intersection over Union | Accuracy
Real Laser | 71.03% | 81.54%
Reflection and Scattering | 53.47% | 59.10%
Ambient | 50.21% | 67.42%
Mean Value | 58.23% | 69.35%
Table 6. The measurement results of the standard block.

Parameters | Standard Value | Measurement Results | Mean Error
Length (mm) | 30 | 29.936, 29.941, 29.925 | 0.066
Width (mm) | 10 | 9.968, 9.963, 9.964 | 0.035
Height (mm) | 10 | 9.971, 9.968, 9.971 | 0.030
Table 7. The measurement results of the standard ball.

Parameter | Standard Value | Measurement Results | Mean Error
Radius (mm) | 5.99 | 5.996, 5.988, 5.898 | 0.029
Radius (mm) | 8.99 | 8.950, 8.962, 8.959 | 0.033