Article

Point Cloud Measurement of Rubber Tread Dimension Based on RGB-Depth Camera

1 Key Laboratory of Advanced Manufacturing and Automation Technology, Guilin University of Technology, Education Department of Guangxi Zhuang Autonomous Region, Guilin 541006, China
2 Guangxi Engineering Research Center of Intelligent Rubber Equipment, Guilin University of Technology, Guilin 541006, China
3 Guilin GLESI Scientific Technology Co., Ltd., Guilin 541004, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(15), 6625; https://doi.org/10.3390/app14156625
Submission received: 14 June 2024 / Revised: 17 July 2024 / Accepted: 23 July 2024 / Published: 29 July 2024

Abstract

To achieve an accurate measurement of tread size after fixed-length cutting, this paper proposes a point-cloud-based tread size measurement method. Firstly, a mathematical model of corner points and the reprojection error is established, and the optimal number of retained corner points is determined by the non-dominated sorting genetic algorithm II (NSGA-II), which reduces the reprojection error of the RGB-D camera. Secondly, to address the low accuracy of the traditional pixel-metric-ratio measurement method, the random sample consensus (RANSAC) point cloud segmentation algorithm and the oriented bounding box (OBB) collision detection algorithm are introduced to complete the accurate detection of the tread size. Comparison of the absolute and relative errors across several groups of experiments shows that the accuracy of the proposed detection method reaches 1 mm and that the measurement deviation lies between 0.14% and 2.67%, which meets the highest accuracy class of the national standard. In summary, the RGB-D visual inspection method constructed in this paper offers low cost and high inspection accuracy, making it a potential solution for enhancing pick-up guidance in tread size measurement.

1. Introduction

The accuracy of tread size is crucial for ensuring the high-quality production of subsequent finished tires. However, the industrial production line environment introduces various interferences that affect the accuracy of tread size detection after cutting. Factors such as production line debris, mechanical vibrations, and conveyor speed all contribute to potential measurement errors. Relying on manual empirical methods can easily be disrupted by these uncontrollable factors, leading to material waste, poor detection effectiveness, and decreased production efficiency [1]. Thus, optimizing the measurement techniques for tread size after cutting is of significant practical importance.
Machine vision technology has evolved significantly over the past 60 years. Currently, depth measurement technology based on RGB-D data is not only an essential tool in computer vision but also a prominent research focus in point cloud processing applications. Depth cameras have been used in 3D applications, such as ranging and 3D reconstruction, for over a decade. The authors of [2] demonstrate the use of a mobile single Kinect to create indoor scenes with real-time multi-touch interaction capabilities. The authors of [3] establish that the scanning accuracy of depth cameras is inversely correlated with the scanning distance. However, the quality of 3D reconstruction is generally poor. To address this, ref. [4] invoked the implicit function of non-surface volume in the NeRF framework and proposed a new RGB-D camera pose relocation technique to achieve high-quality, measurable 3D reconstruction. Additionally, ref. [5] proposes an external parameter calibration method based on a monocular laser scattering projection system, which effectively improves both the calibration efficiency and the depth reconstruction accuracy of the system. In the process of 3D reconstruction, errors and gaps often occur when reconstructing reflective and protruding objects. To address these issues, ref. [6] employs two depth cameras to simultaneously capture infrared images of objects. By fusing the parallax calculation results with the depth images, this approach repairs and refines voids in the depth map and compensates for the mutual interference from the infrared spot projections of the two cameras, thereby improving the reconstruction quality of photo-reflective anomalous objects. Additionally, ref. [7] proposes a new RGB-D saliency detection model, CAAI-Net, which utilizes a complementary attention mechanism and adaptive feature fusion to detect saliency in multimodal RGB-D images, effectively overcoming the limitations of existing methods for accurately detecting salient objects. Although the quality of 3D reconstruction has improved, acquiring target data typically requires expensive equipment. To reduce data acquisition costs, ref. [8] proposes a voxel-based pipeline that employs voxel cropping operations, removes redundant voxels, and fills holes on surfaces. This method achieves better results in the complete and accurate reconstruction of sparse RGB-D image geometric models and texture mapping, offering greater applicability and flexibility for various conditions and multiple RGB-D camera devices. To further reduce the 3D reconstruction errors, the researchers introduced a point cloud alignment method. The authors of [9] present an object alignment method that combines the LK optical flow algorithm with the ICP point cloud alignment algorithm. This approach constructs an enhanced assembly guidance system for complex products. Using the engine model and complex weapon compartment equipment as examples, the experimental results demonstrate that the proposed alignment method is more accurate and stable. Point cloud methods are utilized not only in the industrial sector but also in the agricultural sector. In agriculture, the size of plant crown volume is closely related to yield. To achieve an accurate prediction of the crown volume of orange trees, ref. [10] proposes a dynamic slicing and reconstruction algorithm using 3D point clouds. This algorithm outperforms other point cloud reconstruction methods and accurately demonstrates the relationship between citrus tree volume and growth patterns. 
To address the challenges posed by the complex morphology and overlapping structure of wheat plants, ref. [11] combines point cloud data with virtual design. Basic information on wheat growth, such as the number of stems and inclination angles, is extracted from the point cloud data to build an initial 3D network model. This model is then iteratively refined by adjusting the leaf azimuth to construct a virtual 3D model of the wheat plant. This approach provides technical support for the analysis of wheat plant phenotype and functional structure.
The above research methods provide high-quality data for the 3D reconstruction of object surfaces and cover many industrial areas. However, there are fewer studies applying 3D reconstruction to the dimensional inspection of rubber treads. Considering the errors and limitations of traditional tread measurement techniques [12,13], this paper proposes a new point cloud dimensional measurement method based on RGB-D technology. Using the design data from Guilin Rubber Design Institute and its five-compound extruder linkage production line as the basis for actual production measurements, the proposed method first corrects the corner point positions of the depth camera using the checkerboard grid method. Statistical filtering is then employed to reduce image noise in the point cloud data. Finally, by comparing this method with other traditional methods and the national industry error standard, the RGB-D-based point cloud measurement method for rubber tread size is shown to offer higher measurement accuracy, lower equipment costs, stable performance, and compliance with the highest accuracy standard of the national standard.

2. Related Work

The main components of the system’s hardware platform are the light source and the camera. Considering the production line environment, factors such as light source intensity and the uniformity of continuous illumination were evaluated, and a white-bar LED light source (model OPT-LI14030) was used for visual inspection in the field. The image acquisition device is the Astra Pro RGB-D depth camera from Orbbec, which features both RGB and IR lens modules [14]. The camera allows real-time image preview and video playback, connects to the computer via USB, and uses a driver-free UVC method for video transmission. The acquisition program, developed in PyCharm, calls interfaces based on the OpenNI framework [15], enabling real-time video transmission; high-resolution, high-precision image acquisition; and image calibration. The structure of the vision measurement system is shown in Figure 1. The tread size data in this study were obtained with this vision measurement system. Astra series cameras are recognized for their high accuracy, low power consumption, fast response, stability, and reliability [16]. An Astra series depth camera was therefore selected for close-range photography to acquire the tread image data. The Astra series offers several models, including short-range, long-range, and high-resolution RGB video cameras. The configuration of the Astra camera parameters is shown in Table 1.
RGB-D camera calibration is a necessary prerequisite for color-depth alignment, point cloud data unification, and target identification. It is the process of solving for the camera’s extrinsic parameters $R$ and $T$ and intrinsic parameters $u_0$, $v_0$, $f_x$, and $f_y$ from a set of three-dimensional spatial points and the corresponding image pixel coordinate pairs [17]. Common calibration methods include traditional camera calibration, active vision calibration, and camera self-calibration, among others [18]. However, the traditional method only accounts for radial distortion and is inadequate when distortion is significant. Active vision calibration requires controlling the camera to perform specific movements, making it unsuitable for industrial environments. The self-calibration method often exhibits poor robustness, limiting its practical applications [19].
Given the field of view of the camera and the constraints of installation positions in an actual industrial production line, this paper adopts the planar tessellated grid camera calibration method proposed by Professor Zhengyou Zhang [20]. This method necessitates precise knowledge of the geometric structure of the checkerboard grid. In this section, the black and white checkerboard printed on A3 paper is used, and the calibration plate size is 252 × 360 mm, which consists of several squares, each of which has a size of 36 × 36 mm, as shown in Figure 2.
This calibration experiment utilizes the reprojection error as the technical index to evaluate the camera’s calibration performance [21]. The reprojection error is defined as the difference between the actual projection pixel of a 3D spatial point on the image plane and the virtual projection pixel obtained through calibration calculations. A smaller reprojection error indicates more accurate positional parameters of the camera and coordinates of the 3D spatial point [22]. By using the reprojection error as the evaluation index, both the parameter calibration error and image point alignment error are considered, thereby enhancing calibration accuracy [21]. The projection schematic is illustrated in Figure 3.
In the figure, $p_i$ denotes the $i$-th checkerboard corner pixel point. The corner points are extracted using the calibrated camera parameters. When a corner point is detected, its projection onto the image plane is computed from the calibrated parameters, yielding the virtual pixel point $\hat{p}_i$ alongside its real original pixel point $p_i$; the reprojection error is $E$. The calculation of the reprojection error is shown in Equation (1):

$$E = \sum_{i=1}^{54} \left\| \hat{p}_i - p_i \right\|^2$$
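As an illustration of Equation (1), the sketch below computes the per-view sum of squared reprojection errors with OpenCV; the 9 × 6 inner-corner grid and 36 mm square size match the 54-corner checkerboard implied by Section 2 and Equation (1), while the image folder is an assumed placeholder rather than the authors' data.

```python
import glob

import cv2
import numpy as np

# Checkerboard geometry from Section 2: 36 mm squares, 9 x 6 inner corners
# (54 corners, matching the upper limit of the sum in Equation (1)).
pattern = (9, 6)
square = 36.0  # mm
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):          # illustrative image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)

# Per-view reprojection error following Equation (1): project the 3D corners
# with the estimated pose and compare against the detected pixel corners.
for objp_i, imgp_i, rvec, tvec in zip(obj_pts, img_pts, rvecs, tvecs):
    proj, _ = cv2.projectPoints(objp_i, rvec, tvec, K, dist)
    err = np.sum((proj.reshape(-1, 2) - imgp_i.reshape(-1, 2)) ** 2)
    print("sum of squared reprojection errors:", err)
```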

3. Methodology and Design

3.1. Reprojection Calibration Based on NSGA-II Multi-Objective Optimization

Calibration experiments are conducted on both RGB and IR cameras. Due to laser scattering interference with the IR camera, some corner points of the tessellated grid may not be recognized [23], resulting in projection errors with the original pixel points [24]. An example diagram is shown in Figure 4. Considering the recognized corner points of the checkerboard grid and the number of corner targets identified by the IR camera, a mathematical model of corner retention and the reprojection error is established. The reprojection error for monocular and binocular setups (where monocular refers to individual IR and RGB cameras and binocular refers to the combined IR and RGB cameras) is used as the objective function. The NSGA-II genetic algorithm is employed to optimize corner retention and achieve accurate calibration results.

3.2. Number of Corner Points and Error Modeling

During the parameter calibration process, at least four corner points are required to solve the constraint equations, providing eight linear constraint equations. To ensure that the number of corner points is sufficient for establishing the mathematical model, corner points with a reprojection error greater than 0.2 from the original pixel points in the checkerboard grid are screened. The retention rate of corner points in the overall checkerboard grid is maintained within the range of 50–100%. The three sets of reprojection error data were processed using the curve fitting tool in MATLAB software [25]. The reprojection error fitting curves for the binocular camera setup are shown in Figure 5.
The corresponding Gaussian fitting function is shown in Equation (2):
$$\begin{aligned}
f_R &= 0.7068\, e^{-\left(\frac{r-1.005}{0.01232}\right)^2} + 4.198\times10^{8}\, e^{-\left(\frac{r-29.37}{6.075}\right)^2} \\
f_I &= 0.8965\, e^{-\left(\frac{r-1.1277}{0.8064}\right)^2} + 1.043\times10^{13}\, e^{-\left(\frac{r-5.019}{0.07239}\right)^2} \\
f_S &= 0.2803\, e^{-\left(\frac{r-0.8035}{0.01324}\right)^2} + 0.3408\, e^{-\left(\frac{r-0.6365}{0.1393}\right)^2} + 0.3869\, e^{-\left(\frac{r-0.9696}{0.2891}\right)^2} + 0.2434\, e^{-\left(\frac{r-0.5015}{0.06993}\right)^2}
\end{aligned}$$
where $r$ is the corner point retention rate, and $f_R$, $f_I$, and $f_S$ are the reprojection error functions of the RGB, IR, and binocular cameras, respectively; the fitting quality metrics are shown in Table 2.
From Table 2, it is evident that the sum of squares for error (SSE) and root-mean-squared error (RMSE) of the RGB, IR, and binocular camera fits are close to 0, indicating minimal differences between the fitted function values and the original data points. In addition, the coefficients of determination of the RGB and IR camera fits are close to 1, signifying a strong explanatory power of the fitted functions for the actual values. In summary, the fitted function model demonstrates a high degree of accuracy and interpretability of the data.
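The authors used MATLAB's curve fitting tool for the Gaussian fits of Equation (2). As a rough illustration of the same two-term Gaussian model, the sketch below uses SciPy on synthetic data; the tooling and the data values are assumptions, not the authors' workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(r, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model, the same form as MATLAB's 'gauss2' fit type."""
    return (a1 * np.exp(-((r - b1) / c1) ** 2)
            + a2 * np.exp(-((r - b2) / c2) ** 2))

# Illustrative data: corner retention rate r in [0.5, 1.0] versus a synthetic
# reprojection error curve (placeholder values, not the measured data).
r = np.linspace(0.5, 1.0, 20)
rng = np.random.default_rng(0)
err = gauss2(r, 0.7, 1.0, 0.05, 0.1, 0.8, 0.1) + rng.normal(0.0, 0.005, r.size)

popt, pcov = curve_fit(gauss2, r, err,
                       p0=[0.7, 1.0, 0.05, 0.1, 0.8, 0.1], maxfev=10000)

# Fit quality metrics corresponding to Table 2 (SSE, RMSE, R-square).
residuals = err - gauss2(r, *popt)
sse = np.sum(residuals ** 2)
rmse = np.sqrt(np.mean(residuals ** 2))
r_square = 1.0 - sse / np.sum((err - err.mean()) ** 2)
print(f"SSE={sse:.3g}, RMSE={rmse:.3g}, R^2={r_square:.4f}")
```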

3.3. NSGA-II Genetic Algorithm

In practical engineering, most optimization problems involve multi-objective constraints, where mutual constraints and contradictions often exist between different optimization objectives. Therefore, it is necessary to compare and weigh these objectives to achieve a balanced, comprehensive optimization effect. Genetic algorithms, as adaptive global optimization probabilistic search algorithms, simulate the genetic and evolutionary processes of organisms in the natural environment. Their core function is to manage the dominance relationship of each solution within the population during each iteration to find the Pareto optimal solution [26].
From Equation (2), it can be seen that, within the constraints, the reprojection error function $f_R$ of the RGB camera contributes less to the overall error than the other two functions. Therefore, rather than optimizing $f_R$, $f_I$, and $f_S$ simultaneously to solve for the optimal corner retention rate, this experiment simplifies the problem by selecting $f_I$ and $f_S$ as the objective functions and optimizing them with the NSGA-II non-dominated sorting genetic algorithm. NSGA-II, proposed by Deb [27] in 2002, is an improved version of the original NSGA genetic algorithm that incorporates an elitist strategy and fast non-dominated sorting. These enhancements address the challenge of determining shared parameters and reduce the high computational complexity associated with dominated sorting [28,29]. The execution process of NSGA-II is illustrated in Figure 6.
The specific execution steps of the genetic algorithm are as follows. (1) Population initialization: an initial population is randomly generated, with each individual (i.e., solution) comprising a set of genes (i.e., variables). (2) Fitness evaluation: the fitness value of each individual is calculated to measure the effectiveness of its problem-solving ability. (3) Selection: individuals with higher fitness values are selected as parents using methods such as roulette wheel selection or tournament selection. (4) Crossover: new offspring are generated from the selected parents through a crossover operation, simulating the genetic process of organisms. (5) Mutation: mutation operations are performed on offspring to introduce new genes, thereby increasing the genetic diversity of the population. (6) Fitness re-evaluation: the fitness values of the newly generated individuals are calculated to assess their problem-solving effectiveness. (7) Selection of a new population: individuals with higher fitness are chosen from both the current and newly generated populations to form a new population. (8) Termination check: verify whether the termination condition is met, such as reaching the maximum number of generations or achieving a predetermined fitness threshold. If the condition is satisfied, the algorithm terminates; otherwise, it returns to step 3.
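As a concrete illustration of how the two objectives $f_I$ and $f_S$ from Equation (2) can be optimized over the 50–100% retention-rate range, the sketch below uses the pymoo library; the library choice is an assumption, not the authors' implementation, while the population size and generation count follow Table 3.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

def f_I(r):  # IR-camera reprojection error fit from Equation (2)
    return (0.8965 * np.exp(-((r - 1.1277) / 0.8064) ** 2)
            + 1.043e13 * np.exp(-((r - 5.019) / 0.07239) ** 2))

def f_S(r):  # binocular reprojection error fit from Equation (2)
    return (0.2803 * np.exp(-((r - 0.8035) / 0.01324) ** 2)
            + 0.3408 * np.exp(-((r - 0.6365) / 0.1393) ** 2)
            + 0.3869 * np.exp(-((r - 0.9696) / 0.2891) ** 2)
            + 0.2434 * np.exp(-((r - 0.5015) / 0.06993) ** 2))

class RetentionProblem(ElementwiseProblem):
    def __init__(self):
        # One decision variable: corner retention rate r in [0.5, 1.0].
        super().__init__(n_var=1, n_obj=2, xl=0.5, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        r = x[0]
        out["F"] = [f_I(r), f_S(r)]

algorithm = NSGA2(pop_size=100)   # population size as in Table 3
res = minimize(RetentionProblem(), algorithm, ("n_gen", 200), seed=1)
print("Pareto-optimal retention rates:", res.X.ravel())
print("corresponding (f_I, f_S) values:", res.F)
```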

3.4. Optimization Results

Multi-objective optimization problems typically yield multiple solutions rather than a single optimal solution. Due to the mutual constraints among various optimization objectives, finding a true global optimal solution is challenging. Improving one objective function often compromises at least one other objective function. The set of solutions comprising these multiple trade-offs is known as the Pareto front. The advantages and disadvantages of these solutions must be analyzed in the context of the specific problem [30]. The values of the main parameters involved in the algorithm are shown in Table 3.
The optimized Pareto frontier diagram is presented in Figure 7. All solution sets within the diagram achieve optimality. However, given that the priority of minimizing the reprojection error for the binocular camera is higher than that for the RGB and IR cameras, the binocular camera reprojection error function is selected as the optimal objective [31]. The results of the camera reprojection error parameters before and after optimization are detailed in Table 4.
Table 4 shows that after optimizing the corner retention rate, the reprojection errors of the RGB and IR cameras are significantly reduced, with a particularly notable decrease in the error of the binocular camera. This indicates that the calibration of the camera is improved using this method.

3.5. Three-Dimensional Inspection of Tread Based on Point Cloud Data

In this study, the calibrated RGB-D camera is utilized to combine the point cloud data generated from tread depth information to perform tread quality inspection experiments and result analysis. Based on the production line requirements for tread size, the object distance is set to 0.7 m. Tread depth information is acquired using an Astra Pro. After collecting the undistorted RGB color image and the corresponding depth map of the tread, the RGB color map is aligned to the depth camera coordinate system to establish a one-to-one correspondence between the depth point cloud and the color pixel points. The library function of Open3D is then used to transform the data into a PCD object supported by the Open3D library, resulting in high-density point cloud data aligned with the color and depth maps, as shown in Figure 8.
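A minimal Open3D sketch of this color-depth packing and point cloud generation step is given below; the file names and pinhole intrinsics are illustrative placeholders, with the real intrinsics coming from the calibration of Section 3.

```python
import open3d as o3d

# Illustrative file names; in practice the frames come from the Astra Pro stream.
color = o3d.io.read_image("tread_color.png")
depth = o3d.io.read_image("tread_depth.png")

# Pack the aligned color/depth pair; depth_scale converts raw depth units to
# metres, depth_trunc discards points well beyond the 0.7 m working distance.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=1.5,
    convert_rgb_to_intensity=False)

# Pinhole intrinsics (width, height, fx, fy, cx, cy) of the depth camera;
# the numbers below are placeholders, not the calibrated values.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 570.0, 570.0, 320.0, 240.0)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])
```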

3.6. Point Cloud Data Preprocessing

In this study, the noise points represent a small amount of rubber tread edge information that has not been calibrated correctly. The noise points are related to the camera resolution and calibration errors. In general, high-resolution cameras have a greater pixel density and are more prone to noise points. The primary steps in point cloud data preprocessing include filtering, alignment, and segmentation. Preprocessing enhances processing efficiency, improves data quality and alignment accuracy, reduces noise interference, and increases the accuracy of analysis and feature extraction. Additionally, it improves visualization and algorithm robustness. These preprocessing steps ensure cleaner and more reliable point cloud data, thereby laying a solid foundation for subsequent analysis, identification, classification, and modeling.

3.6.1. Mask Layer Screening of the Point Cloud

In the actual enterprise production line workshop, the environment is highly standardized, with minimal background interference during the transportation of rubber tread semi-finished products on the conveyor belt. Thus, the acquisition of point cloud data can disregard background influences.
In the screening of point cloud data, image segmentation is performed using a mask layer. A binary mask is used to eliminate excess background from the point cloud image, retaining only the selected tread area. Visualization through the mask demonstrates the segmented point cloud data, as exemplified in Figure 9. The rightmost image visualizes the mask map, delineating the region of the original image from which the tread point cloud data should be segmented [32].

3.6.2. RANSAC to Remove Outliers

Based on the above discussion, the complexity of acquiring tread point cloud data is relatively low, allowing the use of the RANSAC algorithm [33] to detect and remove outlier data caused by measurement errors and mismatches. This algorithm can extract two planes from the working plane: one containing in-area points (points conforming to the model) and another containing out-of-area points (points not conforming to the model) within the surface point cloud data of the tread.
The method treats all point clouds as a complete dataset and employs random sampling to mitigate the impact of noise on segmentation outcomes. From this full dataset, a subset is randomly selected. A specific model is derived from this subset, and the deviation of sample data points from the model is computed using minimal variance. Points with deviations below a predefined threshold are classified as in-model data points and recorded. Those exceeding the threshold are categorized as out-of-model data points. This process iterates across all data points, progressively accumulating in-model data points. Once enough points are classified as in-model, the best estimated model parameters are selected, and the in-model data points are retained to achieve point cloud segmentation [34]. The detailed process is illustrated in Figure 10.
The constraint used by the RANSAC algorithm to bound the number of iterations required to find the optimal model [35] is described by Equation (3):

$$1 - P = \left(1 - W^n\right)^k$$

where $P$ is the probability that at least one sampled subset consists entirely of inliers, $W$ is the proportion of inliers in the point cloud data set, $n$ is the number of data points sampled for each iteration of the model, and $k$ is the number of model iterations.
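Open3D exposes this RANSAC plane extraction directly; the sketch below is a minimal illustration using three points per model hypothesis and the 500 iterations mentioned in Section 4.3, with an assumed 2 mm distance threshold and an illustrative input file.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("tread_masked.pcd")  # illustrative input

# RANSAC plane segmentation: 3 points per model hypothesis, 500 iterations;
# the 2 mm inlier distance threshold is an assumed value.
plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.002,
                                            ransac_n=3,
                                            num_iterations=500)
a, b, c, d = plane_model
print(f"fitted plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.6f} = 0")

inliers = pcd.select_by_index(inlier_idx)                # in-model points
outliers = pcd.select_by_index(inlier_idx, invert=True)  # out-of-model points
```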

3.6.3. Statistical Filtering Denoising (SFD)

After removing outlier points, sparse noise remains in the tread surface point cloud, primarily originating from minor uncalibrated information at the edges of the rubber tread. This noise is influenced by camera resolution and calibration errors, with higher-resolution cameras being more susceptible due to their greater pixel density. The tread point cloud data obtained through the RANSAC algorithm fall into the category of conventional point clouds, typically filtered using digital image processing filtering algorithms for noise removal operations. Failure to filter effectively can introduce bias errors in subsequent geometric analysis [36,37].
The distance threshold can be established as depicted in Equation (4):
$$d = \lambda + r\sigma$$

where $\lambda$ is the mean of the average distances $\bar{d}$ between each point and its $i$ nearest neighbors over the point cloud of the upper tread surface, and $r$ and $\sigma$ are the scale factor and the standard deviation of those distances, respectively.
By traversing the surface point cloud of the tread again, the points whose average distance $\bar{d}$ to their $i$ nearest neighbors exceeds the threshold $d$ are removed as outliers. The filtering uses the statistical outlier removal algorithm in the Open3D library, configured by the values of $i$ and $r$: $i$ specifies the number of neighbors considered when computing the average distance [38], which determines whether most of the noise points can be filtered out, while $r$ controls the threshold level. After experimental trial and error, taking $i = 15$ and $r = 1.5$ effectively removes the noise points in the point cloud data; the filtering results are shown in Figure 11.
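A short sketch of this statistical filtering step with the Open3D library, using the values i = 15 and r = 1.5 determined above (the input file name is illustrative):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("tread_surface.pcd")  # illustrative input

# Statistical outlier removal: for each point, the mean distance to its 15
# nearest neighbors is compared against the global mean plus 1.5 standard
# deviations (the threshold d of Equation (4)); points beyond it are dropped.
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=15,
                                                    std_ratio=1.5)
print(f"kept {len(kept_idx)} of {len(pcd.points)} points")
```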

3.6.4. Point Cloud Alignment

In the problem of aligning 3D point cloud models, the crucial task is to accurately establish the rigid-body transformation matrix between two sets of points, ensuring optimal alignment of their relative positions and orientations. The mainstream alignment process typically involves establishing the topological relationships between point clouds, computing local geometric features, performing feature matching, estimating rigid-body transformations, and transforming the point clouds accordingly.
In this study, point cloud alignment is achieved using the ICP algorithm. The iterative process minimizes the Euclidean distance between point clouds P and Q until a predetermined threshold is met. The Euclidean distance formula and the target error function are detailed in Equation (5):
$$D = \sqrt{(x_q - x_p)^2 + (y_q - y_p)^2 + (z_q - z_p)^2}, \qquad
F(H_R, H_T) = \sum_{i=1}^{N_P} \left\| Q_i - \left(H_R P_i + H_T\right) \right\|^2$$

where $Q_i$ is the point in cloud $Q$ with the shortest Euclidean distance $D$ to the point $P_i$ in cloud $P$, and $N_P$ is the number of corresponding point pairs between the two clouds. Iterating until the distances are minimized yields the final rigid-body translation and rotation matrices $H_T$ and $H_R$.
The ICP exact alignment process is as follows:
(1)
In the initial iterative point cloud set, the point with the smallest Euclidean distance value in the P and Q point clouds is taken as the corresponding point, and the corresponding point set is formed;
(2)
Calculate the translation and rotation matrices H T and H R from the corresponding point set to find the current objective loss function F ;
(3)
With the rigid-body transformation matrix obtained in the second step, apply the rigid-body transformation to the target point cloud and, following the idea of the first step, update the corresponding point set according to the Euclidean distances between the transformed point cloud and the reference point cloud;
(4)
Repeat from the second step while the value of the target error function exceeds the set threshold; stop the iteration once the error falls below the threshold or the number of iterations reaches the upper limit.
At the end of the alignment, the root-mean-square error (RMSE) between the two point clouds is calculated and used to evaluate the alignment quality, as shown in Equation (6); the smaller the value of $V_{RMSE}$, the better the alignment.

$$V_{RMSE} = \sqrt{\frac{1}{N_P} \sum_{i=1}^{N_P} \left(Q_i - P_i\right)^2}$$
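A minimal sketch of this point-to-point ICP alignment using Open3D; the correspondence distance, iteration cap, and input file names are assumptions rather than the authors' settings.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("tread_scan_a.pcd")  # illustrative inputs
target = o3d.io.read_point_cloud("tread_scan_b.pcd")

threshold = 0.01  # assumed maximum correspondence distance (m)
init = np.eye(4)  # assumed initial rigid-body transform

reg = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))

# reg.transformation is the 4x4 matrix combining H_R and H_T;
# reg.inlier_rmse plays the role of V_RMSE in Equation (6).
print("alignment RMSE:", reg.inlier_rmse)
source.transform(reg.transformation)
```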

4. Results and Analysis

4.1. Evaluation Index of Image Denoising Accuracy

In this study, the effectiveness of statistical filter denoising and image enhancement is assessed using the PSNR metric, which quantifies image distortion on a pixel-by-pixel basis in logarithmic decibel units (dB). The evaluation principle involves comparing the mean squared error (MSE) of grayscale values between pixels in the reconstructed image and those in a standard reference image. The MSE is defined as
$$MSE = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left[ x(i,j) - x'(i,j) \right]^2$$

where $m$ and $n$ are the image dimensions, $x$ is the clean reference image, and $x'$ is the noisy image. The PSNR is then calculated as shown in Equation (8):

$$PSNR = 10 \times \lg \frac{MAX_I^2}{MSE}$$

where $MAX_I$ represents the maximum grayscale value of the image. Comparing the image before and after denoising, higher PSNR values correspond to less image degradation; at maximum degradation, the PSNR value tends towards 0 dB. The image quality corresponding to each PSNR range is given in Table 5.
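A short NumPy sketch of the MSE and PSNR computations in Equations (7) and (8), applied to an illustrative synthetic noisy image:

```python
import numpy as np

def psnr(clean, processed, max_i=255.0):
    """PSNR in dB between a clean reference image and a processed image,
    following Equations (7) and (8)."""
    mse = np.mean((clean.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_i ** 2 / mse)

# Example: an 8-bit image corrupted with Gaussian noise of sigma = 25.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(480, 640)).astype(np.float64)
noisy = np.clip(img + rng.normal(0.0, 25.0, img.shape), 0, 255)
print(f"PSNR of the noisy image: {psnr(img, noisy):.2f} dB")
```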

4.2. Analysis of Image Denoising Results

The denoising experiments involve tread image data corrupted with Gaussian noise, binomial noise, impulse noise, and line noise. PSNR values are then computed and compared to analyze the denoising efficacy, validating the feasibility and effectiveness of the statistical filtering denoising method.
Judged against the quality ranges in Table 5, the PSNR values in Table 6 demonstrate that statistical filtering denoising performs effectively compared with other algorithms (e.g., BM3D, DBSN) in denoising tread images contaminated with various types of noise. The resulting image quality reaches good to excellent levels, laying a solid foundation for accurately measuring tread dimensions in subsequent steps.

4.3. Analysis of Rubber Tread Size Measurement Results

A rubber tread with actual dimensions of 200.28 × 150.17 × 25.98 mm is selected for the three-dimensional inspection experiment.
The parameters of the RANSAC algorithm are set as follows: each iteration estimates the model from three data points, and the algorithm runs for 500 iterations. The reference surface of the workbench, recorded as $\alpha_1(A_1, B_1, C_1, D_1)$, is extracted first, and the points not belonging to this plane are removed, as shown in Figure 12; the upper surface of the tread, recorded as $\alpha_2(A_2, B_2, C_2, D_2)$, is then extracted from the remaining point cloud data, as shown in Figure 13.
The planar expressions for the table datum and the upper surface of the tread can be obtained by fitting the planes to them, respectively, as shown in Equation (9):
$$\alpha_1: \; 0.03x - 0.00y + 1.00z - 0.798384 = 0, \qquad \alpha_2: \; 0.01x - 0.00y + 1.00z - 0.771813 = 0$$
Referring to Figures 12 and 13 and the two plane expressions, it is evident that the reference plane of the workbench is parallel to the upper surface of the tread. According to Equation (9), the distance from the workbench to the upper surface of the tread is 0.026571 m, indicating a tread thickness of 26.571 mm. This value deviates by 0.591 mm from the actual tread thickness of 25.98 mm, an error of about 2.3%.
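Because both fitted normals are within a fraction of a percent of unit length, the thickness computation reduces to the difference of the plane offsets; a small sketch under the sign convention assumed for Equation (9):

```python
import numpy as np

# Plane coefficients (a, b, c, d) for ax + by + cz + d = 0, as in Equation (9).
alpha1 = np.array([0.03, -0.00, 1.00, -0.798384])  # workbench reference plane
alpha2 = np.array([0.01, -0.00, 1.00, -0.771813])  # upper surface of the tread

# For these near-parallel, near-unit-normal planes, the separation is
# approximately the absolute difference of the offsets d1 and d2.
thickness_m = abs(alpha1[3] - alpha2[3])
print(f"tread thickness: {thickness_m * 1000:.3f} mm")  # 26.571 mm
```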
Using the OBB enclosing box to detect the edge collision on the upper surface of the tread [39], connecting the vertices of the OBB enclosing box yields the enclosing result of the upper surface of the tread, as shown in Figure 14.
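The OBB construction and its side-length readout can be reproduced with Open3D's oriented bounding box; the sketch below is a minimal illustration with an assumed input file, not the authors' implementation.

```python
import numpy as np
import open3d as o3d

# Illustrative input: the segmented upper-surface point cloud of the tread.
surface = o3d.io.read_point_cloud("tread_upper_surface.pcd")

obb = surface.get_oriented_bounding_box()
obb.color = (1.0, 0.0, 0.0)

# obb.extent holds the box side lengths along its three principal axes; the
# two largest correspond to the length and width of the upper tread surface.
extent_mm = np.sort(np.asarray(obb.extent))[::-1] * 1000.0
print(f"length: {extent_mm[0]:.1f} mm, width: {extent_mm[1]:.1f} mm")

# The eight box vertices, corresponding to the matrix E in Equation (10).
vertices = np.asarray(obb.get_box_points())
o3d.visualization.draw_geometries([surface, obb])
```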
After filtering the enclosing box coordinates to the maximum value, the matrix of coordinates of each vertex of the enclosing box is shown in Equation (10):
$$E = \begin{bmatrix}
0.013 & 0.132 & 0.773 \\
0.185 & 0.123 & 0.771 \\
0.041 & 0.019 & 0.774 \\
0.013 & 0.132 & 0.769 \\
0.139 & 0.050 & 0.768 \\
0.041 & 0.019 & 0.770 \\
0.185 & 0.123 & 0.767 \\
0.139 & 0.050 & 0.772
\end{bmatrix}$$
Combining Figure 14 with the vertex matrix in Equation (10), the side lengths of the OBB bounding box on the upper tread surface are computed as 198.3 × 152.9 mm; that is, the length and width of the upper tread surface are 198.3 mm and 152.9 mm, respectively. Compared with the actual tread size of 200.28 × 150.17 mm, the differences are about 2.0 mm in length and 2.7 mm in width, corresponding to errors of 0.99% and 1.82%, respectively. The comprehensive measurement results, the actual measured sizes, and the corresponding error classes are shown in Table 7.
Table 7 shows that the measurement errors in thickness, length, and width conform to the national standard and meet the accuracy requirements of classes ST1 ($\pm 3.5\%$) and SW1 ($\pm 2\%$). The comparison with the tread size inspection based on pixel metrics is depicted in Figure 15.
In Figure 15, SXx denotes the error class (ST for thickness; SW for width/length) within the displayed range. The point cloud measurement results fall within the upper and lower error limits specified by the national standard. A comparison of the two inspection methods shows how accurately each measures the three tread dimensions relative to the true values. The visual measurement based on the RGB-D camera tread point cloud data is close to the actual tread dimensions and aligns closely with industrial standards for tread dimension measurement, demonstrating a notably good measurement performance.
To further verify the stability and reliability of the measurement, this study followed the transport direction of the cutting conveyor belt and the camera’s shooting frequency: the 200.28 × 150.17 × 25.98 mm tread was divided into five equal parts of 150.17 × 37.00 × 25.98 mm, and five groups of tread point cloud data were screened (mask layer screening, fixed width) for size measurement experiments. Inspecting the tread in equal parts reflects the dimensional accuracy of tread workpieces after cutting on an actual production line, which requires batch inspection to ensure uniform edge dimensions within the same workpiece. Five equal parts were chosen in consideration of the length of the tread sample: dividing it into five pieces helps to avoid randomness in the experimental results, and the size of each part is closer to that of actual production. A schematic of the experimental data acquisition is shown in Figure 16.
Using the Astra Pro depth camera to collect rubber tread point cloud data requires careful consideration of the object’s distance from the camera and its impact on depth frame accuracy. The accuracy of the collected tread data is inversely related to the distance from the camera: closer distances yield higher accuracy in the scanned point cloud data but limit the scanning range. Table 1 specifies that the optimal distance between the Astra Pro camera and the object ranges from 0.6 to 8.0 m. The camera provides a vertical viewing angle of 40.2° and a horizontal viewing angle of 58.4°, allowing the simultaneous acquisition of RGB color maps and depth information [40]. The camera’s acquisition range is illustrated in Figure 17.
The object distance from the camera to the tread is $D$ ($D \in [0.6, 8.0]$ m), the horizontal viewing angle of the camera is $\theta_H$ ($\theta_H = 58.4^\circ$), the vertical viewing angle is $\theta_V$ ($\theta_V = 40.2^\circ$), and the length and width of the scanned area are $L$ and $W$, respectively, as described by Equation (11):

$$L = 2D \tan\frac{\theta_H}{2}, \qquad W = 2D \tan\frac{\theta_V}{2}$$

where $L \in [0.67, 8.94]$ m and $W \in [0.44, 5.86]$ m; that is, within the camera’s recommended working distance, it can scan a tread length range of 0.67–8.94 m and a tread width range of 0.44–5.86 m (the experimental tread product measures 200.28 × 150.17 × 25.98 mm).
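A quick numerical check of Equation (11) at the limits of the recommended working distance (a small illustrative script):

```python
import numpy as np

theta_h = np.deg2rad(58.4)  # horizontal field of view of the Astra Pro
theta_v = np.deg2rad(40.2)  # vertical field of view

for d in (0.6, 8.0):        # limits of the recommended working distance (m)
    L = 2 * d * np.tan(theta_h / 2)
    W = 2 * d * np.tan(theta_v / 2)
    print(f"D = {d:.1f} m: scannable length {L:.2f} m, width {W:.2f} m")
# Prints roughly 0.67 x 0.44 m at D = 0.6 m and 8.94 x 5.86 m at D = 8.0 m.
```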
For uniform sections of the tread measurement area, the tread point cloud data underwent preprocessing steps, followed by measurement of 3D information using the RANSAC-OBB dimensional detection algorithm [41]. Partial processing results and five sets of tread point cloud data are depicted in Figure 18 and Figure 19, with measurement outcomes summarized in Table 8.
Based on Table 8, the relative and absolute error distributions of the true values and measured values for the five sets of three-dimensional tread information are illustrated in Figure 20 below.
As depicted in Figure 20, the absolute error in the five groups of three-dimensional tread size measurement experiments is within 2.17 mm, with minimal fluctuation. The relative error in three-dimensional size measurement is less than 2.67%, aligning with the highest-precision testing standards specified by national standards. This confirms the stable and reliable measurement effect.

5. Conclusions

The quality of the rubber tread product is crucial to the overall quality of the finished tire. Therefore, measuring the three dimensions of the semi-finished tread on the rubber compound extrusion line is both critical and necessary. In this paper, the acquisition threshold for the tread size data is determined according to the depth camera hardware, the RANSAC algorithm is used to fit the optimal planes, and the OBB bounding-box collision algorithm is used to detect the boundary; from these, the tread thickness and the 3D information of the upper surface are calculated. Data processing and experimentation are conducted on the MATLAB and PyCharm platforms. Comparative analysis with pixel-point measurement results shows that the point cloud size measurement method proposed in this paper is superior and meets the highest-accuracy requirements of the national standard for error detection. In practical production, this method enables a more accurate measurement of rubber tread size, thereby improving the production yield of high-quality finished tires.

6. Discussion

In this study, based on RGB-D vision technology, the dimensions of tread products after cutting at a fixed length were detected. This method achieved non-human participation and high accuracy, which is significant for improving the intelligence of production lines and the efficiency of product quality detection in the industry. It also promotes the development of traditional rubber extrusion production lines towards higher quality. However, there are still some areas that need further optimization and improvement:
(1)
In the tread dimension inspection process, equipment hardware constraints lead to lengthy data processing times, making it difficult to detect tread images and point cloud data collected by the RGB-D camera’s RGB and IR lens modules in real-time. Consequently, all measurements are processed offline, preventing online debugging. The next step involves upgrading the acquisition equipment, further improving algorithm efficiency, reducing detection complexity, and developing stable preview software to provide real-time feedback for monitoring tread quality detection results.
(2)
This study initially verifies the quality inspection of rigid rubber treads. The next step is to study the point cloud alignment and 3D reconstruction methods for non-rigid treads, designing an algorithm with improved robustness to inspect the quality of irregular and curved surface rubber products and archive the 3D data of retained products.
(3)
This study only conducts experimental verification, demonstrating the feasibility, reliability, and accuracy of detecting tread quality based on point cloud data. The next step involves converting the inspection program written in Python and MATLAB into an industrial control program and integrating this inspection method into the production line after the treads are cut to fixed lengths. Subsequently, extensive on-site experiments and verifications will be conducted.

Author Contributions

Conceptualization, Z.P.; methodology, L.H.; software, L.H.; validation, L.H.; formal analysis, L.H.; investigation, L.H. and Z.P.; resources, M.C.; data curation, M.C.; writing—original draft preparation, L.H.; writing—review and editing, L.H.; visualization, L.H.; supervision, M.C.; project administration, M.C.; funding acquisition, M.C. and L.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study is funded by the National Natural Science Foundation of China [Grant No. 61863009] and the Guangxi Key R&D Program [Grant No. Guike-AB22080093].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article and further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Zihao Peng was employed by the company Guilin GLESI Scientific Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Xing, Z.Y.; Chen, Y.J.; Wang, X.H.; Qin, Y.; Chen, S. Online detection system for wheel-set size of rail vehicle based on 2D laser displacement sensors. Optik 2016, 127, 1695–1702. [Google Scholar] [CrossRef]
  2. Izadi, S.; Kim, D.; Hilliges, O.; Molyneaux, D.; Newcombe, R.; Kohli, P.; Shotton, J.; Hodges, S.; Freeman, D.; Davison, A.; et al. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In Proceedings of the 24th annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 10 October 2011; pp. 559–568. [Google Scholar]
  3. Khoshelham, K.; Elberink, S.O. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications. Sensors 2012, 12, 1437–1454. [Google Scholar] [CrossRef]
  4. Azinovi, D.; Martin-Brualla, R.; Goldman, D.B.; Niener, M.; Thies, J. Neural RGB-D Surface Reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–25 June 2021; pp. 6290–6301. [Google Scholar]
  5. Etchepareborda, P.; Moulet, M.-H.; Melon, M. Random laser speckle pattern projection for non-contact vibration measurements using a single high-speed camera. Mech. Syst. Signal Process. 2021, 158, 107719. [Google Scholar] [CrossRef]
  6. Alhwarin, F.; Ferrein, A.; Scholl, I. IR Stereo Kinect: Improving Depth Images by Combining Structured Light with IR Stereo. In Proceedings of the Pacific Rim International Conference on Artificial Intelligence, Gold Coast, QLD, Australia, 1–5 December 2014; pp. 409–421. [Google Scholar]
  7. Bi, H.-B.; Liu, Z.-Q.; Wang, K.; Dong, B.; Chen, G.; Ma, J.-Q. Towards accurate RGB-D saliency detection with complementary attention and adaptive integration. Neurocomputing 2021, 439, 63–74. [Google Scholar] [CrossRef]
  8. Fu, Y.; Yan, Q.; Liao, J.; Chow, A.L.H.; Xiao, C. Real-time dense 3D reconstruction and camera tracking via embedded planes representation. Vis. Comput. 2020, 36, 2215–2226. [Google Scholar] [CrossRef]
  9. Yang, K.; Guo, Y.; Tang, P.; Zhang, H.; Li, H. Object registration using an RGB-D camera for complex product augmented assembly guidance. Virtual Real. Intell. Hardw. 2020, 2, 501–517. [Google Scholar] [CrossRef]
  10. Li, W.; Tang, B.; Hou, Z.; Wang, H.; Bing, Z.; Yang, Q.; Zheng, Y. Dynamic Slicing and Reconstruction Algorithm for Precise Canopy Volume Estimation in 3D Citrus Tree Point Clouds. Remote Sens. 2024, 16, 2142. [Google Scholar] [CrossRef]
  11. Gu, W.; Wen, W.; Wu, S.; Zheng, C.; Lu, X.; Chang, W.; Xiao, P.; Guo, X. 3D Reconstruction of Wheat Plants by Integrating Point Cloud Data and Virtual Design Optimization. Agriculture 2024, 14, 391. [Google Scholar] [CrossRef]
  12. Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep Learning for 3D Point Clouds: A Survey. IEEE Trans. Pattern Anal. 2021, 43, 4338–4364. [Google Scholar] [CrossRef]
  13. Liu, F.; Liang, L.; Hou, C.; Xu, G.; Liu, D.; Zhang, B.; Wang, L.; Chen, X.; Du, H. On-Machine Measurement of Wheel Tread Profile With the 1-D Laser Sensor. IEEE Trans. Instrum. Meas. 2021, 70, 1011011. [Google Scholar] [CrossRef]
  14. Sharma, P.; Katrolia, J.S.; Rambach, J.; Mirbach, B.; Stricker, D.; Seiler, J. Resilient Consensus Sustained Collaboratively. arXiv 2023, arXiv:2306.17636. [Google Scholar]
  15. Gallo, A.; Phung, M.D. Classification of EEG Motor Imagery Using Deep Learning for Brain-Computer Interface Systems. arXiv 2022, arXiv:2206.07655. [Google Scholar]
  16. Vila, O.; Boada, I.; Raba, D.; Farres, E. A Method to Compensate for the Errors Caused by Temperature in Structured-Light 3D Cameras. Sensors 2021, 21, 2073. [Google Scholar] [CrossRef]
  17. Xie, R.P.; Yao, J.; Liu, K.; Lu, X.H.; Liu, Y.H.; Xia, M.H.; Zeng, Q.F. Automatic multi-image stitching for concrete bridge in-spection by combining point and line features. Autom. Constr. 2018, 90, 265–280. [Google Scholar] [CrossRef]
  18. Xue, T.; Wu, B. Reparability measurement of vision sensor in active stereo visual system. Measurement 2014, 49, 275–282. [Google Scholar] [CrossRef]
  19. Qiao, X.Y.; Fan, C.J.; Chen, X.; Ding, G.Q.; Cai, P.; Shao, L. Uncertainty analysis of two-dimensional self-calibration with hybrid position using the GUM and MCM methods. Meas. Sci. Technol. 2021, 32, 125012. [Google Scholar] [CrossRef]
  20. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  21. Han, H.; Wu, S.; Song, Z. An Accurate Calibration Means for the Phase Measuring Deflectometry System. Sensors 2019, 19, 5377. [Google Scholar] [CrossRef] [PubMed]
  22. Zhou, K.; Meng, X.X.; Cheng, B. Review of Stereo Matching Algorithms Based on Deep Learning. Comput. Intell. Neurosci. 2020, 2020, 8562323. [Google Scholar] [CrossRef]
  23. Kirby, B.J.; Hanson, R.K. Linear excitation schemes for IR planar-induced fluorescence imaging of CO and CO2. Appl. Opt. 2002, 41, 1190–1201. [Google Scholar] [CrossRef]
  24. Guerra, B.M.V.; Ramat, S.; Beltrami, G.; Schmid, M. Recurrent Network Solutions for Human Posture Recognition Based on Kinect Skeletal Data. Sensors 2023, 23, 5260. [Google Scholar] [CrossRef]
  25. Ahmed, I.; Modu, G.U.; Yusuf, A.; Kumam, P.; Yusuf, I. A mathematical model of Coronavirus Disease (COVID-19) containing asymptomatic and symptomatic classes. Results Phys. 2021, 21, 103776. [Google Scholar] [CrossRef]
  26. Yazdinejad, A.; Dehghantanha, A.; Parizi, R.M.; Epiphaniou, G. An optimized fuzzy deep learning model for data classification based on NSGA-II. Neurocomputing 2023, 522, 116–128. [Google Scholar] [CrossRef]
  27. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  28. Deng, W.; Zhang, X.; Zhou, Y.; Liu, Y.; Zhou, X.; Chen, H.; Zhao, H. An enhanced fast non-dominated solution sorting genetic algorithm for multi-objective problems. Inf. Sci. 2022, 585, 441–453. [Google Scholar] [CrossRef]
  29. Li, H.; Xu, B.; Lu, G.; Du, C.; Huang, N. Multi-objective optimization of PEM fuel cell by coupled significant variables recognition, surrogate models and a multi-objective genetic algorithm. Energy Convers. Manag. 2021, 236, 114063. [Google Scholar] [CrossRef]
  30. Alam, S.J.; Arya, S.R. Volterra LMS/F Based Control Algorithm for UPQC With Multi-Objective Optimized PI Controller Gains. IEEE J. Emerg. Sel. Top. Power Electron. 2023, 11, 4368–4376. [Google Scholar] [CrossRef]
  31. Zou, W.; Wei, Z. Flexible Extrinsic Parameter Calibration for Multicameras With Nonoverlapping Field of View. IEEE Trans. Instrum. Meas. 2021, 70, 5017514. [Google Scholar] [CrossRef]
  32. Wei, S.J.; Zeng, X.F.; Zhang, H.; Zhou, Z.C.; Shi, J.; Zhang, X.L. LFG-Net: Low-Level Feature Guided Network for Precise Ship Instance Segmentation in SAR Images. IEEE Trans. Geosci. Remote 2022, 60, 5231017. [Google Scholar] [CrossRef]
  33. Raguram, R.; Chum, O.; Pollefeys, M.; Matas, J.; Frahm, J.-M. USAC: A Universal Framework for Random Sample Consensus. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2022–2038. [Google Scholar] [CrossRef]
  34. Zhang, Y.; Yuan, L.; Liang, W.; Xia, X.; Pang, Z. 3D-SWiM: 3D vision based seam width measurement for industrial composite fiber layup in-situ inspection. Robot. Comput. Integr. Manuf. 2023, 82, 102546. [Google Scholar] [CrossRef]
  35. Yang, H.; Shi, J.; Carlone, L. TEASER: Fast and Certifiable Point Cloud Registration. IEEE Trans. Robot. 2021, 37, 314–333. [Google Scholar] [CrossRef]
  36. You, N.; Han, L.; Zhu, D.; Song, W. Research on Image Denoising in Edge Detection Based on Wavelet Transform. Appl. Sci. 2023, 13, 1837. [Google Scholar] [CrossRef]
  37. Lalak, M.; Wierzbicki, D. Methodology of Detection and Classification of Selected Aviation Obstacles Based on UAV Dense Image Matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1869–1883. [Google Scholar] [CrossRef]
  38. Brady, S.L.; Trout, A.T.; Somasundaram, E.; Anton, C.G.; Li, Y.; Dillman, J.R. Improving Image Quality and Reducing Radia-tion Dose for Pediatric CT by Using Deep Learning Reconstruction. Radiology 2021, 298, 180–188. [Google Scholar] [CrossRef]
  39. Gao, L.; Gao, H.; Wang, Y.H.; Liu, D.; Momanyi, B.M. Center-Ness and Repulsion: Constraints to Improve Remote Sensing Object Detection via RepPoints. Remote Sens. 2023, 15, 1479. [Google Scholar] [CrossRef]
  40. Köhler, N.A.; Nöh, C.; Geis, M.; Kerzel, S.; Frey, J.; Gross, V.; Sohrabi, K. Influence of Ambient Factors on the Acquisition of 3-D Respiratory Motion Measurements in Infants-A Preclinical Assessment. IEEE Trans. Instrum. Meas. 2023, 72, 5014510. [Google Scholar] [CrossRef]
  41. Lamprecht, S.; Stoffels, J.; Dotzler, S.; Hass, E.; Udelhoven, T. aTrunk-An ALS-Based Trunk Detection Algorithm. Remote Sens. 2015, 7, 9975–9997. [Google Scholar] [CrossRef]
Figure 1. Structural diagram of visual measurement system.
Figure 2. Checkerboard calibration board.
Figure 3. Schematic diagram of projection.
Figure 4. Schematic diagram of IR camera error corners. All the colors represent the lines connecting the corners of the checkerboard grid. (a) Error corner diagram 1; (b) error corner diagram 2; (c) error corner diagram 3.
Figure 5. Fitting curve of binocular camera reprojection error.
Figure 6. Genetic algorithm flowchart. Y denotes YES and N denotes NO.
Figure 7. Experimental Pareto solution set.
Figure 8. Alignment diagram of color depth information collection. (a) Acquisition of tread depth information and image color using RGB-D. (b) Correspondence between the depth point cloud and colored pixel points in the coordinate system of the depth camera.
Figure 9. Overlay screening: (a) original image; (b) region of interest; (c) binarized mask layer.
Figure 10. Flow chart of RANSAC algorithm.
Figure 11. Filter effect diagram: (a) before filtering; (b) after filtering.
Figure 12. Workbench reference plane point cloud.
Figure 13. Point cloud of the upper surface of the tread.
Figure 14. The result of building the OBB bounding box on the upper surface of the tread.
Figure 15. Comparison of two detection methods.
Figure 16. Schematic diagram of data acquisition.
Figure 17. Astra Pro collects the rubber tread range.
Figure 18. Some processing steps.
Figure 19. Five sets of tread surface point cloud data.
Figure 20. Error distribution (unit: mm).
Table 1. Astra Pro camera parameters.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Baseline (mm) | 75 | Data transmission | USB 2.0 |
| Depth distance (m) | 0.6–8.0 | Video interface | UVC |
| Maximum power consumption (W) | 2.50 | Operating system | Windows 10 |
| Accuracy (mm) | ±3 | Working temperature (°C) | 10–40 |
| Depth FOV (°) | H58.4, V45.5 | Depth map resolution | 1280 × 1024 @ 7 FPS |
| Color FOV (°) | H66.1, V40.2 | Color map resolution | 1280 × 720 @ 7 FPS |
| Delay (ms) | 30–45 | Dimensions (mm) | 164.85 × 30.00 × 48.25 |
Table 2. Fitting effect.

| Camera | Sum of Squares for Error (SSE) | Root-Mean-Squared Error (RMSE) | Coefficient of Determination (R²) |
|---|---|---|---|
| RGB camera | 0.0002764 | 0.002478 | 0.9994 |
| IR camera | 0.01242 | 0.01661 | 0.994 |
| Binocular camera | 0.08985 | 0.048 | 0.5691 |
Table 3. Parameter settings.

| | Mutation Probability (Pm) | Crossover Probability (Xovr) | Population Size (Nind) | Number of Generations (Gen) |
|---|---|---|---|---|
| Value | 0.2 | 0.9 | 100 | 200 |
Table 4. Comparison of optimization results.

| | Corner Point Retention (%) | Reprojection Error (RGB Camera) | Reprojection Error (IR Camera) | Reprojection Error (Binocular Camera) |
|---|---|---|---|---|
| Pre-optimization | 100 | 0.213 | 1.372 | 0.964 |
| Post-optimization | 78 | 0.101 | 0.737 | 0.379 |
Table 5. Table of PSNR values corresponding to image quality.

| Range of PSNR Values (dB) | Image Quality |
|---|---|
| [40, +∞) | Excellent |
| [30, 40) | Good |
| [20, 30) | Poor |
| (−∞, 20) | Extremely poor |
Table 6. Denoising results for different noise data.

| Noise type | Gaussian | Gaussian | Binomial | Impulse | Line |
|---|---|---|---|---|---|
| Noise level (σ) | 25 | 50 | 0.5 | 0.5 | 25 |
| PSNR (dB), BM3D | 35.77 | 35.86 | 35.78 | 35.61 | 35.90 |
| PSNR (dB), DBSN | 43.99 | 40.99 | 46.01 | 41.47 | 34.83 |
| PSNR (dB), SFD | 44.05 | 41.21 | 49.57 | 48.03 | 37.35 |
Table 7. Comparison results.

| Dimension | Measured Value (mm) | Actual Value (mm) | Error (%) | ST1/SW1 (%) | ST2/SW2 (%) | ST3/SW3 (%) |
|---|---|---|---|---|---|---|
| Thickness | 26.57 | 25.98 | 2.27 | 3.5 | 5.0 | 7.0 |
| Width | 152.90 | 150.17 | 1.82 | 2.0 | 2.5 | 3.0 |
| Length | 198.30 | 200.28 | 0.99 | 2.0 | 2.5 | 3.0 |
Table 8. Measurement results.

| Dimension | Group | Measured Value (mm) | True Value (mm) | Absolute Error (mm) | Relative Error (%) |
|---|---|---|---|---|---|
| Thickness | Group 1 | 25.54 | 25.98 | 0.44 | 1.69 |
| Thickness | Group 2 | 25.75 | 26.11 | 0.36 | 1.38 |
| Thickness | Group 3 | 25.44 | 25.98 | 0.54 | 2.08 |
| Thickness | Group 4 | 25.19 | 25.88 | 0.69 | 2.67 |
| Thickness | Group 5 | 25.36 | 25.95 | 0.59 | 2.27 |
| Width | Group 1 | 36.80 | 37.00 | 0.20 | 0.54 |
| Width | Group 2 | 36.80 | 37.00 | 0.20 | 0.54 |
| Width | Group 3 | 36.95 | 37.00 | 0.05 | 0.14 |
| Width | Group 4 | 36.90 | 37.00 | 0.10 | 0.27 |
| Width | Group 5 | 36.90 | 37.00 | 0.10 | 0.27 |
| Length | Group 1 | 148.46 | 150.63 | 2.17 | 1.44 |
| Length | Group 2 | 148.32 | 149.76 | 1.44 | 0.96 |
| Length | Group 3 | 147.34 | 148.82 | 1.48 | 0.99 |
| Length | Group 4 | 148.42 | 150.31 | 1.89 | 1.26 |
| Length | Group 5 | 147.65 | 149.38 | 1.73 | 1.16 |
