Article

Improved Structured Light Centerline Extraction Algorithm Based on Unilateral Tracing

1 School of Information Science and Engineering, Harbin Institute of Technology, Weihai 264209, China
2 Center of Ultra-Precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin 150080, China
3 Key Lab of Ultra-Precision Intelligent Instrumentation, Harbin Institute of Technology, Ministry of Industry and Information Technology, Harbin 150080, China
* Author to whom correspondence should be addressed.
Photonics 2024, 11(8), 723; https://doi.org/10.3390/photonics11080723
Submission received: 6 July 2024 / Revised: 28 July 2024 / Accepted: 31 July 2024 / Published: 1 August 2024
(This article belongs to the Special Issue Micro-nano Optics and High-End Measurement Instruments: 2nd Edition)

Abstract

The measurement precision of a line-structured light measurement system is directly affected by the accuracy with which the center points of the laser stripes are extracted. When the measured object's surface has significant undulations and severe reflections, existing algorithms are highly susceptible to noise and prone to extracting false center points. To address these issues, an improved unilateral tracing-based structured light centerline extraction algorithm is proposed. The algorithm first performs unilateral, bidirectional tracing of the upper boundary of the preprocessed laser stripes, then uses the grayscale centroid method to extract initial center-point coordinates, and finally corrects them along the stripe's normal direction computed from the Hessian matrix. Experimental results show that the proposed algorithm still extracts the stripe center points well under strong interference, with the RMSE reduced by 37% compared to the Steger method and the running speed increased by almost 4 times compared to the grayscale centroid method. The algorithm's strong robustness, high accuracy, and efficiency provide a viable solution for real-time line-structured light measurement and high-precision three-dimensional reconstruction.

1. Introduction

Line-structured light measurement based on laser triangulation is a non-contact technique widely used in fields such as 3D measurement [1], defect recognition [2], and reverse engineering [3]. The measurement process involves capturing images containing laser stripes, extracting the center position of each stripe, and then calculating the 3D spatial information of the object. The accuracy of laser stripe center extraction therefore directly affects the precision of the entire measurement system.
Currently, the main methods for laser center point extraction include the edge center method [4], the extremum method [5], the grayscale centroid method [6], and the Steger method [7]. The edge center method analyzes the edge information of the laser stripe region and extracts the stripe centerline based on edge characteristics. The extremum method exploits the Gaussian distribution of light intensity perpendicular to the stripe, calculating the intensity gradient across the cross-section; the point where the gradient is zero corresponds to the intensity maximum, i.e., the center point of the stripe on that section line. The grayscale centroid method weights each pixel's coordinate by its grayscale value along the stripe's normal direction and divides the weighted sum by the total grayscale value of the cross-section, yielding the coordinate of the stripe's grayscale centroid as the center position. The Steger method applies the Hessian matrix for second-order morphological analysis of the image: each pixel has an associated Hessian matrix, whose eigenvalues and eigenvectors determine the stripe's normal direction. Su et al. [8] proposed a fast structured light center extraction algorithm based on the geometric centroid method, the direction template method, and the grayscale centroid method; it improves speed but is not suitable for highly precise 3D measurement systems. Xia et al. [9] improved extraction accuracy in regions where the structured light curvature changes significantly by using direction vectors to correct the grayscale centroid method, achieving good results. Zhou et al. [10] proposed an improved thinning method for extracting the centerline of laser stripes, which is fast and accurate but sensitive to noise. Ye et al. [11] used deep learning to first extract stripe shape information and then combined the normal vector with the grayscale centroid method to extract the laser stripe center, effectively reducing errors; however, deep learning requires a large number of training samples, so it is not suitable for all measurement systems. Wang and Li combined boundary tracing with the grayscale centroid method to extract the laser stripe center [12,13], but these algorithms do not handle laser stripes that overlap within the same image column, as caused by significant surface undulations. Wu et al. [14] effectively reduced the impact of noise points by setting a threshold on the number of stripe points in each segment; however, this also eliminates some shorter valid laser segments, which must be recovered by other algorithms.
The above analysis shows that algorithms based on the grayscale centroid and Steger methods achieve high accuracy but are sensitive to noise and typically require traversing the entire image, which reduces extraction speed. To address these limitations, this paper proposes a single-sided, bidirectional boundary-tracing algorithm that detects the upper boundary of the laser stripe, effectively avoiding the impact of reflective noise on stripe center extraction. The algorithm determines the initial position of the laser stripe center using the grayscale centroid method and then constructs a Hessian matrix at the initial center position to calculate the normal direction for adaptive adjustment of the center point. Experiments show that the algorithm accurately extracts the center points even in the presence of significant surface undulations and numerous reflective noise points. Moreover, because it traces and computes only within the laser stripe region rather than traversing the entire image, extraction is fast.

2. Materials and Methods

2.1. Image Preprocessing

Because the laser stripe occupies only a small portion of the image captured by the camera and has a regular distribution, the image can be segmented to extract a region of interest (ROI) before subsequent operations. This significantly reduces the computational load, accelerates algorithm execution, and improves efficiency and performance [15]. Because the image contains noise, filtering is necessary to reduce its impact on subsequent processing. Common image filtering methods include median filtering, mean filtering, Gaussian filtering, and bilateral filtering. To remove the Gaussian noise generated during image capture and transmission, this paper selects Gaussian filtering, which smooths the image while preserving edge information effectively [16].
Gaussian filtering uses a filter based on the Gaussian distribution, which can effectively remove Gaussian noise while preserving image details to a certain extent. In image processing, Gaussian filtering is typically implemented using a sliding window convolution with a discretized window. The Gaussian convolution kernel is generated using the Gaussian function, which can be expressed as follows:
$$G(u,v) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{u^{2}+v^{2}}{2\sigma^{2}}} \tag{1}$$
where σ represents the standard deviation of the Gaussian kernel.
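As a minimal sketch, the discrete kernel of Equation (1) can be generated as follows; the 5 × 5 window and σ = 1.0 shown in the usage comment are assumed values, since the paper does not report its filter parameters.

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Discrete Gaussian kernel from Equation (1), normalized to sum to 1."""
    half = size // 2
    u, v = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return kernel / kernel.sum()  # normalization preserves overall brightness

# Equivalent built-in call (requires `import cv2`; kernel size and sigma assumed):
# blurred = cv2.GaussianBlur(image, (5, 5), 1.0)
```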
After Gaussian filtering, the laser stripe can be separated from the background by applying threshold segmentation with an appropriate threshold value. The mathematical expression for threshold segmentation is shown in Equation (2):
$$I'(u,v) = \begin{cases} 1, & I(u,v) \ge T \\ 0, & I(u,v) < T \end{cases} \tag{2}$$
where $I'(u,v)$ is the mask image generated after threshold segmentation, $I(u,v)$ is the filtered image, and $T$ is the segmentation threshold.
The segmented mask image is a binary image and may contain discontinuities, as shown in Figure 1a. Therefore, it can be subjected to a morphological closing operation to connect discontinuities and eliminate edge burrs. The closing operation can be mathematically expressed as follows:
$$\mathrm{Closing}(A,B) = (A \oplus B) \ominus B \tag{3}$$
where $A$ is the input image, $B$ is the structuring element, $\ominus$ denotes the erosion operation, and $\oplus$ denotes the dilation operation. After applying the closing operation with a 7 × 7 kernel to the original mask image (Figure 1a), the result is shown in Figure 1b. It can be seen that the edge burrs of the laser stripe are effectively suppressed, and the discontinuities are successfully connected.
The processed mask image has white areas with a value of 1 and black areas with a value of 0. Multiplying this mask image with the original image will set the grayscale value of the background image region to 0 while keeping the grayscale value of the laser stripe region unchanged.
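The whole preprocessing chain of Section 2.1 can be sketched compactly with OpenCV, assuming an 8-bit grayscale input; the segmentation threshold T = 50 and the Gaussian parameters are assumed values.

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray, T: int = 50) -> np.ndarray:
    """Gaussian filtering, threshold segmentation (Eq. 2), closing (Eq. 3),
    and masking, as described in Section 2.1. T is an assumed threshold."""
    blurred = cv2.GaussianBlur(image, (5, 5), 1.0)
    # Equation (2): binary mask (1 = stripe, 0 = background)
    _, mask = cv2.threshold(blurred, T, 1, cv2.THRESH_BINARY)
    # Equation (3): closing with a 7 x 7 structuring element connects
    # discontinuities and suppresses edge burrs
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    # Multiply mask and original: background grayscale is set to 0,
    # stripe grayscale is kept unchanged
    return blurred * mask
```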

2.2. Improved Boundary-Tracing Algorithm

The traditional boundary-tracing algorithm [17] involves tracing the contour of an image clockwise or counterclockwise based on the grayscale values of the image’s 8-neighborhood. It traverses the image to find the first point greater than a set threshold as the starting point, then traces back to the starting point to terminate and form a closed contour. Figure 2 shows the pixel 8-neighborhood chain code representation.
The principle of the traditional clockwise 8-neighborhood tracing algorithm is as follows (a code sketch follows the list):
(1)
Traverse the image to find the first pixel point greater than a threshold value as the starting point.
(2)
Take the starting point as the current point and, starting from the pixel point with chain code value 0 (as shown in Figure 2), search clockwise for the next boundary point greater than the threshold value, then set the searched point as the current point.
(3)
When the chain code value of the current point is even, subtract 1 from the starting chain code value of the next boundary point search, which corresponds to a counterclockwise rotation of 45°; when it is odd, subtract 2 from the chain code value, corresponding to a counterclockwise rotation of 90°.
(4)
Repeat steps (2) and (3) until the next boundary point searched is the starting point, then end the boundary tracing.
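For concreteness, steps (1)–(4) can be sketched as below; since Figure 2 is not reproduced here, the mapping from chain-code values to pixel offsets is an assumption.

```python
import numpy as np

# Chain-code offsets for codes 0..7 as (row, column) steps; the exact
# code-to-direction assignment of Figure 2 is an assumption.
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def trace_contour(img: np.ndarray, threshold: int = 0) -> list:
    """Minimal sketch of traditional clockwise 8-neighborhood tracing, steps (1)-(4)."""
    rows, cols = np.nonzero(img > threshold)   # step (1): first above-threshold pixel
    if rows.size == 0:
        return []
    start = (int(rows[0]), int(cols[0]))
    contour, current, code = [start], start, 0
    while True:
        for k in range(8):                     # step (2): clockwise search
            c = (code + k) % 8
            nu, nv = current[0] + OFFSETS[c][0], current[1] + OFFSETS[c][1]
            if 0 <= nu < img.shape[0] and 0 <= nv < img.shape[1] and img[nu, nv] > threshold:
                # step (3): back the next start code off by 1 (even) or 2 (odd)
                code = (c - 1) % 8 if c % 2 == 0 else (c - 2) % 8
                current = (nu, nv)
                break
        else:
            return contour                     # isolated point: no neighbor found
        if current == start:                   # step (4): contour is closed
            return contour
        contour.append(current)
```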
Due to the symmetry of the upper and lower boundaries of the laser stripe, this paper proposes a boundary-tracing algorithm that extracts only the upper boundary of the laser stripe. This tracing algorithm is divided into leftward and rightward tracing, selectively searching for pixel points in the 8-neighborhood. The search method is shown in Figure 3.
The gray areas in the figure represent the current boundary points, and the black areas represent the points to be searched. Taking the rightward search in Figure 3a as an example, the algorithm searches clockwise through encoding values 0, 1, 2, 3, 4 around the current boundary point for the next boundary point with a non-zero grayscale value in the ROI image extracted in Section 2.1, and then sets the found point as the current point. To prevent the search from getting stuck in a vertical loop, when the previous search's encoding value is 0 (i.e., the trace moved to the pixel directly above), the next boundary point is searched only among encoding values 0, 1, 2, 3; when the previous encoding value is 4 (i.e., the trace moved to the pixel directly below), the search covers only encoding values 1, 2, 3, 4. When no point in the search area satisfies the criteria, the current search ends.
The specific process of single-sided search is as follows:
(1)
Search the preprocessed image from top to bottom and from left to right for a pixel with a grayscale value not equal to 0 as the initial point, mark this point as p_start, and set it as the current point.
(2)
Perform rightward boundary tracing from the current point, then set the traced boundary point as the new current point. When there is no pixel that meets the criteria, stop tracing and repeat step (1) to find a new initial point.
(3)
For the new initial point p_start, check the pixels on its left side. If any of the three pixels to the left, (u, v−1), (u+1, v−1), (u+2, v−1), satisfies the boundary condition, consider that a laser stripe also exists on the left side of the new initial point, mark this point as p_start, and perform leftward boundary tracing from p_start.
(4)
After tracing the upper boundary of the laser stripe to the left, perform rightward boundary tracing from p_start.
(5)
Repeat steps (2), (3), and (4) until completing the upper boundary tracing of the laser stripe by traversing the columns of the image.
The flowchart and illustration are shown in Figure 4 and Figure 5, respectively.
In Figure 5, the black and gray areas represent regions where the grayscale value is not equal to zero, indicating the presence of laser stripes and specular reflection noise. The gray area represents the traced laser boundary, while the isolated black areas are specular reflection noise points. The red arrows represent the pixels indexed by the tracing algorithm. It can be seen that the improved boundary-tracing algorithm effectively retrieves the upper boundary of the stripe and avoids the influence of noise points.
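A simplified sketch of the rightward trace with the vertical-loop guard is given below; since Figure 3a is not reproduced here, the code-to-offset mapping (0 = up through 4 = down) is an assumption, and the leftward trace would mirror it.

```python
import numpy as np

# Offsets for rightward-search codes 0..4 as (row, column) steps; the
# up / up-right / right / down-right / down mapping is an assumption.
RIGHT_OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0)]

def trace_right(img: np.ndarray, start: tuple) -> list:
    """Sketch of rightward upper-boundary tracing with the vertical-loop guard."""
    boundary, visited = [start], {start}
    current, prev_code = start, None
    while True:
        # Guard: after code 0 (straight up) do not search code 4 (straight down),
        # and after code 4 do not search code 0, to avoid vertical loops.
        codes = range(4) if prev_code == 0 else range(1, 5) if prev_code == 4 else range(5)
        for c in codes:
            nu = current[0] + RIGHT_OFFSETS[c][0]
            nv = current[1] + RIGHT_OFFSETS[c][1]
            if 0 <= nu < img.shape[0] and 0 <= nv < img.shape[1] and img[nu, nv] != 0:
                current, prev_code = (nu, nv), c
                break
        else:
            return boundary          # no candidate in the search area: end this trace
        if current in visited:       # safety stop for pathological shapes
            return boundary
        visited.add(current)
        boundary.append(current)
```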

2.3. Initial Center Point Determination Based on the Gray-Level Centroid Method

In this paper, while tracing the upper boundary of the laser stripe, the gray-level centroid method is used to calculate the initial center point position. The traditional formula for the gray-level centroid method is as follows:
$$\bar{v}_{u} = \frac{\sum_{v=m}^{n} I(u,v)\, v}{\sum_{v=m}^{n} I(u,v)} \tag{4}$$
where $\bar{v}_{u}$ represents the v-component coordinate of the center point, $I(u,v)$ represents the grayscale value at $(u,v)$, n represents the row value of the upper boundary of the stripe, and m represents the row value of the lower boundary of the stripe.
The traditional gray-level centroid method typically sums over each entire column or row of the image to compute the centroid. In the algorithm presented in this paper, when a newly traced boundary point lies in a different column from the current point, i.e., its chain code value is 1, 2, or 3, the new boundary point is taken as the starting point of a downward search for the contiguous set of points with non-zero grayscale values, and these points are then used for the gray-level centroid calculation. The specific flowchart is shown in Figure 6.
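As a sketch, the per-column centroid over the stripe run below a traced boundary point can be written as follows; the image array is assumed to be indexed img[v, u] with v the row and u the column, matching the summation over rows v in Equation (4), and the run is assumed to end at the first zero-valued pixel.

```python
import numpy as np

def column_centroid(img: np.ndarray, v_top: int, u: int) -> float:
    """Equation (4) restricted to the contiguous non-zero run below the traced
    upper-boundary point (row v_top, column u); avoids summing the whole column."""
    v, num, den = v_top, 0.0, 0.0
    while v < img.shape[0] and img[v, u] != 0:
        num += float(img[v, u]) * v  # grayscale-weighted row coordinate
        den += float(img[v, u])
        v += 1
    return num / den if den > 0.0 else float(v_top)
```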
Compared to the traditional gray-level centroid method, the algorithm presented in this paper does not require traversing the entire image, which shortens its runtime. Additionally, it can accurately extract the center points in images where laser stripes overlap within the same column, while avoiding interference from stray noise points, thus enhancing the algorithm's robustness.

2.4. Center Point Optimization Based on the Hessian Matrix

Although the center points of the laser stripe extracted using the gray-level centroid method can achieve sub-pixel accuracy, the method only operates on the pixels within each column and does not consider the orientation of the stripe. Therefore, after using the gray-level centroid method to calculate the initial center points, this paper uses the Hessian matrix to optimize the center points along the normal direction. The Hessian matrix [18] is commonly used to determine the normal direction at image pixels. Its representation for a two-dimensional image is as follows:
$$H(u,v) = \begin{bmatrix} I_{uu} & I_{uv} \\ I_{uv} & I_{vv} \end{bmatrix} \tag{5}$$
where $H(u,v)$ represents the Hessian matrix of the pixel at $(u,v)$, $I_{uu}$ represents the second-order difference in the row direction, $I_{vv}$ represents the second-order difference in the column direction, and $I_{uv}$ represents the mixed second-order difference, taken first in the row direction and then in the column direction. These differences are calculated as follows:
$$\begin{aligned} I_{u} &= I(u+1,v) - I(u,v) \\ I_{v} &= I(u,v+1) - I(u,v) \\ I_{uu} &= I(u+1,v) + I(u-1,v) - 2I(u,v) \\ I_{vv} &= I(u,v+1) + I(u,v-1) - 2I(u,v) \\ I_{uv} &= I_{vu} = I(u+1,v+1) - I(u+1,v) - I(u,v+1) + I(u,v) \end{aligned} \tag{6}$$
Taking the initial center point $(u_0, v_0)$ as the center, we construct the Hessian matrix. The eigenvector corresponding to the eigenvalue of largest absolute value gives the normal direction at the current point, denoted $(e_u, e_v)$. By performing a second-order Taylor expansion of $I(u_0, v_0)$ along the normal direction, the laser center point corrected in the normal direction is obtained as $(u_0 + t e_u, v_0 + t e_v)$, where
$$t = -\frac{e_u I_u + e_v I_v}{e_u^{2} I_{uu} + 2 e_u e_v I_{uv} + e_v^{2} I_{vv}}$$
When both $|t e_u|$ and $|t e_v|$ are less than or equal to 0.5 and the second derivative at $(u_0, v_0)$ exceeds a threshold, the point is corrected accordingly.
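A sketch of this correction is given below, assuming a pre-smoothed float image indexed I[u, v] with u as the first index, matching Equation (6); the second-derivative threshold is an assumed parameter.

```python
import numpy as np

def hessian_correct(I: np.ndarray, u0: int, v0: int, thresh: float = 1.0):
    """Normal-direction correction of an initial center (u0, v0), per Section 2.4.
    I is a (pre-smoothed) float image indexed I[u, v] as in Equation (6)."""
    # First- and second-order differences from Equation (6)
    Iu  = I[u0 + 1, v0] - I[u0, v0]
    Iv  = I[u0, v0 + 1] - I[u0, v0]
    Iuu = I[u0 + 1, v0] + I[u0 - 1, v0] - 2 * I[u0, v0]
    Ivv = I[u0, v0 + 1] + I[u0, v0 - 1] - 2 * I[u0, v0]
    Iuv = I[u0 + 1, v0 + 1] - I[u0 + 1, v0] - I[u0, v0 + 1] + I[u0, v0]
    H = np.array([[Iuu, Iuv], [Iuv, Ivv]])
    eigvals, eigvecs = np.linalg.eigh(H)
    eu, ev = eigvecs[:, np.argmax(np.abs(eigvals))]  # normal direction (e_u, e_v)
    denom = eu * eu * Iuu + 2 * eu * ev * Iuv + ev * ev * Ivv
    if denom == 0:
        return float(u0), float(v0)
    t = -(eu * Iu + ev * Iv) / denom
    # Accept the sub-pixel offset only if it stays inside the current pixel and
    # the second derivative along the normal is strong enough (threshold assumed)
    if abs(t * eu) <= 0.5 and abs(t * ev) <= 0.5 and abs(denom) > thresh:
        return u0 + t * eu, v0 + t * ev
    return float(u0), float(v0)
```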

3. Results and Discussion

The experiments were conducted on a computer equipped with an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz and 16 GB of RAM. The programming language used was Python, and the development environment was Spyder 5.4.3. The selected equipment included the Daheng MER-531-20GC-P industrial camera and a 650 nm line laser as the light source.

3.1. Line-Structured Light Center Point Extraction Experiment

The experiment selected the traditional grayscale centroid method, the Steger method, and the improved thinning method [10] for comparative testing. First, an image of a cup lid with laser stripes was captured; this vertically protruding structure is commonly found in mechanical parts. Figure 7 shows the extraction results of the four algorithms.
The left side of the figure shows enlarged laser stripe images at the vertical protrusion of the cup lid together with the center points extracted by each algorithm. Because the vertical protrusion causes laser stripes to overlap in the same column, the grayscale centroid method, which computes the centroid over the entire column, fails to correctly extract the center of the overlapping part of the laser. The Steger method can extract the centers of the overlapping parts separately, but because the laser stripe intensity does not follow a perfect Gaussian distribution, there are multiple extreme points near the center; the points where the second derivative is zero are not unique, so multiple laser centers are extracted. Additionally, at the ends of the laser stripes the light bands are broken and refraction and scattering occur, making the intensity distribution near the edges complex, which can also lead to the erroneous extraction of multiple centers. The improved thinning method effectively extracts the center points of the two laser stripes without producing false center points; however, as with the original thinning method [19], some laser center points at the edges of the stripes are missing. The proposed algorithm, because it checks for laser stripes on the left side after finding a new tracing starting point, effectively extracts the center of the overlapping part, and its center extraction step ensures that the extracted center is unique.
To verify the performance of the proposed algorithm, center point extraction was performed on a model with significant surface roughness and reflection, as shown in Figure 8.
From the image, we can see that the surface of the model is uneven, with overlapping laser stripes in the same column. Additionally, due to the metallic luster of the model’s material, there are many specular reflection noise points in the image, which greatly affect the extraction of the center points of the stripes. The extraction results for this model are shown in Figure 9.
From the image, it is evident that the grayscale centroid method is severely affected by specular reflection noise, as it calculates the centroid for the entire column. The Steger algorithm can extract the laser stripes, but it mistakenly includes noise as part of the laser stripes due to its reliance on the derivative of local gradients for extraction. The improved thinning method can also extract the laser center points well, but it still inevitably extracts the skeleton of noise points, resulting in false center points. In contrast, the proposed algorithm is not affected by noise and extracts a set of center points that align well with the shape of the laser stripes. Additionally, the extracted stripe centerlines are continuous without generating false center points. This is because the algorithm only traces the upper boundary of the laser stripes and extracts center points only from the valid region of the tracing results, which helps avoid the influence of environmental noise.

3.2. Algorithm Accuracy and Efficiency Experiment

The effectiveness of a line-structured light center point extraction algorithm is measured mainly in terms of accuracy and efficiency. Since the true positions of the laser center points are unknown, this paper adopts the approach of reference [20], using the root mean square error (RMSE) of the distances from the extracted light stripe center points to a fitted line to characterize extraction accuracy. Because the grayscale centroid method and the Steger method cannot correctly extract the center points under strong interference, their accuracy in such scenes is poor and cannot serve as a reference. This paper therefore selects a chessboard calibration board, as used in calibrating structured light 3D reconstruction systems, as the target object for the experiment, as shown in Figure 10. The RMSE from the center points to the fitted line is calculated as follows:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^{2}}{n}} \tag{7}$$
where n is the number of laser center points, $d_i$ is the distance from the i-th center point to the fitted line, and $\bar{d}$ is the average distance from the center points to the line. $d_i$ is calculated using the point-to-line distance formula:
$$d_i = \frac{\left| A v_i + B u_i + C \right|}{\sqrt{A^{2} + B^{2}}} \tag{8}$$
where $v_i$ and $u_i$ are the coordinates of the currently extracted center point, and A, B, and C are the parameters of the fitted line equation $Av + Bu + C = 0$. The RMSE values from the center points to the fitted line for the four algorithms are shown in Table 1.
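A sketch of Equations (7) and (8) is shown below, assuming a total-least-squares line fit (the paper does not state its fitting method).

```python
import numpy as np

def line_rmse(points: np.ndarray) -> float:
    """RMSE of center-point distances to a fitted line Av + Bu + C = 0.
    points is an (n, 2) array of (v_i, u_i) coordinates."""
    v, u = points[:, 0], points[:, 1]
    # Total least squares: the normal (A, B) of the best-fit line is the
    # right singular vector of the centered data with the smallest singular value
    M = np.column_stack([v - v.mean(), u - u.mean()])
    A, B = np.linalg.svd(M, full_matrices=False)[2][-1]
    C = -(A * v.mean() + B * u.mean())
    d = np.abs(A * v + B * u + C) / np.hypot(A, B)      # Equation (8)
    return float(np.sqrt(np.mean((d - d.mean()) ** 2)))  # Equation (7)
```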
From the table, it can be seen that the algorithm proposed in this paper reduces the error by 41% compared to the grayscale centroid method, 37% compared to the Steger method, and 41% compared to the improved thinning method. The experiment proves that this algorithm has high accuracy and can better meet the corresponding measurement requirements.
To verify the efficiency of the algorithm, this paper compared the time required by each algorithm to extract the laser stripes from the image with strong interference, as mentioned earlier. The results are shown in Table 2.
From the table, it can be seen that the algorithm proposed in this paper has the shortest running time. Compared to the grayscale centroid method, it is about 4 times faster, and compared to the Steger method and the improved thinning method, it is about 45 times and 28 times faster, respectively. This is because the algorithm only needs to process the grayscale values near the laser stripes, without the need to traverse the entire image, which greatly improves the real-time performance of the algorithm.

4. Conclusions

In this paper, we presented an improved single-sided tracing structured light center point extraction algorithm. By checking for laser stripes on the left side when a new tracing starting point is found, the algorithm effectively handles cases where overlapping stripes exist in the same column, and single-sided tracing avoids the influence of specular reflection noise. This tracing method overcomes the sensitivity of existing algorithms to reflective noise and their need to traverse the entire image, while also suggesting a new direction for object contour tracing in image processing. The algorithm corrects the initial center points extracted by the grayscale centroid method along the normal direction computed from the Hessian matrix, yielding higher accuracy. Experimental results demonstrate that, compared with traditional structured light extraction algorithms, the proposed algorithm improves accuracy by at least 37% relative to the Steger method and runs at least 4 times faster than the grayscale centroid method. Its strong robustness, high accuracy, and efficiency provide a viable solution for real-time line-structured light measurement and high-precision three-dimensional reconstruction. However, the algorithm is occasionally affected by noise. In future work, we plan to design more refined threshold segmentation and region of interest (ROI) extraction algorithms to generate mask images with fewer noise points, thereby reducing the impact of noise and further enhancing robustness.

Author Contributions

Conceptualization, Y.H.; methodology, Y.H. and W.K.; software, Y.H.; validation, Y.H. and W.K.; formal analysis, Y.H.; investigation, Y.H.; resources, W.K. and Z.L.; data curation, Y.H.; writing—original draft preparation, Y.H.; writing—review and editing, Y.H., W.K. and Z.L.; visualization, Y.H.; supervision, W.K.; project administration, W.K. and Z.L.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shandong Provincial Natural Science Foundation (grant number ZR2020MF141) and the National Natural Science Foundation of China (grant number 61975046).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nguyen, T.T.; Slaughter, D.C.; Max, N.; Maloof, J.N.; Sinha, N. Structured Light-Based 3D Reconstruction System for Plants. Sensors 2015, 15, 18587–18612. [Google Scholar] [CrossRef] [PubMed]
  2. Cao, X.; Xie, W.; Ahmed, S.M.; Li, C.R. Defect detection method for rail surface based on line-structured light. Measurement 2020, 159, 107771. [Google Scholar] [CrossRef]
  3. Park, C.S.; Chang, M. Reverse engineering with a structured light system. Comput. Ind. Eng. 2009, 57, 1377–1384. [Google Scholar] [CrossRef]
  4. Lyvers, E.P.; Mitchell, O.R.; Akey, M.L.; Reeves, A.P. Subpixel measurements using a moment-based edge operator. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 1293–1309. [Google Scholar] [CrossRef]
  5. Jiang, C.; Li, W.-L.; Wu, A.; Yu, W.-Y. A novel centerline extraction algorithm for a laser stripe applied for turbine blade inspection. Meas. Sci. Technol. 2020, 31, 095403. [Google Scholar] [CrossRef]
  6. Li, Y.; Zhou, J.; Huang, F.; Liu, L. Sub-Pixel Extraction of Laser Stripe Center Using an Improved Gray-Gravity Method. Sensors 2017, 17, 814. [Google Scholar] [CrossRef]
  7. Baumgartner, A.; Steger, C.; Mayer, H.; Eckstein, W.; Ebner, H. Automatic road extraction based on multi-scale, grouping, and context. Photogramm. Eng. Remote Sens. 1999, 65, 777–786. [Google Scholar]
  8. Su, X.Q.; Xiong, X.M. High-speed method for extracting center of line structured light. J. Comput. Appl. 2016, 36, 238–242. [Google Scholar]
  9. Xia, X.; Fu, S.P.; Xia, R.B.; Zhao, J.B.; Hou, W.G. Extraction algorithm of line structured light center based on improved gray gravity method. Laser J. 2024, 45, 75–79. [Google Scholar]
  10. Zhou, X.M.; Wang, H.; Li, L.J.; Zheng, S.C.; Fu, J.J.; Tian, Q.H. Line laser center extraction method based on the improved thinning method. Laser J. 2023, 44, 70–74. [Google Scholar]
  11. Ye, C.; Feng, W.; Wang, Q.; Wang, C.; Pan, B.; Xie, Y.; Hu, Y.; Chen, J. Laser stripe segmentation and centerline extraction based on 3D scanning imaging. Appl. Opt. 2022, 61, 5409–5418. [Google Scholar] [CrossRef]
  12. Wang, R.J.; Huang, M.M.; Ma, L.D. Research on Center Extraction Algorithm of Line Structured Light Based on Unilateral Tracking and Midpoint Prediction. Chin. J. Lasers 2024, 51, 108–118. [Google Scholar]
  13. Li, W.; Peng, G.; Gao, X.; Ding, C. Fast Extraction Algorithm for Line Laser Strip Centers. Chin. J. Lasers 2020, 47, 192–199. [Google Scholar]
  14. Wu, Y.B.; Chen, D.L.; Yang, C.; He, W.; Sun, X.J. Multi-Line Structured Light Center Extraction Based on Improved Steger Algorithm. Appl. Laser 2023, 43, 188–195. [Google Scholar]
  15. Izadpanahkakhk, M.; Razavi, S.M.; Taghipour-Gorjikolaie, M.; Zahiri, S.H.; Uncini, A. Deep Region of Interest and Feature Extraction Models for Palmprint Verification Using Convolutional Neural Networks Transfer Learning. Appl. Sci. 2018, 8, 1210. [Google Scholar] [CrossRef]
  16. Yu, J. Based on Gaussian filter to improve the effect of the images in Gaussian noise and pepper noise. J. Phys. Conf. Ser. 2023, 2580, 012062. [Google Scholar] [CrossRef]
  17. Seo, J.; Chae, S.; Shim, J.; Kim, D.; Cheong, C.; Han, T.-D. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors. Sensors 2016, 16, 353. [Google Scholar] [CrossRef] [PubMed]
  18. Liu, J.; Liu, L.H. Laser Stripe Center Extraction Based on Hessian Matrix and Regional Growth. Laser Optoelectron. 2019, 56, 113–118. [Google Scholar]
  19. Zhang, T.Y.; Suen, C.Y. A fast parallel algorithm for thinning digital patterns. Commun. ACM 1984, 27, 236–239. [Google Scholar] [CrossRef]
  20. Zhang, J.; Lu, Y.H.; Liang, L.P.; Zhao, C.Y. Optimization Method of Adaptive Center Extraction of Linear Structured Light Stripe. Appl. Laser 2019, 39, 28–1034. [Google Scholar]
Figure 1. Mask images of the region of interest extraction. (a) after threshold segmentation; (b) after closing operation.
Figure 2. Pixel 8-neighborhood chain code representation.
Figure 3. Illustration of single-sided bidirectional search. (a) rightward search; (b) leftward search.
Figure 4. Flowchart of laser upper boundary tracing.
Figure 5. Illustration of laser upper boundary tracing.
Figure 6. Flowchart of the gray-level centroid method.
Figure 7. Extraction results of the cup-lid experiment. (a) the original structured light image of the cup lid; (b) the grayscale centroid method; (c) the Steger method; (d) the improved thinning method; (e) the proposed method.
Figure 8. Structured light image of a complex model.
Figure 9. Comparison of algorithms. (a) the grayscale centroid method; (b) the Steger method; (c) the improved thinning method; (d) the proposed algorithm.
Figure 10. Chessboard calibration board with laser stripes.
Table 1. Comparison of RMSE between extracted center points and the fitted line.
Algorithm      Grayscale Centroid   Steger   Improved Thinning Method   Proposed Algorithm
RMSE (pixels)  0.307                0.286    0.304                      0.180
Table 2. Comparison of runtime between the proposed algorithm and other classic algorithms.
Algorithm      Grayscale Centroid   Steger   Improved Thinning Method   Proposed Algorithm
Runtime (s)    0.492                4.97     3.05                       0.111
