Article

A Novel Four-Step Algorithm for Detecting a Single Circle in Complex Images

School of Mechanical and Electrical Engineering, Soochow University, Suzhou 215137, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(22), 9030; https://doi.org/10.3390/s23229030
Submission received: 13 October 2023 / Revised: 30 October 2023 / Accepted: 1 November 2023 / Published: 7 November 2023

Abstract

Single-circle detection is vital in industrial automation, intelligent navigation, and structural health monitoring. In these fields, the circle is usually present in images with complex textures, multiple contours, and mass noise. However, commonly used circle-detection methods, including random sample consensus, the randomized Hough transform, and the least squares method, suffer from low accuracy, low efficiency, and poor stability under such conditions. To improve the accuracy, efficiency, and stability of circle detection, this paper proposes a single-circle detection algorithm that combines Canny edge detection, a clustering algorithm, and an improved least squares method. To verify the superiority of the algorithm, its performance is compared on self-captured image samples and the GH dataset. The proposed algorithm detects the circle with an average error of two pixels and has higher detection accuracy, efficiency, and stability than random sample consensus and the randomized Hough transform.

1. Introduction

1.1. Background

Single-circle detection has many application scenarios, including automated inspection and assembly, the identification of weld joints and weld seams, PCB hole detection, and non-destructive testing [1,2,3,4,5]. For example, only the single circle in the image needs to be detected when welding the inner diameter edges of tube heat exchanger bores using machine vision. In the abovementioned fields, the circles to be detected are usually present in complex images: images with multiple contours, intricate textures, mass noise, and varying levels of brightness, which generally contain a large amount of edge and structure information. Therefore, detecting a circle in a complex image and obtaining its localization and shape parameters is more challenging. Widely used circle parameter detection methods include random sample consensus, the randomized Hough transform, and least squares, which offer good robustness and accuracy [6,7].

1.2. Literature Review

The Hough transform (HT) has received attention from a wide range of scholars due to its insensitivity to noise and ease of implementation in parallel computing. However, the HT algorithm has a long computation time and requires a large storage space, making circle detection inefficient. To solve this problem, Xu et al. [8] proposed the randomized Hough transform (RHT). The RHT algorithm maps multiple pixels on the edge to a single point in the parameter space and determines the circle's parameters from three randomly selected points, which significantly reduces computation time and storage space. Consequently, many scholars have conducted studies based on the RHT algorithm [9,10]. Nonetheless, the algorithm uses three points instead of all points along the circle edge to determine the circle parameters, which may reduce detection accuracy. To enhance the probability that the three points belong to the same circle, Wang [11] proposed an improved RHT method integrated with a subpixel circle-fitting detection algorithm, which removes isolated points after edge extraction. The method effectively eliminates noise and improves circle detection accuracy. To better remove noise and determine the fitted sample points, Jiang [12] proposed an efficient randomized Hough transform circle detection algorithm based on probabilistic sampling and feature points, which optimizes the methods of determining sample points and finding candidate circles. Experimental results show that the algorithm improves the effectiveness of sampling the fitted sample points and prevents fake circles from being regarded as candidate circles. Both the RHT algorithm and its improved variants obtain the circle parameters via random sampling, which is difficult to apply to circle detection in complex images because the number of noise points in complex images exceeds the number of feature points on the circle edge.
Random sample consensus (RANSAC) was proposed by Fischler and Bolles in 1981 as an iterative, non-deterministic algorithm for estimating parameters from noisy datasets; it has been used in various image processing and computer vision applications [13]. When fitting a circle, a subset of the sample points is randomly selected as the set of fitted sample points, and a circle is fitted. Kiddee et al. [14] used the RANSAC algorithm to determine the location of the edge feature points in circular weld tracking. Although the algorithm can estimate the parameters of the circle edges, it is only suitable for cases where there are few noise points outside the circle. To solve the problem of excessive noise points outside the circle, Ma et al. [15] proposed a spatial circle center fitting method based on the RANSAC algorithm, which reduces the noise outside the circle and improves the robustness of circle detection. However, circle detection in complex images still fails to achieve an excellent fitting effect.
Least squares fitting of circles (LSC) [16,17,18] fits a circle by minimizing the sum of the squares of the distances between the sample points and the corresponding points on the fitted circle; it has high fitting accuracy and a faster detection speed than the RHT and RANSAC algorithms. However, the results obtained with the LSC algorithm are easily affected by noise. Therefore, scholars have sought to improve the LSC algorithm. Zhou et al. [19] proposed the MFLS algorithm, which removes noise points by establishing a mathematical model in polar coordinates and then uses the LSC algorithm for circle fitting. The method has high positioning accuracy; however, the detection results may be seriously affected when the noise points are not entirely removed. To detect the parameters of a punched circle quickly and accurately, Cao et al. [20] proposed a circle fitting method based on LSC and the mean shift algorithm, which concentrates the centers of the fitted circles around the true circle's center in order to obtain the best actual circle. Experiments show that this algorithm detects circles faster than the RHT algorithm.
In addition to the methods mentioned above, AI-based approaches, e.g., deep learning, have also been used in the literature to detect circle contours. AI-based circular contour detection methods usually have high accuracy and robustness, but their performance depends on the algorithms used and the quality of the training data. The recognition results are usually good if the models are adequately trained and generalize well; however, false or missed detections may still occur in complex or noisy image scenes.
In summary, the increased complexity of the images in which the target circles are located exposes some limitations of these detection algorithms. The RHT and RANSAC algorithms are designed around random sampling: they fit the circle using some of the sample points (pixels at the edges) instead of all sample points of the whole circle, which may lead to the selection of sample points that are not representative enough, especially when the sample points contain noise or outliers. The LSC algorithm has high accuracy but is extremely sensitive to noise. In addition, an excessive number of sample points increases the complexity of the least squares nonlinear optimization. Thus, a new single-circle detection algorithm is desired for application in complex images.

1.3. Organization

The rest of the paper is organized as follows: Section 2 states the single-circle detection problem, Section 3 describes the proposed single-circle detection principle, Section 4 performs comparative experiments and results analysis, and Section 5 concludes the paper and discusses the future work.

2. Problem Statement

Although single-circle detection against a simple background is a typical computer vision problem that has been well solved in the literature, single-circle detection in a complex environment requires a more efficient and accurate method. The expected detection method addresses the following four issues:
  • The removal of mass noise in the image edge preprocessing stage. Interfering points are an adverse factor affecting the accuracy and efficiency of single-circle detection. Mass noise increases the difficulty of de-noising and main contour detection; therefore, the noise needs to be removed accurately when detecting a single circle against a complex background.
  • The selection of sample points for fitting circles. After image edge processing, interfering points affect the fitting results. These interfering points are scattered in a low-density region. In contrast, the sample points of the main contour are connected in an arc and are more tightly connected in a high-density area. Considering the characteristics of the interfering points and sample points, establishing a sample point selection method for fitting candidate circles is another challenge.
  • The iteration of candidate circles and determination of ideal circles. Overfitting and underfitting are prevented via suitable methods during the exact fitting of circles. We need to find an effective and fast iterative solution for the candidate circle, which in turn ensures the quality of the ideal circle.
  • The improvement of output circle detection accuracy. Despite reducing the frequency of overfitting and underfitting occurrences, there may still be an error between the ideal circle and the real-world circle due to the influence of various interfering points. To improve the accuracy of output circle detection, the effect of interfering points on output circle parameters needs to be further reduced.
Definitions of terms used in the methods are given below.
Definition 1. 
Candidate circle denotes a circle constructed during the least squares circle-fitting iteration process. It cannot be directly output as the final result and needs further analysis and screening.
Definition 2. 
Ideal circle denotes the last circle fitted by the least squares method in Section 3.3.
Definition 3. 
Output circle denotes the final circle output, which is expected to be with high accuracy and stability.
Definition 4. 
Edge detection denotes a method to extract edges with large gradients from an image; the extracted edges include the circle edges to be detected and the interfering points.
Definition 5. 
Main contour denotes the target edge to be detected in the image.
Definition 6. 
Sample points denote pixels in an image. In this paper, the corresponding pixels are put into the coordinate system to explain the principle of each algorithm. Therefore, the pixels are called sample points.
Definition 7. 
Data points in K-means algorithm represent the coordinate values of the center and radius of all candidate circles.

3. Methods

The algorithm proposed for single-circle detection combines the DBSCAN clustering algorithm, the least squares method (LS), and the K-means clustering algorithm. For brevity, it is named the DBLSKCF algorithm. The four steps of the DBLSKCF algorithm address the four challenges listed above. The complete single-circle detection process is shown schematically in Figure 1.
  • Step 1. Image edge preprocessing.
  • Step 2. Selection of sample points for fitting curves.
  • Step 3. Iteration of candidate circles and determination of the ideal circles.
  • Step 4. Accuracy improvement of the output circle detection.

3.1. Image Edge Preprocessing

Image edge preprocessing is the foundation for extracting target edges and aims to highlight real and valuable information. However, images often contain noise due to uncertainties such as the acquisition equipment and lighting conditions. Therefore, image edge preprocessing, which consists of two key steps, Canny edge detection and main contour screening, can significantly improve single-circle detection performance.

3.1.1. Canny Edge Detection

Edge detection is used in many applications to observe image features based on significant changes in gray level. In addition, it can reduce the amount of data in an image while preserving its structural properties [21]. Therefore, the classical Canny edge detection algorithm is used to extract edge features from images [22]. The edge detection accuracy depends on the thresholds, and a series of pre-experiments was conducted to determine appropriate values. By analyzing the pre-experimental results, applicable high and low thresholds are selected to extract information about the target edges.
Differentiation is the basis of gradient computation and is very sensitive to abrupt changes in the image (generally noise). To improve the accuracy of the detection results, the image needs to be filtered before edge detection to remove interfering points and reduce pseudo edges. Gaussian filtering is effective in smoothing the image and suppressing interfering points.
The Gaussian kernel size and the standard deviation affect the filtering effect. The standard deviation in this algorithm uses the default parameter. To determine the optimal Gaussian kernel size, three complex images are filtered with Gaussian kernel sizes of 5 × 5, 7 × 7, 9 × 9, and 11 × 11, respectively. The filtered images are then subjected to edge detection using the Canny edge detection algorithm. The results are as follows.
As shown in Figure 2, when the Gaussian kernel size is 5 × 5 or 7 × 7, the corresponding edge detection results have many interfering points and pseudo edges around the detected edges, which may reduce the single-circle detection accuracy. The filtering effect is better when the Gaussian kernel size is 9 × 9 or 11 × 11. Nevertheless, an excessively large Gaussian kernel may filter out some target edges, leading to significant deviations in the circle detection results. Thus, the Gaussian kernel size chosen in the DBLSKCF algorithm is 9 × 9.
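The filtering step above can be sketched in code. The following is a minimal NumPy stand-in for the Gaussian smoothing applied before Canny edge detection (in practice OpenCV's `cv2.GaussianBlur` and `cv2.Canny` would be used); the default standard-deviation formula mirrors OpenCV's convention and is an assumption here, since the paper only states that the default parameter is used.

```python
import numpy as np

def gaussian_kernel(ksize, sigma=None):
    """Build a normalized 2-D Gaussian kernel of size ksize x ksize."""
    if sigma is None:
        # OpenCV's default sigma for a given kernel size (assumed here)
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    ax = np.arange(ksize) - (ksize - 1) / 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)  # separable: 2-D kernel is an outer product

def gaussian_filter(img, ksize):
    """Smooth an image by direct convolution with a Gaussian kernel."""
    k = gaussian_kernel(ksize)
    pad = ksize // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(ksize):          # accumulate shifted, weighted copies
        for dx in range(ksize):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out
```

Larger kernels suppress more noise (a 9 × 9 kernel leaves less residual noise than a 5 × 5 one) but also blur genuine edges, which is exactly the trade-off discussed above.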

3.1.2. Main Contour Screening

Canny edge detection results show that the edges of the main contour are more tightly connected, while the interfering points are mostly irregularly distributed, making it difficult for them to form a complete edge. Even if these interfering points are connected to form an edge, its length will be much smaller than that of the main contour. Based on this feature, the edge lengths are used to screen for the main contour. Specifically, we calculate the length of each edge and arrange these lengths in descending order to obtain a sequence of edge lengths. We select a few longer edges to narrow the main contour range and determine the number of retained edges by setting a threshold. The calculation formulas are as follows:
$C_m = \{ e_1, e_2, \ldots, e_m \}$  (1)
$C_s^0 = \{ e_1^0, e_2^0, \ldots, e_s^0 \} \quad (s \le m)$  (2)
$C_p^s = \{ p_1^s, p_2^s, \ldots, p_l^s \}$  (3)
where $C_m$ denotes the set of all edges before sorting; $e_1, e_2, \ldots, e_m$ represent the edges in the image; $C_s^0$ denotes the set of the first $s$ long edges after sorting; $e_1^0, e_2^0, \ldots, e_s^0$ denote the first $s$ long edges after sorting; $C_p^s$ denotes the set of pixels of the edges in $C_s^0$; and $p_1^s, p_2^s, \ldots, p_l^s$ denote the pixels of the edges in $C_p^s$. Please note that $C_p^s$ is the pixel set output by the image edge preprocessing step.
The threshold directly affects the accuracy and efficiency of single-circle detection. In the Canny edge detection algorithm, some irrelevant small edges may be detected due to noise, influencing the circle detection process. The DBLSKCF algorithm keeps a few of the longer edges to improve the accuracy and efficiency of circle detection. The length of an edge can be used to assess its continuity; usually, longer edges are more representative of a part of the real-world circle. To determine the optimal number of retained edges, 4–8 long edges are kept for each of the images in Figure 2d, and the results are shown in Figure 3.
To better express the significance of retaining different numbers of edges, the denoising rate is introduced in this paper. As shown in Formula (4), the denoising rate is the ratio of the number of removed interfering points to the number of detected edge pixels, revealing the denoising ability on an image. The algorithm's performance under different levels of interfering points can be evaluated by comparison experiments that retain different numbers of edges. The computational expression is as follows:
$\beta = \dfrac{N_m - N_p^s}{N_m}$  (4)
where $N_m$ denotes the number of pixels after edge detection, $N_p^s$ denotes the number of pixels in the set $C_p^s$, and $\beta$ represents the denoising rate.
Main contour screening aims to better select the fitted sample points. If the denoising rate is excessively low, many interfering points remain in the fitted samples, which may reduce the accuracy and efficiency of the fitting. Conversely, although raising the denoising rate reduces the number of interfering points, it may leave the fitted sample without enough representative contour points, adversely affecting the accuracy of the detection results. The DBLSKCF algorithm weighs denoising against the final detection results to ensure detection efficiency, and it retains six edges to obtain accurate fitting results. This strategy enables the DBLSKCF algorithm to obtain better results in real-world circle detection.
Image edge preprocessing improves the accuracy and efficiency of edge detection and provides reliable input data for the subsequent circle detection stage. The steps involved in image edge preprocessing are given in Algorithm 1.
Algorithm 1: Image Edge Preprocessing
Input: The image with a circle outline, the Gaussian kernel size k_s, the number of retained edges s, and the thresholds th_1 and th_2 of the Canny edge detection algorithm.
Output: Edge pixels under retention.
1: Initialize k_s = 9, s = 6, th_1 = 200, th_2 = 255.
2: Calculate C_m by the Canny edge detection algorithm and Formula (1).
3: Calculate C_s^0 with (2).
4: Calculate C_p^s with (3).
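As an illustration, the screening step of Algorithm 1 (after edges have been extracted) can be sketched as follows; edges are represented simply as lists of pixel coordinates, and the function name is hypothetical:

```python
def screen_main_contours(edges, s=6):
    """Keep the s longest edges (C_s^0); return their pixels (C_p^s)
    and the denoising rate beta from Formula (4)."""
    ranked = sorted(edges, key=len, reverse=True)[:s]   # first s long edges
    kept_pixels = [p for edge in ranked for p in edge]  # pixel set C_p^s
    n_m = sum(len(edge) for edge in edges)              # pixels after edge detection
    beta = (n_m - len(kept_pixels)) / n_m               # denoising rate
    return kept_pixels, beta
```

For example, given eight edges of lengths 50, 40, 30, 8, 7, 6, 5, and 4 pixels, retaining s = 6 edges keeps 141 of the 150 edge pixels, giving β = 0.06.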

3.2. Selection of Fitting Sample Points

The DBSCAN clustering algorithm separates the main contour sample points and interfering points. The algorithm clusters edges of arbitrary shapes and splits complex and irregularly shaped edges well using two parameters: the neighborhood radius and the minimum number of sample points within the circle determined by this neighborhood radius.
The DBSCAN algorithm clusters sample points into different classes based on their neighborhood density. The clustering principle is shown in Figure 4, and Formula (5) is used to classify the sample points. If the number of sample points in the neighborhood of sample point A is greater than or equal to the minimum number of sample points, point A is classified as a core point. If the number of sample points in B's neighborhood is greater than one but less than the minimum number of sample points, point B is classified as a boundary point. If the number of sample points in N's neighborhood is one (i.e., only N itself), point N is classified as an outlier point.
$(x_c, y_c) \in \begin{cases} C_A, & n_\varepsilon \ge n_{\min pts} \\ C_B, & 1 < n_\varepsilon < n_{\min pts} \\ C_N, & n_\varepsilon = 1 \end{cases}$  (5)
where $(x_c, y_c)$ represent the coordinates of the sample point; $C_A$ denotes the set formed by the core point A; $C_B$ represents the set formed by the boundary point B; and $C_N$ indicates the set formed by the outlier point N. $n_\varepsilon$ indicates the number of sample points in a circle centered at $(x_c, y_c)$ with a radius of $\varepsilon$. Please note that after the screening of the fitted samples, the output is the set $C_A$.
If the sample point is marked as the core point, the above clustering process will be repeated for the sample points in the neighborhood until all sample points are marked.
Two outcomes occur after image preprocessing: in the first, the longest edge in the image includes only the main contour, as shown in Figure 5a; in the other, it contains both the main contour and outer interfering points, as shown in Figure 5b. If a cluster with fewer sample points were selected for fitting, the result could suffer from interfering points or broken edges. Therefore, the algorithm proposed in this paper retains the class with the most sample points. However, interfering points close to the main contour may still be incorrectly clustered into the fitted sample points if the values of $\varepsilon$ and $n_{\min pts}$ are unreasonable. The algorithm for selecting the fitted sample points is shown in Algorithm 2.
Algorithm 2: Select the Fitting Sample Points
Input: The set C_p^s from Algorithm 1, the neighborhood radius ε, and the minimum number of points in the neighborhood n_minpts in the DBSCAN algorithm.
Output: The edge set C_A with the most sample points.
1: Initialize ε = 5, n_minpts = 3.
2: Cluster the sample points in C_p^s according to Section 3.2.
3: Calculate the number of sample points in each cluster and retain the class with the most sample points.
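A minimal, O(n²) NumPy sketch of this selection step is given below. It follows the DBSCAN principle of Section 3.2 and retains the class with the most sample points, per Algorithm 2; in practice a library implementation such as scikit-learn's `DBSCAN` would likely be preferred, and the function name here is hypothetical.

```python
import numpy as np

def select_fitting_samples(points, eps=5.0, min_pts=3):
    """Cluster edge pixels with a naive DBSCAN and keep the largest cluster (C_A)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    nbrs = [np.flatnonzero(dist[i] <= eps) for i in range(n)]  # includes the point itself
    labels = np.full(n, -1)              # -1 = unvisited / outlier
    n_clusters = 0
    for i in range(n):
        if labels[i] != -1 or len(nbrs[i]) < min_pts:
            continue                     # not an unvisited core point
        labels[i] = n_clusters           # grow a new cluster from core point i
        stack = list(nbrs[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = n_clusters
                if len(nbrs[j]) >= min_pts:   # core point: expand further
                    stack.extend(nbrs[j])
        n_clusters += 1
    if n_clusters == 0:
        return pts[:0]
    sizes = [np.sum(labels == c) for c in range(n_clusters)]
    return pts[labels == int(np.argmax(sizes))]
```

With ε = 5 and n_minpts = 3, densely spaced pixels along an arc chain together into one large cluster, while scattered interfering points never become core points and are discarded.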

3.3. Candidate Circle Iteration and Ideal Circle Determination

A set of sample points with a main contour is obtained in Section 3.2, and the sample points show certain distributional features.
(1).
Sample points on the main contour are connected into superior or inferior arcs. An arc with a central angle of less than or equal to 180° is called inferior arc, as shown in Figure 6a; an arc with a central angle of larger than 180° is called superior arc, as shown in Figure 6b.
(2).
Besides the sample points of the main contour, a few interfering points are distributed on the outer side of the main contour. Fitting candidate and ideal circles in such cases is studied in this section.
Based on the comparison of the RHT, RANSAC, and LSC algorithms in the literature review, the LSC algorithm achieves the best circle-fitting performance. To improve the accuracy and efficiency of single-circle detection, the proposed algorithm fits circles by minimizing the residual sum of squares.
As can be seen from Figure 7a, when only the main contour sample points exist in the fitting sample, the fitting results are good. However, the traditional least squares method is sensitive to interfering points, leading to errors in the circle-fitting results, as shown in Figure 7b. To obtain the desired circle-fitting results, this section proposes a method, based on the least squares method, that removes failed fitting points one by one. The method introduces two parameters, the maximum number of iterations allowed and the critical residual sum of squares, and works on the following principle (depicted in Figure 8).
As shown in Figure 8 (taking three iterations as an example), the interfering points at the upper left are biased relative to the other interfering points. The first candidate circle is therefore biased toward the top left, is sensitive to the interfering points, and has a large residual sum of squares. Interfering points outside the candidate circle are removed by comparing the distance from each sample point to the circle's center with the radius: a sample point is kept if the distance is less than the radius; otherwise, it is deleted, as depicted in Figure 8a. Compared with Figure 8a, the distribution of sample points in Figure 8b is relatively uniform. As shown in Figure 8c, by removing the interfering points outside the candidate circle and fitting a third candidate circle, the fitted candidate circle gradually converges to the real-world circle. The specific iterative process is as follows:
$k \in [1, K], \quad k \in \mathbb{Z}$  (6)
For each iteration $k$,
$Q_k = \sum_{i=1}^{n_{ip}} \left[ (x_i - a_k^*)^2 + (y_i - b_k^*)^2 - r_k^{*2} \right]^2$  (7)
where $k$ represents the current number of iterations and $K$ represents the maximum number of iterations allowed; $Q_k$ indicates the residual sum of squares of the $k$th iteration; $(a_k^*, b_k^*)$ and $r_k^*$ indicate the center coordinates and radius of the $k$th iteration, respectively; and $n_{ip}$ denotes the number of sample points used for the iteration.
The DBLSKCF algorithm obtains the optimal single-circle parameters by minimizing the residual sum of squares. Formula (6) is used to set a range of values for the number of iterations. The residual sum of squares for each fit is calculated by Formula (7). From Formula (7), it can be seen that  Q k  is a function of  a k * b k * r k * . We can obtain the values of  a k * b k * r k *  when  Q k  is minimized by Formula (8):
$\dfrac{\partial Q_k}{\partial r_k^*} = 0, \quad \dfrac{\partial Q_k}{\partial a_k^*} = 0, \quad \dfrac{\partial Q_k}{\partial b_k^*} = 0$  (8)
$r_k^i = \left[ (x_i - a_k^*)^2 + (y_i - b_k^*)^2 \right]^{1/2}, \quad \text{if } Q_k \ge Q^* \text{ or } k \le K$  (9)
where $Q^*$ denotes the critical residual sum of squares; $(x_i, y_i)$ represent the coordinates of the sample points; and $r_k^i$ denotes the distance from the sample points to the center of the circle at the $k$th iteration. Please note that after the candidate-circle iteration and ideal-circle determination, the center coordinates and radii of all candidate circles are obtained.
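It may help to note why Formula (8) yields a closed form: with the algebraic residual of Formula (7), a standard substitution (the Kåsa linearization; the paper does not name it, so attributing it here is an assumption) turns the problem into ordinary linear least squares:

```latex
(x_i - a)^2 + (y_i - b)^2 - r^2 \;=\; (x_i^2 + y_i^2) - A x_i - B y_i - C,
\qquad A = 2a,\quad B = 2b,\quad C = r^2 - a^2 - b^2 .
```

Minimizing $Q_k$ over $(A, B, C)$ is then a linear least squares problem whose normal equations have a unique solution, after which $a = A/2$, $b = B/2$, and $r = \sqrt{C + a^2 + b^2}$.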
If the number of iterations is less than the maximum number of iterations allowed, or the residual sum of squares is greater than the critical residual sum of squares, the distance from each sample point to the center of the fitted circle is calculated using Formula (9). If the distance is less than the radius of the fitted candidate circle, the sample point is retained for the next fitting of the candidate circle; otherwise, the sample point outside the fitted candidate circle is deleted. If $K$ and $Q^*$ do not satisfy Formula (9), the iteration is stopped, and the center coordinates and radius of the candidate circle are output. The corresponding algorithm for obtaining candidate and ideal circles is shown in Algorithm 3.
$K$ and $Q^*$ are introduced to reduce the frequency with which underfitting or overfitting occurs. However, improper numerical settings of these two parameters may themselves lead to underfitting or overfitting. Under the joint action of the two parameters, the ideal circle fitting results are shown in Table 1 below:
From Table 1, it can be seen that different values affect the ideal circle to various degrees; and therefore, the optimal value of the parameter needs to be determined. In this paper, twenty-six complex images of different types are randomly selected, including dials, wheels, traffic signs, etc., and the plot of the residual sum of squares versus the number of iterations determines the optimal combination.
Theoretically, the circle-fitting result is best when $Q_k$ is close to 0. To prevent overfitting during parameter tuning, $Q^*$ is provisionally set to 0.0005 and $K$ to 100. According to Figure 9f, the residual sum of squares exhibits an identical trend of change across the different types of complex images. Due to the irregular distribution of the interfering points, the residual sum of squares of the first iteration is large; the second iteration brings a significant decrease, with large changes in the center coordinates and radius of the circle. After four iterations, the residual sum of squares settles to a relatively stable value. As shown in Figure 9g, only a few images reach a seventh iteration, and the slopes on both sides change little before and after the sixth iteration. As the number of iterations increases further, the residual sum of squares ceases to change or declines only within a small range. Therefore, the maximum number of iterations allowed in the DBLSKCF algorithm is set to six. The minimum value of the residual sum of squares at the sixth iteration is 0.003, so setting $Q^*$ to 0.003 lets most images finish single-circle detection within six iterations, balancing ideal circle detection accuracy and efficiency.
Algorithm 3: Fit Candidate Circles and Determine Ideal Circles
Input: The edge set C_A from Algorithm 2, the maximum number of iterations allowed K, the critical residual sum of squares Q*, and the iteration number k.
Output: Center (a_k^*, b_k^*) and radius r_k^* of the candidate circle.
1: Initialize K = 6, Q* = 0.003, k = 1.
2: Calculate Q_k, (a_k^*, b_k^*), and r_k^* with (7) and (8).
3: while k ≤ K or Q_k ≥ Q* do
4:   Calculate r_k^i with (9).
5:   if r_k^i ≤ r_k^* then
6:     Save (x_i, y_i)
7:   else
8:     Delete (x_i, y_i)
9:   end if
10:  Update k = k + 1
11: end while
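Algorithm 3 can be sketched as follows. The circle fit below uses a linear (Kåsa-style) least squares solution consistent with Formulas (7) and (8), and the residual is normalized per point so that the stopping threshold is scale-comparable; this normalization and the helper names are assumptions, not the paper's exact implementation:

```python
import numpy as np

def fit_circle(pts):
    """Least squares circle fit: minimize sum((x-a)^2 + (y-b)^2 - r^2)^2."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x, y, np.ones(len(pts))])
    sol, *_ = np.linalg.lstsq(M, x ** 2 + y ** 2, rcond=None)
    a, b = sol[0] / 2, sol[1] / 2
    r = np.sqrt(sol[2] + a ** 2 + b ** 2)
    return a, b, r

def iterate_candidate_circles(pts, K=6, Q_star=0.003):
    """Fit candidate circles, removing sample points outside each fit (Algorithm 3)."""
    pts = np.asarray(pts, dtype=float)
    candidates = []
    for k in range(1, K + 1):
        a, b, r = fit_circle(pts)
        candidates.append((a, b, r))
        res = (pts[:, 0] - a) ** 2 + (pts[:, 1] - b) ** 2 - r ** 2
        Q = float(np.mean(res ** 2))      # per-point residual (assumed normalization)
        if Q < Q_star:
            break                         # ideal circle reached
        inside = np.hypot(pts[:, 0] - a, pts[:, 1] - b) <= r
        if inside.all() or inside.sum() < 3:
            break                         # nothing left to remove / too few points
        pts = pts[inside]                 # keep points with r_k^i <= r_k^*
    return candidates
```

On a synthetic contour of 80 exact circle points plus a few distant interfering points, the first fit is inflated by the outliers; removing the points outside it and refitting recovers the true circle.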

3.4. Improvement of Output Circle Detection Accuracy

Section 3.3 determines the ideal circle via $Q^*$ and $K$, reducing the frequency of overfitting and underfitting. However, an error still exists between the ideal circle and the real-world circle due to various interfering points. To improve the accuracy of output circle detection, this section adopts the K-means clustering algorithm from machine learning to cluster the center coordinates and radii of all candidate circles, thereby achieving error compensation for the output circle parameters. The clustering process of the K-means algorithm is illustrated in Figure 10.
According to Figure 10, the data points are clustered into two clusters, and the clustering center is updated by calculating the distance from the data points to the clustering center. The distance is calculated using Formula (10), and the data points are assigned to the cluster with the closest distance.
$D_j = \sum_{i=1}^{N_j} \| X_i - B_j \|^2, \quad X_i \in S_j, \quad j \le n_k$  (10)
where $S_j$ denotes the $j$th cluster; $B_j$ denotes the clustering center; $N_j$ indicates the number of data points contained in the $j$th cluster; $n_k$ represents the number of clusters; $X_i$ denotes a data point in $S_j$; and $D_j$ denotes the distance from the data points to the corresponding clustering center.
From the clustering principle, it is necessary to minimize the distance from the data points in each cluster to the corresponding cluster center. $B_j$ is determined by taking the partial derivative of $D_j$ with respect to $B_j$, as shown in Formulas (11) and (12):
$\dfrac{\partial}{\partial B_j} \sum_{i=1}^{N_j} \| X_i - B_j \|^2 = 0$  (11)
$B_j = \dfrac{1}{N_j} \sum_{i=1}^{N_j} X_i$  (12)
The final clustering result is obtained by iterating Formulas (11) and (12). In Section 3.3, $K$ is set to 6 and $Q^*$ to 0.003; however, the final number of iterations may be less than 6. Therefore, two cases are distinguished based on the number of data points in the clustering results:
(1)
Different numbers of data points in the two clustering results.
When the numbers of data points in the two clustering results differ, the algorithm proposed choose the cluster with more data points as the target cluster. The reason is as follows: the K-means clustering algorithm clusters data points based on their distance from the clustering center. The fitting results for the first few iterations vary widely, and the clustering algorithm clusters these data points into one cluster. In the later iterations, the fitting results change stably. The clustering algorithm will cluster these data points into one cluster, and clustering centers are close to data points. Therefore, the target cluster can be obtained by filtering the number of data points. Clustering results with many data points are obtained through Formula (13). Calculate the mean value of the circle parameter of the cluster using Formula (14), which is the result of the error compensation for the output circle, as follows:
$$C = f(C_1, C_2)$$

$$x = \frac{1}{n_c} \sum_{k=0}^{n_c} a_k^*, \quad y = \frac{1}{n_c} \sum_{k=0}^{n_c} b_k^*, \quad r = \frac{1}{n_c} \sum_{k=0}^{n_c} r_k^*$$
where $C_1$ and $C_2$ denote the sets of data points in the two clustering results, respectively; $f(\cdot)$ is a function retaining the set with the most elements; $C$ denotes the selected set with more data points; $n_c$ represents the number of data points in set $C$; and $(x, y)$ and $r$ are the center coordinates and radius of the output circle, respectively.
(2)
The same number of data points in the two clustering results.
When the two clustering results contain the same number of data points, the algorithm selects the target cluster based on the mean radius of the candidate circles. The reason is as follows: the fitting results of the first few iterations vary widely, while in the later iterations they change only slightly, so the mean radius of all candidate circles lies between the radii corresponding to the centers of the two clusters. Formulas (15) and (16) calculate the mean radius of the candidate circles in the two clustering results, and Formula (17) calculates the mean radius of all candidate circles. By Formulas (18) and (19), the target cluster is the one whose mean radius is smaller than the overall mean:
$$r_1 = \frac{1}{n_1} \sum_{k=0}^{n_1} r_k^*$$

$$r_2 = \frac{1}{n_2} \sum_{k=0}^{n_2} r_k^*$$

$$r_{mean} = \frac{1}{n_1 + n_2} \sum_{k=0}^{n_1 + n_2} r_k^*$$

$$R = \min \{ r_1, r_2 \}$$

$$x = \frac{1}{n_R} \sum_{k=0}^{n_R} a_k^*, \quad y = \frac{1}{n_R} \sum_{k=0}^{n_R} b_k^*, \quad r = R$$
where $r_1$ denotes the mean radius of the candidate circles in set $C_1$; $r_2$ denotes the mean radius of the candidate circles in set $C_2$; and $r_{mean}$ denotes the mean radius of all candidate circles. $R$ represents the minimum of $r_1$ and $r_2$; $n_1$ and $n_2$ denote the numbers of data points in the two clustering results, respectively; and $n_R$ denotes the number of data points in the target cluster. Note that the center coordinates and radius of the output circle are those obtained after this accuracy-improvement step.
The corresponding algorithm for obtaining a high-precision output circle is given in Algorithm 4. In this paper, the variant that does not use K-means clustering is called the DBLSCF algorithm. Specific experiments in Section 4 verify that the K-means step improves the accuracy of the output circle.
Algorithm 4: Improve the Output Circle’s Detection Accuracy
Input: The K-means clustering number $n_k$, and the center ($a_k^*$, $b_k^*$) and radius $r_k^*$ of each candidate circle from Algorithm 3.
Output: Center coordinates  ( x , y )  and radius  r  of the output circle.
1: Initialize  n k  = 2.
2: According to the method in Section 3.4, the center coordinates are clustered into   C 1  and   C 2 , respectively.
3: if num( C 1 ) is not equal to num( C 2 ) then
4:   Calculate  C  with (13).
5:   Calculate  ( x , y )  and  r  with (14).
6: else
7:   Calculate  r 1 r 2  and  r m e a n  with (15), (16) and (17), respectively.
8:   if $r_1 \le r_{mean}$ then
9:      $(x, y), r = \mathrm{mean}((a_k^*, b_k^*), r_k^*)$, $k \in (0, \mathrm{num}(C_1))$
10:  else if $r_2 \le r_{mean}$ then
11:     $(x, y), r = \mathrm{mean}((a_k^*, b_k^*), r_k^*)$, $k \in (0, \mathrm{num}(C_2))$
12:  end if
13: end if
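The case analysis above can be mirrored in Python as follows. This is an illustrative sketch of the rules for cases (1) and (2), not the authors' code; the function name and the row layout of the candidate-circle arrays are our own assumptions:

```python
import numpy as np

def output_circle(c1, c2):
    """Sketch of Algorithm 4: fuse the two K-means clusters of candidate
    circles into one output circle. c1, c2: arrays whose rows are the
    candidate-circle parameters (a_k, b_k, r_k)."""
    if len(c1) != len(c2):
        # Case (1): keep the larger cluster (Formula (13)) and average
        # its circle parameters (Formula (14)).
        target = c1 if len(c1) > len(c2) else c2
        x, y, r = target.mean(axis=0)
    else:
        # Case (2): pick the cluster whose mean radius is the smaller one
        # (Formulas (15)-(18)); average its centers and output that mean
        # radius (Formula (19)).
        r1, r2 = c1[:, 2].mean(), c2[:, 2].mean()
        target = c1 if r1 <= r2 else c2
        x, y = target[:, :2].mean(axis=0)
        r = min(r1, r2)
    return x, y, r
```

Because $r_{mean}$ always lies between $r_1$ and $r_2$, selecting the cluster whose mean radius is below $r_{mean}$ is equivalent to taking $\min\{r_1, r_2\}$, which is what the sketch does.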

4. Experiments and Results

This section compares the DBLSKCF algorithm with the RANSAC, RHT, and DBLSCF algorithms in terms of detection accuracy, efficiency, and stability. Two groups of experiments are conducted. The first group, in Section 4.1, uses images captured under laboratory conditions at various lighting intensities and is designed to evaluate the stability of the four algorithms under different lighting conditions. Section 4.2 verifies the accuracy and efficiency of the DBLSKCF algorithm using the GH dataset [23]. The comparative experimental setup is as follows:
(1).
All experiments are carried out using the same computer. The computer parameters are shown in Table 2.
(2).
To compare detection speed fairly, each of the four algorithms is terminated as soon as a circle is detected in the image.

4.1. Comparison of Stability of Circle Detection

To simulate variations in light intensity, we use a mean light intensity of 650 lx with a standard deviation of 50 lx and randomly select twenty-four datasets. The stability of the four algorithms in practical applications is evaluated by comparing the detection results under various light intensities. The edges to be detected are the inner-diameter edges of a steel pipe whose end face exhibits rust, scratches, and strong reflectivity. The experimental platform, shown in Figure 11, consists of an industrial camera, a white ring light source, and a tube sheet.
In the experiment, the changes in the center coordinates and the radius are used as stability measures. If, under various light intensities, the center coordinates and radius vary only within a small range, the algorithm has high stability and can effectively resist external interference; otherwise, it is less stable and less resistant to external interference.
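As a concrete illustration of this measure (with made-up numbers, not the experimental data), the per-parameter standard deviation can be computed as:

```python
import numpy as np

# Hypothetical (x, y, r) detections of one algorithm on the same circle
# under three different light intensities; the values are illustrative only.
results = np.array([
    [320.1, 240.3, 95.2],
    [319.8, 240.1, 95.0],
    [320.4, 239.9, 95.4],
])

# A small standard deviation per parameter means the detection barely
# moves as the lighting changes, i.e. the algorithm is stable (cf. Table 3).
std_x, std_y, std_r = results.std(axis=0)
```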
As shown in Figure 12, the circle detection results of the RHT, RANSAC, and DBLSCF algorithms differ noticeably under different light intensities, whereas the detection results of the DBLSKCF algorithm remain almost the same. All results are shown in Figure 13. As the light intensity increases, the circle detection results of the RHT algorithm fluctuate erratically, while those of the RANSAC algorithm tend to stabilize. The results of the DBLSKCF and DBLSCF algorithms vary only within a small range. To quantify stability, we use the standard deviations in Table 3: a larger standard deviation indicates greater variability and dispersion in the detection results and thus lower stability. For the x-coordinate, y-coordinate, and radius alike, the RHT algorithm shows the largest standard deviation and the worst stability. The RANSAC algorithm varies substantially at relatively low light intensities and gradually stabilizes as the light intensity increases. The DBLSCF and DBLSKCF algorithms change almost synchronously, but the standard deviations show that DBLSKCF is the more stable of the two. Therefore, the DBLSKCF algorithm offers the best stability and resistance to external interference.

4.2. Validation of Algorithm Detection Accuracy and Efficiency

To verify the accuracy and efficiency of the proposed algorithm, we validate it on the GH dataset. The images in the GH dataset cover a variety of scenes and backgrounds, including indoor and outdoor environments, different lighting conditions, and different levels of object and background clutter. The GH dataset is therefore well suited for testing the robustness and generalization of circle detection algorithms. Forty-eight images containing a single circle are selected from the GH dataset, each labeled with the corresponding circle parameters.
As seen in Figure 14, the single-circle detection results of the RANSAC, RHT, and DBLSCF algorithms show varying degrees of fitting error, and the results of the DBLSKCF algorithm are closest to the ground-truth circle parameters. The four algorithms' circle detection results and running times are detailed below.
Analysis of Figure 15 and Table 4 shows that the single-circle detection results of the RHT algorithm have high error and low efficiency. (The full experimental data are given in Table A1, Table A2, Table A3 and Table A4 in Appendix A.) The RANSAC algorithm performs better in efficiency but has a higher error than the DBLSCF and DBLSKCF algorithms: in the RHT and RANSAC algorithms, the sample points for fitting the circle are chosen randomly, so the selected sample points may not lie on the main contour. The DBLSCF algorithm outputs the ideal circle directly, which may be affected by interfering points and thus deviate significantly in its parameters. As a result, the circle detection accuracy of the DBLSKCF algorithm is 3–5 times higher than that of the DBLSCF algorithm. The experimental results also justify using the K-means clustering algorithm to improve the accuracy of circle detection. Considering both accuracy and efficiency, the DBLSKCF algorithm is significantly better than the other compared algorithms.
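The per-parameter error statistics of this kind are means of absolute errors over the labeled images. A minimal sketch (the helper name and the sample numbers below are our own, purely illustrative assumptions):

```python
import numpy as np

def mean_abs_error(detected, truth):
    """Mean absolute error of (x, y, r) over a labeled image set,
    i.e. the per-column error statistic reported in Table 4."""
    detected, truth = np.asarray(detected), np.asarray(truth)
    return np.abs(detected - truth).mean(axis=0)
```

For example, detections [[101, 99, 51], [98, 102, 49]] against a ground truth of (100, 100, 50) for both images give per-parameter errors (1.5, 1.5, 1.0).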

5. Conclusions

This paper proposes a single-circle detection algorithm, termed the DBLSKCF algorithm, that combines Canny edge detection, two clustering algorithms, and an improved least squares method. The proposed algorithm has proven to be an excellent solution for single-circle detection in complex images. Compared with RHT, RANSAC, and DBLSCF, DBLSKCF demonstrates clear advantages in detection accuracy and stability. The highlights (and also the core steps) of the detection method are summarized below:
  • Image edge preprocessing removes as many interfering points as possible while retaining the main contour edge information.
  • The DBSCAN algorithm is utilized to cluster the main contours and interfering points into different clusters, from which the cluster with more sample points is extracted as the fitting samples of the candidate circles.
  • An improved least squares circle fitting based on the residual sum of squares is proposed. Removing the fitting failure points one by one brings the circle fitting result gradually closer to the real-world circle.
  • The K-means clustering algorithm is implemented to cluster the center coordinates and radius of all candidate circles to improve the accuracy of output circle detection.
Performance of the DBLSKCF algorithm:
(1).
Stability: The standard deviation of the X-coordinate, Y-coordinate, and radius detection results are 2.7 pixels, 2.3 pixels, and 3.27 pixels, respectively.
(2).
Detection accuracy: The average errors of X-coordinate, Y-coordinate, and radius detection are 1.8 pixels, 1.4 pixels, and 1.9 pixels, respectively.
(3).
Running time: The average running time is 0.1 s.
Comparison of the detection performance with the other algorithms shows that the proposed DBLSKCF algorithm outperforms them in detection accuracy and stability.
Future work will be carried out in two main perspectives:
(1).
Adaptively determining the neighborhood radius and the minimum number of sample points within the neighborhood radius in the DBSCAN clustering algorithm.
(2).
An improvement of the proposed algorithm to enable multi-circle detection.

Author Contributions

Methodology, software, validation, and writing—original draft, J.C.; methodology, resources, funding acquisition, and writing—review and editing, Y.G.; supervision and funding acquisition, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 52075354 and 72201186), the Natural Science Foundation of Jiangsu Province (Grant No. BK20220481), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 22KJB410002), and the China Postdoctoral Science Foundation (Grant No. 2023M731450).

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Comparison Result of the DBLSKCF Algorithm with the Other Three Algorithms

Table A1. X-coordinate error (pixel).
Image Number | RANSAC | RHT | DBLSCF | DBLSKCF
11.430.50.512.38
220.5183.743.73
32.264.50.320.41
419.7116.52.321.09
587.57.157.41
62.51231.310.08
70.4600.050.18
81.67-1.491.49
92.5129.50.360.65
104.2652.52.190.38
111.51000.710.38
121.240.880.890.9
132.810.491.551.75
142156.51.141.15
153320.391.43
16560.30.37
170.390.52.192.04
182.0290.211.85
194.51300.071.14
207.4815.57.3840.7
216.83776.75.31
221.87290.480.59
239.6510.350.35
240.551.922.04
251080.290.750.84
260.360.50.570.57
270.2701.091.09
281.372.061.343.5
291.010.50.550.17
30200.51.5145.340.92
312542.552.53
320.48-0.720.68
330.79-0.871
341.111.041.03
351.2121.211.1
365.2024.090.83
379.570.51.90.25
380.80.180.480.45
391.250.51.526.44
40511.551.76
411.7321.581.69
42470.7510.2510.23
432.140.73.433.58
4417.94161.883.21
4519.080.460.80.8
461.8122.540.39
472.0212.52.722.71
480.371050.480.69
Note: The “-” in the table means that the algorithm did not detect the circle in the corresponding image.
Table A2. Y-coordinate error (pixel).
Image Number | RANSAC | RHT | DBLSCF | DBLSKCF
16.27120.50.611.48
29780.52.662.68
33.13157.51.251.28
411.87932.151.05
5019.50.20.12
629123.53.180.35
70.461121.861.88
80.1-0.090.11
901550.530.53
1012.580.20.23
110.5950.120.1
124.090.461.821.84
133.43.260.210.12
1451411.531.55
1551.51.741.47
166481.031
170.4815821.460.85
182.1781.50.2450.03
190.5890.380.29
204.4541.9254.6
212.05932.2411.7
221.6922.51.361.24
230.950.650.420.39
241.896.50.440.34
2589.568.750.660.75
260.02200.410.38
270.14810.330.33
280.290.050.091.5
290.6955.50.940.32
30401.5268532.02
3141633.33.28
320.11-0.290.26
330.39-0.920.88
340.32145.50.540.51
350.59136.51.071.1
360.4463.52.051.4
37613.51.562.04
381.351.791.281.25
3999.6108.57.294.65
4025188.58.710.5
410.7550.50.950.35
42529.6110.2610.25
431.190.245.425.47
4412.175.50.030.07
453.660.050.030.02
463.3140.599.140.03
472.75102.52.222.19
480.28128.50.520.72
Note: The “-” in the table means that the algorithm did not detect the circle in the corresponding image.
Table A3. Radius error (pixel).
Image Number | RANSAC | RHT | DBLSCF | DBLSKCF
188.55421.21.13
222.5512.965.534.45
30.82122.253.081.96
418.5220.52.812
586.0988.7586.730.49
612.59911.281.66
70.91271.631
82.99-1.971.98
91.19107.250.950.38
105.816.591.81
110.6145.621.391.69
121.871.147.9413.99
135.3611.730.840.36
140.53120.751.511.53
150.490.620.70.07
16132.451.220.93
170.0279.2520.991.91
183.6275.251.561.49
197.2573.2514.190.32
202.7316.594.484.4
214.4868.5410.942.46
222.9790.091.351.09
230.551.181.521.34
242.71372.272.26
25116.0476.780.580.61
260.22101.190.720.65
270.8240.752.642.63
2815.0613.2715.261.2
290.4281.571
300.1388.25133.160.35
314159.944.694.69
3228.66-2.842.8
330.95-0.510.67
340.2929.2510.83
350.3795.50.840.67
364.3814.2461.551.85
370.2538.070.890.28
381.762.8900.06
392.6962.756.45.65
4015.5993.50.360.29
410.476.751.721.54
4291.1858.913.933.78
430.090.467.972.15
4417.0359.621.752.36
4516.831.740.090.01
464.2790.05105.353.1
477.31777.292.24
480.4247.530.040.15
Note: The “-” in the table means that the algorithm did not detect the circle in the corresponding image.
Table A4. Running time (s).
Image Number | RANSAC | RHT | DBLSCF | DBLSKCF
10.061.490.0990.177
20.1391.1960.0260.033
30.0596.370.050.059
40.0612.480.0580.104
50.0511.340.0240.025
60.1640.550.0330.061
70.12333.720.0670.079
80.079-0.0630.122
90.0624.910.0670.099
100.0581.730.0570.09
110.0561.990.0350.038
120.1299.580.1130.233
130.0672.030.070.122
140.0530.5590.0280.036
150.0682.4070.0660.096
160.0570.920.0260.029
170.22310.770.10.234
180.0591.350.0710.109
190.060.7390.0550.066
200.0692.990.1340.177
210.0621.9310.0680.076
220.14319.810.2060.263
230.05319.80.0460.062
240.0653.140.0690.121
250.0570.850.0380.083
260.06712.840.0830.208
270.053.0780.0310.419
280.06319.590.1770.243
290.0531.050.0280.033
300.1018.980.0820.161
310.0540.3870.020.029
320.076-0.1640.168
330.059-0.0590.114
340.05219.0860.0330.041
350.1194.480.0420.054
360.0598.550.130.174
370.0430.1230.0170.024
380.1725.10.1460.198
390.063.0540.0660.096
400.060.940.0470.064
410.0524.020.0660.08
420.0581.570.0460.063
430.1552.650.0510.062
440.0571.050.0430.059
450.1332.240.0350.041
460.06110.40.1240.114
470.0551.370.0370.046
480.1295.680.1220.102
Note: The “-” in the table means that the algorithm did not detect the circle in the corresponding image.

References

  1. Mohammadi, S.; Mohammadi, M.; Dehlaghi, V.; Ahmadi, A. Automatic Segmentation, Detection, and Diagnosis of Abdominal Aortic Aneurysm (AAA) Using Convolutional Neural Networks and Hough Circles Algorithm. Cardiovasc. Eng. Technol. 2019, 10, 490–499. [Google Scholar] [CrossRef] [PubMed]
  2. Liang, Q.; Long, J.; Nan, Y.; Coppola, G.; Zou, K.; Zhang, D.; Sun, W. Angle Aided Circle Detection Based on Randomized Hough Transform and Its Application in Welding Spots Detection. Math. Biosci. Eng. 2019, 16, 1244–1257. [Google Scholar] [CrossRef] [PubMed]
  3. Liu, T.; Zheng, P.; Bao, J. Deep Learning-Based Welding Image Recognition: A Comprehensive Review. J. Manuf. Syst. 2023, 68, 601–625. [Google Scholar] [CrossRef]
  4. Cheng, L.; Zhu, Y.; Kersemans, M. DMD-T: Thermographic Inspection of Composites Using Dynamic Mode Decomposition. In Proceedings of the 5th International Conference on Industrial Artificial Intelligence, Shenyang, China, 21–24 August 2023. [Google Scholar]
  5. Zhu, W.; Gu, H.; Su, W. A Fast PCB Hole Detection Method Based on Geometric Features. Meas. Sci. Technol. 2020, 31, 095402. [Google Scholar] [CrossRef]
  6. Shakarji, C.M.; Srinivasan, V. On Algorithms and Heuristics for Constrained Least-Squares Fitting of Circles and Spheres to Support Standards. J. Comput. Inf. Sci. Eng. 2019, 19, 031012. [Google Scholar] [CrossRef]
  7. Jing, Z.; Hongtao, C.; Fan, L. Remote Sensing Image Fusion Based on Multivariate Empirical Mode Decomposition and Weighted Least Squares Filter. Acta Photonica Sin. 2019, 48, 510003. [Google Scholar] [CrossRef]
  8. Xu, L.; Oja, E.; Kultanen, P. A New Curve Detection Method: Randomized Hough Transform (RHT). Pattern Recognit. Lett. 1990, 11, 331–338. [Google Scholar] [CrossRef]
  9. Li, D.; Nan, F.; Xue, T.; Yu, X. Circle Detection of Short Arc Based on Randomized Hough Transform. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 6–9 August 2017; pp. 258–263. [Google Scholar]
  10. Mukhopadhyay, P.; Chaudhuri, B.B. A Survey of Hough Transform. Pattern Recognit. 2015, 48, 993–1010. [Google Scholar] [CrossRef]
  11. Wang, G. A Sub-Pixel Circle Detection Algorithm Combined with Improved RHT and Fitting. Multimed. Tools Appl. 2020, 79, 29825–29843. [Google Scholar] [CrossRef]
  12. Jiang, L. Efficient Randomized Hough Transform for Circle Detection Using Novel Probability Sampling and Feature Points. Optik 2012, 123, 1834–1840. [Google Scholar] [CrossRef]
  13. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  14. Kiddee, P.; Fang, Z.; Tan, M. A Real-Time and Robust Feature Detection Method Using Hierarchical Strategy and Modified Kalman Filter for Thick Plate Seam Tracking. Int. J. Autom. Control. 2017, 11, 428–446. [Google Scholar] [CrossRef]
  15. Ma, Y.; Fan, J.; Yang, H.; Yang, L.; Ji, Z.; Jing, F.; Tan, M. A Fast and Robust Seam Tracking Method for Spatial Circular Weld Based on Laser Visual Sensor. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [Google Scholar] [CrossRef]
  16. Chaudhuri, D. A Simple Least Squares Method for Fitting of Ellipses and Circles Depends on Border Points of a Two-Tone Image and Their 3-D Extensions. Pattern Recognit. Lett. 2010, 31, 818–829. [Google Scholar] [CrossRef]
  17. Ahn, S.J.; Rauh, W. Least-Squares Orthogonal Distances fitting of Circle, Sphere, Ellipse, Hyperbola, and Parabola. Pattern Recognit. 2001, 34, 2283–2303. [Google Scholar] [CrossRef]
  18. Umbach, D.; Jones, K.N. A Few Methods for Fitting Circles to Data. IEEE Trans. Instrum. Meas. 2003, 52, 1881–1885. [Google Scholar] [CrossRef]
  19. Zhou, X.; Wang, Y.; Zhu, Q.; Zhang, H.; Chen, Q. Circle Detection with Model Fitting in Polar Coordinates for Glass Bottle Mouth Localization. Int. J. Adv. Manuf. Technol. 2022, 120, 1041–1051. [Google Scholar] [CrossRef]
  20. Cao, B.; Li, J.; Liang, Y.; Sun, X.; Li, W. Real-Time Detection of Nickel Plated Punched Steel Strip Parameters Based on Improved Circle Fitting Algorithm. Electronics 2023, 12, 1865. [Google Scholar] [CrossRef]
  21. Park, M.-J.; Kim, H.-J. A Real-Time Edge-Detection CMOS Image Sensor for Machine Vision Applications. IEEE Sens. J. 2023, 23, 9254–9261. [Google Scholar] [CrossRef]
  22. Agrawal, S.; Dean, B.K. Edge Detection Algorithm for Musca-Domestica Inspired Vision System. IEEE Sens. J. 2019, 19, 10591–10599. [Google Scholar] [CrossRef]
  23. Circle Detection. Available online: https://Github.Com/Zikai1/CircleDetection (accessed on 10 April 2022).
Figure 1. Single-circle detection process.
Figure 2. Filtering effect of Gaussian kernels of different sizes. (a) Original image; (b) 5 × 5; (c) 7 × 7; (d) 9 × 9; (e) 11 × 11.
Figure 3. Retention results for different numbers of edges: (a) 4, (b) 5, (c) 6, (d) 7, (e) 8.
Figure 4. Clustering process of DBSCAN algorithm.
Figure 5. DBSCAN clustering results: (a) main contour; (b) main contour and outer interfering points. Note: Different-colored sample points in the figure represent different clustering results.
Figure 6. Inferior arcs and superior arcs.
Figure 7. Conventional least squares fitting circle method.
Figure 8. Principle of the method for removing the fitted failure points one by one based on the least squares method: (a) fitting of the first candidate circle; (b) fitting of the second candidate circle; (c) fitting of the third candidate circle.
Figure 9. Determination of the optimal critical residual sum of squares and the maximum number of iterations allowed; (ae) denote the detection results of circles in different complex scenarios; (f) shows the plot of the residual sum of squares versus the number of iterations; (g) represents the localized zoomed-in view after four iterations in (f).
Figure 10. Schematic diagram of clustering process of K-means algorithm. (a) Original data points. (b) Start of clustering. (c) Clustering result.
Figure 11. Experimental platform.
Figure 12. Circle detection results under various light intensities. From top to bottom: the light intensity is 598 lx, 645 lx, and 686 lx, respectively. (a) Original image; (be) denote the single-circle detection results of the RANSAC, RHT, DBLSCF, and DBLSKCF algorithms, respectively.
Figure 13. Detection results of four algorithms under various light intensities. (a) X-coordinate change. (b) Y-coordinate change. (c) Radius change.
Figure 14. Circle detection results of four algorithms. (a) Original image. (b) RANSAC. (c) RHT. (d) DBLSCF. (e) DBLSKCF.
Figure 15. Detection results of different images for four algorithms. (a) X-coordinate error. (b) Y-coordinate error. (c) Radius error. (d) Running time. Note: Where the RHT algorithm has circle detection failures for a few images in the dataset, these are represented by breakpoints.
Table 1. Effect of  K  and  Q *  on the ideal circle fitting result.
K | Q* | Ideal Circle
Large | Large | underfitting
Large | Small | overfitting
Small | Large | underfitting
Small | Small | overfitting
Table 2. Specific parameters of the operating computer.
Development Environment | Internal Storage | Executive System | Development Tool
CPU: AMD Ryzen 7 6800HS Creator Edition 3.20 GHz | 16 G | Windows 11 | Python 3.6
Table 3. Standard deviation of circle detection results for four algorithms.
Algorithm | RANSAC | RHT | DBLSCF | DBLSKCF
X coordinate (pixel) | 18.7 | 58.1 | 2.8 | 2.7
Y coordinate (pixel) | 49.2 | 66.6 | 2.4 | 2.3
Radius (pixel) | 19.2 | 46.3 | 3.37 | 3.27
Table 4. Comparison of circle detection mean value.
Algorithm | RANSAC | RHT | DBLSCF | DBLSKCF
X-coordinate error (pixel) | 8.9 | 28.4 | 5.3 | 1.8
Y-coordinate error (pixel) | 36.5 | 77.2 | 5.2 | 1.4
Radius error (pixel) | 13.3 | 51 | 11.4 | 1.9
Running time (s) | 0.08 | 6 | 0.07 | 0.1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cao, J.; Gao, Y.; Wang, C. A Novel Four-Step Algorithm for Detecting a Single Circle in Complex Images. Sensors 2023, 23, 9030. https://doi.org/10.3390/s23229030
